
Hybrid PHD-PMB Trajectory Smoothing Using Backward Simulation

Yuxuan Xia (1), Ángel F. García-Fernández (2), and Lennart Svensson (3)

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

(1) Yuxuan Xia is with Zenseact, Gothenburg, Sweden, and the Department of Electrical Engineering, Linköping University, Linköping, Sweden. [email protected]
(2) Ángel F. García-Fernández is with the Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, United Kingdom, and the ARIES Research Centre, Universidad Antonio de Nebrija, Madrid, Spain. [email protected]
(3) Lennart Svensson is with the Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden. [email protected]
Abstract

The probability hypothesis density (PHD) and Poisson multi-Bernoulli (PMB) filters are two popular set-type multi-object filters. Motivated by the fact that the multi-object filtering density after each update step in the PHD filter is a PMB without approximation, in this paper we present a multi-object smoother involving PHD forward filtering and PMB backward smoothing. This is achieved by first running the PHD filtering recursion in the forward pass and extracting the PMB filtering densities after each update step before the Poisson Point Process approximation, which is inherent in the PHD filter update. Then in the backward pass we apply backward simulation for sets of trajectories to the extracted PMB filtering densities. We call the resulting multi-object smoother hybrid PHD-PMB trajectory smoother. Notably, the hybrid PHD-PMB trajectory smoother can provide smoothed trajectory estimates for the PHD filter without labeling or tagging, which is not possible for existing PHD smoothers. Also, compared to the trajectory PHD filter, which can only estimate alive trajectories, the hybrid PHD-PMB trajectory smoother enables the estimation of the set of all trajectories. Simulation results demonstrate that the hybrid PHD-PMB trajectory smoother outperforms the PHD filter in terms of both state and cardinality estimates, and the trajectory PHD filter in terms of false detections.

I Introduction

Multi-object filtering concerns the estimation of the number of objects and their current states based on a sequence of noisy measurements [1, 2]. This problem is compounded by missed detections, clutter, and data association uncertainties. To address the multi-object filtering problem in a computationally efficient way, the probability hypothesis density (PHD) filter recursively propagates the first-order moment approximation of the multi-object posterior density, which is also known as the PHD [3]. By doing so, the PHD filter operates on the single-object space and, consequently, avoids the data association problem arising from multiple objects. Due to its computational efficiency, the PHD filter has been widely used in many applications [4, 5, 6, 7, 8, 9].

The estimation of multi-object states by also using measurements at later time steps is called multi-object smoothing. The PHD smoother involves first a forward PHD filtering recursion and then a backward smoothing recursion [10, 11]; the latter is performed by propagating the PHD of the multi-object smoothing density backward in time. Particle filter implementations of the PHD smoother were provided in [10, 11, 12, 13, 14] and the closed-form Gaussian implementations were presented in [15, 16]. However, it has been reported in [11, 12, 16] that, compared to the PHD filter, the PHD smoother reduces the localization error, but it does not necessarily improve the cardinality estimate. In particular, the PHD smoother suffers from premature object deaths [12] by reporting missed detections at earlier time steps when objects disappear, with a lag equal to the smoothing lag.

A common drawback of the PHD filter and smoother is that they do not maintain track continuity, and thus, cannot provide object trajectory estimates. A simple track-building procedure for the PHD filter is to add labels/tags to the object states or PHD components; track continuity is then established by linking the object states or PHD components with the same label/tag over time [17, 18]. These methods may work well in some scenarios, but they can also result in track switches, and even missed and false detections; see [19].

A mathematically rigorous way to build trajectories for the PHD filter without using any labels/tags is by considering sets of trajectories [20]. The resulting filter is called the trajectory PHD filter, which directly provides object trajectory estimates by recursively propagating a Poisson multi-object density on the space of sets of trajectories [19]. Furthermore, smoothed trajectories can be obtained in forward filtering by considering smoothing-while-filtering. However, the trajectory PHD filter is mainly used for estimating trajectories of alive objects, and it cannot properly estimate the set of all trajectories, which also includes trajectories of dead objects.

The possibility of generating sets of smoothed trajectories for both alive and dead objects using a sequence of (unlabeled) multi-object filtering densities has been explored in [21]. In particular, this work showed that we can sample sets of trajectories using backward simulation [22] even when the forward multi-object filter is an unlabeled filter that does not explicitly maintain track continuity. The presented multi-trajectory smoothing solution has been applied to the Poisson multi-Bernoulli (PMB) filters [23, 24], and the resulting multi-trajectory smoother has achieved state-of-the-art multi-trajectory estimation performance. However, there are no similar methods for the PHD filter. (Backward simulation was first applied to the particle-based PHD filter in [13] to generate smoothed trajectories with labels, but the algorithm developed in [13] can only track up to two objects.)

In this paper, we present a hybrid PHD-PMB trajectory smoother, which involves PHD forward filtering and PMB backward smoothing. In PHD filtering, the predicted multi-object density is a Poisson point process (PPP) without approximation, but the updated multi-object density is a PMB [23]. The Bayesian filtering recursion is achieved by finding the Poisson approximation of the PMB density that matches its PHD. The key difference between our proposed hybrid PHD-PMB trajectory smoother and all the existing PHD smoothers [10, 11, 12, 14, 15, 16, 13] is that we perform backward smoothing on the PMB multi-object densities in the PHD filtering recursion before the Poisson approximation. Considering that the PMB representation of the multi-object posterior is an inherent intermediate result that we obtain for free in the PHD update step (as we will show in Section IV), it makes sense to perform smoothing using this more informative multi-object posterior representation.

The hybrid PHD-PMB trajectory smoother leverages the simplicity of PHD filtering [4] and the ability to generate smoothed trajectories using backward simulation [25, 21]. Importantly, the hybrid PHD-PMB trajectory smoother makes it possible to generate smoothed trajectories for all the objects that have existed in a theoretically sound manner without using any labels/tags. We compare the proposed hybrid PHD-PMB trajectory smoother to the PHD [4] and trajectory PHD [19] filters in a simulation study, and the results demonstrate that the proposed method has superior multi-object state and trajectory estimation performance. The results also show that the hybrid PHD-PMB trajectory smoother is more robust to the premature object death problem [11, 12].

The rest of the paper is organized as follows. In Section II, we introduce the background. The problem formulation is given in Section III. We present the filtering and smoothing recursions of the hybrid PHD-PMB trajectory smoother in Section IV. The results are presented in Section V, and the conclusions are drawn in Section VI.

II Background

In this section, we introduce the state variables of interest, densities on sets of objects and sets of trajectories, and the multi-object models. The set of finite subsets of a generic space $D$ is denoted by ${\cal F}(D)$, and the cardinality of a set $A\in{\cal F}(D)$ is $|A|$. The sequence of ordered positive integers $(\alpha,\alpha+1,\dots,\gamma-1,\gamma)$ is denoted by $\alpha:\gamma$. We use $\uplus$ to denote a union of sets that are mutually disjoint and $\langle f,g\rangle$ to denote the inner product $\int f(x)g(x)\,dx$. In addition, we use $\delta_{x}(\cdot)$ and $\delta_{x}[\cdot]$ to represent the Dirac and Kronecker delta functions, respectively, centered at $x$.

II-A State Variables

The single-object state is described by a vector $x\in\mathbb{R}^{n_{x}}$. A trajectory is represented as a variable $X=(t,x^{1:\nu})$, where $t$ is the initial time step of the trajectory, $\nu$ is its length, and $x^{1:\nu}=(x^{1},\dots,x^{\nu})$ denotes a finite sequence of length $\nu$ that contains the object states at time steps $t:t+\nu-1$. For two time steps $\alpha$ and $\gamma$, $\alpha\leq\gamma$, a trajectory $(t,x^{1:\nu})$ in the time interval $\alpha:\gamma$ existing from time step $t$ to $t+\nu-1$ satisfies $\alpha\leq t\leq t+\nu-1\leq\gamma$, and the variable $(t,\nu)$ hence belongs to the set $I_{(\alpha,\gamma)}=\{(t,\nu):\alpha\leq t\leq\gamma~\text{and}~1\leq\nu\leq\gamma-t+1\}$. A single trajectory in the time interval $\alpha:\gamma$ therefore belongs to the space $T_{(\alpha,\gamma)}=\uplus_{(t,\nu)\in I_{(\alpha,\gamma)}}\{t\}\times\mathbb{R}^{\nu n_{x}}$ [20].

A set ${\bf x}\in{\cal F}(\mathbb{R}^{n_{x}})$ of single-object states is a finite subset of $\mathbb{R}^{n_{x}}$, and a set ${\bf X}_{\alpha:\gamma}\in{\cal F}(T_{(\alpha,\gamma)})$ of trajectories is a finite subset of $T_{(\alpha,\gamma)}$. The subset of trajectories in ${\bf X}_{\alpha:\gamma}$ that were alive at time step $\eta$, where $\alpha\leq\eta\leq\gamma$, is denoted by

${\bf X}_{\alpha:\gamma}^{\eta}=\left\{\left(t,x^{1:\nu}\right)\in{\bf X}_{\alpha:\gamma}:t\leq\eta\leq t+\nu-1\right\}.$ (1)

Given a single-object trajectory $X=(t,x^{1:\nu})$, the set of object states at time step $k$ is

$\tau^{k}(X)=\begin{cases}\left\{x^{k+1-t}\right\},&t\leq k\leq t+\nu-1\\ \emptyset,&\text{otherwise},\end{cases}$ (2)

and given a set ${\bf X}_{\alpha:\gamma}$ of trajectories, the set of object states at time step $k$ is $\tau^{k}({\bf X}_{\alpha:\gamma})=\bigcup_{X\in{\bf X}_{\alpha:\gamma}}\tau^{k}(X)$.
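For concreteness, the following Python sketch (with illustrative variable names of our own) shows one way to represent a trajectory $(t,x^{1:\nu})$ and to evaluate the state-extraction operation $\tau^{k}$ in (2):

```python
import numpy as np

def tau_k(trajectory, k):
    """Return the set of object states of a single trajectory at time step k, cf. (2).

    A trajectory is a tuple (t, states), where t is the start time and
    states is an array of shape (nu, nx) holding x^1, ..., x^nu.
    """
    t, states = trajectory
    nu = states.shape[0]
    if t <= k <= t + nu - 1:
        return [states[k - t]]   # {x^{k+1-t}} (0-based indexing)
    return []                    # trajectory not alive at time step k

def tau_k_set(trajectories, k):
    """State extraction for a set of trajectories: union of per-trajectory extractions."""
    return [x for X in trajectories for x in tau_k(X, k)]

# Example: a trajectory born at time step 3 with length 2, queried at time steps 4 and 9.
X = (3, np.array([[0.0, 1.0], [0.5, 1.0]]))
print(tau_k(X, 4))   # [array([0.5, 1. ])]
print(tau_k(X, 9))   # []
```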

II-B Densities on Sets of Objects

Given a real-valued function $\pi(\cdot)$ on the space ${\cal F}(\mathbb{R}^{n_{x}})$ of finite sets of objects, its set integral is [26]

$\int\pi({\mathbf{x}})\,\delta{\mathbf{x}}=\pi(\emptyset)+\sum_{n=1}^{\infty}\frac{1}{n!}\int\pi\left(\left\{x_{1},\dots,x_{n}\right\}\right)d(x_{1},\dots,x_{n}).$ (3)

A function $\pi(\cdot)$ on the space ${\cal F}(\mathbb{R}^{n_{x}})$ is a multi-object density if $\pi(\cdot)\geq 0$ and its set integral is one.

A PMB density is of the form [23]

$f({\bf x})=\sum_{\mathbf{y}\uplus\mathbf{w}={\bf x}}f^{p}(\mathbf{y})f^{mb}(\mathbf{w}),$ (4)
$f^{p}({\bf x})=e^{-\left\langle\lambda,1\right\rangle}\prod_{x\in\mathbf{x}}\lambda(x),$ (5)
$f^{mb}({\bf x})=\sum_{\uplus_{l=1}^{n}{\bf x}^{l}={\bf x}}\prod_{i=1}^{n}f^{i}\left({\bf x}^{i}\right),$ (6)
$f^{i}({\bf x})=\begin{cases}1-r^{i}&{\bf x}=\emptyset\\ r^{i}p^{i}(x)&{\bf x}=\{x\}\\ 0&\text{otherwise}.\end{cases}$ (7)

The PMB in (4) is the union of two independent sets: a PPP with density (5), parameterized by the Poisson intensity $\lambda(\cdot)$, and an MB with density (6), where the $i$-th Bernoulli component has density (7) with existence probability $r^{i}$ and single-object density $p^{i}(\cdot)$.
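As an illustration of this parameterization, a minimal Python sketch of a PMB parameter container is given below; the class and attribute names are ours and only serve to make the notation in (4)-(7) concrete:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Bernoulli:
    """Bernoulli component with existence probability r and single-object density p(x), cf. (7)."""
    r: float
    p: Callable[[object], float]

@dataclass
class PMB:
    """Poisson multi-Bernoulli density, cf. (4): a PPP intensity plus a list of Bernoullis."""
    ppp_intensity: Callable[[object], float]     # lambda(x) in (5)
    bernoullis: List[Bernoulli] = field(default_factory=list)

    def expected_cardinality(self, ppp_mass: float) -> float:
        """E[|x|] = <lambda, 1> + sum_i r^i, where ppp_mass is the integral of the PPP intensity."""
        return ppp_mass + sum(b.r for b in self.bernoullis)
```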

II-C Densities on Sets of Trajectories

Given a real-valued function $\pi(\cdot)$ on the single trajectory space $T_{(\alpha,\gamma)}$, its integral is [20]

$\int\pi(X)\,dX=\sum_{(t,\nu)\in I_{(\alpha,\gamma)}}\int\pi\left(t,x^{1:\nu}\right)dx^{1:\nu},$ (8)

which goes through all possible start times, lengths, and object states of a trajectory $X\in T_{(\alpha,\gamma)}$. A function $\pi(\cdot)$ on the space ${\cal F}(T_{(\alpha,\gamma)})$ is a multi-trajectory density if $\pi(\cdot)\geq 0$ and its set integral is one. The trajectory PPP and trajectory MB densities are defined similarly to their counterparts in (5) and (6), respectively. Given two single-object trajectories $X=(t,x^{1:\nu})$ and $Y=(t^{\prime},y^{1:\nu^{\prime}})$, the trajectory Dirac delta function is defined as

$\delta_{Y}(X)=\delta_{t^{\prime}}[t]\,\delta_{\nu^{\prime}}[\nu]\,\delta_{y^{1:\nu^{\prime}}}\left(x^{1:\nu}\right),$ (9)

and the multi-trajectory Dirac delta function centred at ${\bf Y}$ is defined as [26, Sec. 11.3.4.3]

$\delta_{{\bf Y}}({\bf X})=\begin{cases}0&|{\bf X}|\neq|{\bf Y}|,\\ 1&{\bf X}={\bf Y}=\emptyset,\\ \sum_{\sigma\in\Gamma_{n}}\prod_{i=1}^{n}\delta_{Y_{\sigma_{i}}}(X_{i})&{\bf X}=\{X_{i}\}_{i=1}^{n},~{\bf Y}=\{Y_{i}\}_{i=1}^{n}.\end{cases}$ (10)

Given a set ${\bf x}_{k}$ of object states at time step $k$, the set of trajectories in the time interval $k:k$ is

${\bf X}_{k:k}=\left\{X=\left(k,x^{1:1}\right):x^{1}\in{\bf x}_{k}\right\},$ (11)

where trajectory $X=(t,x^{1:\nu})\in{\bf X}_{k:k}$ has start time $t=k$ and length $\nu=1$. Therefore, it holds that the multi-trajectory density $\pi({\bf X}_{k:k})$, which is zero for trajectories outside the interval $k:k$, takes the same value as the multi-object state density $f(\tau^{k}({\bf X}_{k:k}))$.

II-D Multi-Object Models

We consider the standard multi-object models [26]. For a given multi-object state $\mathbf{x}_{k}$ at time step $k$, each object state $x\in\mathbf{x}_{k}$ is either detected with probability $p^{D}(x)$, generating one measurement $z$ with density $\ell(z|x)$, or missed with probability $1-p^{D}(x)$. The set $\mathbf{z}_{k}$ of measurements at time step $k$ is the union of the object-generated measurements and PPP clutter with intensity $\lambda^{C}(\cdot)$.

Given the current multi-object state ${\bf x}_{k}$, each object $x\in{\bf x}_{k}$ survives with probability $p^{S}(x)$ and moves to a new state with a Markovian transition density $g(\cdot|x)$, or dies with probability $1-p^{S}(x)$. The multi-object state at the next time step is the union of the surviving objects and new objects, which are born independently of the rest. The set of newborn objects is a PPP with intensity $\lambda^{B}(\cdot)$.
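To make the standard models concrete, the following Python/NumPy sketch simulates one step of the dynamic and measurement models under simplifying assumptions (constant $p^{S}$ and $p^{D}$, linear Gaussian motion and measurements, uniform clutter over a rectangular region); all function and variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_measurements(states, p_d, H, R, clutter_rate, area):
    """Generate one measurement set z_k from the standard measurement model:
    each object is detected with probability p_d and produces one noisy
    measurement, and clutter is a PPP with `clutter_rate` expected points,
    uniformly distributed over area = [[x_min, x_max], [y_min, y_max]]."""
    z = []
    for x in states:
        if rng.random() < p_d:                        # detection
            z.append(H @ x + rng.multivariate_normal(np.zeros(R.shape[0]), R))
    for _ in range(rng.poisson(clutter_rate)):        # PPP clutter cardinality
        z.append(np.array([rng.uniform(*area[0]), rng.uniform(*area[1])]))
    return z

def simulate_transition(states, p_s, F, Q, birth_rate, birth_sampler):
    """One step of the standard dynamic model: survival with probability p_s,
    Markovian motion x' ~ N(Fx, Q), and PPP births drawn by `birth_sampler`."""
    new_states = [F @ x + rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
                  for x in states if rng.random() < p_s]
    new_states += [birth_sampler() for _ in range(rng.poisson(birth_rate))]
    return new_states
```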

II-E Multi-Trajectory Dynamic Model

Performing backward smoothing on the multi-trajectory density requires a dynamic model for the set of all trajectories that have existed up to the current time step. Given the set ${\bf X}_{1:k}$ of all trajectories in the time interval $1:k$, each trajectory $X=(t,x^{1:\nu})\in{\bf X}_{1:k}$ "survives" with probability one, $p^{S}(X)=1$, and moves to a new state according to [20]

$g^{k+1}\left(t^{\prime},y^{1:\nu^{\prime}}|X\right)=\left|\tau^{k}(X)\right|\left[\left(1-p^{S}\left(x^{\nu}\right)\right)\delta_{X}\left(t^{\prime},y^{1:\nu^{\prime}}\right)+p^{S}\left(x^{\nu}\right)g\left(y^{\nu^{\prime}}|x^{\nu}\right)\delta_{X}\left(t^{\prime},y^{1:\nu^{\prime}-1}\right)\right]+\left(1-\left|\tau^{k}(X)\right|\right)\delta_{X}\left(t^{\prime},y^{1:\nu^{\prime}}\right).$ (12)

That is, if the object trajectory $X$ has died before time step $k$, the trajectory remains unaltered with probability one. If trajectory $X$ exists at time step $k$, it remains unaltered with probability $1-p^{S}(x^{\nu})$, or a new final object state $y^{\nu^{\prime}}$ is generated according to the single-object transition density with probability $p^{S}(x^{\nu})$. The set ${\bf X}_{1:k+1}$ of trajectories in the time interval $1:k+1$ is the union of the "surviving" trajectories and the newborn trajectories, where each new trajectory $(t,x^{1:\nu})$ has deterministic start time $t=k+1$ and length $\nu=1$, and the set of newborn object states is a PPP with intensity $\lambda^{B}(\cdot)$. The trajectory PPP birth at time step $k+1$ has intensity

$\lambda^{B}_{k+1}\left(t,x^{1:\nu}\right)=\delta_{k+1}[t]\,\delta_{1}[\nu]\,\lambda^{B}\left(x^{\nu}\right).$ (13)

III Problem Formulation

In backward smoothing for sets of trajectories, we aim to compute the multi-trajectory posterior density in a given time interval using a sequence of multi-object filtering densities and the multi-trajectory dynamic model.

We denote the multi-object state density at time step $k^{\prime}\in\{k,k+1\}$ and the multi-trajectory density in the time interval $\alpha:\gamma$, both conditioned on the sequence of sets of measurements ${\bf z}_{1:k}=({\bf z}_{1},\dots,{\bf z}_{k})$ up to and including time step $k$, by $f_{k^{\prime}|k}(\cdot)$ and $\pi_{\alpha:\gamma|k}(\cdot)$, respectively. In forward filtering, the Bayes filter propagates the multi-object posterior density of $\mathbf{x}_{k}$ in time via the prediction and update steps:

$f_{k|k-1}({\bf x})=\int g({\bf x}|{\bf x}^{\prime})f_{k-1|k-1}({\bf x}^{\prime})\,\delta{\bf x}^{\prime},$ (14)
$f_{k|k}({\bf x})=\frac{\ell({\bf z}_{k}|{\bf x})f_{k|k-1}({\bf x})}{\int\ell({\bf z}_{k}|{\bf x})f_{k|k-1}({\bf x})\,\delta{\bf x}},$ (15)

where $g(\cdot|{\bf x})$ is the multi-object transition density and $\ell({\bf z}_{k}|\cdot)$ is the measurement likelihood. Given the multi-object models described in Section II-D, the predicted and posterior densities on the current set of object states are PMB mixtures (PMBMs) [23]. A PMB is a special case of a PMBM with a single MB.

The multi-trajectory density $\pi_{1:K|K}(\mathbf{X}_{1:K})$ can be obtained by applying the following multi-trajectory smoothing equation recursively backwards in time [21, Theorem 2]:

$\pi_{k:K|K}({\bf X}_{k:K})=\frac{\pi_{k:k+1|k}({\bf X}_{k:k+1})\,\pi_{k+1:K|K}({\bf X}_{k+1:K})}{f_{k+1|k}(\tau^{k+1}({\bf X}_{k+1:k+1}))},$ (16)

where the predicted multi-trajectory density for the multi-trajectory dynamic model in Section II-E is given by [21, Theorem 1]

$\pi_{k:k+1|k}({\bf X}_{k:k+1})=\prod_{\left(k,x^{1:\nu}\right)\in{\bf X}_{k:k+1}^{k}}\Big[\left(1+p^{S}\left(x^{\nu}\right)\left(\delta_{2}[\nu]-1\right)\right)\left(g\left(x^{\nu}|x^{\nu-1}\right)p^{S}\left(x^{\nu-1}\right)\right)^{\delta_{2}[\nu]}\Big]\,\pi_{k:k|k}({\bf X}_{k:k})\,e^{-\left\langle\lambda_{k+1}^{B},1\right\rangle}\prod_{\left(k+1,x^{1:1}\right)\in\mathbf{X}_{k+1:k+1}}\lambda^{B}_{k+1}\left(k+1,x^{1:1}\right).$ (17)

It can be observed that the predicted multi-trajectory density can be evaluated by multiplying the multi-trajectory density $\pi_{k:k|k}({\bf X}_{k:k})$, which takes the same value as $f_{k|k}(\tau^{k}({\bf X}_{k:k}))$, with a factor $1-p^{S}(\cdot)$ for each trajectory that ends at time step $k$, a factor $g(\cdot|\cdot)\,p^{S}(\cdot)$ for each trajectory that exists at both time steps $k$ and $k+1$, and the density of the newborn trajectories at time step $k+1$.

The backward smoothing equation (16), in general, cannot be computed in closed form. An effective solution to evaluate the multi-trajectory posterior is given by backward simulation, which allows us to draw samples in the multi-object trajectory space [21]. Performing backward simulation requires a backward kernel to recursively draw samples of sets of trajectories.

The backward kernel for sets of trajectories, conditioned on the set ${\bf Y}$ of trajectories in the time interval $k+1:K$ and the sequence of measurement sets up to and including time step $K$, satisfies [21, Lemma 3]

$\pi_{k:K|K}({\bf X}|{\bf Y})\propto\pi_{k:k+1|k}({\bf X}_{k:k+1})\,\delta_{{\bf Y}}({\bf X}_{k+1:K}).$ (18)

That is, the backward kernel $\pi_{k:K|K}({\bf X}|{\bf Y})$ is proportional to the predicted multi-trajectory density $\pi_{k:k+1|k}({\bf X}_{k:k+1})$ if ${\bf Y}={\bf X}_{k+1:K}$, and zero otherwise. Evaluating the backward kernel $\pi_{k:K|K}({\bf X}|{\bf Y})$ considers all different ways of associating the trajectories in ${\bf X}_{k:k+1}$ to the trajectories in ${\bf Y}$.

In this paper, the forward filtering is achieved using a PHD filter [3], and the PMB multi-object filtering densities $f_{k|k}(\mathbf{x})$ before the Poisson approximation are stored at each time step. Then, backward simulation is performed on these PMB filtering densities to obtain the approximate multi-trajectory posterior.

IV Hybrid PHD-PMB Trajectory Smoother

In this section, we present the forward filtering and backward smoothing recursions of the hybrid PHD-PMB trajectory smoother. An illustration of the complete recursion is shown in Fig. 1. We also discuss some practical considerations for a tractable linear Gaussian implementation.

Figure 1: Forward filtering and backward smoothing recursions of the hybrid PHD-PMB trajectory smoother. TPMBM represents trajectory PMBM [27].

IV-A Forward PHD Filtering

Lemma 1

Given the Poisson multi-object filtering density $f_{k-1|k-1}(\mathbf{x})$ of the form (5) with Poisson intensity $\lambda_{k-1|k-1}(\cdot)$ and the multi-object dynamic model described in Section II-D, the predicted multi-object density is Poisson, with intensity

$\lambda_{k|k-1}(x)=\lambda^{B}(x)+\left\langle\lambda_{k-1|k-1},g(x|\cdot)\,p^{S}(\cdot)\right\rangle.$ (19)

Lemma 1 is given by the PHD prediction step [3].
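For the linear Gaussian case considered later in Section V, Lemma 1 reduces to the Gaussian mixture PHD prediction of [4]. A minimal Python/NumPy sketch is given below, assuming a constant survival probability; the function and variable names are ours:

```python
import numpy as np

def phd_predict(weights, means, covs, birth_w, birth_m, birth_P, p_s, F, Q):
    """Gaussian mixture PHD prediction, cf. (19):
    lambda_{k|k-1}(x) = lambda_B(x) + <lambda_{k-1|k-1}, g(x|.) p_S>.
    Surviving components are scaled by p_s and propagated through (F, Q);
    the birth components are appended unchanged."""
    pred_w = [p_s * w for w in weights] + list(birth_w)
    pred_m = [F @ m for m in means] + list(birth_m)
    pred_P = [F @ P @ F.T + Q for P in covs] + list(birth_P)
    return pred_w, pred_m, pred_P
```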

Lemma 2

Given the Poisson multi-object predicted density $f_{k|k-1}(\mathbf{x})$ of the form (5) with Poisson intensity $\lambda_{k|k-1}(\cdot)$ and the multi-object measurement model described in Section II-D, the multi-object density updated with the measurement set $\mathbf{z}_{k}=\{z_{1},\dots,z_{m_{k}}\}$ is a PMB of the form (4), with $m_{k}$ Bernoulli components, one for each measurement. The updated Poisson intensity is

$\lambda_{k|k}^{p}(x)=\left(1-p^{D}(x)\right)\lambda_{k|k-1}(x),$ (20)

and the $i$-th new Bernoulli component created by measurement $z_{k}^{i}$, $i\in\{1,\dots,m_{k}\}$, is parameterized by

$r_{k|k}^{i}=\frac{\left\langle\lambda_{k|k-1},\ell\left(z_{k}^{i}|\cdot\right)p^{D}(\cdot)\right\rangle}{\lambda^{C}\left(z_{k}^{i}\right)+\left\langle\lambda_{k|k-1},\ell\left(z_{k}^{i}|\cdot\right)p^{D}(\cdot)\right\rangle},$ (21a)
$p_{k|k}^{i}(x)=\frac{\ell\left(z_{k}^{i}|x\right)p^{D}(x)\,\lambda_{k|k-1}(x)}{\left\langle\lambda_{k|k-1},\ell\left(z_{k}^{i}|\cdot\right)p^{D}(\cdot)\right\rangle}.$ (21b)

Note that we use the superscript $p$ in (20) to represent the updated PPP intensity before the Kullback-Leibler divergence (KLD) minimization. Lemma 2 is a special case of the PMBM update step [23, Theorem 2] with the predicted MB mixture being empty. In the updated PMB, the PPP represents undetected objects, whereas the MB represents detected objects.
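For reference, the following Python/NumPy sketch illustrates Lemma 2 for a Gaussian mixture predicted intensity, a constant detection probability, and a single measurement $z$; it computes the undetected PPP intensity (20) and the Bernoulli parameters (21). The function names are ours and the code is only a sketch of the update:

```python
import numpy as np

def _gauss_pdf(x, m, S):
    """Multivariate Gaussian density N(x; m, S)."""
    d = x - m
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt(np.linalg.det(2 * np.pi * S))

def pmb_update_single_measurement(weights, means, covs, z, p_d, H, R, clutter_intensity):
    """PMB update with a Gaussian mixture predicted intensity, cf. (20)-(21),
    for one measurement z. Returns the undetected PPP intensity and the
    Bernoulli component (r, GM weights, means, covariances) created by z."""
    # (20): undetected part of the PPP intensity (means/covariances unchanged).
    ppp_w = [(1.0 - p_d) * w for w in weights]

    # (21): Bernoulli created by measurement z (Kalman update of each component).
    bern_w, bern_m, bern_P = [], [], []
    for w, m, P in zip(weights, means, covs):
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        bern_w.append(p_d * w * _gauss_pdf(z, H @ m, S))
        bern_m.append(m + K @ (z - H @ m))
        bern_P.append(P - K @ H @ P)
    total = sum(bern_w)
    r = total / (clutter_intensity + total)      # (21a)
    bern_w = [w / total for w in bern_w]         # normalized GM weights, (21b)
    return (ppp_w, means, covs), (r, bern_w, bern_m, bern_P)
```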

Lemma 3

The best PPP approximation of the updated PMB density in terms of minimizing the KLD yields the Poisson multi-object density with intensity

$\lambda_{k|k}(x)=\lambda^{p}_{k|k}(x)+\sum_{i=1}^{m_{k}}r^{i}_{k|k}p^{i}_{k|k}(x)=\left(1-p^{D}(x)\right)\lambda_{k|k-1}(x)+\sum_{i=1}^{m_{k}}\frac{\ell\left(z_{k}^{i}|x\right)p^{D}(x)\,\lambda_{k|k-1}(x)}{\lambda^{C}\left(z_{k}^{i}\right)+\left\langle\lambda_{k|k-1},\ell\left(z_{k}^{i}|\cdot\right)p^{D}(\cdot)\right\rangle}.$ (22)

Lemma 3 is proved in [28, Sec. IV], and it is the same as the PHD update step [3]. Note that in PHD filtering, the multi-object intensity (22) is recursively computed over time without explicitly computing the Bernoulli densities (21). Lemma 2 and Lemma 3 describe the PHD filter update as consisting of two steps, where the first step is implicit and gives a PMB representation of the multi-object posterior as an intermediate result before the PPP approximation. Lemma 3 can also be interpreted as recycling all the Bernoulli components in PMB filtering [28], meaning that every Bernoulli component is implicitly approximated as a PPP. This yields a less accurate cardinality distribution, but it greatly simplifies the subsequent update steps, as the approximate multi-object posterior only contains a single global hypothesis. Since the PMB is a more accurate representation of the multi-object posterior than the PPP, performing backward smoothing on the PMB filtering densities before the PPP approximation allows us to exploit more information freely available in forward filtering.
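In a Gaussian mixture implementation, the PPP approximation of Lemma 3 simply appends the scaled Gaussian components of each Bernoulli density to the PPP intensity. A minimal sketch, with our own naming and the Bernoulli format used in the previous sketch, is:

```python
def recycle_bernoullis(ppp_w, ppp_m, ppp_P, bernoullis):
    """KLD-minimizing PPP approximation of a Gaussian mixture PMB, cf. (22):
    lambda_{k|k}(x) = lambda^p_{k|k}(x) + sum_i r^i p^i(x).
    Each Bernoulli (r, gm_w, gm_m, gm_P) is absorbed into the PPP intensity by
    appending its Gaussian components with weights scaled by r * gm_w."""
    w, m, P = list(ppp_w), list(ppp_m), list(ppp_P)
    for r, gm_w, gm_m, gm_P in bernoullis:
        w += [r * wi for wi in gm_w]
        m += list(gm_m)
        P += list(gm_P)
    return w, m, P
```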

IV-B Backward PMB Smoothing

We first present the backward kernel (18) for PMB filtering densities. Then, we describe how to recursively draw samples of sets of trajectories from the backward kernel.

Theorem 1

Given the PMB filtering density $f_{k|k}(\cdot)$ at time step $k$, parameterized by the Poisson intensity (20) and the Bernoulli densities (21), the set ${\bf Y}$ of trajectories in the time interval $k+1:K$, and the multi-trajectory dynamic model described in Section II-E, the backward kernel (18) in the time interval $k:K$ is a PMBM of the form

$\pi_{k:K|K}({\bf X}|{\bf Y})=\sum_{{\bf W}\uplus{\bf V}={\bf X}}\pi_{k:K|K}^{p}({\bf W})\,\pi_{k:K|K}^{mb}({\bf V}|{\bf Y}),$ (23)
$\pi_{k:K|K}^{p}({\bf X})=e^{-\left\langle\lambda^{p}_{k:K|K},1\right\rangle}\prod_{X\in{\bf X}}\lambda^{p}_{k:K|K}(X),$ (24)
$\pi_{k:K|K}^{mb}({\bf X}|{\bf Y})=\sum_{a\in{\cal A}_{k:K|K}}w^{a}\sum_{\biguplus^{n_{k:K|K}}_{l=1}{\bf X}^{l}={\bf X}}\prod_{i=1}^{n_{k:K|K}}\pi^{i,a^{i}}_{k:K|K}\left({\bf X}^{i}\right).$ (25)

The set of global hypotheses is defined as

${\cal A}_{k:K|K}=\Big\{\left(a^{1},\dots,a^{n_{k:K|K}}\right):a^{i}\in\left\{1,\dots,h^{i}_{k:K|K}\right\}~\forall i,~\left|{\cal M}^{i,a^{i}}_{k+1:K}\right|\leq 1,~\biguplus_{i=1}^{n_{k:K|K}}{\cal M}^{i,a^{i}}_{k+1:K}={\cal M}_{k+1:K}\Big\},$ (26)

and the weight of global hypothesis $a\in{\cal A}_{k:K|K}$ satisfies

$w^{a}\propto\prod_{i=1}^{n_{k:K|K}}w^{i,a^{i}}_{k:K|K},$ (27)

where the proportionality means that normalization is required to ensure that $\sum_{a\in{\cal A}_{k:K|K}}w^{a}=1$.

In (26), ${\cal M}_{k+1:K}=\{1,\dots,n_{k+1:K}\}$ is the set of indices of the trajectories in ${\bf Y}=\{Y^{1},\dots,Y^{n_{k+1:K}}\}$, with $Y^{1},\dots,Y^{l}$ being trajectories of objects that existed at time step $k+1$, and $Y^{l+1},\dots,Y^{n_{k+1:K}}$ being trajectories of objects that appeared after time step $k+1$. Each trajectory in the set ${\bf Y}$ creates a unique trajectory Bernoulli component, and thus the number of trajectory Bernoulli components in (25) is $n_{k:K|K}=m_{k}+n_{k+1:K}$, indexed by the variable $i\in\{1,\dots,n_{k:K|K}\}$. A global hypothesis is $a=(a^{1},\dots,a^{n_{k:K|K}})$, where $a^{i}\in\{1,\dots,h^{i}_{k:K|K}\}$ is the index of the local hypothesis for the $i$-th trajectory Bernoulli component, and $h^{i}_{k:K|K}$ is its number of local hypotheses.

The Poisson intensity in (24) is

$\lambda^{p}_{k:K|K}\left(t,x^{1:\nu}\right)=\delta_{k}[t]\,\delta_{1}[\nu]\left(1-p^{S}\left(x^{1}\right)\right)\lambda^{p}_{k|k}\left(x^{1}\right).$ (28)

For each trajectory Bernoulli component $i\in\{1,\dots,m_{k}\}$ in the predicted trajectory PMB $\pi_{k:k+1|k}({\bf X}_{k:k+1})$, there are $h_{k:K|K}^{i}=l+1$ local hypotheses. The local hypothesis that corresponds to the case that the trajectory ended at time step $k$ is given by ${\cal M}_{k+1:K}^{i,1}=\emptyset$ and

$w^{i,1}_{k:K|K}=1-r^{i}_{k|k}+r^{i}_{k|k}\left\langle p^{i}_{k|k},1-p^{S}\right\rangle,$ (29a)
$r^{i,1}_{k:K|K}=\frac{r^{i}_{k|k}\left\langle p^{i}_{k|k},1-p^{S}\right\rangle}{w^{i,1}_{k:K|K}},$ (29b)
$p^{i,1}_{k:K|K}\left(t,x^{1:\nu}\right)=\delta_{k}[t]\,\delta_{1}[\nu]\,\frac{p^{i}_{k|k}\left(x^{1}\right)\left(1-p^{S}\left(x^{1}\right)\right)}{\left\langle p^{i}_{k|k},1-p^{S}\right\rangle}.$ (29c)

The local hypothesis that corresponds to the case that the trajectory Bernoulli component is updated by trajectory $Y^{j}=(t^{j},y^{1:\nu^{j}})$, $j\in\{1,\dots,l\}$ (present at time step $k+1$, i.e., $t^{j}=k+1$), is given by ${\cal M}_{k+1:K}^{i,j+1}=\{j\}$ and

$w^{i,j+1}_{k:K|K}=r^{i}_{k|k}\left\langle p^{i}_{k|k},g\left(y^{1}|\cdot\right)p^{S}\right\rangle,$ (30a)
$r^{i,j+1}_{k:K|K}=1,$ (30b)
$p^{i,j+1}_{k:K|K}\left(t,x^{1:\nu}\right)=\delta_{k}[t]\,\delta_{\nu^{j}+1}[\nu]\,\delta_{y^{1:\nu^{j}}}\left(x^{2:\nu}\right)\frac{g\left(y^{1}|x^{1}\right)p^{i}_{k|k}\left(x^{1}\right)p^{S}\left(x^{1}\right)}{\left\langle p^{i}_{k|k},g\left(y^{1}|\cdot\right)p^{S}\right\rangle}.$ (30c)

The trajectory Bernoulli component created by trajectory $Y^{j}=(t^{j},y^{1:\nu^{j}})$, $j\in\{1,\dots,l\}$, has two local hypotheses, $h^{i}_{k:K|K}=2$, $i=m_{k}+j$. The first local hypothesis represents the case that the Bernoulli component does not exist, and it is given by ${\cal M}_{k+1:K}^{i,1}=\emptyset$ and

$w^{i,1}_{k:K|K}=1,\quad r_{k:K|K}^{i,1}=0.$ (31)

The second one is given by ${\cal M}_{k+1:K}^{i,2}=\{j\}$ and

$w^{i,2}_{k:K|K}=\lambda^{B}_{k+1}\left(y^{1}\right)+\left\langle\lambda^{p}_{k|k},g\left(y^{1}|\cdot\right)p^{S}\right\rangle,$ (32a)
$r^{i,2}_{k:K|K}=1,$ (32b)
$p^{i,2}_{k:K|K}\left(t,x^{1:\nu}\right)=\underline{w}^{i,2}_{k:K|K}\,\delta_{\left(t^{j},y^{1:\nu^{j}}\right)}\left(t,x^{1:\nu}\right)+\overline{w}^{i,2}_{k:K|K}\,\delta_{k}[t]\,\delta_{\nu^{j}+1}[\nu]\,\delta_{y^{1:\nu^{j}}}\left(x^{2:\nu}\right)\frac{g\left(y^{1}|x^{1}\right)\lambda^{p}_{k|k}\left(x^{1}\right)p^{S}\left(x^{1}\right)}{\left\langle\lambda^{p}_{k|k},g\left(y^{1}|\cdot\right)p^{S}\right\rangle},$ (32c)
$\underline{w}^{i,2}_{k:K|K}=\frac{\lambda^{B}_{k+1}\left(y^{1}\right)}{w^{i,2}_{k:K|K}},$ (32d)
$\overline{w}^{i,2}_{k:K|K}=1-\underline{w}^{i,2}_{k:K|K}.$ (32e)

Finally, the trajectory Bernoulli component created by trajectory $Y^{j}$, $j\in\{l+1,\dots,n_{k+1:K}\}$ (not present at time step $k+1$), has a single local hypothesis, $h_{k:K|K}^{i}=1$, $i=m_{k}+j$, given by ${\cal M}_{k+1:K}^{i,1}=\{j\}$ and

$w^{i,1}_{k:K|K}=1,\quad r^{i,1}_{k:K|K}=1,\quad p^{i,1}_{k:K|K}(X)=\delta_{Y^{j}}(X).$ (33)

Theorem 1 is proved in [21, App. E].

By running the backward simulation for sets of trajectories $T$ times for $k=K-1,\dots,1$, where we recursively draw samples of ${\bf X}_{k:K}$ from the backward kernel (18), we obtain $T$ samples $\{{\bf X}^{(i)}_{1:K}\}_{i=1}^{T}$. This gives a particle representation of the multi-trajectory density

$\pi({\bf X}_{k:K})\approx\sum_{i=1}^{T}\frac{1}{T}\,\delta_{{\bf X}^{(i)}}({\bf X}_{k:K}),$ (34)

where the $i$-th particle has state ${\bf X}^{(i)}$ and weight $1/T$.
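The overall backward pass can be summarized by the following high-level Python sketch. The function `sample_backward_kernel` is a placeholder for drawing one set of trajectories from the PMBM backward kernel of Theorem 1 (including the association sampling discussed in Section IV-C); only the recursion structure is shown:

```python
def backward_simulation(pmb_densities, num_particles, sample_backward_kernel):
    """High-level sketch of the backward pass, cf. (18) and (34).

    pmb_densities[k] holds the stored PMB filtering density at time step k
    (k = 0, ..., K-1), and sample_backward_kernel(pmb_k, Y) draws one set of
    trajectories X_{k:K} from the backward kernel given the previously sampled
    set Y = X_{k+1:K} (Y is None at the final time step)."""
    K = len(pmb_densities)
    particles = []
    for _ in range(num_particles):
        # Initialize at the final time step by sampling the last PMB density.
        X = sample_backward_kernel(pmb_densities[K - 1], None)
        # Recurse backwards over k = K-2, ..., 0.
        for k in range(K - 2, -1, -1):
            X = sample_backward_kernel(pmb_densities[k], X)
        particles.append(X)   # each particle has weight 1 / num_particles
    return particles
```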

IV-C Practical Considerations for a Tractable Implementation

In this work, we consider the linear Gaussian implementations of the hybrid PHD-PMB trajectory smoother. The linear Gaussian implementations of the PHD filter and the PMB smoother using backward simulation have been described in [4] and [21], respectively. Here, we highlight some practical considerations that facilitate an efficient implementation.

In forward PHD filtering, the Poisson intensity (22) is a Gaussian mixture, and its number of components grows without bound as measurements are accumulated over time. To obtain a tractable implementation, Gaussian mixture reduction needs to be performed by pruning Gaussian components with small weights and merging similar Gaussian components [4].

We extract the PMB filtering densities after the PHD update step before the PPP approximation. It should be noted that doing so only involves minor modifications of an existing implementation of the PHD update step. By comparing (21) and (22), we can see that the Poisson intensity in the PMB is directly given by the first term in (22), and that the single-object density of the $i$-th Bernoulli component can be easily obtained by normalizing the $i$-th term in the summation in (22), where the normalization factor gives the existence probability $r_{k|k}^{i}$ in (21a). In addition, since the predicted PPP intensity (19) is a Gaussian mixture, the single-object density (21b) of each Bernoulli component in the PMB is also a Gaussian mixture. This means that in backward simulation we may need many particles to find the mode of the multi-trajectory density $\pi_{1:K|K}(\mathbf{X})$. For efficient sampling, we can reduce each single-object density (21b) to a single Gaussian by moment matching.
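A minimal sketch of the moment-matching reduction mentioned above is given below (with our own naming); it replaces a Gaussian mixture by a single Gaussian with the same mean and covariance:

```python
import numpy as np

def moment_match(weights, means, covs):
    """Reduce a Gaussian mixture to a single Gaussian by matching its first two
    moments, as suggested for the Bernoulli densities (21b) before backward
    simulation. The weights are assumed to sum to one."""
    w = np.asarray(weights)
    m = sum(wi * mi for wi, mi in zip(w, means))
    P = sum(wi * (Pi + np.outer(mi - m, mi - m)) for wi, Pi, mi in zip(w, covs, means))
    return m, P
```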

In backward simulation, it is trivial to parallelize the sampling process, since multiple backward sets of trajectories can be generated independently. The main challenge lies in sampling the MB mixture (25) in the PMBM backward kernel (23), which has an intractable number of MB components due to the unknown associations between the different components in the PMB filtering density and the trajectories. To avoid enumerating every global hypothesis, which is of combinatorial complexity, we can draw samples only from MB components with non-negligible weights [21]. One way to do so is by solving a ranked assignment problem using Murty's algorithm [29]. In addition, ellipsoidal gating can be applied to remove unlikely local hypotheses to simplify the computation of the assignment problem, as sketched below.
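For completeness, a simple ellipsoidal gating test of the kind mentioned above can be sketched as follows; the gate threshold shown is an illustrative chi-square quantile and is not taken from the paper:

```python
import numpy as np

def ellipsoidal_gate(z, z_pred, S, gate_size=13.8):
    """Ellipsoidal gating: keep a local hypothesis only if the squared
    Mahalanobis distance between the (trajectory-induced) predicted measurement
    z_pred, with innovation covariance S, and z is below a chi-square threshold.
    The default 13.8 is roughly the 99.9% quantile for 2 degrees of freedom."""
    d = z - z_pred
    return d @ np.linalg.solve(S, d) < gate_size
```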

V Simulation Results

In this section, we evaluate the performance of the proposed hybrid PHD-PMB trajectory smoother in a simulation study in comparison with the PHD filter [4] and the trajectory PHD filter [19].

We consider the same two-dimensional scenario as in [19] with 100 time steps and 4 objects. The single-object state is $x=[p_{x},\dot{p}_{x},p_{y},\dot{p}_{y}]^{T}$, consisting of position and velocity. The single-object dynamic model is nearly constant velocity with transition matrix $F$ and process noise covariance $Q$ given by

$F=I_{2}\otimes\begin{bmatrix}1&T_{s}\\ 0&1\end{bmatrix},\quad Q=\sigma_{q}^{2}\,I_{2}\otimes\begin{bmatrix}T_{s}^{3}/3&T_{s}^{2}/2\\ T_{s}^{2}/2&T_{s}\end{bmatrix},$

where $I_{2}$ is the $2\times 2$ identity matrix, $\otimes$ denotes the Kronecker product, $T_{s}=0.5\,\text{s}$ is the sampling period, and $\sigma_{q}=1.8$. Each object survives with probability $p^{S}=0.99$. The birth model is a PPP, and its intensity is a Gaussian mixture with three components. Each component has the same weight $0.1$ and covariance $\text{diag}([225,100,225,100])$; the means are $[85,0,140,0]^{T}$, $[-5,0,220,0]^{T}$, and $[7,0,50,0]^{T}$. The ground truth object trajectories are illustrated in Fig. 2.
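For reference, these motion and birth parameters can be written down directly, e.g., in Python/NumPy (variable names are ours):

```python
import numpy as np

Ts, sigma_q = 0.5, 1.8
F = np.kron(np.eye(2), np.array([[1.0, Ts],
                                 [0.0, 1.0]]))
Q = sigma_q**2 * np.kron(np.eye(2), np.array([[Ts**3 / 3, Ts**2 / 2],
                                              [Ts**2 / 2, Ts]]))

# Gaussian mixture birth intensity: three components with weight 0.1 each.
birth_weights = [0.1, 0.1, 0.1]
birth_means = [np.array([85.0, 0.0, 140.0, 0.0]),
               np.array([-5.0, 0.0, 220.0, 0.0]),
               np.array([7.0, 0.0, 50.0, 0.0])]
birth_cov = np.diag([225.0, 100.0, 225.0, 100.0])
```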

Figure 2: True object trajectories. The up triangles and the numbers next to them indicate the starting positions and time steps of the different trajectories. The down triangles and the numbers next to them indicate the ending positions and time steps.

The single-object measurement model is linear and Gaussian with observation matrix $H$ and measurement noise covariance $R$ given by

$H=I_{2}\otimes\begin{bmatrix}1&0\end{bmatrix},\quad R=\sigma_{r}^{2}I_{2},$

where $\sigma_{r}=2$. The detection probability is $p^{D}=0.9$. The clutter is uniformly distributed in the area $[0\,\text{m},2000\,\text{m}]\times[0\,\text{m},2000\,\text{m}]$ with Poisson clutter rate $\gamma^{C}=50$.
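Similarly, the measurement model and the clutter intensity implied by these parameters can be written as follows (variable names are ours; the constant clutter intensity is the clutter rate divided by the area of the surveillance region):

```python
import numpy as np

sigma_r, p_d = 2.0, 0.9
H = np.kron(np.eye(2), np.array([[1.0, 0.0]]))   # picks out (p_x, p_y)
R = sigma_r**2 * np.eye(2)

# Uniform Poisson clutter: rate 50 over a 2000 m x 2000 m region, so the
# clutter intensity entering (21a) is constant over the surveillance area.
clutter_rate = 50.0
clutter_intensity = clutter_rate / (2000.0 * 2000.0)
```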

The PHD filter [4] is implemented with pruning threshold $10^{-4}$, merging threshold $4$, and a maximum of $30$ Gaussian components. Object state estimates are obtained using the estimator described in [26, Sec. 9.5.4.4]. The trajectory PHD filter [19] is also implemented with pruning threshold $10^{-4}$, absorption threshold $4$, and a maximum of $30$ Gaussian components, and it performs smoothing-while-filtering without the $L$-scan approximation. Its trajectory estimates are obtained using the estimator described in [19, Sec. VII-D]. The hybrid PHD-PMB trajectory smoother is implemented with $1000$ particles, and at most $100$ global hypotheses are sampled. Its trajectory estimates are obtained using the estimator described in [21, App. G]. An exemplar output of the hybrid PHD-PMB trajectory smoother is shown in Fig. 3.

Figure 3: Exemplar output of the hybrid PHD-PMB trajectory smoother. It can be seen that the hybrid PHD-PMB trajectory smoother produces smoothed estimates for all the trajectories. In addition, it can recover missed detections and disregard false detections from the forward PHD filtering.

The multi-object state estimation performance is evaluated using the generalized optimal subpattern assignment (GOSPA) metric [30] at each time step with $\alpha=2$, $c=10$, and $p=1$. GOSPA allows for the decomposition of the estimation error into localization error and missed and false detection errors. The multi-trajectory smoothing performance is evaluated for the set of all trajectories using the trajectory GOSPA (TGOSPA) metric [31] at the last time step with $c=10$, $p=1$, and $\gamma=1$. TGOSPA allows for the decomposition of the estimation error into localization error, missed and false detection errors, and track switch error. We note that the trajectory PHD filter cannot estimate the set of all trajectories accurately. To report estimates of the set of all trajectories, we tag each Gaussian component and extract the trajectory estimates with unique tags via post-processing. We report the results averaged over 100 Monte Carlo runs.

Figure 4: GOSPA error and its decomposition over time.
TABLE I: Average GOSPA error and its decomposition per time step for different scenario parameters.

                      PHD                                      Hybrid PHD-PMB
                  GOSPA  Localization  Missed  False      GOSPA  Localization  Missed  False
No change          7.82      5.38       1.58    0.86       4.20      3.79       0.31    0.10
$\gamma^{C}=10$    7.26      5.42       1.55    0.28       4.06      3.76       0.23    0.06
$\gamma^{C}=100$   8.28      5.41       1.59    1.27       4.30      3.81       0.37    0.12
$p^{D}=0.98$       6.53      5.73       0.35    0.45       3.80      3.58       0.16    0.06
$p^{D}=0.8$       10.95      5.21       2.55    3.19       4.76      4.11       0.56    0.10
Figure 5: TGOSPA error and its decomposition over time.
TABLE II: Average TGOSPA error and its decomposition per time step for different scenario parameters.

                      Trajectory PHD                                           Hybrid PHD-PMB
                  TGOSPA  Localization  Missed  False  Track switch      TGOSPA  Localization  Missed  False  Track switch
No change           5.43      4.61       0.12    0.71      0.00            5.22      4.79       0.31    0.10      0.01
$\gamma^{C}=10$     5.01      4.65       0.13    0.23      0.00            5.14      4.82       0.23    0.07      0.02
$\gamma^{C}=100$    5.90      4.68       0.18    1.04      0.00            5.34      4.84       0.37    0.12      0.01
$p^{D}=0.98$        4.92      4.51       0.01    0.39      0.00            4.78      4.56       0.16    0.06      0.00
$p^{D}=0.8$         6.28      4.86       0.29    1.13      0.00            5.91      5.22       0.56    0.10      0.02

Let us first study the multi-object estimation performance. The GOSPA error and its decomposition for the PHD filter and the hybrid PHD-PMB trajectory smoother over time are shown in Fig. 4. Overall, the hybrid PHD-PMB trajectory smoother outperforms the PHD filter in terms of localization, missed detection, and false detection errors by a significant margin. One exception is that the hybrid PHD-PMB trajectory smoother presents higher localization errors at the first few time steps, due to the mismatch between the Gaussian means in the birth intensity and the true object birth positions. Moreover, the hybrid PHD-PMB trajectory smoother shows one apparent anomaly: its missed detection error increases a few time steps before an object disappears. This anomaly is due to premature object death, a problem also observed in PHD smoothers [11, 12], where missed detections are reported at earlier time steps when objects disappear, with a lag equal to the smoothing lag. However, we note that in the hybrid PHD-PMB trajectory smoother this problem is less severe, in the sense that it does not happen for every disappearing object, and missed detections due to premature object death are not propagated all the way back to earlier time steps. We also show the average GOSPA error and its decomposition per time step for different scenario parameters in Table I. We can see that the hybrid PHD-PMB trajectory smoother consistently outperforms the PHD filter by a significant margin, and it is especially robust to false detections.

We then discuss the trajectory estimation performance of the hybrid PHD-PMB trajectory smoother and the trajectory PHD filter. The TGOSPA error and its decomposition for the two methods over time are shown in Fig. 5. The hybrid PHD-PMB trajectory smoother surpasses the trajectory PHD filter by producing fewer false detections, whereas the trajectory PHD filter exhibits slightly better localization accuracy, fewer missed detections, and fewer track switches. The average TGOSPA error and its decomposition per time step for different scenario parameters are presented in Table II. We can see that the hybrid PHD-PMB trajectory smoother generally outperforms the trajectory PHD filter, except for the scenario with the low clutter rate.

VI Conclusions

In this paper, we have presented a hybrid PHD-PMB multi-object smoother, which performs backward simulation using the PMB densities obtained in the update step of forward PHD filtering to provide smoothed trajectory estimates. The hybrid PHD-PMB trajectory smoother makes it possible for the PHD filter to generate smoothed trajectory estimates for all the objects. Possible future work includes the development of a hybrid PHD-PMB trajectory smoother for non-Gaussian, non-linear scenarios using sequential Monte Carlo implementations and experiments with real data [32].

References

  • [1] F. Meyer, T. Kropfreiter, J. L. Williams, R. Lau, F. Hlawatsch, P. Braca, and M. Z. Win, “Message passing algorithms for scalable multitarget tracking,” Proceedings of the IEEE, vol. 106, no. 2, pp. 221–259, 2018.
  • [2] R. L. Streit, R. B. Angle, and M. Efe, Analytic combinatorics in multiple object tracking.   Springer, 2021.
  • [3] R. P. Mahler, “Multitarget Bayes filtering via first-order multitarget moments,” IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 4, pp. 1152–1178, 2003.
  • [4] B.-N. Vo and W.-K. Ma, “The Gaussian mixture probability hypothesis density filter,” IEEE Transactions on Signal Processing, vol. 54, no. 11, p. 4091, 2006.
  • [5] B. Ristic, B.-N. Vo, and D. Clark, “A note on the reward function for PHD filters with sensor control,” IEEE Transactions on Aerospace and Electronic Systems, vol. 47, no. 2, pp. 1521–1529, 2011.
  • [6] K. Granström, C. Lundquist, and O. Orguner, “Extended target tracking using a Gaussian-mixture PHD filter,” IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 4, pp. 3268–3286, 2012.
  • [7] K. Granström and U. Orguner, “A PHD filter for tracking multiple extended targets using random matrices,” IEEE Transactions on Signal Processing, vol. 60, no. 11, pp. 5657–5671, 2012.
  • [8] C. Lundquist, L. Hammarstrand, and F. Gustafsson, “Road intensity based mapping using radar measurements with a probability hypothesis density filter,” IEEE Transactions on Signal Processing, vol. 59, no. 4, pp. 1397–1408, 2010.
  • [9] L. Gao, G. Battistelli, and L. Chisci, “PHD-SLAM 2.0: Efficient SLAM in the presence of missdetections and clutter,” IEEE Transactions on Robotics, vol. 37, no. 5, pp. 1834–1843, 2021.
  • [10] N. Nadarajah, T. Kirubarajan, T. Lang, M. McDonald, and K. Punithakumar, “Multitarget tracking using probability hypothesis density smoothing,” IEEE Transactions on Aerospace and Electronic Systems, vol. 47, no. 4, pp. 2344–2360, 2011.
  • [11] R. P. Mahler, B.-T. Vo, and B.-N. Vo, “Forward-backward probability hypothesis density smoothing,” IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 1, pp. 707–728, 2012.
  • [12] S. Nagappa and D. E. Clark, “Fast sequential Monte Carlo PHD smoothing,” in 14th International Conference on Information Fusion.   IEEE, 2011, pp. 1–7.
  • [13] R. Georgescu, P. Willett, and L. Svensson, “Particle PHD forward filter-backward simulator for targets in close proximity,” in IEEE International Conference on Acoustics, Speech and Signal Processing.   IEEE, 2013, pp. 6387–6391.
  • [14] P. Feng, W. Wang, S. M. Naqvi, and J. Chambers, “Adaptive retrodiction particle PHD filter for multiple human tracking,” IEEE Signal Processing Letters, vol. 23, no. 11, pp. 1592–1596, 2016.
  • [15] B.-N. Vo, B.-T. Vo, and R. P. Mahler, “Closed-form solutions to forward-backward smoothing,” IEEE Transactions on Signal Processing, vol. 60, no. 1, pp. 2–17, 2011.
  • [16] X. He and G. Liu, “Improved Gaussian mixture probability hypothesis density smoother,” Signal Processing, vol. 120, pp. 56–63, 2016.
  • [17] K. Panta, D. E. Clark, and B.-N. Vo, “Data association and track management for the Gaussian mixture probability hypothesis density filter,” IEEE Transactions on Aerospace and Electronic Systems, vol. 45, no. 3, pp. 1003–1016, 2009.
  • [18] Z. Lu, W. Hu, and T. Kirubarajan, “Labeled random finite sets with moment approximation,” IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3384–3398, 2017.
  • [19] Á. F. García-Fernández and L. Svensson, “Trajectory PHD and CPHD filters,” IEEE Transactions on Signal Processing, vol. 67, no. 22, pp. 5702–5714, 2019.
  • [20] Á. F. García-Fernández, L. Svensson, and M. R. Morelande, “Multiple target tracking based on sets of trajectories,” IEEE Transactions on Aerospace and Electronic Systems, vol. 56, no. 3, pp. 1685–1707, 2020.
  • [21] Y. Xia, L. Svensson, Á. F. García-Fernández, J. L. Williams, D. Svensson, and K. Granström, “Multiple object trajectory estimation using backward simulation,” IEEE Transactions on Signal Processing, vol. 70, pp. 3249–3263, 2022.
  • [22] F. Lindsten and T. B. Schön, “Backward simulation methods for Monte Carlo statistical inference,” Foundations and Trends® in Machine Learning, vol. 6, no. 1, pp. 1–143, 2013.
  • [23] J. L. Williams, “Marginal multi-Bernoulli filters: RFS derivation of MHT, JIPDA, and association-based MeMBer,” IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 1664–1687, 2015.
  • [24] ——, “An efficient, variational approximation of the best fitting multi-Bernoulli filter,” IEEE Transactions on Signal Processing, vol. 63, no. 1, pp. 258–273, 2015.
  • [25] Y. Xia, L. Svensson, Á. F. García-Fernández, K. Granström, and J. L. Williams, “Backward simulation for sets of trajectories,” in IEEE 23rd International Conference on Information Fusion (FUSION).   IEEE, 2020, pp. 1–8.
  • [26] R. P. Mahler, Statistical Multisource-Multitarget Information Fusion.   Artech House Norwood, MA, 2007.
  • [27] K. Granström, L. Svensson, Y. Xia, J. Williams, and Á. F. García-Fernández, “Poisson multi-Bernoulli mixture trackers: Continuity through random finite sets of trajectories,” in 21st International Conference on Information Fusion.   IEEE, 2018, pp. 1–8.
  • [28] J. L. Williams, “Hybrid Poisson and multi-Bernoulli filters,” in Proceedings of International Conference on Information Fusion.   IEEE, 2012, pp. 1103–1110.
  • [29] D. F. Crouse, “On implementing 2D rectangular assignment algorithms,” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 4, pp. 1679–1696, 2016.
  • [30] A. S. Rahmathullah, Á. F. García-Fernández, and L. Svensson, “Generalized optimal sub-pattern assignment metric,” in Proceedings of International Conference on Information Fusion.   IEEE, 2017, pp. 1–8.
  • [31] Á. F. García-Fernández, A. S. Rahmathullah, and L. Svensson, “A metric on the space of finite sets of trajectories for evaluation of multi-target tracking algorithms,” IEEE Transactions on Signal Processing, vol. 68, pp. 3917–3928, 2020.
  • [32] J. Liu, L. Bai, Y. Xia, T. Huang, B. Zhu, and Q.-L. Han, “GNN-PMB: A simple but effective online 3D multi-object tracker without bells and whistles,” IEEE Transactions on Intelligent Vehicles, vol. 8, no. 2, pp. 1176–1189, 2022.