
Estimation of Scalar Field Distribution in the Fourier Domain

Alex S. Leong and Alexei T. Skvortsov
The authors are with Defence Science and Technology Group, Melbourne, Australia. E-mail: [email protected], [email protected]
Abstract

In this paper we consider the problem of estimating the distribution of a scalar field (e.g., pollutant concentration, moisture, temperature) from noisy measurements collected by unmanned autonomous vehicles such as UAVs. The field is modelled as a sum of Fourier components/modes, where the number of modes retained and estimated determines in a natural way the approximation quality. An algorithm for estimating the modes using an online optimization approach is presented, under the assumption that the noisy measurements are quantized. The algorithm can also estimate time-varying fields through the introduction of a forgetting factor. Simulation studies demonstrate the effectiveness of the proposed approach.

I Introduction

Estimation of a scalar field distribution from a set of point measurements is an important problem that often emerges in domains such as atmospheric monitoring, risk assessment, and hazard mitigation. Examples include the concentration of pollutants, carbon dioxide emissions, methane sources, radiation, temperature in urban areas, and many others, see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] and the references therein. This approach is often used for indirect inference of scalar fields (pollutant concentration, pressure, temperature, radiation) in inaccessible locations where direct measurements are precluded by geometrical or physical constraints (blocking obstacles, high temperature, or exposure to hazards). The methods of source localisation [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] and mapping [15, 16, 17, 18, 19, 20, 21, 22, 23] employing remote (and noisy) measurements have attracted increasing attention in recent years due to tremendous progress in instrumentation for aerial and remote sensing using unmanned autonomous vehicles such as UAVs. This technological advancement necessitates the development and evaluation of statistical methods and algorithms that can be applied for the timely estimation of the structure (map) of the scalar field in the environment from an ever-increasing set of noisy measurements acquired in a sequential or concurrent manner (e.g., sensing signals from unmanned vehicles operating over a hazardous area, trigger signals from meteorological stations, the intermittent concentration of methane emissions in the atmosphere or on the ocean floor, oil surface concentration due to an anthropogenic spill, etc.). These algorithms may become critical for backtracking and characterization of the main sources of the scalar field in the environment, which is important for remediation effectiveness and retrospective forensic analysis. This was the main motivation for the present study.

In work on estimation of scalar fields, the field is often modelled as a sum of radial basis functions (RBFs) or Gaussian mixture models, see, e.g., [17, 18, 19, 20, 21, 22, 23]. Field estimation then reduces to a problem of estimating the parameters of these models. By contrast, in the current work we assume the field to be an arbitrary 2D function which can be viewed in the Fourier domain using, e.g., the discrete Fourier transform (DFT) or the discrete cosine transform (DCT) [24]. For some intuition behind this approach, suppose we regard the plot of the field as an image. From image processing, it is well known that the most important parts of an image are concentrated in the lowest (spatial) frequency components/modes. Our approach to field estimation is then to estimate the low frequency Fourier components.\footnote{We will use the terms Fourier component and DCT component interchangeably in this paper.} One advantage of this Fourier component approach over the RBF approach is that it offers an arguably more natural way to control the accuracy of the approximation, namely by controlling the number of Fourier modes retained. Furthermore, if one wants to refine the field estimate by estimating more modes, existing estimates of the lower order modes can be reused.

The main contributions of this paper are:

  • Rather than using radial basis function field models, we model the 2D scalar field in the Fourier domain as a sum of Fourier components.

  • A numerical comparison of the approximation capabilities of the Fourier component and RBF field models is carried out.

  • For the quantized measurement model, we present in detail how Fourier component estimation can be carried out using an online optimization approach similar to [22]. We further extend the approach of [22] from binary measurements to multi-level quantized measurements, and from static to time-varying fields.

The organization of the paper is as follows: Section II gives preliminaries on the DCT and motivation for its use in field modelling. Section III presents the system model. Section IV compares our Fourier component field model with the RBF field model in terms of approximation performance. Section V considers in detail the estimation of Fourier components using quantized measurements. Numerical studies are presented in Section VI.

II Preliminaries

Consider a region of interest $\mathcal{S}=[X_{\min},X_{\max}]\times[Y_{\min},Y_{\max}]$. Discretize $[X_{\min},X_{\max}]$ into $N_x$ points and $[Y_{\min},Y_{\max}]$ into $N_y$ points as

\[
\mathcal{X}_d \triangleq \Big\{X_{\min}+\Big(\tfrac{1}{2}+I_x\Big)\Delta_x : I_x\in\{0,\dots,N_x-1\}\Big\}
\]
\[
\mathcal{Y}_d \triangleq \Big\{Y_{\min}+\Big(\tfrac{1}{2}+I_y\Big)\Delta_y : I_y\in\{0,\dots,N_y-1\}\Big\},
\]

where

\[
\Delta_x \triangleq \frac{X_{\max}-X_{\min}}{N_x}, \qquad \Delta_y \triangleq \frac{Y_{\max}-Y_{\min}}{N_y}.
\]

Our aim is to estimate the 2D distribution of a scalar field $\phi(x,y)$, $(x,y)\in\mathcal{S}$, which is assumed to be either static or slowly varying. We define

\[
\phi_d(I_x,I_y) \triangleq \phi\big(X_{\min}+(1/2+I_x)\Delta_x,\; Y_{\min}+(1/2+I_y)\Delta_y\big)
\]

as the field value at the discretized position $\big(X_{\min}+(1/2+I_x)\Delta_x,\, Y_{\min}+(1/2+I_y)\Delta_y\big)\in\mathcal{X}_d\times\mathcal{Y}_d$, where $(I_x,I_y)$ can be regarded as a position index. Recall the (Type-II) discrete cosine transform (DCT), see, e.g., [24, 25]:

\[
C(u,v) = \sum_{I_x=0}^{N_x-1}\sum_{I_y=0}^{N_y-1} \alpha_x(u)\alpha_y(v)\,\phi_d(I_x,I_y)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big), \quad u=0,\dots,N_x-1,\; v=0,\dots,N_y-1,
\]

where

\[
\alpha_x(u) \triangleq \begin{cases}\sqrt{1/N_x}, & u=0\\ \sqrt{2/N_x}, & u\neq 0,\end{cases} \qquad
\alpha_y(v) \triangleq \begin{cases}\sqrt{1/N_y}, & v=0\\ \sqrt{2/N_y}, & v\neq 0.\end{cases}
\]

The inverse DCT is given by:

\[
\phi_d(I_x,I_y) = \sum_{u=0}^{N_x-1}\sum_{v=0}^{N_y-1} \alpha_x(u)\alpha_y(v)\,C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big), \quad I_x=0,\dots,N_x-1,\; I_y=0,\dots,N_y-1. \tag{1}
\]

It will be convenient for our purposes to rewrite (1) as

\[
\phi_d(I_x,I_y) = \sum_{(u,v)\in\mathcal{U}} \alpha_x(u)\alpha_y(v)\,C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big), \quad I_x=0,\dots,N_x-1,\; I_y=0,\dots,N_y-1, \tag{2}
\]

where

\[
\mathcal{U} \triangleq \{(u,v): u\in\{0,\dots,N_x-1\},\, v\in\{0,\dots,N_y-1\}\}.
\]

The most important information about the field distribution is concentrated in the low order modes, i.e., the components $\cos\big(\frac{(2I_x+1)\pi u}{2N_x}\big)\cos\big(\frac{(2I_y+1)\pi v}{2N_y}\big)$ with $u$ and $v$ small, while higher order modes define the fine structure of the field distribution. See Figs. 1 and 3 for examples of how retaining different numbers of modes affects the quality of the approximation to the true field.
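For readers who wish to experiment with this decomposition, the orthonormal Type-II DCT above corresponds to SciPy's `dctn` with `norm="ortho"`. The following is a minimal sketch; the field here is a hypothetical example, not one of the fields used in our simulations:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical smooth field sampled on the discretized grid of Sec. II.
Nx, Ny = 100, 100
X, Y = np.meshgrid(np.arange(Nx) + 0.5, np.arange(Ny) + 0.5, indexing="ij")
phi_d = np.exp(-((X - 30.0)**2 + (Y - 60.0)**2) / 400.0)

# norm="ortho" applies the alpha_x(u), alpha_y(v) scaling defined above,
# so C[u, v] equals the coefficient C(u, v) of the text.
C = dctn(phi_d, type=2, norm="ortho")

# The inverse DCT (1) recovers the field to numerical precision.
assert np.allclose(idctn(C, type=2, norm="ortho"), phi_d)
```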

III System Model

III-A Field Model

Motivated by the above discussion, we propose to approximate (2) by

\[
\phi_d(I_x,I_y) \approx \sum_{(u,v)\in\tilde{\mathcal{U}}} \alpha_x(u)\alpha_y(v)\,C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big) \triangleq \tilde{\phi}_d(I_x,I_y), \quad I_x=0,\dots,N_x-1,\; I_y=0,\dots,N_y-1, \tag{3}
\]

where $\tilde{\mathcal{U}}\subseteq\mathcal{U}$ is the subset of low order modes that we wish to retain.\footnote{In general, we could use in (3) coefficients $\tilde{C}(u,v)$ which are not necessarily equal to $C(u,v)$. One reason for taking the coefficients equal to $C(u,v)$ is given in Lemma 1.}

For example, we could retain the first $\tilde{N}_x\times\tilde{N}_y$ modes, with $\tilde{N}_x\leq N_x$, $\tilde{N}_y\leq N_y$, so that

\[
\tilde{\mathcal{U}} = \{(u,v): u\in\{0,\dots,\tilde{N}_x-1\},\, v\in\{0,\dots,\tilde{N}_y-1\}\}. \tag{4}
\]

The total number of modes retained is thus $\tilde{N}=\tilde{N}_x\tilde{N}_y$.

Another possibility is the following:

\[
\tilde{\mathcal{U}} = \{\tilde{N} \textnormal{ pairs } (u,v) \textnormal{ with the smallest values of } (u+1)^2+(v+1)^2\}, \tag{5}
\]

which tries to retain the $\tilde{N}$ “largest” (in magnitude) modes.\footnote{This is of course an approximation, as exactly determining the $\tilde{N}$ largest modes would require knowledge of the very field that we are trying to estimate.} The motivation for (5) comes from the result that the DCT coefficients $C(u,v)$ decay as $O\big(\frac{1}{(u+1)^2+(v+1)^2}\big)$ as $u,v\rightarrow\infty$ [26]. Thus the larger components will usually have smaller values of $(u+1)^2+(v+1)^2$, leading to the choice (5). In numerical simulations, we have found (5) to give better approximations than (4) (for the same number of retained modes $\tilde{N}$) in many, though not all, cases.
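As an illustration of the truncation (3) with the mode selection rule (5), the following sketch zeroes out all but the $\tilde{N}$ retained coefficients and inverts the result; the example field is again a hypothetical one:

```python
import numpy as np
from scipy.fft import dctn, idctn

Nx = Ny = 100
X, Y = np.meshgrid(np.arange(Nx) + 0.5, np.arange(Ny) + 0.5, indexing="ij")
phi_d = np.exp(-((X - 30.0)**2 + (Y - 60.0)**2) / 400.0)  # hypothetical field
C = dctn(phi_d, type=2, norm="ortho")

# Rule (5): keep the N_tilde modes with smallest (u+1)^2 + (v+1)^2.
N_tilde = 60
u, v = np.meshgrid(np.arange(Nx), np.arange(Ny), indexing="ij")
rank = np.argsort(((u + 1)**2 + (v + 1)**2).ravel())
keep = np.zeros(Nx * Ny, dtype=bool)
keep[rank[:N_tilde]] = True

C_trunc = np.where(keep.reshape(Nx, Ny), C, 0.0)
phi_tilde = idctn(C_trunc, type=2, norm="ortho")  # approximation (3)
print("MSE:", np.mean((phi_d - phi_tilde)**2))    # cf. the MSE (9)
```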

III-B Measurement Model

In this paper we consider the following noisy quantized measurement model for a vehicle at position index $(I_x,I_y)$:

\[
z(I_x,I_y) = q\big(\phi_d(I_x,I_y) + n(I_x,I_y)\big), \tag{6}
\]

where $n(\cdot,\cdot)$ is random noise and $q(\cdot)$ is a quantizer with $L$ levels, say $\{0,1,\dots,L-1\}$. The quantizer can be expressed in the form

\[
q(x) = \begin{cases}
0, & x < \tau_0\\
1, & \tau_0 \leq x < \tau_1\\
\;\vdots & \quad\vdots\\
L-2, & \tau_{L-3} \leq x < \tau_{L-2}\\
L-1, & x \geq \tau_{L-2},
\end{cases} \tag{7}
\]

where the quantizer thresholds $\{\tau_0,\dots,\tau_{L-2}\}$ satisfy $\tau_0\leq\tau_1\leq\dots\leq\tau_{L-2}$. The use of a quantized measurement model is motivated by the fact that many chemical sensors can only provide output in the form of a small number of discrete bars [27, 28].
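Numerically, the quantizer (7) is exactly what `np.digitize` computes; a small sketch:

```python
import numpy as np

def quantize(x, thresholds):
    """L-level quantizer (7): 0 if x < tau_0, l if tau_{l-1} <= x < tau_l,
    and L-1 if x >= tau_{L-2}. `thresholds` must be sorted ascending."""
    return np.digitize(x, thresholds)

# Hypothetical 4-level example with thresholds tau_0, tau_1, tau_2 = 1, 2, 3.
tau = np.array([1.0, 2.0, 3.0])
print(quantize(np.array([0.3, 1.7, 2.2, 5.0]), tau))  # -> [0 1 2 3]
```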

Remark 1.

The special case of (6) corresponding to a 2-level quantizer, or binary measurements, is considered in [21, 22, 23]. It can be expressed as

\[
z(I_x,I_y) = \mathds{1}\big(\phi_d(I_x,I_y) + n(I_x,I_y) > \tau\big), \tag{8}
\]

where $\tau$ is the quantizer threshold, and $\mathds{1}(\cdot)$ is the indicator function that returns 1 if its argument is true and 0 otherwise. Other measurement models considered in the literature include additive noise models [18, 19] and Poisson measurement models [17].

III-C Problem Statement

The problem we wish to consider in this paper is to estimate the coefficients\footnote{When we refer to estimation of components/modes in this paper, we specifically mean estimation of the coefficients $C(u,v)$.}

\[
C(u,v), \quad (u,v)\in\tilde{\mathcal{U}},
\]

of the field $\phi_d(I_x,I_y)$ from quantized measurements $\{z(I_x,I_y)\}$ collected by an unmanned autonomous vehicle under the measurement model (6). The estimation should be done in an online manner, such that the estimates are continually updated as new measurements are collected.

IV Comparison with RBF Field Model

Before considering the problem of estimating the coefficients $C(u,v)$ (which will be studied in Section V), in this section we compare our Fourier component model (3) with the radial basis function model considered in [20, 21, 22, 23] (see also [17, 18, 19] for similar models), in terms of how well each can approximate a field for a given number of modes (for the Fourier component model) or basis functions (for the RBF model).

IV-A Fourier Component Field Model

Define the mean squared error (MSE):

\[
\textnormal{MSE} \triangleq \frac{1}{N_x N_y}\sum_{I_x=0}^{N_x-1}\sum_{I_y=0}^{N_y-1}\Big(\phi_d(I_x,I_y)-\tilde{\phi}_d(I_x,I_y)\Big)^2, \tag{9}
\]

where $\phi_d(I_x,I_y)$ is the (discretized) true field given by (2) and

\[
\tilde{\phi}_d(I_x,I_y) \triangleq \sum_{(u,v)\in\tilde{\mathcal{U}}} \alpha_x(u)\alpha_y(v)\,\tilde{C}(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)
\]

is the approximation of the true field using a subset of modes $\tilde{\mathcal{U}}$ and coefficients $\tilde{C}(u,v)$. The expression for $\tilde{\phi}_d(I_x,I_y)$ is the same as (3), except that the coefficients $\tilde{C}(u,v)$ may differ from $C(u,v)$. However, it turns out that setting $\tilde{C}(u,v)$ equal to $C(u,v)$ minimizes the MSE.

Lemma 1.

Given a subset of modes $\tilde{\mathcal{U}}$, the optimal values of $\tilde{C}(u,v)$ that minimize (9) satisfy

\[
\tilde{C}^*(u,v) = C(u,v), \quad \forall (u,v)\in\tilde{\mathcal{U}}. \tag{10}
\]
Proof.

See the Appendix. ∎

Remark 2.

One can also minimize (9) by treating it as a linear least squares problem [29, 30], which numerically gives the same solution; however, the analytical expression (10) provided by Lemma 1 is much more explicit.

IV-B RBF Field Model

The following RBF field model is used in [20, 21, 22, 23]:

\[
\phi(\mathbf{x}) \approx \sum_{j=1}^{J}\beta_j K_j(\mathbf{x}), \tag{11}
\]

where $\mathbf{x}\triangleq(x,y)$ and $K_j(\mathbf{x}),\, j=1,\dots,J$, are radial basis functions. In particular, we consider the choice

\[
K_j(\mathbf{x}) = \exp\Big(-\frac{\|\mathbf{c}_j-\mathbf{x}\|^2}{\sigma_j^2}\Big), \quad j=1,\dots,J, \tag{12}
\]

which results in a Gaussian mixture model [17]. For a given number of basis functions $J$, we assume that the $\mathbf{c}_j$'s and $\sigma_j$'s are fixed in advance,\footnote{The case where the $\mathbf{c}_j$'s and $\sigma_j$'s are also estimated has been considered, but was found to suffer from identifiability issues and to sometimes give very unreliable results [21].} while the $\beta_j$'s are free parameters. Algorithms for estimating the $\beta_j$'s are studied in, e.g., [20, 21, 22, 23]. Here we instead consider the problem of finding the optimal $\beta_j$'s that minimize the mean squared error, to see how well the RBF model can approximate a field for a given set of basis functions. Define

\[
\textnormal{MSE}_{\textnormal{RBF}} \triangleq \frac{1}{|\mathcal{S}_d|}\sum_{\mathbf{x}\in\mathcal{S}_d}\Big(\phi(\mathbf{x})-\sum_{j=1}^{J}\beta_j K_j(\mathbf{x})\Big)^2, \tag{13}
\]

where $\phi(\mathbf{x})$ is the true field value at position $\mathbf{x}$, $\mathcal{S}_d$ is a discretized set of points in the search region $\mathcal{S}$, and $|\mathcal{S}_d|$ is the cardinality of $\mathcal{S}_d$.

Lemma 2.

Given a set of radial basis functions $\{K_1(\cdot),\dots,K_J(\cdot)\}$ and an ordering $\{\mathbf{x}_1,\dots,\mathbf{x}_{|\mathcal{S}_d|}\}$ of the elements of $\mathcal{S}_d$, the optimal values of $(\beta_1,\dots,\beta_J)$ that minimize (13) satisfy

\[
\bm{\beta}^* = \big(\mathcal{K}^T\mathcal{K}\big)^{-1}\mathcal{K}^T\bm{\phi},
\]

where $\bm{\beta}=[\beta_1 \;\dots\; \beta_J]^T$, $\bm{\phi}=[\phi(\mathbf{x}_1),\dots,\phi(\mathbf{x}_{|\mathcal{S}_d|})]^T$, and

\[
\mathcal{K} = \begin{bmatrix}
K_1(\mathbf{x}_1) & \dots & K_J(\mathbf{x}_1)\\
\vdots & \ddots & \vdots\\
K_1(\mathbf{x}_{|\mathcal{S}_d|}) & \dots & K_J(\mathbf{x}_{|\mathcal{S}_d|})
\end{bmatrix}.
\]
Proof.

This is a standard application of the optimal solution to a linear least squares / linear regression problem [29, 30]. ∎
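As a concrete illustration of Lemma 2, the weights $\bm{\beta}^*$ can be obtained with any linear least squares routine. A minimal sketch follows, with hypothetical sample points and basis function placements (not the ones used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(500, 2))               # points of S_d
phi = np.sin(pts[:, 0] / 20.0) * np.cos(pts[:, 1] / 15.0)  # hypothetical field

# Gaussian RBFs (12) with centers on a 5 x 5 grid and a common width.
cx, cy = np.meshgrid(np.linspace(10, 90, 5), np.linspace(10, 90, 5))
centers = np.stack([cx.ravel(), cy.ravel()], axis=1)
sigma = 20.0

# Design matrix with entries K[i, j] = K_j(x_i), as in Lemma 2.
K = np.exp(-np.sum((pts[:, None, :] - centers[None, :, :])**2, axis=2)
           / sigma**2)

# beta* = (K^T K)^{-1} K^T phi, computed stably via least squares.
beta_opt, *_ = np.linalg.lstsq(K, phi, rcond=None)
```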

IV-C Numerical Experiments

Figure 1: True field and approximations obtained by retaining different numbers of modes
Figure 2: True field and approximations obtained by using different numbers of basis functions
Figure 3: True field and approximations obtained by retaining different numbers of modes
Figure 4: True field and approximations obtained by using different numbers of basis functions

In Figs. 1-4 we show two example fields, and the field approximations obtained when various numbers of modes (for the Fourier component model) or radial basis functions (for the RBF model) are used. The discretization of the true fields is set as $N_x=N_y=100$ (so that there are $100^2=10000$ modes in total). For the RBF model we set $\mathcal{S}_d=\mathcal{X}_d\times\mathcal{Y}_d$, so that the discretized sets of points are the same in the MSE calculations. For the Fourier component model, we use the optimal choice of $\tilde{C}^*(u,v)$ given in Lemma 1 (which corresponds to the model (3)), and we choose $\tilde{\mathcal{U}}$ as in (5) to retain the $\tilde{N}$ “largest” modes. For the RBF model, we use $J=J_x\times J_y$ radial basis functions with the $\mathbf{c}_j$'s in (12) placed uniformly on a grid at locations $\mathcal{X}_{\textnormal{RBF}}\times\mathcal{Y}_{\textnormal{RBF}}$, where

\[
\mathcal{X}_{\textnormal{RBF}} \triangleq \Big\{X_{\min}+\Big(\tfrac{1}{2}+i_x\Big)\delta_x : i_x\in\{0,\dots,J_x-1\}\Big\}
\]
\[
\mathcal{Y}_{\textnormal{RBF}} \triangleq \Big\{Y_{\min}+\Big(\tfrac{1}{2}+i_y\Big)\delta_y : i_y\in\{0,\dots,J_y-1\}\Big\}
\]

and

\[
\delta_x \triangleq \frac{X_{\max}-X_{\min}}{J_x}, \qquad \delta_y \triangleq \frac{Y_{\max}-Y_{\min}}{J_y}.
\]

The $\sigma_j$'s in (12) are set to $\sigma_j=\max(\delta_x,\delta_y),\,\forall j$. The $\beta_j$'s used in (11) are the optimal values computed according to Lemma 2.

In the figures we show two performance measures: 1) the MSE, and 2) the structural similarity (SSIM) index, which originated in [31] and has been widely adopted in the image processing community. The structural similarity index is a measure of the similarity between two images. In our case, we can regard $\Phi=\{\phi_d(I_x,I_y): I_x=0,\dots,N_x-1,\, I_y=0,\dots,N_y-1\}$ and $\tilde{\Phi}=\{\tilde{\phi}_d(I_x,I_y): I_x=0,\dots,N_x-1,\, I_y=0,\dots,N_y-1\}$ as the image representations of the true and approximated fields respectively, and compute the SSIM between these two images. The SSIM gives a scalar value between 0 and 1, with $\textnormal{SSIM}=1$ if the two images are identical. We refer to [31, 32] for the specific equations used to compute the SSIM.
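The SSIM can be computed with an off-the-shelf implementation; e.g., a minimal sketch using scikit-image, where the two arrays are placeholders for $\Phi$ and $\tilde{\Phi}$:

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
Phi = rng.random((100, 100))                               # placeholder "true" field
Phi_tilde = Phi + 0.05 * rng.standard_normal((100, 100))   # placeholder approximation

# data_range must be given explicitly for floating-point images.
ssim = structural_similarity(Phi, Phi_tilde,
                             data_range=Phi.max() - Phi.min())
print(ssim)
```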

We see from Figs. 1-4 that as more modes (for the Fourier component model) or basis functions (for the RBF model) are used, the approximation to the true field improves. When using a smaller number of modes / basis functions, the RBF model appears to give better approximations than the Fourier component model, while for larger numbers of modes / basis functions the two approaches perform similarly. We also observe that for these two examples, using a relatively small number of modes (compared to the total of 10000) or basis functions still results in a qualitatively reasonable approximation to the true field.

While the Fourier component model does not seem to offer a significant advantage in terms of approximation quality, there are other reasons why one may consider its use. One advantage of the Fourier model is that it provides a natural way to control the number of model parameters (the coefficients $C(u,v)$) to be estimated: we simply choose however many modes we wish to retain, whereas with the RBF model one must also choose the locations $\mathbf{c}_j$ of the basis functions and the values of the $\sigma_j$'s. Additionally, if we want to refine the field model with finer structure by including more parameters, in the Fourier component model we can reuse (and further improve) any previous estimates of the lower order modes (see Section V-C), since these remain the same in a model with more modes, whereas in the RBF model one would likely need to recompute the estimates of all parameters when more basis functions are used.

V Estimation of Fourier Components

We now return to the problem of estimating the coefficients $C(u,v),\,(u,v)\in\tilde{\mathcal{U}}$, stated in Section III-C. Given a set $\tilde{\mathcal{U}}$ of modes to be retained, of cardinality $\tilde{N}$, define an ordering on $\tilde{\mathcal{U}}$ indexed by $j\in\{0,\dots,\tilde{N}-1\}$. For instance, the elements of $\tilde{\mathcal{U}}$ could be sorted in lexicographic order. Denote the $j$-th element under this ordering by $(u_j,v_j)$, and define

\[
C_j \triangleq C(u_j,v_j).
\]

Then we can express

\[
\tilde{\phi}_d(I_x,I_y) = \sum_{(u,v)\in\tilde{\mathcal{U}}} \alpha_x(u)\alpha_y(v)\,C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)
\]

in the alternative form

\[
\tilde{\phi}_d(I_x,I_y) = \sum_{j=0}^{\tilde{N}-1} \alpha_x(u_j)\alpha_y(v_j)\,C_j\cos\Big(\frac{(2I_x+1)\pi u_j}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v_j}{2N_y}\Big), \tag{14}
\]

which is a linear function of $(C_0,\dots,C_{\tilde{N}-1})$.

Remark 3.

The DCT coefficients we are trying to estimate can be of substantially different orders of magnitude, with the higher order components being much smaller in magnitude than the “DC” component corresponding to $u=v=0$, due to the result that the DCT coefficients decay as $O\big(\frac{1}{(u+1)^2+(v+1)^2}\big)$ [26]. In order to estimate parameters with such large differences in magnitude, it is desirable to appropriately scale the parameters to be estimated, see (15) below.

Remark 4.

Comparing (14) with the RBF field model (11), we see that both are linear functions of the parameters to be estimated. Thus the algorithms developed in, e.g., [17, 18, 19, 20, 21, 22, 23] for estimating fields can in principle also be adapted to the field model (14), under their various assumed measurement models.

V-A Estimation of Fourier Components Using Quantized Measurements

In this subsection we describe an approach to estimating the parameters $C_j,\,j=0,\dots,\tilde{N}-1$, which assumes the quantized measurement model (6)-(7), with the parameters estimated recursively. The algorithm uses an online optimization approach similar to [22]; however, in this paper we generalize [22] from binary measurements to multi-level quantized measurements, and also extend the approach to handle time-varying fields.

For the measurement model (6)-(7), $n(\cdot,\cdot)$ is taken to be zero mean noise (not necessarily Gaussian). Recalling the observation in Remark 3, we consider the following scaling of the DCT coefficients:

\[
\beta_j \triangleq \big((u_j+1)^2+(v_j+1)^2\big)C_j, \tag{15}
\]

and define

\[
\bm{\beta} \triangleq (\beta_0,\dots,\beta_{\tilde{N}-1})
\]

as the vector of parameters that are to be estimated.

We first introduce some notation. Let $z_k$ denote the measurement, and $(I_{x,k},I_{y,k})$ the position index, at time/iteration $k$. For notational compactness we also denote

\[
\bm{I}_{\mathbf{x},k} \triangleq (I_{x,k},I_{y,k}) \tag{16}
\]

and

\[
\mathbf{K}(\bm{I}_{\mathbf{x},k}) \triangleq \big[K_0(\bm{I}_{\mathbf{x},k}) \;\; K_1(\bm{I}_{\mathbf{x},k}) \;\; \dots \;\; K_{\tilde{N}-1}(\bm{I}_{\mathbf{x},k})\big]^T, \tag{17}
\]

where

\[
K_j(\bm{I}_{\mathbf{x},k}) \triangleq \frac{\alpha_x(u_j)\alpha_y(v_j)}{(u_j+1)^2+(v_j+1)^2}\cos\Big(\frac{(2I_{x,k}+1)\pi u_j}{2N_x}\Big)\cos\Big(\frac{(2I_{y,k}+1)\pi v_j}{2N_y}\Big). \tag{18}
\]

Denote

\[
z_{1:k} \triangleq \{z_1,\dots,z_k\} \tag{19}
\]

as the set of measurements collected up to time $k$, with corresponding position indices

\[
\bm{I}_{\mathbf{x},1:k} \triangleq \{(I_{x,1},I_{y,1}),\dots,(I_{x,k},I_{y,k})\}. \tag{20}
\]

The idea is to recursively estimate $\bm{\beta}$ by trying to minimize a cost function

\[
J_k(\bm{\beta};\bm{I}_{\mathbf{x},1:k},z_{1:k}) = \sum_{t=0}^{k} g_t(\bm{\beta};\bm{I}_{\mathbf{x},t},z_t) \tag{21}
\]

using online optimization techniques [33]. For binary measurements (8), the following per-stage cost function from [22] can be used:

\[
g_t(\bm{\beta};\bm{I}_{\mathbf{x}},z) = \begin{cases}
\log\big(1+\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau))\big), & z=0\\
\log\big(1+\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau))\big), & z=1,
\end{cases} \tag{22}
\]

where $\eta>0$ is a parameter in the logistic function $\ell(x)\triangleq 1/(1+\exp(-\eta x))$, with larger values of $\eta$ more closely approximating the indicator function $\mathds{1}(x>0)$. The cost function (22) is similar to cost functions used in binary logistic regression problems [29, p. 516]. In the current work, we wish to define a cost suitable for multi-level quantized measurements. Note that there are cost functions used in multinomial logistic regression problems [30]; however, they are unsuitable for our problem as they usually involve a separate set of parameters for each possible output $z$, whereas here we have just a single set of parameters $\bm{\beta}$.

To motivate our cost function, let us look more closely at the binary measurement cost function (22). When the measurement $z_t$ at time $t$ and position index $\bm{I}_{\mathbf{x},t}$ equals 0, the cost $g_t(\bm{\beta};\bm{I}_{\mathbf{x},t},z_t)$ is small if $\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x},t})$ is less than the quantizer threshold $\tau$, and large otherwise. Similarly, when $z_t=1$, $g_t(\bm{\beta};\bm{I}_{\mathbf{x},t},z_t)$ is small if $\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x},t})$ is greater than $\tau$, and large otherwise. For multi-level quantized measurements with $L$ levels given by (7), we would like a cost function such that 1) when $z_t=0$, $g_t(\bm{\beta};\bm{I}_{\mathbf{x},t},z_t)$ is small for $\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x},t})<\tau_0$ and large otherwise; 2) when $z_t=l,\,l\in\{1,\dots,L-2\}$, $g_t(\bm{\beta};\bm{I}_{\mathbf{x},t},z_t)$ is small for $\tau_{l-1}\leq\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x},t})<\tau_l$ and large otherwise; and 3) when $z_t=L-1$, $g_t(\bm{\beta};\bm{I}_{\mathbf{x},t},z_t)$ is small for $\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x},t})>\tau_{L-2}$ and large otherwise. In this paper we choose the following per-stage cost function, which can easily be checked to satisfy these three requirements:

\[
g_t(\bm{\beta};\bm{I}_{\mathbf{x}},z) \triangleq \begin{cases}
\log\big(1+\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_0))\big), & z=0\\
\log\big(1+\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_{z-1}))\big) + \log\big(1+\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_z))\big), & z\in\{1,\dots,L-2\}\\
\log\big(1+\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_{L-2}))\big), & z=L-1.
\end{cases} \tag{23}
\]

We remark that (23) reduces to (22) when the measurements are binary.

Now that the per-stage cost (23) has been defined, we present the online estimation algorithm. First, the gradient of $g_t(\cdot;\cdot,\cdot)$ can be derived as

\[
\nabla g_t(\bm{\beta};\bm{I}_{\mathbf{x}},z) = \begin{cases}
\dfrac{\eta}{1+\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_0))}\,\mathbf{K}(\bm{I}_{\mathbf{x}}), & z=0\\[2mm]
\Big(\dfrac{-\eta}{1+\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_{z-1}))} + \dfrac{\eta}{1+\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_z))}\Big)\mathbf{K}(\bm{I}_{\mathbf{x}}), & z\in\{1,\dots,L-2\}\\[2mm]
\dfrac{-\eta}{1+\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_{L-2}))}\,\mathbf{K}(\bm{I}_{\mathbf{x}}), & z=L-1,
\end{cases} \tag{24}
\]

while the Hessian of $g_t(\cdot;\cdot,\cdot)$ can be derived as

\[
\nabla^2 g_t(\bm{\beta};\bm{I}_{\mathbf{x}},z) = \begin{cases}
\dfrac{\eta^2\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_0))}{\big(1+\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_0))\big)^2}\,\mathbf{K}(\bm{I}_{\mathbf{x}})\mathbf{K}(\bm{I}_{\mathbf{x}})^T, & z=0\\[2mm]
\Big(\dfrac{\eta^2\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_{z-1}))}{\big(1+\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_{z-1}))\big)^2} + \dfrac{\eta^2\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_z))}{\big(1+\exp(-\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_z))\big)^2}\Big)\mathbf{K}(\bm{I}_{\mathbf{x}})\mathbf{K}(\bm{I}_{\mathbf{x}})^T, & z\in\{1,\dots,L-2\}\\[2mm]
\dfrac{\eta^2\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_{L-2}))}{\big(1+\exp(\eta(\bm{\beta}^T\mathbf{K}(\bm{I}_{\mathbf{x}})-\tau_{L-2}))\big)^2}\,\mathbf{K}(\bm{I}_{\mathbf{x}})\mathbf{K}(\bm{I}_{\mathbf{x}})^T, & z=L-1.
\end{cases} \tag{25}
\]
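For reference, a minimal sketch of how the per-stage gradient (24) and Hessian (25) might be evaluated numerically; `beta`, `K_vec`, `z`, `tau` and `eta` correspond to $\bm{\beta}$, $\mathbf{K}(\bm{I}_{\mathbf{x}})$, $z$, the thresholds and $\eta$, and the helper names are our own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stage_grad_hess(beta, K_vec, z, tau, eta):
    """Gradient (24) and Hessian (25) of the per-stage cost (23).
    Each threshold adjacent to level z contributes one logistic term;
    the boundary levels z = 0 and z = L-1 have only one such term."""
    s = float(beta @ K_vec)
    L = len(tau) + 1
    g = h = 0.0  # scalar first and second derivatives w.r.t. s
    if z >= 1:   # term involving tau_{z-1}
        p = sigmoid(eta * (s - tau[z - 1]))
        g += -eta * (1.0 - p)
        h += eta**2 * p * (1.0 - p)
    if z <= L - 2:  # term involving tau_z
        p = sigmoid(eta * (s - tau[z]))
        g += eta * p
        h += eta**2 * p * (1.0 - p)
    return g * K_vec, h * np.outer(K_vec, K_vec)
```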

An approximate online Newton method [22] for estimating the parameters $\bm{\beta}$ is now given by:

\[
\hat{\bm{\beta}}_{k+1} = \hat{\bm{\beta}}_k - \big(H_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k})\big)^{-1} G_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},k},z_k), \tag{35}
\]

where

\[
\begin{split}
G_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},k},z_k) &= \nabla g_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},k},z_k)\\
H_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k}) &= H_{k-1}(\hat{\bm{\beta}}_{k-1};\bm{I}_{\mathbf{x},1:k-1},z_{1:k-1}) + \nabla^2 g_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},k},z_k)\\
H_0(\hat{\bm{\beta}}_0) &= \varsigma I.
\end{split} \tag{36}
\]

The terms $G_k$ and $H_k$ represent approximate gradients and Hessians, respectively, of the cost function (21). The initialization $H_0(\hat{\bm{\beta}}_0)=\varsigma I$ is a Levenberg-Marquardt type modification [34] to ensure that the matrices $\{H_k\}$ are always non-singular.\footnote{In [22] this is equivalently expressed as a full rank initialization of $\big(H_0(\hat{\bm{\beta}}_0)\big)^{-1}$.}

In the case where the field (and hence the parameter vector $\bm{\beta}$) is time-varying, the algorithm (35)-(36) may not be able to respond quickly to changes in $\bm{\beta}$, because all past Hessians (including Hessians from old fields) are used in the computation of $H_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k})$ in (36). To overcome this problem, we introduce a forgetting factor [35] into the algorithm, where the forgetting factor $\delta$ satisfies $0<\delta\leq 1$ and is typically chosen close to one. The final estimation procedure is summarized as Algorithm 1. Compared to (36), the Levenberg-Marquardt modification in Algorithm 1 is applied at every time step by adding $\varsigma I$ to $\tilde{H}_k$, as we found that applying it only once at the beginning can lead to algorithm instability due to exponential decay of the initial conditions when a forgetting factor is used. We also remark that Algorithm 1 reduces to (35)-(36) when the forgetting factor $\delta=1$.

Algorithm 1 Estimation of Fourier components using online optimization approach
1: Algorithm Parameters: Logistic function parameter $\eta>0$, Levenberg-Marquardt parameter $\varsigma>0$, forgetting factor $\delta\in(0,1]$
2: Inputs: Initial position index $\bm{I}_{\mathbf{x},0}$
3: Outputs: Parameter estimates $\{\hat{\bm{\beta}}_k\}$
4: Initialize $\tilde{H}_0(\hat{\bm{\beta}}_0)=\mathbf{0}$
5: for $k=0,1,2,\dots$ do
6:     Update estimates
\[
\begin{split}
\hat{\bm{\beta}}_{k+1} &= \hat{\bm{\beta}}_k - \big(H_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k})\big)^{-1} G_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},k},z_k)\\
G_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},k},z_k) &= \nabla g_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},k},z_k)\\
\tilde{H}_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k}) &= \delta\tilde{H}_{k-1}(\hat{\bm{\beta}}_{k-1};\bm{I}_{\mathbf{x},1:k-1},z_{1:k-1}) + \nabla^2 g_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},k},z_k)\\
H_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k}) &= \tilde{H}_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k}) + \varsigma I,
\end{split}
\]
        where $\nabla g_k(\cdot;\cdot,\cdot)$ and $\nabla^2 g_k(\cdot;\cdot,\cdot)$ are computed using (24) and (25)
7:     Determine $\bm{I}_{\mathbf{x},k+1}=\texttt{ActiveSensing}(\bm{I}_{\mathbf{x},k},\hat{\bm{\beta}}_{k+1})$ using Algorithm 2
8: end for
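A single iteration of Algorithm 1 can be written compactly as below; a minimal sketch, reusing `stage_grad_hess` from the earlier sketch in Section V-A and solving a linear system rather than forming the inverse explicitly:

```python
import numpy as np

def algorithm1_step(beta, H_tilde, K_vec, z, tau, eta, delta, varsigma):
    """One update of Algorithm 1: discounted Hessian accumulation with
    forgetting factor delta, Levenberg-Marquardt term varsigma * I, and
    an approximate Newton step. Returns the new estimate and H_tilde."""
    G, H2 = stage_grad_hess(beta, K_vec, z, tau, eta)  # (24) and (25)
    H_tilde = delta * H_tilde + H2
    H = H_tilde + varsigma * np.eye(len(beta))
    beta_new = beta - np.linalg.solve(H, G)
    return beta_new, H_tilde
```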

V-B Measurement Location Selection Using Active Sensing

For choosing the positions at which the unmanned vehicle should take measurements, an “active sensing” approach [36, 19, 37] can be used, which aims to cleverly choose the next position given the information collected so far, in order to more quickly obtain a good estimate of the field.

In the case of binary measurements, a method for choosing the next measurement location is proposed in [22], which tries to maximize the minimum eigenvalue of an “expected Hessian” term $H^+(\bm{I}_{\mathbf{x}'})$ over candidate future position indices $\bm{I}_{\mathbf{x}'}$. Formally, the problem is:

\[
\bm{I}_{\mathbf{x},k+1} = \arg\max_{\bm{I}_{\mathbf{x}'}\in\mathcal{I}_{k+1}} \lambda_{\min}(H^+(\bm{I}_{\mathbf{x}'})),
\]

where $\lambda_{\min}(H^+(\bm{I}_{\mathbf{x}'}))$ is the minimum eigenvalue of $H^+(\bm{I}_{\mathbf{x}'})$, $\mathcal{I}_{k+1}$ is the set of possible future position indices,\footnote{The set $\mathcal{I}_{k+1}$ may, e.g., capture the set of reachable positions from the current state of the mobile sensor platform.} and

\[
\begin{split}
H^+(\bm{I}_{\mathbf{x}'}) &\triangleq H_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k}) + \frac{\eta^2\exp(\eta(\hat{\bm{\beta}}_{k+1}^T\mathbf{K}(\bm{I}_{\mathbf{x}'})-\tau))}{\big(1+\exp(\eta(\hat{\bm{\beta}}_{k+1}^T\mathbf{K}(\bm{I}_{\mathbf{x}'})-\tau))\big)^2}\mathbf{K}(\bm{I}_{\mathbf{x}'})\mathbf{K}(\bm{I}_{\mathbf{x}'})^T\,\mathbb{P}(z'=0)\\
&\quad + \frac{\eta^2\exp(\eta(\hat{\bm{\beta}}_{k+1}^T\mathbf{K}(\bm{I}_{\mathbf{x}'})-\tau))}{\big(1+\exp(\eta(\hat{\bm{\beta}}_{k+1}^T\mathbf{K}(\bm{I}_{\mathbf{x}'})-\tau))\big)^2}\mathbf{K}(\bm{I}_{\mathbf{x}'})\mathbf{K}(\bm{I}_{\mathbf{x}'})^T\,\mathbb{P}(z'=1)\\
&= H_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k}) + \frac{\eta^2\exp(\eta(\hat{\bm{\beta}}_{k+1}^T\mathbf{K}(\bm{I}_{\mathbf{x}'})-\tau))}{\big(1+\exp(\eta(\hat{\bm{\beta}}_{k+1}^T\mathbf{K}(\bm{I}_{\mathbf{x}'})-\tau))\big)^2}\mathbf{K}(\bm{I}_{\mathbf{x}'})\mathbf{K}(\bm{I}_{\mathbf{x}'})^T.
\end{split} \tag{37}
\]

The last line of (37) holds since $\mathbb{P}(z'=0)+\mathbb{P}(z'=1)=1$, irrespective of the distribution of the noise $n(\cdot,\cdot)$.

If we attempt to generalize (37) to multi-level measurements, we find that there are terms $\mathbb{P}(z'=0),\mathbb{P}(z'=1),\dots,\mathbb{P}(z'=L-1)$ which cannot all be cancelled, and we would need to specify a noise distribution in order to compute them. Since exact knowledge of the noise distribution is usually unavailable in practice, we instead consider a slightly different objective to optimize, namely a “predicted Hessian”

\[
\hat{H}(\bm{I}_{\mathbf{x}'}) \triangleq H_k(\hat{\bm{\beta}}_k;\bm{I}_{\mathbf{x},1:k},z_{1:k}) + \nabla^2 g_{k+1}(\hat{\bm{\beta}}_{k+1};\bm{I}_{\mathbf{x}'},\hat{z}'), \tag{38}
\]

where $\hat{z}' \triangleq q\big(\hat{\bm{\beta}}_{k+1}^T\mathbf{K}(\bm{I}_{\mathbf{x}'})\big)$ is the predicted future measurement, with the quantizer $q(\cdot)$ given by (7). Note that (38) reduces to (37) in the case of binary measurements. We then maximize the minimum eigenvalue of the predicted Hessian to determine the next measurement location target:

\[
\bm{I}_{\mathbf{x}}^{\textnormal{target}} = \arg\max_{\bm{I}_{\mathbf{x}'}\in\mathcal{I}_{k+1}} \lambda_{\min}(\hat{H}(\bm{I}_{\mathbf{x}'})). \tag{39}
\]

For the set of candidate future position indices $\mathcal{I}_{k+1}$, one possible choice is positions distributed uniformly on a grid within the search region $\mathcal{S}$. Once a new location target $\bm{I}_{\mathbf{x}}^{\textnormal{target}}$ has been determined, the vehicle heads in that direction. The vehicle collects measurements and updates $\hat{\bm{\beta}}$ along the way, taking a new measurement after every distance $\rho_0$ travelled, until $\bm{I}_{\mathbf{x}}^{\textnormal{target}}$ is reached, at which time a new location target is determined. The procedure is summarized in Algorithm 2, where $\bm{I}_{\textnormal{closest}}(\mathbf{x})$ denotes the position index $(I_x,I_y)$ closest to $\mathbf{x}\in\mathcal{S}$. The condition in line 7 of Algorithm 2 means the location target has been reached, so that a new location target is determined and a location index $\bm{I}_{\mathbf{x},k+1}$ in the direction of the new target is returned. Some random exploration is also included in the algorithm, such that the new location target is random with probability $\varepsilon$, similar to $\varepsilon$-greedy algorithms used in reinforcement learning [38]. The condition in line 10 means that the vehicle is within $\rho_0$ of the target, which will be reached at the next time step, while the condition in line 12 means the vehicle continues heading towards the target, collecting measurements along the way.

Algorithm 2 Active sensing algorithm for online optimization approach: $\bm{I}_{\mathbf{x},k+1}=\texttt{ActiveSensing}(\bm{I}_{\mathbf{x},k},\hat{\bm{\beta}}_{k+1})$
1: Algorithm Parameters: Distance $\rho_0\geq 0$, candidate position indices $\mathcal{I}_{k+1}$, search region $\mathcal{S}$, exploration probability $\varepsilon$
2: Inputs: $\bm{I}_{\mathbf{x},k}$, $\hat{\bm{\beta}}_{k+1}$
3: Output: Next position index $\bm{I}_{\mathbf{x},k+1}$
4: if $k=0$ then
5:     Initialize $\bm{I}_{\mathbf{x}}^{\textnormal{target}}=\bm{I}_{\mathbf{x},0}$
6: end if
7: if $\bm{I}_{\mathbf{x},k}=\bm{I}_{\mathbf{x}}^{\textnormal{target}}$ then
8:     With probability $\varepsilon$, set new $\bm{I}_{\mathbf{x}}^{\textnormal{target}}$ to a random location index in $\{0,\dots,N_x-1\}\times\{0,\dots,N_y-1\}$; otherwise compute new $\bm{I}_{\mathbf{x}}^{\textnormal{target}}=\arg\max_{\bm{I}_{\mathbf{x}'}\in\mathcal{I}_{k+1}}\lambda_{\min}(\hat{H}(\bm{I}_{\mathbf{x}'}))$, where $\hat{H}(\bm{I}_{\mathbf{x}'})$ is given by (38)
9:     Set $\mathbf{x}_{k+1}=\mathbf{x}_k+\rho_0(\mathbf{x}^{\textnormal{target}}-\mathbf{x}_k)/\|\mathbf{x}^{\textnormal{target}}-\mathbf{x}_k\|$ and return $\bm{I}_{\mathbf{x},k+1}=\bm{I}_{\textnormal{closest}}(\mathbf{x}_{k+1})$
10: else if $\|\mathbf{x}_k-\mathbf{x}^{\textnormal{target}}\|<\rho_0$ then
11:     Set $\mathbf{x}_{k+1}=\mathbf{x}^{\textnormal{target}}$ and return $\bm{I}_{\mathbf{x},k+1}=\bm{I}_{\mathbf{x}}^{\textnormal{target}}$
12: else
13:     Set $\mathbf{x}_{k+1}=\mathbf{x}_k+\rho_0(\mathbf{x}^{\textnormal{target}}-\mathbf{x}_k)/\|\mathbf{x}^{\textnormal{target}}-\mathbf{x}_k\|$ and return $\bm{I}_{\mathbf{x},k+1}=\bm{I}_{\textnormal{closest}}(\mathbf{x}_{k+1})$
14: end if
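The target selection step (39) can be sketched as an exhaustive search over the candidate set, again reusing the hypothetical `stage_grad_hess` helper from Section V-A; `candidates` holds the regressor vectors $\mathbf{K}(\bm{I}_{\mathbf{x}'})$ of the candidate positions:

```python
import numpy as np

def next_target(H_k, beta_next, candidates, tau, eta):
    """Return the index of the candidate maximizing lambda_min of the
    predicted Hessian (38), as in (39). Minimal sketch."""
    best_idx, best_val = None, -np.inf
    for i, K_vec in enumerate(candidates):
        z_hat = int(np.digitize(beta_next @ K_vec, tau))   # predicted output
        _, H2 = stage_grad_hess(beta_next, K_vec, z_hat, tau, eta)
        lam_min = np.linalg.eigvalsh(H_k + H2)[0]          # eigenvalues ascend
        if lam_min > best_val:
            best_idx, best_val = i, lam_min
    return best_idx
```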

V-C Refinement of Field Model

At the end of Section IV we mentioned that we can refine our field model as we go along by including more higher order modes, while reusing previous estimates of the lower order modes. One reason for doing so is if we realize that the original set of modes chosen is not sufficient to provide an adequate estimate of the field, so that more modes need to be added. In this subsection we briefly describe how this refinement can be done.

Suppose that originally the modes in $\tilde{\mathcal{U}}$, of cardinality $\tilde{N}$, are being estimated, with an ordering on $\tilde{\mathcal{U}}$ indexed by $j\in\{0,\dots,\tilde{N}-1\}$. After $k$ iterations, suppose we wish to enlarge the set of estimated modes to a set $\tilde{\mathcal{U}}'\supseteq\tilde{\mathcal{U}}$, of cardinality $\tilde{N}'$, and define an ordering on $\tilde{\mathcal{U}}'$ indexed by $j'\in\{0,\dots,\tilde{N}'-1\}$.

For the indices $j\in\{0,\dots,\tilde{N}-1\}$, define a mapping
\[
m(\cdot): \{0,\dots,\tilde{N}-1\} \rightarrow \{0,\dots,\tilde{N}'-1\}
\]
such that $m(j)=j'$ gives the index $j'\in\{0,\dots,\tilde{N}'-1\}$ of the same mode under the new ordering. Note that in general $m(\cdot)$ may not be surjective.

Let $\hat{\bm{\beta}}'$ denote the estimate (of dimension $\tilde{N}'$), and $\tilde{H}'$ the matrix (of dimension $\tilde{N}'\times\tilde{N}'$) used in the computation of the approximate Hessians, for this new set of modes. Then we can re-initialize the estimates $\hat{\bm{\beta}}'$ by setting
\[
\hat{\beta}'_{m(j),k+1} = \hat{\beta}_{j,k+1}, \quad \forall j\in\{0,\dots,\tilde{N}-1\},
\]
which copies across the estimates of the existing components of $\hat{\bm{\beta}}$ (of dimension $\tilde{N}$), with the other (new) components initialized appropriately. The matrix $\tilde{H}'_k$ should also have the components corresponding to the existing set of modes copied across, by setting
\[
\tilde{H}'_{m(i)m(j),k} = \tilde{H}_{ij,k}, \quad \forall i,j\in\{0,\dots,\tilde{N}-1\},
\]
with the other components of $\tilde{H}'_k$ set to zero. After this re-initialization, Algorithm 1 proceeds as before.
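In code, the re-initialization amounts to scattering the old estimate and Hessian entries into larger zero-initialized arrays; a minimal sketch, where `mapping[j]` plays the role of $m(j)$ and the helper name is our own:

```python
import numpy as np

def refine_model(beta_old, H_tilde_old, mapping, N_new):
    """Copy estimates of the existing modes into the enlarged model of
    Sec. V-C. All entries for the newly added modes start at zero
    (one simple choice of initialization for the new components)."""
    mapping = np.asarray(mapping)
    beta_new = np.zeros(N_new)
    H_tilde_new = np.zeros((N_new, N_new))
    beta_new[mapping] = beta_old
    H_tilde_new[np.ix_(mapping, mapping)] = H_tilde_old
    return beta_new, H_tilde_new
```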

Remark 5.

Instead of adding more modes, the case where we refine the field model by deleting some modes can also be handled in a similar manner.

VI Numerical Studies

For performance evaluation of the field estimation algorithms, we consider two performance measures: the mean squared error (MSE) and the structural similarity index (SSIM). These are defined similarly to Section IV, except that we replace the approximated field with the estimated field

\[
\hat{\phi}_d(I_x,I_y) \triangleq \sum_{(u,v)\in\tilde{\mathcal{U}}} \alpha_x(u)\alpha_y(v)\,\hat{C}(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big).
\]

VI-A Static Fields

We consider estimation of the (true) field shown in Fig. 5, with search region $\mathcal{S}=[0,100]\times[0,100]$. The field is discretized using $N_x=100$ and $N_y=100$. We use (5) to select the largest modes that we wish to retain and estimate.

Figure 5: Static field

We use Algorithm 1 with $\eta=5$ and $\varsigma=1/20000$. As the field is assumed static, the forgetting factor is set to $\delta=1$. The initial position index is set to $\bm{I}_{\mathbf{x},0}=(50,50)$, close to the center of the search region $\mathcal{S}$. A four-level quantizer is used with thresholds $\tau_0=1$, $\tau_1=2$, $\tau_2=3$. The measurement noise $n(\cdot,\cdot)$ is i.i.d. Gaussian with zero mean and variance 0.1. For choosing the measurement locations, we use Algorithm 2 with $\rho_0=10$. The candidate position indices $\mathcal{I}_{k+1}$ are chosen to correspond to 36 points placed uniformly on a grid within the search region $\mathcal{S}$. The exploration probability is chosen as $\varepsilon=0.1$.

Figure 6: Static field: MSE vs. $k$
Figure 7: Static field: SSIM vs. $k$

Fig. 6 shows the MSE vs. $k$ (corresponding to the number of measurements collected) when various numbers of modes are estimated. Fig. 7 shows the SSIM vs. $k$. Each point in Figs. 6 and 7 is obtained by averaging over 10 runs. We see from the figures that there is a trade-off between the estimation quality, the number of modes/parameters to be estimated, and the number of measurements collected. If many measurements can be collected, then estimating more modes allows for a better estimate of the field.\footnote{For example, if multiple vehicles can be utilized [21] or one has a sensor network, more measurements can be collected in a limited amount of time.} On the other hand, if fewer measurements are available, estimating fewer modes more accurately may give a better field estimate than estimating many modes inaccurately. In Fig. 8 we show a sample plot of the estimated field when 60 modes are estimated, after 2000 measurements have been collected.

Figure 8: Static field: Estimated field using 2000 measurements

VI-B Time-varying Fields

We now consider an example with time-varying fields. Suppose the true field is the same as that of Fig. 1 for the first 1000 iterations, but then switches to the true field of Fig. 3 for the next 1000 iterations. We use Algorithms 1 and 2 with forgetting factor $\delta=0.995$, with the other parameters the same as in the previous example.

Figs. 9 and 10 show respectively the MSE and SSIM vs. $k$ when the 60 largest modes are estimated. We see that after the field changes at $k=1000$ the accuracy of the field estimate drops, but Algorithm 1 is able to recover and estimate the new field as more measurements are collected.

For comparison, the MSE and SSIM obtained using forgetting factor $\delta=1$ are also shown. In this case, as there is no forgetting of old information, the field estimates take much longer to adjust to the new field.

Figure 9: Time varying field: MSE vs. $k$
Figure 10: Time varying field: SSIM vs. $k$

VI-C Refinement of Field Model

Here we consider the use of a refined field model as in Section V-C, for estimation of the static field shown in Fig. 5. For the first 1000 iterations the 40 largest modes are estimated, while the 80 largest modes are estimated for the next 1000 iterations, with the estimates of the 40 largest modes reused as described in Section V-C. Figs. 11 and 12 show respectively the MSE and SSIM vs. $k$. We see that just after $k=1000$ the estimate quality using 80 modes decreases slightly, due to the new modes not yet being accurately estimated, but the performance quickly improves and eventually outperforms the use of 40 modes.

Figure 11: Estimating 40 modes for the first 1000 iterations, then 80 modes for the next 1000 iterations: MSE vs. $k$
Figure 12: Estimating 40 modes for the first 1000 iterations, then 80 modes for the next 1000 iterations: SSIM vs. $k$

VII Conclusion

This paper has studied the estimation of scalar fields, where the field is viewed in the Fourier domain. An algorithm has been presented for estimating the lower order modes of the field, under the assumption of noisy quantized measurements collected from the environment. Our approach assumed an unmanned autonomous vehicle travelling around a region in order to collect these measurements. The setup can also be extended to multiple vehicles, with the vehicles sharing measurements with each other similar to [21]. Future work will consider the use of a sensor network for field estimation, with algorithms constrained by local communication and distributed computation.

Acknowledgment

The authors thank Mr. Shintaro Umeki for suggesting the use of structural similarity as a performance measure while working at Defence Science and Technology Group as a summer vacation student.

Appendix A Proof of Lemma 1

By definition,

\[
\begin{split}
\textnormal{MSE} &= \frac{1}{N_x N_y}\sum_{I_x=0}^{N_x-1}\sum_{I_y=0}^{N_y-1}\Big(\phi_d(I_x,I_y)-\tilde{\phi}_d(I_x,I_y)\Big)^2\\
&= \frac{1}{N_x N_y}\sum_{I_x=0}^{N_x-1}\sum_{I_y=0}^{N_y-1}\Bigg(\sum_{(u,v)\in\mathcal{U}}\alpha_x(u)\alpha_y(v)C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\\
&\qquad - \sum_{(u,v)\in\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)\tilde{C}(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\Bigg)^2\\
&= \frac{1}{N_x N_y}\sum_{I_x=0}^{N_x-1}\sum_{I_y=0}^{N_y-1}\Bigg(\sum_{(u,v)\in\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)\big(C(u,v)-\tilde{C}(u,v)\big)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\\
&\qquad + \sum_{(u,v)\in\mathcal{U}\setminus\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\Bigg)^2\\
&= \frac{1}{N_x N_y}\sum_{I_x=0}^{N_x-1}\sum_{I_y=0}^{N_y-1}\Bigg[\Bigg(\sum_{(u,v)\in\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)\big(C(u,v)-\tilde{C}(u,v)\big)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\Bigg)^2\\
&\qquad + 2\Bigg(\sum_{(u,v)\in\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)\big(C(u,v)-\tilde{C}(u,v)\big)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\Bigg)\\
&\qquad\quad\times\Bigg(\sum_{(u,v)\in\mathcal{U}\setminus\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\Bigg)\\
&\qquad + \Bigg(\sum_{(u,v)\in\mathcal{U}\setminus\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\Bigg)^2\Bigg]\\
&= \frac{1}{N_x N_y}\sum_{I_x=0}^{N_x-1}\sum_{I_y=0}^{N_y-1}\Bigg[\Bigg(\sum_{(u,v)\in\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)\big(C(u,v)-\tilde{C}(u,v)\big)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\Bigg)^2\\
&\qquad + \Bigg(\sum_{(u,v)\in\mathcal{U}\setminus\tilde{\mathcal{U}}}\alpha_x(u)\alpha_y(v)C(u,v)\cos\Big(\frac{(2I_x+1)\pi u}{2N_x}\Big)\cos\Big(\frac{(2I_y+1)\pi v}{2N_y}\Big)\Bigg)^2\Bigg]. \tag{40}
\end{split}
\]

The last equality follows since

\begin{align*}
\sum_{I_x=0}^{N_x-1}\sum_{I_y=0}^{N_y-1} &\,\alpha_x(u)\alpha_y(v)\big(C(u,v)-\tilde{C}(u,v)\big)\cos\!\left(\frac{(2I_x+1)\pi u}{2N_x}\right)\cos\!\left(\frac{(2I_y+1)\pi v}{2N_y}\right) \\
&\times\alpha_x(u^{\prime})\alpha_y(v^{\prime})C(u^{\prime},v^{\prime})\cos\!\left(\frac{(2I_x+1)\pi u^{\prime}}{2N_x}\right)\cos\!\left(\frac{(2I_y+1)\pi v^{\prime}}{2N_y}\right)
\end{align*}

is equal to zero for all $(u,v)\in\tilde{\mathcal{U}}$ and $(u^{\prime},v^{\prime})\in\mathcal{U}\setminus\tilde{\mathcal{U}}$, by orthogonality of the DCT basis vectors [39, 25]. Indeed, the double sum factorizes into a product of two 1D cosine cross-sums, and since $\tilde{\mathcal{U}}$ and $\mathcal{U}\setminus\tilde{\mathcal{U}}$ are disjoint we have $(u,v)\neq(u^{\prime},v^{\prime})$, so at least one of the two factors vanishes. To conclude the proof, we note that the expression for the MSE given in the last equality of (40) is clearly minimized when $\tilde{C}(u,v)=C(u,v),\,\forall(u,v)\in\tilde{\mathcal{U}}$, since the first squared term is then zero and the second term does not depend on $\tilde{C}$.
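
As a quick numerical sanity check of this orthogonality step (illustrative only, not part of the derivation), the following Python sketch verifies that the 1D cosine cross-sums vanish for $u\neq u^{\prime}$ and recovers the diagonal values $N$ (for $u=0$) and $N/2$ (for $u\geq 1$) that the weights $\alpha_x(u)$, $\alpha_y(v)$ normalize; the grid size N below is an arbitrary choice.

import numpy as np

N = 16  # illustrative grid size (stands in for N_x or N_y)
I = np.arange(N)
for u in range(N):
    for up in range(N):
        # Inner product of two DCT cosine basis vectors.
        s = np.sum(np.cos((2 * I + 1) * np.pi * u / (2 * N))
                   * np.cos((2 * I + 1) * np.pi * up / (2 * N)))
        # Zero off the diagonal; N or N/2 on the diagonal.
        expected = 0.0 if u != up else (N if u == 0 else N / 2)
        assert abs(s - expected) < 1e-9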
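The decomposition (40) itself can also be checked numerically. The sketch below is again illustrative rather than part of the proof: it assumes that SciPy's orthonormal 2D DCT-II (norm="ortho") matches our DCT definition with the $\alpha_x$, $\alpha_y$ weights absorbed, and the grid sizes, the retained low-frequency block standing in for $\tilde{\mathcal{U}}$, and the perturbation level are arbitrary choices.

import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
Nx, Ny = 32, 24

# Synthetic "true" field and its 2D DCT coefficients C(u, v).
phi = rng.standard_normal((Nx, Ny))
C = dctn(phi, norm="ortho")

# Retained mode set (low-frequency Kx-by-Ky corner), with imperfect
# coefficient estimates C_tilde on that set and zeros elsewhere.
Kx, Ky = 8, 6
C_tilde = np.zeros_like(C)
C_tilde[:Kx, :Ky] = C[:Kx, :Ky] + 0.1 * rng.standard_normal((Kx, Ky))

# Spatial-domain MSE of the reconstructed field.
phi_tilde = idctn(C_tilde, norm="ortho")
mse_spatial = np.mean((phi - phi_tilde) ** 2)

# Mode-domain expression from (40): estimation error on the retained
# set plus the energy of the discarded modes.
est_err = np.sum((C[:Kx, :Ky] - C_tilde[:Kx, :Ky]) ** 2)
discarded = np.ones_like(C, dtype=bool)
discarded[:Kx, :Ky] = False
trunc_err = np.sum(C[discarded] ** 2)
mse_modes = (est_err + trunc_err) / (Nx * Ny)

assert np.isclose(mse_spatial, mse_modes)

Setting C_tilde equal to C on the retained set makes the first term vanish, leaving only the truncation energy, consistent with the minimizer identified above.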

References

  • [1] M. Hutchinson, H. Oh, and W.-H. Chen, “A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors,” Inf. Fusion, vol. 36, pp. 130–148, 2017.
  • [2] B. Ristic, M. Morelande, and A. Gunatilaka, “Information driven search for point sources of gamma radiation,” Signal Process., vol. 90, pp. 1225–1239, 2010.
  • [3] T. Yardibi, J. Li, P. Stoica, M. Xue, and A. B. Baggeroer, “Source localization and sensing: A nonparametric iterative adaptive approach based on weighted least squares,” IEEE Trans. Aerosp. Electron. Syst., vol. 46, no. 1, pp. 425–443, Jan. 2010.
  • [4] A. J. Annunzio, G. S. Young, and S. E. Haupt, “Utilizing state estimation to determine the source location for a contaminant,” Atmos. Environ., vol. 46, pp. 580–589, 2012.
  • [5] P. P. Neumann, V. Hernandez Bennetts, A. J. Lilienthal, M. Bartholmai, and J. H. Schiller, “Gas source localization with a micro-drone using bio-inspired and particle filter-based algorithms,” Advanced Robotics, vol. 27, no. 9, pp. 725–738, 2013.
  • [6] D. Wade and I. Senocak, “Stochastic reconstruction of multiple source atmospheric contaminant dispersion events,” Atmos. Environ., vol. 74, pp. 45–51, 2013.
  • [7] A. A. R. Newaz, S. Jeong, H. Lee, H. Ryu, and N. Y. Chong, “UAV-based multiple source localization and contour mapping of radiation fields,” Robotics and Autonomous Systems, vol. 85, pp. 12–25, 2016.
  • [8] B. Ristic, A. Gunatilaka, and R. Gailis, “Localisation of a source of hazardous substance dispersion using binary measurements,” Atmos. Environ., vol. 142, pp. 114–119, 2016.
  • [9] D. D. Selvaratnam, I. Shames, D. V. Dimarogonas, J. H. Manton, and B. Ristic, “Co-operative estimation for source localisation using binary sensors,” in Proc. IEEE Conf. Decision and Control, Melbourne, Australia, Dec. 2017, pp. 1572–1577.
  • [10] M. Hutchinson, C. Liu, and W.-H. Chen, “Source term estimation of a hazardous airborne release using an unmanned aerial vehicle,” J. Field Robotics, vol. 36, pp. 797–817, 2019.
  • [11] P. W. Eslinger, J. M. Mendez, and B. T. Schrom, “Source term estimation in the presence of nuisance signals,” J. Environ. Radioact., vol. 203, pp. 220–225, 2019.
  • [12] D. Li, F. Chen, Y. Wang, and X. Wang, “Implementation of a UAV-sensory-system-based hazard source estimation in a chemical plant cluster,” in IOP Conf. Series, 2019, p. 012043.
  • [13] M. Park, S. An, J. Seo, and H. Oh, “Autonomous source search for UAVs using Gaussian mixture model-based infotaxis: Algorithm and flight experiments,” IEEE Trans. Aerosp. Electron. Syst., vol. 57, no. 6, pp. 4238–4254, Dec. 2021.
  • [14] D. Weidmann, B. Hirst, M. Jones, R. Ijzermans, D. Randell, N. Macleod, A. Kannath, J. Chu, and M. Dean, “Locating and quantifying methane emissions by inverse analysis of path-integrated concentration data using a Markov-Chain Monte Carlo approach,” ACS Earth Space Chem., vol. 6, pp. 2190–2198, Jun. 2022.
  • [15] P. Martin, O. Payton, J. Fardoulis, D. Richards, Y. Yamashiki, and T. Scott, “Low altitude unmanned aerial vehicle for characterising remediation effectiveness following the FDNPP accident,” J. Environ. Radioact., vol. 151, pp. 58–63, Jun. 2016.
  • [16] Z. Wang, J. Yang, and J. Wu, “Level set estimation of spatial-temporally correlated random fields with active sparse sensing,” IEEE Trans. Aerosp. Electron. Syst., vol. 53, no. 2, pp. 862–876, Apr. 2017.
  • [17] M. R. Morelande and A. Skvortsov, “Radiation field estimation using a Gaussian mixture,” in Proc. Int. Conf. Inf. Fusion, Seattle, USA, Jul. 2009, pp. 2247–2254.
  • [18] H. M. La and W. Sheng, “Distributed sensor fusion for scalar field mapping using mobile sensor networks,” IEEE Trans. Cybern., vol. 43, no. 2, pp. 766–778, Apr. 2013.
  • [19] H. M. La, W. Sheng, and J. Chen, “Cooperative and active sensing in mobile sensor networks for scalar field mapping,” IEEE Trans. Syst., Man, Cybern., Syst., vol. 45, no. 1, pp. 1–12, Jan. 2015.
  • [20] R. A. Razak, S. Sukumar, and H. Chung, “Scalar field estimation with mobile sensor networks,” Int. J. Robust Nonlinear Control, vol. 31, pp. 4287–4305, 2021.
  • [21] A. S. Leong and M. Zamani, “Field estimation using binary measurements,” Signal Process., vol. 194, art. no. 108430, 2022.
  • [22] A. S. Leong, M. Zamani, and I. Shames, “A logistic regression approach to field estimation using binary measurements,” IEEE Signal Process. Lett., vol. 29, pp. 1848–1852, 2022.
  • [23] V. P. Tran, M. A. Garratt, K. Kasmarik, S. G. Anavatti, A. S. Leong, and M. Zamani, “Multi-gas source localization and mapping by flocking robots,” Inf. Fusion, vol. 91, pp. 665–680, 2023.
  • [24] V. Britanak, P. Yip, and K. R. Rao, Discrete Cosine and Sine Transforms.   Academic Press, 2007.
  • [25] G. Strang, “The discrete cosine transform,” SIAM Review, vol. 41, no. 1, pp. 135–147, 1999.
  • [26] K. Yamatani and N. Saito, “Improvement of DCT-based compression algorithms using Poisson’s equation,” IEEE Trans. Image Process., vol. 15, no. 12, pp. 3672–3689, 2006.
  • [27] P. Robins, V. Rapley, and P. Thomas, “A probabilistic chemical sensor model for data fusion,” in Proc. Int. Conf. Inf. Fusion, Philadelphia, USA, Jul. 2005, pp. 1116–1122.
  • [28] Y. Cheng, U. Konda, T. Singh, and P. Scott, “Bayesian estimation for CBRN sensors with non-Gaussian likelihood,” IEEE Trans. Aerosp. Electron. Syst., vol. 47, no. 1, pp. 684–701, Jan. 2011.
  • [29] G. Calafiore and L. El Ghaoui, Optimization Models.   Cambridge University Press, 2014.
  • [30] K. P. Murphy, Probabilistic Machine Learning: An Introduction.   The MIT Press, 2022.
  • [31] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
  • [32] Z. Wang and A. C. Bovik, “Mean squared error: Love it or leave it?” IEEE Signal Process. Mag., vol. 26, no. 1, pp. 98–117, Jan. 2009.
  • [33] A. Lesage-Landry, J. A. Taylor, and I. Shames, “Second-order online nonconvex optimization,” IEEE Trans. Autom. Control, vol. 66, no. 10, pp. 4866–4872, Oct. 2021.
  • [34] E. K. P. Chong and S. H. Żak, An Introduction to Optimization, 4th ed.   John Wiley & Sons, 2013.
  • [35] D. G. Manolakis, V. K. Ingle, and S. M. Kogon, Statistical and Adaptive Signal Processing.   Artech House, 2005.
  • [36] C. M. Kreucher, K. D. Kastella, and A. O. Hero, “Sensor management using an active sensing approach,” Signal Process., vol. 85, pp. 607–624, 2005.
  • [37] B. Ristic, A. Skvortsov, and A. Gunatilaka, “A study of cognitive strategies for an autonomous search,” Inf. Fusion, vol. 28, pp. 1–9, 2016.
  • [38] R. S. Sutton and A. G. Barto, Reinforcement Learning, 2nd ed.   The MIT Press, 2018.
  • [39] N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Trans. Comput., vol. C-23, no. 1, pp. 90–93, Jan. 1974.