
Alternating projections gridless covariance-based estimation for DOA

Abstract

We present a gridless sparse iterative covariance-based estimation method based on alternating projections for direction-of-arrival (DOA) estimation. Gridless DOA estimation is formulated as the reconstruction of a Toeplitz-structured low-rank matrix and is solved efficiently with alternating projections. The method improves resolution by promoting sparsity, handles single-snapshot data and coherent arrivals, and, with co-prime arrays, estimates more DOAs than the number of sensors. We evaluate the proposed method in simulations focusing on co-prime arrays.

Index Terms—  DOA estimation, sparse signal recovery, off-grid sparse model, alternating projections, compressive sensing

1 Introduction

Direction-of-arrival (DOA) estimation localizes several sources arriving at an array of sensors. It is an important problem in a wide range of applications, including radar and sonar. Compressive sensing (CS) based DOA estimation, which promotes sparse solutions, has advantages over traditional DOA estimation methods. [1, 2] Since DOAs lie in a continuous angular domain, gridless CS can be employed. [3, 4, 5] We propose a gridless sparsity-promoting DOA estimation method and apply it to co-prime arrays, which can resolve more sources than the number of sensors.

CS-based DOA estimation exploits the sparsity-promoting framework of CS and offers high resolution, handles single-snapshot data, and performs well with coherent arrivals. [2, 3] To estimate DOAs in a continuous angular domain, the non-linear estimation problem is linearized using a discretized angular-search grid of potential DOAs ("steering vectors"). Grid-based CS suffers from basis mismatch [6, 7] when the true DOAs do not fall on the angular-search grid. To overcome basis mismatch, gridless CS [8, 9] has been utilized for DOA estimation. [3, 4, 5, 10, 11]

Gridless SPICE (GLS) [5, 12], one of the off-grid sparse methods, is a gridless version of sparse iterative covariance-based estimation (SPICE) [13]. GLS re-parameterizes the data covariance, or sample covariance matrix (SCM), using a positive semi-definite (PSD) Toeplitz matrix and finds the lowest-rank Toeplitz matrix that fits the SCM. The Toeplitz-structured SCM is related to a Fourier series composed of harmonics. [14, 15] GLS-based DOA estimation retrieves DOA-dependent harmonics from the SCM parameter. [5] The GLS solver relies on semi-definite programming (SDP), which is computationally prohibitive for high-dimensional problems.

The alternating projections (AP) algorithm [16, 17] has been introduced to solve matrix completion [18, 19, 20, 21] and structured low-rank matrix recovery [22, 23]; it finds a point in the intersection of a linear subspace and a nonconvex manifold by projecting onto them alternately. Atomic norm minimization (ANM) [6, 8] solves gridless CS and is equivalent to the recovery of a Toeplitz-structured low-rank matrix [24]. AP based on ANM has been applied to gridless CS for DOA estimation. [25, 26]

We propose AP-based GLS (AP-GLS) for gridless CS-based DOA estimation. GLS reconstructs a DOA-dependent SCM, which is a Toeplitz-structured low-rank matrix with a PSD constraint. AP-GLS solves this reconstruction using a sequence of projections onto the following sets: the Toeplitz set, the rank-constraint set, and the PSD set.

Co-prime arrays were introduced for DOA estimation and offer the capability of identifying more sources than the number of sensors. [27] Sparse Bayesian learning (SBL) handles co-prime arrays without constructing a co-array-based covariance matrix and accurately identifies more sources than the number of sensors. [28, 29] We apply AP-GLS to co-prime arrays and show that AP-GLS with co-prime arrays estimates more DOAs than the number of sensors.

We study the performance of AP-GLS with co-prime arrays for single- and multiple-snapshot data, incoherent and coherent sources, and when the number of sources exceeds the number of sensors.

2 Signal model and co-prime array

2.1 Signal model

We consider $K$ narrowband sources observed over $L$ snapshots with complex signal amplitudes $s_{k,l}\in\mathbb{C}$, $k=1,\dotsc,K$, $l=1,\dotsc,L$. The sources have DOAs $\theta_{k}\in\Theta\triangleq[-90^{\circ},90^{\circ})$, $k=1,\dotsc,K$, stationary across the $L$ snapshots, in the far-field of a linear array with $M$ sensors. The observed data $\mathbf{Y}\in\mathbb{C}^{M\times L}$ is modeled as

$$\mathbf{Y}=\sum_{k=1}^{K}\mathbf{a}(\theta_{k})\mathbf{s}_{k:}+\mathbf{E}=\sum_{k=1}^{K}c_{k}\,\mathbf{a}(\theta_{k})\boldsymbol{\phi}_{k:}+\mathbf{E}, \quad (1)$$

where $\mathbf{s}_{k:}=[s_{k,1}\ \dotsc\ s_{k,L}]\in\mathbb{C}^{1\times L}$, $c_{k}=\lVert\mathbf{s}_{k:}\rVert_{2}>0$, $\boldsymbol{\phi}_{k:}=c_{k}^{-1}\mathbf{s}_{k:}\in\mathbb{C}^{1\times L}$ with $\lVert\boldsymbol{\phi}_{k:}\rVert_{2}=1$, $\mathbf{E}\in\mathbb{C}^{M\times L}$ is the measurement noise, and $\mathbf{a}(\theta_{k})\in\mathbb{C}^{M}$ is the steering vector. The steering vector is given by ($\lambda$ is the signal wavelength and $d_{m}$ is the distance from sensor $1$ to sensor $m$)

$$\mathbf{a}(\theta_{k})=\Big[1\ \ e^{-j\frac{2\pi}{\lambda}d_{2}\sin\theta_{k}}\ \dotsc\ e^{-j\frac{2\pi}{\lambda}d_{M}\sin\theta_{k}}\Big]^{\mathsf{T}}. \quad (2)$$
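As a concrete illustration, the steering vector in (2) can be computed as follows (a minimal NumPy sketch; the function name and the convention of giving sensor positions as integer multiples $\delta_{m}$ of a spacing $d$ in wavelengths are our own, anticipating Sec. 2.2):

```python
import numpy as np

def steering_vector(theta_deg, delta, d=0.5):
    """Steering vector a(theta) of Eq. (2) for sensors at positions d_m = delta_m * d,
    with d given in wavelengths (d = 0.5 is half-wavelength spacing)."""
    theta = np.deg2rad(theta_deg)
    # With d_m / lambda = delta_m * d, sensor m contributes phase -2*pi*d*delta_m*sin(theta)
    return np.exp(-2j * np.pi * d * np.asarray(delta) * np.sin(theta))
```

For broadside arrivals ($\theta=0^{\circ}$) all phases vanish and the steering vector is all ones; every entry has unit magnitude.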

2.2 Co-prime array

Consider an array whose sensor positions are given by $d_{m}=\delta_{m}d$, $m=1,\dotsc,M$, where the integer $\delta_{m}$ is the normalized location of the $m$th sensor and $d$ is the minimum sensor spacing. A uniform linear array (ULA) is composed of uniformly spaced sensors with $\boldsymbol{\delta}=[0\ 1\ \dotsc\ M-1]^{\mathsf{T}}$ and $d=\lambda/2$.

A co-prime array combines two ULAs with spacings $M_{1}d$ and $M_{2}d$, where $M_{1}$ and $M_{2}$ are co-prime, i.e., their greatest common divisor is 1. [27] It consists of a ULA with $\boldsymbol{\delta}=[0\ M_{2}\ \dotsc\ (M_{1}-1)M_{2}]^{\mathsf{T}}$ and a ULA with $\boldsymbol{\delta}=[M_{1}\ 2M_{1}\ \dotsc\ (2M_{2}-1)M_{1}]^{\mathsf{T}}$, for a total of $M_{1}+2M_{2}-1$ sensors.

We use a 16-sensor ULA with $\boldsymbol{\delta}=[0\ 1\ \dotsc\ 15]^{\mathsf{T}}$ and an 8-sensor co-prime array with $M_{1}=5$ and $M_{2}=2$, i.e., $\boldsymbol{\delta}=[0\ 2\ 4\ 5\ 6\ 8\ 10\ 15]^{\mathsf{T}}$.
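The co-prime geometry above can be generated programmatically (a small sketch; `coprime_positions` is our own helper name):

```python
def coprime_positions(M1, M2):
    """Normalized sensor positions (delta) of a co-prime array:
    the union of a ULA with spacing M2 (M1 sensors) and a ULA with
    spacing M1 (2*M2 - 1 sensors, starting at M1)."""
    ula1 = {M2 * n for n in range(M1)}           # [0, M2, ..., (M1-1)*M2]
    ula2 = {M1 * n for n in range(1, 2 * M2)}    # [M1, 2*M1, ..., (2*M2-1)*M1]
    return sorted(ula1 | ula2)
```

For $M_{1}=5$, $M_{2}=2$ this returns `[0, 2, 4, 5, 6, 8, 10, 15]`, the 8-sensor geometry used in the simulations.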

3 Alternating projections gridless SPICE

Consider the ULA case and assume incoherent sources (GLS is robust to source correlations [5, 12, 13]). In the noiseless case, the SCM $\mathbf{R}^{\ast}\in\mathbb{C}^{M\times M}$ is given by

$$\mathbf{R}^{\ast}=\frac{1}{L}\mathbf{Y}^{\ast}\mathbf{Y}^{\ast\mathsf{H}}=\sum_{k=1}^{K}p_{k}\,\mathbf{a}(\theta_{k})\,\mathbf{a}^{\mathsf{H}}(\theta_{k}), \quad (3)$$

where $\mathbf{Y}^{\ast}$ is the noise-free data and $p_{k}=c_{k}^{2}>0$, $k=1,\dotsc,K$, are the source powers. The SCM $\mathbf{R}^{\ast}$ is a (Hermitian) Toeplitz matrix,

$$\mathbf{R}^{\ast}=\mathrm{Toep}(\mathbf{r})=\begin{bmatrix}r_{1}&r_{2}&\cdots&r_{M}\\ r_{2}^{\ast}&r_{1}&\cdots&r_{M-1}\\ \vdots&\vdots&\ddots&\vdots\\ r_{M}^{\ast}&r_{M-1}^{\ast}&\cdots&r_{1}\end{bmatrix}, \quad (4)$$

where $\mathbf{r}\in\mathbb{C}^{M}$. Moreover, $\mathbf{R}^{\ast}$ is PSD and has rank $K$. A PSD Toeplitz matrix of rank $K<M$ admits a unique Vandermonde decomposition [5, 6, 30],

$$\mathbf{R}^{\ast}=\sum_{k=1}^{K}p_{k}\,\mathbf{a}(\theta_{k})\,\mathbf{a}^{\mathsf{H}}(\theta_{k})=\mathbf{A}\,\mathrm{diag}(\mathbf{p})\,\mathbf{A}^{\mathsf{H}}, \quad (5)$$

where $\mathbf{A}=[\mathbf{a}(\theta_{1})\ \dotsc\ \mathbf{a}(\theta_{K})]\in\mathbb{C}^{M\times K}$.
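A quick numerical check of (3)-(5), with our own illustrative parameters: the matrix built from two steering vectors on a half-wavelength ULA is Hermitian Toeplitz with rank $K=2$:

```python
import numpy as np

M, d = 8, 0.5                            # 8-sensor half-wavelength ULA
thetas, powers = [-20.0, 30.0], [1.0, 2.0]
delta = np.arange(M)

# A = [a(theta_1) ... a(theta_K)] as in Eq. (5)
A = np.stack([np.exp(-2j * np.pi * d * delta * np.sin(np.deg2rad(t)))
              for t in thetas], axis=1)
R = A @ np.diag(powers) @ A.conj().T     # R* = A diag(p) A^H

# R* is Hermitian Toeplitz (constant diagonals) with rank K = 2
assert np.allclose(R, R.conj().T)
assert np.allclose(np.diag(R, 1), np.diag(R, 1)[0])
assert np.linalg.matrix_rank(R, tol=1e-8) == 2
```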

GLS uses an SCM-related parameter $\mathbf{R}\in\mathbb{C}^{M\times M}$, a rank-$K$ PSD Toeplitz matrix, and fits $\mathbf{R}$ to the SCM $\tilde{\mathbf{R}}=\mathbf{Y}\mathbf{Y}^{\mathsf{H}}/L\in\mathbb{C}^{M\times M}$. For $L\geq M$, whenever $\tilde{\mathbf{R}}$ is non-singular, the covariance fitting minimizes the criterion [5, 12, 13]

$$\Big\lVert\mathbf{R}^{-\frac{1}{2}}\big(\tilde{\mathbf{R}}-\mathbf{R}\big)\tilde{\mathbf{R}}^{-\frac{1}{2}}\Big\rVert_{\mathrm{F}}^{2}. \quad (6)$$

For $L<M$, when $\tilde{\mathbf{R}}$ is singular, the following criterion is used instead [5, 12]:

$$\Big\lVert\mathbf{R}^{-\frac{1}{2}}\big(\tilde{\mathbf{R}}-\mathbf{R}\big)\Big\rVert_{\mathrm{F}}^{2}=\mathrm{tr}\big(\tilde{\mathbf{R}}\mathbf{R}^{-1}\tilde{\mathbf{R}}\big)+\mathrm{tr}(\mathbf{R})-2\,\mathrm{tr}(\tilde{\mathbf{R}}). \quad (7)$$

GLS is achieved using the following optimization,

$$\min_{\mathbf{R}}\ \mathrm{tr}\big(\tilde{\mathbf{R}}\mathbf{R}^{-1}\tilde{\mathbf{R}}\big)+\mathrm{tr}(\mathbf{R})\quad\text{subject to}\quad\mathbf{R}\succeq 0$$

$$\Leftrightarrow\ \min_{\mathbf{R},\mathbf{Z}}\ \mathrm{tr}(\mathbf{Z})+\mathrm{tr}(\mathbf{R})\quad\text{subject to}\quad\mathbf{R}\succeq 0,\ \ \mathbf{Z}\succeq\tilde{\mathbf{R}}\mathbf{R}^{-1}\tilde{\mathbf{R}}$$

$$\Leftrightarrow\ \min_{\mathbf{R},\mathbf{Z}}\ \mathrm{tr}(\mathbf{Z})+\mathrm{tr}(\mathbf{R})\quad\text{subject to}\quad\begin{bmatrix}\mathbf{R}&\tilde{\mathbf{R}}\\ \tilde{\mathbf{R}}&\mathbf{Z}\end{bmatrix}\succeq 0, \quad (8)$$

where $\mathbf{R}\succeq 0$ denotes that $\mathbf{R}$ is a PSD matrix and $\mathbf{Z}\in\mathbb{C}^{M\times M}$ is a free variable. Consider the case $\mathbf{R}=\mathbf{R}^{\ast}$; then $\mathrm{tr}(\mathbf{R})=M\sum_{k=1}^{K}p_{k}$. Defining $\mathrm{tr}(\mathbf{Z})=M\sum_{k=1}^{K}p_{k}$, the objective in (8), divided by $2M$, equals

$$\frac{1}{2M}\mathrm{tr}(\mathbf{R})+\frac{1}{2M}\mathrm{tr}(\mathbf{Z})=\sum_{k=1}^{K}p_{k}. \quad (9)$$

Note that, in ANM [6, 8], minimizing $\sum_{k=1}^{K}p_{k}=\sum_{k=1}^{K}c_{k}^{2}$ is equivalent to minimizing the atomic norm,

$$\lVert\mathbf{Y}^{\ast}\rVert_{\mathcal{A}}=\inf_{c_{k},\theta_{k},\boldsymbol{\phi}_{k:}}\Big\{\sum_{k=1}^{K}c_{k}:\ \mathbf{Y}^{\ast}=\sum_{k=1}^{K}c_{k}\,\mathbf{a}(\theta_{k})\boldsymbol{\phi}_{k:}\Big\}. \quad (10)$$

The atomic norm is a convex relaxation of the atomic $\ell_{0}$ norm [6],

$$\lVert\mathbf{Y}^{\ast}\rVert_{\mathcal{A},0}=\inf_{c_{k},\theta_{k},\boldsymbol{\phi}_{k:}}\Big\{K:\ \mathbf{Y}^{\ast}=\sum_{k=1}^{K}c_{k}\,\mathbf{a}(\theta_{k})\boldsymbol{\phi}_{k:}\Big\}. \quad (11)$$

Minimizing the atomic $\ell_{0}$ norm is equivalent to minimizing the rank of $\mathbf{R}^{\ast}=\mathbf{Y}^{\ast}\mathbf{Y}^{\ast\mathsf{H}}/L$. [5, 6] Summarizing, the term $\mathrm{tr}(\mathbf{R})$, proportional to $\sum_{k=1}^{K}p_{k}$, is the nuclear norm of the PSD matrix $\mathbf{R}$, used as a convex relaxation of $\mathrm{rank}(\mathbf{R})$.

Replacing the trace (nuclear-norm) terms in (8) with the rank itself, the resulting optimization is

$$\min_{\mathbf{R},\mathbf{Z}}\ \mathrm{rank}(\mathbf{R})\quad\text{subject to}\quad\begin{bmatrix}\mathbf{R}&\tilde{\mathbf{R}}\\ \tilde{\mathbf{R}}&\mathbf{Z}\end{bmatrix}\succeq 0. \quad (12)$$

For the co-prime array, we use the row-selection matrix $\mathbf{\Gamma}_{\Omega}\in\{0,1\}^{M_{\Omega}\times M}$, i.e.,

$$\mathbf{Y}_{\Omega}=\mathbf{\Gamma}_{\Omega}\mathbf{Y}\quad\text{or}\quad\mathbf{Y}=\mathbf{\Gamma}_{\Omega}^{\dagger}\mathbf{Y}_{\Omega}, \quad (13)$$

where $\mathbf{Y}$ is the data of the full $M$-element ULA and $\mathbf{\Gamma}_{\Omega}^{\dagger}$ is its Moore-Penrose pseudo-inverse. The optimization for the co-prime array is

$$\min_{\mathbf{R},\mathbf{Z}}\ \mathrm{rank}(\mathbf{R})\quad\text{subject to}\quad\begin{bmatrix}\mathbf{R}_{\Omega}&\tilde{\mathbf{R}}_{\Omega}\\ \tilde{\mathbf{R}}_{\Omega}&\mathbf{Z}\end{bmatrix}\succeq 0, \quad (14)$$

where $\tilde{\mathbf{R}}_{\Omega}=\mathbf{Y}_{\Omega}\mathbf{Y}_{\Omega}^{\mathsf{H}}/L\in\mathbb{C}^{M_{\Omega}\times M_{\Omega}}$ and $\mathbf{R}_{\Omega}=\mathbf{\Gamma}_{\Omega}\mathbf{R}\mathbf{\Gamma}_{\Omega}^{\mathsf{T}}\in\mathbb{C}^{M_{\Omega}\times M_{\Omega}}$. To minimize $\mathrm{rank}(\mathbf{R})$, $\mathbf{R}$ is recovered from its submatrix as

$$\mathbf{R}=\mathbf{\Gamma}_{\Omega}^{\dagger}\,\mathbf{R}_{\Omega}\,\big(\mathbf{\Gamma}_{\Omega}^{\dagger}\big)^{\mathsf{T}}. \quad (15)$$
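A sketch of the row-selection matrix and its pseudo-inverse (`selection_matrix` is our own helper; we assume the $M_{\Omega}\times M$ orientation so the products in (13)-(15) are conformable):

```python
import numpy as np

def selection_matrix(delta, M):
    """Row-selection matrix Gamma_Omega in {0,1}^(M_Omega x M):
    row i picks the ULA element at normalized position delta[i]."""
    G = np.zeros((len(delta), M))
    G[np.arange(len(delta)), delta] = 1.0
    return G

# 8-sensor co-prime array embedded in a 16-element ULA
G = selection_matrix([0, 2, 4, 5, 6, 8, 10, 15], 16)
Gp = np.linalg.pinv(G)   # Moore-Penrose pseudo-inverse
```

Because a selection matrix has orthonormal rows, its pseudo-inverse is simply its transpose; multiplying $\mathbf{\Gamma}_{\Omega}^{\dagger}(\cdot)(\mathbf{\Gamma}_{\Omega}^{\dagger})^{\mathsf{T}}$ embeds the $M_{\Omega}\times M_{\Omega}$ submatrix back into the full $M\times M$ grid with zeros at the missing sensors.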

4 Alternating projections

We suggest alternating projections to reconstruct the Toeplitz-structured low-rank matrices in (12) and (14). AP-GLS involves the following sets: the Toeplitz set, the positive semi-definite (PSD) set, and the rank-constraint set.

4.1 Projection onto the Toeplitz set

The SCM-related parameter $\mathbf{R}$ is a Toeplitz matrix, and the projection of $\mathbf{R}$ onto the Toeplitz set $\mathcal{T}$ is implemented by finding the closest Toeplitz matrix [22, 31],

$$P_{\mathcal{T}}(\mathbf{R})=\mathrm{Toep}(\mathbf{r}), \quad (16)$$

$$r_{m}=\frac{1}{2(M-m+1)}\sum_{i=1}^{M-m+1}\Big(R_{i,i+m-1}+R_{i+m-1,i}^{\ast}\Big). \quad (17)$$

Note that the $m$th component of $\mathbf{r}\in\mathbb{C}^{M}$ is obtained by averaging the entries of the $(m-1)$th superdiagonal with the conjugated entries of the $(m-1)$th subdiagonal.
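The diagonal averaging of (16)-(17) can be sketched as follows (`project_toeplitz` is our own name):

```python
import numpy as np

def project_toeplitz(R):
    """Projection onto the Hermitian Toeplitz set, Eqs. (16)-(17):
    each r_m averages the m-th superdiagonal of R with the conjugated
    m-th subdiagonal, then fills constant diagonals."""
    M = R.shape[0]
    T = np.zeros((M, M), dtype=complex)
    for m in range(M):
        r = 0.5 * (np.diag(R, m).mean() + np.conj(np.diag(R, -m)).mean())
        i = np.arange(M - m)
        T[i, i + m] = r            # superdiagonal entries r_m
        T[i + m, i] = np.conj(r)   # subdiagonal entries r_m^*
    return T
```

The output is Hermitian with constant diagonals, and the projection is idempotent: projecting an already-Toeplitz Hermitian matrix returns it unchanged.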

4.2 Projection onto the PSD set

The constraints in (12) and (14) require the block matrix to be PSD, which is enforced by projecting it onto the PSD set $\mathcal{P}$, defined by the PSD cone. The projection of a (Hermitian) matrix $\mathbf{S}$ onto the PSD set follows from the eigen-decomposition $\mathbf{S}=\sum_{i=1}^{2M}\mu_{i}\mathbf{q}_{i}\mathbf{q}_{i}^{\mathsf{H}}$ [16, 17],

$$P_{\mathcal{P}}(\mathbf{S})=\sum_{i=1}^{2M}\max\{0,\mu_{i}\}\,\mathbf{q}_{i}\mathbf{q}_{i}^{\mathsf{H}}. \quad (18)$$
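Equation (18) in code (a minimal sketch; the explicit symmetrization line guards against round-off and is our own addition):

```python
import numpy as np

def project_psd(S):
    """Projection onto the PSD cone, Eq. (18): eigen-decompose and
    clip negative eigenvalues at zero."""
    S = 0.5 * (S + S.conj().T)           # enforce Hermitian symmetry against round-off
    mu, Q = np.linalg.eigh(S)            # mu ascending, Q unitary
    return (Q * np.maximum(mu, 0.0)) @ Q.conj().T
```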

4.3 Projection onto the rank-constraint set

The objectives in (12) and (14) include rank constraints. Consider a rank-$K$ target for the matrix $\mathbf{R}$. The projection of $\mathbf{R}$ onto the rank-constraint set $\mathcal{R}$ is obtained from the singular value decomposition by keeping the $K$ largest singular values [19, 23],

$$P_{\mathcal{R}}(\mathbf{R})=\sum_{k=1}^{K}\sigma_{k}\mathbf{u}_{k}\mathbf{v}_{k}^{\mathsf{H}}, \quad (19)$$

where $\sigma_{k}$, $\mathbf{u}_{k}\in\mathbb{C}^{M}$, and $\mathbf{v}_{k}\in\mathbb{C}^{M}$, $k=1,\dotsc,K$, are the $K$ largest singular values and the corresponding left and right singular vectors. We remark that $\mathbf{R}$ is an SCM (Hermitian PSD), so the eigen-decomposition and the singular value decomposition coincide.
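Equation (19) in code (a minimal sketch using the truncated SVD, i.e., the Eckart-Young best rank-$K$ approximation):

```python
import numpy as np

def project_rank(R, K):
    """Projection onto the rank-K set, Eq. (19): keep the K largest
    singular values and corresponding singular vectors."""
    U, s, Vh = np.linalg.svd(R)          # s in descending order
    return (U[:, :K] * s[:K]) @ Vh[:K, :]
```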

4.4 Alternating projections

The initialized parameters $\mathbf{R}$ and $\mathbf{Z}$ form $\mathbf{S}$, which is made PSD, i.e., $\mathbf{S}=P_{\mathcal{P}}(\mathbf{S})$ (18). $\mathbf{R}$ is obtained from $\mathbf{S}$ as $\mathbf{R}=\mathbf{\Gamma}_{\Omega}^{\dagger}\,\mathbf{S}(1{:}M,1{:}M)\,(\mathbf{\Gamma}_{\Omega}^{\dagger})^{\mathsf{T}}$ (15), and the projection $P_{\mathcal{R}}(\mathbf{R})$ (19) makes $\mathbf{R}$ rank-$K$. The projection $P_{\mathcal{T}}(\mathbf{R})$ (16) then restores the Toeplitz structure. The submatrix $\mathbf{\Gamma}_{\Omega}\mathbf{R}\mathbf{\Gamma}_{\Omega}^{\mathsf{T}}$ is inserted back into $\mathbf{S}$. AP-GLS iterates these projections until it converges to a solution and is summarized in Algorithm 1.

4.5 DOA retrieval

DOAs $\theta_{k}$, $k=1,\dotsc,K$, are recovered by the Vandermonde decomposition (5) of the rank-$K$ PSD Toeplitz matrix $\mathbf{R}$. [5, 6, 8] The Vandermonde decomposition is computed efficiently via root-MUSIC [26]:

1. Perform the eigen-decomposition into signal and noise subspaces, i.e., $\mathbf{R}=\mathbf{U}_{S}\boldsymbol{\Lambda}_{S}\mathbf{U}_{S}^{\mathsf{H}}+\mathbf{U}_{N}\boldsymbol{\Lambda}_{N}\mathbf{U}_{N}^{\mathsf{H}}$.

2. Compute the root-MUSIC polynomial $\mathcal{Q}(z)=\mathbf{a}^{\mathsf{T}}(1/z)\,\mathbf{U}_{N}\mathbf{U}_{N}^{\mathsf{H}}\,\mathbf{a}(z)$, where $\mathbf{a}=[1,z,\dotsc,z^{M-1}]^{\mathsf{T}}$ and $z=e^{-j(2\pi/\lambda)d\sin\theta}$.

3. Find the roots of $\mathcal{Q}(z)$ and choose the $K$ roots inside and closest to the unit circle, i.e., $\hat{z}_{i}$, $i=1,\dotsc,K$.

4. Recover the DOA estimates, i.e., $\hat{\theta}_{i}=-\sin^{-1}\big(\frac{\lambda\angle\hat{z}_{i}}{2\pi d}\big)$, $i=1,\dotsc,K$.
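The four steps can be sketched as follows (our own implementation outline; it uses the fact that, after multiplying $\mathcal{Q}(z)$ by $z^{M-1}$, the polynomial coefficients are the diagonal sums of $\mathbf{U}_{N}\mathbf{U}_{N}^{\mathsf{H}}$):

```python
import numpy as np

def rootmusic_doa(R, K, d=0.5):
    """DOA retrieval from a Hermitian (Toeplitz) matrix R via root-MUSIC
    (steps 1-4); d is the sensor spacing in wavelengths."""
    M = R.shape[0]
    _, U = np.linalg.eigh(R)              # eigenvalues in ascending order
    Un = U[:, :M - K]                     # noise subspace (M-K smallest)
    C = Un @ Un.conj().T
    # coefficient of z^(M-1+k) in z^(M-1) Q(z) is the k-th diagonal sum of C
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]                    # inside the unit circle
    roots = roots[np.argsort(1.0 - np.abs(roots))[:K]]    # K closest to the circle
    # step 4: theta = -asin(lambda * angle(z) / (2 pi d)), lambda = 1 unit
    return np.sort(-np.rad2deg(np.arcsin(np.angle(roots) / (2 * np.pi * d))))
```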

Algorithm 1 AP-GLS
1:  Input: $\mathbf{Y}\in\mathbb{C}^{M\times L}$, $K$, $\mathbf{\Gamma}_{\Omega}$
2:  Parameters: $\epsilon_{\mathrm{min}}=10^{-3}$
3:  Initialization: $\mathbf{R}\in\mathbb{C}^{M\times M}$, $\mathbf{Z}\in\mathbb{C}^{M\times M}$ with real and imaginary parts drawn uniformly from $(0,1)$
4:  $\mathbf{R}^{\text{old}}=\mathbf{R}$, $\mathbf{Z}^{\text{old}}=\mathbf{Z}$
5:  while $\lVert\mathbf{S}-\mathbf{S}^{\text{old}}\rVert_{\mathrm{F}}>\epsilon_{\mathrm{min}}$ do
6:    $\mathbf{S}=\begin{bmatrix}\mathbf{\Gamma}_{\Omega}\mathbf{R}^{\text{old}}\mathbf{\Gamma}_{\Omega}^{\mathsf{T}}&\tilde{\mathbf{R}}_{\Omega}\\ \tilde{\mathbf{R}}_{\Omega}&\mathbf{Z}^{\text{old}}\end{bmatrix}$
7:    PSD projection: $\mathbf{S}=P_{\mathcal{P}}(\mathbf{S})$ (18)
8:    $\mathbf{R}=\mathbf{\Gamma}_{\Omega}^{\dagger}\,\mathbf{S}(1{:}M,1{:}M)\,(\mathbf{\Gamma}_{\Omega}^{\dagger})^{\mathsf{T}}$ (15)
9:    Rank-constraint projection: $\mathbf{R}=P_{\mathcal{R}}(\mathbf{R})$ (19)
10:   Toeplitz projection: $\mathbf{R}=P_{\mathcal{T}}(\mathbf{R})$ (16)
11:   $\mathbf{S}(1{:}M,1{:}M)=\mathbf{\Gamma}_{\Omega}\mathbf{R}\mathbf{\Gamma}_{\Omega}^{\mathsf{T}}$
12:   $\mathbf{R}^{\text{old}}=\mathbf{R}$, $\mathbf{Z}^{\text{old}}=\mathbf{S}(M{+}1{:}2M,M{+}1{:}2M)$
13:  end while
14:  Output: $\mathbf{R}$
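Algorithm 1 can be sketched end-to-end as follows (our own NumPy outline; the iteration cap `n_iter`, the fixed random seed, and the helper names are our additions, not part of the paper):

```python
import numpy as np

def ap_gls(Y, K, delta, M, n_iter=1000, eps=1e-3):
    """Sketch of Algorithm 1 (AP-GLS). Y: M_Omega x L sub-array data,
    K: number of sources, delta: sub-array element indices within the
    full M-element ULA. Returns the reconstructed M x M Toeplitz matrix."""
    MO, L = len(delta), Y.shape[1]
    G = np.zeros((MO, M)); G[np.arange(MO), list(delta)] = 1.0  # row-selection matrix
    Gp = G.T                                 # pseudo-inverse of a selection matrix
    Rt = Y @ Y.conj().T / L                  # sub-array SCM
    rng = np.random.default_rng(0)
    R = rng.random((M, M)) + 1j * rng.random((M, M))
    Z = rng.random((MO, MO)) + 1j * rng.random((MO, MO))
    S_old = np.zeros((2 * MO, 2 * MO), dtype=complex)
    for _ in range(n_iter):
        S = np.block([[G @ R @ G.T, Rt], [Rt.conj().T, Z]])
        # PSD projection (18)
        S = 0.5 * (S + S.conj().T)
        mu, Q = np.linalg.eigh(S)
        S = (Q * np.maximum(mu, 0.0)) @ Q.conj().T
        # recover R from its sub-matrix (15), then rank-K projection (19)
        R = Gp @ S[:MO, :MO] @ Gp.T
        U, s, Vh = np.linalg.svd(R)
        R = (U[:, :K] * s[:K]) @ Vh[:K, :]
        # Toeplitz projection (16)-(17)
        T = np.zeros((M, M), dtype=complex)
        for m in range(M):
            r = 0.5 * (np.diag(R, m).mean() + np.conj(np.diag(R, -m)).mean())
            i = np.arange(M - m)
            T[i, i + m] = r; T[i + m, i] = np.conj(r)
        R = T
        S[:MO, :MO] = G @ R @ G.T
        Z = S[MO:, MO:]
        if np.linalg.norm(S - S_old) < eps:
            break
        S_old = S.copy()
    return R
```

By construction the output is Hermitian Toeplitz (the loop ends with the Toeplitz projection); the DOAs are then read off with the root-MUSIC retrieval of Sec. 4.5.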


Fig. 1: DOA estimation from $L$ snapshots for $K=4$ sources with an $M=8$ co-prime array. CBF, SBL, and AP-GLS for (a) incoherent sources with SNR 20 dB and one snapshot, $L=1$; (b) SNR 20 dB and $L=20$; (c) SNR 0 dB and $L=20$; and (d) coherent sources with SNR 20 dB and $L=20$.


Fig. 2: Histogram of the estimated DOAs of SBL and AP-GLS for $K=10$ sources with an $M=8$ co-prime array over 100 trials ($M<K$).


Fig. 3: RMSE [$^{\circ}$] comparison versus SNR. Each RMSE is averaged over 100 trials. $K=4$ incoherent sources for (a) an $M=16$ ULA with $L=20$, (b) an $M=8$ co-prime array with $L=20$, (c) $L=1$, and (d) coherent sources for a co-prime array with $L=20$.


Fig. 4: RMSE [$^{\circ}$] comparison versus $K$. Each RMSE is averaged over 100 trials. An $M=8$ co-prime array is used with $L=20$ and SNR 20 dB.

5 Simulation results

We consider a ULA with $M=16$ elements and half-wavelength spacing, and a co-prime array with $M=8$ elements, $M_{1}=5$, and $M_{2}=2$ (elements 1, 3, 5, 6, 7, 9, 11, 16 of the ULA).

The signal-to-noise ratio (SNR) is defined as $\text{SNR}=10\log_{10}\big[\mathbb{E}\{\lVert\mathbf{A}\mathbf{s}_{l}\rVert_{2}^{2}\}/\mathbb{E}\{\lVert\mathbf{e}_{l}\rVert_{2}^{2}\}\big]$, where $\mathbf{s}_{l}\in\mathbb{C}^{K}$ and $\mathbf{e}_{l}\in\mathbb{C}^{M}$, $l=1,\dotsc,L$, are the source amplitudes and the measurement noise for the $l$th snapshot. The root mean squared error is $\text{RMSE}=\sqrt{\mathbb{E}\big[\frac{1}{K}\sum_{k=1}^{K}(\hat{\theta}_{k}-\theta_{k})^{2}\big]}$, where $\hat{\theta}_{k}$ and $\theta_{k}$ are the estimated and true DOAs of the $k$th source.

We consider an $M=8$ co-prime array and $K=4$ stationary sources at DOAs $[-60,-3,3,50]^{\circ}$ with snapshot-varying magnitudes in $[12,20]$ dB under four scenarios, see Fig. 1. Conventional beamforming (CBF), SBL [28, 29], and AP-GLS are compared. CBF cannot distinguish the two close DOAs $[-3,3]^{\circ}$. AP-GLS handles the single-snapshot case and resolves the close arrivals. We also consider the performance with coherent sources due to multipath arrivals; AP-GLS still yields accurate DOAs.

Co-prime arrays can estimate more sources than the number of sensors, see Fig. 2. We consider the same co-prime array and $K=10$ stationary sources uniformly distributed in $[-60,60]^{\circ}$, with $L=20$ and SNR 20 dB. The histogram shows the distribution of the DOA estimates of AP-GLS.

The DOA performance is evaluated with the RMSE versus SNR, see Fig. 3. RMSEs larger than 10 times the median are treated as outliers and eliminated. We consider $K=4$ sources, as in Fig. 1 but with equal strengths. The Cramér-Rao bound (CRB) [32], MUSIC, and MUSIC with co-array interpolation (MUSIC-I) [33] are also compared. Compared to the full-ULA case, AP-GLS has a bounded error even at high SNR, which stems from the fact that $\mathbf{R}$ is recovered from its submatrix $\mathbf{\Gamma}_{\Omega}\mathbf{R}\mathbf{\Gamma}_{\Omega}^{\mathsf{T}}$.

The DOA performance is evaluated with the RMSE versus the number of sources $K$, see Fig. 4. We consider the same co-prime array; for each case, $K$ equal-strength sources are generated randomly in $[-65,65]^{\circ}$, with $L=20$ and SNR 20 dB. AP-GLS achieves higher estimation accuracy than CBF and estimates more sources than the number of sensors.

6 Conclusion

We introduced an alternating-projections-based gridless sparse iterative covariance-based estimation method for direction-of-arrival estimation, which promotes sparse solutions without an angular grid. Numerical evaluations indicated that the proposed method performs favorably even with single-snapshot data and coherent arrivals. For co-prime array data, the proposed algorithm resolved more sources than the number of sensors.

References

  • [1] D. Malioutov, M. Cetin, and A. S. Willsky, “A sparse signal reconstruction perspective for source localization with sensor arrays,” IEEE Trans. Signal Process., vol. 53, no. 8, pp. 3010–3022, Aug 2005.
  • [2] P. Gerstoft, A. Xenaki, and C.F. Mecklenbräuker, “Multiple and single snapshot compressive beamforming,” J. Acoust. Soc. Am., vol. 138, no. 4, pp. 2003–2014, 2015.
  • [3] A. Xenaki and P. Gerstoft, “Grid-free compressive beamforming,” J. Acoust. Soc. Am., vol. 137, no. 4, pp. 1923–1935, 2015.
  • [4] Y. Park, Y. Choo, and W. Seong, “Multiple snapshot grid free compressive beamforming,” J. Acoust. Soc. Am., vol. 143, no. 6, pp. 3849–3859, 2018.
  • [5] Z. Yang, J. Li, P. Stoica, and L. Xie, “Sparse methods for direction-of-arrival estimation,” in Academic Press Library in Signal Processing: Array, Radar and Communications Engineering, vol. 7, chapter 11, pp. 509 – 581. Academic Press, 2018.
  • [6] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, “Compressed sensing off the grid,” IEEE Trans. Inf. Theory, vol. 59, no. 11, pp. 7465–7490, 2013.
  • [7] Y. Chi, L. L. Scharf, A. Pezeshki, and A. R. Calderbank, “Sensitivity to basis mismatch in compressed sensing,” IEEE Trans. Signal Process., vol. 59, no. 5, pp. 2182–2195, 2011.
  • [8] Y. Chi and M. Ferreira Da Costa, “Harnessing sparsity over the continuum: Atomic norm minimization for superresolution,” IEEE Signal Process. Mag., vol. 37, no. 2, pp. 39–57, 2020.
  • [9] Y. Wang, Y. Zhang, Z. Tian, G. Leus, and G. Zhang, “Super-resolution channel estimation for arbitrary arrays in hybrid millimeter-wave massive MIMO systems,” IEEE J. Sel. Topics Signal Process., vol. 13, no. 5, pp. 947–960, Sep. 2019.
  • [10] A. G. Raj and J. H. McClellan, “Super-resolution DOA estimation for arbitrary array geometries using a single noisy snapshot,” in Proc. IEEE ICASSP, 2019, pp. 4145–4149.
  • [11] S. Semper, F. Roemer, T. Hotz, and G. Del Galdo, “Grid-free direction-of-arrival estimation with compressed sensing and arbitrary antenna arrays,” in Proc. IEEE ICASSP, 2018, pp. 3251–3255.
  • [12] Z. Yang, L. Xie, and C. Zhang, “A discretization-free sparse and parametric approach for linear array signal processing,” IEEE Trans. Signal Process., vol. 62, no. 19, pp. 4959–4973, 2014.
  • [13] P. Stoica, P. Babu, and J. Li, “SPICE: A sparse covariance-based estimation method for array processing,” IEEE Trans. Signal Process., vol. 59, no. 2, pp. 629–638, 2011.
  • [14] Z. Zhu and W. B. Wakin, “On the asymptotic equivalence of circulant and Toeplitz matrices,” IEEE Trans. Inf. Theory, vol. 63, no. 5, pp. 2975–2992, 2017.
  • [15] D. Romero and G. Leus, “Wideband spectrum sensing from compressed measurements using spectral prior information,” IEEE Trans. Signal Process., vol. 61, no. 24, pp. 6232–6246, 2013.
  • [16] S. Boyd and L. Vandenberghe, Convex optimization, Cambridge university press, Cambridge, U.K., 2004.
  • [17] J. Dattorro, Convex optimization & Euclidean distance geometry, Meboo Publishing USA, Palo Alto, CA, USA, 2010.
  • [18] H. Q. Cai, J.-F. Cai, and K. Wei, “Accelerated alternating projections for robust principal component analysis,” J. Mach. Learn. Res., vol. 20, no. 1, pp. 685–717, 2019.
  • [19] X. Jiang, Z. Zhong, X. Liu, and H. C. So, “Robust matrix completion via alternating projection,” IEEE Signal Process. Lett., vol. 24, no. 5, pp. 579–583, 2017.
  • [20] A. S. Lewis and J. Malick, “Alternating projections on manifolds,” Math. Oper. Res., vol. 33, no. 1, pp. 216–234, 2008.
  • [21] P. Netrapalli, U. N. Niranjan, S. Sanghavi, A. Anandkumar, and P. Jain, “Non-convex robust PCA,” in Proc. NIPS, 2014, pp. 1107–1115.
  • [22] M. Cho, J. Cai, S. Liu, Y. C. Eldar, and W. Xu, “Fast alternating projected gradient descent algorithms for recovering spectrally sparse signals,” in Proc. IEEE ICASSP, 2016, pp. 4638–4642.
  • [23] L. Condat and A. Hirabayashi, “Cadzow denoising upgraded: A new projection method for the recovery of Dirac pulses from noisy linear measurements,” Sampl. Theory Signal Image Process., vol. 14, no. 1, pp. 17–47, 2015.
  • [24] X. Wu, W. Zhu, and J. Yan, “A Toeplitz covariance matrix reconstruction approach for direction-of-arrival estimation,” IEEE Trans. Veh. Technol., vol. 66, no. 9, pp. 8223–8237, Sep. 2017.
  • [25] M. Wagner, P. Gerstoft, and Y. Park, “Gridless DOA estimation via alternating projections,” in Proc. IEEE ICASSP, 2019, pp. 4215–4219.
  • [26] M. Wagner, Y. Park, and P. Gerstoft, “Gridless DOA estimation and root-MUSIC for non-uniform arrays,” arXiv preprint arXiv:2003.04457, 2020.
  • [27] P. P. Vaidyanathan and P. Pal, “Sparse sensing with co-prime samplers and arrays,” IEEE Trans. Signal Process., vol. 59, no. 2, pp. 573–586, 2010.
  • [28] S. Nannuru, A. Koochakzadeh, K. L. Gemba, P. Pal, and P. Gerstoft, “Sparse Bayesian learning for beamforming using sparse linear arrays,” J. Acoust. Soc. Am., vol. 144, no. 5, pp. 2719–2729, 2018.
  • [29] S. Nannuru, P. Gerstoft, A. Koochakzadeh, and P. Pal, “Sparse Bayesian learning for DOA estimation using co-prime and nested arrays,” in Proc. 10th IEEE SAM, 2018, pp. 519–523.
  • [30] C. Steffens, M. Pesavento, and M. E. Pfetsch, “A compact formulation for the 2,1\ell_{2,1} mixed-norm minimization problem,” IEEE Trans. Signal Process., vol. 66, no. 6, pp. 1483–1497, 2018.
  • [31] M. G. Eberle and M. C. Maciel, “Finding the closest Toeplitz matrix,” Comput. Appl. Math., vol. 22, no. 1, pp. 1–18, 2003.
  • [32] P. Stoica and A. Nehorai, “Music, maximum likelihood, and Cramér-Rao bound,” IEEE Trans. Acoust., Speech, Signal Process., vol. 37, no. 5, pp. 720–741, 1989.
  • [33] C. Liu and P. P. Vaidyanathan, “Remarks on the spatial smoothing step in coarray MUSIC,” IEEE Signal Process. Lett., vol. 22, no. 9, pp. 1438–1442, 2015.