EM and SAGE algorithms for DOA estimation in the presence of unknown uniform noise
Abstract
The expectation-maximization (EM) and space-alternating generalized EM (SAGE) algorithms have been applied to direction of arrival (DOA) estimation in known noise. In this work, the two algorithms are proposed for DOA estimation in unknown uniform noise, and both the deterministic and stochastic signal models are considered. Moreover, a modified EM (MEM) algorithm applicable to this noise assumption is also proposed. The proposed algorithms are further improved to ensure stability when the powers of the sources are unequal. With these improvements, numerical results illustrate that the EM algorithm exhibits convergence behavior similar to that of the MEM algorithm and that the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model. Furthermore, numerical results show that, when processing the same samples from the stochastic signal model, the SAGE algorithm for the deterministic signal model requires the fewest iterations.
Index Terms:
Array signal processing, DOA estimation, EM algorithm, Maximum likelihood, Statistical signal processing.

I Introduction
Direction of arrival (DOA) estimation is an important part of array signal processing, and several high-resolution estimation techniques have been proposed. In particular, the maximum-likelihood (ML) technique plays a critical role due to its superior performance. However, ML direction finding problems are non-convex and their closed-form solutions are difficult to obtain, which has motivated various iterative solution methods [1], [2].
One computationally efficient method for computing ML estimators is the expectation-maximization (EM) algorithm [3], which [4] and [5] have employed to solve ML direction finding problems. The EM algorithm consists of two sequential steps at every iteration [4], [5]: an expectation step (E-step), which estimates the complete-data sufficient statistics by finding their conditional expectations, and a maximization step (M-step), which estimates the signal parameters by parallel maximizations. However, the EM algorithm updates all of the parameter estimates simultaneously, which results in slow convergence. To speed up the convergence of the EM algorithm, [6] proposes the space-alternating generalized EM (SAGE) algorithm, and [7] and [8] show that the SAGE algorithm does yield faster convergence in DOA estimation.
The EM and SAGE algorithms in ML direction finding are usually derived under known noise [4], [5], [7], [8]. Known noise follows a known statistical model with no unknown parameters, which may be unrealistic in certain applications. In fact, many seminal works in ML direction finding consider the so-called unknown uniform noise model [1], [2], [9]–[11], whose covariance matrix can be expressed as $\sigma^2\mathbf{I}$, where $\sigma^2$ is an unknown common variance and $\mathbf{I}$ is the identity matrix. Under this noise assumption, [9] presents a computationally attractive alternating projection algorithm for computing the deterministic ML estimator, [10] investigates the statistical performance of this ML estimator and derives the Cramer-Rao lower bound, and [11] compares statistical properties of the deterministic and stochastic ML estimators. In addition to uniform noise, nonuniform noise has also attracted increasing attention. Nonuniform noise has an arbitrary diagonal covariance matrix, which makes the associated ML direction finding problems more complex. For efficiently computing the deterministic and stochastic ML estimators in unknown nonuniform noise, [12] and [13] have presented two alternating optimization algorithms, respectively.
In this work, we develop the EM and SAGE algorithms in ML direction finding for unknown uniform noise. Theoretical analyses indicate that the unknown noise variance $\sigma^2$ has little effect on the two algorithms for the deterministic signal model. However, the M-step in the EM algorithm for the stochastic signal model can no longer be simplified into parallel subproblems easily when $\sigma^2$ is unknown. Hence, we divide the M-step into two conditional maximization steps (CM-steps) based on the expectation-CM (ECM) algorithm [14]. Besides, we propose a modified EM (MEM) algorithm applicable to unknown uniform noise. Note that although the EM algorithm in [15] is similar to the MEM algorithm, it is incorrectly derived.
Existing simulations of the EM and SAGE algorithms almost always adopt sources of equal power [4], [5], [7], [8], [15]. However, we find that when the powers of the sources are unequal, the multiple DOA estimates obtained by the EM, MEM, and SAGE algorithms tend to collapse onto the true DOA of the source with the largest power. Hence, we improve these algorithms. With these improvements, numerical results illustrate that 1) the EM algorithm has convergence behavior similar to that of the MEM algorithm, 2) the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model, i.e., the SAGE algorithm converges faster and is more effective at avoiding convergence to an unwanted stationary point of the log-likelihood function, and 3) the SAGE algorithm does not always outperform the EM and MEM algorithms for the stochastic signal model.
The EM, MEM, and SAGE algorithms for the deterministic signal model can process samples from the stochastic signal model. Hence, we compare via simulation the convergence of the EM and SAGE algorithms for both models. Numerical results illustrate that, under the same samples, initial DOA estimates, and stopping criterion, the SAGE algorithm for the deterministic signal model requires the fewest iterations.
The contributions of this work are summarized as follows:
- We develop the EM and SAGE algorithms for DOA estimation in unknown uniform noise, for both the deterministic and stochastic signal models.
- We propose an MEM algorithm applicable to the unknown uniform noise assumption.
- We improve the EM, MEM, and SAGE algorithms to ensure stability when the powers of the sources are unequal.
- We show via simulation that the EM algorithm has convergence behavior similar to that of the MEM algorithm and that the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model. However, the SAGE algorithm does not always outperform the EM and MEM algorithms for the stochastic signal model.
- We show via simulation that, when processing the same samples from the stochastic signal model, the SAGE algorithm for the deterministic signal model requires the fewest iterations.
Notations: $\mathbf{a}^{T}$, $\mathbf{a}^{H}$, and $\|\mathbf{a}\|$ denote the transposition, conjugate transposition, and Euclidean norm of a vector $\mathbf{a}$, respectively. $\mathbf{A}^{-1}$, $|\mathbf{A}|$, and $\operatorname{tr}(\mathbf{A})$ denote the inversion, determinant, and trace of a square matrix $\mathbf{A}$, respectively. $\mathbf{0}$ is the zero matrix. $\mathbf{A}\succeq\mathbf{0}$ and $\mathbf{A}\succ\mathbf{0}$ denote that the square matrix $\mathbf{A}$ is positive semi-definite and positive definite, respectively. $E\{\cdot\}$ and $\operatorname{var}\{\cdot\}$ denote expectation and variance, respectively. $j$ is the imaginary unit.
II Data Model and Problem Formulation
We consider an array composed of isotropic sensors receiving the signals emitted by narrowband far-field sources with the same known center wavelength. The array geometry is arbitrary and known in a Cartesian coordinate system, and the th sensor is at a known position close to the origin. Spherical coordinates are used to represent the DOAs of the sources: the elevation and azimuth angles of the th source are denoted by and , respectively, so its unit directional vector can be expressed accordingly in the Cartesian coordinate system.

We use the origin as the reference point of the array. Then, the phase difference, with respect to the th source, between the two signals received at the origin and at the th sensor can be approximated from the sensor position and the unit directional vector. After down-conversion to baseband, the received signal vector of the array can be characterized by [1], [2]
(1)
where is the th source baseband signal received at the origin with power , and represents a white Gaussian noise vector with a common noise variance.
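As a concrete illustration of the data model (1), the following sketch simulates baseband snapshots for a half-wavelength linear array under the uniform noise assumption. All names, sizes, positions, DOAs, and variances here are illustrative choices, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelength = 1.0            # assumed center wavelength
M, K, N = 8, 2, 100         # sensors, sources, snapshots (illustrative)

# Known sensor positions close to the origin (half-wavelength spacing on x-axis).
positions = np.stack([0.5 * wavelength * np.arange(M),
                      np.zeros(M), np.zeros(M)], axis=1)

def steering_vector(theta, phi):
    """Array response for elevation theta and azimuth phi: the phases are the
    approximated phase differences between each sensor and the origin."""
    u = np.array([np.sin(theta) * np.cos(phi),     # unit directional vector
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return np.exp(1j * 2.0 * np.pi / wavelength * positions @ u)

# Source signals (drawn here as circular complex Gaussians) plus uniform noise.
doas = [(np.deg2rad(80.0), 0.0), (np.deg2rad(100.0), 0.0)]
A = np.stack([steering_vector(th, ph) for th, ph in doas], axis=1)   # M x K
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
sigma2 = 0.1  # common noise variance, unknown to the estimator in practice
noise = np.sqrt(sigma2 / 2.0) * (rng.standard_normal((M, N))
                                 + 1j * rng.standard_normal((M, N)))
Y = A @ S + noise             # incomplete data: N snapshots of the array output
```

Each column of `Y` is one snapshot of the array output; the two signal-model variants below differ only in how the columns of `S` are modeled.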
EM-type algorithms require the definition of the underlying complete data and their associated log-likelihood functions. According to the EM paradigm in [4] and [5], after sampling we design the samples (“snapshots”) or incomplete data as
(2)
where the ’s are mutually uncorrelated, the ’s are the complete data, and is the number of samples.
Since the joint probability density functions (PDFs) of the incomplete and complete data depend also on the statistical model of the ’s, we consider the deterministic and stochastic signal models separately.
II-A Deterministic Signal Model
The deterministic signal model treats the ’s as deterministic, arbitrary, and unknown [4], [5], [9]–[11]. Then, the incomplete- and complete-data log-likelihood functions are, respectively, expressed as
(3a)
(3b)
where denotes the signal parameters and is the only noise parameter. Thus, the ML estimation problem can be simplified to
(4)
II-B Stochastic Signal Model
In the stochastic signal model, the ’s are modeled as zero-mean circular complex Gaussian random vectors. For simplicity, all of the ’s and ’s are assumed to be mutually uncorrelated [4], [5]. Then, the incomplete- and complete-data log-likelihood functions are, respectively, expressed as
(5a)
(5b)
where and . Finally, the ML estimation problem can be simplified to
(6)
where is the sample covariance matrix.
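The stochastic criterion (6) is expressed through the sample covariance matrix. A minimal sketch of its computation, with hypothetical dimensions, is:

```python
import numpy as np

def sample_covariance(Y):
    """R_hat = (1/N) * sum_n y(t_n) y(t_n)^H for an M x N snapshot matrix Y."""
    return (Y @ Y.conj().T) / Y.shape[1]

rng = np.random.default_rng(1)
Y = rng.standard_normal((4, 500)) + 1j * rng.standard_normal((4, 500))
R = sample_covariance(Y)
# The sample covariance is Hermitian and positive semi-definite by construction.
assert np.allclose(R, R.conj().T)
assert np.min(np.linalg.eigvalsh(R)) >= -1e-12
```

With more snapshots than sensors, as here, the sample covariance is almost surely positive definite, which matters for the determinant terms in the stochastic likelihood.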
III EM Algorithm
In this section, we derive the EM algorithm for solving problems (4) and (6). For convenience, we define the following notations:
1) denotes an iterative value obtained during the th iteration and denotes an initial value.
2) .
3) .
4) .
5) .
III-A Deterministic Signal Model
At the th E-step, the conditional expectation of (3b) is computed by
(7)
where the conditional PDF of can be derived from [17] and
(8a)
(8b)
At the th M-step, based on (7) the EM algorithm updates the estimates of and by
(9)
which can be solved easily in a separable fashion. Then, we have the parameter estimates:
(10a)
(10b)
(10c)
where and if .
Remark 1.
Eliminating , problem (4) can be simplified to
(11)
which indicates that, for the deterministic signal model, the ML estimator of the DOAs is unrelated to $\sigma^2$. Additionally, note that the ’s in (8a), the ’s in (10a), and the ’s in (10b) are unrelated to $\sigma^2$ and are identical to those in the EM algorithm under known noise [5], [8]. Then, we can conclude that the EM algorithm under unknown $\sigma^2$ is equivalent to that under known $\sigma^2$ when $\sigma^2$ is not estimated. Accordingly, (10c) can be omitted when the algorithm does not consider the nuisance parameter $\sigma^2$.
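A hedged sketch of the EM paradigm of [4], [5] for the deterministic model, specialized to a half-wavelength uniform linear array, is given below. The even noise split across the complete-data channels, the 1-D grid search, and all names and sizes are illustrative choices standing in for the updates (8)–(10), not the paper's exact expressions.

```python
import numpy as np

def em_deterministic(Y, K, thetas0, n_iter=20, d=0.5):
    """EM sketch for deterministic signals on a half-wavelength ULA.

    E-step: x_k = a(theta_k) s_k + (1/K) * (y - A s)   (noise split evenly)
    M-step: theta_k via a 1-D grid search, s_k by a least-squares fit,
            with all K sources updated in parallel from the same residual.
    """
    M = Y.shape[0]
    a = lambda th: np.exp(1j * 2 * np.pi * d * np.arange(M) * np.cos(th))
    grid = np.linspace(0.0, np.pi, 721)
    A_grid = np.exp(1j * 2 * np.pi * d * np.outer(np.arange(M), np.cos(grid)))
    thetas = np.array(thetas0, dtype=float)
    A = np.stack([a(t) for t in thetas], axis=1)
    S = np.linalg.lstsq(A, Y, rcond=None)[0]            # initial signal estimates
    for _ in range(n_iter):
        A = np.stack([a(t) for t in thetas], axis=1)
        resid = Y - A @ S                               # shared residual
        for k in range(K):
            Xk = np.outer(A[:, k], S[k]) + resid / K    # E-step: hidden data
            scores = (np.abs(A_grid.conj().T @ Xk) ** 2).sum(axis=1)
            thetas[k] = grid[np.argmax(scores)]         # M-step: DOA update
            S[k] = a(thetas[k]).conj() @ Xk / M         # M-step: signal update
    return thetas
```

Because `resid` is computed once per iteration from the previous estimates, all sources are updated simultaneously, which is the property that makes plain EM converge slowly.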
III-B Stochastic Signal Model
Note that is a complete-data sufficient statistic for [5], [16], which contains , , and . At the th E-step, the EM algorithm estimates the ’s by finding their conditional expectations [3]:
(12)
where
At the th M-step, based on (5b) and (12) the EM algorithm updates the estimates of and by
(13)
where with and with being the signal-to-noise ratio of the th source.
Due to the coupling introduced by the unknown noise variance, problem (13) cannot easily be simplified into parallel subproblems. To perform the M-step simply, we divide it into two CM-steps based on the ECM algorithm [14].
• First CM-step: estimate but hold fixed. Then, (13) can be simplified to the parallel subproblems
(14)
which can be solved using the method in [5]. Accordingly, the estimate of is updated by
(15a)
(15b)
where is indeterminate if .
• Second CM-step: estimate but hold fixed. Then, (13) can be simplified to
(16)
Thus, the estimate of is updated by
(17)
where if .
IV MEM Algorithm
In the previous section, the split of the noise among the complete-data components is fixed and known. In this section, we regard it as a quantity to be estimated and thus propose an MEM algorithm applicable to the unknown uniform noise assumption.
In order to estimate and in the MEM algorithm easily, we introduce as the common noise variance of the th source, i.e.,
(18)
Then, and can be estimated by and after estimating at the M-step of the MEM algorithm. In addition, we first assume for .
IV-A Deterministic Signal Model
According to (18), (3b) is rewritten as
(19)
With the help of (19), at the th E-step the MEM algorithm computes the conditional expectation:
(20)
where
(21a)
(21b)
At the th M-step, based on (20) the MEM algorithm updates the estimates of and by the following parallel subproblems:
(22)
where . Then, we can obtain the parameter estimates:
(23a)
(23b)
(23c)
where . Note that when , based on (23c) we have , and , i.e., .
Remark 3.
The ’s in (21a) are related to , so the ’s and ’s in (23) are related to , i.e., iterative knowledge associated with is utilized to estimate in the MEM algorithm for the deterministic signal model.
IV-B Stochastic Signal Model
According to (18), with and (5b) is rewritten as
(24)
With the help of (24), at the th E-step the MEM algorithm estimates the ’s by finding their conditional expectations:
(25)
where
At the th M-step, based on (24) and (25) the MEM algorithm estimates and by the following parallel subproblems (note that, unlike the M-step (13) in the EM algorithm for the stochastic signal model, the M-step in the MEM algorithm for the stochastic signal model can easily be simplified to parallel subproblems):
(26)
where with and . Since and , subproblems (26) can be rewritten as
(27)
We first simplify subproblems (27) by eliminating [18], [19]. Hence, after estimating and , and are estimated by
which implies that if , will be indeterminate and by (27). However, to estimate and , we must assume . After eliminating , subproblems (27) are simplified to
(28)
Next, we simplify subproblems (28) by eliminating . Thus, after estimating , is estimated by
After eliminating , subproblems (28) are simplified to
(29)
where due to the fact that when , and
Since is a monotonically decreasing function of for , subproblems (29) are equivalent to
(30)
Based on the above analysis, the estimates of and are updated by
(31a)
(31d)
(31e)
Finally, we give the following remark.
Remark 4.
In the MEM algorithm for the stochastic signal model, for if .
Proof.
We utilize a proof by contradiction. Without loss of generality, assume for and . Obviously, we have
(32)
Based on (31b) and , we first consider . Then, as , so that and, by (25), we have
which indicate and . Thus, , which contradicts (32).
Next, we consider , i.e., as . Then, by (25) we have
where and , resulting in
and
which contradicts (32). The proof is completed. ∎
V SAGE Algorithm
In this section, the SAGE algorithm is proposed. At each iteration, the SAGE algorithm updates the DOA estimates in ascending order of the source index, and one iteration finishes when all the parameter estimates have been updated. Besides, let denote an iterative value obtained while updating the estimate of at the th iteration.
The SAGE algorithm updates the parameter estimates by the E- and M-steps of the EM or MEM algorithm. When updating the estimate of at the th iteration, the SAGE algorithm first associates all of the noise with the th source signal component [6], [7]:
(33)
V-A Deterministic Signal Model
According to (33), , for , and (3b) is rewritten as
(34)
At the E-step, the SAGE algorithm computes the conditional expectation of by
(35)
where and
(36a)
(36b)
At the M-step, based on (35) the SAGE algorithm estimates , , and by
(37)
Thus, the estimates of , , and are updated by
(38a)
(38b)
(38c)
where .
Finally, the other parameter estimates are not updated and their iterative values are
(39a)
(39b)
At each iteration of the SAGE algorithm, the E- and M-steps are repeated times, so that and all elements in are estimated once while is estimated times.
Remark 5.
Notice that the ’s in (36a), in (38a), and the ’s in (38b) are unrelated to $\sigma^2$ and are identical to those in the SAGE algorithm under known noise [8], i.e., the SAGE algorithm under unknown $\sigma^2$ is equivalent to that under known $\sigma^2$ when $\sigma^2$ is not estimated. Accordingly, (38c) can be omitted when the algorithm does not consider the nuisance parameter $\sigma^2$.
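A hedged SAGE-style sketch for the deterministic model on a half-wavelength uniform linear array follows: sources are updated sequentially, each update takes effect immediately, and all of the noise is associated with the source being updated, as in (33). The noise association, the grid search, and all names and sizes are illustrative stand-ins for the updates (36)–(38), not the paper's exact expressions.

```python
import numpy as np

def sage_deterministic(Y, K, thetas0, n_iter=10, d=0.5):
    """SAGE sketch: sources updated one at a time in ascending order, with
    each update taking effect immediately. For source k the hidden data
    associate all of the noise with the k-th signal component."""
    M = Y.shape[0]
    a = lambda th: np.exp(1j * 2 * np.pi * d * np.arange(M) * np.cos(th))
    grid = np.linspace(0.0, np.pi, 721)
    A_grid = np.exp(1j * 2 * np.pi * d * np.outer(np.arange(M), np.cos(grid)))
    thetas = np.array(thetas0, dtype=float)
    A = np.stack([a(t) for t in thetas], axis=1)
    S = np.linalg.lstsq(A, Y, rcond=None)[0]            # initial signal estimates
    for _ in range(n_iter):
        for k in range(K):
            A = np.stack([a(t) for t in thetas], axis=1)
            Xk = Y - A @ S + np.outer(A[:, k], S[k])    # hidden data for source k
            scores = (np.abs(A_grid.conj().T @ Xk) ** 2).sum(axis=1)
            thetas[k] = grid[np.argmax(scores)]         # immediate DOA update
            S[k] = a(thetas[k]).conj() @ Xk / M         # immediate signal update
    return thetas
```

The residual is recomputed before every per-source update, so each source already benefits from the updates of the sources processed earlier in the same iteration; this sequential structure is what typically gives SAGE its faster convergence.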
V-B Stochastic Signal Model
According to (33), with and for , the statistical model of depends only on due to . Thus, (5b) is rewritten as
(40)
where denotes the modulus of a complex number , is a sufficient statistic for [16],
At the E-step, the SAGE algorithm computes the conditional expectation of by
(41)
where ,
(42)
with , and
(43)
At the M-step, based on (41) the SAGE algorithm updates the estimates of , , and by
(44)
resulting in
(45)
while the estimates of , , and are updated by
(46)
where with and the solution can be obtained from the solutions of subproblems (26). Following (31), , , and can be estimated by
(47a)
(47d)
(47e)
where is possible although its probability is very low. For example, if , (a single snapshot), and after updating the estimate of , we will have
and by (43) when updating the estimate of . Furthermore, if , we will obtain by (47a) and by (47b).
To avoid , we use two CM-steps based on the ECM algorithm to reestimate , , and by problem (46) if in (47b).
• First CM-step: estimate and but hold fixed. Then, problem (46) is simplified to
(48)
which can be solved by referring to (15). Thus, the estimates of and are updated by
(49a)
(49b)
where is indeterminate if .
• Second CM-step: estimate but hold and fixed. Then, problem (46) is simplified to
(50)
where . Thus, the estimate of is updated by
(51)
which implies if .
Finally, the remaining parameter estimates are not updated and their iterative values are
(52)
The E- and M-steps are repeated times at each iteration of the SAGE algorithm, so and all elements in are estimated once, all elements in are estimated times, and each element of is estimated times or once ().
VI Properties of the Proposed EM, MEM, and SAGE Algorithms
VI-A Convergence Point
It is well known that under known $\sigma^2$, the EM and SAGE algorithms satisfy standard regularity conditions [4], [6], [20] and converge to stationary points, possibly different ones, of the log-likelihood function, which is a well-behaved objective function. Thus, the proposed algorithms also satisfy the regularity conditions and converge to stationary points of the log-likelihood function.
Of course, the convergence points of the proposed algorithms depend on their initial points. Given a poor initial point, the proposed algorithms may never converge to the maximum point of the log-likelihood function. To generate an appropriate initial point, the effective initialization procedure in [9] can be adopted using the deterministic signal model.
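The initialization procedure of [9] brings the sources in one at a time, each via a 1-D search with the earlier DOAs held fixed. A hedged elevation-only sketch for a half-wavelength uniform linear array, using the deterministic concentrated criterion (the trace of the sample covariance projected onto the current signal subspace), is shown below; the grid density and array parameters are illustrative assumptions.

```python
import numpy as np

def init_doas(Y, K, d=0.5):
    """Successive 1-D grid searches in the spirit of [9]: each new DOA
    maximizes tr(P_A * R_hat) with the previously found DOAs held fixed."""
    M = Y.shape[0]
    a = lambda th: np.exp(1j * 2 * np.pi * d * np.arange(M) * np.cos(th))
    R = (Y @ Y.conj().T) / Y.shape[1]          # sample covariance matrix
    grid = np.linspace(0.0, np.pi, 361)
    thetas = []
    for _ in range(K):
        best_th, best_val = grid[0], -np.inf
        for th in grid:
            A = np.stack([a(t) for t in thetas + [th]], axis=1)
            P = A @ np.linalg.pinv(A)          # projector onto span of A
            val = np.real(np.trace(P @ R))
            if val > best_val:
                best_th, best_val = th, val
        thetas.append(best_th)
    return np.array(thetas)
```

Each pass adds the DOA that most increases the projected energy, so the cost grows only linearly with the number of sources instead of requiring a joint multi-dimensional search.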
VI-B Complexity and Stability
Note that the computational complexities of the EM, MEM, and SAGE algorithms are dominated by searching the ’s:
(53)
Hence, if we adopt brute force to search, i.e., evaluating the objective function on a coarse grid to locate a grid point close to its maximum point, using that grid point as the initial point of a gradient algorithm, and then applying the gradient algorithm to search for the maximum point as in [4], these algorithms will have almost the same computational complexity at each iteration [7].
However, when the powers of the sources are unequal, we have found via simulation that the DOA estimates of multiple sources, updated by (53), tend to collapse onto the true DOA of the source with the largest power, and the algorithms may be unstable.
To address this issue, we can instead choose the current iterate as the initial point of a gradient algorithm and then apply this gradient algorithm to search for a local maximum point; e.g., Algorithm 2 in the next section gives excellent numerical results. Under this choice, we still have
(54)
which still guarantees the monotonicity of the algorithms but only meets the requirement of GEM algorithms [3]. For convenience, we do not change the names of the algorithms under this choice.
VII Numerical Results
In this section, numerical results are provided to illustrate the convergence performance of the proposed algorithms. To ensure that a two-dimensional scatter plot can reflect the DOAs of the sources, the array is assumed to be a uniform linear array, so that and are known while and are to be estimated. SAGE (in this section, the EM, MEM, and SAGE algorithms are written simply as EM, MEM, and SAGE, respectively) for the stochastic signal model is given in Algorithm 1, and the other algorithms in this section can be obtained by referring to Algorithm 1. For comparison, let all tolerances be identical, and in the deterministic signal model is also generated by independent random numbers.
Furthermore, in the EM algorithm, the gradient ascent method with backtracking line search [21], given in Algorithm 2, is adopted to search for the ’s in (54). Several simulation parameters in Algorithm 2 are , , and .
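Algorithm 2 is not reproduced in the surviving text, so the following is a generic sketch of gradient ascent with backtracking line search in the sense of [21]. The 1-D objective, its gradient, and the parameters `alpha` and `beta` are illustrative stand-ins, not the paper's settings.

```python
def gradient_ascent_backtracking(f, grad, x0, alpha=0.3, beta=0.7,
                                 tol=1e-8, max_iter=200):
    """Gradient ascent with backtracking line search (cf. [21]).

    Backtracks until the sufficient-increase (Armijo) condition
    f(x + t*g) >= f(x) + alpha * t * g^2 holds, then takes the step."""
    x = float(x0)
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:          # stationary point reached
            break
        t = 1.0
        while t > 1e-12 and f(x + t * g) < f(x) + alpha * t * g * g:
            t *= beta             # shrink the step until sufficient increase
        x += t * g
    return x

# Maximize a smooth 1-D surrogate; the maximizer of f(x) = -(x - 2)^2 is x = 2.
f = lambda x: -(x - 2.0) ** 2
grad = lambda x: -2.0 * (x - 2.0)
x_star = gradient_ascent_backtracking(f, grad, x0=0.0)
```

Warm-starting such a search from the current DOA iterate, rather than from a fresh coarse-grid point, is exactly the stabilizing choice described above for (54).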
VII-A Deterministic Signal Model
To compare the convergence of EM, MEM, and SAGE, Fig. 1 plots their ’s, ’s, and ’s as functions of the iteration number under one realization. We observe that, given a good initial point, the algorithms converge to consistent DOA estimates; EM has convergence behavior similar to that of MEM, while SAGE converges faster than both.
Figs. 2 and 3 show two scatter plots of the DOA estimates obtained by the algorithms under 200 independent realizations, where the same samples of each realization are processed by all algorithms. In Fig. 2, the total numbers of wanted points from EM, MEM, and SAGE are 68, 72, and 179, respectively; in Fig. 3, they are 159, 157, and 190, respectively. Figs. 2 and 3 imply that, given a poor initial point, SAGE is more effective than EM and MEM at avoiding convergence to an unwanted stationary point.
Note that the two sources in Fig. 2 are not closely spaced, so it is very difficult to mix them up, and the wanted points in Fig. 2 are centered around the true position. However, the two sources in Fig. 3 are closely spaced, and the wanted points are centered around the true position or its permutation, i.e., the algorithms are likely to mix up closely spaced sources. Fortunately, we only focus on the DOAs of the sources in DOA estimation, so mixing up sources has no effect on the result.
According to these simulations, we can conclude that for the deterministic signal model, 1) EM has convergence behavior similar to that of MEM, and 2) SAGE outperforms EM and MEM.
VII-B Stochastic Signal Model
To compare the convergence of EM, MEM, and SAGE, Fig. 4 plots their ’s, ’s, and ’s as functions of the iteration number. We observe that EM has convergence behavior similar to that of MEM, while SAGE converges faster than both.
Figs. 5 and 6 show two scatter plots of the DOA estimates obtained by the algorithms under 200 independent realizations, where the same samples of each realization are processed by all algorithms. In Fig. 5, the total numbers of wanted points from EM, MEM, and SAGE are 185, 186, and 175, respectively; in Fig. 6, they are 161, 161, and 172, respectively. Figs. 5 and 6 indicate that EM has convergence behavior similar to that of MEM, while SAGE is less effective than EM and MEM at avoiding convergence to an unwanted stationary point in Fig. 5 but more effective in Fig. 6. The algorithms are likely to mix up closely spaced sources, so the wanted points in Fig. 5 are centered around the true position and those in Fig. 6 are centered around the true position or its permutation.
From Figs. 4–6, we can conclude that for the stochastic signal model, 1) EM has convergence behavior similar to that of MEM, and 2) SAGE does not always outperform EM and MEM.
VII-C Deterministic and Stochastic Signal Models
The EM, MEM, and SAGE algorithms for the deterministic signal model can process samples from the stochastic signal model, which means that the algorithms for the stochastic signal model can be compared to those for the deterministic signal model. The above numerical results have shown that EM has convergence behavior similar to that of MEM, so for simplicity we only compare EM and SAGE for both models in this subsection.
Since both models share the same DOA parameters, a stopping criterion based on the DOA estimates is suitable. Fig. 7 shows a scatter plot of the DOA estimates obtained by EM and SAGE under 50 independent realizations, where the same samples of each realization are processed by both algorithms for both models. From Fig. 7, we observe that both algorithms for both models obtain consistent DOA estimates.
Based on Fig. 7, Fig. 8 compares the numbers of iterations. EM for the deterministic signal model generally requires more iterations than EM for the stochastic signal model, whereas SAGE for the deterministic signal model generally requires fewer iterations than SAGE for the stochastic signal model. More importantly, SAGE for the deterministic signal model always requires the smallest number of iterations in each realization. Thus, we conclude that SAGE for the deterministic signal model is superior to the other algorithms in terms of computational cost.
VIII Conclusion
We have developed the EM and SAGE algorithms for DOA estimation in unknown uniform noise and proposed an MEM algorithm applicable to this noise assumption. We then improved the EM, MEM, and SAGE algorithms to ensure stability when the powers of the sources are unequal. With these improvements, numerical results illustrate that the EM algorithm has convergence behavior similar to that of the MEM algorithm, that the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model, and that the SAGE algorithm converges faster than the EM and MEM algorithms for the stochastic signal model. In addition, numerical results indicate that when these algorithms process the same samples from the stochastic signal model, the SAGE algorithm for the deterministic signal model requires the fewest iterations.
References
- [1] H. Krim and M. Viberg, “Two decades of array signal processing research: the parametric approach,” IEEE Signal Processing Magazine, vol. 13, no. 4, pp. 67–94, Jul. 1996.
- [2] L. C. Godara, “Application of antenna arrays to mobile communications. II. Beam-forming and direction-of-arrival considerations,” Proceedings of the IEEE, vol. 85, no. 8, pp. 1195–1245, Aug. 1997.
- [3] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society. Series B (Methodological), vol. 39, no. 1, pp. 1–38, 1977.
- [4] M. Feder and E. Weinstein, “Parameter estimation of superimposed signals using the EM algorithm,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no. 4, pp. 477–489, Apr. 1988.
- [5] M. I. Miller and D. R. Fuhrmann, “Maximum-likelihood narrow-band direction finding and the EM algorithm,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 9, pp. 1560–1577, Sep. 1990.
- [6] J. A. Fessler and A. O. Hero, “Space-alternating generalized expectation-maximization algorithm,” IEEE Transactions on Signal Processing, vol. 42, no. 10, pp. 2664–2677, Oct. 1994.
- [7] P. Chung and J. F. Bohme, “Comparative convergence analysis of EM and SAGE algorithms in DOA estimation,” IEEE Transactions on Signal Processing, vol. 49, no. 12, pp. 2940–2949, Dec. 2001.
- [8] M. Gong and B. Lyu, “Alternating maximization and the EM algorithm in maximum-likelihood direction finding,” IEEE Transactions on Vehicular Technology, vol. 70, no. 10, pp. 9634–9645, Oct. 2021.
- [9] I. Ziskind and M. Wax, “Maximum likelihood localization of multiple sources by alternating projection,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no. 10, pp. 1553–1560, Oct. 1988.
- [10] P. Stoica and A. Nehorai, “MUSIC, maximum likelihood, and Cramer-Rao bound,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 5, pp. 720–741, May 1989.
- [11] P. Stoica and A. Nehorai, “Performance study of conditional and unconditional direction-of-arrival estimation,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 10, pp. 1783–1795, Oct. 1990.
- [12] M. Pesavento and A. B. Gershman, “Maximum-likelihood direction-of-arrival estimation in the presence of unknown nonuniform noise,” IEEE Transactions on Signal Processing, vol. 49, no. 7, pp. 1310–1324, Jul. 2001.
- [13] C. E. Chen, F. Lorenzelli, R. E. Hudson, and K. Yao, “Stochastic maximum-likelihood DOA estimation in the presence of unknown nonuniform noise,” IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3038–3044, Jul. 2008.
- [14] X. Meng and D. B. Rubin, “Maximum likelihood estimation via the ECM algorithm: A general framework,” Biometrika, vol. 80, no. 2, pp. 267–278, Jun. 1993.
- [15] P. Chung and J. F. Bohme, “DOA estimation using fast EM and SAGE algorithms,” Signal Processing, vol. 82, no. 11, pp. 1753–1762, Nov. 2002.
- [16] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, USA: Prentice-Hall PTR, 1993.
- [17] I. B. Rhodes, “A tutorial introduction to estimation and filtering,” IEEE Transactions on Automatic Control, vol. 16, no. 6, pp. 688–706, Dec. 1971.
- [18] A. G. Jaffer, “Maximum likelihood direction finding of stochastic sources: a separable solution,” in Proc. ICASSP, New York, USA, Apr. 1988.
- [19] P. Stoica and A. Nehorai, “On the concentrated stochastic likelihood function in array signal processing,” Circuits, Systems Signal Processing, vol. 14, no. 5, pp. 669–674, Sep. 1995.
- [20] C. F. Jeff Wu, “On the convergence properties of the EM algorithm,” Annals of Statistics, vol. 11, no. 1, pp. 95–103, Mar. 1983.
- [21] S. Boyd and L. Vandenberghe, Convex Optimization. New York, USA: Cambridge University Press, 2004.
Ming-yan Gong received the B.Eng. degree in metal material engineering from the Jiangsu University of Science and Technology, Zhenjiang, China, in 2016 and the M.Eng. degree in signal and information processing from the Nanjing University of Posts and Telecommunications, Nanjing, China, in 2019. He is currently working toward the Ph.D. degree with the Beijing Institute of Technology, Beijing, China. His research interests include array signal processing and MIMO communications.
Bin Lyu received the B.E. and Ph.D. degrees from the Nanjing University of Posts and Telecommunications (NJUPT), Nanjing, China, in 2013 and 2018, respectively. He is currently an Associate Professor with NJUPT. His research interests include wireless communications and signal processing.