Maximum Correntropy Ensemble Kalman Filter
Abstract
In this article, a robust ensemble Kalman filter (EnKF), called MC-EnKF, is proposed for nonlinear state-space models to deal with filtering problems with non-Gaussian observation noises. MC-EnKF is derived from the maximum correntropy criterion (MCC) with some technical approximations. Moreover, we propose an effective adaptive strategy for kernel bandwidth selection. Besides, the relation between the common EnKF and MC-EnKF is given: MC-EnKF converges to the common EnKF as the kernel bandwidth tends to infinity. This justification provides a complementary understanding of kernel bandwidth selection for MC-EnKF. In experiments, non-Gaussian observation noises significantly degrade the performance of the common EnKF for both linear and nonlinear systems, whereas our proposed MC-EnKF with a suitable kernel bandwidth maintains good performance at only a marginal increase in computational cost, demonstrating its robustness and efficiency under non-Gaussian observation noises.
I Introduction
Filtering [1] is a fundamental problem in state estimation for state-space models [2]. It has a large number of applications in many fields, such as robot vision [3] and data assimilation [4]. For linear state-space models with Gaussian noise, the optimal filtering solution can be computed analytically, which is the well-known Kalman filter (KF) [5]. Since then, a large number of filtering algorithms for nonlinear state-space models with Gaussian noise have been proposed. Representatives among them are the extended Kalman filter (EKF) [6], the unscented Kalman filter (UKF) [7], the cubature Kalman filter (CKF) [8] and the ensemble Kalman filter (EnKF) [9].
In the case of Gaussian noise, the KF and its extensions work well. However, the noise frequently does not fit a Gaussian distribution in real-world applications. For instance, in many practical settings of target tracking [10] and power systems [11], impulsive interferences and observation outliers are frequent. These disturbances are often modelled by heavy-tailed impulsive noises (such as mixed-Gaussian distributions). The main reason for this problem is that the KF is based on the well-known minimum mean square error (MMSE) criterion [12], which is sensitive to large outliers and deteriorates the KF's robustness under non-Gaussian noise [13]. Numerous studies have attempted to address filtering problems with non-Gaussian noise, such as filters based on information theoretic quantities [14], Huber-based KFs [15] and the robust Student's t filter [16]. Besides these filters, a local similarity measure called correntropy [17] has been used to develop new robust filters [18, 19, 20, 21]. These filters are derived from the maximum correntropy criterion (MCC) and are called MCC-based filters; they can achieve great performance in the presence of heavy-tailed non-Gaussian noises, since correntropy is insensitive to large outliers.
In this article, we formulate the estimation problem under the MCC framework. Under this framework, the goal of this article is to develop a robust EnKF, called the maximum correntropy EnKF (MC-EnKF), to address nonlinear filtering problems when observations contain large outliers. With the help of suitable approximation techniques, we derive the ensemble recursion of MC-EnKF from the MCC cost function. It is necessary to note that MC-EnKF employs the empirical prediction covariance calculated from the ensembles. This procedure is consistent with the common EnKF, which allows us to prove that MC-EnKF converges to the common EnKF as the kernel bandwidth tends to infinity. This justification provides a complementary understanding of kernel bandwidth selection for MC-EnKF. Although MC-EnKF aims to handle non-Gaussian noise cases, it also has the flexibility to handle Gaussian noise cases with a large enough kernel bandwidth, since EnKF performs well in this scenario. The contributions of this article are listed as follows.
- We introduce the idea of cost functions to the field of EnKF, which inspires us to derive a robust EnKF, called MC-EnKF, based on the MCC cost function. Moreover, we propose an effective adaptive strategy for kernel bandwidth selection. Besides, we provide a complementary theoretical understanding of kernel bandwidth selection, i.e., MC-EnKF converges to the common EnKF when the kernel bandwidth is large enough. This justification gives MC-EnKF the flexibility to handle cases involving both Gaussian and non-Gaussian noise.
- Our theoretical result is supported by numerical experiments on several filtering benchmarks, i.e., our proposed MC-EnKF performs like the common EnKF when the kernel bandwidth is large enough. Furthermore, experiments demonstrate that our proposed adaptive strategy for kernel bandwidth selection is effective. With an appropriate kernel bandwidth, MC-EnKF can outperform EnKF with only a minor increase in computing cost when the underlying observation system is hampered by heavy-tailed non-Gaussian noises.
The remainder of this article is organized as follows. In Section II, we present the concept of MCC and briefly review nonlinear filtering problems and the EnKF algorithm. In Section III, we derive our MC-EnKF algorithm and propose an effective adaptive strategy for kernel bandwidth selection, with necessary discussions. The experiments and discussions are presented in Section IV. The conclusions are drawn in Section V.
I-A Notations
Throughout the article, we use boldface lower-case letters for vectors and boldface upper-case letters for matrices. The transpose and mathematical expectation are denoted by $(\cdot)^{\top}$ and $\mathbb{E}[\cdot]$, respectively. The Gaussian distribution with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$ is denoted by $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$. $\mathbb{R}^{n}$ and $\mathbb{R}^{n \times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n \times m$ real matrices. In particular, $\mathbf{I}_{n}$ denotes the $n$-dimensional identity matrix. $\|\mathbf{x}\|_{\mathbf{B}}$ denotes the weighted norm with $\|\mathbf{x}\|_{\mathbf{B}} = \sqrt{\mathbf{x}^{\top} \mathbf{B}^{-1} \mathbf{x}}$ and non-singular matrix $\mathbf{B}$.
II Background
In this section, we first introduce the concept of correntropy [17], which is a local similarity measure between two random vectors. Then we formulate estimation problems based on the MCC and the MMSE criterion [12]. At last, we briefly review the framework of nonlinear filtering problems and introduce the EnKF algorithm.
II-A Maximum Correntropy Criterion and Minimum Mean Square Error
II-A1 Correntropy
For two given random vectors $\mathbf{x}$ and $\mathbf{y}$, the correntropy between $\mathbf{x}$ and $\mathbf{y}$ is denoted by $V(\mathbf{x}, \mathbf{y})$, which is defined as follows:

$$V(\mathbf{x}, \mathbf{y}) = \mathbb{E}\left[G_{\sigma}(\mathbf{x} - \mathbf{y})\right], \qquad (1)$$
where $\mathbf{B}$ is a given non-singular matrix with suitable dimension, $G_{\sigma}(\cdot)$ is the Gaussian kernel given by $G_{\sigma}(\mathbf{e}) = \exp\left(-\frac{\|\mathbf{e}\|_{\mathbf{B}}^{2}}{2\sigma^{2}}\right)$, and $\sigma > 0$ stands for the kernel bandwidth. Here we shall recall an important property of correntropy; more details can be found in [17]. By taking the Taylor series expansion of the Gaussian kernel, we have
$$V(\mathbf{x}, \mathbf{y}) = \sum_{j=0}^{\infty} \frac{(-1)^{j}}{2^{j} j!\, \sigma^{2j}}\, \mathbb{E}\left[\|\mathbf{x} - \mathbf{y}\|_{\mathbf{B}}^{2j}\right]. \qquad (2)$$
Hence, correntropy can be seen as a weighted sum of all even-order moments of the error vector $\mathbf{x} - \mathbf{y}$, and the kernel bandwidth acts as the parameter weighting the second-order moment against the higher-order ones. When the kernel bandwidth is very large compared to the range of the error vector, the second-order moment dominates the correntropy.
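To make this concrete, the following minimal NumPy sketch (our illustration, not code from the paper) estimates the correntropy (1) by replacing the expectation with a sample average of Gaussian-kernel evaluations; the function names and the choice $\mathbf{B} = \mathbf{I}_{2}$ are our own.

```python
# Sample estimate of correntropy (1) with a B-weighted Gaussian kernel.
import numpy as np

def gaussian_kernel(e, B, sigma):
    """G_sigma(e) = exp(-||e||_B^2 / (2 sigma^2)), with ||e||_B^2 = e' B^{-1} e."""
    return np.exp(-(e @ np.linalg.solve(B, e)) / (2.0 * sigma ** 2))

def empirical_correntropy(x_samples, y_samples, B, sigma):
    """Replace the expectation in (1) by a sample average over paired samples."""
    return np.mean([gaussian_kernel(x - y, B, sigma)
                    for x, y in zip(x_samples, y_samples)])

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2))
y = x + 0.1 * rng.normal(size=(1000, 2))   # small errors -> value near 1
print(empirical_correntropy(x, y, np.eye(2), sigma=1.0))
```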
II-A2 Estimation Problems Formulation
Correntropy can be used as the optimality criterion for estimation problems. Suppose our goal is to learn a parameter $\boldsymbol{\theta}$ for a given estimator $g(\boldsymbol{\theta})$, and let $\mathbf{y}$ denote the desired output. Then the MCC-based estimation problem can be formulated as solving the following optimization problem:
$$\boldsymbol{\theta}_{\mathrm{MCC}}^{*} = \arg\max_{\boldsymbol{\theta} \in \Theta}\, \mathbb{E}\left[G_{\sigma}\left(\mathbf{y} - g(\boldsymbol{\theta})\right)\right], \qquad (3)$$
where $\boldsymbol{\theta}_{\mathrm{MCC}}^{*}$ denotes the optimal solution and $\Theta$ denotes the feasible set of parameters. It is instructive to compare the MCC with the conventional MMSE criterion. Let $\boldsymbol{\theta}_{\mathrm{MMSE}}^{*}$ denote the optimal solution over the same feasible set $\Theta$. Then, for a given non-singular matrix $\mathbf{B}$ with suitable dimension, the MMSE-based estimation problem is formulated as follows:
$$\boldsymbol{\theta}_{\mathrm{MMSE}}^{*} = \arg\min_{\boldsymbol{\theta} \in \Theta}\, \mathbb{E}\left[\left\|\mathbf{y} - g(\boldsymbol{\theta})\right\|_{\mathbf{B}}^{2}\right]. \qquad (4)$$
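The contrast between (3) and (4) can be illustrated with a toy computation (our sketch, not from the paper): a single large outlier dominates the squared-error objective, but contributes almost nothing to the correntropy objective, since the Gaussian kernel saturates near zero for large errors.

```python
# MMSE-style loss vs. MCC objective on scalar errors with one outlier.
import numpy as np

sigma = 1.0
errors = np.array([0.1, -0.2, 0.05, 50.0])        # last entry is an outlier

mse_loss = np.mean(errors ** 2)                              # MMSE objective
correntropy = np.mean(np.exp(-errors ** 2 / (2 * sigma ** 2)))  # MCC objective

print(mse_loss)       # ~625: dominated by the single outlier
print(correntropy)    # ~0.74: the outlier contributes almost nothing
```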
II-B Nonlinear Filtering Problems
In this article, we consider the nonlinear state-space model given by the following state and observation equations:
$$\mathbf{x}_{k} = f(\mathbf{x}_{k-1}) + \mathbf{w}_{k}, \qquad \mathbf{y}_{k} = h(\mathbf{x}_{k}) + \mathbf{v}_{k}, \qquad (5)$$
where the state $\mathbf{x}_{k} \in \mathbb{R}^{n}$ and the observation $\mathbf{y}_{k} \in \mathbb{R}^{m}$. $f(\cdot)$ and $h(\cdot)$ are nonlinear functions called the state equation and the observation equation, respectively. The state noise $\mathbf{w}_{k}$ and the observation noise $\mathbf{v}_{k}$ are zero-mean with nominal covariances $\mathbf{Q}$ and $\mathbf{R}$, respectively. Let $\mathcal{Y}_{k}$ denote the $\sigma$-algebra generated by the noisy observations $\{\mathbf{y}_{1}, \ldots, \mathbf{y}_{k}\}$. The filtering problem refers to estimating the posterior distribution $p(\mathbf{x}_{k} \mid \mathcal{Y}_{k})$, which is called the filtering distribution.
In what follows, we focus on the scenario in which the observation noises are not Gaussian, i.e., they are contaminated by unknown large outliers, so the real distribution of the observation noise is in fact unknown to us. For example, we may have $\mathbf{v}_{k} \sim (1-p)\,\mathcal{N}(\mathbf{0}, \mathbf{R}) + p\,\Delta$, where $p$ is an unknown contamination probability and $\Delta$ is an arbitrary unknown distribution with large covariance. Here $\mathbf{R}$ is known to us, so it is called the nominal covariance matrix. We shall develop our novel filter to deal with such noises by utilizing the MCC as the cost function involving the nominal observation covariance $\mathbf{R}$.
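As a hedged illustration of how such contaminated noise can be simulated (the helper name and all parameter values below are our own, not the paper's settings), the following sketch draws from a two-component Gaussian mixture.

```python
# Draw noise from p_nominal * N(0, R) + (1 - p_nominal) * N(0, R_outlier).
import numpy as np

def mixture_noise(rng, R, R_outlier, p_nominal, size):
    """Sample `size` noise vectors from a contaminated Gaussian mixture."""
    m = R.shape[0]
    nominal = rng.multivariate_normal(np.zeros(m), R, size)
    outlier = rng.multivariate_normal(np.zeros(m), R_outlier, size)
    mask = rng.random(size) < p_nominal            # True -> nominal component
    return np.where(mask[:, None], nominal, outlier)

rng = np.random.default_rng(1)
v = mixture_noise(rng, np.eye(1), 100.0 * np.eye(1), 0.9, 1000)
```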
II-C Ensemble Kalman Filter
The EnKF sequentially approximates the filtering distributions using equally weighted ensembles $\{\mathbf{x}_{k}^{(i)}\}_{i=1}^{N}$. At prediction steps, each ensemble member is propagated through the state equation, while at update steps a Kalman-type update is performed for each member:
$$\hat{\mathbf{x}}_{k}^{(i)} = f(\mathbf{x}_{k-1}^{(i)}) + \mathbf{w}_{k}^{(i)}, \quad \mathbf{w}_{k}^{(i)} \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}), \qquad (6)$$
and
$$\mathbf{x}_{k}^{(i)} = \hat{\mathbf{x}}_{k}^{(i)} + \mathbf{K}_{k}\left(\mathbf{y}_{k} - h(\hat{\mathbf{x}}_{k}^{(i)}) - \mathbf{v}_{k}^{(i)}\right), \qquad (7)$$
where $\mathbf{v}_{k}^{(i)} \sim \mathcal{N}(\mathbf{0}, \mathbf{R})$, and the Kalman gain
$$\mathbf{K}_{k} = \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top}\left(\mathbf{H}_{k} \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top} + \mathbf{R}\right)^{-1} \qquad (8)$$
is defined using the empirical prediction covariance of the prediction ensembles $\{\hat{\mathbf{x}}_{k}^{(i)}\}_{i=1}^{N}$, namely
$$\hat{\mathbf{P}}_{k} = \frac{1}{N-1} \sum_{i=1}^{N}\left(\hat{\mathbf{x}}_{k}^{(i)} - \hat{\mathbf{m}}_{k}\right)\left(\hat{\mathbf{x}}_{k}^{(i)} - \hat{\mathbf{m}}_{k}\right)^{\top}, \qquad (9)$$
with
$$\hat{\mathbf{m}}_{k} = \frac{1}{N} \sum_{i=1}^{N} \hat{\mathbf{x}}_{k}^{(i)}, \qquad (10)$$
and
$$\mathbf{H}_{k} = \left.\frac{\partial h(\mathbf{x})}{\partial \mathbf{x}}\right|_{\mathbf{x} = \hat{\mathbf{m}}_{k}}. \qquad (11)$$
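To make the recursion concrete, here is a schematic NumPy implementation of one EnKF cycle (6)-(11). It is an illustrative sketch under our reading of the linearized gain, not the authors' code; `H_jac` (the Jacobian of $h$) and the toy model in the usage lines are our assumptions.

```python
# Schematic single EnKF cycle, equations (6)-(11). Illustrative sketch.
import numpy as np

def enkf_step(ensemble, y, f, h, H_jac, Q, R, rng):
    """One prediction + update cycle for an (N, n) ensemble array."""
    N, n = ensemble.shape
    # Prediction step (6): propagate members with fresh state noise.
    pred = np.array([f(x) for x in ensemble]) \
        + rng.multivariate_normal(np.zeros(n), Q, N)
    # Empirical prediction mean (10) and covariance (9).
    m_hat = pred.mean(axis=0)
    A = pred - m_hat
    P_hat = A.T @ A / (N - 1)
    # Kalman gain (8) with the Jacobian (11) at the empirical mean.
    H = H_jac(m_hat)
    K = P_hat @ H.T @ np.linalg.inv(H @ P_hat @ H.T + R)
    # Update step (7): perturbed observation for each member.
    v = rng.multivariate_normal(np.zeros(R.shape[0]), R, N)
    return np.array([x + K @ (y - h(x) - vi) for x, vi in zip(pred, v)])

# Toy usage with assumed linear dynamics and a scalar observation.
rng = np.random.default_rng(0)
f = lambda x: 0.95 * x
h = lambda x: x[:1]
H_jac = lambda m: np.array([[1.0, 0.0]])
ens = rng.normal(size=(50, 2))
ens = enkf_step(ens, np.array([0.3]), f, h, H_jac,
                0.01 * np.eye(2), 0.1 * np.eye(1), rng)
```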
III Proposed Algorithm
In this section, we present the derivation of MC-EnKF. We then discuss the adaptive strategy for kernel bandwidth selection and the convergence of MC-EnKF with respect to the kernel bandwidth.
III-A Derivation of the Algorithm
Here we present how to derive MC-EnKF based on the MCC estimate (3). The prediction step of MC-EnKF is the same as that of EnKF, i.e., (6). Therefore, we focus on deriving its update step. Let $\hat{\mathbf{x}}_{k}$ and $\hat{\mathbf{P}}_{k}$ denote the prediction mean and the prediction covariance, respectively. As shown in [22], for the underlying linear system of (5), i.e., $f(\mathbf{x}) = \mathbf{F}\mathbf{x}$ and $h(\mathbf{x}) = \mathbf{H}\mathbf{x}$, the update step of the KF can be derived by minimizing the least squares cost function
$$J_{k}(\mathbf{x}) = \frac{1}{2}\left\|\mathbf{y}_{k} - \mathbf{H}\mathbf{x}\right\|_{\mathbf{R}}^{2} + \frac{1}{2}\left\|\mathbf{x} - \hat{\mathbf{x}}_{k}\right\|_{\hat{\mathbf{P}}_{k}}^{2}. \qquad (13)$$
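For completeness, setting the gradient of (13) to zero gives $\hat{\mathbf{P}}_{k}^{-1}\left(\mathbf{x} - \hat{\mathbf{x}}_{k}\right) = \mathbf{H}^{\top} \mathbf{R}^{-1}\left(\mathbf{y}_{k} - \mathbf{H}\mathbf{x}\right)$, whose solution is the familiar KF update $\mathbf{x} = \hat{\mathbf{x}}_{k} + \mathbf{K}_{k}\left(\mathbf{y}_{k} - \mathbf{H}\hat{\mathbf{x}}_{k}\right)$ with the gain of (8) (the equivalence of the two gain forms follows from the matrix inversion lemma, as in Appendix A). The MCC derivation below mirrors this computation with correntropy-induced weights.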
Motivated by (13), we shall consider a new cost function based on the MCC. Besides, in view of Remark II.1, we replace $\hat{\mathbf{x}}_{k}$ and $\hat{\mathbf{P}}_{k}$ in (13) by the empirical prediction mean in (10) and the empirical prediction covariance in (9). Therefore, our modified cost function for the ensembles is given by
$$J_{k}^{MC}(\mathbf{x}) = \sigma^{2}\left(1 - G_{\sigma}\left(\mathbf{y}_{k} - h(\mathbf{x})\right)\right) + \sigma^{2}\left(1 - G_{\sigma}\left(\mathbf{x} - \hat{\mathbf{m}}_{k}\right)\right), \qquad (14)$$

where the two Gaussian kernels are weighted by $\mathbf{R}$ and $\hat{\mathbf{P}}_{k}$, respectively, i.e., $G_{\sigma}(\mathbf{y}_{k} - h(\mathbf{x})) = \exp\left(-\frac{\|\mathbf{y}_{k} - h(\mathbf{x})\|_{\mathbf{R}}^{2}}{2\sigma^{2}}\right)$ and $G_{\sigma}(\mathbf{x} - \hat{\mathbf{m}}_{k}) = \exp\left(-\frac{\|\mathbf{x} - \hat{\mathbf{m}}_{k}\|_{\hat{\mathbf{P}}_{k}}^{2}}{2\sigma^{2}}\right)$. Note that $\sigma^{2}\left(1 - G_{\sigma}(\mathbf{e})\right) \to \frac{1}{2}\|\mathbf{e}\|^{2}$ as $\sigma \to \infty$, so (14) recovers (13) in the large-bandwidth limit.
Then, based on $J_{k}^{MC}$, the update step for MC-EnKF can be obtained by solving the following optimization problem for the updated state:
$$\mathbf{x}_{k}^{*} = \arg\min_{\mathbf{x} \in \mathbb{R}^{n}} J_{k}^{MC}(\mathbf{x}). \qquad (15)$$
In what follows, the derivation contains some approximations; for the sake of obtaining our algorithm, we heuristically treat them as exact equalities. Let us denote
$$L_{p,k} = G_{\sigma}\left(\mathbf{x} - \hat{\mathbf{m}}_{k}\right), \qquad L_{r,k} = G_{\sigma}\left(\mathbf{y}_{k} - h(\mathbf{x})\right). \qquad (16)$$
Then, recalling $\mathbf{H}_{k}$ defined in (11), we consider the approximation $\frac{\partial h(\mathbf{x})}{\partial \mathbf{x}} \approx \mathbf{H}_{k}$ when taking the derivative, which yields
$$\frac{\partial J_{k}^{MC}(\mathbf{x})}{\partial \mathbf{x}} \approx L_{p,k}\, \hat{\mathbf{P}}_{k}^{-1}\left(\mathbf{x} - \hat{\mathbf{m}}_{k}\right) - L_{r,k}\, \mathbf{H}_{k}^{\top} \mathbf{R}^{-1}\left(\mathbf{y}_{k} - h(\mathbf{x})\right). \qquad (17)$$
Now letting $\frac{\partial J_{k}^{MC}(\mathbf{x})}{\partial \mathbf{x}} = \mathbf{0}$, we have
$$L_{p,k}\, \hat{\mathbf{P}}_{k}^{-1}\left(\mathbf{x} - \hat{\mathbf{m}}_{k}\right) = L_{r,k}\, \mathbf{H}_{k}^{\top} \mathbf{R}^{-1}\left(\mathbf{y}_{k} - h(\mathbf{x})\right). \qquad (18)$$
Adopting the first-order Taylor series to approximate the nonlinear observation function at the empirical prediction mean $\hat{\mathbf{m}}_{k}$, i.e.,
$$h(\mathbf{x}) \approx h(\hat{\mathbf{m}}_{k}) + \mathbf{H}_{k}\left(\mathbf{x} - \hat{\mathbf{m}}_{k}\right). \qquad (19)$$
Substituting (19) into (18) and solving for $\mathbf{x}$, we have
$$\mathbf{x} = \hat{\mathbf{m}}_{k} + \left(L_{p,k}\, \hat{\mathbf{P}}_{k}^{-1} + L_{r,k}\, \mathbf{H}_{k}^{\top} \mathbf{R}^{-1} \mathbf{H}_{k}\right)^{-1} L_{r,k}\, \mathbf{H}_{k}^{\top} \mathbf{R}^{-1}\left(\mathbf{y}_{k} - h(\hat{\mathbf{m}}_{k})\right). \qquad (20)$$
Then we obtain the following stochastic ensemble update rule, like (7), for each member $\mathbf{x}_{k}^{(i)}$ with drawn $\mathbf{v}_{k}^{(i)} \sim \mathcal{N}(\mathbf{0}, \mathbf{R})$:
$$\mathbf{x}_{k}^{(i)} = \hat{\mathbf{x}}_{k}^{(i)} + \tilde{\mathbf{K}}_{k}\left(\mathbf{y}_{k} - h(\hat{\mathbf{x}}_{k}^{(i)}) - \mathbf{v}_{k}^{(i)}\right), \qquad (21)$$
where the new Kalman gain is given by
$$\tilde{\mathbf{K}}_{k} = \left(L_{p,k}\, \hat{\mathbf{P}}_{k}^{-1} + L_{r,k}\, \mathbf{H}_{k}^{\top} \mathbf{R}^{-1} \mathbf{H}_{k}\right)^{-1} L_{r,k}\, \mathbf{H}_{k}^{\top} \mathbf{R}^{-1}. \qquad (22)$$
Here we shall discuss the practical algorithm, since $\tilde{\mathbf{K}}_{k}$, i.e., $L_{p,k}$ and $L_{r,k}$, still contains the unknown solution $\mathbf{x}$. Here we introduce a novel technical approximation: recalling (16), we use $\hat{\mathbf{m}}_{k}$ to approximate the $\mathbf{x}$ contained in $L_{p,k}$ and $L_{r,k}$, respectively, i.e.,
$$L_{p,k} \approx G_{\sigma}(\mathbf{0}) = 1, \qquad L_{r,k} \approx G_{\sigma}\left(\mathbf{y}_{k} - h(\hat{\mathbf{m}}_{k})\right). \qquad (23)$$
Accordingly, we summarize the specific steps of MC-EnKF in Algorithm 1.
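A minimal sketch of the resulting update step may look as follows (our illustration, not the authors' implementation; `H` is assumed to be the Jacobian (11) evaluated at the empirical prediction mean). Note in the code that, with the approximation (23), the prior weight equals one, and the gain reduces to the information form of (8) as $L_{r,k} \to 1$, in line with Theorem III.1 below.

```python
# Schematic MC-EnKF update (21)-(23) for an (N, n) prediction ensemble.
import numpy as np

def mc_enkf_update(pred, y, h, H, R, sigma, rng):
    """Stochastic MC-EnKF update step; `pred` holds the prediction members."""
    N, n = pred.shape
    m_hat = pred.mean(axis=0)
    A = pred - m_hat
    P_hat = A.T @ A / (N - 1)                      # empirical covariance (9)
    # Correntropy weights (23), evaluated at the empirical mean m_hat.
    r = y - h(m_hat)
    L_p = 1.0
    L_r = np.exp(-r @ np.linalg.solve(R, r) / (2.0 * sigma ** 2))
    # Modified gain (22); it reduces to the EnKF gain (8) as L_r -> 1.
    R_inv = np.linalg.inv(R)
    K = np.linalg.inv(L_p * np.linalg.inv(P_hat) + L_r * H.T @ R_inv @ H) \
        @ (L_r * H.T @ R_inv)
    # Per-member stochastic update (21) with perturbed observations.
    v = rng.multivariate_normal(np.zeros(R.shape[0]), R, N)
    return np.array([x + K @ (y - h(x) - vi) for x, vi in zip(pred, v)])
```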
Remark III.1
The proposed MC-EnKF is still of Kalman type, i.e., it has a recursive structure similar to that of the common EnKF. Moreover, its computational complexity is only slightly higher than that of the common EnKF, which is verified in the later experiments.
Remark III.2
From (22), we note that MC-EnKF has the potential to handle non-Gaussian observation noises thanks to the two weights, $L_{p,k}$ and $L_{r,k}$, which adaptively adjust the relative influence of $\hat{\mathbf{P}}_{k}$ and $\mathbf{R}$ in the gain. We shall also verify this argument in the later experiments.
III-B Adaptive Strategy for Kernel Bandwidth Selection and Convergence with Respect to the Kernel Bandwidth
In general, the kernel bandwidth (scale parameter) in the cost function balances the convergence rate of the regression model against its robustness [23]. Since our algorithm does not involve solving such regression problems, we focus on how to tune this scale parameter for robustness (with respect to outliers). Intuitively, if the arrived observation $\mathbf{y}_{k}$ contains large outliers, $\|\mathbf{y}_{k} - h(\hat{\mathbf{m}}_{k})\|_{1}$ will be large, where $\|\cdot\|_{1}$ denotes the $\ell_{1}$-norm. In this case, we need a smaller $\sigma$ to make our algorithm more robust. Motivated by this intuition, we propose to set the kernel bandwidth $\sigma_{k}$ used in the $k$-th step adaptively as a decreasing function of $\|\mathbf{y}_{k} - h(\hat{\mathbf{m}}_{k})\|_{1}$. As shown in our simulation experiments, this adaptive strategy achieves good estimation performance. Additionally, MC-EnKF will act more and more like the common EnKF algorithm as $\sigma$ increases. In particular, the following convergence theorem holds.
Theorem III.1
If the kernel bandwidth $\sigma \to \infty$, the proposed MC-EnKF converges to the common EnKF.
Proof:
See Appendix A. ∎
Remark III.3
The justification in Theorem III.1 gives MC-EnKF the flexibility to handle Gaussian noise cases, since the common EnKF performs well in this case.
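Returning to the adaptive strategy of this subsection, the following is a minimal sketch of one plausible instance of such a rule (our illustration only; the inverse-proportional form and the constants `c` and `sigma_min` are hypothetical tuning choices, not the exact rule used in our experiments).

```python
# Hypothetical adaptive bandwidth rule: smaller sigma for a larger
# l1-norm of the innovation, floored at sigma_min for numerical stability.
import numpy as np

def adaptive_bandwidth(y, h, m_hat, c=10.0, sigma_min=0.5):
    innovation_l1 = np.abs(y - h(m_hat)).sum()   # ||y_k - h(m_hat_k)||_1
    return max(c / max(innovation_l1, 1e-12), sigma_min)
```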
IV Experiments
In this section, we conduct performance comparisons of our proposed MC-EnKF with the common EnKF and the maximum likelihood EnKF (ML-EnKF) [24], which optimizes a nonlinear cost function through maximum likelihood. These evaluations are carried out on filtering benchmarks that incorporate non-Gaussian noises, i.e., observation noises with large outliers. In these experiments, we perform independent Monte Carlo runs; in each run, the simulated samples are used to evaluate the mean square error (MSE) of the state estimate. The filters are implemented with NumPy [25] and run on an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz.
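The evaluation protocol can be sketched as follows (an illustrative helper, not the paper's code); `run_filter` is a hypothetical driver that simulates one trajectory and returns the true and estimated states.

```python
# Average the squared state-estimation error over Monte Carlo runs.
import numpy as np

def monte_carlo_mse(run_filter, n_runs, seed=0):
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_runs):
        x_true, x_est = run_filter(rng)           # arrays of shape (T, n)
        errs.append(np.mean(np.sum((x_true - x_est) ** 2, axis=1)))
    return np.mean(errs)
```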
IV-A Linear System
The first benchmark we used here is a linear system, which is given by
$$\mathbf{x}_{k} = \mathbf{F}\mathbf{x}_{k-1} + \mathbf{w}_{k}, \qquad \mathbf{y}_{k} = \mathbf{H}\mathbf{x}_{k} + \mathbf{v}_{k}, \qquad (24)$$
where $\mathbf{F}$ and $\mathbf{H}$ are the fixed system matrices of this benchmark and $\mathbf{w}_{k} \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$. Let $\mathbf{R}$ be the nominal observation covariance; the observation noises are sampled from a mixture of Gaussians, i.e.,
$$\mathbf{v}_{k} \sim (1-p)\,\mathcal{N}(\mathbf{0}, \mathbf{R}) + p\,\mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathrm{o}}), \qquad (25)$$

where $p$ is the mixing probability of the outlier component and $\mathbf{R}_{\mathrm{o}}$ is its much larger covariance.
Table I lists the MSEs and average CPU times of EnKF and MC-EnKF with different kernel bandwidths for this example, where EnKF and MC-EnKF use the same number of ensemble members. Note that MC-EnKF outperforms EnKF in most cases, while its average CPU time is virtually identical to that of the common EnKF. This demonstrates that our proposed MC-EnKF achieves better performance than EnKF at nearly no extra CPU cost; in other words, MC-EnKF is more robust and remains efficient when dealing with observation noises containing large outliers. In fact, our proposed adaptive strategy (MC-EnKF-Ada) outperforms most of the fixed-bandwidth cases and is only slightly worse than the best one, which suggests that the adaptive strategy works well for linear systems. When we conduct the simulation with a large enough kernel bandwidth, the performance of MC-EnKF is almost identical to that of EnKF, which verifies Theorem III.1 for linear systems.
Methods | MSE | CPU Time |
---|---|---|
EnKF | 3.1004 | 0.4676 |
MC-EnKF-Ada | 2.2344 | 0.4697 |
MC-EnKF () | 2.9655 | 0.4634 |
MC-EnKF () | 2.9247 | 0.4636 |
MC-EnKF () | 2.1031 | 0.4689 |
MC-EnKF () | 1.9411 | 0.4629 |
MC-EnKF () | 2.3073 | 0.4642 |
MC-EnKF () | 3.1605 | 0.4725 |
In order to better present the comparison results, we choose one representative kernel bandwidth to illustrate the true state, the EnKF estimate, and the MC-EnKF estimate over time, which is shown in Fig. 1. Here, the selected state dimension is indicated by the numeric suffix of the legend entries in the figures; for example, EnKF-1 denotes the EnKF estimate of the first state dimension. This convention is also used in the following figure. We can clearly see that MC-EnKF gives accurate estimates, whereas EnKF performs poorly when the observations contain large outliers.
Fig. 1: The true state and the EnKF and MC-EnKF estimates over time for the linear benchmark.
IV-B Nonlinear System
The second benchmark is a nonlinear system, which is given as follows:
$$\mathbf{x}_{k} = f(\mathbf{x}_{k-1}) + \mathbf{w}_{k}, \qquad \mathbf{y}_{k} = h(\mathbf{x}_{k}) + \mathbf{v}_{k}, \qquad (26)$$
where the specific nonlinear maps $f$ and $h$ and the constants controlling the state dynamics are fixed for this benchmark, and $\mathbf{w}_{k} \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$. Let $\mathbf{R}$ be the nominal observation covariance; the observation noises are sampled from a mixture of Gaussians, i.e.,
$$\mathbf{v}_{k} \sim (1-p)\,\mathcal{N}(\mathbf{0}, \mathbf{R}) + p\,\mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathrm{o}}). \qquad (27)$$
Table II lists the MSEs and average CPU times of EnKF, ML-EnKF and MC-EnKF with different kernel bandwidths for this nonlinear example, where all filters use the same number of ensemble members. Clearly, for the nonlinear system, EnKF degrades greatly in the presence of non-Gaussian observation noises and the linear approximation error of the observation function, and hence has the worst estimation performance. ML-EnKF performs better than EnKF but is still affected by outliers. Our proposed adaptive strategy (MC-EnKF-Ada) obtains better results than several fixed kernel bandwidths, indicating that it remains useful for nonlinear systems. Note that MC-EnKF outperforms EnKF in all cases, which demonstrates that our proposed MC-EnKF can effectively eliminate the effect of non-Gaussian observation noises. Additionally, the filters share almost identical CPU times, which demonstrates the efficiency of MC-EnKF. Moreover, with a suitable kernel bandwidth, MC-EnKF achieves very accurate estimates while EnKF is heavily affected by large outliers, as presented in Fig. 2. We also note that when $\sigma$ is large enough, the performance of MC-EnKF is almost identical to that of EnKF, which verifies the result of Theorem III.1 for nonlinear systems.
Methods | MSE | CPU Time |
---|---|---|
EnKF | 4.0929 | 0.4768 |
ML-EnKF | 3.3567 | 0.5059 |
MC-EnKF-Ada | 2.9282 | 0.4775 |
MC-EnKF () | 3.0150 | 0.4793 |
MC-EnKF () | 2.9703 | 0.4769 |
MC-EnKF () | 1.5849 | 0.4713 |
MC-EnKF () | 1.3012 | 0.4762 |
MC-EnKF () | 1.3752 | 0.4729 |
MC-EnKF () | 4.0448 | 0.4799 |
Fig. 2: The true state and the EnKF and MC-EnKF estimates over time for the nonlinear benchmark.
IV-C Discussions
According to the simulation experiments above, we make the following two observations:
- With a suitable kernel bandwidth, MC-EnKF significantly improves robustness to observation noises containing large outliers at nearly no extra CPU time.
- With a large kernel bandwidth, MC-EnKF performs like the common EnKF, which implies its flexibility to handle Gaussian noise cases.
These two observations make our viewpoint a fruitful area for further study. Besides, we note that the kernel bandwidth plays a significant role in the algorithm and impacts its robustness to non-Gaussian observations (such as outliers). Although our proposed adaptive strategy is effective, it cannot match the performance obtained with the best fixed kernel bandwidth. Therefore, a highly intriguing direction for future work is a more effective adaptive strategy for picking the right kernel bandwidth with theoretical guarantees, since this choice is crucial for our proposed MC-EnKF. In addition, this article only investigates using the special case (MCC) of the generalized MCC [26] as a cost function to derive a robust EnKF with stochastic update steps. Hence, developing robust variants of EnKF [9] based on the generalized MCC will be our next research focus.
V Conclusion
This article proposes a robust EnKF algorithm called the maximum correntropy ensemble Kalman filter (MC-EnKF), which is flexible enough to handle cases involving both Gaussian and non-Gaussian noise. Instead of the well-known minimum mean square error (MMSE) criterion, MC-EnKF is derived by employing the maximum correntropy criterion (MCC) as the optimality criterion, while the ensemble propagation equations remain of Kalman type. MC-EnKF behaves like the EnKF when the kernel bandwidth is large enough, and we demonstrate this argument theoretically. With a proper kernel bandwidth, MC-EnKF can perform much better than the EnKF at only a slight increase in computational cost, especially when the underlying observation system is disturbed by heavy-tailed non-Gaussian noises. Besides, we propose an adaptive strategy for choosing the kernel bandwidth, whose effectiveness is also verified by experiments.
Appendix A Proof of Theorem III.1
Here we give the proof of Theorem III.1. Before proceeding, we need the following technical lemma.
Lemma A.1 (Matrix Inversion Lemma [27])
If $\mathbf{A} \in \mathbb{R}^{n \times n}$ and $\mathbf{C} \in \mathbb{R}^{m \times m}$ are non-singular, $\mathbf{U} \in \mathbb{R}^{n \times m}$, and $\mathbf{V} \in \mathbb{R}^{m \times n}$, then

$$\left(\mathbf{A} + \mathbf{U}\mathbf{C}\mathbf{V}\right)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{U}\left(\mathbf{C}^{-1} + \mathbf{V}\mathbf{A}^{-1}\mathbf{U}\right)^{-1}\mathbf{V}\mathbf{A}^{-1}. \qquad (28)$$
Proof:
Now we state the proof of Theorem III.1. Note that when the kernel bandwidth $\sigma \to \infty$,

$$\lim_{\sigma \to \infty} L_{r,k} = \lim_{\sigma \to \infty} \exp\left(-\frac{\left\|\mathbf{y}_{k} - h(\hat{\mathbf{m}}_{k})\right\|_{\mathbf{R}}^{2}}{2\sigma^{2}}\right) = 1. \qquad (29)$$
Similarly, one can conclude that $L_{p,k} \to 1$ when $\sigma \to \infty$ (under the approximation (23), $L_{p,k} = 1$ holds exactly). These arguments imply that $\tilde{\mathbf{K}}_{k}$ in (22) with the approximation (23) becomes
$$\tilde{\mathbf{K}}_{k} = \left(\hat{\mathbf{P}}_{k}^{-1} + \mathbf{H}_{k}^{\top} \mathbf{R}^{-1} \mathbf{H}_{k}\right)^{-1} \mathbf{H}_{k}^{\top} \mathbf{R}^{-1}. \qquad (30)$$
Then, in view of Lemma A.1 (we choose $\mathbf{A} = \hat{\mathbf{P}}_{k}^{-1}$, $\mathbf{U} = \mathbf{H}_{k}^{\top}$, $\mathbf{C} = \mathbf{R}^{-1}$, and $\mathbf{V} = \mathbf{H}_{k}$), one can conclude that

$$\left(\hat{\mathbf{P}}_{k}^{-1} + \mathbf{H}_{k}^{\top} \mathbf{R}^{-1} \mathbf{H}_{k}\right)^{-1} = \hat{\mathbf{P}}_{k} - \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top}\left(\mathbf{R} + \mathbf{H}_{k} \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top}\right)^{-1} \mathbf{H}_{k} \hat{\mathbf{P}}_{k}. \qquad (31)$$
Now, substituting (31) into (30) and simplifying, one can further conclude that

$$\begin{aligned} \tilde{\mathbf{K}}_{k} &= \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top}\left[\mathbf{I}_{m} - \left(\mathbf{R} + \mathbf{H}_{k} \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top}\right)^{-1} \mathbf{H}_{k} \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top}\right] \mathbf{R}^{-1} \\ &= \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top}\left(\mathbf{H}_{k} \hat{\mathbf{P}}_{k} \mathbf{H}_{k}^{\top} + \mathbf{R}\right)^{-1}, \end{aligned} \qquad (32)$$
which is equal to (8). ∎
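As a quick numerical sanity check of this chain of identities (our addition, not part of the original proof), one can verify that the information-form gain (30) coincides with the standard gain (8) for random symmetric positive definite covariances.

```python
# Verify (P^-1 + H'R^-1 H)^-1 H'R^-1 == P H'(H P H' + R)^-1 numerically.
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3
A = rng.normal(size=(n, n)); P = A @ A.T + np.eye(n)   # SPD prediction cov.
B = rng.normal(size=(m, m)); R = B @ B.T + np.eye(m)   # SPD observation cov.
H = rng.normal(size=(m, n))

K_info = np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H) \
    @ H.T @ np.linalg.inv(R)
K_std = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
print(np.allclose(K_info, K_std))                      # True
```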
References
- [1] S. Särkkä, Bayesian filtering and smoothing. Cambridge university press, 2013, no. 3.
- [2] J. D. Hamilton, “State-space models,” Handbook of econometrics, vol. 4, pp. 3039–3080, 1994.
- [3] S. Chen, “Kalman filter for robot vision: a survey,” IEEE Transactions on industrial electronics, vol. 59, no. 11, pp. 4409–4420, 2011.
- [4] K. Law, A. Stuart, and K. Zygalakis, Data Assimilation: A Mathematical Introduction. Cham, Switzerland: Springer, 2015.
- [5] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Transactions of the ASME–Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960.
- [6] M. Hoshiya, E. Saito et al., “Structural identification by extended kalman filter,” Jour. of Eng. Mech., ASCE, vol. 110, no. 12, 1984.
- [7] S. J. Julier and J. K. Uhlmann, “New extension of the kalman filter to nonlinear systems,” in Signal processing, sensor fusion, and target recognition VI, vol. 3068. International Society for Optics and Photonics, 1997, pp. 182–193.
- [8] I. Arasaratnam and S. Haykin, “Cubature kalman filters,” IEEE Transactions on automatic control, vol. 54, no. 6, pp. 1254–1269, 2009.
- [9] G. Evensen, “The ensemble kalman filter: Theoretical formulation and practical implementation,” Ocean dynamics, vol. 53, no. 4, pp. 343–367, 2003.
- [10] Y. Lu, B. Li, H. R. Karimi, and N. Zhang, “Measurement outlier-resistant target tracking in wireless sensor networks with energy harvesting constraints,” Journal of the Franklin Institute, 2022.
- [11] Y. Wang, Y. Sun, V. Dinavahi, S. Cao, and D. Hou, “Adaptive robust cubature kalman filter for power system dynamic state estimation against outliers,” IEEE Access, vol. 7, pp. 105 872–105 881, 2019.
- [12] A. H. Jazwinski, Stochastic processes and filtering theory. Courier Corporation, 2007.
- [13] Z. Wu, J. Shi, X. Zhang, W. Ma, and B. Chen, “Kernel recursive maximum correntropy,” Signal Processing, vol. 117, pp. 11–16, 2015.
- [14] J. C. Principe, Information theoretic learning: Renyi’s entropy and kernel perspectives. Springer Science & Business Media, 2010.
- [15] V. Stojanovic and N. Nedic, “Robust kalman filtering for nonlinear multivariable stochastic systems in the presence of non-gaussian noise,” International Journal of Robust and Nonlinear Control, vol. 26, no. 3, pp. 445–460, 2016.
- [16] Y. Huang, Y. Zhang, N. Li, Z. Wu, and J. A. Chambers, “A novel robust student’s t-based kalman filter,” IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 3, pp. 1545–1554, 2017.
- [17] W. Liu, P. P. Pokharel, and J. C. Principe, “Correntropy: Properties and applications in non-gaussian signal processing,” IEEE Transactions on signal processing, vol. 55, no. 11, pp. 5286–5298, 2007.
- [18] B. Chen, X. Liu, H. Zhao, and J. C. Principe, “Maximum correntropy kalman filter,” Automatica, vol. 76, pp. 70–77, 2017.
- [19] G. Wang, N. Li, and Y. Zhang, “Maximum correntropy unscented kalman and information filters for non-gaussian measurement noise,” Journal of the Franklin Institute, vol. 354, no. 18, pp. 8659–8677, 2017.
- [20] G. Wang, Y. Zhang, and X. Wang, “Iterated maximum correntropy unscented kalman filters for non-gaussian systems,” Signal Processing, vol. 163, pp. 87–94, 2019.
- [21] Y. Tao and S. S.-T. Yau, “Outlier-robust iterative extended kalman filtering,” IEEE Signal Processing Letters, vol. 30, pp. 743–747, 2023.
- [22] H. E. Rauch, F. Tung, and C. T. Striebel, “Maximum likelihood estimates of linear dynamic systems,” AIAA journal, vol. 3, no. 8, pp. 1445–1450, 1965.
- [23] Y. Feng, X. Huang, L. Shi, Y. Yang, J. A. Suykens et al., “Learning with the maximum correntropy criterion induced losses for regression.” J. Mach. Learn. Res., vol. 16, no. 30, pp. 993–1034, 2015.
- [24] M. Zupanski, “Maximum likelihood ensemble filter: Theoretical aspects,” Monthly Weather Review, vol. 133, no. 6, pp. 1710–1726, 2005.
- [25] C. R. Harris, K. J. Millman, S. J. Van Der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith et al., “Array programming with numpy,” Nature, vol. 585, no. 7825, pp. 357–362, 2020.
- [26] B. Chen, L. Xing, H. Zhao, N. Zheng, and J. C. Príncipe, “Generalized correntropy for robust adaptive filtering,” IEEE Transactions on Signal Processing, vol. 64, no. 13, pp. 3376–3387, 2016.
- [27] W. W. Hager, “Updating the inverse of a matrix,” SIAM review, vol. 31, no. 2, pp. 221–239, 1989.