Tin Lok James Ng
School of Computer Science and Statistics, Trinity College Dublin, Ireland
Email: [email protected]
Penalized Maximum Likelihood Estimator for Mixture of von Mises-Fisher Distributions
Abstract
The von Mises-Fisher distribution is one of the most widely used probability distributions to describe directional data. Finite mixtures of von Mises-Fisher distributions have found numerous applications. However, the likelihood function for the finite mixture of von Mises-Fisher distributions is unbounded, and consequently the maximum likelihood estimator is not well defined. To address the problem of likelihood degeneracy, we consider a penalized maximum likelihood approach whereby a penalty function is incorporated. We prove strong consistency of the resulting estimator. An Expectation-Maximization algorithm for the penalized likelihood function is developed, and experiments are performed to examine its performance.
Keywords: mixture of von Mises-Fisher distributions · penalized maximum likelihood estimation · strong consistency

1 Introduction
In many areas of statistical modelling, data are represented as directions or unit length vectors (Mardia, 1972; Jupp, 1995; Mardia and Jupp, 2000). The analysis of directional data has attracted much research interest in various disciplines, from hydrology (Chen et al., 2013) to biology (Boomsma et al., 2006), and from image analysis (Zhe et al., 2019) to text mining (Banerjee et al., 2005). The von Mises-Fisher (vMF) distribution is one of the most commonly used distributions to model data distributed on the surface of the unit hypersphere (Fisher, 1953; Mardia and Jupp, 2000). The vMF distribution has been applied successfully in many domains (e.g., Sinkkonen and Kaski, 2002; Mcgraw et al., 2006; Bangert et al., 2010).
A mixture of vMF distributions (Banerjee et al., 2005) assumes that each observation is drawn from one of several vMF distributions. Applications of the vMF mixture model are diverse, including image analysis (Mcgraw et al., 2006) and text mining (Banerjee et al., 2005). More recently, it has been shown that the vMF mixture model can approximate any continuous density function on the unit hypersphere to an arbitrary degree of accuracy, given a sufficient number of mixture components (Ng and Kwong, 2021).
Various estimation strategies have been developed to perform model estimation, including the maximum likelihood approach (Banerjee et al., 2005) and Bayesian methods (Bagchi and Kadane, 1991; Taghia et al., 2014). The maximum likelihood approach, typically implemented via the Expectation-Maximization (EM) algorithm (Dempster et al., 1977; Banerjee et al., 2005), is among the most popular approaches to parameter estimation. However, as we show in Section 3, the likelihood function is unbounded from above, and consequently a global maximum likelihood estimate (MLE) fails to exist.
The unboundedness of the likelihood function occurs in various mixture modelling contexts, particularly for mixture models with location-scale family distributions, including the mixture of normal distributions (Ciuperca et al., 2003; Chen et al., 2008) and the mixture of Gamma distributions (Chen et al., 2016). Various approaches have been developed to tackle the likelihood degeneracy problem, including resorting to local estimates (Peters and Walker, 1978), compactification of the parameter space (Redner, 1981), and constrained maximization of the likelihood function (Hathaway, 1985).
An alternative solution to the likelihood degeneracy problem is to penalize the likelihood function so that the resulting penalized likelihood function is bounded and the existence of the penalized MLE is guaranteed. The penalized maximum likelihood approach has been applied to normal mixture models (Ciuperca et al., 2003; Chen et al., 2008) and to two-parameter Gamma mixture models (Chen et al., 2016). A penalized likelihood approach also has a Bayesian interpretation (Good and Gaskins, 1971; Ciuperca et al., 2003), whereby the penalized likelihood function corresponds to a posterior density and the penalized maximum likelihood solution to the maximum a posteriori estimate.
The penalized maximum likelihood approach was previously applied to the mixture of von Mises distributions (Chen et al., 2007), where consistency results were obtained. The von Mises distribution is a special case of the von Mises-Fisher distribution defined on the circle. We generalize the results in Chen et al. (2007) to spheres of arbitrary dimension. The consistency proof in Chen et al. (2007) relies heavily on univariate properties of the von Mises distribution, and generalizing the result to higher dimensions is not straightforward. In this paper, we prove a few useful technical lemmas before proving the main results. To handle the non-identifiability of mixture models, we use the framework of Redner (1981) to obtain consistency in the quotient space.
In this paper, we consider the penalized likelihood approach to tackle the problem of likelihood unboundedness for the mixture of vMF distributions. We incorporate a penalty term into the likelihood function and maximize the resulting penalized likelihood function. We study conditions on the penalty function that ensure consistency of the penalized maximum likelihood estimator (PMLE), and we develop an Expectation-Maximization algorithm to perform model estimation based on the penalized likelihood function. The rest of the paper is structured as follows. Section 2 introduces the background on vMF mixtures and the key notation used in the subsequent sections. The problem of likelihood degeneracy is formally presented in Section 3. Section 4 develops the penalized maximum likelihood approach and discusses conditions on the penalty function that ensure strong consistency of the resulting estimator; an Expectation-Maximization algorithm is also developed in Section 4, and its performance is examined in Section 5. Section 6 illustrates the proposed EM algorithm with a data application. We conclude the paper with a discussion section.
2 Background
The probability density function of a $d$-dimensional vMF distribution is given by

f(x; \mu, \kappa) = c_d(\kappa) \exp(\kappa \mu^\top x),    (1)

where $x$ is a $d$-dimensional unit vector (i.e., $x \in \mathbb{S}^{d-1} = \{x \in \mathbb{R}^d : \|x\| = 1\}$, where $\|\cdot\|$ is the Euclidean norm), $\mu \in \mathbb{S}^{d-1}$ is the mean direction, and $\kappa \geq 0$ is the concentration parameter. The normalizing constant $c_d(\kappa)$ has the form

c_d(\kappa) = \frac{\kappa^{d/2-1}}{(2\pi)^{d/2} I_{d/2-1}(\kappa)},

where $I_{d/2-1}$ is the modified Bessel function of the first kind of order $d/2 - 1$. The vMF distribution becomes increasingly concentrated at the mean direction $\mu$ as the concentration parameter $\kappa$ increases. The case $\kappa = 0$ corresponds to the uniform distribution on $\mathbb{S}^{d-1}$.
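As a concrete illustration, the following sketch (ours, not part of the paper) evaluates the vMF log-density in Python. It uses SciPy's exponentially scaled Bessel function `ive` so that the computation remains stable for large $\kappa$.

```python
import numpy as np
from scipy.special import gammaln, ive  # ive(nu, x) = I_nu(x) * exp(-x)

def vmf_log_density(x, mu, kappa):
    """Log-density of the vMF distribution at a unit vector x.

    Implements log f(x; mu, kappa) = log c_d(kappa) + kappa * mu'x,
    using log I_nu(kappa) = log ive(nu, kappa) + kappa for stability.
    """
    d = len(mu)
    nu = d / 2.0 - 1.0
    if kappa == 0.0:
        # Uniform density on the sphere: 1 / surface area of S^{d-1}.
        return gammaln(d / 2.0) - np.log(2.0) - (d / 2.0) * np.log(np.pi)
    log_c = (nu * np.log(kappa)
             - (d / 2.0) * np.log(2.0 * np.pi)
             - (np.log(ive(nu, kappa)) + kappa))
    return log_c + kappa * np.dot(mu, x)
```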
The probability density function of a mixture of vMF distributions with $k$ mixture components can be expressed as

f(x; G) = \sum_{h=1}^{k} \pi_h f(x; \mu_h, \kappa_h),    (2)

where $(\pi_1, \dots, \pi_k)$ are the mixing proportions with $\pi_h \geq 0$ and $\sum_{h=1}^{k} \pi_h = 1$, $(\mu_h, \kappa_h)$ are the parameters for the $h$-th component of the mixture, and $f(x; \mu_h, \kappa_h)$ is the vMF density function defined in (1).
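Building on the `vmf_log_density` sketch above, the mixture density (2) can be evaluated stably on the log scale via log-sum-exp; again, this is illustrative code rather than part of the paper.

```python
import numpy as np
from scipy.special import logsumexp

def mixture_log_density(x, pis, mus, kappas):
    """Log-density of a k-component vMF mixture at a unit vector x."""
    log_terms = [np.log(p) + vmf_log_density(x, mu, kappa)
                 for p, mu, kappa in zip(pis, mus, kappas)]
    return logsumexp(log_terms)
```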
We let $\Theta = \mathbb{S}^{d-1} \times [0, \infty)$ be the parameter space of the vMF distribution, with the metric $\rho$ defined as

\rho(\theta_1, \theta_2) = \|\mu_1 - \mu_2\| + |\kappa_1 - \kappa_2|    (3)

for $\theta_i = (\mu_i, \kappa_i) \in \Theta$. For any $\theta \in \Theta$, we write $f(\cdot; \theta)$ for the density function and $P_\theta$ for the corresponding measure. The space of mixing probabilities is denoted by $\Pi_k = \{(\pi_1, \dots, \pi_k) : \pi_h \geq 0, \sum_{h=1}^{k} \pi_h = 1\}$. A $k$-component mixture of vMF distributions can be expressed as $f(x; G) = \sum_{h=1}^{k} \pi_h f(x; \theta_h)$, where $(\pi_1, \dots, \pi_k) \in \Pi_k$ and $(\theta_1, \dots, \theta_k) \in \Theta^k$, and where $\Theta^k$ is the $k$-fold product of the parameter space. We define the product space $\Gamma = \Pi_k \times \Theta^k$, and we slightly abuse notation by letting $G$ denote both the mixing measure and the corresponding parameters in $\Gamma$. While $\Gamma$ is a natural parameterization of the family of mixtures of vMF distributions, elements of $\Gamma$ are not identifiable. Thus, we let $\tilde{\Gamma}$ be the quotient topological space obtained from $\Gamma$ by identifying all parameters whose corresponding densities are equal almost everywhere. For the rest of the paper, we assume that the number of mixture components $k$ is known.
3 Likelihood Degeneracy
We investigate the likelihood degeneracy problem of the vMF mixture model in this section. For any set of observations generated from a vMF mixture model with two or more mixture components, we show that the resulting likelihood function on the parameter space is unbounded above. As discussed in Section 1, likelihood degeneracy is a common problem for mixture models with location-scale distributions, including normal mixtures. In the case of normal mixture distributions, one can show that by setting the mean parameter of a mixture component equal to one of the observations and letting the variance of the same component converge to zero while holding the other parameters fixed, the likelihood function diverges to positive infinity (Chen et al., 2008).
For the vMF mixture distributions, the likelihood unboundedness can be best understood in the special case $d = 2$, that is, the mixture of von Mises distributions. The von Mises distribution, also known as the circular normal distribution, approaches a normal distribution as the concentration parameter $\kappa$ becomes large:

f(x; \mu, \kappa) \approx \frac{1}{\sigma\sqrt{2\pi}} \exp\Big( -\frac{(x-\mu)^2}{2\sigma^2} \Big)

with $\sigma^2 = 1/\kappa$, and the approximation converges uniformly as $\kappa$ goes to infinity. Therefore, the likelihood function of a mixture of von Mises distributions diverges to infinity when the mean parameter of a mixture component is set equal to one of the observations and the corresponding concentration parameter is sent to infinity.
We now consider the general case of the vMF mixture models. Let $x_1, \dots, x_n$ be observations generated from a mixture of vMF distributions with density function $f(x; G) = \sum_{h=1}^{k} \pi_h f(x; \mu_h, \kappa_h)$ with $k \geq 2$. The likelihood function can be expressed as

L_n(G) = \prod_{i=1}^{n} \sum_{h=1}^{k} \pi_h f(x_i; \mu_h, \kappa_h),

where $G = (\pi_1, \dots, \pi_k, \theta_1, \dots, \theta_k)$ and $\theta_h = (\mu_h, \kappa_h)$. We can show that by setting the mean direction of one of the mixture components equal to an arbitrary observation and letting the corresponding concentration parameter go to infinity, the resulting likelihood function diverges.
Theorem 3.1.
For any observations $x_1, \dots, x_n \in \mathbb{S}^{d-1}$, there exists a sequence $\{G_m\}_{m \geq 1} \subset \Gamma$ such that $L_n(G_m) \to \infty$ as $m \to \infty$.
The proof of Theorem 3.1 is provided in the Appendix. The unboundedness of the likelihood function on the parameter space implies that the maximum likelihood estimator is not well defined.
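The divergence can also be checked numerically with the helpers sketched in Section 2; the data, mixture size, and parameter values below are arbitrary illustrations, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Twenty unit vectors on S^2, drawn uniformly for illustration.
X = rng.normal(size=(20, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Pin the first component's mean direction at observation x_1 and let its
# concentration grow, holding all other parameters fixed.
mu2 = np.array([0.0, 0.0, 1.0])
for kappa1 in [1e1, 1e2, 1e3, 1e4]:
    loglik = sum(mixture_log_density(x, [0.5, 0.5], [X[0], mu2], [kappa1, 1.0])
                 for x in X)
    print(f"kappa_1 = {kappa1:8.0f}   log-likelihood = {loglik:10.2f}")

# The log-likelihood grows without bound: the pinned component's density at
# x_1 behaves like a constant multiple of kappa_1^{(d-1)/2} as kappa_1 -> inf,
# while the second component keeps every other term bounded away from zero.
```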
4 Penalized Maximum Likelihood Estimation
4.1 Preliminary
Let $G_0 \in \Gamma$ be the true mixing measure for the mixture of vMF distributions, with corresponding density function $f(\cdot; G_0)$ on $\mathbb{S}^{d-1}$. We let $M$ be the maximum of the true density $f(\cdot; G_0)$:

M = \max_{x \in \mathbb{S}^{d-1}} f(x; G_0),    (4)

and define the metric $d_S$ on $\mathbb{S}^{d-1}$ as the angle between two unit vectors, $d_S(x, y) = \arccos(x^\top y)$.
For any fixed $x_0 \in \mathbb{S}^{d-1}$ and positive number $r$, the $r$-ball in $\mathbb{S}^{d-1}$ centered at $x_0$ is defined as $B(x_0, r) = \{x \in \mathbb{S}^{d-1} : d_S(x, x_0) \leq r\}$. For any measurable set $A \subseteq \mathbb{S}^{d-1}$, the spherical measure of $A$ is given by $\nu(A) = \sigma(A)$, where $\sigma$ is the standard surface measure on $\mathbb{S}^{d-1}$.
For any $x_0$ and small positive number $r$, the measure of the ball $B(x_0, r)$ in $\mathbb{S}^{d-1}$ is given by (Li, 2011)

\nu(B(x_0, r)) = \sigma(\mathbb{S}^{d-2}) \int_0^r \sin^{d-2} t \, dt    (5)
\leq C_d \, r^{d-1},    (6)

where

C_d = \frac{\sigma(\mathbb{S}^{d-2})}{d-1},    (7)

and the inequality follows from $\sin t \leq t$.
We define the function $g$ by

g(r) = M C_d r^{d-1},    (8)
where the constants $M$ and $C_d$ are defined in Equations (4) and (7), respectively; $g(r)$ bounds the probability that an observation from the true density falls in any $r$-ball. The function $g$ plays a crucial role in Lemmas 1 and 2, which are analogous to Lemmas 1 and 2 in Chen et al. (2008). They provide (almost sure) upper bounds on the number of observations in a small ball in $\mathbb{S}^{d-1}$. The upper bound in Lemma 1 holds for each fixed $\kappa$ in an interval, whereas the upper bound in Lemma 2 holds uniformly over all $\kappa$ in the same interval. The proof of Lemma 1 is given in the Appendix. The proof of Lemma 2 is similar to that of Lemma 2 in Chen et al. (2008) and is omitted. Lemmas 1 and 2 are crucial for establishing consistency of the penalized maximum likelihood estimator.
We note that Lemmas 1 and 2 may be generalized by relaxing the assumption that the true density is a mixture of vMF densities; the vMF assumption does not play a crucial role. Such a generalization has been obtained for normal mixtures (Chen, 2017, Lemma 3.2). However, it is not required for the proof of our main result.
Lemma 1.
For any sufficiently small positive number $\delta$, as $n \to \infty$, and for each fixed $\kappa$ such that
the following inequalities hold except for a zero probability event:
(9)
Uniformly for all $\kappa$ such that
the following inequalities hold except for a zero probability event:
(10) |
Lemma 2.
For any sufficiently small positive number $\delta$, as $n \to \infty$, uniformly for all $\kappa$ such that
the following inequality holds except for a zero probability event:
(11) |
Uniformly for all $\kappa$ such that
the following inequalities hold except for a zero probability event:
(12) |
4.2 Penalized Maximum Likelihood Estimator
For any mixing measure $G \in \Gamma$ of a $k$-component mixture, and i.i.d. observations $x_1, \dots, x_n$, the penalized log-likelihood function is defined as

pl_n(G) = l_n(G) + p_n(G),    (13)

where $l_n(G)$ is the log-likelihood function

l_n(G) = \sum_{i=1}^{n} \log \Big( \sum_{h=1}^{k} \pi_h f(x_i; \mu_h, \kappa_h) \Big),

and $p_n$ is a penalty function that depends on the concentration parameters and the sample size $n$. Note that we slightly abuse notation and use $p_n$ both for the penalty on $G$ and for its component-wise counterpart below. We impose the following conditions on the penalty function $p_n$.
- C1: $p_n(G) = \sum_{h=1}^{k} p_n(\kappa_h)$, i.e., the penalty is additive over the component concentration parameters.
- C2: $\sup_{\kappa \geq 0} \max\{0, p_n(\kappa)\} = o(n)$, and for each fixed $\kappa \geq 0$, $p_n(\kappa) = o(n)$.
- C3: For large enough $n$, $p_n(\kappa) \leq -4 (\log n)^2 \log \kappa$ for all sufficiently large $\kappa$.
Conditions C1 - C3 on the penalty function are analogous to the three conditions proposed in Chen et al. (2008). Condition C1 assumes that the penalty function is of additive form. Condition C2 ensures that the penalty is not overly strong, while condition C3 allows the penalty to be severe when the concentration parameter is very large. Recall the true mixing measure $G_0$, and let $\hat{G}_n$ denote the maximizer of the penalized log-likelihood function defined in Equation (13). We have the following main result of this paper, demonstrating that the maximizer of the penalized log-likelihood function is strongly consistent.
Theorem 4.1.
Let $\hat{G}_n$ be the maximizer of the penalized log-likelihood $pl_n(G)$. Then $\hat{G}_n \to G_0$ almost surely in the quotient topological space $\tilde{\Gamma}$.
4.3 EM Algorithm
We develop an Expectation-Maximization algorithm to maximize the penalized log-likelihood function defined in Equation (13). By condition C1, the penalty function is assumed to have the form $p_n(G) = \sum_{h=1}^{k} p_n(\kappa_h)$. We consider $p_n$ to have the linear form $p_n(\kappa) = -\zeta_n \kappa$ for all $\kappa \geq 0$, where $\zeta_n$ is a constant that depends on the sample size $n$. In particular, we may set $\zeta_n = c/n$ for some constant $c > 0$, or $\zeta_n = \hat{v}_n / n$ where $\hat{v}_n$ is the sample circular variance.
The resulting penalty function clearly satisfies condition C2. We note that condition C3 is also satisfied since, for all sufficiently large $\kappa$,

-\zeta_n \kappa \leq -4 (\log n)^2 \log \kappa,

because the linear term eventually dominates the logarithmic one.
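A small sketch of the penalized objective, assuming the linear penalty $p_n(\kappa) = -\zeta_n \kappa$ with $\zeta_n = c/n$ (the constant $c$ here is an illustrative choice) and reusing `mixture_log_density` from Section 2:

```python
import numpy as np

def penalty(kappas, n, c=1.0):
    """Additive penalty p_n(G) = -zeta_n * sum_h kappa_h with zeta_n = c/n."""
    return -(c / n) * np.sum(kappas)

def penalized_log_likelihood(X, pis, mus, kappas, c=1.0):
    """pl_n(G) = l_n(G) + p_n(G), as in Equation (13)."""
    loglik = sum(mixture_log_density(x, pis, mus, kappas) for x in X)
    return loglik + penalty(kappas, len(X), c)
```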
The EM algorithm developed in Banerjee et al. (2005) can be easily modified to incorporate the additional penalty function. The E-step of the penalized EM algorithm involves computing the conditional probabilities

\gamma_{ih} = P(z_i = h \mid x_i) = \frac{\pi_h f(x_i; \mu_h, \kappa_h)}{\sum_{l=1}^{k} \pi_l f(x_i; \mu_l, \kappa_l)},    (14)

where $z_i$ is the latent variable denoting the cluster membership of the $i$-th observation.
where is the latent variable denoting the cluster membership of the th observation. For the M-step, using the method of Lagrange multipliers, we optimize the full conditional penalized log-likelihood function below
with respect to for , which gives:
(15) | |||||
(16) | |||||
(17) |
where $n_h = \sum_{i=1}^{n} \gamma_{ih}$. We note that the assumption on $\zeta_n$ implies that $\|r_h\| - \zeta_n > 0$ almost surely as $n \to \infty$. However, for a finite sample size, there is a non-zero possibility that $\|r_h\| \leq \zeta_n$, in which case the updating equation for $\kappa_h$ is not well defined, since the left hand side of Equation (17) is non-negative. However, the left hand side of Equation (17) is a strictly monotonically increasing function mapping $[0, \infty)$ onto $[0, 1)$ (Schou, 1978; Hornik and Grün, 2014), and in particular a unique solution exists whenever

0 \leq \frac{\|r_h\| - \zeta_n}{n_h} < 1.

Thus, we can simply set $\kappa_h = 0$ whenever $\|r_h\| \leq \zeta_n$. To solve Equation (17) for $\kappa_h$, various approximations have been proposed (Banerjee et al., 2005; Tanabe et al., 2007; Song et al., 2012). Section 2.2 of Hornik and Grün (2014) contains a detailed review of available approximations. We consider the approximation used in Banerjee et al. (2005):

\hat{\kappa}_h \approx \frac{\bar{r}_h d - \bar{r}_h^3}{1 - \bar{r}_h^2},

with

\bar{r}_h = \frac{\|r_h\| - \zeta_n}{n_h}.
We initialize the EM algorithm by randomly assigning the observations to the $k$ mixture components, and the algorithm is terminated when the change in the penalized log-likelihood falls below a small threshold.
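The following self-contained sketch puts the pieces together. It is our illustrative implementation of Equations (14)-(17), not the authors' code: the default $\zeta_n = 1/n$, the tolerance, and the random soft initialization (a slight departure from the hard random assignment described above, chosen for numerical robustness) are all assumptions.

```python
import numpy as np
from scipy.special import gammaln, ive, logsumexp

def fit_penalized_vmf_mixture(X, k, zeta=None, n_iter=200, tol=1e-6, seed=0):
    """Penalized EM for a k-component vMF mixture (illustrative sketch)."""
    n, d = X.shape
    zeta = 1.0 / n if zeta is None else zeta
    rng = np.random.default_rng(seed)
    gamma = rng.dirichlet(np.ones(k), size=n)   # random soft initialization
    nu = d / 2.0 - 1.0

    def log_c(kappa):
        # Log of the vMF normalizing constant c_d(kappa).
        if kappa == 0.0:
            return gammaln(d / 2.0) - np.log(2.0) - (d / 2.0) * np.log(np.pi)
        return (nu * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi)
                - (np.log(ive(nu, kappa)) + kappa))

    old_pl = -np.inf
    for _ in range(n_iter):
        # M-step, Equations (15)-(17).
        nh = gamma.sum(axis=0)
        pis = nh / n
        r = gamma.T @ X                          # weighted resultant vectors
        r_norm = np.linalg.norm(r, axis=1)
        mus = r / r_norm[:, None]
        rbar = (r_norm - zeta) / nh              # penalized RHS of (17)
        kappas = np.where(rbar > 0.0,
                          (rbar * d - rbar**3) / (1.0 - rbar**2),  # Banerjee et al. (2005)
                          0.0)                   # kappa_h = 0 when ||r_h|| <= zeta_n

        # E-step, Equation (14), computed on the log scale.
        log_post = np.array([[np.log(pis[h]) + log_c(kappas[h])
                              + kappas[h] * mus[h] @ x
                              for h in range(k)] for x in X])
        norm = logsumexp(log_post, axis=1)
        gamma = np.exp(log_post - norm[:, None])

        pl = norm.sum() - zeta * kappas.sum()    # penalized log-likelihood
        if abs(pl - old_pl) < tol:
            break
        old_pl = pl
    return pis, mus, kappas, pl
```

Calling `fit_penalized_vmf_mixture(X, k=2)` on row-normalized data returns the estimated mixing proportions, mean directions, concentrations, and the final penalized log-likelihood.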
5 Simulation Studies
We perform simulation studies to investigate the performance of the proposed EM algorithm for maximizing the penalized likelihood function. We generate data from mixtures of vMF distributions with two and three mixture components and with dimensions $d = 2, 3, 4$. For each model, data are generated with increasing sample sizes to assess the convergence of the estimated parameters toward the true parameters. The concentration parameters and the mixing proportions are pre-specified, whereas the mean directions are drawn from the uniform distribution on the surface of the unit hypersphere.
For both the two-component and the three-component models, the mixing proportions and the concentration parameters are fixed in advance. For illustrative purposes, we consider the linear penalty function of Section 4.3. For each combination of dimension $d$ and sample size $n$, we simulate 500 random samples from the model, and the EM algorithm developed in Section 4.3 is used to obtain the parameter estimates. We measure the distance between the estimated parameters and the true parameters for each random sample. For the mean direction parameters $\mu_h$, the distance is measured using the angle metric $d_S$ defined in Section 4.1.
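A sketch of one replication, using `scipy.stats.vonmises_fisher` (available in SciPy 1.11 and later) to draw the data; the particular mixing proportions and concentrations below are placeholders, since the exact values used in the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import vonmises_fisher  # SciPy >= 1.11

rng = np.random.default_rng(1)
d, n, k = 3, 500, 2

# Mean directions drawn uniformly on the sphere; pis and kappas are placeholders.
mus = rng.normal(size=(k, d))
mus /= np.linalg.norm(mus, axis=1, keepdims=True)
pis, kappas = [0.4, 0.6], [30.0, 10.0]

labels = rng.choice(k, size=n, p=pis)
X = np.vstack([vonmises_fisher(mus[z], kappas[z]).rvs(1, random_state=rng)
               for z in labels])

pis_hat, mus_hat, kappas_hat, _ = fit_penalized_vmf_mixture(X, k=k)
# Angles between each estimated mean direction and the nearest true one.
angles = np.arccos(np.clip(mus_hat @ mus.T, -1.0, 1.0)).min(axis=1)
print(pis_hat, kappas_hat, angles)
```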
Simulation results for the two- and three-component cases are presented in Tables 1 and 2, respectively. The average distance between the true and the estimated parameters over the 500 replications is reported, with the standard deviation in parentheses. We observe that the estimated parameters converge to the true parameters as $n$ increases. We also notice that a mean direction parameter can be estimated with higher precision when the corresponding concentration parameter is large. This is expected, since observations are more tightly clustered around the mean direction when the concentration parameter is large.
Tables 3 and 4 show the number of degeneracies encountered when running the EM algorithm for the ordinary MLE of mixtures of vMF distributions. Observations are generated from a mixture of vMF distributions with one mixture component for Table 3 and with two mixture components for Table 4. We vary the dimension of the data from $d = 3$ to $d = 4$ and the sample size from $n = 100$ to $n = 500$. Mixtures of vMF distributions with $k = 2, 3, 4, 5$ components are fitted to the generated data. We compute the ordinary MLE using the EM algorithm and record the number of times that the EM algorithm fails to converge out of 1000 simulation runs; the algorithm is considered to have failed to converge if one of the concentration parameter estimates becomes exceedingly large. From Tables 3 and 4, the EM algorithm tends to fail to converge for smaller sample sizes. We also note that when the fitted model has a larger number of mixture components $k$, the EM algorithm is more likely to fail to converge.
Table 1: Average distance (standard deviation in parentheses) between the true and estimated parameters for the two-component model, over 500 replications.

| d | n | π | μ_1 | μ_2 | κ_1 | κ_2 |
|---|------|---------------|---------------|---------------|---------------|---------------|
| 2 | 100  | 0.047 (0.050) | 0.035 (0.023) | 0.152 (0.124) | 2.488 (2.339) | 0.207 (0.159) |
| 2 | 500  | 0.026 (0.024) | 0.016 (0.010) | 0.071 (0.062) | 1.594 (1.181) | 0.081 (0.064) |
| 2 | 1000 | 0.022 (0.019) | 0.010 (0.007) | 0.046 (0.034) | 1.410 (1.037) | 0.078 (0.075) |
| 3 | 100  | 0.037 (0.034) | 0.048 (0.026) | 0.275 (0.154) | 2.175 (1.712) | 0.171 (0.143) |
| 3 | 500  | 0.025 (0.024) | 0.023 (0.013) | 0.126 (0.067) | 1.345 (0.894) | 0.098 (0.087) |
| 3 | 1000 | 0.022 (0.017) | 0.018 (0.009) | 0.085 (0.047) | 1.299 (0.680) | 0.068 (0.058) |
| 4 | 100  | 0.039 (0.033) | 0.075 (0.032) | 0.324 (0.229) | 1.623 (1.406) | 0.194 (0.122) |
| 4 | 500  | 0.019 (0.017) | 0.024 (0.013) | 0.161 (0.065) | 0.868 (0.518) | 0.103 (0.083) |
| 4 | 1000 | 0.018 (0.011) | 0.020 (0.011) | 0.142 (0.052) | 0.842 (0.431) | 0.060 (0.051) |
Table 2: Average distance (standard deviation in parentheses) between the true and estimated parameters for the three-component model, over 500 replications.

| d | n | π_1 | π_2 | μ_1 | μ_2 | μ_3 | κ_1 | κ_2 | κ_3 |
|---|------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| 2 | 100  | 0.071 (0.042) | 0.039 (0.026) | 0.046 (0.050) | 0.085 (0.091) | 0.327 (0.279) | 2.828 (2.571) | 2.016 (1.289) | 0.293 (0.302) |
| 2 | 500  | 0.058 (0.501) | 0.028 (0.023) | 0.039 (0.044) | 0.062 (0.061) | 0.209 (0.154) | 1.703 (1.616) | 1.514 (1.176) | 0.255 (0.202) |
| 2 | 1000 | 0.046 (0.032) | 0.025 (0.185) | 0.022 (0.036) | 0.040 (0.047) | 0.167 (0.125) | 1.431 (1.307) | 1.318 (0.892) | 0.209 (0.185) |
| 3 | 100  | 0.037 (0.034) | 0.041 (0.044) | 0.053 (0.028) | 0.113 (0.096) | 0.452 (0.258) | 1.717 (1.224) | 1.720 (1.050) | 0.249 (0.274) |
| 3 | 500  | 0.033 (0.024) | 0.031 (0.023) | 0.043 (0.037) | 0.067 (0.040) | 0.285 (0.138) | 1.120 (1.010) | 1.018 (0.914) | 0.206 (0.246) |
| 3 | 1000 | 0.026 (0.026) | 0.022 (0.021) | 0.024 (0.018) | 0.052 (0.029) | 0.255 (0.126) | 1.051 (0.806) | 1.039 (0.747) | 0.183 (0.138) |
| 4 | 100  | 0.051 (0.045) | 0.021 (0.017) | 0.073 (0.026) | 0.121 (0.058) | 0.417 (0.267) | 1.432 (1.207) | 1.356 (1.110) | 0.334 (0.260) |
| 4 | 500  | 0.030 (0.026) | 0.022 (0.016) | 0.031 (0.016) | 0.068 (0.028) | 0.313 (0.018) | 1.154 (0.873) | 1.088 (0.760) | 0.246 (0.209) |
| 4 | 1000 | 0.033 (0.027) | 0.021 (0.017) | 0.028 (0.015) | 0.059 (0.029) | 0.277 (0.163) | 1.100 (0.675) | 1.072 (0.736) | 0.227 (0.180) |
Table 3: Number of runs (out of 1000) in which the EM algorithm for the ordinary MLE failed to converge; data generated from a single vMF component and fitted with k-component mixtures.

| d | n | k = 2 | k = 3 | k = 4 | k = 5 |
|---|-----|-------|-------|-------|-------|
| 3 | 100 | 13 | 47 | 86 | 218 |
| 3 | 200 | 2  | 8  | 34 | 53  |
| 3 | 500 | 0  | 2  | 2  | 5   |
| 4 | 100 | 6  | 33 | 64 | 169 |
| 4 | 200 | 1  | 3  | 12 | 39  |
| 4 | 500 | 0  | 0  | 1  | 4   |
Table 4: Number of runs (out of 1000) in which the EM algorithm for the ordinary MLE failed to converge; data generated from a two-component vMF mixture and fitted with k-component mixtures.

| d | n | k = 2 | k = 3 | k = 4 | k = 5 |
|---|-----|-------|-------|-------|-------|
| 3 | 100 | 0 | 51 | 147 | 233 |
| 3 | 200 | 0 | 27 | 45  | 87  |
| 3 | 500 | 0 | 4  | 11  | 23  |
| 4 | 100 | 0 | 49 | 113 | 193 |
| 4 | 200 | 0 | 14 | 47  | 81  |
| 4 | 500 | 0 | 2  | 8   | 11  |
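The degeneracy counts can be reproduced in spirit with the sketch below, which reuses the fitting routine from Section 4.3 with the penalty switched off ($\zeta_n = 0$); the concentration of the generating component and the cutoff declaring a degeneracy are assumptions, as the paper's exact values are not recoverable here.

```python
import numpy as np
from scipy.stats import vonmises_fisher

def count_degeneracies(d, n, k_fit, n_runs=100, cutoff=1e8, seed=2):
    """Count runs where the unpenalized EM produces an exploding kappa."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d)
    mu[0] = 1.0
    failures = 0
    for _ in range(n_runs):
        # One true vMF component; kappa = 5 is an illustrative choice.
        X = vonmises_fisher(mu, 5.0).rvs(n, random_state=rng)
        _, _, kappas, _ = fit_penalized_vmf_mixture(X, k=k_fit, zeta=0.0)
        if not np.all(np.isfinite(kappas)) or np.max(kappas) > cutoff:
            failures += 1
    return failures
```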
6 Data Application
We illustrate the EM algorithm for the penalized log-likelihood using the household data set from the R package HSAUR3. The data set contains the household expenditures of 20 single men and 20 single women on four commodity groups. As in Hornik and Grün (2014), we focus on three of the commodity groups: housing, food, and service. The EM algorithms for the ordinary MLE and for the penalized MLE, with 2 and 3 mixture components, are applied to the data. The results are shown in Tables 5 and 6, respectively, where the estimated parameters are shown for all cases. The estimated parameters for the MLE and for the penalized MLE are very similar for both $k = 2$ and $k = 3$. The log-likelihood evaluated at the MLE is slightly larger than the penalized log-likelihood evaluated at the penalized MLE. More interestingly, we observe that in each case the largest concentration parameter obtained under the penalized MLE is smaller than that obtained under the MLE. This behavior suggests that the incorporation of a penalty function pulls the estimate of the largest concentration parameter towards 0 and prevents the divergence of the likelihood function.
Table 5: Estimated parameters for the ordinary MLE with k = 2 and k = 3 mixture components (mean direction coordinates: housing, food, service).

| k | π | housing | food | service | κ | log-likelihood |
|---|------|------|------|------|--------|--------|
| 2 | 0.47 | 0.95 | 0.13 | 0.27 | 114.70 | 113.08 |
|   | 0.53 | 0.67 | 0.63 | 0.40 | 17.96  |        |
| 3 | 0.52 | 0.95 | 0.15 | 0.27 | 83.26  | 126.06 |
|   | 0.13 | 0.67 | 0.31 | 0.68 | 181.21 |        |
|   | 0.35 | 0.59 | 0.76 | 0.28 | 62.91  |        |
Table 6: Estimated parameters for the penalized MLE with k = 2 and k = 3 mixture components (mean direction coordinates: housing, food, service).

| k | π | housing | food | service | κ | penalized log-likelihood |
|---|------|------|------|------|--------|--------|
| 2 | 0.47 | 0.95 | 0.13 | 0.27 | 112.20 | 112.94 |
|   | 0.53 | 0.67 | 0.63 | 0.40 | 18.48  |        |
| 3 | 0.52 | 0.95 | 0.15 | 0.27 | 82.97  | 125.26 |
|   | 0.13 | 0.67 | 0.31 | 0.68 | 165.71 |        |
|   | 0.35 | 0.59 | 0.76 | 0.28 | 62.69  |        |
7 Discussion
In this paper we considered a penalized maximum likelihood approach to the estimation of the mixture of vMF distributions. By incorporating a suitable penalty function, we showed that the resulting penalized MLE is strongly consistent. An EM algorithm was derived to maximize the penalized likelihood function, and its performance and behavior were examined using simulation studies and a data application. The techniques used in this work to prove consistency could be applied to study other mixture models for spherical observations.
8 Conflict of Interests
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Appendix A Proofs
A.1 Proof of Theorem 3.1
Proof.
We fix the parameters of all but one mixture component and construct a sequence $\{G_m\}$ such that $L_n(G_m) \to \infty$ as $m \to \infty$.
For and , we let , and . It is easy to verify that . For each , we let , and for , we let . Since , the likelihood is lower bounded by:
For the th observation, we have
For any , and , we have
Therefore, the likelihood function can be lower bounded by
Since , we have as . ∎
A.2 Technical Lemmas
The following technical lemmas are useful for the proofs of the main results. Lemma 3 gives the asymptotic expansion of the modified Bessel function of the first kind and is straightforward to derive.
Lemma 3.
Let $I_\nu$ be the modified Bessel function of the first kind of order $\nu$. As $\kappa \to \infty$, we have

I_\nu(\kappa) = \frac{e^{\kappa}}{\sqrt{2\pi\kappa}} \big( 1 + O(\kappa^{-1}) \big).
Lemma 4 concerns the covering of the surface of the unit hypersphere with $\delta$-balls and is needed for the proof of Lemma 1.

Lemma 4.

For any sufficiently small positive number $\delta$, there exist points $y_1, \dots, y_N \in \mathbb{S}^{d-1}$ with $N \leq C \delta^{-(d-1)}$, where $C$ is some constant which depends only on the dimension $d$, such that for any $x \in \mathbb{S}^{d-1}$, there exists $j$ with $d_S(x, y_j) < \delta$.
Proof.
Fix $\delta > 0$ and consider the open cover $\{B(x, \delta/2) : x \in \mathbb{S}^{d-1}\}$ of $\mathbb{S}^{d-1}$. By compactness of $\mathbb{S}^{d-1}$, there exists a finite subcover $B(y_1, \delta/2), \dots, B(y_N, \delta/2)$. Let $x \in \mathbb{S}^{d-1}$ be arbitrary. We must show that $d_S(x, y_j) < \delta$ for some $j$.

Since $\{B(y_j, \delta/2)\}$ covers $\mathbb{S}^{d-1}$, we must have $x \in B(y_j, \delta/2)$ for some $j$. Therefore,

d_S(x, y_j) \leq \delta/2 < \delta.

Hence, we have $d_S(x, y_j) < \delta$. Since $x$ is arbitrary, the result follows.

The statement that $N \leq C \delta^{-(d-1)}$ for some constant $C$ is clearly true for $d = 2$, and the general case holds using proof by induction with a geometric argument.
∎
A.3 Proof of Lemma 1
Proof.
Let $\delta$ be a small positive number. By Lemma 4, there exist $y_1, \dots, y_N$ with $N \leq C \delta^{-(d-1)}$ such that for any $x \in \mathbb{S}^{d-1}$, we have $d_S(x, y_j) < \delta$ for some $j$. Consequently, we have that

where $P_j$ is the probability that a random variable generated from the true distribution takes value in the ball $B(y_j, \delta)$. For each $j$, $P_j$ can be bounded above by

where we recall that the constants $M$ and $C_d$ are defined in Equations (4) and (7), respectively, and the function $g$ is defined in Equation (8). This implies that
(18) |
Define the quantities
For , by Bernstein’s inequality we have
(19)
We note that whenever . Letting in the inequality above, we obtain
Since implies for sufficiently large , we apply the union upper bound to obtain
(20) |
Combining the two inequalities (18) and (20), the first conclusion of the lemma follows by applying the Borel-Cantelli lemma.
For the second statement of the lemma, we observe that implies that . Let , for large enough , we have
Substituting into inequality (19) gives
The second conclusion of the lemma follows from an application of the Borel-Cantelli lemma.
∎
A.4 Proof of Strong Consistency of PMLE
Proof.
We prove the strong consistency of the PMLE for the case of two mixture components. The proof of strong consistency for the general case of $k$ mixture components is analogous but significantly more tedious. We briefly outline the key steps of the proof for the $k$-component case, which also follow the lines of Section 3.3 of Chen et al. (2008).
Recall that a two-component mixing measure $G$ has the form $G = (\pi, \theta_1, \theta_2)$ with $\theta_j = (\mu_j, \kappa_j)$, and the corresponding penalized log-likelihood function is given by

pl_n(G) = l_n(G) + p_n(\kappa_1) + p_n(\kappa_2).
Let $E_0$ denote the expectation under the true probability measure. We follow the strategy in Chen et al. (2008) and divide the parameter space into three regions $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$, defined through constraints on the concentration parameters.

We require the thresholds defining these regions to be sufficiently large; their exact magnitudes are specified later. We will show that the penalized MLE almost surely does not lie in $\Gamma_1$ or $\Gamma_2$. Therefore, $\hat{G}_n$ must lie in $\Gamma_3$, and its consistency follows from Theorem 5 of Redner (1981).
We first consider the region $\Gamma_1$ and define two index sets $A_1$ and $A_2$, which consist of the observations that are very close to $\mu_1$ and $\mu_2$, respectively. We separately assess the likelihood contributions of the observations in $A_1$ and $A_2$ in Lemmas 5 and 6.
By Lemmas 5 and 6, the maximizer of $pl_n(G)$ is almost surely in $\Gamma_3$. We want to apply Theorem 5 of Redner (1981) to conclude the strong consistency of the estimator $\hat{G}_n$. First, it is clear from the definition of the metric $\rho$ in Equation (3) that $\bar{\Gamma}_3$ is a compact subset of $\Gamma$ containing $G_0$. For any $G \in \bar{\Gamma}_3$, we can choose $\varepsilon$ sufficiently small such that for all $G'$ within distance $\varepsilon$ of $G$, the density is bounded. Thus, we have

for some constant. Therefore, Assumption 2a of Redner (1981) is satisfied. For any $G \in \bar{\Gamma}_3$, since the density is bounded, we have

Therefore, Assumption 4 of Redner (1981) is also satisfied. We conclude by applying Theorem 5 of Redner (1981) that $\hat{G}_n \to G_0$ almost surely in the quotient space $\tilde{\Gamma}$.
We outline the key steps of the proof for the general case of $k$ mixture components, which follows the same strategy as the proof for the two-component case. We divide the parameter space into regions
for , and .
Given suitable constraints on the defining thresholds, an extension of Lemma 5 shows that

and an extension of Lemma 6 shows that

Therefore, the probability that $\hat{G}_n$, the maximizer of $pl_n(G)$, belongs to the first regions goes to zero. With $\hat{G}_n$ almost surely in the remaining region, which is a compact subset of $\Gamma$, we can apply Theorem 5 of Redner (1981) to conclude the strong consistency of $\hat{G}_n$.
∎
Lemma 5.
Proof.
The log-likelihood contribution of the observations in an index set is given by

For any observation in $A_1$, its likelihood contribution is bounded above by $c_d(\kappa_1) e^{\kappa_1}$, and by the asymptotic expansion of the modified Bessel function of the first kind in Lemma 3, we have

c_d(\kappa_1) e^{\kappa_1} \leq C \kappa_1^{(d-1)/2}    (21)

for some constant $C$. Consequently, the log-likelihood of the observations in $A_1$ is bounded above by
where $|A_1|$ is the number of observations in $A_1$. By Lemma 2, for
is almost surely bounded above by
Therefore, recalling that , can be bounded above by:
(22)
where is some constant, and the last inequality follows from . For
we have almost surely by Lemma 2. Therefore, with condition C3 on the penalty function $p_n$, almost surely
(23) |
The two bounds (22) and (23) can be combined to form
(24) |
The same approach can be used to derive the same bound for the observations in $A_2$:
(25) |
For any observation that falls outside both $A_1$ and $A_2$, the observation is bounded away from both mean directions. Using a Taylor expansion around 0, we can show that, for such observations,

Consequently, recalling the inequality (21), and for large enough concentration parameters, the log-likelihood contribution of such an observation is bounded above by

For large enough $n$, this must hold almost surely. Therefore, almost surely the log-likelihood of the observations outside $A_1 \cup A_2$ is bounded above by
(26) |
For sufficiently large $n$, the following inequalities hold
Therefore, combining the bounds (24), (25), (26), the penalized log-likelihood can be bounded above by
By the strong law of large numbers, we have almost surely. Therefore,
almost surely.
∎
Lemma 6.
Proof.
To establish a similar result for the region $\Gamma_2$, we define
We note that the first part of the RHS above is not a vMF density, and is well defined as . Straightforward calculation shows that for all , we have
Therefore, by Jensen’s inequality, for all ,
where we recall that is the true density function. Since is bounded and is continuous in almost surely w.r.t. , it follows that
(27) |
where is an increasing function of . Hence, we can find such that
(28) |
For any observation in , its log-likelihood contribution is no larger than
For sufficiently large $n$, the log-likelihood contribution of any observation not in the index set is no more than the above bound. This follows since, for large enough $n$,
Therefore, the penalized log-likelihood is almost surely bounded above by
where $\kappa_0$ denotes the vector of the concentration parameters of the true measure $G_0$, the second inequality follows from (24) and (27), and the last inequality follows from (28).
∎
References
- Bagchi and Kadane (1991) Bagchi, P. and Kadane, J. B. (1991), “Laplace approximations to posterior moments and marginal distributions on circles, spheres, and cylinders,” Can. J. Stat., 19, 67–77.
- Banerjee et al. (2005) Banerjee, A., Dhillon, I. S., Ghosh, J., and Sra, S. (2005), “Clustering on the Unit Hypersphere using von Mises-Fisher Distributions.” J. Mach. Learn. Res., 6, 1345–1382.
- Bangert et al. (2010) Bangert, M., Hennig, P., and Oelfke, U. (2010), “Using an Infinite Von Mises-Fisher Mixture Model to Cluster Treatment Beam Directions in External Radiation Therapy,” in 2010 Ninth International Conference on Machine Learning and Applications, pp. 746–751.
- Boomsma et al. (2006) Boomsma, W., Kent, J., Mardia, K., Taylor, C., and Hamelryck, T. (2006), “Graphical models and directional statistics capture protein structure,” in Interdisciplinary Statistics and Bioinformatics, Leeds University Press, pp. 91–94.
- Chen (2017) Chen, J. (2017), “Consistency of the MLE under mixture models,” Statist. Sci., 32, 47–63.
- Chen et al. (2007) Chen, J., Li, P., and Tan, X. (2007), “Inference for von Mises mixture in mean directions and concentration parameters,” Journal of Systems Science and Mathematical Sciences, 27, 59–67.
- Chen et al. (2016) Chen, J., Li, S., and Tan, X. (2016), “Consistency of the penalized MLE for two-parameter gamma mixture models,” Sci. China Math., 59, 2301–2318.
- Chen et al. (2008) Chen, J., Tan, X., and Zhang, R. (2008), “Inference for normal mixtures in mean and variance,” Statist. Sinica, 18, 443–465.
- Chen et al. (2013) Chen, L., Singh, V. P., Guo, S., Fang, B., and Liu, P. (2013), “A new method for identification of flood seasons using directional statistics,” Hydrological Sciences Journal, 58, 28–40.
- Ciuperca et al. (2003) Ciuperca, G., Ridolfi, A., and Idier, J. (2003), “Penalized maximum likelihood estimator for normal mixtures,” Scand. J. Statist., 30, 45–59.
- Dempster et al. (1977) Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977), “Maximum likelihood from incomplete data via the EM algorithm,” J. Roy. Statist. Soc. Ser. B, 39, 1–38, with discussion.
- Fisher (1953) Fisher, R. (1953), “Dispersion on a sphere,” Proc. Roy. Soc. London Ser. A, 217, 295–305.
- Good and Gaskins (1971) Good, I. J. and Gaskins, R. A. (1971), “Nonparametric roughness penalties for probability densities,” Biometrika, 58, 255–277.
- Hathaway (1985) Hathaway, R. J. (1985), “A constrained formulation of maximum-likelihood estimation for normal mixture distributions,” Ann. Statist., 13, 795–800.
- Hornik and Grün (2014) Hornik, K. and Grün, B. (2014), “movMF: An R Package for Fitting Mixtures of von Mises-Fisher Distributions,” Journal of Statistical Software, 58, 1–31.
- Jupp (1995) Jupp, P. E. (1995), “Some applications of directional statistics to astronomy,” in New trends in probability and statistics, Vol. 3 (Tartu/Pühajärve, 1994), VSP, Utrecht, pp. 123–133.
- Li (2011) Li, S. (2011), “Concise Formulas for the Area and Volume of a Hyperspherical Cap,” Asian Journal of Mathematics & Statistics, 4, 66–70.
- Mardia (1972) Mardia, K. V. (1972), Statistics of directional data, Academic Press, London-New York, probability and Mathematical Statistics, No. 13.
- Mardia and Jupp (2000) Mardia, K. V. and Jupp, P. E. (2000), Directional statistics, Wiley Series in Probability and Statistics, John Wiley & Sons, Ltd., Chichester.
- Mcgraw et al. (2006) Mcgraw, T., Vemuri, B., Yezierski, B., and Mareci, T. (2006), “Von Mises-Fisher mixture model of the diffusion ODF,” IEEE International Symposium on Biomedical Imaging, 2006, 65–68.
- Ng and Kwong (2021) Ng, T. L. J. and Kwong, K.-K. (2021), “Universal approximation on the hypersphere,” Communications in Statistics - Theory and Methods, 0, 1–11.
- Peters and Walker (1978) Peters, B. C., Jr. and Walker, H. F. (1978), “An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions,” SIAM J. Appl. Math., 362–378.
- Redner (1981) Redner, R. (1981), “Note on the consistency of the maximum likelihood estimate for nonidentifiable distributions,” Ann. Statist., 9, 225–228.
- Schou (1978) Schou, G. (1978), “Estimation of the concentration parameter in von Mises-Fisher distributions,” Biometrika, 65, 369–377.
- Sinkkonen and Kaski (2002) Sinkkonen, J. and Kaski, S. (2002), “Clustering based on conditional distributions in an auxiliary space,” Neural Comput., 14, 217–239.
- Song et al. (2012) Song, H., Liu, J., and Wang, G. (2012), “High-order parameter approximation for von Mises-Fisher distributions,” Appl. Math. Comput., 218, 11880–11890.
- Taghia et al. (2014) Taghia, J., Ma, Z., and Leijon, A. (2014), “Bayesian Estimation of the von-Mises Fisher Mixture Model with Variational Inference,” IEEE Trans. Pattern Anal. Mach. Intell., 36, 1701–1715.
- Tanabe et al. (2007) Tanabe, A., Fukumizu, K., Oba, S., Takenouchi, T., and Ishii, S. (2007), “Parameter estimation for von Mises-Fisher distributions,” Comput. Statist., 22, 145–157.
- Zhe et al. (2019) Zhe, X., Chen, S., and Yan, H. (2019), “Directional statistics-based deep metric learning for image classification and retrieval,” Pattern Recognit., 93, 113–123.