Empirical Likelihood Method for Complete Independence Test on High Dimensional Data
Abstract
Given a random sample of size $n$ from a $p$-dimensional random vector, where both $n$ and $p$ are large, we are interested in testing whether the components of the random vector are mutually independent. This is the so-called complete independence test. In the multivariate normal case, it is equivalent to testing whether the correlation matrix is an identity matrix. In this paper, we propose a one-sided empirical likelihood method for the complete independence test for multivariate normal data based on squared sample correlation coefficients. The limiting distribution for our one-sided empirical likelihood test statistic is proved to be $Z^2 I(Z > 0)$ when both $n$ and $p$ tend to infinity, where $Z$ is a standard normal random variable. In order to improve the power of the empirical likelihood test statistic, we also introduce a rescaled empirical likelihood test statistic. We carry out an extensive simulation study to compare the performance of the rescaled empirical likelihood method and two other statistics which are related to the sum of squared sample correlation coefficients.
keywords:
Empirical likelihood; complete independence test; high dimension; multivariate normal distribution

1 Introduction
Statistical inference on high dimensional data has gained a wide range of applications in recent years. New techniques generate a vast collection of data sets with high dimensions, for example, trading data from financial markets, social network data, and biological data like microarray and DNA data. The dimension of these types of data is not small compared with the sample size, and is typically of the same order as the sample size or even larger. Yet classical multivariate statistics usually deal with data from normal distributions with a large sample size $n$ and a fixed dimension $p$, and one can easily find some classic treatments in reference books such as Anderson [1], Morrison [10] and Muirhead [11].
Under multivariate normality settings, the likelihood ratio test statistic converges in distribution to a chi-squared distribution when the dimension $p$ is fixed. However, when $p$ changes with $n$ and tends to infinity, this conclusion is no longer true, as discovered in Bai et al. [2], Jiang et al. [6], Jiang and Yang [8], Jiang and Qi [7], and Qi et al. [16], among others. The results in these papers indicate that the chi-square approximation fails when $p$ diverges as $n$ goes to infinity.
The test of complete independence of a random vector is to test whether all the components of the random vector are mutually independent. In the multivariate normal case, the test of complete independence is equivalent to testing whether the covariance matrix is a diagonal matrix, or whether the correlation matrix is the identity matrix.
For more details, we assume $X = (X_1, \ldots, X_p)$ is a random vector with a $p$-dimensional multivariate normal distribution $N_p(\mu, \Sigma)$, where $\mu$ denotes the mean vector, and $\Sigma$ is a covariance matrix. Given a random sample of size $n$ from the normal distribution, $x_k = (x_{k1}, \ldots, x_{kp})$ for $1 \le k \le n$, Pearson's correlation coefficient between the $i$-th and $j$-th components is given by
$$r_{ij} = \frac{\sum_{k=1}^n (x_{ki} - \bar{x}_i)(x_{kj} - \bar{x}_j)}{\sqrt{\sum_{k=1}^n (x_{ki} - \bar{x}_i)^2}\,\sqrt{\sum_{k=1}^n (x_{kj} - \bar{x}_j)^2}}, \tag{1}$$
where $\bar{x}_i = \frac{1}{n}\sum_{k=1}^n x_{ki}$ and $\bar{x}_j = \frac{1}{n}\sum_{k=1}^n x_{kj}$ for $1 \le i, j \le p$. Now we set $R_n = (r_{ij})_{p \times p}$ as the sample correlation coefficient matrix.
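For concreteness, $R_n$ and the squared coefficients $r_{ij}^2$ used throughout the paper can be computed directly in R, the language used for the simulations in Section 3; the variable names in this small sketch are ours:

```r
# x is an n-by-p data matrix whose rows are the observations x_1, ..., x_n.
n <- 50; p <- 10
x <- matrix(rnorm(n * p), nrow = n, ncol = p)

# cor() returns the p-by-p sample correlation matrix R_n = (r_ij),
# each entry being Pearson's coefficient in (1).
Rn <- cor(x)

# Squared coefficients r_ij^2, one value per pair i < j.
r2 <- Rn[upper.tri(Rn)]^2
```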
The complete independence test for the normal random vector $X$ is
$$H_0:\ R = I_p \quad\text{vs}\quad H_a:\ R \ne I_p, \tag{2}$$
where $R$ is the population correlation matrix of $X$ and $I_p$ is the $p \times p$ identity matrix. When $p < n$, the likelihood ratio test statistic for (2) is a function of $|R_n|$, the determinant of $R_n$, from Bartlett [3] or Morrison [10]. In traditional multivariate analysis, when $p$ is a fixed integer, we have under the null hypothesis in (2) that
$$-\Big(n - 1 - \frac{2p + 5}{6}\Big)\log|R_n| \xrightarrow{d} \chi^2_{p(p-1)/2} \quad\text{as } n \to \infty,$$
where $\chi^2_f$ denotes a chi-square distribution with $f$ degrees of freedom.
When $p = p_n$ depends on $n$ with $p_n \to \infty$ and $p_n < n$, the likelihood ratio method can still be applied to test (2). The limiting distributions of the likelihood ratio test statistics in this case have been discussed in the aforementioned papers. It is worth mentioning that Qi et al. [16] propose an adjusted likelihood ratio test statistic and show that its distribution can be well approximated by a chi-squared distribution whose number of degrees of freedom depends on $n$ and $p$, regardless of whether $p$ is fixed or divergent.
The limitation of the likelihood ratio test is that the dimension $p$ of the data must be smaller than the sample size $n$. Many other likelihood tests related to the sample covariance matrix or sample correlation matrix have the same problem, as the sample covariance matrices are degenerate when $p > n$. In order to relax this constraint, a new test statistic using the sum of squared sample correlation coefficients is proposed by Schott [17] as follows:
$$t_{np} = \sum_{1 \le i < j \le p} r_{ij}^2.$$
Assume that the null hypothesis of (2) holds. Under the assumption $\lim_{n\to\infty} p/n = \gamma \in (0, \infty)$, Schott [17] proves that $t_{np} - \frac{p(p-1)}{2(n-1)}$ converges in distribution to a normal distribution with mean $0$ and variance $\gamma^2$, that is,
$$\frac{t_{np} - \frac{p(p-1)}{2(n-1)}}{\sigma_{np}} \xrightarrow{d} N(0, 1), \tag{3}$$
where $\sigma^2_{np} = \frac{p(p-1)(n-2)}{(n-1)^2(n+1)}$.
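As an illustration, Schott's test with the normal approximation (3) and the rejection region (4) given below can be sketched in R as follows, treating the expression for $\sigma^2_{np}$ above as given; the function name is ours:

```r
# Schott's test: standardize t_np by its null mean p(p-1)/(2(n-1))
# and standard deviation sigma_np, then compare with z_alpha.
schott_test <- function(x, alpha = 0.05) {
  n <- nrow(x); p <- ncol(x)
  Rn <- cor(x)
  t_np <- sum(Rn[upper.tri(Rn)]^2)          # sum of r_ij^2 over i < j
  mu_np <- p * (p - 1) / (2 * (n - 1))
  sigma_np <- sqrt(p * (p - 1) * (n - 2) / ((n - 1)^2 * (n + 1)))
  z <- (t_np - mu_np) / sigma_np
  list(statistic = z, reject = z >= qnorm(1 - alpha))
}
```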
Recently, Mao [9] proposes a different test for complete independence. His test statistic is closely related to Schott's test and is defined by
$$T_{np} = \sum_{1 \le i < j \le p} \frac{r_{ij}^2}{1 - r_{ij}^2}.$$
It has been proved in Mao [9] that $T_{np}$, properly centered and normalized, is asymptotically normal under the null hypothesis of (2) and the assumption that $p/n \to \gamma \in (0, \infty)$.
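Mao's statistic admits an equally short sketch in R (again, the function name is ours):

```r
# Mao's statistic: sum of r_ij^2 / (1 - r_ij^2) over pairs i < j.
mao_stat <- function(x) {
  Rn <- cor(x)
  r2 <- Rn[upper.tri(Rn)]^2
  sum(r2 / (1 - r2))
}
```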
Very recently, Chang and Qi [4] investigate the limiting distributions for the two test statistics above under less restrictive conditions on $n$ and $p$. Chang and Qi [4] show that (3) is also valid under the general condition that $p = p_n \to \infty$ as $n \to \infty$, regardless of the convergence rate of $p/n$. Thus, the normal approximation in (3) based on $t_{np}$ yields an approximate level-$\alpha$ rejection region
$$\Big\{\, t_{np} \ge \frac{p(p-1)}{2(n-1)} + z_\alpha\,\sigma_{np} \Big\}, \tag{4}$$
where $z_\alpha$ is a level-$\alpha$ critical value of the standard normal distribution.
Furthermore, Chang and Qi [4] propose adjusted test statistics whose distribution can be fitted by a chi-squared distribution regardless of how $p$ changes with $n$, as long as $p$ is large. Chang and Qi's [4] adjusted test statistic matches the first two moments of the standardized statistic in (3) with those of a chi-square distribution with $\frac{p(p-1)}{2}$ degrees of freedom and is defined as
$$t^{adj}_{np} = \frac{p(p-1)}{2} + \sqrt{p(p-1)}\;\frac{t_{np} - \frac{p(p-1)}{2(n-1)}}{\sigma_{np}}. \tag{5}$$
Chang and Qi show that
$$\sup_x\Big| P\big(t^{adj}_{np} \le x\big) - P\big(\chi^2_{p(p-1)/2} \le x\big) \Big| \to 0$$
as long as $p = p_n \to \infty$ as $n \to \infty$. Let $\chi^2_{p(p-1)/2,\,\alpha}$ denote the level-$\alpha$ critical value of $\chi^2_{p(p-1)/2}$. Then an approximate level-$\alpha$ rejection region based on $t^{adj}_{np}$ is given by
$$\big\{\, t^{adj}_{np} \ge \chi^2_{p(p-1)/2,\,\alpha} \big\}. \tag{6}$$
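Assuming the moment-matching form of (5) reconstructed above, the adjusted test can be sketched as:

```r
# Chang and Qi's adjusted statistic: map the standardized Schott statistic
# onto the scale of a chi-square with f = p(p-1)/2 degrees of freedom.
cq_adjusted_test <- function(x, alpha = 0.05) {
  n <- nrow(x); p <- ncol(x)
  Rn <- cor(x)
  t_np <- sum(Rn[upper.tri(Rn)]^2)
  f <- p * (p - 1) / 2
  sigma_np <- sqrt(p * (p - 1) * (n - 2) / ((n - 1)^2 * (n + 1)))
  t_adj <- f + sqrt(2 * f) * (t_np - f / (n - 1)) / sigma_np
  list(statistic = t_adj, reject = t_adj >= qchisq(1 - alpha, df = f))
}
```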
In practice, the assumption of normality may be violated. Now we assume $X = (X_1, \ldots, X_p)$ is a random vector whose components are identically distributed with distribution function $F$. Given a random sample of size $n$, $x_k = (x_{k1}, \ldots, x_{kp})$ for $1 \le k \le n$, drawn from the distribution of $X$, define Pearson's correlation coefficients $r_{ij}$'s as in (1). By using the Stein method, Chen and Shao [5] show that (3) holds under some moment conditions on $F$ if $p/n$ is bounded.
In this paper, we propose to apply the empirical likelihood method to the testing problem (2). The empirical likelihood is a nonparametric statistical method proposed by Owen [12, 13], originally used to test the mean vector of a population based on a set of independent and identically distributed (i.i.d.) random variables. Empirical likelihood does not require specifying the family of distributions for the data, and it possesses some good properties of likelihood methods.
The rest of the paper is organized as follows. In Section 2, we first introduce a one-sided empirical likelihood method for the mean of a set of random variables with a common mean, and then establish the connection between the test of complete independence and the one-sided empirical likelihood method. Our main result concerning the limiting distribution of the one-sided empirical likelihood ratio statistic is also given in Section 2. In Section 3, we carry out a simulation study to compare the performance of the empirical likelihood method with the normal approximation based on Schott's test statistic and the chi-square approximation based on Chang and Qi's adjusted test statistic. In our simulation study, we also apply these methods to some other distributions, such as the exponential distribution and mixtures of the exponential and normal distributions, so as to compare their adaptability to non-normality. The proofs of the main results are given in Section 4.
2 Main Results
In this section, we apply the empirical likelihood method to the test of complete independence. First, we assume $X = (X_1, \ldots, X_p)$ is a random vector with a $p$-dimensional multivariate normal distribution. Under the null hypothesis of (2), $\{r_{ij}^2,\ 1 \le i < j \le p\}$ are random variables from an identical distribution with mean $\frac{1}{n-1}$. As a matter of fact, it follows from Corollary 5.1.2 in Muirhead [11] that $r_{ij}^2$ has the same distribution as $\frac{t^2}{t^2 + n - 2}$ under the null hypothesis of (2), where $t$ is a random variable having a $t$-distribution with $n - 2$ degrees of freedom. The $r_{ij}^2$'s are also asymptotically independent if the sample size is large. We will develop a one-sided empirical likelihood test statistic and apply it to the data set $\{r_{ij}^2,\ 1 \le i < j \le p\}$, where $p = p_n$ is a sequence of positive integers such that $p_n \to \infty$ as $n \to \infty$. As an extension, we then consider the case when $X$ is a random vector with an identical marginal distribution function which is not necessarily Gaussian. When the components of $X$ are independent, we demonstrate that the empirical likelihood method we develop under normality works for general distributions as well if some additional conditions are satisfied.
2.1 One-sided empirical likelihood test
Consider a random sample of size $n$, namely $y_1, y_2, \ldots, y_n$. Assume the sample comes from a population with mean $\theta$ and variance $\sigma^2$. The empirical likelihood function for the mean is defined as
$$L(\mu) = \sup\Big\{\prod_{i=1}^n np_i:\ p_1 \ge 0, \ldots, p_n \ge 0,\ \sum_{i=1}^n p_i = 1,\ \sum_{i=1}^n p_i y_i = \mu\Big\}. \tag{7}$$
The function $L(\mu)$ is well defined if $\mu$ belongs to the convex hull given by
$$\Big(\min_{1\le i\le n} y_i,\ \max_{1\le i\le n} y_i\Big);$$
otherwise, set $L(\mu) = 0$. We see that $L(\mu) \le 1$.

Assume $\min_{1\le i\le n} y_i < \mu < \max_{1\le i\le n} y_i$. By the standard Lagrange multiplier technique, the supremum on the right-hand side of (7) is achieved at
$$p_i = \frac{1}{n\big(1 + \lambda(y_i - \mu)\big)}, \quad 1 \le i \le n, \tag{8}$$
where $\lambda = \lambda(\mu)$ is the solution to the equation $g(\lambda) = 0$, with $g$ defined as follows:
$$g(\lambda) = \frac{1}{n}\sum_{i=1}^n \frac{y_i - \mu}{1 + \lambda(y_i - \mu)}. \tag{9}$$
Assume the $y_i$'s are not all equal. When $\min_{1\le i\le n} y_i < \mu < \max_{1\le i\le n} y_i$, the function $g$ defined in (9) is strictly decreasing for $\lambda \in \big(-\frac{1}{\max_i y_i - \mu},\ \frac{1}{\mu - \min_i y_i}\big)$, the range over which all the weights in (8) remain positive, and $g$ diverges to $+\infty$ and $-\infty$ at the left and right endpoints, respectively. A solution to $g(\lambda) = 0$ in this range exists, and the solution must be unique.
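Numerically, $\lambda$ can be obtained by root finding on this interval; a minimal R sketch (the function name and tolerances are ours):

```r
# Solve g(lambda) = 0 in (9) for mu inside the convex hull of y.
el_lambda <- function(y, mu) {
  d <- y - mu
  stopifnot(min(y) < mu, mu < max(y))
  # Endpoints of the range on which all weights 1/(n(1 + lambda*d)) > 0.
  lo <- -1 / max(d) + 1e-10
  hi <-  1 / (-min(d)) - 1e-10
  g <- function(lam) mean(d / (1 + lam * d))   # strictly decreasing here
  lam <- uniroot(g, c(lo, hi), tol = 1e-12)$root
  list(lambda = lam, weights = 1 / (length(y) * (1 + lam * d)))
}
```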
Proposition 2.1.
Assume $y_1, \ldots, y_n$ are observations with $y_i \ne y_j$ for some $1 \le i < j \le n$. Then $\log L(\mu)$ is strictly concave in $\mu \in \big(\min_{1\le i\le n} y_i,\ \max_{1\le i\le n} y_i\big)$, and $\sup_{\mu} L(\mu) = L(\bar{y}_n) = 1$, where $\bar{y}_n = \frac{1}{n}\sum_{i=1}^n y_i$.
Remark. The results in Proposition 2.1 are well known among researchers in the area of empirical likelihood methods. A short proof will be given in Section 4 for completeness.
Consider the following two-sided test problem:
$$H_0:\ \mu = \mu_0 \quad\text{vs}\quad H_a:\ \mu \ne \mu_0.$$
Since $\sup_\mu L(\mu) = 1$ by Proposition 2.1, the empirical likelihood ratio is given by
$$R(\mu_0) = \frac{L(\mu_0)}{\sup_\mu L(\mu)} = L(\mu_0) = \prod_{i=1}^n \frac{1}{1 + \lambda_0(y_i - \mu_0)},$$
where $\lambda_0$ is the solution to the following equation:
$$\frac{1}{n}\sum_{i=1}^n \frac{y_i - \mu_0}{1 + \lambda(y_i - \mu_0)} = 0.$$
Therefore, the log-empirical likelihood test statistic is given by
$$\ell_n(\mu_0) = -2\log L(\mu_0) = 2\sum_{i=1}^n \log\big(1 + \lambda_0(y_i - \mu_0)\big). \tag{10}$$
It is proved in Owen [14] that $\ell_n(\mu_0)$ converges in distribution to a chi-square distribution with one degree of freedom if $y_1, y_2, \ldots$ are i.i.d. random variables with mean $\mu_0$ and a finite second moment.
Our interest here is to consider a one-sided test:
$$H_0:\ \mu = \mu_0 \quad\text{vs}\quad H_a:\ \mu > \mu_0. \tag{11}$$
According to Proposition 2.1, $L(\mu)$ is increasing in $\mu \in \big(\min_i y_i,\ \bar{y}_n\big]$ and decreasing in $\mu \in \big[\bar{y}_n,\ \max_i y_i\big)$, which implies $\sup_{\mu \le \mu_0} L(\mu) = L(\mu_0)$ if $\bar{y}_n > \mu_0$, and $\sup_{\mu \le \mu_0} L(\mu) = 1$ otherwise. Therefore, the empirical likelihood ratio corresponding to test (11) is
$$R^*(\mu_0) = \frac{\sup_{\mu \le \mu_0} L(\mu)}{\sup_{\mu} L(\mu)} = \begin{cases} L(\mu_0), & \text{if } \bar{y}_n > \mu_0, \\ 1, & \text{if } \bar{y}_n \le \mu_0. \end{cases}$$
Then the log-empirical likelihood test statistic for test (11) is
$$\ell_n^* = -2\log R^*(\mu_0) = \ell_n(\mu_0)\, I(\bar{y}_n > \mu_0), \tag{12}$$
where $\ell_n(\mu_0)$ is defined in (10) and $I(\cdot)$ denotes the indicator function.
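In code, the one-sided statistic (12) only adds an indicator check to the two-sided statistic (10); a sketch building on `el_lambda` above:

```r
# One-sided log-EL statistic: 0 when the sample mean does not exceed mu0,
# +Inf when mu0 falls at or below min(y) (so that L(mu0) = 0), and the
# two-sided statistic (10) otherwise.
el_one_sided <- function(y, mu0) {
  if (mean(y) <= mu0) return(0)
  if (mu0 <= min(y)) return(Inf)
  lam <- el_lambda(y, mu0)$lambda
  2 * sum(log(1 + lam * (y - mu0)))
}
```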
2.2 Empirical likelihood method for testing complete independence
Let $r$ denote the sample Pearson correlation coefficient based on a random sample of size $n$ from a bivariate normal distribution with correlation coefficient $\rho$. From Muirhead [11], page 156,
$$E(r^2) = 1 - \frac{n-2}{n-1}\,(1 - \rho^2)\, {}_2F_1\Big(1, 1; \frac{n+1}{2}; \rho^2\Big),$$
where
$${}_2F_1(a, b; c; x) = \sum_{k=0}^\infty \frac{(a)_k (b)_k}{(c)_k}\,\frac{x^k}{k!}$$
is the hypergeometric function, $(a)_k = a(a+1)\cdots(a+k-1) = \frac{\Gamma(a+k)}{\Gamma(a)}$, and $\Gamma$ is the gamma function. It is easy to check that when $\rho = 0$, ${}_2F_1\big(1, 1; \frac{n+1}{2}; 0\big) = 1$, and $E(r^2) = \frac{1}{n-1}$; when $\rho \ne 0$, $(1 - \rho^2)\,{}_2F_1\big(1, 1; \frac{n+1}{2}; \rho^2\big) < 1$, and thus, $E(r^2) > \frac{1}{n-1}$.
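Both facts are easy to confirm by simulation; for instance, in R:

```r
# Monte Carlo check: E(r^2) = 1/(n-1) when rho = 0, E(r^2) > 1/(n-1) otherwise.
set.seed(1)
n <- 20
r2_mean <- function(rho) mean(replicate(1e4, {
  z <- matrix(rnorm(2 * n), n, 2)
  z[, 2] <- rho * z[, 1] + sqrt(1 - rho^2) * z[, 2]  # corr(z1, z2) = rho
  cor(z[, 1], z[, 2])^2
}))
c(r2_mean(0), 1 / (n - 1), r2_mean(0.3))  # first two agree; third is larger
```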
First, we assume $X = (X_1, \ldots, X_p)$ is a random vector with a $p$-dimensional multivariate normal distribution $N_p(\mu, \Sigma)$. Review the sample correlation coefficients $r_{ij}$ defined in (1). Denote the correlation matrix of $X$ by $R = (\rho_{ij})_{p \times p}$. From the above discussion, we have that under the null hypothesis of (2), $E(r_{ij}^2) = \frac{1}{n-1}$ for all $1 \le i < j \le p$; under the alternative of (2), $E(r_{ij}^2) \ge \frac{1}{n-1}$ for all $1 \le i < j \le p$, and at least one of the inequalities is strict. We see that test (2) is equivalent to the following one-tailed test
$$H_0:\ \mu_r = \frac{1}{n-1} \quad\text{vs}\quad H_a:\ \mu_r > \frac{1}{n-1},$$
where $\mu_r = \frac{2}{p(p-1)}\sum_{1\le i<j\le p} E(r_{ij}^2)$ denotes the average of the expected squared sample correlation coefficients. Under the null hypothesis of (2), $\{r_{ij}^2,\ 1 \le i < j \le p\}$ are identically distributed with mean $\frac{1}{n-1}$ and variance $\frac{2(n-2)}{(n-1)^2(n+1)}$. We also notice from Chang and Qi [4] that the $r_{ij}^2$'s behave as if they were independent and identically distributed. For these reasons, we propose a one-sided empirical likelihood ratio test as follows.
Rewrite $\{r_{ij}^2,\ 1 \le i < j \le p\}$ as $\{y_1, \ldots, y_N\}$, where $N = \frac{p(p-1)}{2}$. Then $y_1, \ldots, y_N$ are asymptotically i.i.d. with mean $\mu_0 = \frac{1}{n-1}$. Define the one-sided log-empirical likelihood ratio test statistic as in (12), with the sample size $n$ there replaced by $N$ and with $\mu_0 = \frac{1}{n-1}$, or equivalently
$$\ell_n^* = 2\sum_{i=1}^N \log\big(1 + \lambda(y_i - \mu_0)\big)\, I(\bar{y}_N > \mu_0), \tag{13}$$
where $\lambda$ is the solution to the equation
$$\frac{1}{N}\sum_{i=1}^N \frac{y_i - \mu_0}{1 + \lambda(y_i - \mu_0)} = 0,$$
and $\bar{y}_N = \frac{1}{N}\sum_{i=1}^N y_i$.
Our first result on the empirical likelihood method for testing complete independence under normality is as follows.
Theorem 1.
Assume $p = p_n \to \infty$ as $n \to \infty$. Then $\ell_n^* \xrightarrow{d} Z^2 I(Z > 0)$ as $n \to \infty$ under the null hypothesis of (2), where $Z$ is a standard normal random variable.
Let $\Phi(x)$ denote the cumulative distribution function of the standard normal distribution, i.e.,
$$\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt.$$
Let $F(x)$ denote the cumulative distribution function of $Z^2 I(Z > 0)$. Then
$$F(x) = P\big(Z^2 I(Z > 0) \le x\big) = \Phi(\sqrt{x}), \quad x \ge 0.$$
Therefore, for any $\alpha \in (0, 1/2)$, a level-$\alpha$ critical value of $Z^2 I(Z > 0)$ is given by $z_\alpha^2$, where $z_\alpha = \Phi^{-1}(1 - \alpha)$ is a level-$\alpha$ critical value for the standard normal distribution. Based on Theorem 1, a level-$\alpha$ rejection region for the test (11) is
$$\big\{\ell_n^* \ge z_\alpha^2\big\}. \tag{14}$$
Here we only consider $\alpha \in (0, 1/2)$ because $\ell_n^*$ is nonnegative, $z_\alpha > 0$ if $\alpha < 1/2$, and $z_\alpha \le 0$ if $\alpha \ge 1/2$.
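Putting the pieces together, the test (14) can be sketched as follows, building on the functions above (`el_independence_test` is our name):

```r
# One-sided EL test for complete independence: apply the one-sided
# statistic to the N = p(p-1)/2 squared correlations with mu0 = 1/(n-1),
# and reject when it reaches the critical value z_alpha^2.
el_independence_test <- function(x, alpha = 0.05) {
  n <- nrow(x)
  Rn <- cor(x)
  y <- Rn[upper.tri(Rn)]^2
  stat <- el_one_sided(y, mu0 = 1 / (n - 1))
  list(statistic = stat, reject = stat >= qnorm(1 - alpha)^2)
}
```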
Now we consider the general case when $X = (X_1, \ldots, X_p)$ is a random vector with independent and identically distributed components. The one-sided empirical likelihood test statistic $\ell_n^*$ based on $\{r_{ij}^2,\ 1 \le i < j \le p\}$ is defined as in (13). The limiting distribution for $\ell_n^*$ is the same as that under normality.
Theorem 2.
Assume $X_1, \ldots, X_p$ are independent and identically distributed and satisfy the moment conditions imposed in Chen and Shao [5]. If $p = p_n \to \infty$ as $n \to \infty$ and $p/n$ is bounded, then $\ell_n^* \xrightarrow{d} Z^2 I(Z > 0)$ as $n \to \infty$.
Compared with Theorem 1, $p$ in Theorem 2 is restricted to a smaller range, and it can be at most of the same order as $n$.
To demonstrate the performance of the empirical likelihood method and two other test statistics, we conduct a numerical study. Our simulation study indicates that the empirical likelihood test (14) based on $\ell_n^*$ maintains a very stable size, or type I error. In terms of size, $\ell_n^*$ is more accurate than $t_{np}$ and $t^{adj}_{np}$. Most of the time, $t_{np}$ and $t^{adj}_{np}$ have slightly larger sizes than $\ell_n^*$ when the nominal level is $0.05$, and their powers are also slightly larger than that of $\ell_n^*$ in our simulation study. For simplicity, the simulation results on $\ell_n^*$ are not shown in this paper.
In order to balance the size and power of the empirical likelihood method, we introduce a rescaled empirical likelihood statistic, $\ell_n^{**}$, defined as follows:
$$\ell_n^{**} = \frac{s_N^2}{\sigma_n^2}\,\ell_n^*, \quad\text{where } s_N^2 = \frac{1}{N}\sum_{i=1}^N (y_i - \mu_0)^2 \text{ and } \sigma_n^2 = \frac{2(n-2)}{(n-1)^2(n+1)}. \tag{15}$$
Under the conditions of Theorem 1 or 2, $\ell_n^*$ and $\ell_n^{**}$ have the same limiting distribution, that is,
$$\ell_n^{**} \xrightarrow{d} Z^2 I(Z > 0) \quad\text{as } n \to \infty, \tag{16}$$
provided that
$$\frac{s_N^2}{\sigma_n^2} \overset{p}{\to} 1 \quad\text{as } n \to \infty. \tag{17}$$
This equation will be verified in Section 4. Based on (16), a level-$\alpha$ test rejects the complete independence if $\ell_n^{**}$ falls into the rejection region
$$\big\{\ell_n^{**} \ge z_\alpha^2\big\}. \tag{18}$$
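Assuming the variance-rescaled form of (15) given above, the rescaled test is a small modification of the sketch following (14):

```r
# Rescaled one-sided EL statistic: multiply the statistic in (13) by
# s_N^2 / sigma_n^2, where sigma_n^2 is the null variance of r_ij^2.
el_rescaled_test <- function(x, alpha = 0.05) {
  n <- nrow(x)
  Rn <- cor(x)
  y <- Rn[upper.tri(Rn)]^2
  mu0 <- 1 / (n - 1)
  s2 <- mean((y - mu0)^2)
  sigma2 <- 2 * (n - 2) / ((n - 1)^2 * (n + 1))
  stat <- (s2 / sigma2) * el_one_sided(y, mu0)
  list(statistic = stat, reject = stat >= qnorm(1 - alpha)^2)  # region (18)
}
```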
3 Simulation
In this section, we consider the following three test statistics for testing complete independence (2): Schott's test statistic $t_{np}$ given in (3), Chang and Qi's adjusted test statistic $t^{adj}_{np}$ defined in (5), and the rescaled empirical likelihood test statistic $\ell_n^{**}$ given in (15). The corresponding rejection regions are given in (4), (6), and (18), respectively. All simulations are implemented in the software R.

For sample sizes $n = 20, 50, 100$ and dimensions $p = 10, 20, 50, 100$, we apply the three test statistics to each of five distributions for 10000 iterations to obtain the empirical sizes and the empirical powers of the tests. We set the nominal type I error at $\alpha = 0.05$. The five distributions include the normal, the uniform, the exponential, the mixture of the normal and exponential distributions, and the sum of the normal and exponential distributions.
To control the dependence structure, we introduce a covariance matrix $\Sigma(\rho)$ defined by
$$\Sigma(\rho) = (1 - \rho)\, I_p + \rho\, \mathbf{1}_p \mathbf{1}_p^\top, \quad \rho \in [0, 1), \tag{19}$$
which is also a correlation matrix: its diagonal entries are $1$ and its off-diagonal entries are $\rho$. In our simulation study, we generate random samples from the distribution of a random vector $Y = (Y_1, \ldots, Y_p)$ with covariance matrix or correlation matrix $\Sigma(\rho)$. For details, see the five distributions described below. For all distributions we consider, the observations have independent components when $\rho = 0$ and positively dependent components when $\rho > 0$. We choose two very small positive values of $\rho$, denoted by $\rho_1 < \rho_2$ below. When the value of $\rho$ is large, the resulting powers for all three methods will be too close to $1$, and the comparison is meaningless. Therefore, based on 10000 replicates, the sizes for the three test statistics are estimated when $\rho = 0$, and their powers are estimated when $\rho = \rho_1$ and $\rho = \rho_2$. All results are reported in Tables 1 to 5.
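For instance, one setting of the normal case (distribution a below) can be sketched as follows; the function name, and the restriction to $\ell_n^{**}$ alone, are our choices:

```r
# Estimate the size (rho = 0) or power (rho > 0) of the rescaled EL test
# from `reps` samples drawn from N_p(0, Sigma(rho)) with Sigma(rho) in (19).
simulate_size_power <- function(n, p, rho, reps = 1e4, alpha = 0.05) {
  U <- chol((1 - rho) * diag(p) + rho * matrix(1, p, p))  # Sigma = U'U
  mean(replicate(reps, {
    x <- matrix(rnorm(n * p), n, p) %*% U   # rows ~ N_p(0, Sigma(rho))
    el_rescaled_test(x, alpha)$reject
  }))
}
# Example: simulate_size_power(50, 20, 0) should be close to 0.05.
```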
a. Normal Distribution
The observations are drawn from a multivariate normal random vector with mean $\mathbf{0}$ and covariance matrix $\Sigma(\rho)$ specified in (19). The results on the empirical sizes and powers are given in Table 1.
b. Uniform Distribution
We first generate i.i.d. random variables $U_0, U_1, \ldots, U_p$ from the Uniform$(0, 1)$ distribution, then set $Y_i = \sqrt{\rho}\, U_0 + \sqrt{1 - \rho}\, U_i$, $1 \le i \le p$. It is easy to verify that the random vector $(Y_1, \ldots, Y_p)$ has mean $\frac{\sqrt{\rho} + \sqrt{1 - \rho}}{2}\,\mathbf{1}_p$ and correlation matrix $\Sigma(\rho)$ as defined in (19). The results on the empirical sizes and powers are given in Table 2.
c. Exponential Distribution
We generate i.i.d. random variables $W_0, W_1, \ldots, W_p$ from the unit exponential distribution, then define $Y_i = \sqrt{\rho}\, W_0 + \sqrt{1 - \rho}\, W_i$, $1 \le i \le p$. The random vector $(Y_1, \ldots, Y_p)$ has correlation matrix $\Sigma(\rho)$ as defined in (19) for $\rho \in [0, 1)$. The results on the empirical sizes and powers are given in Table 3.
d. Mixture of Normal and Exponential Distributions
The random vector is sampled from a mixture of the normal and exponential distributions: with 90% probability it is drawn from the multivariate normal distribution with mean $\mathbf{0}$ and covariance matrix $\Sigma(\rho)$ given in (19), and with 10% probability from the random vector $\big(\sqrt{\rho}\, W_0 + \sqrt{1 - \rho}\, W_i,\ 1 \le i \le p\big)$ defined in part c, where $W_0, W_1, \ldots, W_p$ are i.i.d. unit exponential random variables. The results on the empirical sizes and powers are given in Table 4.
e. Sum of Normal and Exponential Distribution
The random vector is a weighted sum of two independent random vectors $Z$ and $W$, namely $Y = \frac{1}{\sqrt{2}}(Z + W)$, where $Z$ is from a multivariate normal distribution with mean $\mathbf{0}$ and covariance matrix $\Sigma(\rho)$ defined in (19), and $W = \big(\sqrt{\rho}\, W_0 + \sqrt{1 - \rho}\, W_i,\ 1 \le i \le p\big)$ with the $W_i$'s being i.i.d. unit exponential random variables. The results on the empirical sizes and powers are given in Table 5.
From the simulation results, the empirical sizes for all three tests are close to $0.05$, which is the nominal type I error we set in the simulation, especially when both $n$ and $p$ are large. The test statistic $t^{adj}_{np}$ has the smallest size in most cases, and it is a little bit conservative sometimes. The size of $\ell_n^{**}$ is typically between that of $t^{adj}_{np}$ and $t_{np}$, and $t_{np}$ and $\ell_n^{**}$ are comparable for most combinations of $n$ and $p$.

As we expect, the powers of all three test statistics become higher as $\rho$ grows larger. The increase in $n$ also brings about an increase in power, but not as much as the increase in $p$ does, because $N = p(p-1)/2$ is the number of $r_{ij}^2$'s involved in the test. All test statistics achieve high power when both $p$ and $\rho$ are large. The three test statistics result in comparable powers in general, although the power of Chang and Qi's test statistic is occasionally a little bit lower than those of the other two test statistics. These differences may be due to the fact that Chang and Qi's test statistic maintains a lower type I error.

In summary, in this paper, we have developed the one-sided empirical likelihood method and proposed the rescaled empirical likelihood test statistic for testing complete independence for high-dimensional random vectors. The rescaled empirical likelihood test statistic performs very well in terms of size and power and can serve as a good alternative to the existing test statistics in the literature.
4 Proofs
Proof of Proposition 2.1. To prove the strict concavity of $\log L(\mu)$, we need to show that for any $\mu_1 \ne \mu_2$ with $L(\mu_1) > 0$ and $L(\mu_2) > 0$, and any $c \in (0, 1)$,
$$\log L\big(c\mu_1 + (1-c)\mu_2\big) > c\log L(\mu_1) + (1-c)\log L(\mu_2). \tag{20}$$
Since $L(\mu_j) > 0$ for $j = 1, 2$, we have $L(\mu_j) = \prod_{i=1}^n np_i^{(j)}$, where the weights $p_i^{(j)}$, $1 \le i \le n$, are determined by (8) and (9) with $\mu$ being replaced by $\mu_j$, for $j = 1, 2$.
For every $1 \le i \le n$, set $q_i = cp_i^{(1)} + (1-c)p_i^{(2)}$. Then $q_i > 0$ for $1 \le i \le n$, $\sum_{i=1}^n q_i = 1$, and $\sum_{i=1}^n q_i y_i = c\mu_1 + (1-c)\mu_2$. Since $\log x$ is strictly concave in $x > 0$, we have
$$\log q_i \ge c\log p_i^{(1)} + (1-c)\log p_i^{(2)}, \quad 1 \le i \le n,$$
and at least one of the inequalities is strict, i.e., $\log q_i > c\log p_i^{(1)} + (1-c)\log p_i^{(2)}$ for some $i$, since $\mu_1 \ne \mu_2$ implies $p_i^{(1)} \ne p_i^{(2)}$ for some $i$. Therefore, we get
$$\sum_{i=1}^n \log(nq_i) > c\sum_{i=1}^n \log\big(np_i^{(1)}\big) + (1-c)\sum_{i=1}^n \log\big(np_i^{(2)}\big),$$
which implies
$$\log L\big(c\mu_1 + (1-c)\mu_2\big) \ge \sum_{i=1}^n \log(nq_i) > c\log L(\mu_1) + (1-c)\log L(\mu_2),$$
proving (20).
When $\mu = \bar{y}_n$, an obvious solution to (9) is $\lambda = 0$. Since the solution to (9) is unique, we see that $p_i = \frac{1}{n}$ for $1 \le i \le n$, and thus, $L(\bar{y}_n) = 1$. We also notice that
$$\sup\Big\{\prod_{i=1}^n np_i:\ p_1 \ge 0, \ldots, p_n \ge 0,\ \sum_{i=1}^n p_i = 1\Big\} = 1.$$
The last step is obtained by using the Lagrange multipliers. We omit the details here. Therefore, we conclude that $\sup_\mu L(\mu) = L(\bar{y}_n) = 1$. This completes the proof of Proposition 2.1.
Proof of Theorem 1. Rewrite $\{r_{ij}^2,\ 1 \le i < j \le p\}$ as $\{y_1, \ldots, y_N\}$ with $N = \frac{p(p-1)}{2}$, as in Section 2.2. Define $\bar{y}_N = \frac{1}{N}\sum_{i=1}^N y_i$ and $s_N^2 = \frac{1}{N}\sum_{i=1}^N (y_i - \mu_0)^2$. Review that $\mu_0 = \frac{1}{n-1}$, and set $\sigma_n^2 = \mathrm{Var}(y_1)$. We have $E(y_i) = \mu_0$ under the null hypothesis of (2). Since the distribution of the $y_i$'s depends on $n$, $\{y_i,\ 1 \le i \le N\}$ forms an array of random variables.

If the following three conditions are satisfied, stated equivalently in terms of the $r_{ij}$'s:

- (C1). $\dfrac{\max_{1\le i<j\le p}|r_{ij}^2 - \mu_0|}{N^{1/2}\sigma_n} \overset{p}{\to} 0$ as $n \to \infty$;

- (C2). $\dfrac{1}{N\sigma_n^2}\sum_{1\le i<j\le p}\big(r_{ij}^2 - \mu_0\big)^2 \overset{p}{\to} 1$ as $n \to \infty$;

- (C3). $\dfrac{1}{\sqrt{N}\,\sigma_n}\sum_{1\le i<j\le p}\big(r_{ij}^2 - \mu_0\big) \xrightarrow{d} N(0, 1)$ as $n \to \infty$,

we can follow the same procedure as in Owen [14] or use Theorem 6.1 in Peng and Schick [15] to conclude that
$$\ell_n(\mu_0) = \frac{N(\bar{y}_N - \mu_0)^2}{s_N^2} + o_p(1),$$
where $\ell_n(\mu_0)$ is defined in (10) with $\mu_0 = \frac{1}{n-1}$. Again, by using condition (C3), we have
$$\ell_n^* = \ell_n(\mu_0)\, I(\bar{y}_N > \mu_0) \xrightarrow{d} Z^2 I(Z > 0).$$
Now we will verify conditions (C1), (C2) and (C3). Condition (C3) has been proved by Chang and Qi [4], as we indicated below equation (3).
Assume $(i, j)$ is a pair of integers with $1 \le i < j \le p$. It is proved in Schott [17] that, under the null hypothesis of (2),
$$E(r_{ij}^2) = \frac{1}{n-1}, \quad E(r_{ij}^4) = \frac{3}{(n-1)(n+1)}, \quad E(r_{ij}^8) = \frac{105}{(n-1)(n+1)(n+3)(n+5)}. \tag{21}$$
Consequently,
$$\sigma_n^2 = \mathrm{Var}(r_{ij}^2) = \frac{3}{(n-1)(n+1)} - \frac{1}{(n-1)^2} = \frac{2(n-2)}{(n-1)^2(n+1)}, \tag{22}$$
and, for some constant $C > 0$,
$$E(r_{ij}^2 - \mu_0)^4 \le 8\big(E(r_{ij}^8) + \mu_0^4\big) \le \frac{C}{n^4}. \tag{23}$$
Now we can verify condition (C1). By use of Chebyshev's inequality and equations (21)–(23),
$$P\Big(\max_{1\le i<j\le p}|r_{ij}^2 - \mu_0| > \varepsilon N^{1/2}\sigma_n\Big) \le N\cdot\frac{E(r_{12}^2 - \mu_0)^4}{\varepsilon^4 N^2 \sigma_n^4} \le \frac{C_1}{\varepsilon^4 N} \to 0$$
as $n \to \infty$ for every $\varepsilon > 0$, where $C_1$ is a constant. This implies $\max_{1\le i<j\le p}|r_{ij}^2 - \mu_0| = o_p\big(N^{1/2}\sigma_n\big)$. Hence, we have
$$\frac{\max_{1\le i<j\le p}|r_{ij}^2 - \mu_0|}{N^{1/2}\sigma_n} \overset{p}{\to} 0,$$
proving condition (C1).
Below we will use $(i, j)$ and $(k, l)$ to denote two pairs of integers with $1 \le i < j \le p$ and $1 \le k < l \le p$. It follows from Theorem 2 in Veleva and Ignatov [18] that $\{r_{ij},\ 1 \le i < j \le p\}$ are pairwise independent, that is, if $(i, j) \ne (k, l)$, then $r_{ij}$ and $r_{kl}$ are independent. Thus we have
$$E(s_N^2) = \frac{1}{N}\sum_{1\le i<j\le p} E(r_{ij}^2 - \mu_0)^2 = \sigma_n^2.$$
Since $s_N^2 - \sigma_n^2 = \frac{1}{N}\sum_{1\le i<j\le p}\big\{(r_{ij}^2 - \mu_0)^2 - \sigma_n^2\big\}$, we have
$$E\big(s_N^2 - \sigma_n^2\big)^2 = \frac{1}{N^2}\sum_{(i,j)}\sum_{(k,l)} E\Big[\big\{(r_{ij}^2 - \mu_0)^2 - \sigma_n^2\big\}\big\{(r_{kl}^2 - \mu_0)^2 - \sigma_n^2\big\}\Big].$$
We can classify the summands within the double summation above into two classes: terms in class 1 when $(i, j) \ne (k, l)$ and terms in class 2 when $(i, j) = (k, l)$. We see that
$$E\Big[\big\{(r_{ij}^2 - \mu_0)^2 - \sigma_n^2\big\}\big\{(r_{kl}^2 - \mu_0)^2 - \sigma_n^2\big\}\Big] = 0$$
if $(i, j) \ne (k, l)$ by using the pairwise independence, and
$$E\big\{(r_{ij}^2 - \mu_0)^2 - \sigma_n^2\big\}^2 \le E(r_{ij}^2 - \mu_0)^4 \le \frac{C}{n^4}$$
if $(i, j) = (k, l)$, where the bound $C/n^4$ is given by (23). Therefore, we have
$$E\big(s_N^2 - \sigma_n^2\big)^2 \le \frac{C}{Nn^4}.$$
In view of (21), (22) and (23), some tedious calculation shows that
$$E\Big(\frac{s_N^2}{\sigma_n^2} - 1\Big)^2 \le \frac{C_2}{N},$$
which implies
$$\frac{s_N^2}{\sigma_n^2} \overset{p}{\to} 1$$
as $n \to \infty$, and thus condition (C2) holds. The proof of Theorem 1 is complete.
Proof of Theorem 2. Theorem 2 can be proved by using arguments similar to those in the proof of Theorem 1. We will continue to use the notation defined in the proof of Theorem 1.
Under the conditions in Theorem 2, Chen and Shao [5] have obtained the following results:
$$E(r_{ij}^2) = \mu_0\big(1 + O(n^{-1})\big), \qquad \sigma_n^2 = \mathrm{Var}(r_{ij}^2) = \frac{2 + o(1)}{n^2}, \qquad E(r_{ij}^8) \le \frac{C}{n^4}, \tag{24}$$
where $\mu_0 = \frac{1}{n-1}$ and $C$ is a constant. It follows from the inequality $(a + b)^4 \le 8(a^4 + b^4)$ that
$$E(r_{ij}^2 - \mu_0)^4 \le 8\big(E(r_{ij}^8) + \mu_0^4\big) \le \frac{C_1}{n^4} \tag{25}$$
for some constant $C_1 > 0$.
We need to verify conditions (C1), (C2), and (C3) as given in the proof of Theorem 1. Condition (C1) can be verified similarly by using the estimates of moments in (24) and (25), and condition (C3) follows from the central limit theorem (3), which is true under the conditions of the theorem by virtue of Theorem 2.2 in Chen and Shao [5].
Now we proceed to verify condition (C2). First, we have
$$E(s_N^2) = \frac{1}{N}\sum_{1\le i<j\le p} E(r_{ij}^2 - \mu_0)^2 = \sigma_n^2\big(1 + o(1)\big) \tag{26}$$
from (24). Then
$$E\big(s_N^2 - E(s_N^2)\big)^2 = \frac{1}{N^2}\sum_{(i,j)}\sum_{(k,l)} \mathrm{Cov}\big((r_{ij}^2 - \mu_0)^2,\ (r_{kl}^2 - \mu_0)^2\big).$$
Considering the summands within the double summation above, we see that there are $\frac{p(p-1)}{2}\cdot\frac{(p-2)(p-3)}{2}$ pairs of sets $\{i, j\}$ and $\{k, l\}$ which are disjoint. For these pairs,
$$\mathrm{Cov}\big((r_{ij}^2 - \mu_0)^2,\ (r_{kl}^2 - \mu_0)^2\big) = 0,$$
because $r_{ij}$ and $r_{kl}$ are functions of disjoint sets of independent components and hence are independent. For all other pairs, whose total number is of order $O(p^3)$, the corresponding summands are dominated by
$$\big\{E(r_{ij}^2 - \mu_0)^4\, E(r_{kl}^2 - \mu_0)^4\big\}^{1/2} \le \frac{C_1}{n^4}$$
from the Cauchy–Schwarz inequality and equation (25). Therefore, we have
$$E\big(s_N^2 - E(s_N^2)\big)^2 \le \frac{O(p^3)}{N^2}\cdot\frac{C_1}{n^4} = O\Big(\frac{1}{pn^4}\Big) = o\big(\sigma_n^4\big)$$
as $n \to \infty$. In the estimation above, we also use the fact that $\liminf_{n\to\infty} n^2\sigma_n^2 > 0$ from (24). Therefore, we have
$$\frac{s_N^2 - E(s_N^2)}{\sigma_n^2} \overset{p}{\to} 0$$
as $n \to \infty$. This yields $\frac{s_N^2}{E(s_N^2)} \overset{p}{\to} 1$, which together with (26) implies condition (C2). This completes the proof of the theorem.
Acknowledgements
The authors would like to thank the Editor, Associate Editor, and referee for reviewing the manuscript and providing valuable comments. The research of Yongcheng Qi was supported in part by NSF Grant DMS-1916014.
Disclosure statement
No potential conflict of interest was reported by the authors.
References
- [1] Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis, 2nd Ed. John Wiley & Sons.
- [2] Bai, Z., Jiang, D., Yao, J. and Zheng, S. (2009). Corrections to LRT on large dimensional covariance matrix by RMT. Annals of Statistics, 37(6B), 3822-3840.
- [3] Bartlett, M. S. (1954). A note on multiplying factors for various chi-squared approximations. J. Royal Stat. Soc., Ser. B 16, 296-298.
- [4] Chang, S. and Qi, Y. (2018). On Schott’s and Mao’s test statistics for independence of normal random vectors. Statistics and Probability Letters 140, 132-141.
- [5] Chen, Y. and Shao, Q. M. (2012). Berry-Esseen inequality for unbounded exchangeable pairs. In Probability Approximations and Beyond, A. D. Barbour et al. (eds.), Lecture Notes in Statistics 205, pp. 13-30, Springer, New York.
- [6] Jiang, D., Jiang, T. and Yang, F. (2012). Likelihood ratio tests for covariance matrices of high-dimensional normal distributions. Journal of Statistical Planning and Inference, 142(8), 2241-2256.
- [7] Jiang, T. and Qi, Y. (2015). Likelihood ratio tests for high-dimensional normal distributions. Scandinavian Journal of Statistics, 42(4), 988-1009.
- [8] Jiang, T. and Yang, F. (2013). Central limit theorems for classical likelihood ratio tests for high-dimensional normal distributions. Annals of Statistics, 41(4), 2029-2074.
- [9] Mao, G. (2014). A new test of independence for high-dimensional data. Statist. Probab. Lett. 93, 14-18.
- [10] Morrison, D. F. (2005). Multivariate Statistical Methods, 4th Ed. Duxbury Press.
- [11] Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley, New York.
- [12] Owen, A.B. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75, 237-249.
- [13] Owen, A.B. (1990). Empirical likelihood ratio confidence regions. Ann. Statist. 18, 90-120.
- [14] Owen, A.B. (2001). Empirical Likelihood. New York: Chapman and Hall/CRC.
- [15] Peng, H. and Schick, A. (2013). An empirical likelihood approach to goodness of fit testing. Bernoulli 19, 954-981.
- [16] Qi, Y., Wang, F. and Zhang, L. (2019). Likelihood ratio test of independence of components for high-dimensional normal vectors. Annals of the Institute of Statistical Mathematics 71, 911-946.
- [17] Schott, J. R. (2005). Testing for complete independence in high dimensions. Biometrika, 92(4), 951-956.
- [18] Veleva, E. I. and Ignatov, T. G. (2006). Distributions of joint sample correlation coefficients of independent normally distributed random variables. Advanced Studies in Contemporary Mathematics 12, 247-254.
Table 1: Empirical sizes ($\rho = 0$, columns 3-5) and powers ($\rho = \rho_1$, columns 6-8; $\rho = \rho_2$, columns 9-11) for the normal distribution (a). Within each block the columns give $t_{np}$, $t^{adj}_{np}$, and $\ell_n^{**}$.

| $n$ | $p$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 20 | 10 | 0.0593 | 0.0486 | 0.0598 | 0.0646 | 0.0551 | 0.0657 | 0.0974 | 0.0863 | 0.0987 |
| | 20 | 0.0609 | 0.0565 | 0.0618 | 0.0670 | 0.0626 | 0.0675 | 0.1357 | 0.1281 | 0.1366 |
| | 50 | 0.0594 | 0.0584 | 0.0596 | 0.0836 | 0.0810 | 0.0842 | 0.3079 | 0.3030 | 0.3096 |
| | 100 | 0.0524 | 0.0519 | 0.0525 | 0.1156 | 0.1142 | 0.1159 | 0.5584 | 0.5571 | 0.5588 |
| 50 | 10 | 0.0596 | 0.0484 | 0.0603 | 0.0750 | 0.0640 | 0.0744 | 0.1660 | 0.1478 | 0.1656 |
| | 20 | 0.0533 | 0.0481 | 0.0525 | 0.0855 | 0.0790 | 0.0859 | 0.3257 | 0.3098 | 0.3244 |
| | 50 | 0.0508 | 0.0489 | 0.0508 | 0.1457 | 0.1410 | 0.1461 | 0.7372 | 0.7320 | 0.7362 |
| | 100 | 0.0536 | 0.0519 | 0.0535 | 0.2684 | 0.2660 | 0.2684 | 0.9609 | 0.9604 | 0.9608 |
| 100 | 10 | 0.0608 | 0.0504 | 0.0607 | 0.0894 | 0.0743 | 0.0882 | 0.3305 | 0.3025 | 0.3285 |
| | 20 | 0.0581 | 0.0511 | 0.0573 | 0.1265 | 0.1167 | 0.1260 | 0.6520 | 0.6345 | 0.6495 |
| | 50 | 0.0523 | 0.0494 | 0.0519 | 0.2735 | 0.2675 | 0.2723 | 0.9816 | 0.9809 | 0.9816 |
| | 100 | 0.0516 | 0.0506 | 0.0515 | 0.5711 | 0.5669 | 0.5703 | 0.9997 | 0.9997 | 0.9997 |
Table 2: Empirical sizes ($\rho = 0$, columns 3-5) and powers ($\rho = \rho_1$, columns 6-8; $\rho = \rho_2$, columns 9-11) for the uniform distribution (b). Within each block the columns give $t_{np}$, $t^{adj}_{np}$, and $\ell_n^{**}$.

| $n$ | $p$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 20 | 10 | 0.0610 | 0.0529 | 0.0615 | 0.0607 | 0.0525 | 0.0626 | 0.0913 | 0.0802 | 0.0934 |
| | 20 | 0.0600 | 0.0571 | 0.0615 | 0.0668 | 0.0621 | 0.0686 | 0.1230 | 0.1147 | 0.1243 |
| | 50 | 0.0595 | 0.0582 | 0.0607 | 0.0806 | 0.0785 | 0.0812 | 0.2680 | 0.2647 | 0.2693 |
| | 100 | 0.0585 | 0.0582 | 0.0598 | 0.1099 | 0.1093 | 0.1109 | 0.5183 | 0.5171 | 0.5202 |
| 50 | 10 | 0.0572 | 0.0489 | 0.0575 | 0.0773 | 0.0655 | 0.0774 | 0.1744 | 0.1536 | 0.1749 |
| | 20 | 0.0606 | 0.0548 | 0.0600 | 0.0809 | 0.0734 | 0.0816 | 0.3061 | 0.2920 | 0.3062 |
| | 50 | 0.0559 | 0.0530 | 0.0562 | 0.1358 | 0.1319 | 0.1359 | 0.7449 | 0.7387 | 0.7449 |
| | 100 | 0.0526 | 0.0518 | 0.0527 | 0.2417 | 0.2378 | 0.2422 | 0.9741 | 0.9736 | 0.9741 |
| 100 | 10 | 0.0596 | 0.0497 | 0.0583 | 0.0842 | 0.0710 | 0.0848 | 0.3225 | 0.2934 | 0.3200 |
| | 20 | 0.0589 | 0.0528 | 0.0580 | 0.1269 | 0.1152 | 0.1271 | 0.6430 | 0.6265 | 0.6397 |
| | 50 | 0.0516 | 0.0489 | 0.0522 | 0.2650 | 0.2587 | 0.2646 | 0.9889 | 0.9880 | 0.9887 |
| | 100 | 0.0504 | 0.0494 | 0.0504 | 0.5567 | 0.5525 | 0.5566 | 1.0000 | 1.0000 | 1.0000 |
Table 3: Empirical sizes ($\rho = 0$, columns 3-5) and powers ($\rho = \rho_1$, columns 6-8; $\rho = \rho_2$, columns 9-11) for the exponential distribution (c). Within each block the columns give $t_{np}$, $t^{adj}_{np}$, and $\ell_n^{**}$.

| $n$ | $p$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 20 | 10 | 0.0718 | 0.0636 | 0.0745 | 0.0799 | 0.0734 | 0.0845 | 0.1419 | 0.1338 | 0.1486 |
| | 20 | 0.0638 | 0.0610 | 0.0670 | 0.0867 | 0.0859 | 0.0924 | 0.2135 | 0.2116 | 0.2221 |
| | 50 | 0.0581 | 0.0586 | 0.0612 | 0.1224 | 0.1266 | 0.1301 | 0.3982 | 0.4040 | 0.4080 |
| | 100 | 0.0602 | 0.0622 | 0.0632 | 0.1901 | 0.1955 | 0.1975 | 0.5774 | 0.5848 | 0.5871 |
| 50 | 10 | 0.0734 | 0.0620 | 0.0751 | 0.0950 | 0.0840 | 0.0975 | 0.2288 | 0.2156 | 0.2335 |
| | 20 | 0.0650 | 0.0606 | 0.0660 | 0.1158 | 0.1114 | 0.1205 | 0.3834 | 0.3800 | 0.3898 |
| | 50 | 0.0612 | 0.0608 | 0.0643 | 0.1877 | 0.1912 | 0.1953 | 0.7102 | 0.7140 | 0.7175 |
| | 100 | 0.0589 | 0.0607 | 0.0618 | 0.3230 | 0.3289 | 0.3310 | 0.9079 | 0.9107 | 0.9111 |
| 100 | 10 | 0.0701 | 0.0604 | 0.0698 | 0.1048 | 0.0936 | 0.1088 | 0.3636 | 0.3428 | 0.3689 |
| | 20 | 0.0646 | 0.0595 | 0.0647 | 0.1555 | 0.1492 | 0.1578 | 0.6350 | 0.6281 | 0.6400 |
| | 50 | 0.0636 | 0.0622 | 0.0653 | 0.3108 | 0.3125 | 0.3187 | 0.9481 | 0.9486 | 0.9495 |
| | 100 | 0.0574 | 0.0583 | 0.0600 | 0.5823 | 0.5878 | 0.5899 | 0.9965 | 0.9966 | 0.9966 |
Table 4: Empirical sizes ($\rho = 0$, columns 3-5) and powers ($\rho = \rho_1$, columns 6-8; $\rho = \rho_2$, columns 9-11) for the mixture of normal and exponential distributions (d). Within each block the columns give $t_{np}$, $t^{adj}_{np}$, and $\ell_n^{**}$.

| $n$ | $p$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 20 | 10 | 0.0650 | 0.0547 | 0.0664 | 0.0682 | 0.0594 | 0.0680 | 0.1042 | 0.0921 | 0.1052 |
| | 20 | 0.0618 | 0.0565 | 0.0630 | 0.0704 | 0.0650 | 0.0706 | 0.1561 | 0.1477 | 0.1572 |
| | 50 | 0.0605 | 0.0584 | 0.0608 | 0.0974 | 0.0955 | 0.0983 | 0.3381 | 0.3353 | 0.3394 |
| | 100 | 0.0688 | 0.0677 | 0.0692 | 0.1395 | 0.1381 | 0.1407 | 0.5979 | 0.5974 | 0.5998 |
| 50 | 10 | 0.0624 | 0.0520 | 0.0616 | 0.0778 | 0.0657 | 0.0781 | 0.1991 | 0.1790 | 0.1988 |
| | 20 | 0.0602 | 0.0548 | 0.0606 | 0.0958 | 0.0869 | 0.0952 | 0.3832 | 0.3713 | 0.3828 |
| | 50 | 0.0587 | 0.0559 | 0.0578 | 0.1643 | 0.1595 | 0.1638 | 0.7827 | 0.7792 | 0.7824 |
| | 100 | 0.0569 | 0.0561 | 0.0570 | 0.3086 | 0.3063 | 0.3097 | 0.9669 | 0.9666 | 0.9672 |
| 100 | 10 | 0.0610 | 0.0510 | 0.0615 | 0.0990 | 0.0845 | 0.0961 | 0.3777 | 0.3484 | 0.3745 |
| | 20 | 0.0599 | 0.0520 | 0.0592 | 0.1395 | 0.1299 | 0.1380 | 0.7200 | 0.7055 | 0.7183 |
| | 50 | 0.0556 | 0.0522 | 0.0555 | 0.3137 | 0.3080 | 0.3137 | 0.9888 | 0.9887 | 0.9888 |
| | 100 | 0.0561 | 0.0545 | 0.0559 | 0.6389 | 0.6359 | 0.6400 | 1.0000 | 1.0000 | 1.0000 |
Table 5: Empirical sizes ($\rho = 0$, columns 3-5) and powers ($\rho = \rho_1$, columns 6-8; $\rho = \rho_2$, columns 9-11) for the sum of normal and exponential distributions (e). Within each block the columns give $t_{np}$, $t^{adj}_{np}$, and $\ell_n^{**}$.

| $n$ | $p$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ | $t_{np}$ | $t^{adj}_{np}$ | $\ell_n^{**}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 20 | 10 | 0.0601 | 0.0520 | 0.0607 | 0.0625 | 0.0552 | 0.0634 | 0.0997 | 0.0884 | 0.1019 |
| | 20 | 0.0556 | 0.0499 | 0.0558 | 0.0635 | 0.0584 | 0.0637 | 0.1402 | 0.1303 | 0.1401 |
| | 50 | 0.0520 | 0.0499 | 0.0520 | 0.0804 | 0.0773 | 0.0807 | 0.3066 | 0.3031 | 0.3072 |
| | 100 | 0.0592 | 0.0583 | 0.0593 | 0.1172 | 0.1153 | 0.1173 | 0.5509 | 0.5484 | 0.5515 |
| 50 | 10 | 0.0587 | 0.0493 | 0.0586 | 0.0723 | 0.0610 | 0.0723 | 0.1702 | 0.1501 | 0.1684 |
| | 20 | 0.0558 | 0.0497 | 0.0557 | 0.0843 | 0.0762 | 0.0838 | 0.3315 | 0.3144 | 0.3290 |
| | 50 | 0.0537 | 0.0507 | 0.0541 | 0.1429 | 0.1385 | 0.1429 | 0.7446 | 0.7377 | 0.7433 |
| | 100 | 0.0557 | 0.0540 | 0.0555 | 0.2637 | 0.2614 | 0.2642 | 0.9550 | 0.9545 | 0.9550 |
| 100 | 10 | 0.0635 | 0.0552 | 0.0640 | 0.0897 | 0.0758 | 0.0892 | 0.3237 | 0.2955 | 0.3187 |
| | 20 | 0.0593 | 0.0527 | 0.0586 | 0.1211 | 0.1097 | 0.1197 | 0.6540 | 0.6360 | 0.6513 |
| | 50 | 0.0480 | 0.0459 | 0.0479 | 0.2716 | 0.2644 | 0.2706 | 0.9803 | 0.9791 | 0.9801 |
| | 100 | 0.0551 | 0.0537 | 0.0549 | 0.5719 | 0.5690 | 0.5713 | 1.0000 | 1.0000 | 1.0000 |