Inference with approximate local false discovery rates
Abstract
Efron’s two-group model is widely used in large-scale multiple testing. This model assumes that the test statistics are mutually independent; however, in realistic settings they are typically dependent, and taking the dependence into account can boost power. The general two-group model takes the dependence between the test statistics into account. Optimal policies in the general two-group model require the calculation, for each hypothesis, of the probability that it is a true null given all test statistics, denoted the local false discovery rate (locFDR). Unfortunately, calculating locFDRs under realistic dependence structures can be computationally prohibitive. We propose calculating approximate locFDRs based on a properly defined N-neighborhood for each hypothesis. We prove that by thresholding the approximate locFDRs with a fixed threshold, the marginal false discovery rate is controlled for any dependence structure. Furthermore, we prove that this is the optimal procedure in a restricted class of decision rules, where the decision for each hypothesis is guided only by its N-neighborhood. We show through extensive simulations that our proposed method achieves substantial power gains compared to alternative practical approaches, while maintaining conceptual simplicity and computational feasibility. We demonstrate the utility of our method on a genome-wide association study of height.
Keywords: dependent test statistics; large scale inference; marginal false discovery rate (mFDR); multiple testing; genome wide association studies (GWAS)
1 Introduction
The two-group model, introduced in Efron et al., (2001), is a popular mixture model-based framework for large scale inference. Within this framework it is common to control false discoveries by controlling the False Discovery Rate (FDR, Benjamini and Hochberg, 1995), or its closely related variant, the marginal FDR (mFDR, Genovese and Wasserman, 2002).
Sun and Cai, (2007) developed an adaptive multiple testing rule for mFDR control in the two-group model and showed that such a procedure is optimal in the sense that it minimizes the false non-discovery rate while controlling the mFDR under independence of tests. This rule is based on thresholding the marginal local FDR (locFDR), which is the conditional probability of the hypothesis being null given only the observed test statistic for that hypothesis. If we denote the true state of test $i$ by $h_i$ (with $h_i = 0$ for a true null) and the observed test statistic by $z_i$, then the marginal locFDR is $P(h_i = 0 \mid z_i)$. In a series of papers (Efron, 2007, 2010), Efron investigated the effect of correlations on the outcome of multiple testing procedures, raising the concern that approaches which ignore dependence may not correctly control the FDR, so it is necessary to accommodate these correlations in multiple testing methodology.
Putting power at the forefront, Xie et al., (2011) introduced the general two-group model with an arbitrary dependence structure among the z-scores. They showed that the optimal policy for mFDR control is thresholding the “oracle” locFDR statistics, $P(h_i = 0 \mid z_1, \ldots, z_m)$, with a fixed threshold. The oracle locFDR statistics explicitly use the correlations among the z-scores. Recently, Heller and Rosset, (2021) showed that the oracle locFDRs are the optimal test statistics for controlling the FDR as well, under general dependence. They showed that for some special dependence structures, like block dependence or the equicorrelated setting, it may be practical to calculate these oracle locFDRs for dependent data. However, beyond these special cases there is no computationally feasible method for calculating the full locFDR under dependence. A standard approach is to approximate the oracle locFDR statistics by the suboptimal marginal locFDR statistics, which are much easier to compute. By ignoring the correlations among the test statistics, this approach leads to a substantial loss of power, as demonstrated empirically in our analyses below.
As a real life example, in GWAS we are interested in finding significant SNPs associated with a phenotype among millions of SNPs in the whole genome. It is well understood that nearby SNPs on the genome are highly correlated due to linkage disequilibrium (LD), while SNPs which are far apart are very weakly correlated. Marginal analysis is a standard procedure for finding associations in GWAS, but it does not identify the SNPs that are associated with the phenotype given all other SNPs. For a joint analysis, one approach is to approximate the oracle locFDRs using a complete Bayesian treatment, which relies on methods like Markov Chain Monte Carlo (MCMC) for inference. For example, Zhu and Stephens, (2017) developed a Bayesian approach for finding associations using GWAS summary statistics which takes into account the correlations among the SNPs. Their approach relies on publicly available estimates of the SNP effect sizes, calculated from marginal regressions without taking correlations into account, and builds a complete model taking into account the correlations between these estimates, using a known correlation (LD) matrix. They then implement a full Bayesian approach using MCMC to ultimately infer the posterior null probabilities of the SNP effects. In practice, their procedure is very time consuming, as each MCMC iteration takes substantial computation, and the entire procedure has to be repeated several times to confirm convergence of the MCMC solution.
In this paper we propose a novel approach for mFDR control under dependence in the two-group model, which can be viewed as a compromise between the simplistic approach of relying on marginal locFDRs and the computationally prohibitive approach of calculating the full oracle locFDRs and obtaining optimal power. We propose to define the N-neighborhood for each test, for some small $N$, and only consider multiple testing procedures that base the decision for test $i$ only on its N-neighborhood. We denote this class of procedures (or decision policies) by $\mathcal{C}_N$. For example, in our motivating application to GWAS, the LD structure implies that physically close SNPs are correlated, and the range of correlation is roughly similar throughout the genome, so the neighborhood for each SNP is defined by the group of SNPs surrounding it.
The rest of the paper is organized as follows.
In § 2 we define all the necessary notation and explain our goal in this paper. In § 3 we prove in Theorem 3.1 that within the reduced class $\mathcal{C}_N$, the optimal test that maximizes power while controlling the mFDR in the two-group model relies on thresholding the posterior probabilities $P(h_i = 0 \mid z_{i-N}, \ldots, z_{i+N})$, which we term the locFDR$_N$ statistics. We further show in Corollary 3.1.1 that the power (expected number of true discoveries) is an increasing function of $N$ for the resulting rules, making explicit the trade-off between power and computation that the neighborhood approximation provides. In § 4 we show that the computation of the locFDR$_N$ statistics is linear in the number of hypotheses. Furthermore, we show how we can estimate the locFDR$_N$ statistics assuming a reasonable model for GWAS. Finally, we summarize our complete data driven algorithm. In § 5 we perform extensive simulations with different covariance structures, and we observe that the locFDR$_N$ statistics achieve more power compared to other practical methods. In § 6 we provide a complete analysis pipeline for GWAS using summary statistics. The required input data are the marginal SNP effect sizes, the standard errors of these effects, and an estimated matrix of LD (correlations) among the z-scores of the SNPs.
2 Notation and Goal
We assume the data are generated from the generalized two-group model. The hypothesis states vector, $\boldsymbol h = (h_1, \ldots, h_m)$, has entries sampled independently from the Bernoulli($\pi$) distribution. Given the hypothesis states $\boldsymbol h$, the sequence of test statistics for testing the null hypotheses, $\boldsymbol z = (z_1, \ldots, z_m)$, follows a known conditional distribution with density $f(\boldsymbol z \mid \boldsymbol h)$. Based on $\boldsymbol z$, the goal is to construct a multiple testing procedure, defined by the decision rule $\boldsymbol D(\boldsymbol z) = (D_1(\boldsymbol z), \ldots, D_m(\boldsymbol z)) \in \{0, 1\}^m$, which has good power while controlling a meaningful error rate.
Let $T_i(\boldsymbol z) = P(h_i = 0 \mid \boldsymbol z)$ denote the oracle locFDR, which is the statistic for optimal decisions (Xie et al., 2011; Heller and Rosset, 2021).

The expected numbers of true and false positives are simple expressions of the oracle locFDRs:

$$E[TP(\boldsymbol D)] = \sum_{i=1}^m E\big[D_i(\boldsymbol z)\,(1 - T_i(\boldsymbol z))\big], \qquad E[V(\boldsymbol D)] = \sum_{i=1}^m E\big[D_i(\boldsymbol z)\, T_i(\boldsymbol z)\big].$$

A relevant error rate is the marginal false discovery rate (mFDR),

$$\mathrm{mFDR}(\boldsymbol D) = \frac{E[V(\boldsymbol D)]}{E[R(\boldsymbol D)]},$$

where $E[R(\boldsymbol D)] = E\big[\sum_{i=1}^m D_i(\boldsymbol z)\big]$ is the expected number of rejections.
Note that some of the expectations above are over the joint distribution of $(\boldsymbol h, \boldsymbol z)$ and some are over $\boldsymbol z$ alone. From now on we will not denote the random variable with respect to which we take the expectation, since it is easily understood from the context. It has been shown in Xie et al., (2011) that the optimal rule over the class of all decision functions for the above problem is thresholding the locFDR statistic with a fixed threshold. So the decision rule is $D_i(\boldsymbol z) = \mathbb{1}\{T_i(\boldsymbol z) \le t\}$, where $t$ is a fixed constant depending on $\alpha$. The complexity required for naively calculating all the locFDR statistics is exponential in $m$, which is typically infeasible. Hence the optimal procedure is impractical, except for some very special types of dependencies across the z-scores, e.g., block dependence with small block sizes (Heller and Rosset, 2021). Due to the difficulty of finding the globally optimal solution, we suggest considering a smaller class of decision functions which depend only on neighbors, for $N \ge 0$:
(1) $\qquad \mathcal{C}_N = \big\{ \boldsymbol D : D_i(\boldsymbol z) \text{ depends on } \boldsymbol z \text{ only through } (z_{i-N}, \ldots, z_{i+N}), \ i = 1, \ldots, m \big\}.$
In this definition we assume that the tests are ordered and that each test’s “neighbors” are defined by the ones next to it in the order. We also assume that the neighborhood size is fixed for all tests. Both of these assumptions can be relaxed, but we make them here for simplicity of notation and presentation of the resulting algorithms.
Given a choice of $N$ and the resulting class $\mathcal{C}_N$, we show in § 3 that the optimal testing procedure among rules in $\mathcal{C}_N$ relies on the locFDR after marginalization, so that the test statistic is within this class:

(2) $\qquad T_i^N(\boldsymbol z) = P(h_i = 0 \mid z_{i-N}, \ldots, z_{i+N}).$
Henceforth, we refer to $T_i^N$ as the locFDR$_N$ statistic. For $N = 0$, $\mathcal{C}_0$ includes only marginal rules and hence locFDR$_0$ is the marginal locFDR (Efron et al., 2001), while for $N = m$, locFDR$_m$ (Sun and Cai, 2009; Xie et al., 2011; Heller and Rosset, 2021) is the true locFDR. We can view increasing the size of $N$ as offering a trade-off between computation and power: for small $N$, calculating locFDR$_N$ and the resulting optimal rules within the limited class is computationally efficient but can suffer a loss of power, and as $N$ increases the computations become exponentially more complex but the power improves. As we demonstrate below, in GWAS-like situations with a natural neighborhood structure and strong dependence only within neighborhoods, the power increase can be rapid when $N$ is small, and we can obtain an excellent power-computation trade-off (Figure 1(a)). For other types of correlation structures we also get a nice trade-off (Figures 1(b), 1(c)). Here our aim is not to find the optimal solution (which is not implementable), but a relaxation of the optimal solution which improves power significantly compared to existing marginal test statistics. We show in § 5 that the power gain using the locFDR$_N$ statistic, compared to existing methods, can be substantial.
3 OMT Policy in the restricted class $\mathcal{C}_N$
In the following proposition we state a simple but useful result.
Proposition 3.1.
Define the quantity

(3) $\qquad T_i^N(\boldsymbol z) = P(h_i = 0 \mid z_{i-N}, \ldots, z_{i+N})$

for $i \in \{1, \ldots, m\}$ and $N \ge 0$. Then the procedure which rejects $H_i$ if $T_i^N(\boldsymbol z) \le \alpha$, for $i = 1, \ldots, m$, controls the mFDR at level $\alpha$.
Proof.
The mFDR of the procedure which rejects $H_i$ if $T_i^N(\boldsymbol z) \le \alpha$ is given by

$$\mathrm{mFDR} = \frac{\sum_{i=1}^m E\big[\mathbb{1}\{T_i^N(\boldsymbol z) \le \alpha\}\,\mathbb{1}\{h_i = 0\}\big]}{\sum_{i=1}^m E\big[\mathbb{1}\{T_i^N(\boldsymbol z) \le \alpha\}\big]} = \frac{\sum_{i=1}^m E\big[\mathbb{1}\{T_i^N(\boldsymbol z) \le \alpha\}\, T_i^N(\boldsymbol z)\big]}{\sum_{i=1}^m E\big[\mathbb{1}\{T_i^N(\boldsymbol z) \le \alpha\}\big]} \le \alpha,$$

where the second equality follows by conditioning on $(z_{i-N}, \ldots, z_{i+N})$ in each summand of the numerator, and the last inequality follows from the definition of the rejection event. ∎
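As a quick sanity check, the following minimal simulation (ours, not from the paper) verifies the proposition empirically in the simplest case, $N = 0$ with independent z-scores, where the marginal locFDR has a closed form under an assumed two-group model with null $N(0,1)$ and nonnull $N(\mu, 1)$ components; all parameter values are illustrative.

```python
# Monte Carlo check of Proposition 3.1 for N = 0 (independent z-scores):
# rejecting H_i whenever the marginal locFDR is <= alpha keeps the mFDR <= alpha.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, pi, mu, alpha = 5000, 0.1, 2.5, 0.05
B = 200  # number of simulated data sets

V_tot, R_tot = 0, 0
for _ in range(B):
    h = rng.binomial(1, pi, size=m)          # hypothesis states
    z = rng.normal(mu * h, 1.0)              # null N(0,1), nonnull N(mu,1)
    f0, f1 = norm.pdf(z), norm.pdf(z, loc=mu)
    locfdr0 = (1 - pi) * f0 / ((1 - pi) * f0 + pi * f1)  # P(h_i = 0 | z_i)
    reject = locfdr0 <= alpha
    V_tot += np.sum(reject & (h == 0))       # false discoveries
    R_tot += np.sum(reject)                  # total discoveries

print("empirical mFDR:", V_tot / max(R_tot, 1))  # should be <= alpha
```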
A similar result was given in Heller and Rosset, (2021) for the marginal locFDR statistics $T_i^0$, but we expect more power by using the procedure based on $T_i^N$, $N \ge 1$.
The optimal multiple testing procedure with mFDR control is a solution to the problem:

(4) $\qquad \max_{\boldsymbol D} \; E[TP(\boldsymbol D)] \quad \text{s.t.} \quad \mathrm{mFDR}(\boldsymbol D) \le \alpha.$
Now we concentrate on the class $\mathcal{C}_N$ and find an optimal solution within this restricted class, so the optimization problem is

(5) $\qquad \max_{\boldsymbol D \in \mathcal{C}_N} \; E[TP(\boldsymbol D)] \quad \text{s.t.} \quad \mathrm{mFDR}(\boldsymbol D) \le \alpha.$
Theorem 3.1.
The solution to problem (5) is a fixed threshold rule based on the locFDR$_N$ statistics: the optimal decision is $D_i(\boldsymbol z) = \mathbb{1}\{T_i^N(\boldsymbol z) \le t(\alpha)\}$, $i = 1, \ldots, m$, for a constant threshold $t(\alpha)$ chosen so that the mFDR constraint is satisfied.
Proof.
see Appendix A. ∎
Remark.
Though fixed threshold rules based on the marginal test statistics ($T_i^0$) have long been in use, the optimality of such rules for dependent test statistics was not known (as far as we know). Therefore, the above result is new not only for $N \ge 1$ but also for $N = 0$.
Corollary 3.1.1.
The optimal fixed threshold mFDR level $\alpha$ rule based on $T_i^{N+1}$ has more power than the optimal fixed threshold mFDR level $\alpha$ rule based on $T_i^N$, for every $N \ge 0$.
4 A data driven procedure
4.1 The computational complexity of the locFDR$_N$ statistics
To implement the procedure described in Proposition 3.1, we need to compute the statistics $T_i^N(\boldsymbol z)$, $i = 1, \ldots, m$. We denote $\boldsymbol z_i^N = (z_{i-N}, \ldots, z_{i+N})$ and $\boldsymbol h_i^N = (h_{i-N}, \ldots, h_{i+N})$ (truncated at the boundaries). Clearly $T_i^N(\boldsymbol z) = P(h_i = 0 \mid \boldsymbol z_i^N)$, so it can be written as

(6) $\qquad T_i^N(\boldsymbol z) = \frac{\sum_{\boldsymbol h_i^N : h_i = 0} P(\boldsymbol h_i^N)\, f(\boldsymbol z_i^N \mid \boldsymbol h_i^N)}{\sum_{\boldsymbol h_i^N \in \{0,1\}^{2N+1}} P(\boldsymbol h_i^N)\, f(\boldsymbol z_i^N \mid \boldsymbol h_i^N)},$

where $P(\boldsymbol h_i^N)$ is the probability distribution of $\boldsymbol h_i^N$ and $f(\boldsymbol z_i^N \mid \boldsymbol h_i^N)$ is the conditional density of $\boldsymbol z_i^N$ given $\boldsymbol h_i^N$. The sum in the denominator above is over $(2N+1)$-dimensional binary vectors and hence requires $2^{2N+1}$ evaluations of the summand, each of which takes time at most polynomial in $2N+1$. Hence calculating the denominator requires time that depends on $N$ but not on $m$. Note that all the calculations in the denominator can be stored, so evaluating the numerator requires no extra computation time. Since $N$ is fixed and small, the worst case runtime for computing all the $T_i^N$s is linear in $m$.
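The following sketch illustrates this computation under the working model (7) introduced below, with null $N(0,1)$ and nonnull $N(\mu, 1)$ marginals; the function name and arguments are ours, and the neighborhoods are truncated at the boundaries as described above.

```python
# Brute-force O(m * 2^(2N+1)) computation of the locFDR_N statistics in (6),
# assuming z | h ~ N(mu * h, Sigma) with h_i iid Bernoulli(pi).
import itertools
import numpy as np
from scipy.stats import multivariate_normal

def locfdr_N(z, Sigma, pi, mu, N):
    m = len(z)
    T = np.empty(m)
    for i in range(m):
        nb = np.arange(max(0, i - N), min(m, i + N + 1))  # N-neighborhood of i
        z_nb, S_nb = z[nb], Sigma[np.ix_(nb, nb)]
        pos = np.where(nb == i)[0][0]
        num = den = 0.0
        # enumerate all binary configurations of h over the neighborhood
        for h_nb in itertools.product([0, 1], repeat=len(nb)):
            h_nb = np.array(h_nb)
            p_h = np.prod(pi ** h_nb * (1 - pi) ** (1 - h_nb))    # P(h_nb)
            lik = multivariate_normal.pdf(z_nb, mean=mu * h_nb, cov=S_nb)
            den += p_h * lik
            if h_nb[pos] == 0:                                    # h_i = 0 terms
                num += p_h * lik
        T[i] = num / den
    return T
```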
Note that for finding the globally optimal rule, we would have to calculate the oracle locFDRs $T_i(\boldsymbol z)$, $i = 1, \ldots, m$. If $\Sigma$ has no special structure, calculating these quantities requires a runtime exponential in $m$.
4.2 Estimation strategies
Although the computations of the statistics of interest are simple with a known probabilistic structure, in real life examples we need to estimate the process parameters from the data. Below we discuss how to estimate these quantities in situations where the dependence can be assumed to be short ranged, which approximates many real life applications.
Assume we have the following model:

(7) $\qquad h_i \overset{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(\pi), \qquad \boldsymbol z \mid \boldsymbol h \sim N_m(\mu\,\boldsymbol h,\ \Sigma),$

where $\Sigma$ is known, with diagonal entries of one. This model assumes that a null (i.e., with $h_i = 0$) SNP’s test statistic has a $N(0,1)$ distribution, and a nonnull (i.e., with $h_i = 1$) SNP’s test statistic has a $N(\mu, 1)$ distribution. It is reasonable, e.g., for testing the correlations between covariates and outcome in a linear model, where $\Sigma$ is determined by the design matrix of the covariates; see § 6 for details. This model is similar to the “RSS-BVSR” model considered in Zhu and Stephens, (2017), and it reduces to the two-group model of Efron et al., (2001) if $\Sigma$ is a diagonal matrix.
In practical applications, we want to estimate the parameters of model (7) from the observed summary statistics $\boldsymbol z$. If we can assume the dependence to be short ranged (as in GWAS), then we can find a subset of z-scores which are approximately independent. Due to LD, nearby SNPs are correlated, and SNPs which are far apart tend to be independent of each other. As a result, if we select the z-scores of SNPs based on their positions on the chromosome, we can get an approximately independent collection of z-scores. After we select these z-scores for estimation, they can be assumed to follow the following simpler i.i.d. two group model:

(8) $\qquad z_i \overset{\text{i.i.d.}}{\sim} (1 - \pi)\, N(0, 1) + \pi\, N(\mu, 1), \qquad i \in S.$
It is easier to find the estimates of $(\pi, \mu)$ from the model (8) using the subset of z-scores, which we call $\boldsymbol z_S$. Once these parameters are estimated, we plug the estimates into model (7). Specifics follow.
Our first step is thus the selection of the sub-vector $\boldsymbol z_S$. We do not assume the i.i.d. two group model on $\boldsymbol z$, but we assume that most of the time we are able to extract a subset of z-scores which either exactly or approximately follows an i.i.d. two group model, due to the assumption of short range strong dependency. Details of how the estimation is done in the cases of Banded(1) and AR(1) covariances are given in § 5.2.
Next, we need to estimate $(\pi, \mu)$ using $\boldsymbol z_S$. We apply the EM algorithm (Dempster et al., 1977) to estimate the parameters, as suggested in McLachlan and Peel, (2000). We consider the following two methods, which differ in the way they handle the estimation of $\pi$.
Estimating Parameters 1: The updates of the parameters are given below:

(9) $\qquad w_i^{(k)} = \frac{\pi^{(k)}\,\phi(z_i - \mu^{(k)})}{(1 - \pi^{(k)})\,\phi(z_i) + \pi^{(k)}\,\phi(z_i - \mu^{(k)})}, \qquad \pi^{(k+1)} = \frac{1}{|S|} \sum_{i \in S} w_i^{(k)}, \qquad \mu^{(k+1)} = \frac{\sum_{i \in S} w_i^{(k)} z_i}{\sum_{i \in S} w_i^{(k)}},$

where $(\pi^{(k)}, \mu^{(k)})$ are the estimates of the parameters in the $k$-th step and $\phi$ is the standard normal density. The derivation of the equations (9) is detailed in appendix B.
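A minimal sketch of these EM iterations (our code; the initial values and convergence tolerance are illustrative) is given below.

```python
# EM for the i.i.d. two-group model (8), fitted on the approximately
# independent subset z_sub; implements the updates in (9).
import numpy as np
from scipy.stats import norm

def em_two_group(z_sub, pi0=0.1, mu0=2.0, n_iter=200, tol=1e-8):
    pi, mu = pi0, mu0
    for _ in range(n_iter):
        f0, f1 = norm.pdf(z_sub), norm.pdf(z_sub, loc=mu)
        w = pi * f1 / ((1 - pi) * f0 + pi * f1)   # E-step: P(h_i = 1 | z_i)
        pi_new = w.mean()                          # M-step updates
        mu_new = np.sum(w * z_sub) / np.sum(w)
        converged = abs(pi_new - pi) + abs(mu_new - mu) < tol
        pi, mu = pi_new, mu_new
        if converged:
            break
    return pi, mu
```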
Estimating Parameters 2: Alternatively, we estimate $\pi$ from $\boldsymbol z_S$ using the well known method of Jin and Cai, (2007), and plug this estimate of $\pi$ into the EM iterations. Let $\hat\pi$ be the estimate of the non-null proportion from the z-scores $\boldsymbol z_S$. Then the updates of the other parameters are given below:

(10) $\qquad w_i^{(k)} = \frac{\hat\pi\,\phi(z_i - \mu^{(k)})}{(1 - \hat\pi)\,\phi(z_i) + \hat\pi\,\phi(z_i - \mu^{(k)})}, \qquad \mu^{(k+1)} = \frac{\sum_{i \in S} w_i^{(k)} z_i}{\sum_{i \in S} w_i^{(k)}},$

where $\mu^{(k)}$ is the estimate of the parameter in the $k$-th step. The derivation of the equations (10) follows similarly, as done in appendix B.
Using the estimated parameters $(\hat\pi, \hat\mu)$, we estimate the $T_i^N$ statistics, and denote the resulting statistics by $\hat T_i^N$. For inference using the estimated $\hat T_i^N$s, we need to determine the rejection cutoff, so that hypotheses with estimated $\hat T_i^N$s below that cutoff will be rejected. We determine this cutoff numerically as follows. We draw $B$ different data sets of z-scores from the mixture model in (8) with the estimated parameters $(\hat\pi, \hat\mu)$. We estimate $\mathrm{mFDR}(t)$ by

$$\widehat{\mathrm{mFDR}}(t) = \frac{\sum_{b=1}^{B} \sum_{i=1}^{m} \hat T_i^{N,(b)}\, \mathbb{1}\{\hat T_i^{N,(b)} \le t\}}{\sum_{b=1}^{B} \sum_{i=1}^{m} \mathbb{1}\{\hat T_i^{N,(b)} \le t\}},$$

where $\hat T_i^{N,(b)}$ has been calculated from the $b$-th z-vector and the estimated parameters. Now we perform a simple line search to find

(11) $\qquad \hat t = \max\big\{ t : \widehat{\mathrm{mFDR}}(t) \le \alpha \big\}.$
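A sketch of this calibration step is given below; it assumes a user-supplied sampler for the fitted model and the `locfdr_N` function from the sketch in § 4.1, and pools the simulated statistics before the line search.

```python
# Monte Carlo cutoff calibration, equation (11): take the largest threshold
# whose pooled mFDR estimate stays below alpha.
import numpy as np

def calibrate_cutoff(draw_z, Sigma, pi_hat, mu_hat, N, alpha, B=50):
    T_all = []
    for _ in range(B):
        z_b = draw_z()                                  # one simulated z-vector
        T_all.append(locfdr_N(z_b, Sigma, pi_hat, mu_hat, N))
    T_all = np.sort(np.concatenate(T_all))
    # estimated mFDR of threshold t = T_all[k] is the mean of the k+1 smallest
    # statistics; this running mean is nondecreasing, so a scan suffices
    mfdr = np.cumsum(T_all) / np.arange(1, len(T_all) + 1)
    ok = np.where(mfdr <= alpha)[0]
    return T_all[ok[-1]] if len(ok) else 0.0
```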
We summarize the whole pipeline concisely in Algorithm 1.
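For concreteness, the following sketch strings the previous pieces together in the spirit of Algorithm 1 (our code, reusing `em_two_group`, `locfdr_N` and `calibrate_cutoff` from the sketches above; the subset spacing and the number of Monte Carlo draws are illustrative, and for simplicity the calibration data sets are drawn from the fitted joint model).

```python
# End-to-end data-driven pipeline: subset selection, parameter estimation,
# locFDR_N computation, cutoff calibration, and rejection.
import numpy as np

def locfdr_N_pipeline(z, Sigma, N, alpha, spacing=10, B=50, seed=0):
    # 1. extract an approximately independent subset of z-scores
    #    (short-range dependence: take every `spacing`-th coordinate)
    z_sub = z[::spacing]
    # 2. estimate the two-group parameters on the subset (equations (9))
    pi_hat, mu_hat = em_two_group(z_sub)
    # 3. compute the estimated locFDR_N statistics on the full data
    T_hat = locfdr_N(z, Sigma, pi_hat, mu_hat, N)
    # 4. calibrate the rejection cutoff by Monte Carlo (equation (11))
    rng = np.random.default_rng(seed)
    m = len(z)
    def draw_z():
        h = rng.binomial(1, pi_hat, size=m)
        return rng.multivariate_normal(mu_hat * h, Sigma)
    t_hat = calibrate_cutoff(draw_z, Sigma, pi_hat, mu_hat, N, alpha, B=B)
    # 5. reject hypotheses with small estimated locFDR_N
    return np.where(T_hat <= t_hat)[0]
```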
5 Simulations
5.1 Simulation from Mixture of Gaussians (Oracle)
Here we show all the results with known values of the parameters of the distribution described by (7).
In the rest of the paper, we will consider covariance structures for among the following four:
-
•
AR(1):
(12) for where .
-
•
Banded(1):
(13) for where .
-
•
Long Range: (Fan and Han,, 2017)
(14) , where with indicating weak dependence and indicating strong long range dependence.
-
•
Equicorrelated:
(15) for where .
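The following sketch (our code, with the $\rho$ value left to the user) constructs the AR(1), Banded(1) and equicorrelated matrices above, and draws one data set from model (7); the long range structure (14) is omitted since its exact form is specific to Fan and Han, (2017).

```python
# Covariance structures (12), (13) and (15), plus one draw of (h, z) from model (7).
import numpy as np

def ar1_cov(m, rho):
    # (12): Sigma_ij = rho^|i-j|
    idx = np.arange(m)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def banded1_cov(m, rho):
    # (13): Sigma_ij = rho if |i-j| = 1, 1 on the diagonal, 0 otherwise
    S = np.eye(m)
    off = np.arange(m - 1)
    S[off, off + 1] = S[off + 1, off] = rho   # requires |rho| < 1/2 for PD
    return S

def equi_cov(m, rho):
    # (15): Sigma_ij = rho for all i != j
    return np.full((m, m), rho) + (1.0 - rho) * np.eye(m)

def draw_model7(m, pi, mu, Sigma, rng):
    # h_i iid Bernoulli(pi), then z | h ~ N(mu * h, Sigma)
    h = rng.binomial(1, pi, size=m)
    z = rng.multivariate_normal(mu * h, Sigma)
    return h, z
```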
Figure 1 shows scatter plots of the locFDR$_{N+1}$ versus the locFDR$_N$ statistics, and the marginal densities of the locFDR$_N$ statistics, for $N = 0, 1, 2$. As we move to comparisons between higher order locFDRs, the simulated locFDRs tend to concentrate around the 45 degree line. The concentration is particularly good for the AR(1) and the long-range correlation structures. The density plots show that, for all dependencies, the densities are similar for $N \ge 1$ and strikingly different from the density for $N = 0$.
We make the following observations from the results in Table 1:
• Power: as expected, the power increases with $N$.

• Effect of Correlation: What makes the comparison most interesting is the rate of power increase as a function of $N$: the power increase is large when moving from $N = 0$ to $N = 1$, and it is much smaller when moving from $N = 1$ to $N = 2$. This behaviour is commonly observed under all the dependency structures considered. The highest power increase was observed for the equicorrelated covariance structure.

• Variability of the FDP: When there are strong correlations, the FDP is highly variable. In all settings, the procedure with marginal statistics shows higher variability in FDP compared to procedures that use the $T_i^N$ statistics with $N \ge 1$. We consistently observe from simulations that the variability of the FDP is decreasing in $N$, so this is an added advantage of the use of locFDR$_N$ with $N$ as large as (computationally) possible.

• mFDR vs FDR: The mFDR and FDR differ when there are strong correlations, using the marginal statistics. But when we use the $T_i^N$ statistics with $N \ge 1$, more discoveries are made and the difference between mFDR and FDR is small, regardless of the correlation structure.

• Cutoff: All the simulations show that the cutoffs for fixed $\alpha$ are increasing in $N$, and the change from $N = 1$ to $N = 2$ is much smaller than the change from $N = 0$ to $N = 1$. The growth of the cutoff sequence depends on the covariance structure, and the rate is higher in the short range setting than in the long range or equicorrelated setting.
Fixed Parameters in all Simulations | | | | | | |
---|---|---|---|---|---|---|---
Correlation | Method | $N=0$-rule | | $N=1$-rule | | $N=2$-rule |
 | Error | Estimate | s.e. | Estimate | s.e. | Estimate | s.e.
AR(1) | mFDR | 0.0501 | 0.0015 | 0.0499 | 0.0009 | 0.0508 | 0.0008
 | FDR | 0.0494 | 0.0014 | 0.0498 | 0.0009 | 0.0507 | 0.0008
 | TP | 61.19 | 0.352 | 124.08 | 0.428 | 139.364 | 0.448
 | Cutoff (t) | 0.1742 | - | 0.2356 | - | 0.2729 | -
Long Range | mFDR | 0.0489 | 0.0015 | 0.0506 | 0.001 | 0.0497 | 0.0010
 | FDR | 0.0481 | 0.0014 | 0.0506 | 0.001 | 0.0497 | 0.0010
 | TP | 61.95 | 0.3413 | 84.256 | 0.3815 | 88.37 | 0.3944
 | Cutoff (t) | 0.1742 | - | 0.2014 | - | 0.2074 | -
Equi | mFDR | 0.0731 | 0.0249 | 0.0515 | 0.0032 | 0.0501 | 0.0012
 | FDR | 0.0149 | 0.0035 | 0.0489 | 0.0023 | 0.0497 | 0.0012
 | TP | 62.026 | 0.8173 | 121.374 | 0.4404 | 146.29 | 0.4835
 | Cutoff (t) | 0.1742 | - | 0.2361 | - | 0.2903 | -
AR(1) | mFDR | 0.0510 | 0.0012 | 0.0487 | 0.001 | 0.0487 | 0.001
 | FDR | 0.0509 | 0.0012 | 0.0488 | 0.001 | 0.0487 | 0.0010
 | TP | 62.112 | 0.3465 | 83.85 | 0.3989 | 85.824 | 0.4019
 | Cutoff (t) | 0.1742 | - | 0.1989 | - | 0.2029 | -
Long Range | mFDR | 0.0506 | 0.0013 | 0.0505 | 0.0011 | 0.0513 | 0.0011
 | FDR | 0.0504 | 0.0013 | 0.0507 | 0.0011 | 0.0514 | 0.0011
 | TP | 61.922 | 0.3503 | 70.282 | 0.3748 | 71.504 | 0.3786
 | Cutoff (t) | 0.1742 | - | 0.1823 | - | 0.1842 | -
Equi | mFDR | 0.0452 | 0.0049 | 0.0502 | 0.0023 | 0.0496 | 0.0014
 | FDR | 0.0304 | 0.0028 | 0.0473 | 0.0021 | 0.0491 | 0.0014
 | TP | 61.848 | 0.5322 | 81.152 | 0.4022 | 91.166 | 0.4055
 | Cutoff (t) | 0.1742 | - | 0.1971 | - | 0.2088 | -
For different parameters of the model (7) (rows), using the procedure with $N = 0$ (columns 3-4), $N = 1$ (columns 5-6), and $N = 2$ (columns 7-8): the “Estimate” column indicates the estimates of the different error rates and the “s.e.” column indicates the standard error of these estimates. Standard errors for the mFDR were determined by bootstrapping, based on 500 data generations.
5.2 Simulation from Mixture of Gaussians (Data Driven)
Here we show the performance of our procedure when we estimate the unknown parameters of the distribution in model (7) using Algorithm 1. From now on, in all the tables we denote by “$N$-rule”, “SunCai”, “BH” and “ABH”, respectively, the testing procedure based on $\hat T_i^N$, the testing procedure developed in Sun and Cai, (2007), the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995), and the adaptive Benjamini-Hochberg procedure, which incorporates the plug-in estimate of the proportion of nulls given by the methods “Est-EM” and “Est-S&C” (see, e.g., Benjamini et al., 2006).
In Table 2, by “Est-EM” we mean estimating the parameters of the distribution using equations (9). For the AR(1) covariance, we take a subset of very weakly correlated z-scores and estimate the parameters using the EM algorithm; in this case we get a slightly inflated mFDR level. For the Banded(1) covariance, we take a subset of independent z-scores to estimate the parameters, and in this situation the data driven rule controls the mFDR. In the same table, by “Est-S&C” we mean estimating the parameters of the distribution using equations (10).
In Table 2, we observe that the mFDR level is more conservative with “Est-S&C” than with “Est-EM”. Due to this conservative nature of “Est-S&C”, we make fewer discoveries with “Est-S&C” than with “Est-EM”. Nevertheless, even with “Est-S&C”, using the $\hat T_i^N$ statistics with $N \ge 1$ results in much higher power than the power of $\hat T_i^0$ or the other competitors.
Fixed Parameters in all Simulations | | | | | | |
---|---|---|---|---|---|---|---
Correlation | Error | $N=0$-rule | $N=1$-rule | $N=2$-rule | SunCai | BH | ABH
Banded(1), Est-EM | mFDR | 0.0494 | 0.0511 | 0.0495 | 0.0435 | 0.0327 | 0.0489
 | FDR | 0.0486 | 0.0508 | 0.0494 | 0.0430 | 0.0324 | 0.0480
 | TP | 124 | 174 | **184** | 118 | 109 | 123
Banded(1), Est-EM | mFDR | 0.0497 | 0.0498 | 0.0509 | 0.0475 | 0.0396 | 0.0497
 | FDR | 0.0480 | 0.0495 | 0.0506 | 0.0464 | 0.0383 | 0.0481
 | TP | 69 | 108 | **118** | 68 | 65 | 70
AR(1), Est-EM | mFDR | 0.051 | 0.0526 | 0.0520 | 0.0441 | 0.0356 | 0.0510
 | FDR | 0.0503 | 0.0522 | 0.0517 | 0.0439 | 0.0355 | 0.0506
 | TP | 248 | 336 | **344** | 237 | 219 | 249
AR(1), Est-EM | mFDR | 0.0522 | 0.0524 | 0.0525 | 0.050 | 0.0416 | 0.0534
 | FDR | 0.0511 | 0.0521 | 0.0521 | 0.0495 | 0.0411 | 0.0525
 | TP | 136 | 202 | **206** | 134 | 127 | 137
AR(1), Est-S&C | mFDR | 0.0342 | 0.0414 | 0.0417 | 0.0444 | 0.0333 | 0.0417
 | FDR | 0.0338 | 0.0413 | 0.0416 | 0.0440 | 0.0329 | 0.0413
 | TP | 218 | 313 | **319** | 236 | 218 | 233
AR(1), Est-S&C | mFDR | 0.0381 | 0.0423 | 0.0415 | 0.046 | 0.039 | 0.0447
 | FDR | 0.0375 | 0.042 | 0.0412 | 0.0455 | 0.0385 | 0.0441
 | TP | 124 | 191 | **195** | 133 | 126 | 132
For different parameters of the model (7) (rows) and for each multiple testing procedure, we provide the mFDR, FDR, and power (TP). The novel $N$-rules applied Algorithm 1. In bold, the most powerful procedure. The bootstrap standard errors (s.e.) of the mFDRs for the “AR(1), Est-EM” combinations are small relative to the estimates. Based on 200 data generations.
6 UK biobank height data analysis
Assume we have a standardized phenotype $\boldsymbol y$ and standardized genotype data $X$ available for $n$ individuals and $m$ different SNPs. We want to find the SNPs that are associated with the phenotype, while controlling the mFDR at level 0.05, using the $\hat T_i^N$ statistics for several values of $N$. In our GWAS example we have a sufficient number of individuals, so the normal approximation for the z-scores is accurate. In order to apply our method, we need z-scores and the correlations among the z-scores as input data. For finding the z-scores, we use the linear model
$\boldsymbol y = X\boldsymbol\beta + \boldsymbol\epsilon$, and we know the marginal least squares estimate of $\beta_j$ is $\hat\beta_j = \boldsymbol x_j^\top \boldsymbol y / n$, with estimated standard error $\mathrm{s.e.}(\hat\beta_j)$. Our z-scores vector is $\boldsymbol z = \big(\hat\beta_1 / \mathrm{s.e.}(\hat\beta_1), \ldots, \hat\beta_m / \mathrm{s.e.}(\hat\beta_m)\big)$, and the correlation among the z-scores is the LD matrix $\hat R = X^\top X / n$. So, our input data are $(\boldsymbol z, \hat R)$, and our working model is the two group model (7).
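Under our reconstruction of this setup (column-standardized $X$ and standardized $\boldsymbol y$), the summary statistics can be computed as in the following sketch; the function name is ours.

```python
# Marginal least-squares effect sizes, their z-scores, and the LD matrix.
import numpy as np

def gwas_summary(X, y):
    n = len(y)
    beta_hat = X.T @ y / n                 # marginal effect size per SNP
    resid_var = 1.0 - beta_hat ** 2        # residual variance (standardized data)
    se = np.sqrt(resid_var / n)            # standard error of beta_hat
    z = beta_hat / se                      # z-scores
    R = X.T @ X / n                        # LD matrix = correlation among z-scores
    return z, R
```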
We apply our method to selected SNPs of chromosome 20, as described next. We have genotype data of around 500K individuals and approximately 20K SNPs for each of them from the UK Biobank (Sudlow et al., 2015). The individuals with missing height (a small number) were removed from our analysis. In the raw data, each SNP also has many missing individuals. Moreover, SNPs that are adjacent to each other tend to be highly correlated. Therefore, we chose about 3.5K SNPs from chromosome 20 such that between any two of them there are at least 5 SNPs, and none of these 3.5K SNPs has more than 10% missing values. We impute the major allele count of each missing individual for a particular SNP by the mean of the available individuals’ major allele counts for that SNP. Next, we compute the z-scores vector $\boldsymbol z$ and $\hat R$, and use them in the “Est-S&C” data driven procedure. The estimated mixture components are reported in the header of Table 3.
In Table 3 we show simulation results with parameters estimated from the data (essentially a parametric bootstrap experiment), after repeating the process many times and aggregating the results. The first row of the table shows the results where we assume the parameters are unknown and estimate them in each run. The second row shows the results where we assume the parameters are known. Our procedures control the mFDR at the nominal 0.05 level.
The numbers of SNPs rejected on the real data at level $\alpha = 0.05$ by our method and the competitors, with the 3.5K SNPs, are given in Table 4. Though “SunCai” and the “$N=0$-rule” use the same statistics for inference, due to the different estimation methods of these statistics and the different ways of determining the cutoffs, they reject different numbers of SNPs. The “$N=0$-rule” rejects more SNPs than the “SunCai” method. Manhattan plots of all the methods are given in Figure 2. In all the subplots of Figure 2, we indicate by a red rectangle a specific region of the genome where the $\hat T_i^N$ statistics with $N \ge 1$ find more associations than “BH”, “ABH” and the “$N=0$-rule”. Note that in this specific region there are two associations which are found by all the methods. However, between these two SNPs there are two other close SNPs which are only identified by the $\hat T_i^N$ statistics with $N \ge 1$. Interestingly, we observe from the correlation matrix of the z-scores that these two newly identified SNPs are not highly correlated.
Parameters estimated from the height data | | | | | | |
---|---|---|---|---|---|---|---
Method | Error | $N=0$-rule | $N=1$-rule | $N=2$-rule | SunCai | BH | ABH
Data Driven | mFDR | 0.043 | 0.0449 | 0.0445 | 0.047 | 0.0408 | 0.045
 | FDR | 0.0419 | 0.044 | 0.0439 | 0.0468 | 0.0399 | 0.0438
 | TP | 52 | 59 | **65** | 55 | 51 | 54
Oracle | mFDR | 0.049 | 0.0501 | 0.0488 | 0.0482 | 0.0408 | 0.0516
 | FDR | 0.0486 | 0.0499 | 0.0487 | 0.0467 | 0.0395 | 0.0497
 | TP | 57 | 65 | **70** | 56 | 52 | 58
Bold indicates the best power among the competitors.
Number of Commonly Identified SNPs | | | | | | |
Method | SunCai | BH | ABH | $N=0$-rule | $N=1$-rule | $N=2$-rule | $N=3$-rule
SunCai | 36 | 20 | 34 | 36 | 33 | 35 | 32
BH | | 20 | 20 | 20 | 19 | 20 | 20
ABH | | | 36 | 36 | 33 | 35 | 33
$N=0$-rule | | | | 49 | 43 | 45 | 40
$N=1$-rule | | | | | 53 | 52 | 49
$N=2$-rule | | | | | | 66 | 60
$N=3$-rule | | | | | | | 64
[Figure 2: Manhattan plots of all the methods for the chromosome 20 height data; the red rectangle marks the region discussed above.]

In the plots of the p-values versus SNP position, the black dotted line indicates the threshold of the “BH” procedure and the red dotted line indicates the threshold of the “ABH” procedure. In the plots of the $\hat T^N$ statistics versus SNP position, the black dotted line indicates the rejection threshold.
7 Discussion
In this paper we have studied practical multiple testing procedures in the two-group model under general dependence. We motivate the need for finding an optimal solution in a restricted decision space, and indeed show empirically that we do not lose much by concentrating on the smaller class $\mathcal{C}_N$. We provide practical approaches to estimate the parameters of a two group model under short range dependence structures, which encompass typical real applications. Using this approach, we give a practical application in GWAS by finding more associations than existing practical methods using summary statistics. Our method works in GWAS without resorting to the extensive modelling assumptions of a complete Bayesian framework, as done in Zhu and Stephens, (2017).
Though concentrating on the class $\mathcal{C}_N$ of decision functions makes sense, it is natural to ask if there is a better way of defining such a class, one which yields a more practical and powerful solution. Our solution does not use the full correlation matrix: it only uses the correlations within the main $2N+1$ diagonals of the correlation matrix. In other words, the solution ignores all the correlations outside the $N$-band. One possible approach is to define the $i$-th locFDR statistic by conditioning on the $N$ z-scores most highly correlated with $z_i$, possibly choosing $N$ adaptively depending on the strength of the correlation of the test statistic with the other test statistics. It will be interesting to explore how we can modify our method to accommodate the whole correlation matrix, so that we do not lose much “information”.
Proposition 3.1 enables us to see how we can devise a method for mFDR control using the statistics $T_i^N$. If the entire joint distribution of the z-score vector is known, we can always define a policy of the form $D_i(\boldsymbol z) = \mathbb{1}\{T_i^N(\boldsymbol z) \le t\}$, and determine the largest cutoff $t$ for which the chosen error rate is at most $\alpha$. This procedure will control the chosen error by construction. However, we may not be able to prove an optimality result like Theorem 3.1 for this policy with these error rates. To find optimal solutions, we need to solve the generic problem, with $\mathrm{Err} \in \{\mathrm{FDR}, \mathrm{pFDR}, \mathrm{FDX}\}$:

(16) $\qquad \max_{\boldsymbol D \in \mathcal{C}_N} \; E[TP(\boldsymbol D)] \quad \text{s.t.} \quad \mathrm{Err}(\boldsymbol D) \le \alpha.$
The difficulty with finding such a solution is that other error measures, such as the FDX, FDR, and pFDR, do not decompose in the same way as the mFDR, as shown in equations (17)-(18) of Appendix A. It is a challenge to find similar solutions for these error measures.
It is an interesting direction of future work to develop locFDR$_N$-based methods for such problems, with properly chosen z-scores which take into account the joint effects of the covariates, at least to some extent.
Sun and Cai, (2009) developed a method assuming that the hypothesis states follow a hidden Markov model (HMM) structure and that the z-scores given the hypothesis states are independent. So their method does not use the correlations among the z-scores. Interestingly, Proposition 3.1 holds for any joint distribution of $\boldsymbol h$ on $\{0, 1\}^m$. Although it is normally assumed that the entries of $\boldsymbol h$ are i.i.d. Bernoulli($\pi$), we can also assume that $\boldsymbol h$ follows an HMM, as done in Sun and Cai, (2009), and we can draw inference based on the $T_i^N$ statistics in these situations as well. An interesting direction for extending our approach is thus to also allow an HMM structure on the hypothesis states, in addition to the correlations among the z-scores. We expect that it will be possible to derive a practical solution for this setting.
In particular, our estimation method is such that we can easily extend to this HMM based model as well, thereby producing a multiple testing procedure which we can directly apply to real data sets.
Our approach for parameter estimation in the two-group model relies on being able to extract a subset of test statistics which are roughly independent. We believe this is a realistic assumption in most real-life applications, in particular in GWAS. However, it remains a challenge to design and test approaches that also apply in cases of long-range strong dependence. Note also that our estimation method does not take into account the known joint dependence among the statistics. Though this estimation approach is robust, taking the joint dependence into account could lead to more efficient estimates of the model parameters and, in turn, a boost in power. It may be interesting to explore how we can accommodate the correlations among the test statistics in the estimation.
One major observation from our simulation results is that taking even part of the (short range) dependence into account improves power. In fact, we observe large power gains even when the dependence is very limited and not very strong. Hence our recommendation is to use locFDR$_N$ statistics based rules, with $N \ge 1$, whenever there is known dependence, or whenever the dependence within neighborhoods can be well approximated.
SUPPLEMENTARY MATERIAL
- Github Link of all relevant codes:
- Height data set:
  We use Genotype ($X$) and Height ($\boldsymbol y$) data of around 500K individuals provided by the UK Biobank. Unfortunately, we cannot make the data available publicly. For reproducibility, readers can use their own data in the notebook available in the above GitHub link.
Acknowledgments
This study was supported by Israeli Science Foundation grant 2180/20. UK Biobank research has been conducted using the UK Biobank Resource under Application Number 56885.
References
- Benjamini and Hochberg, (1995) Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300.
- Benjamini et al., (2006) Benjamini, Y., Krieger, A. M., and Yekutieli, D. (2006). Adaptive linear step-up procedures that control the false discovery rate. Biometrika, 93(3):491–507.
- Dempster et al., (1977) Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1–22.
- Efron, (2007) Efron, B. (2007). Correlation and large-scale simultaneous significance testing. Journal of the American Statistical Association, 102(477):93–103.
- Efron, (2010) Efron, B. (2010). Correlated z-values and the accuracy of large-scale statistical estimates. Journal of the American Statistical Association, 105(491):1042–1055.
- Efron et al., (2001) Efron, B., Tibshirani, R., Storey, J. D., and Tusher, V. (2001). Empirical Bayes analysis of a microarray experiment. Journal of the American Statistical Association, 96(456):1151–1160.
- Fan and Han, (2017) Fan, J. and Han, X. (2017). Estimation of the false discovery proportion with unknown dependence. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(4):1143–1164.
- Genovese and Wasserman, (2002) Genovese, C. and Wasserman, L. (2002). Operating characteristics and extensions of the false discovery rate procedure. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(3):499–517.
- Heller and Rosset, (2021) Heller, R. and Rosset, S. (2021). Optimal control of false discovery criteria in the two-group model. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 83(1):133–155.
- Jin and Cai, (2007) Jin, J. and Cai, T. T. (2007). Estimating the null and the proportion of nonnull effects in large-scale multiple comparisons. Journal of the American Statistical Association, 102(478):495–506.
- McLachlan and Peel, (2000) McLachlan, G. and Peel, D. (2000). Finite Mixture Models. John Wiley & Sons.
- Sudlow et al., (2015) Sudlow, C., Gallacher, J., Allen, N., Beral, V., Burton, P., Danesh, J., Downey, P., Elliott, P., Green, J., Landray, M., et al. (2015). UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Medicine, 12(3):e1001779.
- Sun and Cai, (2007) Sun, W. and Cai, T. T. (2007). Oracle and adaptive compound decision rules for false discovery rate control. Journal of the American Statistical Association, 102(479):901–912.
- Sun and Cai, (2009) Sun, W. and Cai, T. T. (2009). Large-scale multiple testing under dependence. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(2):393–424.
- Xie et al., (2011) Xie, J., Cai, T. T., Maris, J., and Li, H. (2011). Optimal false discovery rate control for dependent data. Statistics and Its Interface, 4(4):417.
- Zhu and Stephens, (2017) Zhu, X. and Stephens, M. (2017). Bayesian large-scale multiple regression with summary statistics from genome-wide association studies. The Annals of Applied Statistics, 11(3):1561.
Appendix A Proof of Theorem 3.1
Note that the constraint $\mathrm{mFDR}(\boldsymbol D) \le \alpha$ is equivalent to $E[V(\boldsymbol D)] - \alpha E[R(\boldsymbol D)] \le 0$. The main essence of the proof relies on the following two decompositions of the objective and the constraint, for $\boldsymbol D \in \mathcal{C}_N$:

(17) $\qquad E[TP(\boldsymbol D)] = E\Big[\sum_{i=1}^m D_i(\boldsymbol z)\, h_i\Big] = \sum_{i=1}^m E\big[D_i(\boldsymbol z)\, E[h_i \mid z_{i-N}, \ldots, z_{i+N}]\big] = \sum_{i=1}^m E\big[D_i(\boldsymbol z)\,(1 - T_i^N(\boldsymbol z))\big],$

where the second equality uses the fact that $D_i(\boldsymbol z)$ depends on $\boldsymbol z$ only through $(z_{i-N}, \ldots, z_{i+N})$. The following expression follows in exactly the same way as above:

(18) $\qquad E[V(\boldsymbol D)] - \alpha E[R(\boldsymbol D)] = \sum_{i=1}^m E\big[D_i(\boldsymbol z)\,(T_i^N(\boldsymbol z) - \alpha)\big].$
The Lagrangian of the problem is

(19) $\qquad L(\boldsymbol D, \lambda) = \sum_{i=1}^m E\big[D_i(\boldsymbol z)\,(1 - T_i^N(\boldsymbol z))\big] - \lambda \Big( \sum_{i=1}^m E\big[D_i(\boldsymbol z)\, T_i^N(\boldsymbol z)\big] - \alpha \sum_{i=1}^m E\big[D_i(\boldsymbol z)\big] \Big)$

(20) $\qquad\qquad = \sum_{i=1}^m E\Big[D_i(\boldsymbol z)\,\big\{(1 - T_i^N(\boldsymbol z)) - \lambda\,(T_i^N(\boldsymbol z) - \alpha)\big\}\Big], \qquad \lambda \ge 0.$
Now the problem in (5) is equivalent to maximizing $L(\boldsymbol D, \lambda)$ over $\boldsymbol D \in \mathcal{C}_N$. Clearly the Lagrangian is maximized by the rule

(21) $\qquad D_i(\boldsymbol z) = \mathbb{1}\big\{(1 - T_i^N(\boldsymbol z)) - \lambda\,(T_i^N(\boldsymbol z) - \alpha) \ge 0\big\}$

(22) $\qquad\qquad = \mathbb{1}\Big\{T_i^N(\boldsymbol z) \le \frac{1 + \lambda \alpha}{1 + \lambda}\Big\},$

with $\lambda \ge 0$. Now the constraint

(23) $\qquad \sum_{i=1}^m E\big[D_i(\boldsymbol z)\,(T_i^N(\boldsymbol z) - \alpha)\big] \le 0$

is satisfied if $\lambda$ is chosen sufficiently large. Note that the left-hand side of (23) is increasing in the threshold $t(\lambda) = (1 + \lambda\alpha)/(1 + \lambda)$, and hence the optimal solution is the decision $D_i(\boldsymbol z) = \mathbb{1}\{T_i^N(\boldsymbol z) \le t(\alpha)\}$, where $t(\alpha)$ is the largest threshold for which the constraint (23) holds.
Appendix B Derivation of the equations in (9)
Let us assume that we have the model (8). Our task is to estimate the unknown parameters of the model using the EM algorithm. We consider the selected z-scores $\boldsymbol z_S$ as our observed data and the corresponding hypothesis states $\boldsymbol h_S$ as unobserved data. The log likelihood of the complete data is

(24) $\qquad \ell(\pi, \mu; \boldsymbol z_S, \boldsymbol h_S) = \sum_{i \in S} \Big\{ h_i \big[\log \pi + \log \phi(z_i - \mu)\big] + (1 - h_i)\big[\log(1 - \pi) + \log \phi(z_i)\big] \Big\},$

where $\phi$ is the standard normal density.

E Step: Let us assume that the estimate of the parameters in the $k$-th step is $(\pi^{(k)}, \mu^{(k)})$. Then we need to find the estimate at the $(k+1)$-th step by maximizing the conditional expectation $Q(\pi, \mu \mid \pi^{(k)}, \mu^{(k)}) = E\big[\ell(\pi, \mu; \boldsymbol z_S, \boldsymbol h_S) \mid \boldsymbol z_S, \pi^{(k)}, \mu^{(k)}\big]$.
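For completeness, under our reconstruction of model (8) the conditional expectation takes the following explicit form, from which the updates in (9) follow by setting the partial derivatives to zero.

```latex
% E-step expectation under model (8); w_i^{(k)} is the posterior nonnull probability.
\begin{align*}
Q\big(\pi,\mu \mid \pi^{(k)},\mu^{(k)}\big)
  &= \sum_{i \in S} \Big\{ w_i^{(k)}\big[\log \pi + \log \phi(z_i - \mu)\big]
     + \big(1 - w_i^{(k)}\big)\big[\log(1-\pi) + \log \phi(z_i)\big] \Big\},\\
w_i^{(k)} &= \frac{\pi^{(k)}\,\phi\big(z_i - \mu^{(k)}\big)}
       {\big(1-\pi^{(k)}\big)\,\phi(z_i) + \pi^{(k)}\,\phi\big(z_i - \mu^{(k)}\big)}.
\end{align*}
% M-step: \partial Q/\partial\pi = 0 and \partial Q/\partial\mu = 0 give
% \pi^{(k+1)} = |S|^{-1} \sum_{i \in S} w_i^{(k)} and
% \mu^{(k+1)} = \sum_{i \in S} w_i^{(k)} z_i \,/\, \sum_{i \in S} w_i^{(k)},
% i.e., the updates in (9).
```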