A Unified and Optimal Multiple Testing Framework based on ρ-values
Abstract
Multiple testing is an important research direction that has gained major attention in recent years. Currently, most multiple testing procedures are designed with p-values or local false discovery rate (Lfdr) statistics. However, p-values obtained by applying the probability integral transform to some well-known test statistics often do not incorporate information from the alternatives, resulting in suboptimal procedures. On the other hand, Lfdr based procedures can be asymptotically optimal, but their guarantee of false discovery rate (FDR) control relies on consistent estimation of the Lfdr, which is often difficult in practice, especially when the incorporation of side information is desirable. In this article, we propose a novel and flexibly constructed class of statistics, called ρ-values, which combines the merits of both p-values and Lfdr while enjoying advantages over methods based on these two types of statistics. Specifically, it unifies these two frameworks and operates in two steps, ranking and thresholding. The ranking produced by ρ-values mimics that produced by Lfdr statistics, and the strategy for choosing the threshold is similar to that of p-value based procedures. Therefore, the proposed framework guarantees FDR control under weak assumptions; it maintains the integrity of the structural information encoded by the summary statistics and the auxiliary covariates and hence can be asymptotically optimal. We demonstrate the efficacy of the new framework through extensive simulations and two data applications.
Keywords: Asymptotically optimal, False discovery rate, Local false discovery rate, p-value, ρ-value, Side information.
1 Introduction
With the rise of big data and improved data availability, multiple testing has become an increasingly pertinent issue in modern scientific research. Multiple testing involves the simultaneous evaluation of multiple hypotheses or variables within a single study, which may lead to an increased risk of false positives if statistical methods are not appropriately employed. A popular notion of Type I error in the context of multiple testing is the false discovery rate (FDR; Benjamini and Hochberg, 1995), which refers to the expected proportion of false positives. Since its introduction by Benjamini and Hochberg (1995), the FDR has quickly become a key concept in modern statistics and a primary tool for large-scale inference for most practitioners. On a high level, almost all testing rules that control the FDR operate in two steps: first rank all hypotheses according to some significance indices and then reject those with index values less than or equal to some threshold. In this paper, we propose an optimal multiple testing framework based on a new concept called ρ-values, which unifies the widely employed p-value and local false discovery rate (Lfdr) based approaches while enjoying advantages over both types of methods. In addition, the ρ-value framework has a close connection with e-value based approaches and therefore accommodates flexible dependence structures. In what follows, we first provide an overview of conventional practices and identify relevant issues. Next, we introduce the proposed framework and then highlight the contributions of our approach in detail.
1.1 Conventional practices and issues
Some of the most popular FDR rules use the p-value as the significance index for ranking (e.g., Benjamini and Hochberg, 1995; Cai et al., 2022; Lei and Fithian, 2018; Genovese et al., 2006). The standard way of obtaining p-values is to apply a probability integral transform to some well-known test statistics. For example, Li and Barber (2019) use a permutation test, Roquain and Van De Wiel (2009) employ a Mann–Whitney U test, and Cai et al. (2022) adopt a t-test. However, p-value based methods can be inefficient because conventional p-values do not incorporate information from the alternative distributions (e.g., Sun and Cai, 2007; Leung and Sun, 2021). The celebrated Neyman–Pearson lemma states that the optimal statistic for testing a single hypothesis is the likelihood ratio, while an analog of the likelihood ratio statistic for multiple testing problems is the Lfdr (Efron et al., 2001; Efron, 2003; Aubert et al., 2004). It has been shown that a ranking and thresholding procedure based on the Lfdr is asymptotically optimal for FDR control (Sun and Cai, 2007; Cai et al., 2019). Nevertheless, the validity of many Lfdr based methods relies crucially on the consistent estimation of the Lfdr statistics (Basu et al., 2018; Gang et al., 2023; Xie et al., 2011), which can be difficult in practice (e.g., Cai et al., 2022). To overcome this, weighted p-value based approaches have been proposed to emulate the Lfdr method; see, for example, Lei and Fithian (2018), Li and Barber (2019) and Liang et al. (2023). However, all of these Lfdr-mimicking procedures either require strong model assumptions or are suboptimal.
1.2 Our methods and contributions
To address the aforementioned issues arising from the p-value and Lfdr frameworks, this article proposes a new concept called the ρ-value, which takes the form of a likelihood ratio and allows wide flexibility in the choice of density functions. Based on such ρ-values (or weighted ρ-values, analogous to weighted p-values), we aim to develop a new flexible multiple testing framework that unifies p-value based and Lfdr based approaches. Specifically, the proposed ρ-value based methods also operate in two steps: first rank all hypotheses according to their (weighted) ρ-values (with the goal of mimicking the Lfdr ranking), then reject those hypotheses with (weighted) ρ-values less than or equal to some threshold, where the threshold is determined similarly to p-value based procedures.
Compared to existing frameworks, the new framework has several advantages. First, if the ρ-values are carefully constructed, then their ranking coincides with that produced by the Lfdr statistics. Thus, methods based on ρ-values can be asymptotically optimal. Second, the strategies for choosing the threshold for ρ-value based methods are similar to those for p-value based methods. Therefore, the proposed approaches enjoy theoretical properties similar to methods based on p-values. Moreover, the FDR guarantee of the ρ-value approaches does not require consistent estimation of the Lfdr statistics, and hence the proposed approaches are much more flexible than the Lfdr framework. Third, compared to Lfdr methods, side information can be easily incorporated into ρ-value based procedures through a simple weighting scheme to improve the ranking of the hypotheses. Finally, we provide a unified view of the p-value based and Lfdr based frameworks. In particular, we show that these two frameworks are not as different as suggested by previous works (e.g., Sun and Cai, 2007; Leung and Sun, 2021).
1.3 Organization
The paper is structured as follows. Section 2 presents the problem formulation. In Section 3, we introduce ρ-values and provide several examples of ρ-value based procedures. Sections 4 and 5 present numerical comparisons of the proposed methods and other competing approaches using simulated and real data, respectively. More discussions of the proposed framework are provided in Section 6. Technical proofs are collected in the Appendix.
2 Problem Formulation
To motivate our analysis, suppose x_1, …, x_n are independent summary statistics arising from the following random mixture model:
x_i ∼ (1 − π) f_0 + π f_1,  i = 1, …, n,  (1)
which has been widely adopted in many large-scale inference problems (e.g., Efron and Tibshirani, 2007; Jin and Cai, 2007; Efron, 2004). Note that, for simplicity, we assume a homogeneous alternative density in Model (1) for now; this will be extended to heterogeneous scenarios in later sections.
Upon observing the summary statistics , the goal is to simultaneously test the following hypotheses:
Denote by δ = (δ_1, …, δ_n) the n-dimensional decision vector, where δ_i = 1 means we reject the i-th hypothesis, and δ_i = 0 otherwise. In large-scale multiple testing problems, false positives are inevitable if one wishes to discover non-nulls with a reasonable power. Instead of aiming to avoid any false positives, Benjamini and Hochberg (1995) introduced the FDR, i.e., the expected proportion of false positives among all selections,
and a practical goal is to control the FDR at a pre-specified significance level. A quantity closely related to the FDR is the marginal false discovery rate (mFDR), defined by
Under certain first and second-order conditions on the number of rejections, the mFDR and the FDR are asymptotically equivalent (Genovese and Wasserman, 2002; Basu et al., 2018; Cai et al., 2019), and the main considerations for using the mFDR criterion are to derive optimality theory and facilitate methodological developments. An ideal testing procedure should control the FDR (mFDR) and maximize the power, which is measured by the expected number of true positives (ETP):
We call a multiple testing procedure valid if it controls the mFDR asymptotically at the nominal level and optimal if it has the largest ETP among all valid procedures. We call a procedure asymptotically optimal if its ETP is asymptotically no smaller than that of any decision rule that controls the mFDR at the pre-specified level asymptotically.
3 Methodology
In this section, we first introduce the concept of the ρ-value and discuss its connection with the e-value. Then, several ρ-value based testing procedures are proposed in turn in Sections 3.2–3.6. Specifically, we introduce the oracle and data-driven ρ-BH procedures and the weighted ρ-BH procedures, as well as ρ-value methods with side information.
3.1 ρ-value and its connection with e-value
As introduced in the previous sections, the ρ-value is an analog of the likelihood ratio, and we now present its rigorous definition below.
Definition 1.
Suppose x is a summary statistic and f_0 is its density under the null. Then ρ = f_0(x)/g(x) is a ρ-value of x, where g is any density function.
It can be seen from the above definition that ρ-values are broadly defined. In particular, the ρ-value can be viewed as a generalization of the likelihood ratio. The key difference is that the denominator of the ρ-value can be any density function and is no longer required to be the alternative density.
In addition, the ρ-value has a close connection with the e-value, which is defined below.
Definition 2 (Wang and Ramdas (2022); Vovk and Wang (2021)).
A non-negative random variable e is an e-value if E(e) ≤ 1, where the expectation is taken under the null hypothesis.
Using Markov's inequality, it is straightforward to show that the reciprocal of an e-value is a p-value (Wang and Ramdas, 2022). Note that E_0(1/ρ) = E_0{g(x)/f_0(x)} ≤ 1, which means that 1/ρ is an e-value and a ρ-value is a special p-value. Therefore, one can directly use ρ-values as the inputs for the BH procedure and obtain an e-BH procedure (Wang and Ramdas, 2022). One attractive feature of the e-BH procedure is that it controls the FDR at the target level under arbitrary dependence. However, in practice the e-BH procedure is usually very conservative. To see why, let F_0 be the distribution function of the ρ-values under the null and assume that the ρ-values are independent. Then, if we reject the hypotheses with ρ_i ≤ t, the e-BH procedure implicitly uses nt as a conservative estimate of the number of false positives, despite the fact that a ρ-value no longer follows the uniform distribution under the null, so that nF_0(t) can be far smaller than nt. In the following section, we introduce a novel ρ-BH procedure which improves upon e-BH by using a tighter estimate of the number of false positives.
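To make the connection concrete, the following minimal R sketch applies the e-BH procedure to the reciprocals of the ρ-values; the mixture, the effect size, and the choice of g below are illustrative assumptions of ours, not the paper's settings.

```r
# e-BH applied to the e-values 1/rho_i (a minimal sketch).
e_bh <- function(rho, alpha = 0.05) {
  n <- length(rho)
  e <- 1 / rho                              # 1 / rho-value is an e-value under the null
  ord <- order(e, decreasing = TRUE)        # sort e-values from largest to smallest
  # e-BH rule: reject the k largest e-values, k = max{k : k * e_(k) / n >= 1 / alpha}
  k <- max(c(0, which(seq_len(n) * e[ord] / n >= 1 / alpha)))
  rejected <- logical(n)
  if (k > 0) rejected[ord[seq_len(k)]] <- TRUE
  rejected
}

set.seed(1)
theta <- rbinom(1000, 1, 0.2)               # 20% non-nulls (illustrative)
x     <- rnorm(1000, mean = 2.5 * theta)    # z-values, null density N(0,1)
rho   <- dnorm(x) / dnorm(x, mean = 2.5)    # rho-values with the illustrative choice g = N(2.5, 1)
sum(e_bh(rho, alpha = 0.1))                 # typically few rejections
```

Since the e-BH rule applied to the e-values 1/ρ_i coincides with the BH procedure applied directly to the ρ-values treated as p-values, the small rejection set above illustrates the conservativeness discussed in the text.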
3.2 The ρ-BH procedure
For simplicity, we assume that the ρ-values are independent under the null and that there are no ties among them. Recall that F_0 is the null distribution function of the ρ-values. If the null density f_0 is known, then F_0 can be easily estimated by the empirical distribution of samples generated from f_0; the details are provided in the simulation section. Consider the decision rule that rejects the i-th hypothesis if and only if ρ_i ≤ t; then a tight estimate of the number of false positives is n(1 − π)F_0(t), where π is defined in Model (1). Hence, a reasonable and ideal threshold should control the estimated false discovery proportion (FDP) at level α, i.e., n(1 − π)F_0(t)/max{R(t), 1} ≤ α, where R(t) denotes the number of rejections at threshold t. To maximize power, we choose the rejection threshold to be ρ_(k*), where k* = max{k : n(1 − π)F_0(ρ_(k)) ≤ αk} and ρ_(k) denotes the k-th smallest ρ-value. We call this procedure the ρ-BH procedure and summarize it in Algorithm 1.
Input: ; a predetermined density function ; non-null proportion ; desired FDR level .
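Since only the inputs of Algorithm 1 are reproduced above, here is a minimal R sketch of the oracle ρ-BH rule as just described, assuming a N(0,1) null density and approximating the null cdf F_0 of the ρ-values by the empirical distribution of Monte Carlo draws from the null; the function and variable names, and the toy mixture, are ours.

```r
# Minimal sketch of the (oracle) rho-BH procedure.
rho_bh <- function(x, f0, g, pi_alt, alpha = 0.05, rnull = rnorm, n_mc = 1e5) {
  n    <- length(x)
  rho  <- f0(x) / g(x)                        # rho-values
  x0   <- rnull(n_mc)                         # Monte Carlo draws from the null distribution
  F0   <- ecdf(f0(x0) / g(x0))                # estimated null cdf of the rho-values
  rho_sorted <- sort(rho)
  # largest k with estimated FDP  (1 - pi) * n * F0(rho_(k)) / k  <= alpha
  ok <- which((1 - pi_alt) * n * F0(rho_sorted) <= alpha * seq_len(n))
  if (length(ok) == 0) return(logical(n))
  rho <= rho_sorted[max(ok)]                  # reject rho_i <= threshold
}

# Toy example with f0 = N(0,1) and g chosen as the non-null density N(3,1),
# so the rho-value ranking matches the Lfdr ranking discussed below.
set.seed(1)
pi_alt <- 0.2
theta  <- rbinom(2000, 1, pi_alt)
x      <- rnorm(2000, mean = 3 * theta)
rej    <- rho_bh(x, dnorm, g = function(z) dnorm(z, mean = 3), pi_alt = pi_alt, alpha = 0.1)
c(rejections = sum(rej), FDP = sum(theta[rej] == 0) / max(sum(rej), 1))
```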
It is important to note that, since F_0 is a monotonically increasing function, the rankings produced by the ρ_i's and the F_0(ρ_i)'s are identical. Also note that F_0(ρ_i) is uniformly distributed on (0, 1) under the null. It follows that the ρ-BH procedure is in fact equivalent to the original BH procedure with the F_0(ρ_i)'s as the p-value inputs and α/(1 − π) as the target FDR level. Consequently, the ρ-BH procedure enjoys the same FDR guarantee as the original BH procedure, as presented in Theorem 1, and in the meantime has a power advantage over e-BH.
Theorem 1.
Assume that the null ρ-values are mutually independent and are independent of the non-null ρ-values; then the ρ-BH procedure controls the FDR at level α.
It is worthwhile to note that one can use any predetermined density g to construct the ρ-values, and Theorem 1 always guarantees FDR control as long as the ρ-values are independent under the null. Recall that the Lfdr statistic is defined as Lfdr(x) = (1 − π)f_0(x)/f(x), where f = (1 − π)f_0 + πf_1 is the mixture density, and it is shown in Sun and Cai (2007) and Cai et al. (2019) that a ranking and thresholding rule based on the Lfdr is asymptotically optimal among all mFDR control rules. Thus, ideally we would like to adopt ρ_i = f_0(x_i)/f_1(x_i), i.e., take g to be the alternative density f_1, since the ranking produced by these ρ-values is identical to that produced by the Lfdr statistics. In fact, with this choice of ρ-values, the ρ-BH procedure is asymptotically optimal under some mild conditions, as stated in the next theorem.
Theorem 2.
Let the ρ-values be constructed with g = f_1 as above and suppose they are independent. Denote by the ρ-BH rule and let be any other rule that controls the mFDR asymptotically. Suppose the following holds
-
(A1)
.
Then we have .
Remark 1.
Since the rankings produced by the Lfdr statistics and the ρ-values are identical, the ρ-value thresholding rule is equivalent to an Lfdr thresholding rule with some corresponding threshold. It is shown in Sun and Cai (2007) and Cai et al. (2019) that the following procedure
reject the hypotheses with the k smallest Lfdr values, where k = max{ j : j^{-1} ∑_{i=1}^{j} Lfdr_(i) ≤ α } and Lfdr_(1) ≤ ⋯ ≤ Lfdr_(n) are the ordered Lfdr statistics,  (2)
is asymptotically optimal for maximizing ETP while controlling and converges to some fixed threshold in probability. If the number of rejections by (2) goes to infinity as , then it guarantees that , which is equivalent to with . Hence, Assumption (A1) is mild.
3.3 The data-driven ρ-BH procedure
In practice, the alternative density and the non-null proportion are usually unknown and need to be estimated from the data. The problem of estimating the non-null proportion has been discussed extensively in the literature (e.g., Storey, 2002; Jin and Cai, 2007; Meinshausen and Rice, 2006; Chen, 2019). To ensure valid mFDR control, we require the estimator to be conservative consistent, defined as follows.
Definition 3.
An estimator is a conservative consistent estimator of if it satisfies .
One possible choice of such an estimator is the Storey estimator, as shown in the following proposition.
Proposition 1.
The estimator proposed in Storey (2002) is conservative consistent for any tuning parameter satisfying the stated condition.
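For concreteness, a short R sketch of the Storey estimator referred to in Proposition 1 is given below; applying it to p-values and truncating the estimate to [0, 1] are our own implementation choices.

```r
# Storey (2002) estimator of the non-null proportion (a minimal sketch).
storey_pi <- function(pvals, tau = 0.5) {
  # estimated null proportion, truncated to [0, 1]
  pi0_hat <- min(1, sum(pvals > tau) / (length(pvals) * (1 - tau)))
  1 - pi0_hat                                # conservative estimate of the non-null proportion
}
```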
The problem of estimating the density g is more complicated. If we use the entire sample to construct the estimate of g and compute the ρ-values accordingly, then the resulting ρ-values are no longer independent even if the summary statistics are. One possible strategy to circumvent this dependence problem is sample splitting. More specifically, we can randomly split the data into two disjoint halves and use the first half of the data to estimate the alternative density for the second half (e.g., using the estimator proposed in Fu et al. (2022)); the ρ-values for the second half can then be calculated with this estimate as the denominator. Hence, when testing the second half of the data, the estimated density can be regarded as predetermined and independent of the data being tested. The decisions on the first half of the data are obtained by switching the roles of the first and the second halves and repeating the above steps. If the FDR is controlled at level α for each half, then the overall mFDR is also controlled at level α asymptotically. We summarize the above discussion in Algorithm 2 and Theorem 3.
Input: ; desired FDR level .
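The following minimal R sketch illustrates the data-splitting scheme of Algorithm 2 under simplifying assumptions: z-value summary statistics with a N(0,1) null, a plain kernel density estimate on the training half as a stand-in for the Fu et al. (2022) estimator (any predetermined density is valid by Theorem 1), and the storey_pi and rho_bh sketches given earlier.

```r
# Data-driven rho-BH via sample splitting (a minimal sketch).
dd_rho_bh <- function(x, alpha = 0.05, tau = 0.5) {
  n <- length(x)
  half1 <- sample(n, floor(n / 2))
  half2 <- setdiff(seq_len(n), half1)
  rejected <- logical(n)
  for (s in list(list(train = half1, test = half2),
                 list(train = half2, test = half1))) {
    dens  <- density(x[s$train])                     # kernel estimate of the mixture density
    g_hat <- function(z) pmax(approx(dens$x, dens$y, xout = z, rule = 2)$y, 1e-12)
    # non-null proportion estimated on the training half, kept independent of the tested half
    pi_hat <- min(storey_pi(2 * pnorm(-abs(x[s$train])), tau), 0.99)
    rejected[s$test] <- rho_bh(x[s$test], dnorm, g_hat, pi_hat, alpha)
  }
  rejected
}

set.seed(1)
theta <- rbinom(2000, 1, 0.2)
x     <- rnorm(2000, mean = 3 * theta)
sum(dd_rho_bh(x, alpha = 0.1))
```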
Remark 2.
A natural question for the data-splitting approach is whether it will negatively impact the power. Suppose that the density estimates for the two halves are consistent estimators of some common function, and the proportion estimates are consistent estimators of some common constant. Consider the threshold selected by Algorithm 1 with these limits as inputs on the full data. Then the thresholds selected by Algorithm 2 for the two halves of the data are both expected to converge to this oracle threshold. Hence, the decision rules on both halves converge to the rule on the full data. Therefore, the decision rule obtained by data splitting is asymptotically as powerful as the rule on the full data.
Theorem 3.
Assume that ’s are independent. Denote by , and the -values, selected thresholds and the estimated alternative proportions obtained from Algorithm 2, for the first and second halves of the data respectively, . Denote by the null distribution function for . Suppose and let and , . Assume the following hold
-
2.
and , for some constants ;
-
3.
, ;
-
4.
, and is strictly increasing in , for some constants .
Then we have .
Remark 3.
Theorem 2 and Remark 1 imply that, in the oracle case when the alternative density and the non-null proportion are estimated by their true values, the threshold of the ρ-values should be at least of a certain fixed order. Since the proportion estimators are conservative consistent, they converge in probability to a number less than the true proportion. Therefore, the first part of Assumption 2 is mild. Moreover, by setting the relevant constant equal to some fixed number, the first part of Assumption 2 can be easily checked numerically. The second part of Assumption 2 is only slightly stronger than the condition that the total number of rejections for each half of the data grows at the stated rate. It is a sufficient condition for showing that the estimated FDP is close to the true FDP, and it can be easily relaxed if the density estimator satisfies a certain convergence rate. Assumption 3 is also reasonable: it excludes the trivial cases where no null hypothesis can be rejected or all null hypotheses are rejected. If the null distribution functions are continuous, then the first part of Assumption 4 is implied by Assumption 3 and the relevant definition. The second part of Assumption 4 can be easily verified numerically and is also mild under continuity. Finally, all of the above conditions are automatically satisfied in the oracle case.
3.4 ρ-BH under dependence
We have mentioned in Section 3.1 that a ρ-value is the reciprocal of an e-value. Therefore, if the summary statistics are dependent with possibly complicated dependence structures, one can simply employ the ρ-values as the inputs of the BH procedure (i.e., run the e-BH procedure) and the resulting FDR is still appropriately controlled at the target level.
Alternatively, one can deal with arbitrary dependence via the proposal in Benjamini and Yekutieli (2001). They have shown that the BH procedure with target level α/(∑_{k=1}^{n} 1/k) controls the FDR at level α, where the factor ∑_{k=1}^{n} 1/k is known as the Benjamini–Yekutieli (BY) correction. Since the ρ-BH procedure is equivalent to the BH procedure with the F_0(ρ_i)'s as p-values and α/(1 − π) as the target FDR level, we also have the following theorem on the BY correction.
Theorem 4.
By setting the target FDR level equal to α/(∑_{k=1}^{n} 1/k), we have FDR ≤ α under arbitrary dependence.
Note that if one chooses to use the BY correction or the e-BH procedure, then the data splitting step in the data-driven procedures is no longer necessary. We also remark that our FDR control results under independence in the previous sections can be extended to weakly dependent ρ-values, and the same technical tools as in Cai et al. (2022) can be employed to achieve asymptotic error rate control. The current article focuses on introducing the new testing framework based on ρ-values and skips the discussion of weakly dependent scenarios.
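As a quick illustration of Theorem 4, the corrected target level can be computed as follows; rho_bh refers to the earlier sketch and the numbers are only an example.

```r
# Benjamini-Yekutieli correction of the target FDR level (Theorem 4):
# run the rho-BH procedure at level alpha / sum_{k=1}^{n} 1/k.
by_level <- function(alpha, n) alpha / sum(1 / seq_len(n))
by_level(0.1, 2000)                                   # about 0.0122 for n = 2000
# e.g. rho_bh(x, dnorm, g, pi_alt, alpha = by_level(0.1, length(x)))
```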
3.5 Weighted ρ-BH procedure
Similar to incorporating prior information via a p-value weighting scheme (e.g., Benjamini and Hochberg, 1997; Genovese et al., 2006; Dobriban et al., 2015), we can also employ such a weighting strategy in the current ρ-value framework. Let w_1, …, w_n be a set of positive weights whose average equals one. The weighted BH procedure proposed in Genovese et al. (2006) uses the weighted p-values p_i/w_i as the inputs of the original BH procedure. Genovese et al. (2006) prove that, if the p-values are independent and the weights are independent of the p-values conditional on the truth of the hypotheses, then the weighted BH procedure controls the FDR at a level less than or equal to the target.
Following their strategy, we can apply the weighted BH procedure to the ρ-values and obtain the same FDR control result. However, such a procedure might be suboptimal, as will be explained in the following section. Alternatively, we derive a weighted ρ-BH procedure; the details are presented in Algorithm 3.
Input: ; a predetermined density function ; non-null proportion ; predetermined weights ; desired FDR level .
Note that the weighting in Algorithm 3 produces a different ranking from that of the weighted p-value procedure, which may improve upon the power of the weighted p-value procedure with proper choices of ρ-values and weights. On the other hand, the non-linearity introduced by the weighting imposes challenges on the theoretical guarantee for the mFDR control of Algorithm 3 compared to that of Genovese et al. (2006), and we derive the following result based on assumptions similar to those of Theorem 3.
Theorem 5.
Assume that are independent. Denote by and Based on the notations from Algorithm 3, suppose the following hold
-
5.
and as , for some ;
-
6.
; ;
-
7.
, is strictly increasing in , for some constants .
Then we have
3.6 ρ-BH with side information
In this section, we propose a ρ-BH procedure that incorporates auxiliary information while maintaining proper mFDR control. In many scientific applications, additional covariate information, such as the patterns of the signals and nulls, is available. Hence, the problem of multiple testing with side information has received much attention and has recently become an active area of research (e.g., Lei and Fithian, 2018; Li and Barber, 2019; Ignatiadis and Huber, 2021; Leung and Sun, 2021; Cao et al., 2022; Zhang and Chen, 2022; Liang et al., 2023). As studied in the aforementioned works, proper use of such side information can enhance the power and the interpretability of simultaneous inference methods. Let x_i denote the primary statistic and s_i the side information. Then we model the data generation process as follows.
θ_i | s_i ∼ Bernoulli(π(s_i)),  x_i | (θ_i, s_i) ∼ (1 − θ_i) f_{0i} + θ_i f_{1i},  i = 1, …, n.  (3)
Upon observing the data, we would like to construct a decision rule that controls the mFDR based on ρ-values. As before, we assume the null distributions are known, and we define the ρ-values by ρ_i = f_{0i}(x_i)/g_i(x_i) for some density functions g_i. Let w(·) be a predetermined function and let F_{0i} denote the corresponding null distribution function. Then we incorporate the side information through a ρ-value weighting scheme by choosing an appropriate function w. The details are summarized in Algorithm 4.
Input: ; ; predetermined density functions ; non-null proportions ; predetermined ; desired FDR level .
In contrast to the ρ-BH procedure, Algorithm 4 is no longer equivalent to the BH procedure with transformed ρ-values as the inputs, since the corresponding rankings are different. The ideal choice of g_i is again the non-null density f_{1i}, while the ideal choice of the weight is π(s_i)/(1 − π(s_i)); the rationale is provided as follows. Define the conditional local false discovery rate (Clfdr; Fu et al., 2022; Cai et al., 2019) as
Cai et al. (2019) show that a ranking and thresholding procedure based on the Clfdr is asymptotically optimal for FDR control. Note that if we take g_i to be f_{1i} and the weight to be π(s_i)/(1 − π(s_i)), then the ranking produced by the weighted ρ-values is identical to that produced by the Clfdr statistics. However, the validity of the data-driven methods proposed in Cai et al. (2019) and Fu et al. (2022) relies on the consistent estimation of the Clfdr_i's. In many real applications, it is extremely difficult to estimate the Clfdr accurately even when the dimension of the covariate is moderate (Cai et al., 2022). In contrast, the mFDR guarantee of Algorithm 4 does not rely on any such Clfdr consistency results, and our proposal is valid under much weaker conditions, as demonstrated by the next theorem.
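The ranking equivalence claimed above is easy to verify numerically; the following R snippet uses an illustrative univariate-covariate mixture of our own choosing.

```r
# With rho_i = f0(x_i) / f1(x_i) and weights pi(s_i) / (1 - pi(s_i)), the weighted
# rho-values produce exactly the Clfdr ranking, since Clfdr_i is a strictly
# increasing function of (1 - pi(s_i)) f0(x_i) / (pi(s_i) f1(x_i)).
set.seed(1)
s     <- runif(500)                               # univariate side information
pis   <- 0.05 + 0.3 * s                           # non-null proportion pi(s)
theta <- rbinom(500, 1, pis)
x     <- rnorm(500, mean = 2.5 * theta)
f0 <- dnorm(x); f1 <- dnorm(x, mean = 2.5)
clfdr        <- (1 - pis) * f0 / ((1 - pis) * f0 + pis * f1)
weighted_rho <- (f0 / f1) / (pis / (1 - pis))
identical(order(weighted_rho), order(clfdr))      # TRUE: identical rankings
```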
Theorem 6.
Assume that are independent. Denote by and Based on the notations from Algorithm 4, suppose the following hold
-
8.
and as , for some ;
-
9.
; ;
-
10.
, is strictly increasing in , for some constants .
Then we have
Note that the above theorem allows flexible choices of the density functions and the weights. Hence, similar to the comparison between ρ-values and the Lfdr, the ρ-value framework with side information is again much more flexible than the Clfdr framework, which requires consistent estimation of the Clfdr statistics.
We also remark that Cai et al. (2022) recommend using π(s_i)/(1 − π(s_i)) to weight p-values arising from two-sample t-statistics, but the authors provide only a heuristic explanation for the superiority of this weighting, while we have the following optimality result, which provides a more rigorous justification for it.
Theorem 7.
Remark 4.
Assumptions 8–10 are automatically satisfied under the conditions assumed by Theorem 7. Therefore, in such an ideal setting, Algorithm 4 is optimal among all testing rules that asymptotically control the mFDR at the nominal level. In addition, Theorem 7 implies that the weighted BH procedure (Genovese et al., 2006) based on the ranking of the weighted p-values is suboptimal.
In practice, we need to choose the densities and the weights based on the available data. Again, if the entire sample is used to construct them, then the dependence among the resulting weighted ρ-values is complicated. Similar to Algorithm 2, we can use sample splitting to circumvent this problem. The details of the data-driven version of Algorithm 4 are provided in Algorithm 5. To ensure valid mFDR control, we require a uniformly conservative consistent estimator of the non-null proportion function, whose definition is given below.
Definition 4.
An estimator is a uniformly conservative consistent estimator of if it satisfies as , where for .
The problem of constructing such a uniformly conservative consistent estimator has been discussed in the literature; see, for example, Cai et al. (2022) and Cai et al. (2019).
Input: ; ; desired mFDR level .
The next theorem shows that Algorithm 5 indeed controls mFDR at the target level asymptotically under conditions analogous to those assumed in Theorem 3.
Theorem 8.
Assume that are independent. Denote by , and the weighted ρ-values, selected thresholds and the estimated alternative proportions obtained from Algorithm 5, for the first and second halves of the data respectively, . Denote by the null distribution function for . Suppose for some and let and , . Based on the notation from Algorithm 5, suppose the following hold
-
12.
, , for some constants ;
-
13.
, ;
-
14.
, is strictly increasing in , for some constants .
Then we have
4 Numerical Experiments
In this section, we conduct several numerical experiments to compare our proposed procedures with some state-of-the-art methods. In all experiments, we study the general case where side information is available, and generate data according to the following hierarchical model:
(4) |
where , and for . We are interested in testing
To implement our proposed data-driven procedure with side information, i.e., Algorithm 5, we use the following variation of the Storey estimator to estimate π(s_i) in Step 2:
π̂(s_i) = 1 − ∑_j K_H(s_j − s_i) 1{p_j > τ} / {(1 − τ) ∑_j K_H(s_j − s_i)},  (5)
where p_j is the two-sided p-value and τ is chosen as the p-value threshold of the BH procedure at the target level; this ensures that the null cases are dominant in the set {j : p_j > τ}. We let K_H be the density of the multivariate normal distribution with mean zero and covariance matrix H, and we use the function Hns in the R package ks to choose H. Similar strategies for choosing the kernel and τ are employed in Cai et al. (2022) and Ma et al. (2022).
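A sketch of this covariate-localized Storey-type estimator is given below for the bivariate covariate setting of Section 4.1; the displayed form of the estimator and the use of mvtnorm::dmvnorm for the kernel weights are our reading of Equation (5), with the bandwidth matrix chosen by ks::Hns as stated in the text.

```r
library(ks)
library(mvtnorm)

# Kernel-weighted Storey-type estimate of pi(s_i) (a sketch of Equation (5)).
pi_hat_s <- function(S, pvals, tau) {
  # S: n x d matrix of covariates; pvals: two-sided p-values; tau: screening threshold
  H <- Hns(S)                                       # normal-scale bandwidth matrix
  sapply(seq_len(nrow(S)), function(i) {
    w   <- dmvnorm(S, mean = S[i, ], sigma = H)     # kernel weights K_H(s_j - s_i)
    est <- 1 - sum(w * (pvals > tau)) / ((1 - tau) * sum(w))
    min(max(est, 0), 1)                             # truncate to [0, 1]
  })
}
```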
We construct the non-null density estimates using a modified version of the two-step approach proposed in Fu et al. (2022) as follows (a simplified code sketch is given after these implementation details).
(a) Let for .
(b) Calculate , and the weights for .
(c) Obtain as the non-null density estimate for .
Here, the kernel function is the same as the one in Equation (5), and the bandwidth is chosen automatically using the function density in the R package stats. To estimate the null distribution functions, i.e., the distribution functions of the (weighted) ρ-values under the null, we independently generate 1000 samples from the null distribution for each hypothesis and estimate the null distribution function through the empirical distribution of these samples. The estimates for the first half of the data can be obtained by switching the roles of the first and the second halves. The implementation details are provided at https://github.com/seanq31/rhoBH.
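The following heavily simplified R sketch conveys the spirit of this two-step construction while ignoring the covariate localization; the crude posterior-probability weights below are our own stand-in, and the exact weighting in Fu et al. (2022) differs.

```r
# Weighted kernel density estimate of the non-null density (simplified sketch).
f1_hat_builder <- function(x_train, pi_hat) {
  # pi_hat: scalar or per-observation non-null proportion estimate
  f_hat <- approxfun(density(x_train), rule = 2)      # estimate of the mixture density
  # weight: rough estimate of P(non-null | x), truncated at zero
  w <- pmax(1 - (1 - pi_hat) * dnorm(x_train) / pmax(f_hat(x_train), 1e-12), 0)
  w <- (w + 1e-8) / sum(w + 1e-8)                     # normalize (small ridge for stability)
  dens_w <- density(x_train, weights = w)             # weighted kernel density estimate
  approxfun(dens_w, rule = 2)                         # returns a function f1_hat(.)
}

# Usage sketch: f1_hat <- f1_hat_builder(x[half1], pi_hat)
#               rho    <- dnorm(x[half2]) / pmax(f1_hat(x[half2]), 1e-12)
```

In the full implementation, the density estimated from one half of the data is used to form the ρ-values for the other half, as described above.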
We compare the performance of the following six methods throughout the section:
• ρ-BH.OR: Algorithm 4 with the oracle (true) non-null proportions and non-null densities.
• ρ-BH.DD: Algorithm 5 with the implementation details described above.
• LAWS: the data-driven LAWS procedure (Cai et al., 2022) with the two-sided p-values based on the cumulative distribution function (cdf) of the standard normal variable.
• CAMT: the CAMT procedure (Zhang and Chen, 2022) with the same p-values used in LAWS.
• BH: the Benjamini–Hochberg procedure (Benjamini and Hochberg, 1995) with the same p-values used in LAWS.
• Clfdr: the Clfdr based method (Fu et al., 2022) with the Clfdr statistics computed from the estimates derived in Step (b). Specifically, we calculate the data-driven threshold and reject those hypotheses whose Clfdr values do not exceed it.
All simulation results are based on 100 independent replications at the target FDR level. The FDR is estimated by the average of the FDP over replications, and the average power is estimated by the average proportion of true positives that are correctly identified over replications.
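In code, the two summary metrics reported for each replication are simply:

```r
# theta: true indicator vector; rejected: logical decision vector.
fdp_and_power <- function(theta, rejected) {
  c(FDP   = sum(rejected & theta == 0) / max(sum(rejected), 1),
    power = sum(rejected & theta == 1) / max(sum(theta == 1), 1))
}
# The reported FDR and average power are the means of these two quantities
# over the 100 replications.
```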
4.1 Bivariate side information
We first consider a similar setting as Setup S2 in Zhang and Chen, (2022), where the non-null proportions and non-null distributions are closely related to a two dimensional covariate. Specifically, the parameters in Equation (4) are determined by the following equations.
where is the identity matrix, , , and are hyper-parameters that determine the impact of on and . In the experiments, we fix at 2 or 1 (denoted as “Medium” and “High”, respectively), at or (denoted as “Moderate” and “Strong”, respectively), and vary from 2 to 6.
Note that Zhang and Chen (2022) assume it is known that and each depend on one coordinate of the covariate when performing their procedure. Hence, for a fair comparison, we employ the same assumption, substitute by for the estimation of (as defined in (5)) and (as defined in Step (a) of constructing ), and substitute by in the remaining steps of obtaining , for .
It can be seen from Figure 1 that, except for the Clfdr procedure, all methods successfully control the FDR at the target level. Figure 2 shows that the empirical powers of ρ-BH.OR and ρ-BH.DD are significantly higher than those of all other FDR-controlled methods. It is not surprising that ρ-BH.OR and ρ-BH.DD outperform LAWS and BH, because the p-values rely only on the null distribution, whereas the ρ-values mimic the likelihood ratio statistics and encode information from the alternative distribution. Both ρ-BH.OR and ρ-BH.DD outperform CAMT as well, because CAMT uses a parametric model to estimate the likelihood ratio, while ρ-BH.DD employs a more flexible non-parametric approach that can better capture the structural information from the alternative distribution. Finally, as discussed in the previous sections, the Clfdr based approaches strongly rely on the estimation accuracy of the non-null proportions and densities, which can be difficult to achieve in practice. Hence, as expected, we observe severe FDR distortion for the Clfdr method. This phenomenon reflects the advantage of the proposed ρ-value framework: its FDR control is still guaranteed even if the estimated density is far from the ground truth.


4.2 Univariate side information
Next, we consider the univariate covariate case and generate data according to the following model
Two settings are considered. In Setting 1, the signals appear with elevated frequencies in the following blocks:
For the rest of the locations we set . We vary the signal strength from 2 to 4 to study its impact. In Setting 2, we set in the blocks specified above and elsewhere. We fix and vary from 0.5 to 0.9 to study the influence of sparsity levels. In these two cases, the side information can be interpreted as the signal location. When implementing CAMT, we use a spline basis with six equiquantile knots to account for potentially complex nonlinear effects, as suggested in Zhang and Chen (2022) and Lei and Fithian (2018).
Again, we compare the six procedures as in Section 4.1, and the results of Settings 1 and 2 are summarized in the first and second rows of Figure 3, respectively. We can see from the first column of Figure 3 that, in both settings, all methods control the FDR appropriately at the target level. From the second column, it can be seen that both ρ-BH.OR and ρ-BH.DD outperform the other four methods. This is because, besides its ability to incorporate the sparsity information, the ρ-value statistic also encodes other structural knowledge and is hence more informative than the p-value based methods. In addition, the nonparametric approach employed by ρ-BH.DD is better at capturing non-linear information than the parametric model used in CAMT, which again leads to a more powerful procedure.

5 Data Analysis
In this section, we compare the performance of ρ-BH.DD with Clfdr, CAMT, LAWS and BH on two real datasets.
5.1 MWAS data
We first analyze a dataset from a microbiome-wide association study of sex effects (McDonald et al., 2018), which is available at https://github.com/knightlab-analyses/american-gut-analyses. The aim of the study is to identify bacteria whose gut-microbiome abundance differs between males and females, based on sequencing of the 16S rRNA fingerprint gene. This dataset is also analyzed in Zhang and Chen (2022). We follow their preprocessing procedure to obtain 2492 p-values from the Wilcoxon rank sum test for different operational taxonomic units (OTUs), and the percentages of zeros across samples for the OTUs are treated as the univariate side information. Because directly estimating the non-null distributions of the original Wilcoxon rank sum test statistics is difficult, we construct pseudo z-values by applying the inverse of the standard normal cdf to randomized transforms of the p-values, where the randomization variables are independent. We then run ρ-BH.DD on these pseudo z-values, employing the same estimation methods as described in Section 4. When implementing CAMT, we use the spline basis with six equiquantile knots as the covariates, as recommended in Zhang and Chen (2022). The results are summarized in Figure 4 (a). We can see that ρ-BH.DD rejects significantly more hypotheses than LAWS and BH across all FDR levels. ρ-BH.DD also rejects slightly more tests than Clfdr at most FDR levels, and is more stable than CAMT. Because Clfdr may suffer from FDR inflation, as shown in the simulations, we conclude that ρ-BH.DD enjoys the best performance on this dataset.
5.2 ADHD data
We next analyze preprocessed magnetic resonance imaging (MRI) data from a study of attention deficit hyperactivity disorder (ADHD). The dataset is available at http://neurobureau.projects.nitrc.org/ADHD200/Data.html. We adopt the Gaussian filter-blurred, skull-stripped gray matter probability images from the Athena pipeline. We pool the 776 training samples and 197 testing samples together, remove 26 samples with no ADHD index, and split the pooled data into one ADHD sample of size 585 and one normal sample of size 362. We then downsize the resolution of the images by taking the means of pixels within blocks and obtain two-sample t-test statistics. A similar data preprocessing strategy is used in Cai et al. (2022). In this dataset, the 3-dimensional coordinate indices can be employed as the side information. The results of the five methods are summarized in Figure 4 (b). Again, ρ-BH.DD clearly rejects more hypotheses than CAMT, LAWS, Clfdr and BH across all FDR levels.

6 Conclusion and Discussions
This article introduces a novel multiple testing framework based on the newly proposed ρ-values. The strengths of this framework include its ability to unify the existing p-value and Lfdr based procedures and to achieve optimal power under much less stringent conditions than the Lfdr methods. Moreover, it can be extended to incorporate side information through weighting, and the asymptotic optimality can again be attained.
As a final message, based on our proposal, we briefly discuss in this section that the frameworks provided by p-values and Lfdr statistics are not as different as claimed in the literature. Note that a central message of Storey et al. (2007), Sun and Cai (2007) and Leung and Sun (2021) is that reducing z-values to p-values may lead to substantial information loss. This narrative seems to view the Lfdr and the p-value as two fundamentally different statistics. However, we have shown in Section 3.1 that the ρ-value is a special case of the p-value, while at the same time the ρ-value can produce the same ranking as the Lfdr. Therefore, a more accurate paraphrasing of the message in Storey et al. (2007) and Sun and Cai (2007) would be that "statistics that take into account the information from alternative distributions are superior to statistics that do not."
To be more concrete, we show below that an Lfdr based procedure proposed in Leung and Sun (2021) is actually a special variation of Algorithm 1 under Model (1). Suppose f_0 and f_1 are known but π is not. As mentioned in Section 3.3, a natural choice for estimating π is the Storey estimator. Note that the Storey estimator requires a predetermined tuning parameter τ. Replacing π with its Storey estimate changes the threshold in Step 3 of the ρ-BH procedure accordingly. In a special case where we allow τ to vary across hypotheses and choose it appropriately, and after a suitable adjustment of the numerator, the resulting decision rule is equivalent to the rule given by the ZAP procedure (Leung and Sun, 2021), which is based on the Lfdr. Hence, the ZAP procedure can be viewed as a special case of the ρ-BH procedure under Model (1), and can be unified into the proposed ρ-BH framework.
Appendix A Data-Driven Weighted ρ-BH Procedure
The data-driven version of the weighted ρ-BH procedure is described in Algorithm 6.
Input: ; predetermined weights ; desired FDR level .
The next theorem provides the theoretical guarantee for the asymptotic mFDR control of Algorithm 6.
Theorem 9.
Assume that are independent. Denote by , and the weighted -values, selected thresholds and the estimated alternative proportions obtained from Algorithm 6, for the first and second halves of the data respectively, . Denote by the null distribution function for . Suppose and let and , . Based on the notations from Algorithm 6, suppose the following hold
-
15.
, , for some constants ;
-
16.
, ;
-
17.
, is strictly increasing in , for some constants .
Then we have
Appendix B Proofs of Main Theorems and Propositions
Note that Theorems 1 and 4 follow directly from the proof of the original BH procedure as discussed in the main text. Theorem 2 is a special case of Theorem 7. Theorems 3 and 9 are special cases of Theorem 8. Theorem 5 is a special case of Theorem 6. Hence, we focus on the proofs of Proposition 1, Theorem 6, Theorem 7 and Theorem 8 in this section.
To simplify notation, we define a procedure equivalent to Algorithm 4. This equivalence is stated in Lemma 1, whose proof will be given later.
Input: ; ; predetermined density functions ; non-null proportions ; predetermined ; desired FDR level .
Lemma 1.
B.1 Proof of Proposition 1
Denote by . Since follows where , we have that
Let and , then . Since under , it follows that . Hence, , and the proposition follows. ∎
B.2 Proof of Theorem 6
Proof.
By Lemma 1, we only need to prove the mFDR control for Algorithm 7. Assumption 8 ensures that is well defined when . Note that, by Assumption 8 and standard Chernoff bound for independent Bernoulli random variables, we have uniformly for and any
which implies
(6) |
as , where
Assumption 9 implies . Moreover, combining Equation (6) with Assumption 10, we have for any with probability going to . Thus, we only have to consider . Specifically, we consider . As is strictly increasing within this range by Assumption 10, we have
Therefore, we have
(7) |
∎
B.3 Proof of Theorem 7
We first state a useful lemma whose proof will be given later.
Lemma 2.
Let , , . For any , let
Suppose Assumption 11 holds. Then we have
-
1.
;
-
2.
is strictly increasing;
-
3.
, for any testing rule based on and such that , where .
Next we prove Theorem 7.
Proof.
By Lemma 1, Algorithm 4 is equivalent to reject all hypotheses that satisfying , where is the threshold defined in Algorithm 7. To simplify notations, let and we next show that in probability.
By the standard Chernoff bound for independent Bernoulli random variables, we have
for all . By Assumption 11, the above implies
(8) |
Combining Equation (8) and the first part of Lemma 2, we have
(9) |
which implies . Therefore, we will only focus on the event in the following proof.
For any , we let
and
Following the proof of the third part of Lemma 2, we consider two cases: and .
The first case is trivial by noting that mFDR can be controlled even if we reject all null hypotheses. For the second case, we need to show that . Similar to the proof of Equation (8), we have uniformly for and any ,
which implies uniformly in . Thus, . Moreover, by Lemma 2, we know that is continuous and strictly increasing. Therefore, we can define the inverse function of . Thus, by the continuous mapping theorem, we have
By the third part of Lemma 2, we have and therefore,
∎
B.4 Proof of Theorem 8
We first introduce Lemma 3, whose proof will be given later.
Lemma 3.
Next we prove Theorem 8.
Appendix C Proofs of Lemmas
C.1 Proof of Lemma 1
Proof.
It is easy to see that as
Now it suffices to show that, for any , we have
(12) |
By the definition of , for any , we have
Then for any , for any where , we have
This proves Equation (12) and concludes the proof.
∎
C.2 Proof of Lemma 2
Proof.
First of all, by Assumption 11, we have that is well defined for . For any such that , we set for simplicity and it will not affect the results. We can rewrite as
(13) |
For the first part of this lemma, note that
The equality holds if and only if . Therefore, by Equation (13), we have
(14) |
Denote by . By Equation (14), we immediately have . Therefore, we only consider in the following proof.
For the second part, let , and . From the first part, we learn that . Therefore,
For the third part, note that here is continuous and increasing when . We consider two cases: and .
The first case is trivial since it implies and . The procedure rejects all hypotheses and is obviously most powerful. For the second case, we have . Combining this with the fact that , we can always find a unique such that . Note that, by
we have
which implies
(15) |
Note that, by the law of total expectation as in Equation (13), we have
(16) |
Hence, it suffices to show By Equation (15), it suffices to show that there exists some such that for every , i.e.,
(17) |
By the first part of this lemma, we have and thus . Let , then for each :
-
1.
If , we have and . Therefore, .
-
2.
If , we have and . Therefore, .
This proves Equation (17) and concludes the proof. ∎
C.3 Proof of Lemma 3
Proof.
For , we let
As by the first part of Assumption 12 and Lemma 1, we only consider in the following proof. The second part of Assumption 12 implies when for , which makes , , well defined when .
Note that, by Assumption 12 and the standard Chernoff bound for independent Bernoulli random variables, we have uniformly for and any
which implies
(18) |
as . On the other hand, we have uniformly for ,
where the first is with regard to by the uniformly conservative consistency of , and the term in the denominator comes from the first part of Assumption 12. As the data splitting strategy ensures , we obtain the second with regard to . Thus, we have
(19) |
in probability as .
References
- Aubert et al., (2004) Aubert, J., Bar-Hen, A., Daudin, J.-J., and Robin, S. (2004). Determination of the differentially expressed genes in microarray experiments using local FDR. BMC Bioinformatics, 5(1):1–9.
- Basu et al., (2018) Basu, P., Cai, T. T., Das, K., and Sun, W. (2018). Weighted false discovery rate control in large-scale multiple testing. Journal of the American Statistical Association, 113(523):1172–1183.
- Benjamini and Hochberg, (1995) Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300.
- Benjamini and Hochberg, (1997) Benjamini, Y. and Hochberg, Y. (1997). Multiple hypotheses testing with weights. Scandinavian Journal of Statistics, 24(3):407–418.
- Benjamini and Yekutieli, (2001) Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29(4):1165–1188.
- Cai et al., (2019) Cai, T. T., Sun, W., and Wang, W. (2019). Covariate-assisted ranking and screening for large-scale two-sample inference. Journal of the Royal Statistical Society Series B: Statistical Methodology, 81(2):187–234.
- Cai et al., (2022) Cai, T. T., Sun, W., and Xia, Y. (2022). LAWS: A locally adaptive weighting and screening approach to spatial multiple testing. Journal of the American Statistical Association, 117(539):1370–1383.
- Cao et al., (2022) Cao, H., Chen, J., and Zhang, X. (2022). Optimal false discovery rate control for large scale multiple testing with auxiliary information. The Annals of Statistics, 50(2):807–857.
- Chen, (2019) Chen, X. (2019). Uniformly consistently estimating the proportion of false null hypotheses via Lebesgue–Stieltjes integral equations. Journal of Multivariate Analysis, 173:724–744.
- Dobriban et al., (2015) Dobriban, E., Fortney, K., Kim, S. K., and Owen, A. B. (2015). Optimal multiple testing under a Gaussian prior on the effect sizes. Biometrika, 102(4):753–766.
- Efron, (2003) Efron, B. (2003). Robbins, empirical Bayes and microarrays. The Annals of Statistics, 31(2):366–378.
- Efron, (2004) Efron, B. (2004). Large-scale simultaneous hypothesis testing: the choice of a null hypothesis. Journal of the American Statistical Association, 99(465):96–104.
- Efron and Tibshirani, (2007) Efron, B. and Tibshirani, R. (2007). On testing the significance of sets of genes. The Annals of Applied Statistics, 1(1):107–129.
- Efron et al., (2001) Efron, B., Tibshirani, R., Storey, J. D., and Tusher, V. (2001). Empirical Bayes analysis of a microarray experiment. Journal of the American Statistical Association, 96(456):1151–1160.
- Fu et al., (2022) Fu, L., Gang, B., James, G. M., and Sun, W. (2022). Heteroscedasticity-adjusted ranking and thresholding for large-scale multiple testing. Journal of the American Statistical Association, 117(538):1028–1040.
- Gang et al., (2023) Gang, B., Sun, W., and Wang, W. (2023). Structure–adaptive sequential testing for online false discovery rate control. Journal of the American Statistical Association, 118(541):732–745.
- Genovese and Wasserman, (2002) Genovese, C. and Wasserman, L. (2002). Operating characteristics and extensions of the false discovery rate procedure. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(3):499–517.
- Genovese et al., (2006) Genovese, C. R., Roeder, K., and Wasserman, L. (2006). False discovery control with p-value weighting. Biometrika, 93(3):509–524.
- Ignatiadis and Huber, (2021) Ignatiadis, N. and Huber, W. (2021). Covariate powered cross-weighted multiple testing. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(4):720–751.
- Jin and Cai, (2007) Jin, J. and Cai, T. T. (2007). Estimating the null and the proportion of nonnull effects in large-scale multiple comparisons. Journal of the American Statistical Association, 102(478):495–506.
- Lei and Fithian, (2018) Lei, L. and Fithian, W. (2018). Adapt: an interactive procedure for multiple testing with side information. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(4):649–679.
- Leung and Sun, (2021) Leung, D. and Sun, W. (2021). ZAP: z-value adaptive procedures for false discovery rate control with side information. arXiv preprint arXiv:2108.12623.
- Li and Barber, (2019) Li, A. and Barber, R. F. (2019). Multiple testing with the structure-adaptive benjamini–hochberg algorithm. Journal of the Royal Statistical Society Series B: Statistical Methodology, 81(1):45–74.
- Liang et al., (2023) Liang, Z., Cai, T. T., Sun, W., and Xia, Y. (2023). Locally adaptive algorithms for multiple testing with network structure, with application to genome-wide association studies. arXiv preprint arXiv:2203.11461.
- Ma et al., (2022) Ma, L., Xia, Y., and Li, L. (2022). Napa: Neighborhood-assisted and posterior-adjusted two-sample inference. arXiv preprint arXiv:2201.10043.
- McDonald et al., (2018) McDonald, D., Hyde, E., Debelius, J. W., Morton, J. T., Gonzalez, A., Ackermann, G., Aksenov, A. A., Behsaz, B., Brennan, C., Chen, Y., et al. (2018). American gut: an open platform for citizen science microbiome research. mSystems, 3(3).
- Meinshausen and Rice, (2006) Meinshausen, N. and Rice, J. (2006). Estimating the proportion of false null hypotheses among a large number of independently tested hypotheses. The Annals of Statistics, 34(1):373–393.
- Roquain and Van De Wiel, (2009) Roquain, E. and Van De Wiel, M. A. (2009). Optimal weighting for false discovery rate control. Electronic Journal of Statistics, 3:678–711.
- Storey, (2002) Storey, J. D. (2002). A direct approach to false discovery rates. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(3):479–498.
- Storey et al., (2007) Storey, J. D., Dai, J. Y., and Leek, J. T. (2007). The optimal discovery procedure for large-scale significance testing, with applications to comparative microarray experiments. Biostatistics, 8(2):414–432.
- Sun and Cai, (2007) Sun, W. and Cai, T. T. (2007). Oracle and adaptive compound decision rules for false discovery rate control. Journal of the American Statistical Association, 102(479):901–912.
- Vovk and Wang, (2021) Vovk, V. and Wang, R. (2021). E-values: Calibration, combination and applications. The Annals of Statistics, 49(3):1736–1754.
- Wang and Ramdas, (2022) Wang, R. and Ramdas, A. (2022). False discovery rate control with e-values. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(3):822–852.
- Xie et al., (2011) Xie, J., Cai, T. T., Maris, J., and Li, H. (2011). Optimal false discovery rate control for dependent data. Statistics and Its Interface, 4(4):417.
- Zhang and Chen, (2022) Zhang, X. and Chen, J. (2022). Covariate adaptive false discovery rate control with applications to omics-wide multiple testing. Journal of the American Statistical Association, 117(537):411–427.