On the Consistency of Optimal Bayesian Feature Selection in the Presence of Correlations
Abstract
Optimal Bayesian feature selection (OBFS) is a multivariate supervised screening method designed from the ground up for biomarker discovery. In this work, we prove that Gaussian OBFS is strongly consistent under mild conditions, and provide rates of convergence for key posteriors in the framework. These results are of enormous importance, since they identify precisely what features are selected by OBFS asymptotically, characterize the relative rates of convergence for posteriors on different types of features, provide conditions that guarantee convergence, justify the use of OBFS when its internal assumptions are invalid, and set the stage for understanding the asymptotic behavior of other algorithms based on the OBFS framework.
1 Introduction
Biomarker discovery aims to identify biological markers (genes or gene products) that lead to improved diagnostic or prognostic tests, better treatment recommendations, or advances in our understanding of the disease or biological condition of interest (Ilyin et al., 2004; Rifai et al., 2006; Ramachandran et al., 2008). Reliable and reproducible biomarker discovery has proven to be difficult (Diamandis, 2010); while one reason is that preliminary small-sample datasets are inherently difficult to analyze, another factor is that most popular algorithms and methods employed in bioinformatics have inherent limitations that make them unsuitable for many discovery applications.
Consider univariate filter methods like t-tests, which are perhaps the most ubiquitous throughout the bioinformatics literature. Since they focus on only one feature at a time, filter methods cannot take advantage of potential correlations between markers. In particular, they cannot identify pairs (or sets) of markers with tell-tale behaviors only when observed in tandem. Multivariate methods, on the other hand, can account for correlations, but the vast majority are wrapper or embedded feature selection algorithms designed to aid in classification or regression model reduction. Model construction is not a reasonable goal in most small-scale exploratory studies, particularly in biology where the expected number of variables is large and the nature of their interactions can be highly complex and context dependent. Furthermore, feature selection methods designed for model reduction intrinsically penalize redundant features and reward smaller feature sets (Sima and Dougherty, 2008; Awada et al., 2012; Ang et al., 2016; Li et al., 2017). This can be counterproductive in biomarker discovery; so much so that filter methods often far outperform multivariate methods (Sima and Dougherty, 2006, 2008; Foroughi pour and Dalton, 2018b).
Optimal Bayesian feature selection (OBFS) is a supervised multivariate screening method designed to address these problems (Dalton, 2013; Foroughi pour and Dalton, 2019). The OBFS modeling framework, discussed in detail in Section 2, assumes that features can be partitioned into two types of independent blocks: blocks of correlated “good” features that have distinct joint distributions between two classes, and blocks of correlated “bad” features that have identical joint distributions in all classes (Foroughi pour and Dalton, 2018a). Given distributional assumptions on each block, and priors on the distribution parameters, OBFS can be implemented using one of many selection criteria, for example the maximum number correct (MNC) criterion, which maximizes the posterior expected number of features correctly identified as belonging to good versus bad blocks, or the constrained MNC (CMNC) criterion, which additionally constrains the number of selected features. In this way, OBFS aims to optimally identify and rank the set of all features with distinct distributions between the classes, which represents the predicted set of biomarkers, while accounting for correlations.
Optimal Bayesian feature filtering (OBF) is a special case of OBFS that assumes that features are independent (blocks of size one) and the parameters governing each feature are independent (Foroughi pour and Dalton, 2015). Gaussian OBF (which assumes independent Gaussian features with conjugate priors, henceforth referred to as simply OBF) has very low computational cost, robust performance when its modeling assumptions are violated, and particularly excels at identifying strong markers with low correlations (Foroughi pour and Dalton, 2017a, 2018a). Furthermore, it has been shown that OBF is strongly consistent under very mild conditions, including cases where the features are dependent and non-Gaussian (Foroughi pour and Dalton, 2019). In particular, OBF, in the long run, selects precisely the set of features with distinct means or variances between the classes. The consistency theorem in (Foroughi pour and Dalton, 2019) formalizes our intuition that OBF cannot take advantage of correlations; although OBF identifies features that are individually discriminating, it has no capacity to identify features that only discriminate when grouped together, or features that are merely correlated with (or linked through a chain of correlation with) other discriminating features.
In bioinformatics, features that only discriminate when grouped together and features that are correlated with other strong markers are of tremendous interest, since they may represent important actors in the biological condition under study. While multivariate methods have the potential to discover such features, as discussed above, methods focused on model reduction tend to do so unreliably and inconsistently. Rather than involve classification or regression models, OBFS searches for intrinsic differences between features. Like most multivariate methods, OBFS is computationally expensive (except in certain special cases, like OBF). If an exact solution is desired, even Gaussian OBFS (which assumes independent blocks of Gaussian features with conjugate priors, henceforth referred to as simply OBFS) is currently only tractable in small-scale problems with up to approximately ten features. This has led to the development of a number of promising computationally efficient heuristic algorithms based on OBFS theory, for example 2MNC-Robust (Foroughi pour and Dalton, 2014), POFAC, REMAIN, and SPM (Foroughi pour and Dalton, 2018a).
Although OBFS and heuristic algorithms based on OBFS have demonstrated better performance than other multivariate methods in biomarker discovery, it is currently unknown precisely what features they select in the long run. Our main contribution is a theorem presented in Section 3, analogous to the consistency theorem presented in (Foroughi pour and Dalton, 2019) for OBF, that shows that OBFS is strongly consistent under mild conditions. The consistency proof for OBFS utilizes nine lemmas and is significantly more complex than that of OBF. This theorem identifies precisely what features are selected by OBFS asymptotically, provides conditions that guarantee convergence, justifies the use of OBFS in non-design settings where its internal assumptions are invalid, and characterizes rates of convergence for key posteriors in the framework, including marginal posteriors on different types of features.
The asymptotic behavior of optimal feature selection provides a frame of reference to understand the performance of heuristic algorithms based on OBFS; for instance, we may compare the sets of features selected asymptotically, the conditions required to guarantee convergence, and rates of convergence. Furthermore, while numerous works emphasize the need to account for gene interactions and to detect families of interacting biomarkers [e.g., see Han et al. (2017), Xi et al. (2018) and Fang et al. (2019)], typically the focus is on simple statistics that measure pairwise interactions in the data, rather than on establishing intrinsic characteristics of complete marker families, or on quantifying the performance and properties of selection algorithms designed to identify marker families. Thus, this work is also important because it proposes a formal definition for marker families (correlated blocks of features), and shows that these marker families are identifiable (via OBFS).
2 Gaussian Optimal Bayesian Feature Selection
We begin with an overview of the Gaussian modeling framework presented in Foroughi pour and Dalton (2018a), and a derivation of the corresponding optimal selection rule, OBFS. Let be the set of feature indices, each corresponding to a real-valued feature. We call feature indices we wish to select “true good features,” denoted by , and we call feature indices we do not wish to select “true bad features,” denoted by . In this model, good features assign (w.p. 1 over the prior) distinct distributions between two classes, labeled and , while bad features assign the same distribution across the whole sample. We further assume that and can each be partitioned into sub-blocks of interacting features. We call the set of all sub-blocks a “true feature partition,” denoted by . If and are non-empty, we write , where and are positive integers, and the sets of all ’s and ’s are disjoint such that , , and all ’s and ’s are non-empty. If is empty, then we still denote this way, but also define , and define sums and products from to to be and , respectively. We define similar conventions when is empty.
In the Bayesian model, , and are random sets, and we denote instantiations of these random sets by , , and , respectively. We define a prior on the true feature partition, , which induces a prior on the true good features, .
Given a true feature partition , define for each class and each good block index , and for each bad block index . Let denote the collection of all ’s and ’s. Assume the ’s and ’s are mutually independent, i.e., we have a prior of the form
(2.1)
For good block , assume for each class , where is a length column vector, is an matrix, and denotes the cardinality of set . We assume is normal-inverse-Wishart:
(2.2)
(2.3)
where is the determinant, is the trace, and is the transpose of matrix . For a proper prior, , is a symmetric positive-definite matrix, , is an vector,
(2.4)
(2.5)
and denotes the multivariate gamma function, where is a positive integer. Likewise, for bad block assume , and that is normal-inverse-Wishart with hyperparameters , , , and , and scaling constants and .
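Since each block prior is normal-inverse-Wishart, the following standard parameterization may help fix ideas; the symbols $\nu$, $\kappa$, $\mathbf{m}$, and $\mathbf{S}$ below are generic stand-ins and are only assumed to correspond to the hyperparameters in (2.2)–(2.5):
\[
\boldsymbol{\mu} \mid \boldsymbol{\Sigma} \sim \mathcal{N}\!\left(\mathbf{m}, \tfrac{1}{\nu}\,\boldsymbol{\Sigma}\right),
\qquad
\boldsymbol{\Sigma} \sim \mathrm{Inverse\text{-}Wishart}(\kappa, \mathbf{S}),
\]
so that the joint prior density on $(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ for a block of size $d$ is proportional to
\[
|\boldsymbol{\Sigma}|^{-(\kappa + d + 2)/2}
\exp\!\left\{ -\tfrac{1}{2}\operatorname{tr}\!\left(\mathbf{S}\,\boldsymbol{\Sigma}^{-1}\right)
- \tfrac{\nu}{2}\,(\boldsymbol{\mu} - \mathbf{m})^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu} - \mathbf{m}) \right\}.
\]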
Let be a sample composed of labeled points with points in class , where labels are determined by a process independent from and (for instance, using random sampling or separate sampling). Given , , and the labels, assume all sample points are mutually independent, and features in different blocks are also independent. Assume that features in block , , are jointly Gaussian with mean and covariance matrix under class , and that features in block , , are jointly Gaussian with mean and covariance across the whole sample.
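As an illustration of this sampling model (not part of the original derivation), the following sketch draws labeled data with the assumed block structure; the block memberships, means, and covariances below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical block structure: features 0-2 form one good block whose
# distribution depends on the class label, and features 3-4 form one bad
# block with a single class-independent distribution. Blocks are drawn
# independently of each other, matching the model's independence assumption.
good_mean = {0: np.zeros(3), 1: np.array([1.0, 0.5, 0.0])}
good_cov = {
    0: np.eye(3),
    1: np.array([[2.0, 0.5, 0.0],
                 [0.5, 1.0, 0.3],
                 [0.0, 0.3, 1.0]]),
}
bad_mean = np.zeros(2)
bad_cov = np.array([[1.0, 0.7],
                    [0.7, 1.0]])

def sample_point(label):
    # Good block: class-conditional Gaussian; bad block: same Gaussian for both classes.
    g = rng.multivariate_normal(good_mean[label], good_cov[label])
    b = rng.multivariate_normal(bad_mean, bad_cov)
    return np.concatenate([g, b])

labels = rng.integers(0, 2, size=100)  # labels generated independently of the features
sample = np.stack([sample_point(int(y)) for y in labels])
```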
The posterior on over the set of all valid feature partitions is given by the normalized product of the prior and likelihood function. It can be shown that , where is normalized to have unit sum, and
(2.6)
(2.7)
For each good block , , and class ,
(2.8)
(2.9)
(2.10)
(2.11)
(2.12)
and and are the sample mean and unbiased sample covariance of features in under class . Similarly, for each bad block , we find , , , and using (2.8) through (2.12) with all subscript ’s removed, where and are the sample mean and unbiased sample covariance of features in across the whole sample.
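For reference, with this conjugate model the block-level posterior hyperparameters take the standard normal-inverse-Wishart update form; the generic symbols below ($n_y$ for the class-$y$ sample size, $\bar{\mathbf{x}}$ and $\widehat{\boldsymbol{\Sigma}}$ for the sample mean and unbiased sample covariance of the block, and $\nu$, $\kappa$, $\mathbf{m}$, $\mathbf{S}$ for the prior hyperparameters) are only assumed to correspond to the quantities in (2.8) through (2.12):
\[
\nu^{*} = \nu + n_y,
\qquad
\kappa^{*} = \kappa + n_y,
\qquad
\mathbf{m}^{*} = \frac{\nu\,\mathbf{m} + n_y\,\bar{\mathbf{x}}}{\nu + n_y},
\]
\[
\mathbf{S}^{*} = \mathbf{S} + (n_y - 1)\,\widehat{\boldsymbol{\Sigma}}
+ \frac{\nu\, n_y}{\nu + n_y}\,(\bar{\mathbf{x}} - \mathbf{m})(\bar{\mathbf{x}} - \mathbf{m})^{\mathsf{T}}.
\]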
The posterior on induces a posterior on over all subsets ,
(2.13)
as well as posterior probabilities that each individual feature is in ,
(2.14)
Note and are distinct.
The objective of OBFS is to identify the set of true good features, . We will consider two objective criteria: MNC and CMNC. MNC maximizes the expected number of correctly identified features; that is, MNC outputs the set with complement that maximizes , which is given by (Foroughi pour and Dalton, 2014). CMNC maximizes the expected number of correctly identified features under the constraint of selecting a specified number of features, , and is found by picking the features with largest (Foroughi pour and Dalton, 2017b). Both MNC and CMNC require computing for all , which generally requires computing for all valid feature partitions , and is generally intractable unless is small.
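To make the two criteria concrete, the following sketch (with a hypothetical posterior_prob array standing in for the per-feature posteriors in (2.14), and assuming the usual 1/2-thresholding form of the MNC solution) implements the two selection rules:

```python
import numpy as np

def mnc_select(posterior_prob):
    # MNC: selecting every feature whose posterior probability of being good
    # exceeds 1/2 maximizes the posterior expected number of correct calls.
    return np.flatnonzero(posterior_prob > 0.5)

def cmnc_select(posterior_prob, num_selected):
    # CMNC: select the num_selected features with the largest posterior
    # probability of being good.
    order = np.argsort(posterior_prob)[::-1]
    return np.sort(order[:num_selected])

posterior_prob = np.array([0.95, 0.10, 0.62, 0.48])  # hypothetical per-feature posteriors
print(mnc_select(posterior_prob))      # [0 2]
print(cmnc_select(posterior_prob, 3))  # [0 2 3]
```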
Under proper priors, Gaussian OBFS takes the following modeling parameters as input: (1) for each valid feature partition, , (2) , , , and symmetric positive-definite for all and all possible good blocks in valid , and (3) , , , and symmetric positive-definite for all possible bad blocks in valid . If CMNC is used, it also takes as input. When is improper, the above derivations are invalid, but can still be taken as a definition and computed as long as: (1) is proper, (2) , , and is symmetric positive-definite, (3) , , and is symmetric positive-definite, and (4) and are no longer given by (2.4) and (2.5), and similarly and are no longer given by analogous equations; instead these are positive input parameters specified by the user.
Gaussian OBF is a special case where assumes that all blocks and are of size one, and the events are mutually independent. For each , OBF takes as input (1) marginal priors , (2) scalars , , and for all , (3) scalars , , and , and (4) if improper priors are used. OBF also takes as input if CMNC is used. Under OBF, it can be shown that , defined in (2.14), simplifies to , where
(2.15)
, and are defined in (2.9), (2.10) and (2.11), respectively, and , and are defined similarly. Rather than evaluate for all feature partitions, OBF under both MNC and CMNC reduces to simple filtering, where features are scored by the given in (2.15).
3 Consistency
Let be an arbitrary sampling distribution on an infinite sample, . Each sample point in consists of a binary label, , and a set of features corresponding to a set of feature indices, . For each , let denote a sample consisting of the first points in , let denote the number of points in class , and define .
The goal of feature selection is to identify a specific subset of features (say those with different distributions between two classes), which we denote by . Proving strong consistency for a feature selection algorithm thus amounts to showing that
(3.1) |
with probability 1 (w.p. 1) over the infinite sampling distribution, where is the sample size, is a sample of size , and is the output of the selection algorithm. Here, and are sets, and we define to mean that for all but a finite number of .
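In symbols, and using generic notation since the original symbols are not reproduced here, strong consistency of a selection rule $\hat{\mathcal{G}}$ relative to a target feature set $\mathcal{G}$ means
\[
P\!\left( \hat{\mathcal{G}}(S_n) = \mathcal{G} \ \text{for all but finitely many } n \right) = 1,
\]
where $S_n$ denotes the sample consisting of the first $n$ points of the infinite sample.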
OBFS and OBF fix the sample and model the set of features we wish to select and the sampling distribution as random. To study consistency, we reverse this: now the sampling distribution is fixed and the sample is random. We begin by reviewing a few definitions and a special case of the strong consistency theorem for OBF.
Definition 1 (Foroughi pour and Dalton, 2019).
is an independent unambiguous set of good features if the following hold, where and are the mean and variance of feature under class , respectively:
1. For each , and exist for all such that either or .
2. For each , and exist for all such that and .
Definition 2 (Foroughi pour and Dalton, 2019).
An infinite sample, , is a balanced sample if , , and, conditioned on the labels, sample points are independent with points belonging to the same class identically distributed.
Theorem 1 (Foroughi pour and Dalton, 2019).
Let be a fixed infinite sample and let be a fixed subset of . If , then and if , where and are the MNC and CMNC feature sets under , respectively.
Theorem 2 (Foroughi pour and Dalton, 2019).
Suppose the following are true:
(i) is the independent unambiguous set of good features.
(ii) For each feature not in , the fourth order moment across the whole sample exists.
(iii) is a balanced sample w.p. 1.
(iv) assumes a fixed Gaussian OBF model for all with for all .
Then for -almost all infinite samples.
By Theorems 1 and 2, Gaussian OBF under MNC and Gaussian OBF under CMNC with “correct” (the user knows in advance how many features to select) are strongly consistent if the conditions of Theorem 2 hold. Condition (i) uses Definition 1, and characterizes the features that OBF aims to select. Condition (iii) uses Definition 2, and characterizes requirements of the sample. Conditions (i)–(iii) impose very mild assumptions on the data generating process, which is not required to satisfy the Gaussian OBF modeling assumptions in Condition (iv).
Let us now turn to OBFS. Denote the true mean and covariance of features in feature set and class by and , respectively (the features need not be Gaussian). OBFS aims to select the smallest set of features, , with different means or covariances between the classes, where has the same means and covariances between the classes and is uncorrelated with . This is formalized in the following definition.
Definition 3.
is an unambiguous set of good features if the following hold:
1. and exist for all .
2. Either or .
3. and , where .
4. Each feature in is uncorrelated with each feature in in both classes.
5. Conditions 1-4 do not all hold for any strict subset .
Assuming all first and second order moments exist, an unambiguous set of good features always exists and is unique. To prove uniqueness, let and be arbitrary unambiguous sets of good features, and let . By condition 4, , has a block diagonal structure for each corresponding to the sets of features , , , and . Thus, condition 4 holds for . By condition 3, and for all of these blocks except . Thus, condition 3 holds for . If , then (since the means for each class are equal for ) . Alternatively, if , then (since and are uncorrelated for each class, and the covariances between each class are equal for ) . In either case, condition 2 holds for . Since and , by condition 5 we must have . We denote the unique unambiguous set of good features by , and its complement by , throughout.
Similar to the Bayesian model, define a feature partition to be an ordered set of the form , where the set of ’s and ’s partition . We call each a “good block” and each a “bad block”, keeping in mind that these terms are always relative to a specified arbitrary partition. Feature partitions with no good blocks or no bad blocks are permitted, with appropriate conventions for unions, sums, and products over the (empty set of) corresponding blocks. The following definitions formalize a non-Bayesian analog of the true feature partition.
Definition 4.
Let and be arbitrary feature partitions. is a mesh of if every block in is a subset of a block in . is a refinement of if every good block in is a subset of a good block in and every bad block in is a subset of a bad block in . is a strict refinement of if it is a refinement and .
Definition 5.
is an unambiguous feature partition if the following hold:
1. is an unambiguous set of good features.
2. Each feature in any arbitrary block is uncorrelated with each feature in any other block in both classes.
3. Conditions 1 and 2 do not hold for any feature partition that is a strict refinement of .
An unambiguous feature partition always exists and is unique, and we denote it by throughout. By definition, the unambiguous feature partition induces the unambiguous set of good features .
Our main result is given in the following theorem (Theorem 3), which provides sufficient conditions for the (almost sure) convergence of to a point mass at , thereby guaranteeing the (almost sure) convergence of to a point mass at . By Theorem 1, Gaussian OBFS under MNC and Gaussian OBFS under CMNC with “correct” are strongly consistent if the conditions of Theorem 3 hold, which are very mild. Condition (i) is based on Definition 3 and essentially guarantees that really represents the type of features OBFS aims to select (those with different means or covariances between the classes). Conditions (i) and (ii) ensure certain moments exist, and Condition (iii) ensures that all covariances are full rank. There are no other distributional requirements. Condition (iv) is based on Definition 2 and characterizes the sampling strategy; the proportion of points observed in any class must not converge to zero, and sample points should be independent. Condition (v) places constraints on the inputs to the OBFS algorithm; we must assign a non-zero probability to the true feature partition.
Finally, from the proof of Theorem 3 we have that, under the conditions stated in the theorem, w.p. there exist and such that
(3.2) |
for all and all that label any good features as bad or that put features that are correlated in separate blocks. Also, w.p. there exist such that
(3.3) |
for all and all . Fix . By (3.2), for each feature partition such that , w.p. there exist and such that for all . Therefore, w.p. ,
(3.4) |
for all , where and . Thus, w.p. ,
(3.5) |
for all . The marginal posterior of good features converges to at least exponentially w.p. . Now fix . By (3.3), w.p. there exist such that
(3.6) |
for all . Thus, w.p. ,
(3.7) |
for all . In other words, the marginal posterior of bad features converges to at least polynomially w.p. . This characterizes rates of convergence of the posterior on feature partitions, and the marginal posterior probabilities on individual features.
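Schematically, and with generic constants standing in for those appearing in (3.2) and (3.3), these rate statements say that w.p. 1 there exist $b, b', \varepsilon, \delta > 0$ (possibly sample dependent) such that, for all sufficiently large $n$,
\[
P\!\left(i \in \mathcal{G} \mid S_n\right) \;\geq\; 1 - b\,e^{-\varepsilon n}
\quad \text{for every good feature } i,
\qquad
P\!\left(j \in \mathcal{G} \mid S_n\right) \;\leq\; b'\,n^{-\delta}
\quad \text{for every bad feature } j,
\]
where $\mathcal{G}$ denotes the unambiguous set of good features and $S_n$ the first $n$ sample points.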
Throughout, “ and are asymptotically equivalent” and “ as ” mean that , denotes the th element of vector , denotes the th row, th column element of matrix , denotes the all-zero matrix, and the sample mean and unbiased sample covariance of features in block and class are denoted by and , respectively.
Theorem 3.
Suppose the following hold:
(i) is the unambiguous feature partition.
(ii) For all , the fourth order moment across both classes exists and is finite.
(iii) and are full rank.
(iv) is a balanced sample w.p. 1.
(v) assumes a fixed Gaussian OBFS model for all with .
Then for -almost all sequences.
Proof.
Since , it suffices to show that for all feature partitions ,
(3.8) |
Fix . If then (3.8) holds trivially. In the remainder of this proof, assume . It suffices to show:
(3.9) |
We prove this by constructing a sequence of partitions from to in six steps. In Step 1, blocks that are labeled bad in are split into subsets of bad blocks in and subsets of the unambiguous set of good features . In Steps 2 and 3, blocks that are labeled bad in the previous partition but that are actually subsets of are labeled good (and in Step 3 they are also merged with other good blocks). In Step 4, blocks that are labeled good in the previous partition are split into subsets of (good and bad) blocks in . In Step 5, blocks that are labeled good in the previous partition but that are actually subsets of bad blocks in are labeled bad. In Step 6, we merge blocks in the previous partition until we arrive at . An example of this sequence of partitions is illustrated in Fig. 1. Finally, Step 7 uses the sequence of partitions constructed in Steps 1-6 to show (3.9).
Figure 1: An example of the sequence of feature partitions constructed in Steps 1-6.
Step 1. Let , , be a sequence of partitions where (1) we let , (2) for each , we let be identical to except with one bad block partitioned into two bad blocks such that one of the smaller blocks is the intersection of the original block with a bad block in , and (3) we iterate until no more blocks can be split. The order we split blocks does not matter; at the end we always obtain the partition that is identical to except with each bad block split into smaller blocks that are each either a subset of a bad block in or a subset of the unambiguous set of good features . Suppose . Note is a strict refinement of . By Lemma 1, w.p. there exist (which may depend on the sample) such that for all . Furthermore,
(3.10) |
For each , by Lemma 3, w.p. there exist (which may depend on the sample) such that for all . By (3.10), w.p. there exist (namely from above, , and ) such that for all . Finally, since for dominates , w.p. .
Step 2. Construct a sequence of partitions starting from where, in each step, one bad block in the current partition that is (a) contained in , and (b) has either different means or different covariances between the classes, is labeled good in the next partition, and iterate until no more blocks can be re-labeled. The order we re-label blocks does not matter; we always obtain the same final partition, which we call . Suppose . By Lemma 1, w.p. there exists such that for all large enough. By Lemma 4, w.p. there exists such that for all large enough. Since for dominates , w.p. .
Step 3. Some bad blocks in may have equal means and covariances between the classes, but may still be contained in because they are correlated with, or connected through a chain of correlation with, features in that have either different means or different covariances between the classes. Here we construct a sequence of partitions to correct the label of these features. The construction goes as follows: starting from , in each step merge one bad block in the current partition that is contained in (which must have the same means and covariances between classes after Step 2) with one good block in the current partition that is correlated in at least one class (because it is correlated it must also be in ), label the resulting block as good in the next partition, and iterate until no more blocks can be merged.
First only blocks directly correlated with good blocks in the original partition can be moved, then only blocks directly correlated with good blocks in the original partition or the block just moved can be moved, etc. All bad blocks in that are actually in will eventually be moved because: (1) all features that have either different means or different variances have already been correctly labeled good in Step 2, and (2) the definition of guarantees that every feature in is connected to at least one feature in with either different means or different variances via at least one chain of pairwise-correlated features that are all in the same good block in . Thus, each bad block in the final partition, call it , is guaranteed to be a subset of a bad block in .
Suppose . By Lemma 1, w.p. there exists such that for all large enough. By Lemma 5, w.p. there exists such that for all large enough. Since for dominates , w.p. .
Step 4. Construct a sequence of partitions starting from where, in each step, one good block in the current partition is split into two good blocks such that one of the smaller blocks is the intersection of the original block with a (good or bad) block in , and iterate until no more blocks can be split. The order we split blocks does not matter; in the final partition, call it , each bad block is a subset of a bad block in and each good block is a subset of a good or bad block in . Suppose . Note that is a strict refinement of . By Lemma 1, w.p. there exists such that for all large enough. By Lemma 6, w.p. there exists such that for all large enough. Finally, since for dominates , w.p. .
Step 5. Construct a sequence of partitions starting from where, in each step, one good block in the current partition that is contained in a bad block in is labeled bad in the next partition, and iterate until no more blocks can be re-labeled. The order we re-label blocks does not matter; in the final partition, call it , each block is a subset of a block in of the same type. Suppose . Note that is a mesh of , every good block in is a good block in , and every bad block in is a block (good or bad) in . By Lemma 1, w.p. there exists such that for all large enough. By Lemma 7, w.p. there exists such that for all large enough. Since for dominates , w.p. .
Step 6. Construct a sequence of partitions from to where each partition is a strict refinement of and, in each step, two blocks of the same type (good or bad) in the current partition are merged into a larger block of the same type in the next partition that is contained in a block of the same type in . The order we merge blocks does not matter, as long as the pair of blocks merged in each step are correlated in at least one class. Suppose . By Lemma 1, w.p. there exists such that for all large enough. By Lemmas 8 and 9, w.p. there exists such that for all large enough. Since for dominates , w.p. .
Lemma 1.
Let and be arbitrary feature partitions, and let be a fixed sample such that and . Then there exists and such that for all . Further, this holds for if: (i) is a strict refinement of , or (ii) is a mesh of where , every good block in is a subset of a good block in , and every bad block in is a subset of a good or bad block in .
Proof.
For any feature partition , we may write
(3.12) |
where is a positive constant that does not depend on . We first focus on each block individually. and are asymptotically equivalent to , and and are asymptotically equivalent to . Further, for all positive integers ,
(3.13) |
and by Stirling’s formula,
(3.14) |
as . Thus, for all ,
(3.15)
(3.16)
as , where does not depend on . For any constants and ,
(3.17) |
as . Hence,
(3.18) |
as , where does not depend on . Similarly,
(3.19) |
as , where does not depend on . Applying (3.18) and (3.19) in (3.12), we have that as , where does not depend on , , , and we treat and as functions of . Applying this in the ratio, we have:
(3.20) |
as , where , and . We always have , and by our assumed bounds on the limit inferior and limit superior of we also have . Thus, for all ,
(3.21) |
Further,
(3.22) |
If is a strict refinement of , then . Also, if is a mesh of where , then every good block in is a subset of a good block in and every bad block in is a subset of a good or bad block in , thus . If , then (3.21) holds for all . ∎
Lemma 2.
Let be any non-empty feature set such that and are full rank. Suppose for all the fourth order moment across both classes exists and is finite. Let be a balanced sample. Then, w.p. there exist (which may depend on the sample) such that
(3.23)
(3.24)
(3.25)
(3.26)
(3.27)
(3.28)
(3.29)
(3.30)
all hold for all , and , where
(3.31) |
Proof.
Fix . Since is a balanced sample, as , and by the strong law of large numbers converges to and converges to w.p. for both . In the rest of this proof, we only consider the event where and converge.
Since is a balanced sample, there exist and such that for all and . By the law of the iterated logarithm (Hartman and Wintner, 1941), w.p. there exist and such that
(3.32) |
for all and (Foroughi pour and Dalton, 2019). This proves (3.23).
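For completeness, the Hartman–Wintner law of the iterated logarithm states that for i.i.d. random variables $X_1, X_2, \ldots$ with mean $\mu$ and finite variance $\sigma^2$,
\[
\limsup_{n \to \infty} \frac{\left| \sum_{i=1}^{n} (X_i - \mu) \right|}{\sqrt{2 n \ln \ln n}} = \sigma
\quad \text{almost surely},
\]
so the class-conditional sample means eventually deviate from their limits by at most order $\sqrt{\ln \ln n / n}$, which is the bound behind (3.23).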
We can decompose the sample covariance for class as follows:
(3.33) |
where is the th sample of feature in class . By (3.23), w.p. there exists and such that
(3.34) |
for all and . is a random variable with mean and finite variance . Suppose . By the law of the iterated logarithm, w.p.
(3.35) |
for all . Thus, w.p. there exists and such that
(3.36) |
for all and . A similar inequality holds when . Combining (3.33), (3.34) and (3.36), w.p. there exists such that
(3.37) |
for all and . Thus (3.25) holds. Since is a polynomial function of the , where each term has degree and coefficient , w.p. there exists such that
(3.38) |
for all . Dividing both sides by proves (3.26).
Note that,
(3.39) |
Further (3.23) implies that w.p. is bounded for all . Similarly, (3.25) implies that w.p. is bounded for all . Thus, from (3.39), w.p. there exists and such that
(3.40) |
for all and . Again noting that the determinant is a polynomial, w.p. there exists such that
(3.41) |
for all . By (3.38), is bounded for all w.p. . Thus, (3.24) holds. Applying the triangle inequality on and applying (3.25) and (3.40), we have that (3.27) also holds.
The sample covariance across all samples in both classes can be decomposed as
(3.42) |
Define
(3.43) |
Again since w.p. and are bounded for all , by the triangle inequality w.p. there exists such that
(3.44) |
for all and . Further, by the triangle inequality
(3.45) |
for all and w.p. , where
(3.46) |
, and . is constant for all . By (3.23), w.p. there exists such that for all , and . Thus, w.p. there exists such that for all and . Applying this fact and (3.25) to (3.44), w.p. there exists and such that
(3.47) |
for all and . Combining this fact with (3.45), w.p. there exists such that
(3.48) |
for all and . Again since the determinant is a polynomial, w.p. there exists such that
(3.49) |
for all . Observe from (3.31) that is lower bounded by
(3.50) |
and upper bounded by
(3.51) |
Since is a polynomial function of the , where each term has degree and coefficient , must also be upper and lower bounded when is large enough. (3.29) follows by dividing both sides of (3.49) by , and applying a bound on on the right hand side.
Observe that
(3.52) |
Since , (3.23) implies that w.p. is bounded for all . Similarly, (3.48) and the fact that is bounded for large enough implies that w.p. there exists such that is bounded for all . Thus, from (3.52), w.p. there exists and such that
(3.53) |
for all and . Again since the determinant is a polynomial, w.p. there exists such that
(3.54) |
for all . By (3.49) and the fact that is bounded for large enough, w.p. there exists such that is bounded for all . Thus, (3.28) holds. Applying the triangle inequality on and applying (3.48) and (3.53), we have that (3.30) also holds. ∎
Corollary 1.
Let be any non-empty feature set such that and are full rank. Suppose for all the fourth order moment across both classes exists and is finite. Let be a balanced sample. Then, w.p. the following all hold for all and , where is given in (3.31).
1. as .
2. as .
3. If and , then as .
4. If and , then as .
5. as .
6. as .
7. There exist and such that for all .
8. There exist such that for all .
Proof.
(1) follows from (3.27). (2) follows by combining (3.24) and (3.26). (5) follows from (3.30). (6) follows by combining (3.28) and (3.29). (3) and (4) are special cases of (5) and (6), where in this case is constant. Recall that in Lemma 2 we showed that and are lower and upper bounded when is large enough. By (3.30), must also be upper and lower bounded when is large enough, so (7) holds. Finally, since by item 6 and are asymptotically equivalent, must also be upper and lower bounded when is large enough, so (8) holds. ∎
Lemma 3.
Let and be any disjoint feature sets such that , , features in and are uncorrelated in both classes, and and are full rank, where . Suppose for all the fourth order moment across both classes exists and is finite. Let be a balanced sample. Then, w.p. there exist such that
(3.55) |
for all . The left hand side of (3.55) is when and are identical feature partitions except is a bad block in , and and are bad blocks in .
Proof.
We have that
(3.56) |
where
(3.57) |
By Corollary 1, the first term in (3.56) is upper bounded by a positive finite constant for large enough w.p. . Without loss of generality, assume features in are ordered such that
(3.58) |
for some matrix . Observe that
(3.59) |
where . Since , , and features in and are uncorrelated in both classes,
(3.60) |
where and are defined in (3.31). By Lemma 2, w.p. there exist such that for all , and ,
(3.61) |
Since is comprised of only quadratic terms in and w.p. (Corollary 1), w.p. there exist such that for all , and ,
(3.62) |
Further, since is a polynomial function of the elements of , where each term has a degree between and and a coefficient that is a polynomial function of the elements of of at most degree , and since is upper and lower bounded for large enough w.p. for all (Corollary 1), w.p. there exists such that for all ,
(3.63) |
Therefore,
(3.64) |
for w.p. , where the first line follows from (3.57) and (3.59) and the second line from (3.63). Further, by Corollary 1, w.p. there exist and such that for all , thus
(3.65) |
for all . The second line holds for large enough that is between and for all , so that . The last line follows from the fact that for all . From (3.56), the theorem holds with and , where is such that exceeds a bound on the first term in (3.56). ∎
Lemma 4.
Let be any feature set such that either or , and and are full rank. Let be a balanced sample. Then, w.p. there exist and such that
(3.66) |
for all . The left hand side of (3.66) is when and are identical feature partitions except is a bad block in and a good block in .
Proof.
We have that
(3.67) |
where
(3.68) |
By Corollary 1, the first term in (3.67) is upper bounded by a positive finite constant for large enough, and
(3.69) |
as , both w.p. . Suppose and . Then,
(3.70) |
The last line follows from the matrix determinant lemma (Harville, 1998). Since is balanced and is positive-definite, there exists such that for all large enough. The theorem holds for any . If , then
(3.71) |
Fan (1950) showed that for any symmetric positive-definite and and ,
(3.72) |
Although not mentioned by Fan, if then equality holds if and only if (by the weighted arithmetic-geometric-mean inequality). Suppose and fix . Then there exists such that for all . Applying this fact here, since is balanced there exists such that . The theorem holds for any . ∎
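The determinant inequality invoked at (3.72) appears to be the log-concavity of the determinant over the symmetric positive-definite cone: for symmetric positive-definite $\mathbf{A}$ and $\mathbf{B}$ and any $\lambda \in [0, 1]$,
\[
\det\!\left( \lambda \mathbf{A} + (1 - \lambda) \mathbf{B} \right)
\;\geq\; \det(\mathbf{A})^{\lambda} \, \det(\mathbf{B})^{1 - \lambda},
\]
with equality for $0 < \lambda < 1$ if and only if $\mathbf{A} = \mathbf{B}$, which matches the equality discussion above.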
Lemma 5.
Let and be any disjoint feature sets such that , , there exists at least one feature in and one feature in that are correlated in at least one class, and and are full rank, where . Let be a balanced sample. Then, w.p. there exist and such that
(3.73) |
for all . The left hand side of (3.73) is when and are identical feature partitions except is a good block in , is a bad block in , and is a good block in .
Proof.
We have that
(3.74) |
where
(3.75) |
By Corollary 1, the first term in (3.74) converges to a positive finite constant w.p. , and
(3.76) |
as , both w.p. . Without loss of generality, assume features in are ordered such that
(3.77) |
where is a matrix of correlations between features in and . Since
(3.78) |
we have that
(3.79) |
We always have , and thus . Since there exists at least one feature in and one feature in that are correlated in at least one class, in this class is non-zero, , and . Since is balanced, there exist such that and for large enough. Thus,
(3.80) |
for large enough, where . Note , where . The theorem holds for any . ∎
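The manipulation in (3.78) and (3.79) presumably rests on the standard block (Schur complement) determinant identity: for a covariance matrix partitioned over feature sets $X$ and $Y$,
\[
\det\!\begin{pmatrix} \boldsymbol{\Sigma}_{X} & \boldsymbol{\Sigma}_{XY} \\ \boldsymbol{\Sigma}_{XY}^{\mathsf{T}} & \boldsymbol{\Sigma}_{Y} \end{pmatrix}
= \det(\boldsymbol{\Sigma}_{Y}) \, \det\!\left( \boldsymbol{\Sigma}_{X} - \boldsymbol{\Sigma}_{XY} \boldsymbol{\Sigma}_{Y}^{-1} \boldsymbol{\Sigma}_{XY}^{\mathsf{T}} \right)
\;\leq\; \det(\boldsymbol{\Sigma}_{X}) \, \det(\boldsymbol{\Sigma}_{Y}),
\]
with equality if and only if $\boldsymbol{\Sigma}_{XY} = \mathbf{0}$, so any nonzero cross-correlation drives the ratio of determinants strictly below one.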
Lemma 6.
Let and be any disjoint feature sets such that features in and are uncorrelated in both classes, and and are full rank, where . Suppose for all the fourth order moment across both classes exists and is finite. Let be a balanced sample. Then, w.p. there exist such that
(3.81) |
for all . The left hand side of (3.81) is when and are identical feature partitions except is a good block in , and and are good blocks in .
Proof.
We have that
(3.82) |
where
(3.83) |
By Corollary 1, the first term in (3.82) converges to a positive finite constant w.p. . Fix . Without loss of generality, assume features in are ordered such that
(3.84) |
for some matrix . Observe that
(3.85) |
where . Since features in and are uncorrelated in both classes,
(3.86) |
By Lemma 2, w.p. there exist such that for all , , and we have
(3.87) |
Since is comprised of only quadratic terms in and w.p. (Corollary 1), w.p. there exist such that for all , , and
(3.88) |
Further, since is a polynomial function of the elements of , where each term has a degree between and and a coefficient that is a polynomial function of the elements of of at most degree , and since w.p. (Corollary 1), w.p. there exists such that for all and ,
(3.89) |
Therefore,
(3.90) |
for w.p. , where the first line follows from (3.83) and (3.85) and the second line from (3.89). Further, w.p. there exists such that for all and ,
(3.91) |
The first line holds as long as is large enough that for all and (this is possible since both converge). The second line holds as long as is large enough that is between and for all and , so that . The third line holds as long as is large enough that for all and and some (this is possible since is balanced). Finally, the last line follows from the fact that for all . From (3.82), the theorem holds with and , where is such that exceeds a bound on the first term in (3.82). ∎
Lemma 7.
Let be any feature set such that , , and is full rank. Suppose for all the fourth order moment across both classes exists and is finite. Let be a balanced sample. Then, w.p. there exist such that
(3.92) |
for all . The left hand side of (3.92) is when and are identical feature partitions except is a good block in and a bad block in .
Proof.
We have that
(3.93) |
where
(3.94) |
By Corollary 1, the first term in (3.93) converges to a positive finite constant w.p. . By Lemma 2, w.p. there exists such that for all and ,
(3.95)
(3.96)
Let be such that is between and for all , so that . Then for all ,
(3.97) |
w.p. , where the last line follows from the fact that for all . From (3.42),
(3.98) |
and for ,
(3.99)
(3.100)
where . By Lemma 2, w.p. there exists and such that
(3.101) |
for all , and . Since is balanced, there exist and such that for all and . Since as w.p. , there exists such that for all , , and w.p. . By Lemma 2, w.p. there exists and such that for all and . By the triangle inequality,
(3.102) |
for all and w.p. . In particular, there exists such that
(3.103) |
Observe that is a polynomial function of the elements of , , and , where each term has a degree of and a coefficient of . In particular,
(3.104) |
where is the Levi-Civita symbol, equal to if is an even permutation of , if it is an odd permutation, and otherwise, , , , , and
(3.105) |
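In terms of the Levi-Civita symbol just defined, the determinant of a $d \times d$ matrix $\mathbf{A} = [a_{ij}]$ expands as
\[
\det(\mathbf{A}) \;=\; \sum_{i_1, \ldots, i_d} \varepsilon_{i_1 \cdots i_d} \, a_{1 i_1} a_{2 i_2} \cdots a_{d i_d},
\]
which is the polynomial structure being exploited here: every term has degree $d$ in the matrix entries and coefficient $\pm 1$.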
From (3.101) and (3.103), w.p. there exists such that
(3.106) |
for all , where
Further, w.p. there exists such that
(3.107) |
for all , where
(3.108) |
, , and we have used the facts that and when . Similarly, w.p. there exists such that
(3.109) |
for all . Combining (3.106), (3.107) and (3.109), w.p. there exists such that
(3.110) |
for all . Thus, w.p. there exists such that
(3.111) |
for all , where is chosen based on the fact that w.p. (an application of Corollary 1 with hyperparameters , and in place of the hyperparameters used by the selection rule), and thus must be bounded for all . In addition, w.p. there exists and such that
(3.112) |
for , where in the second line we have applied the triangle inequality and the fact that w.p. , so is chosen such that for all , and the last line follows from Lemma 2. By Lemma S3 in Foroughi pour and Dalton (2019), there exists such that for all and ,
(3.113) |
where in the last inequality we use the fact that for all . Thus, w.p. there exists such that
(3.114) |
for all , where . Thus, from (3.97), w.p.
(3.115) |
for all , where the last line follows from the fact that for all . From (3.93), the theorem holds with and , where is such that exceeds times a bound on the first term in (3.93). ∎
Lemma 8.
Let and be any disjoint feature sets such that there exists at least one feature in and one feature in that are correlated in at least one class, and and are full rank, where . Let be a balanced sample. Then, w.p. there exist and such that
(3.116) |
for all . The left hand side of (3.116) is when and are strict refinements of the unambiguous feature partition that are identical except and are good blocks in and is a good block in .
Proof.
Lemma 9.
Let and be any disjoint feature sets such that , , there exists at least one feature in and one feature in that are correlated in at least one class, and is full rank, where . Let be a balanced sample. Then, w.p. there exist and such that
(3.119) |
for all . The left hand side of (3.119) is when and are strict refinements of the unambiguous feature partition that are identical except and are bad blocks in and is a bad block in .
Proof.
We have that
(3.120) |
where
(3.121) |
By Corollary 1, the first term in (3.120) converges to a positive finite constant w.p. , and
(3.122) |
as , both w.p. , where and . Without loss of generality, assume features in are ordered such that
(3.123) |
where is a matrix of correlations between features in and . Since
(3.124) |
we have that
(3.125) |
Since there exists at least one feature in and one feature in that are correlated, is non-zero, , and . The theorem holds for any . ∎
4 Conclusion
The consistency theory presented herein is important because it provides a richer understanding of the type of features selected by OBFS. Furthermore, we have characterized rates of convergence for the posterior on feature partitions and for the marginal posterior probabilities on individual features.
Although here we focus on identifying using the posterior , the OBFS framework can be used for optimal Bayesian partition selection, which aims to identify using the full posterior . Partition selection may be of interest, for instance, if one wishes to identify communities of closely interacting genes. Since Theorem 3 proves that converges to a point mass at , this theorem has direct implications on the consistency of optimal Bayesian partition selection as well.
The conditions in Theorem 3 are sufficient but not necessary. For example, it may be possible to relax Condition (iii) of Theorem 3. It is also possible for to converge to a point mass at , but for to not converge to a point mass at . For example, if non-zero correlations are present in the data generation process, then the OBF variant of OBFS sets , which means that Condition (v) of Theorem 3 does not hold. However, if the independent unambiguous set of good features given in Definition 1 and the unambiguous set of good features given in Definition 3 happen to be equal (the latter always contains the former), then OBF can be strongly consistent relative to the unambiguous set of good features.
OBFS searches for the unambiguous set of good features, which expands upon the independent unambiguous set of good features targeted by OBF. In particular, the unambiguous set of good features includes features that are only strongly discriminating when grouped together, features that are individually weak but strongly correlated with strong features, and features that are linked to discriminating features only through a chain of correlation. The unambiguous feature set is complete in the sense that any features that are not included must not have any first or second order distributional differences between the classes and must be uncorrelated with all features that are included, and minimal in the sense that it is the smallest feature set with this property. The unambiguous feature set is thus of great interest and importance in bioinformatics and many other application areas, although we are not aware of any selection algorithms besides OBFS that aim to identify this set.
This work is supported by the National Science Foundation (CCF-1422631 and CCF-1453563).
References
- Ang et al. (2016) Ang, J. C., Mirzal, A., Haron, H., and Hamed, H. N. A. (2016). “Supervised, unsupervised, and semi-supervised feature selection: A review on gene selection.” IEEE/ACM Transactions on Computational Biology and Bioinformatics, 13(5): 971–989.
- Awada et al. (2012) Awada, W., Khoshgoftaar, T. M., Dittman, D., Wald, R., and Napolitano, A. (2012). “A review of the stability of feature selection techniques for bioinformatics data.” In Proceedings of the 2012 IEEE 13th International Conference on Information Reuse and Integration (IRI), 356–363.
- Dalton (2013) Dalton, L. A. (2013). “Optimal Bayesian feature selection.” 65–68. IEEE.
- Diamandis (2010) Diamandis, E. P. (2010). “Cancer biomarkers: Can we turn recent failures into success?” Journal of the National Cancer Institute, 102(19): 1462–1467.
- Fan (1950) Fan, K. (1950). “On a theorem of Weyl concerning eigenvalues of linear transformations II.” Proceedings of the National Academy of Sciences, 36(1): 31–35.
- Fang et al. (2019) Fang, G., Wang, W., Paunic, V., Heydari, H., Costanzo, M., Liu, X., Liu, X., VanderSluis, B., Oately, B., Steinbach, M., et al. (2019). “Discovering genetic interactions bridging pathways in genome-wide association studies.” Nature communications, 10(1): 1–18.
- Foroughi pour and Dalton (2014) Foroughi pour, A. and Dalton, L. A. (2014). “Optimal Bayesian feature selection on high dimensional gene expression data.” In Proceedings of the 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 1402–1405.
- Foroughi pour and Dalton (2015) — (2015). “Optimal Bayesian feature filtering.” In Proceedings of the 6th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics (ACM-BCB), 651–652.
- Foroughi pour and Dalton (2017a) — (2017a). “Robust feature selection for block covariance Bayesian models.” In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2696–2700.
- Foroughi pour and Dalton (2017b) — (2017b). “Robust feature selection for block covariance Bayesian models.”
- Foroughi pour and Dalton (2018a) — (2018a). “Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.” BMC Bioinformatics, 19(Suppl 3): 70.
- Foroughi pour and Dalton (2018b) — (2018b). “Optimal Bayesian filtering for biomarker discovery: Performance and robustness.” IEEE/ACM Transactions on Computational Biology and Bioinformatics, early access.
- Foroughi pour and Dalton (2019) — (2019). “Theory of Optimal Bayesian Feature Filtering.” Bayesian Analysis, accepted.
- Han et al. (2017) Han, K., Jeng, E. E., Hess, G. T., Morgens, D. W., Li, A., and Bassik, M. C. (2017). “Synergistic drug combinations for cancer identified in a CRISPR screen for pairwise genetic interactions.” Nature biotechnology, 35(5): 463.
- Hartman and Wintner (1941) Hartman, P. and Wintner, A. (1941). “On the law of the iterated logarithm.” American Journal of Mathematics, 63(1): 169–176.
- Harville (1998) Harville, D. A. (1998). “Matrix algebra from a statistician’s perspective.” Technometrics, 40(2): 164–164.
- Ilyin et al. (2004) Ilyin, S. E., Belkowski, S. M., and Plata-Salamán, C. R. (2004). “Biomarker discovery and validation: Technologies and integrative approaches.” Trends in Biotechnology, 22(8): 411–416.
- Li et al. (2017) Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R. P., Tang, J., and Liu, H. (2017). “Feature selection: A data perspective.” ACM Computing Surveys, 50(6): 94.
- Ramachandran et al. (2008) Ramachandran, N., Srivastava, S., and LaBaer, J. (2008). “Applications of protein microarrays for biomarker discovery.” Proteomics - Clinical Applications, 2(10–11): 1444–1459.
- Rifai et al. (2006) Rifai, N., Gillette, M. A., and Carr, S. A. (2006). “Protein biomarker discovery and validation: The long and uncertain path to clinical utility.” Nature Biotechnology, 24(8): 971–983.
- Sima and Dougherty (2006) Sima, C. and Dougherty, E. R. (2006). “What should be expected from feature selection in small-sample settings.” Bioinformatics, 22(19): 2430–2436.
- Sima and Dougherty (2008) — (2008). “The peaking phenomenon in the presence of feature-selection.” Pattern Recognition Letters, 29(11): 1667–1674.
- Xi et al. (2018) Xi, J., Li, A., and Wang, M. (2018). “A novel unsupervised learning model for detecting driver genes from pan-cancer data through matrix tri-factorization framework with pairwise similarities constraints.” Neurocomputing, 296: 64–73.