Group Testing with Correlation under Edge-Faulty Graphs
Abstract
In applications of group testing in networks, e.g., identifying individuals who are infected by a disease spread over a network, exploiting correlation among network nodes provides fundamental opportunities in reducing the number of tests needed. We model and analyze group testing on $n$ correlated nodes whose interactions are specified by a graph $G$. We model correlation through an edge-faulty random graph formed from $G$ in which each edge is dropped with probability $1-r$, and all nodes in the same component have the same state.
We consider three classes of graphs: cycles and trees, $d$-regular graphs, and stochastic block models (SBM), and obtain lower and upper bounds on the number of tests needed to identify the defective nodes. Our results are expressed in terms of the number of tests needed when the nodes are independent, as a function of $n$, $r$, and the target error. In particular, we quantify the fundamental improvements that exploiting correlation offers by the ratio between the total number of nodes $n$ and the equivalent number of independent nodes in a classic group testing algorithm.
The lower bounds are derived by illustrating a strong dependence of the number of tests needed on the expected number of components. In this regard, we establish a new approximation for the distribution of component sizes in "$d$-regular trees" which may be of independent interest and leads to a lower bound on the expected number of components in $d$-regular graphs.
The upper bounds are found by forming dense subgraphs in which nodes are more likely to be in the same state. When $G$ is a cycle or tree, we show an improvement by a factor of $\log(1/r)$. For the grid, a graph with almost $2n$ edges, the improvement is by a factor of $(1-r)\log(1/r)$, indicating drastic improvement compared to trees. When $G$ has a larger number of edges, as in the SBM, the improvement can scale in $n$.
1 Introduction
Group testing [DHH00] is a well studied problem at the intersection of many fields, including computer science [Dor43, CHKV11, COGHKL20, CGM21, CTB+20], information theory [AJS19, BJ21, GH08] and computational biology [KM95, FKKM97]. The goal is to find an unknown subset of items that are different from the rest using the least number of tests. The target subset is often referred to as defective, corrupted or infected, depending on the field of study. In this work, we use the term defective. To find the subset of defectives, items are tested in groups. The result of a test is positive if and only if at least one item in the group is defective.
Over the years, this problem has been formulated via two approaches: the combinatorial approach and the information theoretic approach. In the "combinatorial" version of the problem, it is assumed that there are $k$ defective items that are to be detected with zero error [DHH00]. Using adaptive group testing (i.e., when whom to test next depends on the results of the previous tests), there is a matching upper and lower bound on the number of tests of the form $\Theta(k\log(n/k))$ [DHH00]. Using non-adaptive group testing (i.e., when the testing sequence is pre-determined), there is an upper bound of $O(k^2\log n)$ and an almost matching lower bound of $\Omega\big(\frac{k^2}{\log k}\log n\big)$. The "information theoretic" approach, on the other hand, assumes a prior statistic on the defectiveness of items, i.e., item $i$ is assumed to be defective with probability $p_i$. The aim in this case is to identify the defective set with high probability [LCH+14]. Roughly speaking, there is a lower bound in terms of the underlying entropy of the unknowns, and an almost matching upper bound up to a constant factor of the lower bound.
In most existing works, it is assumed that the states of the items, whether or not they are defective, are independent of each other, which is not realistic in many applications. Group testing, for example, can identify the infected individuals using fewer tests, and therefore in a more timely manner, than individual testing during the spread of an infectious disease (e.g., COVID-19) [BMR21, VFH+21, MNB+21, Ald20, GG20]. But the infection states of individuals are in general correlated, with correlation levels ranging from high to low, depending on how close they live: same household (high), same neighborhood, same city, same country (low). Correlation levels also depend on other factors such as frequency of contact and the number of paths between the individuals in the network of interactions. We elaborate on this further in Section 1.1. Another example is the multiaccess channel problem: here a number of users want to communicate through a common channel and we want to assign time slots to them to avoid any conflicts. Before assigning, we aim to find the number of active users that want to send a message. Using group testing, we can identify the number of active users faster by asking a subset of users if any of them is active [BMTW84, Chl01, GH08]. But again, nodes are often not independent. Generally, some subset of users might communicate among themselves more often and hence be more correlated. With this motivation, we aim to model such correlation, design group testing techniques that exploit it, and quantify the gain they provide in reducing the number of tests needed.
The closest to our work are [ACO21, AU21, NSG+21, NSFD21], where specific correlation models are considered and group testing methods are designed and analyzed. In [ACO21], the authors consider correlation that is imposed by a one-day spread of an infectious disease in a clustered network modeled by a stochastic block model. Each node is initially defective (infected) with some probability and, in the next time step, its neighbors become defective probabilistically. The authors provide a simple adaptive algorithm and prove optimality in some regimes of operation and under some assumptions. In [AU21], the authors model correlation through a random edge-faulty graph. Each edge is realized in the graph with a given probability. So depending on how the graph is realized, it is partitioned into some connected components. Each connected component is assumed defective with some probability (in which case, all the nodes in that component are defective) and otherwise non-defective. The authors focus only on a subset of the realizations by studying the case in which the set of connected components across realizations forms a particular nested structure. More specifically, they only consider a subset of realizations such that each realization is instantiated from another realization in the subset, i.e., two realizations have the same components except one component that is partitioned into two components. Only one realization does not obey this rule: the one with the least number of connected components. They found a near-optimal non-adaptive testing strategy for any distribution over the considered realizations of the graph and showed optimality for specific graphs.
The correlation model we consider is close to the work of [AU21]. We consider a random (edge-faulty) graph where each edge is realized with probability $r$. In a realized graph, each connected component is assumed defective with probability $p$. As opposed to [AU21], we do not constrain our study to a subset of the realizations and instead consider all possible realizations of the graph. Despite its simplicity, our model captures two key features. First, given a network of nodes, one expects that when a path between two nodes gets shorter, the probability that they are in the same state increases. Our proposed model captures this intuition: by adding one edge to a path, the probability of being in the same state reduces by a factor of $r$. Second, two nodes might have long distances from each other, but when there are many edge-distinct paths between them, it is more likely that they are in the same state. Under our model, by adding distinct paths between two nodes, the probability of them being in the same state increases.
In other related works, a graph could represent potential constraints on testing (among independent nodes) [CKMS12, SW18]. This can be viewed as a complementary direction to our setting, in which a graph models the inherent dependency of the nodes, but there is no constraint on how the groups can be formed for testing. In [CKMS12], the authors design optimal non-adaptive testing strategies when each group is constrained to be path connected. In particular, they use random walks to obtain the pool of tests. In a follow-up work, [SW18] shows that either the constraints are too strong and no algorithm can do better than testing most of the nodes, or optimal algorithms can be designed matching the unconstrained version of the problem. This is attained by sampling each edge with a probability that is optimized for the setting. If a component is large enough, the algorithm tests the entire component. Our approach in this paper has similarities with [SW18] in aiming to find parts of the graph that are large and connected enough so that they remain connected with a decent probability after realizing the edges, but our techniques to find the dense subgraphs and the corresponding analysis are different.
1.1 Our Model
We start by motivating the key attributes that we capture in our model through an example of testing for infections in a network of people (nodes). Consider the interaction network for the spread of an infectious disease (e.g. COVID-19) in a network of people/nodes. There is an edge between two nodes if the corresponding individuals are in physical proximity for a minimum amount of time each week. Such individuals are more likely to be in the same state than those who have been distant throughout. Thus, firstly, the probability of being in the same state decreases with increase in the length of paths (i.e., distance in interaction network) between nodes. Second, infection is more likely to spread from one node to another if there are many distinct paths between them. Thus, the probability that two nodes are in the same state increases with the increase in the number of distinct paths between them.
We capture correlation through an edge-faulty graph model. Consider a graph $G=(V,E)$ where the node set $V$, $|V|=n$, represents the items and the edge set $E$ represents connections/correlations between them. Suppose each edge is realized with probability $r$. After the sampling, we have a random graph that we denote by $G_r$. Each node is either defective or non-defective. All nodes in the same component of $G_r$ are in the same state, rendering defectiveness a component property. We consider that each component is defective with probability (w.p.) $p$, independent of others. As an example, consider a graph $G$ with five nodes and eight edges, and a sampled graph realization $G_r$, as shown in Figure 1 (left) and Figure 1 (right), respectively. Each realization occurs with a probability determined by $r$. In the realization shown, $G_r$ has two components; the nodes within each component are in the same state, which is defective w.p. $p$, independent of the state of the other component.
Two nodes are guaranteed to be in the same state in $G_r$ if there exists at least one path that connects them in $G_r$. The probability that a path in $G$ survives in $G_r$ increases with increase in $r$. Thus both the parameter $r$ and the graph $G$ determine the correlation between states of different nodes; the correlation is higher if $r$ is higher: states of all nodes are independent for $r=0$, while the correlation is the highest possible for a given $G$ for $r=1$.
This model importantly captures the two attributes we discussed. Clearly, a long path between two nodes in $G$ has a smaller chance of survival in $G_r$ compared to a short path, making the end nodes less likely to be in the same state as the length of the path between them increases. Moreover, the probability that at least one path between two nodes survives in $G_r$ increases with the number of distinct paths between them in $G$, so having distinct paths between a pair of nodes in $G$ makes them more likely to be in the same state.
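To make the model concrete, the following minimal sketch samples a realization $G_r$ and the component states under the model above. It is our own illustration: the example graph, the values of $r$ and $p$, and all function names (e.g., sample_states) are ours, not from the paper.

```python
# A minimal sketch of the correlation model: realize each edge w.p. r, then
# mark each connected component defective w.p. p. Graph and parameters are
# illustrative; function names are ours.
import random

def components(n, edges):
    """Union-find over the realized edges; returns the list of components."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def sample_states(n, edges, r, p, rng=random):
    """One draw of G_r and of the component states."""
    realized = [e for e in edges if rng.random() < r]
    states = [0] * n
    for comp in components(n, realized):
        if rng.random() < p:       # the whole component shares one state
            for v in comp:
                states[v] = 1
    return states

if __name__ == "__main__":
    cycle = [(i, (i + 1) % 8) for i in range(8)]   # an 8-node cycle
    print(sample_states(8, cycle, r=0.7, p=0.1))
```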
We aim to find the minimum expected number of tests needed to find the defective items with at most $\Delta$ errors, where $\Delta$ can potentially be of order $n$. To be precise, let $\#\mathrm{ERR}(H)$ be the number of nodes mispredicted by an algorithm on graph $H$. Then we require

$$\mathbb{E}\big[\#\mathrm{ERR}(G_r)\big]\ \le\ \Delta, \qquad (1)$$

where the expectation is taken over $G_r$ and possible randomization of the algorithm. We refer to this as the average error.
Remark 1.1.
The definition of error in classic probabilistic group testing such as [LCH+14] is a stronger notion of error probability, where the goal is to correctly predict all nodes with probability $1-\epsilon$, and with probability $\epsilon$ one or more nodes are mispredicted. This is stronger than our definition of average error in (1): with probability $\epsilon$, at most $n$ nodes are mispredicted in the classic group testing, so the average error would be at most $\epsilon n$; choosing $\epsilon=\Delta/n$ thus meets the allowed error in our model.
We mostly work with the notion of average error in this paper. In the last section (Section 5), we consider a stronger notion of error to limit the maximum error: the group testing schemes now need to upper bound the number of mispredicted nodes by $\Delta$ with high probability. We recover all the results of the paper under this stronger constraint on error as well.
Methodologically, we relate the problem of group testing with correlation to that in a network with fewer nodes in which the states of nodes are independent. We obtain bounds on the number of group tests required in the former, to satisfy the constraints we consider on errors, in terms of known bounds in the latter. The relative quantification provides a basis for comparison and determination of the improvements that can be obtained by exploiting correlation.
The tests cannot be designed with the knowledge of $G_r$; only the value of $r$ is known a priori. In the extreme case of $r=0$, the problem is reduced to the classic group testing with $n$ independent nodes. In the extreme case of $r=1$, all components of $G$ remain connected and have the same state, and hence the problem is reduced to testing a single node per component of $G$. When $0<r<1$, the problem is non-trivial, because there can be multiple components, some with more than one node, and the number and composition of the components is a priori unknown. Thus, it is not a priori known which nodes will be in the same state. Our group testing strategies will seek to circumvent this challenge by identifying parts of $G$ that are connected enough so that they remain connected in $G_r$ with a high probability.
1.2 Contributions
We obtain upper and lower bounds on the number of group tests needed to determine the states, defective or otherwise, of individual nodes in a large class of interaction graphs in the presence of correlation among the states of nodes. We progressively consider 1) cycles and trees (about $n$ links), 2) regular graphs (about $2n$ links), and 3) stochastic block models or SBM (with potentially $\Theta(n^2)$ links). The correlation is captured by the factor $r$ (see Section 1.1). The bounds are obtained in terms of the number of tests needed when the states are independent, and help us quantify the efficiency brought forth by group testing in terms of $n$ and $r$. In particular, our group testing strategies exploit correlation and build upon classical group testing strategies, but with a fewer number of nodes. The ratio between this number and the total number of nodes ($n$) determines the benefits of correlation in our strategies, and we refer to it as the (multiplicative) improvement factor. Note that this is not the ratio between the numbers of tests that are needed, but the ratio between the numbers of nodes that a classical group testing algorithm gets as input. As such, our results are valid for any group testing algorithm, and they can be translated to the ratio between the total numbers of tests, accordingly, as needed.
For trees and cycles, we prove an upper bound on the optimal number of tests in terms of the number of group tests when there are roughly $n\log(1/r)$ independent nodes (for a target error that is a constant fraction of $n$). Note that one can trivially determine the states of the nodes by disregarding correlation and testing among $n$ nodes (e.g., using classic group testing techniques). Our upper bound therefore shows that group testing can reduce the tests. The (multiplicative) improvement factor of $\log(1/r)$ is meaningful (less than 1) when $r>1/e$. As $r$ approaches $1$, the multiplicative factor reduces even further, implying even greater benefits due to group testing. Our lower bound, on the other hand, shows an improvement factor of $1-r$.
For regular graphs, we prove new results for the distribution of component sizes. This leads to a lower bound that is expressed as a sum series depending on $r$ and the degree. We further prove an upper bound for a specific regular graph, namely the grid, in terms of the number of group tests when there are roughly $n(1-r)\log(1/r)$ independent items. Thus, the improvement factor is $(1-r)\log(1/r)$, as opposed to only $\log(1/r)$ for trees; this hints that group testing gets drastically more efficient for denser graphs.
The stochastic block model divides the network into communities such that nodes in the same community are more connected than nodes in different communities. We show that the reduction in the test count due to group testing can be classified into three regimes: 1) strong intra-community connectivity but sparse inter-community connectivity, which reduces the effective number of independent nodes to the number of communities, 2) strong connectivity leading to an (almost) fully connected graph, in which case all nodes have the same state and one independent test is representative, and 3) sparse connectivity leading to many isolated nodes, in which case the states of all nodes are independent. The first case reduces to independent group testing with the number of nodes equal to the number of communities, the second regime needs a constant number of tests, and the third regime reduces to independent group testing with $n$ nodes. The tight upper and lower bounds that are known in the literature for independent group testing subsequently apply for the first and third cases; for the second case, the analysis is rather simple as only a constant number of tests is needed.
1.3 Our Methods and Ideas
We now briefly describe the mathematical techniques that we follow to obtain the bounds. The techniques constitute a contribution in themselves, as a graphical structure was not investigated earlier for group testing except under significant restrictions as described earlier. For the upper bound for a cycle, we divide the cycle into subgraphs of size $l$ ($l$ nodes), where $l$ is a parameter. The subgraphs are connected in $G$, but need not be connected in $G_r$, which we do not know a priori. For every subgraph, we select a node which we consider as representative of the subgraph and determine the states of the representatives of all the subgraphs using group testing strategies deployed when states of nodes are independent (i.e., we do not exploit possible correlation between the representatives). We consider the state of each node in each subgraph as that of the representative; this is indeed the case if each subgraph is connected, otherwise the states of some nodes are determined in error. The probability of each subgraph being connected decreases with increase in $l$; thus the expected number of errors, which can be computed as a function of $l$, increases with increase in $l$. The number of representatives, and therefore the number of nodes subjected to the group tests described above, is $n/l$. Thus the number of group tests is non-increasing in $l$. Thus $l$ represents a tradeoff between the expected number of errors and the number of group tests, and is selected appropriately to ensure a low number of group tests subject to ensuring that the expected number of errors does not exceed the specified limit. The number of group tests for the $l$ that satisfies the specified error constraint provides the upper bound on the number of tests for a cycle.
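The tradeoff can be explored numerically. The sketch below is our own; it uses the expected-error expression $n(1-r^{l-1})$ for paths of $l$ nodes (derived in Section 3) and finds the largest group size $l$ whose expected misprediction count stays within a given budget.

```python
# Exploring the group-size tradeoff for a cycle: a path of l nodes survives
# in G_r w.p. r**(l-1), so the expected number of mispredicted nodes is at
# most n * (1 - r**(l-1)), while the number of representatives is n / l.
import math

def cycle_tradeoff(n, r, budget):
    """Largest l whose expected misprediction count fits the budget."""
    best = 1
    for l in range(1, n + 1):
        if n * (1 - r ** (l - 1)) <= budget:
            best = l
    return best, math.ceil(n / best)

if __name__ == "__main__":
    n, r = 10_000, 0.99
    for budget in (10, 100, 1000):
        l, reps = cycle_tradeoff(n, r, budget)
        print(f"budget={budget:5d}: group size l={l}, representatives={reps}")
```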
The upper bound for an arbitrary tree can be obtained similarly, with the additional significant complication that for an arbitrary tree and an $l$ that satisfies the constraints on error, one may not be able to obtain subgraphs in $G$ of size $l$ that are connected in $G$ (in contrast, when $G$ is a cycle, a path of $l$ nodes constitutes such a subgraph). Refer to Figure 2 for an illustration of this challenge. We get around this challenge by using subgraphs that are not connected in $G$ by themselves, but become connected in $G$ through at most $l$ additional nodes of $G$ (which are not in the subgraph). Construction of such subgraphs is not a priori clear and constitutes an innovation needed for upper bounding the number of tests needed for trees, above and beyond the overall methodology. Each such subgraph is connected in $G_r$ if the links of $G$ among these nodes survive in $G_r$; the probability of this event can again be expressed as a function of $l$, and as before this probability decreases with increase in $l$. The rest of the methodology is similar to that for cycles.
The overall methodology for obtaining the upper bound for grids is the same as that for cycles. The subgraphs in question constitute sub-grids of $l\times l$ nodes. We determine the probability that each such sub-grid is connected in $G_r$ through a recursive decomposition which is not a priori obvious.
The characterization of the lower bound on the number of tests needed for cycles or trees constitutes another innovation. To obtain the lower bound, one can assume the knowledge of the components of $G_r$, which one does not know in reality. Nodes in each component of $G_r$ have the same state, which is independent of the states of nodes in other components. Thus each component can be considered a super-node, and the states of the super-nodes are independent of each other. Thus, one needs at least as many tests as when there are $C$ nodes whose states are independent, where $C$ is the number of components of $G_r$. A lower bound can now be obtained if the random variable $C$ can be bounded. We accomplish this objective by observing that the number of components in a stochastic graph constitutes an edge-exposure martingale, and the value of this random variable is concentrated around its expectation, courtesy of Azuma's inequality, which holds for such martingales.
We initially present the above results for constraints on the expected number of errors. We subsequently generalize them for a stronger constraint, that on the number of errors with a high probability (rather than in expectation), for cycles and trees. Lower bounds can be obtained following the same methodology as before, except that lower bounds on the number of tests needed for independent nodes under this stronger constraint do not exist in the literature. We derive the latter lower bound first, adapting some existing proof techniques relying on information-theoretic inequalities. We show that the lower bounds can be improved for some specific structures of $G$, such as when $G$ is a star. This stronger notion of error allows us to include the structure of $G$ more heavily and directly in this proof, rather than going via an analysis of the number of components; we resorted to the latter in obtaining the lower bounds for cycles and trees under the weaker constraint on errors. The upper bound can be obtained following the same broad structure, though some additional technical challenges arise. The number of errors over all subgraphs can be bounded with high probability using Hoeffding's inequality if the numbers of errors in different subgraphs are independent. This happens for the cycle: the subgraphs are non-overlapping paths and the events that they remain connected in $G_r$ are independent. Nonetheless, this does not happen for trees, because nodes in one subgraph may be connected through those in other subgraphs. We surmount this technical challenge by invoking an innovative exposure of nodes of the tree such that the number of connected subgraphs constitutes a node exposure martingale which satisfies the requisite Lipschitz condition for Azuma's inequality to hold. The high probability bound on the number of errors over all subgraphs then follows via Azuma's inequality.
2 Preliminaries and Notations
We use the following notations for the rest of the paper. Let $\mathrm{CrltOPT}(G,r,p,\Delta)$ be the expected number of tests in an optimal algorithm on graph $G$ with correlation parameter $r$, probability of defectiveness $p$, and an error of $\Delta$. Let $\mathrm{IndepOPT}(n,p,\epsilon)$ be the minimum expected number of tests needed for $n$ items in order to find the defective set with error probability at most $\epsilon$, where each item is independently defective with probability $p$. Notice that $\mathrm{IndepOPT}$ does not depend on $r$. Note that group testing is potentially beneficial when the number of defective items is $o(n)$. It is often assumed that the (expected) number of defective items is sublinear in $n$, so from now on we assume $pn=o(n)$. It is also noteworthy that the definitions of error in $\mathrm{IndepOPT}$ and $\mathrm{CrltOPT}$ are different, as mentioned in Remark 1.1. When clear from the context, we may drop $p$ from the notations. We write $f\lesssim g$ if $f=O(g)$. We write $f\approx g$ when $f\lesssim g$ and $g\lesssim f$.
The following lemma provides a lower bound on CrltOPT in terms of IndepOPT – the minimum number of group tests needed in discovery of the defective set among independent nodes.
Lemma 2.1.
Let $C$ be the number of connected components of $G_r$. Then

$$\mathrm{CrltOPT}(G,r,p,\Delta)\ \ge\ \mathbb{E}_{G_r}\big[\mathrm{IndepOPT}(C,p,\Delta)\big]. \qquad (2)$$
Proof.
Consider the idealized scenario in which one knows the components of $G_r$. Suppose that we have $C$ components. All nodes in the same component are in the same state, and the states of nodes in different components are independent. Thus, each component can be replaced by one node, and the minimum number of tests is that needed to test a graph with $C$ independent nodes such that the expected number of errors is at most $\Delta$. This number corresponds to $\mathrm{IndepOPT}(C,p,\epsilon)$ for some $\epsilon$ such that the expected number of errors is at most $\Delta$. A classical group testing algorithm with $\mathrm{IndepOPT}(C,p,\epsilon)$ tests may make (at least) an error in finding one node's state with probability $\epsilon$; thus the expected number of errors is at least $\epsilon$. This implies that we need $\epsilon\le\Delta$. Clearly $\mathrm{IndepOPT}(C,p,\epsilon)$ is a non-increasing function of $\epsilon$. Thus at least $\mathrm{IndepOPT}(C,p,\Delta)$ tests are needed when one knows the components of $G_r$. If one does not know the components, the expected number of tests can only increase. The lemma follows. ∎
Remark 2.2.
At first glance, the bound may come across as counter-intuitive because the number of tests in a graph in which the states of nodes are independent provides a lower bound on that when the states are correlated. The apparent contradiction is resolved when we note that the lower bound is obtained in terms of the number of tests in a graph with fewer nodes than the original: the number of components in $G_r$ instead of the number of nodes in $G$.
Lemma 2.1 illustrates a connection between the number of connected components and the minimum number of tests $\mathrm{CrltOPT}$. We next discuss concentration lemmas for a class of graph theoretic functions, including the number of connected components. With the concentration results in hand, we will be able to replace $C$ by its expectation in (2).
2.1 Concentration Results
Definition 2.3.
[AS16] A graph theoretic function $f$ is said to satisfy the edge Lipschitz condition if, whenever $H$ and $H'$ differ in only one edge, $|f(H)-f(H')|\le 1$.
Note that the number of components is edge Lipschitz, as when two graphs differ in only one edge, they either have the same number of components, or the graph with one less edge has one additional component. One can define a node Lipschitz condition by replacing edge with node [AS16].
The Edge Exposure Martingale. Let $e_1,\dots,e_m$ be an arbitrary order of the edges. We define a martingale $X_0,X_1,\dots,X_m$ where $X_i$ is the expected value of a graph theoretic function $f$ after exposing $e_1,\dots,e_i$. Note that $X_0$ is a constant, which is the expected value of $f(G_r)$, where $G_r$ is drawn from the edge-faulty model. This is a special case of martingales sometimes referred to as a Doob martingale process, where $X_i$ is the conditional expectation of $f$ given the information known at time $i$, as long as the information known at time $i+1$ includes the information at time $i$ [AS16]. The same process can be defined for node exposure martingales, where the nodes are exposed one by one [AS16]. Node exposure can be seen as exposing one node at each step, so at the $i$th step the graph has $i$ nodes along with the edges between them. More on the topic can be found in [AS16, Chapter 7]. We have the following theorem.
Theorem 2.4.
[AS16] When $f$ satisfies the edge (resp. node) Lipschitz condition, the corresponding edge (resp. node) exposure martingale satisfies $|X_{i+1}-X_i|\le 1$.
We then have Azuma’s inequality.
Theorem 2.5.
[AS16] (Azuma's inequality) Let $X_0,\dots,X_m$ be a martingale with $|X_{i+1}-X_i|\le 1$ for all $0\le i<m$. Then $\Pr\big[|X_m-X_0|>\lambda\sqrt m\big]<2e^{-\lambda^2/2}$ for any $\lambda>0$.
Corollary 2.6.
Let $f$ be an edge Lipschitz graph theoretic function and suppose $G$ has $m$ edges. Then with probability $1-\delta$ we have $\big|f(G_r)-\mathbb{E}[f(G_r)]\big|\le\sqrt{2m\ln(2/\delta)}$.
Specifically, in the case that $\delta=n^{-c}$ for a constant $c$, and $G$ has $O(n)$ edges, with high probability the number of connected components is within $O(\sqrt{n\log n})$ of its expectation.
Proof.
Consider the number of components, which is an edge-Lipschitz function of the graph. Now define the edge exposure martingale $X_0,\dots,X_m$ for this function, so that $X_0=\mathbb{E}[f(G_r)]$ and $X_m=f(G_r)$, and set $\lambda=\sqrt{2\ln(2/\delta)}$. Applying Theorem 2.5 concludes the proof. ∎
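As a sanity check on Corollary 2.6, one can observe empirically that the component count of $G_r$ fluctuates on the order of $\sqrt m$ around its mean. The following Monte Carlo sketch (ours; the grid instance and parameters are illustrative) does this for a graph with about $2n$ edges.

```python
# Monte Carlo check of Corollary 2.6: the number of components of G_r
# should concentrate within O(sqrt(m)) of its mean. Instance is ours.
import random, statistics

def num_components(n, edges, r, rng):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    count = n
    for u, v in edges:
        if rng.random() < r:            # edge survives in G_r
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                count -= 1
    return count

if __name__ == "__main__":
    rng = random.Random(0)
    side, r = 50, 0.5                    # 50x50 grid: n=2500, m=4900 ~ 2n
    edges = [(i * side + j, i * side + j + 1)
             for i in range(side) for j in range(side - 1)]
    edges += [(i * side + j, (i + 1) * side + j)
              for i in range(side - 1) for j in range(side)]
    samples = [num_components(side * side, edges, r, rng) for _ in range(200)]
    print("mean components:", statistics.mean(samples))
    print("std dev:", statistics.pstdev(samples),
          "(Azuma scale sqrt(m) =", int(len(edges) ** 0.5), ")")
```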
2.2 Classic Group Testing Results
Lemma 2.1 provides a lower bound on $\mathrm{CrltOPT}$ in terms of $\mathrm{IndepOPT}$. Indeed, any lower bound on the latter leads to a lower bound on the former. We now review some useful lower and upper bounds on $\mathrm{IndepOPT}$.
In the probabilistic group testing of [LCH+14], there are $n$ individuals and every individual is independently defective with probability $p$. Let $X$ be the indicator vector of the items' defectiveness and $H(X)$ be the entropy of $X$. The testing can be adaptive or non-adaptive, meaning the testing at each step can depend on the results of the previous tests, or not, respectively. [LCH+14] proves the following lower bound on the required number of tests (for both adaptive and non-adaptive scenarios).
Theorem 2.7.
[LCH+14] Any probabilistic group testing algorithm whose probability of error is at most $\epsilon$ requires at least $(1-\epsilon)H(X)$ tests.
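For concreteness, the bound of Theorem 2.7 is easy to evaluate numerically for i.i.d. items, where $H(X)=n\,h(p)$ with $h$ the binary entropy; the short sketch below (ours) does so.

```python
# Evaluating the (1 - eps) * H(X) lower bound of Theorem 2.7 for n i.i.d.
# items, where H(X) = n * h(p) and h is the binary entropy.
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entropy_lower_bound(n, p, eps):
    return (1 - eps) * n * binary_entropy(p)

if __name__ == "__main__":
    # roughly 767 tests are unavoidable for these illustrative parameters
    print(entropy_lower_bound(n=10_000, p=0.01, eps=0.05))
```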
For the upper bounds in this paper, roughly speaking, we partition the graph into groups of nodes and assume that the states of nodes in different groups are independent. Subsequently, we obtain bounds for group testing on the original graph in terms of those when the states of nodes are independent. We therefore utilize the existing bounds for smaller networks in our context. Below, we summarize the probabilistic group testing results of [LCH+14] for upper bounds on the number of tests needed when items are independent.
Theorem 2.8.
[LCH+14] There is an adaptive algorithm with probability of error at most $\epsilon$ such that the number of tests is of order $H(X)$ plus the expected number of defective items (the precise expression is given in [LCH+14]).
Theorem 2.9.
[LCH+14] For any $\delta>0$ and $\epsilon>0$, if the entropy of $X$ is sufficiently large (a condition quantified in [LCH+14]), then there is a non-adaptive algorithm with probability of error at most $\epsilon$ that requires no more than $O\big((1+\delta)H(X)\big)$ tests.
3 Graphs with a Low Number of Edges
In this section, we consider graphs that have at most $cn$ edges, where $c$ is a constant. Specifically, we prove lower and upper bounds on the number of tests needed when the underlying graph is a cycle or a tree, where the lower bound can be generalized to any graph with $O(n)$ edges. We obtain these bounds by reducing the problem to one with independent nodes, potentially with fewer nodes than the original problem. First, we use the lemmas introduced in Section 2 to obtain lower bounds for cycles and trees. Next, we propose group testing algorithms and prove upper bounds for cycles and trees.
3.1 A Lower Bound for Cycles and Trees
By Corollary 2.6, once we know the expected number of components in a graph, we can lower bound the minimum number of tests needed via Lemma 2.1. The following theorem proves a lower bound for cycles and trees.
Theorem 3.1.
Let $G$ be a cycle or a tree with $n$ nodes. Then we have $\mathrm{CrltOPT}(G,r,p,\Delta)\ \gtrsim\ \mathrm{IndepOPT}\big(n(1-r)-O(\sqrt{n\log n}),\,p,\,\Delta\big)$.
Proof.
In a tree, removing an edge creates one more component, so after removing $j$ edges the tree has $j+1$ components, and a cycle with $j\ge1$ removed edges has $j$ components.
Each edge is removed with probability $1-r$, so the expected number of components is $(n-1)(1-r)+1$ for trees, and roughly $n(1-r)$ for cycles. By Corollary 2.6, the number of components is $n(1-r)\pm O(\sqrt{n\log n})$ with probability $1-n^{-c}$, and with the remaining probability the difference in tests is at most $n$, hence $o(1)$ additional tests in expectation. Applying Lemma 2.1 thus completes the proof. ∎
The above proof also works for any graph with at most $cn$ edges, where $c$ is a constant: with $m$ edges, the expected number of components is at least $n-rm$. In other words, when the number of edges is $O(n)$, a lower bound on the number of tests needed for about $n-rm$ (almost) independent nodes is also a lower bound on the number of tests needed under our model with correlation $r$.
3.2 An Upper Bound for Cycles and Trees
In this section, we provide algorithms to find the defective set and prove theoretical bounds. We start by considering the case where $G$ is a simple cycle, and subsequently generalize the ideas to arbitrary trees. Note that an algorithm for trees yields an algorithm for general graphs, by simply running it on a spanning tree; the resulting algorithm might, however, be far from optimal.
The general idea is to partition the graph into subgraphs that remain connected in $G_r$ with high probability. The nodes in those connected subgraphs will thus have the same state. We can then select a candidate node for each subgraph to be tested. By knowing the probability of each subgraph being connected and the probability of error in classic group testing, we can estimate the error of our scheme as a function of the size of the subgraphs and design the subgraphs accordingly.
First, we provide our results for when $G$ is a cycle.
Theorem 3.2.
Consider a cycle of length $n$ and a target error $\Delta$. Let $\epsilon=\Delta/(2n)$ and $l=\big\lfloor 1+\frac{\Delta}{2n\ln(1/r)}\big\rfloor$. Then there is an algorithm that uses $\mathrm{IndepOPT}(\lceil n/l\rceil,p,\epsilon)$ tests and finds the defective set with error at most $\Delta$.
Proof.
Consider the following algorithm:
-
1.
Let $l=\big\lfloor 1+\frac{\Delta}{2n\ln(1/r)}\big\rfloor$. Partition the cycle into $\lceil n/l\rceil$ paths $P_1,\dots,P_{\lceil n/l\rceil}$ of the same length $l$, except one path that may be shorter.
-
2.
For each path $P_i$, choose one of its nodes at random and let the corresponding nodes be $u_1,\dots,u_{\lceil n/l\rceil}$.
-
3.
Use a classic group testing algorithm with error probability $\epsilon=\Delta/(2n)$ to find the defective set among $u_1,\dots,u_{\lceil n/l\rceil}$.
-
4.
Assign the state of all the nodes in $P_i$ to be that of $u_i$, for all $i$.
Note that for each $i$, the defectiveness probability of $u_i$ is $p$. The probability that $P_i$ is actually connected after a realization is $r^{l-1}$. So the probability that a given node of $P_i$ is not in the same state as $u_i$ is at most $1-r^{l-1}$. Then, assuming that we detect all the $u_i$'s correctly, the expected error over all paths is at most $n(1-r^{l-1})$. Substituting the choice of $l$ (and using $1-r^{l-1}\le(l-1)\ln(1/r)$), this error becomes less than $\Delta/2$. Moreover, we might also have probability $\epsilon$ of error for the $u_i$'s (given the criteria set in $\mathrm{IndepOPT}$), meaning that with probability $1-\epsilon$ all the nodes are predicted correctly, and with probability $\epsilon$ we have at least one mispredicted node and at most $n$ mispredicted nodes. So the total error from this part is at most $\epsilon n=\Delta/2$. So the total error is at most $\Delta$ and we have the above theorem. ∎
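The following sketch (our own) simulates the full pipeline of Theorem 3.2 on a cycle, replacing the classic group-testing subroutine with a zero-error oracle on the representatives, so that the measured errors come only from paths that fail to stay connected in $G_r$. All helper names are ours.

```python
# Simulation of the Theorem 3.2 pipeline on a cycle, with a zero-error
# oracle standing in for the classic group-testing subroutine, so measured
# errors come only from paths that fail to stay connected in G_r.
import random

def run_cycle_algorithm(n, r, p, l, rng=random):
    # Ground truth: kept[i] says whether cycle edge (i, i+1 mod n) survives.
    kept = [rng.random() < r for _ in range(n)]
    state, comp_state = [0] * n, 0
    for i in range(n):
        if i == 0 or not kept[i - 1]:               # a new component starts
            comp_state = 1 if rng.random() < p else 0
        state[i] = comp_state
    if kept[n - 1] and not all(kept):               # wrap-around edge merges
        j = 0                                        # the first and last runs
        while j < n and (j == 0 or kept[j - 1]):
            state[j] = state[n - 1]
            j += 1
    # Algorithm: one random representative per path of l consecutive nodes.
    predicted = [0] * n
    for start in range(0, n, l):
        rep = start + rng.randrange(min(l, n - start))
        for i in range(start, min(start + l, n)):
            predicted[i] = state[rep]               # oracle gives state[rep]
    return sum(a != b for a, b in zip(predicted, state))

if __name__ == "__main__":
    errs = [run_cycle_algorithm(2000, 0.98, 0.05, l=10) for _ in range(50)]
    print("average mispredictions:", sum(errs) / len(errs))
```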
Corollary 3.3.
Consider the case in which $\Delta=cn$, where $c$ is a constant. Let $X$ be the vector of states of the candidate nodes. Then $H(X)=\lceil n/l\rceil\,h(p)$, where $h$ is the binary entropy. Note that the average number of infected nodes among the candidates is $\lceil n/l\rceil\,p$; hence, by Theorem 2.8, the number of tests is upper bounded by a quantity of order $\lceil n/l\rceil\big(h(p)+p\big)$, with $l=\Theta\big(1+c/\ln(1/r)\big)$.
Note that when correlations are strong, i.e., $r\to1$, we have $l\to\infty$ and the algorithm does only a constant number of tests, as expected. Note that using classic group testing without incorporating correlation, we would need about $n\,h(p)$ tests.
We now generalize the ideas to derive an upper bound when $G$ is a tree. We partition $G$ into subgraphs (which we sometimes refer to as groups) of $l$ nodes, find the probability of each subgraph being connected in a random realization, and then optimize by choosing $l$. At a high level, we partition $G$ such that the nodes within a subgraph have short paths among each other. This is because shorter paths remain connected in $G_r$ with higher probability, maximizing the probability of the nodes being in the same state. Finding the probability of error is not straightforward here, because the subgraphs we form may not necessarily be connected within themselves in the original graph, but could be connected through the realization of other edges/nodes.
We first give a definition to formalize the number of nodes needed to make a subset of nodes connected.
Definition 3.4.
Let $S$ be a subset of the nodes of graph $G$. The smallest connecting closure of $S$ is a smallest set $C\subseteq V\setminus S$ of additional nodes such that the induced graph over $S\cup C$ is connected.
For example, consider the graph in Figure 1. If $S$ consists of two nodes joined by a two-edge path, then the smallest connecting closure of $S$ is the middle node of that path, as adding it to $S$ makes $S$ connected.
Note that if $G$ is a tree, then every connected subgraph of $G$ is also a tree, and the number of links in the induced graph over a connected node set is one less than the number of nodes in it.
Now we provide a partition of the nodes of a tree such that, for each subgraph, only a few additional nodes (and thereby additional links) make it connected. Formally:
Lemma 3.5.
Let $G$ be a tree with $n$ nodes. There is a partition of the graph into $\lceil n/l\rceil$ subgraphs, each with $l$ nodes (one subgraph may have fewer than $l$ nodes), such that the number of nodes in the smallest connecting closure of each subgraph is less than or equal to $l$.
Proof.
We prove the lemma by induction on the number of nodes of $G$. For $n\le l$, the statement is trivially true. Now suppose the lemma is true for any number of nodes less than $n$; we prove it for $n$.
We aim to find a set of nodes of size $l$ such that, first, by removing the set the graph remains connected, and second, the smallest connecting closure of the set has at most $l$ nodes. Then, removing the aforementioned set and considering it as one of the subgraphs, we use the induction hypothesis for the rest of the graph.
To do that, suppose the tree is hung from an arbitrary root. Let $v$ be one of the deepest leaves, that is, any other node is at a higher or equal level than $v$. Let $u$ be the first ancestor of $v$ such that the subtree rooted at $u$, including $u$, has $l$ or more nodes. If it has exactly $l$ nodes, then the subtree rooted at $u$ is the desired subgraph.
Now suppose that the subtree rooted at $u$ has more than $l$ nodes. Note that the number of nodes on the path from $v$ to $u$ is $l$ or fewer; otherwise $v$ would have an ancestor lower than $u$ whose subtree has at least $l$ nodes. Thus the distance from $v$ to $u$ is less than $l$. Now we form a subgraph $S$ with $l$ nodes and a connecting closure of $l$ or fewer nodes. We progressively build $S$. Starting with the empty set $S$, we add the subtree of the child of $u$ that $v$ is a descendant of, together with the child itself; call this subtree $T_1$. Note that $T_1$ has fewer than $l$ nodes, otherwise $v$ would have an ancestor lower than $u$ whose subtree has at least $l$ nodes. Since $T_1$ is an entire subtree rooted at a node, the rest of the tree remains connected even when $T_1$ is removed.
Consider $S=T_1$. Recursively, do the same process for the other subtrees of $u$ with the budget $l-|S|$ instead of $l$. Note that $u$ has subtrees other than $T_1$, as the subtree rooted at $u$ has more than $l$ nodes and $T_1$ has fewer than $l$ nodes. Then consider another subtree $T_2$ of $u$. If $|S|+|T_2|\le l$, we update $S$ by adding $T_2$ to it and continue with another subtree of $u$, which by the same argument exists. If $|S|+|T_2|>l$, then again we choose a deepest leaf of $T_2$ and proceed with the same process as before to find another group of nodes, i.e., we start with the deepest leaf and go up in the tree until the subtree size exceeds the remaining budget, and repeat the procedure. Note that after moving to the subtrees of $u$, we disregard the rest of the graph, so $u$ is an ancestor of all the nodes we encounter next.
Again, for the next recursion, the node whose subtree exceeds the updated budget is an ancestor of the rest of the nodes. Let us call $u$ and the other nodes at which we recurse "breaking points". Then any pair of breaking points are ancestor and descendant, and all the nodes added to $S$ belong to subtrees of breaking points. So by connecting all the breaking points by a single path, which has at most $l$ nodes (as each breaking point is within distance $l$ of the deepest leaf we started from), we connect $S$; so the smallest connecting closure of $S$ has $l$ or fewer nodes. Moreover, we have only included entire subtrees of breaking points, so by removing $S$, the tree remains connected and we can use the induction for the remaining tree.
To illustrate the algorithm, consider the tree in Figure 3 with $l=4$. We start with a deepest leaf, move up, and add node 7 to the subgraph. We move up again, and this time we cannot add the parent and all its subtrees, as the size of the subtree rooted at it would exceed $l$. So we update the budget and proceed with another subtree, say the right subtree containing node 8. The size of the subtree at 8 is one (only {8}), so we can add it to the subgraph and update the budget. Now we continue with the left subtree. Node 10 is a deepest leaf that we start with; we move up, but we cannot add the subtree rooted at its parent, as it is bigger than the remaining budget. So we add 10 to the set, update the budget, and proceed. Finally, the remaining budget is 1 and we only add 9 to the subgraph. So the final subgraph is {7, 8, 10, 9}, and we can connect all of its nodes by adding the breaking points on the path between them.
We have thus found an $S$ whose smallest connecting closure has at most $l$ nodes, and $S$ includes only entire subtrees of breaking points, so removing it does not disconnect the graph. We then save $S$ as one of the desired subgraphs and use the induction hypothesis on the rest of the graph. The other formed subgraphs have size $l$ (except possibly one) and have small connecting closures. Adding $S$ to these subgraphs, we get the desired grouping of all nodes and the proof is complete. ∎
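The sketch below is a simplified, heuristic stand-in for the grouping of Lemma 3.5 (not the paper's exact inductive construction): it peels groups of deepest nodes and then measures the smallest connecting closure of each group, i.e., the extra nodes of the minimal subtree spanning it. The example tree and all names are ours.

```python
# Heuristic grouping for trees plus a closure-size check: peel the l deepest
# remaining nodes as a group, then count the extra nodes of the minimal
# subtree spanning the group (its smallest connecting closure in the tree).
from collections import deque

def bfs_depths(adj, root=0):
    depth = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                q.append(v)
    return depth

def closure_size(adj, group):
    """Extra nodes in the minimal subtree of the tree spanning `group`."""
    group = set(group)
    alive = {u: set(vs) for u, vs in adj.items()}
    q = deque(u for u, vs in alive.items() if len(vs) <= 1 and u not in group)
    while q:                      # prune leaves that are not group members
        u = q.popleft()
        if u not in alive:
            continue
        for v in alive.pop(u):
            alive[v].discard(u)
            if len(alive[v]) <= 1 and v not in group:
                q.append(v)
    return len(alive) - len(group)

if __name__ == "__main__":
    # A small tree given by child -> parent pointers (illustrative).
    parents = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2, 7: 3, 8: 5, 9: 6, 10: 7}
    adj = {u: set() for u in range(11)}
    for c, par in parents.items():
        adj[c].add(par)
        adj[par].add(c)
    depth = bfs_depths(adj)
    order = sorted(adj, key=lambda u: -depth[u])
    l = 4
    for i in range(0, len(order), l):
        g = order[i:i + l]
        print("group", g, "closure size:", closure_size(adj, g))
```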
Now we are ready to prove the upper bound for trees.
Theorem 3.6.
Consider a tree with $n$ nodes and a target error $\Delta$. Let $\epsilon=\Delta/(2n)$ and $l=\big\lfloor 1+\frac{\Delta}{4n\ln(1/r)}\big\rfloor$. Then there is an algorithm that uses $\mathrm{IndepOPT}(\lceil n/l\rceil,p,\epsilon)$ tests and finds the defective set with at most $\Delta$ errors. I.e., $\mathrm{CrltOPT}(G,r,p,\Delta)\le\mathrm{IndepOPT}(\lceil n/l\rceil,p,\Delta/(2n))$.
Proof.
Consider the following algorithm:
-
1.
By Lemma 3.5, partition the tree into $\lceil n/l\rceil$ subgraphs $S_1,\dots,S_{\lceil n/l\rceil}$ of the same size $l$; one subgraph might be smaller than the others.
-
2.
For each group $S_i$, choose one of its nodes at random and let the chosen nodes be $u_1,\dots,u_{\lceil n/l\rceil}$.
-
3.
Use a classic group testing algorithm with error probability $\epsilon$ to find the defective set among $u_1,\dots,u_{\lceil n/l\rceil}$.
-
4.
Assign the state of all the nodes in $S_i$ to be that of $u_i$, for all $i$.
First, we calculate the probability that $S_i$ is connected in $G_r$. By Lemma 3.5, we know that each $S_i$ has the property that its smallest connecting closure has $l$ or fewer nodes. Thus, $S_i$ together with its connecting closure has at most $2l$ nodes, and hence at most $2l-1$ edges in $G$, and it constitutes a connected subgraph of $G$. Therefore, the probability of $S_i$ being connected in $G_r$ is at least the probability that the above edges are retained, which is at least $r^{2l-1}$. So the probability that a given node of $S_i$ is not in the same state as $u_i$ is at most $1-r^{2l-1}$. The rest of the proof revolves around proving that the total error is less than $\Delta$, as was done for the cycle, and this completes the proof. ∎
Corollary 3.7.
Corollary 3.3 can be recovered for trees with an additional factor of 2.
4 An Upper Bound for Graphs with More Edges: Grids and SBMs
In this section, we focus on graphs that potentially have many edges. As the number of edges increases, the correlation between nodes increases even when $r$ is not large. As mentioned earlier, we need to target those components that are more likely to appear across realizations.
We know that there is a threshold phenomenon in some edge-faulty graphs, meaning that when $r$ is below a threshold, there are many isolated nodes (and hence many independent tests are needed) and when $r$ is above that threshold, we have a giant component (and hence a single test suffices for most nodes). Most famously, this threshold is $1/n$ (for the emergence of a giant component) in Erdős–Rényi graphs. For random $d$-regular graphs, [Goe97] has shown that when a graph is drawn uniformly from the set of all $d$-regular graphs with $n$ nodes and then each edge is realized with probability $r$, $r=1/(d-1)$ is a threshold almost surely.
For the rest of this section, we first study a (deterministic) 4-regular graph, known as the grid (with the subtle caveat that degree regularity does not hold on the boundaries of the grid), and then provide near-optimal results for the stochastic block model. When we consider (deterministic) $d$-regular graphs, we cannot use the results of [Goe97] for random $d$-regular graphs, because we cannot be sure that the specific chosen graph is among the "good" graphs that constitute the almost-sure result. So we need to develop new results on the number of connected components and their distribution for our purposes.
4.1 The grid
We first formally define a grid. A grid with $n$ nodes and side length $\sqrt n$ is a graph whose nodes are of the form $(i,j)$ for $1\le i,j\le\sqrt n$. Node $(i,j)$ is connected to its four closest neighbors (if they exist), namely $(i\pm1,j)$ and $(i,j\pm1)$. Border nodes (with $i\in\{1,\sqrt n\}$ or $j\in\{1,\sqrt n\}$) might have only three or two neighbors. Note that the side length is defined by the number of nodes on the side.
In order to derive a lower bound, we need to know the expected number of components in $G_r$, or equivalently, the expected size of the component that a node belongs to [AS16, Goe97]. To find the expected component size, we describe a random process that forms the components and analyze its expected stopping time. While this is well known for ER graphs, to the best of our knowledge there are no such bounds when the graph is not complete. We derive a lower bound on the expected number of components in a grid by analyzing the problem for 3-regular infinite trees. This approach can be generalized for any $d$-regular graph.
Consider the following process. Pick a node $v$, mark it as processed, and let it be the root of a tree. For each node $u$ that is not processed and is a neighbor of $v$, the edge $(v,u)$ is realized with probability $r$, and if realized, $u$ is added as a child of $v$. The same process is repeated for each realized node in Breadth First Search (BFS) order. When the process ends, there is a tree with root $v$, and the expected size of the tree is the expected size of the component that $v$ ends up in.
An example of this process is shown in Figure 4. The root is colored in blue, the children that are realized are marked in green, and the children that are not realized are marked in red; the resulting component consists of the root together with the green nodes.
By repeating the process for each node that is not processed yet, we get a spanning forest. The expected number of components in the forest is the expected number of components in the original random graph.
Here, the challenge is that we don't know the number of available (unprocessed) neighbors of a node. It highly depends on the previously processed nodes, especially when the degree is small, as in the grid. Note that this is more complicated than the process for ER graphs, as the number of unprocessed neighbors in an ER graph is independent of the previously processed nodes (due to its homogeneous nature). We circumvent this issue by analyzing an infinite regular tree process that effectively corresponds to a more connected graph and therefore leads to a lower bound on the expected number of connected components of the grid.
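The exploration process itself is straightforward to simulate; the sketch below (ours, with illustrative parameters) estimates the expected number of components of an edge-faulty grid by running the BFS process above.

```python
# Estimating the expected number of components of an edge-faulty grid by
# running the BFS exploration process described above. Parameters are ours.
import random
from collections import deque

def grid_components(side, r, rng):
    def nbrs(v):
        i, j = divmod(v, side)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < side and 0 <= j + dj < side:
                yield (i + di) * side + (j + dj)
    decided = {}                          # each edge is tried exactly once
    processed = [False] * (side * side)
    comps = 0
    for root in range(side * side):
        if processed[root]:
            continue
        comps += 1                        # a new exploration tree starts
        processed[root] = True
        q = deque([root])
        while q:
            u = q.popleft()
            for v in nbrs(u):
                e = (min(u, v), max(u, v))
                if e not in decided:
                    decided[e] = rng.random() < r
                if decided[e] and not processed[v]:
                    processed[v] = True
                    q.append(v)
    return comps

if __name__ == "__main__":
    rng = random.Random(1)
    est = sum(grid_components(60, 0.25, rng) for _ in range(20)) / 20
    print("estimated E[#components], 60x60 grid, r=0.25:", est)
```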
4.1.1 3-regular Trees
Consider an infinite tree with root $v$ such that each node in the tree has three children. Consider the process where each edge is realized with probability $r$. Let $T_v$ be the component that $v$ ends up in. The following lemmas approximate the distribution of $|T_v|$.
Lemma 4.1.
Under the above process and for $k\ge1$,
$$\Pr\big[|T_v|=k\big]\;=\;\frac{1}{2k+1}\binom{3k}{k}\,r^{k-1}(1-r)^{2k+1}.$$
Proof.
Let $T$ be an embedded tree with $k$ nodes rooted at $v$. In order for $T$ to be realized in the process, all the edges in $T$ should be realized and the remaining edges that have a node in $T$ should not be realized. There are $k-1$ edges in $T$, and each node has three potential child edges, so there are $3k-(k-1)=2k+1$ edges that must not be realized. So the probability that $T$ is realized is $r^{k-1}(1-r)^{2k+1}$.
Let $N_k$ be the number of such trees with $k$ nodes and $v$ as the root. We have $\Pr\big[|T_v|=k\big]=N_k\,r^{k-1}(1-r)^{2k+1}$.
Note that $N_1=1$. We find a closed form of $N_k$ by recursion. Node $v$ has three potential subtrees, where the sum of the sizes of the subtrees is $k-1$. We thus get the following recursion (with $N_0=1$): $N_k=\sum_{a+b+c=k-1}N_aN_bN_c$.
This recursion has the same initial values and the same form as the recursion for the second-order Catalan numbers:
Lemma 4.2.
The second-order Catalan numbers $A_k=\frac{1}{2k+1}\binom{3k}{k}$ satisfy $A_0=1$ and $A_k=\sum_{a+b+c=k-1}A_aA_bA_c$.
So the solution has the form $N_k=\frac{1}{2k+1}\binom{3k}{k}$ and the proof is completed. ∎
Lemma 4.3.
Let $q=\Pr\big[|T_v|=\infty\big]$. Then, under the above process, $q=0$ for $r\le1/3$ and $q=\frac{3-\sqrt{4/r-3}}{2r}$ for $r>1/3$.
Proof.
In order for $v$ to be in an infinite component, at least one of its realized children must be in an infinite component. There are three cases: (i) $v$ has one realized child, and that child's component is infinite, which happens with probability $3r(1-r)^2q$; (ii) $v$ has two realized children and at least one of them lies in an infinite component, which happens with probability $3r^2(1-r)\big(1-(1-q)^2\big)$; or (iii) $v$ has three realized children and at least one of them lies in an infinite component, which happens with probability $r^3\big(1-(1-q)^3\big)$. So in total we have
$$q\;=\;3r(1-r)^2q+3r^2(1-r)\big(1-(1-q)^2\big)+r^3\big(1-(1-q)^3\big),$$
which simplifies to $q=1-(1-rq)^3$.
The solutions of this equation are $q=0$ and $q=\frac{3\pm\sqrt{4/r-3}}{2r}$. Note that $q\le1$, so the root with the plus sign is not valid. The relevant solution turns out to depend on whether $r>1/3$ or $r\le1/3$. As a matter of fact, when $r>1/3$, the probabilities of all finite components (as found in Lemma 4.1) do not add up to one. So for $r>1/3$, the correct solution is $q=\frac{3-\sqrt{4/r-3}}{2r}$. When $r\le1/3$, we have $\frac{3-\sqrt{4/r-3}}{2r}\le0$, so zero is the correct solution and the lemma is proved. ∎
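A quick numeric check of Lemma 4.3, under our reconstruction of the fixed-point equation $q=1-(1-rq)^3$: iterating the map from $q=1$ converges to the survival probability, which can be compared against the closed-form root.

```python
# Checking Lemma 4.3 numerically: iterate q <- 1 - (1 - r*q)^3 from q = 1
# and compare with the closed-form root. (Convergence is slow exactly at
# the critical point r = 1/3, where the true answer is 0.)
import math

def survival_prob(r, iters=10_000):
    q = 1.0
    for _ in range(iters):
        q = 1 - (1 - r * q) ** 3
    return q

if __name__ == "__main__":
    for r in (0.2, 1 / 3, 0.5, 0.8):
        closed = 0.0 if r <= 1 / 3 else (3 - math.sqrt(4 / r - 3)) / (2 * r)
        print(f"r={r:.3f}: iterated={survival_prob(r):.6f}, closed={closed:.6f}")
```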
Theorem 4.4.
When $r<1/3$, the expected component size is
$$\mathbb{E}\big[|T_v|\big]\;=\;\sum_{k\ge1}\frac{k}{2k+1}\binom{3k}{k}\,r^{k-1}(1-r)^{2k+1}. \qquad(3)$$
Note that when $r<1/3$, the ratio of consecutive terms approaches $\frac{27}{4}r(1-r)^2<1$, and hence the sum converges.
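The series (3) can be evaluated directly. The sketch below (ours) computes it in log-space to avoid overflow in the binomial coefficients; a useful sanity check is that, for the subcritical branching process with $\mathrm{Bin}(3,r)$ offspring, the mean total progeny is $1/(1-3r)$.

```python
# Direct evaluation of the series (3), done in log-space to avoid overflow
# in the binomial coefficients. Sanity check: the exploration process is a
# branching process with Bin(3, r) offspring, whose mean total progeny for
# r < 1/3 is 1/(1 - 3r).
from math import exp, log, lgamma

def log_term(k, r):
    # log of (1/(2k+1)) * C(3k, k) * r^(k-1) * (1-r)^(2k+1)
    logN = lgamma(3 * k + 1) - lgamma(k + 1) - lgamma(2 * k + 1) - log(2 * k + 1)
    return logN + (k - 1) * log(r) + (2 * k + 1) * log(1 - r)

def expected_component_size(r, kmax=5000):
    return sum(k * exp(log_term(k, r)) for k in range(1, kmax + 1))

if __name__ == "__main__":
    r, n = 0.2, 10_000
    mean_size = expected_component_size(r)
    print("E[|T_v|] at r=0.2:", mean_size,
          "(branching-process value:", 1 / (1 - 3 * r), ")")
    print("lower bound on E[#components] of an n-node grid:", n / mean_size)
```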
It is worthwhile to remark that the proof generalizes to general regular tree processes.
Remark 4.5.
When the underlying graph is an infinite $d$-regular tree (each node having $d-1$ children), under the defined process and for $k\ge1$,
$$\Pr\big[|T_v|=k\big]\;=\;\frac{1}{(d-2)k+1}\binom{(d-1)k}{k}\,r^{k-1}(1-r)^{(d-2)k+1}.$$
4.1.2 Lower Bound on the Expected Number of Components in a Grid
In the case of the grid, we consider 3-regular trees: after the root, each node explored in the grid has at most 3 potential unprocessed neighbors, and if we choose the root on the border, the root also has at most 3 potential children, as illustrated in Figure 4. So the 3-regular tree process that we analyzed corresponds to a more connected graph than the grid. Therefore the expected component size that we found in (3), which upper bounds the expected component size in the grid, provides a lower bound on the expected number of connected components in the grid.
Let $C$ be the number of connected components. Note that $C$ is a stopping time, as by knowing the partial sums $\sum_{i\le t}s_i$, where $s_i$ is the size of the $i$'th component, we can decide whether the process has finished or not. The random process in the regular tree is symmetric over all the nodes, so, $C$ being a stopping time, Wald's identity gives $\mathbb{E}[C]=n/\mathbb{E}\big[|T_v|\big]$. So by Theorem 4.4, we immediately have the following result.
Theorem 4.6.
For a grid with $n$ nodes and $r<1/3$, the expected number of components is at least
$$\frac{n}{\sum_{k\ge1}\frac{k}{2k+1}\binom{3k}{k}\,r^{k-1}(1-r)^{2k+1}}.$$
4.1.3 An Upper Bound for the Grid
In this subsection, we provide an upper bound on the number of tests in a grid. At a high level, we partition the grid into subgrids and assume that each subgrid is connected, so we can consider one representative node per subgrid to test. In other words, the algorithmic idea is similar to the proofs of Theorems 3.2 and 3.6, where the graph was partitioned into subgraphs that were likely to remain connected in $G_r$; here, we choose the subgraphs to be subgrids. In order to calculate the error, we need to compute the probability that a subgrid of side length $l$ is connected, where $l$ is to be optimized later. We first estimate the probability that a subgrid is connected.
Lemma 4.8.
Let $P_t$ be the probability that a grid of side length $t$ becomes connected when each of its edges is realized with probability $r$. We have:
$$P_t\;\ge\;P_{t-1}\cdot\big(1-e^{-\Omega(t(1-r))}\big)\cdot r^{O(t(1-r))}.$$
Proof.
Consider the subgrid of side length $t-1$ that contains the bottom-left corner node. Then the main grid consists of this subgrid and an L-shaped path with $2t-1$ nodes, where each node in the path (except the far corner) has one edge to the subgrid. For example, in Figure 4, the subgrid is the grid spanned by the nodes near the bottom-left corner and the path consists of the remaining boundary nodes. With probability $P_{t-1}$ the subgrid is connected. Note that, in expectation, $(1-r)(2t-2)$ edges of the path are removed, and by the Chernoff bound this count is concentrated around its mean. Then, the path decomposes into at most $O(t(1-r))$ subpaths with probability at least $1-e^{-\Omega(t(1-r))}$. Each subpath has at least one edge to the subgrid, so each one is connected to the subgrid with probability at least $r$. The probability that all of them connect to the subgrid is then at least $r^{O(t(1-r))}$, and the lemma is proved. ∎
Theorem 4.9.
Let $P_t$ be the probability that a grid of side length $t$ becomes connected when each of its edges is realized with probability $r$. We have:
$$P_t\;\ge\;r^{O(t^2(1-r))}\prod_{s=2}^{t}\big(1-e^{-\Omega(s(1-r))}\big).$$
Proof.
The proof follows by applying Lemma 4.8 repeatedly, replacing $t$ with $t-1$, then $t-2$, and so on, and finally using $P_1=1$. ∎
We now partition the grid into subgrids of side length $l$ ($l$ will be set later), consider a candidate node for each subgrid, and run independent group testing on the candidate nodes. Now, similar to Theorem 3.6, by setting the error probability of each subgraph small enough, that is $1-P_l\le\Delta/(2n)$, we get $l^2=O\big(\frac{\Delta}{n(1-r)\ln(1/r)}\big)$. Then the error is less than $\Delta$ with at most $\mathrm{IndepOPT}(n/l^2,p,\Delta/(2n))$ independent-node tests.
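The parameter choice can be mimicked empirically: the sketch below (ours) estimates the connection probability of an $l\times l$ subgrid by Monte Carlo instead of using the bound of Theorem 4.9, and picks the largest side length whose expected misconnection error stays within half the budget.

```python
# Empirical stand-in for the parameter choice: estimate the probability that
# an l x l subgrid stays connected by Monte Carlo, and take the largest l
# whose expected misconnection error n * (1 - P_l) stays within budget / 2.
import random
from collections import deque

def subgrid_connected(l, r, rng):
    adj = {v: [] for v in range(l * l)}
    for i in range(l):
        for j in range(l):
            v = i * l + j
            if j + 1 < l and rng.random() < r:
                adj[v].append(v + 1)
                adj[v + 1].append(v)
            if i + 1 < l and rng.random() < r:
                adj[v].append(v + l)
                adj[v + l].append(v)
    seen, q = {0}, deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen) == l * l

def pick_side_length(n, r, budget, trials=400, rng=random):
    best = 1
    for l in range(2, 40):
        fail = sum(not subgrid_connected(l, r, rng) for _ in range(trials)) / trials
        if n * fail > budget / 2:   # expected misconnection errors too high
            break
        best = l
    return best

if __name__ == "__main__":
    print("chosen subgrid side:", pick_side_length(n=10_000, r=0.95, budget=500))
```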
4.2 An Optimal Algorithm for the Stochastic Block Model
In this section, we study our model on SBM graphs. We apply the same techniques used to find the connectivity threshold in Erdős–Rényi graphs in order to find the structure of the connected components.
A stochastic block model has $k$ clusters of size $m=n/k$, where any pair of nodes in the same cluster are connected with probability $p_1$, and any pair of nodes in different clusters are connected with probability $p_2$. After realizing each edge with probabilities $p_1$ and $p_2$, we have our graph $G$. Then, based on our correlation model, each edge of $G$ remains in $G_r$ with probability $r$. So an edge survives within a cluster with probability $p_1r$ and between two different clusters with probability $p_2r$. Here, we assume the size of the clusters is much bigger than the number of clusters, i.e., $n/k\gg k$. We find the number of connected components based on $p_1r$ and $p_2r$ with high probability; for simplicity, let's say 99%. The probability can be improved with a slight change in the parameters.
Theorem 4.10.
-
•
If $p_1r\ge c\frac{\ln m}{m}$ and $1-(1-p_2r)^{m^2}\ge c\frac{\ln k}{k}$, for a sufficiently large constant $c$, then with high probability $G_r$ is connected. (first regime, one test needed)
-
•
If $p_1r\ge c\frac{\ln m}{m}$ and $1-(1-p_2r)^{m^2}\le\frac{1}{2k}$, then with high probability each cluster is connected but most of the clusters are isolated from one another. (second regime, $k$ independent tests needed)
-
•
If $p_1r\le\frac{1}{2m}$ and $p_2r\le\frac{1}{2n}$, then with high probability $G_r$ has many isolated nodes. (third regime, $n$ independent tests needed)
-
•
If $p_1r\le\frac{1}{2m}$, $p_2r\ge c\frac{\ln n}{n-m}$, and $k\ge2$, then with high probability $G_r$ is connected. (fourth regime, one test needed)
Proof.
First, suppose $p_1r\ge c\ln m/m$. A cut is a partition of the nodes into two sets (parts), and its size is the number of nodes in the smaller set. We say a cut is disconnected if there is no surviving edge between the sets. A cut of size $s$ in a single cluster has $s(m-s)$ potential edges between the parts. Then the probability that this specific cut is disconnected is
$$(1-p_1r)^{s(m-s)}\ \overset{(i)}{\le}\ e^{-p_1rs(m-s)}\ \overset{(ii)}{\le}\ e^{-cs\ln m\,(m-s)/m}\ \overset{(iii)}{\le}\ m^{-cs/2}.$$
The first inequality, (i), is true because $1-x\le e^{-x}$ for $x\ge0$; (ii) is true by $p_1r\ge c\ln m/m$; and (iii) is true by $m-s\ge m/2$. Note that the number of cuts of size $s$ is $\binom{m}{s}$, and by the union bound, the probability that any cut of size $s$ becomes disconnected is at most $\binom{m}{s}m^{-cs/2}$. But $\binom{m}{s}\le m^s$ by a simple counting argument, so the probability that any cut of size $s$ is disconnected is at most $m^{-(c/2-1)s}$. Summing over $s$, with probability $1-O(m^{-(c/2-1)})$ a single cluster of size $m$ is connected. Again, by the union bound, with probability at most $k\cdot O(m^{-(c/2-1)})$ there is a disconnected cluster. So with high probability (using $m\gg k$), all clusters are connected.
Now, assuming all the clusters are connected, if there is a surviving edge between two clusters, then those two clusters are connected. So if we consider a graph whose nodes represent the clusters, with two nodes connected if there is at least one surviving edge between the corresponding clusters, then we need to understand the connectivity of this new graph. The probability that there is at least one edge between two clusters is $1-(1-p_2r)^{m^2}$, and again, if this value is at least $c\ln k/k$, then the cluster graph is connected with high probability. If instead it is at most $1/(2k)$, then the probability that a given cluster is isolated is more than a constant, so a constant fraction of the clusters are isolated, which proves the first two parts of the theorem.
If $p_1r\le1/(2m)$, then by the same argument, with high probability most of the nodes have no surviving edge within their cluster. If we also have $p_2r\le1/(2n)$, then most of the nodes don't have any neighbors outside of their cluster with high probability, so in total the graph has many isolated nodes, which proves the third part.
Now suppose $p_2r\ge c\ln n/(n-m)$, and we prove the last part of the theorem. We assume each cluster is empty, i.e., there is no surviving edge inside any cluster, and show that even then the graph is connected with high probability. Consider a cut in $G_r$ with $s\le n/4$ nodes. Each node has $n-m$ potential neighbors in other clusters, so it has at least $n-m-s$ potential neighbors outside of its cluster and the chosen cut. Then, almost as in the first part of the theorem, the probability that this cut is disconnected is at most
$$(1-p_2r)^{s(n-m-s)}\ \overset{(i)}{\le}\ n^{-cs/4}.$$
Here, (i) is true by $1-x\le e^{-x}$, $p_2r\ge c\ln n/(n-m)$, and $n-m-s\ge(n-m)/4$. Again, there are at most $\binom{n}{s}\le n^s$ cuts of size $s$. So the probability that any cut of size $s$ is disconnected is at most $n^{-(c/4-1)s}$. Summing over $s$, the probability that any such cut is disconnected is bounded by $O(n^{-(c/4-1)})$.
So in the case of small cuts, we've proved the last part of the theorem. For larger cuts, note that a cut of size $s$ has at most $s^2/4$ potential surviving edges inside its node set, as the surviving graph is bipartite-like across clusters and the number of edges within the set is maximized when $s/2$ nodes are chosen from each part of the graph. So the number of potential edges to the other side of the cut is at least $s(n-m)-s^2/2=\Omega(sn)$, as $s\le n/2$, and we can repeat the reasoning to prove that with high probability all cuts in this graph are connected. It is also easy to verify that when the cut is a single node or a pair of nodes, the cut is disconnected with polynomially small probability, and this completes the proof. ∎
Based on the previous theorem, we can now design a simple algorithm based on the parameters $p_1r$ and $p_2r$. In the first and the last regimes, a single node is tested and the result generalizes to all the nodes. In the second regime, we pick a candidate node from each cluster and perform independent group testing on them; the result for each candidate node generalizes to all the nodes in the corresponding cluster. Finally, in the third regime, we perform independent group testing on all the nodes of the graph.
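The regime selection can be summarized in a schematic classifier. The sketch below mirrors Theorem 4.10 using the familiar $\log m/m$ connectivity scale; the constants are ours and the paper's exact thresholds may differ, so this is an illustration rather than the paper's algorithm.

```python
# Schematic regime classifier mirroring Theorem 4.10 (constants are ours;
# the paper's thresholds may differ). m = n/k is the cluster size.
import math

def sbm_regime(n, k, p1, p2, r):
    m = n // k
    intra, inter = p1 * r, p2 * r
    cluster_connected = intra >= 2 * math.log(m) / m
    pair_edge = 1 - (1 - inter) ** (m * m)   # two clusters share an edge
    clusters_linked = pair_edge >= 2 * math.log(k) / k
    if cluster_connected and clusters_linked:
        return "regime 1: G_r connected -> 1 test"
    if cluster_connected:
        return "regime 2: clusters intact but isolated -> k independent tests"
    if inter * (n - m) >= 2 * math.log(n):
        return "regime 4: inter-cluster edges connect G_r -> 1 test"
    return "regime 3: many isolated nodes -> n independent tests"

if __name__ == "__main__":
    print(sbm_regime(n=100_000, k=100, p1=0.05, p2=1e-7, r=0.5))
```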
5 A Stronger Notion of Error
So far, we have focused on bounding the expected error of the algorithms, meaning the error is low (only) on average. But in many applications, we need to have a low error with high probability. As an example, let's say we need the error to be less than $\Delta$. Under the weaker notion of error (bounding the average), we might have an error of $2\Delta$ half of the time, and no error the other half. So half of the time we don't satisfy the required error. Similar to [LCH+14], we introduce a probabilistic error with a relaxation on error-free prediction. Precisely, in [LCH+14], the goal is to find the defective set with probability $1-\epsilon$ such that all the nodes are correctly predicted; we instead allow up to $\Delta$ mispredicted nodes.
We consider the following stronger notion of error: suppose that we want to have at most $\Delta$ mispredicted nodes with probability $1-\epsilon$, while for the other $\epsilon$ fraction we can have any number of mispredicted nodes. We refer to this notion of error as maximum error with parameters $\Delta$ and $\epsilon$. This is a relaxation compared to [LCH+14], where $\Delta=0$, i.e., with probability $1-\epsilon$ we recover perfectly. Let $\mathrm{CrltOPT}_{\max}(G,r,p,\Delta,\epsilon)$ be the expected number of tests in an optimal algorithm with maximum error with parameters $\Delta$ and $\epsilon$. Parameters $r$ and $p$ are defined as in Section 1.1.
In the following, we will provide lower bounds on the number of tests under the above notion of error. Then we will provide matching upper bounds for some of the graphs discussed so far.
5.1 Lower Bounds for the Stronger Error
We first find a lower bound on $T^*(\Delta, \epsilon)$ when the nodes are independent. Note that we are no longer able to use lower bounds for classic group testing directly, like Lemma 2.1, because an error in classic group testing (which affects the average error) might not be counted as an error under maximum error. In other words, in classic group testing, when an error happens with probability , there is no guarantee on the number of mispredicted nodes; the error might be one node, a constant number, or all the nodes, whereas only $\Delta$ mispredicted nodes are allowed under maximum error. So we need to lower bound the problem with the maximum error definition and independent nodes directly. We adapt the approach in [LCH+14] to derive the following lower bound:
Theorem 5.1.
Let $T^*$ be the minimum number of tests for $n$ independent nodes under maximum error with parameters $(\Delta, \epsilon)$. Then any probabilistic group testing algorithm that, with probability $1-\epsilon$, correctly predicts all but $\Delta$ of the $n$ independent nodes needs $T^*$ tests, where $h(\cdot)$ is the binary entropy function; i.e.,
$$T^* \;\ge\; n\,h(p) \;-\; n\,h\!\left(\frac{\Delta}{n}\right) \;-\; \epsilon n \;-\; O(1).$$
Proof.
The proof is a modified version of the proof of Theorem 2.7 in [LCH+14]. Let $X$ be the vector of states of the nodes, $Y$ be the vector of results of the group tests for a testing strategy of choice, and $\hat{X}$ be the estimated states of the nodes. Then $X \to Y \to \hat{X}$ forms a Markov chain. Thus, by the data processing inequality, $I(X;\hat{X}) \le I(X;Y)$. Also we have $I(X;Y) \le H(Y)$. Now, $H(Y) \le \log_2 |\mathcal{Y}|$, where $|\mathcal{Y}|$ denotes the number of possible values of the random vector $Y$. Since $Y$ represents the result vector of $T$ tests, the number of possible values of this vector is at most $2^T$. Thus, $H(Y) \le T$, and combining the above inequalities, $I(X;\hat{X}) \le T$.
Moreover, $I(X;\hat{X}) = H(X) - H(X \mid \hat{X})$.
Thus, $H(X \mid \hat{X}) \ge H(X) - T$. Since $X$ represents the states of $n$ independent nodes, each of which is defective with probability $p$, we have $H(X) = n\,h(p)$. Thus,
$$H(X \mid \hat{X}) \;\ge\; n\,h(p) - T. \tag{4}$$
We now obtain an upper bound on $H(X \mid \hat{X})$. Define the error random variable $E$ such that $E = 0$ if $\|X - \hat{X}\|_0 \le \Delta$ and $E = 1$ otherwise,
where $\|\cdot\|_0$ is the number of non-zero elements in a vector. We can bound the conditional entropy as follows:
$$H(X \mid \hat{X}) \;\le\; H(E) + H(X \mid \hat{X}, E) \;\le\; 1 + H(X \mid \hat{X}, E = 0) + \epsilon n. \tag{5}$$
We now upper bound $H(X \mid \hat{X}, E = 0)$:
$$H(X \mid \hat{X}, E = 0) \;\le\; \log_2\!\left(c\binom{n}{\Delta}\right) \;\le\; n\,h\!\left(\frac{\Delta}{n}\right) + \log_2 c. \tag{6}$$
To obtain the inequality, we note that given $E = 0$, there are at most $\Delta$ bits in which $X$ differs from $\hat{X}$. Thus, given a value of $\hat{X}$ and that $E = 0$, there are at most $c\binom{n}{\Delta}$ values of $X$, where $c$ is a constant. The result follows by recalling that the entropy of any random variable taking $m$ values is at most $\log_2 m$. The theorem follows by putting together (4), (5) and (6).
∎
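As a numerical sanity check on the counting step, the following snippet compares the exact log-size of a Hamming ball of radius $\Delta$ with the binary entropy estimate $n\,h(\Delta/n)$ used above; the parameter values are illustrative.

```python
import math

def log2_hamming_ball(n: int, delta: int) -> float:
    """log2 of the number of binary vectors within Hamming distance delta."""
    return math.log2(sum(math.comb(n, i) for i in range(delta + 1)))

def binary_entropy(x: float) -> float:
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# The counting step bounds H(X | X_hat, E=0) by the log-size of a Hamming
# ball of radius delta; the standard estimate log2|ball| <= n*h(delta/n)
# (valid for delta <= n/2) yields the entropy term in the bound.
for n, delta in [(100, 5), (1000, 50), (1000, 200)]:
    lhs = log2_hamming_ball(n, delta)
    rhs = n * binary_entropy(delta / n)
    print(f"n={n}, delta={delta}: log2|ball| = {lhs:.1f} <= n*h(delta/n) = {rhs:.1f}")
```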
Analogous to Lemma 2.1, we immediately get the following lemma:
Lemma 5.2.
Let $C$ be the number of connected components of the edge-faulty graph. Then, under a maximum error target, we have
(7)
Using Lemma 5.2 along with our concentration results on the number of connected components for cycles, trees and grids, we find lower bounds on the number of tests under our maximum error criterion.
Recall that the lower bound for the average error has the form of for a graph , while under the stronger notion of error we have . In the regime where , the lower bounds for the average error and the stronger error simplify to and , respectively, and when , we get an improvement.
As discussed, there is a gap between our lower and upper bounds. We next show that the lower bound can be improved by relying more heavily on the underlying topology of the graph.
5.2 An Improved Lower Bound for the Star Graph
Theorem 5.1 does not depend on the underlying graph . However, if we take the graph into account, we can infer information about the states of the nodes, at least for some graphs. Here, we give an improved lower bound for star graphs, in which one node (the center) has degree $n-1$ and all the other nodes are leaves.
Theorem 5.3.
When the underlying graph is a star, we have
(8)
where .
Proof.
Let $X$, $Y$, and $\hat{X}$ be defined as in the proof of Theorem 5.1. Let $R$ be a random binary vector whose $i$'th coordinate is 1 iff the $i$'th edge of the star is realized (each edge being realized independently). Then $(X, R) \to Y \to \hat{X}$ forms a Markov chain. We are interested in $H(X, R \mid \hat{X})$ because
$$T \;\ge\; H(X, R) - H(X, R \mid \hat{X}),$$
where $T$ is the number of tests. Write
(9)
We now bound $H(X, R \mid \hat{X})$. Let $E = 0$ if $\|X - \hat{X}\|_0 \le \Delta$ and $E = 1$ otherwise, as in Theorem 5.1. Then
(10)
By writing out and , and substituting Eq. (10) into Eq. (9), we get
(11)
As the star is a tree and every component is independently defective with probability , we can write
Now we bound . For the first term, in Theorem 5.1 we found
(12)
To bound the second term, note that the underlying graph is a star, and with probability , the number of nodes in a different state than the center is , so the contribution outside of this range to is only . The edges connected to such nodes cannot be realized, so edges are still uncertain, and knowing that their two endpoints are in the same state, each such edge is realized with probability . So
(13)
Finally, by substituting all of the above in Eq. (11), we get a lower bound on the number of tests:
∎

We compare the lower bound in (8) (the star-specific bound) with the lower bound in (7) (the generic bound) in Figure 5. One sees that the lower bound in (8) is strictly larger than (7) for a range of around . This range corresponds to cases where uncertainty about the edge set of the graph is high, and therefore knowledge about the existence of an edge (or lack thereof) is very informative, as captured in our way of upper bounding . Even though the improvement offered by the lower bound in (8) is relatively small, it suggests that generic lower bounds are likely to be loose for our correlated group testing problem, and that the structure of the underlying correlation graph needs to be considered in order to obtain tighter lower bounds.
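The concentration of the number of leaves disagreeing with the center, which drives the bound in (13), is easy to check numerically. In the sketch below, the edge-retention probability r and the defectiveness probability p are stand-ins for the paper's parameters:

```python
import random

def star_mismatches(n: int, r: float, p: float, rng: random.Random) -> int:
    """One sample of an edge-faulty star on n nodes: each of the n-1 edges
    is kept with probability r, and every resulting component is defective
    with probability p. Returns how many leaves disagree with the center."""
    center_defective = rng.random() < p
    mismatches = 0
    for _ in range(n - 1):
        if rng.random() >= r:  # edge dropped: the leaf is its own component
            leaf_defective = rng.random() < p
            mismatches += leaf_defective != center_defective
    return mismatches

rng = random.Random(1)
n, r, p, trials = 1000, 0.7, 0.3, 2000
mean = sum(star_mismatches(n, r, p, rng) for _ in range(trials)) / trials
# A leaf disagrees with the center iff its edge is dropped (prob. 1-r) and
# the two independent states differ (prob. 2p(1-p)), so the count
# concentrates around (n-1)(1-r)*2p(1-p).
print(f"empirical mean = {mean:.1f}, predicted = {(n - 1) * (1 - r) * 2 * p * (1 - p):.1f}")
```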
5.3 Upper Bounds
We now find upper bounds similar to those provided in Sections 3.2 and 4.1. Under an average error target, we partitioned the graph and computed the incurred average error. Under a maximum error target, however, we don't know what fraction of the realizations meeting the average error actually have fewer than $\Delta$ mispredicted nodes, so we can't reuse those results and need to prove concentration around the average. For cycles and grids, the subgraphs we designed were independent of each other, in the sense that the connectivity of one subgraph does not change the probability of connectivity of the other subgraphs. Hence, by using Hoeffding's bound, we prove the following theorem for a maximum error target.
Theorem 5.4.
Consider a cycle of length $n$ and let , , and . There is an algorithm that uses tests and finds the defective set with maximum error with parameters $\Delta$ and $\epsilon$, i.e.
Proof.
We use the same algorithm and the same subgraphs as in Theorem 3.2. Recall that the probability of one subgraph not being connected is and the average error is . The number of mispredicted nodes in each subgraph is in , and the subgraphs are independent of each other, meaning the connectivity of one group does not change the connectivity of another; so by Hoeffding's bound, with probability at least , the error is at most . Also, with probability more than , all candidates are predicted correctly, so with probability more than the classic group tests detect all the defective nodes with no error; conditioned on this, the error on the subgraphs is less than , and we are done. ∎
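The independence of the blocks is what makes Hoeffding's bound applicable. The simulation sketch below illustrates this for a cycle, with the block length, the edge-retention probability r, and the deviation t chosen only for illustration.

```python
import math
import random

def bad_blocks(num_blocks: int, block_len: int, r: float, rng: random.Random) -> int:
    """A cycle partitioned into num_blocks paths of block_len nodes each; a
    block's candidate generalizes correctly iff all block_len - 1 internal
    edges survive (each kept independently with probability r)."""
    return sum(
        any(rng.random() >= r for _ in range(block_len - 1))
        for _ in range(num_blocks)
    )

rng = random.Random(2)
num_blocks, block_len, r, trials = 200, 5, 0.9, 5000
mu = num_blocks * (1 - r ** (block_len - 1))  # expected number of bad blocks
t = 25                                        # deviation, measured in blocks
tail = sum(bad_blocks(num_blocks, block_len, r, rng) > mu + t
           for _ in range(trials)) / trials
hoeffding = math.exp(-2 * t * t / num_blocks)  # one-sided Hoeffding bound
print(f"empirical tail = {tail:.4f} <= Hoeffding bound = {hoeffding:.4f}")
```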
The same reasoning works for the subgrids of a grid and leads to the same upper bound with the parameters , and . But for trees, we can't use Hoeffding's bound, because the connectivity of one subgraph can affect the others; for instance, the absence of a single edge might make several groups disconnected, so the groups are no longer independent. This dependency violates the conditions we need for applying Hoeffding's inequality. To fix the issue, we use the node exposure martingale process with a proper graph function definition to prove the desired concentration.
Before providing the new theorem for trees, we need the following lemma in order to use the concentration result for node exposure martingales defined in Section 3.1.
Lemma 5.5.
Let be the subgraphs formed in Lemma 3.5. Let be the number of connected subgraphs in graph . Then there is an order of node exposure such that at each step, the value of does not change by more than one.
Proof.
Let be the subgraphs formed in Lemma 3.5, in this order. Consider the following order of node exposure: we first expose all the nodes in the last subgraph, in some order; then we expose the nodes in the subgraph before that, and so on, until the nodes in the first subgraph are exposed.
Consider a node of some subgraph when it is exposed. Note that no subgraph exposed earlier can newly become connected after exposing this node, since by the construction of the subgraphs, all the nodes of its connecting closure were already exposed. Also, no subgraph later in the exposure order can become connected, because none of its nodes are exposed yet. So the count can change by at most one, which happens only when the exposed node is the last node needed to make its own subgraph connected, and the lemma is proved. ∎
The above lemma allows us to use Azuma's inequality for the groups constructed for trees, as described in the following theorem.
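For reference, a standard form of Azuma's inequality for bounded-difference martingales (see [AS16]), which is the role Theorem 2.5 plays here: if $X_0, X_1, \dots, X_m$ is a martingale with $|X_i - X_{i-1}| \le 1$ for all $i$, then for every $\lambda > 0$,
$$\Pr\left[\,|X_m - X_0| \ge \lambda\sqrt{m}\,\right] \;\le\; 2e^{-\lambda^2/2}.$$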
Theorem 5.6.
Consider a tree with $n$ nodes and let , , and . Then there is an algorithm that uses tests and finds the defective set with maximum error with parameters $\Delta$ and $\epsilon$, i.e.
Proof.
We use the same algorithm and the same subgraphs introduced in Theorem 3.2. Recall from the proof of Theorem 3.6 that the probability of one group not being connected is and the average error is . We can't use Hoeffding's bound to show concentration around the average here; instead, we use a node exposure martingale. Note that Theorem 2.5 for the node exposure martingale only works when the graph function is node Lipschitz, meaning that when and differ in one node, . But if we can find an order of node exposure such that the graph on the first nodes satisfies , then we can still use Azuma's inequality. Let be the number of connected subgraphs in . Then by Lemma 5.5 there is an order such that , and we can use the concentration theorem (Theorem 2.5) for the number of connected groups in the random graph . We now have the same error concentration as in Theorem 5.4 (equivalent to the one given by Hoeffding's bound), hence we can repeat that argument to complete the proof. ∎
6 Conclusion
In this paper, we consider group testing strategies for identifying defective items when the defects of different nodes are correlated. The correlation is modeled through an underlying graph in which the degree of correlation between the defects of the items depends on the distance between the corresponding nodes in the graph. We relate the problem of designing testing strategies in the presence of such correlation to the problem when the defects are independent. We subsequently obtain testing strategies in terms of those already known for independent defects for a large class of underlying graphs, namely trees, cycles, grids, and stochastic block models. This provides an upper bound on the number of tests needed to ensure the desired error bounds. We also obtain fundamental limits, i.e., lower bounds on the minimum number of tests required to ensure the same error bounds, using bounds already known for independent defects. The bounds are obtained through a novel combination of edge exposure martingale theory and graph partitioning techniques.
We now describe some directions for future research stemming from the above results. There is a gap between our upper and lower bounds, which may be because there is scope for improvement in the lower bounds, possibly by utilizing the specific structures of the underlying correlation graphs. Another important area for future work is the development of testing strategies for general graphs. Towards that end, one may envision partitioning a general graph into structures such as trees, cycles, grids, and stochastic block models, for which we have identified in this paper testing strategies with guarantees on error rates and the number of tests required. Furthermore, designing group testing strategies when the defects evolve dynamically over time on the graphs in question remains open. This problem has started receiving attention [SNFD22, AU21].
References
- [ACO21] Surin Ahn, Wei-Ning Chen, and Ayfer Ozgur. Adaptive group testing on networks with community structure. arXiv preprint arXiv:2101.02405, 2021.
- [AJS19] Matthew Aldridge, Oliver Johnson, and Jonathan Scarlett. Group testing: an information theory perspective. arXiv preprint arXiv:1902.06002, 2019.
- [Ald20] Matthew Aldridge. Conservative two-stage group testing. arXiv preprint arXiv:2005.06617, 2020.
- [AS16] Noga Alon and Joel H Spencer. The probabilistic method. John Wiley & Sons, 2016.
- [AU21] Batuhan Arasli and Sennur Ulukus. Group testing with a graph infection spread model. arXiv preprint arXiv:2101.05792, 2021.
- [Ava08] Jean-Christophe Aval. Multivariate Fuss–Catalan numbers. Discrete Mathematics, 308(20):4660–4669, 2008.
- [BJ21] Paolo Bertolotti and Ali Jadbabaie. Network group testing. 2021.
- [BMR21] Vincent Brault, Bastien Mallein, and Jean-François Rupprecht. Group testing as a strategy for COVID-19 epidemiological monitoring and community surveillance. PLoS Computational Biology, 17(3):e1008726, 2021.
- [BMTW84] Toby Berger, Nader Mehravari, Don Towsley, and Jack Wolf. Random multiple-access communication and group testing. IEEE Transactions on Communications, 32(7):769–779, 1984.
- [CGM21] Mahdi Cheraghchi, Ryan Gabrys, and Olgica Milenkovic. Semiquantitative group testing in at most two rounds. In 2021 IEEE International Symposium on Information Theory (ISIT), pages 1973–1978. IEEE, 2021.
- [CHKV11] Mahdi Cheraghchi, Ali Hormati, Amin Karbasi, and Martin Vetterli. Group testing with probabilistic tests: Theory, design and application. IEEE Transactions on Information Theory, 57(10):7057–7067, 2011.
- [Chl01] Bogdan S Chlebus. Randomized communication in radio networks. Combinatorial Optimization, 9(1):401–456, 2001.
- [CKMS12] Mahdi Cheraghchi, Amin Karbasi, Soheil Mohajer, and Venkatesh Saligrama. Graph-constrained group testing. IEEE Transactions on Information Theory, 58(1):248–262, 2012.
- [COGHKL20] Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, and Philipp Loick. Optimal group testing. In Conference on Learning Theory, pages 1374–1388. PMLR, 2020.
- [CTB+20] Marco Cuturi, Olivier Teboul, Quentin Berthet, Arnaud Doucet, and Jean-Philippe Vert. Noisy adaptive group testing using Bayesian sequential experimental design. arXiv preprint arXiv:2004.12508, 2020.
- [DHH00] Dingzhu Du and Frank K Hwang. Combinatorial group testing and its applications, volume 12. World Scientific, 2000.
- [Dor43] Robert Dorfman. The detection of defective members of large populations. The Annals of Mathematical Statistics, 14(4):436–440, 1943.
- [FKKM97] Martin Farach, Sampath Kannan, Emanuel Knill, and Shan Muthukrishnan. Group testing problems with sequences in experimental molecular biology. In Proceedings. Compression and Complexity of SEQUENCES 1997 (Cat. No. 97TB100171), pages 357–367. IEEE, 1997.
- [GG20] Christian Gollier and Olivier Gossner. Group testing against COVID-19. Technical report, EconPol Policy Brief, 2020.
- [GH08] Michael T Goodrich and Daniel S Hirschberg. Improved adaptive group testing algorithms with applications to multiple access channels and dead sensor diagnosis. Journal of Combinatorial Optimization, 15(1):95–121, 2008.
- [Goe97] Andreas Goerdt. The giant component threshold for random regular graphs with edge faults. In International Symposium on Mathematical Foundations of Computer Science, pages 279–288. Springer, 1997.
- [KM95] Emanuel Knill and S Muthukrishnan. Group testing problems in experimental molecular biology. arXiv preprint math/9505211, 1995.
- [LCH+14] Tongxin Li, Chun Lam Chan, Wenhao Huang, Tarik Kaced, and Sidharth Jaggi. Group testing with prior statistics. In 2014 IEEE International Symposium on Information Theory, pages 2346–2350. IEEE, 2014.
- [MNB+21] Leon Mutesa, Pacifique Ndishimye, Yvan Butera, Jacob Souopgui, Annette Uwineza, Robert Rutayisire, Ella Larissa Ndoricimpaye, Emile Musoni, Nadine Rujeni, Thierry Nyatanyi, et al. A pooled testing strategy for identifying SARS-CoV-2 at low prevalence. Nature, 589(7841):276–280, 2021.
- [NSFD21] Pavlos Nikolopoulos, Sundara Rajan Srinivasavaradhan, Christina Fragouli, and Suhas N Diggavi. Group testing for community infections. IEEE BITS the Information Theory Magazine, 1(1):57–68, 2021.
- [NSG+21] Pavlos Nikolopoulos, Sundara Rajan Srinivasavaradhan, Tao Guo, Christina Fragouli, and Suhas Diggavi. Group testing for connected communities. In International Conference on Artificial Intelligence and Statistics, pages 2341–2349. PMLR, 2021.
- [SNFD22] Sundara Rajan Srinivasavaradhan, Pavlos Nikolopoulos, Christina Fragouli, and Suhas Diggavi. Dynamic group testing to control and monitor disease progression in a population. In 2022 IEEE International Symposium on Information Theory (ISIT), pages 2255–2260. IEEE, 2022.
- [SW18] Bruce Spang and Mary Wootters. Unconstraining graph-constrained group testing. arXiv preprint arXiv:1809.03589, 2018.
- [VFH+21] Claudio M Verdun, Tim Fuchs, Pavol Harar, Dennis Elbrächter, David S Fischer, Julius Berner, Philipp Grohs, Fabian J Theis, and Felix Krahmer. Group testing for SARS-CoV-2 allows for up to 10-fold efficiency increase across realistic scenarios and testing strategies. Frontiers in Public Health, page 1205, 2021.