Austen Z. Fan and Paraschos Koutris
Department of Computer Science, University of Wisconsin-Madison, United States
This research was supported in part by National Science Foundation grants CRII-1850348 and III-1910014, as well as a gift by Google.
25th International Conference on Database Theory (ICDT 2022), March 29–April 1, 2022, Edinburgh, UK
Certifiable Robustness for Nearest Neighbor Classifiers
Abstract
ML models are typically trained using large datasets of high quality. However, training datasets often contain inconsistent or incomplete data. To tackle this issue, one solution is to develop algorithms that can check whether a prediction of a model is certifiably robust. Given a learning algorithm that produces a classifier and given an example at test time, a classification outcome is certifiably robust if it is predicted by every model trained across all possible worlds (repairs) of the uncertain (inconsistent) dataset. This notion of robustness falls naturally under the framework of certain answers. In this paper, we study the complexity of certifying robustness for a simple but widely deployed classification algorithm, $k$-Nearest Neighbors ($k$-NN). Our main focus is on inconsistent datasets when the integrity constraints are functional dependencies (FDs). For this setting, we establish a dichotomy in the complexity of certifying robustness w.r.t. the set of FDs: the problem either admits a polynomial time algorithm, or it is coNP-hard. Additionally, we exhibit a similar dichotomy for the counting version of the problem, where the goal is to count the number of possible worlds that predict a certain label. As a byproduct of our study, we also establish the complexity of a problem related to finding an optimal subset repair that may be of independent interest.
keywords:
Inconsistent databases, $k$-NN classification, certifiable robustness
1 Introduction
Machine Learning (ML) has been widely adopted as a central tool in business analytics, medical decisions, autonomous driving, and many other domains. In supervised learning settings, ML models are typically trained using large datasets of high quality. However, real-world training datasets often contain incorrect or incomplete data. For example, attribute values may be missing from the dataset, attribute values may be wrong, or the dataset can violate integrity constraints. Several approaches to tackle this problem have been proposed in the literature, including data cleaning [17, 23] and robust learning methods [4, 26, 5].
In this work, we study an alternative but complementary approach using the framework of certain answers, which has been a focus of the database community in the last two decades [1, 2]. Under this framework, an inconsistent or incomplete training dataset is modeled as a set of possible worlds called repairs. We can think of a repair as a way to clean the dataset such that it is consistent (w.r.t. a set of integrity constraints) and complete. In the context of query answering, certain answers are the output tuples that exist in the query result of every possible world. In other words, we will obtain a certain answer in the output no matter how we choose to repair the dataset.
The notion of certain answers in the context of ML is known as certifiable (or certified) robustness [3, 26]. Given a learning algorithm that produces a classifier and a tuple at test time, we say that a predicted label is certifiably robust if it is predicted by every model trained across all possible worlds. In other words, certifiably robust predictions come with a formal guarantee that the label is robust to how the training dataset is fixed. Such a guarantee can be beneficial to decide whether we should trust the predictions of the ML model or whether we should spend resources to clean the dataset before feeding it to the learning algorithm.
As a first step towards certifying robustness for other more complex ML algorithms, we focus in this work on $k$-Nearest Neighbor classification ($k$-NN). In this problem, we start with a $d$-dimensional dataset equipped with a distance function. Given a test point $t$, the classifier first finds the $k$ points closest to $t$ w.r.t. the given distance function, and assigns to $t$ the label that is a plurality among these. Certified robustness for $k$-NNs was recently explored by Karlas et al. [13] under an uncertain model where the tuples (training points) are grouped into blocks, each block sharing a single label, and a possible world is constructed by picking exactly one tuple from each block, independently across blocks. This setting is equivalent to considering the subset repairs of an inconsistent dataset where the only integrity constraint is a primary key, with the additional restriction that training points with the same key must have the same label.
| | A | B | C | label |
|---|---|---|---|---|
| $t_1$ | 1 | 0 | a | 0 |
| $t_2$ | 1 | 2 | b | 0 |
| $t_3$ | 2 | 0 | a | 2 |
| $t_4$ | 2 | 5 | c | 1 |
| $t_5$ | 3 | 1 | a | 0 |
| $t_6$ | 4 | 2 | d | 2 |
In this paper, we generalize the study of certifying robustness for $k$-NNs to other inconsistent and uncertain models. Specifically, we consider subset repairs of a dataset where the integrity constraints can be any set of functional dependencies (FDs), and not only a primary key as in [13].
Example 1.1.
Consider the inconsistent dataset in the table above, where the integrity constraint is the FD $A \rightarrow B$. Let the distance function between two tuples be $\mathrm{dist}(t, t') = |t.A - t'.A| + |t.B - t'.B|$. Suppose that we want to label the test point $t$ with $t.A = 0$ and $t.B = 0$ using a 3-NN classifier. This induces the following ordering of the tuples w.r.t. their distance from $t$ (for convenience, we include the labels in parentheses as well): $t_1\,(0),\ t_3\,(2),\ t_2\,(0),\ t_5\,(0),\ t_6\,(2),\ t_4\,(1)$.
A repair for this inconsistent dataset has to choose one tuple from $\{t_1, t_2\}$ and one tuple from $\{t_3, t_4\}$, and always contains $t_5$ and $t_6$. There are 4 possible repairs, which form the following 3-neighborhoods around $t$: $\{t_1, t_3, t_5\}$, $\{t_1, t_5, t_6\}$, $\{t_3, t_2, t_5\}$, and $\{t_2, t_5, t_6\}$.
In all repairs, label 0 occurs two times, and hence it is always the majority label. Hence, we can certify that 0 is a robust label for the test point $t$.
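To make the example concrete, the certification above can be replayed by brute force. The following minimal Python sketch assumes our reading of the example (FD $A \rightarrow B$, $\ell_1$ distance on attributes $A$ and $B$, test point at the origin); the function names are illustrative only.

```python
from itertools import product
from collections import Counter

# The inconsistent instance of Example 1.1: tuples (A, B, C, label).
D = [(1, 0, 'a', 0), (1, 2, 'b', 0), (2, 0, 'a', 2),
     (2, 5, 'c', 1), (3, 1, 'a', 0), (4, 2, 'd', 2)]

def dist(t, point):
    # l1 distance on the numeric attributes A and B
    return abs(t[0] - point[0]) + abs(t[1] - point[1])

def repairs(D):
    # Subset repairs w.r.t. the FD A -> B: within each A-block,
    # keep exactly the tuples that agree on one value of B.
    blocks = {}
    for t in D:
        blocks.setdefault(t[0], {}).setdefault(t[1], []).append(t)
    options = [list(groups.values()) for groups in blocks.values()]
    for choice in product(*options):
        yield [t for group in choice for t in group]

def knn_label(repair, point, k=3):
    nbrs = sorted(repair, key=lambda t: dist(t, point))[:k]
    counts = Counter(t[3] for t in nbrs).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # a tie: the prediction is not a single label
    return counts[0][0]

point = (0, 0)
print({knn_label(r, point) for r in repairs(D)})  # {0}: 0 is a certain label
```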
We show that for general sets of FDs the complexity landscape for certified robustness admits a dichotomy: it is computationally intractable for some sets of FDs, and in polynomial time for the other sets of FDs. We also investigate certifying robustness for other widely used uncertain models, including Codd-tables [11], or-sets, and ?-sets [25]. We establish that in these settings the problem can always be solved in polynomial time.
Our work shows that the logical structure of the errors (or missing values) in a training set can be exploited to construct fast algorithms for certifiable robustness. Tools developed in the database theory community can facilitate the design of these algorithms. We also demonstrate that, even for the relatively simple $k$-NN classifier, the complexity landscape exhibits a complex behavior that is related to other problems in consistent query answering.
Our Contribution. We now present in more detail the contributions of this work:
• We establish a complexity dichotomy for certifying robustness for $k$-NNs under subset repairs (Section 4) into P and coNP-complete cases. The dichotomy depends on the structure of the FDs. More precisely, the syntactic condition for the dichotomy is based on the notion of lhs chains. In fact, it is the same as the one used for the complexity classification of the problem of counting the number of subset repairs under a set of FDs [21]. In the case where the only FD constraint is a primary key, we show that we can design an even faster algorithm that runs in linear time in the size of the data, improving upon the running time of the algorithm in [13] (Section 5).
• In addition to certified robustness, we study the related problem of counting the number of repairs that predict a given label for a test point (Section 8). We establish a dichotomy into the complexity classes FP and #P-complete with the same syntactic condition as the decision problem. The polynomial time algorithm here depends exponentially on the number of classification labels, in contrast to our algorithm for certifiable robustness, which has a linear dependence on the number of labels.
• We show that certifying robustness for $k$-NNs is tightly connected to the problem of finding the subset repair with the smallest total weight, when each tuple is assigned a positive weight (Section 7). As a consequence, we obtain a dichotomy result for that problem as well. Note that this problem is a symmetric variant of the problem in [20], which asks instead for the repair with the maximum total weight.
• Finally, we investigate the complexity of certifiable robustness for $k$-NNs for three widely used uncertain models: Codd-tables, ?-sets, and or-sets (Section 9). We show that for all the above models, certifying robustness for $k$-NN classification admits a polynomial time algorithm.
2 Related Work
Consistent Query Answering. There has been a long line of research on Consistent Query Answering (CQA) in the database community. Data consistency might be violated, for example, during data integration or in a data warehouse. It is then natural to ask: given a query and such inconsistent data, can we answer the query with a certain answer? Arenas, Bertossi, and Chomicki [1] first defined the notion of a repair, which refers to a consistent subinstance that is minimally different from the inconsistent data. A certain answer to the query is an answer that results from every repair. Beginning with the work of Fuxman and Miller [8], more general dichotomy results in CQA have been proven [14, 15, 16]. A dichotomy theorem for a class of queries and integrity constraints says that CQA is either in polynomial time or coNP-complete, usually depending on an efficiently checkable criterion for tractability. Certain answers have also been studied in the context of incomplete models of data [19, 22, 10].
Subset Repairs. Livshits et al. [21] studied the problem of counting the number of subset repairs w.r.t. a given set of FDs, establishing a complexity dichotomy. The syntactic condition for tractability (the existence of an lhs chain) is the same as the one we establish for certifiable robustness in $k$-NN classification. Livshits et al. [20] also studied the problem of finding a maximum-weight subset repair w.r.t. a given set of FDs, and showed that its complexity exhibits a dichotomy. In this paper we study the symmetric problem of finding a minimum-weight subset repair, and show that this problem also exhibits a complexity dichotomy, where the condition for tractability is now the existence of an lhs chain.
Certifiable Robustness in ML. Robust learning methods are used to learn over inconsistent or incomplete datasets. For example, [5] discusses a sound verification technique that proves whether a prediction is robust to data poisoning in decision-tree models. There is also a line of work on smoothing techniques for ML robustness [3, 12, 24, 18], where added random noise, usually with a Gaussian distribution, can boost the robustness of the model against adversarial attacks such as label-flipping. Our approach is different in that we prove a dichotomy for $k$-NN certifiable robustness: either we can efficiently decide that the dataset will always lead to the same prediction, or it is coNP-complete to do so, with an efficiently testable criterion. We also show how to extend our model to capture scenarios including uncertain labels, weighted tuples, and data poisoning.
3 Preliminaries
In this paper, we consider relation schemas of the form $R(A_1, \dots, A_d)$ with arity $d$. The attributes take values from a (possibly infinite) domain $\mathbb{D}$. Given a tuple $t'$ in an instance $D$ over $R$, we will use $t'.A_i$ to denote its value at attribute $A_i$. It will be convenient to interpret an instance as a training set of points in the $d$-dimensional space $\mathbb{D}^d$. We will use the terms point/tuple interchangeably in the paper.
An uncertain instance $\mathcal{D}$ over a schema $R$ is a set of instances over the schema. We will often refer to each instance in $\mathcal{D}$ as a possible world. We will see later different ways in which we can define uncertain instances implicitly.
For each tuple $t'$ that occurs in some possible world in $\mathcal{D}$, we associate a label $\lambda(t')$ which takes values from a finite set $L$. We will say that the uncertain instance $\mathcal{D}$ equipped with the labeling function $\lambda$ is a labeled uncertain instance over the schema $R$. We similarly define a labeled instance $(D, \lambda)$.
Certifiable Robustness. In this work, we will focus on the classification task. Let $\mathcal{A}$ be a learning algorithm that takes as training set a labeled instance $(D, \lambda)$ over the schema $R$, and returns a classifier, which is a total function from $\mathbb{D}^d$ to $L$.
Definition 3.1 (Certifiable Robustness).
Let $\mathcal{D}$ be a labeled uncertain instance over $R$ and $\mathcal{A}$ be a classification learning algorithm with labels in $L$. We say that a (test) point $t$ is certifiably robust in $\mathcal{D}$ if there exists a label $\ell \in L$ such that for every possible world $D \in \mathcal{D}$, $\mathcal{A}(D)(t) = \ell$. The label $\ell$ is called a certain label for $t$.
In other words, if we call $\ell$ a possible label for $t$ whenever there exists some possible world $D \in \mathcal{D}$ for which $\mathcal{A}(D)(t) = \ell$, then certifiable robustness simply means that there is only one possible label for $t$.
Nearest Neighbor Classifiers. In $k$-NN classification, we are given a labeled instance $(D, \lambda)$ over $R$, along with a distance function $\mathrm{dist}$ over $\mathbb{D}^d$. Given a point $t$, the classifier first finds the $k$-neighborhood of $t$ in $D$, which consists of the $k$ points closest to $t$ w.r.t. the distance function $\mathrm{dist}$. Then, the classifier assigns to $t$ the label that is a plurality among this $k$-neighborhood. When $k = 1$, the classifier returns the label of the point that is closest to $t$ w.r.t. the distance function $\mathrm{dist}$. When $|L| = 2$, we are performing binary classification. We will also consider the generalization of $k$-NN where each tuple has a positive weight, and the classifier assigns the label with the largest total weight.
For this work, we require the following tie-breaking mechanisms: if two labels attain the maximum count in the $k$-neighborhood, then we say that $t$ is not certifiably robust for $k$-NN classification; and if several tuples have the same distance to the test point, so that the $k$-neighborhood is not well-defined, we break ties according to a predefined ordering of the tuples in the instance. By a slight abuse of notation, throughout the proofs, when we say that the classifier predicts label $\ell$, we mean that the number of tuples labeled $\ell$ is strictly larger than that of any other label, for any choice made in picking the $k$-neighborhood.
Functional Dependencies. A functional dependency (FD) is an expression of the form $X \rightarrow Y$, where $X$ and $Y$ are sets of attributes from $R$. An instance $D$ over $R$ satisfies $X \rightarrow Y$ if for every two tuples in $D$, if they agree on $X$ they must also agree on $Y$. We say that $D$ satisfies a set of functional dependencies $\Delta$ if it satisfies every functional dependency in $\Delta$. For an attribute $A$ and a set of FDs $\Delta$, we define $\Delta - A$ to be the FD set where we have removed the attribute $A$ from every FD in $\Delta$. An FD schema is a pair $(R, \Delta)$, where $\Delta$ is a set of FDs defined over $R$.
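The satisfaction test is straightforward to implement with one hash map per FD. The following minimal Python sketch (our own helper, not from the paper) encodes each FD as a pair of attribute-index lists.

```python
def satisfies(D, fds):
    """Return True iff instance D (a list of tuples) satisfies every FD.
    Each FD is a pair (lhs, rhs) of attribute-index lists."""
    for lhs, rhs in fds:
        seen = {}
        for t in D:
            key = tuple(t[i] for i in lhs)
            val = tuple(t[i] for i in rhs)
            if seen.setdefault(key, val) != val:
                return False  # two tuples agree on lhs but differ on rhs
    return True

# Example: R(A, B) with the FD A -> B.
print(satisfies([(1, 'x'), (1, 'x'), (2, 'y')], [([0], [1])]))  # True
print(satisfies([(1, 'x'), (1, 'z')], [([0], [1])]))            # False
```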
Repairs. Given $(R, \Delta)$, assume that we have an inconsistent instance $D$ that violates the functional dependencies in $\Delta$. We say that $r \subseteq D$ is a repair of $D$ if it is a maximal subset of $D$ that satisfies $\Delta$. In other words, we can create a repair by removing a minimal set of tuples from $D$ such that all the integrity constraints are satisfied. We will use $\mathrm{repairs}_\Delta(D)$ to denote the set of all possible repairs of $D$ w.r.t. the FD schema $(R, \Delta)$. If the instance $D$ is consistent, namely it does not violate any functional dependency, then $\mathrm{repairs}_\Delta(D)$ is defined to be $\{D\}$.
3.1 Problem Definitions
Although our algorithms work for any distance function $\mathrm{dist}$ such that $\mathrm{dist}(x, y)$ can be computed in time $O(1)$ (assuming the dimension $d$ is fixed), for the hardness results we need a more precise formalization. We consider two variants of the problem. In the first variant, we will consider a specific distance function, the $\ell_p$-norm, when the domain is $\mathbb{Q}$. Recall that for any $x \in \mathbb{Q}^d$ and $p \geq 1$, the $\ell_p$-norm is $\|x\|_p = \left( \sum_{i=1}^{d} |x_i|^p \right)^{1/p}$.
For the following definitions, we fix the dimension $d$ and the label set $L$. Formally, we can now define the following decision problem, parameterized by an FD schema $(R, \Delta)$ and an integer $k$.
Definition 3.2 (CR-NN$^p_k(R,\Delta)$).
Given an inconsistent labeled instance $(D, \lambda)$ over the FD schema $(R, \Delta)$ and a test point $t$, is $t$ certifiably robust in $\mathrm{repairs}_\Delta(D)$ for $k$-NN classification w.r.t. the $\ell_p$-norm?
Note that $k$ is fixed in the above problem. We can also define the decision problem where $k$ is instead an input parameter, denoted as CR-NN$^p(R,\Delta)$.
In the second variant of the problem, instead of fixing a distance function, we will directly provide as input to the problem a strict ordering of the points in the dataset w.r.t. their distance from the test point $t$. From an algorithmic point of view this does not make much difference, since we can compute the ordering in time $O(n \log n)$ for any distance function that can be computed in time $O(1)$.
Definition 3.3 (CR-NN$_k(R,\Delta)$).
Given an inconsistent labeled instance $(D, \lambda)$ over the FD schema $(R, \Delta)$ and a strict ordering of the points in $D$ w.r.t. their distance from a test point $t$, is $t$ certifiably robust in $\mathrm{repairs}_\Delta(D)$ for $k$-NN classification?
Similarly, we also define the problem CR-NN$(R,\Delta)$ with the parameter $k$ as part of the input. Note here that there is a straightforward many-one polynomial time reduction from CR-NN$^p_k(R,\Delta)$ to CR-NN$_k(R,\Delta)$.
Finally, we define the counting variant of the problem. Given an inconsistent instance $D$ and a label $\ell$, we want to count how many repairs of $D$ will predict label $\ell$.
Definition 3.4 (#CR-NN$_k(R,\Delta)$).
Given an inconsistent labeled instance $(D, \lambda)$ over an FD schema $(R, \Delta)$, a strict ordering of the points in $D$ w.r.t. their distance from a test point $t$, and a label $\ell$, output the number of repairs in $\mathrm{repairs}_\Delta(D)$ for which the $k$-NN classifier assigns label $\ell$ to $t$.
Similarly, one can define the counting problem #CR-NN$^p_k(R,\Delta)$.
4 Main Results
In this section, we present and discuss our key results. The main dichotomy theorem relies on the notion of lhs chains for an FD schema, as defined in [21].
Definition 4.1 (lhs Chain).
A set of FDs $\Delta$ has a left-hand-side chain (lhs chain for short) if for every two FDs $X_1 \rightarrow Y_1$ and $X_2 \rightarrow Y_2$ in $\Delta$, either $X_1 \subseteq X_2$ or $X_2 \subseteq X_1$.
One can determine efficiently whether an FD schema is equivalent to one with an lhs chain or not [21].
Example 4.2.
Consider the relational schema $R(A, B, C)$. The FD set $\{A \rightarrow B, B \rightarrow C\}$ does not have an lhs chain, since neither of the two left-hand sides of the FDs is contained in the other. The FD set $\{A \rightarrow B, AB \rightarrow C\}$, on the other hand, has an lhs chain.
We are now ready to state our main theorem.
Theorem 4.3 (Main Theorem).
Let $(R, \Delta)$ be an FD schema. Then, the following hold:
• If $\Delta$ is equivalent to an FD schema with an lhs chain, then CR-NN$(R,\Delta)$ (and thus CR-NN$^p(R,\Delta)$) is in P.
• Otherwise, for any integer $k \geq 1$ and any $p \geq 1$, CR-NN$^p_k(R,\Delta)$ (and thus CR-NN$_k(R,\Delta)$) is coNP-complete.
Moreover, it can be decided in polynomial time which of the two cases holds.
We show the polynomial time algorithm in Section 5 and the hardness proof in Section 6. We should discuss three things at this point. First, the polynomial time algorithm works for any distance function, as long as we can compute the distance between any two points in time $O(1)$. Indeed, the distance function is only needed to compute the order of tuples in the dataset according to their distance from the test point. Second, we show the intractability result for the $\ell_p$-norm distance function, which is widely used in practice. It is likely that the problem remains hard for other popular distance functions as well. Third, as we will see in the next section, the tractable cases are polynomial in $k$, the size of the neighborhood. This is important, since $k$ is often set to be a function of the training set size, for example $k = \sqrt{n}$. For the intractable cases, the problem is already hard even for $k = 1$.
Uncertain Labels. The above dichotomy theorem holds even when we allow inconsistent labels. We can model inconsistent labels by modifying the labeling function $\lambda$ to take values from $2^L$, the power set of the finite label set $L$. The set of possible worlds is then defined to be the set of possible worlds of the inconsistent instance paired with a labeling function $\lambda'$ such that $\lambda'(t') \in \lambda(t')$ for all $t'$. The definition of certifiable robustness carries over to this set of possible worlds.
We can simulate uncertain labels by adding an extra attribute $A_\ell$ (the label) to the schema and the FD $A_1 \cdots A_d \rightarrow A_\ell$. It is easy to see that the new schema is equivalent to one with an lhs chain if and only if the original one is. Thus, we conclude that uncertain labels do not change the complexity with respect to certifiable robustness.
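This simulation is a purely syntactic rewriting. A minimal sketch, assuming tuples are given together with their candidate label sets (the helper name is ours):

```python
def expand_uncertain_labels(D, label_sets):
    """Simulate uncertain labels: emit one copy of each tuple per
    candidate label, with the label as an extra last attribute. The FD
    A_1 ... A_d -> A_label then forces every repair to keep the copies
    of a tuple with exactly one label."""
    return [t + (l,) for t, cands in zip(D, label_sets) for l in cands]
```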
Counting. For the counting variant of certifying robustness for $k$-NNs, we show an analogous dichotomy result.
Theorem 4.4.
Let $(R, \Delta)$ be an FD schema. Then, the following hold:
• If $\Delta$ is equivalent to an FD schema with an lhs chain, then #CR-NN$(R,\Delta)$ is in FP.
• Otherwise, even #CR-NN$_1(R,\Delta)$ is #P-complete.
Moreover, it can be decided in P which of the two cases holds.
5 Tractable Cases
In this section, we prove that if the FD schema has an lhs chain, then there is an algorithm for CR-NN$(R,\Delta)$ that runs in polynomial time in the size of the inconsistent dataset, the parameter $k$, and the number of possible labels. Then, we show that when $\Delta$ consists of a single primary key, we can construct an even faster algorithm that runs in linear time w.r.t. the number of tuples, the number of labels, and $k$.
For this section, let $(D, \lambda)$ be an inconsistent labeled instance and $t$ be the test point. Assume w.l.o.g. that $L = \{1, \dots, |L|\}$ and let $n$ be the number of tuples in $D$. We assume that the points in $D$ are already in strict order w.r.t. their distance from $t$: $\mathrm{dist}(t_1, t) < \mathrm{dist}(t_2, t) < \cdots < \mathrm{dist}(t_n, t)$.
5.1 An Algorithm for Certifiable Robustness
Note that in the following analysis we fix an FD schema $(R, \Delta)$ with an lhs chain. The algorithm first constructs a repair $r$ by greedily choosing points using the given ordering, as long as they do not conflict with each other. This step can be implemented in time $O(n)$ by, say, building a hash map per FD which maps, for each tuple, the value of the lhs attribute(s) to the value of the rhs attribute(s). Suppose w.l.o.g. that the $k$-NN classifier over $r$ assigns label 1 to $t$.
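A minimal sketch of this greedy first step, assuming the tuples are already sorted by distance and the FDs are given as attribute-index pairs (the helper name is ours):

```python
def greedy_repair(ordered_tuples, fds):
    """Keep each tuple, scanned in increasing distance from the test
    point, unless it conflicts with an already-kept tuple. One hash map
    per FD maps lhs-values of kept tuples to their rhs-values."""
    maps = [{} for _ in fds]
    repair = []
    for t in ordered_tuples:
        keys = [tuple(t[i] for i in lhs) for lhs, _ in fds]
        vals = [tuple(t[i] for i in rhs) for _, rhs in fds]
        if all(m.get(k, v) == v for m, k, v in zip(maps, keys, vals)):
            repair.append(t)
            for m, k, v in zip(maps, keys, vals):
                m[k] = v
    return repair
```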
As a second step, for every label $\ell \neq 1$, we will attempt to construct a repair of $D$ such that the number of $\ell$-labeled points is at least as large as the number of 1-labeled points in the $k$-neighborhood of $t$. Such a repair will be a witness that some other label is possible for $t$, hence $t$ is not certifiably robust.
It will be helpful now to define the following count for a subinstance $r \subseteq D$, an index $i \in \{1, \dots, n\}$, and a label $\ell$: let $\mathrm{cnt}(\ell, r, i)$ denote the number of tuples $t_j \in r$ with $j \leq i$ and $\lambda(t_j) = \ell$, i.e., the number of $\ell$-labeled tuples of $r$ among the $i$ tuples closest to $t$.
We are now ready to present the core component of our algorithm. This component will be executed for every label $\ell \neq 1$ and every threshold $i \in \{1, \dots, n\}$; thus, it will run $O(n \cdot |L|)$ times. Fix $\ell$ and $i$, and define the following quantity for a subinstance $D' \subseteq D$, an FD set $\Delta'$, and an integer $m$ where $0 \leq m \leq k$:
$$W_{D'}^{\Delta'}[m] \;=\; \max\big\{\, \mathrm{cnt}(\ell, r, i) - \mathrm{cnt}(1, r, i) \;:\; r \in \mathrm{repairs}_{\Delta'}(D'),\; |\{t_j \in r : j \leq i\}| = m \,\big\}.$$
Here, for simplicity, we adopt a slight abuse of notation: although the quantity depends on $\ell$ and $i$, these are not explicitly shown in the notation $W_{D'}^{\Delta'}[m]$. If there is no repair $r$ of $D'$ such that $|\{t_j \in r : j \leq i\}| = m$, we define $W_{D'}^{\Delta'}[m] = -\infty$. The key observation is that if $W_{D}^{\Delta}[k] \geq 0$ for some threshold $i$, then $\ell$ is a possible label for $t$ and hence robustness is falsified. The algorithm computes this quantity using a combination of dynamic programming and recursion on the structure of the FD set.
The Recursive Algorithm. Given $D' \subseteq D$ and a set of FDs $\Delta'$, we want to compute $W_{D'}^{\Delta'}[m]$ for every $m = 0, \dots, k$. First, we remove from $\Delta'$ any trivial FDs. Then we distinguish three cases:
Base Case: the set of FDs is empty. In this case, the only repair of $D'$ is $D'$ itself. Let $m^* = |\{t_j \in D' : j \leq i\}|$. For every $m \neq m^*$, $W_{D'}^{\emptyset}[m] = -\infty$. For $m = m^*$ (if $m^* \leq k$), we have $W_{D'}^{\emptyset}[m^*] = \mathrm{cnt}(\ell, D', i) - \mathrm{cnt}(1, D', i)$, so we can compute this in a straightforward way.
Consensus FD: there exists an FD $\emptyset \rightarrow A$. In this case, we recursively call the algorithm to compute $W_{\sigma_{A=a}(D')}^{\Delta' - A}$ for every value $a$ of attribute $A$ in $D'$. Then, for every $m$:
$$W_{D'}^{\Delta'}[m] = \max_{a} \; W_{\sigma_{A=a}(D')}^{\Delta' - A}[m].$$
Common Attribute: there exists a common attribute $A$ in the lhs of all FDs. In this case, we recursively call the algorithm to compute $W_{\sigma_{A=a}(D')}^{\Delta' - A}$ for every value $a$ of $A$ in $D'$. Let $a_1, \dots, a_s$ be the distinct values of $A$ in $D'$. Then, for every $m$:
$$W_{D'}^{\Delta'}[m] = \max \Big\{ \sum_{j=1}^{s} W_{\sigma_{A=a_j}(D')}^{\Delta' - A}[m_j] \;:\; \sum_{j=1}^{s} m_j = m \Big\}.$$
We next discuss how to do the above computation using dynamic programming. We can view the values $W_{\sigma_{A=a_j}(D')}^{\Delta' - A}[m']$ as an $s \times (k+1)$ matrix, with rows indexed by the values $a_j$ of attribute $A$ and columns indexed by $m'$ with $0 \leq m' \leq k$. The task is to pick one entry from each row so that the sum of the entries is maximized and the column indices of the picked entries sum to $m$. The dynamic programming computes an $s \times (k+1)$ matrix $T$ where the $(j, m)$-entry represents the maximum of $\sum_{j' \leq j} W_{\sigma_{A=a_{j'}}(D')}^{\Delta' - A}[m_{j'}]$ such that $\sum_{j' \leq j} m_{j'} = m$, and finally returns the entries $T[s][m]$ for $m = 0, \dots, k$.
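The dynamic programming is a max-plus knapsack over the rows of this matrix. A minimal sketch, using floating-point $-\infty$ as the "no repair" value (names are ours):

```python
NEG_INF = float('-inf')

def combine_blocks(W, k):
    """W[j][m'] is the best label-difference achievable by a repair of
    the j-th block (the tuples with A = a_j) that uses m' of the i
    closest tuples, or -inf if impossible.  Returns T with
    T[m] = max over m_1 + ... + m_s = m of sum_j W[j][m_j]."""
    T = [0] + [NEG_INF] * k            # zero blocks processed so far
    for row in W:
        new = [NEG_INF] * (k + 1)
        for m, acc in enumerate(T):
            if acc == NEG_INF:
                continue
            for mj in range(k + 1 - m):
                if row[mj] != NEG_INF:
                    new[m + mj] = max(new[m + mj], acc + row[mj])
        T = new
    return T
```

Each of the $s$ rows is folded in with $O(k^2)$ work, matching the $O(s \cdot k^2)$ bound used in the analysis below.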
Lemma 5.1.
The Recursive Algorithm outputs the correct result and runs in polynomial time with respect to the size of $D$, the parameter $k$, and the number of labels $|L|$.
Proof 5.2.
To prove the correctness of the algorithm, we use the observation that if the functional dependencies form an lhs chain, they can be arranged in an order $X_1 \rightarrow Y_1, \dots, X_q \rightarrow Y_q$ such that $X_j \subseteq X_{j+1}$ for all $j$. Note that the three cases are mutually exclusive, meaning that at every execution step the algorithm will go into one and only one case. Note also that the algorithm will eventually terminate, since the Consensus FD and Common Attribute cases eliminate at least one lhs or rhs attribute of an FD and the FD schema is fixed.
We now argue that the algorithm will return the correct result for $W_{D'}^{\Delta'}[m]$. The crucial observation is that $W_{D'}^{\Delta'}[m]$ represents the maximum difference between the number of tuples with label $\ell$ and the number of tuples with label 1 within all repairs $r$ of $D'$ such that the number of tuples in $r$ with distance to the test point at most that of $t_i$ is exactly $m$. Thus, the formula correctly computes this maximum difference among all repairs of $D'$ by combining sub-repairs with a fixed value of the attribute $A$ with suitable $m_j$'s. The fact that a repair can be "reassembled" from sub-repairs partitioned according to different values of an attribute hinges on the assumption that the FD schema has an lhs chain. We conclude that the algorithm correctly returns $W_{D}^{\Delta}[k]$, whose non-negativity is equivalent to non-certifiable-robustness.
Regarding the running time, observe that for every label $\ell$ and every threshold $i$, the algorithm will be called recursively finitely many times, since the FD schema is fixed. Furthermore, the dynamic programming in the Common Attribute case runs in $O(s \cdot k^2)$ time, since the matrix $T$ has size $s \times (k+1)$ and computing each entry needs $O(k)$ time. Thus, for each $\ell$ and $i$, the algorithm runs in polynomial time with respect to the size of $D$ and the parameter $k$. Since there are $n$ thresholds, we conclude that the algorithm runs in polynomial time with respect to the size of $D$, the parameter $k$, and the number of labels $|L|$.
Weighted Majority. The algorithm can also handle the case where we compute the predicted label by weighted majority, where each tuple $t_j$ is assigned a weight $w(t_j) > 0$. The only thing we need to modify is the definition of $\mathrm{cnt}(\ell, r, i)$, which now becomes $\sum \{ w(t_j) : t_j \in r,\ j \leq i,\ \lambda(t_j) = \ell \}$.
5.2 A Faster Algorithm for Single Primary Key
The algorithm given in the previous section, though it runs in polynomial time, is not very efficient. In this section, we give a faster algorithm for certifiable robustness when the FD schema is equivalent to one with a single primary key. Recall that in this case a repair picks exactly one tuple from each set of tuples that share the same key.
As in the previous section, we will first run $k$-NN on an arbitrarily chosen repair to obtain a possible label for $t$ (this part needs only linear time). Without any loss of generality, assume that the predicted label for $t$ is 1. For every target label $\ell \neq 1$, we will attempt to construct a repair such that $\ell$ occurs at least as many times as 1 in the $k$-neighborhood of $t$. If such a repair exists, then robustness is falsified.
For a tuple $t'$, we denote by $\mathrm{key}(t')$ its key. The block of a tuple $t'$ is the set of tuples with the same key, denoted $\mathrm{block}(t') = \{t'' \in D : \mathrm{key}(t'') = \mathrm{key}(t')\}$. Further, for a repair $r$ and a label $\ell$, we write $\mathrm{cnt}(\ell, r)$ for the number of $\ell$-labeled tuples in the $k$-neighborhood of $t$ in $r$.
Suppose now we want to find a repair $r$ such that $\mathrm{cnt}(\ell, r) \geq \mathrm{cnt}(1, r)$. Define $\mathrm{Prune}(D, \ell, 1)$ to be the dataset obtained from $D$ if we remove any tuple $t''$ such that there exists another tuple $t'$ in the same block with:
1. $\lambda(t'') = 1$ and $\mathrm{dist}(t', t) > \mathrm{dist}(t'', t)$; or
2. $\lambda(t'') = \lambda(t') = \ell$ and $\mathrm{dist}(t', t) < \mathrm{dist}(t'', t)$.
A small implementation sketch of this pruning step is given below.
Lemma 5.3.
Let $\ell \neq 1$ and $D^\ell = \mathrm{Prune}(D, \ell, 1)$. Then, there exists a repair $r$ of $D$ s.t. $\mathrm{cnt}(\ell, r) \geq \mathrm{cnt}(1, r)$ if and only if there exists a repair $r'$ of $D^\ell$ with $\mathrm{cnt}(\ell, r') \geq \mathrm{cnt}(1, r')$.
Proof 5.4.
For the proof of Lemma 5.3, we need the following helper Sublemma.
Sublemma 1.
Let $t', t''$ be tuples in the same block such that $t''$ satisfies one of the two pruning conditions witnessed by $t'$. Consider any repair $r$ of $D$ with $t'' \in r$ and let $r' = (r \setminus \{t''\}) \cup \{t'\}$ be another repair of $D$. Then for any label $\ell \neq 1$:
$$\mathrm{cnt}(\ell, r') - \mathrm{cnt}(1, r') \;\geq\; \mathrm{cnt}(\ell, r) - \mathrm{cnt}(1, r).$$
Proof 5.5 (Proof of Sublemma 1).
Consider first the case of the first pruning condition, where $\lambda(t'') = 1$ and $\mathrm{dist}(t', t) > \mathrm{dist}(t'', t)$; we distinguish two subcases. Suppose $t''$ is not in the $k$-neighborhood of $t$ in $r$. Since $t'$ is further from $t$ than $t''$, the $k$-neighborhood remains the same, and thus the inequality holds with equality. Suppose now that $t''$ is in the $k$-neighborhood. In this case, $t''$ is replaced by some other tuple $u$ in the $k$-neighborhood. If $\lambda(u) = 1$, the inequality holds again with equality. Otherwise, $\lambda(u) \neq 1$; since $\lambda(t'') = 1$, the inequality follows. For the second pruning condition, where $\lambda(t') = \lambda(t'') = \ell$ and $\mathrm{dist}(t', t) < \mathrm{dist}(t'', t)$, the swap either leaves the label counts of the $k$-neighborhood unchanged, or replaces some tuple of the neighborhood by the $\ell$-labeled tuple $t'$, which can only increase the left-hand side.
Since every repair of $D^\ell$ is also a repair of $D$, one direction is straightforward. For the other direction, suppose that there exists a repair $r$ of $D$ s.t. $\mathrm{cnt}(\ell, r) \geq \mathrm{cnt}(1, r)$. We will transform $r$ into a repair of $D^\ell$ with the same property by constructing a sequence of datasets $D = D_0 \supseteq D_1 \supseteq \cdots$ such that for every $i$ there exists a repair $r_i$ of $D_i$ with $\mathrm{cnt}(\ell, r_i) \geq \mathrm{cnt}(1, r_i)$. The invariant is true for $i = 0$. For some $i$, if $r_i \subseteq D^\ell$, then $r_i$ is a repair of $D^\ell$ with the desired property. Otherwise, let $t'' \in r_i \setminus D^\ell$. Then, two cases exist:
• $\lambda(t'') = 1$ and there exists a tuple $t'$ in the same block as $t''$ with $\mathrm{dist}(t', t) > \mathrm{dist}(t'', t)$. Let $D_{i+1} = D_i \setminus \{t''\}$ and consider its repair $r_{i+1} = (r_i \setminus \{t''\}) \cup \{t'\}$. Then, by applying Sublemma 1 to $t'$ and $t''$, we obtain $\mathrm{cnt}(\ell, r_{i+1}) - \mathrm{cnt}(1, r_{i+1}) \geq \mathrm{cnt}(\ell, r_i) - \mathrm{cnt}(1, r_i) \geq 0$, where the last inequality comes from the invariant.
• There exists a tuple $t'$ in the same block as $t''$ with $\lambda(t') = \lambda(t'') = \ell$ and $\mathrm{dist}(t', t) < \mathrm{dist}(t'', t)$. Let $D_{i+1} = D_i \setminus \{t''\}$ and consider its repair $r_{i+1} = (r_i \setminus \{t''\}) \cup \{t'\}$. Then, by applying Sublemma 1 to $t'$ and $t''$, we again obtain $\mathrm{cnt}(\ell, r_{i+1}) - \mathrm{cnt}(1, r_{i+1}) \geq \mathrm{cnt}(\ell, r_i) - \mathrm{cnt}(1, r_i) \geq 0$, where the last inequality comes from the invariant.
Finally, note that the sequence must terminate at some point since the datasets strictly decrease in size.
Lemma 5.3 tells us that it suffices to consider $D^\ell$ instead of $D$. $D^\ell$ has the following nice properties:
• every block has at most one tuple with label $\ell$ (the one closest to $t$).
• any tuple with label 1 in $D^\ell$ is always the last tuple in its block (i.e. the one furthest away from $t$).
The pruning procedure can be implemented in linear time $O(n)$. Algorithm 2 (FastScan) now attempts to find the desired repair. It runs in linear time with respect to the size of $D$ and the number of labels and, moreover, its time complexity does not depend on $k$.
Theorem 5.6.
There exists an algorithm for CR-NN$(R,\Delta)$ that runs in linear time in the size of the data and the number of labels when the FD schema is equivalent to one with a single primary key.
Proof 5.7.
We first show the correctness of our Algorithm 2. The algorithm falsifies robustness if there exists a pair of labels $(\ell, \ell')$ with $\ell' \neq \ell$ for which FastScan returns true. Suppose that FastScan terminates at iteration $i$. Then, we can construct a repair $r$ of the pruned instance (and hence of $D$) as follows. For each block that lies entirely within the first $i$ positions, we pick the last tuple unless its label is $\ell$, in which case we arbitrarily pick another tuple in that block. For the blocks that intersect the first $i$ positions without being contained in them, we pick a tuple among the first $i$ positions when this contributes the label $\ell'$, and a tuple after position $i$ otherwise. For the remaining blocks we make an arbitrary choice. From our construction, $r$ has exactly $k$ tuples among $t_1, \dots, t_i$, which form the $k$-neighborhood of $t$. Since we have pruned $D$, all the occurrences of tuples with label $\ell$ occur at the last positions of their blocks. Hence, $\mathrm{cnt}(\ell', r) \geq \mathrm{cnt}(\ell, r)$.
For the opposite direction, suppose that there exists a repair that falsifies robustness. Then, for some pair of labels $(\ell, \ell')$ we must have a repair $r$ with $\mathrm{cnt}(\ell', r) \geq \mathrm{cnt}(\ell, r)$. From Lemma 5.3, there exists a repair $r'$ of the pruned instance for which $\mathrm{cnt}(\ell', r') \geq \mathrm{cnt}(\ell, r')$. Let $t_i$ be the last tuple in the $k$-neighborhood of $t$ in $r'$. Then, the exit condition will be satisfied at the $i$-th iteration of the loop of FastScan, and hence the algorithm will return true.
We now discuss the time complexity. First, we spend $O(n)$ time to find a possible label for $t$. The algorithm then runs FastScan for every pair of labels $(\ell, \ell')$ where $\ell' \neq \ell$. Finally, we need to argue that each iteration of the loop in FastScan requires $O(1)$ time. This can be achieved by implementing the bookkeeping sets of blocks as hash-sets with constant-time insertions. Checking whether a tuple is the last one in a block can also be done in $O(1)$ time by performing a preprocessing linear pass that marks all the last tuples of every block.
When $k = 1$, the algorithm essentially reduces to the MinMax algorithm in [13]. For $k > 1$ it outperforms the SortScan algorithm [13], since the latter algorithm has an exponential dependence on $k$ and $|L|$. Our algorithm can also deal with the case where two tuples in the same block have different labels, which is not something the MinMax and SortScan algorithms can handle.
Example 5.8.
We now illustrate our algorithm with a simple example. Figure 1(a) represents an inconsistent instance $D$, where nodes with the same shape/color are in the same block. Their distances to a given test point are increasing from left to right. A repair that chooses the first two tuples assigns label 1 to $t$, hence 1 is a possible label. Figures 1(b) and 1(c) illustrate the pruned instances Prune(D, 2, 1) and Prune(D, 3, 1), respectively. Take Figure 1(c) for example: when FastScan reaches the iteration at which the exit condition is met, the count of label 3 among the selected tuples is at least the count of label 1. Indeed, by choosing the first, second, third, and last two tuples in Figure 1(c), we see that label 1 is not robust (against label 3).
6 Hardness Result
In this section, we establish the main intractability result.
Theorem 6.1.
Let $(R, \Delta)$ be an FD schema that is not equivalent to any FD schema with an lhs chain. Then, the problem CR-NN$^p_1(R,\Delta)$ is coNP-complete for any $p \in [1, \infty)$.
Proof 6.2.
The coNP membership of the problem follows from the observation that if $t$ is not certifiably robust, then one can check efficiently that two given repairs (the certificate) indeed lead to two different predicted labels. To prove the hardness result, we will describe a reduction from the SAT-3-restricted problem, inspired by the construction of [27] for the edge dominating set problem. In this variant of SAT, each clause has at most three literals, while every variable occurs twice positively and its negation once.
Let $\varphi$ be a SAT-3-restricted formula. Suppose that $\varphi$ has clauses $C_1, \dots, C_m$ and variables $x_1, \dots, x_n$. Our construction consists of three steps.
Step 1: In the first step, we construct a directed labeled graph $G$ with labels in $\{0, 1\}$:
• The set of vertices $V$ consists of a gadget for each clause $C_j$ and a gadget for each variable $x_i$, as shown in Figure 2.
• For each clause $C_j$, where $1 \leq j \leq m$, we add the following labeled edge: that is, we add to the set of edges a directed clause edge $e_{C_j}$ inside the clause gadget and label it 0.
• Suppose that variable $x_i$, where $1 \leq i \leq n$, appears positively in clauses $C_j, C_{j'}$ and negatively in clause $C_{j''}$. Then, we introduce the labeled edges of the variable gadget of $x_i$, all with label 1: the internal gadget edges, together with the edges that connect the occurrence vertices of $x_i$ to the clause vertices of $C_j$, $C_{j'}$, and $C_{j''}$.
Figure 2 shows the above construction. Note that $G$ is a directed bipartite graph, since no vertex has both incoming and outgoing edges. Hence, one can equivalently view each maximal matching of $G$ as a subset repair of an instance with FD schema $(R(A, B), \{A \rightarrow B, B \rightarrow A\})$ and vice versa (attributes $A$ and $B$ correspond to the two sides of the bipartite graph).
Sublemma 2.
$\varphi$ is satisfiable if and only if there exists a maximal matching for $G$ that includes only edges with label 1.
Proof 6.3 (Proof of Sublemma 2).
Assume that the variable assignment $\sigma$ makes $\varphi$ true. Fix any order of the variables $x_1 < x_2 < \cdots < x_n$. We form a set of edges $M$ as follows. For each variable $x_i$ visited in the above order, we distinguish two cases:
• $\sigma(x_i) = \mathrm{true}$: we pick the gadget edge associated with the value true. If the clause vertex of $C_j$ is not yet matched, we pick the edge connecting it to the corresponding occurrence vertex of $x_i$; otherwise we pick the fallback gadget edge covering that occurrence vertex. Similarly for $C_{j'}$.
• $\sigma(x_i) = \mathrm{false}$: we pick the two gadget edges associated with the value false. If the clause vertex of $C_{j''}$ is not yet matched, we pick the edge connecting it to the negative occurrence vertex of $x_i$; otherwise we pick the fallback gadget edge covering that vertex.
By construction, $M$ contains only edges with label 1.
Claim 1: $M$ is a matching. Since the gadget-internal vertices occur only in a variable gadget, they have at most one adjacent edge from $M$. By construction, each clause vertex will also have at most one adjacent edge from $M$.
Claim 2: $M$ is a maximal matching. First, observe that by our construction, all occurrence vertices are matched for every variable $x_i$. Second, notice that if $\sigma(x_i) = \mathrm{true}$, the true-edge of the gadget is chosen, and if $\sigma(x_i) = \mathrm{false}$, the two false-edges are chosen; thus, the remaining gadget edges cannot be added to $M$. Finally, consider the edge $e_{C_j}$ corresponding to clause $C_j$. If there exists a positive literal $x_i$ which satisfies $C_j$, then consider the earliest such $x_i$ in the linear order of variables. By construction, the edge connecting the clause vertex of $C_j$ to an occurrence vertex of $x_i$ is in the matching. Otherwise, $C_j$ is satisfied by a negative literal: consider the earliest such variable in the linear order. Then the edge connecting the clause vertex of $C_j$ to its negative occurrence vertex is in $M$. In either case, $e_{C_j}$ cannot be added to increase the size of the matching.
For the converse, assume a maximal matching $M$ that avoids 0-labeled edges. For every variable $x_i$, if $M$ contains the true-edge of its gadget, we assign $\sigma(x_i) = \mathrm{true}$, otherwise $\sigma(x_i) = \mathrm{false}$. We claim that $\sigma$ is a satisfying assignment for $\varphi$. Indeed, take any clause $C_j$. Since $e_{C_j} \notin M$, there exists some edge in $M$ that conflicts with it. If this edge connects the clause vertex of $C_j$ to a positive occurrence vertex of some variable $x_i$, then by the structure of the gadget the true-edge of $x_i$ must also be in $M$; but then $\sigma(x_i) = \mathrm{true}$, and since $x_i$ occurs positively in $C_j$, the clause is satisfied. If this edge connects the clause vertex to the negative occurrence vertex of $x_i$, then similarly $\sigma(x_i) = \mathrm{false}$, and since $x_i$ occurs negatively in $C_j$, it is again satisfied.
Step 2: A maximal matching of $G$ can be viewed equivalently as a repair of a labeled instance with FD schema $(R(A, B), \{A \rightarrow B, B \rightarrow A\})$. In the second step, we will transform this instance to a labeled instance of the target FD schema $(R, \Delta)$. We will do this using the concept of fact-wise reductions. A fact-wise reduction from $(R_1, \Delta_1)$ to $(R_2, \Delta_2)$ is a function that maps the tuples of an instance of $(R_1, \Delta_1)$ to the tuples of an instance of $(R_2, \Delta_2)$ such that it is injective, it preserves consistency and inconsistency (i.e., two tuples violate $\Delta_1$ if and only if the corresponding tuples violate $\Delta_2$), and it can be computed in polynomial time. In fact, we will use exactly the same fact-wise reduction as the one used in [21] to reduce an instance of $(R(A, B), \{A \rightarrow B, B \rightarrow A\})$ to one of $(R, \Delta)$, where $(R, \Delta)$ is not equivalent to an FD schema with an lhs chain. It will be necessary to present this reduction in detail, since its structure will be needed for the third step of our reduction.
W.l.o.g., we can assume the FD schema $(R, \Delta)$ is minimal. Since it does not have an lhs chain, there are two FDs $X_1 \rightarrow Y_1$ and $X_2 \rightarrow Y_2$ such that $X_1 \not\subseteq X_2$ and $X_2 \not\subseteq X_1$. Let $\odot$ be a fresh constant. We map an edge $(a, b)$ with label $\ell$ to a tuple $\Pi(a, b)$ with label $\ell$ such that, for every attribute $A$: $\Pi(a, b).A = \odot$ if $A \in \mathrm{cl}_\Delta(X_1) \cap \mathrm{cl}_\Delta(X_2)$; $\Pi(a, b).A = a$ if $A \in \mathrm{cl}_\Delta(X_1) \setminus \mathrm{cl}_\Delta(X_2)$; $\Pi(a, b).A = b$ if $A \in \mathrm{cl}_\Delta(X_2) \setminus \mathrm{cl}_\Delta(X_1)$; and $\Pi(a, b).A = (a, b)$ otherwise.
Here, $\mathrm{cl}_\Delta(X)$ denotes the closure of an attribute set $X$ w.r.t. the FD set $\Delta$. By [21] we know that $\Pi$ is a fact-wise reduction. Let $D_\varphi$ be the resulting instance of $(R, \Delta)$.
Sublemma 3.
$\varphi$ is satisfiable if and only if there exists a repair for $D_\varphi$ in $\mathrm{repairs}_\Delta(D_\varphi)$ that includes only tuples with label 1.
Proof 6.4 (Proof of Sublemma 3).
This follows from Sublemma 2 and the fact that $\Pi$ is a fact-wise reduction.
Step 3: In the last step of the reduction, we will present an encoding that embeds the values of the attributes in $D_\varphi$ into values in $\mathbb{Q}$, hence allowing us to compute distances with the $\ell_p$-norm. The resulting tuples will also be labeled from $\{0, 1\}$.
Let $M = 2d(|V| + |E|)$, where $d$ is the number of attributes. First, let the test point be $t = (0, \dots, 0)$. For a vertex $v$, let $\mathrm{enc}(v)$ be a distinct integer in $\{1, \dots, |V|\}$ if $v$ is a clause vertex, and a distinct integer larger than $M$ otherwise.
It is easy to see that the above embedding is injective, meaning that if $v \neq v'$ then $\mathrm{enc}(v) \neq \mathrm{enc}(v')$. As for the pair values $(a, b)$, which correspond to the edges, consider any ordering $e_1, \dots, e_{|E|}$ of the edges and simply let $\mathrm{enc}(e_i) = i$ if $e_i$ is a clause edge and $\mathrm{enc}(e_i) = M + i$ otherwise. Note that the number of edges in $G$ is $O(m + n)$. This embedding is also injective. Let $\hat{D}_\varphi$ denote the instance we obtain by encoding every value of $D_\varphi$ as above. Since the encoding is injective, it is also trivially a fact-wise reduction, hence Sublemma 3 holds for $\hat{D}_\varphi$ as well.
Sublemma 4.
$\varphi$ is satisfiable if and only if $t$ has no certain label in $\mathrm{repairs}_\Delta(\hat{D}_\varphi)$.
Proof 6.5 (Proof of Sublemma 4).
We first need the following two claims.
Claim 1: Any tuple with label 0 is closer to $t$ than any tuple with label 1. Indeed, a tuple has label 1 if and only if it contains in some attribute a value larger than $M$ (every 1-labeled edge touches a vertex inside a variable gadget). Hence, any tuple with label 1 has distance at least $M$ from $t$. On the other hand, each attribute in a 0-labeled tuple has value at most $|V| + |E|$. Hence, the distance of any tuple with label 0 from $t$ is bounded by $d(|V| + |E|) < M$.
Claim 2: 0 is a possible label for $t$. Indeed, by Claim 1, the tuple closest to $t$ corresponds to a 0-labeled clause edge. Hence, any repair that includes this tuple will assign the label 0 to $t$, and some repair does include it.
Assume that some variable assignment makes $\varphi$ true. Then, we know that there exists a repair for $\hat{D}_\varphi$ that avoids all tuples with label 0. This repair will then assign label 1 to $t$, which implies that $t$ has no certain label, since by Claim 2, 0 is a possible label for $t$.
Conversely, assume a repair that assigns label 1 to $t$ – hence making $t$ have no certain label. Since by Claim 1 all 0-labeled tuples are closer to $t$ than the 1-labeled tuples, this means that all tuples in the repair must have label 1. But then, by Sublemma 3, $\varphi$ is satisfiable.
This completes the proof.
Finally, we extend the intractability result from $k = 1$ to any integer $k > 1$.
Theorem 6.6.
Let $(R, \Delta)$ be an FD schema that is not equivalent to any FD schema with an lhs chain. Then, for any integer $k > 1$, CR-NN$^p_k(R,\Delta)$ is coNP-hard for any $p \in [1, \infty)$.
Proof 6.7.
To show this hardness result, we will show a reduction from CR-NN$^p_1(R,\Delta)$ to CR-NN$^p_k(R,\Delta)$, for $k > 1$, for any FD schema $(R, \Delta)$ and binary labels $L = \{0, 1\}$. We will also assume that we restrict the inputs such that the test point does not coincide with any existing tuple in the dataset (this assumption does not affect the hardness claim).
Let $(D, \lambda)$ be the input labeled instance with values in $\mathbb{Q}$, and let $t$ be the test point. Assume w.l.o.g. that $t = (0, \dots, 0)$; otherwise, we can shift the coordinates so that distances are maintained and $t$ becomes the origin. Find the tuple $t^*$ in $D$ that is closest to $t$ and let $\delta = \mathrm{dist}(t^*, t)$ (this can be done in polynomial time). Since $t^* \neq t$, we have $\delta > 0$. W.l.o.g. assume $\lambda(t^*) = 0$. Let $\varepsilon = \delta / 2$.
We construct an instance $D'$ as follows. First, we map each tuple in $D$ to itself. Second, we add $k - 1$ new tuples $u_1, \dots, u_{k-1}$ that use fresh, pairwise distinct values of magnitude at most $\varepsilon / d$ in every attribute, with corresponding labels: $\lceil (k-1)/2 \rceil$ of them are labeled 0 and $\lfloor (k-1)/2 \rfloor$ are labeled 1.
By construction, every new point in $D'$ has distance at most $\varepsilon < \delta$ from $t$. On the other hand, all the other points have distance at least $\delta$. Moreover, the tuples $u_1, \dots, u_{k-1}$ do not conflict with the existing tuples or with each other. From this, we obtain that the $k$-neighborhood of $t$ in every repair of $D'$ contains $u_1, \dots, u_{k-1}$.
We now claim that $t$ is certifiably robust in $D$ for 1-NN classification if and only if it is certifiably robust in $D'$ for $k$-NN classification.
Suppose that $t$ is certifiably robust in $D$ for 1-NN classification. Since 0 is a possible label for $t$, this means that for every repair of $D$, the closest point has label 0. But then the $k$-neighborhood of $t$ in every repair of $D'$ consists of $u_1, \dots, u_{k-1}$ plus one point that has label 0, so the number of 0-labeled tuples will be at least $\lceil (k-1)/2 \rceil + 1$, which constitutes a strict majority.
Suppose that $t$ is not certifiably robust in $D$ for 1-NN classification. Since 0 is a possible label for $t$, this means that there exists a repair of $D$ such that the closest tuple has label 1. But then the corresponding repair of $D'$ will have $u_1, \dots, u_{k-1}$ plus one point that has label 1 in the $k$-neighborhood of $t$, so the number of 1-labeled tuples will be at least $\lfloor (k-1)/2 \rfloor + 1$, which yields either a majority or a tie; in both cases, $t$ is not certifiably robust in $D'$.
7 Optimal Repairs Revisited
In this section, we investigate the complexity landscape of a problem related to certifying robustness for $k$-NN classification, which may be of independent interest. In [20], the authors studied the Opt-Repair problem: each tuple $t'$ is associated with a positive weight $w(t')$, and we want to find the optimal subset repair, i.e. the one that removes the set of tuples with the smallest total weight. Note that this is equivalent to finding the repair with the largest total weight.
Here, we consider the symmetric problem, Min-Repair: we want to find the subset repair whose tuples have the smallest total weight. In this case, we interpret the tuple weights as a measure of how "wrong" we think a tuple is. We can parameterize this problem by a given FD schema, as in Min-Repair$(R,\Delta)$. Min-Repair captures as a special case the following problem, denoted as Forbidden-Repair$(R,\Delta)$: given an inconsistent instance $D$ over $(R, \Delta)$ and a subinstance $F \subseteq D$, does there exist a subset repair $r$ such that $r \cap F = \emptyset$?
Lemma 7.1.
There exists a many-one polynomial time reduction from Forbidden-Repair$(R,\Delta)$ to Min-Repair$(R,\Delta)$.
Proof 7.2.
One can set the weight of any tuple in the forbidden set $F$ to be 1, and the weight of every other tuple to be 0. Then, there exists a repair that avoids the forbidden set if and only if there exists a repair with total weight equal to 0.
From the hardness proof of Theorem 6.1, we immediately obtain the following intractability result.
Theorem 7.3.
Let $(R, \Delta)$ be an FD schema that is not equivalent to any FD schema with an lhs chain. Then, Forbidden-Repair$(R,\Delta)$ is NP-hard. As a result, Min-Repair$(R,\Delta)$ is also NP-hard.
It turns out that the forbidden set repair problem is directly connected with certifying robustness for $k$-NN classification.
Lemma 7.4.
There exists a many-one polynomial time reduction from Forbidden-Repair$(R,\Delta)$ to the complement of CR-NN$_1(R,\Delta)$.
Proof 7.5.
Let $D$, $F$ be the inputs to the Forbidden-Repair problem. We will construct a labeled instance for the classification problem using only two labels, $L = \{0, 1\}$. The input instance is exactly $D$. For the labeling, if $t' \in F$ then $\lambda(t') = 0$, otherwise $\lambda(t') = 1$. Finally, we construct an ordering of the tuples in $D$ such that whenever $t' \in F$ and $t'' \notin F$, $t'$ precedes $t''$.
We first claim that any repair of $D$ that avoids $F$ assigns the label 1 to $t$. Indeed, the 1-neighborhood of $t$ for such a repair consists of a tuple with label 1 by construction. On the other hand, any repair that includes a tuple from $F$ assigns label 0 to $t$. Since there exists at least one repair that includes a tuple from $F$ (any tuple of $F$ can be extended to a repair), there exists no repair that avoids the forbidden set if and only if $t$ is certifiably robust (with label 0).
The above reduction gives a polynomial time algorithm for Forbidden-Repair$(R,\Delta)$ whenever $(R, \Delta)$ is equivalent to an FD schema with an lhs chain. We now present Algorithm 3, which works for the more general Min-Repair problem.
The algorithm works similarly to the OptSRepair algorithm in [20], with two differences. First, we only need to consider the cases where the FD schema has a common lhs attribute or a consensus FD (i.e., an FD with an empty lhs). Second, in the case of the consensus FD, where we partition the instance, we take the repair that has the minimum total weight instead of the largest one.
Theorem 7.6.
Let $(R, \Delta)$ be an FD schema that is equivalent to an FD schema with an lhs chain. Then, Min-Repair$(R,\Delta)$ is in P.
The dichotomy we obtain for Min-Repair coincides with the one we obtain for CR-NN. However, it is different from the dichotomy observed for the Opt-Repair problem in [20]; in fact, every FD schema that admits a polynomial time algorithm for Min-Repair also admits a polynomial time algorithm for Opt-Repair, but not vice versa. Specifically, the FD schema $(R(A, B), \{A \rightarrow B, B \rightarrow A\})$ is hard for Min-Repair, but tractable for Opt-Repair. The reason is that finding a maximum-weight maximal matching in a bipartite graph is polynomially solvable, but finding a minimum-weight maximal matching is NP-hard.
The next lemma might be of independent interest.
Lemma 7.7.
There exists a Cook reduction from CR-NN$_1(R,\Delta)$ to Forbidden-Repair$(R,\Delta)$.
Proof 7.8.
We are given an instance $D$ and a test point $t$. Let $t_1, \dots, t_n$ be the tuples in increasing order of their distance to $t$. Observe that we can always choose a repair that includes the closest tuple $t_1$. Hence, the label $\lambda(t_1)$ is a possible label for $t$. To show that $t$ is certifiably robust, it suffices to prove that none of the labels in $L \setminus \{\lambda(t_1)\}$ is possible.
Let $c = \lambda(t_1)$. For every tuple $t_j$ such that $\lambda(t_j) \neq c$, we construct an input to Forbidden-Repair$(R,\Delta)$ as follows. First, construct $D_j$ to be the instance where we have removed from $D$ every tuple that conflicts with $t_j$. Then, the input to Forbidden-Repair is the instance $D_j$ along with the forbidden set $F_j = \{t_1, \dots, t_{j-1}\} \cap D_j$. In other words we ask: is there a repair that includes $t_j$ but avoids all the other tuples that are closer to $t$? It is now easy to observe that $\lambda(t_j)$ is a possible label for $t$ – and thus $t$ is not certifiably robust – if and only if at least one instance (out of the at most $n$ instances) of Forbidden-Repair returns true.
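The reduction is easy to phrase as an algorithm. The following sketch assumes a hypothetical oracle forbidden_repair(instance, forbidden_set) for the Forbidden-Repair problem and a conflict predicate; both names are ours.

```python
def certifiably_robust_1nn(ordered, labels, conflict, forbidden_repair):
    """Cook reduction of Lemma 7.7 (for 1-NN). `ordered` lists tuple ids
    in increasing distance from the test point; labels[i] is the label
    of tuple i; conflict(i, j) says whether tuples i and j jointly
    violate an FD; forbidden_repair is a hypothetical oracle."""
    certain = labels[ordered[0]]      # the closest tuple lies in some repair
    for j in ordered:
        if labels[j] == certain:
            continue
        # D_j: remove every tuple that conflicts with j (keeping j itself)
        Dj = [i for i in ordered if i == j or not conflict(i, j)]
        forbidden = Dj[:Dj.index(j)]  # tuples of D_j closer to t than j
        if forbidden_repair(Dj, forbidden):
            return False              # labels[j] is possible: not robust
    return True
```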
8 Certifiable Robustness by Counting
In this section, we consider the counting version of certifiable robustness for $k$-NN classifiers. The counting problem of certifiable robustness asks the following: among all possible repairs, how many will predict label $\ell$ for a fixed test point $t$?
We show that the counting problem remains in polynomial time if the FD set is equivalent to one with an lhs chain, by generalizing the algorithm in Section 5.1. Formally, the class of counting problems that can be computed in polynomial time is called FP.
Theorem 8.1.
If the FD schema is equivalent to some FD schema with an lhs chain, then the counting problem #CR-NN$(R,\Delta)$ is in FP.
We show now how to generalize the algorithm in Section 5.1 to perform counting. Let be the test point and . It suffices to show how to count for the label . Recall the definition of and . Similarly as in Section 5.1, we will compute a high-dimensional matrix “layer by layer” and read off the answer from it.
We now make our key definition. Fix a threshold $i$, and define the following quantity for a (possibly inconsistent) subinstance $D' \subseteq D$ and integers $m, g_2, \dots, g_{|L|}$, where $0 \leq m \leq k$ and $-k \leq g_\ell \leq k$ for all $\ell$:
$$C_{D'}^{\Delta'}[m][g_2, \dots, g_{|L|}] \;=\; \big|\{\, r \in \mathrm{repairs}_{\Delta'}(D') \;:\; |\{t_j \in r : j \leq i\}| = m \text{ and } \mathrm{cnt}(1, r, i) - \mathrm{cnt}(\ell, r, i) = g_\ell \text{ for all } \ell \neq 1 \,\}\big|.$$
For simplicity of notation, let $\bar{g}$ denote the vector $(g_2, \dots, g_{|L|})$, with the understanding that $\bar{g}'$ could represent any new such vector. Sometimes we suppress the FD set and write $C_{D'}[m][\bar{g}]$ when the context is clear. The interpretation of the entry is that it records the number of repairs of $D'$ with $m$ tuples among the $i$ tuples closest to $t$ such that the differences between the number of 1-labeled tuples and the number of $\ell$-labeled tuples are exactly $g_\ell$, for all $\ell \neq 1$. If there is no such repair, the entry is 0. The algorithm computes the entries of $C$ by a combination of dynamic programming and recursion on the FD schema. Note that after computing $C_D^\Delta$, the answer to #CR-NN is exactly the sum of all entries $C_D^\Delta[k][\bar{g}]$ where $g_\ell > 0$ for all $\ell \neq 1$, summed over the thresholds. The algorithm has three disjoint cases when computing $C_{D'}^{\Delta'}$:
Base Case: the set of FDs is empty. In this case, $C_{D'}[m][\bar{g}] = 0$ unless $m = |\{t_j \in D' : j \leq i\}|$ and $g_\ell = \mathrm{cnt}(1, D', i) - \mathrm{cnt}(\ell, D', i)$ for all $\ell \neq 1$, in which case the entry is 1. This step clearly can be computed efficiently.
Consensus FD: there exists an FD $\emptyset \rightarrow A$. In this case, we recursively call the algorithm to compute $C_{\sigma_{A=a}(D')}^{\Delta' - A}$ for every value $a$ of $A$ in $D'$. Then, we calculate $C_{D'}^{\Delta'}[m][\bar{g}] = \sum_{a} C_{\sigma_{A=a}(D')}^{\Delta' - A}[m][\bar{g}]$.
Common Attribute: there exists a common attribute $A$ in the lhs of all FDs. In this case, we recursively call the algorithm to compute $C_{\sigma_{A=a}(D')}^{\Delta' - A}[m'][\bar{g}']$ for every value $a$ of $A$ in $D'$ and all $m', \bar{g}'$ such that $0 \leq m' \leq k$ and $-k \leq g'_\ell \leq k$ for all $\ell$. Let $a_1, \dots, a_s$ be the distinct values of $A$ in $D'$. We then compute
$$C_{D'}^{\Delta'}[m][\bar{g}] \;=\; \sum \; \prod_{j=1}^{s} C_{\sigma_{A=a_j}(D')}^{\Delta' - A}[m_j][\bar{g}_j],$$
where the first summation is over all possible choices of the $m_j$'s and $\bar{g}_j$'s such that $\sum_j m_j = m$ and $\sum_j \bar{g}_j = \bar{g}$ coordinate-wise. We now show how to compute this by a direct generalization of Algorithm 1. The dynamic programming computes a matrix $T$ whose $(j, m, \bar{g})$-entry represents the sum, over all choices of $m_1, \dots, m_j$ and $\bar{g}_1, \dots, \bar{g}_j$ with $\sum_{j' \leq j} m_{j'} = m$ and $\sum_{j' \leq j} \bar{g}_{j'} = \bar{g}$, of the products $\prod_{j' \leq j} C_{\sigma_{A=a_{j'}}(D')}^{\Delta' - A}[m_{j'}][\bar{g}_{j'}]$. Let $K = (k+1)(2k+1)^{|L|-1}$, so that $T$ has $s \cdot K$ entries. We are now ready to present our dynamic programming algorithm:
Algorithm 4 (Dynamic Programming, Counting): the first layer $T[1][\cdot][\cdot]$ is initialized with the counts of the first block; then, for $j = 2, \dots, s$ and for every pair $(m, \bar{g})$, the entry $T[j][m][\bar{g}]$ is obtained by summing the products $T[j-1][m - m'][\bar{g} - \bar{g}'] \cdot C_{\sigma_{A=a_j}(D')}^{\Delta' - A}[m'][\bar{g}']$ over all valid $(m', \bar{g}')$.
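A minimal sketch of this counting convolution for the special case of two labels (so $\bar{g}$ collapses to a single integer); the structure is identical in higher dimensions, with maps keyed by the full vector:

```python
from collections import defaultdict

def combine_block_counts(blocks, k):
    """blocks[j] maps (m, g) -> number of repairs of the j-th block that
    place m tuples among the i closest and achieve difference g between
    the counts of label 1 and label 2.  Returns the analogous map for
    the union of all blocks."""
    T = {(0, 0): 1}                    # exactly one "empty" combination
    for counts in blocks:
        new = defaultdict(int)
        for (m1, g1), c1 in T.items():
            for (m2, g2), c2 in counts.items():
                if m1 + m2 <= k:
                    new[(m1 + m2, g1 + g2)] += c1 * c2
        T = dict(new)
    return T
```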
We now show that the counting algorithm returns the correct result and runs in polynomial time with respect to the size of $D$ and the parameter $k$. However, the running time depends exponentially on the number of labels $|L|$. It remains an open question whether this exponential dependence is essential.
Proof 8.2 (Proof of Theorem 8.1).
The correctness of the counting algorithm follows similarly to the proof of Lemma 5.1. We now argue that the counting algorithm runs in polynomial time. For every threshold $i$, the algorithm will be called recursively finitely many times, since the FD schema is fixed. Furthermore, the dynamic programming in the Common Attribute case runs in time $O(s \cdot K^2)$, where $K = (k+1)(2k+1)^{|L|-1}$, since the matrix $T$ has $s \cdot K$ entries and computing each entry needs $O(K)$ time. Thus, for each threshold, the algorithm runs in polynomial time with respect to the size of $D'$ and the parameter $k$ (for fixed $|L|$). Since there are $n$ boundary thresholds, we conclude that the algorithm runs in polynomial time with respect to the size of $D$ and the parameter $k$.
The hardness part for counting is an immediate corollary of a previous result in [21].
Theorem 8.3.
If the FD schema is not equivalent to some FD schema with an lhs chain, then #CR-NN$_1(R,\Delta)$ is #P-complete.
Proof 8.4.
By Theorem 3.2 in [21], it is #P-hard to count the number of subset repairs for an FD schema that is not equivalent to one with an lhs chain. Now, given any instance $D$, we can pick any ordering of the points and assign the same label to every tuple. Then, the number of repairs that predict this label is the same as the total number of repairs.
The approximate counting version of certifying robustness for $k$-NN classifiers is also hard.
Theorem 8.5.
If the FD schema is not equivalent to some FD schema with an lhs chain, then #CR-NN$_1(R,\Delta) \equiv_{AP}$ #SAT.
Here, $\equiv_{AP}$ refers to equivalence up to approximation-preserving reductions [6].
Proof 8.6.
By the same fact-wise reduction as in Step 2 of Theorem 6.1 and the labeling of Theorem 8.3, we have #CR-NN$_1 \equiv_{AP}$ #SRep $\equiv_{AP}$ #MaximalBIS, where #MaximalBIS is the problem of counting the number of maximal independent sets in a bipartite graph and #SRep is the problem of counting the number of subset repairs of an inconsistent instance. By Theorem 1 in [9], #MaximalBIS $\equiv_{AP}$ #SAT, and thus we obtain #CR-NN$_1 \equiv_{AP}$ #SAT.
9 Other Uncertainty Models
In this section, we study the complexity of certifying robustness for $k$-NNs under three simple uncertainty models: ?-sets, or-sets, and Codd tables. We show that for these models we can certify robustness in polynomial time. Throughout the section, we fix the relational schema to be $R(A_1, \dots, A_d)$.
?-Sets with Size Constraints. For a given instance $D$ over the schema, we mark an uncertain subset $U$ of the tuples in $D$. Then, for a positive integer $s$, we define the set of possible worlds as
$$\mathcal{D} = \{\, D \setminus X \;:\; X \subseteq U,\ |X| \leq s \,\}.$$
In other words, we can construct a possible world by removing any – but at most $s$ – tuples from $U$. When $s = |U|$, this definition captures the notion of ?-tables as defined in [25]. When $s < |U|$, it captures data poisoning scenarios, where the goal is to protect against attacks that can poison up to $s$ tuples of the training set.
For this setting, we can show that certifiable robustness for $k$-NNs can be computed in almost linear time in the size of the input.
Lemma 9.1.
We can certify robustness in $\mathcal{D}$ for $k$-NNs in time $O(n \log n)$, where $n$ is the size of the dataset.
Proof 9.2.
For the sake of simplicity, assume $L = \{1, \dots, |L|\}$.
Let $t$ be the test point. As a first step, we order the tuples in increasing order of $\mathrm{dist}(\cdot, t)$ – this can be done in time $O(n \log n)$. We assume w.l.o.g. that all distances are distinct. Let the resulting order be $t_1, t_2, \dots, t_n$.
Since it always holds that $D \in \mathcal{D}$, we can first compute the predicted label of $t$ in $D$ in time $O(n)$. W.l.o.g. assume that this label is 1. For every label $\ell \neq 1$, the algorithm will now try to construct a possible world where $\ell$ occurs at least as many times as 1 in the $k$-neighborhood of $t$. To make the exposition of the algorithm easier, we define for every tuple a priority value as follows: deleting a 1-labeled tuple is most beneficial, deleting a tuple whose label is neither 1 nor $\ell$ is neutral, and an $\ell$-labeled tuple should never be deleted.
We now present our Algorithm 5. Intuitively, it iterates over the tuples in order of proximity to $t$. For each tuple $t_i$, it attempts to construct the most promising $k$-neighborhood that includes tuples from $\{t_1, \dots, t_i\}$. Each loop iteration of the algorithm can be executed in time $O(1)$. Indeed, we can implement this by keeping three sets of tuples depending on their labels (1, $\ell$, or other), where we can do insertion and deletion in constant time.
Note that the running time of Algorithm 5 is independent of $k$ and $s$, but it does depend linearly on the number of labels.
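Our reading of the overall procedure can be sketched as follows: for a target label, it checks whether deleting at most $s$ uncertain tuples can make the target at least tie with the initially predicted label. The priorities and the helper name are our own rendering, not the paper's pseudocode.

```python
def exists_better_world(labels, uncertain, k, s, target, base=1):
    """labels: tuple labels in increasing distance from the test point;
    uncertain: parallel booleans marking deletable tuples. Returns True
    iff some possible world (deleting at most s uncertain tuples) has a
    k-neighborhood where `target` occurs at least as often as `base`."""
    n = len(labels)
    for j in range(s + 1):                # j deletions inside the prefix
        if k + j > n:
            break
        prefix = range(k + j)
        removable = [p for p in prefix if uncertain[p]]
        if len(removable) < j:
            continue
        priority = {base: 2, target: 0}   # neutral labels get priority 1
        removable.sort(key=lambda p: priority.get(labels[p], 1), reverse=True)
        dropped = set(removable[:j])
        diff = sum((labels[p] == target) - (labels[p] == base)
                   for p in prefix if p not in dropped)
        if diff >= 0:
            return True
    return False
```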
Or-Sets. In this uncertain model, each attribute value of a tuple is an or-set consisting of finitely many values (e.g., $\langle 1, 2, 3 \rangle$). Each possible world in $\mathcal{D}$ is formed by choosing exactly one value from each or-set, independently of the choices for all other or-sets. We can express $\mathcal{D}$ as the subset repairs of the following schema: add to $R$ an extra attribute id. Then, assign a unique value for this attribute to each tuple $t'$, and create a tuple for each "realization" of this tuple with the same id. This will increase the input size of the problem, but by at most a polynomial factor (since $d$ is fixed). Moreover, the FD schema with the single FD $\mathrm{id} \rightarrow A_1 \cdots A_d$ is clearly equivalent to one with an lhs chain; in fact, this is the single primary key case. As a consequence, we have the following proposition.
Proposition 9.3.
We can certify robustness in $\mathcal{D}$ for $k$-NNs in P.
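The rewriting into the primary-key setting is mechanical. A minimal sketch (helper name ours), where each input tuple is a list of or-sets:

```python
from itertools import product

def expand_or_sets(D):
    """Each tuple is a list of or-sets, one per attribute. Emit one
    realization per combination, all sharing a fresh id attribute, so
    that the single FD  id -> A_1 ... A_d  makes the realizations of a
    tuple one block."""
    return [(tid,) + choice
            for tid, attrs in enumerate(D)
            for choice in product(*attrs)]

# Example: two tuples, the first with an or-set in its second attribute.
print(expand_or_sets([[(1,), (0, 2)], [(3,), (4,)]]))
# [(0, 1, 0), (0, 1, 2), (1, 3, 4)]
```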
Codd tables. In a Codd table, a missing value is represented as Null, paired with a domain Dom from which that value can be chosen. A repair is any possible completion of the table. The first observation is that, by adding a new identifier attribute ID, we can think of a Codd table as an inconsistent instance with a single primary key, where each block has a consistent label (i.e., every tuple in the block has the same label). However, if Dom is given as an infinite interval, then Algorithm 2 does not apply directly, since it may not be possible to write all tuple completions in increasing order of distance (there are uncountably many). Indeed, in [13] Karlas et al. consider only the situation where Dom is given as a finite discrete set.
Formally, the distance between a tuple in the Codd table and a test point $t$ is given by a function $f(x_1, \dots, x_q)$, where the $x_i$'s are the missing entries in the tuple. In order to be able to certify robustness, we need to assume that the minimum and the maximum of this function can be computed efficiently. This is a mild assumption, since, for example, for the $\ell_p$-norm the minimum and maximum are achieved when each summand is minimized or maximized respectively.
Now, we observe that since every block has the same label, we can modify Algorithm 2 so that every resulting repair uses only the first or the last tuple in a block, without changing its correctness properties. With this observation in hand, we can now simply define the extremal set $D^*$ consisting of the minimum-distance and maximum-distance completions of each block, and then run Algorithm 2 on $D^*$. Since $D^*$ is at most twice the size of the Codd table, this implies a linear time algorithm (w.r.t. the number of tuples, labels, and $k$) for certifiable robustness on Codd tables. We have shown the following proposition.
Proposition 9.4.
We can certify robustness in a Codd table for $k$-NNs in linear time, even if the domain Dom is given as an infinite interval.
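For numeric attributes under an $\ell_p$-norm, the two extremal completions of a tuple can be computed coordinate-wise. A minimal sketch, assuming missing entries are given as closed intervals (names ours):

```python
def extremal_completions(tup, intervals, q):
    """Return the completions of a Codd-table tuple that minimize and
    maximize the distance to the test point q. `intervals` maps the
    index of each Null entry to its interval (lo, hi). Minimizing and
    maximizing each summand independently is correct for any l_p norm."""
    nearest, furthest = list(tup), list(tup)
    for i, (lo, hi) in intervals.items():
        nearest[i] = min(max(q[i], lo), hi)       # clamp q[i] into [lo, hi]
        furthest[i] = lo if abs(q[i] - lo) >= abs(q[i] - hi) else hi
    return tuple(nearest), tuple(furthest)

# Example: tuple (None, 5) with first attribute in [0, 10], test point (7, 5).
print(extremal_completions((None, 5), {0: (0, 10)}, (7, 5)))
# ((7, 5), (0, 5))
```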
10 Conclusion
In this paper, we study the complexity of certifiable robustness for $k$-NN classification under FD constraints. We show a dichotomy in the complexity of the problem depending on the structure of the FDs, and we prove that the same dichotomy condition also holds for the counting version of the problem. We envision this work to be a first step towards the long-term goal of investigating the complexity of certifying robustness for other widely used classification algorithms, such as decision trees, Naive Bayes classifiers, and linear classifiers.
References
- [1] Marcelo Arenas, Leopoldo E. Bertossi, and Jan Chomicki. Consistent query answers in inconsistent databases. In PODS, pages 68–79. ACM Press, 1999. doi:10.1145/303976.303983.
- [2] Jan Chomicki. Consistent query answering: Five easy pieces. In ICDT, volume 4353 of Lecture Notes in Computer Science, pages 1–17. Springer, 2007. doi:10.1007/11965893_1.
- [3] Jeremy M. Cohen, Elan Rosenfeld, and J. Zico Kolter. Certified adversarial robustness via randomized smoothing. In ICML, volume 97 of Proceedings of Machine Learning Research, pages 1310–1320. PMLR, 2019. URL: http://proceedings.mlr.press/v97/cohen19c.html.
- [4] Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robust estimators in high-dimensions without the computational intractability. SIAM J. Comput., 48(2):742–864, 2019. doi:10.1137/17M1126680.
- [5] Samuel Drews, Aws Albarghouthi, and Loris D’Antoni. Proving data-poisoning robustness in decision trees. In PLDI, pages 1083–1097. ACM, 2020. doi:10.1145/3385412.3385975.
- [6] Martin E. Dyer, Leslie Ann Goldberg, Catherine S. Greenhill, and Mark Jerrum. The relative complexity of approximate counting problems. Algorithmica, 38(3):471–500, 2004. doi:10.1007/s00453-003-1073-y.
- [7] Austen Z. Fan and Paraschos Koutris. Certifiable robustness for nearest neighbor classifiers. CoRR, abs/2201.04770, 2022. URL: http://arxiv.org/abs/2201.04770.
- [8] Ariel Fuxman and Renée J. Miller. First-order query rewriting for inconsistent databases. J. Comput. Syst. Sci., 73(4):610–635, 2007. doi:10.1016/j.jcss.2006.10.013.
- [9] Leslie Ann Goldberg, Rob Gysel, and John Lapinskas. Approximately counting locally-optimal structures. J. Comput. Syst. Sci., 82(6):1144–1160, 2016. doi:10.1016/j.jcss.2016.04.001.
- [10] Sergio Greco, Cristian Molinaro, and Francesca Spezzano. Incomplete Data and Data Dependencies in Relational Databases. Synthesis Lectures on Data Management. Morgan & Claypool Publishers, 2012. doi:10.2200/S00435ED1V01Y201207DTM029.
- [11] Tomasz Imielinski and Witold Lipski Jr. Incomplete information in relational databases. J. ACM, 31(4):761–791, 1984. doi:10.1145/1634.1886.
- [12] Jinyuan Jia, Xiaoyu Cao, Binghui Wang, and Neil Zhenqiang Gong. Certified robustness for top-k predictions against adversarial perturbations via randomized smoothing. In ICLR. OpenReview.net, 2020. URL: https://openreview.net/forum?id=BkeWw6VFwr.
- [13] Bojan Karlas, Peng Li, Renzhi Wu, Nezihe Merve Gürel, Xu Chu, Wentao Wu, and Ce Zhang. Nearest neighbor classifiers over incomplete information: From certain answers to certain predictions. Proc. VLDB Endow., 14(3):255–267, 2020. doi:10.5555/3430915.3442426.
- [14] Phokion G. Kolaitis and Enela Pema. A dichotomy in the complexity of consistent query answering for queries with two atoms. Inf. Process. Lett., 112(3):77–85, 2012. doi:10.1016/j.ipl.2011.10.018.
- [15] Paraschos Koutris and Dan Suciu. A dichotomy on the complexity of consistent query answering for atoms with simple keys. In ICDT, pages 165–176. OpenProceedings.org, 2014. doi:10.5441/002/icdt.2014.19.
- [16] Paraschos Koutris and Jef Wijsen. The data complexity of consistent query answering for self-join-free conjunctive queries under primary key constraints. In PODS, pages 17–29. ACM, 2015. doi:10.1145/2745754.2745769.
- [17] Sanjay Krishnan, Jiannan Wang, Eugene Wu, Michael J. Franklin, and Ken Goldberg. Activeclean: Interactive data cleaning for statistical modeling. Proc. VLDB Endow., 9(12):948–959, 2016. doi:10.14778/2994509.2994514.
- [18] Aounon Kumar, Alexander Levine, Tom Goldstein, and Soheil Feizi. Curse of dimensionality on randomized smoothing for certifiable robustness. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 5458–5467. PMLR, 2020. URL: http://proceedings.mlr.press/v119/kumar20b.html.
- [19] Leonid Libkin. Incomplete information and certain answers in general data models. In PODS, pages 59–70. ACM, 2011. doi:10.1145/1989284.1989294.
- [20] Ester Livshits, Benny Kimelfeld, and Sudeepa Roy. Computing optimal repairs for functional dependencies. ACM Trans. Database Syst., 45(1):4:1–4:46, 2020. doi:10.1145/3196959.3196980.
- [21] Ester Livshits, Benny Kimelfeld, and Jef Wijsen. Counting subset repairs with functional dependencies. J. Comput. Syst. Sci., 117:154–164, 2021. doi:10.1016/j.jcss.2020.10.001.
- [22] Simon Razniewski and Werner Nutt. Completeness of queries over incomplete databases. Proc. VLDB Endow., 4(11):749–760, 2011. URL: http://www.vldb.org/pvldb/vol4/p749-razniewski.pdf.
- [23] Theodoros Rekatsinas, Xu Chu, Ihab F. Ilyas, and Christopher Ré. Holoclean: Holistic data repairs with probabilistic inference. Proc. VLDB Endow., 10(11):1190–1201, 2017. doi:10.14778/3137628.3137631.
- [24] Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, and J. Zico Kolter. Certified robustness to label-flipping attacks via randomized smoothing. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 8230–8241. PMLR, 2020. URL: http://proceedings.mlr.press/v119/rosenfeld20b.html.
- [25] Anish Das Sarma, Omar Benjelloun, Alon Y. Halevy, and Jennifer Widom. Working models for uncertain data. In ICDE, page 7. IEEE Computer Society, 2006. doi:10.1109/ICDE.2006.174.
- [26] Jacob Steinhardt, Pang Wei Koh, and Percy Liang. Certified defenses for data poisoning attacks. In NIPS, pages 3517–3529, 2017. URL: https://proceedings.neurips.cc/paper/2017/hash/9d7311ba459f9e45ed746755a32dcd11-Abstract.html.
- [27] M. Yannakakis and F. Gavril. Edge dominating sets in graphs. SIAM Journal on Applied Mathematics, 38(3):364–372, 1980. doi:10.1137/0138030.