On the Tractability of SHAP Explanations
Abstract
Shap explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite a lot of recent interest from both academia and industry, it is not known whether Shap explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the Shap explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the Shap explanation is the same as the complexity of computing the expected value of the model. This fully-factorized setting is often used to simplify the Shap computation, yet our results show that the computation can be intractable for commonly used models such as logistic regression. Going beyond fully-factorized distributions, we show that computing Shap explanations is already intractable for a very simple setting: computing Shap explanations of trivial classifiers over naive Bayes distributions. Finally, we show that even computing Shap over the empirical distribution is #P-hard.
1 Introduction
Machine learning is increasingly applied in high-stakes decision making. As a consequence, there is growing demand for the ability to explain the predictions of machine learning models. One popular explanation technique is to compute feature-attribution scores, in particular using the Shapley values from cooperative game theory (Roth 1988) as a principled aggregation measure to determine the influence of individual features on the model's prediction. Shapley-value-based explanations have several desirable properties (Datta, Sen, and Zick 2016), which is why they have attracted a lot of interest in academia as well as industry in recent years (see e.g., Gade et al. (2019)).
Štrumbelj and Kononenko (2014) show that Shapley values can be used to explain arbitrary machine learning models. Datta, Sen, and Zick (2016) use Shapley-value-based explanations as part of a broader framework for algorithmic transparency. Lundberg and Lee (2017) use Shapley values in a framework that unifies various explanation techniques, and they coined the term Shap explanation. They show that the Shap explanation is effective in explaining predictions in the medical domain; see Lundberg et al. (2020). More recently there has been a lot of work on the tradeoffs of variants of the original Shap explanations, e.g., Sundararajan and Najmi (2020), Kumar et al. (2020), Janzing, Minorics, and Bloebaum (2020), Merrick and Taly (2020), and Aas, Jullum, and Løland (2019).
Despite all of this interest, there is considerable confusion about the tractability of computing Shap explanations. The Shap explanations determine the influence of a given feature by systematically computing the expected value of the model given subsets of the features. As a consequence, the complexity of computing Shap explanations depends on the predictive model as well as on assumptions about the underlying data distribution. Lundberg et al. (2020) describe a polynomial-time algorithm for computing the Shap explanation over decision trees, but online discussions have pointed out that this algorithm is not correct as stated. We present a concrete example of this shortcoming in the supplementary material, in Appendix A. In contrast, for fully-factorized distributions, Bertossi et al. (2020) prove that there are models for which computing the Shap explanation is #P-hard. A contemporaneous paper by Arenas et al. (2020) shows that computing the Shap explanation for tractable logical circuits over uniform and fully-factorized binary data distributions is tractable. In general, the complexity of the Shap explanation is open.
In this paper we consider the original formulation of the Shap explanation by Lundberg and Lee (2017) and analyze its computational complexity under the following data distributions and model classes:
1. First, we consider fully-factorized distributions, which are the simplest possible data distributions. Fully-factorized distributions capture the assumption that the model's features are independent, a commonly used assumption to simplify the computation of the Shap explanations; see for example Lundberg and Lee (2017).
For fully-factorized distributions and any prediction model, we show that the complexity of computing the Shap explanation is the same as the complexity of computing the expected value of the model.
It follows that there are classes of models for which the computation is tractable (e.g., linear regression, decision trees, tractable circuits) while for other models, including commonly used ones such as logistic regression and neural nets with sigmoid activation functions, it is #P-hard.
2. Going beyond fully-factorized distributions, we show that computing the Shap explanation becomes intractable already for the simplest probabilistic model that does not assume feature independence: naive Bayes. As a consequence, computing Shap explanations over such data distributions is also intractable for many classes of models, including linear and logistic regression.
3. Finally, we consider the empirical distribution, and prove that computing Shap explanations is #P-hard for this class of distributions. This result implies that the algorithm by Lundberg et al. (2020) cannot be fixed to compute the exact Shap explanations over decision trees in polynomial time.
2 Background and Problem Statement
Suppose our data is described by $n$ features $\mathbf{X} = \{X_1, \dots, X_n\}$, indexed by $i \in [n] = \{1, \dots, n\}$. Each feature variable $X_i$ takes a value from a finite domain $\mathrm{dom}(X_i)$. A data instance $x$ consists of a value $x_i \in \mathrm{dom}(X_i)$ for every feature $X_i$. The instance space is denoted $\mathcal{X} = \mathrm{dom}(X_1) \times \cdots \times \mathrm{dom}(X_n)$. We are also given a learned function $F : \mathcal{X} \to \mathbb{R}$ that computes a prediction $F(x)$ on each instance $x$. Throughout this paper we assume that the prediction $F(x)$ can be computed in polynomial time in $n$.
For a particular instance $e \in \mathcal{X}$, the goal of local explanations is to clarify why the function $F$ gave its prediction $F(e)$ on instance $e$, usually by attributing credit to the features. We will focus on local explanations that are inspired by game-theoretic Shapley values (Datta, Sen, and Zick 2016; Lundberg and Lee 2017). Specifically, we will work with the Shap explanations as defined by Lundberg and Lee (2017).
2.1 Shap Explanations
To produce Shap explanations, one needs an additional ingredient: a probability distribution $\Pr$ over the features $\mathbf{X}$, which we call the data distribution. We will use this distribution to reason about partial instances. Concretely, for a set of indices $S \subseteq [n]$, we let $x_S$ denote the restriction of a complete instance $x$ to those features with indices in $S$. Abusing notation, we will also use $x_S$ to denote the probabilistic event $\bigwedge_{i \in S} X_i = x_i$.
Under this data distribution, it now becomes possible to ask for the expected value of the predictive function $F$. Clearly, for a complete data instance $x$ we have that $\mathbb{E}[F \mid x] = F(x)$, as there is no uncertainty about the features. However, for a partial instance $x_S$, which does not assign values to the features outside of $S$, we appeal to the data distribution to compute the expectation of function $F$ as $\mathbb{E}[F \mid x_S] = \sum_{x' \in \mathcal{X}} F(x') \Pr(x' \mid x_S)$.
The Shap explanation framework draws from Shapley values in cooperative game theory. Given a particular instance $e$, it considers features to be players in a coalition game: the game of making a prediction for $e$. Shap explanations are defined in terms of a set function $v_{F,e,\Pr} : 2^{[n]} \to \mathbb{R}$. Its purpose is to evaluate the "value" of each coalition of players/features in making the prediction under data distribution $\Pr$. Concretely, following Lundberg and Lee (2017), this value function is the conditional expectation of function $F$:

$$v_{F,e,\Pr}(S) \;:=\; \mathbb{E}[F \mid e_S] \qquad (1)$$

We will elide $F$, $e$, and $\Pr$ when they are clear from context.
Our goal, however, is to assign credit to individual features. In the context of a coalition $S \subseteq [n] \setminus \{i\}$, the contribution of an individual feature $X_i$ is given by

$$c(i, S) \;:=\; v(S \cup \{i\}) - v(S) \qquad (2)$$

where each term is implicitly w.r.t. the same $F$, $e$, and $\Pr$.
Finally, the Shap explanation computes a score for each feature $X_i$ averaged over all possible contexts, and thus measures the influence feature $X_i$ has on the outcome. Let $\sigma$ be a permutation on the set of features $\mathbf{X}$, i.e., $\sigma$ fixes a total order on all features. Let $\sigma^{<i}$ be the set of features that come before $X_i$ in the order $\sigma$. The Shap explanations are then defined as computing the following scores.
Definition 1 (Shap Score).
Fix an entity $e$, a predictive function $F$, and a data distribution $\Pr$. The Shap explanation of a feature $X_i$ is the contribution of $X_i$ given the features $\sigma^{<i}$, averaged over all permutations $\sigma$:

$$\mathrm{Shap}(X_i) \;:=\; \frac{1}{n!} \sum_{\sigma} c(i, \sigma^{<i}) \qquad (3)$$
We mention two simple properties of the Shap explanations here; for more discussion see Datta, Sen, and Zick (2016) and Lundberg et al. (2020). First, for a linear combination of functions $F = \alpha_1 F_1 + \alpha_2 F_2$, we have that

$$\mathrm{Shap}_F(X_i) = \alpha_1\, \mathrm{Shap}_{F_1}(X_i) + \alpha_2\, \mathrm{Shap}_{F_2}(X_i) \qquad (4)$$

Second, the sum of the Shap explanations of all features is related to the expected value of function $F$:

$$\sum_{i \in [n]} \mathrm{Shap}(X_i) = F(e) - \mathbb{E}[F] \qquad (5)$$
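To make Definition 1 concrete, the following Python sketch (ours, purely for illustration; the model and the probabilities are made up) computes Shap scores by enumerating all permutations over a fully-factorized distribution of binary features, and checks Equation 5:

```python
import itertools
import math

def cond_expectation(F, probs, e, S):
    """E[F | e_S] for independent binary features with Pr(X_i = 1) = probs[i]."""
    n = len(e)
    free = [i for i in range(n) if i not in S]
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(free)):
        x = list(e)
        p = 1.0
        for i, b in zip(free, bits):
            x[i] = b
            p *= probs[i] if b == 1 else 1.0 - probs[i]
        total += p * F(x)
    return total

def shap_scores(F, probs, e):
    """Definition 1 verbatim: average each feature's contribution over all
    n! permutations. Exponential in n; illustration only."""
    n = len(e)
    scores = [0.0] * n
    for sigma in itertools.permutations(range(n)):
        for pos, i in enumerate(sigma):
            S = set(sigma[:pos])  # the features preceding X_i in sigma
            scores[i] += (cond_expectation(F, probs, e, S | {i})
                          - cond_expectation(F, probs, e, S))
    return [s / math.factorial(n) for s in scores]

# A small made-up logistic regression model on 3 binary features.
F = lambda x: 1.0 / (1.0 + math.exp(-(2 * x[0] - 3 * x[1] + x[2])))
probs, e = [0.5, 0.2, 0.7], [1, 1, 1]
scores = shap_scores(F, probs, e)
# Equation 5: the scores sum to F(e) - E[F].
assert abs(sum(scores) - (F(e) - cond_expectation(F, probs, e, set()))) < 1e-9
```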
2.2 Computational Problems
This paper studies the complexity of computing $\mathrm{Shap}(X_i)$, a task we formally define next. We write $\mathbf{F}$ for a class of functions. We also write $\mathbf{PR}_n$ for a class of data distributions over $n$ features, and let $\mathbf{PR} = \bigcup_n \mathbf{PR}_n$. We assume that all parameters are rationals. Because Shap explanations are for an arbitrary fixed instance $e$, we will simplify the notation throughout this paper by assuming it to be the instance $e = (1, \dots, 1)$, and that each domain $\mathrm{dom}(X_i)$ contains the value 1, which is without loss of generality.
Definition 2 (Shap Computational Problems).
For each function class $\mathbf{F}$ and distribution class $\mathbf{PR}$, consider the following computational problems.
– The functional Shap problem $\mathrm{F\text{-}SHAP}(\mathbf{F}, \mathbf{PR})$: given a data distribution $\Pr \in \mathbf{PR}$ and a function $F \in \mathbf{F}$, compute $\mathrm{Shap}(X_i)$ for every feature $X_i$.
– The decision Shap problem $\mathrm{D\text{-}SHAP}(\mathbf{F}, \mathbf{PR})$: given a data distribution $\Pr \in \mathbf{PR}$, a function $F \in \mathbf{F}$, a feature $X_i$, and a threshold $t$, decide whether $\mathrm{Shap}(X_i) > t$.
To establish the complexities of these problems, we use standard notions of reductions. A polynomial-time reduction from a problem $A$ to a problem $B$, denoted $A \leq^p B$ and also called a Cook reduction, is a polynomial-time algorithm for problem $A$ with access to an oracle for problem $B$. We write $A \equiv^p B$ when both $A \leq^p B$ and $B \leq^p A$.
In the remainder of this paper we will study the computational complexity of these problems for natural hypothesis classes $\mathbf{F}$ that are popular in machine learning, as well as common classes of data distributions $\mathbf{PR}$, including those most often used to compute Shap explanations.
3 Shap over Fully-Factorized Distributions
We start our study of the complexity of Shap by considering the simplest probability distribution: a fully-factorized distribution, where all features are independent.
There are both practical and computational reasons why it makes sense to assume a fully-factorized data distribution when computing Shap explanations. First, learned functions are often the product of a supervised learning algorithm that does not have access to a generative model of the data – it is purely discriminative. Hence, it is convenient to make the practical assumption that the data distribution is fully factorized, and therefore easy to estimate. Second, fully-factorized distributions are highly tractable; for example, they make it easy to compute expectations of linear regression functions (Khosravi et al. 2019b) and render tractable other, otherwise hard, inference tasks (Vergari et al. 2020).
Lundberg and Lee (2017) indeed observe that computing the Shap explanation on an arbitrary data distribution is challenging, and consider using fully-factorized distributions (Sec. 4, Eq. 11). Other prior work on computing explanations also uses fully-factorized feature distributions, e.g., Datta, Sen, and Zick (2016) and Štrumbelj and Kononenko (2014). As we will show, the Shap explanation can be computed efficiently for several popular classifiers when the distribution is fully factorized. Yet, such simple data distributions are no guarantee of tractability: computing Shap scores will be intractable for some other common classifiers.
3.1 Equivalence to Computing Expectations
Before studying various function classes, we prove a key result that connects the complexity of Shap explanations to the complexity of computing expectations.
Let $\mathrm{IND}_n$ be the class of fully-factorized probability distributions over $n$ discrete and independent random variables $X_1, \dots, X_n$. That is, for every instance $x$, we have that $\Pr(x) = \prod_{i \in [n]} \Pr(X_i = x_i)$. Let $\mathrm{IND} = \bigcup_n \mathrm{IND}_n$. We show that for every function class $\mathbf{F}$, the complexity of $\mathrm{F\text{-}SHAP}(\mathbf{F}, \mathrm{IND})$ is the same as that of the fully-factorized expectation problem.
Definition 3 (Fully-Factorized Expectation Problem).
Let $\mathbf{F}$ be a class of real-valued functions with discrete inputs. The fully-factorized expectation problem for $\mathbf{F}$, denoted $\mathrm{FEXP}(\mathbf{F})$, is the following: given a function $F \in \mathbf{F}$ and a probability distribution $\Pr \in \mathrm{IND}$, compute $\mathbb{E}[F]$.
We know from Equation 5 that for any function $F$ over $n$ features, $\mathrm{FEXP}(\{F\}) \leq^p \mathrm{F\text{-}SHAP}(\{F\}, \mathrm{IND})$, because $\mathbb{E}[F] = F(e) - \sum_{i} \mathrm{Shap}(X_i)$. In this section we prove that the converse holds too:
Theorem 1.
For any function $F$, we have that $\mathrm{F\text{-}SHAP}(\{F\}, \mathrm{IND}) \equiv^p \mathrm{FEXP}(\{F\})$.
In other words, for any function $F$, the complexity of computing the Shap scores is the same as the complexity of computing the expected value under a fully-factorized data distribution. One direction of the proof is immediate: $\mathrm{FEXP}(\{F\}) \leq^p \mathrm{F\text{-}SHAP}(\{F\}, \mathrm{IND})$ because, if we are given an oracle to compute $\mathrm{Shap}(X_i)$ for every feature $X_i$, then we can obtain $\mathbb{E}[F]$ from Equation 5 (recall that we assumed that $F(e)$ is computable in polynomial time). The hard part of the proof is the opposite direction: we will show in Sec. 3.2 how to compute $\mathrm{Shap}(X_i)$ given an oracle for computing expectations $\mathbb{E}[F]$. Theorem 1 immediately extends to classes of functions $\mathbf{F}$, and to any number of variables, and therefore implies that $\mathrm{F\text{-}SHAP}(\mathbf{F}, \mathrm{IND}) \equiv^p \mathrm{FEXP}(\mathbf{F})$.
3.2 Proof of Theorem 1
We start with the special case when all features are binary: $\mathrm{dom}(X_i) = \{0, 1\}$ for all $i$. We denote by $\mathrm{IND}^{01}$ the class of fully-factorized distributions over binary domains.
Theorem 2.
For any function $F$ over binary features, we have that $\mathrm{F\text{-}SHAP}(\{F\}, \mathrm{IND}^{01}) \equiv^p \mathrm{FEXP}(\{F\})$.
Proof.
We prove only $\mathrm{F\text{-}SHAP} \leq^p \mathrm{FEXP}$; the opposite direction follows immediately from Equation 5. We will assume w.l.o.g. that $F$ has $n$ binary features, and show how to compute $\mathrm{Shap}(X_i)$ using repeated calls to an oracle for computing $\mathbb{E}[F]$, i.e., the expectation of the same function $F$, but over fully-factorized distributions with different probabilities. The probability distribution $\Pr$ is given to us by rational numbers $p_j := \Pr(X_j = 1)$, $j \in [n]$; obviously, $\Pr(X_j = 0) = 1 - p_j$. Recall that the instance whose outcome we want to explain is $e = (1, \dots, 1)$. Recall that for any set $S$ we write $e_S$ for the event $\bigwedge_{j \in S} X_j = 1$. Then, we have that

$$\mathrm{Shap}(X_i) \;=\; \sum_{S \subseteq [n] \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \big( v(S \cup \{i\}) - v(S) \big) \qquad (6)$$

Let $F_1 := F[X_i := 1]$ and $F_0 := F[X_i := 0]$ (both are functions in $n - 1$ binary features, $X_1, \dots, X_{i-1}, X_{i+1}, \dots, X_n$). Then, using the independence of the features:

$$v(S \cup \{i\}) = \mathbb{E}[F_1 \mid e_S] \qquad v(S) = p_i\, \mathbb{E}[F_1 \mid e_S] + (1 - p_i)\, \mathbb{E}[F_0 \mid e_S]$$

and therefore $v(S \cup \{i\}) - v(S)$ is given by:

$$v(S \cup \{i\}) - v(S) = (1 - p_i)\, \big( \mathbb{E}[F_1 \mid e_S] - \mathbb{E}[F_0 \mid e_S] \big)$$
For any function $G$ over the features $X_1, \dots, X_{i-1}, X_{i+1}, \dots, X_n$, Equation 1 defines the value $v_G(S) := \mathbb{E}[G \mid e_S]$. Abusing notation, we write $v_G(k)$ for the sum of these quantities over all sets $S$ of cardinality $k$:

$$v_G(k) \;:=\; \sum_{S \subseteq [n] \setminus \{i\}:\, |S| = k} v_G(S) \qquad (7)$$
We will prove the following claim.
Claim 1.
Let $G$ be a function over $m$ binary variables. Then the quantities $v_G(0)$ until $v_G(m)$ can be computed in polynomial time, using $m + 1$ calls to an oracle for $\mathrm{FEXP}(\{G\})$.
Note that an oracle for $\mathbb{E}[F]$ is also an oracle for both $\mathbb{E}[F_0]$ and $\mathbb{E}[F_1]$, by simply setting $p_i := 0$ or $p_i := 1$ respectively. Therefore, Claim 1 proves Theorem 2, by applying it once to $F_1$ and once to $F_0$ in order to derive all the quantities $v_{F_1}(k)$ and $v_{F_0}(k)$, thereby computing all differences $v(S \cup \{i\}) - v(S)$ aggregated by cardinality, and finally computing $\mathrm{Shap}(X_i)$ using Equation 6. It remains to prove Claim 1.
Fix a function $G$ over $m$ binary variables, and let the probabilities $q_j := \Pr(X_j = 1)$, for $j \in [m]$, define the distribution over which we need to compute $v_G(0), \dots, v_G(m)$. We will prove the following additional claim.
Claim 2.
Given any real number $z > 0$, consider the distribution $\Pr^{(z)}$ defined by $\Pr^{(z)}(X_j = 1) := (q_j + z)/(1 + z)$, for $j \in [m]$. Let $\mathbb{E}^{(z)}[G]$ denote $\mathbb{E}[G]$ under the distribution $\Pr^{(z)}$. We then have that

$$\mathbb{E}^{(z)}[G] \;=\; \frac{1}{(1 + z)^m} \sum_{k=0}^{m} v_G(k)\, z^k \qquad (8)$$
Assuming Claim 2 holds, we prove Claim 1. Choose any $m + 1$ distinct values $z_0, \dots, z_m > 0$, use the oracle to compute the quantities $\mathbb{E}^{(z_0)}[G], \dots, \mathbb{E}^{(z_m)}[G]$, and form the system of linear equations (8) with unknowns $v_G(0), \dots, v_G(m)$. Next, observe that its matrix is a non-singular Vandermonde matrix, hence the system has a unique solution, which can be computed in polynomial time. It remains to prove Claim 2.
Because of independence, the probability of an instance $x$ is $\Pr(x) = \prod_{j \in [m]} q_j^{x_j} (1 - q_j)^{1 - x_j}$, where $x_j$ looks up the value of feature $X_j$ in instance $x$. Similarly, $\Pr^{(z)}(x) = \prod_{j} \big(\tfrac{q_j + z}{1 + z}\big)^{x_j} \big(\tfrac{1 - q_j}{1 + z}\big)^{1 - x_j}$. Writing $\mathrm{ones}(x) := \{j : x_j = 1\}$, using direct calculations we derive:

$$\Pr(x) \prod_{j \in \mathrm{ones}(x)} \Big(1 + \frac{z}{q_j}\Big) \;=\; (1 + z)^m\, \Pr{}^{(z)}(x) \qquad (9)$$
Separately we also derive the following identity, using the fact that $\Pr(e_S) = \prod_{j \in S} q_j$ by independence:

$$\Pr(x \mid e_S) \;=\; \frac{\Pr(x)}{\prod_{j \in S} q_j} \;\; \text{when } x_S = e_S, \text{ and } 0 \text{ otherwise} \qquad (10)$$
We are now in a position to prove Claim 2:

$$\sum_{k=0}^{m} v_G(k)\, z^k \;=\; \sum_{S \subseteq [m]} z^{|S|}\, \mathbb{E}[G \mid e_S] \;=\; \sum_{S \subseteq [m]} z^{|S|} \sum_{x:\, x_S = e_S} G(x)\, \frac{\Pr(x)}{\prod_{j \in S} q_j}$$

The last line follows from Equation 10. Next, we simply exchange the summations over $S$ and over $x$, after which we apply the identity $\sum_{S \subseteq \mathrm{ones}(x)} z^{|S|} / \prod_{j \in S} q_j = \prod_{j \in \mathrm{ones}(x)} (1 + z/q_j)$:

$$\text{(continuing)} \;=\; \sum_{x} G(x)\, \Pr(x) \prod_{j \in \mathrm{ones}(x)} \Big(1 + \frac{z}{q_j}\Big) \;=\; (1 + z)^m \sum_{x} G(x)\, \Pr{}^{(z)}(x) \;=\; (1 + z)^m\, \mathbb{E}^{(z)}[G]$$

The final line uses Equation 9. This completes the proof of Claim 2, as well as of Theorem 2. ∎
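The interpolation step at the heart of Claim 1 can be seen in miniature in the following sketch (ours; the polynomial coefficients stand in for the unknown quantities $v_G(k)$, and the oracle for the right-hand side of Equation 8): evaluations at distinct points determine the coefficients through a non-singular Vandermonde system.

```python
import numpy as np

m = 3                                             # degree: m+1 unknown coefficients
coeffs = np.array([2.0, -1.0, 0.5, 3.0])          # the unknowns, playing v_G(0..m)
oracle = lambda z: sum(c * z**k for k, c in enumerate(coeffs))

zs = np.array([1.0, 2.0, 3.0, 4.0])               # m+1 distinct evaluation points
V = np.vander(zs, m + 1, increasing=True)         # V[j, k] = zs[j]**k, non-singular
b = np.array([oracle(z) for z in zs])             # m+1 oracle calls
recovered = np.linalg.solve(V, b)                 # unique solution of the system
assert np.allclose(recovered, coeffs)
```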
Next, we generalize this result from binary features to arbitrary discrete features. Fix a function $F$ with $n$ inputs, $X_1, \dots, X_n$, where each domain $\mathrm{dom}(X_i)$ is an arbitrary finite set; we assume w.l.o.g. that $1 \in \mathrm{dom}(X_i)$. A fully-factorized probability space $\Pr$ is defined by numbers $p_{iv}$, $i \in [n]$, $v \in \mathrm{dom}(X_i)$, such that, for all $i$, $\sum_{v} p_{iv} = 1$. Given $F$ and $\Pr$ over the domain $\mathcal{X}$, we define their projections $F^\#$, $\Pr^\#$ over the binary domain $\{0,1\}^n$ as follows. For any instance $x^\# \in \{0,1\}^n$, let $\gamma(x^\#)$ denote the event asserting that $X_i = 1$ iff $x^\#_i = 1$. Formally,

$$\gamma(x^\#) \;:=\; \bigwedge_{i:\, x^\#_i = 1} (X_i = 1) \;\wedge\; \bigwedge_{i:\, x^\#_i = 0} (X_i \neq 1)$$

Then, the projections are defined as follows: for all $x^\# \in \{0,1\}^n$,

$$F^\#(x^\#) \;:=\; \mathbb{E}[F \mid \gamma(x^\#)] \qquad \Pr{}^\#(x^\#) \;:=\; \Pr(\gamma(x^\#)) \qquad (11)$$

Notice that $F^\#$ depends both on $F$ and on the probability distribution $\Pr$. Intuitively, the projection only distinguishes between $X_i = 1$ and $X_i \neq 1$; for example, for $n = 2$: $F^\#(1, 0) = \mathbb{E}[F \mid X_1 = 1, X_2 \neq 1]$.
We prove the following result in Appendix B:
Proposition 3.
Let $F$ be a function with $n$ input features, and $\Pr$ a fully-factorized distribution over $\mathcal{X}$. Then (1) for any feature $X_i$, $\mathrm{Shap}_{F,\Pr}(X_i) = \mathrm{Shap}_{F^\#,\Pr^\#}(X_i)$, and (2) $\mathrm{FEXP}(\{F^\#\}) \leq^p \mathrm{FEXP}(\{F\})$.
Item (1) states that the Shap score of $F$ computed over the probability space $\Pr$ is the same as that of its projection $F^\#$ (which depends on $\Pr$) over the projected probability space $\Pr^\#$. Item (2) says that, for any probability space over $\{0,1\}^n$ (not necessarily $\Pr^\#$), we can compute $\mathbb{E}[F^\#]$ in polynomial time given access to an oracle for computing $\mathbb{E}[F]$. We can now complete the proof of Theorem 1, by showing that $\mathrm{F\text{-}SHAP}(\{F\}, \mathrm{IND}) \leq^p \mathrm{FEXP}(\{F\})$. Given a function $F$ and probability space $\Pr \in \mathrm{IND}$, in order to compute $\mathrm{Shap}_{F,\Pr}(X_i)$, by item (1) of Proposition 3 it suffices to show how to compute $\mathrm{Shap}_{F^\#,\Pr^\#}(X_i)$. By Theorem 2, we can compute the latter given access to an oracle for computing $\mathbb{E}[F^\#]$. Finally, by item (2) of the proposition, we can compute $\mathbb{E}[F^\#]$ given an oracle for computing $\mathbb{E}[F]$.
3.3 Tractable Function Classes
Given the polynomial-time equivalence between computing Shap explanations and computing expectations under fully-factorized distributions, a natural next question is: which real-world hypothesis classes in machine learning support efficient computation of Shap scores?
Corollary 4.
For the following function classes $\mathbf{F}$, computing Shap scores is in polynomial time in the size of the representations of the function $F \in \mathbf{F}$ and the fully-factorized distribution $\Pr \in \mathrm{IND}$.
1. Linear regression models
2. Decision and regression trees
3. Random forests or additive tree ensembles
4. Factorization machines, regression circuits
5. Boolean functions in d-DNNF, binary decision diagrams
6. Bounded-treewidth Boolean functions in CNF
These are all consequences of Theorem 1, and the fact that computing fully-factorized expectations for these function classes $\mathbf{F}$ is in polynomial time. Concretely, we have the following observations about fully-factorized expectations:
1. The expected value of a linear regression model follows directly from the linearity of expectation: $\mathbb{E}[w_0 + \sum_i w_i X_i] = w_0 + \sum_i w_i\, \mathbb{E}[X_i]$.
2. Paths from root to leaf in a decision or regression tree are mutually exclusive. Their expected value is therefore the sum of expected values of each path, which are tractable to compute within IND; see Khosravi et al. (2020). A minimal code sketch of this computation follows this list.
3. Additive mixtures of trees, as obtained through bagging or boosting, are tractable by the linearity of expectation.
4. Factorization machines extend linear regression models with feature-interaction terms and factorize the parameters of the higher-order terms (Rendle 2010). Their expectations remain easy to compute. Regression circuits are a graph-based generalization of linear regression. Khosravi et al. (2019a) provide an algorithm to efficiently take their expectation w.r.t. a probabilistic circuit distribution, which is trivial to construct for the fully-factorized case.
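As referenced in item 2, here is a minimal sketch (ours; the tree encoding is made up) of computing a decision tree's expectation under a fully-factorized distribution in one linear traversal:

```python
def tree_expectation(node, probs):
    """node is ('leaf', value) or ('split', i, low, high); the 'low' branch is
    taken when X_i = 0 and 'high' when X_i = 1, with Pr(X_i = 1) = probs[i]."""
    if node[0] == 'leaf':
        return node[1]
    _, i, low, high = node
    return ((1.0 - probs[i]) * tree_expectation(low, probs)
            + probs[i] * tree_expectation(high, probs))

# Split on X0 at the root, then on X1 in the right branch.
tree = ('split', 0,
        ('leaf', 0.0),
        ('split', 1, ('leaf', 1.0), ('leaf', 0.5)))
print(tree_expectation(tree, [0.3, 0.8]))  # 0.7*0 + 0.3*(0.2*1.0 + 0.8*0.5) = 0.18
```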
The remaining tractable cases are Boolean functions. Computing fully-factorized expectations of Boolean functions is widely known as the weighted model counting task (WMC) (Sang, Beame, and Kautz 2005; Chavira and Darwiche 2008). WMC has been extensively studied both in the theory and the AI communities, and the precise complexity of $\mathrm{FEXP}(\mathbf{F})$ is known for many families of Boolean functions $\mathbf{F}$. These results immediately carry over to the $\mathrm{F\text{-}SHAP}(\mathbf{F}, \mathrm{IND})$ problem through Theorem 1:
5. Expectations can be computed in time linear in the size of various circuit representations, called d-DNNF, which include binary decision diagrams (OBDD, FBDD) and SDDs (Bryant 1986; Darwiche and Marquis 2002). (In contemporaneous work, Arenas et al. (2020) also show that the Shap explanation is tractable for d-DNNFs, but for the more restricted class of uniform data distributions.)
6. Bounded-treewidth CNFs are efficiently compiled into OBDD circuits (Ferrara, Pan, and Vardi 2005), and thus enjoy tractable expectations.
To conclude this section, the reader may wonder about the algorithmic complexity of solving $\mathrm{F\text{-}SHAP}$ with an oracle for $\mathrm{FEXP}$ under the reduction in Section 3.2. Briefly, we require a linear number of calls to the oracle, as well as polynomial time for solving a system of linear equations. Hence, for those classes, such as d-DNNF circuits, where expectations are linear in the size of the (circuit) representation of $F$, computing $\mathrm{Shap}(X_i)$ is also linear in the representation size and polynomial in $n$.
3.4 Intractable Function Classes
The polynomial-time equivalence of Theorem 1 also implies that computing Shap scores must be intractable whenever computing fully-factorized expectations is intractable. This section reviews some of those function classes F, including some for which the computational hardness of is well known. We begin, however, with a more surprising result.
Logistic regression is one of the simplest and most widely used machine learning models, yet it is conspicuously absent from Corollary 4. We prove that computing the expectation of a logistic regression model is #P-hard, even under a uniform data distribution, which is of independent interest.
A logistic regression model is a parameterized function $F_{\mathbf{w}}(\mathbf{x}) := \sigma(\mathbf{w} \cdot \mathbf{x})$, where $\mathbf{w}$ is a vector of weights, $\sigma$ is the logistic function, $\sigma(z) = 1/(1 + e^{-z})$, and $\mathbf{w} \cdot \mathbf{x}$ is the dot product. Note that we define the logistic regression function to output probabilities, not data labels. Let $\mathrm{LOGIT}_n$ denote the class of logistic regression functions with $n$ variables, and $\mathrm{LOGIT} = \bigcup_n \mathrm{LOGIT}_n$. We prove the following:
Theorem 5.
Computing the expectation of a logistic regression model w.r.t. a uniform data distribution is #P-hard.
The full proof in Appendix C is by reduction from counting solutions to the number partitioning problem.
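The quantity in question is easy to state in code. The following brute-force sketch (ours; it enumerates all $2^n$ instances, so it is exponential in $n$) computes the expectation that Theorem 5 shows cannot, in general, be computed efficiently:

```python
import itertools
import math

def uniform_expectation_sigmoid(w):
    """E[sigmoid(w . x)] under the uniform distribution over {0,1}^n."""
    n = len(w)
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        z = sum(wi * xi for wi, xi in zip(w, x))
        total += 1.0 / (1.0 + math.exp(-z))
    return total / 2**n  # 2^n terms: brute force only

print(uniform_expectation_sigmoid([1.0, -2.0, 3.0]))
```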
Because the uniform distribution is contained in IND, and following Theorem 1, we immediately obtain:
Corollary 6.
The computational problems $\mathrm{FEXP}(\mathrm{LOGIT})$ and $\mathrm{F\text{-}SHAP}(\mathrm{LOGIT}, \mathrm{IND})$ are both #P-hard.
We are now ready to list general function classes for which computing the Shap explanation is #P-hard.
Corollary 7.
For the following function classes $\mathbf{F}$, computing Shap scores is #P-hard in the size of the representations of the function $F \in \mathbf{F}$ and the fully-factorized distribution $\Pr \in \mathrm{IND}$.
1. Logistic regression models (Corollary 6)
2. Neural networks with sigmoid activation functions
3. Naive Bayes classifiers, logistic circuits
4. Boolean functions in CNF or DNF
Our intractability results stem from these observations:
2. Each neuron is a logistic regression model, and therefore this class subsumes LOGIT.
3. Naive Bayes classifiers correspond to logistic regression models (Ng and Jordan 2002), and logistic circuits generalize them (Liang and Van den Broeck 2019); both classes therefore subsume LOGIT.
4. For general CNFs and DNFs, weighted model counting, and therefore $\mathrm{FEXP}$, is #P-hard. This is true even for very restricted classes, such as monotone 2CNF and 2DNF functions, and Horn clause logic (Wei and Selman 2005).
4 Beyond Fully-Factorized Distributions
Features in real-world data distributions are not independent. In order to capture more realistic assumptions about the data when computing Shap scores, one needs a more intricate probabilistic model. In this section we prove that computing the Shap explanation quickly becomes intractable, even over the simplest probabilistic models, namely naive Bayes models. To make computing the Shap explanation as easy as possible, we will assume that the function simply outputs the value of one feature. We show that even in this case, where the function class is tractable under fully-factorized distributions, computing Shap explanations becomes computationally hard.
Let $\mathrm{NB}_n$ denote the family of naive Bayes networks over variables $\mathbf{X} = \{X_1, \dots, X_n\}$, with binary domains, where $X_1$ is a parent of all other features:

$$\Pr(x) \;:=\; \Pr(x_1) \prod_{i=2}^{n} \Pr(x_i \mid x_1)$$

As usual, the class $\mathrm{NB} = \bigcup_n \mathrm{NB}_n$. We write $F_1$ for the function that returns the value of feature $X_1$; that is, $F_1(x) := x_1$. We prove the following.
Theorem 8.
The decision problem $\mathrm{D\text{-}SHAP}(\{F_1\}, \mathrm{NB})$ is NP-hard.
The proof in Appendix D is by reduction from the number partitioning problem, similar to the proof of Corollary 6. We note that the subset sum problem was also used to prove related hardness results, e.g., hardness of the Shapley value in network games (Elkind et al. 2008).
This result is in sharp contrast with the complexity of the Shap score over fully-factorized distributions in Section 3. There, the complexity was dictated by the choice of function class $\mathbf{F}$. Here, the function is as simple as possible, yet computing Shap is hard. This ruins any hope of achieving tractability by restricting the function class, and motivates us to restrict the probability distribution in the next section. The result is also surprising because it is efficient to compute marginal probabilities (such as the expectation of $F_1$) and conditional probabilities in naive Bayes distributions.
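For contrast, the following sketch (ours; all parameters are made up) shows how cheap marginal reasoning is in a naive Bayes distribution with root $X_1$, even though the Shap score over the same distribution is NP-hard:

```python
import itertools

p1 = 0.6                    # Pr(X1 = 1); X1 is the root
cond1 = [0.9, 0.2, 0.7]     # Pr(Xi = 1 | X1 = 1), for i = 2, 3, 4
cond0 = [0.3, 0.5, 0.1]     # Pr(Xi = 1 | X1 = 0)

def pr(x):
    """Joint probability of x = (x1, x2, x3, x4) under the naive Bayes model."""
    p = p1 if x[0] == 1 else 1.0 - p1
    for i, xi in enumerate(x[1:]):
        q = cond1[i] if x[0] == 1 else cond0[i]
        p *= q if xi == 1 else 1.0 - q
    return p

# E[F1] for F1(x) = x1 is just the root marginal Pr(X1 = 1).
e_f1 = sum(pr(x) * x[0] for x in itertools.product([0, 1], repeat=4))
assert abs(e_f1 - p1) < 1e-12
```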
Theorem 8 immediately extends to a large class of probability distributions and functions. We say that $F$ depends only on $X_1$ if there exist two constants $c_0 \neq c_1$ such that $F(x) = c_1$ when $x_1 = 1$ and $F(x) = c_0$ when $x_1 = 0$. In other words, $F$ ignores all variables other than $X_1$, yet does depend on $X_1$. We then have the following.
Corollary 9.
The problem $\mathrm{D\text{-}SHAP}(\mathbf{F}, \mathbf{PR})$ is NP-hard, when $\mathbf{PR}$ is any of the following classes of distributions:
1. Naive Bayes, bounded-treewidth Bayesian networks
2. Bayesian networks, Markov networks, factor graphs
3. Decomposable probabilistic circuits
and when $\mathbf{F}$ is any class that contains some function that depends only on $X_1$, including the class of linear regression models and all the classes listed in Corollaries 4 and 7.
This corollary follows from two simple observations. First, each of the classes of probability distributions listed in the corollary can represent a naive Bayes network over binary variables $X_1, \dots, X_n$. For example, a Markov network will consist of factors $\phi_i(x_1, x_i)$ for $i = 2, \dots, n$; similar simple arguments prove that all the other classes can represent naive Bayes, including tractable probabilistic circuits such as sum-product networks (Vergari et al. 2020).
Second, for each function $F$ that depends only on $X_1$, there exist two distinct constants $c_0 \neq c_1$ such that $F(x) = c_1$ when $x_1 = 1$ and $F(x) = c_0$ when $x_1 = 0$. For example, if we consider the class of logistic regression functions $F(x) = \sigma(\mathbf{w} \cdot \mathbf{x})$, then we choose the weights $\mathbf{w} = (1, 0, \dots, 0)$, and we obtain $F(x) = \sigma(1)$ when $x_1 = 1$ and $F(x) = \sigma(0) = 1/2$ when $x_1 = 0$. Then, over the binary domain the function $F$ is equivalent to $(c_1 - c_0) \cdot F_1 + c_0$, and, therefore, by the linearity of the Shap explanation (Equation 4) we have $\mathrm{Shap}_F(X_i) = (c_1 - c_0) \cdot \mathrm{Shap}_{F_1}(X_i)$ (because the Shap explanation of a constant function is 0), for which, by Theorem 8, the decision problem is NP-hard.
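The rescaling argument is a two-line computation; the following sketch (ours) checks it for the logistic regression example:

```python
import math

sigma = lambda z: 1.0 / (1.0 + math.exp(-z))
c0, c1 = sigma(0), sigma(1)       # the two values F can take
F = lambda x: sigma(x[0])         # weights w = (1, 0, ..., 0): depends only on X1
F1 = lambda x: x[0]               # the function from Theorem 8
for x1 in (0, 1):
    # Over the binary domain, F is the affine combination (c1 - c0) * F1 + c0.
    assert abs(F([x1]) - ((c1 - c0) * F1([x1]) + c0)) < 1e-12
```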
We end this section by proving that Theorem 8 continues to hold even if the prediction function returns the value of some leaf node of a (bounded-treewidth) Bayesian network. In other words, the hardness of the Shap explanation is not tied to the function returning the root of the network, and applies to more general functions.
Corollary 10.
The Shap decision problem for Bayesian networks with latent variables is NP-hard, even if the function returns a single leaf variable of the network.
The full proof is given in Appendix E.
5 Shap on Empirical Distributions
In supervised learning one does not require a generative model of the data; instead, the model is trained on some concrete data set: the training data. When some probabilistic model is needed, the training data itself is conveniently used as a probability model, called the empirical distribution. This distribution captures dependencies between features, while its set of possible worlds is limited to those in the data set. For example, the intent of the KernelSHAP algorithm by Lundberg and Lee (2017) is to compute the Shap explanation on the empirical distribution. In another example, Aas, Jullum, and Løland (2019) extend KernelSHAP to work with dependent features, by estimating the conditional probabilities from the empirical distribution.
Compared to the data distributions considered in the previous sections, the empirical distribution has one key advantage: it has many fewer possible worlds with positive probability – this suggests increased tractability. Unfortunately, in this section, we prove that computing the Shap explanation over the empirical distribution is #P-hard in general.
To simplify the presentation, this section assumes that all features are binary: $\mathrm{dom}(X_i) = \{0, 1\}$. The probability distribution is given by a 0/1-matrix $M \in \{0,1\}^{m \times n}$, where each row is an outcome with probability $1/m$. One can think of $M$ as a dataset with $n$ features and $m$ data instances, where each row is one data instance. Repeated rows are possible: if a row occurs $k$ times, then its probability is $k/m$. We denote by X the class of empirical distributions. The predictive function can be any function $F : \{0,1\}^n \to \mathbb{R}$. As our data distribution is no longer strictly positive, we adopt the standard convention that $\mathbb{E}[F \mid e_S] = 0$ when $\Pr(e_S) = 0$.
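A sketch of this setup in code (ours; the dataset and function are made up): conditional expectations over an empirical distribution are simple row averages, with the stated convention for empty conditions.

```python
def emp_cond_expectation(M, F, e, S):
    """E[F | e_S] under the empirical distribution of the 0/1 matrix M."""
    rows = [row for row in M if all(row[i] == e[i] for i in S)]
    if not rows:
        return 0.0  # convention adopted above when Pr(e_S) = 0
    return sum(F(row) for row in rows) / len(rows)

M = [(0, 0), (0, 0), (0, 1), (1, 0), (1, 1), (1, 1)]
F = lambda x: x[0] ^ x[1]
print(emp_cond_expectation(M, F, e=(1, 1), S={0}))  # mean of F over rows with x0 = 1
```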
Recall from Section 2.2 that, by convention, we compute the Shap explanation w.r.t. the instance $e = (1, \dots, 1)$, which is without loss of generality. Somewhat surprisingly, the complexity of computing the Shap explanation of a function over the empirical distribution given by a matrix $M$ is related to the problem of computing the expectation of a certain CNF formula associated to $M$.
Definition 4.
The positive, partitioned 2CNF formula, PP2CNF, associated to a matrix $M \in \{0,1\}^{m \times n}$ is:

$$\Phi_M \;:=\; \bigwedge_{(i,j):\, M_{ij} = 0} (U_i \vee V_j)$$
Thus, a PP2CNF formula is over variables $U_1, \dots, U_m, V_1, \dots, V_n$, and has only positive clauses. The matrix $M$ dictates which clauses are present. A quasi-symmetric probability distribution is a fully-factorized distribution over the variables $U_1, \dots, U_m, V_1, \dots, V_n$ for which there exist two numbers $p, q$ such that for every $i$, $\Pr(U_i) = p$ or $\Pr(U_i) = 1$, and for every $j$, $\Pr(V_j) = q$ or $\Pr(V_j) = 1$. In other words, all variables $U_i$ have the same probability $p$, or have probability 1, and similarly for the variables $V_j$. We denote by $\mathrm{QS\text{-}EXP}$ the expectation computation problem for PP2CNF over quasi-symmetric probability distributions. $\mathrm{QS\text{-}EXP}$ is #P-hard, because computing $\mathbb{E}[\Phi]$ under the uniform distribution (i.e., $p = q = 1/2$) is #P-hard (Provan and Ball 1983). We prove:
Theorem 11.
Let X be the class of empirical distributions, and $\mathbf{F}$ be any class of functions such that, for each $n$, it includes some function that depends only on a single feature. Then, we have that $\mathrm{F\text{-}SHAP}(\mathbf{F}, \mathrm{X}) \equiv^p \mathrm{QS\text{-}EXP}$.
As a consequence, the problem $\mathrm{F\text{-}SHAP}(\mathbf{F}, \mathrm{X})$ is #P-hard in the size of the empirical distribution.
The theorem is surprising, because the set of possible outcomes of an empirical distribution is small. This is unlike all the distributions discussed earlier, for example those mentioned in Corollary 9, which have $2^n$ possible outcomes, where $n$ is the number of features. In particular, given an empirical distribution, one can compute the expectation $\mathbb{E}[F]$ in polynomial time for any function $F$, by doing just one iteration over the data. Yet, computing the Shap explanation of $F$ is #P-hard.
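Indeed, the one-pass computation of $\mathbb{E}[F]$ is a single line (our sketch, reusing the made-up data above), which makes the #P-hardness of the Shap score over the same distribution all the more striking:

```python
M = [(0, 0), (0, 0), (0, 1), (1, 0), (1, 1), (1, 1)]
F = lambda x: x[0] ^ x[1]
expected_F = sum(F(row) for row in M) / len(M)  # one iteration over the data
print(expected_F)  # 2/6
```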
Theorem 11 implies hardness of Shap explanations on the empirical distribution for a large class of functions.
Corollary 12.
Computing Shap explanations over the class X of empirical distributions is #P-hard for any class of functions $\mathbf{F}$ that, for each $n$, includes some function that depends only on a single feature.
For instance, any class of Boolean functions that contains the single-variable functions $F(x) = x_i$, for $i \in [n]$, falls under this corollary. Section 4 showed how the class of logistic regression functions falls under this corollary as well.
The proof of Theorem 11 follows from the following technical lemma, which is of independent interest:
Lemma 13.
We have that:
1. For every matrix $M$: $\mathrm{F\text{-}SHAP}(\{F\}, \{\Pr_M\}) \leq^p \mathrm{QS\text{-}EXP}$, where $\Pr_M$ denotes the empirical distribution defined by $M$.
2. $\mathrm{QS\text{-}EXP} \leq^p \mathrm{F\text{-}SHAP}(\mathbf{F}, \mathrm{X})$.
The proof of the Lemma is given in Appendices F and G. The first item says that we can compute the Shap explanation in polynomial time using an oracle for computing $\mathbb{E}[\Phi_M]$ over quasi-symmetric distributions. The oracle is called only on the PP2CNF $\Phi_M$ associated to the data matrix $M$, but it may perform repeated calls, with different probabilities of the Boolean variables. This is somewhat surprising, because the Shap explanation is over an empirical distribution, while $\mathbb{E}[\Phi_M]$ is taken over a fully-factorized distribution; there is no connection between these two distributions. This item immediately implies $\mathrm{F\text{-}SHAP}(\mathbf{F}, \mathrm{X}) \leq^p \mathrm{QS\text{-}EXP}$, where X is the class of empirical distributions, since the formula $\Phi_M$ is in the class PP2CNF.
The second item says that a weak form of the converse also holds: we can compute in polynomial time the expectation of a PP2CNF over a quasi-symmetric probability distribution by using an oracle for computing Shap explanations, over several matrices, not necessarily restricted to the matrix associated to the formula. Together, the two items of the lemma prove Theorem 11.
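For a toy instance, the quantity $\mathbb{E}[\Phi_M]$ can be brute-forced directly (our sketch; it places one clause per zero entry of $M$, which is our reading of Definition 4, and enumerates all $2^{m+n}$ assignments):

```python
import itertools

def pp2cnf_expectation(M, p, q):
    """E[Phi_M] under the symmetric distribution Pr(U_i) = p, Pr(V_j) = q."""
    m, n = len(M), len(M[0])
    clauses = [(i, j) for i in range(m) for j in range(n) if M[i][j] == 0]
    total = 0.0
    for u in itertools.product([0, 1], repeat=m):
        for v in itertools.product([0, 1], repeat=n):
            if all(u[i] or v[j] for i, j in clauses):  # assignment satisfies Phi_M
                w = 1.0
                for ui in u:
                    w *= p if ui else 1.0 - p
                for vj in v:
                    w *= q if vj else 1.0 - q
                total += w
    return total

print(pp2cnf_expectation([(1, 0), (0, 1)], p=0.5, q=0.5))  # 9/16 for this matrix
```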
We end this section with a comment on the TreeSHAP algorithm in Lundberg et al. (2020), which is computed over a distribution defined by a tree-based model. Our result implies that the problem that TreeSHAP tries to solve is #P-hard. This follows immediately by observing that every empirical distribution can be represented by a binary tree of size polynomial in the size of the matrix $M$. The tree examines the attributes in the order $X_1, X_2, \dots, X_n$, and each decision node for $X_i$ has two branches: $X_i = 0$ and $X_i = 1$. A branch that does not exist in the matrix ends in a leaf with label 0. A complete branch that corresponds to a row in $M$ ends in a leaf with label $1/m$ (or $k/m$ if that row occurs $k$ times in $M$). The size of this tree is no larger than twice the size of the matrix $M$ (because of the extra dead-end branches). This concludes our study of Shap explanations on the empirical distribution.
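The encoding just described is easy to carry out; the following sketch (ours) builds the tree for a given matrix, with dead-end branches becoming 0-leaves:

```python
from collections import Counter

def empirical_to_tree(M):
    """Encode the empirical distribution of a 0/1 matrix M as a binary tree
    that tests X1, X2, ... in order; leaves hold probabilities k/m."""
    m, n = len(M), len(M[0])
    counts = Counter(tuple(row) for row in M)
    def build(prefix):
        if len(prefix) == n:
            return ('leaf', counts.get(prefix, 0) / m)
        if not any(r[:len(prefix)] == prefix for r in counts):
            return ('leaf', 0.0)  # dead-end branch, not present in M
        return ('split', len(prefix), build(prefix + (0,)), build(prefix + (1,)))
    return build(())

print(empirical_to_tree([(0, 0), (0, 0), (1, 1)]))
```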
6 Perspectives and Conclusions
We establish the complexity of computing the Shap explanation in three important settings. First, we consider fully-factorized data distributions and show that for any prediction model, the complexity of computing the Shap explanation is the same as the complexity of computing the expected value of the model. It follows that there are commonly used models, such as logistic regression, for which computing Shap explanations is intractable. Going beyond fully-factorized distributions, we show that computing Shap explanations is also intractable for simple functions and simple distributions – naive Bayes and empirical distributions.
The recent literature on Shap explanations predominantly studies tradeoffs of variants of the original Shap formulation, and relies on approximation algorithms to compute the explanations. These approximation algorithms, however, tend to make simplifying assumptions which can lead to counter-intuitive explanations, see e.g., Slack et al. (2020). We believe that more focus should be given to the computational complexity of Shap explanations. In particular, which classes of machine learning models can be explained efficiently using the Shap scores? Our results show that, under the assumption of fully-factorized data distributions, there are classes of models for which the Shap explanations can be computed in polynomial time. In future work, we plan to explore if there are classes of models for which the complexity of the Shap explanations is tractable under more complex data distributions, such as the ones defined by tractable probabilistic circuits (Vergari et al. 2020) or tractable symmetric probability spaces (Van den Broeck, Meert, and Darwiche 2014; Beame et al. 2015).
Acknowledgements
This work is partially supported by NSF grants IIS-1907997, IIS-1954222, IIS-1943641, IIS-1956441, CCF-1837129, DARPA grant N66001-17-2-4032, a Sloan Fellowship, and gifts by Intel and Facebook research. Schleich is supported by a RelationalAI fellowship. The authors would like to thank YooJung Choi for valuable discussions on the proof of Theorem 5.
References
- Aas, Jullum, and Løland (2019) Aas, K.; Jullum, M.; and Løland, A. 2019. Explaining individual predictions when features are dependent: More accurate approximations to Shapley values. arXiv preprint arXiv:1903.10464.
- Arenas et al. (2020) Arenas, M.; Barceló, P.; Bertossi, L.; and Monet, M. 2020. The Tractability of SHAP-Score-Based Explanations over Deterministic and Decomposable Boolean Circuits. arXiv preprint arXiv:2007.14045.
- Beame et al. (2015) Beame, P.; Van den Broeck, G.; Gribkoff, E.; and Suciu, D. 2015. Symmetric Weighted First-Order Model Counting. In Proceedings of the 34th ACM Symposium on Principles of Database Systems, PODS 2015, Melbourne, Victoria, Australia, May 31 - June 4, 2015, 313–328.
- Bertossi et al. (2020) Bertossi, L.; Li, J.; Schleich, M.; Suciu, D.; and Vagena, Z. 2020. Causality-Based Explanation of Classification Outcomes. In Proceedings of the Fourth International Workshop on Data Management for End-to-End Machine Learning, DEEM’20. New York, NY, USA: Association for Computing Machinery.
- Bryant (1986) Bryant, R. E. 1986. Graph-based algorithms for boolean function manipulation. Computers, IEEE Transactions on 100(8): 677–691.
- Chavira and Darwiche (2008) Chavira, M.; and Darwiche, A. 2008. On probabilistic inference by weighted model counting. Artificial Intelligence 172(6-7): 772–799.
- Darwiche and Marquis (2002) Darwiche, A.; and Marquis, P. 2002. A knowledge compilation map. Journal of Artificial Intelligence Research 17: 229–264.
- Datta, Sen, and Zick (2016) Datta, A.; Sen, S.; and Zick, Y. 2016. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems. In IEEE Symposium on Security and Privacy, SP 2016, San Jose, CA, USA, May 22-26, 2016, 598–617.
- Elkind et al. (2008) Elkind, E.; Goldberg, L. A.; Goldberg, P. W.; and Wooldridge, M. J. 2008. A tractable and expressive class of marginal contribution nets and its applications. In 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), Estoril, Portugal, May 12-16, 2008, Volume 2, 1007–1014.
- Ferrara, Pan, and Vardi (2005) Ferrara, A.; Pan, G.; and Vardi, M. Y. 2005. Treewidth in verification: Local vs. global. In International Conference on Logic for Programming Artificial Intelligence and Reasoning, 489–503. Springer.
- Gade et al. (2019) Gade, K.; Geyik, S. C.; Kenthapadi, K.; Mithal, V.; and Taly, A. 2019. Explainable AI in Industry. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, 3203–3204. New York, NY, USA: Association for Computing Machinery.
- Janzing, Minorics, and Bloebaum (2020) Janzing, D.; Minorics, L.; and Bloebaum, P. 2020. Feature relevance quantification in explainable AI: A causal problem. volume 108 of Proceedings of Machine Learning Research, 2907–2916. PMLR.
- Khosravi et al. (2019a) Khosravi, P.; Choi, Y.; Liang, Y.; Vergari, A.; and Van den Broeck, G. 2019a. On Tractable Computation of Expected Predictions. In Advances in Neural Information Processing Systems 32 (NeurIPS).
- Khosravi et al. (2019b) Khosravi, P.; Liang, Y.; Choi, Y.; and Van den Broeck, G. 2019b. What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, 2716–2724.
- Khosravi et al. (2020) Khosravi, P.; Vergari, A.; Choi, Y.; Liang, Y.; and Van den Broeck, G. 2020. Handling Missing Data in Decision Trees: A Probabilistic Approach. In The Art of Learning with Missing Values Workshop at ICML (Artemiss).
- Kumar et al. (2020) Kumar, I. E.; Venkatasubramanian, S.; Scheidegger, C.; and Friedler, S. 2020. Problems with Shapley-value-based explanations as feature importance measures. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020.
- Liang and Van den Broeck (2019) Liang, Y.; and Van den Broeck, G. 2019. Learning Logistic Circuits. In Proceedings of the 33rd Conference on Artificial Intelligence (AAAI).
- Lundberg et al. (2020) Lundberg, S. M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J. M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; and Lee, S. 2020. From Local Explanations to Global Understanding with Explainable AI for Trees. Nature Machine Intelligence 2: 56–67.
- Lundberg, Erion, and Lee (2018) Lundberg, S. M.; Erion, G. G.; and Lee, S.-I. 2018. Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888.
- Lundberg and Lee (2017) Lundberg, S. M.; and Lee, S. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in neural information processing systems (NIPS), 4765–4774.
- Merrick and Taly (2020) Merrick, L.; and Taly, A. 2020. The Explanation Game: Explaining Machine Learning Models Using Shapley Values. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction, 17–38. Springer.
- Ng and Jordan (2002) Ng, A. Y.; and Jordan, M. I. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. In Advances in neural information processing systems, 841–848.
- Provan and Ball (1983) Provan, J. S.; and Ball, M. O. 1983. The Complexity of Counting Cuts and of Computing the Probability that a Graph is Connected. SIAM J. Comput. 12(4): 777–788.
- Rendle (2010) Rendle, S. 2010. Factorization machines. In 2010 IEEE International Conference on Data Mining, 995–1000. IEEE.
- Roth (1988) Roth, A. e. 1988. The Shapley Value: Essays in Honor of Lloyd S. Shapley. Cambridge Univ. Press.
- Sang, Beame, and Kautz (2005) Sang, T.; Beame, P.; and Kautz, H. A. 2005. Performing Bayesian inference by weighted model counting. In AAAI, volume 5, 475–481.
- Slack et al. (2020) Slack, D.; Hilgard, S.; Jia, E.; Singh, S.; and Lakkaraju, H. 2020. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods. In AAAI/ACM Conference on AI, Ethics, and Society (AIES).
- Štrumbelj and Kononenko (2014) Štrumbelj, E.; and Kononenko, I. 2014. Explaining prediction models and individual predictions with feature contributions. Knowledge and information systems 41(3): 647–665.
- Sundararajan and Najmi (2020) Sundararajan, M.; and Najmi, A. 2020. The many Shapley values for model explanation. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020.
- Van den Broeck, Meert, and Darwiche (2014) Van den Broeck, G.; Meert, W.; and Darwiche, A. 2014. Skolemization for Weighted First-Order Model Counting. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fourteenth International Conference, KR 2014, Vienna, Austria, July 20-24, 2014.
- Vergari et al. (2020) Vergari, A.; Choi, Y.; Peharz, R.; and Van den Broeck, G. 2020. Probabilistic Circuits: Representations, Inference, Learning and Applications. AAAI Tutorial.
- Wei and Selman (2005) Wei, W.; and Selman, B. 2005. A new approach to model counting. In International Conference on Theory and Applications of Satisfiability Testing, 324–339. Springer.
Appendix A Discussion on the TreeSHAP algorithm
procedure EXPVALUE(x, S, tree = {v, a, b, t, r, d})
    procedure G(j)
        if v[j] ≠ internal then
            return v[j]
        else
            if d[j] ∈ S then
                return G(a[j]) if x[d[j]] ≤ t[j] else G(b[j])
            else
                return (r[a[j]] · G(a[j]) + r[b[j]] · G(b[j])) / r[j]
    return G(1)
Lundberg, Erion, and Lee (2018) propose TreeSHAP, a variant of Shap explanations for tree-based machine learning models such as decision trees, random forests and gradient boosted trees. The authors claim that, for the case when both the model and the probability distribution are defined by a tree-based model, the algorithm can compute the exact Shap explanations in polynomial time. However, it has been pointed out in GitHub discussions (e.g., https://github.com/christophM/interpretable-ml-book/issues/142) that the TreeSHAP algorithm does not compute the Shap explanation as defined in Section 2. In this section, we provide a concrete example of this shortcoming.
The main shortcoming of the TreeSHAP algorithm is captured by Algorithm 1. The authors claim that Algorithm 1 computes the conditional expectation $\mathbb{E}[F \mid x_S]$, for a given set of features $S$ and tree-based model $F$. We first describe the algorithm and then show by example that it does not accurately compute this conditional expectation.
Algorithm 1 takes as input a feature vector $x$, a set of features $S$, and a binary tree, which represents the tree-based model. The tree is defined by the following vectors: $v$ is a vector of node values; internal nodes are assigned the value internal. The vectors $a$ and $b$ represent the left and right node indexes for each internal node. The vector $t$ contains the thresholds for each internal node, and $d$ is a vector of indexes of the features used for splitting in internal nodes. The vector $r$ represents the cover of each node (i.e., how many data samples fall in that sub-tree).
The algorithm proceeds recursively in a top-down traversal of the tree. For internal nodes, the algorithm follows the decision path for $x$ if the split feature is in $S$, and takes the weighted average of both branches if the split feature is not in $S$. For leaf nodes, it returns the value of the node, which corresponds to the prediction of the model.
The algorithm does not accurately compute the conditional expectation $\mathbb{E}[F \mid x_S]$, because it does not normalize the expectation by the probability of the condition. The following simple example shows that the value returned by Algorithm 1 does not represent the conditional expectation.
Example 14.
We consider the following dataset and decision tree model. The dataset has two binary variables $X_1$ and $X_2$, and each instance is weighted by its occurrence count (i.e., the instance (0,0) occurs twice in the dataset). We want to compute $\mathbb{E}[F \mid X_1 = 0]$, where $F$ is the outcome of the decision tree.
$X_1$ | $X_2$ | #
---|---|---
0 | 0 | 2
0 | 1 | 1
1 | 0 | 1
1 | 1 | 2
The correct value is:

$$\mathbb{E}[F \mid X_1 = 0] \;=\; \tfrac{2}{3}\, F(0, 0) + \tfrac{1}{3}\, F(0, 1)$$

This is because there are three items with $X_1 = 0$, and their probabilities are $\tfrac{2}{3}$ and $\tfrac{1}{3}$.
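To make the discrepancy reproducible, here is a runnable rendering of Algorithm 1 applied to this example (our reconstruction; the concrete tree shape, with $X_2$ at the root, and the leaf labels $F(x_1, x_2)$ are assumptions made for illustration):

```python
def expvalue(x, S, node):
    """Our Python rendering of Algorithm 1; r is the cover of each node."""
    if 'value' in node:                        # leaf: return the model output
        return node['value']
    lo, hi = node['low'], node['high']         # branches for X_d = 0 and X_d = 1
    if node['d'] in S:
        return expvalue(x, S, lo if x[node['d']] == 0 else hi)
    return (lo['r'] * expvalue(x, S, lo) + hi['r'] * expvalue(x, S, hi)) / node['r']

F = lambda x1, x2: {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}[(x1, x2)]
leaf = lambda x1, x2, r: {'value': F(x1, x2), 'r': r}
tree = {'d': 1, 'r': 6,                        # root splits on X2 (covers 3 and 3)
        'low':  {'d': 0, 'r': 3, 'low': leaf(0, 0, 2), 'high': leaf(1, 0, 1)},
        'high': {'d': 0, 'r': 3, 'low': leaf(0, 1, 1), 'high': leaf(1, 1, 2)}}

print(expvalue((0, None), {0}, tree))  # 0.5*F(0,0) + 0.5*F(0,1) = 0.5
# The true conditional expectation is 2/3*F(0,0) + 1/3*F(0,1) = 2/3: the cover
# weights 3/6 replace the conditional probabilities 2/3 and 1/3.
```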
Appendix B Proof of Proposition 3
We start with item (1). Recall that $1 \in \mathrm{dom}(X_i)$ for all $i$. We denote by $p_i := \Pr(X_i = 1)$ their probabilities. By definition, the projected distribution is: $\Pr^\#(X^\#_i = 1) = p_i$, and $\Pr^\#(X^\#_i = 0) = 1 - p_i$. We denote by $\mathbb{E}^\#$ the corresponding expectation. Our goal is to prove $\mathrm{Shap}_{F, \Pr}(X_i) = \mathrm{Shap}_{F^\#, \Pr^\#}(X_i)$.
Let $e^\#_S$ again denote the event $\bigwedge_{i \in S} X^\#_i = 1$; note that, by construction, $\Pr^\#(e^\#_S) = \Pr(e_S)$ for any set $S$. Recall that for any instance $x^\# \in \{0,1\}^n$, we let $\gamma(x^\#)$ denote the event asserting that $X_i = 1$ iff $x^\#_i = 1$; formally,

$$\gamma(x^\#) \;=\; \bigwedge_{i:\, x^\#_i = 1} (X_i = 1) \;\wedge\; \bigwedge_{i:\, x^\#_i = 0} (X_i \neq 1)$$

The $2^n$ events $\gamma(x^\#)$ are disjoint and partition the space $\mathcal{X}$. Therefore, for every set $S$:

$$\mathbb{E}[F \mid e_S] \;=\; \sum_{x^\#:\, x^\#_S = e^\#_S} F^\#(x^\#)\, \Pr{}^\#(x^\# \mid e^\#_S) \;=\; \mathbb{E}^\#[F^\# \mid e^\#_S]$$

This implies that $v_F(S) = v_{F^\#}(S)$ for any set $S$, and $\mathrm{Shap}_F(X_i) = \mathrm{Shap}_{F^\#}(X_i)$ for all $i$ follows from Equation 6.
We now prove item (2): we show how to compute $\mathbb{E}[F^\#]$ given an oracle for computing $\mathbb{E}[F]$. Recall that we want to compute $\mathbb{E}[F^\#]$ on some arbitrary fully-factorized distribution on $\{0,1\}^n$; this should not be confused with the probability $\Pr^\#$ defined in Eq. 11 and used to define the function $F^\#$. To compute $\mathbb{E}[F^\#]$ we will use the oracle for computing $\mathbb{E}[F]$, on a probability space over $\mathcal{X}$ defined by numbers constructed as follows:
One can check that the numbers indeed define a probability space on , in other words and, for all : . We denote by the probability space that they define, and denote by the expectation of in this space. We prove:
Claim 3.
The claim immediately proves item (2) of Proposition 3: we simply invoke the oracle to compute , then multiply with the quantities and , both of which are computable in polynomial time. It remains to prove the claim.
We start with some observations and notations. Recall that the projection depends on both and on Pr, see Equation 11. We express it here in terms of the probabilities :
We used the fact that, for every instance , , and denoted by the set of feature indices for which example has value . We now prove the claim by applying directly the definition of :
In line 4 we noticed that the conditions and are equivalent, because , and that the assignment uniquely defines , hence can be dropped from the summation. This completes the proof of the claim, and of Proposition 3.
Appendix C Proof of Theorem 5
The number partitioning problem, NUMPAR, is the following: given natural numbers $a_1, \dots, a_n$, decide whether there exists a subset $S \subseteq [n]$ that partitions the numbers into two sets with equal sums: $\sum_{i \in S} a_i = \sum_{i \notin S} a_i$. NUMPAR is known to be NP-complete. The corresponding counting problem, in notation #NUMPAR, asks for the number of sets $S$ such that $\sum_{i \in S} a_i = \sum_{i \notin S} a_i$, and is #P-hard.
We show that we can solve the #NUMPAR problem using an oracle for computing $\mathbb{E}[F]$, where $F$ is a logistic regression function and the distribution is uniform. This implies that computing the expectation of a logistic regression function is #P-hard.
Fix an instance of NUMPAR, $a_1, \dots, a_n$, and assume w.l.o.g. that the sum of the numbers is even, $\sum_{i \in [n]} a_i = 2b$ for some natural number $b$. Let

$$N \;:=\; \#\Big\{S \subseteq [n] \;:\; \sum_{i \in S} a_i = b\Big\} \qquad (12)$$

For each set $S$, denote by $\bar{S} := [n] \setminus S$ its complement. Obviously, $\sum_{i \in S} a_i = b$ iff $\sum_{i \in \bar{S}} a_i = b$; therefore $N$ is an even number.
We next describe an algorithm that computes $N$ using an oracle for computing $\mathbb{E}[F]$, where $F$ is a logistic regression function and the distribution is uniform. Let $M$ be a natural number, large enough, to be chosen later, and define weights $\mathbf{w}$ as a function of $M$ and $a_1, \dots, a_n$:
Let $F$ be the logistic regression function defined by the weights $\mathbf{w}$.
Claim 4.
Let . If satisfies both and , then:
The claim immediately proves the theorem: in order to solve the #NUMPAR problem, compute $\mathbb{E}[F]$ and then use the formula above to derive $N$. To prove the claim, for each set $S$ denote by:
Let $\Pr$ denote the uniform probability distribution over the domain $\{0,1\}^n$. Then,
If $S$ is a solution to the number partitioning problem ($\sum_{i \in S} a_i = b$), then:
Otherwise, one of the two sums $\sum_{i \in S} a_i$, $\sum_{i \in \bar{S}} a_i$ is $\geq b + 1$ and the other is $\leq b - 1$, and therefore:
Since , and satisfies both and , we have:
This implies:
Thus, we have a lower and an upper bound for $N$. Since the difference between the two bounds is less than 1, there exists at most one integer number between them; hence $N$ is equal to the ceiling of the lower bound (and also to the floor of the upper bound), proving the claim.
Appendix D Proof of Theorem 8
We use a reduction from the decision version of the number partitioning problem, NUMPAR, which is NP-complete; see Sec. C.
As before, we assume w.l.o.g. that the sum of the numbers is even, $\sum_{i} a_i = 2b$ for some natural number $b$. Let $M$ be a large natural number, to be defined later. We reduce the NUMPAR problem to the $\mathrm{D\text{-}SHAP}(\{F_1\}, \mathrm{NB})$ problem. The naive Bayes network NBN consists of $n$ binary random variables $X_1, \dots, X_n$. Let $E_1$ and $E_0$ denote the events $X_1 = 1$ and respectively $X_1 = 0$. We define the following probabilities of the NBN:
The probabilities $\Pr(E_1)$ and $\Pr(E_0)$ can be chosen arbitrarily (with the obvious constraints that they are strictly positive and sum to 1). As required, our classifier is $F_1(x) = x_1$. We define a threshold value $t$ as above and prove:
Claim 5.
Let $t$ be the value defined above. If $M$ satisfies both conditions, then NUMPAR has a solution iff $\mathrm{Shap}(X_n) > t$.
The claim implies Theorem 8. To prove the claim, we express the Shap explanation using Eq. (6). Let $e_S$ denote the event $\bigwedge_{i \in S} X_i = 1$. Then, we can write the Shap explanation as:
Obviously, . In addition, we have , because there are sets of size , hence . Therefore , where:
(13)
To compute , we expand:
where:
We compute the sum in Eq. (13) by grouping each set $S$ with its complement $\bar{S}$:
(14)
If $S$ is a solution to the number partitioning problem, then:
Otherwise, one of the two sums $\sum_{i \in S} a_i$, $\sum_{i \in \bar{S}} a_i$ is $\geq b + 1$ and the other is $\leq b - 1$, and therefore:
As in Sec. C, we obtain:
Therefore, using the fact that , we derive these bounds for the expression (14) for :
• If the number partitioning problem has no solution, then , and .
• Otherwise, let $S$ be any solution to the NUMPAR problem, and , then:
and therefore .
Appendix E Proof of Corollary 10
Proof.
(Sketch) We use a reduction from the NUMPAR problem, as in the proof of Theorem 8. We start by constructing the NBN with variables (as for Theorem 8), then add two more variables , and edges , and define the random variables to be identical to , i.e. . The prediction function is , i.e. it returns the feature , and the variables are latent. Thus, the new BN is identical to the NBN, and, since both models have exactly the same number of non-latent variables, the Shap-explanation is the same. ∎
Appendix F Proof of Lemma 13 (1)
Fix a PP2CNF . A symmetric probability space is defined by two numbers and consists of the fully-factorized distribution where and . A quasi-symmetric probability space consists of two sets of indices and two numbers such that:
In this and the following section we prove Lemma 13: computing the Shap explanation over an empirical distribution is polynomial-time equivalent to computing the expectation of PP2CNF formulas over a (quasi-)symmetric distribution. Provan and Ball (1983) proved that computing the expectation of a PP2CNF over uniform distributions is #P-hard in general. Since uniform distributions are symmetric (namely $p = q = 1/2$), it follows that computing Shap explanations is #P-hard in general.
In this section we prove item (1) of Lemma 13. Fix a 0/1-matrix $M$ defining an empirical distribution, and let $F$ be a real-valued prediction function over its features. Let $\Phi_M$ be the PP2CNF associated to $M$ (see Definition 4). We will assume w.l.o.g. that $M$ has $n$ features (columns), denoted $X_1, \dots, X_n$. The prediction function is any function $F : \{0,1\}^n \to \mathbb{R}$. We prove:
Proposition 15.
One can compute $\mathrm{Shap}(X_j)$ in polynomial time using an oracle for computing $\mathbb{E}[\Phi_M]$ over quasi-symmetric distributions.
Denote by $f_\ell$ the value of $F$ on the $\ell$'th row of the matrix $M$. Since the only possible outcomes of the probability space are the rows of the matrix, the quantity $\mathrm{Shap}(X_j)$ depends only on the vector $(f_1, \dots, f_m)$. Furthermore, by the linearity of the Shap explanation (Eq. (4)), it suffices to compute the Shap explanation in the case when one $f_\ell$ has the value 1 and all others are 0. By permuting the rows of the matrix, we will assume w.l.o.g. that $f_1 = 1$ and $f_2 = \cdots = f_m = 0$. In summary, denoting by $F^{(1)}$ the function that is 1 on the first row of the matrix and is 0 on all other rows, our task is to compute $\mathrm{Shap}_{F^{(1)}}(X_j)$.
For that we use the following expression for Shap (see also Sec. 3):

$$\mathrm{Shap}(X_j) \;=\; \sum_{k=0}^{n-1} \frac{k!\,(n - k - 1)!}{n!} \Big( \sum_{\substack{S \subseteq [n] \setminus \{j\} \\ |S| = k}} \mathbb{E}[F^{(1)} \mid e_{S \cup \{j\}}] \;-\; \sum_{\substack{S \subseteq [n] \setminus \{j\} \\ |S| = k}} \mathbb{E}[F^{(1)} \mid e_S] \Big) \qquad (15)$$
We will only show how to compute the quantity

$$v(k) \;:=\; \sum_{\substack{S \subseteq [n] \setminus \{j\} \\ |S| = k}} \mathbb{E}[F^{(1)} \mid e_S] \qquad (16)$$

using an oracle for $\mathrm{QS\text{-}EXP}$, because the first quantity in (15) is computed similarly, by restricting the matrix to the rows where the feature $X_j$ is 1. The PP2CNF associated to this restricted matrix is obtained from $\Phi_M$ as follows. Let $R_j$ be the set of rows of the matrix where the feature $X_j$ is 1. Then, we need to remove all clauses of the form $U_i \vee V_{j'}$ for $i \notin R_j$. This is equivalent to setting $U_i := 1$ in $\Phi_M$ for $i \notin R_j$. Therefore, we can compute the expectation of the restricted formula by using our oracle for $\mathbb{E}[\Phi_M]$, running it over the probability space where we define $\Pr(U_i) = 1$ for all $i \notin R_j$. Hence, it suffices to show only how to compute the expression (16). Notice that the quantity (16) is the same as what we defined earlier in Eq. (7).
Column $j$ of the matrix is not used in expression (16), because the set $S$ ranges over subsets of $[n] \setminus \{j\}$. Hence w.l.o.g. we can drop feature $X_j$ and denote by $M$ (with some abuse) the matrix that only has the remaining $n - 1$ features. In other words, $M \in \{0,1\}^{m \times (n-1)}$. The PP2CNF formula for the modified matrix is obtained from $\Phi_M$ by setting $V_j := 1$, hence we can compute its expectation by using our oracle for $\mathbb{E}[\Phi_M]$.
We introduce the following quantities associated to the matrix $M$:
• For all $k$, $\ell$, we define: (17) (18)
• We define the sequence $v(k)$, $k = 0, \dots, n - 1$: (19)
• We define the value: (20)
We prove that, under a certain condition, the value $v(k)$ in Eq. (19) is equal to Eq. (16); this justifies the notation $v(k)$, since it turns out to be the same as in Eq. (7).
Definition 5.
Call the matrix $M$ “good” if $M_{1j} \geq M_{ij}$, for all $i \in [m]$, $j \in [n-1]$.
In other words, the matrix is “good” if the first row dominates all others. In general the matrix need not be “good”; however, we can make it “good” by removing all columns where row 1 has the value 0. More precisely, let $J_1$ denote the non-zero positions of the first row, and let $M'$ denote the sub-matrix of $M$ consisting of the columns $J_1$. Obviously, $M'$ is “good”, because its first row is $(1, \dots, 1)$. The following hold:
(When then the quantity is undefined). Therefore:
It follows that, in order to compute the values in Eq. (16), we can consider the matrix $M'$ instead of $M$; its associated PP2CNF is obtained from $\Phi_M$ by setting $V_j := 1$ for all $j \notin J_1$, hence we can compute its expectation over a quasi-symmetric space by using our oracle for computing $\mathbb{E}[\Phi_M]$ over quasi-symmetric spaces. To simplify the notation, we will still use the name $M$ for the matrix instead of $M'$, and assume w.l.o.g. that the first row of the matrix is $(1, \dots, 1)$.
We prove that, when $M$ is “good”, $v(k)$ is indeed the quantity in Eq. (16) that we want to compute. This holds for any “good” matrix, not just matrices with $(1, \dots, 1)$ in the first row, and we need this more general result later in Sec. G.
Claim 6.
If the matrix $M$ is “good”, then, for any $k$:
Proof.
Recall that . Let be any set of columns. We consider two cases, depending on whether is a subset of or not:
Therefore:
∎
At this point we introduce two polynomials, and .
Definition 6.
Fix an $m \times (n-1)$ matrix $M$ with 0/1-entries. The polynomials $P$ and $Q$ in real variables associated to the matrix $M$ are the following:
The polynomials are defined by summing over exponentially many sets , or pairs of sets . In the definition of , we use the function associated to the matrix , see Eq. (17). In the definition of we sum only those pairs where , , . While their definition involves exponentially many terms, these polynomials have only terms, because the degrees of the variables are and respectively. We claim that these terms are as follows:
Claim 7.
The following identities hold:
Proof.
The identity for follows immediately from the definition of . We prove the identity for . From the definition of in Eq. (17) we derive the following equivalence:
Which implies:
and the claim follows from . ∎
Thus, in order to compute the quantities for it suffices to compute the coefficients of the polynomial , and, for that, it suffices to compute the coefficients of the polynomial . For that, we establish the following important connection between and the polynomial . Fix any two positive real values, and let , ; notice that . Consider the probability space over independent Boolean variables where , , and , . Then:
Claim 8.
Given the notations above, the following identity holds:
(21) |
Proof.
A truth assignment for consists of two assignments, for the variables and for the variables . Defining and , we observe that iff , , and therefore identity (21) follows by summing the probabilities of all such assignments. ∎
Finally, to prove Lemma 13 (1), it suffices to show how to use an oracle for to compute the coefficients of the polynomial . We denote these coefficients by ; in other words:
(22)
To compute the coefficients , we proceed as follows. Choose distinct values , and choose distinct values , and for all and , use the oracle for to compute as per identity (21). This leads to a system of equations whose unknowns are the coefficients (see Eq. (22)) and whose coefficients are . The matrix of this system of equations is an matrix, whose rows are indexed by pairs , and whose columns are indexed by pairs :
We prove that this matrix is non-singular, and for that we observe that it is the Kronecker product of two Vandermonde matrices. Recall that the Vandermonde matrix defined by numbers is:
It is known that , and this determinant is nonzero iff the values are distinct. We observe that the matrix is the Kronecker product of two Vandermonde matrices:
Since we have chosen to be distinct, and similarly for , it follows that both Vandermonde matrices are non-singular, hence . Thus, we can solve this linear system of equations in time , and compute all coefficients .
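To make this interpolation step concrete, the following is a minimal numeric sketch in Python (with hypothetical degree bounds d1 and d2 standing in for the actual degrees): it recovers the coefficients of a bivariate polynomial from its values on a grid of distinct points by solving exactly the Kronecker-of-Vandermonde system described above.

import numpy as np

d1, d2 = 3, 2                                  # hypothetical degree bounds
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((d1 + 1, d2 + 1)) # "unknown" coefficients c[k, l]

def P(u, v):
    # evaluate sum_{k,l} c[k,l] * u**k * v**l
    return sum(coeffs[k, l] * u**k * v**l
               for k in range(d1 + 1) for l in range(d2 + 1))

us = np.arange(1, d1 + 2, dtype=float)  # d1+1 distinct evaluation points
vs = np.arange(1, d2 + 2, dtype=float)  # d2+1 distinct evaluation points

# Vandermonde matrices V1[i, k] = us[i]**k and V2[j, l] = vs[j]**l
V1 = np.vander(us, d1 + 1, increasing=True)
V2 = np.vander(vs, d2 + 1, increasing=True)

# System matrix: rows indexed by point pairs (i, j), columns by exponent
# pairs (k, l); it equals the Kronecker product V1 (x) V2, hence is
# nonsingular whenever the us are distinct and the vs are distinct.
A = np.kron(V1, V2)
b = np.array([P(u, v) for u in us for v in vs])
recovered = np.linalg.solve(A, b).reshape(d1 + 1, d2 + 1)
assert np.allclose(recovered, coeffs)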
Putting It Together. We now prove Proposition 15. We are given a 0/1 matrix with features and rows. To compute we proceed as follows:
1. For each , compute , where is the function defined as on row of the matrix, and on all other rows of the matrix. Return , where is the value of on the ’th row of the matrix.
2. To compute , switch rows and of the matrix, and compute on the modified matrix.
3. To compute , compute both sums in Eq. (15).
4.
5. Let ; notice that . Let . Let denote the PP2CNF obtained from by setting for all . Thus, has variables: for , and for .
6. Choose distinct values and distinct values . For each fixed combination , compute (see Claim 8); the value is taken over the probability space where, for all , and , and can be obtained by computing over a quasi-symmetric space (a brute-force illustration follows this list).
7. Using the results from the previous step, form a system of equations whose unknowns are the coefficients , , , of the polynomial , see (22). Solve for the coefficients .
8.
9.
10. This completes Step (3), and we obtain .
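For intuition, here is a brute-force sketch (exponential time, so illustrative only, and not the oracle reduction itself) of the quantity computed in step 6: the probability that a PP2CNF, given as an edge set E and assumed to have clauses of the form (X_i OR Y_j), evaluates to 1 when every X-variable is independently 1 with probability u and every Y-variable with probability v.

from itertools import product

def pp2cnf_prob(m, n, E, u, v):
    # Pr[Phi = 1] under the symmetric product distribution: each X_i ~ Bernoulli(u),
    # each Y_j ~ Bernoulli(v), all independent.
    total = 0.0
    for xs in product([0, 1], repeat=m):
        for ys in product([0, 1], repeat=n):
            if all(xs[i] or ys[j] for (i, j) in E):
                p = 1.0
                for b in xs:
                    p *= u if b else 1 - u
                for b in ys:
                    p *= v if b else 1 - v
                total += p
    return total

# Example: Phi = (X0 or Y0) and (X1 or Y0) and (X1 or Y1)
print(pp2cnf_prob(2, 2, [(0, 0), (1, 0), (1, 1)], u=0.5, v=0.5))  # 0.5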
Appendix G Proof of Lemma 13 (2)
Here we prove item (2) of Lemma 13: one can compute over a quasi-symmetric probability space in polynomial time, given an oracle for Shap on empirical distributions. If the probability space sets for some variable, then we can simply replace with , and similarly if . Hence, w.l.o.g., we can assume that the probability space is symmetric.
More precisely, we fix a PP2CNF formula , and let and define a symmetric probability space. Our task is to compute over this space, given an oracle for computing Shap-explanations on empirical distributions. Throughout this section we will use the notations introduced in Sec. F.
Let be the matrix associated to : iff contains a clause . We describe our algorithm for computing in three steps.
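Before detailing the steps, here is a minimal sketch of this matrix construction, assuming the PP2CNF is given as the set E of its clause index pairs:

def matrix_of_pp2cnf(m, n, E):
    # entry (i, j) is 1 iff the clause on X_i and Y_j occurs in the formula
    M = [[0] * n for _ in range(m)]
    for (i, j) in E:
        M[i][j] = 1
    return M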
Step 1: . More precisely, we claim that we can compute using an oracle for computing the quantities defined in Eq. (19). We have seen in Eq. (21) that where and . From Claim 7 we know that , and the coefficients of are the quantities defined in Eq. (18). To complete Step 1, we will describe a polynomial-time algorithm that computes the quantities associated to our matrix , with access to an oracle for computing the quantities associated to any matrix .
Starting from the matrix , construct new matrices, denoted by , where, for each , consists of the matrix extended with rows consisting of . That is, the matrix has rows, the first rows are , and the remaining rows are those in . We run our oracle to compute the quantities on each matrix . We continue to use the notations introduced in Equations (17), (18), (19) for the matrix , and add the superscript for the same quantities associated to the matrix . We observe:
and therefore:
By solving this system of equations, we compute the quantities for . The matrix of this system is a special case of Cauchy’s double alternant determinant:
where and , and therefore the matrix of the system is non-singular.
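As a sanity check on this non-singularity argument, the following numeric sketch verifies Cauchy's double alternant identity, det[1/(x_i + y_j)] = prod_{i<j} (x_j - x_i)(y_j - y_i) / prod_{i,j} (x_i + y_j), on a small instance (with arbitrary distinct values, not the specific entries arising in the proof); the determinant is nonzero whenever the x's are pairwise distinct and the y's are pairwise distinct.

import numpy as np

x = np.array([1.0, 2.0, 4.0])
y = np.array([0.5, 3.0, 5.0])
C = 1.0 / (x[:, None] + y[None, :])   # Cauchy matrix

num = np.prod([(x[j] - x[i]) * (y[j] - y[i])
               for i in range(len(x)) for j in range(i + 1, len(x))])
den = np.prod(x[:, None] + y[None, :])
assert np.isclose(np.linalg.det(C), num / den)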
We observe that all matrices are “good” (see Definition 5), because their first row is .
Step 2: Let be a “good” matrix (Definition 5). Then: ( defined in Eq. (20)). In other words, given a matrix , we claim that we can compute the quantities associated to by Eq. (19) in polynomial time, given access to an oracle for computing the quantity associated to any matrix . The algorithm proceeds as follows. For each , construct a new matrix by extending with new columns set to and new columns set to . Thus, is:
Notice that is “good”, for any . We run the oracle on each matrix to compute the quantity . We start by observing the following relationships between the parameters of the matrix and those of the matrix :
Notice that, when , then . We use the oracle to compute the quantity , which is:
Thus, after running the oracle on all matrices , we obtain a system of equations with unknowns . It remains to prove that the system's matrix, , is non-singular. Let us denote the following matrices:
It is immediate to verify that , so it suffices to prove , . We start with , and for that consider the Vandermonde matrix , . Denoting , we have that
is also a Vandermonde matrix . We have when are distinct, proving that .
Finally, we prove . For that, we prove a slightly more general result. For any , denote by the following matrix:
We will prove that ; our claim follows from the special case . For the base case, , because is a matrix equal to , hence . To show the induction step, we will perform elementary column operations (which preserve the determinant) to make the last row of the resulting matrix consist of zeros, except for the last entry.
Consider an arbitrary row , and two adjacent columns in that row:
We use the fact that and rewrite the two adjacent elements as:
Now, for each , we subtract column , multiplied by , from column . The last row becomes , which means that is equal to times the upper-left minor.
Now we check what happens to the element at position . After the subtraction, it becomes
This expression can be rewritten as:
Note that this rewriting applies throughout the upper-left minor of : the element in the lower-right corner of the matrix remains . Observe that the -th entry of this minor is precisely the -entry of , multiplied by . Here is a global constant, is the same constant in the entire row , and is the same constant in the entire column . We factor out the global constant , factor out from each row , and factor out from each column , and obtain the following recurrence:
It follows by induction on that .
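The induction rests on two standard determinant facts, illustrated below on an arbitrary matrix (a generic sketch, not the specific matrix of the proof): elementary column operations preserve the determinant, and factoring a constant out of a row or column scales the determinant by that constant.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

B = A.copy()
B[:, 2] -= 0.7 * B[:, 1]           # subtract a multiple of one column from another
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

C = A.copy()
C[1, :] *= 5.0                     # scale an entire row by 5
assert np.isclose(np.linalg.det(C), 5.0 * np.linalg.det(A))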
Step 3: Let be a “good” matrix (Definition 5). Then . More precisely, we claim that we can compute the quantity associated to a matrix as defined in Eq. (20) in polynomial time, by using an oracle for computing over any matrix .
We modify the matrix as follows. We add a new attribute whose value is 1 only in the first row, and let denote the function that returns the value of feature . We show here the new matrix , augmented with the values of the function :
We run our oracle to compute over the matrix . The value is given by Eq. (15), but notice that the matrix has columns, while Eq. (15) is given for a matrix with columns. Therefore, since for any set , we have:
Since is “good”, so is the new matrix and, by Claim 6, for any
This implies that we can use the value returned by the oracle to compute the quantity:
which completes Step 3.
Putting It Together. Given a PP2CNF formula , and two probability values and , to compute we proceed as follows:
• Construct the 0,1-matrix associated to ; denote it .
• Construct matrices , , by adding rows at the beginning of the matrix.
• For each matrix , construct matrices , , by adding columns, of which the first columns are 1 and the others are 0.
• For each , construct one new matrix by adding a column . Call this new column .
• Use the oracle to compute . From here, compute the value associated with the matrix .
• Using the values , compute the values associated to the matrix .
• For each , use the values to compute the coefficients associated to the matrix .
• At this point we have all coefficients of the polynomial .
• Compute the coefficients of the polynomial .
• Finally, return .
This concludes the entire proof of Lemma 13.