
Explanations for Monotonic Classifiers

Joao Marques-Silva    Thomas Gerspacher    Martin Cooper    Alexey Ignatiev    Nina Narodytska
Abstract

In many classification tasks there is a requirement of monotonicity. Concretely, if all else remains constant, increasing (resp. decreasing) the value of one or more features must not decrease (resp. increase) the value of the prediction. Despite comprehensive efforts on learning monotonic classifiers, dedicated approaches for explaining monotonic classifiers are scarce and classifier-specific. This paper describes novel algorithms for the computation of one formal explanation of a (black-box) monotonic classifier. These novel algorithms are polynomial in the run time complexity of the classifier and the number of features. Furthermore, the paper presents a practically efficient model-agnostic algorithm for enumerating formal explanations.

Machine Learning, ICML

1 Introduction

Monotonicity is an often-required constraint in practical applications of machine learning. Broadly, a monotonicity constraint requires that increasing (resp. decreasing) the value of one or more features, while keeping the other features constant, must not cause the prediction to decrease (resp. increase). Monotonicity has been investigated in the context of classification (Cano et al., 2019), including neural networks (Sill, 1997; Magdon-Ismail & Sill, 2008; Bonakdarpour et al., 2018; Sivaraman et al., 2020; Liu et al., 2020), random forests (Bartley et al., 2019) and rule ensembles (Bartley et al., 2018), decision trees (Ben-David et al., 1989; Ben-David, 1995), decision lists (Potharst & Bioch, 2000) and decision rules (Verbeke et al., 2017), support vector machines (Bartley et al., 2016), and nearest-neighbor classifiers (Duivesteijn & Feelders, 2008), among others (Fard et al., 2016; Gupta et al., 2016; You et al., 2017; Bonakdarpour et al., 2018). Monotonicity has also been studied in Bayesian networks (van der Gaag et al., 2004; Shih et al., 2018), active learning (Barile & Feelders, 2012) and, more recently, in fairness (Wang & Gupta, 2020).

To a much lesser extent, monotonicity has also been studied from the perspective of explainability, with one recent example being the study of the explainability of monotonic Bayesian networks (Shih et al., 2018). This work proposes to compile different families of Bayesian networks, including naive Bayes and monotonic networks, into a decision diagram, which can then be used for computing PI-explanations. (Given some feature space point $\mathbf{v}$, a PI-explanation is a subset-minimal set of features whose assignment to the corresponding coordinate values in $\mathbf{v}$ is sufficient for the prediction.) Approaches based on an intermediate (knowledge) compilation step have two main drawbacks: their worst-case complexity, which is exponential both in time and in the size of the representation, and the fact that these approaches are not model-agnostic, i.e. some formal logic representation of the model must be known and reasoned about. Clearly, model-agnostic heuristic approaches, which include LIME (Ribeiro et al., 2016), SHAP (Lundberg & Lee, 2017), and Anchor (Ribeiro et al., 2018), can also be applied to explaining monotonic classifiers. However, these approaches do not readily exploit monotonicity, and both their theoretical and practical performance may be discouraging; in fact, there are recent negative results on the tractability of exact SHAP learning (Van den Broeck et al., 2020). Furthermore, heuristic approaches offer no formal guarantees of rigor, e.g. an Anchor explanation may be consistent with points in feature space for which the model's prediction differs from the target prediction (Ignatiev, 2020).

On a more positive note, recent work proposed polynomial-time exact algorithms for computing PI-explanations of different classes of classifiers (Marques-Silva et al., 2020), namely linear and naive Bayes classifiers. These results were complemented by the observation that, for ML models related to some classes of knowledge representation languages, PI-explanations can also be computed in polynomial time (Audemard et al., 2020).

This paper extends these initial results to the case of monotonic classifiers, in a number of ways. First, the paper proposes model-agnostic algorithms for computing PI-explanations and contrastive explanations (Miller, 2019) for any monotonic ML model. Second, the complexity of the proposed algorithms is shown to be polynomial in the time required to run the (black-box) monotonic classifier and in the number of features. Third, the paper proposes an algorithm for the iterative enumeration of formal explanations. (The term formal explanation is used in contrast with heuristic explanation (Ribeiro et al., 2016; Lundberg & Lee, 2017; Ribeiro et al., 2018), and it is defined precisely in Section 2.) This enumeration algorithm is worst-case exponential, but it is shown to be remarkably efficient in practice.

The paper is organized as follows. Section 2 introduces the notation and definitions used in the rest of the paper. Section 3 details algorithms for computing one or more formal explanations of monotonic classifiers. Section 4 summarizes initial experiments, which confirm the scalability of the proposed algorithms. The paper concludes in Section 5.

2 Preliminaries

Classification problems.

A classification problem is defined on a set of features (or attributes) $\mathcal{F}=\{1,\ldots,N\}$ and a set of classes $\mathcal{K}=\{c_1,c_2,\ldots,c_M\}$. Each feature $i\in\mathcal{F}$ takes values from a domain $\mathbb{D}_i$. Domains are ordinal and bounded, and each domain can be defined on boolean, integer or real values. If $x_i\in\mathbb{D}_i$, then $\lambda(i)$ and $\mu(i)$ denote respectively the smallest and largest values that $x_i$ can take, i.e. $\lambda(i)\leq x_i\leq\mu(i)$. Feature space is defined as $\mathbb{F}=\mathbb{D}_1\times\mathbb{D}_2\times\ldots\times\mathbb{D}_N$. The notation $\mathbf{x}=(x_1,\ldots,x_N)$ denotes an arbitrary point in feature space, where each $x_i$ is a variable taking values from $\mathbb{D}_i$. Moreover, the notation $\mathbf{v}=(v_1,\ldots,v_N)$ represents a specific point in feature space, where each $v_i$ is a constant representing one concrete value from $\mathbb{D}_i$. An instance (or example) denotes a pair $(\mathbf{v},c)$, where $\mathbf{v}\in\mathbb{F}$ and $c\in\mathcal{K}$. (We also use the term instance to refer to $\mathbf{v}$, leaving $c$ implicit.) An ML classifier $\mathbb{C}$ is characterized by a classification function $\kappa$ that maps feature space $\mathbb{F}$ into the set of classes $\mathcal{K}$, i.e. $\kappa:\mathbb{F}\to\mathcal{K}$.

Monotonic classification.

Given two points in feature space $\mathbf{a}$ and $\mathbf{b}$, $\mathbf{a}\leq\mathbf{b}$ if $a_i\leq b_i$ for all $i\in\{1,\ldots,N\}$. A set of classes $\mathcal{K}=\{c_1,\ldots,c_M\}$ is ordered if it respects a total order $\preccurlyeq$, with $c_1\preccurlyeq c_2\preccurlyeq\ldots\preccurlyeq c_M$. An ML classifier $\mathbb{C}$ is fully monotonic if the associated classification function is monotonic, i.e. $\mathbf{a}\leq\mathbf{b}\Rightarrow\kappa(\mathbf{a})\preccurlyeq\kappa(\mathbf{b})$. (The paper adopts the classification of monotonic classifiers proposed in earlier work (Daniels & Velikova, 2010).) Throughout the paper, when referring to a monotonic classifier, this signifies a fully monotonic classifier. In addition, the interaction with a classifier is restricted to computing the value of $\kappa(\mathbf{v})$ for some point $\mathbf{v}\in\mathbb{F}$, i.e. the classifier is viewed as a black box.

Example 1 (Running example).

Let us consider a classifier for predicting student grades. We assume that the classifier has learned the following formula (after being trained with grades of students from different cohorts):

$$\begin{array}{rcl}
S & = & \max\left[0.3\times Q + 0.6\times X + 0.1\times H,\; R\right]\\
M & = & \mathrm{ite}(S\geq 9, A, \mathrm{ite}(S\geq 7, B, \mathrm{ite}(S\geq 5, C, \mathrm{ite}(S\geq 4, D, \mathrm{ite}(S\geq 2, E, F)))))
\end{array}$$

$S$, $Q$, $X$, $H$ and $R$ denote, respectively, the final score, the marks on the quiz, the exam and the homework, and the mark of an optional research project. Each mark ranges from 0 to 10. (The optional mark $R$ is 0 if the student opts out.) The final score is the larger of the two marks, as shown above. Moreover, the final grade $M$ is defined using an ite (if-then-else) operator, and ranges from $A$ to $F$. As a result, $Q$, $X$, $H$ and $R$ represent the features of the classification problem, respectively numbered 1, 2, 3 and 4, and so $\mathcal{F}=\{1,2,3,4\}$. Each feature takes values from $[0,10]$, i.e. $\lambda(i)=0$ and $\mu(i)=10$. The set of classes is $\mathcal{K}=\{A,B,C,D,E,F\}$, with $F\preccurlyeq E\preccurlyeq D\preccurlyeq C\preccurlyeq B\preccurlyeq A$. Clearly, the complete classifier (which, given the different marks, computes a final grade) is monotonic. Moreover, we consider a specific point of feature space representing student $s_1$, $(Q,X,H,R)=(10,10,5,0)$, with a predicted grade of $A$, i.e. $\kappa(10,10,5,0)=A$.
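For concreteness, the running-example classifier admits a direct implementation. The sketch below is in Python; the function name kappa and the feature tuple ordering (Q, X, H, R) are our choices for illustration, not part of the paper's notation:

```python
def kappa(v):
    """Grade classifier of the running example; v = (Q, X, H, R)."""
    Q, X, H, R = v
    S = max(0.3 * Q + 0.6 * X + 0.1 * H, R)  # final score
    # ite(S >= 9, A, ite(S >= 7, B, ...)) as a cascade of thresholds
    for threshold, grade in ((9, 'A'), (7, 'B'), (5, 'C'), (4, 'D'), (2, 'E')):
        if S >= threshold:
            return grade
    return 'F'

print(kappa((10, 10, 5, 0)))  # student s1: prints A
```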

Abductive and contrastive explanations.

We now define formal explanations. Prime implicant (PI) explanations (Shih et al., 2018) denote a minimal set of literals (relating a feature value $x_i$ and a constant $v_i$ from its domain $\mathbb{D}_i$) that are sufficient for the prediction. (PI-explanations are related to abduction, and so are also referred to as abductive explanations (AXp) (Ignatiev et al., 2019). More recently, PI-explanations have been studied from a knowledge compilation perspective (Audemard et al., 2020).) Formally, given $\mathbf{v}=(v_1,\ldots,v_N)\in\mathbb{F}$ with $\kappa(\mathbf{v})=c$, a PI-explanation (AXp) is any minimal subset $\mathcal{X}\subseteq\mathcal{F}$ such that,

$$\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge\nolimits_{i\in\mathcal{X}}(x_i=v_i)\right]\rightarrow(\kappa(\mathbf{x})=c) \qquad (1)$$

AXp's can be viewed as answering a 'Why?' question, i.e. why is some prediction made given some point in feature space. A different view of explanations is a contrastive explanation (Miller, 2019), which answers a 'Why not?' question, i.e. which features can be changed to change the prediction. A formal definition of contrastive explanation is proposed in recent work (Ignatiev et al., 2020). Given $\mathbf{v}=(v_1,\ldots,v_N)\in\mathbb{F}$ with $\kappa(\mathbf{v})=c$, a CXp is any minimal subset $\mathcal{Y}\subseteq\mathcal{F}$ such that,

$$\exists(\mathbf{x}\in\mathbb{F}).\bigwedge\nolimits_{j\in\mathcal{F}\setminus\mathcal{Y}}(x_j=v_j)\land(\kappa(\mathbf{x})\neq c) \qquad (2)$$

Building on the seminal results of R. Reiter in model-based diagnosis (Reiter, 1987), recent work (Ignatiev et al., 2020) proves a minimal hitting set (MHS) duality relation between AXp's and CXp's, i.e. AXp's are MHSes of CXp's and vice-versa.

Example 2 (AXp’s & CXp’s).

As can be readily observed (from the expression for $M$ in Example 1), as long as $Q$ and $X$ take value 10, the prediction will be $A$, independently of the values given to $H$ and $R$. Hence, given $(Q,X,H,R)=(10,10,5,0)$, one AXp is $\{1,2\}$. Moreover, to obtain a different prediction, it suffices to allow a suitable change of value in $Q$ (or alternatively in $X$). Hence, given $(Q,X,H,R)=(10,10,5,0)$, one CXp is $\{1\}$ (and another is $\{2\}$). As can be observed, $\{1,2\}$ is the only MHS of $\{\{1\},\{2\}\}$ and vice-versa. These are the only AXp's and CXp's for the example instance.
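By monotonicity, whether a set of fixed features satisfies (1) can be decided by querying the classifier at just two extreme points, where the free features take their smallest and largest values; this observation underlies the algorithms of Section 3. A small sanity check of the sets above, reusing a hypothetical Python implementation kappa of the classifier of Example 1 (features are 0-based here):

```python
def kappa(v):
    Q, X, H, R = v
    S = max(0.3 * Q + 0.6 * X + 0.1 * H, R)
    for t, g in ((9, 'A'), (7, 'B'), (5, 'C'), (4, 'D'), (2, 'E')):
        if S >= t:
            return g
    return 'F'

v = (10, 10, 5, 0)

def is_weak_axp(fixed):
    """By monotonicity, (1) holds for `fixed` iff the two extreme points
    (free features at 0 and at 10) both yield the original prediction."""
    lo = tuple(v[i] if i in fixed else 0 for i in range(4))
    hi = tuple(v[i] if i in fixed else 10 for i in range(4))
    return kappa(lo) == kappa(v) == kappa(hi)

assert is_weak_axp({0, 1})         # fixing Q and X suffices: {1,2} is an AXp
assert not is_weak_axp({1, 2, 3})  # freeing Q alone flips it: {1} is a CXp
assert not is_weak_axp({0, 2, 3})  # freeing X alone flips it: {2} is a CXp
```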

Boolean satisfiability (SAT).

SAT is the decision problem for propositional logic. The paper uses standard notation and definitions, e.g. (Biere et al., 2009). A propositional formula is defined on a set $U$ of boolean variables, where the domain of each variable $u_i\in U$ is $\{0,1\}$. We consider conjunctive normal form (CNF) formulas, where a formula is a conjunction of clauses, each clause is a disjunction of literals, and a literal is a variable $u_i$ or its negation $\neg u_i$. CNF formulas and SAT reasoners are used in Section 3.2.

3 Explanations for Monotonic Classifiers

This section describes three algorithms. The first algorithm serves to compute one AXp (and is referred to as $\mathsf{findAXp}$). Its complexity is polynomial in the run time complexity of the classifier. The second algorithm serves to compute one CXp (and is referred to as $\mathsf{findCXp}$). It has the same polynomial complexity as $\mathsf{findAXp}$. The third algorithm shows how to use SAT reasoners for iteratively enumerating AXp's or CXp's. This algorithm is inspired by earlier work (Liffiton et al., 2016), but with key observations that minimize the number of times a SAT reasoner is called. This algorithm is based on the other two algorithms, and is described in Section 3.2.

One key property of the three algorithms is that, besides knowing that the classifier is monotonic, no additional information about the classifier is required. Indeed, the algorithms described in this section only require running the classifier on specific points in feature space. Thus, similarly to LIME (Ribeiro et al., 2016), SHAP (Lundberg & Lee, 2017) and Anchor (Ribeiro et al., 2018), the algorithms proposed in this section are model-agnostic. However, in contrast with LIME, SHAP and Anchor, the proposed algorithms compute rigorously defined AXp's and CXp's, and also serve for the enumeration of explanations.

3.1 Finding One AXp and One CXp

The two algorithms $\mathsf{findAXp}$ and $\mathsf{findCXp}$ (shown as Algorithm 1 and Algorithm 2) share a number of common concepts, while solving different problems. These concepts are summarized next.

Algorithm 1 Finding one AXp – $\mathsf{findAXp}(\mathcal{F},\mathcal{S},\mathbf{v})$
1: $\mathbf{v}_{\text{L}} \leftarrow (v_1,\ldots,v_N)$
2: $\mathbf{v}_{\text{U}} \leftarrow (v_1,\ldots,v_N)$ // Ensures: $\kappa(\mathbf{v}_{\text{L}})=\kappa(\mathbf{v}_{\text{U}})$
3: $(\mathcal{C},\mathcal{D},\mathcal{P}) \leftarrow (\mathcal{F},\emptyset,\emptyset)$
4: for all $i\in\mathcal{S}$ do
5:    $(\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{C},\mathcal{D}) \leftarrow \mathsf{FreeAttr}(i,\mathbf{v},\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{C},\mathcal{D})$
6: end for // Require: $\kappa(\mathbf{v}_{\text{L}})=\kappa(\mathbf{v}_{\text{U}})$, given $\mathcal{S}$
7: for all $i\in\mathcal{F}\setminus\mathcal{S}$ do // Loop inv.: $\kappa(\mathbf{v}_{\text{L}})=\kappa(\mathbf{v}_{\text{U}})$
8:    $(\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{C},\mathcal{D}) \leftarrow \mathsf{FreeAttr}(i,\mathbf{v},\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{C},\mathcal{D})$
9:    if $\kappa(\mathbf{v}_{\text{L}})\neq\kappa(\mathbf{v}_{\text{U}})$ then // If invariant broken, fix it
10:       $(\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{D},\mathcal{P}) \leftarrow \mathsf{FixAttr}(i,\mathbf{v},\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{D},\mathcal{P})$
11:    end if
12: end for
13: return $\mathcal{P}$
Algorithm 2 Finding one CXp – $\mathsf{findCXp}(\mathcal{F},\mathcal{S},\mathbf{v})$
1: $\mathbf{v}_{\text{L}} \leftarrow (\lambda(1),\ldots,\lambda(N))$
2: $\mathbf{v}_{\text{U}} \leftarrow (\mu(1),\ldots,\mu(N))$ // Ensures: $\kappa(\mathbf{v}_{\text{L}})\neq\kappa(\mathbf{v}_{\text{U}})$
3: $(\mathcal{C},\mathcal{D},\mathcal{P}) \leftarrow (\mathcal{F},\emptyset,\emptyset)$
4: for all $i\in\mathcal{S}$ do
5:    $(\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{C},\mathcal{D}) \leftarrow \mathsf{FixAttr}(i,\mathbf{v},\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{C},\mathcal{D})$
6: end for // Require: $\kappa(\mathbf{v}_{\text{L}})\neq\kappa(\mathbf{v}_{\text{U}})$, given $\mathcal{S}$
7: for all $i\in\mathcal{F}\setminus\mathcal{S}$ do // Loop inv.: $\kappa(\mathbf{v}_{\text{L}})\neq\kappa(\mathbf{v}_{\text{U}})$
8:    $(\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{C},\mathcal{D}) \leftarrow \mathsf{FixAttr}(i,\mathbf{v},\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{C},\mathcal{D})$
9:    if $\kappa(\mathbf{v}_{\text{L}})=\kappa(\mathbf{v}_{\text{U}})$ then // If invariant broken, fix it
10:       $(\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{D},\mathcal{P}) \leftarrow \mathsf{FreeAttr}(i,\mathbf{v},\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{D},\mathcal{P})$
11:    end if
12: end for
13: return $\mathcal{P}$

The two algorithms iteratively update three sets of features ($\mathcal{C}$, $\mathcal{D}$ and $\mathcal{P}$) and two points in feature space ($\mathbf{v}_{\text{L}}$ and $\mathbf{v}_{\text{U}}$). Using these variables, the two algorithms maintain two invariants. The first invariant is that $\mathcal{C}$, $\mathcal{D}$ and $\mathcal{P}$ form a partition of $\mathcal{F}$, and represent respectively the candidate, dropped and picked sets of features (with the picked features denoting those that are included either in an AXp or a CXp). The second invariant serves to ensure that the selected set of features satisfies (1) (for $\mathsf{findAXp}$) or (2) (for $\mathsf{findCXp}$). Maintaining this invariant requires iteratively updating two points $\mathbf{v}_{\text{L}}=(v_{\text{L}_1},\ldots,v_{\text{L}_N})$ and $\mathbf{v}_{\text{U}}=(v_{\text{U}_1},\ldots,v_{\text{U}_N})$, denoting respectively lower and upper bounds on the class values that can be obtained given the features that are allowed to take any value in their domain.

Feat. | Initial $\mathbf{v}_{\text{L}}$ / $\mathbf{v}_{\text{U}}$ | Changed $\mathbf{v}_{\text{L}}$ / $\mathbf{v}_{\text{U}}$ | $\kappa(\mathbf{v}_{\text{L}})$ / $\kappa(\mathbf{v}_{\text{U}})$ | Dec. | Resulting $\mathbf{v}_{\text{L}}$ / $\mathbf{v}_{\text{U}}$
1 | (10,10,5,0) / (10,10,5,0) | (0,10,5,0) / (10,10,5,0) | $C$ / $A$ | fix | (10,10,5,0) / (10,10,5,0)
2 | (10,10,5,0) / (10,10,5,0) | (10,0,5,0) / (10,10,5,0) | $E$ / $A$ | fix | (10,10,5,0) / (10,10,5,0)
3 | (10,10,5,0) / (10,10,5,0) | (10,10,0,0) / (10,10,10,0) | $A$ / $A$ | free | (10,10,0,0) / (10,10,10,0)
4 | (10,10,0,0) / (10,10,10,0) | (10,10,0,0) / (10,10,10,10) | $A$ / $A$ | free | (10,10,0,0) / (10,10,10,10)
Table 1: Execution of algorithm for finding one AXp

Finding one AXp.

We detail below the main steps of algorithm $\mathsf{findAXp}$ (see Algorithm 1). (Lines 4 to 5 are used for enumerating explanations, and so we assume $\mathcal{S}=\emptyset$ for now.) The main goal of $\mathsf{findAXp}$ is to find a maximal set of features $\mathcal{D}$ which are allowed to take any value, i.e. that are free. For such a set $\mathcal{D}$, the set of features that remain fixed to the value specified in $\mathbf{v}$, i.e. $\mathcal{P}=\mathcal{F}\setminus\mathcal{D}$, is a minimal set of (picked) features that is sufficient for the prediction, as intended. The different sets used by the algorithm are initialized in line 3. (As noted earlier, the sets $\mathcal{C}$, $\mathcal{D}$ and $\mathcal{P}$ form a partition of $\mathcal{F}$, and $\mathcal{C}=\emptyset$ upon termination.)

For $\mathsf{findAXp}$, the second invariant of the algorithm is that $\kappa(\mathbf{v}_{\text{L}})=\kappa(\mathbf{v}_{\text{U}})$, i.e. by letting the features in $\mathcal{P}\cup\mathcal{C}$ take the corresponding value in $\mathbf{v}$, the value of the prediction is guaranteed not to change.

The use of the second invariant $\kappa(\mathbf{v}_{\text{L}})=\kappa(\mathbf{v}_{\text{U}})$ is justified by the following result.

Proposition 1.

If $\kappa(\mathbf{v}_{\text{L}})=\kappa(\mathbf{v}_{\text{U}})$, then it holds that $\forall(\mathbf{x}\in\mathbb{F}).[\mathbf{v}_{\text{L}}\leq\mathbf{x}\leq\mathbf{v}_{\text{U}}]\rightarrow[\kappa(\mathbf{x})=\kappa(\mathbf{v})]$.

The algorithm starts by enforcing the second invariant as the result of executing lines 1 and 2.

Moreover, $\mathsf{findAXp}$ analyzes one feature at a time. Starting from the set $\mathcal{C}$ of candidate features (in line 7), the algorithm iteratively picks a feature $i$ from $\mathcal{C}$ and makes a decision about whether to drop the feature from the explanation. The first step is to assume that the feature $i$ can indeed be allowed to take any value. This is done in line 8, by calling the following function $\mathsf{FreeAttr}$:

  $\mathbf{v}_{\text{L}} \leftarrow (v_{\text{L}_1},\ldots,\lambda(i),\ldots,v_{\text{L}_N})$
  $\mathbf{v}_{\text{U}} \leftarrow (v_{\text{U}_1},\ldots,\mu(i),\ldots,v_{\text{U}_N})$
  $(\mathcal{A},\mathcal{B}) \leftarrow (\mathcal{A}\setminus\{i\},\mathcal{B}\cup\{i\})$
  return $(\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{A},\mathcal{B})$

where $\mathcal{A}$ is replaced by $\mathcal{C}$ and $\mathcal{B}$ is replaced by $\mathcal{D}$, and so feature $i$ is moved from $\mathcal{C}$ to $\mathcal{D}$. In addition, the value of $i$ is now allowed to range from $\lambda(i)$ (in $\mathbf{v}_{\text{L}}$) to $\mu(i)$ (in $\mathbf{v}_{\text{U}}$).

The next step of the algorithm (in line 9) is to decide whether allowing $i$ to take any value breaks the invariant $\kappa(\mathbf{v}_{\text{L}})=\kappa(\mathbf{v}_{\text{U}})$. If the invariant is not broken, then the algorithm moves to analyze the next feature (in line 7). However, if the invariant is broken, then the feature cannot take any value, and so it must be fixed to the corresponding value in $\mathbf{v}$. This is done by calling (in line 10) the following function $\mathsf{FixAttr}$:

  $\mathbf{v}_{\text{L}} \leftarrow (v_{\text{L}_1},\ldots,v_i,\ldots,v_{\text{L}_N})$
  $\mathbf{v}_{\text{U}} \leftarrow (v_{\text{U}_1},\ldots,v_i,\ldots,v_{\text{U}_N})$
  $(\mathcal{A},\mathcal{B}) \leftarrow (\mathcal{A}\setminus\{i\},\mathcal{B}\cup\{i\})$
  return $(\mathbf{v}_{\text{L}},\mathbf{v}_{\text{U}},\mathcal{A},\mathcal{B})$

where $\mathcal{A}$ is replaced by $\mathcal{D}$ and $\mathcal{B}$ is replaced by $\mathcal{P}$, and so feature $i$ is moved from $\mathcal{D}$ to $\mathcal{P}$. In addition, the value of $i$ is once again fixed to the corresponding value in $\mathbf{v}$. After analyzing all features, the algorithm $\mathsf{findAXp}$ terminates (in line 13) by returning the (minimal) set of features $\mathcal{P}$ that are fixed to their value in $\mathbf{v}$. It is immediate to conclude that each feature is analyzed once, and that, for each feature, the classifier is invoked twice. Given the discussion above, we conclude that,

Theorem 1.

Given a monotonic classifier and an instance $\mathbf{v}$ with prediction $c=\kappa(\mathbf{v})$, Algorithm 1 computes one AXp in time linear in the running time complexity of the classifier.

We illustrate the operation of $\mathsf{findAXp}$ with an example.

Example 3.

Given the monotonic classifier from Example 1, and the concrete case of student $s_1$, with $(Q,X,H,R)=(10,10,5,0)$ and predicted mark $A$, we show how one PI-explanation can be computed. (In settings with more than one AXp, changing the order in which features are analyzed may result in a different explanation being obtained.) For each feature $i$, $1\leq i\leq 4$, $\lambda(i)=0$ and $\mu(i)=10$. Moreover, features are analyzed in order $\langle 1,2,3,4\rangle$; the order is arbitrary. The algorithm's execution is summarized in Table 1. As can be observed, features 1 and 2 are kept as part of the PI-explanation (the decision in line 9 is to fix, i.e. the invariant is broken and the features are kept), whereas features 3 and 4 are dropped from the PI-explanation (the decision is to keep them free, i.e. the invariant holds). As a result, the PI-explanation for the grade of student $s_1$ is $\{1,2\}$, which denotes that as long as $(Q=10)\land(X=10)$, the prediction will be $A$.
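Under the assumptions above, Algorithm 1 (with $\mathcal{S}=\emptyset$, and with the $\mathsf{FreeAttr}$/$\mathsf{FixAttr}$ updates inlined) admits a compact sketch. Here kappa is a hypothetical re-implementation of the running-example classifier, and lam/mu hold the domain bounds:

```python
def kappa(v):  # running-example classifier of Example 1
    Q, X, H, R = v
    S = max(0.3 * Q + 0.6 * X + 0.1 * H, R)
    for t, g in ((9, 'A'), (7, 'B'), (5, 'C'), (4, 'D'), (2, 'E')):
        if S >= t:
            return g
    return 'F'

def find_axp(kappa, v, lam, mu):
    """Algorithm 1 with S = {}: one AXp, as a set of 0-based features."""
    vL, vU = list(v), list(v)          # invariant: kappa(vL) == kappa(vU)
    picked = set()
    for i in range(len(v)):
        vL[i], vU[i] = lam[i], mu[i]   # FreeAttr: let feature i range freely
        if kappa(tuple(vL)) != kappa(tuple(vU)):
            vL[i] = vU[i] = v[i]       # FixAttr: invariant broken, keep i fixed
            picked.add(i)
    return picked

axp = find_axp(kappa, (10, 10, 5, 0), [0] * 4, [10] * 4)
print(sorted(i + 1 for i in axp))  # paper's 1-based numbering: [1, 2]
```

Each iteration performs the two classifier calls of line 9, matching the trace of Table 1.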

Feat. | Initial $\mathbf{v}_{\text{L}}$ / $\mathbf{v}_{\text{U}}$ | Changed $\mathbf{v}_{\text{L}}$ / $\mathbf{v}_{\text{U}}$ | $\kappa(\mathbf{v}_{\text{L}})$ / $\kappa(\mathbf{v}_{\text{U}})$ | Dec. | Resulting $\mathbf{v}_{\text{L}}$ / $\mathbf{v}_{\text{U}}$
1 | (0,0,0,0) / (10,10,10,10) | (10,0,0,0) / (10,10,10,10) | $E$ / $A$ | fix | (10,0,0,0) / (10,10,10,10)
2 | (10,0,0,0) / (10,10,10,10) | (10,10,0,0) / (10,10,10,10) | $A$ / $A$ | free | (10,0,0,0) / (10,10,10,10)
3 | (10,0,0,0) / (10,10,10,10) | (10,0,5,0) / (10,10,5,10) | $E$ / $A$ | fix | (10,0,5,0) / (10,10,5,10)
4 | (10,0,5,0) / (10,10,5,10) | (10,0,5,0) / (10,10,5,0) | $E$ / $A$ | fix | (10,0,5,0) / (10,10,5,0)
Table 2: Execution of algorithm for finding one CXp

Finding one CXp.

The two algorithms $\mathsf{findAXp}$ and $\mathsf{findCXp}$ are organized in a similar way. (This in part results from the fact that AXp's are minimal hitting sets of CXp's and vice-versa (Ignatiev et al., 2020).) We briefly explain the differences when computing a CXp (see Algorithm 2). (Lines 4 to 5 are used for enumerating explanations, and so we assume $\mathcal{S}=\emptyset$ for now.)

The main goal of $\mathsf{findCXp}$ is to find a maximal set of features $\mathcal{D}$ that are only allowed to take the value specified in $\mathbf{v}$, i.e. that are fixed. For such a set $\mathcal{D}$, the set of features that are allowed to take any value, i.e. $\mathcal{P}=\mathcal{F}\setminus\mathcal{D}$, is a minimal set that, by being allowed to take any value in their domains, suffices for allowing the prediction to change, as intended. The different sets used by the algorithm are initialized in line 3.

For $\mathsf{findCXp}$, the second invariant of the algorithm is that $\kappa(\mathbf{v}_{\text{L}})\neq\kappa(\mathbf{v}_{\text{U}})$, i.e. by allowing the features in $\mathcal{P}\cup\mathcal{C}$ to take any value, the prediction can still be made to change. The algorithm starts by enforcing the second invariant as the result of executing lines 1 and 2.

The use of the second invariant $\kappa(\mathbf{v}_{\text{L}})\neq\kappa(\mathbf{v}_{\text{U}})$ is justified by the following result.

Proposition 2.

If $\kappa(\mathbf{v}_{\text{L}})\neq\kappa(\mathbf{v}_{\text{U}})$, then it holds that $\exists(\mathbf{x}\in\mathbb{F}).[\mathbf{v}_{\text{L}}\leq\mathbf{x}\leq\mathbf{v}_{\text{U}}]\land[\kappa(\mathbf{x})\neq\kappa(\mathbf{v})]$.

Similarly to $\mathsf{findAXp}$, $\mathsf{findCXp}$ analyzes one feature at a time. Starting from the set $\mathcal{C}$ of candidate features (in line 7), the algorithm iteratively picks a feature $i$ from $\mathcal{C}$ and makes a decision about whether to drop the feature from the explanation. The first step is to assume that the feature $i$ can indeed be fixed to the corresponding value in $\mathbf{v}$. This is done in line 8, by calling the function $\mathsf{FixAttr}$, where $\mathcal{A}$ is replaced by $\mathcal{C}$ and $\mathcal{B}$ is replaced by $\mathcal{D}$, and so feature $i$ is moved from $\mathcal{C}$ to $\mathcal{D}$. In addition, the value of $i$ is now fixed to its value in $\mathbf{v}$.

The next step of the algorithm (in line 9) is to decide whether fixing the value of $i$ breaks the invariant $\kappa(\mathbf{v}_{\text{L}})\neq\kappa(\mathbf{v}_{\text{U}})$. If the invariant is not broken, then the algorithm moves to analyze the next feature (in line 7). However, if the invariant is broken, then the feature cannot be fixed, and so it must be allowed to take any value from its domain. This is done by calling (in line 10) the function $\mathsf{FreeAttr}$, with $\mathcal{A}$ replaced by $\mathcal{D}$ and $\mathcal{B}$ replaced by $\mathcal{P}$, and so feature $i$ is moved from $\mathcal{D}$ to $\mathcal{P}$. In addition, the value of $i$ is once again allowed to take any value from its domain. After analyzing all features, the algorithm $\mathsf{findCXp}$ terminates (in line 13) by returning the (minimal) set of features $\mathcal{P}$ that are allowed to take any value from their domain. It is immediate to conclude that each feature is analyzed once, and that, for each feature, the classifier is invoked twice. Given the discussion above, we conclude that,

Theorem 2.

Given a monotonic classifier and an instance $\mathbf{v}$ with prediction $c=\kappa(\mathbf{v})$, Algorithm 2 computes one CXp in time linear in the running time complexity of the classifier.

We illustrate the operation of $\mathsf{findCXp}$ with an example.

Example 4.

For the running example (see Examples 1, 2 and 3), and for the instance $\mathbf{v}_0=(10,10,5,0)$ with prediction $A$, we illustrate the computation of one CXp. The algorithm's execution is summarized in Table 2. (When computing one CXp, a feature is kept if it must be declared free, and it is dropped if it can remain fixed.) As can be observed, a contrastive explanation is $\{2\}$, i.e. there is an assignment to feature 2 (i.e. to $X$) which guarantees a change of prediction when the other features keep their values. For example, by setting $X=0$ (and keeping the remaining values fixed), the value of the prediction changes.
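Algorithm 2 (again with $\mathcal{S}=\emptyset$ and the attribute updates inlined) can be sketched analogously to the AXp case; kappa is a hypothetical re-implementation of the running-example classifier:

```python
def kappa(v):  # running-example classifier of Example 1
    Q, X, H, R = v
    S = max(0.3 * Q + 0.6 * X + 0.1 * H, R)
    for t, g in ((9, 'A'), (7, 'B'), (5, 'C'), (4, 'D'), (2, 'E')):
        if S >= t:
            return g
    return 'F'

def find_cxp(kappa, v, lam, mu):
    """Algorithm 2 with S = {}: one CXp, as a set of 0-based features."""
    vL, vU = list(lam), list(mu)          # invariant: kappa(vL) != kappa(vU)
    picked = set()
    for i in range(len(v)):
        vL[i] = vU[i] = v[i]              # FixAttr: pin feature i to its value in v
        if kappa(tuple(vL)) == kappa(tuple(vU)):
            vL[i], vU[i] = lam[i], mu[i]  # FreeAttr: invariant broken, free i again
            picked.add(i)
    return picked

cxp = find_cxp(kappa, (10, 10, 5, 0), [0] * 4, [10] * 4)
print(sorted(i + 1 for i in cxp))  # paper's 1-based numbering: [2]
```

The execution mirrors the trace of Table 2: only feature 2 breaks the invariant when fixed, and so it alone is picked.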

Complexity.

As can be readily concluded from Algorithm 1 and Algorithm 2, the algorithms execute in time linear in the number of features. However, in each iteration of the algorithm, the classifier is invoked twice, for finding the predicted classes for $\mathbf{v}_{\text{L}}$ and for $\mathbf{v}_{\text{U}}$. We represent the time required by the classifier as $\mathcal{T}_{\mathbb{C}}$, and so the overall run time of each algorithm is $\mathcal{O}(|\mathcal{F}|\times\mathcal{T}_{\mathbb{C}})$.

3.2 Enumerating Explanations

We first show that for monotonic classifiers, the enumeration of explanations with polynomial-time delay is computationally hard.

Theorem 3.

Determining the existence of ⌊N/2⌋+1 AXp's (or CXp's) of a monotonic N-feature classifier is NP-complete.

(The proof is included in the appendix.) Since enumeration of AXp's and CXp's with polynomial delay is therefore unlikely, this section describes how to use SAT reasoners for enumerating the AXp's and CXp's of a monotonic classifier. (Although the algorithm is proved sound and complete, it offers no guarantees regarding the order in which AXp's and CXp's are listed.)

The algorithm uses the following propositional representation:

  1.

    The algorithm will iteratively add clauses to a CNF formula {\mathcal{H}}. The clauses in {\mathcal{H}} account for the AXp’s and CXp’s already computed, and serve to prevent their repetition.

  2.

    Formula ℋ is defined on a set of variables ui, 1 ≤ i ≤ n, where each ui denotes whether feature i is declared free (ui = 1) or fixed (ui = 0).

The algorithm proposed in this section requires exactly one call to a SAT reasoner before computing each explanation (either an AXp or a CXp), and one additional call to decide that all explanations have been computed. As a result, the number of calls to a SAT reasoner is |AXp| + |CXp| + 1. Furthermore, the size of the formula grows by one clause after each AXp or CXp is computed. In practice, for a wide range of ML settings, both the number of variables and the number of clauses are well within the reach of modern SAT reasoners.

Proposition 3.

Let 𝐯 be a point in feature space, let κ(𝐯) = c ∈ 𝒦, and let 𝒵 ⊆ ℱ. Then, either (1) holds, with 𝒳 = 𝒵, or (2) holds, with 𝒴 = ℱ∖𝒵, but not both.

Proposition 3 essentially states that, given a set 𝒵 of features, if these are fixed, and the remaining features are allowed to take any value from their domains, then either the prediction never changes, or there exists an assignment to the non-fixed features that causes the prediction to change. The approach for enumerating AXp's and CXp's is shown in Algorithm 3.

Algorithm 3 Enumeration of AXp's and CXp's
 1: ℋ ← ∅                                  // ℋ defined on set U
 2: repeat
 3:   (𝗈𝗎𝗍𝖼, 𝐮) ← 𝖲𝖠𝖳(ℋ)
 4:   if 𝗈𝗎𝗍𝖼 = true then
 5:     𝐯L ← (vL1, …, vLN), s.t. vLi ← ite(ui, λ(i), vi)
 6:     𝐯U ← (vU1, …, vUN), s.t. vUi ← ite(ui, μ(i), vi)
 7:     if κ(𝐯L) = κ(𝐯U) then
 8:       𝒮 ← {i ∈ ℱ | ui = 1}             // ℱ∖𝒮 ⊇ some AXp
 9:       𝒫 ← 𝖿𝗂𝗇𝖽𝖠𝖷𝗉(ℱ, 𝒮, 𝐮)
10:       𝗋𝖾𝗉𝗈𝗋𝗍𝖠𝖷𝗉(𝒫)
11:       ℋ ← ℋ ∪ {(∨i∈𝒫 ui)}
12:     else
13:       𝒮 ← {i ∈ ℱ | ui = 0}             // ℱ∖𝒮 ⊇ some CXp
14:       𝒫 ← 𝖿𝗂𝗇𝖽𝖢𝖷𝗉(ℱ, 𝒮, 𝐮)
15:       𝗋𝖾𝗉𝗈𝗋𝗍𝖢𝖷𝗉(𝒫)
16:       ℋ ← ℋ ∪ {(∨i∈𝒫 ¬ui)}
17:     end if
18:   end if
19: until 𝗈𝗎𝗍𝖼 = false

The algorithm starts in line 1 by initializing the CNF formula ℋ without clauses (these will be added as the algorithm executes). The main loop (from line 2 to line 19) is executed while formula ℋ is satisfiable. This is decided with a call to a SAT reasoner (in line 3). Any satisfying assignment of ℋ partitions the features into two sets: the features that can take any value (with ui = 1) and the features that take the corresponding value in 𝐯 (with ui = 0). (The assignment effectively identifies a set 𝒵 ⊆ ℱ of fixed features, and thus we can invoke Proposition 3.) In lines 5 and 6, the algorithm creates 𝐯L and 𝐯U. For a fixed feature i, both 𝐯L and 𝐯U are assigned value vi. For a free feature i, 𝐯L and 𝐯U are respectively assigned λ(i) and μ(i). Let 𝒵 denote the set of fixed features. In line 7, we check which case of Proposition 3 applies.

If κ(𝐯L) = κ(𝐯U), then we know that the invariant of Algorithm 1 holds. Moreover, 𝒵 contains some AXp. Hence, we set 𝒮 = ℱ∖𝒵 as the seed for 𝖿𝗂𝗇𝖽𝖠𝖷𝗉. This is shown in lines 8 and 9. After reporting the computed AXp, represented by the set of features 𝒫, we prevent the same AXp from being computed again by requiring that at least one of its fixed features be free in future satisfying assignments of ℋ. This is represented as a positive clause.

Proposition 4.

In the case κ(𝐯L) = κ(𝐯U), the set 𝒮 is such that, for any previously computed AXp, at least one of its features is included in 𝒮 (i.e. declared free). Since 𝖿𝗂𝗇𝖽𝖠𝖷𝗉 only grows 𝒮, Algorithm 3 does not repeat AXp's.

Otherwise, if κ(𝐯L) ≠ κ(𝐯U), then we know that the invariant of Algorithm 2 holds. Moreover, ℱ∖𝒵 contains some CXp. Hence, we set 𝒮 = 𝒵 as the seed for 𝖿𝗂𝗇𝖽𝖢𝖷𝗉. This is shown in lines 13 and 14. After reporting the computed CXp, represented by the set of features 𝒫, we prevent the same CXp from being computed again by requiring that at least one of its free features be fixed in future satisfying assignments of ℋ. This is represented as a negative clause.

Proposition 5.

In the case κ(𝐯L) ≠ κ(𝐯U), the set 𝒮 is such that, for any previously computed CXp, at least one of its features is included in 𝒮 (i.e. declared fixed). Since 𝖿𝗂𝗇𝖽𝖢𝖷𝗉 only grows 𝒮, Algorithm 3 does not repeat CXp's.

Given the above, and since the number of AXp's and CXp's (being subsets of ℱ) is finite, we have,

Theorem 4.

Algorithm 3 is sound and complete for the enumeration of AXp’s and CXp’s.

| Step | ℋ | 𝐮 / 𝗈𝗎𝗍 | 𝐯L | 𝐯U | κ(𝐯L) | κ(𝐯U) | AXp | CXp | Clause added |
| 1 | ∅ | (0,0,1,0) | (10,10,0,0) | (10,10,10,0) | A | A | {1,2} | — | (u1 ∨ u2) |
| 2 | {(u1 ∨ u2)} | (1,0,0,1) | (0,10,5,0) | (10,10,5,10) | C | A | — | {1} | (¬u1) |
| 3 | {(u1 ∨ u2), (¬u1)} | (0,1,1,0) | (10,0,0,0) | (10,10,10,0) | E | A | — | {2} | (¬u2) |
| 4 | {(u1 ∨ u2), (¬u1), (¬u2)} | UNSAT | — | — | — | — | — | — | — |
Table 3: Execution of enumeration algorithm
Example 5.

Building on earlier examples, we summarize the main steps of the SAT oracle-based algorithm for enumerating AXp and CXp explanations. Table 3 illustrates one execution of the proposed algorithm. There is 1 AXp and there are 2 CXp's. (Regarding the calls to the SAT oracle, the satisfying assignments shown are intended to be as arbitrary as possible, given the existing constraints; other satisfying assignments could have been picked.) For each computed AXp, we add to ℋ one positive clause. In this example, we add the clause (u1 ∨ u2), since the AXp is {1,2}. By adding this clause, we guarantee that features 1 and 2 will not both be deemed fixed by subsequent satisfying assignments of ℋ. Similarly, for each computed CXp, we add to ℋ one negative clause. For the example, the clauses added are (¬u1) for CXp {1}, and (¬u2) for CXp {2}. In both cases, the added clause guarantees that feature 1 (resp. 2) will not be deemed free by subsequent satisfying assignments of ℋ. One additional observation is that the number of SAT oracle calls matches the number of AXp's plus the number of CXp's, plus one final call to terminate the algorithm's execution. For step 4 of the algorithm, it is easy to conclude that ℋ is unsatisfiable, as intended.
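The enumeration loop of Algorithm 3 can be sketched in a self-contained way. This is an illustration under stated assumptions: a brute-force search over assignments to the ui variables stands in for the SAT reasoner (adequate only for a small number of features), and the inlined shrinking loops play the roles of 𝖿𝗂𝗇𝖽𝖠𝖷𝗉 and 𝖿𝗂𝗇𝖽𝖢𝖷𝗉 seeded as in lines 8-9 and 13-14 of the listing; the toy classifier in the usage note is ours, not the paper's running example.

```python
from itertools import product

def _vecs(fixed, lam, mu, v):
    # lower/upper vectors: fixed features take v[i], free ones their extremes
    vL = [v[i] if i in fixed else lam[i] for i in range(len(v))]
    vU = [v[i] if i in fixed else mu[i] for i in range(len(v))]
    return vL, vU

def enumerate_xps(n, lam, mu, v, kappa):
    """Sketch of Algorithm 3: enumerate all AXp's and CXp's of kappa(v)."""
    H = []                       # blocking clauses: lists of (var, wanted_value)
    axps, cxps = [], []
    while True:
        # stand-in for the SAT call: u[i] == True means feature i is free
        u = next((u for u in product([False, True], repeat=n)
                  if all(any(u[i] == val for i, val in cl) for cl in H)),
                 None)
        if u is None:
            return axps, cxps    # H unsatisfiable: all explanations listed
        fixed = {i for i in range(n) if not u[i]}
        vL, vU = _vecs(fixed, lam, mu, v)
        if kappa(vL) == kappa(vU):
            # the fixed set contains an AXp: shrink it to a minimal one
            for i in sorted(fixed):
                vL2, vU2 = _vecs(fixed - {i}, lam, mu, v)
                if kappa(vL2) == kappa(vU2):
                    fixed -= {i}
            axps.append(sorted(fixed))
            H.append([(i, True) for i in fixed])    # positive clause
        else:
            # the free set contains a CXp: shrink it to a minimal one
            free = {i for i in range(n) if u[i]}
            for i in sorted(free):
                vL2, vU2 = _vecs(set(range(n)) - (free - {i}), lam, mu, v)
                if kappa(vL2) != kappa(vU2):
                    free -= {i}
            cxps.append(sorted(free))
            H.append([(i, False) for i in free])    # negative clause
```

On the toy monotonic classifier kappa(x) = [x0 + x1 ≥ 10] with v = (7, 5), the loop reports the single AXp {0, 1} and the two CXp's {1} and {0}, adding one blocking clause per explanation before the final unsatisfiable check, mirroring the pattern of Table 3.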

3.3 Related Work

The algorithms for computing one AXp or one CXp for a monotonic classifier are novel. However, the insight of analyzing the elements of a set (features, in our case) to find a minimal set respecting some property has been studied in a vast number of settings (see e.g. (Chinneck, 2008) for an overview). The proposed solution for reasoning about ordinal features that can take boolean, integer or real values represents another aspect of novelty. In the case of monotonic classifiers, we obtain a running time that is linear in the running time complexity of the classifier. This result applies to any monotonic classifier, and so it improves significantly over the worst-case exponential time and space approach proposed in earlier work (Shih et al., 2018) for the concrete case of monotonic bayesian networks. The algorithm for enumerating AXp's and CXp's of a monotonic classifier is also novel. However, it is inspired by the MARCO algorithm for the analysis of inconsistent logic theories (Liffiton et al., 2016). Although MARCO can be optimized in different ways, Algorithm 3 can be related to its most basic formulation. Since computing one AXp or one CXp can be achieved in polynomial time (conditioned by the classifier's run time complexity), our approach guarantees that exactly one SAT reasoner call is required for each computed minimal set (i.e. AXp or CXp in our case).

4 Experiments

The objective of this section is to illustrate the scalability of the algorithms for finding one explanation, as well as of the algorithm for enumerating explanations. The tool MonoXP implements Algorithms 1, 2 and 3. As observed in recent work, most monotonic classifiers are not publicly available (Cano et al., 2019). We analyze two publicly available classifiers, and describe two experiments. The first experiment evaluates MonoXP for explaining classifiers obtained with two recently proposed tools: COMET (Sivaraman et al., 2020), available from https://github.com/AishwaryaSivaraman/COMET, and monoboost (Bartley et al., 2018), available from https://github.com/chriswbartley/monoboost. COMET is run on the Auto-MPG dataset (https://archive.ics.uci.edu/ml/datasets/auto+mpg) studied in earlier work (Sivaraman et al., 2020), with the choice justified by the time the classifier takes to run. monoboost is run on a monotonic dataset with two classes (as required by the tool) (Bartley et al., 2018). We use a monotonic subset (PimaMono) of the Pima dataset (https://sci2s.ugr.es/keel/dataset.php?cod=21). A second experiment compares MonoXP with Anchor (Ribeiro et al., 2018), both in terms of the number of calls to the classifier and running time, and in terms of the quality of the computed explanations, namely accuracy and size. (It should be underlined that neither Anchor (Ribeiro et al., 2018), LIME (Ribeiro et al., 2016) nor SHAP (Lundberg & Lee, 2017) can enumerate explanations; neither can these tools compute heuristic contrastive explanations.) This second experiment also considers two datasets. The first dataset is BankruptcyRisk (Greco et al., 1998) (which is monotonic if one instance is dropped). For this dataset, the monotonic decision tree classifier proposed in earlier work is used (Potharst & Bioch, 2000). The second dataset is PimaMono, and the classifier used is the one obtained with monoboost (as in the first experiment). All experiments were run on a MacBook Pro with a 2.4GHz quad-core i5 processor and 16 GByte of RAM, running macOS Big Sur. For each dataset, we either pick 100 instances, randomly selected, or the total number of instances in the dataset, in case this number does not exceed 100.

| Dataset/Tool | #Inst. | Avg. # expl. | Avg. AXp sz | Avg. CXp sz | Avg. classif. time | Avg. run time | % classif. time |
| AutoMPG/CMT | 100 | 2.35 | 1.49 | 1.02 | 105.90s | 105.92s | 99.99% |
| PimaMono/MBT | 69 | 9.09 | 1.27 | 3.36 | 16.285s | 16.360s | 99.54% |
(a) Assessing MonoXP on the Auto-MPG and PimaMono datasets, using resp. COMET or monoboost as the classifier
| Dataset | #Inst. | Anchor: Avg. Xp sz | Anchor: Avg. time | Anchor: # Cls calls | MonoXP: Avg. # Xp | MonoXP: Avg. Xp sz | MonoXP: Avg. Xp time | MonoXP: # Cls calls | % diff |
| B. Risk | 39 | 2.18 | 0.11s | 1217 | 1.03 | 2.0 | 0.009s | 24 | 64.1 |
| PimaMono | 69 | 1.26 | 11.2s | 2967 | 5.64 | 1.27 | 1.8s | 16 | 18.8 |
(b) Assessing MonoXP and Anchor on the BankruptcyRisk and PimaMono datasets
Table 4: Results of running MonoXP

4.1 Cost of Computing Explanations

We run MonoXP on a neural network classifier envelope implemented with COMET for the Auto-MPG dataset, and on a tree ensemble obtained with monoboost for the PimaMono dataset. (Since the running times of COMET can be significant, this experiment does not consider a comparison with the heuristic explainer Anchor (Ribeiro et al., 2018). As shown below, Anchor calls the classifier a large number of times, and that would imply unwieldy running times.)

Table 4(a) shows the results of running MonoXP using COMET as a monotonic envelope on the Auto-MPG dataset, and monoboost on the PimaMono dataset. As can be observed, the explanation sizes are in general small, which confirms the interpretability of the computed AXp's and CXp's. As a general trend, CXp's are on average smaller than AXp's for Auto-MPG, but larger than AXp's for PimaMono. Moreover, the classification time completely dominates the total running time (resp. 99.99% and 99.54% of the time is spent running the classifier). These results offer evidence that the time spent computing explanations is in general negligible for monotonic classifiers. For both datasets, and for the instances considered, it was possible to enumerate all AXp and CXp explanations with negligible computational overhead.

4.2 Comparison with Anchor

This section compares MonoXP with Anchor, using two pairs of classifiers and datasets, i.e. a monotonic decision tree for BankruptcyRisk and monoboost for PimaMono.

Table 4(b) shows the results of running Anchor and MonoXP on the BankruptcyRisk and the PimaMono datasets. MonoXP is significantly faster than Anchor (more than one order of magnitude in the first case, and more than a factor of 5 in the second case). The justification is the much smaller number of calls to the classifier made by MonoXP. (While for Anchor the number of calls to the classifier can be significant, for MonoXP each AXp is computed with at most a linear number of calls to the classifier. Thus, unless the number of features is very substantial, MonoXP has a clear performance edge over Anchor.) Somewhat surprisingly, over all instances, the average size of the AXp's computed by MonoXP is smaller than that of Anchor's explanations for the BankruptcyRisk dataset. For the PimaMono dataset, the average size is almost the same. These results suggest that formally defined explanations need not be significantly larger than the ones computed with heuristic approaches. Furthermore, for 64.1% (resp. 18.8%) of the instances, Anchor identifies an explanation that does not hold across all points of feature space, i.e. there are points in feature space for which the explanation of Anchor holds, but the ML model makes a different prediction. (Similar observations have been reported elsewhere (Ignatiev, 2020).) Observe that since MonoXP computes all AXp's, we can be certain about whether an explanation of Anchor is a correct explanation.

5 Conclusions & Discussion

This paper proposes novel algorithms for computing a single PI or contrastive explanation of a monotonic classifier. In contrast with earlier work (Shih et al., 2018), the complexity of the proposed algorithms is polynomial in the number of features and in the time it takes the monotonic classifier to compute its predictions. As the experiments demonstrate, for simple ML models, the algorithm achieves a speedup of one order of magnitude compared with a well-known heuristic explainer (Ribeiro et al., 2018), while computing better-quality explanations of similar size. In contrast, for complex ML models, the experiments confirm that the running time is almost entirely spent running the classifier. Furthermore, the paper proposes a SAT-based algorithm for enumerating PI and contrastive explanations. As the experimental results show, the use of a SAT solver for enumerating PI and contrastive explanations incurs a negligible overhead.

One possible criticism of this work is that SAT solvers are used for guiding the enumeration of explanations. This involves solving an NP-complete decision problem for each computed explanation, and so it might pose a scalability concern. (One alternative would be explicit enumeration of candidate explanations, as proposed in earlier work on model-based diagnosis (Reiter, 1987; Greiner et al., 1989; Wotawa, 2001).) However, for classification problems with tens to hundreds of features, and targeting thousands to tens of thousands of explanations (which far exceeds currently foreseen scenarios), the use of modern SAT reasoners (capable of solving problems with hundreds of thousands of variables and millions of clauses) can hardly be considered a limitation. Another possible criticism of this work is that full monotonicity is required. We conjecture that full monotonicity is necessary for tractable explanations (conditioned by the classifier's run time complexity). Addressing partial monotonicity (Daniels & Velikova, 2010) is a subject of future research.

Acknowledgments.

This work was supported by the AI Interdisciplinary Institute ANITI, funded by the French program “Investing for the Future – PIA3” under Grant agreement no. ANR-19-PI3A-0004, and by the H2020-ICT38 project COALA “Cognitive Assisted agile manufacturing for a Labor force supported by trustworthy Artificial intelligence”.

References

  • Audemard et al. (2020) Audemard, G., Koriche, F., and Marquis, P. On tractable XAI queries based on compiled representations. In KR, pp.  838–849, 2020.
  • Babin & Kuznetsov (2011) Babin, M. A. and Kuznetsov, S. O. Enumerating minimal hypotheses and dualizing monotone boolean functions on lattices. In FCA, pp.  42–48, 2011.
  • Barile & Feelders (2012) Barile, N. and Feelders, A. Active learning with monotonicity constraints. In SIAM ICDM, pp.  756–767, 2012.
  • Bartley et al. (2016) Bartley, C., Liu, W., and Reynolds, M. Effective monotone knowledge integration in kernel support vector machines. In ADMA, pp.  3–18, 2016.
  • Bartley et al. (2018) Bartley, C., Liu, W., and Reynolds, M. A novel framework for constructing partially monotone rule ensembles. In ICDE, pp.  1320–1323, 2018.
  • Bartley et al. (2019) Bartley, C., Liu, W., and Reynolds, M. Enhanced random forest algorithms for partially monotone ordinal classification. In AAAI, pp.  3224–3231, 2019.
  • Ben-David (1995) Ben-David, A. Monotonicity maintenance in information-theoretic machine learning algorithms. Mach. Learn., 19(1):29–43, 1995.
  • Ben-David et al. (1989) Ben-David, A., Sterling, L., and Pao, Y. Learning, classification of monotonic ordinal concepts. Comput. Intell., 5:45–49, 1989.
  • Biere et al. (2009) Biere, A., Heule, M., van Maaren, H., and Walsh, T. (eds.). Handbook of Satisfiability, 2009. IOS Press.
  • Bonakdarpour et al. (2018) Bonakdarpour, M., Chatterjee, S., Barber, R. F., and Lafferty, J. Prediction rule reshaping. In ICML, pp.  629–637, 2018.
  • Cano et al. (2019) Cano, J. R., Gutiérrez, P. A., Krawczyk, B., Wozniak, M., and García, S. Monotonic classification: An overview on algorithms, performance measures and data sets. Neurocomputing, 341:168–182, 2019.
  • Chinneck (2008) Chinneck, J. W. Feasibility and Infeasibility in Optimization:: Algorithms and Computational Methods. Springer Science & Business Media, 2008.
  • Daniels & Velikova (2010) Daniels, H. and Velikova, M. Monotone and partially monotone neural networks. IEEE Trans. Neural Networks, 21(6):906–917, 2010.
  • Duivesteijn & Feelders (2008) Duivesteijn, W. and Feelders, A. Nearest neighbour classification with monotonicity constraints. In ECML/PKDD, pp.  301–316, 2008.
  • Fard et al. (2016) Fard, M. M., Canini, K. R., Cotter, A., Pfeifer, J., and Gupta, M. R. Fast and flexible monotonic functions with ensembles of lattices. In NeurIPS, pp.  2919–2927, 2016.
  • Greco et al. (1998) Greco, S., Matarazzo, B., and Slowinski, R. A new rough set approach to evaluation of bankruptcy risk. In Operational tools in the management of financial risks, pp.  121–136. Springer, 1998.
  • Greiner et al. (1989) Greiner, R., Smith, B. A., and Wilkerson, R. W. A correction to the algorithm in Reiter’s theory of diagnosis. Artif. Intell., 41(1):79–88, 1989.
  • Gupta et al. (2016) Gupta, M. R., Cotter, A., Pfeifer, J., Voevodski, K., Canini, K. R., Mangylov, A., Moczydlowski, W., and Esbroeck, A. V. Monotonic calibrated interpolated look-up tables. J. Mach. Learn. Res., 17:109:1–109:47, 2016.
  • Ignatiev (2020) Ignatiev, A. Towards trustable explainable AI. In IJCAI, pp.  5154–5158, 2020.
  • Ignatiev et al. (2019) Ignatiev, A., Narodytska, N., and Marques-Silva, J. Abduction-based explanations for machine learning models. In AAAI, pp.  1511–1519, 2019.
  • Ignatiev et al. (2020) Ignatiev, A., Narodytska, N., Asher, N., and Marques-Silva, J. On relating ’why?’ and ’why not?’ explanations. CoRR, abs/2012.11067, 2020. URL https://arxiv.org/abs/2012.11067.
  • Liffiton et al. (2016) Liffiton, M. H., Previti, A., Malik, A., and Marques-Silva, J. Fast, flexible MUS enumeration. Constraints An Int. J., 21(2):223–250, 2016.
  • Liu et al. (2020) Liu, X., Han, X., Zhang, N., and Liu, Q. Certified monotonic neural networks. In NeurIPS, 2020.
  • Lundberg & Lee (2017) Lundberg, S. M. and Lee, S. A unified approach to interpreting model predictions. In NeurIPS, pp.  4765–4774, 2017.
  • Magdon-Ismail & Sill (2008) Magdon-Ismail, M. and Sill, J. A linear fit gets the correct monotonicity directions. Mach. Learn., 70(1):21–43, 2008.
  • Marques-Silva et al. (2020) Marques-Silva, J., Gerspacher, T., Cooper, M. C., Ignatiev, A., and Narodytska, N. Explaining naive bayes and other linear classifiers with polynomial time and delay. In NeurIPS, 2020.
  • Miller (2019) Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell., 267:1–38, 2019.
  • Potharst & Bioch (2000) Potharst, R. and Bioch, J. C. Decision trees for ordinal classification. Intell. Data Anal., 4(2):97–111, 2000.
  • Reiter (1987) Reiter, R. A theory of diagnosis from first principles. Artif. Intell., 32(1):57–95, 1987.
  • Ribeiro et al. (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. ”Why should I trust you?”: Explaining the predictions of any classifier. In KDD, pp.  1135–1144, 2016.
  • Ribeiro et al. (2018) Ribeiro, M. T., Singh, S., and Guestrin, C. Anchors: High-precision model-agnostic explanations. In AAAI, pp.  1527–1535, 2018.
  • Shih et al. (2018) Shih, A., Choi, A., and Darwiche, A. A symbolic approach to explaining bayesian network classifiers. In IJCAI, pp.  5103–5111, 2018.
  • Sill (1997) Sill, J. Monotonic networks. In NIPS, pp.  661–667, 1997.
  • Sivaraman et al. (2020) Sivaraman, A., Farnadi, G., Millstein, T. D., and den Broeck, G. V. Counterexample-guided learning of monotonic neural networks. In NeurIPS, 2020.
  • Van den Broeck et al. (2020) Van den Broeck, G., Lykov, A., Schleich, M., and Suciu, D. On the tractability of SHAP explanations. CoRR, abs/2009.08634, 2020. URL https://arxiv.org/abs/2009.08634.
  • van der Gaag et al. (2004) van der Gaag, L. C., Bodlaender, H. L., and Feelders, A. J. Monotonicity in bayesian networks. In UAI, pp.  569–576, 2004.
  • Verbeke et al. (2017) Verbeke, W., Martens, D., and Baesens, B. RULEM: A novel heuristic rule learning approach for ordinal classification with monotonicity constraints. Appl. Soft Comput., 60:858–873, 2017.
  • Wang & Gupta (2020) Wang, S. and Gupta, M. R. Deontological ethics by monotonicity shape constraints. In AISTATS, pp.  2043–2054, 2020.
  • Wotawa (2001) Wotawa, F. A variant of Reiter’s hitting-set algorithm. Inf. Process. Lett., 79(1):45–51, 2001.
  • You et al. (2017) You, S., Ding, D., Canini, K. R., Pfeifer, J., and Gupta, M. R. Deep lattice networks and partial monotonic functions. In NeurIPS, pp.  2981–2989, 2017.

Appendix A Appendix

Theorem 3.

Determining the existence of ⌊N/2⌋+1 AXp's (or CXp's) of a monotonic N-feature classifier is NP-complete.

Proof.

We say that a CNF is trivially satisfiable if some literal occurs in all clauses. Clearly, SAT restricted to non-trivial CNFs is still NP-complete. Let Φ be a not trivially-satisfiable CNF on variables x1, …, xk. Let N = 2k. Let Φ̃ be identical to Φ except that each occurrence of a negative literal x̄i (1 ≤ i ≤ k) is replaced by xi+k. Thus Φ̃ is a CNF on N variables, each of which occurs only positively. Define the boolean classifier κ by κ(x1, …, xN) = 1 if xi = xi+k = 1 for some i ∈ {1, …, k} or Φ̃(x1, …, xN) = 1 (and 0 otherwise). To show that κ is monotonic we need to show that 𝐚 ≤ 𝐛 ⇒ κ(𝐚) ≤ κ(𝐛). This follows by examining the two cases in which κ(𝐚) = 1: if ai = ai+k = 1 and 𝐚 ≤ 𝐛, then bi = bi+k = 1, whereas, if Φ̃(𝐚) = 1 and 𝐚 ≤ 𝐛, then Φ̃(𝐛) = 1 (by positivity of Φ̃); so in both cases κ(𝐛) = 1 ≥ κ(𝐚).

We first consider AXp's. Clearly κ(𝟏) = 1. There are N/2 obvious AXp's of this prediction, namely (i, i+k) (1 ≤ i ≤ k). These are minimal by the assumption that Φ is not trivially satisfiable. Suppose that Φ(𝐮) = 1. Let 𝒳𝐮 be {i | 1 ≤ i ≤ k ∧ ui = 1} ∪ {i+k | 1 ≤ i ≤ k ∧ ui = 0}. Then (some subset of) 𝒳𝐮 is an AXp of the prediction κ(𝟏) = 1. The converse also holds. Thus, determining whether κ(𝟏) = 1 has more than N/2 AXp's is equivalent to testing the satisfiability of Φ. NP-completeness follows from the fact that ⌊N/2⌋+1 AXp's are a polytime-verifiable certificate.

The proof for CXp's is similar. Clearly κ(𝟎) = 0. Again, there are N/2 obvious CXp's of this prediction, namely (i, i+k) (1 ≤ i ≤ k), and (some subset of) 𝒳𝐮 is a CXp iff Φ(𝐮) = 1. Thus, determining whether κ(𝟎) = 0 has more than N/2 CXp's is equivalent to testing the satisfiability of Φ, from which NP-completeness again follows. ∎
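The construction in the proof is easy to instantiate. The sketch below (our illustration, using a hypothetical two-variable CNF Φ = (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2), so k = 2 and N = 4) builds the classifier κ and brute-force checks its monotonicity over all pairs of points in {0,1}⁴.

```python
from itertools import product

def reduction_kappa(clauses, k):
    """Classifier kappa from the proof of Theorem 3: 'clauses' is a CNF
    over x1..xk (a negative literal is given as -i); each occurrence of
    -i is rewritten as the positive variable i+k."""
    def kappa(x):                        # x: 0/1 sequence of length N = 2k
        if any(x[i] and x[i + k] for i in range(k)):
            return 1                     # some pair x_i = x_{i+k} = 1
        # evaluate the all-positive CNF ~Phi: literal l maps to index
        # l-1 if l > 0, and to index (-l)+k-1 = k-l-1 if l < 0
        tilde = all(any(x[(l - 1) if l > 0 else (k - l - 1)] for l in cl)
                    for cl in clauses)
        return int(tilde)
    return kappa

# Phi = (x1 or not x2) and (not x1 or x2), k = 2, N = 4
kappa = reduction_kappa([[1, -2], [-1, 2]], 2)

# brute-force monotonicity check: a <= b implies kappa(a) <= kappa(b)
pts = list(product([0, 1], repeat=4))
assert all(kappa(a) <= kappa(b) for a in pts for b in pts
           if all(ai <= bi for ai, bi in zip(a, b)))
assert kappa((1, 1, 1, 1)) == 1 and kappa((0, 0, 0, 0)) == 0
```

The two asserts mirror the proof: κ is monotonic by construction, κ(𝟏) = 1 holds via the pairs (i, i+k), and κ(𝟎) = 0 holds because Φ̃ evaluates to 0 on the all-zeros point.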

In the case of AXp’s, the theorem follows from a result on boolean monotone functions (Babin & Kuznetsov, 2011), but for clarity of exposition we preferred to give a direct proof.