A Unified Logical Framework for Explanations in Classifier Systems
Abstract
Recent years have witnessed a renewed interest in the explanation of classifier systems in the field of explainable AI (XAI). The standard approach is based on propositional logic. We present a modal language which supports reasoning about binary input classifiers and their properties. We study a family of classifier models, axiomatize it by means of two proof systems, according to the cardinality of the set of atoms of the language, and show completeness of our axiomatics. Moreover, we show that the satisfiability checking problem for our modal language is NEXPTIME-complete in the infinite-variable case, while it becomes polynomial in the finite-variable case. We moreover identify an interesting NP fragment of our language in the infinite-variable case. We leverage the language to formalize counterfactual conditionals as well as a variety of notions of explanation, including abductive, contrastive and counterfactual explanations, and biases. Finally, we present two extensions of our language: a dynamic extension with assignment operators enabling classifier change, and an epistemic extension in which the classifier's uncertainty about the actual input can be represented.
1 Introduction
The notions of explanation and explainability have been extensively investigated by philosophers [20, 25, 50] and are key aspects of AI-based systems, given the importance of explaining the behavior and predictions of an artificially intelligent system. Classifier systems compute a given function in the context of a classification or prediction task. Artificial feedforward neural networks are special kinds of classifier systems aimed at learning, or at least approximating, the function mapping instances of the input data to their corresponding outputs. Explaining why a system has classified a given instance in a certain way is crucial for making the system intelligible and for finding biases in the classification process. This is the main target of explainable AI (XAI). Thus, a variety of notions have been defined and used to explain classifiers, including abductive, contrastive and counterfactual explanations [4, 49, 13, 24, 36, 34, 37, 48, 35, 33].
Inputs of a classifier are called instances, i.e., valuations of all its variables/features/factors, and outputs are called classifications/predictions/decisions (we use these as synonyms throughout the paper; another set of synonyms is perturbation/intervention/manipulation; the variety of terminology is unfortunate). When both input and output of the classifier are binary, it is just a Boolean function, and can furthermore be expressed by a propositional formula. This isomorphism between Boolean functions and logic has been known ever since the seminal work of Boole. Recently there has been a renewed interest in Boolean functions in the area of logic-based approaches to XAI [41, 24, 12, 23, 40, 2, 1]. These works concentrate on local explanations, i.e., on explaining why an actual instance is classified in a certain way.
We argue that it is natural and fruitful to represent binary (input) classifiers and their explanations with the help of a modal language. To that end let us first explain the conceptual foundation of explanation in the context of classifiers, which is largely ignored in the recent literature.
What is an explanation? Despite subtle philosophical debates (e.g., whether all explanations are causal, or whether metaphysical explanation/grounding should be distinguished from causal explanation), by explanation people usually mean causal explanation, an answer to a "why" question in terms of "because". Then what is a causal explanation? Ever since the seminal deductive-nomological (D-N) model [20], one can view it as a logical relation between an explanandum (the proposition being explained) and an explanans (the proposition explaining), which is itself expressible by a logical formula. According to the D-N model, a causal explanation of a certain fact should include a reference to the laws that are used for deducing it from a set of premises.
More recently, Woodward & Hitchcock [52, p. 2, p. 17] (see also [51, Ch. 5 and 6]) proposed that causal explanations make reference to generalizations, or descriptions of dependency relations, which specify relationships between the explanans and explanandum variables. Such generalizations need not be laws: they exhibit how the explanandum variable is counterfactually dependent on the explanans variables by relating changes in the value of the latter to changes in the value of the former. (Using the notion of counterfactual dependence for reasoning about natural laws and causality traces back to [16, 28, 30]. The focus nowadays, e.g. [50, 19], is on the use of counterfactuals for modeling the notion of actual cause in order to test, rather than define, causality.) According to Woodward & Hitchcock, a generalization used in a causal explanation is invariant under intervention insofar as it remains stable after changing the actual value of the variables cited in the explanation. (Woodward & Hitchcock also discuss invariance with respect to the background conditions not figuring in the relationship between explanans and explanandum. Nonetheless, they consider this type of invariance less central to causal explanation.)
We claim that existing notions of explanation leveraged in the XAI domain rest upon the idea of invariance under intervention. However, while Woodward & Hitchcock apply it to the notion of generalization, in the XAI domain it usually concerns the result of the classifier's decision to be explained. Another minor difference with Woodward & Hitchcock is terminological: when explaining the decision of a binary classifier system, the term 'perturbation' is commonly used instead of 'intervention'. But both mean switching some features' values from the current ones to other ones. Let us illustrate this by informally introducing our running example.
Example 1 (Applicant Alice, informal).
Alice applies for a loan. She is not male, she is employed, and she rents an apartment in the city center, which we note . The classifier only accepts the application if the applicant is employed, and either is a male or owns a property. Hence, Alice’s application is rejected.
In the XAI literature, this is called an abductive explanation (AXp) [24] or a sufficient reason [12] of the actual decision of rejecting Alice's application, because perturbing the values of the other features ('employment' and 'address' in this setting), while keeping the values of 'gender' and 'ownership' fixed, will not change the decision. More generally, for a term (a conjunction of literals) to be an abductive explanation of the classifier's actual decision, the classifier's decision should be invariant under perturbation on the variables not appearing in the term. (An AXp satisfies an additional restriction of minimality that will be elucidated at a later stage: an AXp is a 'minimal' term for which the classifier's actual decision is invariant under perturbation.)
On the contrary, it is called a contrastive explanation (CXp) [23] (we prefer the notation AXp used by Ignatiev et al. [24, 23] for its connection with CXp), because perturbing nothing but 'gender' will change the decision from rejecting the application to accepting it. Therefore, the "duality" between the two notions rests on the fact that AXp answers a why-question by indicating that the classification would stay unchanged under intervention on variables other than 'gender' and 'ownership', whereas CXp answers a why-not-question by indicating that the classification would change under intervention on 'gender'. More generally, for a term (a conjunction of literals) to be a contrastive explanation of the classifier's actual decision, the classifier's decision should be variant under perturbation on all variables appearing in the term, where 'variant' is taken to be a synonym of 'non-invariant'.
As Woodward [50, p. 225, footnote 5] clarifies:
[I]nvariance is a modal notion – it has to do with whether a relationship would remain stable under various hypothetical changes.
Therefore, following Woodward, the most natural way of modeling invariance is by means of a modal language whereby the notions of necessity and possibility can be represented. This is the approach we take in this work.
In particular, in order to model explanations in classifier systems, we use a modal language with a ceteris paribus (other things being equal) flavor. Indeed, the notion of invariance under intervention we consider presupposes that one intervenes on specific input features of the classifier, while keeping the values of the other input features unchanged (i.e., the values of the other input features being equal). So, for Alice’s example we expect two modal formulas saying:
a) 'gender' and 'ownership' keeping their actual values, changing other features' values necessarily does not affect the actual decision of rejecting Alice's application;
b) other features keeping their actual values, changing the value of 'gender' necessarily modifies the classifier's decision of rejecting Alice's application.
Specifically, we will extend the ceteris paribus modal logic introduced in [17] by a finite set of atoms representing the possible decisions/classifications of a classifier, together with axioms governing them. The resulting logic is called BCL, which stands for Binary input Classifier Logic, since the input variables of a classifier are assumed to be binary. One may roughly think of its models as S5 models supplemented with a classification function which allows us to fully represent a classifier system. Each state in the model corresponds to a possible input instance of the classifier. Moreover, the classification function induces a partition of the set of instances, where each part corresponds to a set of input instances which are classified equally by the classifier. We call these models classifier models. BCL and its extensions open up new vistas, including (i) defining counterfactual conditionals and studying their relationship with the notions of abductive and contrastive explanation, (ii) modeling classifier dynamics through the use of formal semantics for logics of communication and change [43, 46], and (iii) viewing a classifier as an agent and representing its uncertainty about the actual instance to be classified through the use of epistemic logic [14].
Before concluding this introduction, it is worth noting that a classifier system is a simple form of causal system whose only dependency relations are between the input variables and the single output variable. Unlike Bayesian networks or artificial neural networks, a classifier system does not include 'intermediate' endogenous variables that, at the same time, depend on the input variables and causally influence the output variable(s). Therefore, many distinctions and disputes addressed in the theory of causality and causal explanation do not emerge in our work: for example, the vital distinction between correlation and causality [38], the criticism of ceteris paribus as natural law [50], and whether a causal explanation requires providing information about a causal history or causal chain of events [29]. All these subtleties only show up when the causal structure is complex, and hence collapse in a classifier system, which has only two layers (input-output).
Outline
The paper is structured as follows. In Section 2 we introduce our modal language as well as its formal semantics using the notion of classifier model. In Section 3 two proof systems are given, BCL and its 'weak' variant. We show they are sound and complete relative to the classifier model semantics with, respectively, finitely many and countably infinitely many input variables. Section 4 presents a family of counterfactual conditional operators and elucidates their relevance for understanding the behavior of a classifier system. Section 5 is devoted to classifier explanation. We extend the existing notions of explanation for Boolean classifiers to binary input classifiers; the notions include AXp, CXp and bias in the field of XAI. We will see that in the binary input classifier setting their behaviors are subtler. Besides, their connection with counterfactual conditionals is studied. Finally, in Section 6 we present two extensions of our language: (i) a dynamic extension with assignment operators enabling classifier change and (ii) an epistemic extension in which the classifier's uncertainty about the actual input can be represented. Further research directions are discussed in the conclusion. The main results are either proven in the appendix or pointed out as corollaries.
Compared to our conference paper presented at CLAR 2021 [31], we no longer restrict ourselves to the language with finitely many variables. Thus two proof systems instead of one have to be presented, because in the infinite-variable setting the "functionality" property of classifiers cannot be syntactically expressed in a finitary way. The assumption of a complete domain is dropped, partly for the same technical reason. As a result, we are able to represent partial classifiers which were not expressible in the framework presented in our CLAR 2021 paper. Completeness and complexity results have been refined and improved. Other parts have also been adapted to account for the possibly infinite set of variables and the incomplete domain.
2 A Language for Binary Classifiers
In this section we introduce a language for modeling binary (input) classifiers and its semantics. The language has a ceteris paribus nature that comes from the ceteris paribus operators it contains. They were first introduced in [17]. (More recently, similar operators have been used in the context of the logic of functional dependence by Baltag & van Benthem [3].)
2.1 Basic Language and Classifier Model
Let be a countable set of atomic propositions, with elements noted , which are used to represent the value taken by an input variable (or feature). When referring to input variables/features we sometimes use the notation ' ' to distinguish it from the symbol for the atomic proposition. In this sense, the atomic proposition should be read "the Boolean input variable ' ' takes value 1", while its negation should be read "the Boolean input variable ' ' takes value 0".
We introduce a finite set to denote the output values (classifications, decisions) of the classifier. Elements of are also called classes in the jargon of classifiers. For this reason, we note them . For any , we call a decision atom, to be read as "the actual decision (or output) takes value ", and have . Finally, let be the set of atomic formulas. Notice that the symbols and have different statuses: is an atomic proposition representing an atomic fact, while is not. This explains why (an output value) and (an atomic formula representing the fact that the actual output has a certain value) are distinguished.
The modal language is hence defined by the following grammar:
where ranges over , ranges over , and is a finite subset of which we note .
The set of atomic formulas occurring in a formula is noted .
The formula has to be read “ is necessary all features in being equal” or “ is necessary regardless of the truth or falsity of the atoms in ”. Operator is the dual of and is defined as usual: .
The language is interpreted relative to classifier models whose class is defined as follows.
Definition 1 (Classifier model).
A classifier model (CM) is a tuple where:
• is a set of states or input instances, and
• is a decision (or classification) function.
The class of classifier models is noted .
A pointed classifier model is a pair with a classifier model and . Formulas in are interpreted relative to a pointed classifier model, as follows.
Definition 2 (Satisfaction relation).
Let be a pointed classifier model with and . Then:
We can think of a pointed model as a pair of with . Thus, is the output of the input instance according to . The condition , which induces an equivalence relation modulo , indeed stipulates that and are indistinguishable regarding the atoms (the features) in . The formula is true at a state if is true at all states that are modulo- equivalent to state . It has the selectis paribus (SP) (selected things being equal) interpretation "features in being equal, necessarily holds (under possible perturbation on the other features)". When is finite, has the standard ceteris paribus (CP) interpretation "features other than being equal, necessarily holds (under possible perturbation of the features in )". (We thank Giovanni Sartor for drawing the distinction between CP and SP.) When , is the S5 universal modality, since every state is modulo- equivalent to all states.
A formula of is said to be satisfiable relative to the class if there exists a pointed classifier model with such that . It is said to be valid relative to , noted , if its negation is not satisfiable relative to . Moreover, we say that is valid in the classifier model , noted , if for every .
It is worth noting that every modality can be defined by means of the universal modality . To show this, let us introduce the following abbreviation for every :
can be seen as the syntactic expression of a valuation on , and therefore represents a set of states in a classifier model satisfying the valuation. We have the following validity for the class :
It means that is true at state , if and only if, for whatever , if then for any state such that , is true at .
Let us close this section by formally introducing our running example.
Example 2 (Applicant Alice, formal).
Let , where and stand for and respectively. Suppose is a CM such that and
Consider the state . Then, stands for the instance Alice and for the classifier in Example 1 such that .
Now Alice is asking for explanations of the decision/classification, e.g., 1) which of her features (necessarily) lead to the current decision, 2) changing which features would make a difference, 3) perhaps most importantly, whether the decision for her is biased. In Section 5 we will show how to use the language and its semantics to answer these questions.
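To make the semantics concrete, here is a minimal Python sketch of the running example. It assumes a hypothetical encoding of instances as dictionaries over the atom names m, e, o, c (male, employed, owns a property, city centre) and of the decision rule of Example 1; none of these identifiers come from the paper's formal notation. The function box evaluates the ceteris paribus modality of Definition 2 by brute force over a complete state space.

```python
from itertools import product

ATOMS = ["m", "e", "o", "c"]          # hypothetical names for the four input features

def f(s):
    # Example 1: accept (1) iff employed and (male or owns a property); reject (0) otherwise
    return 1 if s["e"] and (s["m"] or s["o"]) else 0

# State space of a complete classifier model: every truth assignment over ATOMS.
S = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]

def box(X, phi, s):
    """[X]phi: phi holds at every state that agrees with s on all atoms in X."""
    return all(phi(t) for t in S if all(s[p] == t[p] for p in X))

# Alice: not male, employed, no property, flat in the city centre.
alice = {"m": False, "e": True, "o": False, "c": True}

# (a) 'gender' and 'ownership' keeping their actual values, the decision stays 'reject'.
print(box({"m", "o"}, lambda t: f(t) == 0, alice))                          # True
# (b) the other features keeping their values, the decision tracks 'gender':
#     it is 'accept' exactly at the perturbed (male) state.
print(box({"e", "o", "c"}, lambda t: f(t) == (1 if t["m"] else 0), alice))  # True
```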
2.2 Discussion
In this subsection we discuss in more detail some subtleties of classifier models in relation with the modal language which is interpreted over them.
-Completeness
In the definition of classifier model (Definition 1) given above, we stipulated that the set of states does not necessarily include all possible input instances of a classifier. More generally, according to our definition, a classifier model could be incomplete with respect to a set of atoms from , that is, there could be a truth assignment for the atoms in which is not represented in the model. Incompleteness of a classifier model is justified by the fact that in certain domains of application hard constraints exist which prevent some input instances from occurring. For example, a hard constraint may impose that a male cannot be pregnant (i.e., all states in which the atoms and are true should be excluded from the model).
However, it is interesting to see how completeness of a classifier with respect to a finite set of features can be represented in our semantics. This is what the following definition specifies.
Definition 3 (-completeness).
Let be a classifier model and . Then, is said to be -complete, if .
In plain words, the definition means that any truth assignment for the atoms in is represented by some state of the model. As the following proposition indicates, the class of -complete CMs can be syntactically represented. The proof is straightforward and omitted.
Proposition 1.
Let be a CM and . is -complete if and only if , we have , with
-Definiteness
In certain situations, there might be a portion of the feature space which is irrelevant for the classifier's decision. For example, in Alice's example the fact of renting an apartment in the city center (the feature ) plays no role in the classification. In this case, we say that the classifier is definite with respect to the subset of features .
More generally, a classifier is said to be definite with respect to a set of features if its decision is only determined by the variables in , that is to say, the variables in the complementary set play no role in the classifier’s decision. In other words, the classifier is said to be -definite if its decision is independent of the variables in .
The following definition introduces the concept of -definiteness formally.
Definition 4 (-definiteness).
Let be a classifier model and . Then, is said to be -definite, if , if then .
-definiteness is tightly related to the notion of dependence studied in (propositional) dependence logic [53]. The latter focuses on so-called dependence atoms of the form , where is a propositional variable and is a finite set of propositional variables. Such an atom expresses the fact that the truth value of the propositional variable only depends on the truth values of the propositional variables in . It turns out that dependence atoms can be expressed in our ceteris paribus modal language as abbreviations:
Interestingly, the notion of -definiteness is expressible in our modal language by means of the dependence atoms. This is what the following proposition indicates.
Proposition 2.
Let be a CM and . is -definite if and only if with
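Both properties can be checked mechanically on small models. The sketch below tests Definition 3 and Definition 4 by enumeration, under the same hypothetical encoding as the previous snippet; the helper names is_complete and is_definite are ours, introduced only for illustration.

```python
from itertools import product

ATOMS = ["m", "e", "o", "c"]
S = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]
f = lambda s: 1 if s["e"] and (s["m"] or s["o"]) else 0

def is_complete(S, X):
    """Definition 3: every truth assignment over X is realized by some state of the model."""
    return len({tuple(s[p] for p in sorted(X)) for s in S}) == 2 ** len(X)

def is_definite(S, f, X):
    """Definition 4: any two states agreeing on the atoms in X receive the same output."""
    seen = {}
    for s in S:
        key = tuple(s[p] for p in sorted(X))
        if seen.setdefault(key, f(s)) != f(s):
            return False
    return True

print(is_complete(S, {"m", "e", "o", "c"}))   # True: the model is complete w.r.t. all four atoms
print(is_definite(S, f, {"m", "e", "o"}))     # True: the address feature plays no role
print(is_definite(S, f, {"m", "e"}))          # False: the output also depends on ownership
```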
We conclude this section by mentioning some remarkable properties of -definiteness. The first fact to be noticed is that -definiteness is upward closed.
Fact 1.
For every and , if is -definite then is -definite too.
Secondly, -definiteness for some is guaranteed in the finite-variable case.
Fact 2.
For every , if is finite then is -definite.
This does not hold in the infinite case.
Fact 3.
If is countably infinite and then there exists such that, for all , is not -definite.
The previous fact is witnessed by any CM such that
• ,
• ,
• , if then ,
where and denotes symmetric difference, viz., . It is easy to show that a CM so defined is not -definite for any .
3 Axiomatization and Complexity
In this section, we provide axiomatics for our logical setting. We distinguish the finite-variable case from the infinite-variable case. We moreover prove complexity results for satisfiability checking in both cases. Before that, we first introduce an alternative Kripke semantics for the interpretation of the language . It will allow us to use the standard canonical model technique for proving completeness. Indeed, this technique cannot be directly applied to CMs in the infinite-variable case, since our modal language is not expressive enough to capture the "functionality" property of CMs when is infinite. We think it would be possible to apply the canonical model argument directly to CMs in the finite-variable case, but we leave this to future work.
3.1 Alternative Kripke Semantics
In our alternative semantics the concept of classifier model is replaced by the following concept of decision model. It is a multi-relational Kripke structure with one accessibility relation per finite set of atoms plus a number of constraints over the accessibility relations and the valuation function for the atomic propositions.
Definition 5 (Decision model).
A decision model (DM) is a tuple such that is a set of possible worlds, is a valuation function for atomic formulas, and the following constraints are satisfied:
(C1) iff ,
(C2) ,
(C3) if then ,
(C4) if then ;
where abbreviates . The class of DMs is noted .
A DM is called finite if is finite. The class of finite DMs is noted finite-DM.
The interpretation of formulas in relative to a pointed DM goes as follows.
Definition 6 (Satisfaction relation).
Let be a DM and let . Then,
Validity and satisfiability of formulas in relative to class (resp. finite-DM) is defined in the usual way.
The following theorem appears obvious, since it only concerns whether the decision function (classifier) is given as a constituent of the model or induced from it. Notice that it holds regardless of whether is finite or countably infinite.
Theorem 1.
Let . Then, is satisfiable relative to the class if and only if it is satisfiable relative to the class .
3.2 Axiomatization: Finite-Variable Case
In this section we provide a sound and complete axiomatics for the language relative to the formal semantics defined above under the assumption that the set of atomic propositions is finite.
Definition 7 (Logic ).
We define (Binary Classifier Logic) to be the extension of classical propositional logic given by the following axioms and rules of inference:
(K[∅])
(T[∅])
(4[∅])
(B[∅])
(Red[∅])
(AtLeast)
(AtMost)
(Funct)
(MP)
(Nec[∅])
It can be seen that [∅] is an S5-style modal operator, that Red[∅] reduces any ceteris paribus modality to it, and that the remaining axioms represent the decision function syntactically: every expression maps to some unique output value.
A decision model can contain two copies of the same input instance, while a classifier model cannot. Thus, Theorem 1 above shows that our modal language is not powerful enough to capture this difference between CMs and DMs. Axiom Funct intervenes in the finite-variable case to guarantee that two copies of the same input instance (that may exist in a DM) have the same output value. The expression used in the axiom is an instance of the abbreviation we defined in Section 2.1. It represents a specific input instance. Notice that this abbreviation is only legal when is finite. Otherwise it would be the abbreviation of an infinite conjunction which is not allowed, since our modal language is finitary.
The proof of the following theorem is entirely standard and based on a canonical model argument.
Theorem 2.
Let be finite. Then, the logic is sound and complete relative to the class .
Corollary 1.
Let be finite. Then, the logic is sound and complete relative to the class .
3.3 Axiomatization: Infinite-Variable Case
In Section 3.2, we have assumed that the set of atomic propositions representing input features is finite. In this section, we drop this assumption and prove completeness of the resulting logic.
An essential feature of the logic is the “functionality” Axiom Funct. Such an axiom cannot be represented in a finitary way when assuming that the set is countably infinite. For this reason, it has to be dismissed and the logic weakened.
Definition 8 (Logic ).
In order to obtain the completeness of relative to the class , besides decision models (DMs), we additionally need quasi-decision models (QDMs).
Definition 9 (Quasi-DM).
A quasi-DM is said to be finite if is finite. The class of finite quasi-DMs is noted finite-QDM.
Semantic interpretation of formulas in relative to quasi-DMs is analogous to semantic interpretation relative to DMs given in Definition 6. Moreover, validity and satisfiability of formulas in relative to class QDM (resp. finite-QDM) is again defined in the usual way.
We are going to show the equivalence between QDM and CM step by step. The following theorem is proven by filtration.
Theorem 3.
Let be countably infinite and . Then, is satisfiable relative to the class QDM if and only if is satisfiable relative to the class finite-QDM.
Then, let us establish the crucial fact that, in the infinite-variable case, the language cannot distinguish finite-DMs from finite-QDMs. We are going to prove that any formula satisfiable in a finite-QDM is also satisfiable in some finite-DM . Since the only condition to worry about is , we just need to transform the valuation function of to guarantee that holds while still satisfying .
Theorem 4.
Let be countably infinite and . Then, is satisfiable relative to the class finite-QDM if and only if is satisfiable relative to the class finite-DM.
Recall that Theorem 1 shows that cannot distinguish between CMs and DMs, regardless of whether is finite or infinite. Thus, we obtain the desired equivalence between the model classes and in the infinite-variable case, as a corollary of Theorems 1, 3 and 4. This fact is highlighted by Figure 1. More generally, Figure 1 shows that when is countably infinite the five semantics for the modal language are all equivalent, since from every node in the graph we can reach all other nodes.
Theorem 5.
Let be countably infinite and . Then, is satisfiable relative to the class if and only if is satisfiable relative to the class .
As a consequence, we are in a position to prove that the logic is also sound and complete for the corresponding classifier model semantics, under the infinite-variable assumption. The only missing building block is the following completeness theorem. The proof is similar to the proof of Theorem 2 (with the only difference that the canonical QDM does not need to satisfy ), and is omitted.
Theorem 6.
Let be countably infinite. Then, the logic is sound and complete relative to the class .
Corollary 2.
Let be countably infinite. Then, the logic is sound and complete relative to the class .

3.4 Complexity Results
Let us now move from axiomatics to complexity issues. Our first result is about complexity of checking satisfiability for formulas in relative to the class when is finite and fixed. It is in line with the satisfiability checking problem of the modal logic S5 which is known to be polynomial in the finite-variable case [18].
Theorem 7.
Let be finite and fixed. Then, checking satisfiability of formulas in relative to can be done in polynomial time.
As the following theorem indicates, the satisfiability checking problem becomes intractable when dropping the finite-variable assumption.
Theorem 8.
Let be countably infinite. Then, checking satisfiability of formulas in relative to is NEXPTIME-complete.
Let us consider the following fragment of the language where only the universal modality is allowed:
Clearly, satisfiability checking for formulas in remains polynomial when there are only finitely many primitive propositions. As the following theorem indicates, complexity decreases from NEXPTIME to NP when restricting to the fragment under the infinite-variable assumption.
Theorem 9.
Let be countably infinite. Then, checking satisfiability of formulas in relative to is NP-complete.
The complexity results of this section are summarized in Table 1.
| | Fixed finite variables | Infinite variables |
|---|---|---|
| Fragment | Polynomial | NP-complete |
| Full language | Polynomial | NEXPTIME-complete |
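To convey the idea behind Theorem 7 (proved in the appendix by enumerating the finitely many classifier models and model checking each of them), here is a toy sketch with an assumed two-atom vocabulary: once the atoms and the output values are fixed, the number of classifier models is a constant, so satisfiability checking reduces to model checking each of them in turn. For simplicity the sketch enumerates only complete models and encodes formulas directly as Python predicates over a pointed model; it is a simplified illustration of the enumeration argument, not the algorithm of the paper.

```python
from itertools import product

ATOMS = ["p", "q"]                      # a fixed, tiny vocabulary (assumed for illustration)
VAL = [0, 1]                            # two output classes
S = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]
key = lambda s: tuple(sorted(s.items()))

def all_classifiers():
    """Enumerate every decision function from the 2^|ATOMS| states to VAL (a constant number)."""
    for outs in product(VAL, repeat=len(S)):
        yield {key(s): c for s, c in zip(S, outs)}

def satisfiable(phi):
    """phi is satisfiable iff it holds at some pointed (complete) classifier model (f, s)."""
    return any(phi(f, s) for f in all_classifiers() for s in S)

# "the decision is 0, but flipping p alone would make it 1"
phi = lambda f, s: f[key(s)] == 0 and f[key(dict(s, p=not s["p"]))] == 1
print(satisfiable(phi))                 # True
```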
4 Counterfactual Conditional
In this section we investigate a simple notion of counterfactual conditional for binary classifiers, inspired by Lewis' notion [27]. In Section 5, we will elucidate its connection with the notion of explanation.
We start our analysis by defining the following notion of similarity between states in a classifier model relative to a finite set of features .
Definition 10 (Similarity between states).
Let be a classifier model, and . The similarity between and in relative to the set of features , noted , is defined as follows:
A dual notion of distance between worlds can be defined from the previous notion of similarity:
This notion of distance is in accordance with the one used in knowledge revision [11]. (There are other options besides measuring distance by cardinality, e.g., distance in the sense of the subset relation as in [6]; we will consider them in future research.)
The following definition introduces the notion of counterfactual conditional as an abbreviation. It is a form of relativized conditional, i.e., a conditional with respect to a finite set of features. (A similar approach to conditionals is presented in [15], which also refines Lewis' semantics for counterfactuals by selecting the closest worlds according to not only the actual world and the antecedent, but also a set of formulas. The main technical difference is that they allow any counterfactual-free formula as a member of this set, while in our setting it only contains atomic formulas.)
Definition 11 (Counterfactual conditional).
We write to mean that “if was true then would be true, relative to the set of features ” and define it as follows:
with
As the following proposition highlights, the previous definition of counterfactual conditional is in line with Lewis’ view: the conditional holds if all closest worlds to the actual world in which the antecedent is true satisfy the consequent of the conditional.
Proposition 3.
Let be a classifier model, and . Then, if and only if , where
and for every :
For notational convenience, we simply write instead of when is finite. The formula captures the standard conditional of conditional logic. One can show that it satisfies all semantic conditions of Lewis' logic . (A remarkable fact is that not all satisfy the strong centering condition, which says that the actual world is the only closest world when the antecedent is already true there. To see this, consider a toy classifier model such that with , , , . We have , rather than .) However, when is infinite, is not a well-formed formula, since it ranges over an infinite set of atoms. In that case always has to be indexed by some finite .
The interesting aspect of the previous notion of counterfactual conditional is that it can be used to represent a binary classifier's approximate decision for a given instance. Let us suppose the set of decision values includes a special symbol meaning that the classifier has insufficient information to classify an instance in a precise way. More compactly, it is interpreted as the classifier abstaining from making a precise decision. In this situation, the classifier can try to make an approximate decision: it considers the closest instances to the actual instance for which it has sufficient information to make a decision and checks whether the decision is uniform among all such instances. In other words, is the classifier's approximate classification of (or decision for) the actual instance relative to the set of features , noted , if and only if "if a precise decision was made relative to the set of features , then this decision would be ". Formally:
The following proposition provides two interesting validities.
Proposition 4.
Let be finite, . Then,
According to the first validity, a classifier cannot make two different approximate decisions relative to a fixed set of features .
According to the second validity, if the classifier is able to make a precise decision for a given instance, then its approximate decision coincides with it. This second validity holds since the actual state/instance is the only closest state/instance to itself. Therefore, if the actual state/instance has a precise classification , all its closest states/instances also have it.
It is worth noting that the following formula is not valid relative to the class :
This means that a classifier may be unable to approximately classify the actual instance. The reason is that there could be different closest states to the actual one with different classifications.
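The closest-worlds reading of the counterfactual conditional (Definition 11 and Proposition 3) can also be sketched directly, again under the hypothetical encoding of the running example; distance is the Hamming distance restricted to the chosen feature set, and the two queries anticipate the counterfactual statements about Alice's gender and ownership discussed in Section 5.

```python
from itertools import product

ATOMS = ["m", "e", "o", "c"]
S = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]
f = lambda s: 1 if s["e"] and (s["m"] or s["o"]) else 0

def dist(X, s, t):
    """Distance relative to X: number of atoms in X on which s and t disagree."""
    return sum(1 for p in X if s[p] != t[p])

def counterfactual(X, ant, cons, s):
    """ant =>_X cons: every X-closest state satisfying the antecedent satisfies the consequent."""
    worlds = [t for t in S if ant(t)]
    if not worlds:
        return True                                  # vacuously true
    d = min(dist(X, s, t) for t in worlds)
    return all(cons(t) for t in worlds if dist(X, s, t) == d)

alice = {"m": False, "e": True, "o": False, "c": True}

# "If Alice were male, her application would be accepted."
print(counterfactual(set(ATOMS), lambda t: t["m"], lambda t: f(t) == 1, alice))   # True
# "If Alice owned a property, her application would be accepted."
print(counterfactual(set(ATOMS), lambda t: t["o"], lambda t: f(t) == 1, alice))   # True
```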
5 Explanations and Biases
In this section, we are going to formalize some existing notions of explanation of classifiers in our logic, and deepen the current study from a (finitely) Boolean setting to a multi-valued output, partial domain and possibly infinite-variable setting. For this purpose it is necessary to introduce the following notations.
Let denote a conjunction of finitely many literals, where a literal is an atom or its negation . We write , call a part (subset) of , if all literals in also occur in ; and if but not . By convention is a term of zero conjuncts. In particular, suppose is for some , then will denote the conjunction resulting from “flipping” (or “perturbing”) all literals of , i.e., .
In the terminology of Boolean classifiers, is called a term or property (of an instance). The set of terms is noted . We use to denote the set of all terms whose atoms are in . Additionally, to define the notion of bias we distinguish the set of protected features , like 'gender' and 'race', from the set of non-protected features .
Notice that in this section the cardinality of matters. Notions and results in Section 5.1 apply, unless stated otherwise, to both the finite and the countably infinite case. By contrast, in Sections 5.2 and 5.3 we restrict ourselves to the finite case, owing to the use of the formulas and there. We state this here once instead of repeating it below.
5.1 Prime Implicant and Abductive Explanation
We are now in a position to formalize the notion of prime implicant, which has played a fundamental role in the theory of Boolean functions since [39].
Definition 12 (Prime implicant ()).
We write to mean that is a prime implicant for and define it as follows:
It is a proper extension of the definition of prime implicant in the Boolean setting, since it is a term such that 1) it necessarily implies the actual classification (which is why it is called an implicant); and 2) every proper subset of it fails to necessarily imply the actual classification (which is why it is called prime). Notice that being a prime implicant is a global property of the classifier, even though we formalize it by means of a pointed model. The syntactic abbreviation for prime implicant can be better understood by observing that for a given CM and , we have:
To explain the actual classification of a given input, some XAI researchers consider prime implicants which are actually true. We follow the terminology of [24] and call them abductive explanations (AXp).
Definition 13 (Abductive explanation ()).
We write to mean that abductively explains the decision and define it as follows:
AXp is a local explanation, because is not only a prime implicant for the classification, but also a property of the actual instance to be classified. AXp can be expanded to highlight its connection with the notion of variance/invariance.
Proposition 5.
Let and . Then, we have the following validity:
The formula expresses the idea of invariance under intervention (perturbation): as long as the explanans variables are kept fixed, namely the variables in , any perturbation on the other variables does not change the explanandum, namely classification .
Many names besides AXp are found in the literature, e.g., PI-explanation [41] and sufficient reason [12]. Darwiche and Hirth [12] proved that any decision has a sufficient reason in the Boolean setting. The result is not a surprise, for a Boolean function always has a prime implicant, since by definition the arity of a Boolean function is finite. However, since we allow functions with infinitely many variables, AXps are not guaranteed to exist in general.
Fact 4.
Let be countably infinite and . Then, there exists some , , such that .
The statement can be proved by exhibiting the same infinite countermodel as in Fact 3 in Section 2.2. However, if a CM is -definite for some , then every state has an AXp, even when the CM is infinite.
Proposition 6.
Let and . If is -definite then .
Lastly, let us continue with the Alice example.
Example 3.
Recall the state of Alice. We have , namely that Alice's being female and not owning a property abductively explains the rejection of her application.
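On small complete models, abductive explanations can be computed by brute force. The sketch below, under the same hypothetical encoding of the running example, enumerates the parts of the actual instance by increasing size and keeps the minimal ones that necessarily imply the actual decision; it recovers the AXp of Example 3.

```python
from itertools import product, combinations

ATOMS = ["m", "e", "o", "c"]
S = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]
f = lambda s: 1 if s["e"] and (s["m"] or s["o"]) else 0

def implies_decision(term, c):
    """The term necessarily implies decision c: every state containing it is classified as c."""
    return all(f(t) == c for t in S if all(t[p] == v for p, v in term.items()))

def axps(s):
    """All AXps of f(s): minimal parts of the instance s that necessarily imply f(s)."""
    c, out = f(s), []
    for r in range(len(ATOMS) + 1):
        for atoms in combinations(ATOMS, r):
            term = {p: s[p] for p in atoms}
            if implies_decision(term, c) and not any(prev.keys() < term.keys() for prev in out):
                out.append(term)
    return out

alice = {"m": False, "e": True, "o": False, "c": True}
print(axps(alice))   # [{'m': False, 'o': False}]: being female and owning no property
```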
5.2 Contrastive Explanation (CXp)
AXp is a minimal part of the actual instance guaranteeing the current decision. A natural counterpart of AXp is contrastive explanation (CXp, named in [23]).
Definition 14 (Contrastive explanation (CXp)).
We write to mean that contrastively explains the decision and define it as follows:
The definition simply says that 1) is part of the actual input instance; 2) if the values of all variables in are changed while the values of the other variables are kept fixed, then the actual classification may change; and 3) the classification will not change if the variables outside and at least one variable in keep their actual values. The last condition captures a form of necessity: when the values of the variables outside are kept fixed, all variables in must be perturbed in order to change the actual classification.
The syntactic abbreviation for contrastive explanation can be better understood by observing that for a given CM and , we have:
CXp has a counterfactual flavor, since it answers the question: would the classification differ from the actual one if the values of all variables in the explanans were different? So, there seems to be a connection with the notion of counterfactual conditional we introduced in Section 4. Indeed, in XAI many researchers consider contrastive explanation and counterfactual explanation either closely related [48] or even interchangeable [42]. The following proposition sheds light on this point.
Proposition 7.
Let be a term and let be a literal. Then, we have the following two validities:
According to the first validity, in the general case contrastive explanation implies counterfactual explanation. According to the second validity, when the explanans is a literal (a single-conjunct term), contrastive explanation coincides with counterfactual explanation given -completeness. In particular, a literal contrastively explains the decision if and only if (i) the actual decision is , and (ii) if the literal were perturbed, the decision would be different from . In other words, in the "atomic" case under the completeness assumption, CXp is the same as counterfactual explanation.
Note that the right-to-left direction of the first validity does not necessarily hold, even after assuming that the classifier is complete with respect to the set of all features . To see this, it is sufficient to suppose that and and to consider the CM such that with and . It is easy to check that in the model so defined we have
but at the same time,
The problem is that the model fails to satisfy the necessity condition of contrastive explanation: it is not necessary to perturb both literals in to change the actual decision from to ; it is sufficient to perturb one of them. We can conclude that CXp is a special kind of counterfactual explanation with the additional requirement of necessity for the explanans.
Example 4.
In Alice’s case, we have . This means that both Alice’s being female and not owning property contrastively explain the rejection. Moreover, we have , namely if Alice was a male or an owner (of an immobile property), then her application would have been accepted.
Moreover, since the feature 'gender' is hard to change, owning a property is the (relatively) actionable explanation for Alice, if she intends to comply with the classifier's decision. (For the significance of actionability in XAI, see e.g. [42].) But surely Alice has another option, i.e., alleging that the classifier is biased. As we will see in the next subsection, an application of CXp is to detect decision biases in a classifier.
5.3 Decision Bias
A primary goal of XAI is to detect and avoid biases. Bias is understood as making decisions with respect to some protected features, e.g., 'race', 'gender' and 'age'.
There is a widely accepted notion of decision bias in the setting of Boolean functions, which can be illustrated by our Example 2 (see [12, 22]). Intuitively, the rejection of Alice is biased if there is another applicant, say Bob, who differs from Alice only on the protected feature 'gender', but gets accepted.
Definition 15 (Decision bias).
We write to mean that the decision is biased and define it as follows:
The definition says that the decision is biased at a given state , if (i) , and (ii) such that and . The latter, in plain words, requires another instance , which only differs from on some protected features, but obtains a different classification.
As we stated, CXp can be used to detect decision biases. The following result makes the statement precise.
Proposition 8.
We have the following validity:
Let us conclude this section by answering the last question regarding Alice raised at the end of Section 2.1.
Example 5.
Split in Example 2 into and . We then have . The decision for Alice is biased since 'gender' is the protected feature which contrastively explains the rejection, and if Alice were male, her application would have been accepted.
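Decision bias in the sense of Definition 15 can be sketched analogously, assuming the protected features are collected in a set PROT (here only 'gender'): the check looks for another state that differs from the actual one only on protected features, yet receives a different classification.

```python
from itertools import product

ATOMS = ["m", "e", "o", "c"]
PROT = {"m"}                                   # assumed: 'gender' is the only protected feature
S = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]
f = lambda s: 1 if s["e"] and (s["m"] or s["o"]) else 0

def biased(s):
    """Some state differing from s only on (at least one) protected feature gets a different class."""
    for t in S:
        differ = {p for p in ATOMS if s[p] != t[p]}
        if differ and differ <= PROT and f(t) != f(s):
            return True
    return False

alice = {"m": False, "e": True, "o": False, "c": True}
print(biased(alice))   # True: flipping only 'gender' turns the rejection into an acceptance
```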
6 Extensions
In this section, we briefly discuss two interesting extensions of our logical framework and analysis of binary classifiers. Their full development is left for future work.
6.1 Dynamic Extension
The first extension we want to discuss consists in adding to the language dynamic operators of the form with , where is a kind of assignment in the sense of [43, 47] and the formula has to be read “ holds after every decision is set to in context ”. The resulting language, noted , is defined by the following grammar:
where ranges over , ranges over , and . The interpretation of formula relative to a pointed classifier model with goes as follows:
where is the updated classifier model where, for every :
Intuitively, the operation consists in globally classifying all instances satisfying with value .
Dynamic operators are useful for modeling a classifier's revision. Specifically, new knowledge can be injected into the classifier, thereby leading to a change in its classification. For example, the classifier could learn that if an object is a piece of furniture, has one or more legs and has a flat top, then it is a table. This is captured by the following assignment:
An application of dynamic change is to model the training process of a classifier, together with the counterfactual conditionals of Section 4. Suppose at the beginning we have a CM which is totally ignorant, i.e., . We then prepare to train the classifier. The training set consists of pairs where , and , implies . We train the classifier by revising it with the training pairs one by one; obviously the order does not matter here. In other words, we re-classify some states. With a slight abuse of notation, let denote the model resulting from the series of revisions. We finish training by inducing the final model from , where , if , otherwise . This is an example of modeling a special case of so-called -nearest neighbour (KNN) classification in machine learning [10], where the distance is measured by cardinality: if a new case/instance has to be classified, we look at how the cases most similar to the new case were classified; if all of them ( of them in the case of KNN) were classified in the same category, we put the new case into that category.
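This training scenario can be sketched under the hypothetical dictionary encoding used earlier: assign implements the global re-classification performed by the assignment operation, and induce fills in the abstaining states from their nearest decided neighbours, abstaining again when those neighbours disagree. The training pairs below are invented for illustration only.

```python
from itertools import product

ATOMS = ["m", "e", "o", "c"]
S = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]
key = lambda s: tuple(sorted(s.items()))

def assign(f, phi, c):
    """Assignment: globally re-classify every state satisfying phi with the value c."""
    return {k: (c if phi(dict(k)) else v) for k, v in f.items()}

f = {key(s): "?" for s in S}                   # a totally ignorant classifier ('?' = abstain)

training = [({"m": True,  "e": True, "o": False, "c": False}, 1),
            ({"m": False, "e": True, "o": False, "c": True},  0)]
for x, c in training:                          # revise with the training pairs one by one
    f = assign(f, lambda s, x=x: s == x, c)

def induce(f):
    """Classify each still-abstaining state by its closest decided states (distance = cardinality),
    keeping the abstention whenever the closest decided states disagree."""
    decided = [t for t in S if f[key(t)] != "?"]
    g = dict(f)
    for s in S:
        if f[key(s)] != "?":
            continue
        d = min(sum(s[p] != t[p] for p in ATOMS) for t in decided)
        outs = {f[key(t)] for t in decided if sum(s[p] != t[p] for p in ATOMS) == d}
        g[key(s)] = outs.pop() if len(outs) == 1 else "?"
    return g

g = induce(f)
print(g[key({"m": True, "e": True, "o": True, "c": False})])   # 1: its nearest decided neighbour accepts
```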
The logics and ( and with Decision Change) extend the logic and by the dynamic operators . They are defined as follows.
Definition 16 (Logics and ).
It is a routine exercise to verify that the equivalences in Definition 16 are valid for the class and that the rule of replacement of equivalents (RE) preserves validity. The completeness of (resp. ) for this class of models under the finite-variable assumption (resp. infinite-variable assumption) follows from Corollary 1 (resp. Corollary 2), in view of the fact that the reduction axioms and the rule of replacement of proved equivalents can be used to find, for any -formula, a provably equivalent -formula.
Theorem 10.
Let be finite. Then, the logic is sound and complete relative to the class .
Theorem 11.
Let be countably infinite. Then, the logic is sound and complete relative to the class .
The following complexity results are consequences of Theorems 7 and 8 and the fact that, via the reduction axioms in Definition 16, we can find a polynomial reduction of satisfiability checking for formulas in to satisfiability checking for formulas in .
Theorem 12.
Let be finite and fixed. Then, checking satisfiability of formulas in relative to can be done in polynomial time.
Theorem 13.
Let be countably infinite. Then, checking satisfiability of formulas in relative to is NEXPTIME-complete.
6.2 Epistemic Extension
In the second extension we suppose that a classifier is an agent which has to classify what it perceives. The agent could have uncertainty about the actual instance to be classified since it cannot see all its input features.
In order to represent the agent’s epistemic state and uncertainty, we introduce an epistemic modality of the form which is used to represent what the agent knows in the light of what it sees. Similar notions of visibility-based knowledge can be found in [8, 45, 21, 44].
The language for our epistemic extension is noted and defined by the following grammar:
where ranges over , ranges over , and .
In order to interpret the new modality , we have to enrich classifier models with an epistemic component.
Definition 17 (Epistemic classifier model).
An epistemic classifier model (ECM) is a tuple where is a classifier model and is the set of atomic propositions that are visible to the agent. The class of ECMs is noted .
Given an ECM , we can define an epistemic indistinguishability relation which represents the agent’s uncertainty about the actual input instance.
Definition 18 (Epistemic indistinguishability relation).
Let be an ECM. Then, is the binary relation on such that, for all :
Clearly, the relation so defined is an equivalence relation. According to the previous definition, the agent cannot distinguish between two states and , noted , if and only if the truth values of the visible variables are the same at and .
The interpretation for formulas in extends the interpretation for formulas in given in Definition 2 by the following condition for the epistemic operator:
As the following theorem indicates, the complexity result of Section 3.4 for the finite-variable case generalizes to the epistemic extension.
Theorem 14.
Let be finite. Then, checking satisfiability of formulas in relative to can be done in polynomial time.
In order to illustrate the intuition behind the epistemic modality we go back to the example of the application for a loan to a bank.
Example 6.
Suppose the application is submitted through an online system which has to automatically decide whether it is acceptable or not. In his/her application, an applicant has to specify a value for each feature. Moreover, suppose the system receives an incomplete application: the applicant has only indicated that she is female, owns an apartment and lives in the city center, but has forgotten to specify whether she is employed or not. In this case, the value of the employment variable is not "visible" to the system. In formal terms, we extend the CM given in Example 2 by the visibility set to obtain an ECM . It is easy to check that the following holds:
This means that, on the basis of its partial knowledge of the applicant’s identity, the system does not know what to decide.
However, the system knows that if it turns out that the applicant is employed, then her application should be accepted:
Finally, the classifier knows that if it turns out that the applicant is employed, then the fact that she is employed and that she owns a property will abductively explain the decision to accept her application:
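A minimal sketch of the epistemic machinery of this subsection, assuming the visible atoms are collected in a set Obs and re-using the hypothetical encoding of the running example: K quantifies over the states that agree with the actual one on all visible atoms, mirroring Definition 18, and the queries mirror the statements about the incomplete application above.

```python
from itertools import product

ATOMS = ["m", "e", "o", "c"]
S = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]
f = lambda s: 1 if s["e"] and (s["m"] or s["o"]) else 0

Obs = {"m", "o", "c"}          # the employment feature is not visible to the system

def K(phi, s):
    """K phi: phi holds at every state indistinguishable from s (same values on all visible atoms)."""
    return all(phi(t) for t in S if all(s[p] == t[p] for p in Obs))

# The incomplete application: female, owns an apartment, city centre, employment unspecified.
applicant = {"m": False, "e": True, "o": True, "c": True}

# The system knows neither that the application is accepted nor that it is rejected...
print(K(lambda t: f(t) == 1, applicant), K(lambda t: f(t) == 0, applicant))   # False False
# ...but it knows that if the applicant turns out to be employed, it should be accepted.
print(K(lambda t: f(t) == 1 if t["e"] else True, applicant))                  # True
```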
7 Conclusion
We have introduced a modal language and a formal semantics that enable us to capture the ceteris paribus nature of binary classifiers. We have formalized in the language a variety of notions which are relevant for understanding a classifier's behavior, including counterfactual conditionals, abductive and contrastive explanations, and bias. We have provided two extensions that support reasoning about classifier change and about a classifier's uncertainty regarding the actual instance to be classified. We have also offered axiomatics and complexity results for our logical setting.
We believe that the complexity results presented in the paper are exploitable in practice. We have shown that satisfiability checking in the basic setting and in its dynamic and epistemic extensions is polynomial when finitely many variables are assumed. In the infinite-variable setting it becomes NEXPTIME-complete, and NP-complete when restricting to the fragment in which the only primitive modal operator is the universal modality . In future work, we plan (i) to find a number of satisfiability-preserving translations from our modal languages to the modal logic S5 and then from S5 to propositional logic using existing techniques [7], and (ii) to exploit SAT solvers for automated verification and generation of explanations and biases in binary classifiers.
Another direction of future research is the generalization of the epistemic extension given in Section 6.2 to the multi-agent case. The idea is to conceive of classifiers as agents and to be able to represent both the agents' uncertainty about the instance to be classified and their knowledge and uncertainty about other agents' knowledge and uncertainty (i.e., higher-order knowledge and uncertainty). Similarly, we plan to investigate in more depth the classifier dynamics we briefly discussed in Section 6.1. The idea is to see them as learning dynamics. Based on this idea, we plan to study the problem of finding a sequence of update operations guaranteeing that the classifier will be able to make approximate decisions for a given set of instances.
Finally, all classifiers we handle in this paper are essentially "white box", in the sense that we have perfect knowledge of them, so that we can compute their explanations. However, "black box" classifiers are the most interesting ones for XAI. In [32] we conceived of a "black box" classifier as an agent's uncertainty among many possible "white box" classifiers. We represented it by extending our language with a modal operator ranging over all possible functions which are compatible with the agent's partial knowledge. All notions of explanation we defined in this paper can be generalized to the "black box" setting. However, there are some important differences between the two settings. For instance, for a "black box" classifier an AXp does not always exist, as we showed in [32], in contrast with Proposition 6.
Acknowledgments
This work is supported by the ANR-3IA Artificial and Natural Intelligence Toulouse Institute (ANITI).
Appendix A Technical annex
This technical annex contains a selection of proofs of the results given in the paper.
A.1 Proof of Proposition 2
Proof.
Suppose is -definite but , which means that s.t. . W.l.o.g., we assume that . That is to say, , s.t. but . The latter indicates that , s.t. but , which violates -definiteness.
Let , and assume . Then, since , we have: if then , which is what -definiteness says. ∎
A.2 Proof of Theorem 1
Proof.
For the left to right direction, given a CM and s.t. , we construct a DM as follows
•
• if
• .
It is easy to check that is indeed a DM and .
For the other direction, given a DM and s.t. , we construct a CM as follows
•
• , if .
It is routine to check that is a CM, and .
∎
A.3 Proof of Theorem 2
Proof.
The proof is conducted by constructing the canonical model.
Definition 19 (Theory).
A set of formulas is said to be a -theory if it contains all theorems of and is closed under MP and Nec[∅]. It is said to be a consistent -theory if it is a theory and . It is said to be a maximal consistent -theory (MCT for short), if it is a consistent theory and for all consistent theory , if then .
Lemma 1 (Lindenbaum-type).
Let be a consistent -theory and . Then, there is a maximal consistent -theory s.t. and .
The proof is standard and omitted (see, e.g., [5, p. 197]).
Definition 20 (Canonical model).
The canonical decision model is defined as follows
•
•
•
We omit the superscript c whenever there is no misunderstanding.
Lemma 2.
Let be an MCT. Then .
Proof.
Suppose , then by the maximality of and , we have . Since is maximally consistent, there is exactly one s.t. . By we have , and by and we have . But then is inconsistent, since . Hence the supposition fails, which means . ∎
Lemma 3.
The canonical model is indeed a decision model.
Proof.
Check the conditions one by one. For C1, we need to show , if implies . Suppose not; then w.l.o.g. we have some , by maximality of namely . However, we have , for is a theorem, and by definition of , hence , since . But now we have a contradiction. C2-C4 hold obviously due to axioms , and respectively. ∎
Lemma 4 (Existence).
Let be the canonical model, be an MCT. Then, if then s.t. and .
The proof follows the same lines as in, e.g., [5, pp. 198-199] and is omitted.
Lemma 5 (Truth).
Let be the canonical model, be an MCT, . Then .
Proof.
By induction on . We only show the interesting case when takes the form .
For one direction, if , since for any , , then thanks to we have . By the induction hypothesis this means , therefore .
For the other direction, suppose not, namely . Then consider a theory . It is consistent since . Then take any s.t. . We have , but by the induction hypothesis. However, this contradicts . ∎
∎
A.4 Proof of Theorem 3
Proof.
Let be a QDM and s.t. . Let be the set of all subformulas of and let . Moreover, , we define , iff . Finally, we define .
Now we construct a filtration through , as follows
•
• , , iff
•
is indeed a filtration. We need to show that it satisfies the two conditions.
1) . Suppose . By construction of , , and and . As a result, which means .
2) If , then : if then . The crucial point is that , , if , then by the definitions of and . Hence by satisfaction relation of we have if then .
Moreover, is a finite-QDM. For it is given as the definition of . and hold because of .
Finally, we need to prove iff . We only show the case where takes the form . Given , i.e. , if then . By the definitions of and we have , and by the induction hypothesis , which means . If , i.e. , if then . Then by the definitions of and we have , and by the induction hypothesis . ∎
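As a side illustration of the counting idea behind the filtration (a toy sketch of our own, not the construction above), the following Python fragment identifies states that agree on a chosen finite set of atoms, which bounds the quotient by 2 to the number of relevant atoms; the relational structure of QDMs is not reproduced.

```python
# Toy illustration (not the paper's construction): collapse states that agree on
# the finitely many "relevant" atoms, bounding the quotient by 2^|relevant_atoms|.

def filtrate(states, valuation, relevant_atoms):
    """Group states by their valuation restricted to relevant_atoms.

    states         -- iterable of state names
    valuation      -- dict mapping each state to the set of atoms true at it
    relevant_atoms -- a finite set of atoms (think: those occurring in phi)
    """
    classes = {}
    for s in states:
        key = frozenset(valuation[s] & relevant_atoms)  # restriction of s
        classes.setdefault(key, set()).add(s)
    return classes  # at most 2 ** len(relevant_atoms) equivalence classes

# Four states collapse to two classes when only atom 'p' is relevant.
val = {'s1': {'p', 'q'}, 's2': {'p'}, 's3': {'q'}, 's4': set()}
print(filtrate(val.keys(), val, {'p'}))
```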
A.5 Proof of Theorem 4
Proof.
The right to left direction is obvious since any finite-DM is a finite-QDM. For the other direction, suppose there is a finite-QDM and s.t. . Since is infinite, we can construct an injection . Then, we construct a finite-DM as follows
•
• iff
• .
It is easy to check that is indeed a finite-DM. By induction we show that . When is some , we have , since the injection does not affect . The case of is the same. The Boolean cases are straightforward. Finally, when takes the form , again since does not change the valuation in , we have . Hence we have if then if then . ∎
A.6 Proof of Theorem 7
Proof.
Suppose is finite and fixed. In order to determine whether a formula is satisfiable for the class , we verify whether is satisfied in each CM, checking the CMs sequentially one after the other. The corresponding algorithm runs in polynomial time in the size of since: (i) there is a finite, constant number of CMs, and (ii) model checking for the language relative to a pointed CM is polynomial. This means that, when is finite and fixed, satisfiability checking has the same complexity as model checking. Regarding (i), the finite, constant number of CMs in the class is . Indeed, for every , we consider the number of functions from to . Regarding (ii), it is easy to build a model checking algorithm running in polynomial time: it suffices to adapt the well-known “labelling” model checking algorithm for basic multimodal logics and CTL [9]. The general idea of the algorithm is to label the states of a finite model step by step with the subformulas of the formula to be checked, starting from the smallest ones, i.e., the atomic propositions appearing in . At each step, a formula is added as a label to exactly those states of the model at which it is true.
∎
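As an illustration of the enumeration-plus-model-checking argument, here is a minimal Python sketch under simplifying assumptions: the formula language below (atoms, negation, conjunction, a decision atom t(c) and a box ranging over all instances) is only a stand-in for the language of the paper, and a direct recursive evaluator replaces the labelling algorithm.

```python
from itertools import product

ATOMS = ('p', 'q')                                      # Atm is finite and fixed
INSTANCES = list(product((0, 1), repeat=len(ATOMS)))    # all valuations of Atm

def holds(f, inst, phi):
    """Model checking: does phi hold at instance `inst` of classifier `f`?"""
    kind = phi[0]
    if kind == 'atom':                 # ('atom', 'p'): the input variable is true
        return inst[ATOMS.index(phi[1])] == 1
    if kind == 't':                    # ('t', c): the classifier outputs class c
        return f[inst] == phi[1]
    if kind == 'not':
        return not holds(f, inst, phi[1])
    if kind == 'and':
        return holds(f, inst, phi[1]) and holds(f, inst, phi[2])
    if kind == 'box':                  # phi[1] holds at every instance
        return all(holds(f, i, phi[1]) for i in INSTANCES)
    raise ValueError(kind)

def satisfiable(phi):
    """Enumerate the (constantly many, once Atm is fixed) classifiers and
    model-check phi at every instance of each of them."""
    for outputs in product((0, 1), repeat=len(INSTANCES)):
        f = dict(zip(INSTANCES, outputs))
        if any(holds(f, inst, phi) for inst in INSTANCES):
            return True
    return False

# "Some instance with p true is classified 1", written as not-box-not(p and t(1)).
phi = ('not', ('box', ('not', ('and', ('atom', 'p'), ('t', 1)))))
print(satisfiable(phi))   # True
```

Once the set of atoms is fixed, the outer loop over classifiers is a constant, so the overall running time is polynomial in the size of the formula, which is the point of the argument.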
A.7 Proof of Theorem 8
Proof.
As for NEXPTIME-hardness, in [17] the following ceteris paribus modal language, denoted , is considered with a countable set of atomic propositions:
where ranges over and is a finite set of atomic propositions from . Formulas for this language are interpreted relative to a simple model and a state in the expected way as follows (we omit boolean cases since they are interpreted in the usual way): ; . It is proved that, when is countably infinite, satisfiability checking for formulas in relative to the class of simple models is NEXPTIME-hard [17, Lemma 2 and Corollary 2]. It follows that satisfiability checking for formulas in our language with countably infinite is NEXPTIME-hard too.
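Since the semantic clauses of this language are only alluded to above, the following Python sketch encodes one natural reading of the ceteris paribus modality (an assumption on our part rather than a restatement of [17]): a box indexed by a finite set X of atoms holds at a valuation exactly when its argument holds at every valuation agreeing with it on X.

```python
# Assumed ceteris paribus reading: [X]phi holds at valuation w iff phi holds at
# every valuation that agrees with w on the finite set X of atoms.
from itertools import product

ATOMS = ('p', 'q', 'r')
VALUATIONS = [dict(zip(ATOMS, bits)) for bits in product((0, 1), repeat=len(ATOMS))]

def agrees_on(w, v, X):
    """Do valuations w and v assign the same value to every atom in X?"""
    return all(w[a] == v[a] for a in X)

def box(w, X, phi):
    """[X]phi at w: phi is true at all valuations that keep the atoms in X fixed."""
    return all(phi(v) for v in VALUATIONS if agrees_on(w, v, X))

# At a valuation where p is true, [{p}] p holds (p stays fixed),
# while the box indexed by the empty set fails (some valuation makes p false).
w = {'p': 1, 'q': 0, 'r': 1}
print(box(w, {'p'}, lambda v: v['p'] == 1))   # True
print(box(w, set(), lambda v: v['p'] == 1))   # False
```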
As for membership, let be the following translation from the language to the language :
By induction on the structure of , it is routine to verify that is satisfiable for the class of Definition 9 if and only if is satisfiable for the class of simple models, with
Hence, by Theorem 5 we have that, when is countably infinite, is satisfiable for the class of classifier models if and only if is satisfiable for the class of simple models. Since the translation is linear and satisfiability checking for formulas in relative to the class is in NEXPTIME in the infinite-variable case [17, Lemma 2 and Corollary 1], checking satisfiability of formulas in relative to the class is in NEXPTIME too, with countably infinite.
∎
A.8 Proof of Theorem 9
Proof.
NP-hardness follows from the NP-hardness of propositional logic.
In order to prove NP-membership, we can use the translation given in the proof of Theorem 8 to give a polynomial reduction of satisfiability checking of formulas in relative to to satisfiability checking in the modal logic S5. The latter problem is known to be in NP in the infinite-variable case [26]. ∎
A.9 Proof of Proposition 3
Proof.
For the right direction, we have from the antecedent. Suppose towards a contradiction that the consequent does not hold. Then, with , s.t. . The last conjunct means that and . But the conjuncts together guarantee that , because , and it is an argmax by definition of . This is the desired contradiction, since .
For the other direction, we need to show , given the antecedent. Suppose the opposite towards a contradiction. Then by definition, . Let , and . Then we have , which contradicts the antecedent. To see this, notice that the second conjunct holds because of , and the first conjunct because of and . ∎
A.10 Proof of Proposition 4
Proof.
The first validity is obvious, since if then given . For the second validity, notice that , if . Hence if , then we have . ∎
A.11 Proof of Proposition 5
Proof.
Let be a pointed CM and , which directly gives us . Now since is an implicant of , , for otherwise , s.t. ; and since is prime, we have , otherwise , s.t. and is also an implicant of . The other direction is proven in the same way and omitted. ∎
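The proof appeals to the classical notions of implicant and prime implicant; the following small Python check of these notions is a self-contained illustration based on the standard textbook definitions, not on the paper's formal development.

```python
# Standard definitions: a partial assignment t is an implicant of a Boolean
# classifier f if every instance extending t is classified 1, and a prime
# implicant if no proper sub-assignment of t is an implicant.
from itertools import product

ATOMS = ('p', 'q', 'r')

def extensions(term):
    """All full instances (dicts over ATOMS) that extend the partial assignment."""
    free = [a for a in ATOMS if a not in term]
    for bits in product((0, 1), repeat=len(free)):
        inst = dict(term)
        inst.update(zip(free, bits))
        yield inst

def is_implicant(f, term):
    return all(f(inst) == 1 for inst in extensions(term))

def is_prime_implicant(f, term):
    if not is_implicant(f, term):
        return False
    return all(not is_implicant(f, {a: v for a, v in term.items() if a != drop})
               for drop in term)

# Example classifier: f = p or (q and r).
f = lambda i: 1 if i['p'] == 1 or (i['q'] == 1 and i['r'] == 1) else 0
print(is_prime_implicant(f, {'p': 1}))            # True
print(is_prime_implicant(f, {'p': 1, 'q': 1}))    # False: {'p': 1} already suffices
```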
A.12 Proof of Proposition 6
Proof.
Suppose towards a contradiction that is finitely-definite, but , s.t. , if then . That is to say, s.t. but either or s.t. , but . Hence is neither -definite nor -definite. In either case, is not finitely-definite, since is arbitrarily selected from . ∎
A.13 Proof of Proposition 7
Proof.
For the first validity, let and and suppose . By definition of we have . We need to show . By the antecedent, , s.t. and . It is not hard to show that . Therefore , since . For the second validity, the right direction of the iff is a special case of the first validity. To show the left direction, from -completeness and the counterfactual conditional we have , s.t. and . Hence , which is by definition . ∎
A.14 Proof of Proposition 8
Proof.
We show that for any , both directions are satisfied in for some . The right to left direction is obvious, since from the antecedent we know there is a property s.t. and , which means . The other direction is proven by contraposition. Suppose for any s.t. , , then it means , if , then , which means . ∎
A.15 Proof of Theorem 14
Proof.
Suppose is finite. As in the proof of Theorem 7, we can show that the size of the model class is bounded by some fixed integer. Thus, in order to determine whether a formula is satisfiable for this class, it is sufficient to repeat model checking a number of times bounded by that integer. Model checking for the language with respect to a pointed ECM is polynomial. ∎
References
- [1] Leila Amgoud and Jonathan Ben-Naim. Axiomatic foundations of explainability. In 31st International Joint Conference on Artificial Intelligence (IJCAI 2022), 2022.
- [2] Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche, Jean-Marie Lagniez, and Pierre Marquis. On the computational intelligibility of boolean classifiers. In Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, volume 18, pages 74–86, 2021.
- [3] Alexandru Baltag and Johan van Benthem. A simple logic of functional dependence. Journal of Philosophical Logic, 50(5):939–1005, 2021.
- [4] Or Biran and Courtenay Cotton. Explanation and justification in machine learning: A survey. In IJCAI-17 workshop on explainable AI (XAI), volume 8(1), pages 8–13, 2017.
- [5] Patrick Blackburn, Maarten de Rijke, and Yde Venema. Modal Logic. Cambridge University Press, 2001.
- [6] Alexander Borgida. Language features for flexible handling of exceptions in information systems. ACM Transactions on Database Systems (TODS), 10(4):565–603, 1985.
- [7] Thomas Caridroit, Jean-Marie Lagniez, Daniel Le Berre, Tiago de Lima, and Valentin Montmirail. A SAT-based approach for solving the modal logic s5-satisfiability problem. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pages 3864–3870. AAAI Press, 2017.
- [8] Tristan Charrier, Andreas Herzig, Emiliano Lorini, Faustine Maffre, and François Schwarzentruber. Building epistemic logic from observations and public announcements. In Proceedings of the Fifteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2016), pages 268–277. AAAI Press, 2016.
- [9] Edmund M. Clarke and Bernd-Holger Schlingloff. Model checking. In Alan J. A. Robinson and Andrei Voronkov, editors, Handbook of automated reasoning, pages 1635–1790. Elsevier, 2001.
- [10] Padraig Cunningham and Sarah Jane Delany. K-nearest neighbour classifiers - a tutorial. ACM Computing Surveys, 54(6):1–25, 2022.
- [11] Mukesh Dalal. Investigations into a theory of knowledge base revision: preliminary report. In Proceedings of the Seventh National Conference on Artificial Intelligence, volume 2, pages 475–479. Citeseer, 1988.
- [12] Adnan Darwiche and Auguste Hirth. On the reasons behind decisions. In 24th European Conference on Artificial Intelligence (ECAI 2020), volume 325 of Frontiers in Artificial Intelligence and Applications, pages 712–720. IOS Press, 2020.
- [13] Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel Das. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Advances in Neural Information Processing Systems, pages 592–603, 2018.
- [14] Ronald Fagin, Yoram Moses, Joseph Y Halpern, and Moshe Y Vardi. Reasoning about Knowledge. MIT Press, 1995.
- [15] Patrick Girard and Marcus Anthony Triplett. Ceteris paribus logic in counterfactual reasoning. In Proceedings of the Fifteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2015), pages 176–193, 2016.
- [16] Nelson Goodman. Fact, fiction, and forecast. Harvard University Press, 1955.
- [17] Davide Grossi, Emiliano Lorini, and François Schwarzentruber. The ceteris paribus structure of logics of game forms. Journal of Artificial Intelligence Research, 53:91–126, 2015.
- [18] Joseph Y. Halpern. The effect of bounding the number of primitive propositions and the depth of nesting on the complexity of modal logic. Artificial Intelligence, 75(2):361–372, 1995.
- [19] Joseph Y. Halpern. Actual Causality. MIT Press, 2016.
- [20] Carl G. Hempel and Paul Oppenheim. Studies in the logic of explanation. Philosophy of Science, 15(2):135–175, 1948.
- [21] Andreas Herzig, Emiliano Lorini, and Faustine Maffre. A poor man’s epistemic logic based on propositional assignment and higher-order observation. In Proceedings of the 5th International Workshop on Logic, Rationality, and Interaction, Lecture Notes in Computer Science, pages 156–168. Springer, 2015.
- [22] Alexey Ignatiev, Martin C. Cooper, Mohamed Siala, Emmanuel Hebrard, and Joao Marques-Silva. Towards formal fairness in machine learning. In International Conference on Principles and Practice of Constraint Programming, pages 846–867. Springer, 2020.
- [23] Alexey Ignatiev, Nina Narodytska, Nicholas Asher, and Joao Marques-Silva. From contrastive to abductive explanations and back again. In International Conference of the Italian Association for Artificial Intelligence, pages 335–355. Springer, 2020.
- [24] Alexey Ignatiev, Nina Narodytska, and Joao Marques-Silva. Abduction-based explanations for machine learning models. In Proceedings of the Thirty-third AAAI Conference on Artificial Intelligence (AAAI-19), volume 33, pages 1511–1519, 2019.
- [25] Boris Kment. Counterfactuals and explanation. Mind, 115(458):261–310, 2006.
- [26] Richard E. Ladner. The computational complexity of provability in systems of modal propositional logic. SIAM Journal on Computing, 6(3):467–480, 1977.
- [27] David K. Lewis. Counterfactuals. Harvard University Press, 1973.
- [28] David K. Lewis. Counterfactual dependence and time’s arrow. Noûs, pages 455–476, 1979.
- [29] David K. Lewis. Causal explanation. In Philosophical Papers, volume 2, pages 214–240. Oxford University Press, 1986.
- [30] David K. Lewis. Causation. Journal of Philosophy, 70(17):556–567, 1973.
- [31] Xinghan Liu and Emiliano Lorini. A logic for binary classifiers and their explanation. In P. Baroni, C. Benzmüller, and Y. N. Wáng, editors, Logic and Argumentation - 4th International Conference, CLAR 2021, Hangzhou, China, 2021, Proceedings, Lecture Notes in Computer Science, pages 302–321. Springer, 2021.
- [32] Xinghan Liu and Emiliano Lorini. A logic of “black box” classifier systems. In Logic, Language, Information, and Computation: 28th International Workshop, WoLLIC 2022, Iași, Romania, 2022, Proceedings, pages 158–174. Springer Nature, 2022.
- [33] Silvan Mertes, Christina Karle, Tobias Huber, Katharina Weitz, Ruben Schlagowski, and Elisabeth André. Alterfactual explanations–the relevance of irrelevance for explaining ai systems. arXiv preprint arXiv:2207.09374, 2022.
- [34] Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1–38, 2019.
- [35] Tim Miller. Contrastive explanation: A structural-model approach. The Knowledge Engineering Review, 36, 2021.
- [36] Brent Mittelstadt, Chris Russell, and Sandra Wachter. Explaining explanations in AI. In Proceedings of the 2019 conference on Fairness, Accountability, and Transparency, pages 279–288, 2019.
- [37] Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 607–617, 2020.
- [38] Judea Pearl. Causality. Cambridge university press, 2009.
- [39] Willard V. Quine. A way to simplify truth functions. The American Mathematical Monthly, 62(9):627–631, 1955.
- [40] Weijia Shi, Andy Shih, Adnan Darwiche, and Arthur Choi. On tractable representations of binary neural networks. arXiv preprint arXiv:2004.02082, 2020.
- [41] Andy Shih, Arthur Choi, and Adnan Darwiche. Formal verification of bayesian network classifiers. In International Conference on Probabilistic Graphical Models, pages 427–438. PMLR, 2018.
- [42] Kacper Sokol and Peter A. Flach. Counterfactual explanations of machine learning predictions: opportunities and challenges for ai safety. In SafeAI@ AAAI, 2019.
- [43] Johan Van Benthem, Jan Van Eijck, and Barteld Kooi. Logics of communication and change. Information and Computation, 204(11):1620–1662, 2006.
- [44] Wiebe van der Hoek, Petar Iliev, and Michael J Wooldridge. A logic of revelation and concealment. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, (AAMAS 2012), pages 1115–1122. IFAAMAS, 2012.
- [45] Wiebe van der Hoek, Nicolas Troquard, and Michael J. Wooldridge. Knowledge and control. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), pages 719–726. IFAAMAS, 2011.
- [46] Hans van Ditmarsch, Wiebe van Der Hoek, and Barteld Kooi. Dynamic Epistemic Logic, volume 337 of Synthese Library. Springer, 2007.
- [47] Hans P van Ditmarsch, Wiebe van der Hoek, and Barteld P Kooi. Dynamic epistemic logic with assignment. In Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2005), pages 141–148. ACM, 2005.
- [48] Sahil Verma, John Dickerson, and Keegan Hines. Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596, 2020.
- [49] Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harv. JL & Tech., 31:841, 2017.
- [50] James Woodward. Explanation and invariance in the special sciences. The British Journal for the Philosophy of Science, 51(2):197–254, 2000.
- [51] James Woodward. Making Things Happen: a Theory of Causal Explanation. Oxford University Press, 2003.
- [52] James Woodward and Christopher Hitchcock. Explanatory generalizations, part i: A counterfactual account. Noûs, 37(1):1–24, 2003.
- [53] Fan Yang and Jouko Väänänen. Propositional logics of dependence. Annals of Pure and Applied Logic, 167(7):557–589, 2016.