
Counterfactual Explanations of Black-box Machine Learning Models using Causal Discovery with Applications to Credit Rating

Daisuke Takahashi Graduate School of Data Science,
Shiga University, Hikone, Japan
[email protected]
   Shohei Shimizu Graduate School of Data Science,
Shiga University, Hikone, Japan
RIKEN Center for Advanced
Intelligence Project, Tokyo, Japan

[email protected]
   Takuma Tanaka Graduate School of Data Science,
Shiga University, Hikone, Japan
[email protected]
Abstract

Explainable artificial intelligence (XAI) has helped elucidate the internal mechanisms of machine learning algorithms, bolstering their reliability by demonstrating the basis of their predictions. Several XAI models consider causal relationships to explain models by examining the input-output relationships of prediction models and the dependencies between features. The majority of these models have based their explanations on counterfactual probabilities, assuming that the causal graph is known. However, this assumption complicates the application of such models to real data, given that the causal relationships between features are unknown in most cases. Thus, this study proposed a novel XAI framework that relaxed the constraint that the causal graph is known. This framework leveraged counterfactual probabilities and additional prior information on causal structure, facilitating the integration of a causal graph estimated through causal discovery methods and a black-box classification model. Furthermore, explanatory scores were estimated based on counterfactual probabilities. Numerical experiments with artificial data confirmed that the explanatory score can be estimated more accurately than in the absence of a causal graph. Finally, as an application to real data, we constructed a classification model of credit ratings assigned by Shiga Bank, Shiga prefecture, Japan. We demonstrated the effectiveness of the proposed method in cases where the causal graph is unknown.

Index Terms:
Counterfactual explanations, explainable machine learning, causal discovery, rating classification

I Introduction

In recent years, the use of artificial intelligence (AI) has rapidly increased owing to its revolutionary impact on various aspects of society and industry. This is primarily attributed to advances in deep learning [1] and ensemble learning algorithms such as LightGBM [2]. However, humans cannot readily understand the basis for the predictions of these models because of their complex calculations and internal structures. Consequently, people are hesitant to utilize machine learning for tasks that require accountability, such as bank loans and medical diagnosis [3]. To address this challenge, numerous methods have been developed in explainable artificial intelligence (XAI) to improve the explainability of black-box models [4].

Among the most promising approaches of XAI is the explanation of predictive models using causal inference [5]. Causal inference-based XAI considers the relationships between inputs and outputs as well as the dependencies among features; thus, it can efficiently explain the desired predicted value. In contrast, the estimation by existing methods, such as decision tree-based feature importance [6] and SHAP [7], is based on correlations between variables, which may be spurious [8]. Most XAI methods based on causal inference employ counterfactual explanations, which explain a prediction by computing the changes in an individual's features that would modify the prediction to the desired class [9]. A previous study [10] presented an XAI system called LEWIS, which leveraged causal models and counterfactual probabilities. This study employed loan approval as an example to estimate a counterfactual probability score for a binary classification. Based on this explanatory score, the method makes explicit which features a customer should change to reverse a predicted loan rejection.

However, LEWIS requires complete knowledge of the causal graph of features. When building machine learning models, background knowledge such as causal relationships is rarely completely known. This is a major challenge when applying this method to real data. When the causal graph is unknown, it can be estimated by causal discovery [11]; however, the performance of this combination on artificial and real-world data has not been explored.

Figure 1: Framework of counterfactual probability explanations using causal structure information

This study proposed a new XAI framework that relaxed the constraint of the causal graph being known. As shown in Figure 1, causal discovery can be performed with prior knowledge of the causal structure, and the resulting causal graph can be used to explain the features using counterfactual probabilities. Our contributions are summarized as follows:

  • We conducted numerical experiments to analyze how explanatory scores vary with the causal structure and proposed useful prior information on the causal structure.

  • Our artificial data experiments indicated that combining causal discovery with the prior information proposed above recovers estimates of the explanatory score better than previous methods that use no causal discovery or causal graph.

  • By applying the method to real-world financial data from the Shiga Bank, Ltd., the largest regional bank in Shiga prefecture, Japan, we demonstrated that useful explanations can be made from the graph estimated by causal discovery. To the best of our knowledge, this is the first real-world example of counterfactual probability explanations in which the causal structure is unknown and is estimated by causal discovery.

The remainder of this paper is organized as follows. Section II provides an overview of related research and defines the symbols used in this paper. Section III analyzes the effects of causal structures on explanation scores and proposes useful prior information on the causal structure. In Section IV, using artificial data, we provide counterfactual explanations by combining prior information on the causal structure with causal discovery, and we examine whether the estimated explanatory score recovers the true explanatory score. Section V demonstrates, using real data, that the proposed method is useful even when the causal graph is unknown. Finally, Section VI summarizes the results and provides future perspectives.

II Background

This section introduces the mathematical symbols and formulas used in this paper and outlines the related methods.

II-A Structural Causal Model

Structural causal model (SCM) [12] is a mathematical framework for handling causal relationships. An SCM comprises a set of endogenous variables $\mathbf{V}=\{V_{1},V_{2},\ldots,V_{p}\}$, a set of exogenous variables $\mathbf{U}=\{U_{1},U_{2},\ldots,U_{q}\}$, which are variables not determined by the endogenous variables, and a set of functions $\mathbf{F}=\{f_{1},f_{2},\ldots,f_{p}\}$, which determine the values of the endogenous variables from those of other endogenous and exogenous variables. In addition, we denote the parent set of an endogenous variable $V_{i}$ by $\mathrm{pa}(V_{i})$ and assume that the variables in $\mathbf{U}$ are independent. Consequently, if the SCM is specified by

$V_{i}=f_{i}(\mathrm{pa}(V_{i}),U_{i})$   (1)

and the system is assumed to be autonomous, Equation (1) is called a structural causal model, wherein the causal relations among the variables are represented by a directed graph called a causal graph. Here, autonomy implies that changing any one function or the distribution of any one variable does not change the other functions or distributions.
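To make Equation (1) concrete, the following minimal sketch (ours, not part of the original implementation) generates observational data from a three-variable linear SCM; the chain structure, the coefficients, and the uniform noise are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Independent exogenous variables U_X, U_Z, U_Y (illustrative: uniform on [0, 1])
u_x, u_z, u_y = rng.uniform(0, 1, size=(3, n))

# Structural equations V_i = f_i(pa(V_i), U_i) for an assumed chain X -> Z -> Y
x = u_x              # X has no endogenous parents
z = 1.0 * x + u_z    # Z is determined by its parent X and U_Z
y = 1.5 * z + u_y    # Y is determined by its parent Z and U_Y

data = np.column_stack([x, z, y])  # observational sample drawn from the SCM
```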

II-B LEWIS

LEWIS [10] is an XAI method that provides counterfactual explanations for machine learning predictions based on structural causal models. Consider the following binary classification problem. The explanatory variable is $X\in\mathbf{V}$, and we consider two of its values $x,x^{\prime}\in X$ with $x>x^{\prime}$, where all values of the explanatory variables are discretized. Let $O\in\{o,o^{\prime}\}$ be the predicted value corresponding to the explanatory variable, where $o$ is the positive predicted value and $o^{\prime}$ is the negative predicted value. The counterfactual in any causal model $M=\langle V,U,F\rangle$ can be defined for a certain individual $u\in U$ as [12]

$X(u)=x\Rightarrow Y_{X=x}(u)=Y_{M_{x}}(u)$,   (2)

where $Y_{X=x}$ is the counterfactual value of $Y$ if the value of $X$ were $x$, and $Y_{M_{x}}$ is the value of $Y$ in the modified causal model $M_{x}$ in which $X$ is set to $x$. Further, the counterfactual probability $P(Y_{X=x})$ can be expressed as $P(Y|\mathit{do}(X=x))$ using the intervention operator $\mathit{do}$. It represents the probability of $Y$ if the value of $X$ were changed to $x$ and can be computed based on a causal graph.

In this setting, LEWIS defines the following three explanatory scores as counterfactual probabilities: the necessity score (Nec)

$\mathrm{Nec}(x,x^{\prime})=P(o^{\prime}_{X=x^{\prime}}\,|\,x,o)$,   (3)

sufficiency score (Suf)

$\mathrm{Suf}(x,x^{\prime})=P(o_{X=x}\,|\,x^{\prime},o^{\prime})$,   (4)

and necessity and sufficiency score (Nesuf)

$\mathrm{Nesuf}(x,x^{\prime})=P(o^{\prime}_{X=x^{\prime}},\,o_{X=x})$.   (5)

The necessity and sufficiency scores (called reversal probabilities) quantify the probability that the predicted value would change if the explanatory variable had a different value, given the observed value of the explanatory variable and its prediction. In particular, the necessity score represents the probability that the predicted value would change if the value of the explanatory variable decreased, whereas the sufficiency score represents the probability that the predicted value would change if the value increased. The necessity and sufficiency score balances the necessity and sufficiency scores and can be regarded as a feature importance in machine learning [10].

In [10], LEWIS was used to examine a binary classification model that determines whether a loan is approved or rejected. For example, $\mathrm{Nec}=0.1$ for a feature implies that if a person whose loan is predicted to be approved decreased the value of that feature, there would be at most a 10% chance that the loan would be predicted to be rejected. In addition, $\mathrm{Suf}=0.8$ implies that if a person whose loan is predicted to be rejected increased the value of a certain feature, there is a maximum probability of 80% that the loan would be predicted to be approved.

These three scores are expressed as

$\mathrm{Nec}(x,x^{\prime})=\dfrac{P(o^{\prime}|\mathit{do}(X=x^{\prime}))-P(o^{\prime}|x)}{P(o|x)}$,   (6)
$\mathrm{Suf}(x,x^{\prime})=\dfrac{P(o|\mathit{do}(X=x))-P(o|x^{\prime})}{P(o^{\prime}|x^{\prime})}$,   (7)
$\mathrm{Nesuf}(x,x^{\prime})=P(o^{\prime}|\mathit{do}(X=x^{\prime}))-P(o^{\prime}|\mathit{do}(X=x))$,   (8)

if the causal graph $G$ corresponding to the causal model $M$ is known and monotonicity ($x>x^{\prime}\Rightarrow O_{x}>O_{x^{\prime}}$) is satisfied. In this case, the global explanation score of LEWIS, $\mathrm{maxNesuf}(X)$, is given by the maximum value of $\mathrm{Nesuf}(x,x^{\prime})$ over all pairs of values of the explanatory variable, $(x,x^{\prime})$. Finally, if no causal graph is provided to LEWIS, it assumes that $P(O|\mathit{do}(X))=P(O|X)$ under the assumption that there are no confounding factors, which is likely to be violated.
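The interventional terms $P(o|\mathit{do}(X=x))$ in Equations (6)–(8) can be estimated from observational data once a valid adjustment set has been read off the causal graph. The following is a minimal sketch of this computation for discretized data, not the LEWIS implementation itself; the data frame columns and the adjustment set `adj` are assumed to be given.

```python
import pandas as pd

def p_do(df, outcome, o, treat, x, adj):
    """Backdoor adjustment: P(outcome = o | do(treat = x)) = sum_z P(o | x, z) P(z),
    where z ranges over the values of the adjustment set `adj` (assumed valid)."""
    if not adj:  # no adjustment needed: the interventional term reduces to a conditional
        sub = df[df[treat] == x]
        return (sub[outcome] == o).mean() if len(sub) else 0.0
    total = 0.0
    for _, group in df.groupby(adj):
        sub = group[group[treat] == x]
        if len(sub):
            total += (sub[outcome] == o).mean() * len(group) / len(df)
    return total

def nesuf(df, outcome, treat, x_hi, x_lo, adj, o_neg=0):
    """Necessity-and-sufficiency score of Eq. (8): P(o'|do(X=x')) - P(o'|do(X=x))."""
    return (p_do(df, outcome, o_neg, treat, x_lo, adj)
            - p_do(df, outcome, o_neg, treat, x_hi, adj))
```

Here, $\mathrm{maxNesuf}(X)$ would be obtained by taking the maximum of `nesuf` over all value pairs $(x,x^{\prime})$ of $X$ with $x>x^{\prime}$.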

III Numerical experiment of exploring useful prior knowledge on causal structure

In this section, we analyze the effects of causal structures on explanation scores and propose useful prior information on the causal structure.

III-A Analysis of the influence of causal structure on explanation scores

III-A1 Experimental design

We built a machine learning model using data generated from three-variable causal structures and analyzed the characteristics of the LEWIS explanation score estimated for the predicted values. Data were generated from the five causal graphs in Figure 2. Structures A, B, and E are termed the collider, chain, and fork, respectively, and each exhibits its own characteristic conditional independence relations [12]. Structure C involves confounding, where the variable $X$ is the confounding variable. Structure D contains a variable $X$ that is independent of the variable $Y$. The coefficients of each structural equation were all equal for each variable in structures B–E, and the coefficients of A were set to 1 and 1.5 for $X$ and $Z$, respectively. After generating data from the five causal structures and building a machine learning model on those data, we estimated the Nesuf score of Equation (5) with LEWIS. This was iterated 100 times, and the characteristics of the average estimated values were analyzed.

TABLE I: Experimental conditions
Function: {linear, nonlinear}
Probability distribution of error variables: uniform on [0, 1]
Model: CatBoost
Sample size: 5000
Discretization method: equal-width discretization
Number of discretization bins: 10

Table I presents the detailed experimental settings. In each repetition, the error variables were generated from a uniform distribution on [0, 1], and data were generated using a linear or nonlinear (second-order monomial) function system for each of the causal structures A–E. The generated data were discretized using the same number of discretization bins, and CatBoost was used as the classifier. Herein, the target variable $Y$ was also discretized to binary values using equal-width discretization. The sample size of the data was 5,000, and Nesuf was estimated for the predictions on these data.
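As an illustration of one repetition of this procedure, the sketch below generates data from the chain structure B, applies equal-width discretization, and fits the classifier; the use of pandas.cut and a default CatBoostClassifier is our own assumption about how the settings in Table I could be realized.

```python
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0, 1, n)       # structure B: X -> Z -> Y with uniform noise
z = x + rng.uniform(0, 1, n)
y = z + rng.uniform(0, 1, n)

# Equal-width discretization: 10 bins for the features, 2 bins for the target
df = pd.DataFrame({
    "x": pd.cut(x, bins=10, labels=False),
    "z": pd.cut(z, bins=10, labels=False),
    "o": pd.cut(y, bins=2, labels=False),
})

clf = CatBoostClassifier(verbose=0, random_seed=0)
clf.fit(df[["x", "z"]], df["o"])
df["pred"] = np.ravel(clf.predict(df[["x", "z"]]))  # predictions whose Nesuf is estimated
```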

Figure 2: Causal graphs used in the analysis. The values on the directed edges represent the coefficients of the respective structural equations.

III-A2 Experimental results

The estimates of Nesuf are presented in Tables II and III. The results suggested two possible pieces of prior information on the causal structure relevant to determining Nesuf in the linear case. First, when multiple variables are direct parents of the target variable, as in structure A, the variables with stronger correlations are more important. In structure A, the correlation coefficients with $Y$ of variables $Z$ and $X$ were 0.8075 and 0.4779 on average, respectively, indicating that variable $Z$ exhibited a stronger correlation than $X$ and was therefore more important. Second, if the target variable is independent of the variable $X$ or has no directed edge from it, as in causal structures D and E, the importance is zero. Thus, if an explanatory variable is independent of the target variable, the LEWIS explanatory score is zero because intervention on that explanatory variable cannot change the target variable. A similar argument holds for Nesuf in the case of nonlinear causal structures.

TABLE II: Mean of Nesuf in the case of linear causal structure
A B C D E
x 0.367 0.498 0.999 0 0
z 0.999 0.999 0.325 1 1
TABLE III: Mean of Nesuf in the case of nonlinear causal structure
A B C D E
x 0.377 0.556 0.999 0 0
z 0.945 0.999 0.949 1 1

III-B Useful prior information on the causal structure

In Section III-A, we observed that the score differed greatly depending on whether the explanatory variable could causally affect the target variable. In particular, all explanatory scores will be 0 for explanatory variables that are independent of the target variable. In addition, in the case of a directed edge, or reverse causation, from the target variable to the explanatory variable, the target variable and the explanatory variable are dependent. However, even if the value of the explanatory variable were changed, the value of the target variable would not change; thus, the value of Nesuf becomes 0, and Suf and Nec provide no information. Therefore, we propose the following prior information on the causal structure for computing the explanation scores:

  (a) The target variable has a direct parent-child relationship with all explanatory variables, that is, there is a direct causal path from each explanatory variable to the target variable.

  (b) The target variable is the sink variable, where a sink variable is a variable that does not cause any other variable.

With prior information (a) (Figure 3a), the target variable is dependent on all explanatory variables. Under this prior information, the estimated score is adjusted by the variables that satisfy the backdoor criterion; if they do not satisfy it, the interventional query is estimated directly, as in prior work. Hence, if the explanatory variables and the target variable are not independent, the estimated score will be at least as good as when no causal graph is used. This also makes it possible to perform causal discovery using only the explanatory variables. With prior information (b) (Figure 3b), we consider the influence of variables that are independent of the target variable and of reverse causation, which cannot be identified with prior information (a). This prior information helps eliminate reverse causality from the target variable to the explanatory variables when performing causal discovery including the target variable.
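Given a causal graph estimated under the proposed prior information, one simple valid adjustment set for estimating $P(O|\mathit{do}(X))$ is the set of direct parents of the treated variable, which blocks every backdoor path under causal sufficiency. A minimal sketch, assuming the weighted adjacency matrix convention in which B[i, j] ≠ 0 means that variable j causes variable i:

```python
import numpy as np

def parent_adjustment_set(B, treat):
    """Direct parents of `treat` in a weighted adjacency matrix B, where
    B[i, j] != 0 is read as 'variable j is a direct cause of variable i'
    (an assumed convention). Adjusting for the parents of the treatment
    blocks every backdoor path under causal sufficiency."""
    return [j for j in range(B.shape[1]) if j != treat and B[treat, j] != 0]

# Toy example: X0 -> X1, X0 -> X2, X1 -> X2 (X0 confounds the effect of X1 on X2)
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.5, 1.2, 0.0]])
print(parent_adjustment_set(B, treat=1))  # [0]: adjust for X0 when intervening on X1
```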

Figure 3: Prior information on the causal structure. (a): Target variable Y has a direct parent-child relationship with all explanatory variables. (b): Target variable Y is the sink variable.

IV Numerical experiment and evaluation

This section describes the simulation and experimental results of explanations based on counterfactual probabilities through causal discovery using artificial data.

IV-A Numerical experiment setting

In the numerical experiment, we generated artificial data from the 8-variable causal graph shown in Figure 4, performed causal discovery based on the prior information on the causal structure proposed in the previous section, and estimated Nesuf for the predicted values. We evaluated both the agreement between the rankings of the estimated and true Nesuf and the error of the estimated Nesuf values. We compared these results with the value of Nesuf estimated under the assumption that $P(Y|\mathit{do}(X))=P(Y|X)$, as in [10]. The generated artificial data were either continuous or mixed. Continuous variable values were generated from linear or nonlinear function systems. For mixed data, only a linear system was used, the probability distribution of the error variables of $X_{1}$ and $X_{7}$ was a Bernoulli distribution, and the probability distributions of the other error variables were uniform or Gaussian. The model, sample size, discretization method, and the number of discretization bins are presented in Table I.

Figure 4: Causal graph used in the artificial data experiments

For the evaluation, we calculated the mean absolute error (MAE) with respect to the true Nesuf. We used $\overline{\mathrm{MAE}}$, the average over trials of the MAE

$\mathrm{MAE}=\frac{1}{N}\sum_{i=1}^{N}\left|\mathrm{maxNesuf}_{X_{i},\mathrm{true}}-\mathrm{maxNesuf}_{X_{i},\mathrm{est}}\right|$   (9)

computed over the seven variables $X_{1},\ldots,X_{7}$, where $N=7$ is the number of variables, $\mathrm{maxNesuf}_{X_{i},\mathrm{true}}$ is the true value of $\mathrm{maxNesuf}(X_{i})$, and $\mathrm{maxNesuf}_{X_{i},\mathrm{est}}$ is the estimated value of $\mathrm{maxNesuf}(X_{i})$. The standard error of $\mathrm{maxNesuf}(X_{i})$ was also evaluated.

Here, let $\mathbf{a}=\{a_{1},\ldots,a_{n}\}$ and $\mathbf{b}=\{b_{1},\ldots,b_{n}\}$ denote the ranks of the true Nesuf and the estimated Nesuf for the $n$ variables, respectively. Defining $\mathbf{d}=\{d_{1},\ldots,d_{n}\}$ by $d_{i}=a_{i}-b_{i}$, we can calculate Spearman's rank correlation coefficient [13] as

$\mathrm{SPR}=1-\frac{6\sum_{i=1}^{n}d_{i}^{2}}{n(n^{2}-1)}$,   (10)

where $\mathrm{SPR}$ takes values in $-1\leq\mathrm{SPR}\leq 1$. The closer it is to 1, the better the order of the estimated Nesuf matches the true order of feature importance; the closer it is to $-1$, the more the order relationship is reversed. In this experiment, the evaluation was based on $\overline{\mathrm{MAE}}$, its standard error, and $\overline{\mathrm{SPR}}$ averaged over 100 trials.
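A minimal sketch of this evaluation, assuming the true and estimated maxNesuf values of the seven variables are available as arrays, is shown below; scipy's spearmanr yields the same quantity as Equation (10) when there are no ties.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_trial(true_scores, est_scores):
    """MAE of Eq. (9) over the variables and Spearman's rank correlation of Eq. (10)."""
    true_scores = np.asarray(true_scores)
    est_scores = np.asarray(est_scores)
    mae = np.mean(np.abs(true_scores - est_scores))
    spr, _ = spearmanr(true_scores, est_scores)
    return mae, spr

# Illustrative values for maxNesuf(X_1), ..., maxNesuf(X_7) in one trial
true_scores = [0.42, 0.10, 0.75, 0.05, 0.33, 0.60, 0.01]
est_scores = [0.40, 0.12, 0.70, 0.07, 0.30, 0.55, 0.02]
mae, spr = evaluate_trial(true_scores, est_scores)
# MAE-bar and SPR-bar are then the averages of these quantities over the 100 trials.
```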

Next, for causal discovery, we used the following six methods, each tailored to specific functional forms. We implemented the causal discovery algorithms in Python, using the causal-learn package [14] for PC, the LiNGAM (linear non-Gaussian acyclic model) package [15] for DirectLiNGAM, RESIT, and LiM (linear mixed), and the CausalNex package [16] for NOTEARS and NOTEARS-MLP.

  • PC [17]: The PC algorithm is a causal discovery algorithm based on conditional independence. The estimation algorithm first removes undirected edges by testing conditional independence. The direction of the causal relationships is then determined by applying v-structures [12] and the orientation rules [18] to the remaining undirected skeleton. When PC orients undirected edges, some directions may remain undetermined, and the graph may be estimated as a partially oriented graph. In that case, our evaluation was based on the fully directed graph consistent with the partially oriented graph that achieved the maximum (PC_Max) or minimum (PC_Min) Spearman's rank correlation coefficient. We used the Fisher-z test for the conditional independence tests.

  • DirectLiNGAM [19]: DirectLiNGAM is a causal discovery algorithm that expresses causal relationships using a linear structural equation model called LiNGAM [20], which assumes that the graph is acyclic and that the probability distributions of the error terms are non-Gaussian. DirectLiNGAM estimates the causal structure by repeated regressions and evaluation of the independence between each variable and the regression residuals.

  • RESIT [21]: RESIT is a causal discovery algorithm used when the causal structure is nonlinear and the error variables are additive (Additive Noise Model: ANM). Similar to DirectLiNGAM, the algorithm estimates the causal direction by evaluating the independence of each variable and the residual of nonlinear regressions.

  • LiM [22]: The LiM causal discovery algorithm extends LiNGAM to handle mixed data comprising both continuous and discrete variables. The estimation is performed by first globally optimizing the log-likelihood function on the joint distribution of the data under the acyclicity constraint and then applying a local combinatorial search to output a causal graph.

  • NOTEARS [23]: NOTEARS reformulates DAG structure learning as a continuous optimization problem over real matrices, avoiding combinatorial acyclicity constraints. It introduces a smooth characterization of acyclicity as an equality constraint $h(W)=0$ on the weighted adjacency matrix $W$. This equality-constrained program is solved using augmented Lagrangian methods and numerical optimizers. The method assumes that the error terms have equal variance. Empirically, it outperforms state-of-the-art methods, especially in the linear case.

  • NOTEARS-MLP [24]: NOTEARS-MLP is a DAG structure learning algorithm that extends NOTEARS to handle nonlinear functional relationships using a multilayer perceptron (MLP) with hidden layers and sigmoid activations. When the causal structure is an additive noise model, the model is identifiable under the assumption that the nonlinear functions are three times differentiable and not linear in any of their arguments.

The following estimates were based on either prior information (a), prior information (b), or no prior information, denoted by (a), (b), and (0), respectively. PC and DirectLiNGAM can incorporate prior information (a) and (b), whereas RESIT, LiM, NOTEARS, and NOTEARS-MLP cannot incorporate prior information (b). In addition, we evaluated the case in which no causal discovery was performed and no causal graph was used (No graph). Table IV presents the experimental settings used in the numerical experiment, and a minimal sketch of supplying the proposed prior information to DirectLiNGAM follows it.

TABLE IV: Experimental settings for artificial data
Data: {continuous, mixed}
Function: {linear, nonlinear}
Probability distribution of error variables: {uniform, Gaussian}
Sample size: 5000
Discretization method: equal-width discretization
Number of discretization bins: 10
Model: CatBoost
Prior knowledge: {(0), (a), (b), No graph}
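Before turning to the results, the sketch below illustrates one way the prior information could be supplied to DirectLiNGAM with the lingam package, as referenced above; the placeholder data, the column ordering with the target last, and the use of make_prior_knowledge are our assumptions rather than the paper's exact implementation.

```python
import numpy as np
import lingam
from lingam.utils import make_prior_knowledge

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 7))                        # placeholder explanatory variables
y = X @ rng.uniform(size=7) + rng.uniform(size=5000)   # placeholder target

# Prior information (a): discover the graph among the explanatory variables only,
# then treat the target as a direct child of every explanatory variable.
model_a = lingam.DirectLiNGAM()
model_a.fit(X)
B_expl = model_a.adjacency_matrix_                     # (7, 7) graph over the features

# Prior information (b): include the target (last column, index 7) and
# constrain it to be a sink variable that causes no other variable.
data_all = np.column_stack([X, y])
pk = make_prior_knowledge(n_variables=8, sink_variables=[7])
model_b = lingam.DirectLiNGAM(prior_knowledge=pk)
model_b.fit(data_all)
B_all = model_b.adjacency_matrix_                      # (8, 8) graph including the target
```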

IV-B Results of experiment

The experimental results, $\overline{\mathrm{MAE}}$ ± standard error and $\overline{\mathrm{SPR}}$, are presented in Tables V–X. First, regarding the linear causal structure in Tables V and VI, when the causal discovery matched the true graph, $\overline{\mathrm{MAE}}$ approached 0 and $\overline{\mathrm{SPR}}$ approached 1. DirectLiNGAM assumes that the functional relationships are linear and that the probability distributions of the error terms are non-Gaussian; when these assumptions were satisfied, $\overline{\mathrm{MAE}}$ was smaller and $\overline{\mathrm{SPR}}$ was greater than in the No graph case (Table V). As shown in Table VI, when the error distribution was Gaussian, $\overline{\mathrm{MAE}}$ was large and $\overline{\mathrm{SPR}}$ was small for DirectLiNGAM with no prior information, as in the No graph case. However, these scores improved with prior information (a) and (b). Furthermore, the PC and NOTEARS algorithms consistently achieved high accuracy for both distributions. In all cases with a linear structure, we could estimate the true value and order of the Nesuf scores better than in the No graph case.

TABLE V: Results of linear and uniform distribution
Method and prior information $\overline{\mathrm{MAE}}$ $\overline{\mathrm{SPR}}$
DirectLiNGAM (0) 0.0032 ± 0.0197 0.9889
DirectLiNGAM (a) 0.1700 ± 0.0023 0.6140
DirectLiNGAM (b) 0.0032 ± 0.0197 0.9889
PC_Max (0) 0.0633 ± 0.0595 0.9007
PC_Min (0) 0.0953 ± 0.1127 0.8507
PC_Max (a) 0.1709 ± 0.0030 0.6117
PC_Min (a) 0.1709 ± 0.0030 0.6117
PC_Max (b) 0.0092 ± 0.0249 0.9817
PC_Min (b) 0.0703 ± 0.0656 0.9082
NOTEARS (0) 0.1708 ± 0.0030 0.6117
NOTEARS (a) 0.1708 ± 0.0030 0.6117
No graph 0.1709 ± 0.0030 0.6117

Values presented in bold indicate the best results.

TABLE VI: Results of linear and Gaussian distribution
Method and prior information $\overline{\mathrm{MAE}}$ $\overline{\mathrm{SPR}}$
DirectLiNGAM (0) 0.3605 ± 0.1290 0.3307
DirectLiNGAM (a) 0.1910 ± 0.0100 0.5920
DirectLiNGAM (b) 0.2846 ± 0.1080 0.7403
PC_Max (0) 0.0981 ± 0.0461 0.8796
PC_Min (0) 0.1186 ± 0.0920 0.8492
PC_Max (a) 0.1912 ± 0.0097 0.5550
PC_Min (a) 0.1912 ± 0.0097 0.5550
PC_Max (b) 0.0088 ± 0.0287 0.9764
PC_Min (b) 0.1013 ± 0.0584 0.8971
NOTEARS (0) 0.4162 ± 0.0804 0.6739
NOTEARS (a) 0.1913 ± 0.0095 0.5550
No graph 0.1912 ± 0.0097 0.5550

Tables VII and VIII present the results in the case of a nonlinear causal structure. When the error terms followed a uniform distribution, RESIT with no prior information and PC with prior information yielded smaller $\overline{\mathrm{MAE}}$ values and greater $\overline{\mathrm{SPR}}$ values than No graph (Table VII). For the Gaussian error distribution (Table VIII), NOTEARS-MLP with prior information had the highest performance. However, in this setting, the performance of all methods was lower than in the other settings. The reason may be that monotonicity, another assumption of LEWIS, is not satisfied in the nonlinear cases.

TABLE VII: Results of nonlinear and uniform distribution
Method and prior information $\overline{\mathrm{MAE}}$ $\overline{\mathrm{SPR}}$
RESIT (0) 0.0250 ± 0.0190 0.8114
RESIT (a) 0.0334 ± 0.0059 0.7285
PC_Max (0) 0.0398 ± 0.0273 0.5540
PC_Min (0) 0.0398 ± 0.0273 0.5540
PC_Max (a) 0.0337 ± 0.0064 0.7781
PC_Min (a) 0.0337 ± 0.0064 0.7781
PC_Max (b) 0.0183 ± 0.0226 0.7781
PC_Min (b) 0.0183 ± 0.0226 0.7781
NOTEARS-MLP (0) 0.0521 ± 0.032 0.6053
NOTEARS-MLP (a) 0.0337 ± 0.0063 0.7067
No graph 0.0337 ± 0.0064 0.7067
TABLE VIII: Results of nonlinear and Gaussian distribution
Method and prior information $\overline{\mathrm{MAE}}$ $\overline{\mathrm{SPR}}$
RESIT (0) 0.0292 ± 0.0115 0.3296
RESIT (a) 0.0377 ± 0.0128 0.3235
PC_Max (0) 0.0323 ± 0.0177 0.1853
PC_Min (0) 0.0325 ± 0.0179 0.1810
PC_Max (a) 0.0372 ± 0.0128 0.3317
PC_Min (a) 0.0372 ± 0.0128 0.3317
PC_Max (b) 0.0325 ± 0.0179 0.1810
PC_Min (b) 0.0325 ± 0.0179 0.1810
NOTEARS-MLP (0) 0.0575 ± 0.0098 0.3364
NOTEARS-MLP (a) 0.0382 ± 0.0130 0.3857
No graph 0.0372 ± 0.0128 0.3317

Finally, Tables IX and X present the results for the mixed data. In Table IX, the assumptions of LiM were satisfied; thus, $\overline{\mathrm{MAE}}$ was smaller and $\overline{\mathrm{SPR}}$ was larger than when no causal discovery was performed. In Table X, the LiM assumption that the error terms are non-Gaussian was violated, and the scores were comparable to those in the No graph case. Even for linear mixed data, the true Nesuf can be partially recovered using the proposed prior information on the causal structure.

TABLE IX: Results of linear and uniform distribution in mix data
Method and prior information $\overline{\mathrm{MAE}}$ $\overline{\mathrm{SPR}}$
LiM (0) 0.1234 ± 0.0287 0.6571
LiM (a) 0.1380 ± 0.0195 0.6892
No graph 0.1741 ± 0.0145 0.4292
TABLE X: Results of linear and Gaussian distribution in mix data
Method and prior information $\overline{\mathrm{MAE}}$ $\overline{\mathrm{SPR}}$
LiM (0) 0.1726 ± 0.0184 0.4350
LiM (a) 0.1540 ± 0.0234 0.4635
No graph 0.1546 ± 0.0028 0.4603

These results confirm that prior information (a) performs at least as well as, and often better than, using no causal graph. Furthermore, prior information (0) and (b) provide relatively high performance compared with the No graph case. On the other hand, depending on the nature of the data and the causal discovery method, performance may be worse than in the No graph case, so it is necessary to consider multiple causal discovery methods.

V Application to real data

In this section, we demonstrate the effectiveness of this framework for the case where the causal graph is unknown by applying our method to real data.

V-A Dataset and preprocessing

We applied our method to the anonymized credit rating data of 14,018 business customers provided by the Shiga Bank, Ltd. A credit rating is assigned by a bank to a debtor based on the analysis of its financial statements [25]. Although there were several grades in the credit rating, we simplified the grades to high (excellent) and low (poor), which allowed the task to be modeled as a binary classification problem. We used the industry type, amount of capital stock, number of employees, most recent annual sales, and total liabilities and equity as the explanatory variables. We performed equal-frequency discretization with 10 bins because the capital stock, number of employees, most recent annual sales, and total liabilities and equity had highly skewed distributions. These discretized variables are ordinal measures with a sufficient number of levels and can be regarded as continuous variables. As a machine learning model, we used Random Forest, an ensemble learning algorithm that performs classification by combining the results of multiple decision trees constructed from randomly selected training data and explanatory variables [26]. Random Forest was expected to have sufficient predictive accuracy for the real data in this study.
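A sketch of this preprocessing and modeling step is given below; the column names, the use of pandas qcut for equal-frequency binning, the placeholder data, and the default Random Forest settings are illustrative assumptions rather than the exact pipeline used with the bank data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for the anonymized bank records
rng = np.random.default_rng(0)
n = 14018
df = pd.DataFrame({
    "industry_type": rng.integers(0, 5, n),
    "capital_stock": rng.lognormal(10, 1, n),     # skewed, as in the real data
    "num_employees": rng.lognormal(3, 1, n),
    "annual_sales": rng.lognormal(12, 1, n),
    "total_liab_equity": rng.lognormal(12, 1, n),
    "rating_high": rng.integers(0, 2, n),         # 1: high (excellent), 0: low (poor)
})

# Equal-frequency discretization (10 bins) for the skewed continuous variables
skewed = ["capital_stock", "num_employees", "annual_sales", "total_liab_equity"]
for col in skewed:
    df[col] = pd.qcut(df[col], q=10, labels=False, duplicates="drop")

features = ["industry_type"] + skewed
clf = RandomForestClassifier(random_state=0)
clf.fit(df[features], df["rating_high"])
df["pred"] = clf.predict(df[features])  # predicted ratings to be explained
```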

Although the data were mixed, with only the industry type being a discrete variable and the others regarded as continuous variables, the industry type cannot be caused by the other explanatory variables. Thus, we used DirectLiNGAM for the continuous variables, assuming that the industry type was an exogenous discrete variable that affected all other variables. In this case, DirectLiNGAM can handle the discrete variable similarly to conventional structural equation modeling [27]. Because we aimed at a quantitative analysis of all explanatory variables, we conducted our analysis assuming that the target variable has a direct parent-child relationship with all explanatory variables in the causal structure, that is, prior information (a).

V-B Analysis results and discussion

The black lines in Figure 5 show the results of DirectLiNGAM for the entire dataset. The interpretation of the estimated causal directions is as follows. The causal direction from capital stock to total liabilities and equity was consistent with domain knowledge because the total liabilities and equity are the sum of debt and equity on the balance sheet. In addition, the causal direction from capital stock to sales was consistent with the domain knowledge that companies conduct business activities based on their capital and that this influences sales. These variables can affect the credit rating on the basis of the prior information on the causal structure in Figure 5 (red lines).

Figure 5: Causal graph estimated by DirectLiNGAM (black lines) and prior information on the causal structure (red lines).

Next, the Nesuf scores estimated using the causal graph and with No graph are shown in Figure 6. The importance ranking of these variables obtained by LEWIS can be explained as follows using the causal graph shown in Figure 5. The industry type was an exogenous variable that affected the other four variables in the causal graph. If the industry type were changed, the values of the other four variables would also change, which is likely to affect the final predicted value. Thus, its importance in LEWIS is the greatest. Similarly, changing the value of capital stock changes the values of sales and total liabilities and equity, and changing total liabilities and equity changes the value of sales. The number of employees and sales were less important because changing their values does not change the values of other variables. However, the Nesuf scores estimated with No graph were smaller, and that of the capital stock in particular was significantly smaller, which contradicted domain knowledge.

Figure 6: Nesuf estimated from the causal graph and with No graph

The LEWIS reversal probability scores are shown in Figure 7. Overall, the Suf scores (orange) were high, indicating a high probability that increasing the value of a variable would change a company from a low-rated one to a high-rated one. Changing the industry type would also significantly affect the predicted value. Reversal probabilities can assist decision-making about which variables are likely to change the rating of a low-rated company and can quantify the company's strengths and weaknesses based on each score.

Figure 7: Reversal probability scores estimated from the causal graph. Nec (blue) is the probability that the prediction would change by lowering the value of that variable for a company whose rating is predicted to be high. Suf (orange) is the probability that the prediction would change by increasing the value of the variable for a company whose rating is predicted to be low.

VI Conclusion

This study proposed a new causal XAI framework that combines causal structure information and causal discovery without full knowledge of the causal graph. We analyzed the global explanation scores obtained from counterfactual explanations based on the causal structure and proposed useful prior information on the causal structure. Numerical experiments demonstrated that the global explanatory scores and the order of the true feature importance can be estimated even if the causal graph is not fully known. By applying our method to real data, we demonstrated the usefulness of the proposed framework even when the causal graph is unknown.

As an extension of LEWIS, a method for multi-class classification was proposed [10]. However, whether the proposed prior information on the causal structure is valid for multi-class classification remains an open question.

Acknowledgment

We would like to express our gratitude to Shiga Bank, Ltd. for their valuable comments on this study and for providing us with the data used. This work was partially supported by JSPS KAKENHI 20K11708 and JST CREST JPMJCR22D2.

References

  • [1] Y. LeCun, Y. Bengio and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  • [2] X. Dong, Z. Yu, W. Cao, Y. Shi and Q. Ma, “A survey on ensemble learning,” Front. Comput. Sci., vol. 14, pp. 241–258, 2020.
  • [3] A. Das and P. Rad, “Opportunities and challenges in explainable artificial intelligence (XAI): A Survey,” arXiv preprint arXiv:2006.11371, 2020.
  • [4] D. Gunning and D. Aha, “DARPA’s explainable artificial intelligence (XAI) program,” AI Mag., vol. 40, no. 2, pp. 44–58, 2019.
  • [5] S. Beckers, “Causal explanations and XAI,” Proceedings of the First Conference on Causal Learning and Reasoning, Proc. Mach. Learn. Res., vol. 140, pp. 1–20, 2022.
  • [6] S. M. Lundberg, G. Erion, H. Chen, A. DeGrave, J. M. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal and S. Lee, “From local explanations to global understanding with explainable AI for trees,” Nat. Mach. Intell., vol. 2, pp. 56–67, 2020.
  • [7] S. M. Lundberg and S. Lee, “A unified approach to interpreting model predictions,” NIPS’17: 31st Conference on Neural Information Processing System, pp. 4768–4777, 2017.
  • [8] T. Heskes, E. Sijben, I. G. Bucur and T. Claassen, “Causal Shapley values: exploiting causal knowledge to explain individual predictions of complex models,” NeurIPS 2020: 34th Conference on Neural Information Processing System, pp. 1–12, 2020.
  • [9] J. Kaddour, A. Lynch, Q. Liu, M. J. Kusner and R. Silva, “Causal machine learning: a survey and open problems,” arXiv preprint arXiv:2206.15475, 2022.
  • [10] S. Galhotra, R. Pradhan and B. Salimi, “Explaining black-box algorithms using probabilistic contrastive counterfactuals,” Proceedings of the 2021 International Conference on Management of Data, pp. 577–590, 2021.
  • [11] P. Spirtes, C. Glymour and R. Scheines, Causation, Prediction, and Search, 2nd ed., Massachusetts: MIT Press, 2001.
  • [12] J. Pearl, Causality: Models, reasoning, and inference, 2nd ed., Cambridge: Cambridge University Press, 2009.
  • [13] C. Spearman, “The proof and measurement of association between two things,” Am. J. Psychol., vol. 15, no. 1, pp. 72–101, 1904.
  • [14] Y. Zheng, B. Huang, W. Chen, J. Ramsey, M. Gong, R. Cai, S. Shimizu, P. Spirtes and K. Zhang, “Causal-learn: Causal discovery in Python,” arXiv preprint arXiv:2307.16405, 2023.
  • [15] T. Ikeuchi, M. Ide, Y. Zeng, T. N. Maeda and S. Shimizu. “Python package for causal discovery based on LiNGAM,” J. Mach. Learn. Res., vol. 24, no. 14, pp. 1–8, 2023.
  • [16] P. Beaumont, B. Horsburgh, P. Pilgerstorfer, A. Droth, R. Oentaryo, S. Ler, H. Nguyen, G. A. Ferreira, Z. Patel and W. Leong, “CausalNex [Computer software],” 2021.
  • [17] P. Spirtes and C. N. Glymour, “An algorithm for fast recovery of sparse causal graphs,” Social Sci Comput Rev, vol. 9, no. 1, pp. 62–72, 1990.
  • [18] T. S. Verma and J. Pearl, “An algorithm for deciding if a set of observed independencies has a causal explanation”, Uncertainty in Artificial Intelligence, pp. 323–330, 1992.
  • [19] S. Shimizu, T. Inazumi, Y. Sogawa, A. Hyvärinen, Y. Kawahara, T. Washio, P. O. Hoyer and K. Bollen, “DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model,” J. Mach. Learn. Res., vol. 12, pp. 1225–1248, 2011.
  • [20] S. Shimizu, P. O. Hoyer, A. Hyvärinen and A. Kerminen, “A linear non-Gaussian acyclic model for causal discovery,” J. Mach. Learn. Res., vol. 7, no. 72, pp. 2003–2030, 2006.
  • [21] J. Peters, J. Mooij, D. Janzing and B. Schölkopf, “Causal discovery with continuous additive noise models,” J. Mach. Learn. Res., vol. 15, pp. 2009–2053, 2014.
  • [22] Y. Zeng, S. Shimizu, H. Matsui and F. Sun, “Causal discovery for linear mixed data,” Proc. First Conf. Causal Learn. Reason., vol. 177, pp. 994–1009, 2022.
  • [23] X. Zheng, B. Aragam, P. Ravikumar, and E. P. Xing, “DAGs with NO TEARS: continuous optimization for structure learning,” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 9492–9503, 2018.
  • [24] X. Zheng, C. Dan, B. Aragam, P. Ravikumar, and E. Xing. “Learning sparse nonparametric DAGs,” In International Conference on Artificial Intelligence and Statistics, pp. 3414–3425, 2020.
  • [25] Financial Services Agency, “Financial Inspection Manual (Inspection Manual for Financial Institutions Accepting Deposits, etc.),” https://www.fsa.go.jp/manual/manualj/manual_yokin/14.pdf (Accessed on 01/05/2024). [In Japanese]
  • [26] L. Breiman, “Random Forests,” Mach. Learn., vol. 45, no. 1, pp. 5–32, 2001.
  • [27] B. O. Muthén, “Beyond SEM: general latent variable modeling,” Behaviormetrika, vol. 29, pp. 81–117, 2002.