
MANIFOLD-BASED SHAPLEY FOR SAR RECOGNITION NETWORK EXPLANATION

Abstract

Explainable artificial intelligence (XAI) holds immense significance in enhancing the transparency and credibility of deep neural networks, particularly in risky and high-cost scenarios such as synthetic aperture radar (SAR). Shapley is a game-based explanation technique with robust mathematical foundations. However, Shapley assumes that the model's features are independent, rendering Shapley explanations invalid for high-dimensional models. This study introduces a manifold-based Shapley method that projects high-dimensional features into low-dimensional manifold features and subsequently obtains Fusion-Shap, which aims at (1) addressing the erroneous explanations produced by traditional Shap and (2) resolving the challenge of interpretability that traditional Shap faces in SAR recognition tasks.

Index Terms—  synthetic aperture radar, explainable artificial intelligence, Shapley explanation

1 Introduction

Synthetic aperture radar (SAR) is widely utilized in earth observation, electronic reconnaissance, and other fields due to its day-and-night and all-weather imaging capability. With their extraordinary ability for feature representation, deep neural networks (DNNs) are widely used in various SAR tasks, such as object detection/localization and target identification. However, due to the black-box nature of DNNs, it is difficult for humans to understand a DNN's decision-making logic. Moreover, [1] demonstrates that there are significant differences in DNNs' decision-making processes between optical images and SAR images. These issues can cause uncertainty in decision-making and conceal the vulnerability of DNNs, particularly in critical domains such as military target reconnaissance. Therefore, network explanation is significant for evaluating system reliability and robustness.

The Shapley method [2, 3] is a commonly used network explanation technique that transforms the network explanation problem into an optimal allocation problem over the network's confidence. Specifically, the saliency map is obtained by calculating the marginal contribution of each input pixel to the network's confidence. Shapley is a typical local, post-hoc, model-agnostic explanation that can explain any model. However, the Shapley method is usually based on the assumption that the model's features are independent. This assumption is generally valid in low-dimensional models, but it often yields incorrect explanations when the network has high-dimensional features. When calculating the marginal contribution of a feature coalition, the Shapley method often produces feature coalitions that do not conform to the data manifold, rendering the calculated Shapley values without practical significance.
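
For intuition, the allocation rule can be written out exactly for a tiny model. The sketch below enumerates all coalitions of three features; the model, input, and baseline are made-up examples, not from the paper.

```python
# Toy illustration of the Shapley allocation described above: exact Shapley values
# for a hypothetical 3-feature model, computed by enumerating every coalition.
from itertools import combinations
from math import factorial

features = [1.0, 2.0, 3.0]      # hypothetical input to be explained
baseline = [0.0, 0.0, 0.0]      # "absent" features are replaced by a baseline value

def model(x):                   # made-up model with an interaction between x0 and x2
    return x[0] + 2 * x[1] + x[0] * x[2]

def value(coalition):           # v(S): only features in S keep their true value
    x = [features[i] if i in coalition else baseline[i] for i in range(len(features))]
    return model(x)

def shapley(p, n=3):
    total = 0.0
    others = [i for i in range(n) if i != p]
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (value(set(S) | {p}) - value(set(S)))
    return total

# Shapley values (phi_0, phi_1, phi_2) come out to approximately (2.5, 4.0, 1.5);
# they sum to model(features) - model(baseline) = 8.
print([shapley(p) for p in range(3)])
```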

Researchers have begun to address this problem [4]. [5] obtains more credible explanations by analyzing the data distribution, [6] tackles the feature correlation problem through gradient methods, and [7] attempts to extend the kernel method in Shapley to obtain more accurate explanation estimates. [8] proposed a Shapley method that respects the manifold distribution and provided two ways (unsupervised and supervised) to obtain interpretation estimates. Other efforts also aim to solve the manifold problem of explanation [9, 10]. However, these methods are challenging to extend to high-dimensional data. With the development of generative networks, it has become possible to obtain reliable data manifolds. This study utilizes generative adversarial networks (GAN) [11] as a manifold deduction method and obtains Fusion-Shap by combining traditional and manifold Shapley. The primary contributions of this paper are as follows: (1) We propose a novel explanation method called Fusion-Shap, which combines manifold and traditional Shap to obtain reliable network explanations in SAR tasks. (2) We propose a manifold-based Shapley method that obtains a reliable manifold distribution through advanced generative networks.

Fig. 1: UMAP [12] visualization of low-dimensional manifolds and high-dimensional features.

2 Methodology

As discussed in Section 1, the traditional Shapley method assumes that features are independent, which frequently does not hold. This assumption can result in misleading interpretations of high-dimensional features and consequently yield inaccurate network explanations. We employ StyleGAN to transform high-dimensional features into a manifold to address this issue. This section details the implementation of Fusion-Shap.

2.1 Calculation of data manifold

We denote the network as $f$, the sample to be explained as $I\in\mathbb{R}^{C\times W\times H}$, and the data manifold as $U\in\mathbb{R}^{1\times L}$. We leverage the StyleGAN2 [13] framework to train the generator, denoted as $G$, which learns the underlying manifold structure and decodes the mapping, resulting in $I=G(U)$. Subsequently, we train the reconstructor, denoted as $R$, using Image2StyleGAN [14] to obtain the encoding mapping: $U=R(I)$.

Now, we have successfully established a mutual mapping relationship between high-dimensional data and low-dimensional manifolds, denoted as $I\Leftrightarrow U$.
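
As a concrete illustration, the round-trip mapping can be wrapped in two small helpers. This is a minimal sketch assuming a trained StyleGAN2 generator `G` (code to image) and an Image2StyleGAN-style encoder `R` (image to code) are already available as PyTorch modules; the names and tensor shapes are illustrative, not a released API.

```python
import torch

def to_manifold(R: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Encode a SAR chip I of shape (C, W, H) into its manifold code U = R(I)."""
    return R(image.unsqueeze(0))            # shape (1, L)

def to_image(G: torch.nn.Module, code: torch.Tensor) -> torch.Tensor:
    """Decode a manifold code U of shape (1, L) back to image space, I = G(U)."""
    return G(code).squeeze(0)               # shape (C, W, H)

# Round-trip consistency check: I is approximately G(R(I)) when the encoder is well trained.
# reconstruction = to_image(G, to_manifold(R, sar_image))
```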

Fig. 2: Left: implementation of Fusion-Shap. Right: the black-box model under interpretation ($F$), the StyleGAN generator ($G$), and Image2StyleGAN ($R$), which enable the transformation between high-dimensional features and low-dimensional manifolds.

2.2 Manifold-based Shapley

In Section 2.1, we mapped the high-dimensional feature to the low-dimensional manifold. This mapping allows us to compute the Shapley value directly on the manifold. For a manifold comprising $L$ features, the Shapley formula can be expressed as follows:

\phi_{v}(p)=\sum_{S_{U}\subseteq L\backslash\{p\}}\frac{\left|S_{U}\right|!\left(l-\left|S_{U}\right|-1\right)!}{l!}\left[v\left(S_{U}\cup\{p\}\right)-v\left(S_{U}\right)\right]. \qquad (1)

In this equation, $S_{U}$ represents a subset of the feature set $L=\{1,2,\ldots,l\}$, while $S_{U}\subseteq L\backslash\{p\}$ indicates that $S_{U}$ is a subset of $L$ that does not include feature $p$. The symbol $v$ denotes the value function. For a specific sample denoted as $u^{\prime}$, the function $v$ can be expressed as:

v\left(S_{U}\right)=E\left[f(g(U))\mid U_{S}=U_{S}^{\prime}\right]. \qquad (2)
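
To make the computation concrete, the following is a minimal Monte-Carlo sketch of Eqs. (1)-(2) on the manifold. It assumes `f` returns the confidence of the explained class, `G` is the trained generator, `u_prime` is the manifold code of the sample, and `background` is a batch of reference manifold codes used to marginalize absent features; these names and the permutation-sampling estimator are illustrative choices, not the paper's exact implementation.

```python
import torch

def manifold_shapley(f, G, u_prime, background, n_perm=200):
    """Monte-Carlo permutation estimate of phi_v(p) for every manifold feature p."""
    u_prime = u_prime.flatten()                         # manifold code, shape (L,)
    L = u_prime.numel()
    phi = torch.zeros(L)

    def value(mask):
        # v(S_U): fix the coalition to u', marginalize the rest with background codes
        u = background.clone()                          # reference codes, shape (B, L)
        u[:, mask] = u_prime[mask]
        return f(G(u)).mean()

    with torch.no_grad():
        for _ in range(n_perm):
            order = torch.randperm(L)
            mask = torch.zeros(L, dtype=torch.bool)
            v_prev = value(mask)
            for p in order:
                mask[p] = True
                v_curr = value(mask)
                phi[p] += v_curr - v_prev               # marginal contribution of feature p
                v_prev = v_curr
    return phi / n_perm
```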

2.3 Shapley mapping

We have established a mapping between the high-dimensional feature $I$ and the low-dimensional manifold $U$. However, the Shapley value represents feature importance, and the challenge now lies in determining how to map this importance effectively.

We adopt a gradient-based approach for this importance mapping. First, define $U_{p}$ as the element at position index $p$ of the manifold. Intuitively, slight perturbations in $U_{p}$ will induce changes in the high-dimensional features represented by $I$. We define this mapping as $\gamma$, the unit-importance mapping, which can be expressed mathematically as follows:

\phi_{v}\left(I^{p}\right)=\gamma\left[\phi_{v}\left(U_{p}\right)\right]. \qquad (3)

In this equation, $I$ represents an image, i.e., a coalition of pixels. Formula (3) can be interpreted as mapping the Shapley value from the low-dimensional manifold to the high-dimensional feature space for each feature:

\phi_{v}\left(I_{c,w,h}^{p}\right)=K\cdot\varphi_{c,w,h}^{p}\cdot\phi_{v}\left(U_{p}\right)=K\cdot\frac{\partial I_{c,w,h}}{\partial U_{p}}\cdot\phi_{v}\left(U_{p}\right). \qquad (4)

To preserve the Shapley characteristics within the Shapley methods, we express the Shapley value $\phi_{v}\left(U_{p}\right)$ of $U_{p}$ as a weighted sum of the individual Shapley values $\phi_{v}\left(I_{c,w,h}^{p}\right)$ within the high-dimensional feature space $I$:

\phi_{v}\left(U_{p}\right)=\sum_{c}\sum_{w}\sum_{h}\omega_{c,w,h}^{p}\cdot\phi_{v}\left(I_{c,w,h}^{p}\right). \qquad (5)

Combining Formulas (4) and (5):

\phi_{v}\left(I_{c,w,h}^{p}\right)=\frac{\partial I_{c,w,h}}{\partial U_{p}}\cdot\frac{\phi_{v}\left(U_{p}\right)}{C\times W\times H}. \qquad (6)

For all elements in the manifold $U$:

\phi_{v}^{M}\left(I_{c,w,h}\right)=\sum_{p}\phi_{v}\left(I_{c,w,h}^{p}\right)=K\sum_{p}\frac{\partial I_{c,w,h}}{\partial U_{p}}\cdot\phi_{v}\left(U_{p}\right). \qquad (7)

Now, we have obtained the saliency map denoted as $M_{\text{manifold}}=\phi_{v}^{M}\left(I_{c,w,h}\right)$.
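
A compact sketch of Eqs. (6)-(7): the generator's Jacobian $\partial I/\partial U_{p}$ redistributes each manifold Shapley value over pixels. Here `G`, `u_prime`, and `phi_u` (the manifold Shapley values from the previous step) are assumed inputs, and using torch.autograd.functional.jacobian is one possible way to obtain the derivatives; this is a sketch, not the authors' released code.

```python
import torch

def manifold_saliency(G, u_prime, phi_u):
    """Map manifold Shapley values phi_u (shape (L,)) to an image-space map (C, W, H)."""
    u_prime = u_prime.flatten()
    image = G(u_prime.unsqueeze(0)).squeeze(0)                    # I = G(U), shape (C, W, H)
    # Jacobian dI_{c,w,h} / dU_p of the generator, shape (C, W, H, L)
    jac = torch.autograd.functional.jacobian(
        lambda z: G(z.unsqueeze(0)).squeeze(0), u_prime)
    n_pixels = image.numel()                                      # C x W x H, cf. Eq. (6)
    return (jac * (phi_u / n_pixels)).sum(dim=-1)                 # M_manifold = sum_p phi_v(I^p)
```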

2.4 Shapley Fusion

The manifold-based method successfully addresses the traditional Shapley method's neglect of feature interdependence. Nevertheless, the manifold dimension somewhat constrains this method, and altering the manifold dimension necessitates relearning the mapping relationship between high-dimensional features and the low-dimensional manifold, a process that incurs substantial computational costs. To tackle this problem, we propose a hybrid approach that reintegrates the feature-independent traditional Shapley method into the manifold Shapley method:

M_{fusion}=\alpha M_{manifold}+(1-\alpha)M_{traditional}. \qquad (8)

The fusion coefficient $\alpha\in[0,1]$ is determined with the objective of minimizing the average drop in confidence, which can be expressed mathematically as:

\operatorname{argmin}\sum_{t=1}^{C}\left\{\operatorname{hardmax}\left[f(I)_{t}\right]-f\left(M_{fusion}\otimes I\right)_{t}\right\}, \qquad (9)

where $\otimes$ stands for the Hadamard product. The index $t\in\{1,2,\ldots,C\}$ runs over the network's $C$ confidence categories, and $\operatorname{hardmax}\left[f(I)_{t}\right]$ is 1 for the Top-1 category assigned to the input $I$ and 0 otherwise. By solving (9) for the fusion coefficient $\alpha$ and substituting it into (8), we compute Fusion-Shap, denoted as $M_{fusion}$; the elements of this saliency map are the Fusion-Shapley values $\phi_{v}^{F}$.
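
The fusion coefficient can be found with a simple grid search. The sketch below interprets the objective in Eq. (9) as minimizing the Top-1 confidence drop after masking the input with the fused saliency map; the normalization of the map to [0, 1], the grid resolution, and all variable names are illustrative assumptions rather than the paper's exact procedure.

```python
import torch

def fuse_saliency(f, image, m_manifold, m_traditional, steps=21):
    """Grid-search alpha in [0, 1] and return the fused saliency map M_fusion."""
    with torch.no_grad():
        top1 = f(image.unsqueeze(0)).argmax(dim=1)          # Top-1 class of I
        best_alpha, best_drop = 0.0, float("inf")
        for alpha in torch.linspace(0.0, 1.0, steps):
            m = alpha * m_manifold + (1 - alpha) * m_traditional
            m = (m - m.min()) / (m.max() - m.min() + 1e-8)  # use the map as a soft mask in [0, 1]
            masked = f((m * image).unsqueeze(0))            # f(M_fusion ⊗ I)
            drop = 1.0 - masked[0, top1]                    # Top-1 confidence drop
            if drop.item() < best_drop:
                best_alpha, best_drop = float(alpha), drop.item()
        return best_alpha * m_manifold + (1 - best_alpha) * m_traditional
```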

3 Experimental Results

3.1 Experiment Settings

Dataset: This paper employs the MSTAR dataset.

Evaluation Metrics: We conducted qualitative and quantitative assessments of the explanation methods. First, drawing upon prior research [15], we ascertain whether these methods adhere to a robust mathematical foundation, explicitly addressing the three primary criteria of interpretability. Second, we introduce the quantitative metrics infidelity and sensitivity to mathematically evaluate the performance of explanation methods [16].

Fig. 3: Results visualization. Columns from left to right: original image, Grad-CAM, LRP, IG, SG, SHAP, and F-SHAP.

3.2 Visualization and Subjective Evaluation

In contrast to optical images and human intuitive perception, it is more difficult to understand DNNs' decision-making logic on SAR images due to the intricate nature of the SAR imaging mechanism. A SAR recognition network's decision-making is not solely contingent upon the target area; interference spots and shadow regions also hold significance in network decision-making. Figure 3 illustrates the visualization results of Grad-CAM [17], LRP [18], IG [19], SG [20], SHAP [3], and Fusion-SHAP. Table 1 shows the results of the subjective evaluation (explanation validity rated from lowest to highest on a 1-10 scale).

3.3 Axiomatic Validation

Taylor Interactions [15] analyzes various explanation methods as aggregations of individual effects $\varphi(k)$ and interaction effects $I(k)$, each governed by distinct allocation rules. It then introduces three criteria for assessing an attribution method's reliability: Low Approximation Error (LAE), No Unrelated Allocation (NUA), and Complete Allocation (CA). The results of the attribution validation are presented in Table 1.

Table 1: Axioms (LAE, NUA, CA), infidelity (INFD), sensitivity (SEN), and subjective evaluation (Sub.).

Method           LAE   NUA   CA      INFD      SEN       Sub.
Grad [21]                            0.0423    269.31    2.15
GradCAM [17]                         3.5e-6    2.4124    1.00
LRP [18]                             0.0552    48.120    5.10
I-Grad [19]                          0.0697    3.4425    2.90
S-Grad [20]                          0.0079    4.0212    4.15
Shapley [3]                          0.0009    3.4156    5.70
F-SHAP                               7.1e-5    2.1285    5.95

Fusion-Shap first computes the Shapley values for the high-dimensional features and the low-dimensional manifold; both of these components satisfy the three aforementioned criteria. The contribution of each feature within the manifold can be decomposed as follows:

\phi_{v}^{p}=\sum_{k\in\Omega_{p}}\varphi(k)+\sum_{\left|S_{U}\right|>1,\,p\in S_{U}}\sum_{k\in\Omega_{S}}\frac{1}{\left|S_{U}\right|}I(k). \qquad (10)

Fusion-Shap can be viewed as the redistribution of Shapley values through manifold information. Since $\sum\phi_{v}(i)=\sum\phi_{v}(p)$, this distribution still adheres to the Shapley value properties. To illustrate this more intuitively, the Shapley value of feature $U_{p}$ within the manifold $U$ is reassigned to the high-dimensional space $I$, corresponding to the cumulative Shapley values of multiple high-dimensional features. We reasonably hypothesize that the Shapley value of feature $U_{p}$ in the manifold $U$ is mapped to the high-dimensional space and allocated to two or more features. Thus, we can express Fusion-Shap in the form of Taylor interactions:

\begin{aligned}
\phi_{v}^{F}(i)={}& (1-\alpha)\phi_{v}(i)+\alpha\phi_{v}^{M}(i) \\
={}& (1-\alpha)\phi_{v}(i)+\alpha\sum_{p}K_{p}\phi_{v}(p) \\
={}& (1-\alpha)\left[\sum_{k\in\Omega_{i}}\varphi(k)+\sum_{|S|>1,\,i\in S}\sum_{k\in\Omega_{S}}\frac{1}{\left|S_{U}\right|}I(k)\right] \\
&+\alpha\sum_{p}K_{p}\left[\sum_{k^{\prime}\in\Omega_{p}}\varphi\left(k^{\prime}\right)+\sum_{p\in S_{U}}\sum_{k^{\prime}\in\Omega_{S}}\frac{1}{\left|S_{U}\right|}I\left(k^{\prime}\right)\right] \\
={}& (1-\alpha)\sum_{k\in\Omega_{i}}\varphi(k)+\sum_{i\in S^{\prime}}\sum_{k\in\Omega_{S}^{\prime}}\frac{1}{\left|S^{\prime}\right|}I(k) \\
&+(1-\alpha)\sum_{|S|>1,\,i\in S\backslash S^{\prime}}\sum_{k\in\Omega_{S}}\frac{1}{\left|S\backslash S^{\prime}\right|}I(k). \qquad (11)
\end{aligned}

This Taylor interaction formulation satisfies the three axioms, substantiating the effectiveness of Fusion-Shap.

Fig. 4: Infidelity and sensitivity across different manifold dimensions.

3.4 Explanation Sensitivity and Infidelity

We introduce infidelity and sensitivity [16] for evaluating explanation performance. For a given explanation method $\psi(f,I)$, we consider a meaningful perturbation $P$ with probability distribution $\mu_{P}$. The infidelity of $\psi$ can be defined as:

\operatorname{INFD}(f,I,\psi)=E_{P\sim\mu_{P}}\left[\left(P^{T}\psi(f,I)-\left(f(I)-f(I-P)\right)\right)^{2}\right], \qquad (12)

and the sensitivity index can be expressed as:

\operatorname{SEN}(f,I,\psi,r)=\max_{\left\|I^{\prime}-I\right\|\leq r}\left\|\psi\left(f,I^{\prime}\right)-\psi(f,I)\right\|, \qquad (13)

where the parameter $r$ denotes the radius of the neighborhood under consideration.
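
Both measures can be estimated by sampling, as in the sketch below. It assumes `explain(f, I)` returns a saliency map with the same shape as `I` and that `f` returns the explained-class confidence; the Gaussian perturbation, the L-infinity sampling of the neighborhood, and the sample counts are illustrative choices.

```python
import torch

def infidelity(f, explain, image, sigma=0.1, n_samples=50):
    """Monte-Carlo estimate of Eq. (12)."""
    expl = explain(f, image).flatten()
    total = 0.0
    for _ in range(n_samples):
        pert = sigma * torch.randn_like(image)                        # P ~ mu_P (Gaussian choice)
        lhs = (pert.flatten() * expl).sum()                           # P^T psi(f, I)
        rhs = f(image.unsqueeze(0)) - f((image - pert).unsqueeze(0))  # f(I) - f(I - P)
        total += float((lhs - rhs.squeeze()) ** 2)
    return total / n_samples

def sensitivity(f, explain, image, radius=0.05, n_samples=20):
    """Monte-Carlo estimate of Eq. (13) over a small neighborhood of radius r."""
    base = explain(f, image)
    worst = 0.0
    for _ in range(n_samples):
        noise = torch.empty_like(image).uniform_(-radius, radius)     # I' with ||I' - I|| <= r
        worst = max(worst, (explain(f, image + noise) - base).norm().item())
    return worst
```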

Table 1 presents the infidelity and sensitivity results. F-SHAP meets the axiom verification criteria and achieves the best infidelity, sensitivity, and subjective evaluation scores.

3.5 Manifold Dimension and Explanation

We examined several manifold dimension configurations and calculated the infidelity and sensitivity of the resulting network explanations. Figure 4 shows how the manifold dimension affects explanation reliability. It is evident that increasing the manifold dimension reduces the explanation's infidelity and sensitivity.

4 Conclusions

This study introduces a Shapley-based method for explaining SAR recognition networks. By combining low-dimensional manifold Shap and high-dimensional feature Shap, this approach rectifies the inherent assumption of feature independence. We also introduce Shapley mapping to achieve the transformation between manifold Shap and original Shap. Experimental results confirm the efficacy of our method in terms of visualization, subjective evaluation, axiom validation, and infidelity and sensitivity assessment.

References

  • [1] Zhenpeng Feng, Hongbing Ji, Miloš Daković, Mingzhe Zhu, and Ljubiša Stanković, “Analytical interpretation of the gap of CNN’s cognition between SAR and optical target recognition,” Neural Networks, vol. 165, pp. 982–986, 2023.
  • [2] Lloyd S Shapley et al., “A value for n-person games,” 1953.
  • [3] Scott M Lundberg and Su-In Lee, “A unified approach to interpreting model predictions,” Advances in neural information processing systems, vol. 30, 2017.
  • [4] Hugh Chen, Ian C Covert, Scott M Lundberg, and Su-In Lee, “Algorithms to estimate shapley value feature attributions,” Nature Machine Intelligence, pp. 1–12, 2023.
  • [5] Chun-Hao Chang, Elliot Creager, Anna Goldenberg, and David Duvenaud, “Explaining image classifiers by counterfactual generation,” in International Conference on Learning Representations, 2018.
  • [6] Christopher Anders, Plamen Pasliev, Ann-Kathrin Dombrowski, Klaus-Robert Müller, and Pan Kessel, “Fairwashing explanations with off-manifold detergent,” in International Conference on Machine Learning. PMLR, 2020, pp. 314–323.
  • [7] Kjersti Aas, Martin Jullum, and Anders Løland, “Explaining individual predictions when features are dependent: More accurate approximations to shapley values,” Artificial Intelligence, vol. 298, pp. 103502, 2021.
  • [8] Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, and Ilya Feige, “Shapley explainability on the data manifold,” in International Conference on Learning Representations, 2020.
  • [9] Yongchan Kwon and James Y Zou, “WeightedSHAP: analyzing and improving shapley based feature attributions,” Advances in Neural Information Processing Systems, vol. 35, pp. 34363–34376, 2022.
  • [10] Emanuele Albini, Jason Long, Danial Dervovic, and Daniele Magazzeni, “Counterfactual shapley additive explanations,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 1054–1070.
  • [11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative adversarial nets,” Advances in neural information processing systems, vol. 27, 2014.
  • [12] Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger, “UMAP: Uniform manifold approximation and projection,” The Journal of Open Source Software, vol. 3, no. 29, pp. 861, 2018.
  • [13] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila, “Analyzing and improving the image quality of StyleGAN,” in Proc. CVPR, 2020.
  • [14] Rameen Abdal, Yipeng Qin, and Peter Wonka, “Image2StyleGAN: How to embed images into the StyleGAN latent space?,” in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 4432–4441.
  • [15] Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Ziwei Yang, Zheyang Li, and Quanshi Zhang, “Understanding and unifying fourteen attribution methods with taylor interactions,” arXiv preprint arXiv:2303.01506, 2023.
  • [16] Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I Inouye, and Pradeep K Ravikumar, “On the (in) fidelity and sensitivity of explanations,” Advances in Neural Information Processing Systems, vol. 32, 2019.
  • [17] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.
  • [18] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek, “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PloS one, vol. 10, no. 7, pp. e0130140, 2015.
  • [19] Mukund Sundararajan, Ankur Taly, and Qiqi Yan, “Axiomatic attribution for deep networks,” in International conference on machine learning. PMLR, 2017, pp. 3319–3328.
  • [20] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg, “SmoothGrad: removing noise by adding noise,” arXiv preprint arXiv:1706.03825, 2017.
  • [21] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje, “Learning important features through propagating activation differences,” in International conference on machine learning. PMLR, 2017, pp. 3145–3153.