
Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models

Sean Xie
Department of Computer Science
Dartmouth College
[email protected]
Soroush Vosoughi
Department of Computer Science
Dartmouth College
[email protected]
Saeed Hassanpour*
Department of Biomedical Data Science
Dartmouth College
[email protected]
* Co-corresponding Authors.
Abstract

Large Language Models (LLMs) have significantly advanced the field of Natural Language Processing (NLP), but their lack of interpretability has been a major concern. Current methods for interpreting LLMs are post hoc, applied after inference, and have limitations such as their focus on low-level features and lack of explainability at higher-level text units. In this work, we introduce proto-lm, a prototypical network-based white-box framework that allows LLMs to learn inherently interpretable embeddings during the fine-tuning stage while maintaining competitive performance. We demonstrate our method's applicability and interpretability through experiments on a wide range of NLP tasks, and our results indicate that interpretable models can be built without sacrificing performance, paving the way for more interpretable LLMs. We release our code at https://github.com/yx131/proto-lm.

1 Introduction

In recent years, Large Language Models (LLMs) have significantly improved results on a wide range of Natural Language Processing (NLP) tasks. However, despite their state-of-the-art performance, LLMs such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and BART (Lewis et al., 2019) are not easily interpretable. Interpretability is a crucial aspect of language models, especially LLMs, as it enables trust and adoption in various domains. To address this issue, there is a growing interest in improving model interpretability for LLMs and neural models in general.

Current interpretation methods have several limitations, such as requiring a surrogate model to be built (Ribeiro et al., 2016; Lundberg and Lee, 2017) or being applied post hoc to each instance, separately from the original decision-making process of the model (Shrikumar et al., 2016, 2017; Springenberg et al., 2014). These limitations add extra computational complexity to the explanation process and can result in approximate explanations that are unfaithful to the original model's decision-making process (Sun et al., 2020; DeYoung et al., 2019); faithfulness here refers to how accurately the explanation reflects the decision-making process of the model (Jacovi and Goldberg, 2020). Finally, current interpretability methods focus on attributing importance to different words in the input and do not explain the model's decision at the sentence or sample level (Sundararajan et al., 2017; Vaswani et al., 2017; Murdoch et al., 2018).

Figure 1: Illustration of the inherently interpretable decision-making process of the proto-lm model for an example from the SST5 (5-class fine-grained sentiment analysis) dataset. The figure highlights the words identified by token-level attention, with a stronger shade indicating a higher attention score. proto-lm correctly predicts the input by relating it to prototypes of the “Very Positive” class. The decision process of proto-lm is transparent, and no separate interpretability method is required.

To address these challenges, we propose a novel framework that utilizes a prototypical network to learn an interpretable prototypical embedding layer on top of a fine-tuned LLM, which can be trained end-to-end for a downstream task. Our framework utilizes trainable parameters, called prototypes, to both perform the downstream task by capturing important features from the training dataset and provide explanations for the model’s decisions via projection onto the most influential training examples. Additionally, we utilize a token-level attention layer before the prototypical layer to select relevant parts within each input text, allowing our model to attribute importance to individual words like existing interpretability methods. Our framework, named proto-lm, achieves competitive results on a wide range of NLP tasks and offers interpretability while addressing the previously mentioned limitations of current interpretability methods.

Fig. 1 shows our proposed proto-lm framework applied to a multi-class classification example from the SST5 dataset (Socher et al., 2013). As the figure illustrates, the model's decision-making process is simple, transparent, and accurate: it classifies the input as “Very Positive” based on its highest similarity to prototypes from the “Very Positive” class and the input's distance (lack of similarity) to “Negative” and “Very Negative” prototypes. A key advantage of our framework is its inherent interpretability. Our prototypical network does not require an additional surrogate model for explanation, and we can observe the exact contribution of each prototype to the final output, resulting in faithful explanations. Additionally, our framework offers interpretability at both the token and sample level, allowing for the identification of important words in the input text as well as the attribution of importance to impactful samples in the training data. In this work, we make the following contributions:

  • We introduce a novel framework, proto-lm, based on prototypical networks, that provides inherent interpretability to LLMs.

  • We demonstrate proto-lm’s applicability on three LLMs and show its competitive performance on a wide range of NLP tasks.

  • We conduct ablation studies to demonstrate important characteristics of proto-lm under different hyperparameters.

  • We evaluate the interpretability of proto-lm under several desiderata and show that the explanations provided by proto-lm are of high quality.

2 proto-lm

The architecture of our proposed framework, proto-lm, is shown in Figure 2. We utilize a pre-trained LLM as the underlying language model to encode the input and build a prototypical layer on top of it to provide interpretable prototypes. The goal of the learned prototypes is to capture features representative enough of each class so that they can be fine-tuned for the downstream classification task. In the classification process, similarities between the encoded input and the learned prototypes are fed through a fully-connected layer to produce logits. We formalize this below.

2.1 Token-level Attention

Let $x_i$ represent a single input text and $y_i$ its associated label. We denote by $f(x_i)$ the encoding of $x_i$ by an underlying LLM, $f$. We pass $f(x_i)$ through a token-level attention layer that learns to emphasize different parts of the text, allowing us to identify important words within each sample. We first calculate $\upsilon_t$, the output of a fully-connected layer with bias ($\psi$) applied to $h_t$, the embedding of each token $t$ in $f(x_i)$, followed by a $\tanh$ activation function ($\sigma$). We then compute $\nu_t$, the dot product of $\upsilon_t$ and a token-level weight vector $W_\nu$.

$\upsilon_t = \sigma(\psi(h_t)) \qquad \nu_t = \upsilon_t \cdot W_\nu$   (1)

We calculate the attention score $\alpha_t$ for each token $t$ using the softmax function. The attended encoding of $f(x_i)$, $S_i$, is then obtained by taking the weighted sum of the token embeddings, where the weight for each token is determined by its attention score.

$\alpha_t = \frac{\exp(\nu_t)}{\sum_{t' \in f(x_i)} \exp(\nu_{t'})} \qquad S_i = \sum_{t \in f(x_i)} \alpha_t h_t$   (2)
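To make the computation concrete, the following is a minimal PyTorch sketch of the token-level attention in Eqs. 1–2. The module and argument names (e.g., TokenAttention, hidden_dim, the padding mask) are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class TokenAttention(nn.Module):
    """Token-level attention over LLM token embeddings (Eqs. 1-2)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.psi = nn.Linear(hidden_dim, hidden_dim, bias=True)  # psi: fully-connected layer with bias
        self.w_nu = nn.Parameter(torch.randn(hidden_dim))        # token-level weight vector W_nu

    def forward(self, h: torch.Tensor, mask: torch.Tensor):
        # h: (batch, seq_len, hidden_dim) token embeddings f(x_i); mask: (batch, seq_len), 1 = real token
        upsilon = torch.tanh(self.psi(h))              # Eq. 1: upsilon_t = tanh(psi(h_t))
        nu = upsilon @ self.w_nu                       # Eq. 1: nu_t = upsilon_t . W_nu
        nu = nu.masked_fill(mask == 0, float("-inf"))  # exclude padding tokens from the softmax
        alpha = torch.softmax(nu, dim=-1)              # Eq. 2: attention scores alpha_t
        s = (alpha.unsqueeze(-1) * h).sum(dim=1)       # Eq. 2: attended encoding S_i
        return s, alpha
```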
Figure 2: The prototypical network-based architecture of proto-lm.

2.2 Prototypical Layer

In the prototypical layer $P$, we create $N$ prototype vectors and assign $n$ prototypes to each class $c \in C$, where $C$ represents the set of all classes in the dataset. To ensure an equal number of prototype vectors are allocated to capture representative features for each class, we set $n = \frac{N}{|C|}$, where $|C|$ denotes the total number of classes in the dataset. The input $S_i$ is then fed into $P$, where we calculate the similarity between $S_i$ and every prototype vector $p \in P$ by inverting the $L_2$-distance. We then concatenate all $N$ similarities to obtain the vector $M_i$, which serves as the embedding of input $x_i$ in the prototypical space. Each dimension of $M_i$ can be interpreted as the similarity between the input and a prototype. Subsequently, $M_i$ is fed through a fully-connected layer $W_h$ of dimension $N \times |C|$ to obtain logits for each class. Let $w_c$ denote the set of weights in $W_h$ that connect the similarities in $M_i$ to the logit of class $c$. Our model's output probabilities are computed as follows:

$\Pr(\hat{y}=c \mid x_i) = \frac{\exp(M_i w_c)}{\sum_{c' \in C} \exp(M_i w_{c'})}$   (3)
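A sketch of the prototypical layer and classification head under the same assumptions is shown below; the exact form of the distance inversion (here, a reciprocal with a small epsilon) is an illustrative choice rather than the authors' exact formulation.

```python
import torch
import torch.nn as nn

class PrototypicalLayer(nn.Module):
    """N trainable prototypes, similarity vector M_i, and class logits (Eq. 3)."""

    def __init__(self, hidden_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        # N prototype vectors; n = N / |C| of them are assigned to each class via the loss.
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, hidden_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)  # W_h

    def forward(self, s: torch.Tensor):
        # s: (batch, hidden_dim) attended encodings S_i
        dists = torch.cdist(s, self.prototypes) ** 2   # squared L2 distance to every prototype
        sims = 1.0 / (dists + 1e-8)                    # invert distance to obtain similarities M_i
        logits = self.classifier(sims)                 # M_i W_h -> per-class logits, softmaxed by Eq. 3
        return logits, sims, dists
```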

2.3 Training Objective

To create more representative prototypes and shape the clusters of prototypes in the prototypical embedding space, we introduce a cohesion loss term $\mathscr{L}_{coh}$ and a separation loss term $\mathscr{L}_{sep}$ into our loss function, in addition to the cross-entropy loss $\mathscr{L}_{ce}$. Let $K$ be an integer hyperparameter, let $p_j$ denote a single prototype, and let $P_{y_i}$ represent all prototypes in $P$ that belong to class $y_i$; our loss terms are defined as follows:

$\mathscr{L}_{coh} = \frac{1}{K} \cdot \sum_{\forall j:\, p_j \in P_{y_i}} \max_K \lVert S_i - p_j \rVert_2^2$   (4)
$\mathscr{L}_{sep} = -\frac{1}{K} \cdot \sum_{\forall j:\, p_j \notin P_{y_i}} \min_K \lVert S_i - p_j \rVert_2^2$   (5)

For every $S_i$ in the input batch, $\mathscr{L}_{coh}$ penalizes the average distance between $S_i$ and the $K$ most distant prototypes of its class ($y_i$), while $\mathscr{L}_{sep}$ penalizes the average distance between $S_i$ and the $K$ most similar prototypes that do not belong to $y_i$. Intuitively, for every $S_i$, $\mathscr{L}_{coh}$ “pulls” $K$ prototypes of the correct class close while $\mathscr{L}_{sep}$ “pushes” $K$ prototypes of the incorrect classes away. We then add the cross-entropy loss $\mathscr{L}_{ce}$ and weight each loss term with a hyperparameter $\lambda$ such that $\lambda_0+\lambda_1+\lambda_2=1$ to obtain the following loss function:

$\mathscr{L}_{total} = \lambda_0 \cdot \mathscr{L}_{ce} + \lambda_1 \cdot \mathscr{L}_{coh} + \lambda_2 \cdot \mathscr{L}_{sep}$   (6)
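A sketch of the total loss in Eqs. 4–6, assuming the squared distances dists come from the layer sketched above and proto_labels records each prototype's class; it also assumes $K$ does not exceed $n$, the number of prototypes per class.

```python
import torch
import torch.nn.functional as F

def proto_lm_loss(logits, dists, labels, proto_labels, K, lambdas=(0.3, 0.35, 0.35)):
    """L_total = lambda_0 * L_ce + lambda_1 * L_coh + lambda_2 * L_sep (Eq. 6)."""
    l0, l1, l2 = lambdas
    ce = F.cross_entropy(logits, labels)  # L_ce

    # (batch, N) mask: True where prototype j belongs to the sample's class y_i
    same_class = proto_labels.unsqueeze(0) == labels.unsqueeze(1)

    # L_coh (Eq. 4): average squared distance to the K farthest same-class prototypes
    d_same = dists.masked_fill(~same_class, float("-inf"))
    coh = torch.topk(d_same, K, dim=1, largest=True).values.mean()

    # L_sep (Eq. 5): negative average squared distance to the K closest other-class prototypes
    d_diff = dists.masked_fill(same_class, float("inf"))
    sep = -torch.topk(d_diff, K, dim=1, largest=False).values.mean()

    return l0 * ce + l1 * coh + l2 * sep
```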

2.4 Prototype Projection

To understand each prototype vector $p_j$ in natural language, we project each prototype onto the nearest token-level attended encoding $S_j$ of a sample text $x_j$ in the training data that belongs to the same class as $p_j$, and assign that sample's token-attended text to the prototype. Let $D$ be the training dataset. We formally denote the projection in Eq. 7: for all $(x_j, y_j) \in D$ such that $y_j = c$,

$\text{Text of } p_j \leftarrow \operatorname*{argmin}_{S_j} \lVert S_j - p_j \rVert_2^2$   (7)

In other words, for each prototype $p_j$, we find the nearest $S_j$ in the training dataset that belongs to the same class as $p_j$ and assign the corresponding token-attended text to $p_j$ (details in App. D & E).
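Prototype projection (Eq. 7) can be sketched as a nearest-neighbour search restricted to same-class training encodings; the tensor names below are illustrative assumptions.

```python
import torch

@torch.no_grad()
def project_prototypes(prototypes, proto_labels, encodings, enc_labels):
    """Eq. 7: index of the nearest same-class attended encoding S_j for each prototype."""
    # prototypes: (N, hidden_dim); encodings: (num_train, hidden_dim) attended encodings S_j
    # proto_labels: (N,) class of each prototype; enc_labels: (num_train,) class of each sample
    dists = torch.cdist(prototypes, encodings) ** 2               # (N, num_train)
    other_class = proto_labels.unsqueeze(1) != enc_labels.unsqueeze(0)
    dists = dists.masked_fill(other_class, float("inf"))          # only consider same-class samples
    return dists.argmin(dim=1)  # use these indices to look up each prototype's token-attended text
```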

3 Performance Experiments

Model | SST2 | QNLI | MNLI | WNLI | RTE | IMDb | SST5
proto-lm/BERT-base | 93.6 ±0.02 / 93.2 | 90.5 ±0.02 / 90.2 | 84.0 ±0.03 / 84.6 | 47.9 / 46.4* | 68.1 ±0.01 / 66.4 | 91.5 ±0.02 / 91.3 | 55.3 ±0.03 / 54.9
proto-lm/BERT-large | 95.2 ±0.03 / 94.9 | 92.4 ±0.03 / 92.7 | 86.3 ±0.02 / 86.7 | 47.9 / 47.9* | 76.2 ±0.02 / 70.1 | 93.6 ±0.03 / 92.0 | 56.4 ±0.02 / 56.2
proto-lm/RoBERTa-base | 94.0 ±0.03 / 93.6 | 92.2 ±0.01 / 92.5 | 86.9 ±0.03 / 87.2 | 56.3 / 56.3* | 84.2 ±0.03 / 78.7 | 95.0 ±0.02 / 94.7 | 56.8 ±0.03 / 56.4
proto-lm/RoBERTa-large | 96.5 ±0.02 / 96.4 | 94.6 ±0.02 / 94.7 | 90.1 ±0.02 / 90.2 | 56.4 / 56.4* | 88.6 ±0.02 / 86.6 | 95.8 ±0.03 / 95.6 | 58.0 ±0.04 / 57.9
proto-lm/BART-large | 97.0 ±0.04 / 96.6 | 94.2 ±0.05 / 94.9 | 89.3 ±0.06 / 90.2 | - | 89.2 ±0.05 / 87.0 | - | -
Table 1: Predictive accuracy of proto-lm (mean ± standard deviation) compared against fine-tuned versions of its base LLMs, reported as proto-lm / baseline. We observe competitive performance compared to the baselines on all tasks. * indicates that the baseline performance is a result of our experiments on that task and not reported in the original work cited. All results for proto-lm are obtained under the optimal arrangement of hyperparameters for that task, found through experimentation.

As we do not want interpretability to come at the cost of performance, we first conduct experiments to assess the modeling capability of proto-lm. We implement proto-lm using BERT (base-uncased and large-uncased), RoBERTa (base and large), and BART-large as the base LLM encoders and train the token-level attention module, prototypical layer, and classification head as described in §2. We evaluate the predictive accuracy of proto-lm on five tasks (SST-2, QNLI, MNLI, WNLI, RTE) from the GLUE benchmark (Wang et al., 2018), IMDb sentiment analysis (Maas et al., 2011), and SST-5 (Socher et al., 2013). For baselines, we use the same LLMs fine-tuned with a classification head. We tune all hyperparameters using the respective validation data in each dataset. We present the mean performance as well as the standard deviation over 5 runs under their respective optimal configurations of hyperparameters (cf. App. A) and compare them to their baselines in Table 1. Across our experiments (we conduct additional performance experiments with proto-lm in App. B), we observe that proto-lm either closely matches or exceeds the performance of its baseline LLM, showing that proto-lm's interpretability does not come at the cost of performance.

4 Prototype Interpretability

Figure 3: Two-dimensional prototypical space representation of four test examples, where each dimension represents the normalized similarity between the sample and a prototype. A positive prototype is chosen for the horizontal axis, and a negative prototype for the vertical axis. Examples #1 and #3 are correctly classified, as they are much more similar to the prototype of their respective ground-truth class. Examples #2 and #4 are misclassified, due to their high similarities to the prototype of the incorrect class.

4.1 Interpretable prototypical space and decision-making

Compared to black-box models such as BERT/RoBERTa/BART, which have uninterpretable embedding dimensions, proto-lm offers the added benefit of interpretability in conjunction with competitive performance. We show an example of a 2-dimensional prototypical space in Fig. 3: proto-lm provides insight into the model's decision-making process by allowing us to visualize examples in a prototypical space in which each dimension represents the normalized similarity (in $[0,1]$) between an example and one prototype.

In Fig. 3, we select one positive and one negative prototype from the prototypical layer of the model to create a 2-dimensional space in which we place examples #1-#4. The vertical axis represents similarity to the negative prototype, and the horizontal axis represents similarity to the positive prototype. From this, we can see that example #1 is much more similar to the negative prototype than to the positive prototype, and example #3 is much more similar to the positive prototype than to the negative prototype, and both are correctly classified as a result.

From the prototypical space, we can see clearly that the model’s decision to misclassify example #2 as positive is due to elements in the example that make it similar to the positive prototype. Similarly, the prototypical space of proto-lm reveals that example #4 was misclassified because, while it contains elements that are similar to both the positive and negative prototypes, it is closer to the negative prototype. The interpretable prototypical space of proto-lm provides an explanation for why difficult cases such as examples #2 and #4 are misclassified. It should be noted that while similar analyses and explanations can be obtained through post-hoc techniques such as generating sentence embeddings Reimers and Gurevych (2019); Gao et al. (2021), proto-lm has this capability built-in.

4.2 Prototype quality and performance

We investigate the effect of different weightings of the loss terms in our loss function. We train proto-lm, holding all other hyperparameters constant, under 11 different values of $\lambda_0$ evenly distributed on $[0,1]$, and report our results in Figure 4. For these experiments, we place equal weight on $\mathscr{L}_{coh}$ and $\mathscr{L}_{sep}$ such that $\lambda_1=\lambda_2=\frac{1-\lambda_0}{2}$ (additional details in App. C).

Because prototypes not only help interpret the model but also capture the most representative features from the data, placing excessive emphasis on $\mathscr{L}_{ce}$ actually has an adverse effect. As shown in Figure 4, increasing $\lambda_0$ comes at the cost of learning representative prototypes (which $\mathscr{L}_{coh}$ and $\mathscr{L}_{sep}$ encourage), and this is reflected in decreasing classification accuracy on the downstream task. We observe that the optimal accuracy is achieved when $\lambda_0=0.3$. As we increase $\lambda_0$ beyond 0.3, we see not only a decline in accuracy but also a decline in the quality of the prototypes associated with the input. On the other hand, placing sole emphasis on $\mathscr{L}_{coh}$ and $\mathscr{L}_{sep}$ without any emphasis on $\mathscr{L}_{ce}$ (as in the case of $\lambda_0=0$) leads to prototypes that do not help with the downstream task.

Figure 4: Predictive accuracy of the proto-lm model with RoBERTa-large as the base LM on the IMDb dataset as a function of $\lambda_0$, with $N=1400$, $n=700$, and $K=350$. We illustrate the closest (most similar) prototypes to the positive input generated by proto-lm when trained under different values of $\lambda_0$. We observe a non-linear relationship between accuracy and $\lambda_0$.

4.3 Size of prototypical space

As noted in §4.2, prototypes serve to capture features from data that are useful for making predictions, in addition to providing interpretability. As a result, the number of prototypes ($N$) in the prototypical layer directly affects the expressiveness of our model. We conduct experiments varying the number of prototypes $N$ using RoBERTa-large as the base and show our results in Fig. 5. We observe a general increase in accuracy up until $N=1000$, after which accuracy plateaus for some tasks while increasing only slightly for others. We reason that increasing $N$ can only improve the model's expressiveness until $N$ reaches the embedding dimension of the underlying LLM, after which additional prototypes in the prototypical space no longer aid in capturing more useful features from the training data. In Fig. 5's case, as RoBERTa-large's output dimension is 1024, we see the increasing trend in accuracy stop at around 1000 prototypes.

Figure 5: Performance of proto-lm on tasks under varying numbers of prototypes ($N$) in the prototypical layer.

4.4 Prototype uniqueness and performance

The interpretability of proto-lm stems from the retrieval of the most informative training examples. If all prototypes are predominantly associated with a limited number of training examples, their utility from an interpretability perspective is reduced. Hence, it is beneficial to encourage prototype segregation, that is, a broader projection onto the dataset and a more diverse representation of different samples. Besides normalizing prototype distances, which has been shown to influence prototype segregation (Das et al., 2022), proto-lm introduces an additional hyperparameter, $K$. This parameter controls the number of prototypes that each training example associates and disassociates with during training. As changing $K$ also influences the decision-making process of the model by altering the number of samples the model compares against for each input, we examine the impact of $K$ on both prototype segregation and model performance. In our experiment, we employ proto-lm with RoBERTa-large as our base on SST2 and IMDb. We set $N$, the total number of prototypes, to 1000, and $n=N/2$, the number of prototypes per class, to 500. We train models under seven different $K$ values, keeping all other hyperparameters constant. We report the accuracy of the models and the percentage of unique prototypes in $P$ under each $K$ in Fig. 6.

A prototype is considered unique if no other prototype in $P$ projects onto the same sample in the dataset. We observe that the best prototype segregation (the highest number of unique prototypes) occurs when $K=1$, and the number of unique prototypes drops significantly as we increase $K$. Intuitively, if more prototypes are drawn close to each sample during training (Eq. 4), it becomes challenging to create unique prototypes. It is important to note that Eq. 5 is less related to the uniqueness of prototypes, as prototypes are projected onto samples from the same class. We also observe a slight rise in model accuracy as we increase $K$. We conjecture that while unique prototypes contribute to interpretability, the model does not necessarily require prototypes to be unique to make accurate decisions. Thus, we observe a minor trade-off between interpretability and model performance.
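The uniqueness measure reported in Fig. 6 can be computed directly from projection indices such as those returned by the projection sketch in §2.4; a minimal sketch (names assumed):

```python
import torch

def unique_prototype_fraction(projection_indices: torch.Tensor) -> float:
    """Fraction of prototypes whose projected training sample is shared with no other prototype."""
    _, counts = torch.unique(projection_indices, return_counts=True)
    return int((counts == 1).sum()) / projection_indices.numel()
```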

Figure 6: Performance and prototype uniqueness in proto-lm when trained with different values of $K$.

5 Evaluating the Interpretability of proto-lm

Through the usage of its prototypical space, proto-lm is a white-box, inherently interpretable model. But just how well do the explanations provided by proto-lm satisfy common desiderata in interpretability? We conduct experiments to try to answer that question in this section. Specifically, we evaluate the inherent interpretability provided by proto-lm via measures of faithfulness DeYoung et al. (2019); Jacovi and Goldberg (2020) and simulatability Pruthi et al. (2022); Fernandes et al. (2022).

5.1 Faithfulness experiments

Recently, the concept of faithfulness, which measures the extent to which an explanation accurately reflects the true decision-making process of a model, has garnered attention as a criterion for evaluating explainability methods (Jacovi and Goldberg, 2020, 2021; Chan et al., 2022; Dasgupta et al., 2022). For textual inputs, faithfulness is concretely evaluated using the metrics of comprehensiveness (Comp) and sufficiency (Suff), as defined by DeYoung et al. (2019). Specifically, Comp and Suff quantify the reduction in the model’s confidence for its output when salient features are removed and retained, respectively.

We extend the application of Comp and Suff to prototypical networks to assess the faithfulness of proto-lm. For an input $x_i$, we first identify the top $k\%$ most similar prototypes and the bottom $k\%$ least similar prototypes. Let $p^k_i$ denote the $k\%$ prototypes identified; we compute Comp as the percentage difference in model confidence when $p^k_i$ are removed (by setting $p^k_i$'s weights in $W_h$ to 0). Specifically, let $\hat{y}_i$ be the prediction of a model $\mathcal{M}$ on $x_i$, let $pr_{\hat{y}_i}$ be the output logit of $\mathcal{M}$ for $\hat{y}_i$, and let $pr_{\hat{y}_i}(P \setminus p^k_i)$ be the output logit when the $p^k_i$ prototypes are removed from the prototypical layer. We calculate Comp as follows:

$\text{Comp} = \frac{pr_{\hat{y}_i} - pr_{\hat{y}_i}(P \setminus p^k_i)}{pr_{\hat{y}_i}}$   (8)

We analogously calculate Suff as the percentage difference in model confidence when only the $k\%$ of prototypes are retained:

$\text{Suff} = \frac{pr_{\hat{y}_i} - pr_{\hat{y}_i}(p^k_i)}{pr_{\hat{y}_i}}$   (9)

We compute Comp and Suff for $k \in \{1, 5, 10, 20, 50\}$ and present our mean results across SST2, QNLI, MNLI, and SST5 in Fig. 7. Our formulation of prototype Comp and Suff is inspired by DeYoung et al. (2019). We note here that under this formulation, lower sufficiency is better, i.e., a smaller reduction in model confidence when only salient features are retained is better. As we remove more of the top $k\%$ prototypes, we observe a general increase in Comp. We also note a general decrease in Suff (lower Suff is preferable) as we retain more of the top $k\%$ prototypes. Moreover, when the bottom $k\%$ of prototypes are removed, there are relatively small changes in Comp, and there are large changes in Suff when only the bottom $k\%$ of prototypes are retained. These trends underscore the influence of the learned prototypes on the model's decision-making process and the faithfulness of the explanations they provide.
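Comp and Suff (Eqs. 8–9) can be computed by zeroing out prototype weights in $W_h$. The sketch below is illustrative: it assumes a single input encoded as s (a batch of one) and the PrototypicalLayer sketched in §2.2.

```python
import torch

@torch.no_grad()
def comp_and_suff(layer, s, k_frac):
    """Comprehensiveness (Eq. 8) and sufficiency (Eq. 9) for one attended encoding s of shape (1, hidden_dim)."""
    logits, sims, _ = layer(s)                         # sims: (1, N) similarities M_i
    pred = logits.argmax(dim=-1, keepdim=True)         # predicted class y_hat, shape (1, 1)
    base = logits.gather(1, pred)                      # pr_{y_hat} with all prototypes present

    k = max(1, int(k_frac * sims.shape[1]))
    top_k = torch.topk(sims, k, dim=1).indices[0]      # indices of the top k% most similar prototypes

    def masked_logit(keep=None, drop=None):
        w = layer.classifier.weight.clone()            # W_h, shape (num_classes, N)
        if drop is not None:
            w[:, drop] = 0.0                           # "remove" prototypes by zeroing their weights
        if keep is not None:
            mask = torch.zeros_like(w)
            mask[:, keep] = 1.0
            w = w * mask                               # "retain" only the selected prototypes
        return (sims @ w.t()).gather(1, pred)          # logit of y_hat under the masked W_h

    comp = (base - masked_logit(drop=top_k)) / base    # Eq. 8
    suff = (base - masked_logit(keep=top_k)) / base    # Eq. 9
    return comp.item(), suff.item()
```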

Figure 7: The comprehensiveness and sufficiency of proto-lm with $k\%$ of prototypes removed.

5.2 Simulatability experiments

Figure 8: Simulatability and accuracy of human annotators under different settings of provided explanations from proto-lm.

Simulatability refers to the capacity of a human to mimic or replicate the decision-making process of a machine learning model Doshi-Velez and Kim (2017); Pruthi et al. (2022). It is a desirable attribute as it inherently aligns with the goal of transparently communicating the model’s underlying behavior to human users Fernandes et al. (2022). We assess the simulatability of proto-lm by providing human annotators with various explanations of model outputs on SST2/QNLI and recording the percentage of model outputs that the annotators can replicate. We provide explanations under the following four settings:

  • No Explanations (NE): Annotators are presented with only the sample data, without any explanations, and asked to make a decision.

  • Random Explanations (Random): Each sample in the dataset is presented along with prototypes from proto-lm chosen randomly as explanations. This setting serves as our baseline.

  • Prototypical Explanations (PE): Each sample in the dataset is presented along with the top 5 prototypes most similar to the sample when proto-lm made its decision.

  • PE + Output: In addition to the prototypical explanations, the model decision for each sample is also presented.

We employ workers from Amazon Mechanical Turk to crowdsource our evaluations and present 3 workers with 50 examples each from the SST2 and QNLI datasets, along with explanations for the model's decisions in the settings described above. We use the best-performing models for SST2 and QNLI (those presented in Table 1), since previous studies found that the utility of PEs is reliant on model accuracy (Das et al., 2022). Additionally, we assess the accuracy of the human annotators in relation to the ground-truth labels. We present results for both in Fig. 8. The results show that the case-based reasoning explanations offered by PEs are more effective in assisting annotators in simulating model predictions than the random baseline. We also notice a minor drop in accuracy when we provide PE + Output compared to just PE. We attribute this to the fact that the models are not entirely accurate themselves, and in instances where the model is inaccurate, presenting the output leads annotators to replicate inaccurate decisions.

Furthermore, we compare proto-lm's simulatability against three other well-known interpretability methods: LIME, Integrated Gradients, and Guided Backprop. We employed three workers for each example and report the mean percentage of examples where the workers correctly replicated the model's decision. For an example of a simulatability questionnaire with PE, see Fig. 12. For an example of an accuracy questionnaire with Random explanations, see Fig. 13. For an example of the questionnaire for LIME/IG/GB explanations, see Fig. 14. For results of proto-lm against LIME, IG, and GB, see Table 2. These additional results further indicate that proto-lm allows the workers to replicate the model's decisions better than all the other interpretability methods on both tasks.

Method | SST-2 | QNLI
Random Explanations | 42.3% | 48.7%
LIME | 87.3% | 90.3%
Integrated Gradients | 84.7% | 84.3%
Guided Backprop | 78.0% | 88.0%
Prototype explanations from proto-lm | 90.0% | 92.0%
Table 2: Simulatability of proto-lm on SST-2 and QNLI reported as the mean percentage of model decisions replicated by three workers.

6 Related Works

Prototypical networks (Snell et al., 2017) have shown strong performance in few-shot image and text classification tasks (Sun et al., 2019; Gao et al., 2019). However, these approaches do not actively learn prototypes and instead rely on summarizing salient parts of the training data. Their focus is on learning one representative prototype per class, and their performance is dependent on the size of the support set in few-shot learning scenarios. Due to these limitations, there have been relatively few works that utilize prototypical networks to provide interpretability for LLMs in NLP (Garcia-Olano et al., 2022; Das et al., 2022; Van Aken et al., 2022). Chen et al. (2019) and Hase et al. (2019) use prototypical parts networks with multiple learned prototypes per class but only apply their methods to image classification tasks.

Our work is most closely related to Das et al. (2022) and Van Aken et al. (2022) in terms of the approach taken. However, our work differs in that the architecture in Das et al. (2022) only utilizes a single negative prototype for binary classification, while proto-lm enables multi-class classification by using multiple prototypes for each class. Additionally, we extend the work of Das et al. (2022) by implementing token-level attention to identify not only influential samples but also influential sections of text within each sample. Moreover, unlike the single-prototype-as-a-summary approach in Van Aken et al. (2022), by learning multiple prototypes per class, proto-lm creates a prototypical space (§4.1) in which, unlike the embedding space of LLMs, each dimension is meaningful, namely the similarity to a learned prototype, and can be used to explain a decision. Our work is also similar to Friedrich et al. (2021) in terms of loss function design. However, Friedrich et al. (2021)'s loss function aims to maximize the similarity of the closest prototypes of the same class, whereas our approach strives to minimize the distance of the furthest prototypes of the same class. Their approach thus tends to draw a single prototype closer to a specific example, potentially limiting prototype diversity and representation power, a limitation Friedrich et al. (2021) counteract by introducing an additional diversity loss term. proto-lm, in contrast, ensures prototype diversity by leveraging the $K$ hyperparameter, which we examine in §4.4.

7 Conclusion

We introduce proto-lm, a white-box framework designed to offer inherent interpretability for Large Language Models (LLMs) through interpretable prototypes. These prototypes not only explain model decisions but also serve as feature extractors for downstream tasks. Our experimental results indicate that proto-lm delivers competitive performance across a range of Natural Language Processing (NLP) tasks and is robust under various hyperparameter settings. We evaluate the interpretability of proto-lm and find that it delivers faithful explanations that can effectively assist humans in understanding and predicting model decisions.

8 Limitations

While proto-lm offers inherent interpretability by creating a connection between input text and pertinent parts of training data through the use of prototypes, it remains dependent on an underlying language model to convert text into a semantic space. Consequently, the interpretability of proto-lm is constrained by the interpretability of the foundational language model.

Moreover, interpreting the significance of a learned prototype in the context of Natural Language Processing (NLP) tasks is still an open research area. Computer Vision (CV) methods used for visualizing prototypes in image-based tasks, such as upsampling a prototypical tensor, are not transferrable to language embeddings. Instead, researchers depend on projecting prototypes onto nearby training examples to decode prototype tensors into comprehensible natural language.

9 Ethics Statement

We present proto-lm, a framework that enhances the interpretability of Large Language Models (LLMs) by providing explanations for their decisions using examples from the training dataset. We anticipate that the broad applicability of proto-lm across various NLP tasks will promote transparency and trust in the use of LLMs, thereby encouraging their wider adoption. As observed by authors like Rudin (2019) and Jiménez-Luna et al. (2020), models with higher levels of interpretability inspire more confidence and enjoy greater utilization. We aim to contribute to advancing the field of interpretability research in NLP through our work.

For assessing the simulatability of our method, we employed Amazon Mechanical Turk (MTurk) for our human evaluation. To ensure English proficiency among workers, we restricted their location to the United States. Furthermore, only workers with a HIT approval rate of at least 99% were permitted to undertake our tasks. We provided a compensation of $0.20 per task, which roughly translates to about $24 per hour, significantly exceeding the US federal minimum wage. To uphold anonymity, we refrained from collecting any personal information from the annotators.

10 Acknowledgements

This research was supported in part by grants from the US National Library of Medicine (R01LM012837 & R01LM013833) and the US National Cancer Institute (R01CA249758). In addition, we would like to extend our gratitude to Joseph DiPalma, Yiren Jian, Naofumi Tomita, Weicheng Ma, Alex DeJournett, Wayner Barrios Quiroga, Peiying Hua, Weiyi Wu, Ting Shao, Guang Yang, and Chris Cortese for their valuable feedback on the manuscript and support during the research process.

References

  • Baker et al. (2016) Simon Baker, Ilona Silins, Yufan Guo, Imran Ali, Johan Högberg, Ulla Stenius, and Anna Korhonen. 2016. Automatic semantic classification of scientific literature according to the hallmarks of cancer. Bioinformatics, 32(3):432–440.
  • Chan et al. (2022) Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, and Hamed Firooz. 2022. Unirex: A unified learning framework for language model rationale extraction. In International Conference on Machine Learning, pages 2867–2889. PMLR.
  • Chen et al. (2019) Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. 2019. This looks like that: deep learning for interpretable image recognition. Advances in neural information processing systems, 32.
  • Das et al. (2022) Anubrata Das, Chitrank Gupta, Venelin Kovatchev, Matthew Lease, and Junyi Jessy Li. 2022. Prototex: Explaining model decisions with prototype tensors. arXiv preprint arXiv:2204.05426.
  • Dasgupta et al. (2022) Sanjoy Dasgupta, Nave Frost, and Michal Moshkovitz. 2022. Framework for evaluating faithfulness of local explanations. In International Conference on Machine Learning, pages 4794–4815. PMLR.
  • Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • DeYoung et al. (2019) Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429.
  • Doshi-Velez and Kim (2017) Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
  • Fernandes et al. (2022) Patrick Fernandes, Marcos Treviso, Danish Pruthi, André Martins, and Graham Neubig. 2022. Learning to scaffold: Optimizing model explanations for teaching. Advances in Neural Information Processing Systems, 35:36108–36122.
  • Friedrich et al. (2021) Felix Friedrich, Patrick Schramowski, Christopher Tauchmann, and Kristian Kersting. 2021. Interactively providing explanations for transformer language models. arXiv preprint arXiv:2110.02058.
  • Gao et al. (2019) Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6407–6414.
  • Gao et al. (2021) Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821.
  • Garcia-Olano et al. (2022) Diego Garcia-Olano, Yasumasa Onoe, Joydeep Ghosh, and Byron C Wallace. 2022. Intermediate entity-based sparse interpretable representation learning. arXiv preprint arXiv:2212.01641.
  • Gu et al. (2021) Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23.
  • Hase et al. (2019) Peter Hase, Chaofan Chen, Oscar Li, and Cynthia Rudin. 2019. Interpretable image recognition with hierarchical prototypes. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 7, pages 32–40.
  • Jacovi and Goldberg (2020) Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness? arXiv preprint arXiv:2004.03685.
  • Jacovi and Goldberg (2021) Alon Jacovi and Yoav Goldberg. 2021. Aligning faithful interpretations with their social attribution. Transactions of the Association for Computational Linguistics, 9:294–310.
  • Jiménez-Luna et al. (2020) José Jiménez-Luna, Francesca Grisoni, and Gisbert Schneider. 2020. Drug discovery with explainable artificial intelligence. Nature Machine Intelligence, 2(10):573–584.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
  • Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
  • Lundberg and Lee (2017) Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
  • Maas et al. (2011) Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 142–150.
  • Murdoch et al. (2018) W James Murdoch, Peter J Liu, and Bin Yu. 2018. Beyond word importance: Contextual decomposition to extract interactions from lstms. arXiv preprint arXiv:1801.05453.
  • Pruthi et al. (2022) Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C Lipton, Graham Neubig, and William W Cohen. 2022. Evaluating explanations: How much do explanations from the teacher aid students? Transactions of the Association for Computational Linguistics, 10:359–375.
  • Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084.
  • Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144.
  • Rudin (2019) Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215.
  • Shrikumar et al. (2017) Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. CoRR, abs/1704.02685.
  • Shrikumar et al. (2016) Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. 2016. Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713.
  • Snell et al. (2017) Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Advances in neural information processing systems, 30.
  • Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642.
  • Soğancıoğlu et al. (2017) Gizem Soğancıoğlu, Hakime Öztürk, and Arzucan Özgür. 2017. Biosses: a semantic sentence similarity estimation system for the biomedical domain. Bioinformatics, 33(14):i49–i58.
  • Springenberg et al. (2014) Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. 2014. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
  • Sun et al. (2019) Shengli Sun, Qingfeng Sun, Kevin Zhou, and Tengchao Lv. 2019. Hierarchical attention prototypical networks for few-shot text classification. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 476–485.
  • Sun et al. (2020) Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, and Jiwei Li. 2020. Self-explaining structures improve NLP models. CoRR, abs/2012.01786.
  • Sundararajan et al. (2017) Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. CoRR, abs/1703.01365.
  • Van Aken et al. (2022) Betty Van Aken, Jens-Michalis Papaioannou, Marcel G Naik, Georgios Eleftheriadis, Wolfgang Nejdl, Felix A Gers, and Alexander Löser. 2022. This patient looks like that patient: Prototypical networks for interpretable diagnosis prediction from clinical text. arXiv preprint arXiv:2210.08500.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
  • Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.

Appendix A Training Details

In our experiments, we utilized two NVIDIA Titan RTX and two GeForce RTX 3090 GPUs. We conducted an extensive search for the best hyperparameters through experimentation. Our models were trained for a maximum of 40 epochs. In the classification layer, we initialize the weights connecting each prototype's similarity to the logit of its respective class to 1 and all other weights to -0.5. This technique (Chen et al., 2019) allows for better association of prototypes and faster convergence. The parameters within the prototypes are initialized randomly from [0, 1). We used a learning rate of 3e-6 and a batch size of 128. We employed Adam (Kingma and Ba, 2014) as our optimizer with $\beta$'s of $(0.9, 0.999)$. Additionally, we limited the input size (number of tokens) to 512 during tokenization. We also investigated the optimal value of $N$ and tested proto-lm under $N \in \{100, 200, 400, 800, 1000, 1200, 1400\}$; we discuss this in §4.3. Similarly, we tested $K \in \{1, 2, 5, 10, 100, 250, 500\}$ and report the results in §4.4. Furthermore, we evaluated the effect of $\lambda_0$, $\lambda_1$, and $\lambda_2$ on our framework. The results of one of these tasks (IMDb) are presented in Figure 4 in the paper. The optimal $\lambda$'s for the remaining tasks are reported in Table 3.

Task | $\lambda_0$ | $\lambda_1$ | $\lambda_2$
SST2 | 0.2 | 0.4 | 0.4
QNLI | 0.1 | 0.45 | 0.45
MNLI | 0.1 | 0.45 | 0.45
WNLI | 0.2 | 0.4 | 0.4
RTE | 0.3 | 0.35 | 0.35
IMDb | 0.3 | 0.35 | 0.35
SST5 | 0.2 | 0.4 | 0.4
Table 3: Optimal $\lambda$'s for each task, found through experiments.
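For concreteness, a minimal sketch of the classification-layer weight initialization and optimizer settings described above; the helper function and the placeholder model are illustrative assumptions, not the released training code.

```python
import torch
import torch.nn as nn

def init_classifier_weights(classifier: nn.Linear, proto_labels: torch.Tensor):
    """W_h init: 1 for the weight linking a prototype to its own class logit, -0.5 elsewhere (Chen et al., 2019)."""
    with torch.no_grad():
        classifier.weight.fill_(-0.5)                  # weight shape: (num_classes, N)
        for j, c in enumerate(proto_labels.tolist()):
            classifier.weight[c, j] = 1.0              # prototype j -> logit of its class c

# Optimizer settings from App. A; `model` is a placeholder standing in for the full proto-lm network.
model = nn.Linear(1024, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-6, betas=(0.9, 0.999))
```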

We utilized pretrained transformer models from Hugging Face, including:

  • BERT-base-uncased: https://huggingface.co/bert-base-uncased

  • BERT-large-uncased: https://huggingface.co/bert-large-uncased

  • RoBERTa-base: https://huggingface.co/roberta-base

  • RoBERTa-large: https://huggingface.co/roberta-large

  • BART-large: https://huggingface.co/facebook/bart-large

Appendix B Additional experiments and interpretability examples

We perform additional experiments with proto-lm on two biomedical datasets: the Hallmarks of Cancer corpus (HoC) (Baker et al., 2016) and the Sentence Similarity Estimation System for the Biomedical Domain (BIOSSES) (Soğancıoğlu et al., 2017). HoC is a text classification dataset comprising 1852 abstracts from PubMed publications that have been annotated by medical experts based on a taxonomy. BIOSSES consists of 100 pairs of PubMed sentences, with each pair having been evaluated by five expert annotators. The sentences are assigned a similarity score ranging from 0 (indicating no relation) to 4 (indicating equivalent meanings).

We build proto-lm with two pretrained LLMs for the biomedical domain: PubMedBERT (Gu et al., 2021) and BioGPT. We report our results on the biomedical datasets in Table 4. For the HoC results, we use $N=1000$, $\lambda_0=0.3$, $\lambda_1=0.35$, $\lambda_2=0.35$, and $K=1$. For the BIOSSES results, we use $N=25$, $\lambda_0=0.6$, $\lambda_1=0.25$, $\lambda_2=0.25$, and $K=1$. Also, as BIOSSES is a regression task, we set $C$ to 1, forgo the softmax layer in the classification head, and replace $\mathscr{L}_{ce}$ with $\mathscr{L}_{mse}$. Similar to our results in Table 1, we observe competitive performance from proto-lm, once again showing that proto-lm offers interpretability, but not at the cost of performance. To demonstrate proto-lm's interpretability, we additionally show 3 examples from HoC and the most/least similar prototypes found for those examples in Fig. 11.

Model | HoC (F1) | BIOSSES (MSE)
proto-lm/PubMedBERT | 83.15 ±0.88 / 82.32 | 1.32 ±0.14 / 1.14
proto-lm/BioGPT | 82.78 ±0.43 / 85.12 | 1.16 ±0.10 / 1.08
Table 4: proto-lm’s performance on biomedical datasets HoC and BIOSSES

Appendix C Effect of unequal λ1\lambda_{1} and λ2\lambda_{2}

We conduct experiments on proto-lm, varying only $\lambda_1$ and $\lambda_2$ (the weights of the cohesion and separation losses, respectively) while keeping all other hyperparameters the same. We show the results for these instances of proto-lm on SST5 in Figure 9. We observe that while differing $\lambda_1$ and $\lambda_2$ values still lead to convergence, placing equal emphasis on $\lambda_1$ and $\lambda_2$ leads to convergence at a lower loss. Intuitively, relying only on pulling the correct prototypes together (cohesion) or only on pushing the incorrect prototypes apart (separation) is sufficient to create a meaningful prototypical space that allows for adequate performance on the task; however, our experimental results show that placing equal emphasis on both leads to better performance.

Figure 9: Loss of proto-lm on SST5 with different combinations of $\lambda_1$ and $\lambda_2$. We observe faster convergence at a lower loss when $\lambda_1 = \lambda_2$.

Appendix D Projecting prototypes

In order to help human users understand prototypes in natural language, we identify the closest training sample for each prototype and project the prototype onto that sample. The projection of prototypes onto the nearest sample is a well-studied and established technique (Das et al., 2022; Van Aken et al., 2022; Chen et al., 2019; Hase et al., 2019). We compare the quality of our projections against the projections obtained via the training procedure of ProtoTEx (Das et al., 2022) and the loss function used in Prototypical Networks (Chen et al., 2019) by measuring the average normalized distance between each prototype and its projected sample's embedding. We show the results for two datasets in Fig. 10. We observe that proto-lm is able to train prototypes that are much closer in distance to their projected samples' embeddings than the prototypes obtained via the Prototypical Network loss, and within a margin of error of those of ProtoTEx.

Figure 10: Average normalized distance between prototypes and the output embedding of the prototypes’ projected samples.

Appendix E Prototype faithfulness

In addition to the experiments in §5.1 and §5.2, we provide a theoretical justification for the inherent interpretability of proto-lm in this section. Denote by $d^j_i$ the distance between a prototype $p_j$ and a training sample $i$. Let $\Pi_j = \pi^j_{1:|D|}$ be the probability distribution for prototype $p_j$ over the dataset $D$, where $\pi^j_i = \frac{\eta_j}{d^j_i}$ and $\eta_j$ is a normalization constant. Let $a$ and $b$ represent two training samples; their relative probabilities (of being projected onto by $p_j$) are $\frac{\pi^j_a}{\pi^j_b} = \frac{\eta_j / d^j_a}{\eta_j / d^j_b} = \frac{d^j_b}{d^j_a}$. In addition, $\sum_{k\in D}\pi^j_k = 1 = \sum_{k\in D}\eta_j / d^j_k \rightarrow \eta_j = \frac{1}{\sum_{k\in D} 1/d^j_k}$, which means that each prototype is a soft clustering over examples in $D$. Moreover, since $d^j_a / d^j_b = \frac{1}{\pi^j_a / \pi^j_b}$, if $a$ is $n$ times further away from $p_j$ than $b$, then $b$ is $n$ times more probable in the probability distribution $\Pi_j$. Similarly, at inference time, for an example $k$, if a prototype $i$ is $n$ times further away from $k$ than prototype $j$, then $d^i_k / d^j_k = n \rightarrow \pi^j_k / \pi^i_k = n$, i.e., $j$ will be $n$ times more probable than $i$ as the prediction for $k$.
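As a small numeric check of this soft-clustering view (with made-up distances), the normalized probabilities are inversely proportional to distance, so a sample twice as close to $p_j$ is twice as probable under $\Pi_j$:

```python
# Hypothetical distances d_i^j from one prototype p_j to three training samples.
distances = {"a": 2.0, "b": 1.0, "c": 4.0}
eta = 1.0 / sum(1.0 / d for d in distances.values())     # normalization constant eta_j
pi = {i: eta / d for i, d in distances.items()}          # pi_i^j = eta_j / d_i^j

assert abs(sum(pi.values()) - 1.0) < 1e-9                # Pi_j is a valid distribution over samples
assert abs(pi["b"] / pi["a"] - 2.0) < 1e-9               # a is 2x farther than b, so b is 2x more probable
```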

Appendix F Interpretability examples and sample Mturk questionnaires

Figure 11: Example inputs from HoC, their predictions and prototypes identified by proto-lm
Figure 12: Example Amazon Mechanical Turk questionnaire used to evaluate simulatability with prototypical explanations (PE).
Figure 13: Example Amazon Mechanical Turk questionnaire we used to evaluate accuracy with random explanations.
Figure 14: Example Amazon Mechanical Turk questionnaires we used to evaluate the simulatability of LIME/Integrated Gradients/Guided Backprop.