Deeply Explain CNN via Hierarchical Decomposition
Abstract
In computer vision, some attribution methods for explaining CNNs attempt to study how the intermediate features affect the network prediction. However, they usually ignore the feature hierarchies among the intermediate features. This paper introduces a hierarchical decomposition framework to explain CNN’s decision-making process in a top-down manner. Specifically, we propose a gradient-based activation propagation (gAP) module that can decompose any intermediate CNN decision to its lower layers and find the supporting features. Then we utilize the gAP module to iteratively decompose the network decision to the supporting evidence from different CNN layers. The proposed framework can generate a deep hierarchy of strongly associated supporting evidence for the network decision, which provides insight into the decision-making process. Moreover, gAP is effort-free for understanding CNN-based models without network architecture modification and extra training process. Experiments show the effectiveness of the proposed method. The code and interactive demo website will be made publicly available.
Index Terms:
Explaining CNNs, hierarchical decomposition.
1 Introduction
Deep convolutional neural networks (CNNs) have made significant improvements on various computer vision tasks, such as image recognition [1, 2, 3], object detection [4, 5, 6], semantic segmentation [7, 8, 9, 10], traffic environment analysis [11, 12], and medical image understanding [13, 14]. Despite the high performance, CNNs are usually used as black boxes, as their internal decision process is unclear. Moreover, plenty of recent research [15, 16, 17] has pointed out that even successful CNN models can be fooled by adversarial examples whose changes cannot be noticed by human eyes. Given these concerns, it is difficult for humans to trust well-performing yet opaque CNN models. Therefore, the interpretability of CNNs is as crucial as their performance, especially in critical applications.
A fully interpretable convolutional neural network is a long-standing holy grail for deep learning researchers. To this end, researchers have proposed a wide range of techniques. Feature attribution (or saliency) methods [18, 19, 20] provide a powerful tool for interpretability. They attribute an output prediction of a CNN to the input image, where the generated saliency map tells us which pixels are important to the prediction. Such ability helps humans understand how the input affects the prediction. Another set of feature attribution methods [21, 22] measures the importance of intermediate features towards a prediction; they further select important features and study their impact on the prediction. Apart from feature importance, the relationships among intermediate features [23] are also important for understanding predictions, but they have received little attention.
CNNs have demonstrated a strong ability to gradually abstract image contents and generate features at different semantic levels, e.g., blobs/edges, textures, and object parts [24]. While discovering important features can provide a rich set of evidence for the output prediction, isolated evidence is less convincing and informative than an evidence chain [25] or an evidence pyramid [26]. According to the feature integration theory developed by Treisman et al. [27], the human brain first extracts basic features and then utilizes attention to combine individual features to perceive the object. Ideally, we would expect a hierarchical evidence tree as demonstrated in Fig. 1, which attributes a CNN decision to multiple key features, each of which can be recursively attributed to more basic features. By associating intermediate features like ‘head’, ‘face’, ‘eye’, ‘nose’, and ‘edge’ in this example, a group of strongly associated pieces of evidence corresponding to the network’s inner state emerges, mirroring how human perception composes a decision from simpler features.
There are two major challenges for existing feature attribution methods to achieve the hierarchical decomposition. Firstly, directly decomposing millions of feature responses in all channels and all spatial locations is both computationally infeasible and cognitively overloading for humans. Meanwhile, feature attribution methods such as [21, 22] are quite time-consuming because they need to repeat the backpropagation process many times. Secondly, some attribution methods [28, 29] generate an attention map for the whole layer, rather than a group of attention maps for each feature channel. The channel-wise attention maps are crucial for the iterative decomposition process as they can indicate the most important neuron in a feature channel to be decomposed. To alleviate these issues, we propose an efficient gradient-based Activation Propagation (gAP) module, which decomposes a feature response at any CNN location to its lower layer. As the gAP module generates an activation map for each feature channel, we can easily select a few of the most activated feature channels as crucial evidence, obtaining human-scale explanations. For each of those selected feature channels, the CNN feature at the most activated spatial position can be iteratively decomposed. By avoiding decomposing features at too many spatial locations, we can further reduce the number of potential visualizations to a human scale.
The proposed decomposition framework can effectively generate hierarchical explanations (see Fig. 1), which builds relationships among crucial intermediate features. We have conducted extensive experiments on several aspects, including a sanity check of the gAP module and understanding the network decisions. Experiments show the effectiveness of our framework to explain network decisions. In summary, we make two major contributions:
• We propose an efficient gradient-based Activation Propagation (gAP) module, which decomposes the network decision and intermediate features to find their key supporting evidence from previous layers.
• We propose a hierarchical decomposition framework, which builds relationships among important intermediate features, enabling hierarchical explanations with human-scale supporting evidence.
2 Related Work
The interpretability of CNNs has been actively studied, with major progress in several areas, including feature attribution, feature visualization, distilling knowledge to explainable models, and intrinsic interpretable models.


2.1 Feature Attribution
Feature attribution methods typically generate a saliency map to locate the input locations important to the output. We classify them into three categories: backpropagation-based methods, perturbation-based methods, and activation-based methods.
Backpropagation-based methods. In early work, Sung [32] ranks the importance of inputs to backpropagation networks using tools such as sensitivity analysis. Baehrens et al. [33] identify the feature importance for a particular instance by computing the gradients of the decision function. Simonyan et al. [20] backpropagate the gradients of the output prediction w.r.t. the input image and generate a saliency map that indicates the importance of each pixel in the image. Guided Backpropagation [30] and Deconvnet [24] use different backpropagation rules through ReLU, both zeroing out negative gradients. Sundararajan et al. [18] consider the saturation and thresholding problem and compute the saliency map by accumulating the gradients along a path from a base image to the input image. Another set of methods, such as LRP [34], DeepTaylor [35], RectGrad [36], DeepLift [19], FullGrad [37], PatternAttribution [38], and Excitation Backprop [39], utilize different top-down relevance propagation rules. Yang et al. [40] attempt to learn the propagation rule automatically for attribution map generation. SmoothGrad [41] sharpens gradient-based saliency maps to reduce visual noise. Zintgraf et al. [42] identify not only the important regions supporting the network decision but also the regions against it. Moreover, some methods [21, 22] measure the importance of hidden units to the prediction based on backpropagation and can identify the most important features in different layers of deep networks. Kim et al. [43] study high-level concepts instead of low-level features for interpreting the internal state of the neural network, utilizing directional derivatives to quantify the importance of high-level concepts to a classification result.
Perturbation-based methods. These methods perturb the input to observe the output changes. Zeiler et al. [24] occlude the input image by sliding a gray square and use the change of the output as the importance. Petsiuk et al. [44] randomly sample masked regions. Ribeiro et al. [45] utilize super-pixels to select occluded image regions and learn a local linear model to compute the contribution of each super-pixel. Besides, recent methods [46, 47, 48] learn a perturbation map whose application to the input image maximally affects the prediction. Fong et al. [47] also apply the input attribution method to study the salient channels of deep networks.
Activation-based methods. These methods [28, 29, 49] generate a coarse class activation map by linearly combining the feature channels of a convolutional layer. The class activation map is upsampled to the size of the input image and provides image-level evidence for the network prediction, as demonstrated in Fig. 2 (b). Zhou et al. [28] propose Class Activation Mapping (CAM), which requires a specific network with a global average pooling layer to generate class activation maps. Later, Grad-CAM [29] and Grad-CAM++ [49] generalize CAM to other tasks by utilizing task-specific gradients as weights. Unlike Grad-CAM, Score-CAM [50] utilizes the forward-pass score on the target class to obtain the weight for each activation. Recently, Zhou et al. [31] attempt to decompose the network decision into several semantic components and study each component’s contribution. As shown in Fig. 2 (c), the class activation map is decomposed into several semantic components.
The aforementioned attribution methods mostly focus on generating saliency/activation maps to study how the input affects the output prediction. Although some attribution methods can measure the importance of intermediate features to the output prediction, they usually neglect to study the relationships among different intermediate features. As pointed out by Olah et al. [23], the relationships among different intermediate features are also important to interpret a prediction. We decompose not only the network decision but also the intermediate features to find their supporting evidence from previous layers, explaining how these associated intermediate features affect each other. While the LRP [34] method propagates feature importance to intermediate features, the importance of different channels is coupled in its back-propagation process, so it yields a single explanation of the entire network behavior rather than hierarchical explanations.
2.2 Feature Visualization
Visualizing the CNN features of the intermediate layers can provide insight into what these layers learn. For the first layer of the CNN, we can directly project its three-channel weights into the image space. To visualize the features from higher layers, researchers have proposed many alternative approaches. Among them, Erhan et al. [51] and Simonyan et al. [20] utilize the gradient ascent algorithm to find the optimal stimuli in the image space that maximizes the neuron activations. Other methods [24, 30, 52] identify the image patches from the dataset that maximize the neuron activation of the CNN layers, as shown in Fig. 2 (a). Guided Backpropagation [30] and Deconvnet [24] also utilize the top-down gradients to discover the patterns that the intermediate layers learn. Using the natural image prior, feature inversion methods [53, 54, 55, 56, 57] learn an image to reconstruct the neuron activation. Furthermore, the recent methods [58, 59, 60] attempt to detect the concepts learned by intermediate CNN layers. The above feature visualization methods explore what the intermediate features detect, but they do not answer how the network assembles individual features to make a prediction.
2.3 Distill Knowledge to Explainable Models
Recently, another research line has attempted to transfer the powerful ability of CNNs to explainable models, such as decision trees or linear models, to approximate the behavior of the original model. Chen et al. [61] distill the knowledge into an explainable additive model. Ribeiro et al. [45] utilize a local linear model to approximate the original model, studying how the input affects any classifier’s decisions. Frosst et al. [62] and Liu et al. [63] distill the learned knowledge of a CNN into a decision tree. These methods only build a bridge between the network decision and the input; they cannot help the user understand how the internal features of CNNs affect the network decision and each other. Our hierarchical decomposition is also an approximation to the original model. Unlike the above methods, our hierarchical decomposition not only highlights the important features for the network decision but also builds relationships among the feature channels from different layers. From our method, we can obtain the states of the internal features and how they affect each other and the network decision.
2.4 Intrinsic Interpretable Models
Apart from post-hoc interpretability analysis of a trained CNN, some researchers have attempted to explore inherently interpretable models. Chen et al. [64] propose a deep network architecture called the prototypical part network. The network has a transparent reasoning process that first computes similarity scores between image patches and learned prototypes and then makes predictions based on a weighted sum of the similarity scores. Concept bottleneck models [65, 66, 67] are also inherently interpretable. Unlike those post-hoc methods [58, 59] that utilize human-specific concepts to generate explanations, they directly predict a set of human-specific concepts at training time and then use these concepts to make predictions, so the reasoning process is interpretable. Some recent intrinsic interpretable models [65, 64] first utilize VGG [1] or ResNet [2] to extract high-level features and then perform the reasoning process on these features. Our method is complementary to these CNN-based intrinsic interpretable models because one can use the hierarchical decomposition to provide more hierarchical evidence from the feature extractor if needed.
3 Methodology
3.1 Gradient-based Activation Propagation
We begin by defining the notation for the CNN, as illustrated in Fig. 3. In the $l^{th}$ CNN layer, the features $A^l$, partial gradients $G^l$, and corresponding neuron activations $\hat{A}^l$ are 3D tensors with the same size, i.e., $C_l \times H_l \times W_l$, where $C_l$ is the number of channels and $H_l \times W_l$ is the spatial size of the CNN layer $l$. To find supporting evidence for the final CNN decision or any intermediate feature response, we propose a gradient-based activation propagation (gAP) method. Using the gAP module, we can understand a decision of interest at a CNN layer by localizing the most related evidence in its previous layer.
As shown in Fig. 3, we decompose a CNN feature $A^l_{c,x,y}$ (i.e., a decision of interest) at the convolutional layer $l$, channel $c$, and spatial position $(x,y)$, to find the supporting evidence in its previous convolutional layer $l-1$. In this work, we are interested in understanding the strong feature response that has the largest contribution to the decision within its feature channel. In typical CNNs, a feature at layer $l$ is computed as a linear combination of features from its previous layer followed by a ReLU. For the strong feature $A^l_{c,x,y}$, we have
$A^l_{c,x,y} = \mathrm{ReLU}\big(\textstyle\sum_{k=1}^{C_{l-1}} w_k\, A^{l-1}_{k,x,y} + b\big),$   (1)
where $w_k$ is the linear weight for combining feature channel $k$ of layer $l-1$. To obtain the weight, we first use backpropagation to compute the partial gradient map $G^{l-1}_k$ of the feature $A^l_{c,x,y}$ w.r.t. the feature map $A^{l-1}_k$ by
$G^{l-1}_k = \dfrac{\partial A^l_{c,x,y}}{\partial A^{l-1}_k}.$   (2)
The gradient map $G^{l-1}_k$ captures the ‘importance’ of the feature map $A^{l-1}_k$ for the decision $A^l_{c,x,y}$.
We employ the gradient map to generate an activation map
$\hat{A}^{l-1}_k = G^{l-1}_k \odot A^{l-1}_k,$   (3)
where $\odot$ denotes element-wise multiplication. The activation map $\hat{A}^{l-1}_k$ indicates the contribution of each feature in $A^{l-1}_k$ to the decision $A^l_{c,x,y}$. Based on its corresponding activation map, each channel’s contribution to the decision can be computed by
$s_k = \dfrac{1}{Z}\textstyle\sum_{x,y} \hat{A}^{l-1}_{k,x,y},$   (4)
where $Z$ denotes the number of spatial positions in the activation map $\hat{A}^{l-1}_k$. We can also identify the feature in feature channel $k$ that contributes the most to the decision, in which
$(x^*_k, y^*_k) = \arg\max_{(x,y)} \hat{A}^{l-1}_{k,x,y}.$   (5)
Thus, for each decision, we can find the most important feature channel according to the contribution computed by Eqn. (4). In the most important channel, we can also identify the feature that contributes most to the decision according to Eqn. (5). In the top row of Fig. 4, we show the three most important activation maps in layer conv4_3 for the decision being decomposed from the higher layer. These activation maps provide spatial channel responses to the decision, benefiting human understanding. Using Guided Backpropagation [30], we visualize the most contributing feature by generating sharp visualizations, which highlight the associated input. An example is shown in the bottom row of Fig. 4.
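To make the computation of Eqns. (2)-(5) concrete, the following is a minimal PyTorch sketch of the gAP module (for illustration only; the torchvision layer indices assumed for the conv4_3/conv5_3 outputs and the hook-based implementation are our own choices, and the released code may differ):

```python
import torch
import torchvision

# Sketch of the gAP module: decompose one strong feature response at a higher
# layer into per-channel activation maps of a lower layer (Eqns. (2)-(5)).
model = torchvision.models.vgg16(weights=None).eval()  # random weights for a
acts = {}                                               # self-contained sketch

def save_act(name):
    def hook(module, inp, out):
        out.retain_grad()      # keep gradients of this intermediate feature map
        acts[name] = out
    return hook

# Assumed indices of the ReLU outputs after conv4_3 and conv5_3 in torchvision's VGG-16.
model.features[22].register_forward_hook(save_act("conv4_3"))
model.features[29].register_forward_hook(save_act("conv5_3"))

def gap_decompose(image, upper="conv5_3", lower="conv4_3", channel=0):
    """Decompose the peak response of `channel` at `upper` to the channels of `lower`."""
    model.zero_grad()
    model(image)
    decision = acts[upper][0, channel].max()         # strongest response in the channel
    decision.backward()                              # Eqn. (2): partial gradient maps
    A_low, G_low = acts[lower][0], acts[lower].grad[0]
    act_maps = G_low * A_low                         # Eqn. (3): per-channel activation maps
    contrib = act_maps.mean(dim=(1, 2))              # Eqn. (4): channel contributions
    peaks = act_maps.flatten(1).argmax(dim=1)        # Eqn. (5): peak position per channel
    return act_maps, contrib, peaks

image = torch.randn(1, 3, 224, 224)                  # stand-in input
maps, contrib, peaks = gap_decompose(image)
print(torch.topk(contrib, k=3).indices.tolist())     # most important lower-layer channels
```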
Discussion. Our gAP module is inspired by CAM [28] and Grad-CAM [29], which explain CNN decisions by class activation localization. To explain the relation and difference to our gAP module, we first revisit CAM and Grad-CAM. It has been proved that Grad-CAM is a strict generalization of CAM [29]. Without loss of generality, we consider the same network discussed in [28]. For an image classification CNN, the features $A$ of the last convolutional layer are spatially pooled using the global average pooling layer to obtain a feature vector. The network performs a linear combination of the feature vector by feeding it into a fully connected layer before the softmax. Let $N$ denote the number of classes. The classification score before softmax for class $c$ is
$y^c = \textstyle\sum_{k} w^c_k \dfrac{1}{Z}\textstyle\sum_{i,j} A_{k,i,j},$   (6)
where $w^c_k$ is the weight connecting the $k^{th}$ feature map with the $c^{th}$ class. The contribution of a feature $A_{k,i,j}$ to $y^c$ is $\frac{1}{Z} w^c_k A_{k,i,j}$. CAM generates a class activation map by summing over all feature maps,
$M^c = \textstyle\sum_{k} w^c_k A_k,$   (7)
where each value in $M^c$ indicates the contribution of the corresponding spatial location to $y^c$.
For the linear function, the importance weight is also equal to the gradient. Thus, we can also obtain the weight $w^c_k$ by computing the back-propagated gradients,
$w^c_k = \textstyle\sum_{i,j} \dfrac{\partial y^c}{\partial A_{k,i,j}}.$   (8)
The detailed derivation of Eqn. (8) is given in [29]. Eqn. (8) is also the way that Grad-CAM computes the weight $w^c_k$. A small difference is that Grad-CAM multiplies by a proportionality constant, i.e.,
$\alpha^c_k = \dfrac{1}{Z}\textstyle\sum_{i,j} \dfrac{\partial y^c}{\partial A_{k,i,j}},$   (9)
where the proportionality constant $\frac{1}{Z}$ is normalized out during map generation.
Considering the classification scores as CNN features with $N$ channels and spatial size $1\times1$, i.e., $y \in \mathbb{R}^{N\times1\times1}$, we can plug Eqn. (2) into Eqn. (9) and get
$\alpha^c_k = \dfrac{1}{Z}\textstyle\sum_{i,j} G_{k,i,j},$   (10)
where $G_k = \partial y^c / \partial A_k$ is the gAP gradient map. Due to the global average pooling layer, the gradient of each element in $G_k$ is the same, i.e., $G_{k,i,j} = w^c_k / Z$. The class activation map can then be written as
$M^c_{\text{Grad-CAM}} = \mathrm{ReLU}\big(\textstyle\sum_k \alpha^c_k A_k\big) = \mathrm{ReLU}\big(\textstyle\sum_k G_k \odot A_k\big) = \mathrm{ReLU}\big(\textstyle\sum_k \hat{A}_k\big).$   (11)
Eqn. (11) suggests that the activation map of Grad-CAM can be generated by simply adding the activation maps from our gAP.
The differences between gAP and Grad-CAM/CAM are:
• Grad-CAM/CAM combine all activation maps to generate a single class activation map $M^c$, which highlights important regions supporting the prediction. Our gAP method explains a decision of interest by generating a group of activation maps $\{\hat{A}_k\}$. Each activation map corresponds to a feature channel, which is crucial for our iterative decomposition process.
• Grad-CAM/CAM generate class activation maps from the last convolutional layer to explain the prediction. Our gAP generalizes this idea and iteratively decomposes a decision at any CNN layer to its lower layer.
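As a small numerical check of Eqn. (11) (illustrative only; the tensors below are random stand-ins for real last-layer features and class weights under the global-average-pooling setting of Eqn. (6)), summing the per-channel gAP activation maps and applying a ReLU reproduces the Grad-CAM map computed with pooled-gradient weights:

```python
import torch

A = torch.rand(512, 14, 14, requires_grad=True)   # stand-in last-conv features A_k
w = torch.randn(512)                               # stand-in class weights w_k^c
y_c = (w * A.mean(dim=(1, 2))).sum()               # GAP + linear layer, Eqn. (6)
G = torch.autograd.grad(y_c, A)[0]                 # gAP gradient maps, Eqn. (2)

gap_map = torch.relu((G * A).sum(dim=0))           # ReLU of summed gAP maps, Eqn. (11)
alpha = G.mean(dim=(1, 2))                         # Grad-CAM weights, Eqn. (9)
gradcam = torch.relu((alpha[:, None, None] * A).sum(dim=0))

print(torch.allclose(gap_map, gradcam, atol=1e-5))  # True in this GAP setting
```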
While the above derivations apply to adjacent layers, we empirically find that satisfactory decomposition results can also be obtained when applying the gAP module between two layers from different stages of CNN (see Sec. 4.1). In the following, we will describe how we build hierarchical explanations for the network decisions.
3.2 Hierarchical Decomposition
Fig. 5 demonstrates an example of our hierarchical decomposition process. First, we decompose the network decision to the last convolutional layer and find the top few most crucial supporting features. Then, we decompose each of the supporting features to their previous layer and iteratively repeat the decomposition process until the bottom layer. As mentioned in Sec. 1, the key challenge is that naively building the hierarchical decomposition generates too many visualizations, which is a cognitive burden for humans. Even if we only decompose a single maximally contributing feature in each channel (see also Eqn. (5)), directly decomposing all channels of VGG-16 would still generate far too many visualizations.
To obtain human-scale visualizations, we propose two strategies to reduce the number of visualizations. Firstly, we only decompose the top few most important features at each layer. Experiments (see Sec. 4.1) have verified that a small subset of feature channels in a layer accounts for the majority of the contributions to a decision. Thus, we select the top few most important channels. Secondly, we simplify the top-down decision decomposition process by utilizing only the last convolutional layer of each stage. Current popular CNNs [1, 2] usually reduce the spatial size of feature maps after each stage, where a stage is composed of a set of convolution layers with the same output resolution. Each stage learns different patterns, such as blobs/edges, textures, and object parts [30, 24]. Experiments verify that when using the gAP module between two layers from two consecutive stages, we can obtain visually meaningful decomposition results (see Fig. 5). With these two strategies, we largely reduce the number of visualizations and obtain human-scale explanations.
An example of the VGG-16 classification network is shown in Fig. 5. We select conv1_2, conv2_2, conv3_3, conv4_3, and conv5_3 and index these layers as $l$, where $l \in \{1, \dots, 5\}$. The network output before softmax can be considered as the $6^{th}$ CNN layer, with features $A^6$. The decomposition process starts from the CNN decision $A^6_c$, where $c$ corresponds to the ‘person’ class. Using gAP, we first decompose the CNN decision to layer $l=5$. The decomposition generates a set of activation maps $\{\hat{A}^5_k\}$ at layer $l=5$. We use Eqn. (4) to select the top $K$ (e.g., $K=3$) most important activation maps. We continue to decompose the decisions from these selected channels and find the top $K$ most important activation maps at layer $l=4$ for each of them, respectively. However, directly decomposing a whole feature map is not easy, because not all of the features in a feature map contribute to the decision (see the activation maps in the top row of Fig. 4). We therefore select the most representative feature, i.e., the one that contributes most to a decision, and decompose this feature: we utilize Eqn. (5) to find the feature corresponding to the maximum activation and then decompose it to layer $l=4$ using gAP. This hierarchical decomposition process runs recursively until we decompose the CNN decision to the lowest layer.
The number of visualizations is controlled by a flexible parameter $K$, which determines how many top-response feature channels are selected during each decomposition; a sketch of the overall procedure is given below. To make human cognition easier, $K$ is set to 3 in Fig. 5. Moreover, we make the hierarchical decomposition interactive, so that users can choose the features to be decomposed and easily access the information they need; a video of the interactive demo is provided in the supplementary materials. In Fig. 5, we can see that the features detected in high-level layers can be decomposed into different parts detected in low-level layers. The hierarchical decomposition process tracks important features and recursively explains the evidence using evidence from lower layers. For instance, the classification result of ‘person’ has been decomposed into ‘face’ and ‘hand’ evidence. The ‘face’ evidence is then decomposed into ‘eye’, ‘nose’, and ‘lower jaw’. This process continues until we reach the lowest layer, which usually detects edge and blob features.
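The top-down procedure can be summarized by the short sketch below (illustrative Python; gap_step is a stand-in that returns random maps so the recursion itself is runnable, whereas in practice it would perform the gAP computation of Sec. 3.1; layer indices 1-5 stand for conv1_2-conv5_3 and 6 for the pre-softmax output):

```python
import torch

# Assumed channel counts for the selected VGG-16 layers and the output layer.
CHANNELS = {1: 64, 2: 128, 3: 256, 4: 512, 5: 512, 6: 1000}

def gap_step(decision, layer):
    """Stand-in for gAP: per-channel activation maps, contributions (Eqn. (4)),
    and peak positions (Eqn. (5)) at layer `layer - 1`."""
    act_maps = torch.rand(CHANNELS[layer - 1], 14, 14)   # placeholder maps
    return act_maps, act_maps.mean(dim=(1, 2)), act_maps.flatten(1).argmax(dim=1)

def decompose(decision, layer, K=3):
    """Recursively keep the top-K supporting channels per layer (K=3 in Fig. 5)."""
    node = {"layer": layer, "decision": decision, "children": []}
    if layer == 1:                                       # reached conv1_2: stop
        return node
    act_maps, contrib, peaks = gap_step(decision, layer)
    for k in contrib.topk(K).indices.tolist():           # top-K channels only
        child = decompose(("channel", k, peaks[k].item()), layer - 1, K)
        child["activation_map"] = act_maps[k]
        node["children"].append(child)
    return node

tree = decompose(("class", "person"), layer=6)            # hierarchy of evidence
```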
Difference to layer-wise attribution methods. Some attribution methods, such as LRP [34], hierarchically propagate importance to the input in a layer-wise manner. They generate a single saliency map that indicates the importance of each pixel in the input. Unlike them, our method decouples the importance propagation chain and produces a rich hierarchy of activation maps and corresponding visualizations. To explain a person image, our method finds a group of evidence, e.g., activation maps for ‘face’, ‘hand’, etc. Each piece of evidence is associated with its own supporting evidence, e.g., ‘face’ has supporting activation maps for ‘eye’, ‘nose’, etc. Our method thus provides informative details about the internal features and their relations.
4 Experiments
In this section, we first conduct experiments to verify the correctness and efficiency of the decision decomposition. Then, we use the hierarchical decomposition process to analyze network characteristics and explain network decisions. We conduct experiments on two popular datasets, ImageNet [68] and PASCAL VOC [69]. On the PASCAL VOC dataset, the augmented training set containing 10582 training images is used to fine-tune different classification networks. All experiments are run on a single RTX 2080Ti GPU.
4.1 Sanity Check for gAP
The effectiveness of gAP. We have shown that the gradient-based Activation Propagation (gAP) module helps to decompose the network decision hierarchically for CNN-based models. During the decomposition process, what matters most is the accuracy of the channel contributions calculated by the gAP module. Thus, we first examine the accuracy of the channel contributions to the decision of interest. Following [21, 70, 60], we take the decision score drop, when removing one feature channel at a time, as the ground truth of the channel’s contribution. Specifically, given an input image, let $s$ be a decision score at the $l^{th}$ layer, and let $s_{\setminus k}$ denote the decision score when the $k^{th}$ feature channel in the $(l-1)^{th}$ layer is set to its average activation. The score drop $s - s_{\setminus k}$ denotes this channel’s ground-truth contribution to the decision.
TABLE I: PCC between the channel contributions estimated by different strategies and the ground-truth contributions, for decompositions between consecutive selected layers of VGG-16 (T: the final decision; S1-S5: the selected layers conv1_2-conv5_3).
ImageNet | T→S5 | S5→S4 | S4→S3 | S3→S2 | S2→S1
AA | 0.985 | 0.959 | 0.933 | 0.898 | 0.895
MA | 0.897 | 0.912 | 0.894 | 0.864 | 0.890
AG | 0.623 | 0.421 | 0.497 | 0.545 | 0.472
MG | 0.454 | 0.456 | 0.567 | 0.594 | 0.606
VOC | T→S5 | S5→S4 | S4→S3 | S3→S2 | S2→S1
AA | 0.987 | 0.961 | 0.932 | 0.899 | 0.893
MA | 0.917 | 0.913 | 0.892 | 0.856 | 0.897
AG | 0.702 | 0.492 | 0.525 | 0.564 | 0.480
MG | 0.575 | 0.525 | 0.536 | 0.583 | 0.669
The Pearson Correlation Coefficient (PCC) metric [71] is utilized to measure the linear correlation between the ground-truth contributions and the contributions estimated by Eqn. (4). A PCC value of 1 denotes a total positive linear correlation between the two variables (0 denotes no linear correlation, and -1 denotes a total negative linear correlation). The PCC metric is computed by
$\mathrm{PCC}(X, Y) = \dfrac{\mathbb{E}\big[(X-\mu_X)(Y-\mu_Y)\big]}{\sigma_X \sigma_Y},$   (12)
where $\mu$ and $\sigma$ denote the mean and standard deviation, respectively.
As shown in Tab. I, we study several strategies for calculating the contribution of a feature channel to the decision of interest. It can be seen that the contribution computed by averaging activations (AA, i.e., Eqn. (4)) obtains the highest PCC value with the ground truth. For all stages of VGG-16, there are strong linear correlations between the computed contributions and the ground truth. This high correlation verifies the effectiveness of the gAP module.
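This sanity check can be sketched as follows (an illustration with a toy linear decision rather than a real CNN; for this toy case, gAP's Eqn. (4) and the ablation-based ground truth correlate almost perfectly):

```python
import torch

def pcc(x, y):
    """Pearson correlation coefficient, Eqn. (12)."""
    x, y = x - x.mean(), y - y.mean()
    return (x * y).sum() / (x.norm() * y.norm() + 1e-12)

def ablation_contributions(features, score_fn):
    """Ground truth: score drop when each channel is set to its spatial average."""
    base = score_fn(features)
    drops = []
    for k in range(features.shape[0]):
        ablated = features.clone()
        ablated[k] = features[k].mean()           # replace channel k by its average
        drops.append(base - score_fn(ablated))
    return torch.stack(drops)

# Toy decision: a fixed linear read-out of zero-mean stand-in features.
C, H, W = 512, 14, 14
weights = torch.randn(C, H, W)
score_fn = lambda f: (weights * f).sum()

features = torch.randn(C, H, W, requires_grad=True)
grads = torch.autograd.grad(score_fn(features), features)[0]
gap_contrib = (grads * features).mean(dim=(1, 2))  # Eqn. (4)

gt_contrib = ablation_contributions(features.detach(), score_fn)
print(float(pcc(gap_contrib, gt_contrib)))         # close to 1 for this linear toy
```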
Taking computational efficiency into account, measuring channel contributions by calculating the score drop when removing feature channels one by one in a layer [21, 70, 60] is rather time-consuming. In comparison, gAP needs only one backpropagation pass to calculate the channel contributions to a decision. On a VGG-16 backbone, calculating the ground-truth channel contributions of an image takes about 10s, while the gAP module only takes about 50ms, nearly 200x faster. With this efficiency advantage, our hierarchical decomposition process can immediately yield detailed explanations of a network decision.
The distribution of contributions. As shown in the first five curves of Fig. 6, the distribution of the channel contributions in a CNN layer is long-tailed. A small number of feature channels play the most important role for a decision of interest. In deeper layers of the networks, the proportion of important feature channels decreases; in high-level layers, the feature channels are usually more discriminative. This observation is in line with the accepted notion [24]. Besides, we also check how many feature channels at a CNN layer work together to determine a decision in a higher CNN layer. We call the channels with non-zero contribution (i.e., $s_k > 0$) the activated channels and count the number of activated channels in the decision decomposition process. As shown in the last chart of Fig. 6, when decomposing a decision from layer conv2_2 to layer conv1_2, nearly all channels in layer conv1_2 are activated. However, for the decomposition from the final decision to layer conv5_3, the number of activated channels is much smaller than the total number of channels in layer conv5_3.
Channel-effect overlaps. Using the gAP module, we observe that the activation maps of some channels decomposed from the same decision often have strong activations at similar spatial locations. Such spatial locations usually denote an underlying concept [58, 59] contributing to the decision. When presenting visualizations of the hierarchical decomposition, we merge these duplicate channels with similar effects for better human understanding. Specifically, when decomposing a decision of interest into the lower layer, we obtain activation maps corresponding to each channel in this layer. We first threshold the activation maps into binary masks and then compute the Intersection-over-Union (IoU) between them. Then we apply the non-maximum suppression algorithm [73] to suppress activation maps with an IoU score larger than 0.9, where the activation maps are sorted by the contribution scores from Eqn. (4). As shown in Fig. 6, we present how many activated channels have large overlaps with each other. In low-level layers, the number of activated channels with large overlaps is very small, but in high-level layers, there are many activated channels with similar effects on a decision.
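A possible implementation of this merging step is sketched below (our own illustration; the 0.5 binarization threshold is an assumption, while the 0.9 IoU suppression threshold follows the text):

```python
import torch

def merge_duplicate_channels(act_maps, contrib, thresh=0.5, iou_thresh=0.9):
    """Greedy NMS over channel activation maps, sorted by Eqn. (4) contributions."""
    masks = act_maps > (thresh * act_maps.amax(dim=(1, 2), keepdim=True))
    order = contrib.argsort(descending=True)          # highest contribution first
    keep = []
    for idx in order.tolist():
        duplicate = False
        for kept in keep:
            inter = (masks[idx] & masks[kept]).sum().float()
            union = (masks[idx] | masks[kept]).sum().float() + 1e-6
            if inter / union > iou_thresh:            # same spatial effect: suppress
                duplicate = True
                break
        if not duplicate:
            keep.append(idx)
    return keep                                        # representative channels

act_maps = torch.rand(512, 14, 14)                     # stand-in activation maps
contrib = act_maps.mean(dim=(1, 2))
print(len(merge_duplicate_channels(act_maps, contrib)))
```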
Sanity checks for gAP. Adebayo et al. [74] propose the model parameter and data randomization tests as sanity checks for visual attribution methods. These two tests check whether an attribution method is sensitive to the model parameters and to the labeling of the data. An attribution method insensitive to the model parameters and data labels is inadequate for debugging the model and for explaining mechanisms that depend on the relationship between instances and labels. To generate saliency maps from our gAP, we hierarchically decompose the decision down to the data layer and sum all the gradients from each decomposition. We perform the model parameter randomization test on the pretrained ResNet-18 model [2] and randomly reinitialize the model parameters from the top layer to the bottom layer in a cascading manner. We utilize the Spearman rank correlation metric [72] to compute the difference between the attribution maps from the original model and the randomly initialized model. Besides, we perform the data randomization test by comparing the saliency maps from CNNs trained with true labels and permuted labels, respectively.
In Fig. 7(a), the low Spearman correlation indicates that the attribution maps from the original model and the randomly initialized model differ substantially, which demonstrates that gAP is sensitive to the model parameters. In Fig. 7(b), the low Spearman correlation likewise indicates that gAP is sensitive to the labeling of the data. These results verify that our method can be used for debugging models. Visual comparisons are shown in the supplementary materials.
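The Spearman comparison can be computed as in the following sketch (illustrative; real saliency maps from the original and randomized models would replace the random stand-ins, and ties are ignored for simplicity):

```python
import torch

def spearman(a, b):
    """Spearman rank correlation between two flattened saliency maps."""
    a, b = a.flatten(), b.flatten()
    ra = a.argsort().argsort().float()                 # ranks of map a
    rb = b.argsort().argsort().float()                 # ranks of map b
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return (ra * rb).sum() / (ra.norm() * rb.norm() + 1e-12)

saliency_original = torch.rand(224, 224)               # stand-in saliency maps
saliency_randomized = torch.rand(224, 224)
print(float(spearman(saliency_original, saliency_randomized)))  # near 0: sensitive
```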
Is the top-k decomposition a good approximation to the original model? We test the classification accuracy of the sparser surrogate model generated by gAP. Moreover, we measure the match rate by comparing the predictions of the sparser surrogate model and the original model. Specifically, we decompose from the decision of the predicted category to the bottom layers and select the top-k most important features in each decomposition to make predictions. As shown in Fig. 8, with top-16 decomposition, the sparser surrogate model has a classification accuracy similar to that of the original model. According to the match rate, with top-16 decomposition, the predictions of the sparser surrogate model and the original model are consistent on almost all samples. The sparser surrogate model, which selects only a small number of feature channels, is thus a good approximation to the original model.
Comparison with individual-based methods. The individual-based methods [21, 22] compute the importance of each channel in different layers to the final network decision. Compared with them, gAP can also help us explore the relationships among different feature channels. To compare with them directly, we propagate the importance of each selected channel of the top layer to the shallower layers. We select the top-$k$ most important channels from different layers of VGG-16 and ablate them to observe the change in classification accuracy. We conduct experiments on the ILSVRC validation set [75]. As shown in Fig. 9, when removing the top few most important feature channels, gAP leads to a larger accuracy drop than the other individual-based methods. We attribute this to the fact that gAP only propagates the contributions of those important feature channels to lower layers, which reduces the interference of other feature channels. Compared with the individual-based methods, gAP can not only effectively detect the important features but also reveal how these features affect each other.
4.2 Diagnosing CNN
Analyzing failure predictions of CNN. Previous work [29] can generate class activation maps for the network predictions, highlighting the most important image regions supporting the network decision. However, such an explanation is not informative enough. The hierarchical decomposition can further provide a more detailed explanation for the network decision. We decompose the network’s decision iteratively to the low-level layers and find the most important feature channels at different layers. We can see each channel’s contribution to the network decision. Further, important channels and their corresponding activation maps can also be studied.


As shown in Fig. 10, we use the hierarchical decomposition to examine a wrong decision of the CNN. Fig. 10 demonstrates a failure case: a dog image is misclassified into the cat category with a probability of 99%. We first decompose the network decision to layer conv5_3 and find the most important channel, which has a 32.3% contribution. We further decompose this channel to layer conv4_3 and find the most important channel there. Its activation map has strong activations at the ear region. Moreover, the patterns that maximally activate this channel are ear image patches of the cat category. We find that the dog’s ear in this example has a similar shape to those cat ear patches. We further occlude the image region of the dog’s ear and observe that the CNN then correctly predicts the dog category with a probability of 65%. With the hierarchical decomposition, we find that the CNN makes the wrong decision because it mistakes the dog’s ear for a cat’s ear in this example.
Analyzing adversarial attacks. Current CNN models are vulnerable to adversarial attacks: when an attack algorithm adds a small perturbation to the original image, these CNN models easily misclassify it. To understand how adversarial images successfully fool CNN models, following [60], we study the change of the feature responses of important channels. As shown in Fig. 11(a), we present the original image (top row) and the adversarial image (bottom row). The adversarial image is generated by a popular attack algorithm [76]. VGG-16 classifies the original image into the picket fence category and the adversarial image into the church category. Through our decomposition from the network decision to layer conv5_3, we find the top few most important feature channels for the picket fence and church categories, respectively.
As shown in Fig. 11(b), when comparing the adversarial image to the original image, we observe that the peak feature responses of the four most important channels for the picket fence category largely decrease, by 11.3, 14.5, 4.7, and 11.1, respectively. In contrast, the peak feature responses of the four most important channels for the church category largely increase, by 8.7, 10.4, 16.4, and 5.5. As shown in Fig. 11(c), we also compute the average peak responses of important channels on the whole ILSVRC validation set [75]. The adversarial attack algorithm changes the feature responses of important channels to affect the final network decision: for the important channels, it reduces the correct category’s feature responses and increases the wrong category’s feature responses.
The context in activation maps. Context information [77, 78] is crucial for recognition. A known prior is that a target category usually appears in a specific context; for example, boats usually appear on seas or lakes, and birds often stand on tree branches. Through our decision decomposition, we find some context in the activation maps that supports the CNN prediction. Fig. 12 shows a channel in layer conv5_3 with strong responses to the image’s ‘boat’ region. We decompose the peak point indicated by its activation map to layer conv4_3 and examine the top-3 most important channels there. The activation map of the most important channel locates the sea.
To quantitatively analyze the context information contained in the activation maps, we utilize the PASCAL-Context dataset [79] for evaluation. We select the images with context annotations from the PASCAL VOC validation set [69] and compute the most frequent context labels for each category. Specifically, we perform the hierarchical decomposition to layer conv4_3, obtaining the activation map for each selected channel. The activation map is first thresholded into a binary map. Then we compute the IoU between the binary activation map and each context region. The activation map is assigned the label of the context region with the largest IoU. In Fig. 13, we show the top few most frequent context labels for three categories, i.e., bird, boat, and train. These categories usually appear in a specific environment. This fact suggests that the context of the objects is critical for recognition. The context information of other categories and the qualitative examples are shown in the supplementary materials.
Channel discrimination analysis. We utilize the hierarchical decomposition to explore the discriminative information of the channels in different layers. Specifically, we define a discriminative degree to measure the discriminative information of a channel. When performing the hierarchical decomposition for images with label $y$, we count the number of times $n^y_c$ that channel $c$ ranks among the top-3 contributions to a decision. $n^y_c$ is accumulated over all images of the validation set. The discriminative degree is then computed by
$d_c = \dfrac{\max_y n^y_c}{\sum_{y=1}^{N} n^y_c},$   (13)
where $N$ denotes the number of categories in the dataset. When channel $c$ is only decomposed from one single category, the discriminative degree $d_c = 1$. Besides, $d_c$ reaches its minimum value $\frac{1}{N}$ when the channel is decomposed from each category an equal number of times.
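A small sketch of this computation (the counts below are random stand-ins for the top-3 occurrence counts collected on a validation set):

```python
import torch

def discriminative_degree(counts):
    """counts: (num_channels, num_categories) top-3 occurrence counts; Eqn. (13)."""
    total = counts.sum(dim=1).clamp(min=1)               # avoid division by zero
    return counts.max(dim=1).values / total              # 1/N <= degree <= 1

counts = torch.randint(0, 20, (512, 1000)).float()       # stand-in: 512 channels, 1000 classes
deg = discriminative_degree(counts)
print(float(deg.mean()))                                  # near 1/N for non-discriminative channels
```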
We apply the hierarchical decomposition to different CNNs. As shown in Fig. 14, the channels’ discriminative degrees in low-level layers are very small; these channels usually have strong activations for multiple categories. This indicates that the basic features detected by channels in low-level layers are shared among different categories and lack discriminative information for classification. However, in high-level layers of CNNs, the channels’ discriminative degrees are much larger than those in low-level layers, because the high-level layers gradually combine basic features from low-level layers to form more discriminative features. In high-level layers, different categories tend to highlight their own discriminative channels. These results provide additional evidence for the conclusion found by Zeiler et al. [24].
Moreover, for the high-level layers of different CNNs, the discriminative degrees of the channels gradually increase with the growth of the network depth (ResNet-50 [2] > VGG-16 [1] > AlexNet [75]). This difference suggests that the high-level layers of ResNet-50 have a stronger discriminative ability. The strong discriminative ability of the channels can effectively reduce confusion among different categories, which helps ResNet-50 achieve higher classification accuracy than VGG-16 and AlexNet.
5 Limitation
The proposed hierarchical decomposition method explains an individual decision by selecting a set of strongly correlated channels from different layers of a CNN. These feature channels provide a rich hierarchy of evidence. However, the feature channels alone may not be convincing enough for a non-expert user to understand the network’s reasoning process, because not all examples are as easy to understand as the person image. In the future, we will therefore attempt to build connections between the selected feature channels and human-specific concepts for better human understanding.
Besides, following [21, 60], we have removed channels individually to study their contributions. However, as verified in [59, 80], representations are usually distributed among multiple channels. We observe that the activation maps of some channels decomposed from the same decision often have strong activations at similar spatial locations. This phenomenon suggests that multiple feature channels produce class responses together. One possible remedy for the flaw of removing channels individually is to first find the feature channels with similar effects by measuring the overlap between their corresponding activation maps and then analyze their joint contribution to the network decision. In this paper, we focus on building the evidence hierarchy; the issue of removing individual channels will be our future work.
6 Conclusion
We present a novel gradient-based activation propagation (gAP) scheme that can decompose any CNN layer’s decision to its lower layers. Based on gAP, the network decision can be hierarchically decomposed into a rich evidence pyramid associated with all layers of the CNN model. Our method allows users to delve deep into the CNN’s decision-making process in a top-down manner. We have experimentally verified the effectiveness of our method and demonstrated its ability to understand and diagnose CNN predictions. While we currently focus mostly on explaining CNN-based image classifiers, we will study how to generalize the framework to other tasks and other deep learning models in the future. The source code and interactive demo website will be made publicly available.
References
- [1] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Int. Conf. Learn. Represent., 2015.
- [2] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016.
- [3] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conf. Comput. Vis. Pattern Recog., 2017, pp. 4700–4708.
- [4] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in IEEE Conf. Comput. Vis. Pattern Recog., 2014, pp. 580–587.
- [5] R. Girshick, “Fast r-cnn,” in Int. Conf. Comput. Vis., 2015, pp. 1440–1448.
- [6] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Adv. Neural Inform. Process. Syst., 2015, pp. 91–99.
- [7] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conf. Comput. Vis. Pattern Recog., 2015.
- [8] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE Trans. Pattern Anal. Mach. Intell., 2017.
- [9] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in IEEE Conf. Comput. Vis. Pattern Recog., 2017.
- [10] G. Lin, A. Milan, C. Shen, and I. Reid, “Refinenet: Multi-path refinement networks with identity mappings for high-resolution semantic segmentation,” in IEEE Conf. Comput. Vis. Pattern Recog., 2017.
- [11] Z. Zhu, D. Liang, S. Zhang, X. Huang, B. Li, and S. Hu, “Traffic-sign detection and classification in the wild,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 2110–2118.
- [12] Y. Hou, Z. Ma, C. Liu, and C. C. Loy, “Learning lightweight lane detection cnns by self attention distillation,” in Int. Conf. Comput. Vis., 2019, pp. 1013–1021.
- [13] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Int. Conf. Medical image computing and computer-assisted intervention, 2015, pp. 234–241.
- [14] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Medical image analysis, vol. 42, pp. 60–88, 2017.
- [15] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in Int. Conf. Learn. Represent., 2014.
- [16] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in Int. Conf. Learn. Represent. Worksh., 2017.
- [17] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” in Int. Conf. Mach. Learn., 2018.
- [18] M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” in Int. Conf. Mach. Learn., 2017.
- [19] A. Shrikumar, P. Greenside, and A. Kundaje, “Learning important features through propagating activation differences,” in Int. Conf. Mach. Learn., 2017.
- [20] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” in Int. Conf. Learn. Represent. Worksh., 2014.
- [21] K. Dhamdhere, M. Sundararajan, and Q. Yan, “How important is a neuron?” Int. Conf. Learn. Represent., 2019.
- [22] K. Leino, S. Sen, A. Datta, M. Fredrikson, and L. Li, “Influence-directed explanations for deep convolutional networks,” in 2018 IEEE International Test Conference (ITC). IEEE, 2018, pp. 1–8.
- [23] C. Olah, N. Cammarata, L. Schubert, G. Goh, M. Petrov, and S. Carter, “Zoom in: An introduction to circuits,” Distill, vol. 5, no. 3, pp. e00 024–001, 2020.
- [24] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Eur. Conf. Comput. Vis. Springer, 2014.
- [25] P. C. Giannelli, “Chain of custody and the handling of real evidence,” Am. Crim. L. Rev., vol. 20, p. 527, 1982.
- [26] M. H. Murad, N. Asi, M. Alsawas, and F. Alahdab, “New evidence pyramid,” BMJ Evidence-Based Medicine, vol. 21, no. 4, pp. 125–127, 2016.
- [27] A. M. Treisman and G. Gelade, “A feature-integration theory of attention,” Cognitive psychology, vol. 12, no. 1, pp. 97–136, 1980.
- [28] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016.
- [29] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” Int. J. Comput. Vis., vol. 128, no. 2, pp. 336–359, 2020.
- [30] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” in Int. Conf. Learn. Represent. Worksh., 2015.
- [31] B. Zhou, Y. Sun, D. Bau, and A. Torralba, “Interpretable basis decomposition for visual explanation,” in Eur. Conf. Comput. Vis., 2018, pp. 119–134.
- [32] A. Sung, “Ranking importance of input parameters of neural networks,” Expert systems with Applications, vol. 15, no. 3-4, pp. 405–411, 1998.
- [33] D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, and K.-R. Müller, “How to explain individual classification decisions,” The Journal of Machine Learning Research, vol. 11, pp. 1803–1831, 2010.
- [34] S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PloS one, vol. 10, no. 7, p. e0130140, 2015.
- [35] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K.-R. Müller, “Explaining nonlinear classification decisions with deep taylor decomposition,” Pattern Recognition, vol. 65, pp. 211–222, 2017.
- [36] B. Kim, J. Seo, S. Jeon, J. Koo, J. Choe, and T. Jeon, “Why are saliency maps noisy? cause of and solution to noisy saliency maps,” in IEEE ICCVW. IEEE, 2019, pp. 4149–4157.
- [37] S. Srinivas and F. Fleuret, “Full-gradient representation for neural network visualization,” in Adv. Neural Inform. Process. Syst., 2019, pp. 4124–4133.
- [38] P.-J. Kindermans, K. T. Schütt, M. Alber, K.-R. Müller, D. Erhan, B. Kim, and S. Dähne, “Learning how to explain neural networks: Patternnet and patternattribution,” in Int. Conf. Learn. Represent., 2018.
- [39] J. Zhang, Z. Lin, J. Brandt, X. Shen, and S. Sclaroff, “Top-down neural attention by excitation backprop,” in Eur. Conf. Comput. Vis., 2016.
- [40] Y. Yang, J. Qiu, M. Song, D. Tao, and X. Wang, “Learning propagation rules for attribution map generation,” in Eur. Conf. Comput. Vis., 2020, pp. 672–688.
- [41] D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, “Smoothgrad: removing noise by adding noise,” in Int. Conf. Mach. Learn. Worksh., 2017.
- [42] L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, “Visualizing deep neural network decisions: Prediction difference analysis,” in Int. Conf. Learn. Represent., 2017.
- [43] B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler, F. Viegas et al., “Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav),” in Int. Conf. Mach. Learn. PMLR, 2018, pp. 2668–2677.
- [44] V. Petsiuk, A. Das, and K. Saenko, “Rise: Randomized input sampling for explanation of black-box models,” in Brit. Mach. Vis. Conf., 2018.
- [45] M. T. Ribeiro, S. Singh, and C. Guestrin, “”Why should i trust you?” Explaining the predictions of any classifier,” in ACM SIGKDD, 2016, pp. 1135–1144.
- [46] R. C. Fong and A. Vedaldi, “Interpretable explanations of black boxes by meaningful perturbation,” in Int. Conf. Comput. Vis., 2017, pp. 3429–3437.
- [47] R. Fong, M. Patrick, and A. Vedaldi, “Understanding deep networks via extremal perturbations and smooth masks,” in Int. Conf. Comput. Vis., 2019, pp. 2950–2958.
- [48] P. Dabkowski and Y. Gal, “Real time image saliency for black box classifiers,” in Adv. Neural Inform. Process. Syst., 2017.
- [49] A. Chattopadhay, A. Sarkar, P. Howlader, and V. N. Balasubramanian, “Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks,” in IEEE Winter Conf. Appl. Comput. Vis., 2018, pp. 839–847.
- [50] H. Wang, Z. Wang, M. Du, F. Yang, Z. Zhang, S. Ding, P. Mardziel, and X. Hu, “Score-cam: Score-weighted visual explanations for convolutional neural networks,” in IEEE Conf. Comput. Vis. Pattern Recog. Worksh., 2020, pp. 24–25.
- [51] D. Erhan, Y. Bengio, A. Courville, and P. Vincent, “Visualizing higher-layer features of a deep network,” University of Montreal, vol. 1341, no. 3, p. 1, 2009.
- [52] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Object detectors emerge in deep scene cnns,” in Int. Conf. Learn. Represent., 2015.
- [53] A. Mordvintsev, C. Olah, and M. Tyka, “Inceptionism: Going deeper into neural networks,” 2015.
- [54] J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson, “Understanding neural networks through deep visualization,” in Int. Conf. Mach. Learn. Worksh., 2015.
- [55] A. Mahendran and A. Vedaldi, “Understanding deep image representations by inverting them,” in IEEE Conf. Comput. Vis. Pattern Recog., 2015, pp. 5188–5196.
- [56] A. Dosovitskiy and T. Brox, “Inverting visual representations with convolutional networks,” in IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 4829–4837.
- [57] C. Olah, A. Mordvintsev, and L. Schubert, “Feature visualization,” Distill, vol. 2, no. 11, p. e7, 2017.
- [58] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba, “Network dissection: Quantifying interpretability of deep visual representations,” in IEEE Conf. Comput. Vis. Pattern Recog., 2017, pp. 6541–6549.
- [59] R. Fong and A. Vedaldi, “Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks,” in IEEE Conf. Comput. Vis. Pattern Recog., 2018, pp. 8730–8738.
- [60] D. Bau, J.-Y. Zhu, H. Strobelt, A. Lapedriza, B. Zhou, and A. Torralba, “Understanding the role of individual units in a deep neural network,” Proceedings of the National Academy of Sciences, 2020.
- [61] R. Chen, H. Chen, J. Ren, G. Huang, and Q. Zhang, “Explaining neural networks semantically and quantitatively,” in Int. Conf. Comput. Vis., 2019, pp. 9187–9196.
- [62] N. Frosst and G. Hinton, “Distilling a neural network into a soft decision tree,” in CEX workshop at AIIA, 2017.
- [63] X. Liu, X. Wang, and S. Matwin, “Improving the interpretability of deep neural networks with knowledge distillation,” in IEEE Int. Conf. Data Mining Worksh. IEEE, 2018, pp. 905–912.
- [64] C. Chen, O. Li, D. Tao, A. Barnett, C. Rudin, and J. K. Su, “This looks like that: Deep learning for interpretable image recognition,” in Adv. Neural Inform. Process. Syst., vol. 32, 2019, pp. 8930–8941.
- [65] P. W. Koh, T. Nguyen, Y. S. Tang, S. Mussmann, E. Pierson, B. Kim, and P. Liang, “Concept bottleneck models,” in Int. Conf. Mach. Learn., 2020, pp. 5338–5348.
- [66] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar, “Attribute and simile classifiers for face verification,” in Int. Conf. Comput. Vis., 2009, pp. 365–372.
- [67] C. H. Lampert, H. Nickisch, and S. Harmeling, “Learning to detect unseen object classes by between-class attribute transfer,” in IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 2009, pp. 951–958.
- [68] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015.
- [69] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes challenge: A retrospective,” Int. J. Comput. Vis., 2015.
- [70] Q. Zhang, Y. Yang, H. Ma, and Y. N. Wu, “Interpreting cnns via decision trees,” in IEEE Conf. Comput. Vis. Pattern Recog., 2019, pp. 6261–6270.
- [71] J. Benesty, J. Chen, Y. Huang, and I. Cohen, “Pearson correlation coefficient,” in Noise reduction in speech processing. Springer, 2009, pp. 1–4.
- [72] P. Sedgwick, “Spearman’s rank correlation coefficient,” Bmj, vol. 349, 2014.
- [73] A. Neubeck and L. Van Gool, “Efficient non-maximum suppression,” in 18th International Conference on Pattern Recognition (ICPR’06), vol. 3. IEEE, 2006, pp. 850–855.
- [74] J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, and B. Kim, “Sanity checks for saliency maps,” in Adv. Neural Inform. Process. Syst., vol. 31, 2018.
- [75] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Adv. Neural Inform. Process. Syst., 2012, pp. 1097–1105.
- [76] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in Int. Conf. Learn. Represent., 2018.
- [77] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Is object localization for free?-weakly-supervised learning with convolutional neural networks,” in IEEE Conf. Comput. Vis. Pattern Recog., 2015, pp. 685–694.
- [78] S. Kumar and M. Hebert, “A hierarchical field framework for unified context-based classification,” in Int. Conf. Comput. Vis., vol. 2. IEEE, 2005, pp. 1284–1291.
- [79] R. Mottaghi, X. Chen, X. Liu, N.-G. Cho, S.-W. Lee, S. Fidler, R. Urtasun, and A. Yuille, “The role of context for object detection and semantic segmentation in the wild,” in IEEE Conf. Comput. Vis. Pattern Recog., 2014, pp. 891–898.
- [80] M. L. Leavitt and A. S. Morcos, “Selectivity considered harmful: evaluating the causal impact of class selectivity in dnns,” in Int. Conf. Learn. Represent., 2020.