Structured Network Pruning by Measuring Filter-wise Interactions
Abstract
Structured network pruning is a practical approach to reducing computation cost directly while retaining the CNN's generalization performance in real applications. However, identifying redundant filters is a core problem in structured network pruning, and current redundancy criteria only focus on individual filters' attributes. When pruning sparsity increases, these redundancy criteria are no longer effective or efficient enough. Since the filter-wise interaction also contributes to the CNN's prediction accuracy, we integrate the filter-wise interaction into the redundancy criterion. In our criterion, we introduce the filter importance and the filter utilization strength to reflect the decision ability of individual and multiple filters. Utilizing this new redundancy criterion, we propose a structured network pruning approach, SNPFI (Structured Network Pruning by measuring Filter-wise Interaction). During pruning, SNPFI automatically assigns the proper sparsity based on the filter utilization strength and eliminates useless filters by the filter importance. After pruning, SNPFI recovers the pruned model's performance effectively, without iterative training, by minimizing the interaction difference. We empirically demonstrate the effectiveness of SNPFI with several commonly used CNN models, including AlexNet, MobileNetv1, and ResNet-50, on various image classification datasets, including MNIST, CIFAR-10, and ImageNet. For all experimental CNN models, nearly 60% of the computation is reduced in a single compression pass while the classification accuracy is maintained.
1 Introduction
Network pruning can transfer pre-trained heavy CNN models to a lightweight form with comparable precision by removing the models' inherent redundancy [5]. On the one hand, this transformation effectively boosts the real-time performance of CNNs on edge devices. On the other hand, network pruning prevents over-fitting by directly eliminating unnecessary parameters [21]. Structured network pruning removes groups of parameters at various granularities (e.g., kernel, filter). Since pruning at the filter level of granularity requires no specially designed hardware accelerator, filter-wise structured network pruning has become a current research focus [3, 19, 52].

Identifying redundant filters is a core problem in structured network pruning. A filter is redundant if its contribution to the performance is subtle. Because either the filter's weight or its output feature can be the clue to discriminate its redundancy, structured network pruning algorithms can be categorized into weight-based [29, 51, 16] and feature-based approaches [53, 34, 6]. Assuming that vital filters have large weights, the LWCE [16] identifies redundant filters by the L1-norm. Considering the similarity among filters in the feature space, the FPGM [19] ranks filters by their difference from the geometric median. When the pruning intensity is modest, these static criteria can efficiently identify redundancy within a single evaluation; thus, they are also known as one-shot pruning. Under high pruning intensity, one-shot pruning approaches struggle to recover the performance effectively. Dynamic evaluation overcomes this drawback by measuring redundancy with more caution. Ye et al. [47] search for the redundant filters multiple times and recover the weights through iterative training. Current feature-based approaches dynamically evaluate filters during inference and training [34, 14]. Network slimming [34] embeds filter selector layers into the original CNN to improve the accuracy of the redundancy discrimination. Although various redundancy criteria exist, how to identify redundancy effectively and efficiently is still an unsettled problem.
Solely focusing on filters' unitary attributes, the above redundancy criteria neglect the filter-wise interaction when measuring each filter's contribution to the prediction. Many network interpretation studies have shown that filter-wise interactions do contribute to the prediction [44, 25, 9]. On the one hand, each filter's interaction can reflect its inherent decision ability [25]. On the other hand, filter-wise interactions are universal [40], and functionally close-coupled layers exist in common CNNs, as shown in Figure 1-(a). In this scenario, filters with various importance have to interact to contribute positively to the output. These observations motivate us to integrate the filter-wise interaction into our redundancy criterion. As shown in Figure 1-(a), important filters (green cube) can benefit considerably from interacting with the subtle filters (yellow cube). Thus, only measuring the unitary attributes of filters cannot effectively identify redundancy. Since the useless filters are functionally loosely-coupled (blue region) within the layer, measuring the strength of interaction among multiple filters can avoid excluding the filters that seem useless but actually contribute.
To identify the functionally loosely-coupled filters, our redundancy criterion measures each filter's importance and the utilization strength of interaction through the filter-wise interaction. The contributions of individual filters or groups of filters to the output reflect the importance of each filter and the strength of interaction. To distribute the unit and the group contribution fairly, we model the filter-wise interaction with cooperative game theory [7]. To account for the different number of filters in each layer, we refer to the normalized strength of interaction among multiple filters as the filter utilization strength. Since the filter utilization strength can reflect the decision ability of a group of filters, we can provide a theoretical lower bound of sparsity with respect to the performance. Moreover, we identify the interactions that might cause a generalization gap by comparing the interaction results of the pruned model and the original model. For each pruned computational module, we refer to this disparity as the interaction difference, which quantifies the potential generalization gap, and we fine-tune the pruned model with the interaction differences to boost the optimization effectively. In this way, we can maintain the performance of the pruned model under our redundancy criterion.

To attain the optimal combination of the layer-wise pruning sparsity, we propose a structured network pruning approach, SNPFI (Structured Network Pruning by measuring Filter-wise Interaction), shown in Figure 2, based on our redundancy criterion. SNPFI includes three parts: the RL (reinforcement learning) module, the Layer-wise Pruning module, and the Fine-tuning module. The RL module predicts the pruning sparsity layer by layer and consists of the Environment and the Agent. For each computational module, the Environment first generates the state and the sparsity lower bound via the filter utilization strength; then the Agent predicts the sparsity from the state and the sparsity lower bound; in the end, the Environment rewards this pruning decision in constant time. Once the pruning is finished, the Agent is updated by the pruning decisions and rewards. To alleviate the delayed reward problem of model-free reinforcement learning in the pruning scenario [4], we formulate the reward function with the filter utilization strength. In this way, SNPFI can automatically and efficiently optimize the compression plan. The Layer-wise Pruning module removes the redundant filters based on the predicted sparsity: during the layer-wise pruning, the redundant filters are removed according to the filter importance and the sparsity. The Fine-tuning module recovers the weights of the pruned model according to the interaction difference. In this way, we can ensure that the pruned model acquires the meaningful interaction behaviors inherent in the original model even under a high pruning intensity. As shown in Figure 1-(b), SNPFI can outperform other state-of-the-art pruning approaches even under a tighter computation budget.
The contributions of our work are as follows:
• We propose a redundancy criterion based on the filter-wise interaction and prove its effectiveness theoretically and experimentally. In this new redundancy criterion, the filter importance and the filter utilization strength imply the decision ability of individual and multiple filters. Building on this criterion, the interaction difference abstracts the potential generalization gap due to pruning and effectively guides the weight recovery of the pruned model;
• We present a model compression algorithm, SNPFI, which prunes efficiently and effectively. Utilizing our redundancy criterion and the interaction difference, SNPFI avoids pruning relatively useful filters and recovers the performance by minimizing the generalization gap caused by pruning;
• We experimentally demonstrate the effectiveness of SNPFI on MNIST, CIFAR-10, and ImageNet with AlexNet, MobileNetv1, and ResNet-50. SNPFI consistently prunes around 60% of the original model's filters (20% higher than the empirical value) without significant performance decline in a single-pass pruning process.
2 Related work
Weight-based and feature-based network pruning. Network pruning is a process that decouples useless structure from the CNN by a redundancy criterion. Unitary attributes (e.g., weight, feature) can spot redundancy well under a conservative pruning sparsity [29, 19, 51]. The OBD [29] measures neurons' importance by the second-order derivative of the weight; the TMI-GKP [51] evaluates the redundancy by the weight similarities among filters. Since the coupling degree of different computational modules might vary across applications [40, 25], static evaluations of redundancy are not applicable when the pruning intensity increases [47]. Therefore, current feature-based pruning approaches design filter selectors to decouple the useless filters at the expense of extra computation [14, 34]. The NPPM [14] can foresee the future impact of any layer-wise pruning on accuracy, but requires extra training of the filter selection network. Thus, how to identify redundancy effectively and efficiently under high pruning intensity is still an unsettled problem. We address this problem with a new redundancy criterion based on the filter-wise interaction in Section 3.1.
Structured pruning and AutoML. Structured pruning [3] removes groups of neuron connections at once; when pruning at filter granularity, the current layer's output channels shrink together with the following layer's input channels. Since the search cost decreases sharply when the compression plan is scheduled at the filter level, AutoML (automated machine learning) [46] can effectively assign each layer's sparsity by repeatedly retraining each pruned model [33]. Because repeated retraining is needed to relate accuracy to sparsity, reinforcement learning approaches also struggle to balance exploration and exploitation, either due to the delayed reward problem [20] or due to a lack of exploration [49]. The delayed reward problem arises in reinforcement learning tasks with sparse or episodic rewards [4]. Aiming to alleviate the delayed reward problem in real pruning scenarios, we formulate the reward function with the filter utilization strength in Section 3.2.
3 Method
To identify the redundancy effectively, we introduce a new redundancy criterion based on the filter-wise interaction in Section 3.1. To demonstrate the feasibility of our redundancy criterion, we propose a reinforcement learning-based structured network pruning approach, SNPFI, in Section 3.2. To minimize the performance disparity between the original and the pruned model, we propose the interaction difference for fine-tuning in Section 3.3.
3.1 Measuring the redundancy
In the pruning scenario, there are two types of filters in each computational module: the redundant and the useful. Taking a CNN with $L$ computational layers as an example, the $l$-th layer is a set of filters $F^{l}=\{f^{l}_{i}\mid i\in N^{l}\}$ with index set $N^{l}=\{1,\dots,c^{l}_{out}\}$, where each filter is $f^{l}_{i}\in\mathbb{R}^{c^{l}_{in}\times k\times k}$, $c^{l}_{out}$ and $c^{l}_{in}$ are the $l$-th layer's output and input channel numbers, and $k$ is the kernel size. The layer's pruning sparsity indicates the ratio of the number of redundant filters to the number of all filters [3]. According to the redundancy criterion, if $n^{l}$ filters of the $l$-th layer are useful, then the pruning sparsity is $sp^{l}=1-n^{l}/c^{l}_{out}$. The useful filter set of the $l$-th layer is $F^{l}_{S}=\{f^{l}_{i}\mid i\in S^{l}\}$, where $S^{l}\subseteq N^{l}$ is the index set of the useful filters identified by their importance according to the specific redundancy criterion. In the inference, each filter consumes a comparable amount of computation but contributes to the output differently. Since the redundant filters' contributions are much lower than the useful filters', current pruning algorithms assume that the influence of pruning a redundant filter is subtle [19, 41, 3]. However, these approaches neglect the contributions of filter-wise interactions during the redundancy evaluation.
Since a relatively useless filter might have a considerable collaborative contribution to the prediction [25, 40], we need to fairly quantify the impact of the potential filter-wise interactions on the prediction. To achieve this goal, we regard each inference process carried out by the filters in the $l$-th layer as a cooperative game [15]. During inference on an image $x$, each filter is a player, and a set of players $S\subseteq N^{l}$ forms a coalition with the contribution $v(S)$, where $v(\emptyset)=0$ [10] and $v(S)$ is the prediction obtained when only the filters indexed by $S$ are active. According to the $v(S)$ of each coalition $S$, we can distribute the contributions to the output when multiple filters interact. In this way, we can quantify each filter's importance and each layer's filter utilization strength to identify the functionally loosely-coupled filters. To fairly measure the individual contribution of each filter, we formulate the importance $\phi^{l}_{i}$ of the $i$-th filter in the $l$-th layer by the Shapley value [37] in Eq. (1).
$$\phi^{l}_{i}=\sum_{S\subseteq N^{l}\setminus\{i\}}\frac{|S|!\,\big(|N^{l}|-|S|-1\big)!}{|N^{l}|!}\Big(v\big(S\cup\{i\}\big)-v(S)\Big) \qquad (1)$$
where $|S|$ and $|N^{l}|$ denote the sizes of the coalition $S$ and of the full filter set, respectively. When the $i$-th filter forms a coalition $S\cup\{i\}$ with other filters in the $l$-th layer, it might contribute positively to the prediction. Considering this potential contribution of each filter, the filter importance assigns a small value to a relatively useless filter. As proven in [2], $\phi^{l}_{i}$ is unique and fair to each filter. Therefore, we can fairly discriminate the relatively useless filters from the useful ones.
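As a concrete illustration, the sketch below estimates the filter importance in Eq. (1) by Monte Carlo permutation sampling, assuming a PyTorch model whose target layer is an `nn.Conv2d`; the coalition value used here (the confidence of the ground-truth class when only the coalition's output channels are active) and the helper names `masked_forward`, `coalition_value`, and `estimate_filter_importance` are illustrative assumptions, not the released implementation.

```python
import random
import torch

def masked_forward(model, x, layer, keep_idx):
    """Run `model` on `x` with only the output channels of `layer` listed in
    `keep_idx` active; all other channels are zeroed by a forward hook."""
    mask = torch.zeros(layer.out_channels, device=x.device)
    mask[list(keep_idx)] = 1.0
    handle = layer.register_forward_hook(
        lambda module, inputs, output: output * mask.view(1, -1, 1, 1))
    with torch.no_grad():
        logits = model(x)
    handle.remove()
    return logits

def coalition_value(model, x, y, layer, keep_idx):
    """v(S): confidence of the ground-truth class y when only S is active."""
    logits = masked_forward(model, x, layer, keep_idx)
    return torch.softmax(logits, dim=1)[0, y].item()

def estimate_filter_importance(model, x, y, layer, num_perm=32):
    """Monte Carlo estimate of the Shapley value of every filter in `layer`."""
    n = layer.out_channels
    phi = [0.0] * n
    for _ in range(num_perm):
        order = random.sample(range(n), n)          # one random filter order
        keep = set()
        prev = coalition_value(model, x, y, layer, keep)
        for i in order:
            keep.add(i)
            cur = coalition_value(model, x, y, layer, keep)
            phi[i] += (cur - prev) / num_perm       # marginal contribution of filter i
            prev = cur
    return phi
```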
To fairly distribute the contribution of interaction, we first introduce the filter interaction $I^{l}(i,j)$. The filter interaction is the contribution of the interaction among at least two distinct filters. In this scenario, a coalition has to involve at least two filters, i.e., $i,j\in N^{l}$ with $i\neq j$ and $S\subseteq N^{l}\setminus\{i,j\}$. Utilizing the Shapley interaction index [15], the filter interaction between $f^{l}_{i}$ and $f^{l}_{j}$, when the other filters exist, is defined in Eq. (2).
$$I^{l}(i,j)=\sum_{S\subseteq N^{l}\setminus\{i,j\}}\frac{|S|!\,\big(|N^{l}|-|S|-2\big)!}{\big(|N^{l}|-1\big)!}\Big(v\big(S\cup\{i,j\}\big)-v\big(S\cup\{i\}\big)-v\big(S\cup\{j\}\big)+v(S)\Big) \qquad (2)$$
where $|S|$ again denotes the size of the coalition $S$ and the dependence of $v$ on the input image $x$ is omitted for brevity. The larger $I^{l}(i,j)$, the stronger the interaction when $f^{l}_{i}$ and $f^{l}_{j}$ form a coalition with the other filters. Notably, it can be proved that $I^{l}(i,j)$ is unique and fair among all coalitions [37]. With $I^{l}(i,j)$, we can measure the filter utilization strength $U^{l}(m)$ of the $l$-th layer in Eq. (3).
$$U^{l}(m)=\frac{1}{|D|}\sum_{x\in D}\frac{\sum_{i,j\in S^{l}_{m},\,i<j}\big|I^{l}(i,j\mid x)\big|}{\sum_{i,j\in N^{l},\,i<j}\big|I^{l}(i,j\mid x)\big|} \qquad (3)$$
where $D$ is the calibration dataset and $S^{l}_{m}$ is the set of the $m$ most important filters by default. A high value of $U^{l}(m)$ indicates that the interaction strength is already intensive when only $m$ filters exist. If subtle filters exist, $U^{l}(m)$ reaches a high value with a relatively small $m$. In this way, we can estimate the number of useless filters by $U^{l}(m)$.
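For illustration, the sketch below estimates the pairwise interaction index of Eq. (2) by Monte Carlo sampling and derives a normalized utilization strength in the spirit of Eq. (3); `value_fn`, the sampling budget, and the use of absolute interaction strengths are assumptions of this sketch rather than the exact released procedure.

```python
import random

def estimate_interaction(value_fn, n, i, j, num_samples=64):
    """Monte Carlo estimate of the pairwise Shapley interaction index I(i, j).
    `value_fn(S)` returns the coalition contribution v(S) for a set of indices;
    drawing a coalition size uniformly and then a subset of that size matches
    the subset weighting in Eq. (2)."""
    others = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for _ in range(num_samples):
        s = set(random.sample(others, random.randint(0, len(others))))
        total += (value_fn(s | {i, j}) - value_fn(s | {i})
                  - value_fn(s | {j}) + value_fn(s))
    return total / num_samples

def utilization_strength(value_fn, n, kept, num_samples=64):
    """Normalized interaction strength of the kept filters (indices in `kept`):
    a value near 1.0 means the kept filters already carry essentially all of
    the layer's pairwise interaction on this input."""
    kept = sorted(kept)
    kept_strength = sum(abs(estimate_interaction(value_fn, n, i, j, num_samples))
                        for a, i in enumerate(kept) for j in kept[a + 1:])
    full_strength = sum(abs(estimate_interaction(value_fn, n, i, j, num_samples))
                        for i in range(n) for j in range(i + 1, n))
    return kept_strength / max(full_strength, 1e-12)
```

Enumerating all pairs is expensive; in practice the pair set and the sampling budget would be restricted, which is exactly the cost issue revisited in the conclusion.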
3.2 Filter-wise interaction based structured network pruning
In the pruning scenario, approximating the optimal pruning plan can be time-consuming. Since each layer's pruning sparsity and the accuracy of the pruned model are non-linearly related, previous studies spend a considerable amount of computation sampling the best accuracy achievable under each pruning sparsity [14, 33, 49]. Based on our redundancy criterion, we can instead estimate each layer's sparsity lower bound in the spirit of expert experience. Given the desirable filter utilization strength $U^{*}$ required to maintain the basic functionality after pruning, the lower bound of the $l$-th layer's sparsity is estimated in Eq. (4).
$$sp^{l}_{\min}=1-\frac{\min\{\,m : U^{l}(m)\geq U^{*}\,\}}{c^{l}_{out}} \qquad (4)$$
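A minimal sketch of how a lower bound in the form of Eq. (4) could be computed from a layer's utilization curve; the function name and the `target_u` parameter (standing in for the desired strength $U^{*}$) are illustrative assumptions.

```python
def sparsity_lower_bound(utilization, num_filters, target_u):
    """Estimate the layer's sparsity lower bound from its utilization curve.

    `utilization[m - 1]` is U(m), the utilization strength when the m most
    important filters are kept; `target_u` plays the role of U* in Eq. (4)."""
    for m, u in enumerate(utilization, start=1):
        if u >= target_u:
            return 1.0 - m / num_filters   # everything beyond m filters is safe to prune
    return 0.0                             # target never reached: no safe lower bound
```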
According to $sp^{l}_{\min}$ and the filter importance $\phi^{l}_{i}$, the relatively useful and important filters always remain after the layer-wise pruning, as shown in the dotted box of Figure 2. However, scheduling the layer-wise pruning plan only by expert experience might settle for a local optimum [20, 49]. The objective function of the scheduling process is given in Eq. (5).
$$\min_{sp}\ \mathcal{C}(sp)\qquad \text{s.t.}\quad \mathcal{A}(sp)\geq A^{*} \qquad (5)$$
where $\mathcal{C}(sp)$ and $\mathcal{A}(sp)$ are the computation and the accuracy of the pruned model following the pruning plan $sp=(sp^{1},\dots,sp^{L})$, and $A^{*}$ is the legal validation accuracy for the pruned model. To efficiently approximate the optimum over the enormous number of combinations of $sp$, we integrate $sp^{l}_{\min}$ into Eq. (5):
$$\min_{sp}\ \mathcal{C}(sp)\qquad \text{s.t.}\quad \mathcal{A}(sp)\geq A^{*},\quad sp^{l}_{\min}\leq sp^{l}<1,\ \ l=1,\dots,L \qquad (6)$$
According to Eq. (5) and Eq. (6), the search space shrinks sharply and becomes feasible for the RL algorithm. Therefore, we integrate our filter-wise interaction-based redundancy criterion into the RL algorithm as shown in Figure 2. In the following, we explain the details of the RL module of SNPFI.
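To make the workflow of the RL module concrete, the following high-level sketch walks the network layer by layer with an actor network, as suggested by Figure 2; `actor`, `get_state`, `lower_bound`, `prune_layer`, and `layer_reward` are illustrative placeholders for the components defined in this section, not the released interface.

```python
def run_pruning_episode(model, layers, actor, get_state, lower_bound,
                        prune_layer, layer_reward):
    """One pruning episode: the Agent predicts a sparsity for each prunable
    layer in turn, bounded below by that layer's sparsity lower bound, and
    receives a per-layer reward so that feedback is not delayed to the end."""
    transitions = []
    for l in range(len(layers)):
        state = get_state(model, l)                 # features of Eq. (7)
        sp_min = lower_bound(model, l)              # lower bound of Eq. (4)
        action = max(float(actor(state)), sp_min)   # bounded sparsity, Eq. (8)
        n_kept = prune_layer(model, l, action)      # keep the most important filters
        reward = layer_reward(model, l, n_kept)     # dense reward, Eq. (9)
        transitions.append((state, action, reward))
    return model, transitions                       # transitions later update the Agent
```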
State $s_{l}$ is defined below for the $l$-th accessible layer:
$$s_{l}=\big(t_{l},\ p_{l},\ \mathrm{FLOPs}_{l},\ \mu(U^{l}),\ \sigma(U^{l})\big) \qquad (7)$$
Here $t_{l}$ is the type of the layer, $p_{l}$ is the number of parameters, and $\mathrm{FLOPs}_{l}$ is the number of floating-point operations. The average and the standard deviation of the filter utilization strength, $\mu(U^{l})$ and $\sigma(U^{l})$, are also included in the state.
Action $a_{l}$ is the pruning sparsity of the $l$-th layer. In the Agent, $a_{l}$ is predicted by the policy network $\pi_{\theta}$ (the Actor in Figure 2) from the state $s_{l}$ and bounded below by $sp^{l}_{\min}$. $a_{l}$ is defined in Eq. (8).
$$a_{l}=\max\big(\pi_{\theta}(s_{l}),\ sp^{l}_{\min}\big) \qquad (8)$$
where $\theta$ denotes the trainable parameters of $\pi_{\theta}$. The output range of the policy network lies in $(0,1)$.
Reward should convey the future effect on the model's computational overhead and performance caused by the subsequent layer-wise pruning. Since $U^{l}$ is naturally related to the number of filters and their cooperative contribution to the inference, we formulate the reward function of the $l$-th layer via $U^{l}$ and $n^{l}$ to alleviate the delayed reward problem [38] of the AMC [20]:
$$r_{l}=\begin{cases}\dfrac{U^{l}(n^{l})}{n^{l}}, & \mathcal{A}(sp)\geq A^{*}\\[4pt]\mathcal{A}(sp)-A^{*}, & \mathcal{A}(sp)<A^{*}\end{cases} \qquad (9)$$
where $n^{l}$ is the number of remaining filters and $sp$ is the predicted pruning plan. For CIFAR-10 and MNIST, we evaluate $\mathcal{A}(sp)$ on the validation set; for ImageNet, we use the training set. When $\mathcal{A}(sp)$ ensures the performance of the pruned model, this reward function encourages the Agent to achieve a higher filter utilization strength with fewer filters during the optimization; when $\mathcal{A}(sp)$ violates the legal performance $A^{*}$ of the pruned model, this reward penalizes the Actor for the aggressive pruning sparsity. In this way, the Agent refines the pruning plan step by step.
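A minimal sketch of a per-layer reward with the properties described above: dense feedback from the filter utilization strength and the number of remaining filters, plus a penalty once a completed plan violates the legal accuracy. The exact scaling and the penalty value are assumptions of this sketch, not the released formulation.

```python
def layer_reward(u_remaining, n_remaining, acc_pruned=None, acc_legal=None,
                 penalty=-1.0):
    """Per-layer reward sketch: reward high utilization strength achieved with
    few filters; penalize the whole plan if its accuracy falls below the legal
    accuracy. `acc_pruned` is None while the plan is still being built."""
    if acc_pruned is not None and acc_legal is not None and acc_pruned < acc_legal:
        return penalty                          # aggressive plan: constraint violated
    return u_remaining / max(n_remaining, 1)    # fewer filters with higher U -> larger reward
```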
3.3 Interaction difference based fine-tuning
During deployment on edge devices, removing relatively subtle filters that still have a considerable collaborative contribution might be inevitable. The generalization gap between the pruned and the original model emerges in this scenario. Since our redundancy criterion prevents pruning filters that are both useful and important, this gap is caused by the distinct interactions of the pruned filters [25, 9]. Therefore, we propose to utilize the filter-wise interaction for fine-tuning, as shown in Figure 2. From the perspective of the cooperative game, the remaining filters and the entire filters form two different coalitions after pruning the $l$-th layer: $F^{l}_{S}$ and $F^{l}$. If the generalization gap exists, at least one cooperative filter exists in the pruned set $F^{l}\setminus F^{l}_{S}$. To quantify this generalization gap, we define the interaction difference between $F^{l}$ and $F^{l}_{S}$ in Eq. (11).
$$ID\big(F^{l},F^{l}_{S}\big)=\sum_{i,j\in N^{l},\,i<j}I^{l}(i,j)-\sum_{i,j\in S^{l},\,i<j}I^{l}(i,j) \qquad (11)$$
Theorem 1.
If the remaining coalition satisfies $F^{l}_{S}\subseteq F^{l}$ for any computational layer indexed by $l$, then $ID\big(F^{l},F^{l}_{S}\big)\geq 0$.
Based on Theorem 1, a non-negative interaction difference indicates the existence of meaningful interactions that lead to better generalization. Hence, we propose the ID loss in Eq. (12) to encourage the pruned model to learn these important interactions.
$$\mathcal{L}_{ID}=\mathbb{E}_{x\sim D}\Big[H\big(\mathrm{softmax}(ID(x)),\,\hat{y}(x)\big)\Big]=\mathbb{E}_{x\sim D}\Big[-\sum_{k=1}^{K}\mathrm{softmax}_{k}\big(ID(x)\big)\log \hat{y}_{k}(x)\Big] \qquad (12)$$
where $H$ is the Shannon entropy-based term [39] that uses the interaction difference $ID(x)$ for classification on image $x$, $x$ is sampled from the training set $D$, $K$ is the number of categories, and $\hat{y}$ is the prediction based on the remaining filters. By minimizing $\mathcal{L}_{ID}$, the pruned model can eventually learn the important interactions. Since the interaction difference is non-negative, $\mathcal{L}_{ID}$ provides effective guidance while avoiding gradient explosion. Therefore, we integrate $\mathcal{L}_{ID}$ with the ground-truth loss during fine-tuning, as shown in Figure 2.
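A sketch of a fine-tuning objective that combines the ground-truth loss with an interaction-difference term at the 1 : 0.5 weighting used in Section 4, assuming PyTorch; treating a per-class interaction difference `id_scores` as a soft target via softmax is an assumption of this sketch rather than the exact form of Eq. (12).

```python
import torch.nn.functional as F

def fine_tune_loss(logits_pruned, targets, id_scores, id_weight=0.5, ce_weight=1.0):
    """Combine the ground-truth cross-entropy with an ID-based term.

    `logits_pruned`: predictions of the pruned model (batch x classes);
    `targets`: ground-truth labels; `id_scores`: per-class interaction
    difference ID(x) (batch x classes), turned into a soft target here."""
    ce = F.cross_entropy(logits_pruned, targets)
    soft_target = F.softmax(id_scores, dim=1)                 # target distribution from ID
    log_pred = F.log_softmax(logits_pruned, dim=1)
    id_loss = -(soft_target * log_pred).sum(dim=1).mean()     # cross-entropy against ID target
    return ce_weight * ce + id_weight * id_loss
```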
4 Experiments

We empirically demonstrate the effectiveness of our proposed redundancy metric and SNPFI on MNIST [30], CIFAR-10 [27], and ImageNet [11]. We implemented our method based on the publicly available Torch implementations of ResNet-50 [17], AlexNet [28], and MobileNetv1 [23]. As a commonly used lightweight network, MobileNetv1 is used to test the reliability of SNPFI when pruning an already sparse architecture. AlexNet and ResNet-50 are used to test the generality of SNPFI across different densities of network connections.
Baselines. We mainly compare our approach with feature-based pruning: NS (Network slimming) [34] and NPPM [14]. NS is a one-shot pruning algorithm and requires no extra training after each pruning; NPPM requires training a feature selection network. To demonstrate the efficiency of SNPFI, weight-based pruning [36, 45, 8, 18] and AutoML methods [12, 20] are also included.
Pruning. To optimize the SNPFI agent, we use the Adam algorithm for both the actor and the critic networks, with separate learning rates for the two. For fine-tuning, we utilize our proposed ID loss and weight it and the original loss as 0.5 and 1, respectively, and we use SGD with a Nesterov momentum of 0.9. For AlexNet and MobileNetv1 on MNIST, 15 epochs are used with an initial learning rate of 5e-5; for MobileNetv1 and ResNet-50 on CIFAR-10, 120 epochs are used with an initial learning rate of 8e-4; for MobileNetv1 and ResNet-50 on ImageNet, 160 epochs are used with an initial learning rate of 1e-3.
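For concreteness, a minimal sketch of the CIFAR-10 fine-tuning schedule described above, assuming PyTorch; `pruned_model`, `train_loader`, and `combined_loss` (the 1 : 0.5 mix of the ground-truth loss and the ID loss) are placeholders for the artifacts produced by the pruning stage.

```python
import torch

def fine_tune_cifar10(pruned_model, train_loader, combined_loss,
                      epochs=120, lr=8e-4):
    """Fine-tune the pruned model with SGD (Nesterov momentum 0.9),
    120 epochs and an initial learning rate of 8e-4, as used for CIFAR-10."""
    optimizer = torch.optim.SGD(pruned_model.parameters(), lr=lr,
                                momentum=0.9, nesterov=True)
    pruned_model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = combined_loss(pruned_model(images), labels)
            loss.backward()
            optimizer.step()
    return pruned_model
```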
4.1 Reliability of Filter-wise interaction
Effectiveness of the filter utilization strength. Ideally, the number of filters would have a linear relationship with the accuracy, which would imply that lower sparsity reliably leads to better performance. In reality, the accuracy does not increase monotonically with the proportion of total filters, as shown for the shallow and middle layers of AlexNet (Figure 3-left). Thus, assigning the same sparsity to all layers is not effective. Since the accuracy does increase monotonically with the filter utilization strength $U^{l}$, a higher $U^{l}$ ensures better accuracy and is a more effective signal than sparsity. In this way, our reward function in Eq. (9) can effectively alleviate the delayed reward problem. Meanwhile, $U^{l}$ converges to a similar level across diverse layers and applications, as shown in Figure 3-left. Thus, we can estimate the sparsity lower bound in Eq. (4) with a uniform filter utilization strength $U^{*}$ for various layers. Since the filter-wise interactions are diverse among layers, the identical value of $U^{*}$ implies a different $sp^{l}_{\min}$ per layer. We utilize this property to boost the optimization of the pruning plan, as shown in Eq. (5).

Various interactive modes. As shown in Figure 3-right, a model has three types of interactive modes: low channel first, multi-channel first, and high channel first. As shown by $U^{l}$ of the shallow layer in AlexNet, 'low channel first' is the interaction mode in which the layer's $U^{l}$ reaches its inflection point with only a few filters; as shown by the middle layer of AlexNet, 'multi-channel first' is the interaction mode in which the layer's $U^{l}$ increases distinctly multiple times as the number of filters grows; as shown by the deep layer of MobileNetv1 on MNIST, 'high channel first' is the interaction mode in which the layer's $U^{l}$ soars only with a high number of channels. Since the interactive modes are diverse within the same model, scheduling an aggressive pruning sparsity for a single layer can risk the generalization performance of the whole model. Moreover, even the identical model might behave differently when applied in different scenarios. As shown in Figure 3-right, MobileNetv1 tends to interact in 'multi-channel first' mode on MNIST but shifts from 'multi-channel first' to 'low channel first' and ends in 'high channel first' on CIFAR-10. Hence, our redundancy criterion is vital in real applications to prevent pruning the useful filters.
Effectiveness of the interaction difference. As shown in Figure 4-left, our proposed interaction difference loss is effective for weight recovery after aggressive pruning. Knowledge distillation [22] struggles to converge when the ground truth is intricate for training the pruned model. In contrast, $\mathcal{L}_{ID}$ steadily guides the pruned model to an increasing training accuracy without drastic fluctuations of the validation loss. As shown in Figure 4-right, the interaction difference can also refine the interactive mode of the pruned model. Before pruning, ResNet-50 tends to interact in 'low channel first' mode for shallower layers and in 'multi-channel first' mode for deeper layers. After fine-tuning with $\mathcal{L}_{ID}$, all layers tend to interact in 'multi-channel first' mode, which means the filters cooperate strongly and little redundancy remains.
4.2 Pruning models on different datasets
Pruning on CIFAR-10. As shown in Table 1(a), SNPFI reduces more than 60% of the computation of MobileNetv1 and ResNet-50 without a significant accuracy drop. In the deployment scenario, our method runs noticeably faster than QPNN on the FPGA device and achieves comparable performance without any special-purpose software accelerator [48]. When compressing the highly computational ResNet-50, SNPFI achieves the highest compression and accuracy among the state-of-the-art methods. Compared with AMC and TAS, SNPFI finds a better network architecture by overcoming the delayed reward problem; compared with Greg-2, our method realizes 3.7% more reduction and 2.54% higher accuracy without iterative training to recover the weights; and unlike NS and NPPM, which empower channel selection by manually modifying the network, our method is more applicable and performs better.
Table 1(a): Pruning results on CIFAR-10 (PR: pruning ratio).

Method | Acc (%) | Acc Drop (%) | PR (%)
---|---|---|---
MobileNetv1 | | |
QPNN (2020) | 91.53 | -0.22 | 37.50
NS (2017) | 90.29 | 1.02 | 46.30
NPPM (2021) | 90.56 | 0.75 | 67.79
SNPFI (ours) | 90.42 | 0.89 | 63.00
ResNet-50 | | |
TAS (2019) | 93.69 | -0.77 | 52.70
AMC (2018b) | 91.90 | 1.83 | 50.00
Greg-2 (2021) | 93.36 | -0.44 | 60.80
NS (2017) | 93.44 | -0.52 | 57.80
NPPM (2021) | 93.48 | -0.56 | 58.30
SNPFI (ours) | 95.90 | -2.98 | 64.50
Table 1(b): Pruning results on ImageNet (PR: pruning ratio).

Method | Acc (%) | Acc Drop (%) | PR (%)
---|---|---|---
MobileNetv1 | | |
AMC (2018b) | 73.00 | -2.94 | 56.00
NS (2017) | 68.97 | 1.09 | 62.00
SNPFI (ours) | 71.93 | -1.87 | 52.50
ResNet-50 | | |
TAS (2019) | 76.20 | 1.26 | 43.50
AMC (2018b) | 76.11 | 1.35 | 50.00
Greg-2 (2021) | 76.13 | 1.32 | 32.90
LeGR (2019) | 75.70 | 1.76 | 42.00
SFP (2018a) | 74.61 | 2.85 | 41.80
NPPM (2021) | 75.96 | 1.50 | 56.00
SNPFI (ours) | 78.80 | -1.34 | 53.60
Pruning on ImageNet. As shown in Table 1(b), SNPFI achieves the best accuracy when pruning generously on ResNet-50. Compared with the AutoML methods, SNPFI prunes about 10% more filters than TAS and reaches 2.39% higher accuracy than AMC; compared with the weight-based pruning methods, SNPFI prunes about 10% more channels than Greg-2 and LeGR and exceeds their accuracy by more than 2%; compared with the feature-based pruning methods, SNPFI outperforms SFP and NPPM under a comparable pruning ratio. When pruning MobileNetv1, SNPFI achieves a compression result comparable to AMC and NS.
RGB model for single-band images. According to Tables 2(a) and 2(b), we can experimentally show that the architectural sparsity differs between single-band and multi-band images. On the one hand, different pruning methods generalize well with less than 45% of the original computation when processing single-band images, as shown in Table 2(a), which is lower than the empirical value (60% at most [34]). On the other hand, the redundancy does lie at the architecture level when processing single-band images with an RGB model. As shown in Table 2(b), applying SNPFI reduces 12% more computation overhead in ResNet-50 for gray-scale images than for RGB images. The accuracy disparity between the RGB and the gray-scale input might be caused by the absence of color information in the gray-scale image, which is vital for object recognition.
Table 2(b): Pruning ResNet-50 with SNPFI on RGB and gray-scale inputs.

Image mode | Acc (%) | Acc Drop (%) | PR (%)
---|---|---|---
RGB | 95.90 | -2.98 | 63.00
Gray-scale | 93.06 | -1.02 | 75.51
5 Conclusion
In this paper, we discussed redundancy from the filter-interaction perspective. On the one hand, our proposed redundancy metric can effectively predict a model's inherent sparsity and suggest the proper model for a real application. On the other hand, computing the filter utilization strength in our redundancy metric can be time-consuming. Since the number of filters grows with the depth of a CNN, the number of interaction combinations can be enormous. In the future, a considerable reduction in computation could be achieved by clustering filters and evaluating the filter-wise interaction only among representative filters of distinct clusters.
References
- Aghli and Ribeiro [2021] Nima Aghli and Eraldo Ribeiro. Combining weight pruning and knowledge distillation for cnn compression. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3191–3198, 2021.
- Ancona et al. [2020] Marco Ancona, Cengiz Öztireli, and Markus Gross. Shapley value as principled metric for structured network pruning. arXiv preprint arXiv:2006.01795, 2020.
- Anwar et al. [2017] Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured pruning of deep convolutional neural networks. ACM Journal on Emerging Technologies in Computing Systems (JETC), 13(3):1–18, 2017.
- Arjona-Medina et al. [2019] Jose A Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, and Sepp Hochreiter. Rudder: Return decomposition for delayed rewards. Advances in Neural Information Processing Systems, 32, 2019.
- Asif et al. [2019] Umar Asif, Jianbin Tang, and Stefan Harrer. Ensemble knowledge distillation for learning improved and efficient networks. arXiv preprint arXiv:1909.08097, 2019.
- Bu et al. [2021] Jie Bu, Arka Daw, M Maruf, and Anuj Karpatne. Learning compact representations of neural networks using discriminative masking (dam). Advances in Neural Information Processing Systems, 34:3491–3503, 2021.
- Chalkiadakis et al. [2012] Georgios Chalkiadakis, Edith Elkind, and Michael Wooldridge. Cooperative game theory: Basic concepts and computational challenges. IEEE Intelligent Systems, 27(3):86–90, 2012.
- Chin et al. [2019] T. W. Chin, R. Ding, C. Zhang, and D. Marculescu. Towards efficient model compression via learned global ranking. arXiv, 2019.
- Cui et al. [2019] Tianyu Cui, Pekka Marttinen, and Samuel Kaski. Learning global pairwise interactions with bayesian neural networks. In European Conference on Artificial Intelligence, 2019.
- Deng et al. [2021] Huiqi Deng, Qihan Ren, Hao Zhang, and Quanshi Zhang. Discovering and explaining the representation bottleneck of dnns. arXiv preprint arXiv:2111.06236, 2021.
- Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
- Dong and Yang [2019] Xuanyi Dong and Yi Yang. Network Pruning via Transformable Architecture Search. arXiv e-prints, art. arXiv:1905.09717, May 2019. doi: 10.48550/arXiv.1905.09717.
- Douzas and Bacao [2018] Georgios Douzas and Fernando Bacao. Effective data generation for imbalanced learning using conditional generative adversarial networks. Expert Systems with Applications, 2018.
- Gao et al. [2021] Shangqian Gao, Feihu Huang, Weidong Cai, and Heng Huang. Network pruning via performance maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9270–9280, 2021.
- Grabisch and Roubens [1999] Michel Grabisch and Marc Roubens. An axiomatic approach to the concept of interaction among players in cooperative games. International Journal of Game Theory, 28:547–565, 1999.
- Han et al. [2015] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28, 2015.
- He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- He et al. [2018a] Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. Soft filter pruning for accelerating deep convolutional neural networks. arXiv preprint arXiv:1808.06866, 2018a.
- He et al. [2019] Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, and Yi Yang. Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4340–4349, 2019.
- He et al. [2018b] Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In Proceedings of the European conference on computer vision (ECCV), pages 784–800, 2018b.
- Heaton [2017] Jeff Heaton. Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep learning. Genetic Programming and Evolvable Machines, pages 1–3, 2017.
- Hinton et al. [2015] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
- Howard et al. [2017] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
- Ioffe and Szegedy [2015] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR.org, 2015.
- Janizek et al. [2021] Joseph D Janizek, Pascal Sturmfels, and Su-In Lee. Explaining explanations: Axiomatic feature interactions for deep networks. The Journal of Machine Learning Research, 22(1):4687–4740, 2021.
- Jia et al. [2021] Xinyu Jia, Chuang Zhu, Minzhen Li, Wenqi Tang, and Wenli Zhou. Llvip: A visible-infrared paired dataset for low-light vision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3496–3504, 2021.
- Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
- Krizhevsky et al. [2017] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
- LeCun et al. [1989] Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. Advances in neural information processing systems, 2, 1989.
- LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
- Lee et al. [2020] Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin. Layer-adaptive sparsity for the magnitude-based pruning. arXiv preprint arXiv:2010.07611, 2020.
- Lillicrap et al. [2015] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
- Lin et al. [2020] Mingbao Lin, Rongrong Ji, Yuxin Zhang, Baochang Zhang, Yongjian Wu, and Yonghong Tian. Channel pruning via automatic structure search. arXiv preprint arXiv:2001.08565, 2020.
- Liu et al. [2017] Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE international conference on computer vision, pages 2736–2744, 2017.
- Lundberg and Lee [2017] S. Lundberg and S. I. Lee. A unified approach to interpreting model predictions. In Nips, 2017.
- Paupamah et al. [2020] Kimessha Paupamah, Steven James, and Richard Klein. Quantisation and pruning for neural network compression and regularisation. In 2020 International SAUPEC/RobMech/PRASA Conference, pages 1–6, 2020. doi: 10.1109/SAUPEC/RobMech/PRASA48453.2020.9041096.
- Rozemberczki et al. [2022] Benedek Rozemberczki, Lauren Watson, Péter Bayer, Hao-Tsung Yang, Olivér Kiss, Sebastian Nilsson, and Rik Sarkar. The shapley value in machine learning. arXiv preprint arXiv:2202.05594, 2022.
- Ruder [2016] Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
- Shannon [1948] Claude E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27:623–656, 1948.
- Singh et al. [2018] Chandan Singh, W James Murdoch, and Bin Yu. Hierarchical interpretations for neural network predictions. arXiv preprint arXiv:1806.05337, 2018.
- Song et al. [2016] H. Song, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016.
- Sundararajan et al. [2017] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International conference on machine learning, pages 3319–3328. PMLR, 2017.
- Sutskever et al. [2013] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139–1147. PMLR, 2013.
- Tsang et al. [2018] Michael Tsang, Dehua Cheng, and Yan Liu. Detecting statistical interactions from neural network weights. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=ByOfBggRZ.
- Wang et al. [2021] H. Wang, C. Qin, Y. Zhang, and Y. Fu. Neural pruning via growing regularization. In International Conference on Learning Representations, 2021.
- Yao et al. [2018] Quanming Yao, Mengshuo Wang, Yuqiang Chen, Wenyuan Dai, Yu-Feng Li, Wei-Wei Tu, Qiang Yang, and Yang Yu. Taking human out of learning applications: A survey on automated machine learning. arXiv preprint arXiv:1810.13306, 2018.
- Ye et al. [2018] Jianbo Ye, Xin Lu, Zhe Lin, and James Z. Wang. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. In 6th International Conference on Learning Representations, ICLR 2018,Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.
- Yingge et al. [2020] Huo Yingge, Imran Ali, and Kang-Yoon Lee. Deep neural networks on chip-a survey. In 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), pages 589–592. IEEE, 2020.
- Yu et al. [2022] Sixing Yu, Arya Mazaheri, and Ali Jannesari. Topology-aware network pruning using multi-stage graph embedding and reinforcement learning. In International Conference on Machine Learning, pages 25656–25667. PMLR, 2022.
- Zhang et al. [2018] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV), pages 286–301, 2018.
- Zhong et al. [2022] Shaochen Zhong, Guanqun Zhang, Ningjia Huang, and Shuai Xu. Revisit kernel pruning with lottery regulated grouped convolutions. In International Conference on Learning Representations, 2022.
- Zmora et al. [2019] Neta Zmora, Guy Jacob, Lev Zlotnik, Bar Elharar, and Gal Novik. Neural network distiller: A python package for dnn compression research. arXiv preprint arXiv:1910.12232, 2019.
- Zou et al. [2018] Junhua Zou, Ting Rui, You Zhou, Chengsong Yang, and Sai Zhang. Convolutional neural network simplification via feature map pruning. Comput. Electr. Eng., 70:950–958, 2018.