
Structured Network Pruning by Measuring Filter-wise Interactions

Wenting Tang
Beijing Key Laboratory of Digital Media,
School of Computer Science and Engineering,
Beihang University
Beijing, China
wtang13@buaa.edu.cn
Xingxing Wei
Beijing Key Laboratory of Digital Media,
School of Computer Science and Engineering,
Beihang University
Beijing, China
xxwei@buaa.edu.cn
Bo Li
Beijing Key Laboratory of Digital Media,
School of Computer Science and Engineering,
Beihang University
Beijing, China
boli@buaa.edu.cn
Abstract

Structured network pruning is a practical approach to reducing computation cost directly while retaining the CNN's generalization performance in real applications. However, identifying redundant filters is a core problem in structured network pruning, and current redundancy criteria only focus on individual filters' attributes. When the pruning sparsity increases, these redundancy criteria are no longer effective or efficient enough. Since the filter-wise interaction also contributes to the CNN's prediction accuracy, we integrate the filter-wise interaction into the redundancy criterion. In our criterion, we introduce the filter importance and the filter utilization strength to reflect the decision ability of individual and multiple filters. Utilizing this new redundancy criterion, we propose a structured network pruning approach, SNPFI (Structured Network Pruning by measuring Filter-wise Interaction). During pruning, SNPFI can automatically assign the proper sparsity based on the filter utilization strength and eliminate useless filters by the filter importance. After pruning, SNPFI can recover the pruned model's performance effectively without iterative training by minimizing the interaction difference. We empirically demonstrate the effectiveness of SNPFI with several commonly used CNN models, including AlexNet, MobileNetv1, and ResNet-50, on various image classification datasets, including MNIST, CIFAR-10, and ImageNet. For all experimental CNN models, nearly 60% of the computation is reduced in a single compression pass while the classification accuracy is maintained.

1 Introduction

Network pruning can transfer pre-trained heavy CNN models into a lightweight form with comparable precision by removing the models' inherent redundancy [5]. On the one hand, this transformation effectively boosts the real-time performance of CNNs on edge devices. On the other hand, network pruning prevents over-fitting by directly eliminating unnecessary parameters [21]. Structured network pruning removes groups of parameters at various granularities (e.g., kernel, filter). Since network pruning at the filter level of granularity requires no specially designed hardware accelerator, filter-wise structured network pruning has become a current research focus [3, 19, 52].

Figure 1: (a) Parts of the binary filter interactions of the 123rd layer of ResNet-50 [17] during inference on CIFAR-10 [27]. The filters are sorted by the L1-norm of their weights in descending order [16]. The binary interactions among the important filters are in the green square and the interactions among the subtle filters are in the yellow square; filters' influences on the output are computed by [40] and normalized. (b) The compression performance on CIFAR-10 of state-of-the-art pruning algorithms [14, 34, 12, 20, 45] and SNPFI (ours). The red arrow indicates better compression results.

Identifying redundant filters is a core problem in structured network pruning. A filter is redundant if its contribution to the performance is subtle. Because either the filter's weight or its output feature can serve as a clue to discriminate its redundancy, structured network pruning algorithms can be categorized into weight-based [29, 51, 16] and feature-based approaches [53, 34, 6]. Assuming the vital filters have large weights, LWCE [16] identifies redundant filters by the L1-norm. Considering the similarity among filters in the feature space, FPGM [19] ranks filters by their difference from the geometric median. When the pruning intensity is modest, these static criteria can efficiently identify redundancy within a single evaluation; thus, they are also known as one-shot pruning. Under high pruning intensity, one-shot pruning approaches struggle to recover the performance effectively. Dynamic evaluation overcomes this drawback by measuring redundancy with more caution. Ye et al. [47] search the redundant filters multiple times and recover the weights through iterative training. Current feature-based approaches dynamically evaluate filters during inference and training [34, 14]. Network slimming [34] embeds filter selector layers into the original CNN to improve the accuracy of the redundancy discrimination. Although various redundancy criteria exist, how to effectively and efficiently identify redundancy is still an unsettled problem.

Solely focusing on filters' unitary attributes, the above redundancy criteria neglect the filter-wise interaction when measuring a filter's contribution to the prediction. Many network interpretation studies have shown that filter-wise interactions do contribute to the prediction [44, 25, 9]. On the one hand, each filter's interactions can reflect its inherent decision ability [25]. On the other hand, filter-wise interactions are universal [40], and functionally close-coupled layers exist in common CNNs, as shown in Figure 1-(a). In this scenario, filters of various importance have to interact to contribute positively to the output. These observations motivate us to integrate the filter-wise interaction into our redundancy criterion. As shown in Figure 1-(a), important filters (green square) can benefit considerably from interacting with the subtle filters (yellow square). Thus, only measuring the unitary attributes of filters cannot effectively identify redundancy. Since the useless filters are functionally loosely-coupled (blue region) in the layer, measuring the strength of interaction among multiple filters can avoid excluding filters that seem useless but still contribute.

To identify the functionally loosely-coupled filters, our redundancy criterion measures the filter importance and the filter utilization strength through the filter-wise interaction. The contributions of individual filters or groups of filters to the output reflect the importance of each filter and the strength of their interaction. To distribute the individual and group contributions fairly, we model the filter-wise interaction with cooperative game theory [7]. Since the number of filters differs across layers, we refer to the normalized strength of interaction among multiple filters as the filter utilization strength. Because the filter utilization strength reflects the decision ability of a group of filters, we can provide a theoretical lower bound of sparsity with respect to the performance. Moreover, we identify the interactions that might cause a generalization gap by comparing the interaction results of the pruned model and the original model. For each pruned computational module, we refer to this disparity as the interaction difference, which quantifies the potential generalization gap, and we fine-tune the pruned model with the interaction differences to boost the optimization effectively. In this way, we can maintain the performance of the pruned model under our redundancy criterion.

Figure 2: Overview of the proposed SNPFI. The blue parallelogram is the computational module before and after pruning. Each computational module consists of a few filters (blue circle). According to our redundancy criterion, the RL module can predict the proper layer-wise sparsity efficiently by the filter utilization strength; the Layer-wise pruning module can identify the relative redundant filters accurately by the filter importance; the Fine-tuning module can recover performance effectively by the interaction difference.

To attain the optimal combination of layer-wise pruning sparsities, we propose a structured network pruning approach, SNPFI (Structured Network Pruning by measuring Filter-wise Interaction), shown in Figure 2 and based on our redundancy criterion. SNPFI includes three parts: the RL (reinforcement learning) module, the Layer-wise Pruning module, and the Fine-tuning module. The RL module predicts the pruning sparsity layer by layer and consists of the Environment and the Agent. For each computational module, the Environment first generates the state and the sparsity lower bound via the filter utilization strength; then the Agent predicts the sparsity from the state and the sparsity lower bound; in the end, the Environment rewards this pruning decision in constant time. Once the pruning is finished, the Agent is updated with the pruning decisions and rewards. To alleviate the delayed reward problem of model-free reinforcement learning in the pruning scenario [4], we formulate the reward function by the filter utilization strength. In this way, SNPFI can automatically and efficiently optimize the compression plan. The Layer-wise Pruning module removes the redundant filters based on the predicted sparsity: during layer-wise pruning, the redundant filters are removed according to the filter importance and the sparsity. The Fine-tuning module recovers the weights of the pruned model according to the interaction difference. In this way, we can ensure the pruned model acquires the meaningful interaction behaviors inherent in the original model even under high pruning intensity. As shown in Figure 1-(b), SNPFI outperforms other state-of-the-art pruning approaches even with a more limited computation overhead.

The contributions of our work are as follows:

  • We propose a redundancy criterion by filter-wise interaction and theoretically and experimentally prove its effectiveness. In this new redundancy criterion, the filter importance and filter utilization strength imply the decision ability of individual and multiple filters. According to our redundancy criterion, the interaction difference abstracts the potential generalization gap due to the pruning and effectively guides the weight recovery of the pruned model;

  • We present a model compression algorithm, SNPFI, which prunes efficiently and effectively. Utilizing our redundancy criterion and the interaction difference, SNPFI avoids pruning relatively useful filters and recovers the performance by minimizing the generalization gap caused by pruning;

  • We experimentally demonstrate the effectiveness of SNPFI on MNIST, CIFAR-10, and ImageNet across AlexNet, MobileNetv1, and ResNet-50. SNPFI can consistently prune around 60% of the original model's filters (20% higher than the empirical value) without significant performance decline in a single-pass pruning process.

2 Related work

Weight-based and Feature-based network pruning. Network pruning is a process that decouples useless structures from the CNN according to a redundancy criterion. Unitary attributes (e.g., weight, feature) can spot redundancy well under a conservative pruning sparsity [29, 19, 51]. OBD [29] measures neurons' importance by the second-order derivative of the weights; TMI-GKP [51] evaluates redundancy by the weight similarities among filters. Since the coupling degrees of different computational modules might vary across applications [40, 25], static evaluations of redundancy are not applicable when the pruning intensity increases [47]. Therefore, current feature-based pruning approaches design filter selectors to decouple the useless filters at the expense of extra computation [14, 34]. NPPM [14] can foresee the future impact of any layer-wise pruning on accuracy, but requires extra training of the filter selector. Thus, how to identify redundancy effectively and efficiently under high pruning intensity is still an unsettled problem. We address this problem with a new redundancy criterion based on the filter-wise interaction in Section 3.1.

Structured pruning and AutoML. Structured pruning [3] removes groups of neuron connections at once, which means that, when pruning at filter granularity, the preceding layer's output channels shrink together with the current layer's input channels. Since the computation cost decreases exponentially when scheduling the model's compression plan at the filter level, AutoML (automated machine learning) [46] can effectively assign layer-wise sparsities through repeated retraining of each pruned model [33]. Since repeated retraining is necessary to link accuracy and sparsity, reinforcement learning approaches also struggle to balance exploration and exploitation due to the delayed reward problem [20] or a lack of exploration [49]. The delayed reward problem refers to reinforcement learning tasks with sparse or episodic rewards [4]. Aiming to alleviate the delayed reward problem in real pruning scenarios, we formulate the reward function by the filter utilization strength in Section 3.2.

3 Method

To identify redundancy effectively, we introduce a new redundancy criterion based on the filter-wise interaction in Section 3.1; to demonstrate the feasibility of our redundancy criterion, we propose a reinforcement learning-based structured network pruning approach, SNPFI, in Section 3.2; to minimize the performance disparity between the original and the pruned model, we propose the interaction difference for fine-tuning in Section 3.3.

3.1 Measuring the redundancy

In the pruning scenario, there are two types of filters in each computational module: the redundant and the useful. Taking a CNN with $n$ computational layers as an example, the $l$-th layer is a set of filters $N_l=\{F_i^l \mid i=1,\dots,c_{out}^l\}$, where each filter is $F_i^l\in\mathbb{R}^{c_{in}^l\times k^l\times k^l}$, $c_{out}^l$ and $c_{in}^l$ are the $l$-th layer's output and input channel numbers, and $k^l$ is the kernel size. The layer's pruning sparsity indicates the ratio of the number of redundant filters to all filters in $N_l$ [3]. According to the redundancy criterion and the $l$-th layer's pruning sparsity, if $r^l$ filters are useful, then the pruning sparsity is $\frac{c_{out}^l-r^l}{c_{out}^l}$. The useful filter set $S_l$ of the $l$-th layer is $S_l=\{F_i^l \mid i\in IMP\}$, where $IMP$ is the set of indices of the useful filters identified by their importance according to the specific redundancy criterion. During inference, each filter consumes a comparable amount of computation but contributes to the output differently. Since the redundant filters' contributions are much lower than those of the useful filters, current pruning algorithms assume that the influence of pruning a redundant filter is subtle [19, 41, 3]. However, these approaches neglect the contributions of filter-wise interactions during the redundancy evaluation.

Since a relatively useless filter might have a considerable collaborative contribution to the prediction [25, 40], we need to fairly quantify the impact of the potential filter-wise interactions on the prediction. To achieve this goal, we regard each inference process achieved by $m$ filters in the $l$-th layer as a collaborative game $\langle M_l, V\rangle$ [15]. During inference on an image $I$, each filter is a player and $m$ players form the coalition $M_l$ with contribution $V(M_l)$, where $m=|M_l|\in[1,c_{out}^l]$, $V(M_l)=\log\frac{P(\hat{y}=cls\mid M_l,I)}{1-P(\hat{y}=cls\mid M_l,I)}$ [10], and $\hat{y}$ is the prediction with $M_l$. According to $V(M_l)$ for each coalition $M_l$, we can distribute the contributions to the output when multiple filters interact. In this way, we can quantify each filter's importance and each layer's filter utilization strength to identify the functionally loosely-coupled filters. To fairly measure the individual contribution of each filter, we formulate the importance of the $c$-th filter in the $l$-th layer by the Shapley value [37] in Eq.(1).

$$t_c^l=\left|\sum_{c\in M_l,\,M_l\subseteq N_l}\frac{|M_l|!\,(c_{out}^l+1-|M_l|)!}{c_{out}^l!}\,\Delta V(M_l,c)\right| \qquad (1)$$

where $\Delta V(M_l,c)=V(M_l)-V(M_l-\{c\})$. When the $c$-th filter forms $M_l$ with other filters in the $l$-th layer, it might positively contribute to the prediction. Considering the potential contribution of each filter, the filter importance $t_c^l$ assigns a small value to a relatively useless filter. As proven in [2], $t_c^l$ is unique and fair to each filter. Therefore, we can fairly discriminate the relatively useless filters from the useful ones.
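To make this concrete, the sketch below shows one way the filter importance $t_c^l$ in Eq.(1) could be approximated by Monte Carlo permutation sampling, the standard estimator for Shapley values; the exact sampling scheme used in SNPFI is not specified here. The helper `coalition_value` and the masked forward pass `forward_with_mask` are hypothetical stand-ins for the model-specific soft pruning that evaluates $V(M_l)$.

```python
import random
import torch

def coalition_value(model, image, label, layer, keep_idx):
    """V(M_l): log-odds of the correct class when only the filters in keep_idx
    are active in `layer` (the rest are zeroed). `forward_with_mask` is a
    hypothetical helper that soft-prunes the layer before the forward pass."""
    with torch.no_grad():
        logits = model.forward_with_mask(image, layer, keep_idx)
        p = torch.softmax(logits, dim=-1)[0, label].clamp(1e-6, 1 - 1e-6)
    return torch.log(p / (1 - p)).item()

def filter_importance(model, image, label, layer, n_filters, n_samples=64):
    """Monte Carlo estimate of the Shapley value t_c^l in Eq.(1): the average
    marginal contribution of each filter over random filter permutations."""
    t = [0.0] * n_filters
    for _ in range(n_samples):
        perm = random.sample(range(n_filters), n_filters)
        kept = []
        v_prev = coalition_value(model, image, label, layer, kept)
        for c in perm:
            kept = kept + [c]
            v_curr = coalition_value(model, image, label, layer, kept)
            t[c] += (v_curr - v_prev) / n_samples
            v_prev = v_curr
    return [abs(x) for x in t]  # Eq.(1) takes the absolute value
```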

To fairly distribute the contribution of interaction, we first introduce the filter interaction $u_l^d(i,j)$, i.e., the contribution of the interaction among at least two distinct filters. In this scenario, the coalition $M_l$ has to consist of $m=d+2$ filters, where $\{i,j\}\subseteq M_l$, $i,j\in[1,c_{out}^l]$, $i\neq j$, $d\in[0,c_{out}^l-2]$, and $m\in[2,c_{out}^l]$. Utilizing the Shapley interaction index [15], the filter interaction $u_l^d(i,j)$ between $i$ and $j$, when the other $d$ filters exist, is defined in Eq.(2).

$$u_l^d(i,j)=\sum_{\{i,j\}\subseteq M_l\subseteq N_l,\,|M_l|=d+2}\Delta V(i,j,M_l) \qquad (2)$$

where $\Delta V(i,j,M_l)=(V(M_l)-V(M_l-\{j\}))-(V(M_l-\{i\})-V(M_l-\{i,j\}))$. The larger $u_l^d(i,j)$, the stronger the interaction when $i$ and $j$ form a coalition with the other $d$ filters. Notably, it can be proved that $u_l^d(i,j)$ is unique and fair among all coalitions [37]. With $u_l^d(i,j)$, we can measure the filter utilization strength $U_l(m)$ of the $l$-th layer in Eq.(3).

$$U_l(m)=\frac{\sum_{q=0}^{m-2}\mathbb{E}_{I\in\Omega}\,\mathbb{E}_{i,j\in N_l}[u_l^q(i,j)]}{\sum_{p=0}^{c_{out}^l-2}\mathbb{E}_{I\in\Omega}\,\mathbb{E}_{i,j\in N_l}[u_l^p(i,j)]} \qquad (3)$$

where $\Omega$ is the calibration dataset and $|\Omega|=256$ by default. A high value of $U_l(m)$ indicates that the interaction strength is intensive when $m$ filters exist. If subtle filters exist, $U_l(m)$ achieves a high value with a relatively small $m$. In this way, we can estimate the number of useless filters by $U_l(m)$.
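As a rough illustration of Eq.(2) and Eq.(3), the sketch below estimates the pairwise interaction $u_l^d(i,j)$ by sampling random coalitions of the other $d$ filters, and then accumulates expected interactions into the normalized utilization strength $U_l(m)$. It reuses the hypothetical `coalition_value` from the previous sketch, and `u_mean[d]` is assumed to already hold $\mathbb{E}_{I\in\Omega}\mathbb{E}_{i,j\in N_l}[u_l^d(i,j)]$ for $d=0,\dots,c_{out}^l-2$.

```python
import random

def interaction_index(model, image, label, layer, i, j, d, n_filters, n_samples=16):
    """Monte Carlo estimate of the Shapley interaction u_l^d(i, j) in Eq.(2):
    the marginal effect of adding j on top of i within random coalitions
    that contain d other filters (|M_l| = d + 2)."""
    others = [c for c in range(n_filters) if c not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        ctx = random.sample(others, d)  # the other d filters in the coalition
        v_ij = coalition_value(model, image, label, layer, ctx + [i, j])
        v_i = coalition_value(model, image, label, layer, ctx + [i])
        v_j = coalition_value(model, image, label, layer, ctx + [j])
        v_0 = coalition_value(model, image, label, layer, ctx)
        total += (v_ij - v_i) - (v_j - v_0)
    return total / n_samples

def utilization_strength(u_mean, m):
    """Eq.(3): cumulative interaction strength when m filters exist, normalized
    by the whole layer. u_mean[d] holds the expected u_l^d(i, j) for each d."""
    return sum(u_mean[:m - 1]) / sum(u_mean)  # numerator sums d = 0 .. m - 2
```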

3.2 Filter-wise interaction based structured network pruning

In the pruning scenario, approximating the optimal pruning plan $S^*$ can be time-consuming. Since each layer's pruning sparsity and the accuracy of the pruned model are non-linearly related, previous studies spend a considerable amount of computation sampling the best accuracy achieved under the corresponding pruning sparsity [14, 33, 49]. Based on our redundancy criterion, we can estimate each layer's sparsity lower bound from expert experience. Given the desired filter utilization strength $\theta$ for maintaining the basic functionality after pruning, the lower bound of the $l$-th layer's sparsity $s_{lb}^l$ is estimated in Eq.(4).

$$\min_{m}\; s_{lb}^l=\frac{m+2}{c_{out}^l}\quad \text{s.t.}\; U_l(m)\geqslant\theta,\; m\in[2,c_{out}^l]\land m\in\mathbb{Z}^{+} \qquad (4)$$
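A minimal sketch of how the lower bound in Eq.(4) could be computed from the pre-computed utilization curve, assuming `utilization_strength` and `u_mean` from the sketch above; the fallback of keeping the whole layer when no $m$ satisfies the constraint is our own assumption.

```python
def sparsity_lower_bound(u_mean, c_out, theta=0.5):
    """Eq.(4): the smallest m whose utilization strength reaches theta gives
    the lower bound s_lb^l = (m + 2) / c_out for this layer."""
    for m in range(2, c_out + 1):
        if utilization_strength(u_mean, m) >= theta:
            return min((m + 2) / c_out, 1.0)
    return 1.0  # assumption: no feasible m, so keep the whole layer
```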

According to $s_{lb}^l$ and $t_c^l$, the relatively useful and important filters always remain after the layer-wise pruning, as shown in the dotted box of Figure 2. However, scheduling the layer-wise pruning plan only by expert experience might settle for a local optimum [20, 49]. The objective function of the scheduling process is given in Eq.(5).

$$S^*=\min_{S} COMP(S)\quad \text{s.t.}\; ACC(S)\geqslant\alpha,\; S=\{s^l\mid l=1,\dots,n\},\; s^l\in(0,1.0] \qquad (5)$$

where $COMP(S)$ and $ACC(S)$ are the computation cost and the accuracy of the pruned model following the pruning plan $S$, and $\alpha$ is the required validation accuracy for the pruned model. To efficiently approximate $S^*$ over the enormous number of combinations of $S$, we integrate $s_{lb}^l$ into Eq.(5):

$$S^*=\min_{S} COMP(S)\quad \text{s.t.}\; ACC(S)\geqslant\alpha,\; S=\{s^l\mid l=1,\dots,n\},\; s^l\in[s_{lb}^l,1.0] \qquad (6)$$

According to Eq.(5) and Eq.(6), it is straightforward to see that the search space shrinks sharply and becomes feasible for an RL algorithm. Therefore, we integrate our filter-wise interaction-based redundancy criterion into the RL algorithm as shown in Figure 2. In the following, we explain the details of the RL module of SNPFI.

State $s^l$ is defined below for the $l$-th accessible layer:

$$s^l=\langle type,\,\#param,\,FLOPs,\,k^l,\,\mu(U_l(m)),\,\sigma(U_l(m)),\,c_{in}^l,\,c_{out}^l\rangle \qquad (7)$$

The $type$ is the type of the layer, $\#param$ is the number of parameters, and $FLOPs$ is the number of floating-point operations. The average and the standard deviation of the filter utilization strength, $\mu(U_l(m))$ and $\sigma(U_l(m))$, are included in the state.

Action $a^l$ is the pruning sparsity of the $l$-th layer. In the Agent, $a^l$ is predicted by the policy network $\pi_\epsilon$ (the Actor in Figure 2) from the state $s^l$ and bounded by $s_{lb}^l$, as defined in Eq.(8).

$$a^l=\max(s_{lb}^l,\,\pi_\epsilon(s^l)) \qquad (8)$$

where $\epsilon$ denotes the trainable parameters of $\pi$. The output range of the policy network is $(0,1]$.
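The state and action definitions above translate directly into code; the sketch below is schematic, where `layer_meta` is a hypothetical dictionary holding the static layer attributes of Eq.(7), `u_mean` and `utilization_strength` come from the earlier sketches, and `policy` stands for the actor network of Eq.(8).

```python
import statistics

def layer_state(layer_meta, u_mean):
    """Eq.(7): the RL state of one layer, combining static attributes with the
    mean and standard deviation of its filter utilization strength curve."""
    c_out = layer_meta["c_out"]
    u_curve = [utilization_strength(u_mean, m) for m in range(2, c_out + 1)]
    return [
        layer_meta["type_id"], layer_meta["n_params"], layer_meta["flops"],
        layer_meta["kernel"], statistics.mean(u_curve), statistics.pstdev(u_curve),
        layer_meta["c_in"], c_out,
    ]

def bounded_action(policy, state, s_lb):
    """Eq.(8): the predicted sparsity is clipped from below by s_lb^l."""
    return max(s_lb, float(policy(state)))
```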

Reward should convey the future effects of the subsequent layer-wise pruning on the model's computational overhead and performance. Since $U_l(m)$ is naturally related to the number of filters and their cooperative contribution to the inference, we formulate the reward function $R_l(\cdot)$ of the $l$-th layer via $U_l(m)$ and $r^l$ to alleviate the delayed reward problem [38] of the AMC [20]:

$$R_l(r^l)=\begin{cases}\frac{U_l(r^l)}{r^l}-\frac{1}{c_{out}^l}, & 1\leqslant l<n\\ \frac{\sum_{i=1}^{n-1}R_i(r^i)}{n}+\frac{c_{out}^n\times U_n(r^n)-r^n}{r^n\times c_{out}^n\times n}, & l=n\wedge ACC(S)\geqslant\alpha\\ \frac{\sum_{i=1}^{n-1}R_i(r^i)}{n}+\frac{c_{out}^n\times U_n(r^n)-r^n}{r^n\times c_{out}^n\times n}-n^2, & l=n\wedge ACC(S)<\alpha\end{cases} \qquad (9)$$

where $r^l=\lfloor c_{out}^l\times a^l\rfloor$ is the number of remaining filters and $S=\{a^l\mid l=1,\dots,n\}$ is the predicted pruning plan. For CIFAR-10 and MNIST, we evaluate $ACC(S)$ on the validation set; for ImageNet, we use the training set. When $S$ maintains the performance of the pruned model, this reward function encourages $S$ to achieve a higher filter utilization strength with fewer filters during the optimization; when $S$ violates the required performance $\alpha$, this reward penalizes the actor for the aggressive pruning sparsity. In this way, $S\to S^*$ step by step.
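The reward in Eq.(9) decomposes into an immediate per-layer term and a terminal term; the sketch below is a direct transcription under the assumption that `U` and `U_n` are callables returning $U_l(m)$ for the corresponding layer.

```python
def layer_reward(U, r, c_out):
    """Immediate reward of Eq.(9) for layers 1 <= l < n: utilization strength
    per remaining filter, offset by 1 / c_out^l."""
    return U(r) / r - 1.0 / c_out

def final_reward(prev_rewards, U_n, r_n, c_out_n, acc, alpha):
    """Terminal reward of Eq.(9) for the last layer l = n."""
    n = len(prev_rewards) + 1
    reward = sum(prev_rewards) / n
    reward += (c_out_n * U_n(r_n) - r_n) / (r_n * c_out_n * n)
    if acc < alpha:       # the pruning plan violates the accuracy constraint
        reward -= n ** 2  # heavy penalty for an over-aggressive plan
    return reward
```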

Policy Update. We utilize the DDPG [32] algorithm to optimize the pruning policy. The parameters of the policy network $\pi$ are updated by Eq.(10).

$$J(\epsilon)=\mathbb{E}_{s^l\sim\rho^\beta}[Q^\pi(s^l,\pi_\epsilon(s^l))] \qquad (10)$$

In DDPG, $\rho^\beta$ is the state distribution induced by the behavior policy $\beta$, and $Q^\pi$ is the action-value function approximated by the critic network (the Critic in Figure 2) to evaluate the current policy.
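For reference, a minimal PyTorch-style actor update under Eq.(10), assuming `actor` and `critic` are `torch.nn.Module` networks and `states` is a batch of layer states drawn from the replay buffer; the rest of the DDPG loop (critic update, target networks, exploration noise) is omitted.

```python
def actor_update(actor, critic, actor_optimizer, states):
    """One deterministic policy-gradient step for Eq.(10): maximize the critic's
    value of the actor's actions by minimizing its negation."""
    actions = actor(states)                  # pi_eps(s^l), in (0, 1]
    loss = -critic(states, actions).mean()   # -E[Q^pi(s^l, pi_eps(s^l))]
    actor_optimizer.zero_grad()
    loss.backward()
    actor_optimizer.step()
    return loss.item()
```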

3.3 Interaction difference based fine-tuning

During deployment on edge devices, removing relatively subtle filters with a considerable collaborative contribution might be inevitable. The generalization gap between the pruned and the original model emerges in this scenario. Since our redundancy criterion prevents pruning the filters that are both useful and important, this gap is caused by the distinct interactions of the pruned filters [25, 9]. Therefore, we propose to utilize the filter-wise interaction for fine-tuning, as shown in Figure 2. From the perspective of the collaborative game, the remaining $r^l$ filters and the entire $c_{out}^l$ filters form two different coalitions after pruning the $l$-th layer: $S_l$ and $N_l$. If the generalization gap exists, at least one cooperative filter exists in $\{N_l-S_l\}$. To quantify this generalization gap, we define the interaction difference $\Delta I(S_l,N_l)$ between $S_l$ and $N_l$ in Eq.(11).

$$\Delta I(S_l,N_l)=\mathbb{E}_{(S_l,N_l)}\left[\frac{r^l}{c_{out}^l}\times V(N_l)-V(S_l)\right] \qquad (11)$$
Theorem 1.

If $V(S_l),V(N_l)\in\mathbb{R}$ are such that $V(S_l)\neq V(N_l)$ for any computational layer indexed by $l$, then $\Delta I(S_l,N_l)>0$.

Based on Theorem 1, a positive interaction difference indicates the existence of meaningful interactions that lead to better generalization. Hence, we propose the ID loss in Eq.(12) to encourage the pruned model to learn these important interactions.

$$L_{ID}=-\frac{1}{n}\sum_{l=1}^{n}\sum_{cls=1}^{\mathbb{C}}P(\hat{y}=cls\mid\Delta I(S_l,N_l),I)\log P(\hat{y}=cls\mid\Delta I(S_l,N_l),I) \qquad (12)$$

where $L_{ID}$ is the Shannon entropy [39] of the classification on image $I$ based on $\Delta I(S_l,N_l)$, $I$ is sampled from the training set, $\mathbb{C}$ is the number of categories, and $\hat{y}$ is the prediction based on $\Delta I(S_l,N_l)$. By minimizing $L_{ID}$, the pruned model can eventually learn the important interactions. Since $\Delta I(S_l,N_l)$ is non-negative, $L_{ID}$ provides effective guidance while avoiding gradient explosion. Therefore, we integrate $\Delta I(S_l,N_l)$ with the ground truth during fine-tuning, as shown in Figure 2.
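A sketch of how Eq.(11) and Eq.(12) could be evaluated, assuming the per-layer contributions $V(N_l)$ and $V(S_l)$ are available (e.g., from the hypothetical `coalition_value` above) and that each layer's interaction difference has already been mapped to class logits in `delta_logits`; the exact mapping used in SNPFI's fine-tuning pipeline is only assumed here.

```python
import torch
import torch.nn.functional as F

def interaction_difference(v_full, v_kept, r, c_out):
    """Eq.(11): scaled gap between the full coalition's contribution V(N_l)
    and the kept coalition's contribution V(S_l) for one pruned layer."""
    return (r / c_out) * v_full - v_kept

def id_loss(delta_logits):
    """Eq.(12): mean Shannon entropy of the per-layer class distributions
    derived from the interaction differences (a list of [C] logit tensors)."""
    entropies = []
    for logits in delta_logits:
        p = F.softmax(logits, dim=-1)
        entropies.append(-(p * torch.log(p.clamp_min(1e-12))).sum())
    return torch.stack(entropies).mean()
```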

4 Experiments

Figure 3: Left: TOP-1 accuracy on CIFAR-10 (up) and MNIST (bottom) after pruning via the filter utilization strength $U_l(m)$ and the sparsity in AlexNet. Notably, we evaluate accuracy on the soft-pruned [18] AlexNet without fine-tuning; Right: filter utilization strength in various layers of AlexNet (up) and MobileNetv1 (bottom) on CIFAR-10 (green) and MNIST (purple).

We empirically demonstrate the effectiveness of our proposed redundancy metric and the SNPFI on MNIST [30], CIFAR-10 [27], and ImageNet [11]. We implemented our method based on the publicly available Torch implementation for ResNet-50 [17], AlexNet [28], and MobileNetv1 [23]. As a commonly used lightweight network, MobileNetv1 is used to test the reliability of SNPFI when pruning on a sparse architecture. AlexNet and ResNet-50 are used to test the commonality of SNPFI with different densities of network connections.

Baselines. We mainly compare our approach with feature-based pruning: NS (Network slimming) [34] and NPPM [14]. NS is a one-shot pruning algorithm and requires no extra training after each pruning step; NPPM requires training a feature selection network. To demonstrate the efficiency of SNPFI, weight-based pruning [36, 45, 8, 18] and AutoML methods [12, 20] are also included.

Pruning. To optimize the SNPFI agent, we use the Adam algorithm for both the actor $\pi_\epsilon$ and the critic network, with learning rates of $10^{-4}$ for the actor and $10^{-5}$ for the critic. For fine-tuning, we utilize our proposed ID loss, weighting it and the original loss as 0.5 and 1, respectively, and use SGD with a Nesterov momentum of 0.9. For AlexNet and MobileNetv1 on MNIST, 15 epochs are used with an initial learning rate of 5e-5; for MobileNetv1 and ResNet-50 on CIFAR-10, 120 epochs are used with an initial learning rate of 8e-4; for MobileNetv1 and ResNet-50 on ImageNet, 160 epochs are used with an initial learning rate of 1e-3.

4.1 Reliability of Filter-wise interaction

Effectiveness of the filter utilization strength. Ideally, the number of filters has a linear relationship with the accuracy, which would mean that a lower sparsity always leads to better performance. In reality, the accuracy does not increase monotonically with the proportion of total filters, as shown for the shallow and middle layers of AlexNet (Figure 3-left). Thus, assigning all layers the same sparsity is not effective. Since the accuracy increases monotonically with the filter utilization strength $U_l(m)$, a higher $U_l(m)$ can ensure better accuracy and is more effective than the sparsity. In this way, our reward function in Eq.(9) can effectively alleviate the delayed reward problem. Meanwhile, $U_l(m)$ converges around 0.5 in diverse layers and applications, as shown in Figure 3-left. Thus, we can estimate the sparsity lower bound $s_{lb}^l$ in Eq.(4) with a uniform filter utilization strength $\theta$ for various layers. Since the filter-wise interactions are diverse among layers, the identical value of $U_l(m)$ implies different $s_{lb}^l$. We utilize this property to boost the optimization of the pruning plan as shown in Eq.(5).

Figure 4: Left: The performance recovery outcomes by plain training [38] (red), knowledge distillation [22] (black), and our proposed interaction difference (green) when pruning around 60% of the filters of MobileNetv1. Right: $U(m)$ of different layers in ResNet-50 (red) and the pruned ResNet-50 (green).

Various interactive modes. As shown in Figure 3-right, a model has three types of interactive modes: low channel first, multi-channel first, and high channel first. As the $U(m)$ of the shallow layer of AlexNet shows, 'low channel first' is the interaction mode in which a layer's $U(m)$ reaches the inflection point with only a few filters; as the $U(m)$ of AlexNet's middle layer shows, 'multi-channel first' is the interaction mode in which a layer's $U(m)$ distinctly increases multiple times as the number of filters grows; as the $U(m)$ of the deep layer of MobileNetv1 on MNIST shows, 'high channel first' is the interaction mode in which a layer's $U(m)$ distinctly soars only with a high number of channels. Since the interactive modes are diverse within the same model, scheduling an aggressive pruning sparsity for a single layer can risk the generalization performance of the whole model. Moreover, even the identical model might behave differently when applied in different scenarios. As shown in Figure 3-right, MobileNetv1 tends to interact in 'multi-channel first' mode on MNIST, but shifts from 'multi-channel first' to 'low channel first' and ends in 'high channel first' on CIFAR-10. Hence, our redundancy criterion is vital in real applications to prevent pruning the useful filters.

Effectiveness of the interaction difference. As shown in Figure 4-left, our proposed interaction difference loss $L_{ID}$ is effective for weight recovery under aggressive pruning. Knowledge distillation [22] struggles to converge when the ground truth is intricate for training the pruned model. In contrast, $L_{ID}$ steadily guides the pruned model to an increasing training accuracy without drastic fluctuation of the validation loss. As shown in Figure 4-right, the interaction difference can refine the interactive mode of the pruned model. Before pruning, ResNet-50 tends to interact in 'low channel first' mode in shallower layers and in 'multi-channel first' mode in deeper layers. After fine-tuning with $L_{ID}$, all layers tend to interact in 'multi-channel first' mode, which means the filters cooperate strongly and few redundancies exist.

4.2 Pruning models on different datasets

Pruning on CIFAR-10. As shown in Table 1(a), SNPFI reduces more than 60% of the computation of MobileNetv1 and ResNet-50 without a significant accuracy drop. In the deployment scenario, our method, nearly 2× faster than QPNN, achieves comparable performance with QPNN on an FPGA device without any particular software accelerator [48]. When compressing the highly computational network ResNet-50, SNPFI achieves the highest compression and accuracy among the state-of-the-art methods. Compared with AMC and TAS, SNPFI provides a better network architecture by overcoming the delayed reward problem; compared with Greg-2, our method realizes 3.7% more reduction and 2.54% higher accuracy without iterative training to recover the weights; without empowering the channel selection ability by manual modification of the network, our method is more applicable and performs better than NS and NPPM.

Table 1: Pruning results on CIFAR-10 (1(a)) and ImageNet (1(b)). 'Acc' is the top-1 classification accuracy on the validation set, and PR (Pruning Ratio) is computed as $1-\frac{Pruned\,FLOPs}{Original\,FLOPs}$.
(a) Pruning results on CIFAR-10.
Method Acc Acc Drop PR
MobileNetv1
QPNN (2020) 91.53% -0.22% 37.50%
NS (2017) 90.29% 1.02% 46.30%
NPPM (2021) 90.56% 0.75% 67.79%
SNPFI (ours) 90.42% 0.89% 63.00%
ResNet-50
TAS (2019) 93.69% -0.77% 52.70%
AMC (2018b) 91.90% 1.83% 50.00%
Greg-2 (2021) 93.36% -0.44% 60.80%
NS (2017) 93.44% -0.52% 57.80%
NPPM (2021) 93.48% -0.56% 58.30%
SNPFI (ours) 95.90% -2.98% 64.50%
(b) Pruning results on ImageNet.
Method Acc Acc Drop PR
MobileNetv1
AMC (2018b) 73.00% -2.94% 56.00%
NS (2017) 68.97% 1.09% 62.00%
SNPFI (ours) 71.93% -1.87% 52.50%
ResNet-50
TAS (2019) 76.20% 1.26% 43.50%
AMC (2018b) 76.11% 1.35% 50.00%
Greg-2 (2021) 76.13% 1.32% 32.90%
LeGR (2019) 75.70% 1.76% 42.00%
SFP (2018a) 74.61% 2.85% 41.80%
NPPM (2021) 75.96% 1.50% 56.00%
SNPFI (ours) 78.80% -1.34% 53.60%

Pruning on ImageNet. As shown in Table 1(b), SNPFI achieves the best accuracy when pruning generously on ResNet-50. Compared with the AutoML methods, SNPFI prunes 10% more filters than TAS and achieves 2.39% higher accuracy than AMC; compared with the weight-based pruning methods, SNPFI prunes 10% more channels than Greg-2 and LeGR and achieves over 2% higher accuracy than those methods; compared with the feature-based pruning methods, SNPFI outperforms SFP and NPPM under comparable pruning ratios. When pruning MobileNetv1, SNPFI realizes a compression result comparable to AMC and NS.

RGB model for single-band images. According to Tables 2(a) and 2(b), we can experimentally show that the architectural sparsity differs between single-band and multi-band images. On the one hand, different pruning methods generalize well with less than 45% of the original computation when processing single-band images, as shown in Table 2(a), which is lower than the empirical value (60% at most [34]). On the other hand, the redundancy does lie at the architecture level when processing single-band images with an RGB model. As shown in Table 2(b), applying SNPFI reduces 12% more computation overhead in ResNet-50 for the gray-scale image than for the RGB image. The accuracy disparity between the RGB and the gray-scale input might be caused by the absence of color information in the gray-scale image, which is vital for object recognition.

Table 2: Pruning result on MNIST (2(a)) and the ResNet-50’s sparsity when processing the input image in the RGB and gray-scale modes (2(b)).
(a) Pruning result on MNIST.
Method Acc Acc Drop PR
AlexNet
NS (2017) 99.00% -0.04% 51.20%
NPPM (2021) 99.13% -0.17% 55.10%
SNPFI (ours) 99.14% -0.18% 55.80%
MobileNetv1
NS (2017) 96.64% 0.16% 40.80%
NPPM (2021) 96.54% 0.26% 62.96%
SNPFI (ours) 97.48% -0.68% 59.10%
(b) The ResNet-50's sparsity when processing the RGB image and the gray-scale image on CIFAR-10. We use the pruning ratio (PR) to represent the sparsity. The pruning is performed by SNPFI. We generate the gray-scale image from the red band of the RGB image. The baseline model for processing gray-scale CIFAR-10 is trained from scratch with the same training settings as for the original CIFAR-10.
Image mode ACC Acc Drop PR
RGB 95.90% -2.98% 63.00%
Gray-scale 93.06% -1.02% 75.51%

5 Conclusion

In this paper, we discussed redundancy from the filter interaction perspective. On the one hand, our proposed redundancy metric can effectively predict the model's inherent sparsity and suggest the proper model for a real application. On the other hand, computing the filter utilization strength of our redundancy metric can be time-consuming: since the number of filters grows with the depth of the CNN, the number of interaction combinations can be enormous. In the future, a considerable improvement in computation could be achieved by clustering the filters and evaluating the filter-wise utilization among distinct filters only.

References

  • Aghli and Ribeiro [2021] Nima Aghli and Eraldo Ribeiro. Combining weight pruning and knowledge distillation for cnn compression. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3191–3198, 2021.
  • Ancona et al. [2020] Marco Ancona, Cengiz Öztireli, and Markus Gross. Shapley value as principled metric for structured network pruning. arXiv preprint arXiv:2006.01795, 2020.
  • Anwar et al. [2017] Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured pruning of deep convolutional neural networks. ACM Journal on Emerging Technologies in Computing Systems (JETC), 13(3):1–18, 2017.
  • Arjona-Medina et al. [2019] Jose A Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, and Sepp Hochreiter. Rudder: Return decomposition for delayed rewards. Advances in Neural Information Processing Systems, 32, 2019.
  • Asif et al. [2019] Umar Asif, Jianbin Tang, and Stefan Harrer. Ensemble knowledge distillation for learning improved and efficient networks. arXiv preprint arXiv:1909.08097, 2019.
  • Bu et al. [2021] Jie Bu, Arka Daw, M Maruf, and Anuj Karpatne. Learning compact representations of neural networks using discriminative masking (dam). Advances in Neural Information Processing Systems, 34:3491–3503, 2021.
  • Chalkiadakis et al. [2012] Georgios Chalkiadakis, Edith Elkind, and Michael Wooldridge. Cooperative game theory: Basic concepts and computational challenges. IEEE Intelligent Systems, 27(3):86–90, 2012.
  • Chin et al. [2019] T. W. Chin, R. Ding, C. Zhang, and D. Marculescu. Towards efficient model compression via learned global ranking. arXiv, 2019.
  • Cui et al. [2019] Tianyu Cui, Pekka Marttinen, and Samuel Kaski. Learning global pairwise interactions with bayesian neural networks. In European Conference on Artificial Intelligence, 2019.
  • Deng et al. [2021] Huiqi Deng, Qihan Ren, Hao Zhang, and Quanshi Zhang. Discovering and explaining the representation bottleneck of dnns. arXiv preprint arXiv:2111.06236, 2021.
  • Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
  • Dong and Yang [2019] Xuanyi Dong and Yi Yang. Network Pruning via Transformable Architecture Search. arXiv e-prints, art. arXiv:1905.09717, May 2019. doi: 10.48550/arXiv.1905.09717.
  • Douzas and Bacao [2018] Georgios Douzas and Fernando Bacao. Effective data generation for imbalanced learning using conditional generative adversarial networks. Expert Systems with Applications, 2018.
  • Gao et al. [2021] Shangqian Gao, Feihu Huang, Weidong Cai, and Heng Huang. Network pruning via performance maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9270–9280, 2021.
  • Grabisch and Roubens [1999] Michel Grabisch and Marc Roubens. An axiomatic approach to the concept of interaction among players in cooperative games. International Journal of Game Theory, 28:547–565, 1999.
  • Han et al. [2015] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28, 2015.
  • He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • He et al. [2018a] Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. Soft filter pruning for accelerating deep convolutional neural networks. arXiv preprint arXiv:1808.06866, 2018a.
  • He et al. [2019] Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, and Yi Yang. Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4340–4349, 2019.
  • He et al. [2018b] Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In Proceedings of the European conference on computer vision (ECCV), pages 784–800, 2018b.
  • Heaton [2017] Jeff Heaton. Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep learning. Genetic Programming and Evolvable Machines, pages 1–3, 2017.
  • Hinton et al. [2015] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
  • Howard et al. [2017] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
  • Ioffe and Szegedy [2015] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR.org, 2015.
  • Janizek et al. [2021] Joseph D Janizek, Pascal Sturmfels, and Su-In Lee. Explaining explanations: Axiomatic feature interactions for deep networks. The Journal of Machine Learning Research, 22(1):4687–4740, 2021.
  • Jia et al. [2021] Xinyu Jia, Chuang Zhu, Minzhen Li, Wenqi Tang, and Wenli Zhou. Llvip: A visible-infrared paired dataset for low-light vision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3496–3504, 2021.
  • Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
  • Krizhevsky et al. [2017] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
  • LeCun et al. [1989] Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. Advances in neural information processing systems, 2, 1989.
  • LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Lee et al. [2020] Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin. Layer-adaptive sparsity for the magnitude-based pruning. arXiv preprint arXiv:2010.07611, 2020.
  • Lillicrap et al. [2015] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
  • Lin et al. [2020] Mingbao Lin, Rongrong Ji, Yuxin Zhang, Baochang Zhang, Yongjian Wu, and Yonghong Tian. Channel pruning via automatic structure search. arXiv preprint arXiv:2001.08565, 2020.
  • Liu et al. [2017] Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE international conference on computer vision, pages 2736–2744, 2017.
  • Lundberg and Lee [2017] S. Lundberg and S. I. Lee. A unified approach to interpreting model predictions. In Nips, 2017.
  • Paupamah et al. [2020] Kimessha Paupamah, Steven James, and Richard Klein. Quantisation and pruning for neural network compression and regularisation. In 2020 International SAUPEC/RobMech/PRASA Conference, pages 1–6, 2020. doi: 10.1109/SAUPEC/RobMech/PRASA48453.2020.9041096.
  • Rozemberczki et al. [2022] Benedek Rozemberczki, Lauren Watson, Péter Bayer, Hao-Tsung Yang, Olivér Kiss, Sebastian Nilsson, and Rik Sarkar. The shapley value in machine learning. arXiv preprint arXiv:2202.05594, 2022.
  • Ruder [2016] Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
  • Shannon [1948] Claude E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27:623–656, 1948.
  • Singh et al. [2018] Chandan Singh, W James Murdoch, and Bin Yu. Hierarchical interpretations for neural network predictions. arXiv preprint arXiv:1806.05337, 2018.
  • Song et al. [2016] H. Song, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016.
  • Sundararajan et al. [2017] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International conference on machine learning, pages 3319–3328. PMLR, 2017.
  • Sutskever et al. [2013] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139–1147. PMLR, 2013.
  • Tsang et al. [2018] Michael Tsang, Dehua Cheng, and Yan Liu. Detecting statistical interactions from neural network weights. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=ByOfBggRZ.
  • Wang et al. [2021] H. Wang, C. Qin, Y. Zhang, and Y. Fu. Neural pruning via growing regularization. In International Conference on Learning Representations, 2021.
  • Yao et al. [2018] Quanming Yao, Mengshuo Wang, Yuqiang Chen, Wenyuan Dai, Yu-Feng Li, Wei-Wei Tu, Qiang Yang, and Yang Yu. Taking human out of learning applications: A survey on automated machine learning. arXiv preprint arXiv:1810.13306, 2018.
  • Ye et al. [2018] Jianbo Ye, Xin Lu, Zhe Lin, and James Z. Wang. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. In 6th International Conference on Learning Representations, ICLR 2018,Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.
  • Yingge et al. [2020] Huo Yingge, Imran Ali, and Kang-Yoon Lee. Deep neural networks on chip-a survey. In 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), pages 589–592. IEEE, 2020.
  • Yu et al. [2022] Sixing Yu, Arya Mazaheri, and Ali Jannesari. Topology-aware network pruning using multi-stage graph embedding and reinforcement learning. In International Conference on Machine Learning, pages 25656–25667. PMLR, 2022.
  • Zhang et al. [2018] Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV), pages 286–301, 2018.
  • Zhong et al. [2022] Shaochen Zhong, Guanqun Zhang, Ningjia Huang, and Shuai Xu. Revisit kernel pruning with lottery regulated grouped convolutions. In International Conference on Learning Representations, 2022.
  • Zmora et al. [2019] Neta Zmora, Guy Jacob, Lev Zlotnik, Bar Elharar, and Gal Novik. Neural network distiller: A python package for dnn compression research. arXiv preprint arXiv:1910.12232, 2019.
  • Zou et al. [2018] Junhua Zou, Ting Rui, You Zhou, Chengsong Yang, and Sai Zhang. Convolutional neural network simplification via feature map pruning. Comput. Electr. Eng., 70:950–958, 2018.