AutoSculpt: A Pattern-based Model Auto-pruning Framework Using Reinforcement Learning and Graph Learning
Abstract
As deep neural networks (DNNs) are increasingly deployed on edge devices, optimizing models for constrained computational resources is critical. Existing auto-pruning methods face challenges due to the diversity of DNN models, various operators (e.g., filters), and the difficulty in balancing pruning granularity with model accuracy. To address these limitations, we introduce AutoSculpt, a pattern-based automated pruning framework designed to enhance efficiency and accuracy by leveraging graph learning and deep reinforcement learning (DRL). AutoSculpt automatically identifies and prunes regular patterns within DNN architectures that can be recognized by existing inference engines, enabling runtime acceleration. Three key steps in AutoSculpt include: (1) Constructing DNNs as graphs to encode their topology and parameter dependencies, (2) embedding computationally efficient pruning patterns, and (3) utilizing DRL to iteratively refine auto-pruning strategies until the optimal balance between compression and accuracy is achieved. Experimental results demonstrate the effectiveness of AutoSculpt across various architectures, including ResNet, MobileNet, VGG, and Vision Transformer, achieving pruning rates of up to 90% and nearly 18% improvement in FLOPs reduction, outperforming all baselines. The code is available at https://anonymous.4open.science/r/AutoSculpt-DDA0
1 Introduction
Deploying resource-intensive deep neural networks (DNNs) on edge devices, such as mobile phones, robots, and self-driving cars, presents significant challenges due to limited computing resources. This makes model compression, particularly model pruning, an essential strategy [4, 58]. By removing less critical parameters through pruning, DNNs can effectively reduce their computational requirements, enhancing their suitability for resource-constrained scenarios [52, 17].

Model pruning can be categorized based on the granularity of the process into three types (see Figure 1): Unstructured pruning (or fine-grained pruning) [11, 39, 41, 34], pattern-based pruning (or semi-structured pruning) [27, 10], and structured pruning (or coarse-grained pruning) [55, 53, 7, 26, 31]. For instance, unstructured pruning can remove weights from arbitrary locations in a weight tensor, leading to an irregular distribution of non-zero weights. This irregularity requires specialized software and hardware support for effective acceleration [36, 56]. In contrast, structured pruning removes entire channels, filters (e.g., filter 1 in Figure 1), or layers, preserving a regular network structure that is easier to execute on standard inference engines. However, due to its coarser granularity, structured pruning often struggles to achieve an optimal balance between compression rate and accuracy, potentially due to the loss of critical topology information [7, 52].
Pattern-based pruning represents an intermediate approach, focusing on removing specific “regular structures” (highlighted in yellow in Figure 1) within the weight tensor that are conducive to hardware efficiency [27, 10]. The development of specialized model compilers, such as TVM [3] and TensorRT [43], has made pattern-based pruning more practical. These compilers can automatically adapt to pruned structures without manual intervention, significantly enhancing runtime acceleration by bypassing zeroed regions [56]. Nonetheless, pattern-based pruning techniques are still underexplored, indicating a pressing need for further research.
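To make this concrete, the sketch below applies a hypothetical pattern mask to a 3×3 convolution kernel in PyTorch; the pattern shapes and names are illustrative, not those of any specific method:

```python
import torch

# A hypothetical pattern library for 3x3 kernels: each binary mask keeps a
# "regular" subset of weight positions that compilers such as TVM can exploit
# by skipping the zeroed regions at runtime.
PATTERNS = {
    "cross":  torch.tensor([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=torch.float32),
    "corner": torch.tensor([[1, 1, 0], [1, 1, 0], [0, 0, 0]], dtype=torch.float32),
}

def apply_pattern(kernel: torch.Tensor, pattern: torch.Tensor) -> torch.Tensor:
    """Zero out the kernel weights outside the pattern (element-wise mask)."""
    return kernel * pattern  # broadcasts over the channel dimension

kernel = torch.randn(64, 3, 3)                 # one filter: 64 channels, 3x3
pruned = apply_pattern(kernel, PATTERNS["cross"])
```

Because every kernel assigned the same pattern shares the same zero layout, the pruned tensor stays regular enough for an inference engine to accelerate, unlike the arbitrary sparsity of unstructured pruning.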
To search for optimal pruning policies, automated approaches such as AutoML (Automated Machine Learning) [4, 14] have recently gained attention. These methods use graph learning techniques, specifically graph neural networks (GNNs), to represent or embed DNNs. The embeddings are then fed into deep reinforcement learning (DRL) algorithms to find the optimal pruning policy, achieving greater compression efficiency [14, 22, 52]. However, while existing automatic pruning methods have made significant progress, they focus primarily on structured and unstructured pruning, and pattern-based pruning approaches [27, 10, 37, 35] are limited and face three main challenges: (1) They mainly handle convolutional neural networks (CNNs) and lack generalizability to various DNN architectures, especially in representing them, ignoring parameters in architectures like Transformers [47]; (2) they are predominantly designed for operators of specific, fixed sizes (e.g., conv.) in CNNs (we unify the concepts of convolution (conv), filters, and encoder/decoder blocks under the term "operator"), which might limit their effectiveness for other shapes; (3) there is potential to more fully leverage pattern information to improve the effectiveness of the pruning strategies.
To address these challenges, we propose a universal pattern-based model auto-pruning framework, namely AutoSculpt. We first construct various pretrained DNNs as graphs and integrate regular patterns that are adaptable to diverse operators into the graph. Then, we utilize a GNN encoder to learn the graph embeddings and feed these embeddings into DRL to automate the search for optimal pruning patterns. To validate the feasibility and effectiveness of AutoSculpt, we conduct experimental evaluations across several datasets and several DNNs with different architectures, including ResNet, MobileNet, VGG, and Vision Transformer. Our results are compared against state-of-the-art (SOTA) methods, demonstrating that our framework achieves competitive performance. Specifically, we achieve a pruning ratio that exceeds all baseline methods, reaching up to 90%, while reducing FLOPs by nearly 18% more than the latest auto-pruning method, and achieving SOTA performance levels.
In summary, our contributions are as follows:
• We propose a universal pattern-based auto-pruning framework, AutoSculpt, that supports a variety of DNN models, integrating GNN and DRL to effectively leverage DNN topology, weight parameter information, and regular zero-region patterns in pruning decisions.
• We combine pattern representation with DRL to achieve automated pruning and design a reward function suited to pattern pruning, effectively training agents to complete the pattern policy search.
• We validate AutoSculpt through experiments on various popular DNNs, achieving SOTA results.
2 Related Work
Pruning Granularities. Model pruning has emerged as a popular research direction to enhance inference performance on resource-constrained devices. Generally, it can be categorized into unstructured, structured, and semi-structured pruning [4]. Unstructured pruning [11, 41, 34, 39] removes weights arbitrarily across the network, theoretically achieving the highest pruning rates without altering the network’s overall structure. However, the resulting irregularity and sparsity of the compressed weight tensors make hardware acceleration challenging. Consequently, most researchers have shifted their focus to structured pruning methods [12]. Structured pruning [14, 52, 7, 49, 46, 16, 28, 45, 15, 29, 54, 20] eliminates entire filters, channels, or layers, leading to more regular and hardware-friendly network architectures. Mainstream approaches in structured pruning include applying sparse regularization techniques to model parameters during training, such as LASSO [50] and ADMM [24]; dynamically adding masks to weights during training and inference for pruning (also known as soft pruning) [13, 23, 18]; and utilizing mathematical techniques like second-order Taylor approximation [48] and Variational Bayesian methods [57] for pruning solutions. However, the granularity of structured pruning can be too coarse, potentially resulting in the removal of important parameters. Semi-structured (or pattern-based) pruning [27, 35, 37] seeks suitable “regular patterns” to remove from weight matrices. Models pruned using pattern-based methods exhibit more regular zero-shaped regions, thereby achieving inference acceleration while maintaining accuracy.

Automatic Pruning Methods. The architectures of modern DNNs are becoming increasingly complex and contain rich topology information, making it difficult to achieve satisfactory model compression results through manual heuristic methods [35, 37, 52, 10]. Consequently, AutoML approaches that unite GNNs and DRL have become mainstream. MetaPruning [32] pre-trains a meta-network to predict the weights of the target DNN and then uses an evolutionary process to search for the best-performing pruned network. AMC [14] is the first to introduce DRL into structured pruning for CNNs, performing model compression by searching for redundant channels. AGMC [51] and GNN-RL [52] model CNNs as graphs, obtaining graph representations (or embeddings) through GNNs and using DRL to prune entire convolutional or pooling layers. DTMM [10] proposes a pattern called “filterlet” as the pruning unit and designs a convolution operator to optimize the pruning strategy with the help of a pruning strategy scheduler. NPAS [27] proposes an automated pattern pruning and architecture search framework with compiler awareness.
However, existing work mainly focuses on coarse blocks (or layers) or a narrow range of DNN types. The pruning granularity in these approaches may limit their applicability to various DNNs such as Transformers. There is still insufficient research on how to embed regular patterns into various DNN architectures, utilize DRL to identify optimal pruning strategies, and ultimately boost inference performance through automated techniques and regular patterns.
3 Method
We first introduce the AutoSculpt framework in Section 3.1, and then elaborate on its two key components: The pattern-based graph constructor and encoder in Section 3.2 and the DRL-based pruning strategy search in Section 3.3.
3.1 Overview of AutoSculpt
AutoSculpt is a multi-step framework for pruning DNN models, as shown in Figure 2. First, we calculate the compression parameters (e.g. inference accuracy, FLOPs and pruning ratio) of the DNN models to determine if they meet the constraints (Step 1). If the constraints are unmet, the Graph Constructor extracts topological features from the model and assigns pruning patterns to build the graph (Step 2). Next, we use a GNN, specifically the Graph Attention Network (GAT), to encode the graph and obtain the embeddings. Then, the Agent, calculates the probability distribution for each pattern based on graph embeddings and selects suitable pruning patterns to optimize the model using Pattern Sampler while interacting iteratively with the environment (Step 3). Once constraints are satisfied, we can obtain the pruned DNN through Pattern Pruner (Step 4), followed by fine-tuning to restore accuracy (Step 5). Finally, the pruned model can be deployed using an AI compilation framework (Step 6).
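The control flow of these steps can be summarized in the sketch below, where `env` and `agent` are hypothetical placeholders for the components in Figure 2 rather than our actual implementation:

```python
def autosculpt_loop(model, constraints, env, agent, num_iters=100):
    """Skeleton of the AutoSculpt search loop (Steps 1-4).

    `env` bundles measurement, graph construction, and pruning; `agent` is
    the DRL policy over pruning patterns. Both are duck-typed placeholders.
    """
    for _ in range(num_iters):
        stats = env.measure(model)                # Step 1: acc., FLOPs, ratio
        if env.satisfied(stats, constraints):     # constraints met -> stop
            break
        graph = env.build_graph(model, agent.patterns)  # Step 2: topology + patterns
        state = env.encode(graph)                 # Step 3: GAT graph embedding
        action = agent.act(state)                 # probability dist. over patterns
        model = env.assign(model, agent.sample(action))  # new pattern assignment
        agent.learn(env.reward(stats))            # PPO-style policy update
    return env.prune(model)                       # Step 4: zero masked weights
```

Fine-tuning (Step 5) and compiler-based deployment (Step 6) then run on the returned pruned model.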
3.2 DNN-Graph Constructor and Encoder
A general graph constructed from a DNN can be represented as $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges. The subsets $\mathcal{V}_w \subseteq \mathcal{V}$ represent nodes associated with weight tensors, $\mathcal{V}_{in}$ is the input tensor node set, and $\mathcal{V}_{out}$ is the output node set. The subsets $\mathcal{E}_{in} \subseteq \mathcal{E}$ and $\mathcal{E}_{out} \subseteq \mathcal{E}$ correspond to the edges linked to $\mathcal{V}_{in}$ and $\mathcal{V}_{out}$. The nodes in different layers of the DNN can be further divided into finer-grained sets. For example, the nodes in the $l$-th layer are contained in $\mathcal{V}_w^{(l)}$. Next, we describe the graph constructors of two typical DNN architectures (in this paper, we focus on CNN and Transformer models, but other types of DNN can also be handled): CNN and Transformer.
CNN-Graph Constructor. Given a general CNN, we use a set of weights $\mathcal{W} = \{\mathbf{W}_k^{(l)} \mid 1 \le l \le L,\ 1 \le k \le K^{(l)}\}$ to parameterize the architecture of the CNN, where $\mathbf{W}_k^{(l)} \in \mathbb{R}^{C^{(l)} \times h \times w}$ represents the weight tensor with $C^{(l)}$ channels of the $k$-th kernel of the $l$-th layer, $L$ is the number of convolutional layers, and $K^{(l)}$ is the number of convolutional kernels in the $l$-th layer. The related input and output feature maps of the $l$-th layer are $\mathbf{X}^{(l)}$ and $\mathbf{Y}^{(l)}$, respectively.

Then, the specific process of graph construction is as follows. First, map the weight set of the $l$-th layer to a set of nodes $\mathcal{V}_w^{(l)} = \{v_k^{(l)}\}$, where a node $v_k^{(l)}$ represents the weight of the kernel $\mathbf{W}_k^{(l)}$. At the same time, map the input and output feature maps of the $l$-th layer to the nodes $\mathcal{V}_{in}^{(l)}$ and $\mathcal{V}_{out}^{(l)}$, respectively. The dependencies of the convolution operation are then mapped to the connection relationships between the node set $\mathcal{V}_w^{(l)}$ and the node sets $\mathcal{V}_{in}^{(l)}$ and $\mathcal{V}_{out}^{(l)}$, forming the sets of edges in the graph $\mathcal{G}$. After completing the above steps, we obtain the graph $\mathcal{G}$, as shown in Figure 3. Since the parameters of the CNN model are primarily located in the convolutional layers, and the final fully connected layer connects to the output layer, our focus during graph construction is on the convolutional layers.
After the basic structure of the graph is constructed, we integrate the pruning patterns (shown as the Patterns Library derived from the Agent in Figure 2) into $\mathcal{G}$. Considering that integrating key information into edge feature embeddings yields better results than integrating it into node feature embeddings [40], we adopt the following integration scheme: The pattern is fused into the edge feature embeddings of the edge set $\mathcal{E}_w$, while the weights of the CNN corresponding to the node set $\mathcal{V}_w$ are integrated into their node feature embeddings. In addition, the node feature embeddings in the node sets $\mathcal{V}_{in}$ and $\mathcal{V}_{out}$, as well as the remaining edge feature embeddings, are assigned using random initialization. In short, we denote this process as Eq. 1:
$$\mathbf{h}_e \leftarrow \mathbf{p}_e,\ \forall e \in \mathcal{E}_w; \qquad \mathbf{h}_v \leftarrow \mathrm{vec}\big(\mathbf{W}_v\big),\ \forall v \in \mathcal{V}_w \tag{1}$$
Additionally, in CNN architectures like ResNet, there are residual connections. These residual connections establish additional pathways between layers, allowing the flow of information across layers without being disrupted by pruning. As a result, when building the graph, we represent the residual connections as additional edges that link nodes corresponding to the layers involved in the residual paths. This ensures that the graph accurately reflects the full structure of the CNN, including these crucial connections, which are essential to maintain the performance of the CNN model during pruning.
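As an illustration, a simplified constructor for a plain sequential CNN might look as follows; the use of networkx, the node naming, and the omission of pooling/normalization layers and residual edges are our own simplifications:

```python
import networkx as nx
import torch.nn as nn

def cnn_to_graph(model: nn.Module) -> nx.DiGraph:
    """Map each conv kernel to a node and connect it to the input/output
    feature-map nodes of its layer. Residual connections (e.g., in ResNet)
    would be added as extra edges between the relevant feature-map nodes."""
    g = nx.DiGraph()
    prev_out = "input"
    g.add_node(prev_out)
    convs = (m for m in model.modules() if isinstance(m, nn.Conv2d))
    for l, layer in enumerate(convs):
        out_node = f"fmap_{l}"
        g.add_node(out_node)
        for k in range(layer.out_channels):
            node = f"w_{l}_{k}"
            # node feature: the k-th kernel's weights; the assigned pattern
            # would later be attached to the incident edges as a feature
            g.add_node(node, weight=layer.weight[k].detach().flatten())
            g.add_edge(prev_out, node)
            g.add_edge(node, out_node)
        prev_out = out_node
    return g
```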
Transformer-Graph Constructor. To model the Transformer encoder, we assume the input embedding of the $l$-th encoder is $\mathbf{X}^{(l)} \in \mathbb{R}^{n \times d}$, where $n$ means the input sequence consists of $n$ tokens and $d$ is the embedding dimension. Then, we have $\mathbf{Q} = \mathbf{X}^{(l)}\mathbf{W}_Q$, $\mathbf{K} = \mathbf{X}^{(l)}\mathbf{W}_K$, and $\mathbf{V} = \mathbf{X}^{(l)}\mathbf{W}_V$, where $\mathbf{W}_Q, \mathbf{W}_K, \mathbf{W}_V \in \mathbb{R}^{d \times d_k}$ are learnable weight matrices for Queries, Keys, and Values, and $d_k$ is the dimension of each. The attention score is $\mathbf{A} = \mathrm{softmax}\!\big(\mathbf{Q}\mathbf{K}^{\top}/\sqrt{d_k}\big)$ and the output is $\mathbf{Z} = \mathbf{A}\mathbf{V}$.

Although the Transformer model structure differs significantly from CNNs, a graph can also be constructed for it to enable pattern pruning. The method of constructing the graph for the attention module in a Transformer model is illustrated in Figure 4. For the $l$-th encoder, we map $\mathbf{W}_Q$, $\mathbf{W}_K$, and $\mathbf{W}_V$ to the nodes $v_Q^{(l)}$, $v_K^{(l)}$, and $v_V^{(l)}$, respectively. Similarly, the inputs and outputs of the $l$-th encoder are mapped to $\mathcal{V}_{in}^{(l)}$ and $\mathcal{V}_{out}^{(l)}$, respectively. Then, the node sets together with the related edge sets constitute the target graph $\mathcal{G}$. Additionally, since the Transformer model contains multiple encoder and decoder blocks, each of which includes a feedforward network (MLP) that holds a substantial portion of the model’s parameters, this also needs to be considered when constructing the graph. In this case, nodes can represent both the attention mechanism components (such as the Query, Key, and Value matrices) and the MLP layers, while the edges capture dependencies between these components within each encoder and decoder block.
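A corresponding sketch for one encoder block, assuming PyTorch's `nn.TransformerEncoderLayer` layout (where Q/K/V projections are packed into `in_proj_weight`); the node naming is illustrative:

```python
import networkx as nx
import torch.nn as nn

def encoder_to_graph(block: nn.TransformerEncoderLayer, l: int, g: nx.DiGraph):
    """Add nodes for one encoder block: Q/K/V projections plus the MLP,
    wired between the block's input and output nodes."""
    x_in, x_out = f"enc{l}_in", f"enc{l}_out"
    g.add_nodes_from([x_in, x_out])
    # In PyTorch, Q, K, V live in one packed (3*d x d) in_proj_weight matrix.
    qkv = block.self_attn.in_proj_weight.detach().chunk(3, dim=0)
    for name, w in zip(("Q", "K", "V"), qkv):
        node = f"enc{l}_{name}"
        g.add_node(node, weight=w.flatten())
        g.add_edge(x_in, node)
        g.add_edge(node, x_out)
    for name in ("linear1", "linear2"):  # the feedforward (MLP) weights
        node = f"enc{l}_{name}"
        g.add_node(node, weight=getattr(block, name).weight.detach().flatten())
        g.add_edge(x_in, node)
        g.add_edge(node, x_out)
```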
Note that in this version of AutoSculpt, we manually define up to 10 pruning patterns in the Patterns Library for the Agent, as illustrated in Figure 3.1, ensuring that the pattern shapes are consistent with the DNN’s weight tensor shape. Expanding the range of patterns is planned as future work and is beyond the scope of this paper.
Pattern-based DNN-Graph Encoder. After the graph construction is completed, we need to encode the graph and extract its representative embedding $\mathbf{h}_{\mathcal{G}}$, denoted as Eq. 2. As an effective graph encoder, GAT introduces an attention mechanism between nodes, allowing us to capture the inner dependencies that exist in the constructed DNN graph. In this paper, we use an improved version of GAT, GATv2 [2], as the graph encoder.

$$\mathbf{h}_{\mathcal{G}} = \mathrm{Encoder}(\mathcal{G}) \tag{2}$$

In detail, GATv2 introduces a dynamic attention mechanism that can better aggregate information between nodes. For example, the feature embedding of a node $i$ after aggregating information can be calculated using Eq. 3:

$$\mathbf{h}'_i = \sigma\!\left(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, \mathbf{W} \mathbf{h}_j\right) \tag{3}$$

where $\mathbf{h}'_i$ is the feature embedding of node $i$ after information aggregation, the attention coefficient $\alpha_{ij}$ of $i$ and $j$ can be calculated using Eq. 4, $\mathbf{W}$ is the weight matrix, and $\mathbf{h}_j$ is the feature embedding of node $i$'s neighbor $j$.

$$\alpha_{ij} = \frac{\exp\!\big(e(\mathbf{h}_i, \mathbf{h}_j)\big)}{\sum_{k \in \mathcal{N}(i)} \exp\!\big(e(\mathbf{h}_i, \mathbf{h}_k)\big)} \tag{4}$$

$$e(\mathbf{h}_i, \mathbf{h}_j) = \mathbf{a}^{\top}\, \mathrm{LeakyReLU}\!\big(\mathbf{W}_1 \mathbf{h}_i + \mathbf{W}_2 \mathbf{h}_j + \mathbf{W}_e \mathbf{e}_{ij}\big) \tag{5}$$

In Eqs. 4 and 5, the scoring function $e(\cdot,\cdot)$ is used to calculate the score between node $i$ and node $j$, representing the importance of neighbor $j$ to $i$. $\mathbf{a}$, $\mathbf{W}_1$, $\mathbf{W}_2$, and $\mathbf{W}_e$ are learnable parameters, and $\mathbf{e}_{ij}$ represents the edge feature embedding between node $i$ and node $j$ containing pattern information.

After the node feature embeddings are aggregated, they already contain rich DNN topology information and pattern distribution information. Next, global pooling is applied to extract the graph embedding from the node feature embeddings. Thus, Eq. 2 can be further elaborated as Eq. 6:

$$\mathbf{h}_{\mathcal{G}} = \mathrm{Pool}\big(\{\mathbf{h}'_i \mid i \in \mathcal{V}\}\big) \tag{6}$$
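One way to realize such an encoder, sketched with PyTorch Geometric's `GATv2Conv` (which accepts edge features via `edge_dim`) followed by mean pooling; the two-layer structure and layer sizes are illustrative, not the paper's exact settings:

```python
import torch
from torch_geometric.nn import GATv2Conv, global_mean_pool

class GraphEncoder(torch.nn.Module):
    """Two GATv2 layers with pattern-carrying edge features, then mean
    pooling over nodes to produce the graph-level embedding h_G (Eq. 6)."""
    def __init__(self, node_dim=32, edge_dim=32, hidden=64):
        super().__init__()
        self.conv1 = GATv2Conv(node_dim, hidden, edge_dim=edge_dim)
        self.conv2 = GATv2Conv(hidden, hidden, edge_dim=edge_dim)

    def forward(self, x, edge_index, edge_attr, batch):
        h = self.conv1(x, edge_index, edge_attr).relu()  # Eqs. 3-5 internally
        h = self.conv2(h, edge_index, edge_attr)
        return global_mean_pool(h, batch)                # Eq. 6: pooling
```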
3.3 DRL-based Pruning Strategy
We leverage DRL to search for the optimal pruning patterns efficiently. In the following, we describe the details of the DRL-based pruning strategy.
Environment States $\mathcal{S}$. Our training objective is to combine the model parameters of the DNN with topological structure information to determine the pruning patterns for its operators. Therefore, we pass the graph embedding $\mathbf{h}_{\mathcal{G}}$ obtained from the graph encoder (Eq. 6) as the environment state $s_t$. Since the pruning pattern applied to the DNN changes after each iteration, the graph needs to be reconstructed to update the DNN representation.
Action Space $\mathcal{A}$. The direct output obtained by the Agent is the probability distribution over all predefined patterns, making its action space continuous: $\mathcal{A} = [0, 1]^{N_p}$, where $N_p$ is the number of predefined patterns. The action can be calculated using Eq. 7, and target patterns are then sampled for the DNN to be pruned (Eq. 8).

$$a_t = \pi_\theta(s_t) = \mathrm{softmax}\big(f_\theta(\mathbf{h}_{\mathcal{G}})\big) \tag{7}$$

$$p_k^{(l)} \sim \mathrm{Categorical}(a_t) \tag{8}$$
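Assuming each prunable operator samples its pattern independently from the policy's softmax output, Eqs. 7 and 8 might be realized as follows (the policy head `logits` and per-operator i.i.d. sampling are our assumptions):

```python
import torch

def sample_patterns(logits: torch.Tensor, num_ops: int):
    """logits: (num_patterns,) scores from the policy network over the
    pattern library. Returns one pattern index per prunable operator."""
    probs = torch.softmax(logits, dim=-1)                 # Eq. 7: action a_t
    dist = torch.distributions.Categorical(probs=probs)
    return dist.sample((num_ops,)), dist                  # Eq. 8: pattern draws
```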
Reward $r_t$. The design of the reward function is crucial for training in DRL. Generally, a higher pruning rate results in a more severe decline in model inference accuracy [4]. As training progresses, the DRL Agent tends to favor lower pruning rates to achieve better inference accuracy. Therefore, we design the reward function by combining a model compression metric (using FLOPs as an example, though other metrics like Multiply–Accumulate operations (MACs) can be similarly applied) and model inference accuracy as follows:
$$r_t = (1 - \beta)\,\mathrm{Acc}(\mathcal{N}') + \beta \left(1 - \frac{\mathrm{FLOPs}(\mathcal{N}')}{\mathrm{FLOPs}(\mathcal{N})}\right) \tag{9}$$
where $\beta$ is a learnable parameter. When $\beta$ is larger, the Agent is encouraged to adopt a more aggressive compression strategy, prioritizing model compression over accuracy. Conversely, when $\beta$ is smaller, the Agent is incentivized to take a more conservative approach.
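A sketch of this reward under the linear trade-off form of Eq. 9 (the functional form here is our reading of the text, not a verbatim reproduction of the paper's formula):

```python
def reward(acc: float, flops_pruned: float, flops_orig: float, beta: float) -> float:
    """Trade off inference accuracy against FLOPs reduction (cf. Eq. 9).
    Larger beta favors compression; smaller beta favors accuracy."""
    reduction = 1.0 - flops_pruned / flops_orig   # fraction of FLOPs removed
    return (1.0 - beta) * acc + beta * reduction
```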
After that, to train the Agent, we adopt the PPO-Clip algorithm [42].
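For reference, PPO-Clip maximizes the standard clipped surrogate objective over trajectories collected from the pruning environment:

$$L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\Big(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\Big)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}$$

where $\hat{A}_t$ is the advantage estimate and $\epsilon$ is the clipping range mentioned in Section 4.1.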
Pattern Pruner. When the constraints $\mathcal{C}$ are met, we set the corresponding position weights of the DNN to zero according to the assigned patterns, which can be represented by Eq. 10:

$$\mathbf{W}'^{(l)}_k = \mathbf{W}^{(l)}_k \odot \mathbf{M}_{p_k^{(l)}} \tag{10}$$

where $\mathbf{M}_{p_k^{(l)}}$ is the binary mask of the pattern assigned to the $k$-th kernel of the $l$-th layer and $\odot$ denotes element-wise multiplication.
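A minimal sketch of this pruner, assuming the patterns are stored as binary masks broadcastable to each operator's weight shape:

```python
import torch

def prune_with_patterns(weights: dict, assignments: dict, patterns: list):
    """Eq. 10: zero the weight positions excluded by each operator's assigned
    pattern. `weights` maps operator names to tensors; `assignments[op]`
    indexes into `patterns` (binary masks)."""
    with torch.no_grad():
        for op, w in weights.items():
            w.mul_(patterns[assignments[op]])  # in-place element-wise mask
    return weights
```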
Input: DNN $\mathcal{N}$, constraint set $\mathcal{C}$
Output: Pruned DNN $\mathcal{N}'$
Algorithm. Algorithm 1 summarizes the pattern pruning process. Lines 1-3 perform initialization, including putting the DNN into the environment, initializing the patterns for the Agent, and initializing the replay buffer for storing historical training data. Lines 4-23 describe the pattern pruning process (as depicted in Section 3.1, Steps 1-4).
4 Experiments
4.1 Settings
Datasets and Platform Settings. We conduct experimental validation on models in the image classification domain. The datasets include CIFAR-10/100 [21] and ImageNet-1K (ILSVRC-2012) [5]. During the pruning and fine-tuning processes, we utilize all training data, randomly sampling 50% of the data from the test set for validation. The experiments are carried out on a hardware platform equipped with an AMD EPYC 7402 processor and four RTX 4090 GPUs. Due to the large size of the ImageNet dataset, we implement multi-GPU parallel training.
Baselines. To ensure a fair and authoritative comparison, we directly cite validated results from the original works. The compared baselines are the methods listed in Tables 1 and 2, including the auto-pruning approaches AMC [14], AGMC [51], GNN-RL [52], and DepGraph [7], among others.
DNN Settings. We conduct experiments on various DNNs with different architectures. For ResNet-32/56/110 trained on CIFAR-10 and VGG-19 trained on CIFAR-100, we use self-pretrained parameters for pruning. For MobileNet-v1/v2, ResNet-50, VGG-16 and ViT-B/16 trained on ImageNet, we utilize the pretrained parameters built into PyTorch for pruning. Considering the complexity of optimizing and deploying pruned models, we share the pruning patterns for the residual connection layers when pruning the ResNet models with residual connections. Additionally, we ignore the bias terms in the model computations for all pruned models.
Pruning Process Settings. During the model pruning process, the initial feature embedding size for the nodes in the constructed graph of the DNN is set to (with the same size for edge feature embeddings). After information aggregation, the feature embedding size increases to , and the overall graph feature embedding size is . When using the PPO algorithm for policy updates of the Agent, we employ the Adam optimizer to optimize both the Actor and the Critic. The learning rate for the Actor is set to , while the learning rate for the Critic is set to . The discount factor for controlling the reward attention is set to , and the PPO clipping range is set to . The replay buffer size is set to , and policy updates occur once the replay buffer is full, with a total of update iterations.
Fine-tuning Settings. During the fine-tuning of the pruned models, we use the SGD optimizer with the following settings: , , and the initial learning rate set to . In the training process, we update the learning rate using a multi-step decay approach, with the decay multiplicative factor . For models trained on CIFAR-10/100, the milestones are set to , totaling training epochs. For models trained on ImageNet, the milestones are set to , totaling training epochs.
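As a sketch of this schedule, with illustrative placeholder values standing in for the elided settings above:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)  # stand-in for the pruned DNN
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,      # placeholder lr
                            momentum=0.9, weight_decay=5e-4)  # placeholder values
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[60, 120], gamma=0.1)  # placeholder milestones/decay
for epoch in range(150):                         # placeholder epoch budget
    # ... one training pass over the fine-tuning data goes here ...
    scheduler.step()
```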
Table 1: Comparison with baselines on CIFAR-10 (ResNet-32/56/110) and CIFAR-100 (VGG-19). P. Acc. is the accuracy after pruning, Δ Acc. the change relative to the unpruned model, and FLOPs ↓ the FLOPs reduction.

| DNN | Method | P. Acc. (%) | Δ Acc. (%) | FLOPs ↓ (%) |
|---|---|---|---|---|
| ResNet-32 | DDG | 93.21 | -0.01 | 43.40 |
| | ABP | 92.55 | -0.08 | 46.30 |
| | SOKS | 92.44 | -0.38 | 46.85 |
| | AGMC | 90.96 | -1.67 | 50.00 |
| | GNN-RL | 92.58 | -0.05 | 51.00 |
| | SOSP | 95.06 | -0.24 | 67.36 |
| | Ours | 92.16 | -0.20 | 70.00 |
| ResNet-56 | MDP | 94.29 | +0.55 | 45.11 |
| | ABP | 92.55 | -0.08 | 46.30 |
| | AGMC | 92.76 | -0.63 | 50.00 |
| | NPPM | 93.40 | +0.36 | 50.00 |
| | DLRFC | 93.57 | -0.51 | 52.58 |
| | MFP | 92.76 | -0.83 | 52.60 |
| | GNN-RL | 93.49 | +0.10 | 54.00 |
| | DepGraph | 93.64 | +0.11 | 61.00 |
| | Ours | 93.35 | +0.09 | 63.00 |
| ResNet-110 | Hrank | 94.23 | +0.73 | 41.20 |
| | ABP | 93.95 | +0.32 | 46.20 |
| | AGMC | 93.08 | -0.60 | 50.00 |
| | GNN-RL | 94.31 | +0.63 | 52.00 |
| | CHIP | 94.44 | +0.94 | 52.10 |
| | MFP | 93.31 | -0.37 | 52.30 |
| | Ours | 93.91 | +0.41 | 55.00 |
| VGG-19 | SOSP | 73.11 | -0.34 | 51.61 |
| | SCP | 72.15 | -0.41 | 61.94 |
| | ProsPr | 72.29 | -0.21 | 80.00 |
| | GREG | 67.55 | -6.47 | 88.69 |
| | DepGraph | 70.39 | -3.11 | 88.79 |
| | Ours | 67.97 | -2.38 | 90.00 |
Table 2: Comparison with baselines on ImageNet-1K.

| DNN | Method | P. Acc. (%) | Δ Acc. (%) | FLOPs ↓ (%) |
|---|---|---|---|---|
| MobileNet-v1 | SMCP | 71.00 | -1.60 | 37.43 |
| | DRLP | 70.60 | -0.30 | 50.00 |
| | RollBack | 49.34 | - | 50.00 |
| | AGMC | 69.40 | -1.20 | 60.00 |
| | GNN-RL | 69.50 | -1.40 | 60.00 |
| | Ours | 69.85 | -1.29 | 70.00 |
| MobileNet-v2 | NPPM | 72.02 | +0.02 | 29.70 |
| | RollBack | 52.86 | - | 40.00 |
| | GNN-RL | 70.04 | -1.83 | 42.00 |
| | GFP | 69.16 | -6.58 | 50.00 |
| | DepGraph | 68.46 | -3.41 | 54.55 |
| | DNCP | 66.50 | -5.80 | 67.67 |
| | Ours | 67.65 | -4.22 | 75.00 |
| ResNet-50 | GFP | 76.42 | -0.37 | 50.11 |
| | SOSP | 74.39 | -1.76 | 51.00 |
| | DepGraph | 75.83 | -0.32 | 51.82 |
| | GNN-RL | 74.28 | -1.82 | 53.00 |
| | MFP | 74.13 | -2.02 | 53.50 |
| | Ours | 74.03 | -2.10 | 65.00 |
| VGG-16 | NM | 61.18 | -12.18 | 38.41 |
| | CC | 68.81 | -2.78 | 52.39 |
| | GNN-RL | 70.99 | +0.49 | 80.00 |
| | AMC | 69.10 | -1.40 | 80.00 |
| | AGMC | 70.35 | -0.15 | 80.00 |
| | Ours | 71.47 | -0.12 | 82.00 |
| ViT-B/16 | CP-ViT | 77.36 | -0.55 | 33.52 |
| | DepGraph | 79.17 | -1.90 | 40.90 |
| | Ours | 79.22 | -1.85 | 45.00 |
4.2 Results and Analysis
Performance. Tables 1 and 2 present the results of AutoSculpt compared with the baselines on the two dataset groups. Our method achieves the best compression ratio (FLOPs ↓) with a relatively small accuracy change (Δ Acc.), reaching the state of the art. After pruning, the FLOPs of the models decrease significantly, while the inference accuracy can be restored to the level of the initial models after fine-tuning. This effectively maintains model performance while compressing the model, meaning that only the truly useful parameters are retained.
Digging deeper, among all pruned DNNs, those with simpler architectures can achieve higher pruning ratios, such as VGG-19 at 90% and VGG-16 at 82%. In contrast, DNNs containing more complex structures (e.g., residual connections and Transformer encoders) suffer greater accuracy loss at higher pruning ratios; for instance, ViT-B/16 reaches a 45% pruning ratio, and ResNet-50 and ResNet-110 reach 65% and 55%, respectively. In addition, our method achieves a reduction in FLOPs of nearly 18% more than the second-best auto-pruning method, GNN-RL, on ResNet-50, with the accuracy loss across all pruned models remaining under 3%.
Moreover, we observed that for some models (such as ResNet-110 and MobileNet-v1), the accuracy after pruning even exceeded that of the original models. Upon analysis, we believe that the initial pre-trained models were not sufficiently trained, leading to improved inference accuracy after fine-tuning, surpassing the performance of the initial models.
Since some DNNs have similar architectures (e.g., ResNet-32, ResNet-56, and ResNet-110 all have residual structures but differ in network depth), we select ResNet-110, VGG-19, MobileNet-v1, and ViT-B/16 for the following experiments.
Latency Comparison. We also test the inference latency of the pruned models and compare it with that of the original models, as shown in Table 3. Our method significantly reduces the MACs, thereby lowering the inference latency while maintaining comparable or even improved accuracy. For example, in ResNet-110, the MACs are reduced from 0.26 G to 0.10 G, with a corresponding decrease in latency from 5.7 ms to 5.3 ms. Notably, the accuracy even slightly improves from 93.50% to 93.91%, indicating that the pruning method not only reduces the model size but also retains and potentially enhances its performance. For VGG-19, the MACs are drastically reduced from 0.40 G to 52.36 M, leading to a reduction in latency from 5.3 ms to 4.1 ms (nearly a 29% speed improvement). Although the accuracy drops from 70.35% to 67.97% (a decrease of 2.38 points), this trade-off may be acceptable given the significant improvement in inference speed and efficiency.
Table 3: Inference latency of the original and pruned models.

| DNN | Type | MACs | Acc. (%) | Latency |
|---|---|---|---|---|
| ResNet-110 | Original | 0.26 G | 93.50 | 5.7 ms |
| | Pruned | 0.10 G | 93.91 | 5.3 ms |
| VGG-19 | Original | 0.40 G | 70.35 | 5.3 ms |
| | Pruned | 52.36 M | 67.97 | 4.1 ms |
| MobileNet-v1 | Original | 0.59 G | 71.14 | 6.5 ms |
| | Pruned | 0.24 G | 69.85 | 6.0 ms |
| ViT-B/16 | Original | 17.61 G | 81.07 | 5.4 ms |
| | Pruned | 10.11 G | 79.22 | 5.1 ms |



Parameter Sensitivity. We test the impact of different hyperparameters on the accuracy of pruned models, as shown in Figure 5. First, we evaluate the effect of the number of patterns. From Figure 5 (a), we can observe that increasing the number of patterns improves inference accuracy for all models, while the gains level off once the number of patterns reaches a certain threshold. In particular, when the number of patterns is 2, the scheme degenerates to structured pruning, where each operator is either retained or pruned in its entirety. Once the number of patterns exceeds a certain limit (e.g., 10), the granularity becomes so fine that it can be regarded as unstructured pruning. Therefore, to balance inference accuracy and compression rate, we use 6 patterns for pruning. Next, we test the effect of the node embedding size used for information aggregation by the graph encoder, as shown in Figure 5 (b). We can see that ResNet-110 is more sensitive to the node embedding size, likely due to its deeper and more complex network structure. Lastly, we evaluate the changes in inference accuracy under different pruning ratios for each model, as depicted in Figure 5 (c). As expected, inference accuracy declines as the pruning ratio increases. However, different models exhibit varying sensitivity to pruning rates; for example, complex architectures like ResNet-110 are more affected by changes in pruning ratio. In conclusion, the accuracy loss of our method remains relatively stable as the pruning ratio increases.


Efficiency of Accuracy Recovery. Figure 7 illustrates the efficiency of inference accuracy recovery after pruning across different DNN architectures. The models pruned by our method recover rapidly from the initial accuracy loss, demonstrating that they can be fine-tuned and deployed at lower cost.
4.3 Ablation Study
We validate the effectiveness of incorporating graph encoders, including GCN and GAT, in the pruning process. We evaluate this on ResNet-110, VGG-19, MobileNet-v1, and ViT-B/16, as shown in Figure 6. The results demonstrate a significant improvement in the accuracy when using GCN or GAT to represent the DNN compared to the method (w/o GE) without using graph encoders. For instance, the pruned ResNet-110’s accuracy increased from 65.74% without a graph encoder to 93.16% with GCN and 93.91% with GAT, highlighting the substantial advantage of using graph encoders, especially in deep networks like ResNet-110. Moreover, the results obtained with the GAT encoder were slightly better than those with the GCN encoder. Overall, integrating graph encoders effectively captures the topological and pattern information within the deep neural network structure, leading to better performance after pruning.
5 Conclusion
We introduced AutoSculpt, a pattern-based model auto-pruning framework that uses GNN and DRL to efficiently compress DNNs. AutoSculpt represented DNNs as graphs and applied compute-friendly pruning patterns, allowing it to find better pruning strategies that work well across different DNN architectures and with standard inference engines. Experimental results showed that AutoSculpt achieved high compression rates with minimal accuracy loss, making it a strong solution for deploying DNNs on devices with limited resources. Future work could focus on improving pattern generation methods and expanding the framework to work with a wider range of DNN types, making it even more flexible and effective.
References
- Alizadeh et al. [2022] Milad Alizadeh, Shyam A. Tailor, Luisa M Zintgraf, Joost van Amersfoort, Sebastian Farquhar, Nicholas Donald Lane, and Yarin Gal. Prospect pruning: Finding trainable weights at initialization using meta-gradients. In International Conference on Learning Representations, 2022.
- Brody et al. [2022] Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In International Conference on Learning Representations, 2022.
- Chen et al. [2018] Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, et al. TVM: An automated end-to-end optimizing compiler for deep learning. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI), pages 578–594, 2018.
- Cheng et al. [2024] Hongrong Cheng, Miao Zhang, and Javen Qinfeng Shi. A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–20, 2024.
- Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.
- Fan et al. [2022] Hanwei Fan, Jiandong Mu, and Wei Zhang. Bayesian optimization with clustering and rollback for cnn auto pruning. In European Conference on Computer Vision, pages 494–511. Springer, 2022.
- Fang et al. [2023] Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, and Xinchao Wang. DepGraph: Towards any structural pruning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16091–16101, 2023.
- Gao et al. [2021] Shangqian Gao, Feihu Huang, Weidong Cai, and Heng Huang. Network pruning via performance maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9270–9280, 2021.
- Guo et al. [2020] Jinyang Guo, Wanli Ouyang, and Dong Xu. Multi-dimensional pruning: A unified framework for model compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1508–1517, 2020.
- Han et al. [2024] Lixiang Han, Zhen Xiao, and Zhenjiang Li. DTMM: Deploying TinyML models on extremely weak IoT devices with pruning. In IEEE International Conference on Computer Communications (INFOCOM). IEEE, 2024.
- Han et al. [2015] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
- He and Xiao [2023] Yang He and Lingao Xiao. Structured pruning for deep convolutional neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
- He et al. [2018a] Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. Soft filter pruning for accelerating deep convolutional neural networks. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, page 2234–2240. AAAI Press, 2018a.
- He et al. [2018b] Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. AMC: Automl for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), pages 784–800, 2018b.
- He et al. [2022a] Yang He, Ping Liu, Linchao Zhu, and Yi Yang. Filter pruning by switching to neighboring cnns with good attributes. IEEE Transactions on Neural Networks and Learning Systems, 34(10):8044–8056, 2022a.
- He et al. [2022b] Zhiqiang He, Yaguan Qian, Yuqi Wang, Bin Wang, Xiaohui Guan, Zhaoquan Gu, Xiang Ling, Shaoning Zeng, Haijiang Wang, and Wujie Zhou. Filter pruning via feature discrimination in deep neural networks. In European Conference on Computer Vision, pages 245–261. Springer, 2022b.
- Hou et al. [2022] Xueyu Hou, Yongjie Guan, Tao Han, and Ning Zhang. DistrEdge: Speeding up convolutional neural network inference on distributed edge devices. In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 1097–1107, 2022.
- Humble et al. [2022] Ryan Humble, Maying Shen, Jorge Albericio Latorre, Eric Darve, and Jose Alvarez. Soft masking for cost-constrained channel pruning. In European Conference on Computer Vision, pages 641–657. Springer, 2022.
- Kang and Han [2020] Minsoo Kang and Bohyung Han. Operation-aware soft channel pruning using differentiable masks. In Proceedings of the 37th International Conference on Machine Learning, pages 5122–5131. PMLR, 2020.
- Kim et al. [2020] Woojeong Kim, Suhyun Kim, Mincheol Park, and Geunseok Jeon. Neuron merging: Compensating for pruned neurons. Advances in Neural Information Processing Systems, 33:585–595, 2020.
- Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
- Kurtz et al. [2020] Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William Leiserson, Sage Moore, Nir Shavit, and Dan Alistarh. Inducing and exploiting activation sparsity for fast inference on deep neural networks. In Proceedings of the 37th International Conference on Machine Learning, pages 5533–5543. PMLR, 2020.
- Li et al. [2021a] Fanrong Li, Gang Li, Xiangyu He, and Jian Cheng. Dynamic dual gating neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5330–5339, 2021a.
- Li et al. [2019] Tuanhui Li, Baoyuan Wu, Yujiu Yang, Yanbo Fan, Yong Zhang, and Wei Liu. Compressing convolutional neural networks via factorized convolutional filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3977–3986, 2019.
- Li et al. [2021b] Yuchao Li, Shaohui Lin, Jianzhuang Liu, Qixiang Ye, Mengdi Wang, Fei Chao, Fan Yang, Jincheng Ma, Qi Tian, and Rongrong Ji. Towards compact cnns via collaborative compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6438–6447, 2021b.
- Li et al. [2022] Yawei Li, Kamil Adamczewski, Wen Li, Shuhang Gu, Radu Timofte, and Luc Van Gool. Revisiting random channel pruning for neural network compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 191–201, 2022.
- Li et al. [2021c] Zhengang Li, Geng Yuan, Wei Niu, Pu Zhao, Yanyu Li, Yuxuan Cai, Xuan Shen, Zheng Zhan, Zhenglun Kong, Qing Jin, et al. NPAS: A compiler-aware framework of unified network pruning and architecture search for beyond real-time mobile acceleration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14255–14266, 2021c.
- Lin et al. [2020] Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, and Ling Shao. Hrank: Filter pruning using high-rank feature map. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1529–1538, 2020.
- Liu et al. [2022] Guangzhe Liu, Ke Zhang, and Meibo Lv. Soks: Automatic searching of the optimal kernel shapes for stripe-wise network pruning. IEEE Transactions on Neural Networks and Learning Systems, 34(12):9912–9924, 2022.
- Liu et al. [2021a] Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jing-Hao Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang. Group fisher pruning for practical network compression. In International Conference on Machine Learning, pages 7021–7032. PMLR, 2021a.
- Liu et al. [2021b] Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jing-Hao Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang. Group fisher pruning for practical network compression. In International Conference on Machine Learning, pages 7021–7032. PMLR, 2021b.
- Liu et al. [2019] Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Kwang-Ting Cheng, and Jian Sun. MetaPruning: Meta learning for automatic neural network channel pruning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3296–3305, 2019.
- Liu et al. [2021c] Zechun Liu, Xiangyu Zhang, Zhiqiang Shen, Yichen Wei, Kwang-Ting Cheng, and Jian Sun. Joint multi-dimension pruning via numerical gradient update. IEEE Transactions on Image Processing, 30:8034–8045, 2021c.
- Louizos et al. [2018] Christos Louizos, Max Welling, and Diederik P. Kingma. Learning sparse neural networks through regularization. In International Conference on Learning Representations, 2018.
- Ma et al. [2020] Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, and Yanzhi Wang. Pconv: The missing but desirable sparsity in dnn weight pruning for real-time execution on mobile devices. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5117–5124, 2020.
- Ma et al. [2021] Xiaolong Ma, Sheng Lin, Shaokai Ye, Zhezhi He, Linfeng Zhang, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, et al. Non-structured DNN weight pruning—Is it beneficial in any platform? IEEE Transactions on Neural Networks and Learning Systems, 33(9):4930–4944, 2021.
- Niu et al. [2020] Wei Niu, Xiaolong Ma, Sheng Lin, Shihao Wang, Xuehai Qian, Xue Lin, Yanzhi Wang, and Bin Ren. Patdnn: Achieving real-time dnn execution on mobile devices with pattern-based weight pruning. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 907–922, 2020.
- Nonnenmacher et al. [2022] Manuel Nonnenmacher, Thomas Pfeil, Ingo Steinwart, and David Reeb. SOSP: Efficiently capturing global correlations by second-order structured pruning. In International Conference on Learning Representations, 2022.
- Park* et al. [2020] Sejun Park*, Jaeho Lee*, Sangwoo Mo, and Jinwoo Shin. Lookahead: A far-sighted alternative of magnitude-based pruning. In International Conference on Learning Representations, 2020.
- Qin et al. [2023] Guangming Qin, Lexue Song, Yanwei Yu, Chao Huang, Wenzhe Jia, Yuan Cao, and Junyu Dong. Graph structure learning on user mobility data for social relationship inference. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4578–4586, 2023.
- Sanh et al. [2020] Victor Sanh, Thomas Wolf, and Alexander Rush. Movement pruning: Adaptive sparsity by fine-tuning. Advances in Neural Information Processing Systems, 33:20378–20389, 2020.
- Schulman et al. [2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
- Shafi et al. [2021] Omais Shafi, Chinmay Rai, Rijurekha Sen, and Gayathri Ananthanarayanan. Demystifying TensorRT: Characterizing neural network inference engine on nvidia edge devices. In 2021 IEEE International Symposium on Workload Characterization (IISWC), pages 226–237, 2021.
- Song et al. [2022] Zhuoran Song, Yihong Xu, Zhezhi He, Li Jiang, Naifeng Jing, and Xiaoyao Liang. CP-ViT: Cascade vision transformer pruning via progressive sparsity prediction. arXiv preprint arXiv:2203.04570, 2022.
- Sui et al. [2021] Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Aliari Zonouz, and Bo Yuan. Chip: Channel independence-based pruning for compact neural networks. Advances in Neural Information Processing Systems, 34:24604–24616, 2021.
- Tian et al. [2021] Guanzhong Tian, Yiran Sun, Yuang Liu, Xianfang Zeng, Mengmeng Wang, Yong Liu, Jiangning Zhang, and Jun Chen. Adding before pruning: Sparse filter fusion for deep convolutional neural networks via auxiliary attention. IEEE Transactions on Neural Networks and Learning Systems, 2021.
- Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
- Wang et al. [2019] Chaoqi Wang, Roger Grosse, Sanja Fidler, and Guodong Zhang. EigenDamage: Structured pruning in the kronecker-factored eigenbasis. In International Conference on Machine Learning, pages 6566–6575. PMLR, 2019.
- Wang et al. [2021] Huan Wang, Can Qin, Yulun Zhang, and Yun Fu. Neural pruning via growing regularization. In International Conference on Learning Representations (ICLR), 2021.
- Wen et al. [2016] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2016.
- Yu et al. [2021] Sixing Yu, Arya Mazaheri, and Ali Jannesari. Auto graph encoder-decoder for neural network pruning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6362–6372, 2021.
- Yu et al. [2022] Sixing Yu, Arya Mazaheri, and Ali Jannesari. Topology-aware network pruning using multi-stage graph embedding and reinforcement learning. In International Conference on Machine Learning, pages 25656–25667. PMLR, 2022.
- Zhai et al. [2023] Pucheng Zhai, Kailing Guo, Fang Liu, Xiaofen Xing, and Xiangmin Xu. LAPP: Layer adaptive progressive pruning for compressing CNNs from scratch. arXiv preprint arXiv:2309.14157, 2023.
- Zheng et al. [2022a] Yu-Jie Zheng, Si-Bao Chen, Chris HQ Ding, and Bin Luo. Model compression based on differentiable network channel pruning. IEEE Transactions on Neural Networks and Learning Systems, 34(12):10203–10212, 2022a.
- Zheng et al. [2022b] Yu-Jie Zheng, Si-Bao Chen, Chris HQ Ding, and Bin Luo. Model compression based on differentiable network channel pruning. IEEE Transactions on Neural Networks and Learning Systems, 34(12):10203–10212, 2022b.
- Zhou et al. [2021] Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, and Hongsheng Li. Learning N:M fine-grained structured sparse neural networks from scratch. In International Conference on Learning Representations, 2021.
- Zhou et al. [2019a] Yuefu Zhou, Ya Zhang, Yanfeng Wang, and Qi Tian. Accelerate cnn via recursive bayesian pruning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3306–3315, 2019a.
- Zhou et al. [2019b] Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang. Edge intelligence: Paving the last mile of artificial intelligence with edge computing. Proceedings of the IEEE, 107:1738–1762, 2019b.