Balanced Group Convolution: An Improved Group Convolution Based on Approximability Estimates
Abstract
The performance of neural networks has been significantly improved by increasing the number of channels in convolutional layers. However, this increase in performance comes with a higher computational cost, and numerous studies have focused on reducing it. One promising approach to address this issue is group convolution, which effectively reduces the computational cost by grouping channels. However, to the best of our knowledge, there has been no theoretical analysis of how well group convolution approximates standard convolution. In this paper, we mathematically analyze the approximation of group convolution to standard convolution with respect to the number of groups. Furthermore, we propose a novel variant of group convolution called balanced group convolution, which achieves better approximability at a small additional computational cost. We provide experimental results that validate our theoretical findings and demonstrate the superior performance of balanced group convolution over other variants of group convolution.
keywords: Convolutional layer, Group convolution, Approximability estimate
Affiliations:
[a] Natural Science Research Institute, KAIST, Daejeon 34141, Korea
[b] Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
[c] Department of Mathematical Sciences, KAIST, Daejeon 34141, Korea
1 Introduction
The convolutional layer plays a crucial role in the success of modern neural networks for image classification (He et al., 2016; Hu et al., 2018; Krizhevsky et al., 2017) and image processing problems (Radford et al., 2015; Ronneberger et al., 2015; Zhang et al., 2017). However, achieving high performance through convolutional neural networks (CNNs) typically requires the use of a large number of channels (Brock et al., 2021; He et al., 2022; Huang et al., 2019; Tan and Le, 2019; Zagoruyko and Komodakis, 2016), resulting in significant computational costs and long training times. Accordingly, there have been many studies focusing on modifying the convolutional layer to reduce its computational complexity (Gholami et al., 2022; Krizhevsky et al., 2017; Lee et al., 2022; Liu et al., 2020; Zhang et al., 2015a, b).
Among them, group convolution is the most basic modification. It was introduced in AlexNet (Krizhevsky et al., 2017) as a way to distribute the computation of convolutional layers and resolve a memory shortage. Group convolution divides the channels of each layer into groups and performs convolution only within each group. CNNs with group convolution succeed in reducing the number of parameters and the computation time, but their performance drops rapidly as the number of groups increases (Lee et al., 2022; Long et al., 2015). It has been speculated that this degradation occurs because the lack of intergroup communication limits the representation capacity of CNNs. However, it has not yet been mathematically shown how much the performance is degraded.
Recently, several modifications of the group convolution, which add intergroup communication, have been considered to restore the performance (Chollet, 2017; Lee et al., 2022; Long et al., 2015; Zhang et al., 2018). The channel shuffling structure used in ShuffleNet (Zhang et al., 2018) adds a permutation step among the output channels of the group convolution to exchange data among groups. This method is efficient in terms of memory usage because it does not use any additional parameters. Learnable group convolution (Huang et al., 2018) determines the channels to be grouped through learning. This method generates the weight for the group convolution by overlaying a trainable mask on the weight of the standard convolution. In fact, it is equivalent to changing the channel shuffling from a fixed rule to a learnable one. Although these methods provide no benefit when viewed as a single layer, they effectively restore the performance of CNNs composed of multiple such layers. On the other hand, fully learnable group convolution (Wang et al., 2019) introduces more parameters to additionally vary the number of channels in each group. In (Lee et al., 2022), it was observed that group convolution has a block-diagonal matrix structure, and two-level group convolution was introduced, which collects representatives of each group and performs an additional convolution on them at low computational cost. These works have successfully resolved the performance degradation issue described above, but there is still no mathematical analysis of why the performance is improved.
In this paper, we rigorously analyze the performance of group convolution as an approximation to the corresponding standard convolution with respect to the number of groups. To achieve this, we introduce a new measure of approximability (see (3.2)), which quantifies the optimal squared error between the outputs of the standard convolution and the group convolution. We prove an estimate for the approximability of group convolution in terms of the number of groups and the number of parameters in a convolutional layer that maps a single channel to a single channel (see Theorem 3.5).
In addition, we present a new variant of group convolution, called balanced group convolution (BGC), which achieves a theoretically improved approximability estimate compared to the plain group convolution. In BGC, the intergroup mean is computed by averaging the corresponding channels across groups. After that, we introduce an additional convolution to add the intergroup mean to the output of the group convolution. This allows each group in BGC to utilize the entire information of the input, resulting in an improved approximability at a small additional computational cost. Indeed, we prove that the approximability of BGC admits a sharper estimate than that of the plain group convolution (see Theorem 3.6). Furthermore, under an additional assumption on the parameters, the approximability estimates of both the group convolution and BGC can be improved further, with the improved bounds depending on the number of input channels. The superior performance of the proposed BGC is verified by various numerical experiments on several recent CNNs such as WideResNet (Zagoruyko and Komodakis, 2016), ResNeXt (Xie et al., 2017), MobileNetV2 (Sandler et al., 2018), and EfficientNet (Tan and Le, 2019).
In summary, we address the following issues in this work:
Is it possible to obtain a rigorous estimate for the approximability of group convolution? Furthermore, can we develop an enhanced version of group convolution that achieves a theoretically better approximability than the original?
We summarize our main contributions as follows:
• We propose BGC, which is an enhanced variant of group convolution with an improved approximability estimate.
• We estimate the bounds on the approximability of group convolution and BGC.
• We demonstrate the performance of BGC by embedding it into state-of-the-art CNNs.
The rest of this paper is organized as follows. In Section 2, we present preliminaries of this paper, specifically related to the group convolution. We introduce the proposed BGC and its improved approximation properties in Section 3, and establish connections to existing group convolution approaches in Section 4. Numerical validations of the theoretical results and improved classification accuracy of the proposed BGC applied to various state-of-the-art CNNs are presented in Section 5. We conclude this paper with remarks in Section 6.
2 Group convolution
In this section, we introduce some basic notations for group convolution to be used throughout this paper. Let and be positive integers and be a common divisor of and . We write and .
We consider convolutional layers that operate on channels and produce channels. The input of a convolutional layer is written as , where is the th channel of . Similarly, the output is written as , where is the th channel of . To represent a standard convolutional layer that takes input and produces output , we use the following matrix expression:
(2.1)
where is a matrix that represents the generic convolution that maps to . We suppose that the channels of and are evenly partitioned into groups. Namely, let and , where
Then (2.1) is rewritten as the following block form:
(2.2)
Group convolution, a computationally efficient version of the convolution proposed in (Krizhevsky et al., 2017), is formed by discarding the intergroup connections in (2.2). That is, we take only the block-diagonal part to obtain the group convolution:
(2.3)
where denotes the zero matrix. We observe that the number of parameters in the group convolution (2.3) is of that of the standard convolution (2.2). Moreover, the representation matrix in (2.3) is block-diagonal, allowing each block on the main diagonal to be processed in parallel, making it computationally more efficient than (2.2). However, when the number of parameters in the group convolution is reduced, the performance of a CNN using group convolutional layers may decrease compared to a CNN using standard convolutional layers (Lee et al., 2022; Zhang et al., 2018).
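To illustrate the parameter reduction just described, the following minimal PyTorch sketch compares a standard convolution with its grouped counterpart; the channel, kernel, and group sizes are chosen only for illustration and are not taken from the paper.

```python
# Hedged sketch: compare parameter counts of a standard and a group convolution.
import torch
import torch.nn as nn

C_in, C_out, k, G = 64, 64, 3, 4  # illustrative sizes (not from the paper)

standard = nn.Conv2d(C_in, C_out, kernel_size=k, padding=1, bias=False)
grouped = nn.Conv2d(C_in, C_out, kernel_size=k, padding=1, groups=G, bias=False)

x = torch.randn(1, C_in, 32, 32)
assert standard(x).shape == grouped(x).shape  # identical output shape

n_std = sum(p.numel() for p in standard.parameters())
n_grp = sum(p.numel() for p in grouped.parameters())
print(n_std, n_grp, n_std / n_grp)  # the grouped layer uses 1/G as many parameters
```

Since the grouped layer is block-diagonal, its blocks can also be processed in parallel, which is the distributed-computing use case mentioned in Section 1.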
3 Main results
In this section, we present the main results of this paper, including the proposed BGC and estimates of its approximability and computational cost.
3.1 Balanced group convolution
A major drawback of the group convolution, which degrades performance, is the lack of intergroup communication, since the representation matrix given in (2.3) is block-diagonal. The motivation behind the proposed BGC is to add intergroup communication to the group convolution at low computational cost in order to improve performance. To achieve this, we utilize a representative vector of small dimension that captures all the information in the input . Specifically, we use the intergroup mean, i.e.,
Then, the proposed BGC is defined by
(3.1)
where each is a convolution that operates on channels and produces channels. That is, BGC is an extension of the group convolution that incorporates an additional structure based on the intergroup mean, which serves as a balancing factor that brings the entire information of the input into the intergroup communication. Specifically, this additional structure allows BGC to better distribute feature representations across groups.
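The following is a minimal PyTorch sketch, written by us rather than taken from the paper, of a layer with the structure of (3.1): a plain group convolution plus a convolution applied to the intergroup mean. The module name, the use of 2D convolutions, and the joint implementation of the balancing convolutions as a single layer are our assumptions.

```python
import torch
import torch.nn as nn

class BalancedGroupConv2d(nn.Module):
    """Sketch of a BGC layer: group convolution plus a convolution of the
    intergroup mean, following the structure of (3.1)."""
    def __init__(self, in_channels, out_channels, kernel_size, groups, padding=0):
        super().__init__()
        assert in_channels % groups == 0 and out_channels % groups == 0
        self.groups = groups
        # block-diagonal part: the plain group convolution (2.3)
        self.group_conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                                    padding=padding, groups=groups, bias=False)
        # balancing part: the convolutions applied to the intergroup mean,
        # implemented jointly as one convolution producing all output groups
        self.mean_conv = nn.Conv2d(in_channels // groups, out_channels, kernel_size,
                                   padding=padding, bias=False)

    def forward(self, x):
        b, c, h, w = x.shape
        # intergroup mean: average the corresponding channels of the G input groups
        x_mean = x.reshape(b, self.groups, c // self.groups, h, w).mean(dim=1)
        return self.group_conv(x) + self.mean_conv(x_mean)

# usage: a drop-in replacement for a 3x3 convolution with 64 channels and 4 groups
layer = BalancedGroupConv2d(64, 64, kernel_size=3, groups=4, padding=1)
print(layer(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```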
3.2 Approximability estimate
Thanks to the additional structure in the proposed BGC, we may expect that it behaves more similarly to the standard convolution than the group convolution does. To provide a rigorous assessment of this behavior, we introduce a notion of approximability and prove that BGC has a better approximability to the standard convolution than the group convolution.
Suppose that the standard convolution that maps groups to groups (see (2.2)) and the input are drawn from a certain probability distribution. We define the approximability of the group convolution of the form (2.3) as
(3.2)
where denotes the -norm. Namely, the approximability (3.2) measures the -expectation of the optimal -averaged squared -error between the output of the standard convolution and the output of the group convolution. The approximability of BGC is defined in the same manner.
To estimate the approximability (3.2), we need the following assumptions on the distributions of and .
Assumption 3.1.
A standard convolution and an input are random variables that satisfy the following:
(i) are identically distributed.
(ii) are independent and identically distributed (i.i.d.).
(iii) and are independent.
Assumption 3.1(i) is essential to handle the off-diagonal parts of the group convolution when estimating (3.2). To effectively compensate for the absence of the off-diagonal parts in the group convolution using , we need Assumption 3.1(ii). On the other hand, Assumption 3.1(iii) is quite natural since it asserts that the convolution and the input are independent of each other. Furthermore, if we impose an additional assumption on the parameters of the standard convolution, we can obtain better estimates on the approximability of group convolution and BGC. This assumption is specified in Assumption 3.2.
Assumption 3.2.
A standard convolution is a random variable such that are i.i.d. with .
The independence of is essential to obtain a sharper bound on the expectation of . The condition typically holds when the parameters of the convolutional layers are generated by Glorot (Glorot and Bengio, 2010) or He (He et al., 2015) initialization, which draws i.i.d. samples from a random variable with zero mean.
The following lemma presents a Young-type inequality for the standard convolution, which plays a key role in the proof of the main theorems.
Lemma 3.3.
Let be a standard convolution that operates on channels and produces channels. For any -channel input , we have
where is the number of parameters in a convolutional layer that maps a single channel to a single channel and denotes the -norm of the vector of parameters of the convolution .
Proof.
We first prove the case . Let be a convolution that maps a single channel to a single channel and be an input. Since the output of the convolution is linear with respect to the parameters of , there exists a matrix such that
where is the vector of parameters of . Note that each entry of is also an entry of . For each entry () of and each entry () of , their product appears at most once in the formulation of . Hence, the th column of satisfies
(3.3)
By the triangle inequality, Cauchy–Schwarz inequality, and (3.3), we obtain
(3.4)
which completes the proof of the case . The general case follows by applying this argument to each output channel and then summing over all output channels. ∎
A direct consequence of Lemma 3.3 is that, if and are independent, then we have
(3.5)
In the following lemma, we show that the above estimate can be improved up to a multiplicative factor if we additionally assume that has zero mean and that some random variables are independent and/or identically distributed.
Lemma 3.4.
Let be a standard convolution that operates on channels and produces channels, and let be an -channel input. Assume that and satisfy the following:
(i) are i.i.d. with .
(ii) are identically distributed.
(iii) and are independent.
Then we have
Proof.
We first prove the case . For , invoking (i) yields
(3.6)
On the other hand, for , by (iii) and (3.5), we have
(3.7)
where the last equality is due to (i) and (ii). By (3.6) and (3.7), we get
which completes the proof of the case . The general case can be shown by applying the case to each th output channel () and then summing over all . ∎
The representation matrix of the group convolution presented in (2.3) becomes sparser as the number of groups increases. This suggests that the approximation to the standard convolution deteriorates as the number of groups increases. However, to the best of our knowledge, there has been no theoretical analysis of this. Theorem 3.5 presents the first theoretical result on the approximability of the group convolution.
Theorem 3.5.
For an -channel input , let and denote the outputs of the standard and group convolutions given in (2.2) and (2.3), respectively. Under Assumption 3.1, we have
In addition, if we further assume that Assumption 3.2 holds, then we have
Proof.
Take any standard convolution and any input as in (2.2). If we set
i.e., if we choose as the block-diagonal part of , then we have
(3.8)
For each , since the map is a convolution that operates on channels ( is excluded from the input), invoking Lemmas 3.3 and 3.4 yields
(3.9)
where
(3.10)
Note that the independence condition between and is used in (3.9).
Meanwhile, Assumption 3.1 implies that
(3.11)
Finally, combining (3.2), (3.9), and (3.11) yields
which completes the proof. ∎
As discussed above, BGC (3.1) has the additional structure that utilizes the intergroup mean to compensate for the absence of the off-diagonal parts in the group convolution. We obtain the main theoretical result summarized in Theorem 3.6, showing that BGC achieves an improved approximability estimate compared to the group convolution.
Theorem 3.6.
For an -channel input , let and denote the outputs of the standard convolution and BGC given in (2.2) and (3.1), respectively. Under Assumption 3.1, we have
In addition, if we further assume that Assumption 3.2 holds, we have
Proof.
Take any standard convolution and any input as in (2.2). If we choose as
then we get
Note that, in the proof of Theorem 3.5, the independence condition of was never used. Hence, we can use the same argument as in the proof of Theorem 3.5 to obtain
(3.12)
where was given in (3.10). Meanwhile, since are i.i.d., we have
(3.13)
Combining (3.12) and (3.13) completes the proof. ∎
Remark 3.7.
By Assumption 3.1, the approximability of the group convolution has a bound of . Adding Assumption 3.2 to this provides a bound of . Since , the latter is a sharper bound. On the other hand, in both cases, the approximability of BGC has a sharper estimate than that of the group convolution by a factor of .
3.3 Computational cost
We now consider the computational cost for a single convolutional layer with the proposed BGC. Let be the size of each input channel, i.e., . The number of scalar arithmetic operations required in a single operation of BGC is given by
(3.14)
In the right-hand side of (3.14), the first term corresponds to blocks of convolutional layers in (3.1), and the second term corresponds to the computation of . Noting that the standard convolution and the group convolution require and scalar arithmetic operations, respectively, we conclude that the computational cost of BGC is approximately of that of the standard convolution, but twice as much as that of the group convolution.
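As a rough sanity check of the operation counts above, the following sketch tallies multiply-accumulate operations for the standard, group, and balanced group convolutions under our own simplifying assumptions (a square kernel, a fixed number of pixels per channel, evenly split groups); it is bookkeeping in the spirit of (3.14), not the paper's exact formula.

```python
def conv_macs(c_in, c_out, k, d):
    """Multiply-accumulates of a standard convolution: each of the d output
    pixels of each output channel sums over c_in * k * k inputs."""
    return c_in * c_out * k * k * d

def group_conv_macs(c_in, c_out, k, d, groups):
    """G independent convolutions mapping c_in/G channels to c_out/G channels."""
    return groups * conv_macs(c_in // groups, c_out // groups, k, d)

def bgc_macs(c_in, c_out, k, d, groups):
    """Group convolution plus a convolution of the intergroup mean
    (c_in/G -> c_out channels) plus the cost of averaging the groups."""
    mean_cost = c_in * d  # summing G slices of c_in/G channels, d pixels each
    return (group_conv_macs(c_in, c_out, k, d, groups)
            + conv_macs(c_in // groups, c_out, k, d)
            + mean_cost)

c_in = c_out = 256
k, d, G = 3, 56 * 56, 4  # illustrative layer sizes
print(conv_macs(c_in, c_out, k, d))           # standard convolution
print(group_conv_macs(c_in, c_out, k, d, G))  # about 1/G of the standard cost
print(bgc_macs(c_in, c_out, k, d, G))         # about 2/G of the standard cost
```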
4 Comparison with existing works
Table 1: Comparison of BGC with other variants of group convolution (see the discussion below).

Method | Intergroup communication | Group size | Partition | # of parameters
---|---|---|---|---
GC (Krizhevsky et al., 2017) | No | even | deterministic |
Shuffle (Zhang et al., 2018) | Yes | even | deterministic |
Learnable GC (Huang et al., 2018) | Yes | even | adaptive | greater
Fully learnable GC (Wang et al., 2019) | Yes | varies | adaptive | greater
Two-level GC (Lee et al., 2022) | Yes | even | deterministic | greater
BGC | Yes | even | deterministic |
In this section, we compare BGC with other variants of group convolution. Various methods have been proposed to improve the performance of group convolution by enabling intergroup communication: Shuffle (Zhang et al., 2018), learnable group convolution (LGC) (Huang et al., 2018), fully learnable group convolution (FLGC) (Wang et al., 2019), and two-level group convolution (TLGC) (Lee et al., 2022). In BGC, input and output channels are evenly partitioned into groups to ensure that all groups have the same computational cost, preventing computational bottlenecks. This feature also applies to GC, Shuffle, LGC, and TLGC. Moreover, the partition in BGC is deterministic, i.e., it uses a fixed partition that is independent of the convolution and the input. Hence, the cost of partitioning the channels into blocks is lower than that of LGC and FLGC, which use a different partition for each convolution.
The number of parameters in BGC decreases at a rate of as the number of groups increases, which is consistent with the rates of GC and Shuffle. Consequently, the computational cost of BGC can be scaled by , similar to GC and Shuffle. This is in contrast to LGC, FLGC, and TLGC, which have additional parameters that are not , making their computational costs greater than . Note that the total computational cost of LGC, FLGC, and TLGC is , , and , respectively.
As discussed in Theorem 3.6, the upper bound on the approximability of BGC is , which is better than GC in Theorem 3.5 by a factor of . To the best of our knowledge, this is the first rigorous analysis of the performance of a group convolution variant. On the other hand, there is still no approximation theory for Shuffle, LGC, FLGC, and TLGC.
Table 1 provides an overview of the comparison between BGC and the other variants of group convolution discussed above. As shown in Table 1, Shuffle (Zhang et al., 2018) shares all the advantages of BGC except for the theoretical guarantee on approximability. However, BGC has another advantage over Shuffle. While Shuffle relies on propagation through many layers to achieve intergroup communication and good accuracy, BGC incorporates intergroup communication within each layer, improving accuracy even when the network is not deep. This is particularly advantageous when BGC is applied to CNNs that are wide but not deep, such as WideResNet (Zagoruyko and Komodakis, 2016). This assertion is further supported by the numerical results in the next section.
5 Numerical experiments
In this section, we present numerical results that verify our theoretical results and demonstrate the performance of the proposed BGC. All programs were implemented in Python with PyTorch (Paszke et al., 2019) and all computations were performed on a cluster equipped with Intel Xeon Gold 6240R (2.4GHz, 24C) CPUs, NVIDIA RTX 3090 GPUs, and the operating system CentOS 7.
5.1 Verification of the approximability estimates
[Figure 1: The approximability measure (5.1) of GC and BGC with respect to the number of groups under various settings.]
[Figure 2: The relative approximability (5.3) of GC and BGC under various settings.]
We verify the approximability estimates of GC and BGC given in Theorems 3.5 and 3.6, showing that the estimate for BGC is sharper than that for GC. We consider a set of one-dimensional standard convolutional layers with and , generated by He initialization (He et al., 2015). We also sample random data points from either the normal distribution or the uniform distribution . To measure the approximability of GC and BGC with respect to , we use the following measure:
(5.1)
which can be evaluated by conventional least squares solvers (Golub and Van Loan, 2013).
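The following NumPy sketch shows one way such a measure can be evaluated with a least-squares solver. For brevity it replaces the one-dimensional convolutions by pointwise (1×1) maps, i.e., dense channel-mixing matrices, and measures only the group convolution; the BGC case is analogous, with the intergroup mean appended as additional regressors. All sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
c_in = c_out = 64        # illustrative channel counts
n_samples = 2000         # illustrative number of random inputs

def approx_error(G):
    # standard "convolution" with He-like i.i.d. zero-mean weights
    W = rng.normal(0.0, np.sqrt(2.0 / c_in), size=(c_out, c_in))
    X = rng.normal(size=(n_samples, c_in))   # random inputs
    Y = X @ W.T                              # outputs of the standard map
    err, gi, go = 0.0, c_in // G, c_out // G
    for g in range(G):                       # best block-diagonal (GC) fit:
        Xg = X[:, g * gi:(g + 1) * gi]       # one least-squares problem per group
        Yg = Y[:, g * go:(g + 1) * go]
        Bg, *_ = np.linalg.lstsq(Xg, Yg, rcond=None)
        err += np.sum((Yg - Xg @ Bg) ** 2)
    return err / n_samples                   # sample-averaged squared error

for G in (2, 4, 8, 16):
    print(G, approx_error(G))  # the error grows with the number of groups
```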
The graphs in Figure 1 depict with respect to at various settings for GC and BGC. One can readily observe linear relations between and for both GC and BGC. That is, we have an empirical formula
(5.2)
for some positive constants and independent of . From the graph, we see that for GC and for BGC, which confirm our theoretical results in Theorems 3.5 and 3.6. Next, we define a relative approximability
(5.3)
In Figure 2, we plot with respect to under various settings for GC and BGC, where for GC and for BGC. We observe that all values of are bounded by independent of , which implies the constant is less than in (5.2). This observation also agrees with the theoretical results in Theorems 3.5 and 3.6.
5.2 Embedding in recent CNNs
In Section 5.1, we verified that our approximability estimates are consistent with experimental results using synthetic data. Now, we examine the classification performance of BGC applied to various recent CNNs, including ResNet (He et al., 2016), WideResNet (Zagoruyko and Komodakis, 2016), ResNeXt (Xie et al., 2017), MobileNetV2 (Sandler et al., 2018), and EfficientNet (Tan and Le, 2019). The network structures used in our experiments are described below:
• ResNet. In our experiments, we used ResNet-50, which was used for ImageNet classification. It uses a bottleneck structure consisting of a standard convolution, a group convolution with a kernel, another standard convolution, and a skip connection.
• WideResNet. In our experiments, we used two structures of WideResNet: WideResNet-28-10 and WideResNet-34-2, which were used for CIFAR-10 and ImageNet classification, respectively. It uses a residual unit consisting of two convolutions and a skip connection.
• ResNeXt. It uses the same structure as ResNet except that it uses group convolution instead of convolution. In our experiments, we used ResNeXt-29 for CIFAR-10 classification. We applied the group convolutions to two standard convolutions.
• MobileNetV2. The basic structure of MobileNetV2 (Sandler et al., 2018) is an inverted residual block, similar to the bottleneck in ResNeXt. However, MobileNetV2 uses a depthwise convolution (Chollet, 2017) instead of ResNeXt's group convolution. In our experiments, we applied group convolution to two standard convolutions.
• EfficientNet. It is based on MnasNet (Tan et al., 2019), a modification of MobileNetV2. Using an automated machine learning technique (Zoph and Le, 2017), EfficientNet proposes several model structures with appropriate numbers of channels and depths. It also has several variations, from the b0 to b7 models, depending on the number of parameters. In our experiments, we used the most basic model, b0.
Table 2: Summary of the datasets used in our experiments.

Dataset | Image size | Classes | # of training / validation samples
---|---|---|---
CIFAR-10 | 32×32 | 10 | 50,000 / 10,000
ImageNet | 224×224 | 1,000 | 1,281,167 / 50,000
We evaluate the performance of various variants of group convolution on the datasets CIFAR-10 (Krizhevsky, 2009) and ImageNet ILSVRC 2012 (Deng et al., 2009). Details on these datasets are in Table 2. In addition, for the CIFAR-10 dataset, a data augmentation technique in (Lee et al., 2015) was adopted for training; four pixels are padded on each side of images and random crops are sampled from the padded images and their horizontal flips. For the ImageNet dataset, input images of size are randomly cropped from a resized image using the scale and aspect ratio augmentation of (Szegedy et al., 2015), which was implemented by (Gross and Wilber, 2016).
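For concreteness, the CIFAR-10 padding-and-cropping augmentation described above can be written with standard torchvision transforms as in the sketch below; the normalization statistics are the commonly used CIFAR-10 values and are our assumption, not a detail taken from the paper.

```python
import torchvision.transforms as T
from torchvision.datasets import CIFAR10

# 4-pixel padding, 32x32 random crop, and random horizontal flip, as described above.
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    # commonly used CIFAR-10 statistics (assumed, not from the paper)
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

train_set = CIFAR10(root="./data", train=True, download=True,
                    transform=train_transform)
```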
5.2.1 Ablation study through transfer learning
First, we apply GC and BGC to a network pre-trained on real data to see how well BGC works compared to GC. We select a ResNet-50 pre-trained on the ImageNet dataset; the pre-trained parameters are available in the PyTorch library (Paszke et al., 2019). By transferring the parameters of the pre-trained standard convolutions in ResNet-50, we obtained the parameters of (2.3) and (3.1); the resulting networks are referred to as ResNet-50-GC and ResNet-50-BGC, respectively. We then further trained ResNet-50-GC and ResNet-50-BGC for epochs using the SGD optimizer with batch size , learning rate , Nesterov momentum , and weight decay .
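One way to obtain the GC parameters from a pre-trained standard convolution, as described above, is to copy the block-diagonal part of its weight tensor. The sketch below illustrates this for a single 3×3 layer of the torchvision ResNet-50, assuming a recent torchvision; the choice of layer and number of groups is illustrative, and the additional balancing convolution of BGC would be initialized and trained analogously.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights="IMAGENET1K_V1")  # pre-trained ResNet-50 from torchvision
src = model.layer1[0].conv2                # a 3x3 convolution inside a bottleneck
G = 4                                      # illustrative number of groups

dst = nn.Conv2d(src.in_channels, src.out_channels, src.kernel_size,
                stride=src.stride, padding=src.padding, groups=G, bias=False)

# copy the block-diagonal part of the pre-trained weight into the group convolution;
# weight shapes: (C_out, C_in, k, k) for src and (C_out, C_in/G, k, k) for dst
co, ci = src.out_channels // G, src.in_channels // G
with torch.no_grad():
    for g in range(G):
        dst.weight[g * co:(g + 1) * co] = \
            src.weight[g * co:(g + 1) * co, g * ci:(g + 1) * ci]
```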
The classification errors of ResNet-50-GC and ResNet-50-BGC are given in Table 3. Compared to GC, BGC reduces the classification error by roughly 4 to 15 percentage points. Therefore, even on real data, BGC clearly recovers a substantial part of the performance lost by GC.
Table 3: Classification errors (%) of ResNet-50-GC and ResNet-50-BGC on the ImageNet dataset for various numbers of groups N (N = 1 corresponds to the original ResNet-50 with standard convolutions).

N | ResNet-50-GC | ResNet-50-BGC
---|---|---
1 | 23.87 |
2 | 26.54 | 22.45
4 | 36.70 | 25.51
8 | 43.72 | 31.13
16 | 50.09 | 35.45
5.2.2 Computational efficiency
To verify the computational efficiency of BGC, we conducted experiments in which we increase the number of groups of a convolution mapping channels to channels with input . Note that this convolution is used in ResNet-50. Table 4 shows the total memory usage, computation time, and classification errors for forward and backward propagation of convolutions equipped with GC and BGC. Note that the number of groups varies from to . The results are computed on a single GPU. First, looking at the total memory usage, BGC uses more memory than GC, but the gap narrows as the number of groups increases. The additional memory usage of BGC occurs when computing the mean and the convolution defined in (3.1). On the other hand, looking at the computation time, BGC is slower than GC but faster than the standard convolution when . As can be seen in (3.14), compared to GC, BGC requires more computation time because it performs convolution twice, but as the number of groups increases, the time decreases. Moreover, the cost of computing the mean is small enough that it has little effect on the total computation time. Overall, these results suggest that while BGC increases memory consumption and computation time compared to GC, it can improve performance with only a small increase in computational cost when group convolution is applied to layers with many channels.
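A measurement of this kind can be reproduced with the sketch below, which times forward and backward passes and records peak GPU memory for a group convolution. It assumes a CUDA-capable GPU, uses illustrative layer sizes, and the BGC module sketched in Section 3.1 can be timed the same way; it is our own benchmarking code, not the script used for Table 4.

```python
import torch
import torch.nn as nn

def benchmark(layer, x, n_iter=100):
    """Average forward+backward time (ms) and peak GPU memory (MB)."""
    layer, x = layer.cuda(), x.cuda().requires_grad_(True)
    torch.cuda.reset_peak_memory_stats()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(n_iter):
        layer(x).sum().backward()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / n_iter, torch.cuda.max_memory_allocated() / 2**20

x = torch.randn(32, 256, 56, 56)  # illustrative input (batch, channels, height, width)
for G in (1, 2, 4, 8, 16):        # G = 1 corresponds to the standard convolution
    gc = nn.Conv2d(256, 256, 3, padding=1, groups=G, bias=False)
    print(G, benchmark(gc, x))
```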
Table 4: Total memory usage, computation time, and classification error of GC and BGC for various numbers of groups N (N = 1 corresponds to the standard convolution).

N | Memory, GC (MB) | Memory, BGC (MB) | Time, GC (ms) | Time, BGC (ms) | Error, GC (%) | Error, BGC (%)
---|---|---|---|---|---|---
1 | 60.50 | | 7.979 | | 23.87 |
2 | 42.50 | 75.75 | 9.056 | 10.467 | 26.54 | 22.45
4 | 33.50 | 48.62 | 4.449 | 4.790 | 36.70 | 25.51
8 | 29.00 | 37.06 | 2.120 | 2.648 | 43.72 | 31.13
16 | 26.75 | 31.53 | 2.265 | 2.557 | 50.09 | 35.45
5.2.3 Comparison with existing approaches
Table 5: Classification errors (%) on the CIFAR-10 dataset for various numbers of groups N (N = 1 corresponds to the standard convolution, SC).

N | Method | EfficientNet-b0 | WideResNet-28-10 | ResNeXt-29
---|---|---|---|---
1 | SC | 8.42 | 3.88 | 5.32
2 | GC | 10.02 | 4.33 | 5.92
 | Shuffle | 8.72 | 4.00 | 4.52
 | FLGC | 8.63 | 4.62 | 4.68
 | TLGC | 8.93 | 4.12 | 5.16
 | BGC | 7.16 | 3.75 | 5.84
4 | GC | 11.25 | 5.44 | 6.76
 | Shuffle | 9.41 | 4.21 | 4.17
 | FLGC | 9.70 | 6.16 | 5.16
 | TLGC | 9.42 | 4.34 | 6.92
 | BGC | 7.50 | 4.02 | 5.84
8 | GC | 13.14 | 6.43 | 6.52
 | Shuffle | 10.65 | 4.73 | 4.81
 | FLGC | 10.49 | 9.81 | 5.25
 | TLGC | 10.68 | 4.79 | 6.56
 | BGC | 7.80 | 4.22 | 6.00
16 | GC | - | 8.47 | 6.92
 | Shuffle | - | 5.28 | 4.89
 | FLGC | - | 11.26 | 6.11
 | TLGC | - | 4.95 | 6.08
 | BGC | - | 4.76 | 5.60
Table 6: Classification errors (%) on the ImageNet dataset for various numbers of groups N (N = 1 corresponds to the standard convolution, SC).

N | Method | EfficientNet-b0 | WideResNet-34-2 | MobileNetV2
---|---|---|---|---
1 | SC | 30.68 | 24.99 | 34.13
2 | GC | 35.18 | 26.92 | 39.50
 | Shuffle | 33.91 | 25.12 | 36.67
 | FLGC | 35.53 | 30.53 | 37.98
 | TLGC | 33.50 | 25.36 | 33.56
 | BGC | 31.82 | 24.19 | 36.10
4 | GC | 41.64 | 31.96 | 46.15
 | Shuffle | 38.02 | 26.84 | 38.14
 | FLGC | 40.71 | 38.30 | 44.37
 | TLGC | 36.38 | 27.17 | 36.88
 | BGC | 33.78 | 25.36 | 37.82
8 | GC | 48.42 | 36.70 | 51.97
 | Shuffle | 42.74 | 29.03 | 42.75
 | FLGC | 45.38 | 45.54 | 49.28
 | TLGC | 38.66 | 28.43 | 39.16
 | BGC | 38.13 | 27.16 | 42.65
As benchmark group convolution variants, we choose GC (Krizhevsky et al., 2017), Shuffle (Zhang et al., 2018), FLGC (Wang et al., 2019), and TLGC (Lee et al., 2022), which were discussed in Section 4. All neural networks for the CIFAR-10 dataset in this section were trained using stochastic gradient descent with batch size , weight decay , Nesterov momentum , total number of epochs , and weights initialized as in (He et al., 2015). The initial learning rate was set to and was reduced to one tenth of its value at the 60th, 120th, and 160th epochs. For ImageNet, the hyperparameter settings are the same as in the CIFAR-10 case, except for the weight decay , the total number of epochs , and the learning rate, which is reduced by a factor of 10 at the 30th and 60th epochs.
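A sketch of this training configuration in PyTorch is given below. The schedule (one-tenth learning-rate drops at the 60th, 120th, and 160th epochs, SGD with Nesterov momentum and weight decay) follows the description above, but the concrete numerical values (batch size 128, learning rate 0.1, momentum 0.9, weight decay 5e-4, 200 epochs) and the dummy data loader are our assumptions, since the paper's exact values are not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# stand-in model and dummy CIFAR-10-sized data; replace with a CNN from Section 5.2
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
train_loader = DataLoader(TensorDataset(torch.randn(512, 3, 32, 32),
                                        torch.randint(0, 10, (512,))),
                          batch_size=128, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4, nesterov=True)
# learning rate dropped to one tenth at the 60th, 120th, and 160th epochs
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 120, 160], gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(200):  # assumed total number of epochs
    for xb, yb in train_loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()
    scheduler.step()
```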
The classification errors of BGC and the other benchmarks applied to various CNNs on the CIFAR-10 dataset are presented in Table 5. BGC shows better overall results than the other methods. In particular, applying BGC significantly improves the classification performance of EfficientNet-b0. In addition, the classification errors of BGC for WideResNet-28-10 remain below 5% as the number of groups increases up to . Although Shuffle performs best for ResNeXt-29, BGC still outperforms GC. To further validate the performance of BGC, we report the classification results on the ImageNet dataset in Table 6. For this dataset, we observe that BGC outperforms the other benchmark group convolution variants for all tested CNN architectures except MobileNetV2. Therefore, through several experiments, we conclude that BGC is an effective and theoretically supported alternative to group convolution for various CNN architectures on large-scale datasets.
6 Conclusion
In this paper, we proposed a novel variant of group convolution called BGC. We constructed BGC by combining the plain group convolution structure with a balancing term, defined via the intergroup mean, to improve intergroup communication. We designed a new measure (3.2) to assess the approximability of group convolution variants and proved that the approximability of group convolution is bounded by . We also showed that the bound on the approximability of the proposed BGC is , which is an improved bound compared to the group convolution. Moreover, under the additional assumption on the parameters of the standard convolution, we showed that the bounds on the approximability of group convolution and BGC are and , respectively. Numerical experiments with various CNNs such as WideResNet, MobileNetV2, ResNeXt, and EfficientNet demonstrate the practical efficacy of BGC on large-scale neural networks and datasets.
We conclude this paper with a remark on BGC. A major drawback of the proposed BGC is that it requires full data communication among groups: when computing the intergroup mean that appears in the balancing term, we need the entire input . This high volume of communication can be a bottleneck in parallel computation, which limits the performance of the model on distributed-memory systems. We note that TLGC (Lee et al., 2022) has addressed this issue by minimizing the amount of communication required. Exploring how to improve BGC by reducing communication in a similar manner to (Lee et al., 2022), while maintaining strong performance in both theory and practice, is left as future work.
Acknowledgments
This work was supported in part by the National Research Foundation (NRF) of Korea grant funded by the Korea government (MSIT) (No. RS-2023-00208914), and in part by Basic Science Research Program through NRF funded by the Ministry of Education (No. 2019R1A6A1A10073887).
References
- Brock et al. (2021) Brock, A., De, S., Smith, S.L., Simonyan, K., 2021. High-performance large-scale image recognition without normalization, in: Proceedings of the 38th International Conference on Machine Learning, PMLR. pp. 1059–1071.
- Chollet (2017) Chollet, F., 2017. Xception: Deep learning with depthwise separable convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Deng et al. (2009) Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L., 2009. ImageNet: A large-scale hierarchical image database, in: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
- Gholami et al. (2022) Gholami, A., Kim, S., Zhen, D., Yao, Z., Mahoney, M., Keutzer, K., 2022. Low-Power Computer Vision: Improve the Efficiency of Artificial Intelligence (1st ed.). Chapman and Hall/CRC, Boca Raton, FL. chapter 13. pp. 291–326.
- Glorot and Bengio (2010) Glorot, X., Bengio, Y., 2010. Understanding the difficulty of training deep feedforward neural networks, in: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR. pp. 249–256.
- Golub and Van Loan (2013) Golub, G.H., Van Loan, C.F., 2013. Matrix Computations. Johns Hopkins University Press, Baltimore.
- Gross and Wilber (2016) Gross, S., Wilber, M., 2016. Training and investigating residual nets. Facebook AI Research 6.
- He et al. (2022) He, J., Li, L., Xu, J., 2022. Approximation properties of deep ReLU CNNs. Research in the Mathematical Sciences 9, 38.
- He et al. (2015) He, K., Zhang, X., Ren, S., Sun, J., 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV).
- He et al. (2016) He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Hu et al. (2018) Hu, J., Shen, L., Sun, G., 2018. Squeeze-and-Excitation networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Huang et al. (2018) Huang, G., Liu, S., van der Maaten, L., Weinberger, K.Q., 2018. CondenseNet: An efficient DenseNet using learned group convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Huang et al. (2019) Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, D., Chen, M., Lee, H., Ngiam, J., Le, Q.V., Wu, Y., Chen, Z., 2019. GPipe: Efficient training of giant neural networks using pipeline parallelism, in: Advances in Neural Information Processing Systems, Curran Associates, Inc.
- Krizhevsky (2009) Krizhevsky, A., 2009. Learning multiple layers of features from tiny images. Technical Report. University of Toronto.
- Krizhevsky et al. (2017) Krizhevsky, A., Sutskever, I., Hinton, G.E., 2017. ImageNet classification with deep convolutional neural networks. Communications of the ACM 60, 84–90.
- Lee et al. (2015) Lee, C.Y., Xie, S., Gallagher, P., Zhang, Z., Tu, Z., 2015. Deeply-Supervised Nets, in: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR. pp. 562–570.
- Lee et al. (2022) Lee, Y., Park, J., Lee, C.O., 2022. Two-level group convolution. Neural Networks 154, 323–332.
- Liu et al. (2020) Liu, J., Tripathi, S., Kurup, U., Shah, M., 2020. Pruning algorithms to accelerate convolutional neural networks for edge applications: A survey. arXiv preprint arXiv:2005.04275 .
- Long et al. (2015) Long, J., Shelhamer, E., Darrell, T., 2015. Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S., 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32, 8024–8035.
- Radford et al. (2015) Radford, A., Metz, L., Chintala, S., 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 .
- Ronneberger et al. (2015) Ronneberger, O., Fischer, P., Brox, T., 2015. U-Net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Springer, Cham. pp. 234–241.
- Sandler et al. (2018) Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C., 2018. MobileNetV2: Inverted residuals and linear bottlenecks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Szegedy et al. (2015) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A., 2015. Going deeper with convolutions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Tan et al. (2019) Tan, M., Chen, B., Pang, R., Vasudevan, V., Sandler, M., Howard, A., Le, Q.V., 2019. MnasNet: Platform-aware neural architecture search for mobile, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- Tan and Le (2019) Tan, M., Le, Q., 2019. EfficientNet: Rethinking model scaling for convolutional neural networks, in: Proceedings of the 36th International Conference on Machine Learning, PMLR. pp. 6105–6114.
- Wang et al. (2019) Wang, X., Kan, M., Shan, S., Chen, X., 2019. Fully learnable group convolution for acceleration of deep neural networks, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- Xie et al. (2017) Xie, S., Girshick, R., Dollar, P., Tu, Z., He, K., 2017. Aggregated residual transformations for deep neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Zagoruyko and Komodakis (2016) Zagoruyko, S., Komodakis, N., 2016. Wide residual networks, in: Proceedings of the British Machine Vision Conference (BMVC), BMVA Press. pp. 87.1–87.12.
- Zhang et al. (2017) Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L., 2017. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing 26, 3142–3155.
- Zhang et al. (2018) Zhang, X., Zhou, X., Lin, M., Sun, J., 2018. Shufflenet: An extremely efficient convolutional neural network for mobile devices, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Zhang et al. (2015a) Zhang, X., Zou, J., He, K., Sun, J., 2015a. Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 38, 1943–1955.
- Zhang et al. (2015b) Zhang, X., Zou, J., Ming, X., He, K., Sun, J., 2015b. Efficient and accurate approximations of nonlinear convolutional networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Zoph and Le (2017) Zoph, B., Le, Q., 2017. Neural architecture search with reinforcement learning, in: International Conference on Learning Representations.