Improved Knowledge Distillation via Full Kernel Matrix Transfer
Abstract
Knowledge distillation is an effective approach to model compression in deep learning. Given a large model (i.e., a teacher model), it aims to improve the performance of a compact model (i.e., a student model) by transferring information from the teacher. Various kinds of information for distillation have been studied. Recently, a number of works propose to transfer the pairwise similarity between examples to distill relative information. However, most efforts are devoted to developing different similarity measurements, while only a small matrix consisting of examples within a mini-batch is transferred at each iteration, which can be inefficient for optimizing the pairwise similarity over the whole data set. In this work, we aim to transfer the full similarity matrix effectively. The main challenge comes from the size of the full matrix, which is quadratic in the number of examples. To address the challenge, we decompose the original full matrix with the Nyström method. By selecting appropriate landmark points, our theoretical analysis indicates that the loss for transfer can be further simplified. Concretely, we find that the difference between the full kernel matrices of the teacher and the student can be well bounded by that of the corresponding partial matrices, which consist only of similarities between the original examples and the landmark points. Compared with the full matrix, the size of the partial matrix is linear in the number of examples, which improves the efficiency of optimization significantly. The empirical study on benchmark data sets demonstrates the effectiveness of the proposed algorithm. Code is available at https://github.com/idstcv/KDA.
Keywords— knowledge distillation, full kernel matrix transfer
1 Introduction
With the development of deep learning, neural networks have made many computer vision tasks applicable to edge devices. Those devices often have limited computation and storage resources. Therefore, neural networks with a small number of FLOPs and parameters are preferred. Many efforts have been devoted to improving the performance of neural networks under resource constraints [4, 7, 17]. Among the various developed strategies, knowledge distillation (KD) is a simple yet effective way to help train compact networks [7].
KD in deep learning aims to improve the performance of a small network (i.e., the student) with information from a large network (i.e., the teacher). Given a teacher, various kinds of information can be transferred to regularize the training of the student network. [7] transfers the label information to smooth the label space of the student network. [16] and [23] propose to transfer information from intermediate layers to help training. [22] transfers the flow between layers as hints for student networks. [3] improves the performance of metric learning with rank information from teacher models. Recently, a number of methods have been developed to transfer pairwise similarities between examples to student networks [10, 13, 14, 20], and different similarity functions have been investigated. In this work, we also focus on pairwise similarity matrix transfer but aim to improve the efficiency of transfer rather than the similarity measurements.
Given two images $x_i$ and $x_j$, their similarity can be computed as $k(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$, where $\phi(\cdot)$ is the corresponding kernel mapping that projects examples from the original space to a space that fits the problem better. In an appropriate space, a simple model (e.g., a linear model) can describe the data well [21]. Many efforts have been devoted to designing similarity functions, but the strategy for effective transfer is less investigated. Note that the size of the full similarity matrix is $\mathcal{O}(N^2)$, where $N$ is the total number of examples. However, only a mini-batch of examples is accessed at each iteration in a standard neural network training pipeline. Most existing methods [10, 13, 14, 20] can transfer only the partial matrix (i.e., of size $\mathcal{O}(b^2)$, where $b$ is the batch size) consisting of examples from a mini-batch, which can be inefficient for distilling the information of the full kernel matrix from the teacher. Therefore, many KD methods that transfer pairwise similarity have to include an additional KD loss [7] to reach the desired performance.
In this work, we propose to transfer the full kernel matrix between all examples from the teacher model to the student model efficiently. The main challenge comes from the size of the full matrix. With a large number of training examples, it becomes intractable to transfer the full matrix directly, especially for training neural networks with mini-batches. Therefore, we first apply the Nyström method [21] to obtain a low-rank approximation of the original matrix with landmark points. Then, we minimize the difference between the compact matrices that are calculated between original examples and landmark points, to transfer the information from the teacher effectively.
Compared with the full matrix of size $\mathcal{O}(N^2)$, the size of the compact matrix for transfer is only $\mathcal{O}(Nm)$ in our method, where $m$ is the number of landmark points. Besides, the number of landmark points is significantly smaller than that of training examples, and we can keep them in the network, which improves the efficiency of optimization. Considering that the selection of landmark points is important for approximating the original matrix, we propose to apply class centers as landmark points for better distillation. Our theoretical analysis shows that the difference between the original matrices from the teacher and the student can be well bounded by that of the corresponding partial matrices. Interestingly, according to our analysis, the conventional KD method in [7] that transfers the label information can be considered as a special case of our proposal with sub-optimal landmark points. This observation provides a new perspective to understand the efficacy of KD. We conduct extensive experiments on benchmark data sets. Our method achieves satisfactory performance without any additional KD loss, which confirms the effectiveness of transferring the full kernel matrix. The main contributions of this work can be summarized as follows:
• Instead of transferring partial similarity matrices within only mini-batches, which can be inefficient for KD optimization, we propose to transfer the full kernel matrix from the teacher model to the student model.
• Considering that it is intractable to transfer the full matrix directly due to its large size and the mini-batch setting in training, we propose to obtain a low-rank approximation and minimize the difference between the compact matrices for transferring.
• More importantly, we provide a theoretical guarantee that the difference between the original full matrices from the teacher and the student can be well bounded by that of the corresponding compact matrices, which also provides a new perspective to understand the efficacy of conventional KD.
• Our experiments, including extensive ablation studies on benchmark data sets, further demonstrate our theory and the effectiveness of the proposed method.
2 Related Work
2.1 Knowledge distillation
Knowledge distillation has a long history in ensemble learning and has become popular for training small-sized neural networks [1, 3, 7, 10, 12, 13, 18, 19, 23]. Various algorithms have been developed to transfer different kinds of information from the teacher model to the student model. For example, [7] considers the final output of the teacher model as a soft label and regularizes the similarity between the label distribution output by the student model and the soft label from the teacher model. [23] transfers the attention maps from intermediate layers, which provides a way to explore more information from the teacher model. [1] proposes an information-theoretic framework that formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks.
Recently, a number of algorithms [10, 13, 14, 20] have been proposed to transfer the pairwise similarity for knowledge distillation. However, they focus on developing similarity functions while only transferring the similarity information of pairs within a mini-batch, which can be inefficient for distillation. We, on the other hand, aim to transfer the full matrix directly to achieve better performance. Furthermore, we provide a theoretical analysis to demonstrate the effectiveness of the proposed method.
2.2 Nyström method
The Nyström method is an effective algorithm for obtaining a low-rank approximation of a Gram matrix [21]. Given a full Gram matrix, it reconstructs the original matrix from a set of randomly sampled columns. The data points corresponding to the selected columns are called landmark points. The approximation error can be bounded even with randomly sampled landmark points. Later, researchers showed that a delicate sampling strategy can further improve the performance [5, 9, 24]. [5] proposes to sample landmark points with a data-dependent probability distribution rather than the uniform distribution. [9] and [24] demonstrate that using clustering centers as landmark points provides the best approximation among different strategies.
Note that the Nyström method was developed for unsupervised Gram matrix approximation, while we can access the label information in knowledge distillation. In this work, we provide an analysis of the selection criterion of landmark points for Gram matrix transfer and develop a supervised strategy accordingly. Besides, the Nyström method targets the approximation of a single Gram matrix, while we aim to transfer between two Gram matrices.
3 Efficient Kernel Matrix Transfer
Given two images $x_i$ and $x_j$, the similarity between them can be measured with a kernel function as $k(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$, where $\phi(\cdot)$ is a projection function that maps examples from the original space to a space better suited to the target task.
In this work, we consider a certain layer in a neural network as the projection function, where the output of that layer is taken as $\phi(x)$. We denote the student network by $S$ and the teacher network by $T$. The features output from a certain layer of $S$ and $T$ are referred to as $\phi_S(x)$ and $\phi_T(x)$, respectively, and the index of the layer is omitted for brevity. Then, the similarity between two images $x_i$ and $x_j$ in the Gram matrix of the student can be computed by
$$K_S(i, j) = \phi_S(x_i)^\top \phi_S(x_j),$$
and analogously $K_T(i, j) = \phi_T(x_i)^\top \phi_T(x_j)$ for the teacher.
Note that other, more complicated similarity functions can be applied between $\phi(x_i)$ and $\phi(x_j)$ as in existing methods [10, 13, 14, 20].
Let $K_S, K_T \in \mathbb{R}^{N \times N}$ denote the Gram matrices from the student and teacher networks, respectively, where $N$ is the total number of images. We aim to transfer the full Gram matrix from the teacher model to the student model. The corresponding loss for knowledge distillation with Gram matrix transfer can be written as
$$\mathcal{L} = \|K_S - K_T\|_F^2 = \|X_S X_S^\top - X_T X_T^\top\|_F^2, \qquad (3.1)$$
where $X_S$ and $X_T$ denote the representations of the entire data set output from the same certain layer of the student and teacher networks, respectively.
Minimizing this loss directly is intractable due to the large size of the Gram matrix, especially in the conventional training pipeline of deep learning, where only a mini-batch of examples is accessible in each iteration. A straightforward way is to optimize only the random pairs in each mini-batch as in [10, 13, 14, 20]. However, this can be sub-optimal and result in slow optimization of the transfer. Hence, we consider decomposing the Gram matrix and optimizing a low-rank approximation in lieu of the full Gram matrix.
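To make the objective concrete, the following minimal sketch (not the authors' implementation; the tensor names and shapes are assumptions) computes the loss in Eqn. (3.1) from the features of all $N$ examples, which also illustrates why the full-matrix loss is intractable at scale: it materializes two $N \times N$ matrices.

```python
# A minimal sketch of the full-matrix transfer loss in Eqn. (3.1).
# feats_s / feats_t are assumed to hold the student / teacher features of
# ALL N examples, which is exactly what makes this loss intractable for large N.
import torch


def full_gram_transfer_loss(feats_s: torch.Tensor, feats_t: torch.Tensor) -> torch.Tensor:
    """feats_s: (N, d_s) student features, feats_t: (N, d_t) teacher features."""
    k_s = feats_s @ feats_s.t()        # student Gram matrix K_S, shape N x N
    k_t = feats_t @ feats_t.t()        # teacher Gram matrix K_T, shape N x N
    return ((k_s - k_t) ** 2).sum()    # squared Frobenius norm of the difference
```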
3.1 Nyström Approximation
The Nyström method is widely applied to approximate Gram matrices [5, 9, 21, 24]. We briefly review it in this subsection.
Given a Gram matrix $K \in \mathbb{R}^{N \times N}$, we can first randomly shuffle its columns (and the corresponding rows) and rewrite it as
$$K = \begin{bmatrix} W & K_{21}^\top \\ K_{21} & K_{22} \end{bmatrix},$$
where $W \in \mathbb{R}^{m \times m}$. Then, a good approximation of $K$ can be obtained as $\tilde{K} = \bar{K} W^{+} \bar{K}^\top$, where $\bar{K} = \begin{bmatrix} W \\ K_{21} \end{bmatrix} \in \mathbb{R}^{N \times m}$ and $W^{+}$ denotes the pseudo inverse of $W$ [21].
Let $K_k$ denote the best rank-$k$ approximation of the Gram matrix $K$, where $k \le m$. The rank-$k$ approximation derived by the Nyström method can be computed as $\tilde{K}_k = \bar{K} W_k^{+} \bar{K}^\top$, where $W_k$ denotes the best rank-$k$ approximation of $W$ and $W_k^{+}$ is the corresponding pseudo inverse. The performance of the approximation can be demonstrated by the following theorem.
Theorem 1
[9] Let $\tilde{K}_k$ denote the rank-$k$ Nyström approximation built from $m$ columns that are sampled uniformly at random without replacement from $K$. In expectation over the sampled columns, we have
$$\mathbb{E}\big[\|K - \tilde{K}_k\|_F\big] \le \|K - K_k\|_F + \mathcal{O}\Big(\big(\tfrac{k}{m}\big)^{1/4} N \max_i K(i, i)\Big).$$
The examples corresponding to the selected columns in $\bar{K}$ are referred to as landmark points. Theorem 1 shows that the approximation is applicable even with randomly sampled landmark points.
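As a reference for the construction reviewed above, here is a small, self-contained sketch (an illustration, not taken from the paper's code) of the standard Nyström approximation with uniformly sampled landmark columns:

```python
# Nyström approximation of an N x N Gram matrix K using m sampled landmarks.
import torch


def nystrom_approximation(K: torch.Tensor, m: int) -> torch.Tensor:
    """Approximate K by C @ pinv(W) @ C.T with m uniformly sampled landmark columns."""
    idx = torch.randperm(K.shape[0])[:m]      # landmark indices, sampled without replacement
    C = K[:, idx]                             # N x m block, i.e. the partial matrix
    W = C[idx, :]                             # m x m block among the landmark points
    return C @ torch.linalg.pinv(W) @ C.t()   # low-rank reconstruction of K
```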
3.2 Gram Matrix Transfer
With the low-rank approximation, the Gram matrices from a certain layer of the student and teacher networks can be approximated as
$$K_S \approx \bar{K}_S W_S^{+} \bar{K}_S^\top, \qquad K_T \approx \bar{K}_T W_T^{+} \bar{K}_T^\top.$$
Compared with the original Gram matrix, the partial matrix $\bar{K} \in \mathbb{R}^{N \times m}$ has a significantly smaller number of terms. Let $\{\mu_S^j\}_{j=1}^{m}$ and $\{\mu_T^j\}_{j=1}^{m}$ denote the landmark points for the student and teacher Gram matrices; then we have
$$\bar{K}_S(i, j) = \phi_S(x_i)^\top \mu_S^j, \qquad \bar{K}_T(i, j) = \phi_T(x_i)^\top \mu_T^j.$$
With Theorem 1, it is easy to show that transferring the compact matrix upper-bounds the distance between the original Gram matrices of the teacher and the student.
Corollary 1
With the Nyström approximation, we can bound the loss in Eqn. (3.1) by
$$\|K_S - K_T\|_F \le \big\|\bar{K}_S W_{S,k}^{+} \bar{K}_S^\top - \bar{K}_T W_{T,k}^{+} \bar{K}_T^\top\big\|_F + \|K_S - \tilde{K}_{S,k}\|_F + \|K_T - \tilde{K}_{T,k}\|_F,$$
where the last two approximation-error terms are controlled by Theorem 1.
In Corollary 1, the partial Gram matrix is regularized with the pseudo inverse of $W$. Computing the pseudo inverse is expensive, and it can introduce additional noise when the feature space of the student is unstable in the early stage of training. Besides, the first term still measures the difference between two $N \times N$ matrices.
By selecting the landmark points judiciously, we can bound the original loss in Eqn. (3.1) solely with $\bar{K}_S$ and $\bar{K}_T$, as in the following theorem.
Theorem 2
Assume that $\|\phi_S(x_i)\|_2$ and $\|\phi_T(x_i)\|_2$ are bounded by a constant $c$ and that the smallest eigenvalues of $W_S$ and $W_T$ are larger than $\lambda > 0$. Then the loss in Eqn. (3.1) can be bounded by the Nyström approximation errors of $K_S$ and $K_T$ plus a term proportional to $\|\bar{K}_S - \bar{K}_T\|_F$, where the proportionality factor depends only on $c$, $\lambda$, and the size of the problem.
Proof (sketch). By the triangle inequality and Theorem 1, it suffices to bound the difference between the Nyström approximations, $\|\tilde{K}_S - \tilde{K}_T\|_F$, in terms of the difference between the partial matrices $\bar{K}_S$ and $\bar{K}_T$. Since $\bar{K}$ is a partial matrix taken from $K$, the norm bound $c$ on the features controls every entry of $\bar{K}_S$ and $\bar{K}_T$, and the eigenvalue lower bound $\lambda$ controls the pseudo inverses $W_S^{+}$ and $W_T^{+}$. Expanding the difference with the eigendecompositions of $W_S$ and $W_T$, the cross terms can be collected into a matrix whose entries are squared inner products between the two sets of eigenvectors. Since this matrix is doubly stochastic, the resulting assignment-type objective attains its optimum on the diagonal. This can be proved by contradiction: if an optimal assignment achieved a larger value than the diagonal one, we could take the first column index $j$ with a non-zero off-diagonal element and its row index $i$, and move the corresponding mass onto the diagonal; the diagonal assignment would then achieve a larger value than the assumed optimum, which is a contradiction. With the diagonal assignment, each term can be bounded using the norm bound $c$ and the eigenvalue bound $\lambda$, and accumulating the bounds over all terms completes the proof.
Theorem 2 illustrates that minimizing the difference between the partial Gram matrices of the teacher and the student defined by landmark points can transfer the original Gram matrix from the teacher model effectively. Note that the partial matrices have size $N \times m$, where $m$ is the number of landmark points. When $m \ll N$, the landmark points $\{\mu_S^j\}$ and $\{\mu_T^j\}$ can be kept in GPU memory as parameters of the loss function for optimization, which is much more efficient than transferring the original full Gram matrix. In addition, the simplified formulation depends on the eigenvalues induced by the landmark points. Therefore, an appropriate selection is crucial for efficient transfer. We elaborate on our strategy in the next subsection.
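The following sketch (tensor names and shapes are assumptions, not the released implementation) shows how the partial-matrix transfer suggested by Theorem 2 can be evaluated on a mini-batch: with the landmark points kept in memory, each batch only needs the $b \times m$ block of similarities between its examples and the landmarks.

```python
# Transfer loss on the b x m blocks of the partial Gram matrices.
import torch


def partial_gram_loss(batch_feats_s: torch.Tensor, batch_feats_t: torch.Tensor,
                      landmarks_s: torch.Tensor, landmarks_t: torch.Tensor) -> torch.Tensor:
    """batch_feats_s: (b, d_s), batch_feats_t: (b, d_t); landmarks_s: (m, d_s), landmarks_t: (m, d_t)."""
    kbar_s = batch_feats_s @ landmarks_s.t()   # b x m block of the student partial matrix
    kbar_t = batch_feats_t @ landmarks_t.t()   # b x m block of the teacher partial matrix
    return ((kbar_s - kbar_t) ** 2).sum()      # squared Frobenius distance on this block
```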
3.3 Landmark Selection
We consider a strategy that obtains a single landmark point for each class, which means $m = C$ for a $C$-class classification problem. The desired landmark points should capture the distribution of examples well while keeping sufficient diversity among themselves, so that the eigenvalues of $W$ remain large. With the setting that each class has one landmark point, we can derive the selection criterion as follows.
Theorem 3
Given an arbitrary pair $(x_i, x_j)$, let $\mu_S(i)$ and $\mu_T(i)$ denote the corresponding landmark points for the $i$-th example in the spaces of the student and teacher networks, respectively. Assuming the norms of the features and landmark points are bounded by $c$, the loss in Eqn. (3.1) can be bounded, up to a constant factor depending on $c$, by
$$\underbrace{\sum_i \Big(\|\phi_S(x_i) - \mu_S(i)\|_2^2 + \|\phi_T(x_i) - \mu_T(i)\|_2^2\Big)}_{A} \; + \; \underbrace{\sum_{i,j} \Big(\mu_S(i)^\top \mu_S(j) - \mu_T(i)^\top \mu_T(j)\Big)^2}_{B}. \qquad (3.2)$$
Proof. For an arbitrary pair $(x_i, x_j)$, we decompose the difference $\phi_S(x_i)^\top \phi_S(x_j) - \phi_T(x_i)^\top \phi_T(x_j)$ by adding and subtracting the terms defined with the corresponding landmark points, and bound each resulting term with the norm bound $c$. We obtain the result by accumulating the bounds over all pairs.
According to Theorem 3, the transfer loss comes from two aspects. Term $A$ in Eqn. (3.2) contains the distance from each example to its corresponding landmark point. Since the corresponding landmark point for $x_i$ can be obtained by $\mu(i) = \arg\min_{\mu^j} \|\phi(x_i) - \mu^j\|_2^2$, we can rewrite the problem of minimizing this term as
$$\min_{\{\mu^j\}} \sum_i \min_j \|\phi(x_i) - \mu^j\|_2^2.$$
Evidently, this objective is a standard clustering problem. Unlike the conventional Nyström method, which typically operates in an unsupervised setting, we can access the label information in knowledge distillation. When we set the number of clusters to the number of classes, the landmark point becomes the center of each class and can be computed by averaging the examples within the class as
$$\mu^c = \frac{1}{N_c} \sum_{i: y_i = c} \phi(x_i), \qquad (3.3)$$
where $y_i$ is the class label of the $i$-th example and $N_c$ denotes the number of examples in the $c$-th class. In addition, class centers preserve the diversity of the original examples, which leads to the large eigenvalues required by Theorem 2. We also verify this assumption in the experiments.
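A minimal sketch of Eqn. (3.3) is given below (the tensor names are assumptions; in the described pipeline the features are accumulated over the previous epoch): the landmark point of each class is simply the mean feature of the examples in that class.

```python
# Class centers as landmark points, one per class (Eqn. (3.3)).
import torch


def class_centers(feats: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """feats: (N, d) features, labels: (N,) integer class labels in [0, num_classes)."""
    centers = torch.zeros(num_classes, feats.shape[1], device=feats.device)
    counts = torch.zeros(num_classes, device=feats.device)
    centers.index_add_(0, labels, feats)                                      # sum features per class
    counts.index_add_(0, labels, torch.ones_like(labels, dtype=feats.dtype))  # count examples per class
    return centers / counts.clamp(min=1).unsqueeze(1)                         # average within each class
```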
Term $B$ in Eqn. (3.2), in fact, measures the difference between the student and teacher Gram matrices defined by the landmark points, which is consistent with Theorem 2.
With the landmark points $\{\mu_S^j\}$ and $\{\mu_T^j\}$ obtained by optimizing term $A$, we can formulate the problem of Knowledge Distillation by transferring the Approximated Gram matrix (KDA) as
$$\mathcal{L}_{\mathrm{KDA}} = \sum_{i=1}^{N} \sum_{j=1}^{m} \ell\big(\bar{K}_S(i, j), \bar{K}_T(i, j)\big), \qquad (3.4)$$
where $\bar{K}_S(i, j) = \phi_S(x_i)^\top \mu_S^j$ and $\bar{K}_T(i, j) = \phi_T(x_i)^\top \mu_T^j$, and we adopt a smoothed loss $\ell$ (e.g., the smooth-$\ell_1$ loss) for stable optimization:
$$\ell(a, b) = \begin{cases} \frac{1}{2}(a - b)^2, & |a - b| \le 1, \\ |a - b| - \frac{1}{2}, & \text{otherwise.} \end{cases}$$
With the KDA loss, we propose a novel knowledge distillation algorithm that works in an alternating manner. At the beginning of each epoch, we first compute the landmark points by Eqn. (3.3) with the features of examples accumulated from the last epoch. Then, the KDA loss defined by the fixed landmark points in Eqn. (3.4) is optimized along with the standard cross-entropy loss for the student. The proposed algorithm is summarized in Alg. 1. Since at least one epoch is spent on collecting features for computing the landmark points, we start minimizing the KDA loss only after a small number of warm-up epochs of training.
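To make the alternating procedure concrete, here is a hedged sketch of one training step with the KDA loss in Eqn. (3.4). The network interfaces, the weight `lambda_kda`, and the use of the smooth-$\ell_1$ loss are illustrative assumptions; the landmark points are assumed to have been computed from the previous epoch and kept fixed.

```python
# One training step combining the cross-entropy loss and the KDA loss.
import torch
import torch.nn.functional as F


def kda_training_step(student, teacher, images, labels,
                      landmarks_s, landmarks_t, lambda_kda=1.0):
    feats_s, logits_s = student(images)        # assumed to return (features, logits)
    with torch.no_grad():
        feats_t, _ = teacher(images)           # the teacher is frozen
    kbar_s = feats_s @ landmarks_s.t()         # b x m partial Gram matrix (student)
    kbar_t = feats_t @ landmarks_t.t()         # b x m partial Gram matrix (teacher)
    kda = F.smooth_l1_loss(kbar_s, kbar_t)     # smoothed transfer loss
    ce = F.cross_entropy(logits_s, labels)     # standard classification loss
    return ce + lambda_kda * kda
```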
3.4 Connection to Conventional KD
In the conventional KD method [7], only the outputs from the last layer of the teacher model are adopted for the student. By setting an appropriate temperature, [7] illustrates that the loss function for KD can be written as
$$\mathcal{L}_{\mathrm{KD}} = \sum_i \|z_S(x_i) - z_T(x_i)\|_2^2,$$
where $z_S$ and $z_T$ denote the logits before the SoftMax operator. With the identity matrix $I$ as the set of landmark points, the equivalent formulation is
$$\mathcal{L}_{\mathrm{KD}} = \sum_i \|I^\top z_S(x_i) - I^\top z_T(x_i)\|_2^2.$$
Compared with the KDA loss in Eqn. (3.4), the conventional KD can be considered as applying one-hot label vectors as landmark points to transfer the Gram matrix of the teacher network. However, it lacks the constraint on the closeness between each example and its corresponding landmark point illustrated in Theorem 3, which may degrade the performance of knowledge distillation.
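The equivalence described above can be sketched as follows (an illustration under the assumption that the comparison is made in the logit space): with one-hot label vectors, i.e., the identity matrix, as landmark points, the partial Gram matrix of each network is just its logit matrix, so the transfer loss reduces to matching logits between the student and the teacher.

```python
# Conventional KD viewed as partial Gram matrix transfer with one-hot landmarks.
import torch


def kd_as_one_hot_landmarks(logits_s: torch.Tensor, logits_t: torch.Tensor) -> torch.Tensor:
    """logits_s, logits_t: (b, C) logits before the SoftMax operator."""
    landmarks = torch.eye(logits_s.shape[1], device=logits_s.device)  # one-hot landmark points
    kbar_s = logits_s @ landmarks.t()       # equals logits_s
    kbar_t = logits_t @ landmarks.t()       # equals logits_t
    return ((kbar_s - kbar_t) ** 2).sum()   # matching logits between student and teacher
```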
4 Experiments
We conduct experiments on two benchmark data sets to illustrate the effectiveness of the proposed KDA algorithm. We employ ResNet-34 (denoted as R34) [6] as the teacher network. ResNet-18 (denoted as R18), ResNet-18-0.5 (denoted as R18-0.5), and ShuffleNetV2 (denoted as SN) [11] are adopted as student networks, where ResNet-18-0.5 denotes ResNet-18 with half the number of channels. We apply standard stochastic gradient descent (SGD) with momentum to train the networks, and the mini-batch size, momentum, and weight decay are kept the same in all experiments. The student models are trained with a cosine learning-rate decay and a warm-up period of a few epochs. All experiments are conducted on two V100 GPUs.
Three baseline knowledge distillation methods are included in the comparison:
• KD [7]: a conventional knowledge distillation method that constrains the KL-divergence between the output label distributions of the student and teacher networks.
• AT [23]: a method that transfers the information from intermediate layers to accelerate the training of the student network.
• RKD [13]: a recent representative method that regularizes the similarity matrices within a mini-batch between the student and teacher networks. It adopts the same similarity function as our method. Although different ways have been used to calculate the similarity (e.g., L2 distance in [10, 13], normalized inner product in [20], and an RBF kernel in [14]), they provide similar performance.
Every algorithm minimizes the combined loss of its distillation term and the standard cross-entropy loss for classification. For RKD, we transfer the features before the last fully-connected (FC) layer for comparison. Note that AT transfers the attention map of the teacher, so we adopt the feature before the last pooling layer for its distillation. Besides, we let “Baseline” denote the method that trains the student without information from the teacher. Our method is referred to as “KDA”. We search for the best parameters for all methods in the comparison and keep the same parameters across different experiments.
4.1 Ablation Study
We perform the ablation study on CIFAR-100 [8]. This data set contains 100 classes, where each class has 500 images for training and 100 for test. Each image is a color image of size 32×32.
In this subsection, we set ResNet-34 as the teacher and ResNet-18 as the student. During training, each image is first padded to 40×40, and then a 32×32 image is randomly cropped from it. Besides, random horizontal flipping is also adopted for data augmentation.
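A typical implementation of this augmentation pipeline is sketched below (the padding size and the normalization statistics are common CIFAR-100 defaults rather than values stated in this paper):

```python
# CIFAR-100 training augmentation: pad, random crop, random horizontal flip.
import torchvision.transforms as T

train_transform = T.Compose([
    T.Pad(4),                          # pad the 32x32 image to 40x40
    T.RandomCrop(32),                  # randomly crop a 32x32 region
    T.RandomHorizontalFlip(),          # random horizontal flipping
    T.ToTensor(),
    T.Normalize(mean=(0.5071, 0.4865, 0.4409),   # commonly used CIFAR-100 statistics
                std=(0.2673, 0.2564, 0.2762)),
])
```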
4.1.1 Effect of Landmark Points
First, we evaluate the strategy for generating landmark points. As illustrated in Corollary 1, randomly selected landmark points can already achieve good performance. Therefore, we compare KDA with class centers to KDA with random landmark points in Fig. 1. In this experiment, we adopt the features before the last FC layer for transfer.
Figure 1: Comparison between KDA with class centers and KDA with random landmark points on CIFAR-100. (a) Overall comparison; (b) zoom in.
From Fig. 1, we can observe that, with landmark points, both variants of KDA perform significantly better than the baseline. This demonstrates that the Gram matrix is informative for training student models and that transferring the full matrix from the teacher helps improve the performance of the student. In addition, KDA with class centers as landmark points shows the best performance among the compared methods, which confirms the criterion suggested in Theorem 3. We will use class centers as landmark points in the remaining experiments.
4.1.2 Effect of Full Matrix Transfer
Then, we compare the difference between the Gram matrices of a teacher and its student models. The performance of transferring a Gram matrix is measured by $\|K_S - K_T\|_F^2 / \|K_T\|_F^2$, which calculates the fraction of information that has not been transferred. We investigate features from two layers in the comparison: the one before the last FC layer and the one after the FC layer. The transfer performance of the two layers is illustrated in Fig. 2 (a) and (b), respectively.
Figure 2: Transfer loss of different methods on CIFAR-100. (a) Layer before the FC layer; (b) layer after the FC layer.
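The metric used in this comparison can be computed as in the following sketch (an illustration under the assumption that the transfer loss is the squared Frobenius distance normalized by the norm of the teacher's Gram matrix):

```python
# Fraction of the teacher's Gram matrix that is not matched by the student.
import torch


def normalized_transfer_loss(feats_s: torch.Tensor, feats_t: torch.Tensor) -> torch.Tensor:
    """feats_s: (N, d_s) student features, feats_t: (N, d_t) teacher features."""
    k_s = feats_s @ feats_s.t()                       # student Gram matrix
    k_t = feats_t @ feats_t.t()                       # teacher Gram matrix
    return ((k_s - k_t) ** 2).sum() / (k_t ** 2).sum()
```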
Fig. 2 (a) compares the performance of kernel transfer. First, it is obvious that both RKD and KDA are better than the baseline, which indicates that minimizing the difference between Gram matrices can effectively transfer appropriate information from the teacher. Second, RKD transfers the similarity matrix defined only by examples within a mini-batch and shows a larger transfer loss than KDA. Considering the massive number of pairs, optimizing over all of them in RKD is intractable: the number of pairs can be up to $N^2$ while the number of pairs in a mini-batch is only $b^2$, where $b$ is the size of a mini-batch, so visiting all pairs even once requires at least on the order of $N/b$ epochs.
On the contrary, the loss from KDA is only a small fraction of that from RKD. KDA optimizes the partial Gram matrix with landmark points, and the total number of pairs is linear in the number of original examples. Due to the small number of landmark points, the partial matrix is much more compact than the original one. For example, there are 50,000 training examples in CIFAR-100; when 100 landmark points (one per class) are applied for distillation, the partial matrix contains only about 0.2% of the terms of the original one. Besides, since we keep class centers in memory as parameters of the loss function, the full Gram matrix can be approximated within a single epoch. Therefore, SGD can optimize the KDA loss sufficiently.
Then, we compare the performance of transfer after the last FC layer, as shown in Fig. 2 (b). For KDA, we compute the Gram matrix with the features before the SoftMax operator. From the comparison, we can observe that both KD and KDA have a much smaller transfer loss than the baseline. As illustrated in the discussion of “Connection to Conventional KD”, conventional KD is equivalent to transferring the partial Gram matrix with one-hot landmark points. Therefore, it can reduce the difference between teacher and student effectively. However, the landmark points adopted by KD fail to satisfy the property illustrated in Theorem 3. By adopting class centers as landmark points, KDA further reduces the transfer loss compared with KD, which confirms the effectiveness of transferring the full Gram matrix with appropriate landmark points.
Figure 3: Ground-truth transfer loss and estimated transfer loss during training on CIFAR-100.
Finally, we demonstrate that the difference between the partial Gram matrices is closely correlated with that between the full Gram matrices, as suggested by Theorem 2. Features from the layer before the last FC layer are adopted for evaluation. Fig. 3 illustrates how the ground-truth transfer loss $\|K_S - K_T\|_F^2$ and the estimated transfer loss $\|\bar{K}_S - \bar{K}_T\|_F^2$ change during training (the values are rescaled for better visualization). Evidently, minimizing the estimated loss reduces the gap between the original Gram matrices effectively, which is consistent with our theoretical analysis. Note that Theorem 2 assumes that the smallest eigenvalues of $W_S$ and $W_T$ are bounded away from zero. Since we adopt class centers as landmark points, $W_S$ and $W_T$ can be full-rank matrices. We show their smallest eigenvalues in Fig. 4: the smallest eigenvalue is clearly larger than 10, which is consistent with our assumption.
Figure 4: The smallest eigenvalues of $W_S$ and $W_T$ during training.
4.1.3 Effect of the Matrix $W$
Corollary 1 implies a variant that uses the standard Nyström method, including the matrix $W$, for transferring, while our proposal ignores $W$ and optimizes only the difference between $\bar{K}_S$ and $\bar{K}_T$ for efficiency. We compare our method to the variant with $W$ in Fig. 5, where features before the last FC layer are adopted for transfer. During the experiment, we adopt the configuration of the variant with $W$ that provides the better performance. We can observe that our method without $W$ performs similarly to the one with $W$, which further supports our analysis in Theorem 2.
Figure 5: Comparison between KDA and the variant with the matrix $W$ on CIFAR-100.
4.1.4 Effect of Centers per Class
When assigning landmark points, we set their number to be that of the classes, which avoids clustering in the implementation. This also means that each class has a single landmark point. The number of landmark points per class can easily be increased by clustering. We compare the variant with two centers per class in Fig. 6, where features before the last FC layer are adopted for comparison.
Figure 6: Comparison between one and two landmark points per class on CIFAR-100.
We can observe that using more centers for each class does not improve the performance significantly. This may be because the feature space is optimized with the cross-entropy loss. As illustrated in [15], the cross-entropy loss pushes all examples from the same class toward a single center. Therefore, assigning one landmark point to each class is an appropriate setting, which also simplifies the algorithm and improves the efficiency. We use one center per class in the following experiments.
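For reference, the multi-center variant can be implemented by clustering the features within each class, as in the following sketch (an assumed implementation, not the released code):

```python
# Multiple landmark points per class via k-means clustering within each class.
import numpy as np
from sklearn.cluster import KMeans


def per_class_centers(feats: np.ndarray, labels: np.ndarray, centers_per_class: int) -> np.ndarray:
    """feats: (N, d) features, labels: (N,) class labels; returns the stacked landmark points."""
    landmarks = []
    for c in np.unique(labels):
        class_feats = feats[labels == c]
        k = min(centers_per_class, len(class_feats))            # guard against tiny classes
        km = KMeans(n_clusters=k, n_init=10).fit(class_feats)   # cluster within the class
        landmarks.append(km.cluster_centers_)
    return np.vstack(landmarks)
```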
Table 1: Accuracy (%) of transferring the Gram matrix from different layers on CIFAR-100 (S: ResNet-18 student without distillation; T: ResNet-34 teacher).

S | conv3_x | conv4_x | Before FC | After FC | T
--- | --- | --- | --- | --- | ---
77.2 | 77.7 | 78.1 | 79.6 | 79.4 | 80.3
Table 2: Comparison of accuracy (%) on CIFAR-100. “S” denotes the student trained without a teacher and “T” the teacher. AT, RKD, and KDA in the middle columns transfer the layer before the last FC layer; KD and KDA in the following columns transfer the layer after the FC layer; “Combo” transfers both.

T | S | S | AT | RKD | KDA | KD | KDA | Combo | T
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
R34 | R18 | 77.2 | 78.1±0.3 | 78.3±0.2 | 79.6±0.1 | 78.8±0.2 | 79.4±0.1 | 79.7±0.1 | 80.3
R34 | R18-0.5 | 73.5 | 75.0±0.1 | 74.3±0.3 | 75.6±0.3 | 74.8±0.2 | 75.3±0.2 | 75.9±0.2 | 80.3
R34 | SN | 71.7 | 73.0±0.1 | 72.5±0.1 | 74.0±0.3 | 72.9±0.1 | 73.6±0.1 | 74.2±0.3 | 80.3
Table 3: Comparison of accuracy (%) on Tiny-ImageNet with the same column layout as Table 2.

T | S | S | AT | RKD | KDA | KD | KDA | Combo | T
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
R34 | R18 | 63.4 | 64.4±0.1 | 63.9±0.2 | 65.2±0.3 | 64.9±0.2 | 65.4±0.1 | 65.5±0.1 | 66.6
R34 | R18-0.5 | 60.3 | 61.0±0.1 | 60.6±0.2 | 61.7±0.3 | 61.3±0.1 | 61.9±0.2 | 62.2±0.2 | 66.6
R34 | SN | 60.6 | 61.3±0.1 | 61.2±0.2 | 62.0±0.2 | 61.5±0.1 | 62.3±0.2 | 62.4±0.1 | 66.6
4.1.5 Effect of Different Layers
Now, we illustrate the performance of transferring the Gram matrix from different layers. ResNet consists of several convolutional layer groups, and we compare the performance of the last three groups (i.e., “conv3_x”, “conv4_x” and “conv5_x”) and the layer after the last FC layer. The definition of the groups can be found in [6]. For each group, we adopt its last layer for transfer. Before transfer, we add a pooling layer to reduce the dimension of the feature map. Note that after pooling, the last layer of “conv5_x” becomes the layer before the last FC layer.
Table 1 shows the performance of transferring information from different layers. First, transferring information from the teacher always improves the performance of the student, which demonstrates the effectiveness of knowledge distillation. Besides, the information from later layers is more helpful for training the student, because later layers contain more semantic information that is closely related to the target task. We will focus on the layers before and after the FC layer in the remaining experiments.
4.2 CIFAR-100
In this subsection, we compare the proposed KDA method to the baseline methods on CIFAR-100. The results of the different methods can be found in Table 2. All experiments are repeated multiple times, and the average results with standard deviations are reported.
First, knowledge distillation methods outperform training the student network without a teacher, which shows that knowledge distillation can improve the performance of student models significantly. By transferring the full Gram matrix before the last FC layer, KDA surpasses RKD by 1.3% when ResNet-34 is the teacher and ResNet-18 is the student. The observation is consistent with the comparison in the ablation study, which confirms that the distance between Gram matrices is an appropriate metric to evaluate the transfer. Moreover, KDA shows a significant improvement on different student networks, which further demonstrates the applicability of the proposed method.
Furthermore, when transferring the Gram matrix after the last FC layer, both KD and KDA demonstrate good performance compared with the baseline student model. This is because both methods transfer the Gram matrix with landmark points, which is efficient to optimize. Besides, KDA further improves the performance compared with KD. The superior performance of KDA demonstrates the effectiveness of the proposed strategy for generating appropriate landmark points.
Finally, compared with the benchmark methods, KDA can distill the information from different layers with the same formulation in Eqn. (3.4). The proposed method thus provides a systematic perspective to understand the family of knowledge distillation methods that aim to transfer the Gram matrix. When the Gram matrices before and after the FC layer are transferred simultaneously, the performance of KDA can be slightly improved, as illustrated by “Combo” in Table 2.
4.3 Tiny-ImageNet
Then, we compare the different methods on the Tiny-ImageNet data set (https://tiny-imagenet.herokuapp.com). There are 200 classes in this data set, and each class provides 500 images for training and 50 for validation. We report the performance on the validation set. Since the image size in Tiny-ImageNet is 64×64, which is larger than that of CIFAR-100, we replace the random crop augmentation with a more aggressive version as in [6], and keep the other settings the same.
Table 3 summarizes the comparison. We observe results similar to those on CIFAR-100. First, all methods that use information from a teacher model surpass the student model without a teacher by a large margin. Second, KDA outperforms the other methods regardless of the layer where the transfer happens. Note that CIFAR-100 and Tiny-ImageNet have very different images, which demonstrates the applicability of the proposed algorithm to different real-world applications.
5 Conclusions
In this work, we investigate the knowledge distillation problem from the perspective of kernel matrix transfer. Considering that the number of terms in the full kernel matrix is quadratic in the number of training examples, we extend the Nyström method and propose a strategy to obtain the landmark points for efficient transfer. The proposed method not only improves the efficiency of transferring the kernel matrix but also comes with a theoretical guarantee for its efficacy. Experiments on the benchmark data sets verify the effectiveness of the proposed algorithm.
Besides the similarity function applied in this work, many other, more complicated functions are adopted by other methods. Combining the proposed algorithm with different similarity functions for distillation is left for future work.
References
- [1] S. Ahn, S. X. Hu, A. Damianou, N. D. Lawrence, and Z. Dai, Variational information distillation for knowledge transfer, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 9163–9171.
- [2] G. Chen, W. Choi, X. Yu, T. X. Han, and M. Chandraker, Learning efficient object detection models with knowledge distillation, in Advances in Neural Information Processing Systems 30, 2017, pp. 742–751.
- [3] Y. Chen, N. Wang, and Z. Zhang, Darkrank: Accelerating deep metric learning via cross sample similarities transfer, in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
- [4] M. Courbariaux, Y. Bengio, and J. David, Binaryconnect: Training deep neural networks with binary weights during propagations, in Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, eds., 2015, pp. 3123–3131.
- [5] P. Drineas, M. W. Mahoney, and N. Cristianini, On the Nyström method for approximating a Gram matrix for improved kernel-based learning, JMLR, 6 (2005).
- [6] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- [7] G. Hinton, O. Vinyals, and J. Dean, Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531, (2015).
- [8] A. Krizhevsky, G. Hinton, et al., Learning multiple layers of features from tiny images, (2009).
- [9] S. Kumar, M. Mohri, and A. Talwalkar, Sampling methods for the nyström method, JMLR, 13 (2012), pp. 981–1006.
- [10] Y. Liu, J. Cao, B. Li, C. Yuan, W. Hu, Y. Li, and Y. Duan, Knowledge distillation via instance relationship graph, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7096–7104.
- [11] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, Shufflenet v2: Practical guidelines for efficient cnn architecture design, in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 116–131.
- [12] S. I. Mirzadeh, M. Farajtabar, A. Li, N. Levine, A. Matsukawa, and H. Ghasemzadeh, Improved knowledge distillation via teacher assistant, in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, 2020, pp. 5191–5198.
- [13] W. Park, D. Kim, Y. Lu, and M. Cho, Relational knowledge distillation, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3967–3976.
- [14] B. Peng, X. Jin, J. Liu, D. Li, Y. Wu, Y. Liu, S. Zhou, and Z. Zhang, Correlation congruence for knowledge distillation, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 5007–5016.
- [15] Q. Qian, L. Shang, B. Sun, J. Hu, H. Li, and R. Jin, Softtriple loss: Deep metric learning without triplet sampling, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6450–6458.
- [16] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, Fitnets: Hints for thin deep nets, in Proceedings of the 3rd International Conference on Learning Representations, 2015.
- [17] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, Mobilenetv2: Inverted residuals and linear bottlenecks, in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 4510–4520.
- [18] S. Srinivas and F. Fleuret, Knowledge transfer with jacobian matching, in International Conference on Machine Learning, PMLR, 2018, pp. 4723–4731.
- [19] Y. Tian, D. Krishnan, and P. Isola, Contrastive representation distillation, in Proceedings of the 8th International Conference on Learning Representations, 2020.
- [20] F. Tung and G. Mori, Similarity-preserving knowledge distillation, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1365–1374.
- [21] C. K. I. Williams and M. W. Seeger, Using the nyström method to speed up kernel machines, in Advances in Neural Information Processing Systems 13, 2000, pp. 682–688.
- [22] J. Yim, D. Joo, J. Bae, and J. Kim, A gift from knowledge distillation: Fast optimization, network minimization and transfer learning, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4133–4141.
- [23] S. Zagoruyko and N. Komodakis, Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer, in Proceedings of the 5th International Conference on Learning Representations, 2017.
- [24] K. Zhang, I. W. Tsang, and J. T. Kwok, Improved nyström low-rank approximation and error analysis, in Proceedings of the 25th international conference on Machine learning, 2008, pp. 1232–1239.