[email protected] (Z. Xu), [email protected] (J. Zhang)
Frequency Principle in Deep Learning Beyond Gradient-descent-based Training
Abstract
The frequency perspective has recently led to progress in understanding deep learning. It has been widely verified, in both empirical and theoretical studies, that deep neural networks (DNNs) often fit the target function from low to high frequency, a phenomenon named the Frequency Principle (F-Principle). The F-Principle sheds light on the strengths and weaknesses of DNNs and has inspired a series of subsequent works, including theoretical studies, empirical studies, and the design of efficient DNN structures. Previous works examine the F-Principle under gradient-descent-based training. It remains unclear whether gradient-descent-based training is a necessary condition for the F-Principle. In this paper, we show that the F-Principle exists stably in the training process of DNNs with non-gradient-descent-based training, including optimization algorithms that use gradient information, such as conjugate gradient and BFGS, and algorithms that use no gradient information, such as Powell’s method and Particle Swarm Optimization. These empirical studies show the universality of the F-Principle and provide hints for its further study.
keywords:
Deep learning, Frequency principle, non-gradient-descent

1 Introduction
Understanding deep neural networks (DNNs) is an important problem in modern machine learning, since deep learning has permeated many aspects of daily life and important industries. Recent studies find that DNNs trained with gradient-descent-based algorithms often follow the Frequency Principle (F-Principle) proposed in Xu et al. (2019a, b) and Rahaman et al. (2019), namely,
- DNNs tend to learn target functions from low frequency to high frequency during training.
Theoretical studies subsequently show that the frequency principle holds in the general setting with infinite samples (Luo et al., 2019) and in the regime of wide neural networks (the Neural Tangent Kernel (NTK) regime (Jacot et al., 2018)) with finite samples (Zhang et al., 2019; Luo et al., 2020b, a) or sufficiently many samples (Cao et al., 2019; Yang & Salman, 2019; Ronen et al., 2019; Bordelon et al., 2020). E et al. (2019) show that the integral equation would naturally lead to the frequency principle. With this theoretical understanding, the frequency principle has inspired the design of DNN-based algorithms (Liu et al., 2020; Wang et al., 2020a; You et al., 2020; Jagtap et al., 2020; Cai et al., 2019; Biland et al., 2019; Li et al., 2020). In addition, the F-Principle provides a mechanism for understanding many phenomena in applications and inspires a series of studies on deep learning from the frequency perspective, such as the generalization of DNNs (Xu et al., 2019b; Ma et al., 2020), the effect of depth in DNNs (Xu & Zhou, 2020), and the difference between traditional algorithms and DNN-based algorithms in solving PDEs (Wang et al., 2020b).
All previous studies of the F-Principle consider gradient-descent-based training. It is still unclear whether gradient-descent-based training is a necessary condition for the F-Principle in the DNN training process. Previous studies of the F-Principle in finite-width networks (Xu et al., 2019b; Luo et al., 2019) or infinite-width networks (Zhang et al., 2019; Ronen et al., 2019; Yang & Salman, 2019; Cao et al., 2019) are all based on the gradient flow of the training. However, deep neural networks can be trained by many optimization algorithms other than (stochastic) gradient descent. In this work, we use numerical experiments to show that the F-Principle holds stably in the DNN training process with non-gradient-descent algorithms, such as conjugate gradient and BFGS. We further show that the F-Principle can also exist in the DNN training process with optimization algorithms that use no gradient information in each iteration step, such as Powell’s method and Particle Swarm Optimization. To further show the universality of the F-Principle, we design a Monte-Carlo-like optimization algorithm that randomly selects parameters that decrease the loss function. During training with this Monte-Carlo-like algorithm, we find that the F-Principle still holds well.
The rest of the paper is organized as follows. Section 2 introduces experimental details. Section 3 shows that gradient descent is not necessary for the F-Principle. Section 4 shows that gradient information during training is not necessary for the F-Principle. A short discussion and conclusion are given in Section 5.
2 Experimental details
To examine the F-Principle, one needs to distinguish the low- and high-frequency parts of the dataset. In the following, we introduce the discrete Fourier transform for 1-d synthetic data and a filtering method for high-dimensional datasets, proposed in Xu et al. (2019b).
2.1 Discrete Fourier transforms in synthetic data
Following the suggested notation in BAAI (2020), we denote the target function by $f$ and the DNN output by $h$. For 1-d synthetic data $\{(x_j, f(x_j))\}_{j=0}^{n-1}$, we examine the relative error of each frequency during the training. The discrete Fourier transforms (DFT) of $f$ and of the DNN output (denoted by $\hat{f}(k)$ and $\hat{h}(k)$, respectively) are computed by

$$\hat{f}(k)=\frac{1}{n}\sum_{j=0}^{n-1} f(x_j)\,\mathrm{e}^{-\mathrm{i}2\pi jk/n},\qquad \hat{h}(k)=\frac{1}{n}\sum_{j=0}^{n-1} h(x_j)\,\mathrm{e}^{-\mathrm{i}2\pi jk/n},$$

where $k$ is the frequency. We compute the relative difference between the DNN output and the target function for each frequency at each training epoch, that is, $\Delta_F(k)=|\hat{h}(k)-\hat{f}(k)|/|\hat{f}(k)|$, where $|\cdot|$ denotes the norm of a complex number.
The F-Principle holds when $\Delta_F(k)$ converges to zero one frequency after another during training, from low-frequency components to high-frequency components. For the 1-d experiments in the following sections, we use 201 evenly sampled training points, and the weights and biases of each layer are initialized from a distribution whose scale depends on $m_l$, the number of neurons of the $l$-th layer. The loss function is the mean squared error (MSE). The activation function for the fully-connected networks is the sigmoid function.
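To make the procedure concrete, the following is a minimal numpy sketch (not the authors' original script) of how $\Delta_F(k)$ can be computed from samples of the target function and of the DNN output; the three-peak target used here is only an illustrative assumption.

```python
import numpy as np

def relative_frequency_error(f_samples, h_samples):
    """Return Delta_F(k) = |F[h](k) - F[f](k)| / |F[f](k)| for each frequency k."""
    n = len(f_samples)
    f_hat = np.fft.fft(f_samples) / n     # DFT of the target function
    h_hat = np.fft.fft(h_samples) / n     # DFT of the DNN output
    # small constant avoids division by near-zero target amplitudes
    return np.abs(h_hat - f_hat) / (np.abs(f_hat) + 1e-12)

# Hypothetical usage with 201 evenly spaced samples of an assumed three-peak target:
x = np.linspace(-np.pi, np.pi, 201)
f = np.sin(x) + np.sin(3 * x) + np.sin(5 * x)   # illustrative target function
h = np.sin(x)                                   # stand-in for an early-training DNN output
delta = relative_frequency_error(f, h)
print(delta[:6])                                # relative errors of the lowest frequencies
```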
2.2 Real data
For high-dimensional data, it is difficult to compute the high-dimensional Fourier transform. We therefore use a filtering method proposed in Xu et al. (2019b) to examine the F-Principle on a real dataset (e.g., MNIST).
We train the DNN on the original dataset $\{(\mathbf{x}_i, \mathbf{f}_i)\}_{i=1}^{N}$, where $\mathbf{x}_i$ is an image vector and $\mathbf{f}_i$ is a one-hot label vector. At each training epoch, the low-frequency part can be derived by a low-frequency filter, that is, a convolution with a Gaussian function,

$$\mathbf{f}_i^{\mathrm{low},\delta}=\frac{1}{C_i}\sum_{j=1}^{N}\mathbf{f}_j\, G^{\delta}(\mathbf{x}_i-\mathbf{x}_j), \qquad (1)$$

where $C_i=\sum_{j=1}^{N} G^{\delta}(\mathbf{x}_i-\mathbf{x}_j)$ is a normalization factor and $\delta$ is the variance of the following Gaussian function,

$$G^{\delta}(\mathbf{x})=\exp\left(-\frac{|\mathbf{x}|^2}{2\delta}\right). \qquad (2)$$

The high-frequency part can be derived by $\mathbf{f}_i^{\mathrm{high},\delta}=\mathbf{f}_i-\mathbf{f}_i^{\mathrm{low},\delta}$. We also compute $\mathbf{h}_i^{\mathrm{low},\delta}$ and $\mathbf{h}_i^{\mathrm{high},\delta}$ for the DNN output $\mathbf{h}_i$.
To quantify the convergence of the low- and high-frequency parts, we compute the relative errors $e_{\mathrm{low}}$ and $e_{\mathrm{high}}$ at each training epoch,

$$e_{\mathrm{low}}=\left(\frac{\sum_{i}\big|\mathbf{f}_i^{\mathrm{low},\delta}-\mathbf{h}_i^{\mathrm{low},\delta}\big|^2}{\sum_{i}\big|\mathbf{f}_i^{\mathrm{low},\delta}\big|^2}\right)^{1/2}, \qquad (3)$$

$$e_{\mathrm{high}}=\left(\frac{\sum_{i}\big|\mathbf{f}_i^{\mathrm{high},\delta}-\mathbf{h}_i^{\mathrm{high},\delta}\big|^2}{\sum_{i}\big|\mathbf{f}_i^{\mathrm{high},\delta}\big|^2}\right)^{1/2}, \qquad (4)$$

where $\mathbf{h}_i^{\mathrm{low},\delta}$ and $\mathbf{h}_i^{\mathrm{high},\delta}$ are obtained from the DNN output, which evolves as a function of the training epoch, through the same decomposition. If $e_{\mathrm{low}}<e_{\mathrm{high}}$ for different $\delta$'s during the training, the F-Principle holds; otherwise, it is falsified.
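As an illustration, the following numpy sketch implements the Gaussian splitting of Eqs. (1)-(2) and the relative errors of Eqs. (3)-(4); the array shapes and the value of $\delta$ are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def gaussian_split(X, F, delta):
    """Split labels F at inputs X into (low, high) frequency parts, Eqs. (1)-(2)."""
    # pairwise squared distances |x_i - x_j|^2 between input points
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    G = np.exp(-d2 / (2.0 * delta))          # Gaussian weights G^delta(x_i - x_j)
    C = G.sum(axis=1, keepdims=True)         # normalization factors C_i
    F_low = (G @ F) / C                      # low-frequency part, Eq. (1)
    return F_low, F - F_low                  # high-frequency part is the remainder

def relative_errors(f_low, f_high, h_low, h_high):
    """Relative errors e_low and e_high of Eqs. (3)-(4)."""
    e_low = np.linalg.norm(h_low - f_low) / np.linalg.norm(f_low)
    e_high = np.linalg.norm(h_high - f_high) / np.linalg.norm(f_high)
    return e_low, e_high

# Hypothetical usage: X holds flattened images, F one-hot labels, H the DNN output.
X = np.random.rand(50, 784)
F = np.eye(10)[np.random.randint(0, 10, 50)]
H = np.random.rand(50, 10)
f_low, f_high = gaussian_split(X, F, delta=5.0)
h_low, h_high = gaussian_split(X, H, delta=5.0)
print(relative_errors(f_low, f_high, h_low, h_high))
```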
We use the MSE loss and a small sigmoid-CNN network, i.e., two convolutional layers (one convolutional layer with 32 filters of size 5×5, a 2×2 max pooling, one convolutional layer with 64 filters of size 5×5, a 2×2 max pooling), followed by a fully connected layer of size 1024-10 equipped with a softmax.
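For concreteness, a comparable architecture can be written, for example, in PyTorch as follows; this is only a sketch following the layer sizes described above, not the authors' original implementation.

```python
import torch
import torch.nn as nn

# A minimal sketch of a comparable sigmoid-CNN for 28x28 MNIST images.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5), nn.Sigmoid(),   # 28x28x1  -> 24x24x32
    nn.MaxPool2d(2),                                  # 24x24x32 -> 12x12x32
    nn.Conv2d(32, 64, kernel_size=5), nn.Sigmoid(),   # 12x12x32 -> 8x8x64
    nn.MaxPool2d(2),                                  # 8x8x64   -> 4x4x64
    nn.Flatten(),                                     # -> 1024
    nn.Linear(1024, 10), nn.Softmax(dim=1),
)
loss_fn = nn.MSELoss()                 # MSE between softmax output and one-hot labels

x = torch.randn(8, 1, 28, 28)          # dummy batch of MNIST-sized images
print(model(x).shape)                  # torch.Size([8, 10])
```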
Due to the memory constraints of some training algorithms, we only train on 550 randomly selected samples from the MNIST dataset, and we only perform experiments with the conjugate gradient algorithm and L-BFGS on this dataset. The other algorithms perform poorly on the high-dimensional MNIST data.
3 Gradient descent is not necessary for F-Principle
In this section, we examine the F-Principle in DNN training with algorithms that are not gradient descent but still use gradient information in each iteration step.
The algorithms used in this section are variants of Newton's method. Newton's method converges faster than gradient descent in terms of the number of iteration steps. However, due to the difficulty of keeping the Hessian matrix positive definite and the cost of computing the inverse of the Hessian matrix, the original Newton's method is rarely used for large-scale computations. Instead, the conjugate gradient algorithm (CG), the truncated Newton algorithm (TNC), BFGS (Nocedal & Wright, 2006) and its variant L-BFGS are popular in practical computations.
3.1 Conjugate Gradient algorithm
The conjugate gradient (CG) algorithm is a popular algorithm for solving nonlinear optimization problems. Its main features are that it requires no matrix storage and is faster than gradient descent. The details of the CG algorithm are given in the Appendix.
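As a concrete illustration of such training (an assumed setup, not the authors' code), the following sketch flattens the parameters of a 1-100-10-1 sigmoid network into a single vector and minimizes the MSE loss with scipy's CG optimizer; the DNN output can be recorded in the callback at each iteration for the frequency analysis of Section 2.1. The three-peak target used here is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

sizes = [1, 100, 10, 1]
shapes = [(m, n) for m, n in zip(sizes[:-1], sizes[1:])]
n_params = sum(m * n + n for m, n in shapes)

def unpack(theta):
    """Split the flat parameter vector into (weight, bias) pairs per layer."""
    params, i = [], 0
    for m, n in shapes:
        W = theta[i:i + m * n].reshape(m, n); i += m * n
        b = theta[i:i + n]; i += n
        params.append((W, b))
    return params

def forward(theta, x):
    """Sigmoid hidden layers, linear output layer."""
    a = x
    for k, (W, b) in enumerate(unpack(theta)):
        z = a @ W + b
        a = 1.0 / (1.0 + np.exp(-z)) if k < len(shapes) - 1 else z
    return a

x = np.linspace(-np.pi, np.pi, 201).reshape(-1, 1)
y = np.sin(x) + np.sin(3 * x) + np.sin(5 * x)        # assumed three-peak target

def loss(theta):
    return np.mean((forward(theta, x) - y) ** 2)

theta0 = np.random.randn(n_params) * 0.1
res = minimize(loss, theta0, method='CG',
               callback=lambda theta: None,          # hook: record forward(theta, x) here
               options={'maxiter': 200})
print(loss(res.x))
```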
We use the CG algorithm to train a 1-100-10-1 DNN to learn a target function with three frequency peaks, namely,
(5)
As shown in Figure 1, the DNN converges gradually from low frequency to high frequency in the Fourier analysis.

We also use the CG algorithm to train a convolutional network to fit MNIST. The F-Principle also holds in this training process; see Figure 2 for different variances $\delta$ of the Gaussian function.


3.2 Truncated Newton algorithm
The truncated Newton algorithm (TNC), also called the Newton conjugate-gradient method, is a nonlinear optimization method based on Newton's method. The CG method is designed to solve positive definite systems; however, the Hessian matrix may have negative eigenvalues, which can lead to an inaccurate solution. TNC is a Hessian-free optimization method (Nash, 1985). The details of the TNC algorithm are given in the Appendix.
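Assuming the hypothetical flattened-parameter `loss` and initialization `theta0` from the CG sketch in Section 3.1, switching the optimizer to TNC is a one-line change in scipy:

```python
from scipy.optimize import minimize

# `loss` and `theta0` are reused from the CG sketch in Section 3.1 (assumed setup).
res = minimize(loss, theta0, method='TNC', options={'maxfun': 2000})
```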
In this experiment, we use TNC to train a 1-100-10-1 DNN to learn a target function with two frequency peaks. Again, as shown in Figure 3, the DNN converges gradually from low frequency to high frequency.

We point out that, in our experiments, the results of training the DNN by TNC to regress the target function (5) are not perfect, which is caused by the fact that the search direction is not always a descent direction. This may also be influenced by the Hessian matrix, which may not remain positive definite during the training process. For the target function with two frequency peaks, the low-frequency components converge first and the function is regressed well.
3.3 BFGS and L-BFGS
In this subsection, we use BFGS and L-BFGS, which are quasi-Newton methods, to train neural networks. The details of the BFGS algorithm are given in the Appendix.
In Figure 4(a), we use BFGS to train a 1-100-10-1 DNN to learn the target function (5). Similarly, the DNN converges gradually from low frequency to high frequency.


L-BFGS is a limited-memory algorithm based on BFGS. This method only uses the data of the most recent steps, which reduces the computation and memory cost (Nocedal & Wright, 2006; Byrd et al., 1995). The details of the L-BFGS algorithm are given in the Appendix.
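With the same hypothetical `loss` and `theta0` from the CG sketch in Section 3.1, BFGS and L-BFGS training can likewise be run through scipy (scipy exposes L-BFGS via its bounded variant L-BFGS-B):

```python
from scipy.optimize import minimize

# `loss` and `theta0` are reused from the CG sketch in Section 3.1 (assumed setup).
res_bfgs = minimize(loss, theta0, method='BFGS', options={'maxiter': 200})
res_lbfgs = minimize(loss, theta0, method='L-BFGS-B', options={'maxiter': 200})
```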
In Figure 4(b), we use L-BFGS to train a 1-500-50-1 DNN to learn the target function (5). It is clear that the DNN converges gradually from low frequency to high frequency in this example.
We further use L-BFGS to train a CNN to learn MNIST. As shown in Fig. 5, for different filter widths, we still observe that the high-frequency part converges more slowly, that is, the F-Principle holds.


In this section, we have shown experimentally that training algorithms which use gradient information but are not gradient-descent methods can still lead to the F-Principle. In the next section, we show that the F-Principle can still hold even for training algorithms that do not use gradient information at all.
4 Gradient information is not necessary for F-Principle
In this section, we use two non-gradient-based optimization algorithms (i.e., Powell's method and Particle Swarm Optimization (PSO)) and a Monte-Carlo-like algorithm to examine the F-Principle.
4.1 Powell’s method
Powell's method, a conjugate-direction method, performs sequential one-dimensional minimization along each vector of a direction set, and the direction set is updated at each iteration (Powell, 1964). The method used here is a modification of Powell's method. The loss function may even be non-differentiable, since no derivative is taken. The details of Powell's method are given in the Appendix.
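Since Powell's method never queries the gradient, the hypothetical flattened-parameter `loss` and `theta0` from Section 3.1 can be minimized with scipy's derivative-free Powell optimizer, for example:

```python
from scipy.optimize import minimize

# `loss` and `theta0` are reused from the CG sketch in Section 3.1 (assumed setup).
# Only loss values are evaluated; no gradient is ever computed.
res = minimize(loss, theta0, method='Powell', options={'maxiter': 50})
```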
Powell's method is slow in solving large-scale optimization problems, such as training DNNs, since it needs a large amount of internal memory and computation. Therefore, we use Powell's method to train a small 1-100-1 DNN to learn a target function with two frequency peaks. As shown in Figure 6(a), which considers the two frequency peaks of the target function, the DNN converges the low-frequency component first.


4.2 Particle Swarm Optimization
Particle Swarm Optimization (PSO) performs a random search in the parameter space, inspired by the movement of bird flocks. It can be viewed as a mid-level form of artificial life or a biologically derived algorithm, and it highly depends on stochastic processes. The PSO algorithm is given in the Appendix.
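For reference, a minimal global-best PSO implementation over flattened network parameters could look as follows; this is a generic textbook variant, and the hyperparameters are illustrative, not necessarily those used in the experiments.

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, init_scale=0.5, seed=0):
    """Global-best PSO: minimize `loss` over a parameter vector of length `dim`."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(scale=init_scale, size=(n_particles, dim))   # particle positions
    vel = np.zeros_like(pos)                                      # particle velocities
    pbest = pos.copy()                                            # personal best positions
    pbest_val = np.array([loss(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()                    # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Usage with the hypothetical flattened-parameter `loss` from Section 3.1:
# theta_star = pso_minimize(loss, dim=n_params)
```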
In this experiment, we use PSO to train a 1-100-10-1 DNN to learn a target function with multiple frequency peaks. As shown in Figure 6(b), the DNN learns the low-frequency components first.
4.3 Monte-Carlo-Like algorithm
We find that the F-Principle also holds for the following Monte-Carlo-like algorithm: at each step, the next parameter vector is chosen at random from the set of parameter vectors whose loss is smaller than that of the current one, unless this set is empty. We train a 1-500-200-1 DNN to learn a target function with several frequency peaks. As shown in Figure 7, which considers the important frequency peaks of the target function, the DNN converges the low-frequency components first.
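A minimal sketch of such a Monte-Carlo-like search is given below; the proposal distribution, step size, and trial budget are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def monte_carlo_like(loss, theta0, n_accepts=500, n_trials=50, step=0.05, seed=0):
    """Randomly propose parameter vectors and accept one only if it lowers the loss."""
    rng = np.random.default_rng(seed)
    theta, best = theta0.copy(), loss(theta0)
    for _ in range(n_accepts):
        for _ in range(n_trials):                       # search for a loss-decreasing candidate
            candidate = theta + step * rng.standard_normal(theta.shape)
            val = loss(candidate)
            if val < best:                              # accept only if the loss decreases
                theta, best = candidate, val
                break
        else:
            break                                       # no improving candidate found; stop
    return theta

# Usage with the hypothetical `loss` and `theta0` from the sketch in Section 3.1:
# theta_star = monte_carlo_like(loss, theta0)
```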

5 Discussion and Conclusion
In this paper, we report the F-Principle in the training process of DNNs using several non-gradient-descent-based methods. These empirical studies significantly extend the current understanding of the F-Principle. For future work, it is worthwhile to study a mechanism of the F-Principle that is independent of gradient descent.
References
- BAAI (2020) BAAI. Suggested notation for machine learning. https://github.com/Mayuyu/suggested-notation-for-machine-learning, 2020.
- Biland et al. (2019) Simon Biland, Vinicius C Azevedo, Byungsoo Kim, and Barbara Solenthaler. Frequency-aware reconstruction of fluid simulations with generative networks. arXiv preprint arXiv:1912.08776, 2019.
- Bordelon et al. (2020) Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. arXiv preprint arXiv:2002.02561, 2020.
- Byrd et al. (1995) R H Byrd, P Lu, and J. Nocedal. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific and Statistical Computing, 16(5):1190–1208, 1995.
- Cai et al. (2019) Wei Cai, Xiaoguang Li, and Lizuo Liu. A phase shift deep neural network for high frequency wave equations in inhomogeneous media. arXiv preprint arXiv:1909.11759, 2019.
- Cao et al. (2019) Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, and Quanquan Gu. Towards understanding the spectral bias of deep learning. arXiv preprint arXiv:1912.01198, 2019.
- E et al. (2019) Weinan E, Chao Ma, and Lei Wu. Machine learning from a continuous viewpoint. arXiv preprint arXiv:1912.12777, 2019.
- Jacot et al. (2018) Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pp. 8571–8580, 2018.
- Jagtap et al. (2020) Ameya D Jagtap, Kenji Kawaguchi, and George Em Karniadakis. Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. Journal of Computational Physics, 404:109136, 2020.
- Li et al. (2020) Xi-An Li, Zhi-Qin John Xu, and Lei Zhang. A multi-scale dnn algorithm for nonlinear elliptic equations with multiple scales. Communications in Computational Physics, 28(5):1886–1906, 2020.
- Liu et al. (2020) Ziqi Liu, Wei Cai, and Zhi-Qin John Xu. Multi-scale deep neural network (mscalednn) for solving poisson-boltzmann equation in complex domains. Communications in Computational Physics, 28(5):1970–2001, 2020.
- Luo et al. (2019) Tao Luo, Zheng Ma, Zhi-Qin John Xu, and Yaoyu Zhang. Theory of the frequency principle for general deep neural networks. arXiv preprint arXiv:1906.09235, 2019.
- Luo et al. (2020a) Tao Luo, Zheng Ma, Zhiwei Wang, Zhi-Qin John Xu, and Yaoyu Zhang. Fourier-domain variational formulation and its well-posedness for supervised learning. arXiv preprint arXiv:2012.03238, 2020a.
- Luo et al. (2020b) Tao Luo, Zheng Ma, Zhi-Qin John Xu, and Yaoyu Zhang. On the exact computation of linear frequency principle dynamics and its generalization. arXiv preprint arXiv:2010.08153, 2020b.
- Ma et al. (2020) Chao Ma, Lei Wu, and E Weinan. The slow deterioration of the generalization error of the random feature model. In Mathematical and Scientific Machine Learning, pp. 373–389. PMLR, 2020.
- Nash (1985) Stephen G Nash. Preconditioning of truncated-newton methods. SIAM Journal on Scientific and Statistical Computing, 6(3):599–616, 1985.
- Nocedal & Wright (2006) Jorge Nocedal and Stephen Wright. Numerical optimization. Springer Science & Business Media, 2006.
- Powell (1964) Michael JD Powell. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The computer journal, 7(2):155–162, 1964.
- Rahaman et al. (2019) Nasim Rahaman, Devansh Arpit, Aristide Baratin, Felix Draxler, Min Lin, Fred A Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of deep neural networks. International Conference on Machine Learning, 2019.
- Ronen et al. (2019) Basri Ronen, David Jacobs, Yoni Kasten, and Shira Kritchman. The convergence rate of neural networks for learned functions of different frequencies. In Advances in Neural Information Processing Systems, pp. 4763–4772, 2019.
- Wang et al. (2020a) Feng Wang, Alberto Eljarrat, Johannes Müller, Trond R Henninen, Rolf Erni, and Christoph T Koch. Multi-resolution convolutional neural networks for inverse problems. Scientific reports, 10(1):1–11, 2020a.
- Wang et al. (2020b) Jihong Wang, Zhi-Qin John Xu, Jiwei Zhang, and Yaoyu Zhang. Implicit bias with ritz-galerkin method in understanding deep learning for solving pdes. arXiv preprint arXiv:2002.07989, 2020b.
- Xu et al. (2019a) Zhi-Qin J Xu, Yaoyu Zhang, and Yanyang Xiao. Training behavior of deep neural network in frequency domain. International Conference on Neural Information Processing, pp. 264–274, 2019a.
- Xu & Zhou (2020) Zhi-Qin John Xu and Hanxu Zhou. Deep frequency principle towards understanding why deeper learning is faster. arXiv preprint arXiv:2007.14313, AAAI-21, 2020.
- Xu et al. (2019b) Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao, and Zheng Ma. Frequency principle: Fourier analysis sheds light on deep neural networks. arXiv preprint arXiv:1901.06523, 2019b.
- Yang & Salman (2019) Greg Yang and Hadi Salman. A fine-grained spectral perspective on neural networks. arXiv preprint arXiv:1907.10599, 2019.
- You et al. (2020) Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Yingyan Lin, Zhangyang Wang, and Richard G Baraniuk. Drawing early-bird tickets: Towards more efficient training of deep networks. International Conference on Learning Representations, 2020.
- Zhang et al. (2019) Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, and Zheng Ma. Explicitizing an implicit bias of the frequency principle in two-layer neural networks. arXiv preprint arXiv:1905.10264, 2019.
Appendix A Gradient
For the evaluation of the gradient, we use the following difference scheme:

$$\frac{\partial L}{\partial \theta_j}(\theta) \approx \frac{L(\theta+\epsilon\, e_j)-L(\theta)}{\epsilon}, \qquad j=1,\ldots,n. \qquad (6)$$

Here, $e_j$ is the $j$-th unit vector, $n$ is the number of parameters, and $\epsilon$ is a small user-defined number, usually chosen according to the accuracy of the machine's floating-point arithmetic. In the following experiments, we set $\epsilon$ according to the machine floating-point precision.
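A direct numpy implementation of this forward-difference scheme could look as follows; the default $\epsilon$ shown is an assumed value of the order of the square root of machine precision.

```python
import numpy as np

def fd_gradient(loss, theta, eps=1.49e-8):
    """Forward-difference gradient of Eq. (6); `theta` is a 1-d float array.
    The default eps ~ sqrt(machine epsilon) is an assumed, not prescribed, value."""
    grad = np.zeros_like(theta)
    f0 = loss(theta)
    for j in range(theta.size):
        theta_j = theta.copy()
        theta_j[j] += eps                      # perturb one parameter at a time
        grad[j] = (loss(theta_j) - f0) / eps
    return grad
```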
Appendix B 1-d search
Before introducing the 1-d line search, some subroutines need to be introduced first; the interpolation-based evaluation used in the search is given in Algorithm 1:
In the algorithm, the cubic-interpolation step returns the minimizer, within the current interval, of a cubic polynomial that passes through the given points and matches the derivative at the first point; in Step 3, the quadratic-interpolation step returns the minimizer, within the current interval, of a quadratic polynomial that passes through the given points and matches the derivative at the first point.
In the 1-d line search, given a search interval, we use the following method (for the computation of the step size in our algorithms) to find a point in the interval such that the search satisfies the strong Wolfe conditions; see Algorithm 2.
In Algorithm 2, one of the subroutines is Algorithm 1, and the derivative is computed by a difference scheme similar to that in Appendix A. One may use other algorithms to make the search satisfy the strong Wolfe conditions; the one described above is also used in Python's scipy.optimize.
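For reference, scipy.optimize exposes such a strong-Wolfe line search directly; a small usage sketch on a quadratic toy loss (not part of the paper's experiments) is shown below.

```python
import numpy as np
from scipy.optimize import line_search

# Toy quadratic loss and its analytic gradient, used only to illustrate the call.
f = lambda x: np.sum(x ** 2)
grad = lambda x: 2 * x

xk = np.array([1.0, -2.0])
pk = -grad(xk)                        # a descent direction
alpha = line_search(f, grad, xk, pk)[0]
print(alpha)                          # accepted step size satisfying the strong Wolfe conditions
```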