Solving Partial Differential Equations with Random Feature Models
Abstract
Machine learning based partial differential equation (PDE) solvers have received great attention in recent years. Most progress in this area has been driven by deep neural network approaches such as physics-informed neural networks (PINNs) and by kernel methods. In this paper, we introduce a random feature based framework for efficiently solving PDEs. The random feature method was originally proposed to approximate large-scale kernel machines and can also be viewed as a shallow neural network. We provide an error analysis for our proposed method along with comprehensive numerical results on several PDE benchmarks. In contrast to state-of-the-art solvers that face challenges with a large number of collocation points, our proposed method reduces the computational complexity. Moreover, the implementation of our method is simple and does not require additional computational resources. Due to its theoretical guarantees and computational advantages, our approach proves efficient for solving PDEs.
Keywords: Random Feature, Partial Differential Equations, Scientific Machine Learning, Error Analysis
1 Introduction
Solving partial differential equations (PDEs) is a fundamental problem in science and engineering. Traditional numerical methods include the finite element method and the finite difference method. Recently, the use of machine learning tools for solving PDEs, or in general any complex scientific task, has led to the new area of scientific machine learning. Unlike traditional numerical methods, machine learning (ML) based methods do not rely on complex mesh designs and intricate numerical techniques, which enables simpler, faster, and more convenient implementation and use. The most prominent ML-based solver is the physics-informed neural network (PINN) [1], which uses a deep neural network to approximate the PDE solution. Given a set of collocation points in a spatiotemporal domain, we parametrize the PDE solution as a neural network satisfying the PDE, boundary conditions, and initial conditions at the given collocation points. This approach leads to an optimization problem whose objective function measures the PDE residual with respect to some loss functional. Finding the solution of the PDE is then equivalent to optimizing the neural network parameters using variants of stochastic gradient descent. PINN and its variations have achieved great success in learning PDE solutions. However, it is extremely hard and expensive to optimize all parameters in a deep neural network. To reduce the computational complexity, some recent works [2] proposed to use randomized neural networks to solve PDEs. A randomized neural network is a special type of neural network in which some parameters are randomly generated from a known probability distribution instead of being optimized. This strategy was shown to reduce the computational time while maintaining the approximation accuracy; see the numerical experiments in [2]. While these neural network based methods are often used in practice, their theoretical analysis relies on the universal approximation property of deep neural networks, which shows the existence of a network of a requisite size achieving a certain error rate. However, the existence result does not guarantee that the network is computable in practice.
Instead of using deep neural networks, kernel methods/Gaussian processes (GPs) have also been used to learn PDE solutions [3, 4, 5, 6]. The main idea of such methods is to approximate the solution of a given PDE by an element of a reproducing kernel Hilbert space. This element is found by solving an optimal recovery problem constrained by the PDE at collocation points [5, 7]. An optimal recovery problem can also be interpreted as maximum a posteriori (MAP) estimation for a Gaussian process constrained by a PDE. The key to solving an optimal recovery problem is a celebrated representer theorem that characterizes the minimizer by a finite-dimensional representation formula, which is easy to implement and interpret. Moreover, kernel-based PDE solvers are supported by a rigorous theoretical foundation. Specifically, the authors provided detailed a priori error estimates in [6]. However, the computational efficiency of kernel methods can be a significant drawback: they do not scale well when the sample size is large. For example, given $N$ training collocation points, the kernel method requires $\mathcal{O}(N^3)$ training time and $\mathcal{O}(N^2)$ storage for the kernel matrix, which is often computationally infeasible when $N$ is large.
To overcome the computational bottleneck of kernel methods, the random feature method was proposed to approximate large-scale kernel machines [8]. The main idea of random features is to map the data into a low-dimensional randomized feature space. The kernel matrix is then approximated by a low-rank matrix, which reduces the computational and storage costs of operating on the kernel matrix. The random feature model can be viewed as a randomized two-layer neural network: the weights connecting the input layer and the single hidden layer are randomly generated from a known distribution rather than trained, and only the weights of the output layer are trainable.
In this paper, we propose a random feature based PDE solver. Since the random feature model is a type of randomized neural network and an approximation of the kernel method, the computational complexity can be reduced significantly. Moreover, we provide a convergence analysis of our method. The key contributions of our work are summarized as follows:
• Framework: We propose a random feature framework for solving PDEs. By minimizing the PDE residuals, our method does not require the construction of the kernel matrix. In practice, this framework allows us to use modern automatic differentiation libraries, which is straightforward and convenient.
• Convergence Analysis: We provide a detailed convergence analysis of our proposed framework under some mild and widely used assumptions on the PDE. Our convergence analysis contains two steps: the first step follows the standard convergence analysis of the kernel method [6]; the second step concerns the approximation of the kernel method by the random feature method. To the best of our knowledge, this is the first work providing a convergence analysis of a random feature PDE solver.
• Numerical Experiments: We test the performance of our framework on nonlinear elliptic PDEs, (high-dimensional) nonlinear Poisson PDEs, the Allen-Cahn equation, and the advection-diffusion equation. Our method requires fewer computational resources to train the model while achieving similar or better performance compared with existing methods on all benchmarks. We also numerically verify the convergence rate obtained from our analysis for all problems.
The remainder of this paper is organized as follows. In Section 2, we give an overview of the random feature method and provide a framework for solving PDEs using random feature models along with an error analysis. Section 3 is dedicated to numerical experiments for solving PDEs with the random feature method; we compare our method with PINN on several benchmarks and provide an empirical convergence study. We conclude the paper with a summary of the results and some possible future directions in Section 4.
2 Random Feature
2.1 Overview of Random Feature Method
To introduce the random feature method, we first give a short introduction to the kernel method, also known as the kernel trick, which is a popular technique for capturing nonlinear relations between features and targets. Let $\{x_i\}_{i=1}^N \subset \mathbb{R}^d$ be samples and $\phi: \mathbb{R}^d \to \mathcal{H}$ be a feature map transforming samples into a high-dimensional (even infinite-dimensional) reproducing kernel Hilbert space $\mathcal{H}$ where the mapped data can be learned by a linear model. In practice, the explicit expression of the feature map is not necessarily known to us. The inner product between $\phi(x)$ and $\phi(y)$ endowed by $\mathcal{H}$ can be computed using a kernel function $k$, i.e. $k(x, y) = \langle \phi(x), \phi(y) \rangle_{\mathcal{H}}$.
Due to the ease of computing the inner product, the kernel method is effective for nonlinear learning problems with a wide range of successful applications [9]. However, the kernel method does not scale well to extremely large datasets. For example, given $N$ training samples, kernel regression requires $\mathcal{O}(N^3)$ training time and $\mathcal{O}(N^2)$ storage for the kernel matrix, which is often computationally infeasible when $N$ is large. The random feature method is one of the most popular techniques to overcome the computational challenges of the kernel method. The theoretical foundation of random (Fourier) features builds on the following classical result from harmonic analysis.
Theorem 1 (Bochner [10]).
A continuous shift-invariant kernel $k(x, y) = k(x - y)$ on $\mathbb{R}^d$ is positive definite if and only if $k(\delta)$ is the Fourier transform of a non-negative measure.
Bochner's theorem guarantees that the Fourier transform of the kernel is a proper probability distribution if the kernel is scaled properly. Denoting the corresponding probability density by $\rho$, we have
$$k(x - y) = \int_{\mathbb{R}^d} \rho(\omega)\, e^{i \omega^\top (x - y)}\, d\omega = \mathbb{E}_{\omega \sim \rho}\big[ e^{i \omega^\top x}\, e^{-i \omega^\top y} \big]. \tag{1}$$
Using the Monte Carlo sampling technique, we randomly generate $m$ i.i.d. samples $\omega_1, \dots, \omega_m$ from $\rho$ and define a random Fourier feature map $z: \mathbb{R}^d \to \mathbb{C}^m$ (the map depends on the i.i.d. samples from $\rho$; to simplify the notation, we omit this dependency when we define $z$) as
$$z(x) = \frac{1}{\sqrt{m}} \big( e^{i \omega_1^\top x}, \dots, e^{i \omega_m^\top x} \big)^\top. \tag{2}$$
Using the random Fourier feature map, we can define a kernel function as
$$k_m(x, y) = \langle z(x), z(y) \rangle = \frac{1}{m} \sum_{j=1}^{m} e^{i \omega_j^\top (x - y)}.$$
The kernel function $k_m$ is finite-dimensional since the random Fourier feature map transforms data into the finite-dimensional space $\mathbb{C}^m$. We can also use random cosine features to approximate any shift-invariant kernel. Specifically, with a random feature $\omega$ generated from $\rho$ and $b$ sampled from the uniform distribution on $[0, 2\pi]$, the random cosine feature is defined as $\sqrt{2} \cos(\omega^\top x + b)$. Similarly, we can define a random cosine feature map $z: \mathbb{R}^d \to \mathbb{R}^m$ as
$$z(x) = \sqrt{\tfrac{2}{m}} \big( \cos(\omega_1^\top x + b_1), \dots, \cos(\omega_m^\top x + b_m) \big)^\top \tag{3}$$
using i.i.d. random samples $(\omega_1, b_1), \dots, (\omega_m, b_m)$, and hence define a kernel function $k_m$ using the random cosine feature map. We can thus approximate the original shift-invariant kernel function defined in (1) by the finite-dimensional kernel $k_m$. Utilizing $m \ll N$ random features allows efficient learning with $\mathcal{O}(N m^2)$ time and $\mathcal{O}(N m)$ storage.
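To make the construction concrete, the following is a minimal NumPy sketch (not taken from the paper's code) that approximates a Gaussian kernel matrix with random cosine features; the bandwidth, sample sizes, and sampling distribution $\rho = \mathcal{N}(0, 2\gamma I)$ for $k(\delta) = e^{-\gamma \|\delta\|_2^2}$ are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Exact Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def random_cosine_features(X, omega, b):
    # Random cosine feature map z(x) = sqrt(2/m) * cos(omega @ x + b), as in (3).
    m = omega.shape[0]
    return np.sqrt(2.0 / m) * np.cos(X @ omega.T + b)

rng = np.random.default_rng(0)
N, d, m, gamma = 200, 2, 2000, 1.0
X = rng.uniform(-1, 1, size=(N, d))

# For this kernel, Bochner's theorem gives omega ~ N(0, 2*gamma*I).
omega = rng.normal(scale=np.sqrt(2 * gamma), size=(m, d))
b = rng.uniform(0, 2 * np.pi, size=m)

Z = random_cosine_features(X, omega, b)   # N x m feature matrix
K_exact = gaussian_kernel(X, X, gamma)    # N x N kernel matrix
K_approx = Z @ Z.T                        # low-rank approximation of the kernel matrix

print("max entrywise error:", np.abs(K_exact - K_approx).max())
```

As $m$ grows, the entry-wise error of the low-rank approximation $ZZ^\top$ decays at the Monte Carlo rate $\mathcal{O}(1/\sqrt{m})$.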
2.2 Random Feature Regression
Now, we turn to the regression setting where our aim is to learn a function $f^\star: \mathbb{R}^d \to \mathbb{R}$ from training samples $\{(x_i, y_i)\}_{i=1}^N$. To implement the random feature model, we first draw $m$ i.i.d. random features $\omega_1, \dots, \omega_m$ from a probability distribution $\rho$, and then construct an approximation of the target function taking the form
$$f^\sharp(x) = \sum_{j=1}^{m} c_j\, \phi(x, \omega_j), \tag{4}$$
where $\phi$ denotes the chosen random feature, e.g., $\phi(x, \omega) = e^{i \omega^\top x}$ for Fourier features or $\phi(x, (\omega, b)) = \sqrt{2} \cos(\omega^\top x + b)$ for cosine features.
We assume that the sampling points $x_i$ are drawn from a certain distribution $\mu$ with the corresponding output values
$$y_i = f^\star(x_i) + e_i,$$
where $e_i$ is the measurement noise. Let $\mathbf{A} \in \mathbb{C}^{N \times m}$ be the random feature matrix defined component-wise by $\mathbf{A}_{i,j} = \phi(x_i, \omega_j)$ for $i = 1, \dots, N$ and $j = 1, \dots, m$. Training the random feature model (4) is equivalent to finding a coefficient vector $\mathbf{c}$ such that $\mathbf{A}\mathbf{c} \approx \mathbf{y}$, where $\mathbf{c} = (c_1, \dots, c_m)^\top$ and $\mathbf{y} = (y_1, \dots, y_N)^\top$.
In the under-parameterized regime where we have more measurements than features ($N \geq m$), the coefficients are trained by solving the (regularized) least squares problem
$$\mathbf{c}^\sharp = \operatorname*{arg\,min}_{\mathbf{c}} \; \|\mathbf{A}\mathbf{c} - \mathbf{y}\|_2^2 + \lambda \|\mathbf{c}\|_2^2,$$
where $\lambda \geq 0$ is the regularization parameter. This is also referred to as ridge regression since the ridge regularization term $\lambda \|\mathbf{c}\|_2^2$ is added.
Recently, over-parameterized models have received great attention since such trained models not only fit the training samples exactly but also predict well on unseen test data [11, 12]. In the over-parameterized regime, we have more features than measurements ($m > N$), and we consider training the coefficient vector using the min-norm interpolation problem
$$\mathbf{c}^\sharp = \operatorname*{arg\,min}_{\mathbf{c}} \; \|\mathbf{c}\|_2 \quad \text{subject to} \quad \mathbf{A}\mathbf{c} = \mathbf{y}.$$
This problem is also referred to as ridgeless regression since its solution can be viewed as the limit of the ridge regression solution as $\lambda \to 0^{+}$.
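As an illustration of the two training regimes, here is a minimal NumPy sketch that fits a random feature model by ridge regression when $N \geq m$ and by (regularized) min-norm interpolation when $m > N$; the target function, noise level, and feature distribution are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target function and noisy training data.
f_star = lambda x: np.sin(2 * np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
N, d, m, lam = 300, 2, 1000, 1e-8
X = rng.uniform(-1, 1, size=(N, d))
y = f_star(X) + 1e-3 * rng.standard_normal(N)

# Random cosine features for a Gaussian kernel (the variance is a hyperparameter).
omega = rng.normal(scale=1.0, size=(m, d))
b = rng.uniform(0, 2 * np.pi, size=m)
A = np.sqrt(2.0 / m) * np.cos(X @ omega.T + b)          # N x m feature matrix

if N >= m:
    # Under-parameterized: ridge regression via the normal equations.
    c = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)
else:
    # Over-parameterized: min-norm interpolation (limit of ridge as lam -> 0).
    c = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(N), y)

X_test = rng.uniform(-1, 1, size=(500, d))
A_test = np.sqrt(2.0 / m) * np.cos(X_test @ omega.T + b)
err = np.linalg.norm(A_test @ c - f_star(X_test)) / np.linalg.norm(f_star(X_test))
print("relative test error:", err)
```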
The generalization properties of random feature models have been of recent interest [13, 14, 15, 16, 17, 18, 19]. In [13], the authors showed that the random feature model yields a test error of $\mathcal{O}(1/\sqrt{N} + 1/\sqrt{m})$ when trained with Lipschitz loss functions; therefore, the generalization error is $\mathcal{O}(1/\sqrt{N})$ for large $N$ if $m$ scales like $N$. However, the model there is trained by solving a constrained optimization problem which is rarely used in practice. In [14], it was shown that for target functions in an RKHS, using $\mathcal{O}(\sqrt{N} \log N)$ features is sufficient to achieve a test error of $\mathcal{O}(1/\sqrt{N})$ with the squared loss. In [15], the authors showed that a regularized model can achieve comparable risk bounds provided that the target function belongs to an RKHS. Nevertheless, the results in [14, 15] depend on assumptions on the kernel and a certain decay rate of the second moment operator, which may be difficult to verify in practice. Extending the results on random feature models from the squared loss to the 0-1 loss, the authors of [16] showed that the support vector machine with random features can achieve a learning rate faster than $\mathcal{O}(1/\sqrt{N})$ on a training set with $N$ samples. In [17], the authors computed the precise asymptotics of the test error in the proportional limit where the number of features, the number of samples, and the dimension grow together at fixed ratios. In [18], the authors derived non-asymptotic bounds covering both the under-parameterized setting using the (regularized) least squares problem and the over-parameterized setting using the min-norm problem or sparse regression. Their results rely on the condition numbers of the random feature matrix and indicate double descent behavior in random feature models. However, the target function space there is a subset of an RKHS, which limits the approximation ability of the random feature model. In [19], the authors consider an RKHS as the target function space and derived a similar non-asymptotic bound by utilizing different proof techniques.
Table 1: Commonly used shift-invariant kernels and the corresponding Fourier transform densities.

| Kernel name | $k(\delta)$ | $\rho(\omega)$ |
|---|---|---|
| Gaussian kernel | $e^{-\|\delta\|_2^2 / 2}$ | $(2\pi)^{-d/2}\, e^{-\|\omega\|_2^2 / 2}$ |
| Laplace kernel | $e^{-\|\delta\|_1}$ | $\prod_{j=1}^{d} \frac{1}{\pi (1 + \omega_j^2)}$ |
Furthermore, the random feature model can be viewed as a two-layer (one hidden layer) neural network where the weights connecting the input layer and the hidden layer, as well as the biases, are sampled randomly and independently from a known distribution. Only the weights of the output layer are trained using the training samples, and hence training a random feature model amounts to solving a convex optimization problem. There are other types of randomized neural networks, including the random vector functional link (RVFL) network [20, 21] and the extreme learning machine (ELM) [22, 23, 24], among others. Sharing the same structure as the random feature model, the RVFL network is also a shallow neural network whose input-to-hidden weights and biases are randomly selected. However, the motivations of the two models are different. RVFL networks were designed to address the difficulties associated with training deep neural networks, and their weights are usually sampled from a uniform distribution. Random feature models were originally used to approximate large-scale kernel machines, and hence the random weights depend on the kernel function; see Table 1 for some examples of commonly used kernels and the corresponding densities. The extreme learning machine is a type of deep neural network (more than two hidden layers) in which all the hidden-layer weights are randomly selected and then fixed; only the output-layer coefficients are trained. Theoretical guarantees on RVFL and ELM suggest that they are universal approximators, see [25].
2.3 Random Feature Models for Solving PDEs
In this section, we present our framework for solving PDEs using random feature models. Let us consider the PDE problem of the general form
$$\begin{aligned}
\mathcal{P}(u)(x) &= f(x), && x \in \Omega, \\
\mathcal{B}(u)(x) &= g(x), && x \in \partial\Omega,
\end{aligned} \tag{5}$$
where $\Omega \subset \mathbb{R}^d$ is the domain with boundary $\partial\Omega$, $\mathcal{P}$ is the interior differential operator, and $\mathcal{B}$ is the boundary differential operator. For the sake of brevity, we assume that the PDE is well-defined pointwise and has a unique strong solution $u^\star$ throughout this paper. We propose to solve the PDE (5) using a random feature model. More precisely, let $\{x_i\}_{i=1}^{N} \subset \overline{\Omega}$ be a collection of collocation points such that $\{x_i\}_{i=1}^{N_\Omega}$ is a collection of points in the interior of $\Omega$ and $\{x_i\}_{i=N_\Omega+1}^{N}$ is a set of points on the boundary $\partial\Omega$. Random features $\omega_1, \dots, \omega_m$ are randomly drawn from a known distribution $\rho$. The random feature model takes the form
$$u_m(x) = \sum_{j=1}^{m} c_j\, \phi(x, \omega_j). \tag{6}$$
We train the random feature model by solving the following optimization problem:
$$\begin{aligned}
\min_{\mathbf{c}} \quad & \|\mathbf{c}\|_2 \\
\text{s.t.} \quad & \mathcal{P}(u_m)(x_i) = f(x_i), \quad i = 1, \dots, N_\Omega, \\
& \mathcal{B}(u_m)(x_i) = g(x_i), \quad i = N_\Omega + 1, \dots, N.
\end{aligned} \tag{7}$$
We wish to find an approximation of the true solution with the min-norm random feature model satisfying the PDE and boundary data at collocation points.
Example (Linear PDE).
If the PDE is linear, we can rewrite the constraints in (7) as a linear system. Consider the PDE (5) with linear operators $\mathcal{P}$ and $\mathcal{B}$. Then the constraints can be written as the linear system
$$\begin{bmatrix} \mathbf{A}_{\Omega} \\ \mathbf{A}_{\partial\Omega} \end{bmatrix} \mathbf{c} = \begin{bmatrix} \mathbf{f} \\ \mathbf{g} \end{bmatrix}, \tag{8}$$
where $\mathbf{A}_{\Omega}$ and $\mathbf{A}_{\partial\Omega}$ are defined component-wise by $(\mathbf{A}_{\Omega})_{i,j} = \mathcal{P}(\phi(\cdot, \omega_j))(x_i)$ (precisely, the evaluation of $\mathcal{P}(\phi(\cdot, \omega_j))$ at the point $x_i$) and by $(\mathbf{A}_{\partial\Omega})_{i,j} = \mathcal{B}(\phi(\cdot, \omega_j))(x_i)$, respectively. The two vectors on the right-hand side are defined as $\mathbf{f} = (f(x_1), \dots, f(x_{N_\Omega}))^\top$ and $\mathbf{g} = (g(x_{N_\Omega+1}), \dots, g(x_N))^\top$. In this case we compute $\mathbf{c}$ by the least squares method if the system is overdetermined or by the min-norm method if the system is underdetermined.
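The following sketch illustrates the linear case (8) on an assumed toy problem, the 1D Poisson equation $u'' = f$ on $(0,1)$ with homogeneous Dirichlet data, where the derivatives of the cosine features are available in closed form; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative problem: u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# manufactured from the exact solution u(x) = sin(pi x), so f(x) = -pi^2 sin(pi x).
u_true = lambda x: np.sin(np.pi * x)
f_rhs = lambda x: -np.pi**2 * np.sin(np.pi * x)

m = 500                                     # number of random features
omega = rng.normal(scale=5.0, size=m)       # feature frequencies (hyperparameter)
b = rng.uniform(0, 2 * np.pi, size=m)

phi = lambda x: np.sqrt(2.0 / m) * np.cos(np.outer(x, omega) + b)
# Second derivative of each cosine feature: d^2/dx^2 cos(wx+b) = -w^2 cos(wx+b).
phi_xx = lambda x: -np.sqrt(2.0 / m) * omega**2 * np.cos(np.outer(x, omega) + b)

x_int = rng.uniform(0, 1, size=400)         # interior collocation points
x_bdy = np.array([0.0, 1.0])                # boundary collocation points

A = np.vstack([phi_xx(x_int), phi(x_bdy)])  # stacked system as in (8)
rhs = np.concatenate([f_rhs(x_int), np.zeros(2)])

c, *_ = np.linalg.lstsq(A, rhs, rcond=None) # least squares / min-norm solution

x_test = np.linspace(0, 1, 200)
err = np.linalg.norm(phi(x_test) @ c - u_true(x_test)) / np.linalg.norm(u_true(x_test))
print("relative test error:", err)
```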
However, addressing (7) directly can be complicated if the PDE is nonlinear. Therefore, we may instead solve an unconstrained optimization problem with regularization,
$$\min_{\mathbf{c}} \; \|\mathbf{c}\|_2^2 + \lambda_1 \sum_{i=1}^{N_\Omega} \big| \mathcal{P}(u_m)(x_i) - f(x_i) \big|^2 + \lambda_2 \sum_{i=N_\Omega+1}^{N} \big| \mathcal{B}(u_m)(x_i) - g(x_i) \big|^2, \tag{9}$$
where $\lambda_1, \lambda_2 > 0$ are regularization parameters. As $\lambda_1, \lambda_2 \to \infty$, the solution of (9) converges to the solution of (7). In practice, we use variants of stochastic gradient descent and modern automatic differentiation libraries to solve problem (9). In scenarios where PDEs are challenging, a large number of collocation points are required to capture the solution details. Compared with the framework in [3], which uses the standard kernel matrix to construct the solution approximation, our proposed random feature method approximates the kernel matrix and hence reduces the computational cost and accelerates computations involving the standard kernel matrix (and its inverse) when dealing with massive collocation points.
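For the nonlinear case, a minimal PyTorch sketch of the penalized objective (9) is given below. The manufactured problem ($-\Delta u + u^3 = f$ on the unit square), the feature variance, the penalty weights, and the optimizer settings are assumptions made for illustration; the derivatives of the random feature model are obtained with automatic differentiation, as described above.

```python
import torch

torch.manual_seed(0)
m, n_int, n_bdy = 500, 900, 124
lam1, lam2 = 1e3, 1e3                        # penalty weights (hyperparameters)

# Assumed test problem: -Laplace(u) + u^3 = f on (0,1)^2 with u = u* on the
# boundary, manufactured from the exact solution u*(x, y) = sin(pi x) sin(pi y).
def u_true(x):
    return torch.sin(torch.pi * x[:, 0]) * torch.sin(torch.pi * x[:, 1])

def f_rhs(x):
    u = u_true(x)
    return 2 * torch.pi**2 * u + u**3

# Fixed random features: Gaussian frequencies and uniform phases (cosine features).
omega = 4.0 * torch.randn(m, 2)
b = 2 * torch.pi * torch.rand(m)
c = torch.zeros(m, requires_grad=True)       # only the output weights are trained

def u_model(x):
    return ((2.0 / m) ** 0.5 * torch.cos(x @ omega.T + b)) @ c

def laplacian(x):
    x = x.clone().requires_grad_(True)
    u = u_model(x)
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return sum(torch.autograd.grad(grad[:, k].sum(), x, create_graph=True)[0][:, k]
               for k in range(x.shape[1]))

# Collocation points: interior samples plus points on the four edges of the square.
x_int = torch.rand(n_int, 2)
t = torch.rand(n_bdy // 4, 1)
zeros, ones = torch.zeros_like(t), torch.ones_like(t)
x_bdy = torch.cat([torch.cat([t, zeros], 1), torch.cat([t, ones], 1),
                   torch.cat([zeros, t], 1), torch.cat([ones, t], 1)], 0)

opt = torch.optim.Adam([c], lr=1e-2)
for epoch in range(1000):
    opt.zero_grad()
    res_int = -laplacian(x_int) + u_model(x_int) ** 3 - f_rhs(x_int)
    res_bdy = u_model(x_bdy) - u_true(x_bdy)
    loss = c.pow(2).sum() + lam1 * res_int.pow(2).mean() + lam2 * res_bdy.pow(2).mean()
    loss.backward()
    opt.step()

x_test = torch.rand(500, 2)
with torch.no_grad():
    rel = torch.linalg.norm(u_model(x_test) - u_true(x_test)) / torch.linalg.norm(u_true(x_test))
print("relative test error:", rel.item())
```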
2.4 Convergence Analysis
In this section, we present the convergence analysis of our method. Our convergence analysis relies on the standard convergence analysis of the kernel method in [6] and on the approximation of the kernel by random features. Recalling the kernel-based method for solving PDEs in [5, 6], a reproducing kernel Hilbert space $\mathcal{H}$ is chosen and we aim to solve the following optimal recovery problem
$$\begin{aligned}
\min_{u \in \mathcal{H}} \quad & \|u\|_{\mathcal{H}} \\
\text{s.t.} \quad & \mathcal{P}(u)(x_i) = f(x_i), \quad i = 1, \dots, N_\Omega, \\
& \mathcal{B}(u)(x_i) = g(x_i), \quad i = N_\Omega + 1, \dots, N.
\end{aligned} \tag{10}$$
We first state the main assumptions on the domain $\Omega$ and its boundary $\partial\Omega$, the PDE operators $\mathcal{P}$ and $\mathcal{B}$, and the reproducing kernel Hilbert space $\mathcal{H}$.
Assumption 1.
The following assumptions hold:
• (C1) Regularity of the domain and its boundary: $\Omega \subset \mathbb{R}^d$ is a compact set and $\partial\Omega$ is a smooth connected Riemannian manifold of dimension $d-1$ endowed with a geodesic distance $\rho_{\partial\Omega}$.
• (C2) Stability of the PDE: The PDE is Lipschitz well-posed with respect to the source term and the boundary data. Precisely, there exist Sobolev exponents and, for any $r > 0$, a constant $C > 0$ independent of the functions involved, so that for any $u_1, u_2 \in \mathcal{H}$ with $\|u_1\|_{\mathcal{H}}, \|u_2\|_{\mathcal{H}} \leq r$,
$$\|u_1 - u_2\| \leq C \big( \|\mathcal{P}(u_1) - \mathcal{P}(u_2)\| + \|\mathcal{B}(u_1) - \mathcal{B}(u_2)\| \big)$$
in the appropriate Sobolev norms on $\Omega$ and $\partial\Omega$; we refer to [6] for the precise choice of exponents and norms.
• (C3) The RKHS $\mathcal{H}$ is continuously embedded in a Sobolev space of sufficiently high order on $\Omega$.
Item (C1) is a standard assumption when analyzing PDEs. Item (C2) assumes the PDE to be Lipschitz well-posed with respect to the right-hand side/source term; it relates to the analysis of nonlinear PDEs and is independent of our numerical scheme. Assumption (C3) dictates the choice of the RKHS $\mathcal{H}$, and in turn the kernel, which should be carefully selected based on the regularity of the strong solution $u^\star$. We are now ready to state the first theorem, which concerns the convergence rate of the kernel method.
Theorem 2 (Theorem 3.8, [6]).
Suppose Assumption 1 is satisfied and denote the unique strong solution of (5) by $u^\star$. Let $u^\dagger$ be a minimizer of (10) with $N_\Omega$ interior collocation points and $N - N_\Omega$ collocation points on the boundary $\partial\Omega$. Define the fill-in distances
$$h_\Omega := \sup_{x \in \Omega} \min_{1 \leq i \leq N_\Omega} \|x - x_i\|_2, \qquad h_{\partial\Omega} := \sup_{x \in \partial\Omega} \min_{N_\Omega < i \leq N} \rho_{\partial\Omega}(x, x_i),$$
and set $h := \max(h_\Omega, h_{\partial\Omega})$. Then there exists a constant $h_0 > 0$ so that if $h \leq h_0$, the error $\|u^\dagger - u^\star\|$ is bounded by a constant multiple of an algebraic power of $h$, where the exponent is determined by the Sobolev exponents in Assumption 1 and the constant is independent of $u^\dagger$ and $h$; see [6, Theorem 3.8] for the precise rate.
With the kernel minimizer $u^\dagger$ at hand, our next step is to approximate it using random features. We first adopt an alternative representation of the preselected RKHS $\mathcal{H}$. Denoting the corresponding random Fourier feature map (or random cosine feature map) by $\phi(\cdot, \omega)$, we define the following function space
$$\mathcal{F}(\rho) := \Big\{ f(x) = \int \alpha(\omega)\, \phi(x, \omega)\, d\omega \;:\; \|f\|_\rho := \sup_{\omega} \frac{|\alpha(\omega)|}{\rho(\omega)} < \infty \Big\}, \tag{11}$$
where $\rho$ is the Fourier transform density associated with the kernel $k$. Notice that the completion of $\mathcal{F}(\rho)$ is a Hilbert space equipped with an RKHS norm. Recalling Proposition 4.1 in [26], it is indeed the reproducing kernel Hilbert space with associated kernel function $k$, and the endowed norms $\|\cdot\|_\rho$ and $\|\cdot\|_{\mathcal{H}}$ are equivalent.
In the next theorem, we address the approximation ability of the finite-sum random feature model of the form (6).
Theorem 3.
Let $f^\star$ be a function from $\mathcal{F}(\rho)$. Suppose that the random feature map satisfies $|\phi(x, \omega)| \leq 1$ for all $x$ and $\omega$. Then for any $\delta \in (0, 1)$, there exist coefficients $c_1, \dots, c_m$ so that the function
$$f_m(x) = \sum_{j=1}^{m} c_j\, \phi(x, \omega_j) \tag{12}$$
satisfies
$$\sup_x \big| f_m(x) - f^\star(x) \big| \leq \frac{C_\delta\, \|f^\star\|_\rho}{\sqrt{m}}$$
with probability at least $1 - \delta$ over $\omega_1, \dots, \omega_m$ drawn i.i.d. from $\rho$, where $C_\delta$ is a constant depending only on $\delta$.
Proof.
We first introduce some notation and define
(13)
where the $\omega_j$'s are i.i.d. samples following the probability distribution with density $\rho$, and the function $f_m$ in (12) is defined using these samples.
By utilizing the triangle inequality, we decompose the error into two terms
(14)
We first bound the first term. Recalling the definitions above, we bound it as
(15)
where we use the Cauchy-Schwarz inequality in the first line and Markov's inequality in the second line.
Next, we bound the second term. For any fixed $x$, we define a random variable and let its i.i.d. copies be given by the random features. By the boundedness of the feature map, each copy admits a uniform upper bound, and its variance is bounded above as
By Lemma A.2 and Theorem A.1 in [27], it holds with probability at least $1 - \delta$ that
(16)
Taking the square root of both sides of (15) and then adding it to (16) gives the desired bound, after collecting the $\delta$-dependent factors into the constant $C_\delta$. ∎
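As a sanity check of the $m^{-1/2}$ rate in Theorem 3, the short script below approximates a Gaussian kernel section $f^\star(x) = e^{-\|x - x_0\|_2^2/2}$, which admits the representation $f^\star(x) = \mathbb{E}_{\omega, b}\big[2\cos(\omega^\top x + b)\cos(\omega^\top x_0 + b)\big]$ with $\omega \sim \mathcal{N}(0, I)$ and $b \sim \mathcal{U}[0, 2\pi]$, using the Monte Carlo coefficients $c_j = \frac{2}{m}\cos(\omega_j^\top x_0 + b_j)$; the target, point $x_0$, and test points are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
d, x0 = 2, np.array([0.3, -0.2])

# Target: a Gaussian kernel section, which lies in the RKHS / function space (11).
f_star = lambda X: np.exp(-0.5 * ((X - x0) ** 2).sum(axis=1))

X_test = rng.uniform(-1, 1, size=(2000, d))

for m in [100, 400, 1600, 6400]:
    errs = []
    for _ in range(20):                       # average over feature draws
        omega = rng.standard_normal((m, d))   # rho = N(0, I) for this kernel
        b = rng.uniform(0, 2 * np.pi, size=m)
        c = (2.0 / m) * np.cos(omega @ x0 + b)        # Monte Carlo coefficients
        f_m = np.cos(X_test @ omega.T + b) @ c        # finite-sum model of form (12)
        errs.append(np.abs(f_m - f_star(X_test)).max())
    print(m, np.mean(errs))                   # error decays roughly like m**(-1/2)
```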
The last theorem in this section presents the convergence rate of our proposed random feature model.
Theorem 4.
Suppose that Assumption 1 and the conditions of Theorem 3 hold, and let $u^\star$ be the unique strong solution of (5). Then, with probability at least $1 - \delta$ over the random features $\omega_1, \dots, \omega_m$ drawn i.i.d. from $\rho$, there exists a random feature model $u_m$ of the form (6) whose error $\|u_m - u^\star\|$ is bounded by the sum of the kernel error bound of Theorem 2 and a random feature approximation error of order $m^{-1/2}$ as in Theorem 3.
Proof.
Using the triangle inequality, we decompose the error as
$$\|u_m - u^\star\| \leq \|u^\dagger - u^\star\| + \|u_m - u^\dagger\|,$$
where $u^\dagger$ is a minimizer of (10) with $N_\Omega$ interior collocation points and $N - N_\Omega$ collocation points on the boundary. We directly apply Theorem 2 to bound the first term. The second term is bounded using the entry-wise bound in Theorem 3, together with the fact that the strong solution satisfies the boundary conditions, and hence the minimizer $u^\dagger$ satisfies the boundary data at the boundary collocation points. Adding the two bounds together leads to the desired error bound. ∎
3 Numerical Experiments
In this section, we test the performance of our proposed random feature method on several PDE benchmarks from the literature [5, 2, 3]. In addition, we numerically verify the convergence rate obtained in Theorem 4. In all numerical experiments, we randomly generate training samples over the domain to train the models and generate a different set of test points to evaluate the PDE solutions. We compare the true solution and the predicted solution on these test points (of size $N_{\text{test}}$) to compute the test error, which is defined as
$$\text{test error} := \frac{\Big( \sum_{i=1}^{N_{\text{test}}} \big| u_m(x_i^{\text{test}}) - u^\star(x_i^{\text{test}}) \big|^2 \Big)^{1/2}}{\Big( \sum_{i=1}^{N_{\text{test}}} \big| u^\star(x_i^{\text{test}}) \big|^2 \Big)^{1/2}}.$$
We consider Gaussian random features in all examples. Detailed problem settings are stated below. All experiments are implemented in Python based on the PyTorch library. Our code is available at the repository: https://github.com/liaochunyang/RF_PDE.
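For reference, a small helper of the form used for this metric is sketched below; `u_model` and `u_true` are assumed to be callables mapping a tensor of test points to solution values.

```python
import torch

def relative_test_error(u_model, u_true, x_test):
    # Relative l2 error over the test points, matching the metric defined above.
    with torch.no_grad():
        diff = u_model(x_test) - u_true(x_test)
        return (torch.linalg.norm(diff) / torch.linalg.norm(u_true(x_test))).item()
```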
3.1 Nonlinear Elliptic PDEs
We test with an instance of a nonlinear elliptic PDE, also considered in [5, 3], posed on a two-dimensional domain with Dirichlet boundary conditions. The true solution $u^\star$ is prescribed in closed form, the right-hand side $f$ is computed accordingly via the solution $u^\star$, and the boundary condition is given by the restriction of $u^\star$ to the boundary. We randomly sample 1000 random features ($m = 1000$) from a normal distribution with a tuned variance. The random feature model is trained on interior collocation points together with 124 collocation points on the boundary $\partial\Omega$. We take a uniform grid as our test points. In Figure 1, we show the predicted solution obtained by our proposed random feature model, the true solution, and the entry-wise absolute errors at the test points. We observe that our proposed random feature model provides an accurate prediction of the true solution.
[Figure 1: Predicted solution, true solution, and entry-wise absolute errors at the test points for the nonlinear elliptic PDE.]
We also compare the training epochs, test errors, and training times of our proposed method with those of PINN; see the results in Table 2. The PINN model has 2 hidden layers, each with 64 neurons, and uses the tanh activation function. The PINN is trained on the same training samples and is tested on the same uniform grid. We observe that training the PINN is more demanding: it requires more epochs and hence a longer training time. If we use the same number of epochs, then the PINN gives a poor prediction compared with the random feature method. Moreover, our proposed method achieves a smaller test error. Overall, our proposed random feature model outperforms PINN in this example.
Table 2: Comparison between the random feature (RF) model and PINN for the nonlinear elliptic PDE.

| Method | Epochs | Test error | Training Time (Seconds) |
|---|---|---|---|
| RF | 1000 | | 99.05 |
| PINN | 1000 | 1.22 | 22.32 |
| PINN | 10000 | | 169.28 |
Finally, we numerically verify the convergence rate for the nonlinear elliptic PDE. We first fix the number of random features and vary the number of collocation points, sampling points uniformly in the domain and on the boundary; we sample another set of 100 test points to evaluate the test errors. In the second experiment, we fix the interior and boundary collocation points and vary the number of random features, again using a separate set of 100 test points to evaluate the test errors. In Figure 2, we show the test errors as a function of the number of collocation points and as a function of the number of random features, respectively. For each point in the figure, we repeat the experiment 10 times and take the average.
[Figure 2: Test errors for the nonlinear elliptic PDE as a function of the number of collocation points (left) and the number of random features (right).]
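The convergence study can be reproduced in spirit with the following sketch, which swaps in the simpler 1D Poisson problem from the earlier linear-system example (an assumed stand-in for the 2D nonlinear elliptic PDE) and averages the test error over 10 independent draws, mirroring the study reported in Figure 2.

```python
import numpy as np

u_true = lambda x: np.sin(np.pi * x)
f_rhs = lambda x: -np.pi**2 * np.sin(np.pi * x)

def solve_rf_poisson(m, n_int, seed):
    # Solve u'' = f on (0,1), u(0) = u(1) = 0, with m random cosine features
    # and n_int interior collocation points (same construction as above).
    rng = np.random.default_rng(seed)
    omega = rng.normal(scale=5.0, size=m)
    b = rng.uniform(0, 2 * np.pi, size=m)
    phi = lambda x: np.sqrt(2.0 / m) * np.cos(np.outer(x, omega) + b)
    phi_xx = lambda x: -np.sqrt(2.0 / m) * omega**2 * np.cos(np.outer(x, omega) + b)
    x_int = rng.uniform(0, 1, size=n_int)
    x_bdy = np.array([0.0, 1.0])
    A = np.vstack([phi_xx(x_int), phi(x_bdy)])
    rhs = np.concatenate([f_rhs(x_int), np.zeros(2)])
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    x_test = rng.uniform(0, 1, size=100)
    return np.linalg.norm(phi(x_test) @ c - u_true(x_test)) / np.linalg.norm(u_true(x_test))

# Test error vs number of random features (collocation points fixed), averaged over 10 runs.
for m in [50, 100, 200, 400, 800]:
    errs = [solve_rf_poisson(m, n_int=400, seed=s) for s in range(10)]
    print("m =", m, "mean error =", np.mean(errs))

# Test error vs number of collocation points (random features fixed).
for n_int in [25, 50, 100, 200, 400]:
    errs = [solve_rf_poisson(m=800, n_int=n_int, seed=s) for s in range(10)]
    print("n_int =", n_int, "mean error =", np.mean(errs))
```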
3.2 Nonlinear Poisson PDE
In this section, we test our proposed method on high-dimensional nonlinear Poisson PDEs. We consider a nonlinear Poisson problem posed on a bounded domain $\Omega \subset \mathbb{R}^d$ with Dirichlet boundary conditions. The true solution $u^\star$ is crafted in closed form, the boundary function is the restriction of $u^\star$ to $\partial\Omega$, and the right-hand side is computed using the true solution, which has an explicit expression.
We first test the 2D nonlinear Poisson PDE and show the predicted solution, the true solution, and the entry-wise absolute errors in Figure 3. The training samples of size 1024 contain 900 interior collocation points and 124 boundary points, which are uniformly generated over the domain $\Omega$ and its boundary $\partial\Omega$, respectively. We randomly generate 500 test samples to evaluate the performance. We generate 1000 random features, drawn from the standard normal distribution, to construct the random feature model.
[Figure 3: Predicted solution, true solution, and entry-wise absolute errors at the test points for the 2D nonlinear Poisson PDE.]
Furthermore, we compare our proposed model with the PINN model. The neural network has 2 hidden layers with 64 neurons in each layer, and we select the tanh function as the activation function. We test the 2D nonlinear Poisson PDE as well as higher-dimensional nonlinear Poisson PDEs, aiming to compare the test errors and training times. We also report the number of random features for our model and the number of epochs for both models. All numerical results are summarized in Table 3. We observe that our proposed random feature model achieves similar performance to, or even beats, the PINN models in terms of the test error.
Table 3: Comparison between the random feature (RF) model and PINN for nonlinear Poisson PDEs in different dimensions.

| Dimension | Method | $m$ | Epochs | Test error | Training Time (Seconds) |
|---|---|---|---|---|---|
| | RF | 500 | 1000 | | 42.29 |
| | PINN | - | 2000 | | 51.46 |
| | RF | 500 | 1000 | | 51.24 |
| | PINN | - | 2000 | | 55.49 |
| | RF | 100 | 1500 | | 37.84 |
| | PINN | - | 2000 | | 54.14 |
We note that the accuracy of our model depends on the choice of the variance of the Gaussian random features, although it is not overly sensitive to it. Usually, we can tune this hyperparameter using cross-validation. In Table 4, we report the variances and the corresponding test errors for one of the nonlinear Poisson PDEs. In this example, it is better to use a small variance, but the test error is not very sensitive to the choice of variance once it is smaller than some threshold.
Table 4: Test errors of the random feature model for different variances of the Gaussian random features (nonlinear Poisson PDE).

| Variance | 100 | 1 | 0.04 | 0.01 | 0.0025 |
|---|---|---|---|---|---|
| Test error | | | | | |
In Figure 4, we report the empirical convergence rates for the nonlinear Poisson equations. Figure 4(a) shows the test error as a function of the number of collocation points: for each dimension, we fix the number of random features and vary the number of collocation points, uniformly sampling interior and boundary points, and we use a separate set of 100 test points to evaluate the test errors. Figure 4(b) shows the test error as a function of the number of random features: for each dimension, we fix the collocation points of size 484 with 400 interior collocation points and vary the number of random features. We average over 10 experiments to produce each point in Figure 4.
[Figure 4: Test errors for the nonlinear Poisson PDEs as a function of the number of collocation points (a) and the number of random features (b).]
3.3 Allen-Cahn Equation
Next, we consider a 2D stationary Allen-Cahn equation with a source function and Dirichlet boundary conditions. The true solution is prescribed in closed form, and the source function is computed using this solution. A positive parameter controls the frequency of the solution, and we test three cases of increasing frequency.
We first compare the performance of the random feature method and PINN. In each case, we randomly sample collocation points, including interior collocation points, to train the models. The test performance of both models is evaluated on 100 test samples, which are uniformly generated over the domain and boundary. We report the parameter selections and summarize the numerical results in Table 5. From the numerical results, we observe that the random feature models beat PINNs across all cases, especially when the frequency parameter is large. This might be related to the known "spectral bias" of neural networks [28, 29, 30]: neural networks trained by gradient descent fit a low frequency function before a high frequency one. Therefore, it is difficult for PINNs to learn high frequency PDE solutions. To alleviate the spectral bias and learn high frequency functions, previous work proposed to use Fourier features and both theoretically and empirically showed that a Fourier feature mapping can improve the performance [31]. Fourier features have also been used to solve high frequency PDEs, see [32]. As our results suggest, a higher frequency in the PDE solution calls for a larger variance of the Gaussian random features. Moreover, more random features and epochs are required to train the random feature model as the frequency increases. In Figure 5, we show the predicted solution, the true solution, and the entry-wise errors at the test points.
Table 5: Comparison between the random feature (RF) model and PINN for the Allen-Cahn equation at different frequencies.

| Frequency | Method | $m$ | Variance | Epochs | Test error | Training Time (Seconds) |
|---|---|---|---|---|---|---|
| | RF | 200 | | 1000 | | 14.27 |
| | PINN | - | - | 1000 | | 8.64 |
| | RF | 200 | | 2000 | | 25.37 |
| | PINN | - | - | 2000 | | 18.48 |
| | RF | 400 | | 1500 | | 86.54 |
| | PINN | - | - | 2000 | | 26.67 |
[Figure 5: Predicted solution, true solution, and entry-wise errors at the test points for the Allen-Cahn equation.]
Figure 6 illustrates the numerical verification of the convergence rate for the Allen-Cahn equation. We first show the test error as a function of the number of collocation points; in this experiment, we fix the number of random features, uniformly sample collocation points in the domain and on the boundary, and use another set of 100 test points to evaluate the test errors. In the second experiment, where we show the test error as a function of the number of random features, we uniformly sample and then fix the interior and boundary collocation points, sample another set of 100 test points to evaluate the test errors, and generate the random features from a Gaussian distribution. In Figure 6, we show the test errors as a function of the number of collocation points and as a function of the number of random features, respectively. For each point in the figure, we repeat the experiment 10 times and take the average.
[Figure 6: Test errors for the Allen-Cahn equation as a function of the number of collocation points (left) and the number of random features (right).]
3.4 Advection Diffusion Equation
Finally, we test our method on an advection-diffusion equation. We consider an initial boundary value problem posed on a spatio-temporal domain. The true solution is prescribed in closed form, and the source term, boundary data, and initial data are set according to the true solution. When we simulate this problem, we treat the time variable $t$ in the same way as the spatial variable $x$. We uniformly generate 1000 training samples over the spatio-temporal domain. We enforce the boundary condition on 100 collocation points and the initial condition on 200 collocation points. Gaussian random features are randomly sampled from the standard normal distribution. The PINN model has 2 hidden layers with 64 neurons in each layer. We report the number of epochs, the test errors, and the training times for both models in Table 6. In this example, our proposed method achieves a test error similar to that of PINN, but the training is simpler in the sense that it requires fewer epochs, and the training time of the random feature model is around 50% of that of the PINN model. In Figure 7, we compare the predicted solution and the true solution at various times to further highlight the ability of our method to learn the true solution.
Table 6: Comparison between the random feature (RF) model and PINN for the advection-diffusion equation.

| Method | Epochs | Test error | Training Time (Seconds) |
|---|---|---|---|
| RF | 600 | | 15.46 |
| PINN | 1000 | | 28.24 |
[Figure 7: Predicted and true solutions of the advection-diffusion equation at several times.]
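To illustrate how time is treated as an extra coordinate, the sketch below evaluates an advection-diffusion residual of the generic form $u_t + \beta u_x - \nu u_{xx} = f$ with automatic differentiation; the coefficients `beta` and `nu`, the zero source term, and the feature settings are placeholders, since the exact equation and data of this benchmark are specified by the experiment.

```python
import torch

torch.manual_seed(0)
m = 1000
nu, beta = 0.1, 1.0          # placeholder diffusion and advection coefficients

# Random features on the space-time input (x, t): time is treated as just
# another coordinate, as described in the text.
omega = torch.randn(m, 2)
b = 2 * torch.pi * torch.rand(m)
c = torch.zeros(m, requires_grad=True)

def u_model(xt):
    return ((2.0 / m) ** 0.5 * torch.cos(xt @ omega.T + b)) @ c

def pde_residual(xt, f):
    # Residual of u_t + beta * u_x - nu * u_xx - f at the collocation points.
    xt = xt.clone().requires_grad_(True)
    u = u_model(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, 0], du[:, 1]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0]
    return u_t + beta * u_x - nu * u_xx - f(xt)

xt_int = torch.rand(200, 2)                   # interior space-time collocation points
f_zero = lambda z: torch.zeros(z.shape[0])    # placeholder source term
print(pde_residual(xt_int, f_zero).shape)     # residuals to be minimized as in (9)
```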
Finally, we perform a convergence study for the advection-diffusion equation. In Figure 8(a), we show the test errors as a function of the number of interior collocation points, which are uniformly generated from the spatio-temporal domain; we use 100 points on the boundary and 200 points for the initial condition. In Figure 8(b), we use 200 interior points, 100 boundary points, and 200 points for the initial condition, and vary the number of random features. For each point in the figure, we take the average over 10 repetitions of the experiment.
[Figure 8: Test errors for the advection-diffusion equation as a function of the number of collocation points (a) and the number of random features (b).]
4 Conclusion
In this paper, we propose a random feature model for solving partial differential equations along with an error analysis. By utilizing some techniques from probability, we provide convergence rates of our proposed method under some mild assumptions on the PDE. Our framework allows convenient implementation and efficient computation. Moreover, it easily scales to massive collocation points, which are necessary for solving challenging PDEs. We test our method on several PDE benchmarks. The numerical experiments indicate that our method either matches or beats state-of-the-art models and reduces the computational cost.
Finally, we conclude with some directions for future work. First, our analysis does not directly address the minimizer obtained by solving the optimization problem; doing so requires analyzing a min-norm minimization problem with nonlinear constraints. Second, while it is natural to sample random features from the Fourier transform density, it can be advantageous to sample from a different density, which has been shown to yield better performance. Third, we assume that the PDE at hand is well-defined pointwise and has a unique strong solution. Extending our framework to weak solutions is left for future work.
References
- [1] M. Raissi, P. Perdikaris, and G. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, 2019.
- [2] Y. Wang and S. Dong, “An extreme learning machine-based method for computational pdes in higher dimensions,” Computer Methods in Applied Mechanics and Engineering, vol. 418, p. 116578, 2024.
- [3] Z. Xu, D. Long, Y. Xu, G. Yang, S. Zhe, and H. Owhadi, “Toward efficient kernel-based solvers for nonlinear PDEs,” arXiv:2410.11165, 2024.
- [4] S. Fang, M. Cooley, D. Long, S. Li, M. Kirby, and S. Zhe, “Solving high frequency and multi-scale PDEs with gaussian processes,” in The Twelfth International Conference on Learning Representations, 2024.
- [5] Y. Chen, B. Hosseini, H. Owhadi, and A. M. Stuart, “Solving and learning nonlinear pdes with gaussian processes,” Journal of Computational Physics, vol. 447, p. 110668, 2021.
- [6] P. Batlle, Y. Chen, B. Hosseini, H. Owhadi, and A. M. Stuart, “Error analysis of kernel/GP methods for nonlinear and parametric pdes,” Journal of Computational Physics, vol. 520, p. 113488, 2025.
- [7] S. Foucart, C. Liao, S. Shahrampour, and Y. Wang, “Learning from non-random data in Hilbert spaces: an optimal recovery perspective,” Sampling Theory, Signal Processing, and Data Analysis, vol. 20, no. 5, 2022.
- [8] A. Rahimi and B. Recht, “Random features for large-scale kernel machines,” in Advances in Neural Information Processing Systems, vol. 20, Curran Associates, Inc., 2007.
- [9] B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA, USA: MIT Press, 2001.
- [10] S. Bochner, Harmonic Analysis and the Theory of Probability. University of California Press Berkeley, 1955.
- [11] M. Belkin, S. Ma, and S. Mandal, “To understand deep learning we need to understand kernel learning,” in Proceedings of the 35th International Conference on Machine Learning, vol. 80 of Proceedings of Machine Learning Research, pp. 541–549, PMLR, 10–15 Jul 2018.
- [12] T. Liang and A. Rakhlin, “Just interpolate: Kernel “Ridgeless” regression can generalize,” The Annals of Statistics, vol. 48, no. 3, pp. 1329 – 1347, 2020.
- [13] A. Rahimi and B. Recht, “Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning,” in Advances in Neural Information Processing Systems, vol. 21, Curran Associates, Inc., 2008.
- [14] A. Rudi and L. Rosasco, “Generalization properties of learning with random features,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, (Red Hook, NY, USA), p. 3218–3228, Curran Associates Inc., 2017.
- [15] W. E, C. Ma, L. Wu, and S. Wojtowytsch, “Towards a mathematical understanding of neural network-based machine learning: What we know and what we don’t,” CSIAM Transactions on Applied Mathematics, vol. 1, no. 4, pp. 561–615, 2020.
- [16] Y. Sun, A. Gilbert, and A. Tewari, “But how does it work in theory? linear SVM with random features,” in Advances in Neural Information Processing Systems, vol. 31, Curran Associates, Inc., 2018.
- [17] S. Mei and A. Montanari, “The generalization error of random features regression: Precise asymptotics and the double descent curve,” Communications on Pure and Applied Mathematics, vol. 75, 2019.
- [18] Z. Chen and H. Schaeffer, “Conditioning of random Fourier feature matrices: double descent and generalization error,” Information and Inference: A Journal of the IMA, vol. 13, p. iaad054, 04 2024.
- [19] C. Liao, D. Needell, and A. Xue, “Differentially private random feature model,” arXiv:2412.04785, 2024. Submitted.
- [20] D. Needell, A. A. Nelson, R. Saab, P. Salanevich, and O. Schavemaker, “Random vector functional link networks for function approximation on manifolds,” Frontiers in Applied Mathematics and Statistics-Optimization, vol. 10, 2024.
- [21] A. Malik, R. Gao, M. Ganaie, M. Tanveer, and P. N. Suganthan, “Random vector functional link network: Recent developments, applications, and future directions,” Applied Soft Computing, vol. 143, p. 110377, 2023.
- [22] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, “Extreme learning machine: Theory and applications,” Neurocomputing, vol. 70, no. 1, pp. 489–501, 2006. Neural Networks.
- [23] G.-B. Huang, L. Chen, and C.-K. Siew, “Universal approximation using incremental constructive feedforward networks with random hidden nodes,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
- [24] J. Wang, S. Lu, S.-H. Wang, and Y.-D. Zhang, “A review on extreme learning machine,” Multimedia Tools and Applications, vol. 81, no. 29, 2022.
- [25] B. Igelnik and Y.-H. Pao, “Stochastic choice of basis functions in adaptive function approximation and the functional-link net,” IEEE Transactions on Neural Networks, vol. 6, no. 6, pp. 1320–1329, 1995.
- [26] A. Rahimi and B. Recht, “Uniform approximation of functions with random bases,” in 2008 46th Annual Allerton Conference on Communication, Control, and Computing, pp. 555–561, 2008.
- [27] S. Lanthaler and N. H. Nelsen, “Error bounds for learning with vector-valued random features,” in Thirty-seventh Conference on Neural Information Processing Systems, 2023.
- [28] R. Basri, M. Galun, A. Geifman, D. Jacobs, Y. Kasten, and S. Kritchman, “Frequency bias in neural networks for input of non-uniform density,” in Proceedings of the 37th International Conference on Machine Learning, vol. 119 of Proceedings of Machine Learning Research, pp. 685–694, PMLR, 13–18 Jul 2020.
- [29] N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. Hamprecht, Y. Bengio, and A. Courville, “On the spectral bias of neural networks,” in Proceedings of the 36th International Conference on Machine Learning, vol. 97 of Proceedings of Machine Learning Research, pp. 5301–5310, PMLR, 09–15 Jun 2019.
- [30] B. Ronen, D. Jacobs, Y. Kasten, and S. Kritchman, “The convergence rate of neural networks for learned functions of different frequencies,” in Advances in Neural Information Processing Systems, vol. 32, Curran Associates, Inc., 2019.
- [31] M. Tancik, P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. Barron, and R. Ng, “Fourier features let networks learn high frequency functions in low dimensional domains,” in Advances in Neural Information Processing Systems, vol. 33, pp. 7537–7547, Curran Associates, Inc., 2020.
- [32] S. Wang, H. Wang, and P. Perdikaris, “On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale pdes with physics-informed neural networks,” Computer Methods in Applied Mechanics and Engineering, vol. 384, p. 113938, 2021.