DeepONet for Solving PDEs: Generalization Analysis in Sobolev Training
Abstract
In this paper, we investigate the application of operator learning, specifically DeepONet, to solve partial differential equations (PDEs). Unlike function learning methods, which require training a separate neural network for each PDE, operator learning generalizes across different PDEs without retraining. We focus on the performance of DeepONet in Sobolev training, addressing two key questions: the approximation ability of deep branch and trunk networks, and the generalization error in Sobolev norms. Our findings show that deep branch networks offer significant performance benefits, whereas trunk networks are best kept simple. Moreover, our generalization analysis shows that standard sampling methods, without adding derivative information in the encoding part, are sufficient for minimizing the generalization error in Sobolev training. This paper fills a theoretical gap by providing error estimates for a wide range of physics-informed machine learning models and applications.
1 INTRODUCTION
Solving Partial Differential Equations (PDEs) using neural networks has been widely applied in mathematical and engineering fields, especially for high-dimensional domains where classical methods, such as finite elements [3], face challenges. Methods for solving PDEs can be broadly divided into two categories: function learning and operator learning. Function learning methods, such as Physics-Informed Neural Networks (PINNs) [29], the Deep Ritz Method [10], the Deep Galerkin Method [34], and approaches based on random features [5, 36, 9], use neural networks to directly approximate the solutions of PDEs by minimizing specifically designed loss functions. A significant limitation of function learning approaches is that a separate neural network must be trained for each PDE, since a network trained on one equation generally does not transfer to another. On the other hand, operator learning methods, such as DeepONet [24] and the Fourier Neural Operator (FNO) [20], focus on learning the operator that maps the PDE parameters to their corresponding solutions. These methods are more general and do not require retraining for different PDEs, provided that the underlying differential operator remains unchanged.
Operator learning is therefore particularly effective for solving families of PDEs. In this paper, we study the performance of operator learning in solving PDEs, with a focus on DeepONet [24]. The DeepONet architecture introduced in [24] is expressed as follows:
$\mathcal{O}(f)(y) \approx \sum_{k=1}^{p} b_k\big(f(x_1), \ldots, f(x_m)\big)\, t_k(y),$   (1)
where the branch network $(b_1, \ldots, b_p)$ processes the input function $f$ through its values at the sample points $x_1, \ldots, x_m$, and the trunk network $(t_1, \ldots, t_p)$ processes the coordinates $y$. A key difference between this and shallow neural networks for function approximation is the replacement of the coefficient term with a branch network, which itself resembles a shallow neural network. This facilitates the generalization of the structure into a more flexible form, given by:
$\mathcal{O}_{\boldsymbol{\theta}}(f)(y) = \sum_{k=1}^{p} \mathcal{B}_k\big(\mathcal{P}(f)\big)\, \mathcal{T}_k(y),$   (2)
where both $\mathcal{B}_k$ and $\mathcal{T}_k$ are fully connected DNNs. This kind of structure has also been mentioned in [25, 19]. Our approach can easily reduce to the shallow neural network case, recovering the classical DeepONet. In Eq. (2), $\mathcal{P}$ denotes a projection that reduces $f$ to a finite-dimensional vector, allowing for various reduction techniques in differential problems, such as the truncation of Fourier and Taylor series or the finite element method [3]. In the classical DeepONet, this projection is based on the sample points, i.e., $\mathcal{P}(f) = \big(f(x_1), \ldots, f(x_m)\big)$.
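To make the structure in Eq. (2) concrete, the following sketch (our own illustration, not code from the paper) implements a DeepONet-style network with the point-sampling projection $\mathcal{P}(f) = (f(x_1), \ldots, f(x_m))$, a deep branch network, and a shallow trunk network; the widths, depths, and activation function are placeholder choices.

```python
import torch
import torch.nn as nn

class MLP(nn.Sequential):
    """Fully connected network with `depth` hidden layers of size `width`."""
    def __init__(self, dim_in, width, depth, dim_out):
        layers = [nn.Linear(dim_in, width), nn.Tanh()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers += [nn.Linear(width, dim_out)]
        super().__init__(*layers)

class DeepONet(nn.Module):
    """O_theta(f)(y) = sum_k B_k(P(f)) * T_k(y), with P(f) = (f(x_1), ..., f(x_m))."""
    def __init__(self, m, dim_y, p, branch_width=128, branch_depth=4,
                 trunk_width=128, trunk_depth=2):
        super().__init__()
        self.branch = MLP(m, branch_width, branch_depth, p)    # deep branch network
        self.trunk = MLP(dim_y, trunk_width, trunk_depth, p)   # shallow trunk network

    def forward(self, f_samples, y):
        # f_samples: (batch, m) point samples of the input function f
        # y: (batch, n_points, dim_y) query coordinates
        b = self.branch(f_samples)                # (batch, p)
        t = self.trunk(y)                         # (batch, n_points, p)
        return torch.einsum('bp,bnp->bn', b, t)   # sum over the p branch/trunk pairs
```

The final contraction over the $p$ outputs of the branch and trunk networks reproduces the sum in Eq. (2).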
Note that both the branch and trunk networks can easily be generalized into deep neural networks. As discussed in [23, 42, 40], deep neural networks have been shown to outperform shallow neural networks in terms of approximation error for function approximation. This raises the following question:
Q1: Do the branch and trunk networks have sufficient approximation ability when they are deep neural networks? If so, is their performance in terms of approximation error better than that of shallow neural networks?
The results in this paper show that the branch network benefits from becoming a deep neural network, whereas the trunk network does not. While a deep trunk network may achieve a super-convergence rate, i.e., a rate better than $\mathcal{O}(W^{-s/d})$, where $W$ is the number of parameters, $s$ is the smoothness of the function class, and $d$ is the dimension, there are key challenges. First, in this case, most of the parameters in the trunk network depend on the input function $f$. For each parameter that depends on $f$, a branch network is required to approximate it, which complicates the structure of DeepONet compared to Eq. (2). Second, this requires the branch network to approximate discontinuous functionals [45], which makes the task particularly challenging, as deep neural networks achieving such rates are typically discontinuous approximators [8]. The branch network, however, does not face these issues and can benefit from a deep structure in terms of approximation error. This observation is consistent with the findings in [25], which show that the trunk network is sensitive and difficult to train. Therefore, the trunk network should be kept simple, using methods such as Proper Orthogonal Decomposition (POD) to assist in learning.
The second question arises from applying operator learning to solve PDEs. In [18], the error analysis of DeepONet is carried out in the $L^2$-norm. However, for solving PDEs such as
$\mathcal{L}u = f \ \text{in} \ \Omega, \qquad u = 0 \ \text{on} \ \partial\Omega,$   (3)
where $\Omega \subset \mathbb{R}^d$ is a bounded domain and $\mathcal{L}$ is a second-order differential operator, the loss function should incorporate information about derivatives, similar to the Physics-Informed Neural Network (PINN) loss [29]. Although some papers have applied PINN losses to DeepONet, such as [37, 21, 13], a theoretical gap remains in understanding the generalization error of DeepONet when measured in Sobolev norms.
Q2: What is the generalization error of DeepONet in Sobolev training, and what insights can be drawn to improve the design of DeepONet?
In this paper, we address this question and fill the gap by providing error estimations for a wide range of physics-informed machine learning models and applications. We find that to minimize the error of DeepONet in Sobolev training, the standard sampling methods used in classical DeepONet are still sufficient. This result differs slightly from our initial expectation that, when studying operators in Sobolev spaces, adding derivative information from the input space would be necessary for training. We show that the encoding in operator learning works effectively without incorporating derivative information from the input space. This insight suggests that even in Sobolev training for operator learning, there is no need to overly complicate the network, particularly the encoding part.
1.1 Contributions and Organization of the Paper
The rest of the paper is organized as follows. In Section 2, we introduce the notations used throughout the paper and set up the operator and loss functions from the problem statement. In Section 3, we estimate the approximation rate. In the first subsection, we reduce operator learning to function learning, showing that the trunk network does not benefit from a deep structure. In the second subsection, we reduce functional approximation to function approximation and demonstrate that the encoding of the input space for the operator can still rely on traditional sampling methods, without the need to incorporate derivative information, even in Sobolev training. In the last subsection, we approximate the function and show that the branch network can benefit from a deep structure. In Section 4, we provide the generalization error analysis of DeepONet in Sobolev training. All omitted proofs from the main text are presented in the appendix.
1.2 Related works
Function Learning: Unlike operator learning, function learning focuses on learning a map from a finite-dimensional space to a finite-dimensional space. Many papers have studied the theory of function approximation in Sobolev spaces, such as [27, 30, 31, 32, 33, 42, 40, 39, 28, 44, 45], among others. In [27, 39, 45], the approximation is considered under the assumption of continuous approximators, as mentioned in [8]. However, for deep neural networks, the approximator is often discontinuous, as discussed in [45], which still achieves the optimal approximation rate with respect to the number of parameters. This is one of the benefits of deep structures, and whether this deep structure provides a significant advantage in approximation is a key point of discussion in this work.
Functional Learning: Functional learning can be viewed as a special case of operator learning, where the objective is to map a function to a constant. Several papers have provided theoretical analyses of using neural networks to approximate functionals. In [6], the universal approximation of neural networks for approximating continuous functionals is established. In [35], the authors provide the approximation rate for functionals, while in [41], they address the approximation of functionals without the curse of dimensionality using Barron space methods. In [43], the approximation rate for functionals on spheres is analyzed.
Operator Learning: Operator learning is the line of work most closely related to this paper. In [18, 7], an error analysis of DeepONet is performed; however, the analysis is primarily focused on solving PDEs in the $L^2$-norm, and network design is not considered, i.e., which parts of the network should be deep and which should be shallow. In [17], the focus is on addressing the curse of dimensionality and providing lower bounds on the number of parameters required to achieve a target rate. In their results, there is a parameter that obscures the structural information of the network, so the benefit of the network architecture in operator learning remains unclear. In [22], a general theory of operator learning is summarized, but the benefit of deep neural networks is still not explicitly visible in their results. Other works, such as those on Fourier Neural Operators (FNOs) [16], explore different types of operators, which differ from the focus of this paper.
2 PRELIMINARIES
2.1 Notations
Let us summarize all basic notations used in the paper as follows:
1. Matrices are denoted by bold uppercase letters. For example, $\boldsymbol{A} \in \mathbb{R}^{m \times n}$ is a real matrix of size $m \times n$, and $\boldsymbol{A}^\top$ denotes the transpose of $\boldsymbol{A}$.
2. For a $d$-dimensional multi-index $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_d] \in \mathbb{N}^d$, we denote several related notations as follows: $|\boldsymbol{\alpha}| = |\alpha_1| + |\alpha_2| + \cdots + |\alpha_d|$; $\boldsymbol{x}^{\boldsymbol{\alpha}} = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_d^{\alpha_d}$; $\boldsymbol{\alpha}! = \alpha_1! \alpha_2! \cdots \alpha_d!$.
3. Let $B_{r}(\boldsymbol{x}) \subset \mathbb{R}^d$ be the closed ball with center $\boldsymbol{x} \in \mathbb{R}^d$ and radius $r$ measured by the Euclidean distance. Similarly, $B_{r, \|\cdot\|_{\ell^\infty}}(\boldsymbol{x}) \subset \mathbb{R}^d$ is the closed ball with center $\boldsymbol{x} \in \mathbb{R}^d$ and radius $r$ measured by the $\ell^\infty$-norm.
4. Assume $\boldsymbol{n} \in \mathbb{N}_+^m$; then $f(\boldsymbol{n}) = \mathcal{O}(g(\boldsymbol{n}))$ means that there exists a positive constant $C$ independent of $\boldsymbol{n}$, $f$, and $g$ such that $f(\boldsymbol{n}) \le C g(\boldsymbol{n})$ when all entries of $\boldsymbol{n}$ go to $+\infty$.
5. Define $\sigma_1(x) := \max\{0, x\}$ and $\sigma_2(x) := \sigma_1(x)^2$. We call the neural networks with activation function $\sigma_t$ with $t \in \{1, 2\}$ as $\sigma_t$ neural networks ($\sigma_t$-NNs). With the abuse of notation, we define $\sigma_t(\boldsymbol{x})$ as $[\sigma_t(x_1), \ldots, \sigma_t(x_d)]^\top$ for any $\boldsymbol{x} = [x_1, \ldots, x_d]^\top \in \mathbb{R}^d$.
6. Define $\mathcal{L}_i(\boldsymbol{x}) := \boldsymbol{W}_i \boldsymbol{x} + \boldsymbol{b}_i$ for $i = 0, 1, \ldots, L$; then a $\sigma_t$-NN $\phi$ with width $N$ and depth $L$ can be described as follows:
$\phi(\boldsymbol{x}) = \mathcal{L}_L \circ \sigma_t \circ \mathcal{L}_{L-1} \circ \sigma_t \circ \cdots \circ \sigma_t \circ \mathcal{L}_0(\boldsymbol{x}),$
where $\boldsymbol{W}_i$ and $\boldsymbol{b}_i$ are the weight matrix and the bias vector in the $i$-th linear transform $\mathcal{L}_i$ of $\phi$, respectively. In this paper, a DNN with width $N$ and depth $L$ means: (a) the maximum width of this DNN over all hidden layers is less than or equal to $N$; (b) the number of hidden layers of this DNN is less than or equal to $L$.
7. Denote $D$ as the weak derivative of a single-variable function, and $D^{\boldsymbol{\alpha}} f = D_1^{\alpha_1} D_2^{\alpha_2} \cdots D_d^{\alpha_d} f$ as the partial derivative of a multivariate function, where $\boldsymbol{\alpha} = [\alpha_1, \ldots, \alpha_d]^\top$ and $D_i$ is the derivative in the $i$-th variable. Let $n \in \mathbb{N}$ and $1 \le p \le \infty$. Then we define Sobolev spaces
$W^{n,p}(\Omega) := \{ f \in L^p(\Omega) : D^{\boldsymbol{\alpha}} f \in L^p(\Omega) \ \text{for all} \ |\boldsymbol{\alpha}| \le n \}$
with a norm
$\|f\|_{W^{n,p}(\Omega)} := \big( \sum_{0 \le |\boldsymbol{\alpha}| \le n} \| D^{\boldsymbol{\alpha}} f \|_{L^p(\Omega)}^p \big)^{1/p}$ if $p < \infty$,
and $\|f\|_{W^{n,\infty}(\Omega)} := \max_{0 \le |\boldsymbol{\alpha}| \le n} \| D^{\boldsymbol{\alpha}} f \|_{L^\infty(\Omega)}$. For simplicity, $H^n(\Omega) := W^{n,2}(\Omega)$. Furthermore, for a vector-valued function $\boldsymbol{f} = (f_1, \ldots, f_d)$, $\boldsymbol{f} \in W^{1,\infty}(\Omega; \mathbb{R}^d)$ if and only if $f_i \in W^{1,\infty}(\Omega)$ for each $i = 1, \ldots, d$, and $\|\boldsymbol{f}\|_{W^{1,\infty}(\Omega; \mathbb{R}^d)} := \max_i \|f_i\|_{W^{1,\infty}(\Omega)}$.
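Since the Sobolev losses used below measure errors in derivatives as well as in function values, a small numerical illustration may help. The sketch below (our illustration; the test function is chosen arbitrarily) approximates the $H^1$ norm of a function on $[0, 1]$ from grid values via finite differences.

```python
import numpy as np

def h1_norm(values, h):
    """Approximate the H^1 norm on a uniform grid: sqrt(||f||_{L^2}^2 + ||f'||_{L^2}^2).

    values: samples of f on a uniform grid with spacing h.
    The derivative is approximated by first-order forward differences.
    """
    l2_sq = np.sum(values ** 2) * h
    df = np.diff(values) / h           # forward-difference approximation of f'
    dl2_sq = np.sum(df ** 2) * h
    return np.sqrt(l2_sq + dl2_sq)

# Example: f(x) = sin(2*pi*x) on [0, 1]; the exact H^1 norm is sqrt(1/2 + 2*pi^2).
x = np.linspace(0.0, 1.0, 1001)
print(h1_norm(np.sin(2 * np.pi * x), x[1] - x[0]))
```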
2.2 Error component in Sobolev training
In this paper, we consider applying DeepONet to solve partial differential equations (PDEs). Without loss of generality, we focus on the PDE (3), though our results can be easily generalized to more complex cases. Throughout, we assume that the following conditions hold:
Assumption 1.
For any and for , which is the solution of Eq. (3) with source term , the following conditions hold:
(i) There exists a constant such that
(ii) There exists a constant such that
The above assumption holds in many cases. For example, based on [12], when $\mathcal{L}$ is a linear elliptic operator with smooth coefficients and $\Omega$ is a polygon (which is valid for ), condition (i) in Assumption 1 is satisfied. Regarding condition (ii) in Assumption 1, this assumption reflects that the operator we aim to learn is Lipschitz continuous, which is a regularity assumption. If the domain is smooth and $\mathcal{L}$ is linear with smooth coefficients, then a stronger regularity estimate holds [11], which is a stronger condition than the one we assume here.
Based on the above assumption, the equation above has a unique solution, establishing a mapping between the source term $f$ and the solution $u$. In [24], training such an operator is treated as a black-box, i.e., data-driven, approach. However, in many cases, there is insufficient data for training, or there may be no data at all. Thus, it becomes crucial to incorporate information from the PDEs into the training process, as demonstrated in [37, 21, 13]. This technique is referred to as Sobolev training.
In other words, we aim to use $\mathcal{O}_{\boldsymbol{\theta}}$ (see Eq. (2)) to approximate the mapping between $f$ and $u$. The chosen loss function is:
(4) |
where is a measure on , and is the domain of the operator, i.e., . The constant balances the boundary and interior terms. The error analysis of is the classical estimation. Since we focus on the Sobolev training part in this paper, we set for simplicity. Due to (i) in Assumption 1, we obtain:
(5) |
Furthermore, if we consider that is a finite measure on and, for simplicity, assume , we have:
(6) |
This error is called the approximation error. The other part of the error comes from the sample, which we refer to as the generalization error. To define this error, we first introduce the following loss, which is the discrete version of for :
(7) |
where is an i.i.d. uniform sample in , and is an i.i.d. sample based on .
Then, the generalization error is given by:
(8) |
where and . The expectation symbol is due to the randomness in sampling and . The generalization error is represented by the last term in Eq. (22).
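As an illustration of how the discrete loss in Eq. (7) can be assembled in practice, the sketch below implements a Monte Carlo, PINN-style Sobolev loss for an operator network under the additional assumptions (ours, not the paper's) that $\mathcal{L} = -\Delta$ and that the boundary condition is homogeneous Dirichlet. The model interface (encoded input plus query points) matches the DeepONet sketch in Section 1, and all names are placeholders.

```python
import torch

def sobolev_loss(model, f_batch, f_at_x, x, x_bdry=None, lam=1.0):
    """Monte Carlo Sobolev (PINN-style) loss, assuming -Delta u = f with u = 0
    on the boundary.

    model:   maps (encoded input f, query points) -> predicted u at those points
    f_batch: (M, m) point-sample encodings of the M sampled source terms
    f_at_x:  (M, N) values f_i(x_j) at the N interior collocation points
    x:       (M, N, d) interior collocation points
    x_bdry:  (M, Nb, d) boundary collocation points (optional)
    """
    x = x.clone().requires_grad_(True)
    u = model(f_batch, x)                                             # (M, N)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]    # (M, N, d)
    lap_u = 0.0
    for k in range(x.shape[-1]):
        # second derivative in the k-th coordinate
        lap_u = lap_u + torch.autograd.grad(
            grad_u[..., k].sum(), x, create_graph=True)[0][..., k]
    interior = ((-lap_u - f_at_x) ** 2).mean()        # residual of -Delta u = f
    if x_bdry is None:
        return interior
    boundary = (model(f_batch, x_bdry) ** 2).mean()   # penalize u != 0 on the boundary
    return interior + lam * boundary
```

Averaging over the $M$ sampled source terms and the $N$ collocation points plays the role of the two independent samples appearing in the discrete loss above.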
3 APPROXIMATION ERROR
In this section, we estimate the approximation error, i.e., we aim to establish the appropriate DeepONet structure and bound . The proof sketch is divided into three steps: the first step reduces the operator learning problem to a functional learning problem, the second step reduces functional learning to function learning, and the final step approximates the function. The diagram outlining the proof is shown in Fig. 1.
[Figure 1: diagram outlining the three-step proof.]
Theorem 1.
For any and sufficiently large , , and with , where for any and , satisfying Assumption 1, there exist -NNs with depth , width , and a map from , as well as -NNs with parameters such that:
(9) |
where , and are independent of and . Furthermore, for , is a shallow neural network that can achieve this approximation rate. As approaches 2, the ratio between the width and depth of decreases, implying that the network structure becomes deeper.
Remark 1.
The term incorporates the benefit of the deep structure of the neural network. This deep structure arises from the branch network rather than the trunk network. In the approximation analysis, the deep structure of the trunk network does not yield benefits; instead, it makes the branch network more difficult to train. We will provide more details on this in the following subsections.
3.1 Operator learning to functional learning in Sobolev spaces
Proposition 1.
For any and , there exist -NNs and bounded linear functionals such that
(10) |
where is a -NN with depth and width , and is a constant independent of and . (For simplicity of notation, in the following statements, the constant may represent different values from line to line.)
Remark 2.
Before we present the sketch of the proof for Proposition 1, there are some remarks we would like to mention. First, the number of parameters in each is independent of and . This is a generalization of the trunk network in DeepONet. It can easily reduce to the classical DeepONet case by applying a shallow neural network to approximate the trunk, a well-known result that we omit here.
Second, the target function can be considered in for a more general case. The method of proof remains the same; the only difference is that we need to assume that for any open set , the following inequality holds:
for any . It is straightforward to verify that this condition holds for .
The sketch of the proof of Proposition 1 is as follows. First, we divide into parts and apply the Bramble–Hilbert Lemma [3, Lemma 4.3.8] to locally approximate on each part using polynomials. The coefficients of these polynomials can be regarded as continuous linear functionals in . Next, we use a partition of unity to combine the local polynomial approximations. Finally, we represent both the polynomials and the partition of unity using neural networks.
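The following one-dimensional sketch (our illustration) mimics this proof strategy numerically: the domain is covered by overlapping patches, a low-degree polynomial is fitted on each patch (standing in for the averaged Taylor polynomial provided by the Bramble–Hilbert Lemma), and the local approximants are blended with a partition of unity.

```python
import numpy as np

# Target function and evaluation grid (both chosen arbitrarily for illustration).
f = lambda x: np.sin(3 * np.pi * x) * np.exp(x)
x = np.linspace(0.0, 1.0, 2000)

num_patches, degree = 8, 2
centers = np.linspace(0.0, 1.0, num_patches)
width = 1.5 / (num_patches - 1)                       # patches overlap

def hat(x, c, w):
    """Hat-shaped weight supported on [c - w, c + w]."""
    return np.maximum(0.0, 1.0 - np.abs(x - c) / w)

weights = np.stack([hat(x, c, width) for c in centers])
weights /= weights.sum(axis=0)                        # normalize: partition of unity

approx = np.zeros_like(x)
for w_row, c in zip(weights, centers):
    mask = np.abs(x - c) <= width
    coeffs = np.polyfit(x[mask], f(x[mask]), degree)  # local polynomial approximation
    approx += w_row * np.polyval(coeffs, x)           # blend with the partition weight

print("maximum error of the blended approximation:", np.max(np.abs(approx - f(x))))
```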
Based on Proposition 1, we can reduce the operator learning problem to functional learning as follows:
(11) |
where the last inequality follows from Assumption 1. Therefore, without considering the approximation of , the approximation error can be bounded by . The next step is to consider the approximation of the functional .
The approximation order achieved in Proposition 1 corresponds to the optimal approximation rate for neural networks in the continuous approximation case. If we consider deep neural networks, we can achieve an approximation rate of using the neural network . As discussed in [40], most of the parameters depend on , leading to significant interaction between the trunk and branch networks, which makes the learning process more challenging. Furthermore, this approximator can be a discontinuous one, as highlighted in [45, 39], where the functionals may no longer be continuous, thereby invalidating subsequent proofs. Therefore, while it is possible to construct deep neural networks for trunk networks, we do not gain the benefits associated with deep networks, and the approximation process becomes more difficult to learn.
3.2 Functional learning to function learning in Sobolev spaces
In this part, we reduce each functional to a function learning problem by applying a suitable projection in Eq. (2) to map into a finite-dimensional space. To ensure that the error in this step remains small, we require to be small for a continuous reconstruction operator , which will be defined later, as shown below:
(12) |
The first inequality holds because is a bounded linear functional in , and thus also a bounded linear functional in . The second inequality follows from (ii) in Assumption 1.
For the traditional DeepONet [24], is a sampling method, i.e.,
and then a proper basis is found such that . To achieve this reduction in , one approach is to apply pseudo-spectral projection, as demonstrated in [26, 4] and used in the approximation analysis of Fourier Neural Operators (FNO) in [16].
Proposition 2.
For any with , where denotes the functions in with periodic boundary conditions, define
for . Thus, there exists a Lipschitz continuous map such that
(13) |
for any , where , is independent of and , and . Furthermore, the Lipschitz constant of is .
Remark 3.
Based on Proposition 2, we can reduce functional learning to function learning. Note that, even though we consider Sobolev training for operator learning, we do not need derivative information in the encoding process. Specifically, a projection based on point values of the input function is sufficient for tasks like solving PDEs. Therefore, when designing operator neural networks for solving PDEs, it is not necessary to include derivative information of the input functions in the encoding. However, whether adding derivative information in the encoding helps improve the effectiveness of Sobolev training remains an open question, which we will address in future work.
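A quick numerical check illustrates why point values alone suffice: reconstructing a smooth periodic function from its samples by trigonometric interpolation also recovers its derivative accurately, so the encoder never needs derivative data. The sketch below is our illustration; the sampled function is chosen arbitrarily.

```python
import numpy as np

m = 32
x = 2 * np.pi * np.arange(m) / m            # uniform sampling grid on [0, 2*pi)
f = lambda x: np.exp(np.sin(x))
df = lambda x: np.cos(x) * np.exp(np.sin(x))

encoded = f(x)                              # encoding: point values (f(x_1), ..., f(x_m))
coeffs = np.fft.fft(encoded) / m            # trigonometric interpolation coefficients

x_fine = 2 * np.pi * np.arange(1024) / 1024
k = np.fft.fftfreq(m, d=1.0 / m)            # integer wavenumbers
modes = np.exp(1j * np.outer(x_fine, k))
recon = (modes @ coeffs).real               # reconstructed function
recon_d = (modes @ (1j * k * coeffs)).real  # derivative of the reconstruction

print("max error in f :", np.max(np.abs(recon - f(x_fine))))
print("max error in f':", np.max(np.abs(recon_d - df(x_fine))))
```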
3.3 Function learning in Sobolev spaces
In this part, we will use a neural network to approximate each functional , i.e., we aim to make
(14) |
as small as possible. For simplicity, we consider all functions in with for some . We then need to construct a neural network in such that
(15) |
is small. The idea for achieving this approximation is to first analyze the regularity of with respect to . Once the regularity is established, we can construct a neural network that approximates it with a rate that depends on the regularity.
Lemma 1.
Proof.
For any , we have
(16) |
which holds because is a bounded linear functional in , and thus also a bounded linear functional in .
Next, based on the Lipschitz condition of in Assumption 1, we have
(17) |
where the last inequality is due to the Sobolev embedding theorem and .
Finally, based on the Lipschitz condition of proved in Proposition 2, we have
(18) |
Therefore, is Lipschitz continuous in . Since is a convex set, by [14], we conclude that , with its norm being . ∎
Now, we can establish a neural network to approximate . For shallow neural networks, the result from [27] is as follows:
Lemma 2 ([27]).
Suppose is a continuous non-polynomial function, and is a compact subset of . Then, there exist positive integers , and constants for , such that for any ,
(19) |
For deep neural networks, the result from [32] is as follows:
Lemma 3 ([32]).
Given a continuous function , where is a compact subset of , for any , there exists a -NN with width and depth such that
(20) |
where , and are constants independent of and .
Therefore, for shallow neural networks, based on Lemma 2, we know that using $W$ parameters can only achieve an approximation rate of $\mathcal{O}(W^{-1/m})$ for each functional, where $m$ denotes the dimension of the encoded input of the branch network. However, for deep neural networks, the approximation rate can be $\mathcal{O}(W^{-\gamma/m})$ with $\gamma \in [1, 2]$. Here, $\gamma = 1$ corresponds to the shallow or very wide case (i.e., the depth is logarithmic in the width), and $\gamma = 2$ corresponds to the very deep case (i.e., the width is logarithmic in the depth). This is because, in Lemma 3, a $\sigma_1$-NN with width $\mathcal{O}(N)$ and depth $\mathcal{O}(L)$ has $\mathcal{O}(N^2 L)$ parameters, and the approximation rate is $\mathcal{O}(N^{-2/m} L^{-2/m})$. When the depth $L$ is large and the width $N$ is fixed, the rate corresponds to $\gamma = 2$. The case $\gamma \in (1, 2)$ represents the middle ground, where the network is neither very deep nor very wide.
In [45, 39], it is mentioned that an approximation rate better than $\mathcal{O}(W^{-1/m})$ may cause the neural network approximator to become discontinuous. However, unlike the approximation of the trunk network, we do not require the approximation realized by the branch network to be continuous. Thus, we can establish a deep neural network for the branch network and obtain the benefits of a deep structure, as indicated by the improved approximation rates.
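For intuition about the two regimes, the helper below (our illustration, using the standard count that a fully connected network of width $N$ and depth $L$ has on the order of $N^2 L$ parameters) compares a wide-and-shallow network with a narrow-and-deep one of similar parameter budget.

```python
def mlp_param_count(dim_in, width, depth, dim_out=1):
    """Exact parameter count (weights and biases) of a fully connected network
    with `depth` hidden layers of size `width`; asymptotically O(width^2 * depth)."""
    count = dim_in * width + width                  # input layer
    count += (depth - 1) * (width * width + width)  # hidden-to-hidden layers
    count += width * dim_out + dim_out              # output layer
    return count

# Two branch networks with comparable parameter budgets: under the rates
# discussed above, the deep regime trades the same budget for a better exponent.
print(mlp_param_count(dim_in=64, width=512, depth=3))    # wide and shallow
print(mlp_param_count(dim_in=64, width=64, depth=190))   # narrow and deep
```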
Proposition 3.
Suppose is an operator satisfying Assumption 1 with domain , where , and for some . For any , let be the functional defined in Proposition 1, and be the map defined in Proposition 2. Then for any and sufficiently large , there exists -NNs with parameters such that
(21) |
where are independent of both and . Furthermore, for , shallow neural networks can achieve this approximation rate. As approaches 2, the ratio between the width and depth of decreases, meaning the network structure becomes deeper.
Proof.
The result in Proposition 3 exhibits a curse of dimensionality when is large. One potential approach to mitigate this issue is to incorporate a proper measure instead of using in the error analysis. However, since this paper focuses on establishing the framework for generalization error in Sobolev training for operator learning and investigating whether deep structures offer benefits, addressing the curse of dimensionality in Sobolev training is left for future work. Solving this challenge would likely require and to exhibit smoother structures.
4 GENERALIZATION ERROR
Recall that the generalization error is defined as:
(22) |
where and . In the definition of , there are two types of samples: one is for the sample points , and the other is for the input function . To bound the generalization error, we introduce , defined as:
(23) |
Thus, we can write:
(24) |
Here, we provide the generalization error of DeepONet in Sobolev training. How to obtain the optimal generalization error in Sobolev training is a question we leave for future work. Furthermore, for the error , it has been shown in [40] that it can be bounded by based on Rademacher complexity. Next, we focus on . The proof idea is inspired by [18].
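In practice, the generalization gap can also be estimated empirically by comparing the training loss with the loss evaluated on freshly sampled input functions and collocation points. The sketch below is our illustration; `loss_fn` and `sample_fresh_batch` are assumed helpers (e.g., a thin wrapper around the Sobolev loss sketch in Section 2.2 and the data-generation routine).

```python
import torch

def estimate_generalization_gap(model, loss_fn, train_batch, sample_fresh_batch,
                                n_repeats=10):
    """Empirical estimate of the generalization gap of a trained operator network:
    the gap between the loss on freshly sampled data (a Monte Carlo proxy for the
    population loss) and the loss on the training sample.

    loss_fn(model, batch) -> scalar tensor
    sample_fresh_batch()  -> a new batch drawn from the same distributions as the
                             training data (input functions and collocation points)
    """
    train_loss = loss_fn(model, train_batch)
    fresh_losses = torch.stack([loss_fn(model, sample_fresh_batch())
                                for _ in range(n_repeats)])
    return (fresh_losses.mean() - train_loss).item()
```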
Assumption 2.
(i) Boundedness: For any neural network with bounded parameters, characterized by a bound and dimension , there exists a function for such that
for all , and there exist constants , such that
(25) |
(ii) Lipschitz continuity: There exists a function , such that
(26) |
for all , and
for the same constants as in Eq. (25).
Remark 4.
Based on the above assumptions, we can provide an upper bound for the generalization error of DeepONet in Sobolev training. However, in this error bound, the distinction between shallow and deep networks is not explicitly visible. The reason is that the information about the depth or shallowness is hidden in the terms and . For deep neural networks, these terms can become extremely large, while for shallow neural networks, they remain relatively small. Finding a proper way to describe and estimate this difference is challenging and may require defining the VC-dimension of DeepONet [42, 2, 1]. We leave this as future work.
5 CONCLUSION
In this paper, we estimated the generalization error of DeepONet in Sobolev training, demonstrating that a deep structure can provide significant benefits at the approximation level, specifically in the branch network. Additionally, we showed that the classical DeepONet [24] still achieves strong approximation capabilities in Sobolev training, meaning it can approximate not only the solution itself but also the derivatives of the solution when applied to solving PDEs. This work fills a theoretical gap by providing error estimates for a wide range of physics-informed machine learning models and applications. Several open questions remain for future work. First, while our results show that a deep structure improves the approximation rate of DeepONet, training deep networks can be challenging. Thus, there is still an open question regarding strategies for choosing between deep and shallow neural networks. In [38], this problem is addressed for function learning, but the corresponding question for operator learning remains unexplored. Furthermore, this paper focuses exclusively on DeepONet, and whether similar results can be obtained for other network architectures remains an open question.
References
- Bartlett et al., [2019] Bartlett, P., Harvey, N., Liaw, C., and Mehrabian, A. (2019). Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. The Journal of Machine Learning Research, 20(1):2285–2301.
- Bartlett et al., [1998] Bartlett, P., Maiorov, V., and Meir, R. (1998). Almost linear VC dimension bounds for piecewise polynomial networks. Advances in neural information processing systems, 11.
- Brenner et al., [2008] Brenner, S., Scott, L., and Scott, L. (2008). The mathematical theory of finite element methods, volume 3. Springer.
- Canuto and Quarteroni, [1982] Canuto, C. and Quarteroni, A. (1982). Approximation results for orthogonal polynomials in Sobolev spaces. Mathematics of Computation, 38(157):67–86.
- Chen et al., [2022] Chen, J., Chi, X., Yang, Z., et al. (2022). Bridging traditional and machine learning-based algorithms for solving PDEs: the random feature method. J Mach Learn, 1:268–98.
- Chen and Chen, [1993] Chen, T. and Chen, H. (1993). Approximations of continuous functionals by neural networks with application to dynamic systems. IEEE Transactions on Neural networks, 4(6):910–918.
- Chen and Chen, [1995] Chen, T. and Chen, H. (1995). Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks, 6(4):911–917.
- DeVore et al., [1989] DeVore, R., Howard, R., and Micchelli, C. (1989). Optimal nonlinear approximation. Manuscripta mathematica, 63:469–478.
- Dong and Wang, [2023] Dong, S. and Wang, Y. (2023). A method for computing inverse parametric PDE problems with random-weight neural networks. Journal of Computational Physics, 489:112263.
- E and Yu, [2018] E, W. and Yu, B. (2018). The Deep Ritz Method: A deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 6(1).
- Evans, [2022] Evans, L. (2022). Partial differential equations, volume 19. American Mathematical Society.
- Grisvard, [2011] Grisvard, P. (2011). Elliptic problems in nonsmooth domains. SIAM.
- Hao et al., [2024] Hao, W., Liu, X., and Yang, Y. (2024). Newton informed neural operator for computing multiple solutions of nonlinear partial differential equations. arXiv preprint arXiv:2405.14096.
- Heinonen, [2005] Heinonen, J. (2005). Lectures on Lipschitz analysis. Number 100. University of Jyväskylä.
- Hon and Yang, [2022] Hon, S. and Yang, H. (2022). Simultaneous neural network approximation for smooth functions. Neural Networks, 154:152–164.
- Kovachki et al., [2021] Kovachki, N., Lanthaler, S., and Mishra, S. (2021). On universal approximation and error bounds for Fourier neural operators. Journal of Machine Learning Research, 22(290):1–76.
- Lanthaler, [2023] Lanthaler, S. (2023). Operator learning with PCA-Net: upper and lower complexity bounds. Journal of Machine Learning Research, 24(318):1–67.
- Lanthaler et al., [2022] Lanthaler, S., Mishra, S., and Karniadakis, G. (2022). Error estimates for DeepONets: A deep learning framework in infinite dimensions. Transactions of Mathematics and Its Applications, 6(1):tnac001.
- Li et al., [2023] Li, W., Bazant, M., and Zhu, J. (2023). Phase-field DeepONet: Physics-informed deep operator neural network for fast simulations of pattern formation governed by gradient flows of free-energy functionals. Computer Methods in Applied Mechanics and Engineering, 416:116299.
- Li et al., [2020] Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., and Anandkumar, A. (2020). Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895.
- Lin et al., [2023] Lin, B., Mao, Z., Wang, Z., and Karniadakis, G. (2023). Operator learning enhanced physics-informed neural networks for solving partial differential equations characterized by sharp solutions. arXiv preprint arXiv:2310.19590.
- Liu et al., [2022] Liu, H., Yang, H., Chen, M., Zhao, T., and Liao, W. (2022). Deep nonparametric estimation of operators between infinite dimensional spaces. arXiv preprint arXiv:2201.00217.
- [23] Lu, J., Shen, Z., Yang, H., and Zhang, S. (2021a). Deep network approximation for smooth functions. SIAM Journal on Mathematical Analysis, 53(5):5465–5506.
- [24] Lu, L., Jin, P., Pang, G., Zhang, Z., and Karniadakis, G. (2021b). Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature machine intelligence, 3(3):218–229.
- Lu et al., [2022] Lu, L., Meng, X., Cai, S., Mao, Z., Goswami, S., Zhang, Z., and Karniadakis, G. E. (2022). A comprehensive and fair comparison of two neural operators (with practical extensions) based on fair data. Computer Methods in Applied Mechanics and Engineering, 393:114778.
- Maday and Quarteroni, [1982] Maday, Y. and Quarteroni, A. (1982). Spectral and pseudo-spectral approximations of the Navier–Stokes equations. SIAM Journal on Numerical Analysis, 19(4):761–780.
- Mhaskar, [1996] Mhaskar, H. (1996). Neural networks for optimal approximation of smooth and analytic functions. Neural computation, 8(1):164–177.
- Opschoor et al., [2022] Opschoor, J., Schwab, C., and Zech, J. (2022). Exponential ReLU DNN expression of holomorphic maps in high dimension. Constructive Approximation, 55(1):537–582.
- Raissi et al., [2019] Raissi, M., Perdikaris, P., and Karniadakis, G. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707.
- Shen et al., [2019] Shen, Z., Yang, H., and Zhang, S. (2019). Nonlinear approximation via compositions. Neural Networks, 119:74–84.
- Shen et al., [2020] Shen, Z., Yang, H., and Zhang, S. (2020). Deep network approximation characterized by number of neurons. Communications in Computational Physics, 28(5).
- Shen et al., [2022] Shen, Z., Yang, H., and Zhang, S. (2022). Optimal approximation rate of ReLU networks in terms of width and depth. Journal de Mathématiques Pures et Appliquées, 157:101–135.
- Siegel, [2022] Siegel, J. (2022). Optimal approximation rates for deep ReLU neural networks on Sobolev spaces. arXiv preprint arXiv:2211.14400.
- Sirignano and Spiliopoulos, [2018] Sirignano, J. and Spiliopoulos, K. (2018). DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339–1364.
- Song et al., [2023] Song, L., Liu, Y., Fan, J., and Zhou, D. (2023). Approximation of smooth functionals using deep relu networks. Neural Networks, 166:424–436.
- Sun et al., [2024] Sun, J., Dong, S., and Wang, F. (2024). Local randomized neural networks with discontinuous galerkin methods for partial differential equations. Journal of Computational and Applied Mathematics, 445:115830.
- Wang et al., [2021] Wang, S., Wang, H., and Perdikaris, P. (2021). Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Science Advances, 7(40):eabi8605.
- Yang and He, [2024] Yang, Y. and He, J. (2024). Deeper or wider: A perspective from optimal generalization error with sobolev loss. In Forty-first International Conference on Machine Learning.
- Yang and Lu, [2024] Yang, Y. and Lu, Y. (2024). Near-optimal deep neural network approximation for Korobov functions with respect to Lp and H1 norms. Neural Networks, page 106702.
- [40] Yang, Y., Wu, Y., Yang, H., and Xiang, Y. (2023a). Nearly optimal approximation rates for deep super ReLU networks on Sobolev spaces. arXiv preprint arXiv:2310.10766.
- Yang and Xiang, [2022] Yang, Y. and Xiang, Y. (2022). Approximation of functionals by neural network without curse of dimensionality. arXiv preprint arXiv:2205.14421.
- [42] Yang, Y., Yang, H., and Xiang, Y. (2023b). Nearly optimal VC-dimension and pseudo-dimension bounds for deep neural network derivatives. In Thirty-seventh Conference on Neural Information Processing Systems.
- Yang et al., [2024] Yang, Z., Huang, S., Feng, H., and Zhou, D.-X. (2024). Spherical analysis of learning nonlinear functionals.
- Yarotsky, [2017] Yarotsky, D. (2017). Error bounds for approximations with deep ReLU networks. Neural Networks, 94:103–114.
- Yarotsky and Zhevnerchuk, [2020] Yarotsky, D. and Zhevnerchuk, A. (2020). The phase diagram of approximation rates for deep neural networks. Advances in neural information processing systems, 33:13005–13015.
Appendix A PROOFS OF APPROXIMATION ERROR
A.1 Proof of Proposition 1
A.1.1 Preliminaries
In this part, we collect the lemmas and definitions needed in the proof of Proposition 1. First, we collect the lemmas and definitions related to the Bramble–Hilbert Lemma.
Definition 1 (Sobolev semi-norm [11]).
Let $n \in \mathbb{N}$ and $1 \le p \le \infty$. Then we define the Sobolev semi-norm $|f|_{W^{n,p}(\Omega)} := \big( \sum_{|\boldsymbol{\alpha}| = n} \| D^{\boldsymbol{\alpha}} f \|_{L^p(\Omega)}^p \big)^{1/p}$ if $p < \infty$, and $|f|_{W^{n,\infty}(\Omega)} := \max_{|\boldsymbol{\alpha}| = n} \| D^{\boldsymbol{\alpha}} f \|_{L^\infty(\Omega)}$. Furthermore, for $p = 2$, we define $|f|_{H^n(\Omega)} := |f|_{W^{n,2}(\Omega)}$.
Lemma 4.
Let and , and such that for the ball which is a compact subset of . The corresponding Taylor polynomial of order of averaged over can be read as
Furthermore,
with , is a smooth function and
(29) |
where .
Definition 2.
Let . Then is called star-shaped with respect to if
Definition 3.
Let be bounded, and define
Then we define
the chunkiness parameter of if .
Lemma 5 ([3]).
Let be open and bounded, and such that is star-shaped with respect to , and . Moreover, let , and denote by the chunkiness parameter of . Then there is a constant such that for all
where denotes the Taylor polynomial of order of averaged over and .
Lastly, we list a few basic lemmas of neural networks repeatedly applied in our main analysis.
Lemma 6 ([15]).
The following basic lemmas of neural networks hold:
(i) can be realized exactly by a neural network with one hidden layer and four neurons.
(ii) Assume for . For any such that , there exists a neural network with the width and depth such that
for any .
A.1.2 Partition of Unity
We are going to divide the domain into several parts and approximate locally. First, we define the following:
Definition 4.
Given , and for any , we define
where
Next, we define the partition of unity based on :
Definition 5.
Define as follows:
(30) |
Definition 6.
Given , then we define two functions in :
(31) |
Then for any , we define
(32) |
for any .
Proposition 4.
Given , defined in Definition 6 satisfies:
(i): , and for any .
(ii): is a partition of unity with defined in Definition 4.
Both properties can be verified directly, and the proof is omitted here.
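Since Definition 6 is stated abstractly above, the following sketch (our construction, under the assumption that the bump functions are trapezoids built from ReLU ramps) shows one way such a partition of unity can be realized exactly by one-hidden-layer ReLU networks, and verifies numerically that the bumps sum to one on $[0, 1]$.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def trapezoid(x, a, b, delta):
    """Piecewise-linear bump: 1 on [a, b], 0 outside [a - delta, b + delta],
    written with four ReLU ramps, i.e., a one-hidden-layer ReLU network."""
    rise = (relu(x - (a - delta)) - relu(x - a)) / delta
    fall = (relu(x - b) - relu(x - (b + delta))) / delta
    return rise - fall

# Overlapping cover of [0, 1] by M subintervals; the fall of each bump matches
# the rise of the next one, so the bumps sum to one on the whole interval.
M, delta = 4, 0.05
breaks = np.linspace(0.0, 1.0, M + 1)
x = np.linspace(0.0, 1.0, 1001)
bumps = [trapezoid(x, breaks[i],
                   1.0 if i == M - 1 else breaks[i + 1] - delta, delta)
         for i in range(M)]
print("max deviation from 1:", np.max(np.abs(np.sum(bumps, axis=0) - 1.0)))
```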
A.1.3 Proof of Proposition 1
Proof.
For any , set , and establish the partition as defined in Definition 4. Based on Lemma 5, we know that in each , there exists a function such that
where
(33) |
and
(34) |
Here, is a subset of satisfying the properties in Lemma 5, with
(35) |
We define the partition of unity as shown in Definition 6, and then construct
We now estimate the error:
(36) |
The last step follows from the QM-AM inequality.
The remaining task is to use -NNs to represent . Notice that
Based on Lemma 6, we know that can be represented by a neural network with width and depth . The function is a neural network with width 3 and one hidden layer. Therefore, is a neural network with width and depth . Consequently, can be represented by a neural network with depth and width .
∎
A.2 Proof of Proposition 2
A.2.1 Pseudo-spectral projection
The detailed definition of the pseudo-spectral projection is provided in [26, 4]. For readability, we restate the definition here.
First, we define the Fourier basis of as
where . We then define a bilinear form in based on the grid points
given by
(37) |
The pseudo-spectral projection is defined as follows for any :
Therefore, the pseudo-spectral projection can be divided into two parts. First, we define
(38) |
Then, we define such that
(39) |
where .
The approximation estimate of is shown by the following lemma:
Lemma 7 ([4]).
Let . For any and , the pseudo-spectral projection is well-defined. Furthermore, there exists a constant , such that the following approximation error estimate holds
for any .
A.2.2 Proof of Proposition 2
Proof.
By defining and as in (38) and (39), and using Lemma 7, we obtain the following:
(40) |
where is a constant independent of . Given that , by Sobolev embedding, we have:
(41) |
Thus, we conclude:
(42) |
where is a constant independent of . Next, we show that is a Lipschitz continuous map . For any , we have:
(43) |
where the last inequality follows from the Cauchy–Schwarz inequality. Therefore, we have:
(44) |
where
(45) |
∎
A.3 Proof of Theorem 1
Proof.
Based on Proposition 2, we also have:
(47) |
for . Since and for some and , we have:
(48) |
Thus, we obtain:
(49) |
Similarly, based on Proposition 3, we have:
(50) |
where is independent of .
Finally, by combining the three parts, we obtain the desired result. ∎
Appendix B PROOFS OF GENERALIZATION ERROR
To enhance the readability of the supplementary material, we recall the assumptions required here:
Assumption 3.
(i) Boundedness: For any neural network with bounded parameters, characterized by a bound and dimension , there exists a function for such that
for all , and there exist constants , such that
(51) |
(ii) Lipschitz continuity: There exists a function , such that
(52) |
for all , and
for the same constants as in Eq. (51).
Lemma 8 ([18]).
The -covering number of , , satisfies
for some constant , independent of , , and .
Proof of Theorem 2.
Step 1: To begin with, we introduce a new term called the middle term, denoted as , defined as follows:
This term represents the limit case of as the number of samples in the domain of the output space tends to infinity ().
Then the error can be divided into two parts:
(54) |
Step 2: For , this is the classical generalization error analysis, and the result can be obtained from [40]. We omit the details of this part, which can be expressed as
(55) |
where is independent of the number of parameters and the sample size . In the following steps, we are going to estimate , which is the error that comes from the sampling of the input space of the operator.
Step 3: Denote
We first estimate the gap between and for any bounded parameters . Due to Assumption 3 (i) and (ii), we have that
(56) |
Step 4: Based on Step 3, we are going to estimate
by estimating the covering numbers of the relevant spaces.
Let be a -covering of , i.e., for any in , there exists with . Then we have
(57) |
For , it can be approximated by
(58) |
For , by applying the result in [18], we know
Step 5: Now we estimate .
Due to Assumption 3 and a direct calculation, we have that
for constants , depending only on the measure , the constant appearing in (53), and the constant appearing in the upper bound (51). For example,
(59) |
Next, let . Then,
and thus we conclude that
On the other hand, we have
Increasing the constant , if necessary, we can further estimate
where depends on and the constant appearing in (51), but is independent of and . We can express this dependence in the form , as the constants and depend on the and the upper bound on .
Therefore,
(60) |
∎