Deep Ritz method for the spectral fractional Laplacian equation using the Caffarelli-Silvestre extension
Abstract
In this paper, we propose a novel method for solving high-dimensional spectral fractional Laplacian equations. Using the Caffarelli-Silvestre extension, the $d$-dimensional spectral fractional equation is reformulated as a regular partial differential equation of dimension $d+1$. We transform the extended equation into a minimal Ritz energy functional problem and search for its minimizer in a special class of deep neural networks. Moreover, based on the approximation property of networks, we establish estimates on the error made by the deep Ritz method. Numerical results are reported to demonstrate the effectiveness of the proposed method for solving fractional Laplacian equations up to ten dimensions. Technically, in this method, we design a special network-based structure to adapt to the singularity and exponential decay of the true solution. Also, a hybrid integration technique combining the Monte Carlo method and the sinc quadrature is developed to compute the loss function with higher accuracy.
keywords: Ritz method; Deep learning; Fractional Laplacian; Caffarelli-Silvestre extension; Singularity
AMS: 65N15; 65N30; 68T07; 41A25
1 Introduction
As a nonlocal generalization of the Laplacian $-\Delta$, the spectral fractional Laplacian $(-\Delta)^s$ with a fractional order $s \in (0,1)$ arises in many areas of applications, such as anomalous diffusion [10, 30], turbulent flows [26], Lévy processes [16], quantum mechanics [18], finance [29, 12] and pollutant transport [31]. In this paper, we develop a network-based Ritz method for solving fractional Laplacian equations using the Caffarelli-Silvestre extension.
Let $d$ be the dimension of the problem and $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$. Also, suppose $s \in (0,1)$ and $f$ is a function defined in $\Omega$. We consider the following spectral fractional Laplacian equation with homogeneous Dirichlet condition,
$$(-\Delta)^s u = f \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega, \tag{1}$$
where $(-\Delta)^s$ is defined through the spectral decomposition of $-\Delta$ with the same boundary conditions. More precisely, we suppose the countable set $\{(\lambda_k, \varphi_k)\}_{k=1}^{\infty}$ consists of all the eigenvalues and orthonormal eigenfunctions of the following problem,
$$-\Delta \varphi_k = \lambda_k \varphi_k \quad \text{in } \Omega, \qquad \varphi_k = 0 \quad \text{on } \partial\Omega, \qquad (\varphi_k, \varphi_l) = \delta_{kl}, \tag{2}$$
where $(\cdot,\cdot)$ is the standard inner product in $L^2(\Omega)$. Then, for any $u$ admitting the expansion $u = \sum_{k=1}^{\infty} (u, \varphi_k)\varphi_k$,
$$(-\Delta)^s u := \sum_{k=1}^{\infty} \lambda_k^s\, (u, \varphi_k)\, \varphi_k. \tag{3}$$
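As a concrete illustration of the spectral definition (3), the following minimal sketch solves (1) in one dimension on $\Omega = (0,1)$, where the eigenpairs of $-\Delta$ are known in closed form ($\lambda_k = (k\pi)^2$, $\varphi_k = \sqrt{2}\sin(k\pi x)$); the truncation level, grid, and right-hand side are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def spectral_fractional_solve_1d(f, s, n_modes=200, n_grid=1000):
    """Solve (-Delta)^s u = f on (0,1) with u(0)=u(1)=0 via the
    spectral definition (3), truncated to n_modes eigenpairs."""
    x = np.linspace(0.0, 1.0, n_grid + 1)
    k = np.arange(1, n_modes + 1)
    lam = (k * np.pi) ** 2                                 # eigenvalues of -Delta
    phi = np.sqrt(2.0) * np.sin(np.outer(x, k * np.pi))    # orthonormal eigenfunctions
    fx = f(x)
    # L^2 inner products (f, phi_k) by the trapezoidal rule
    f_hat = np.trapz(fx[:, None] * phi, x, axis=0)
    # invert (3): u = sum_k lambda_k^{-s} (f, phi_k) phi_k
    u = phi @ (lam ** (-s) * f_hat)
    return x, u

if __name__ == "__main__":
    s = 0.6
    f = lambda x: np.sin(np.pi * x)
    x, u = spectral_fractional_solve_1d(f, s)
    # exact solution for this f: u = pi^{-2s} sin(pi x)
    err = np.max(np.abs(u - np.pi ** (-2 * s) * np.sin(np.pi * x)))
    print(f"max error: {err:.2e}")
```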
We remark that another definition of the fractional Laplacian is formulated by integrals with non-local structures, and these two definitions do not coincide. It is difficult to solve fractional Laplacian equations of either definition directly using numerical methods for regular differential equations (e.g., the finite difference method or the finite element method) due to the non-local property of the fractional operator [5, 1]. Instead, one effective approach is to use the Caffarelli-Silvestre extension [7]. Specifically, let us introduce a scalar variable $y \in (0,\infty)$ and consider the following $(d+1)$-dimensional problem
$$\begin{cases} -\nabla\cdot\big(y^{\alpha}\,\nabla U(x,y)\big) = 0, & (x,y) \in \mathcal{C} := \Omega \times (0,\infty), \\ U(x,y) = 0, & (x,y) \in \partial\Omega \times (0,\infty), \\ \displaystyle\lim_{y \to 0^+} -y^{\alpha}\, \partial_y U(x,y) = d_s\, f(x), & x \in \Omega, \end{cases} \tag{4}$$
where $\alpha := 1 - 2s$ and $d_s > 0$ is a normalization constant depending only on $s$. Suppose $U$ solves (4); then $u(\cdot) := U(\cdot, 0)$ is a solution of (1) [24]. Consequently, one can solve the extended problem (4) with regular derivatives to avoid addressing spectral fractional differential operators, with the extra cost that (i) the dimension is increased from $d$ to $d+1$; (ii) the domain is extended from the bounded $\Omega$ to the unbounded cylinder $\mathcal{C} = \Omega \times (0,\infty)$. Several methods have been proposed for (4), such as the tensor product finite elements [24] and the enriched spectral method using Laguerre functions [11]. We remark that the Caffarelli-Silvestre extension is exclusively for the spectral fractional Laplacian and not for the integral fractional Laplacian. Moreover, the extension technique carries over to more general fractional symmetric elliptic operators of the form $-\nabla\cdot(A(x)\nabla u) + c(x)u$ with $A$ being symmetric and positive definite and $c$ being positive.
However, conventional linear structures such as finite elements and polynomials are usually incapable of high-dimensional approximation in practice. For example, suppose a tensor product linear structure has $M$ basis functions in each dimension; then the total degree of freedom is $M^d$, which is a huge number even if $d$ is only moderately large. Such a curse of dimensionality prevents one from using linear structures in high-dimensional problems. For the spectral fractional Laplacian, most existing methods based on the Caffarelli-Silvestre extension solve numerical examples of dimension at most two, mainly due to the limitation of storage. Our primary target is to solve physically relevant problems that appear in three or higher dimensional situations.
In recent years, deep neural networks (DNNs) have been widely studied and utilized in applied problems. As a composition of simple neural functions, a DNN has parameters nonlinearly arrayed in the network structure. For a fully-connected DNN with depth $L$, width $W$ and input dimension $d$, the total number of parameters is of $O(LW^2 + Wd)$. Therefore, the degree of freedom increases only linearly with the dimension, and DNNs are capable of dealing with high-dimensional approximation in practice. Theoretically, it is shown that DNNs have decent approximation properties for particular function spaces (e.g., the Barron space). The seminal work of Barron [2, 3] deduces $L^2$-norm and $L^\infty$-norm approximations of two-layer sigmoid networks. Recent work [17, 14, 15, 13, 27, 28, 9, 19] considers more variants of the network-based approximation for Barron-type spaces. Generally, given a Barron function $f$, there exists a two-layer neural network $f_W$ with width $W$ and commonly-used activations such that
$$\|f - f_W\| \le \frac{C\,\|f\|_{\mathcal{B}}}{\sqrt{W}}, \tag{5}$$
where $\|f\|_{\mathcal{B}}$ is the Barron norm of $f$, $C$ is a constant independent of the dimension, and $\|\cdot\|$ can be the $L^2$, $L^\infty$ or $H^1$ norm under different hypotheses. It is worth mentioning that the above error bound is independent of the dimension of the input variable; hence the network-based approximation can overcome the curse of dimensionality.
In this paper, we solve (4) by a Ritz method in which DNNs are employed to approximate the solution. More precisely, we reformulate (4) as a minimal Ritz energy functional problem and characterize the Sobolev space of the weak solution. Next, as a subset of the solution space, a class of DNNs is taken as the hypothesis space of the minimization. We design a special network structure for the DNN class such that (i) it satisfies the homogeneous boundary condition on $\partial\Omega \times (0,\infty)$ in (4); (ii) it decays to zero exponentially as $y \to \infty$; and (iii) it has a singular component that behaves as $y^{2s+2k}$ for integers $k \ge 0$ at $y = 0$. Note that the second and third properties mentioned above are also satisfied by the true solution. Consequently, the special DNNs have better approximation performance than generic DNNs, which is also observed in our numerical experiments.
Theoretically, under a Barron-type framework, we investigate the approximation error between the special DNN class and the solution space under the Sobolev norm, which has a form similar to (5). Based on that, we estimate the solution error of the proposed Ritz method using the special DNN class, assuming that the true solution has components in the Barron space. The final solution error is of $O(W^{-1/2})$ in terms of the network width $W$, which is consistent with the approximation error. We remark that the error bound of our method is advantageous over those of the existing methods [24, 11] using finite elements or Laguerre functions if the dimension is moderately large, since the error order is independent of the dimension. Also, the error order is consistent with that of the deep Ritz method for regular Laplacian equations [23].
In the implementation, a combination of stochastic and deterministic methods is employed to compute the integrals in the energy functional. Specifically, we utilize the quasi-Monte Carlo method and the sinc quadrature rule [21] to evaluate the integrals in terms of $x$ and $y$, respectively. For the former, due to the potentially high dimension of $x$, Monte Carlo-type methods are effective and easy to implement. For the latter, although the integrand in terms of $y$ is one-dimensional, it carries a singular weight near $y = 0$; the sinc quadrature is highly accurate for integrals with fractional powers and is therefore preferred here. By numerical experiments, we demonstrate that our method can solve model problems up to dimension ten with the desired accuracy. To the best of our knowledge, this is the first attempt to solve high-dimensional fractional Laplacian equations by deep learning methods.
Overall, the highlights of our work can be summarized as follows:
• Development of a special approximation structure based on generic DNNs according to the special properties of the true solution;
• Combination of the stochastic Monte Carlo method for high dimensions and the deterministic sinc quadrature for high accuracy in the learning process;
• Simulation of 10-D fractional Laplacian equations with satisfactory relative errors.
The rest of the paper is organized as follows. In Section 2, we reformulate the problem (4) as the minimization of an energy functional and show their equivalence. In Section 3, the fully connected neural networks are introduced. We characterize the special structures of the hypothesis space and discuss its approximation property. In Section 4, we derive the error estimate for the proposed method. Numerical experiments are presented to show the effectiveness of our method in Section 5. Finally, some concluding remarks are given in Section 6.
2 Minimization of Energy Functional
We solve the regular partial differential equation (4) under the framework of the Ritz method. The equation can be transformed into an equivalent minimal functional problem, and we look for Sobolev weak solutions instead of classical solutions. Alternatively, one can also solve (4) using the Galerkin method by introducing appropriate test spaces as in [24, 11]. Since learning-based methods find solutions via optimization, the Ritz method provides a natural formulation for this purpose.
2.1 The space of weak solutions
Let $D$ be any region and $w$ be a positive weight function on $D$. We define the weighted space $L^2_w(D)$ as
$$L^2_w(D) := \left\{ v : \int_D w\,|v|^2 \,\mathrm{d}z < \infty \right\}, \tag{1}$$
equipped with the inner product
$$(u, v)_{L^2_w(D)} := \int_D w\, u\, v \,\mathrm{d}z, \tag{2}$$
and the induced norm
$$\|v\|_{L^2_w(D)} := (v, v)_{L^2_w(D)}^{1/2}. \tag{3}$$
The weight $w$ is omitted from the notations if $w \equiv 1$.
We also define the weighted Sobolev space $H^1_w(D)$ as
$$H^1_w(D) := \left\{ v \in L^2_w(D) : |\nabla v| \in L^2_w(D) \right\}, \tag{4}$$
equipped with the norm
$$\|v\|_{H^1_w(D)} := \left( \|v\|_{L^2_w(D)}^2 + \|\nabla v\|_{L^2_w(D)}^2 \right)^{1/2}. \tag{5}$$
It is shown in [22] that the solution of the extended problem (4) has a desirable property: it converges exponentially to zero as $y \to \infty$. Therefore, we can define the solution space as
$$\mathcal{V} := \left\{ v \in H^1_{y^\alpha}(\mathcal{C}) : v = 0 \ \text{on } \partial\Omega \times (0,\infty) \right\}, \tag{6}$$
with the norm
$$\|v\|_{\mathcal{V}} := \left( \int_{\mathcal{C}} y^\alpha\, |\nabla v|^2 \,\mathrm{d}x\,\mathrm{d}y \right)^{1/2}. \tag{7}$$
Denote the trace on $\Omega \times \{0\}$ of every $v \in \mathcal{V}$ by
$$\operatorname{tr} v := v(\cdot, 0). \tag{8}$$
Moreover, for column vectors or vector-valued functions, we use $|\cdot|$ to denote their Euclidean norm.
2.2 Minimal energy functional
We aim to characterize the solution of (4) as a minimizer of a corresponding energy functional. For this, we define the functional
$$I(v) := \frac{1}{2} \int_{\mathcal{C}} y^\alpha\, |\nabla v|^2 \,\mathrm{d}x\,\mathrm{d}y - d_s \int_\Omega f(x)\, v(x, 0) \,\mathrm{d}x, \qquad v \in \mathcal{V}. \tag{9}$$
We have the following result (Theorem 1): a function $U \in \mathcal{V}$ solves the extended problem (4) if and only if it satisfies the minimization property (10), i.e., it minimizes $I$ over $\mathcal{V}$.
Proof.
To prove (10), fix an arbitrary $v \in \mathcal{V}$. Then, using the fact that $U$ solves (4) and integration by parts, we have
(11) |
Note that the left-hand side is equal to zero since $U$ solves (4). The second term on the left is
(12) |
Therefore, it follows from (11) that
(13) |
which implies
(14) |
Using the Cauchy–Schwarz inequality, this leads to
(15) |
On the other hand, suppose (10) holds. Fix any $v \in \mathcal{V}$ and write
(16) |
Note
(17) |
Since, for each such $v$, the function $\varepsilon \mapsto I(U + \varepsilon v)$ takes its minimum at $\varepsilon = 0$, we thus have
(18) |
Using integration by parts we have
(19) |
In particular, (19) holds for all $v$ that are compactly supported in $\mathcal{C}$, which implies
(20) |
leading to the interior equation of (4) in $\mathcal{C}$. Thus, by (19),
(21) |
In particular, the trace $v(\cdot,0)$ ranges over a dense subset of functions on $\Omega$, which leads to the boundary flux condition of (4). ∎
3 Neural Network Approximation
In the numerical computation, one aims to introduce a function set with a finite degree of freedom to approximate the solution space $\mathcal{V}$ and to minimize $I$ over this set of functions. In many physically relevant problems, it is required to address $d \ge 3$, so that the dimension of the extended problem (4) is no less than 4. Potentially high dimensions impede the usage of conventional linear structures. However, as a nonlinear structure, DNNs can approximate high-dimensional functions with considerably fewer degrees of freedom. This inspires us to use classes of DNNs to approximate $\mathcal{V}$, especially when $d$ is large.
3.1 Fully connected neural network
In our method, we employ the fully connected neural network (FNN), which is one of the most common neural networks in deep learning. Mathematically speaking, let $\sigma$ be an activation function that is applied entry-wise to a vector to obtain another vector of the same size. Let $L \in \mathbb{N}^+$ and $n_\ell \in \mathbb{N}^+$ for $\ell = 0, 1, \dots, L$; an FNN $\phi$ is the composition of $L$ simple nonlinear functions as follows
$$\phi(z; \theta) := W_L\, \sigma\big( W_{L-1}\, \sigma( \cdots \sigma(W_1 z + b_1) \cdots ) + b_{L-1} \big) + b_L, \tag{1}$$
where $z \in \mathbb{R}^{n_0}$ with $n_0 = d+1$ being the input dimension, and $W_\ell \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$, $b_\ell \in \mathbb{R}^{n_\ell}$ for $\ell = 1, \dots, L$ (with $n_L = 1$). Here $n_\ell$ is called the width of the $\ell$-th layer, $L$ is called the depth of the FNN, and $\theta := \{W_\ell, b_\ell\}_{\ell=1}^{L}$ is the set of all parameters in $\phi$ that determines the underlying neural network. Common types of activation functions include the sigmoid function $1/(1+e^{-t})$ and the rectified linear unit (ReLU) $\max(t, 0)$. We remark that, when solving $k$-th order differential equations, many existing network-based methods use the ReLU$^{k+1}$ activation function $\max(t, 0)^{k+1}$, so that their networks are $C^k$ functions to which the differential operators can be applied. In the minimization (22), only $H^1$ regularity is required, and therefore ReLU networks suffice.
Denote by $|\theta|$ the total number of parameters; then it is clear that $|\theta| = O(LW^2 + W n_0)$ when all hidden widths equal $W$, which grows only linearly with the input dimension. Comparatively, the degree of freedom of linear structures such as finite elements and tensor product polynomials increases exponentially with the dimension. Hence FNNs are more practicable in high-dimensional approximations. For simplicity, we consider the architecture with $n_\ell = W$ for all hidden layers and denote by $\mathcal{F}_{L,W,\sigma}$ the set consisting of all FNNs with depth $L$, width $W$ and activation function $\sigma$. In the following passage, all functions built from FNNs will be marked with the superscript $\mathcal{N}$.
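For concreteness, a minimal PyTorch sketch of the FNN (1) with uniform width and a chosen activation is given below; the class name and the default depth and width are ours and serve only as an example.

```python
import torch
import torch.nn as nn

class FNN(nn.Module):
    """Fully connected network phi(z; theta) with depth L and uniform width W."""
    def __init__(self, dim_in, width=50, depth=4, activation=nn.ReLU()):
        super().__init__()
        layers = [nn.Linear(dim_in, width), activation]
        for _ in range(depth - 2):            # hidden layers
            layers += [nn.Linear(width, width), activation]
        layers.append(nn.Linear(width, 1))    # scalar output
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

# example: a network taking (x, y) with x in R^d and y a scalar
d = 10
phi = FNN(dim_in=d + 1)
z = torch.rand(128, d + 1)                    # a batch of 128 input points
print(phi(z).shape)                           # torch.Size([128, 1])
```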
3.2 Special structures of the approximate class
Recent work [20, 25] indicates that deep FNNs can approximate smooth functions in the $L^\infty$ norm within any desired accuracy as long as the depth or width is large enough. The approximation is more accurate if the target function has higher regularity. However, it is shown in [8, 11] that the solution of (4) has a singularity at $y = 0$ which behaves as $y^{2s}$. Therefore, it is not appropriate to naively use the class of generic FNNs. Instead, we aim to develop a special structure based on FNNs for the approximate class.
In the enriched spectral method [11], the solution of (4) is approximated by a structure consisting of two parts. One part is a linear combination of smooth basis functions, which approximates the regular component of the solution. The other part is a linear tensor product combination of smooth basis functions and a sequence of artificial singular terms $y^{2s+2k}$, $k = 0, 1, \dots$, which is for the singular component of the solution. Following this idea, we use $u^{\mathcal{N}}$ to denote any function in the approximate class and build its structure as the combination of two parts,
$$u^{\mathcal{N}}(x, y) = u_1(x, y) + y^{2s}\, u_2(x, y), \tag{2}$$
where $u_1$, $u_2$ are FNN-based smooth functions and the factor $y^{2s}$ is introduced to adapt to the singularity at $y = 0$.
Moreover, since the true solution of the extended problem (4) converges to zero as $y \to \infty$, the functions in the approximate class should also preserve this property. To realize it, we introduce exponential terms in $y$ into the structure of $u^{\mathcal{N}}$. Specifically, we let
$$u_1(x, y) = e^{-\gamma_1 y}\, w_1(x, y), \qquad u_2(x, y) = e^{-\gamma_2 y}\, w_2(x, y), \tag{3}$$
where $w_1$, $w_2$ are FNN-based functions and $\gamma_1, \gamma_2 > 0$ are two auxiliary scalar parameters ensuring that $u_1$ and $u_2$ converge to zero exponentially as $y \to \infty$.
In the end, as a subset of $\mathcal{V}$, the approximate class should also consist of functions satisfying the boundary condition; namely, $u^{\mathcal{N}} = 0$ on $\partial\Omega \times (0,\infty)$. We achieve this by setting
$$w_1(x, y) = h(x)\, \phi_1(x, y), \qquad w_2(x, y) = h(x)\, \phi_2(x, y), \tag{4}$$
where $\phi_1$, $\phi_2$ are generic FNNs and $h$ is a smooth function constructed particularly to satisfy $h = 0$ on $\partial\Omega$. For example, if $\Omega$ is a hypercube $(a_1, b_1) \times \cdots \times (a_d, b_d)$, then $h$ can be chosen as $h(x) = \prod_{i=1}^{d} (x_i - a_i)(b_i - x_i)$; if $\Omega$ has a boundary characterized by a level set, say $\partial\Omega = \{x : g(x) = 0\}$ for some continuous function $g$, then $h$ can be chosen as $g$. Generally, we can set $h(x) = \rho\big(\operatorname{dist}(x, \partial\Omega)\big)$, where $\rho$ is a particular analytic function satisfying $\rho(0) = 0$ and $\operatorname{dist}(x, \partial\Omega)$ represents the distance between $x$ and $\partial\Omega$. In practice, we set $h$ as smooth as possible so that $w_1$ and $w_2$ preserve the same regularity as $\phi_1$ and $\phi_2$, respectively.
To sum up, we build the following special structure for the approximate class,
$$u^{\mathcal{N}}(x, y; \theta) = h(x)\left( e^{-\gamma_1 y}\, \phi_1(x, y) + y^{2s}\, e^{-\gamma_2 y}\, \phi_2(x, y) \right), \tag{5}$$
where $\theta$ is the set of all trainable parameters (the parameters of $\phi_1$, $\phi_2$ and the scalars $\gamma_1$, $\gamma_2$). We denote by $\mathcal{U}$ the class of all neural networks having the structure in (5), namely,
$$\mathcal{U} := \left\{ u^{\mathcal{N}}(\cdot, \cdot\,; \theta) \ \text{of the form (5)} \;:\; \phi_1, \phi_2 \in \mathcal{F}_{L,W,\sigma},\ \gamma_1, \gamma_2 > 0 \right\}. \tag{6}$$
It is clear that $\mathcal{U}$ is a subset of $\mathcal{V}$ as long as $\sigma$ is Lipschitz continuous and has a polynomial growth bound; namely, $|\sigma(t)| \le C(1 + |t|)^p$ for all $t$ with a constant $C$ and an integer $p$. In our Ritz method, $\mathcal{U}$ is taken as the approximate class of $\mathcal{V}$.
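The following PyTorch sketch assembles a network with the qualitative features of the special structure (5), assuming the hypercube domain $\Omega = (0,1)^d$ with boundary factor $h(x) = \prod_i x_i(1-x_i)$; the precise placement of the decay factors and the singular power $y^{2s}$ follows the reconstruction above and should be read as an illustrative sketch rather than the paper's reference implementation.

```python
import torch
import torch.nn as nn

def make_fnn(dim_in, width=50, depth=4):
    """Plain fully connected ReLU network mapping R^{dim_in} -> R."""
    layers = [nn.Linear(dim_in, width), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

class SpecialNet(nn.Module):
    """u(x,y) = h(x) * ( exp(-g1*y)*phi1(x,y) + y^{2s} * exp(-g2*y)*phi2(x,y) ):
    vanishes on the lateral boundary, decays exponentially in y,
    and carries a y^{2s}-type singular factor at y = 0."""
    def __init__(self, d, s, width=50, depth=4):
        super().__init__()
        self.s = s
        self.phi1 = make_fnn(d + 1, width, depth)
        self.phi2 = make_fnn(d + 1, width, depth)
        # trainable decay rates, initialized at 0.5 as in Section 5.1
        self.g1 = nn.Parameter(torch.tensor(0.5))
        self.g2 = nn.Parameter(torch.tensor(0.5))

    def forward(self, x, y):
        # boundary factor h(x) = prod_i x_i (1 - x_i) for Omega = (0,1)^d
        h = torch.prod(x * (1.0 - x), dim=1, keepdim=True)
        z = torch.cat([x, y], dim=1)
        smooth = torch.exp(-self.g1 * y) * self.phi1(z)
        singular = y.clamp(min=0.0) ** (2 * self.s) * torch.exp(-self.g2 * y) * self.phi2(z)
        return h * (smooth + singular)

# usage: d = 10, s = 0.5, a batch of 64 points in the cylinder
d, s = 10, 0.5
net = SpecialNet(d, s)
x, y = torch.rand(64, d), torch.rand(64, 1)
print(net(x, y).shape)    # torch.Size([64, 1])
```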
3.3 Approximation property
We will show that functions in $\mathcal{V}$ can be approximated by the special networks in $\mathcal{U}$ as the width $W \to \infty$. To illustrate the approximation property, we first introduce the Barron space and then derive the error bounds for the neural-network approximation, assuming that the target function has components in the Barron space. In this section, we specifically focus on FNNs with the ReLU activation and specify $L = 2$. For simplicity, we concatenate the variables $x$ and $y$ by writing $z := (x, y)$.
3.3.1 Barron space
Let us first quickly review the Barron space and norm. We will focus on the definition discussed in [15], which represents infinitely wide two-layer ReLU FNNs. Following Section 3.1, recall that the set of two-layer ReLU FNNs without output bias is given by
$$\mathcal{F}_{2,W,\mathrm{ReLU}} := \left\{ \sum_{i=1}^{W} a_i\, \mathrm{ReLU}\!\left(w_i^{\top} z + b_i\right) \;:\; a_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^{d+1} \right\}. \tag{7}$$
For a probability measure $\pi$ on the parameter space $\mathbb{R} \times \mathbb{R}^{d+1} \times \mathbb{R}$, we set the function
$$f_\pi(z) := \mathbb{E}_{(a, w, b) \sim \pi}\big[\, a\, \mathrm{ReLU}(w^{\top} z + b) \,\big], \tag{8}$$
provided this expression exists. For a function $f$, we use $\Pi_f$ to denote the set of all probability measures $\pi$ such that $f_\pi = f$ almost everywhere. Then the Barron norm is defined as
$$\|f\|_{\mathcal{B}} := \inf_{\pi \in \Pi_f}\ \mathbb{E}_{(a, w, b) \sim \pi}\big[\, |a|\,(|w| + |b|) \,\big]. \tag{9}$$
The infimum of the empty set is considered to be $+\infty$. The set of all functions with finite Barron norm is denoted by $\mathcal{B}$. It is shown in [15] that $\mathcal{B}$ equipped with the Barron norm is a Banach space, which is called the Barron space.
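To illustrate the representation (8) and the width-$W$ behavior in (5), the short sketch below draws neurons $(a, w, b)$ from a fixed probability measure $\pi$ (a standard Gaussian, chosen purely for illustration) and compares finite-width empirical averages against a very wide proxy for $f_\pi$; the roughly $O(W^{-1/2})$ decay of the error is the Monte Carlo rate underlying such bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_two_layer(z, W, rng):
    """Draw W neurons (a, w, b) i.i.d. from pi and return the empirical average
    (1/W) * sum_i a_i * ReLU(w_i^T z + b_i), a width-W estimator of f_pi in (8)."""
    a = rng.normal(size=W)
    w = rng.normal(size=(W, z.shape[-1]))
    b = rng.normal(size=W)
    pre = z @ w.T + b                         # shape (n_points, W)
    return (np.maximum(pre, 0.0) * a).mean(axis=1)

# Monte Carlo illustration of the O(1/sqrt(W)) behavior in (5)
z = rng.uniform(size=(200, 3))                # 200 evaluation points, dimension 3
f_ref = sample_two_layer(z, 20000, rng)       # very wide network as a proxy for f_pi
for W in (10, 100, 1000):
    errs = [np.sqrt(np.mean((sample_two_layer(z, W, rng) - f_ref) ** 2))
            for _ in range(5)]
    print(W, np.mean(errs))                   # error decays roughly like W^{-1/2}
```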
3.4 Error estimation
Let $U$ be a function in $\mathcal{V}$, and further assume that $U$ converges to zero exponentially as $y \to \infty$. In the error analysis, we make the following assumption that $U$ can be factorized explicitly into components vanishing on $\partial\Omega$ and decaying to zero exponentially as $y \to \infty$.
Assumption 3.1.
There exist functions and some positive number such that
(10) |
In particular, if Assumption 3.1 holds, we can normalize the factors appropriately. Assumption 3.1 is indeed satisfied in some situations. For example, when $f$ is taken as a single eigenfunction of $-\Delta$, the solution of (4) is the product of that eigenfunction and an explicit function of $y$, and a Taylor-series argument shows that it can be written in the form (10).
Now we investigate the approximation error between $\mathcal{V}$ and $\mathcal{U}$. It suffices to consider a special subset given by
(11) |
Note that in this subset only the parameters of $\phi_1$ and $\phi_2$ are free and trainable, while the decay rate is fixed. Clearly, it is a subset of $\mathcal{U}$. The following theorem and proof are adapted from the result in [23] for the deep Ritz method on a bounded domain.
Theorem 2.
Proof.
By the definition of the Barron norm, there exists some probability measure $\pi$ such that $f_\pi$ coincides with the target function a.e. and the associated expectation is bounded by its Barron norm. Then, for all $(a, w, b)$, using the boundedness of the domain and the Lipschitz continuity of the ReLU function, we have
(14) | |||||
Since , we have
(15) | |||||
Meanwhile, the remaining factor is bounded above since
(16) |
Combining (14),(15),(16) we have
(17) |
On the other hand, the mapping
is continuous and hence Bochner measurable. Also, (17) leads to
which implies the integral is a Bochner integral.
4 Ritz method and Error Estimation
The extended problem (4) can be solved practically by the Ritz method. Thanks to the approximation property of the neural network class discussed in Section 3.3, we can replace the hypothesis space in the minimization (22) with $\mathcal{U}$, obtaining
$$\min_{v \in \mathcal{U}}\ I(v). \tag{1}$$
Then the solution of (1) will be an approximation to the solution of (22). As in Section 3.3, we will estimate the solution error for the two-layer ReLU networks; namely, we consider the case that $L = 2$ and $\sigma = \mathrm{ReLU}$. The final error with respect to the original fractional Laplacian problem will thereafter be presented.
4.1 Error estimation
Let $U$ be a minimizer of (22); namely,
$$U = \operatorname*{arg\,min}_{v \in \mathcal{V}}\ I(v). \tag{2}$$
Given that Assumption 3.1 holds for $U$ with the corresponding factorization, let $U^{\mathcal{N}}$ be a minimizer of (1); namely,
$$U^{\mathcal{N}} = \operatorname*{arg\,min}_{v \in \mathcal{U}}\ I(v). \tag{3}$$
We first introduce the following Céa Lemma [22].
Proposition 3.
Let $H$ be a Hilbert space, $V \subseteq H$ any subset, and $a(\cdot,\cdot)$ a symmetric, continuous and coercive bilinear form with continuity constant $C_a$ and coercivity constant $c_a$. For a bounded linear functional $\ell$ on $H$, define the quadratic energy $E(v) := \frac{1}{2}a(v,v) - \ell(v)$ and denote its unique minimizer over $H$ by $u^*$. Then for every $v \in V$ it holds that
$$\frac{c_a}{2}\,\|u^* - v\|_H^2 \ \le\ E(v) - E(u^*) \ \le\ \frac{C_a}{2}\,\|u^* - v\|_H^2. \tag{4}$$
It is straightforward to show that the bilinear part of $I$ is a symmetric, continuous and coercive bilinear form on $\mathcal{V}$. Therefore, by Proposition 3 and Theorem 2, we have
(5) |
Proposition 4.
Recall that the solution of the original problem (1) is exactly the trace of the solution of the extended problem (4). Also, Theorem 1 shows the equivalence between the extended problem (4) and the minimization problem (22). Therefore, combining (5) and (6) leads to the following error estimate for the original solution.
Theorem 5.
(10) |
where $C$ is a constant depending only on the domain, the fractional order $s$, and the Barron norms of the components of the true solution.
For our network-based method, we obtain a solution error of order $O(W^{-1/2})$ provided that the true solution has a Barron component. This order is consistent with that of the deep Ritz method for solving regular Laplacian equations [23], proved under a similar Barron framework. Moreover, the proposed method can be compared with existing methods for the spectral fractional Laplacian [24, 11]. In the earlier work, finite elements or Laguerre functions are used to approximate the solution, with error orders of the form $O(N^{-r/d})$, where $r$ characterizes the regularity of the true solution, $N$ is the number of free parameters, and $d$ is the dimension. If $d$ is relatively low, say $d < 2r$, then $N^{-r/d} < N^{-1/2}$ and hence the earlier methods converge faster. Otherwise, the network-based method outperforms the existing methods in the error order. This fact implies that neural networks are advantageous over classical structures in high-dimensional approximation.
4.2 Implementation
In the proposed method, we solve the optimization (1), finding a minimizer of $I$ in the hypothesis space $\mathcal{U}$. Note that
$$I(v) = \frac{1}{2} \int_0^{\infty} \int_\Omega y^\alpha\, |\nabla v(x, y)|^2 \,\mathrm{d}x\,\mathrm{d}y - d_s \int_\Omega f(x)\, v(x, 0) \,\mathrm{d}x. \tag{11}$$
In practice, numerical quadrature is required for the integrals in (11). Since $d$ might be moderately large, for the integral in terms of $x$ over $\Omega$, one choice is to use a Monte Carlo-type quadrature. Specifically, we prescribe a set of $N$ quadrature nodes $\{x_n\}_{n=1}^{N}$ which are uniformly distributed in $\Omega$; then for each $y$,
$$\int_\Omega y^\alpha\, |\nabla v(x, y)|^2 \,\mathrm{d}x \ \approx\ \frac{|\Omega|}{N} \sum_{n=1}^{N} y^\alpha\, |\nabla v(x_n, y)|^2. \tag{12}$$
For the integral in terms of $y$ over $(0,\infty)$, a specific quadrature rule is needed due to the singular weight at $y = 0$. It is shown in [4, 6] that infinite integrals involving fractional powers can be effectively computed by the sinc quadrature. More precisely, we use the change of variable $y = e^t$ so that
$$\int_0^{\infty} \int_\Omega y^\alpha\, |\nabla v(x, y)|^2 \,\mathrm{d}x\,\mathrm{d}y = \int_{-\infty}^{\infty} e^{(\alpha+1)t} \int_\Omega |\nabla v(x, e^t)|^2 \,\mathrm{d}x\,\mathrm{d}t \tag{13}$$
for all $v$. Given a step size $k > 0$ and a positive integer $M$, let
$$t_j := j k, \qquad j = -M, \dots, M; \tag{14}$$
then (13) is evaluated at the quadrature nodes $\{t_j\}$ with uniform weights $k$, namely,
$$\int_{-\infty}^{\infty} e^{(\alpha+1)t} \int_\Omega |\nabla v(x, e^t)|^2 \,\mathrm{d}x\,\mathrm{d}t \ \approx\ k \sum_{j=-M}^{M} e^{(\alpha+1)t_j} \int_\Omega |\nabla v(x, e^{t_j})|^2 \,\mathrm{d}x. \tag{15}$$
Practically, combining (12) and (15) yields a fully discrete empirical loss functional $\hat{I}$, denoted as (16), and we solve the following optimization,
$$\min_{v \in \mathcal{U}}\ \hat{I}(v), \tag{17}$$
whose solution can be regarded as a practical numerical solution of (22).
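A possible PyTorch implementation of the hybrid quadrature is sketched below: quasi-Monte Carlo (Halton) nodes in $x$ as in (12) and sinc-type nodes in $y$ as in (14)-(15). The unit-cube domain, the constant $d_s = 1$, and the quadrature parameters are placeholders of ours; the sketch illustrates how the empirical loss can be assembled rather than reproducing the paper's exact loss (16).

```python
import torch
from scipy.stats import qmc

def empirical_loss(net, f, d, s, n_mc=4096, M=50, k=0.2, d_s=1.0):
    """Hybrid quadrature loss for I(v) on Omega = (0,1)^d:
    quasi-Monte Carlo in x (Halton) and sinc quadrature in y."""
    alpha = 1.0 - 2.0 * s
    # quasi-Monte Carlo nodes in Omega (|Omega| = 1 for the unit cube)
    x = torch.tensor(qmc.Halton(d, scramble=False).random(n_mc), dtype=torch.float32)

    # sinc quadrature in y: t_j = j*k, y_j = exp(t_j), weight k * exp((alpha+1)*t_j)
    j = torch.arange(-M, M + 1, dtype=torch.float32)
    t = j * k
    y_nodes = torch.exp(t)
    w_sinc = k * torch.exp((alpha + 1.0) * t)

    # gradient energy term: (1/2) * sum_j w_j * mean_n |grad v(x_n, y_j)|^2
    energy = 0.0
    for yj, wj in zip(y_nodes, w_sinc):
        xj = x.clone().requires_grad_(True)
        yj_col = torch.full((n_mc, 1), float(yj), requires_grad=True)
        u = net(xj, yj_col)
        grads = torch.autograd.grad(u.sum(), (xj, yj_col), create_graph=True)
        grad_sq = grads[0].pow(2).sum(dim=1) + grads[1].pow(2).squeeze(1)
        energy = energy + 0.5 * wj * grad_sq.mean()

    # boundary source term: d_s * mean_n f(x_n) v(x_n, 0)
    y0 = torch.zeros(n_mc, 1)
    source = d_s * (f(x) * net(x, y0).squeeze(1)).mean()
    return energy - source
```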
5 Numerical Experiments
5.1 The setting
The proposed method is tested by numerical experiments in this section. Deep learning techniques are utilized to solve the minimization (17). The overall setting is summarized as follows.
• Environment. The experiments are performed in a Python environment. The PyTorch library with the CUDA toolkit is utilized for the neural network implementation and GPU-based parallel computing. The codes can be run on an ordinary desktop.
• Optimizer and hyper-parameters. The network-based optimization (17) is solved by the stochastic gradient descent (SGD) optimizer from the PyTorch library. The SGD is run for 5000 epochs in total, with adaptively decaying learning rates; a condensed training sketch is given after this list.
• Network setting. The special network structure (5) is adopted. We choose the activation $\sigma$ as the ReLU function. The parameters in $\phi_1$ and $\phi_2$ are initialized by
(1)
and the decay parameters $\gamma_1$ and $\gamma_2$ are initialized as 0.5. All these parameters are trained in the learning process.
• Numerical quadrature. For the quadrature (12) over $\Omega$, we adopt the quasi-Monte Carlo method with the Halton sequence. For the lower-dimensional and higher-dimensional cases, the sample points used in the Monte Carlo quadrature are separated into 4 and 10 batches in the SGD, respectively. The sinc quadrature parameter is set to a fixed value.
• Testing set and error evaluation. We generate a testing set $T$ consisting of random points uniformly distributed in $\Omega$ for error evaluation. Suppose $u^{\mathcal{N}}$ is the neural network obtained by our method and $u$ is the true solution of the original fractional Laplacian problem (1); then the following relative error will be computed,
$$e := \left( \frac{\sum_{x \in T} |u^{\mathcal{N}}(x, 0) - u(x)|^2}{\sum_{x \in T} |u(x)|^2} \right)^{1/2}. \tag{2}$$
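A condensed training-and-evaluation sketch, referred to in the list above, is given here; it combines the SGD optimizer with a decaying learning-rate schedule and reports the relative error (2) on a random testing set. The schedule, the absence of mini-batching, and the exact-solution callback u_exact are illustrative placeholders rather than the paper's exact settings; empirical_loss and the network are taken from the earlier sketches.

```python
import torch

def train_and_test(net, f, u_exact, d, s, epochs=5000, lr=1e-3, n_test=10000):
    """Train the special network by SGD on the empirical loss and report
    the relative error (2) on a uniform testing set in (0,1)^d."""
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    # adaptively decaying learning rate (placeholder schedule)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1000, gamma=0.5)

    for epoch in range(epochs):
        opt.zero_grad()
        loss = empirical_loss(net, f, d, s)   # from the previous sketch
        loss.backward()
        opt.step()
        sched.step()

    # relative error on the trace y = 0 over a random testing set
    x_test = torch.rand(n_test, d)
    with torch.no_grad():
        u_pred = net(x_test, torch.zeros(n_test, 1)).squeeze(1)
    u_true = u_exact(x_test)
    rel_err = torch.linalg.norm(u_pred - u_true) / torch.linalg.norm(u_true)
    return rel_err.item()
```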
5.2 A model problem
In this example, we solve the following problem with an explicit solution to test the accuracy of the proposed method,
(3) |
whose true solution is known explicitly. We specify the boundary function $h$ accordingly in the network architecture (5).
5.2.1 Network size test
First, we solve the problem (3) using the special network (5) with various network sizes. The proposed method is implemented with several choices of the network depth $L$ and width $W$. The errors of the numerical solutions are listed in Table 1 together with the total running times. We also compute the numerical error orders with respect to $W$. Note that most of the running time is spent on training the networks, and very little on the testing process (computing the errors).
From the table, it is observed that using larger sizes improves the accuracy at the expense of extra running time. For a fixed width, the obtained error is reduced by more than half as the depth increases. For a fixed depth, the numerical order with respect to the width $W$ is roughly around the theoretical order $O(W^{-1/2})$ proved in Theorem 5, and the deviation is probably due to the stochasticity and the capability of the learning algorithm. Overall, the best-performing size pair is used in the following tests.
Table 1: Errors, numerical orders (with respect to the width), and running times of the proposed method for various network depths and widths.
5.2.2 Comparison of structures
Next, this problem is solved for several values of $s$. We test both the proposed special structure (5) and the following simple FNN structure for comparison,
$$u^{\mathcal{N}}(x, y; \theta) = h(x)\, e^{-y}\, \phi(x, y), \tag{4}$$
where $\phi$ is a generic FNN. Note that in (4) the exponential decay rate is fixed instead of being a trainable parameter, and no singular term is introduced. By this design, the comparison can reveal the advantages of using trainable exponential decay rates and the special singular term. For both structures, we set the same depth and width for the involved FNNs.
The error curves versus training epochs of the SGD are shown in Figure 1, and the errors of the finally obtained solutions are listed in Table 2. It is observed that for each structure the smallest error is obtained when $s$ is close to $1/2$, while larger errors are obtained when $s$ is close to 0 or 1. This is natural since the true solution has no singularity if $s = 1/2$ and has a stronger singularity as $s$ approaches 0 or 1. Moreover, from the comparison, it is clear that the special structure outperforms the simple one in obtaining smaller errors. We also remark that the time cost of the simple structure is only slightly less than that of the special structure.
5.2.3 High-dimensional simulation
Finally, we conduct a high-dimensional test by solving the model problem (3) with $d$ up to 10 for several values of $s$. We continue using the special structure with the same depth and width for approximation. The error curves and final errors are shown in Figure 2 and Table 3. We observe that the proposed method is still effective in high dimensions, and the smallest error is again obtained for intermediate values of $s$.
To the best of our knowledge, our method is the first successful attempt at solving 10-D spectral fractional Laplacian equations. The existing methods [24, 11] using finite elements or Laguerre functions show their effectiveness in low-dimensional problems ($d \le 2$), but they are incapable of handling high-dimensional cases due to the limitation of storage. Therefore, we do not conduct a numerical comparison in this work.
Figure 1: Error curves versus training epochs for the special structure (5) and the simple structure (4) with various values of $s$.
Table 2: Errors and running times of the special structure (5) and the simple structure (4) for various values of $s$.
Figure 2: Error curves versus training epochs for the high-dimensional tests.
Table 3: Errors and running times for the high-dimensional tests.
6 Conclusion
In summary, we develop a novel deep learning-based method to solve the spectral fractional Laplacian equation numerically. First, we reformulate the fractional equation as a regular partial differential equation of one more dimension by the Caffarelli-Silvestre extension. Next, we transform the extended problem into an equivalent minimal functional problem and characterize the space of weak solutions. To deal with the possibly high dimensions, we employ FNNs to construct a special approximate class of the solution space, by which the approximate solutions share key properties of the true solution. We then solve the minimization over the approximate class. In theory, we study the approximation error of the special class under a Barron-type hypothesis and thereafter derive the solution error of the method. Finally, the effectiveness of the proposed deep Ritz method is illustrated by high-dimensional simulations.
In this work, we consider the error between the minimizer of the energy functional in the approximate class and the true solution of the original problem. However, in practice one needs numerical quadrature to compute the functional, which brings extra errors. Consequently, future work may include the error analysis for the minimizer of the empirical loss function (16) discretized from $I$ by the Monte Carlo method and the sinc quadrature. More precisely, let $U$ be the true solution of (4) and
(1) |
for some DNN-based hypothesis space; then
(2) |
Note that the first error term on the right-hand side of (2) has been investigated, so it suffices to consider the second one. Suppose the associated solution operator has a bounded inverse; then
(3) |
where we use the minimizing properties of the two solutions. On the right-hand side of (3), the first term describes the generalization error of the empirical loss minimization over the hypothesis space, and the second term characterizes the bias coming from the numerical quadrature of the integrals. In [19], a related analysis is conducted for the Poisson equation and the Schrödinger equation using Rademacher complexity, but only for the Monte Carlo quadrature. It is promising yet challenging to carry out a similar analysis for our method on fractional Laplacian equations, which involves both the stochastic Monte Carlo method and the deterministic sinc quadrature.
References
- [1] G. Acosta and J. P. Borthagaray. A fractional Laplace equation: regularity of solutions and finite element approximations. SIAM J Numer Anal, 55:472–495, 2017.
- [2] A. R. Barron. Neural net approximation. In Proceedings of the 7th Yale Workshop on Adaptive and Learning Systems, 1992.
- [3] A. R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inform. Theory, 39(3):930–945, 1993.
- [4] A. Bonito, J. P. Borthagaray, R. H. Nochetto, E. Otarola, and A. J. Salgado. Numerical methods for fractional diffusion. arXiv e-prints, arXiv:1707.01566, 2017.
- [5] A. Bonito, W. Lei, and J. E. Pasciak. Numerical approximation of the integral fractional Laplacian. Numer Math, 142:235–278, 2019.
- [6] A. Bonito, W. Lei, and J. E. Pasciak. On sinc quadrature approximations of fractional powers of regularly accretive operators. J Numer Math, 27(2):57–68, 2019.
- [7] L. Caffarelli and L. Silvestre. An extension problem related to the fractional Laplacian. Comm Partial Differential Equations, 32(8):1245–1260, 2007.
- [8] A. Capella, J. Dávila, L. Dupaigne, and Y. Sire. Regularity of radial extremal solutions for some non-local semilinear equations. Comm Partial Differential Equations, 36(8):1353–1384, 2011.
- [9] A. Caragea, P. Petersen, and F. Voigtlaender. Neural network approximation and estimation of classifiers with classification boundary in a Barron class. arXiv e-prints, arXiv:2011.09363, 2020.
- [10] B. A. Carreras, V. E. Lynch, and G. M. Zaslavsky. Anomalous diffusion and exit time distribution of particle tracers in plasma turbulence model. Phys Plasmas, 8:5096–5103, 2001.
- [11] S. Chen and J. Shen. An efficient and accurate numerical method for the spectral fractional Laplacian equation. J Sci Comput, 82(17), 2020.
- [12] R. Cont and E. Voltchkova. A finite difference scheme for option pricing in jump diffusion and exponential Lévy models. SIAM J Numer Anal, 43:1596–1626, 2005.
- [13] W. E, C. Ma, S. Wojtowytsch, and L. Wu. Towards a mathematical understanding of neural network-based machine learning: what we know and what we don’t. arXiv e-prints, arXiv:2009.10713, 2020.
- [14] W. E, C. Ma, and L. Wu. The Barron space and the flow-induced function spaces for neural network models. arXiv e-prints, arXiv:1906.08039, 2019.
- [15] W. E and S. Wojtowytsch. Representation formulas and pointwise properties for Barron functions. arXiv e-prints, arXiv:2006.05982, 2020.
- [16] B. Jourdain, S. Méléard, and W. A. Woyczynski. A probabilistic approach for nonlinear equations involving the fractional Laplacian and a singular operator. Potential Anal, 23(1):55–81, 2005.
- [17] J. M. Klusowski and A. R. Barron. Approximation by combinations of ReLU and squared ReLU ridge functions with and controls. IEEE Trans. Inform. Theory, 64(12):7649–7656, 2018.
- [18] N. Laskin. Fractional quantum mechanics and Lévy path integrals. Phys Lett A, 268:298–305, 2000.
- [19] J. Lu, Y. Lu, and M. Wang. A priori generalization analysis of the deep Ritz method for solving high dimensional elliptic equations. arXiv e-prints, arXiv:2101.01708, 2021.
- [20] J. Lu, Z. Shen, H. Yang, and S. Zhang. Deep network approximation for smooth functions. arXiv e-prints, arXiv:2001.03040, 2020.
- [21] J. Lund and K. L. Bowers. Sinc methods for quadrature and differential equations. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1992.
- [22] K. S. Miller and S. G. Samko. Completely monotonic functions. Integr Transf Spec F, 12(4):389–402, 2001.
- [23] Johannes Müller and Marius Zeinhofer. Error estimates for the variational training of neural networks with boundary penalty. arXiv e-prints, arXiv:2103.01007, 2021.
- [24] R. H. Nochetto, E. Otárola, and A. J. Salgado. A PDE approach to fractional diffusion in general domains: a priori error analysis. Found Comput Math, 15:733–791, 2015.
- [25] Z. Shen, H. Yang, and S. Zhang. Deep network with approximation error being reciprocal of width to power of square root of depth. Neural Comput, 33(4):1005–1036, 2021.
- [26] M. F. Shlesinger, B. J. West, and J. Klafter. Lévy dynamics of enhanced diffusion: application to turbulence. Phys Rev Lett, 58:1100–1103, 1987.
- [27] J. W. Siegel and J. Xu. Approximation rates for neural networks with general activation functions. Neural Networks, 128:313–321, 2020.
- [28] J. W. Siegel and J. Xu. High-order approximation rates for neural networks with ReLUk activation functions. arXiv e-prints, arXiv:2012.07205, 2020.
- [29] P. Tankov. Financial modelling with jump processes, volume 2. CRC press, 2003.
- [30] V. V. Uchaikin. Nonlocal models of cosmic ray transport in the galaxy. J Appl Math Phys, 3:187–200, 2015.
- [31] J. L. Vázquez. Recent progress in the theory of nonlinear diffusion with fractional Laplacian operators. Discrete Contin Dyn, 7:857–885, 2014.