DAS-PINNs: A deep adaptive sampling method for solving high-dimensional partial differential equations
Abstract
In this work we propose a deep adaptive sampling (DAS) method for solving partial differential equations (PDEs), where deep neural networks are utilized to approximate the solutions of PDEs and deep generative models are employed to generate new collocation points that refine the training set. The overall procedure of DAS consists of two components: solving the PDEs by minimizing the residual loss on the collocation points in the training set and generating a new training set to further improve the accuracy of the current approximate solution. In particular, we treat the residual as a probability density function and approximate it with a deep generative model, called KRnet. The new samples from KRnet are consistent with the distribution induced by the residual, i.e., more samples are located in the region of large residual and fewer samples are located in the region of small residual. Analogous to classical adaptive methods such as the adaptive finite element method, KRnet acts as an error indicator that guides the refinement of the training set. Compared to the neural network approximation obtained with uniformly distributed collocation points, the developed algorithms can significantly improve the accuracy, especially for low regularity and high-dimensional problems. We demonstrate the effectiveness of the proposed DAS method with numerical experiments.
keywords:
deep learning; numerical approximation of PDEs; adaptive sampling; deep generative models.
1 Introduction
In recent years, solving partial differential equations (PDEs) with deep learning methods has been receiving increasing attention [1, 2, 3]. Two major types of deep learning methods have been proposed for solving PDEs, including the variational form subject to deep learning techniques [4, 5, 6, 7] and the physics-informed neural networks (PINNs) [8, 9, 10, 3], both of which reformulate a PDE problem as an optimization problem and train a deep neural network (DNN) to approximate the solution of the PDE through minimizing the corresponding loss functional. The variational form is based on the weak formulation of PDEs, while physics-informed neural networks are based on the residual loss of PDEs. Similar ideas of solving PDEs via minimizing the residual loss can be traced back to the works [11, 12] in the 1990s, where a shallow neural network is optimized on an a priori fixed mesh as an approximation of the solution. Some efforts have been made to incorporate traditional computational techniques to enhance the performance of solving PDEs with deep neural networks. In [13, 14, 15, 16, 7], deep neural networks based on domain decomposition are proposed to improve the efficiency. A penalty-free neural network method [17] and PhyGeoNet [16] have been developed to deal with complex geometries and irregular domains. A weak formulation with primal and adversarial networks is proposed in [18], where the PDE problem is converted to an operator norm minimization problem induced by the weak formulation.
One critical step for all these methods is to approximate the loss functional, where the integral is usually approximated by the Monte Carlo method with collocation points randomly generated by a uniform distribution on the computational domain. Since the minimization of the discrete loss functional yields the approximate solution, the accuracy of the approximate solution is closely related to the accuracy of the discrete loss functional. In contrast to classical computational methods, where the main concern is the approximation error, one needs to balance the approximation error and the generalization error for the neural network approximation, where the approximation error mainly originates from the modeling capability of the neural network and the generalization error is mainly related to the data points in the training set, i.e., the random samples for the discretization of the loss functional. However, for many PDE models, the uniform random sampling strategy is not efficient, especially when the PDE solution has low regularity; in other words, an integrand of low regularity may have a large variance with respect to a uniform distribution such that the Monte Carlo approximation of the loss functional has a large prefactor in front of the $O(N^{-1/2})$ convergence rate. This issue becomes worse for high-dimensional problems due to the curse of dimensionality. In high-dimensional spaces, most of the volume of the computational domain concentrates around its surface [19, 20, 21], which means that uniform samples may become less effective for training deep neural networks to approximate high-dimensional PDEs. For example, the collocation points from the uniform distribution are not suitable for solving high-dimensional Fokker-Planck equations, while an adaptive strategy through sampling the current approximate solution is effective [22]. In [23], a selection network is introduced to serve as a weight function to assign higher weights to samples with large point-wise residuals, which yields a more accurate approximate solution if the selection network is properly chosen. However, to obtain a valid selection network, one needs to impose additional constraints on the selection network, which is often a non-trivial task. For low-dimensional problems, it is well known that one can employ adaptive numerical schemes to deal with PDEs with low-regularity solutions [24, 25, 26], which also suggests that uniform samples are not the best choice. Therefore, adaptive sampling strategies are crucial for developing more efficient and reliable deep learning techniques for the approximation of PDEs.
In this work, we develop a deep adaptive sampling method (DAS) for the neural network approximation of PDEs based on residual minimization, where a deep generative model, called KRnet [27, 28, 29], is used to guide the sample generation for the training set. To this end, we need to construct two deep neural network models: one for approximating the solution and the other for refining the training set. The neural network approximation is achieved by the standard procedure of residual minimization. KRnet defines a transport map [30] from the data distribution to a prior distribution (e.g. the standard Gaussian). KRnet retains two traits of flow-based generative models [31, 32]: exact invertibility of the transport map and efficient computation of the Jacobian determinant, based on which one can obtain an explicit density model using the change of variables and an effective approach for generating samples through the invertible mapping. The key point in our proposed framework is that the residual is viewed as a probability density function (PDF) up to a constant, and approximating this PDF can be achieved by minimizing the Kullback-Leibler (KL) divergence between the KRnet-induced density model and the residual-induced distribution. We use the trained KRnet to generate new collocation points to replace or refine the training set, where more points are put in the region of large residual and fewer points are put in the region of small residual. The updated training set is then used to further improve the accuracy of the current approximate solution. Simply speaking, KRnet acts as an error indicator for the improvement of the training set, which shares similarities with the classical adaptive finite element method subject to a residual-based a posteriori error estimator. In summary, the main contributions of this work are as follows.
1. We utilize a deep generative model as a generic means to reflect the correspondence between the residual and the error of approximation through efficient PDF approximation and sample generation.
2. We propose a deep adaptive sampling (DAS) framework, including efficient sampling procedures and training algorithms, for the adaptive improvement of the neural network approximation of PDEs.
The remainder of the paper is organized as follows. In the next section, we briefly describe the deep learning method used in this work for the approximation of PDEs. After that, the statistical error of the machine learning technique is illustrated from the perspective of function approximation. Our DAS approach is presented in section 4. We provide the theoretical analysis of DAS in section 5. In section 6, we demonstrate the efficiency of our adaptive sampling approach with numerical experiments. The paper is concluded in section 7.
2 Deep learning for PDEs
Let $\Omega \subset \mathbb{R}^d$ be a spatial domain, which is bounded, connected and with a polygonal boundary $\partial\Omega$, and let $\boldsymbol{x}$ denote a spatial variable. The PDE problem is stated as: find $u(\boldsymbol{x}) \in V$, where $V$ is a proper function space defined on $\Omega$, such that
$$\mathcal{L} u(\boldsymbol{x}) = s(\boldsymbol{x}), \quad \boldsymbol{x} \in \Omega, \qquad \mathfrak{b}\, u(\boldsymbol{x}) = g(\boldsymbol{x}), \quad \boldsymbol{x} \in \partial\Omega, \tag{1}$$
where $\mathcal{L}$ is the partial differential operator, $\mathfrak{b}$ is the boundary operator, $s(\boldsymbol{x})$ is the source function, and $g(\boldsymbol{x})$ represents the boundary conditions.
Let $u(\boldsymbol{x};\Theta)$ be a neural network with parameters $\Theta$. In the framework of PINNs, the goal is to use $u(\boldsymbol{x};\Theta)$ to approximate the solution $u(\boldsymbol{x})$ through optimizing a loss functional defined as [8, 9]
$$J(u(\boldsymbol{x};\Theta)) = J_r(u(\boldsymbol{x};\Theta)) + \gamma J_b(u(\boldsymbol{x};\Theta)) = \int_\Omega r^2(\boldsymbol{x};\Theta)\, d\boldsymbol{x} + \gamma \int_{\partial\Omega} b^2(\boldsymbol{x};\Theta)\, d\boldsymbol{x}, \tag{2}$$
where $J_r$ and $J_b$ measure how well $u(\boldsymbol{x};\Theta)$ satisfies the partial differential equation and the boundary conditions, respectively, and $\gamma > 0$ is a penalty parameter. Here, $r(\boldsymbol{x};\Theta) = \mathcal{L}u(\boldsymbol{x};\Theta) - s(\boldsymbol{x})$ and $b(\boldsymbol{x};\Theta) = \mathfrak{b}\,u(\boldsymbol{x};\Theta) - g(\boldsymbol{x})$. The loss functional (2) is usually discretized numerically before the optimization with respect to $\Theta$ is addressed. In practice, one often chooses two sets of uniformly distributed collocation points $S_\Omega = \{\boldsymbol{x}_\Omega^{(i)}\}_{i=1}^{N_r}$ and $S_{\partial\Omega} = \{\boldsymbol{x}_{\partial\Omega}^{(i)}\}_{i=1}^{N_b}$, respectively, for the discretization of the two terms in the objective functional (2), leading to the following empirical loss
$$J_N(u(\boldsymbol{x};\Theta)) = \frac{1}{N_r}\sum_{i=1}^{N_r} r^2(\boldsymbol{x}_\Omega^{(i)};\Theta) + \frac{\gamma}{N_b}\sum_{i=1}^{N_b} b^2(\boldsymbol{x}_{\partial\Omega}^{(i)};\Theta), \tag{3}$$
where $N_r$ and $N_b$ are the numbers of collocation points in $S_\Omega$ and $S_{\partial\Omega}$, respectively.
Note that in the definition of $J_N$ we do not take into account the volumes $|\Omega|$ and $|\partial\Omega|$; the ratio induced by these two constants can be dealt with by choosing $\gamma$ such that $J_N$ is a Monte Carlo approximation of $J$ up to a constant scaling factor. We then seek an approximate solution by minimizing the empirical loss (3), i.e.,
$$\Theta^*_N = \operatorname*{arg\,min}_{\Theta} J_N(u(\boldsymbol{x};\Theta)), \tag{4}$$
which can be solved by stochastic gradient-based methods [33, 34].
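To make the discretization (3) concrete, the following is a minimal sketch (not the paper's implementation) of the empirical PINN loss for a two-dimensional Poisson problem $-\Delta u = s$ on $(0,1)^2$ with Dirichlet data $g$ on the boundary. The network architecture, the source term `s_fn`, the boundary data `g_fn`, and all hyperparameters are illustrative assumptions.

```python
import torch

# Illustrative PINN approximation u(x; Theta): a small tanh network (assumed sizes).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def s_fn(x):                      # assumed source term
    return torch.ones(x.shape[0], 1)

def g_fn(x):                      # assumed boundary data
    return torch.zeros(x.shape[0], 1)

def residual(x):
    """PDE residual r(x; Theta) = -Delta u(x; Theta) - s(x), computed with autograd."""
    x = x.requires_grad_(True)
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = 0.0
    for i in range(x.shape[1]):
        lap = lap + torch.autograd.grad(grad_u[:, i].sum(), x, create_graph=True)[0][:, i:i + 1]
    return -lap - s_fn(x)

def empirical_loss(x_in, x_bd, gamma=1.0):
    """Monte Carlo discretization of J = J_r + gamma * J_b, cf. equation (3)."""
    loss_r = residual(x_in).pow(2).mean()
    loss_b = (net(x_bd) - g_fn(x_bd)).pow(2).mean()
    return loss_r + gamma * loss_b

# Uniformly distributed collocation points for the interior and the boundary.
x_in = torch.rand(1000, 2)
t = torch.rand(400, 1)
sides = torch.randint(0, 4, (400, 1)).float()
x_bd = torch.where(sides < 2, torch.cat([t, sides % 2], dim=1),
                   torch.cat([sides % 2, t], dim=1))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = empirical_loss(x_in, x_bd)
    loss.backward()
    opt.step()
```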
Recently, some a priori error estimates of neural-network-based methods for solving PDEs have been established. Combining the analysis techniques of the least-squares finite element method [35] with the universal approximation property of neural networks [36, 37, 38, 39], Shin et al. propose an abstract framework for the error estimation of physics-informed neural networks [40]. Lu et al. derive an a priori estimate of the generalization error for the deep Ritz method with two-layer neural networks [41]. Suppose that $u(\boldsymbol{x};\Theta^*_N)$ is the minimizer of the empirical loss $J_N$ and $u(\boldsymbol{x};\Theta^*)$ is the minimizer of $J$, i.e., $\Theta^*_N = \operatorname*{arg\,min}_{\Theta} J_N(u(\boldsymbol{x};\Theta))$ and $\Theta^* = \operatorname*{arg\,min}_{\Theta} J(u(\boldsymbol{x};\Theta))$.
We have
$$J(u(\boldsymbol{x};\Theta^*_N)) - J(u(\boldsymbol{x};\Theta^*)) \le 2\sup_{\Theta}\big|J(u(\boldsymbol{x};\Theta)) - J_N(u(\boldsymbol{x};\Theta))\big|, \tag{5}$$
i.e.,
$$\mathbb{E}\big[\|u(\boldsymbol{x};\Theta^*_N) - u(\boldsymbol{x})\|^2\big] \le C\Big(\mathbb{E}\Big[\sup_{\Theta}\big|J(u(\boldsymbol{x};\Theta)) - J_N(u(\boldsymbol{x};\Theta))\big|\Big] + J(u(\boldsymbol{x};\Theta^*))\Big), \tag{6}$$
where $\mathbb{E}[\cdot]$ indicates the expectation with respect to the random collocation points, the norm corresponds to the function space $V$ for $u$, and $C$ is a constant. The first term describes the statistical error from discretizing the loss functional with the Monte Carlo approximation, and the second term is the approximation error of minimizing the loss functional over the hypothesis space. The approximation error depends on the capability of neural networks, while the statistical error depends on the definitions of $S_\Omega$ and $S_{\partial\Omega}$. In this work, we focus on how to reduce the statistical error for problem (4), and our algorithm can also be generalized to other formulations for the neural network approximation of PDEs. For simplicity, we focus on the integration of the residual and assume that the integral on the boundary is well approximated by a prescribed $S_{\partial\Omega}$.
3 Illustration of the statistical error
We first use function approximation as an example to illustrate the statistical error of the machine learning technique. Let $\boldsymbol{X} \in \mathbb{R}^d$ and $Y \in \mathbb{R}$ be subject to a joint distribution $\mu_{\boldsymbol{X},Y}$. Let $g(\boldsymbol{X})$ be a model and $f(\boldsymbol{X})$ be a function to be approximated. We know that in the mean squared sense the optimal model is
$$g^*(\boldsymbol{x}) = \mathbb{E}[Y \mid \boldsymbol{X} = \boldsymbol{x}]. \tag{7}$$
In reality, we usually do not know $\mu_{\boldsymbol{X},Y}$ and only have a set of data $\{(\boldsymbol{x}^{(i)}, y^{(i)})\}_{i=1}^N$, which can be regarded as samples of $\mu_{\boldsymbol{X},Y}$. For a certain hypothesis space $\mathcal{H}$, we obtain a regression problem
$$\min_{g(\cdot;\theta)\in\mathcal{H}} \frac{1}{N}\sum_{i=1}^N \big(y^{(i)} - g(\boldsymbol{x}^{(i)};\theta)\big)^2, \tag{8}$$
where the discrete loss can be regarded as a Monte Carlo approximation of $\mathbb{E}\big[(Y - g(\boldsymbol{X};\theta))^2\big]$ and the subscript $\theta$ indicates the model parameters specified by $\mathcal{H}$. If we let $Y = f(\boldsymbol{X})$ and assume $\mathcal{H} = V_h$ with $V_h$ being a linear space, we then obtain the continuous least-squares method for function approximation
$$u_h = \operatorname*{arg\,min}_{v \in V_h} \int \big(f(\boldsymbol{x}) - v(\boldsymbol{x})\big)^2 \rho(\boldsymbol{x})\, d\boldsymbol{x} = \operatorname*{arg\,min}_{v \in V_h} \|f - v\|^2_{\rho}, \tag{9}$$
where $u_h$ is the best approximation of $f$ located in $V_h$ subject to a weighted norm in terms of the probability density function $\rho(\boldsymbol{x})$ of $\boldsymbol{X}$. To approximate $u_h$ with a machine learning technique, we consider
$$u_{h,N} = \operatorname*{arg\,min}_{v \in V_h} \frac{1}{N}\sum_{i=1}^N \big(f(\boldsymbol{x}^{(i)}) - v(\boldsymbol{x}^{(i)})\big)^2, \tag{10}$$
where $\{\boldsymbol{x}^{(i)}\}_{i=1}^N$ are samples of the PDF $\rho(\boldsymbol{x})$, and the discrete loss in (10) is the Monte Carlo approximation of the weighted norm in (9). A classical choice for $V_h$ is the linear space spanned by polynomials, for which we derive the error estimate for $u_{h,N}$ as follows.
Lemma 1.
Let $f(\boldsymbol{x})$ be a continuous function defined on a compact domain $D$ and $\rho(\boldsymbol{x})$ be a PDF on $D$. Let $V_h = \mathrm{span}\{p_1(\boldsymbol{x}),\dots,p_m(\boldsymbol{x})\}$ with $\{p_i\}$ being orthonormal polynomials in terms of $\rho$. For any $0 < \epsilon < 1$ and with probability at least $1-\epsilon$, we have for a sufficiently large $N$
$$\|f - u_{h,N}\|_\rho \le \frac{C}{\sqrt{N}} + \|f - u_h\|_\rho,$$
where $C > 0$ is a constant and $\|\cdot\|_\rho$ is the weighted norm in terms of $\rho$.
The first term on the right-hand side is the statistical error due to the random samples for the approximation of the weighted norm (see the proof in the appendix for more details), and its existence does not depend on the choice of $V_h$. When $N$ goes to infinity, the statistical error goes to zero and only the approximation error is left. In other words, when applying a machine learning technique to function approximation, we need to pay attention to both the hypothesis space and the choice of random samples $\{\boldsymbol{x}^{(i)}\}_{i=1}^N$, i.e., the training set, to obtain a trade-off between the statistical error and the approximation error. For low-dimensional problems, classical methods such as the finite element method avoid the statistical error by using Gauss quadrature rules, which implies that machine learning techniques are in general less efficient than classical methods due to the existence of the statistical error. On the other hand, for high-dimensional problems, classical methods may not be able to obtain a relatively small approximation error due to the curse of dimensionality, and machine learning techniques may perform better by using a capable hypothesis space such as neural networks and an affordable sample size for a relatively small statistical error. A more general estimate of the statistical error requires the Rademacher complexity of the hypothesis space. In this work, we are interested in the reduction of the statistical error instead of its bound for a certain set of random samples.
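The trade-off above can be observed numerically. The following small experiment (illustrative only, not from the paper) fits an assumed target function on $[0,1]$ by least squares in a fixed polynomial space using $N$ uniform random samples; as $N$ grows, the error decays toward the best approximation error of the space. A plain monomial basis is used here for simplicity rather than orthonormal polynomials.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-5.0 * (x - 0.5) ** 2)   # assumed target function
degree = 5                                     # fixed hypothesis space V_h

# "Best" approximation u_h: least squares on a very dense grid (quadrature proxy).
x_dense = np.linspace(0.0, 1.0, 20000)
V_dense = np.vander(x_dense, degree + 1)
c_best, *_ = np.linalg.lstsq(V_dense, f(x_dense), rcond=None)
err_best = np.sqrt(np.mean((V_dense @ c_best - f(x_dense)) ** 2))

for N in [20, 200, 2000, 20000]:
    x = rng.uniform(0.0, 1.0, N)               # uniform random training samples
    V = np.vander(x, degree + 1)
    c, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    err = np.sqrt(np.mean((V_dense @ c - f(x_dense)) ** 2))
    print(f"N={N:6d}  error={err:.3e}  (best approximation error={err_best:.3e})")
```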
4 Deep adaptive sampling method
We now focus on the reduction of the statistical error when machine learning techniques are used to approximate PDEs. Our deep adaptive sampling method, i.e., the DAS method, will be established from the viewpoint of variance reduction, since the statistical error is induced by the Monte Carlo approximation of the loss. Consider the term $J_r$ in equation (2). It is easy to see that if $r(\boldsymbol{x};\Theta)$ is a smooth function with good regularity, the most effective way to reduce the error of the Monte Carlo approximation of $J_r$ is to increase the sample size $N_r$. However, if there exists low regularity, the situation might be different. For example, if the residual is strongly localized, the scenario can be regarded as a rare event. Assume that the residual in equation (2) has a similar behavior to the indicator function of a subdomain $\Omega_0 \subset \Omega$ whose volume $|\Omega_0|$ is much smaller than $|\Omega|$, i.e.,
$$r^2(\boldsymbol{x};\Theta) \approx \mathbb{1}_{\Omega_0}(\boldsymbol{x}) = \begin{cases} 1, & \boldsymbol{x} \in \Omega_0, \\ 0, & \text{otherwise}. \end{cases}$$
For simplicity, we assume that $|\Omega| = 1$ when presenting the algorithm. To improve the approximation in $\Omega_0$, we need to compute $J_r$ accurately. Consider a Monte Carlo estimator of $J_r$ in terms of uniform samples,
$$\hat{J}_{N_r} = \frac{1}{N_r}\sum_{i=1}^{N_r} r^2(\boldsymbol{x}_\Omega^{(i)};\Theta) \approx \frac{1}{N_r}\sum_{i=1}^{N_r} \mathbb{1}_{\Omega_0}(\boldsymbol{x}_\Omega^{(i)}).$$
The relative error of $\hat{J}_{N_r}$ is
$$\frac{\big(\mathrm{Var}(\hat{J}_{N_r})\big)^{1/2}}{\mathbb{E}[\hat{J}_{N_r}]} = \sqrt{\frac{1-|\Omega_0|}{N_r |\Omega_0|}} \approx \frac{1}{\sqrt{N_r |\Omega_0|}}.$$
We then need a sample size $N_r = \mathcal{O}(|\Omega_0|^{-1}\varepsilon^{-2})$ to obtain a relative error of $\mathcal{O}(\varepsilon)$. This implies that a given sample size quickly becomes less effective if the residual is strongly localized, and this problem is worsened in the approximation of high-dimensional PDEs. To reduce the relative error, we need to choose more effective random samples instead of uniform samples.
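The $1/\sqrt{N_r|\Omega_0|}$ scaling is easy to verify numerically. The snippet below (an illustration, not taken from the paper) estimates $J_r$ with uniform Monte Carlo when $r^2$ is the indicator of a small square of assumed volume $10^{-3}$ inside $[0,1]^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
vol0 = 1e-3                          # assumed volume of Omega_0
half = np.sqrt(vol0) / 2.0

def r2(x):
    # r^2 ~ indicator of a small square Omega_0 centered at (0.5, 0.5)
    return (np.abs(x - 0.5) < half).all(axis=1).astype(float)

for N in [10**3, 10**4, 10**5, 10**6]:
    estimates = [r2(rng.uniform(size=(N, 2))).mean() for _ in range(50)]
    rel_err = np.std(estimates) / vol0
    print(f"N={N:7d}  relative error ~ {rel_err:.2f}  (theory ~ {1/np.sqrt(N*vol0):.2f})")
```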
4.1 Some ideas on variance reduction
We outline our basic ideas on variance reduction in this section; more details about the algorithm will be presented later. We first consider the importance sampling technique. For simplicity, we only consider $J_r$ in equation (2), which is the loss induced by the residual. We have
$$J_r = \int_\Omega r^2(\boldsymbol{x};\Theta)\, d\boldsymbol{x} = \int_\Omega \frac{r^2(\boldsymbol{x};\Theta)}{p(\boldsymbol{x})}\, p(\boldsymbol{x})\, d\boldsymbol{x} \approx \frac{1}{N_r}\sum_{i=1}^{N_r} \frac{r^2(\boldsymbol{x}_\Omega^{(i)};\Theta)}{p(\boldsymbol{x}_\Omega^{(i)})}, \tag{11}$$
where the set $S_\Omega = \{\boldsymbol{x}_\Omega^{(i)}\}_{i=1}^{N_r}$ is generated with respect to the probability density function (PDF) $p(\boldsymbol{x})$ instead of a uniform distribution as in equation (3). If the variance of $r^2(\boldsymbol{x};\Theta)/p(\boldsymbol{x})$ in terms of $p(\boldsymbol{x})$ is smaller than the variance of $r^2(\boldsymbol{x};\Theta)$ in terms of the uniform distribution, the accuracy of the Monte Carlo approximation will be improved for a fixed sample size $N_r$. The optimal choice for $p(\boldsymbol{x})$ is
$$p^*(\boldsymbol{x}) = \frac{r^2(\boldsymbol{x};\Theta)}{\int_\Omega r^2(\boldsymbol{x};\Theta)\, d\boldsymbol{x}}, \tag{12}$$
where the normalization constant is exactly $J_r$. Although the optimal choice is useless in practice since $J_r$ is the quantity to be computed, it suggests that variance reduction can be achieved if $p(\boldsymbol{x})$ is close to the residual-induced distribution.
Lemma 2.
Assume that $r^2(\boldsymbol{x};\Theta)$ is bounded on $\Omega$ and $p(\boldsymbol{x})$ is a PDF satisfying
$$D_{\mathrm{KL}}\big(p^*(\boldsymbol{x})\,\|\,p(\boldsymbol{x})\big) \le \epsilon,$$
where $D_{\mathrm{KL}}(\cdot\|\cdot)$ indicates the Kullback-Leibler divergence. For any $\delta > 0$, we have
(13)
where
and are i.i.d. random variables. is the weighted norm in terms of . The tail probability can be bounded as
When $p(\boldsymbol{x})$ is close to $p^*(\boldsymbol{x})$, the ratio $r^2(\boldsymbol{x};\Theta)/p(\boldsymbol{x})$ should be close to a constant. The second term on the right-hand side of inequality (13) is related to the tail probability in terms of $\delta$. It is seen that if the tail probability is small enough, a variance reduction can be achieved by choosing a small $\delta$.
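As a small illustration of the importance sampling estimator (11) (not from the paper), the snippet below estimates $J_r = \int_0^1 r^2(x)\,dx$ for an assumed narrow Gaussian-bump residual, comparing uniform Monte Carlo with a proposal density that is close to the residual-induced distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
r2 = lambda x: np.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2))   # localized residual^2 (assumed)
J_exact = 0.01 * np.sqrt(2 * np.pi)                          # essentially exact on [0, 1]

def proposal_sample(n, sigma=0.03):
    # sample a normal centered at the bump and keep points inside (0, 1)
    x = rng.normal(0.5, sigma, size=3 * n)
    return x[(x > 0) & (x < 1)][:n]

def proposal_pdf(x, sigma=0.03):
    # truncation to (0, 1) is negligible for this sigma, so use the untruncated density
    return np.exp(-((x - 0.5) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

N = 2000
unif = rng.uniform(size=N)
est_uniform = r2(unif).mean()                                # plain Monte Carlo
xs = proposal_sample(N)
est_is = (r2(xs) / proposal_pdf(xs)).mean()                  # importance sampling, cf. (11)
print(f"exact ~ {J_exact:.5f}, uniform MC = {est_uniform:.5f}, IS = {est_is:.5f}")
```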
Another option for variance reduction is to relax the definition of $J_r$ as:
$$J_r^p(u(\boldsymbol{x};\Theta)) = \int_\Omega r^2(\boldsymbol{x};\Theta)\, p(\boldsymbol{x})\, d\boldsymbol{x} \approx \frac{1}{N_r}\sum_{i=1}^{N_r} r^2(\boldsymbol{x}_\Omega^{(i)};\Theta), \tag{14}$$
where the set $S_\Omega$ is sampled from the PDF $p(\boldsymbol{x})$ and $p(\boldsymbol{x}) > 0$ on $\Omega$. It is easy to see that the minimizer of $J_r^p$ is also the solution of problem (1) when the boundary conditions are satisfied. To reduce the error induced by the Monte Carlo approximation, we may adjust $p(\boldsymbol{x})$ such that the residual does not vary dramatically. This is similar to the classical adaptive finite element method, where mesh refinement/coarsening is supposed to make the approximation error nearly uniform. For our case, we may sample a PDF that is close to the residual-induced distribution and add more samples from the region of large residual into the training set.
4.2 PDF approximation and sample generation
Two options for variance reduction were considered in the previous section. To make both ideas practical, the key issue is to generate samples efficiently from the PDF induced by the residual $r^2(\boldsymbol{x};\Theta)$ for a fixed $\Theta$. The first option is based on importance sampling, meaning that an explicit PDF model is needed. The second option is based on refinement of the training set, which only needs samples from the region of large residual and does not need the likelihood of the samples. The second option has been employed in the recent literature to improve the neural network approximation, which is either ad hoc [42] (only for low-dimensional problems) or based on traditional sampling strategies such as MCMC [43]. To the best of our knowledge, there does not exist a generic algorithm that can be effectively adapted to both options. We intend to fill this gap in this work. It is seen from Lemma 2 that the tail probability should be small enough for the effectiveness of importance sampling, which means that $p(\boldsymbol{x})$ must be close enough to the distribution induced by $r^2(\boldsymbol{x};\Theta)$. However, the approximation of a PDF is a challenging task, especially in high-dimensional spaces. Classical explicit PDF models such as the exponential family of distributions and Gaussian mixture models are in general not sufficient as approximators of the PDF induced by the residual. To alleviate this difficulty, we resort to deep generative models. In particular, we employ a recently developed deep generative model called KRnet for both probability approximation and sample generation [22, 27]. KRnet is one type of normalizing flow [44], which provides an invertible transport map between a prior distribution and the target distribution. Unlike other types of deep generative models such as GANs [45, 46, 47] and VAEs [48], a normalizing flow provides an explicit likelihood. KRnet can be used to address both options for variance reduction, while other deep generative models may also be employed if only the second option for variance reduction is considered. However, one computational issue faced by all deep generative models is that the distribution induced by the residual is usually defined on a compact domain while deep generative models are in general defined on the whole space. We subsequently address this issue without going into details about the structure of KRnet.
Let $\boldsymbol{X} \in \mathbb{R}^d$ be a random vector associated with a given data set, and let its PDF be denoted by $p_{\boldsymbol{X}}(\boldsymbol{x})$. The target is to estimate $p_{\boldsymbol{X}}$ from data or to generate samples that are consistent with a given $p_{\boldsymbol{X}}$. Let $\boldsymbol{Z} \in \mathbb{R}^d$ be a random vector associated with a PDF $p_{\boldsymbol{Z}}(\boldsymbol{z})$, where $p_{\boldsymbol{Z}}$ is a prior distribution (e.g., a Gaussian distribution). Flow-based generative modeling seeks an invertible mapping $\boldsymbol{z} = f(\boldsymbol{x})$ [31]. By the change of variables, we have the PDF of $\boldsymbol{X}$ as
$$p_{\boldsymbol{X}}(\boldsymbol{x}) = p_{\boldsymbol{Z}}\big(f(\boldsymbol{x})\big)\,\big|\det \nabla_{\boldsymbol{x}} f\big|. \tag{15}$$
Once the prior distribution is specified, equation (15) provides an explicit density model for $\boldsymbol{X}$. The inverse of $f$ provides a convenient way to sample $\boldsymbol{X}$ as $\boldsymbol{X} = f^{-1}(\boldsymbol{Z})$. The basic idea of KRnet is to define the structure of $f$ in terms of the Knothe-Rosenblatt rearrangement. Let $\mu_{\boldsymbol{X}}$ and $\mu_{\boldsymbol{Z}}$ be the probability measures of the two random variables, respectively. A mapping $T: \mathbb{R}^d \to \mathbb{R}^d$ is called a transport map such that $T_{\#}\mu_{\boldsymbol{X}} = \mu_{\boldsymbol{Z}}$, where $T_{\#}\mu_{\boldsymbol{X}}$ is the push-forward of $\mu_{\boldsymbol{X}}$ such that $\mu_{\boldsymbol{Z}}(B) = \mu_{\boldsymbol{X}}(T^{-1}(B))$ for every Borel set $B$ [49]. The transport map given by the Knothe-Rosenblatt rearrangement [49, 30] has a lower-triangular structure
$$\boldsymbol{z} = T(\boldsymbol{x}) = \begin{pmatrix} T_1(x_1) \\ T_2(x_1, x_2) \\ \vdots \\ T_d(x_1, \dots, x_d) \end{pmatrix}. \tag{16}$$
Simply speaking, KRnet integrates the triangular structure of the K-R rearrangement into the definition of the invertible transport map. More details can be found in [27, 22, 29].
Let $f_{\mathrm{KRnet}}(\boldsymbol{x};\Theta_f)$ indicate the invertible transport map induced by KRnet, where $\Theta_f$ includes the model parameters. An explicit PDF model $p_{\mathrm{KRnet}}(\boldsymbol{x};\Theta_f)$ can be obtained by letting $f = f_{\mathrm{KRnet}}$ in equation (15), i.e.,
$$p_{\mathrm{KRnet}}(\boldsymbol{x};\Theta_f) = p_{\boldsymbol{Z}}\big(f_{\mathrm{KRnet}}(\boldsymbol{x};\Theta_f)\big)\,\big|\det \nabla_{\boldsymbol{x}} f_{\mathrm{KRnet}}\big|. \tag{17}$$
Samples of $p_{\mathrm{KRnet}}$ are given as $\boldsymbol{X} = f_{\mathrm{KRnet}}^{-1}(\boldsymbol{Z};\Theta_f)$ by sampling $\boldsymbol{Z} \sim p_{\boldsymbol{Z}}$. A common choice for the distribution of $\boldsymbol{Z}$ is the standard Gaussian distribution. Depending on the prior knowledge of the problem, a more general model such as a Gaussian mixture model can also be used as the prior distribution. KRnet does not have any constraint on the range of the mapped data, meaning that both $\boldsymbol{X}$ and $\boldsymbol{Z}$ are defined on $\mathbb{R}^d$. Let $p_{\boldsymbol{Z}}$ be Gaussian. Due to the invertibility, $p_{\mathrm{KRnet}}(\boldsymbol{x};\Theta_f) > 0$ for any $\boldsymbol{x} \in \mathbb{R}^d$, which implies that samples may be generated anywhere in $\mathbb{R}^d$. So $p_{\mathrm{KRnet}}$ is not consistent with the distribution induced by $r^2(\boldsymbol{x};\Theta)$, which is equal to zero outside $\Omega$. To deal with this issue, we propose the following strategy.
Without loss of generality, we can assume that $\Omega = [-1,1]^d$. Let $\Omega_\delta = [-1-\delta, 1+\delta]^d$ with $\delta > 0$ such that $\Omega \subset \Omega_\delta$. For each dimension of $\boldsymbol{x}$, we define a logarithmic mapping $h_s(\cdot)$,
with $s > 0$ being a scale parameter, which defines a one-to-one correspondence between $(-1-\delta, 1+\delta)$ and $\mathbb{R}$. Let $H_s$ be the $d$-dimensional mapping such that $H_s(\boldsymbol{x}) = (h_s(x_1), \dots, h_s(x_d))^{\mathsf T}$.
Then the following invertible mapping
$$\hat{f}(\boldsymbol{x};\Theta_f) = f_{\mathrm{KRnet}}\big(H_s(\boldsymbol{x});\Theta_f\big) \tag{18}$$
defines a PDF
$$p_{\hat f}(\boldsymbol{x};\Theta_f) = p_{\boldsymbol{Z}}\big(\hat{f}(\boldsymbol{x};\Theta_f)\big)\,\big|\det \nabla_{\boldsymbol{x}} \hat{f}\big|, \tag{19}$$
where the support of $p_{\hat f}(\boldsymbol{x};\Theta_f)$ is $\Omega_\delta$.
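The following is a minimal sketch of how such a component-wise bijection yields a density with bounded support via the change of variables (18)-(19). For clarity it composes the map directly with a Gaussian prior (no flow in between) and uses $(-1,1)$ rather than the slightly enlarged domain; the specific log-ratio form of the map and the value of the scale parameter are illustrative assumptions, not necessarily the exact formula used by KRnet/DAS.

```python
import numpy as np

s = 0.5  # assumed scale parameter

def h(x):                      # (-1, 1)^d -> R^d, applied component-wise
    return s * np.log((1.0 + x) / (1.0 - x))

def h_inv(z):                  # R^d -> (-1, 1)^d
    return np.tanh(z / (2.0 * s))

def log_det_jac_h(x):          # log |det grad h| = sum_i log h'(x_i), h'(x) = 2s/(1-x^2)
    return np.sum(np.log(2.0 * s / (1.0 - x ** 2)), axis=-1)

def log_prior(z):              # standard Gaussian prior on R^d
    d = z.shape[-1]
    return -0.5 * np.sum(z ** 2, axis=-1) - 0.5 * d * np.log(2.0 * np.pi)

def log_density(x):
    # density on (-1,1)^d: log p(x) = log p_Z(h(x)) + log |det grad_x h(x)|
    return log_prior(h(x)) + log_det_jac_h(x)

z = np.random.default_rng(0).normal(size=(5, 2))
x = h_inv(z)                   # samples in (-1, 1)^2
print(x)
print(log_density(x))          # their log densities
```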
We now consider a modification of $r^2(\boldsymbol{x};\Theta)$. Define a cutoff function $\chi_\delta(\boldsymbol{x})$ on $\Omega_\delta$ as a tensor product of one-dimensional cutoff functions,
where each one-dimensional factor is a piecewise linear function that is equal to one on $[-1,1]$ and decreases linearly to zero on $[-1-\delta,-1]$ and $[1,1+\delta]$.
We consider a modified PDF: for any $\boldsymbol{x} \in \Omega_\delta$,
$$\hat{p}^*(\boldsymbol{x}) = \frac{\chi_\delta(\boldsymbol{x})\, r^2(\boldsymbol{x};\Theta)}{\int_{\Omega_\delta} \chi_\delta(\boldsymbol{x})\, r^2(\boldsymbol{x};\Theta)\, d\boldsymbol{x}}. \tag{20}$$
Note that both $\hat{p}^*(\boldsymbol{x})$ and $p_{\hat f}(\boldsymbol{x};\Theta_f)$ have the support $\Omega_\delta$. We then solve the following optimization problem
$$\Theta_f^* = \operatorname*{arg\,min}_{\Theta_f} D_{\mathrm{KL}}\big(\hat{p}^*(\boldsymbol{x}) \,\|\, p_{\hat f}(\boldsymbol{x};\Theta_f)\big), \tag{21}$$
where $D_{\mathrm{KL}}(\cdot\|\cdot)$ indicates the Kullback-Leibler (KL) divergence between two distributions. We finally use
$$p_{\hat f}(\boldsymbol{x};\Theta_f^*) \tag{22}$$
as an approximation of the PDF induced by $r^2(\boldsymbol{x};\Theta)$. If $\delta$ is small, most of the samples will be located in $\Omega$. Since $\chi_\delta(\boldsymbol{x}) = 1$ on $\Omega$, $p_{\hat f}(\boldsymbol{x};\Theta_f^*)$ approximates the $r^2$-induced PDF well when $\delta$ is small. In our numerical experiments, we use fixed values of $\delta$ and the scale parameter $s$.
We now look at the approximation of $\hat{p}^*$. The KL divergence in the optimization problem (21) can be written as
$$D_{\mathrm{KL}}\big(\hat{p}^* \,\|\, p_{\hat f}(\cdot;\Theta_f)\big) = \int_{\Omega_\delta} \hat{p}^*(\boldsymbol{x}) \log \hat{p}^*(\boldsymbol{x})\, d\boldsymbol{x} - \int_{\Omega_\delta} \hat{p}^*(\boldsymbol{x}) \log p_{\hat f}(\boldsymbol{x};\Theta_f)\, d\boldsymbol{x}. \tag{23}$$
The first term on the right-hand side corresponds to the differential entropy of $\hat{p}^*$, which does not affect the optimization with respect to $\Theta_f$. So minimizing the KL divergence is equivalent to minimizing the cross entropy between $\hat{p}^*$ and $p_{\hat f}(\cdot;\Theta_f)$ [50, 51]:
$$H\big(\hat{p}^*, p_{\hat f}(\cdot;\Theta_f)\big) = -\int_{\Omega_\delta} \hat{p}^*(\boldsymbol{x}) \log p_{\hat f}(\boldsymbol{x};\Theta_f)\, d\boldsymbol{x}. \tag{24}$$
Since the samples from $\hat{p}^*$ are not available, we approximate the cross entropy using the importance sampling technique:
$$H\big(\hat{p}^*, p_{\hat f}(\cdot;\Theta_f)\big) \approx -\frac{1}{N}\sum_{i=1}^{N} \frac{\hat{p}^*(\boldsymbol{x}^{(i)})}{p_{\hat f}(\boldsymbol{x}^{(i)};\bar{\Theta}_f)} \log p_{\hat f}(\boldsymbol{x}^{(i)};\Theta_f), \tag{25}$$
where $p_{\hat f}(\boldsymbol{x};\bar{\Theta}_f)$ is a PDF model with known parameters $\bar{\Theta}_f$ and its samples can be generated efficiently as
$$\boldsymbol{x}^{(i)} = \hat{f}^{-1}(\boldsymbol{z}^{(i)};\bar{\Theta}_f), \tag{26}$$
with $\boldsymbol{z}^{(i)}$ being sampled from the prior distribution. We then minimize the discretized cross entropy (25) to obtain an approximation of $\hat{p}^*$. The choice of $\bar{\Theta}_f$ will be specified in section 4.3 when the adaptive sampling method is defined.
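A minimal sketch of the cross-entropy step (25) is given below. To keep the sketch self-contained, KRnet is replaced by a trainable diagonal Gaussian (a deliberately crude stand-in with the same interface: explicit log-density and sampling), the proposal is a uniform density, and `residual2` is an assumed residual-squared function; none of these are the paper's actual choices.

```python
import math
import torch

def residual2(x):
    # unnormalized target ~ r^2(x): a bump centered at the origin in 2D (assumed)
    return torch.exp(-20.0 * (x ** 2).sum(dim=1))

class GaussianModel(torch.nn.Module):
    """Stand-in for KRnet: explicit log-density and sampling."""
    def __init__(self, d):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(d))
        self.log_sigma = torch.nn.Parameter(torch.zeros(d))

    def log_prob(self, x):
        z = (x - self.mu) / self.log_sigma.exp()
        return (-0.5 * (z ** 2).sum(dim=1) - self.log_sigma.sum()
                - 0.5 * x.shape[1] * math.log(2.0 * math.pi))

    def sample(self, n):
        eps = torch.randn(n, self.mu.shape[0])
        return self.mu + self.log_sigma.exp() * eps

model = GaussianModel(d=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Proposal: uniform on [-1, 1]^2, playing the role of the known density model
# used for importance sampling at the first stage.
N = 4000
x_q = 2.0 * torch.rand(N, 2) - 1.0
log_q = torch.full((N,), -math.log(4.0))          # log(1 / |[-1, 1]^2|)

for step in range(500):
    opt.zero_grad()
    w = residual2(x_q) / log_q.exp()              # importance weights ~ r^2 / q
    loss = -(w * model.log_prob(x_q)).mean()      # discretized cross entropy, cf. (25)
    loss.backward()
    opt.step()

print(model.mu.data, model.log_sigma.exp().data)  # mean near 0, spread matching the bump
```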
Remark 1.
An alternative approach for the approximation of $\hat{p}^*$ is to minimize the following KL divergence:
$$D_{\mathrm{KL}}\big(p_{\hat f}(\boldsymbol{x};\Theta_f) \,\|\, \hat{p}^*(\boldsymbol{x})\big), \tag{27}$$
which can be approximated by samples from $p_{\hat f}(\boldsymbol{x};\Theta_f)$. Note that the KL divergence is asymmetric. Minimizing the KL divergence (23) is not equivalent to the minimization of the KL divergence (27), although both minimizers will be achieved at $\hat{p}^*$ if $\hat{p}^*$ can be represented exactly by a certain parameter $\Theta_f$.
4.3 Adaptive sampling procedure
We are now ready to present our algorithms. In this work, we mainly focus on the adaptivity of $S_\Omega$ for simplicity. The key step of our adaptivity strategy is to improve the effectiveness of the random samples in the training set $S_\Omega$, and we provide two algorithms corresponding to the two options discussed in section 4.1.
I. We replace all the collocation points in the current training set with new samples from the probability measure defined by the current density model, which are then used for importance sampling. This corresponds to equation (11).
II. We gradually add more collocation points to the current training set. This corresponds to equation (14), where the new samples are mainly from the region of large residual.
We first present strategy I. Let $S_\Omega^{(0)}$ and $S_{\partial\Omega}$ be two sets of collocation points that are uniformly sampled from $\Omega$ and $\partial\Omega$, respectively. Using $S_\Omega^{(0)}$ and $S_{\partial\Omega}$, we minimize the empirical loss (3) to obtain $u(\boldsymbol{x};\Theta^{(1)})$. With $u(\boldsymbol{x};\Theta^{(1)})$, we minimize the cross entropy (25) to get the density model $p_{\hat f}(\boldsymbol{x};\Theta_f^{(1)})$, where we simply use uniform samples for importance sampling. To refine the training set, a new set $S_\Omega^{(1)}$ is generated by the inverse of $\hat{f}(\cdot;\Theta_f^{(1)})$ (see equation (18)). Then we continue to update the approximate solution using $S_\Omega^{(1)}$ as the training set. In general, we use $p_{\hat f}(\boldsymbol{x};\Theta_f^{(k)})$ to obtain $u(\boldsymbol{x};\Theta^{(k+1)})$ as
$$\Theta^{(k+1)} = \operatorname*{arg\,min}_{\Theta} J^{(k)}\big(u(\boldsymbol{x};\Theta)\big),$$
where $\Theta$ is initialized as $\Theta^{(k)}$ and $J^{(k)}$ is defined as
$$J^{(k)}\big(u(\boldsymbol{x};\Theta)\big) = \frac{1}{N_r}\sum_{\boldsymbol{x}_\Omega^{(i)} \in S_\Omega^{(k)}} \frac{r^2(\boldsymbol{x}_\Omega^{(i)};\Theta)}{p_{\hat f}(\boldsymbol{x}_\Omega^{(i)};\Theta_f^{(k)})} + \frac{\gamma}{N_b}\sum_{\boldsymbol{x}_{\partial\Omega}^{(i)} \in S_{\partial\Omega}} b^2(\boldsymbol{x}_{\partial\Omega}^{(i)};\Theta). \tag{28}$$
Starting with $p_{\hat f}(\boldsymbol{x};\Theta_f^{(k)})$, the density model is updated as
$$\Theta_f^{(k+1)} = \operatorname*{arg\,min}_{\Theta_f} H\big(\hat{p}^*_{k+1},\, p_{\hat f}(\cdot;\Theta_f)\big), \tag{29}$$
where $\hat{p}^*_{k+1}$ denotes the modified PDF (20) induced by $r(\boldsymbol{x};\Theta^{(k+1)})$ and we let $\bar{\Theta}_f = \Theta_f^{(k)}$ in equation (25), i.e., the previous PDF model is used for importance sampling when computing the cross entropy. A new set $S_\Omega^{(k+1)}$ of collocation points is then generated. As detailed in section 4.2, the support of the data points generated by KRnet is $\Omega_\delta$, while the computational domain is $\Omega$. So we need to deal with the collocation points located in $\Omega_\delta \setminus \Omega$. Instead of neglecting these points, we project them onto $\partial\Omega$. We define an entry-wise projection operator $P$ as
$$\big(P(\boldsymbol{x})\big)_i = \max\big(-1, \min(1, x_i)\big), \quad i = 1, \dots, d. \tag{30}$$
For a sequence of i.i.d. samples $\{\boldsymbol{z}^{(i)}\}$ generated from the standard Gaussian, we compute $\boldsymbol{x}^{(i)} = \hat{f}^{-1}(\boldsymbol{z}^{(i)};\Theta_f^{(k+1)})$. If $\boldsymbol{x}^{(i)} \in \Omega$, we assign $\boldsymbol{x}^{(i)}$ to $S_\Omega^{(k+1)}$; otherwise, we add $P(\boldsymbol{x}^{(i)})$ to $S_{\partial\Omega}$. The updated training sets $S_\Omega^{(k+1)}$ and $S_{\partial\Omega}$ will be used for the next training stage. This procedure is repeated until the stopping criterion is satisfied (see Algorithm 1). Since the collocation points in $S_\Omega^{(k)}$ are completely replaced at the next stage, we call this type of deep adaptive sampling strategy DAS-R for short. The alternative approach given in Remark 1 can also be used to obtain $\Theta_f^{(k+1)}$.
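A sketch of this refinement step is given below (assuming $\Omega = [-1,1]^d$). Clamping is used as the entry-wise projection of equation (30), and `density_model.sample` is a hypothetical interface standing in for sampling KRnet through the prior; it is not the paper's implementation.

```python
import torch

def refine_training_set(density_model, n_points):
    """Draw candidate collocation points and split them into interior/boundary sets."""
    x = density_model.sample(n_points)            # candidate points (possibly slightly outside Omega)
    inside = (x.abs() <= 1.0).all(dim=1)
    s_interior = x[inside]                        # new interior set S_Omega
    s_boundary = x[~inside].clamp(-1.0, 1.0)      # projected points, appended to S_{partial Omega}
    return s_interior, s_boundary

# In DAS-R, the interior training set is *replaced* by s_interior at every adaptivity
# stage, while s_boundary is appended to the boundary training set.
```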
We now look at strategy II. Unlike DAS-R, the number of collocation points in the training set increases gradually, so we denote this type of deep adaptive sampling strategy by DAS-G for short. Starting with an initial set $S_\Omega^{(0)}$ of collocation points (as well as $S_{\partial\Omega}$) drawn from a uniform distribution, we minimize the empirical loss (3) on the training set $S_\Omega^{(0)}$ (as well as $S_{\partial\Omega}$) to obtain $u(\boldsymbol{x};\Theta^{(1)})$. Once we have $u(\boldsymbol{x};\Theta^{(1)})$ in hand, we can seek $p_{\hat f}(\boldsymbol{x};\Theta_f^{(1)})$ using the residual $r(\boldsymbol{x};\Theta^{(1)})$. We here use uniform samples to approximate and minimize the cross entropy (25). Similar to DAS-R, a new set of collocation points is generated by the inverse of $\hat{f}(\cdot;\Theta_f^{(1)})$, while the main difference is that the training set is augmented with the new samples rather than replaced by them. We continue to update the approximate solution using $\Theta^{(1)}$ as the initial parameters and the augmented set as the training set, which yields a refined model $u(\boldsymbol{x};\Theta^{(2)})$. Starting from $p_{\hat f}(\boldsymbol{x};\Theta_f^{(1)})$, we seek $p_{\hat f}(\boldsymbol{x};\Theta_f^{(2)})$ using the approach given in Remark 1. We repeat this procedure to obtain an adaptive algorithm (see Algorithm 2).
Our two adaptive training methods are summarized in Algorithm 1 (DAS-R) and Algorithm 2 (DAS-G), whose inputs include a given maximum number of adaptivity iterations, the batch size for the stochastic gradient method, and the number of epochs for training $u(\boldsymbol{x};\Theta)$ and $p_{\hat f}(\boldsymbol{x};\Theta_f)$. The algorithms consist of three steps: solving the PDE, training KRnet, and refining the training set.
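The following structural sketch illustrates the DAS-G loop; the callables `train_pde`, `train_density`, and `sample_density` are hypothetical interfaces wrapping the steps sketched earlier (they are not part of the paper's code), and $\Omega = [-1,1]^d$ is assumed.

```python
import torch

def das_g(train_pde, train_density, sample_density,
          s_interior, s_boundary, n_new, n_stages):
    """Adaptive loop: train PDE net, train density model, augment the training set."""
    theta_u, theta_f = None, None
    for k in range(n_stages):
        # 1. Solve the PDE: minimize the empirical loss (3) on the current sets.
        theta_u = train_pde(s_interior, s_boundary, init=theta_u)
        # 2. Train the density model on the residual induced by the current solution.
        theta_f = train_density(theta_u, init=theta_f)
        # 3. Refine: augment the training set with samples from the density model.
        x_new = sample_density(theta_f, n_new)
        inside = (x_new.abs() <= 1.0).all(dim=1)
        s_interior = torch.cat([s_interior, x_new[inside]], dim=0)
        s_boundary = torch.cat([s_boundary, x_new[~inside].clamp(-1.0, 1.0)], dim=0)
    return theta_u, theta_f, s_interior, s_boundary
```

DAS-R follows the same three steps, except that step 3 replaces `s_interior` instead of concatenating to it.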
Remark 2.
In both strategies, we use uniform samples to approximate and minimize the cross entropy to obtain the first density model, after which either equation (25) or equation (27) can be employed. The main reason for doing this is to use uniform samples to capture the modes if the residual-induced distribution is multimodal. The obtained PDF provides a good initialization when equation (27) is employed to seek the subsequent density models. This choice is usually not necessary if the modes are not strongly localized or if the location of the modes can be encoded into the prior distribution of KRnet through a Gaussian mixture model.
5 Analysis of DAS
As discussed in section 4.3, the key point of our DAS method is to achieve variance reduction for the discretization of the residual loss, based on which we expect to improve the accuracy of the approximate solution. Under certain conditions, we show that the expected error bound becomes smaller when the adaptive sampling strategy is employed.
Assumption 1.
The above condition is called the stability bound [35], which is essential to the existence and uniqueness of problem (1). In addition to this assumption, the following two assumptions on the relationship between the density model and the residual are given.
Assumption 2.
Assume that $p_{\hat f}(\boldsymbol{x};\Theta_f^{(k)})$ is the optimal candidate for the change of measure in equation (11), i.e.,
$$p_{\hat f}(\boldsymbol{x};\Theta_f^{(k)}) = \frac{r^2(\boldsymbol{x};\Theta^{(k)})}{c_k}, \tag{32}$$
where $c_k$ is the normalization constant.
Assumption 3.
Let
be the discrete residual loss at the $k$-th stage, where each collocation point is drawn from $p_{\hat f}(\boldsymbol{x};\Theta_f^{(k)})$. Assume that this holds almost surely for each $k$.
At each adaptivity stage, the error of the approximate solution is estimated as follows.
Theorem 1.
The approximate solutions at two adjacent adaptivity stages satisfy:
Corollary 1.
Theorem 1 provides an error estimate for the approximate solution similar to the results in [40]. Corollary 1 outlines the error behavior of a sequence of approximate solutions induced by adaptivity. However, it is not quite straightforward to quantify the decay of the error due to adaptive refinement. For example, for DAS-R the reduction in variance is up to a tail probability as shown in Lemma 2 and for DAS-G the loss is changed at each adaptivity stage due to the modification of the training set. These issues are left for future study.
6 Numerical experiments
In this section, we conduct numerical experiments to demonstrate the effectiveness of the proposed DAS method, including two low-dimensional, low-regularity test problems, one high-dimensional linear test problem, and one high-dimensional nonlinear test problem. Due to the curse of dimensionality, data are sparse in high-dimensional spaces [20, 19, 21], which implies that effective samples should be able to deal with localized information. We mainly use the low-dimensional, low-regularity test problems to demonstrate that the sampling strategy significantly affects the performance of the neural network approximation if the residual is strongly localized. For comparison, we also test the performance of a heuristic method based on residual refinement [52, 42] (see section 6.2 and section 6.3) for high-dimensional problems, where the heuristic method searches uniform samples to find the points with large residuals and adds them to the current training set. The heuristic method is similar to DAS-G, where the main difference is that the selection of new samples in DAS-G is completely guided by an optimization problem while the heuristic method relies partially on the user's intuition. All deep neural network models are trained by the ADAM method [34]. The penalty parameter $\gamma$ in equation (3) is set to a fixed value for all tests. The activation function of $u(\boldsymbol{x};\Theta)$ is set to the hyperbolic tangent function. The activation function of KRnet is the rectified linear unit (ReLU) function since we only use KRnet for density approximation.
6.1 Low-dimensional and low regularity test problems
In this part, two-dimensional low regularity problems are considered, where the solution of the first one has a peak and the solution of the second one has two peaks.
6.1.1 Two-dimensional peak problem
The following elliptic equation is considered
$$-\Delta u(\boldsymbol{x}) = s(\boldsymbol{x}), \quad \boldsymbol{x} \in \Omega, \qquad u(\boldsymbol{x}) = g(\boldsymbol{x}), \quad \boldsymbol{x} \in \partial\Omega, \tag{33}$$
where the computational domain is a square domain. In order to quantify the error, we use a reference solution
which has a peak at an interior point and decreases rapidly away from it. This test problem is often used to test the performance of adaptive finite element methods [53, 24].
We choose a six-layer fully connected neural network to approximate the solution. For KRnet, we take a number of affine coupling layers, with two fully connected layers in each affine coupling layer. The number of epochs for training both $u(\boldsymbol{x};\Theta)$ and $p_{\hat f}(\boldsymbol{x};\Theta_f)$, the learning rate for the ADAM optimizer, and the batch size are fixed. To assess the effectiveness of our DAS methods, we generate a uniform meshgrid in $\Omega$ and compute the mean square error on these grid points.
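For reference, a six-layer fully connected tanh network of the kind described above can be assembled as follows; the width (32) and the exact depth convention are illustrative assumptions rather than the paper's settings.

```python
import torch

width = 32
layers = [torch.nn.Linear(2, width), torch.nn.Tanh()]
for _ in range(4):                                   # four hidden-to-hidden layers
    layers += [torch.nn.Linear(width, width), torch.nn.Tanh()]
layers += [torch.nn.Linear(width, 1)]                # scalar output u(x; Theta)
u_net = torch.nn.Sequential(*layers)
```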
In Figure 1, we plot the approximation errors given by different sampling strategies with respect to the sample size in the left plot and with respect to the number of epochs in the right plot. For each sample size, we take three runs with different random seeds for initialization and compute the mean error of the three runs as the final error. For the DAS strategies, the number of adaptivity iterations is fixed for each sample size (see section 4.3). For the uniform sampling strategy, the number of epochs is set to be the same as the total number of epochs of each DAS method. It is clear that for this test problem the DAS methods (DAS-G and DAS-R) have a better performance than the uniform sampling strategy and that DAS-R performs better than DAS-G. For the same sample size, both DAS-R and DAS-G yield a smaller error than the uniform sampling method. In terms of the number of epochs, the errors of DAS-R and DAS-G decay in a more consistent way than those of the uniform sampling method.
In Figure 2 we compare the exact solution, the DAS solutions given by nonuniform samples and the approximate solution given by uniform samples. It is seen that the DAS methods are much more effective than the uniform sampling method in capturing the information in the region of low regularity. Figure 3 shows the error evolution of DAS-R at different adaptivity iteration steps. It is seen that the approximation error drops as the adaptivity iteration step increases, which is consistent with Corollary 1, and the relaxation time of the optimization iterations is reduced as well. Figure 4 shows the evolution of the training sets of the DAS-R method with respect to the adaptivity iteration, where the initial training set consists of uniform collocation points on $\Omega$ (see section 4.3). It is seen that the largest density of collocation points for DAS-R is around the peak since the training set is consistent with the residual-induced distribution. However, we also expect that the tail of the residual-induced distribution becomes heavier as the iteration index increases since the adaptivity tries to make the residual-induced distribution more uniform, which is illustrated by the training sets at the later iterations.
6.1.2 Two-dimensional test problem with two peaks
In this test problem, we consider the following equation
$$-\Delta u(\boldsymbol{x}) = s(\boldsymbol{x}), \quad \boldsymbol{x} \in \Omega, \qquad u(\boldsymbol{x}) = g(\boldsymbol{x}), \quad \boldsymbol{x} \in \partial\Omega, \tag{34}$$
where the computational domain is a square domain. The exact solution of (34) is chosen as a function
which has two peaks at two interior points. Here, the Dirichlet boundary condition on $\partial\Omega$ is given by the exact solution.
We choose a six-layer fully connected neural network to approximate the solution of (34). For KRnet, we take a number of affine coupling layers, with two fully connected layers in each affine coupling layer. The number of epochs for training both $u(\boldsymbol{x};\Theta)$ and $p_{\hat f}(\boldsymbol{x};\Theta_f)$, the learning rate for the ADAM optimizer, and the batch size are fixed. Again, we generate a uniform meshgrid in $\Omega$ and compute the mean square error on these grid points to assess the effectiveness of our DAS methods.
Figure 5 shows the approximation errors for this test problem, where the left plot displays the errors with respect to the sample size for different sampling strategies, and the right plot shows the error evolution of DAS-G at different adaptivity iteration steps. For each sample size, we again take three runs with different random seeds for initialization and compute the mean error of the three runs as the final error. For the DAS-G strategy (and also for DAS-R), the number of adaptivity iterations is fixed, and the number of collocation points added at each stage is chosen according to the total sample size. For the uniform sampling strategy, we train the model with enough epochs to match the total number of epochs of the DAS methods. From Figure 5, it is seen that for this test problem our DAS methods (DAS-G and DAS-R) have a better performance than the uniform sampling strategy and that DAS-G performs better than DAS-R. It is also seen that the error decreases as the adaptivity iteration step increases.
In Figure 6 we compare the exact solution, the DAS solutions given by nonuniform samples and the approximate solution given by uniform samples. It is seen that the DAS methods are much more effective than the uniform sampling method in capturing the information around the two peaks. Figure 7 shows the evolution of the training set of the DAS-G method with respect to the adaptivity iteration, where the initial training set consists of uniform collocation points on $\Omega$ (see section 4.3). The first batch of generated samples shows that the error profile has two peaks. After the training set is augmented with these samples, the error profile becomes flatter, as shown by the distribution of the next batch. After the training set is augmented again, the largest error is found once more around the two peaks, and then the subsequent augmentation of the training set yields a flatter error profile. Such a pattern is repeated until no improvement can be reached.
6.2 High-dimensional linear test problems
Next we consider the $d$-dimensional elliptic equation
(35)
with an exact solution
where the Dirichlet boundary condition on $\partial\Omega$ is given by the exact solution. We are interested in cases with a large $d$. Note that the geometric properties of high-dimensional spaces are significantly different from our intuitions about low-dimensional ones, e.g., most of the volume of a high-dimensional cube is located around its corners [20, 19, 21]. If we use uniform samples to generate $S_\Omega$, most of the collocation points in $S_\Omega$ are near the surface of the hypercube. Since the information of the exact solution mainly comes from the neighborhood of the origin, most of the samples in $S_\Omega$ may not contribute to training the neural network when $d$ is large enough.
We choose a six-layer fully connected neural network with 64 neurons per hidden layer to approximate the solution. For KRnet, we take a number of affine coupling layers, with two fully connected layers in each affine coupling layer. The number of epochs for training both $u(\boldsymbol{x};\Theta)$ and $p_{\hat f}(\boldsymbol{x};\Theta_f)$, the learning rate for the ADAM optimizer, the batch size, and the number of adaptivity iterations are fixed. To measure the quality of the approximation, we generate a tensor grid around the origin with a fixed number of nodes per dimension. We define the relative error
$$\text{error} = \frac{\|\boldsymbol{u}_\Theta - \boldsymbol{u}\|_2}{\|\boldsymbol{u}\|_2},$$
where $\boldsymbol{u}_\Theta$ and $\boldsymbol{u}$ denote two vectors whose elements are the function values of $u(\boldsymbol{x};\Theta)$ and the exact solution $u(\boldsymbol{x})$ at the tensor grid points, respectively.
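A sketch of this metric is given below; the number of nodes per dimension, the half-width of the grid around the origin, and the interfaces of `u_net` and `u_exact` are assumed details.

```python
import numpy as np
import torch

def relative_error(u_net, u_exact, d, n_per_dim=3, half_width=0.25):
    """Relative l2 error on a tensor grid around the origin in d dimensions."""
    axes = [np.linspace(-half_width, half_width, n_per_dim)] * d
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, d)
    with torch.no_grad():
        pred = u_net(torch.tensor(grid, dtype=torch.float32)).squeeze(-1).numpy()
    exact = u_exact(grid)                       # assumed to return an array of shape (n_per_dim**d,)
    return float(np.linalg.norm(pred - exact) / np.linalg.norm(exact))
```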
We first investigate the relation between the error and the dimensionality when the uniform sampling strategy is employed. Figure 8(a) shows the relative errors for a varying dimension $d$ and a fixed sample size. To roughly match the number of grid points for different $d$, the number of nodes per dimension is adjusted with $d$. It is seen that the relative error quickly grows to $O(1)$ as $d$ increases. However, as shown in Figure 8(b), all training losses eventually become close to zero. This is consistent with the fact that in a high-dimensional space most of the uniform samples are located around the boundary, where the solution is close to zero. The optimizer is then in favor of the trivial solution since there are not sufficient samples to resolve the peak at the origin. This phenomenon demonstrates that the uniform sampling method may become less effective as $d$ increases and that the convergence of the approximate solution is highly dependent on the choice of $S_\Omega$ for a large $d$.
Figure 9 shows the relative errors for the uniform sampling strategy, the residual-based adaptive refinement (RAR) method proposed in [52], DAS-R and DAS-G, where different numbers of samples are considered. For each sample size, we again take three runs with different random seeds for initialization and compute the mean error of the three runs as the final error. For the DAS-G strategy, the number of collocation points added at each stage is chosen according to the total sample size. For the uniform sampling strategy, we train the model with enough epochs to match the total number of epochs of the DAS methods. For the heuristic method RAR, the number of collocation points added at each step is also chosen according to the total sample size. From Figure 9, it can be seen that both DAS-G and DAS-R improve the accuracy significantly compared to the uniform sampling strategy and RAR. In addition, the error of DAS-G decreases slightly faster than that of DAS-R for this test problem. In Figure 10 we compare the error evolution of the different sampling strategies. From the left plot of Figure 10, as the number of epochs increases, the errors of DAS-G and DAS-R decrease quickly, while the errors of RAR and the uniform sampling strategy do not decrease. This result suggests that for high-dimensional problems the DAS methods are able to achieve a good approximation with a relatively small number of nonuniform samples, while many more uniform samples are needed for the same accuracy. The right plot of Figure 10 shows the error of DAS-G at each adaptivity iteration step. It is seen that the error drops dramatically after we refine the solution using the newly generated collocation points.
Figures 11 and 12 show samples from the training sets of DAS-R and DAS-G for the first four adaptivity iterations, where two of the components are used for visualization. We have also checked the other components, and no significantly different results were found. For DAS-R, the samples are randomly chosen from the training set at each stage. For DAS-G, the samples for visualization are randomly selected from the newly added collocation points at each stage. It is seen that the sample profile is gradually flattened as the adaptivity iteration step increases, meaning that the nonuniform samples are able to smooth the error profile, which has a peak around the origin. As for DAS-G, the improvement takes a similar path. The first batch of generated samples shows that the error profile has a peak around the origin. After the training set is augmented with these samples, the error profile becomes flatter, as shown by the distribution of the next batch. This is expected since more collocation points are added to the neighborhood of the origin, which should reduce the error there. Such a pattern is similar to what we have observed in Figure 7. In Figure 13, we compare the evolution of the variance of the residual during training for a fixed sample size. We estimate the variance of the residual using the grid points around the origin (these points are also used to compute the relative errors in the above discussion). It is clear that both DAS-R and DAS-G achieve a significant variance reduction compared with RAR, which helps reduce the statistical error dramatically for a fixed sample size. Looking more closely, the variance of DAS-R has a transition between two consecutive adaptivity iterations, resulting in the oscillation of the errors for DAS-R observed in the left plot of Figure 10. From Figure 13, it can be seen that DAS-G appears more robust than DAS-R for this test problem. We may adjust the communication pattern between the PDE model and the PDF model to reduce the oscillations, which is left for future study.
6.3 High-dimensional nonlinear test problem
In this part, the ten-dimensional nonlinear partial differential equation considered is
(36)
The exact solution is set to be the same as that of (35), and the Dirichlet boundary condition on $\partial\Omega$ is given by the exact solution. The settings of $u(\boldsymbol{x};\Theta)$ and KRnet are the same as those in section 6.2.
The observations are similar to those for the high-dimensional linear problem. Figure 14 shows the relative errors for the uniform sampling strategy, RAR, DAS-R and DAS-G, where different numbers of samples are considered. Again, we take three runs with different random seeds for initialization and compute the mean error of the three runs as the final error for each sample size. For the DAS-G strategy, the numbers of collocation points added at each stage are set to be the same as those in section 6.2. For the uniform sampling strategy, we train the model with enough epochs to match the total number of epochs of the DAS methods. For the heuristic method RAR, the number of collocation points added at each step is chosen according to the total sample size. Both DAS-G and DAS-R improve the accuracy significantly compared to the uniform sampling strategy and RAR. In Figure 15 we compare the error evolution of the different sampling strategies. Similar to the high-dimensional linear problem, the errors of DAS-G and DAS-R decrease quickly while the errors of the uniform sampling strategy and RAR do not decrease. The error behavior of DAS-G at each adaptivity iteration step is shown in the right plot of Figure 15. It is seen that the approximation is significantly improved during the first few adaptivity iterations. Figures 16 and 17 show samples from the training sets of DAS-R and DAS-G for the first four adaptivity iterations, where two of the components are used for visualization. For DAS-R, the samples are randomly chosen from the training set at each stage. For DAS-G, the samples for visualization are randomly selected from the newly added collocation points at each stage. Both DAS-R and DAS-G flatten the error profile through adaptive sampling, as we have observed in Figures 11 and 12. Figure 18 shows the evolution of the variance of the residual for DAS-R, DAS-G and RAR. The behavior is similar to that in Figure 13.
7 Conclusion
In this paper we have developed a deep adaptive sampling (DAS) method and coupled it with physics-informed neural networks (PINNs) to improve the neural network approximation of PDEs iteratively. The key idea of DAS is to employ a deep generative model to generate collocation points that are consistent with the distribution induced by an appropriate error indicator function. In this way, the training set is refined according to the regularity of the PDE solution, which follows a principle similar to the adaptive mesh refinement of classical numerical methods. Numerical experiments have shown that the DAS method is able to significantly improve the accuracy of the approximation of low-regularity problems, especially when the dimensionality is relatively large. The proposed DAS method provides a very general and flexible framework for an adaptive learning strategy. There are several possible ways to further improve it. First, DAS consists of two DNN-based models: one model serves as an approximator of the PDE solution and the other one serves as an error indicator for the selection of collocation points. Both models can be chosen in terms of a certain criterion. In this work, we use a regular DNN for PDE approximation and KRnet for density approximation and sample generation. Second, the underlying distribution for the training set can be problem dependent. In this work, we choose the residual-induced distribution. We may also use the gradient of the approximate solution to define an indicator distribution. In [22], we employ KRnet to approximate the Fokker-Planck equation, where the collocation points are sampled from the approximate solution. Third, the DAS method is not limited to steady-state PDE problems. We may employ the DAS method on the space-time domain to refine the training set for the approximation of time-dependent problems. Last but not least, the current training process can also be improved. Although the current DAS methods work well enough to demonstrate the effectiveness of the algorithm, many questions remain open, e.g., what is the optimal way for the two deep models to communicate and what is the optimal sample size at each adaptivity stage. Research on these issues will be reported in forthcoming papers.
Acknowledgments: K. Tang has been supported by the China Postdoctoral Science Foundation grant 2022M711730. X. Wan has been supported by NSF grant DMS-1913163. C. Yang has been supported by NSFC grant 12131002.
References
- [1] J. Han, A. Jentzen, E. Weinan, Solving high-dimensional partial differential equations using deep learning, Proceedings of the National Academy of Sciences 115 (34) (2018) 8505–8510.
- [2] E. Weinan, The dawning of a new era in applied mathematics, Notices of the American Mathematical Society 68 (4) (2021) 565–571.
- [3] G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, L. Yang, Physics-informed machine learning, Nature Reviews Physics 3 (6) (2021) 422–440.
- [4] W. E, B. Yu, The deep Ritz method: A deep learning-based numerical algorithm for solving variational problems, Communications in Mathematics and Statistics 6 (1) (2018) 1–12.
- [5] E. Kharazmi, Z. Zhang, G. E. Karniadakis, Variational physics-informed neural networks for solving partial differential equations, arXiv preprint arXiv:1912.00873.
- [6] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis, P. Perdikaris, Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data, Journal of Computational Physics 394 (2019) 56–81.
- [7] E. Kharazmi, Z. Zhang, G. E. Karniadakis, hp-VPINNs: Variational physics-informed neural networks with domain decomposition, Computer Methods in Applied Mechanics and Engineering 374 (2021) 113547.
- [8] J. Sirignano, K. Spiliopoulos, DGM: A deep learning algorithm for solving partial differential equations, Journal of Computational Physics 375 (2018) 1339–1364.
- [9] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics 378 (2019) 686–707.
- [10] G. Pang, L. Lu, G. E. Karniadakis, fPINNs: Fractional physics-informed neural networks, SIAM Journal on Scientific Computing 41 (4) (2019) A2603–A2626.
- [11] I. E. Lagaris, A. Likas, D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9 (5) (1998) 987–1000.
- [12] M. Dissanayake, N. Phan-Thien, Neural-network-based approximations for solving partial differential equations, Communications in Numerical Methods in Engineering 10 (3) (1994) 195–201.
- [13] K. Li, K. Tang, T. Wu, Q. Liao, D3M: A deep domain decomposition method for partial differential equations, IEEE Access 8 (2020) 5283–5294.
- [14] W. Li, X. Xiang, Y. Xu, Deep domain decomposition method: Elliptic problems, in: J. Lu, R. Ward (Eds.), Proceedings of The First Mathematical and Scientific Machine Learning Conference, Vol. 107 of Proceedings of Machine Learning Research, PMLR, Princeton University, Princeton, NJ, USA, 2020, pp. 269–286.
- [15] S. Dong, Z. Li, Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations, Computer Methods in Applied Mechanics and Engineering 387 (2021) 114129.
- [16] H. Gao, L. Sun, J.-X. Wang, Phygeonet: Physics-informed geometry-adaptive convolutional neural networks for solving parameterized steady-state PDEs on irregular domain, Journal of Computational Physics 428 (2021) 110079.
- [17] H. Sheng, C. Yang, PFNN: A penalty-free neural network method for solving a class of second-order boundary-value problems on complex geometries, Journal of Computational Physics (2020) 110085.
- [18] Y. Zang, G. Bao, X. Ye, H. Zhou, Weak adversarial networks for high-dimensional partial differential equations, Journal of Computational Physics 411 (2020) 109409.
- [19] A. Blum, J. Hopcroft, R. Kannan, Foundations of data science, Cambridge University Press, 2020.
- [20] R. Vershynin, High-dimensional probability: An introduction with applications in data science, Vol. 47, Cambridge University Press, 2018.
- [21] J. Wright, Y. Ma, High-dimensional data analysis with low-dimensional models: Principles, computation, and applications, Cambridge University Press, 2021.
- [22] K. Tang, X. Wan, Q. Liao, Adaptive deep density approximation for Fokker-Planck equations, Journal of Computational Physics 457 (2022) 111080.
- [23] Y. Gu, H. Yang, C. Zhou, Selectnet: Self-paced learning for high-dimensional partial differential equations, Journal of Computational Physics 441 (2021) 110444.
- [24] P. Morin, R. H. Nochetto, K. G. Siebert, Convergence of adaptive finite element methods, SIAM Review 44 (4) (2002) 631–658.
- [25] K. Mekchay, R. H. Nochetto, Convergence of adaptive finite element methods for general second order linear elliptic pdes, SIAM Journal on Numerical Analysis 43 (5) (2005) 1803–1827.
- [26] H. C. Elman, D. J. Silvester, A. J. Wathen, Finite elements and fast iterative solvers: With applications in incompressible fluid dynamics, Oxford University Press, USA, 2014.
- [27] K. Tang, X. Wan, Q. Liao, Deep density estimation via invertible block-triangular mapping, Theoretical & Applied Mechanics Letters 10 (2020) 143.
- [28] X. Wan, S. Wei, VAE-KRnet and its applications to variational Bayes, arXiv preprint arXiv:2006.16431.
- [29] X. Wan, K. Tang, Augmented KRnet for density estimation and approximation, arXiv preprint arXiv:2105.12866.
- [30] F. Santambrogio, Optimal transport for applied mathematicians, Birkhäuser, NY 55 (2015) 58–63.
- [31] L. Dinh, J. Sohl-Dickstein, S. Bengio, Density estimation using real NVP, arXiv preprint arXiv:1605.08803.
- [32] D. P. Kingma, P. Dhariwal, Glow: Generative flow with invertible 1x1 convolutions, in: Advances in Neural Information Processing Systems, 2018, pp. 10215–10224.
- [33] L. Bottou, F. E. Curtis, J. Nocedal, Optimization methods for large-scale machine learning, SIAM Review 60 (2) (2018) 223–311.
- [34] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
- [35] P. Bochev, M. Gunzburger, Least-squares methods for hyperbolic problems, in: Handbook of Numerical Analysis, Vol. 17, Elsevier, 2016, pp. 289–317.
- [36] G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of Control, Signals and Systems 2 (4) (1989) 303–314.
- [37] K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Networks 2 (5) (1989) 359–366.
- [38] K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks 4 (2) (1991) 251–257.
- [39] M. Leshno, V. Y. Lin, A. Pinkus, S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural Networks 6 (6) (1993) 861–867.
- [40] Y. Shin, Z. Zhang, G. E. Karniadakis, Error estimates of residual minimization using neural networks for linear PDEs, arXiv preprint arXiv:2010.08019.
- [41] J. Lu, Y. Lu, M. Wang, A priori generalization analysis of the deep ritz method for solving high dimensional elliptic equations, arXiv preprint arXiv:2101.01708.
- [42] J. Yu, L. Lu, X. Meng, G. E. Karniadakis, Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems, Computer Methods in Applied Mechanics and Engineering 393 (2022) 114823.
- [43] W. Gao, C. Wang, Active learning based sampling for high-dimensional nonlinear partial differential equations, arXiv preprint arXiv:2112.13988.
- [44] I. Kobyzev, S. J. Prince, M. A. Brubaker, Normalizing flows: An introduction and review of current methods, IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (11) (2020) 3964–3979.
- [45] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
- [46] M. Arjovsky, S. Chintala, L. Bottou, Wasserstein generative adversarial networks, in: International Conference on Machine Learning, PMLR, 2017, pp. 214–223.
- [47] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. C. Courville, Improved training of wasserstein gans, in: Advances in Neural Information Processing Systems, Vol. 30, 2017.
- [48] D. P. Kingma, M. Welling, Auto-Encoding Variational Bayes, stat 1050 (2014) 1.
- [49] G. Carlier, A. Galichon, F. Santambrogio, From Knothe’s transport to Brenier’s map and a continuation method for optimal transport, SIAM Journal on Mathematical Analysis 41 (6) (2010) 2554–2576.
- [50] P.-T. De Boer, D. P. Kroese, S. Mannor, R. Y. Rubinstein, A tutorial on the cross-entropy method, Annals of Operations Research 134 (1) (2005) 19–67.
- [51] R. Y. Rubinstein, D. P. Kroese, The cross-entropy method: a unified approach to combinatorial optimization, Monte-Carlo simulation and machine learning, Springer Science & Business Media, 2013.
- [52] L. Lu, X. Meng, Z. Mao, G. E. Karniadakis, Deepxde: A deep learning library for solving differential equations, SIAM Review 63 (1) (2021) 208–228.
- [53] W. F. Mitchell, A collection of 2D elliptic problems for testing adaptive grid refinement algorithms, Applied Mathematics and Computation 220 (2013) 350–364.
Appendix A Proof of Lemma 1
Proof.
We first introduce the following lemma:
Lemma 3.
Consider a perturbed identity matrix with . We have
(37) |
Proof.
For any , we have
which implies that is nonsingular. We then have
which yields the conclusion. ∎
Assume that and , where includes the basis functions in and the vectors and include the coefficients. It is easy to see that and satisfy the following linear systems respectively
(38) |
where
and indicates the inner product of and . We rewrite the linear system for as
(39) |
where and . Let . Since both and are continuous on a compact support, we may assume that for any and any , where is a constant. Using the Hoeffding bound, we have for any with probability at least ,
(40) |
for any . This means with probability at least we have
(41) |
where indicates the Frobenius norm. Let . It is seen that as . Assume that is large enough such that with , in other words, . Since are orthonormal, we have . From equations (38) and (41), we have
to which we apply Lemma 3 and the bounds in equation (39) and obtain
(42) |
Using the Pythagorean theorem, we have
which yields that
(43) |
∎
Appendix B Proof of Lemma 2
Proof.
Let
where . We consider
(44) |
The first term on the right-hand side is bounded as
(45) |
where the Cauchy-Schwarz inequality is used in the last step.
The second term on the right-hand side can be bounded as
(46) |
where in the last step we used the fact that for any variable , with probability 1. The third term on the right-hand side can be bounded the same way as .
We now estimate the tail probability. Using the correspondence between the $L^1$ norm and the total variation distance for two probability measures, as well as Pinsker's inequality, we have
(47) |
which yields that
(48) |
From the Markov inequality, we have
(49) |
where the probability is with respect to PDF . Combining the bounds for , , and equation (49), we reach the conclusion. ∎
Appendix C Proof of Theorem 1
Appendix D Proof of Corollary 1
Proof.
Noting that
Since is the optimal solution at the -th stage, we have
(52) |
Plugging into (52) gives that
Noting that is a random variable and taking its expectation, it follows that
which completes the proof. ∎