
Decoupling Numerical Method Based on Deep Neural Network for Nonlinear Degenerate Interface Problems

Chen Fan^{a}, Zhiyue Zhang^{a,*} (*Corresponding author. E-mail address: [email protected].)
^{a} School of Mathematical Sciences, Jiangsu Key Laboratory for NSLSCS,
Nanjing Normal University, Nanjing 210023, China

Abstract   Interface problems describe many fundamental physical phenomena and are widely applied in engineering. However, it is challenging to develop efficient fully decoupled numerical methods for degenerate interface problems, in which the coefficient of the PDE is discontinuous and greater than or equal to zero on the interface. The main motivation of this paper is to construct fully decoupled numerical methods for nonlinear degenerate interface problems with “double singularities”. We propose an efficient fully decoupled numerical method that combines a deep neural network on the singular subdomain with a finite difference method on the regular subdomain. The key of the new approach is to split the nonlinear degenerate partial differential equation with interface into two independent boundary value problems by means of deep learning. The outstanding advantages of the proposed scheme are that the convergence order on the whole domain is determined by the finite difference scheme on the regular subdomain, and that very big jump ratios (such as $10^{12}:1$ or $1:10^{12}$) can be computed for interface problems in both the degenerate and the non-degenerate case. The expansion of the solutions does not contain any undetermined parameters. In this way, two independent nonlinear systems are constructed on the subdomains and can be computed in parallel. The flexibility, accuracy and efficiency of the method are validated by various experiments in both 1D and 2D. In particular, our method is suitable for the degenerate interface case as well as the non-degenerate interface case. Application examples with complicated multi-connected and sharp-edge interfaces, including degenerate and nondegenerate cases, are also presented.
Key words   nonlinear degenerate interface problems; deep neural network; fully decoupled method; very big jump ratio; convergence order; sharp edge interface

Mathematics Subject Classification   34B16, 35R05, 65M85, 65N06, 68T99

1 Introduction

Nonlinear degenerate interface problems describe many fundamental physical phenomena in chemical and mechanical engineering, physics and many other applications [16, 57, 36, 44, 40, 21]. Standard interface problems have attracted great interest in numerical computation, with approaches such as the finite element method [21, 3, 11, 14, 51], the finite difference method [1, 59, 7, 6], the finite volume element method [47, 18, 12, 61, 48], the spectral method [42, 32, 13], the least-squares method [52] and references therein. There is also a great deal of rigorous mathematical theory and numerical analysis for degenerate PDEs [45, 46, 17, 5, 43, 50, 20, 9, 19, 60]. To the best of our knowledge, degenerate interface problems have received less attention so far; only a few notable approaches to degenerate PDEs with interfaces can be found in the literature [58, 59, 8]. As is well known, the difficulty lies in the “double singularities” of nonlinear degenerate interface problems, namely the degeneracy and the interface. Generally speaking, the most expensive part of numerical schemes for standard sharp interface problems [28, 30, 26, 31] is approximating the jump conditions well; many methods are interesting, but their techniques for treating the jump conditions are quite complicated. Our proposed approach, based on a deep neural network, treats the singularities with a different, simple and natural technique compared with the above references, and hence yields a numerical method for solving nonlinear degenerate interface problems. In fact, the challenge in numerically simulating nonlinear degenerate interface problems is to design methods that not only reduce the effect of the singularities at the degenerate points, but are also less dependent on, or independent of, the jump conditions. Because nonlinear degenerate interface problems possess “double singularities”, extremely fine grids, such as adaptive or graded meshes, are usually required to reduce the effect of the singularities; it is practically impossible for traditional numerical methods to solve nonlinear degenerate interface problems on uniform grids. The main goal of this paper is to present an efficient and fully decoupled finite difference method on uniform grids, based on a deep neural network, for solving nonlinear degenerate interface problems.

On the other hand, deep neural network (DNN) models have achieved great success in artificial intelligence, including high-dimensional problems in computer vision, natural language processing, time series analysis, and pattern and speech recognition. It is worth noting that even though there are universal approximation results for single-layer neural networks, the approximation theory of DNNs remains an open question. However, this should not prevent us from applying deep learning to other problems such as numerical weather forecasting, petroleum engineering, turbulent flow and interface problems. There are two main techniques for solving PDEs with deep learning. The first parameterizes the solution of the PDE by a DNN: a universal approximator based on a neural network, together with point collocation, transforms the PDE into an unconstrained minimization problem. The second transforms the original problem into an optimization problem in variational form, representing the trial functions by deep neural networks. Recently, we have noticed some gratifying works that use mesh-free methods with DNN models to solve PDEs and interface problems [29, 24, 49, 4]. In contrast, we use a structured mesh method with deep learning to deal with degenerate interface problems, which is challenging and always of great interest. Although boundary conditions are absent on the singular sub-domains, which is known to cause extreme ill-posedness, the DNN approach is shown to retain merits within a structured grid method. In addition, we previously used a hybrid asymptotic and augmented compact finite volume method to obtain a semi-decoupled numerical method on a uniform Cartesian mesh for a 1D degenerate interface problem [58]; this inspires us to develop a fully decoupled numerical method for degenerate PDEs with interfaces. Although there has been a great deal of nice work on interface problems [37, 27, 56, 8, 59, 28, 30, 26, 49, 47, 21], there are very few fully decoupled numerical methods on uniform grids for such interface problems, let alone for the interesting degenerate interface problems.

In this paper, we focus on constructing fully decoupled numerical algorithms based on deep learning for solving degenerate interface problems. The method not only effectively reduces the influence of the degeneracy and the interface, but also provides accurate solutions on a uniform Cartesian mesh. We construct two DNN structures near the interface instead of on the whole domain, and find the optimal solution by minimizing a mean squared error loss consisting of the equation and the interface conditions; the two parts are linked by the normal derivative jump conditions. We use the DNN to treat the considered problem on the singular sub-domains near the interface, and then obtain two independent, decoupled boundary value sub-problems without interface on the regular sub-domains; these two nonlinear systems can be computed in parallel. We find that our approach is simple and easy to implement, saves a lot of effort in handling the jump conditions, and allows existing methods to be used for the nonlinear sub-problems without interface. The choice of the singular sub-domain is natural since we use uniform grids, and programming the new scheme is straightforward because the algorithm is fully decoupled. Although deep learning has shown remarkable success on various hard problems in artificial intelligence, its limited approximation ability on uniform grids here results in two general boundary value sub-problems, which in turn yield satisfactory approximations of the solutions of such nonlinear degenerate interface problems: a loss that turns out to be a blessing in disguise. In fact, if deep learning were able to strictly decouple the degenerate interface problem at the interface into two degenerate PDEs, we would probably obtain nonlinear ill-conditioned systems for the corresponding discrete sub-problems, and we would then have to look for other special methods to treat the degenerate PDE or interface problems, as in the literature [38, 58] and references therein.

The purpose of the paper is to develop a new fully decoupled numerical method based on the DNN technique that not only effectively reduces the influence of the singularities and the interface, but also realizes a completely decoupled method with ideas different from the existing treatments of degenerate interface problems. No extra effort is needed to switch between the degenerate and the general interface case. The proposed approach fully decouples the problem into two problems without interface on uniform grids. Since our fully decoupled method is independent of the interface and the jump conditions, it not only yields two independent sub-problems, but can also easily treat very big jump ratios (such as $10^{12}:1$ or $1:10^{12}$). In addition, the computational cost is almost the same for the homogeneous and the non-homogeneous jump case, which numerically demonstrates the fully decoupled property of our method. The method is sufficiently robust and easily handles both the 1D and the 2D case; in particular, it easily handles hard problems such as sharp-edge interface problems. Our method applies robustly and efficiently to both general and degenerate interface problems, while an effective method for general interface problems is not necessarily suitable for such nonlinear degenerate interface problems. It is demonstrated that our method is a simple and direct way to deal with quite hard problems. It should be mentioned that the convergence order of the scheme on the entire domain for such degenerate PDEs with interface is determined by the convergence order of the sub-problems on the regular sub-domains. Numerical experiments show that the proposed approach effectively approximates the solutions of such hard degenerate interface problems, and the numerical results show great improvement over existing methods on hard cases [2]. From the method of [58] we know that it is impossible there to split degenerate or general interface problems into two independent boundary value problems; nevertheless, our algorithms achieve complete decoupling for degenerate interface problems thanks to deep learning. Although there are a few analytical results, the reason why deep neural networks coupled with traditional numerical methods perform so well for degenerate interface problems still largely remains a mystery; this encourages us to consider the theoretical approximation analysis in the future.

The rest of the paper is organized as follows. In Section 2, we give some preliminaries about deep neural networks. In Section 3, we describe the degenerate interface problem, the treatment of the interface and the full decoupling into two sub-problems, and construct the deep neural network structure and the finite difference scheme. We present numerical experiments, including some interesting models from mathematical physics, in Section 4. Some concluding remarks are given in the final section.

2 Deep Neural Network

The definition and attributes of the deep neural network (DNN), particularly its approximation property, are briefly discussed in this section [49].

In order to define a DNN, we need two ingredients. The first is a (vector) linear transformation $T:R^{n}\rightarrow R^{m}$, defined as $T(\bm{x})=A\bm{x}+\bm{b}$, where $A=(a_{i,j})\in R^{m\times n}$, and $\bm{x}$ and $\bm{b}$ are in $R^{n}$ and $R^{m}$ respectively. The second is a nonlinear activation function $\sigma:R\rightarrow R$. A commonly used activation function is the rectified linear unit (ReLU), defined as $ReLU(x)=\max(0,x)$ [35]. In this paper the exponential linear unit (ELU), defined as $ELU(x)=\max(0,x)+\min(0,e^{x}-1)$, will be used as the activation function; it is mainly used to avoid the vanishing gradient problem (Fig. 1). The (vector) activation function $\sigma:R^{m}\rightarrow R^{m}$ is defined by applying the activation function element-wise.

[Figure 1: Images of the activation functions. (a) ReLU; (b) ELU.]

Using these definitions, we can define a continuous function $F(\bm{x})$ by a composition of linear transforms and activation functions, i.e.,

F(\bm{x})=T^{k}\circ\sigma\circ T^{k-1}\circ\sigma\circ T^{k-2}\circ\dots\circ T^{0}(\bm{x}), (2.1)

where $T^{i}(\bm{x})=A_{i}\bm{x}+\bm{b}_{i}$, with $A_{i}$ and $\bm{b}_{i}$ undetermined matrices and vectors respectively, and $\sigma(x)$ is the element-wise activation function; the dimensions of $A_{i}$ and $\bm{b}_{i}$ are chosen so that (2.1) is meaningful. All indeterminate coefficients (e.g., $A_{i}$ and $\bm{b}_{i}$) in (2.1) are collected in $\bm{\theta}\in\Theta$, where $\bm{\theta}$ is a high-dimensional vector and $\Theta$ is the space of $\bm{\theta}$. The DNN representation of a continuous function can then be written as

F=F(\bm{x};\bm{\theta}). (2.2)

Let $\mathbb{F}=\{F(\bm{x};\bm{\theta})\mid\bm{\theta}\in\Theta\}$ denote the set of all functions expressible by the DNN parametrized by $\bm{\theta}\in\Theta$. The approximation property of the DNN, which is relevant to the study of a DNN model's expressive power, has been discussed in other papers [25, 53]. To accelerate the training of the neural network, we use the Adam optimizer [33], a version of the stochastic gradient descent (SGD) method, in the two-dimensional case [41].
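For concreteness, a minimal PyTorch sketch of such a network is given below. The constructor arguments (input dimension, width, depth) are illustrative assumptions, chosen to echo the configurations reported in the experiments later (4 hidden layers of width 6 in 1D, 6 hidden layers of width 15 in 2D).

```python
import torch
import torch.nn as nn

class DNN(nn.Module):
    """Network F(x; theta) of the form (2.1): alternating affine maps T^i
    and element-wise ELU activations, ending with a plain affine map T^k."""
    def __init__(self, dim_in=2, width=15, depth=6, dim_out=1):
        super().__init__()
        layers = [nn.Linear(dim_in, width), nn.ELU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.ELU()]
        layers.append(nn.Linear(width, dim_out))  # final affine map, no activation
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```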

[Figure 2: A diagram of the deep neural network architecture.]

3 2D Degenerate Elliptic Interface Problem

3.1 Problem description

Consider the following nonlinear degenerate elliptic equation with interface,

-\nabla\cdot(\beta(\bm{x})\nabla u)=f(\bm{x},u),\quad\text{ in }\Omega^{-}\cup\Omega^{+}, (3.1)
[u]=w,\quad\text{ on }\Gamma,
[\beta(\bm{x})\nabla u\cdot\bm{n}]=v,\quad\text{ on }\Gamma,
u=g,\quad\text{ on }\partial\Omega,

where $\Omega$ is a bounded domain in $R^{2}$ with Lipschitz boundary $\partial\Omega$, and the interface $\Gamma$ is closed and divides $\Omega$ into two disjoint sub-domains $\Omega^{-}$ and $\Omega^{+}$; $w$ and $v$ are two functions defined only along the interface $\Gamma$. The function $f(\bm{x},u)$ depends on $u$ and carries the nonlinearity, possibly in different nonlinear forms with respect to $u$. The coefficient $\beta$ is weakly degenerate (the degenerate points belong to the interface) and may also have other poor properties, such as $\infty\geq\beta\geq 0$ ($\beta$ tends to 0 on the interface). $[u]=u^{+}(\bm{x})-u^{-}(\bm{x})=w$ and $[\beta(\bm{x})\nabla u\cdot\bm{n}]=\beta^{+}(\bm{x})\nabla u^{+}\cdot\bm{n}-\beta^{-}(\bm{x})\nabla u^{-}\cdot\bm{n}=v$ are the differences of the limiting values from $\Omega^{+}$ and $\Omega^{-}$ respectively. Finally, $g$ is a given function on the boundary $\partial\Omega$.

3.2 DNN-FD method

In this research, we focus on using DNNs to develop fully decoupled numerical methods for solving degenerate interface problems. First, we divide the domain $\Omega$ into a uniform Cartesian mesh and use a DNN to solve the examined problem on the singular sub-domains near the interface; we then extract two decoupled boundary value sub-problems on the regular sub-domains with no interface. These two nonlinear systems can be computed in parallel by the finite difference method,

[Figure 3: A diagram of the method in the one-dimensional case.]

\text{(I) }\left\{\begin{array}{l}-\nabla\cdot(\beta^{-}(\bm{x})\nabla u^{-})=f^{-}(\bm{x},u^{-}),\quad\bm{x}\in\Omega_{1},\\ u^{-}=u^{-}_{t}(\bm{x};\bm{\theta}^{-}),\quad\bm{x}\in\Gamma^{-}.\end{array}\right. (3.2)

\text{(II) }\left\{\begin{array}{l}-\nabla\cdot(\beta^{+}(\bm{x})\nabla u^{+})=f^{+}(\bm{x},u^{+}),\quad\bm{x}\in\Omega_{2},\\ u^{+}=u^{+}_{t}(\bm{x};\bm{\theta}^{+}),\quad\bm{x}\in\Gamma^{+},\\ u^{+}=g,\quad\bm{x}\in\partial\Omega.\end{array}\right. (3.3)

where $f^{\pm}$, $\beta^{\pm}$ and $u^{\pm}$ denote the respective functions on $\Omega^{\pm}$; $\Omega_{1}$ and $\Omega_{2}$ are the regular domains shown in Fig. 3, and $u^{\pm}_{t}(\bm{x};\bm{\theta}^{\pm})$ are the outputs of the deep neural network constructed in the next section.

The proposed method has the advantage of totally decoupling the original problem while using uniform grids. Because our fully decoupled technique is independent of the interface and the jump conditions, it not only yields two nondegenerate sub-problems, but also easily handles interface problems with large jump ratios. The method handles both the 1D and the 2D case, and deals easily with difficulties such as sharp-edge interfaces. While an effective approach for general interface problems is not necessarily suitable for such nonlinear degenerate interface problems, our method applies robustly and efficiently to both general and degenerate interface problems.

3.2.1 Deep Neural Network Structure

In recent years, deep neural networks have shown strong abilities in various fields [54, 23, 39, 15], mainly reflected in nonlinear fitting, high-dimensional data processing, excellent fault tolerance and strong feature extraction. Here, we apply a DNN on the element mesh near the interface to handle the nonlinearity, the degeneracy and the interface singularity of the original problem.

We apply the DNN in the banded degenerate domain composed of the element grids near the interface in Fig. 4. We construct the DNN structure on this domain instead of the whole area to approximate the solution $u$. The reason is that we want to resolve the singularity on the interface through the characteristics of the DNN while avoiding the influence of the regular domains on the accuracy of the DNN; the regular domains can then be treated by better numerical methods. The problem is naturally separated into two nonsingular sub-problems [34, 22, 49],

u(\bm{x})\approx u_{t}(\bm{x};\bm{\theta})=\begin{cases}u^{-}_{t}(\bm{x};\bm{\theta}^{-}),&\text{ if }\bm{x}\in\Omega^{-}\setminus\Omega_{1},\\ u^{+}_{t}(\bm{x};\bm{\theta}^{+}),&\text{ if }\bm{x}\in\Omega^{+}\setminus\Omega_{2},\end{cases} (3.4)

u_{t}^{+}(\bm{x};\bm{\theta}^{+})=(|\bm{x}-\bm{x}_{0}|+1)\hat{g}(\bm{x}_{0})+|\bm{x}-\bm{x}_{0}|\hat{u}_{t}^{+}(\bm{x};\bm{\theta}^{+}), (3.5)

where $\bm{\theta}=(\bm{\theta}^{-};\bm{\theta}^{+})\in\Theta$, and the exact interface is the zero level set of the level set function, $\phi(\bm{x}_{0})=0$. Here $\hat{g}$ is an extension of $g$ near the interface, $|\cdot|$ is the Euclidean distance, and $\hat{u}_{t}^{+}$ is obtained from the deep learning network. The construction (3.5) ensures the uniqueness of the solution. Similarly, depending on the shape of the interface, $u^{-}_{t}(\bm{x};\bm{\theta}^{-})$ is constructed correspondingly. If the first jump condition across the interface is homogeneous, a single function $u_{t}(\bm{x};\bm{\theta})$ can be used to approximate the solution $u$.
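As an illustration, a sketch of the construction (3.5) might read as follows; the reference point `x0`, the extension value `g_hat_x0`, and the raw network `net_plus` (playing the role of $\hat{u}_{t}^{+}$) are assumed inputs rather than quantities fixed by the paper.

```python
import torch

def u_t_plus(x, x0, g_hat_x0, net_plus):
    """Ansatz (3.5): at x = x0 the distance factor vanishes, so the value
    reduces to g_hat(x0); this pins down the otherwise free constant."""
    d = torch.norm(x - x0, dim=-1, keepdim=True)  # Euclidean distance |x - x0|
    return (d + 1.0) * g_hat_x0 + d * net_plus(x)
```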

[Figure 4: A diagram of the method in the two-dimensional case.]

The structure of the DNN with four hidden layers is shown in Fig. 2. Sampling points are selected in two ways: interior points $\{\bm{x}_{k}\}_{k=1}^{M_{1}}$ and $\{\bm{x}_{k}\}_{k=1}^{M_{2}}$, chosen randomly on the degenerate domains, and nodes $\{\bm{x}_{k}\}_{k=1}^{M_{3}}$ on the element grids. To define the discrete loss function, all sampling points $\{\bm{x}_{k}\}_{k=1}^{M_{1}}$, $\{\bm{x}_{k}\}_{k=1}^{M_{2}}$, $\{\bm{x}_{k}\}_{k=1}^{M_{3}}$ need to meet the first condition in (3.1),

L_{1}(\bm{\theta}):=\frac{1}{M_{1}+M_{3}/2}\sum_{k=1}^{M_{1}+M_{3}/2}|-\nabla\cdot\beta^{-}\nabla u^{-}_{t}(\bm{x}_{k};\bm{\theta})-f^{-}(\bm{x}_{k})|^{2},\quad\bm{x}\in\Omega^{-}\setminus\Omega_{1}, (3.6)
L_{2}(\bm{\theta}):=\frac{1}{M_{2}+M_{3}/2}\sum_{k=1}^{M_{2}+M_{3}/2}|-\nabla\cdot\beta^{+}\nabla u^{+}_{t}(\bm{x}_{k};\bm{\theta})-f^{+}(\bm{x}_{k})|^{2},\quad\bm{x}\in\Omega^{+}\setminus\Omega_{2}. (3.7)

The nodes $\{\bm{x}_{k}\}_{k=1}^{M_{3}}$ also need to meet the jump conditions across the interface,

L_{3}(\bm{\theta}):=\frac{2}{M_{3}}\sum_{k=1}^{M_{3}}|u^{+}_{t}(\bm{x}_{i^{+}_{k},j^{+}_{k}};\bm{\theta})-u^{-}_{t}(\bm{x}_{i^{-}_{k},j^{-}_{k}};\bm{\theta})-w|^{2}, (3.8)
L_{4}(\bm{\theta}):=\frac{2}{M_{3}}\sum_{k=1}^{M_{3}}|\beta^{+}\nabla u^{+}_{t}(\bm{x}_{i^{+}_{k},j^{+}_{k}};\bm{\theta})\cdot\bm{n}-\beta^{-}\nabla u^{-}_{t}(\bm{x}_{i^{-}_{k},j^{-}_{k}};\bm{\theta})\cdot\bm{n}-v|^{2}. (3.9)

This structure resolves the singularity and the geometric irregularity on the interface. If we sampled points directly on the interface, the separated sub-problems would also be degenerate.
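One possible way to assemble the residual losses (3.6)-(3.7) and the jump losses (3.8)-(3.9) with automatic differentiation is sketched below. The coefficient functions `beta`, the sources `f`, the sampled tensors, the paired interface nodes, and the unit normals `n` are assumed to be supplied by the caller, and the $M_{1}+M_{3}/2$-type normalizations are replaced by plain means for brevity.

```python
import torch

def pde_residual(net, x, beta, f):
    """Mean squared residual of -div(beta grad u) - f at points x,
    as in (3.6)-(3.7); x has shape (num_points, dim)."""
    x = x.clone().requires_grad_(True)
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    flux = beta(x) * grad_u                      # beta(x): shape (N, 1)
    div = 0.0
    for k in range(x.shape[1]):                  # divergence, term by term
        div = div + torch.autograd.grad(flux[:, k].sum(), x,
                                        create_graph=True)[0][:, k]
    return ((-div - f(x).squeeze()) ** 2).mean()

def jump_losses(net_m, net_p, xm, xp, w, v, n, beta_m, beta_p):
    """Discrete jump conditions (3.8)-(3.9) at paired nodes xm, xp on the
    two sides of the interface; n holds the unit normals, shape (N, dim)."""
    L3 = ((net_p(xp) - net_m(xm) - w) ** 2).mean()
    xm = xm.clone().requires_grad_(True)
    xp = xp.clone().requires_grad_(True)
    gm = torch.autograd.grad(net_m(xm).sum(), xm, create_graph=True)[0]
    gp = torch.autograd.grad(net_p(xp).sum(), xp, create_graph=True)[0]
    flux_jump = (beta_p(xp) * gp - beta_m(xm) * gm) * n
    L4 = ((flux_jump.sum(dim=1) - v.squeeze()) ** 2).mean()
    return L3, L4
```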

In particular, there are two possible cases for the nodes. The first case is that the intersection of the interface and the grid is not a grid node, as shown in Fig. 4 for $\alpha_{1}$; we then use the nodes close to the intersection in the horizontal or vertical direction,

|u^{+}_{t}(\bm{x}_{i_{1},j_{1}};\bm{\theta})-u^{-}_{t}(\bm{x}_{i_{1}+1,j_{1}};\bm{\theta})-w|^{2}, (3.10)
|\beta^{+}\nabla u^{+}_{t}(\bm{x}_{i_{1},j_{1}};\bm{\theta})\cdot\bm{n}-\beta^{-}\nabla u^{-}_{t}(\bm{x}_{i_{1}+1,j_{1}};\bm{\theta})\cdot\bm{n}-v|^{2}. (3.11)

The second case is that the interface intersects the grid exactly at a node, such as $\alpha_{2}$; we then deal with it through the four nodes around it,

|u^{+}_{t}(\bm{x}_{i_{2},j_{2}};\bm{\theta})-u^{-}_{t}(\bm{x}_{i_{2}+2,j_{2}};\bm{\theta})-w|^{2}+|u^{+}_{t}(\bm{x}_{i_{2}+1,j_{2}-1};\bm{\theta})-u^{-}_{t}(\bm{x}_{i_{2}+1,j_{2}+1};\bm{\theta})-w|^{2}, (3.12)
|\beta^{+}\nabla u^{+}_{t}(\bm{x}_{i_{2},j_{2}};\bm{\theta})\cdot\bm{n}-\beta^{-}\nabla u^{-}_{t}(\bm{x}_{i_{2}+2,j_{2}};\bm{\theta})\cdot\bm{n}-v|^{2}+|\beta^{+}\nabla u^{+}_{t}(\bm{x}_{i_{2}+1,j_{2}-1};\bm{\theta})\cdot\bm{n}-\beta^{-}\nabla u^{-}_{t}(\bm{x}_{i_{2}+1,j_{2}+1};\bm{\theta})\cdot\bm{n}-v|^{2}. (3.13)

Now we are ready to define the total discrete loss function as follows:

L(\bm{\theta}):=w_{1}L_{1}(\bm{\theta})+w_{2}L_{2}(\bm{\theta})+w_{3}L_{3}(\bm{\theta})+w_{4}L_{4}(\bm{\theta}), (3.14)

where $w_{i}$, $i=1,2,3,4$, are weights used to handle problems with large jump ratios, so that the discrete loss terms are comparable at the same order of magnitude. After obtaining the approximation of the gradient with respect to $\bm{\theta}_{k}$, we update each component of $\bm{\theta}$ as

\bm{\theta}_{k}^{n+1}=\bm{\theta}_{k}^{n}-\left.\eta\frac{\partial L(\bm{\theta})}{\partial\bm{\theta}}\right|_{\bm{\theta}=\bm{\theta}^{n}_{k}}, (3.15)

where $\bm{\theta}_{k}$ is any component of $\bm{\theta}$ and $\eta$ is the learning rate. For simplicity, $\eta$ is taken as $10^{-4}$ unless specified otherwise.
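Putting (3.14) and (3.15) together, a training loop in the spirit of the method might read as follows, with the Adam optimizer [33] standing in for the plain gradient step. The point sets, jump data `w`, `v`, `normals`, the weights `w1..w4`, and the step budget `num_steps` are assumed inputs, and `DNN`, `pde_residual`, `jump_losses` refer to the sketches above.

```python
net_m, net_p = DNN(), DNN()                    # networks for the two sides
opt = torch.optim.Adam(list(net_m.parameters()) + list(net_p.parameters()),
                       lr=1e-4)                # learning rate eta = 1e-4

for step in range(num_steps):
    opt.zero_grad()
    L1 = pde_residual(net_m, x_minus, beta_m, f_m)       # loss (3.6)
    L2 = pde_residual(net_p, x_plus, beta_p, f_p)        # loss (3.7)
    L3, L4 = jump_losses(net_m, net_p, xm_nodes, xp_nodes,
                         w, v, normals, beta_m, beta_p)  # (3.8)-(3.9)
    loss = w1 * L1 + w2 * L2 + w3 * L3 + w4 * L4         # total loss (3.14)
    loss.backward()                                      # gradient dL/dtheta
    opt.step()                                           # update as in (3.15)
```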

3.2.2 Finite Difference Scheme

On the regular domains, we can use better numerical methods to improve the accuracy on the whole region. Here we use the finite difference method [6]. Take one of these areas as an example,

\text{(II) }\left\{\begin{array}{l}-\nabla\cdot(\beta^{+}(\bm{x})\nabla u^{+})=f^{+}(\bm{x},u^{+}),\quad\bm{x}\in\Omega_{2},\\ u^{+}=u^{+}_{t}(\bm{x};\bm{\theta}^{+}),\quad\bm{x}\in\Gamma^{+},\\ u^{+}=g,\quad\bm{x}\in\partial\Omega.\end{array}\right.

Suppose that the function $u^{+}$ is defined at the nodes $({x_{1}}_{i},{x_{2}}_{j})$ on the domain $\Omega=[a,b]\times[c,d]$, where

a={x_{1}}_{0}<{x_{1}}_{1}<{x_{1}}_{2}<\cdots<{x_{1}}_{i}<\cdots<{x_{1}}_{N-1}<{x_{1}}_{N}=b,
c={x_{2}}_{0}<{x_{2}}_{1}<{x_{2}}_{2}<\cdots<{x_{2}}_{j}<\cdots<{x_{2}}_{M-1}<{x_{2}}_{M}=d.

The steps are $h_{1}$ and $h_{2}$ respectively, with ${x_{1}}_{i}={x_{1}}_{0}+ih_{1}$ $(i=0,1,\cdots,N)$ and ${x_{2}}_{j}={x_{2}}_{0}+jh_{2}$ $(j=0,1,\cdots,M)$. By the Taylor formula, numerical computation usually uses the following first-order and second-order central difference quotients to approximate the first-order and second-order partial derivatives of $u^{+}$ at the node $({x_{1}}_{i},{x_{2}}_{j})$ respectively,

\delta_{x_{1}}u^{+}_{ij}=\frac{u^{+}_{i+1/2,j}-u^{+}_{i-1/2,j}}{h_{1}},~\delta_{x_{2}}u^{+}_{ij}=\frac{u^{+}_{i,j+1/2}-u^{+}_{i,j-1/2}}{h_{2}}, (3.16)
\delta_{x_{1}}^{2}u^{+}_{ij}=\frac{u^{+}_{i+1,j}-2u^{+}_{ij}+u^{+}_{i-1,j}}{h_{1}^{2}},~\delta_{x_{2}}^{2}u^{+}_{ij}=\frac{u^{+}_{i,j+1}-2u^{+}_{ij}+u^{+}_{i,j-1}}{h_{2}^{2}}, (3.17)

where ${x_{1}}_{i\pm 1/2}={x_{1}}_{i}\pm h_{1}/2$, ${x_{2}}_{j\pm 1/2}={x_{2}}_{j}\pm h_{2}/2$, and $u^{+}_{ij}$ is the approximate value of $u^{+}$ at the node.

For equation (II), the difference quotients are used to approximate the partial derivatives at the nodes, and the following difference equation is obtained on the domain $\Omega_{2}$:

\delta_{x_{1}}\left(\beta^{+}_{ij}\delta_{x_{1}}u^{+}_{ij}\right)+\delta_{x_{2}}\left(\beta^{+}_{ij}\delta_{x_{2}}u^{+}_{ij}\right)=f^{+}_{ij}, (3.18)

where $f^{+}_{ij}=f^{+}({x_{1}}_{i},{x_{2}}_{j},u^{+}_{ij})$. By substituting (3.16) and (3.17) into (3.18), we can get

\frac{1}{h_{1}^{2}}\left(\beta^{+}_{i+1/2,j}u^{+}_{i+1,j}-\left(\beta^{+}_{i+1/2,j}+\beta^{+}_{i-1/2,j}\right)u^{+}_{ij}+\beta^{+}_{i-1/2,j}u^{+}_{i-1,j}\right)+\frac{1}{h_{2}^{2}}\left(\beta^{+}_{i,j+1/2}u^{+}_{i,j+1}-\left(\beta^{+}_{i,j+1/2}+\beta^{+}_{i,j-1/2}\right)u^{+}_{ij}+\beta^{+}_{i,j-1/2}u^{+}_{i,j-1}\right)=f^{+}_{ij}, (3.19)

where $\beta^{+}_{ij}=\beta^{+}({x_{1}}_{i},{x_{2}}_{j})$, $\beta^{+}_{i\pm 1/2,j}=\beta^{+}({x_{1}}_{i\pm 1/2},{x_{2}}_{j})$, $\beta^{+}_{i,j\pm 1/2}=\beta^{+}({x_{1}}_{i},{x_{2}}_{j\pm 1/2})$, $i=1,\cdots,N-1$, $j=1,\cdots,M-1$.

After discretizing the boundary value conditions, we can get

u^{+}_{ij}=u^{+}_{t}({x_{1}}_{i},{x_{2}}_{j};\bm{\theta}^{+}),\quad({x_{1}}_{i},{x_{2}}_{j})\in\{\bm{x}_{k}\}_{k=1}^{M_{3}},
u^{+}_{0j}=g_{0j},\ u^{+}_{Nj}=g_{Nj},\ u^{+}_{i0}=g_{i0},\ u^{+}_{iM}=g_{iM},\quad i=0,\cdots,N,\ j=0,\cdots,M, (3.20)

where $g_{ij}=g({x_{1}}_{i},{x_{2}}_{j})$. Finally, the following iterative method is used to solve (3.19): set an initial value ${u^{+}_{ij}}^{(0)}$ $(i=1,\cdots,N-1,\ j=1,\cdots,M-1)$ and construct the sequence ${u^{+}_{ij}}^{(m)}$ $(i=1,\cdots,N-1,\ j=1,\cdots,M-1,\ m=0,1,\cdots)$ according to the following formula:

\frac{1}{h_{1}^{2}}\left(\beta^{+}_{i+1/2,j}u^{+(m)}_{i+1,j}-\left(\beta^{+}_{i+1/2,j}+\beta^{+}_{i-1/2,j}\right)u^{+(m)}_{ij}+\beta^{+}_{i-1/2,j}u^{+(m)}_{i-1,j}\right)+\frac{1}{h_{2}^{2}}\left(\beta^{+}_{i,j+1/2}u^{+(m)}_{i,j+1}-\left(\beta^{+}_{i,j+1/2}+\beta^{+}_{i,j-1/2}\right)u^{+(m)}_{ij}+\beta^{+}_{i,j-1/2}u^{+(m)}_{i,j-1}\right)=f^{+(m)}_{ij}. (3.21)

4 Numerical examples

In this section, we present some numerical results to illustrate the expected convergence rates for different configurations. The convergence order of the approximate solutions, as measured by the errors, is defined by

\text{order}=\log_{2}\left(\left\|u_{2h}-u\right\|_{L^{2}}/\left\|u_{h}-u\right\|_{L^{2}}\right),

where $u_{h}$ is the numerical solution with space step size $h$ and $u$ is the analytical solution.
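As a small utility, the observed order between two successive meshes can be computed directly from this definition; the function below is a sketch, checked against the first two $\Omega_{1}$-errors of Table 1.

```python
import numpy as np

def observed_order(err_2h, err_h):
    """Observed convergence order log2(||u_{2h}-u|| / ||u_h-u||)."""
    return np.log2(err_2h / err_h)

# e.g. Table 1, Omega_1 column: observed_order(4.77e-3, 1.15e-3) ~ 2.05
```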

4.1 1D degenerate interface with homogeneous jump conditions

Example 4.1. The degenerate differential equation with the homogeneous interface condition is solved on $\Omega^{-}=(0,1)$, $\Omega^{+}=(1,2)$, with the interface point $\alpha=1$. The boundary condition and the source function are chosen so that the exact solution is [58]

u(x)=\left\{\begin{array}{l}\frac{1}{\tau^{-}}\left(-\exp(1-x)^{1/2}+1\right),\quad x\in\Omega^{-},\\ \frac{1}{\tau^{+}}\left(\exp(x-1)^{1/2}-1\right),\quad x\in\Omega^{+}.\end{array}\right.

The coefficient $\beta$ is

\beta=\left\{\begin{array}{l}\tau^{-}(1-x)^{1/2},\quad x\in\Omega^{-},\\ \tau^{+}(x-1)^{1/2},\quad x\in\Omega^{+}.\end{array}\right.

Hence, the interface jump conditions are

[u]=w=0,\quad\left[\beta u_{x}\right]=v=0.
Table 1: $L^{2}$ errors and convergence orders ($\tau^{-}/\tau^{+}=10^{12}/1$) for Example 4.1.

N   | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10  | 4.77E-03 | -      | 6.14E-03 | -      | 4.39E-03 | -
20  | 1.15E-03 | 2.0508 | 1.73E-03 | 1.8239 | 1.22E-03 | 1.8462
40  | 3.02E-04 | 1.9286 | 4.70E-04 | 1.8830 | 3.10E-04 | 1.9767
80  | 7.77E-05 | 1.9608 | 1.23E-04 | 1.9278 | 8.07E-05 | 1.9428
160 | 1.98E-05 | 1.9668 | 3.19E-05 | 1.9520 | 2.03E-05 | 1.9864
[Figure 5: Comparison between exact and DNN-FD solutions for Example 4.1 (N=160). (a) $\tau^{-}/\tau^{+}=10^{12}/1$; (b) $\tau^{-}/\tau^{+}=1/10^{12}$.]
Table 2: $L^{2}$ errors and convergence orders ($\tau^{-}/\tau^{+}=1/10^{12}$) for Example 4.1.

N   | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10  | 1.42E-02 | -      | 7.75E-03 | -      | 7.30E-03 | -
20  | 2.94E-03 | 2.2767 | 2.23E-03 | 1.7977 | 1.84E-03 | 1.9834
40  | 6.61E-04 | 2.1546 | 5.23E-04 | 2.0903 | 4.76E-04 | 1.9547
80  | 1.57E-04 | 2.0719 | 1.34E-04 | 1.9623 | 1.16E-04 | 2.0376
160 | 3.60E-05 | 2.1253 | 3.24E-05 | 2.0512 | 2.80E-05 | 2.0496
[Figure 6: The decay of the loss functions for Example 4.1 (N=160). (a) $\tau^{-}/\tau^{+}=10^{12}/1$; (b) $\tau^{-}/\tau^{+}=1/10^{12}$.]

We test the current method on this classical interface problem with homogeneous jump conditions. The network uses 4 intermediate layers; the width of each layer is 6 and the number of sampling points is 202, including 200 interior points and two grid nodes. The numerical results of the current method for the very big jump ratios ($\tau^{-}/\tau^{+}=10^{12}/1$ and $\tau^{-}/\tau^{+}=1/10^{12}$) are shown in Table 1 and Table 2 respectively. It can be seen clearly that the numerical solution converges at second order in the $L^{2}$ norm. Fig. 5 shows the comparison between the exact solution and the numerical solution for the very big jump ratios when N=160. In Fig. 6, we present the decay of the loss function during the training process; eventually the error between the DNN solution and the exact solution reduces to about $O(10^{-4})$ near the interface.

Many other well-known methods usually report numerical results with jump ratios $\tau^{-}/\tau^{+}=10^{3}/1$ and $\tau^{-}/\tau^{+}=1/10^{3}$ for one- or two-dimensional interface problems [27], whereas the method used in this paper can compute jump ratios $\tau^{-}/\tau^{+}=10^{12}/1$ and $\tau^{-}/\tau^{+}=1/10^{12}$. The time required by the deep neural network to simulate the function is approximately 1263 seconds when N=160.

4.2 1D degenerate interface with nonhomogeneous jump conditions

Example 4.2. In this example, the computational domain and the interface (a point) are the same as in the previous example. The source function $f(x,u)$ is chosen such that the exact solution is as follows [58]:

u(x)=\left\{\begin{array}{l}u^{-}(x)=\exp\left((1-x)^{2/3}\right),\quad x\in\Omega^{-},\\ u^{+}(x)=\exp\left((x-1)^{1/2}\right)+5,\quad x\in\Omega^{+}.\end{array}\right.

The coefficient $\beta$ is

\beta=\left\{\begin{array}{l}\beta^{-}=\tau^{-}(1-x)^{1/3},\quad x\in\Omega^{-},\\ \beta^{+}=\tau^{+}(x-1)^{1/2},\quad x\in\Omega^{+}.\end{array}\right.

The experiment satisfies the following jump conditions,

[u]=w=5,\quad\left[\beta u_{x}\right]=v=\frac{1}{2}\tau^{+}+\frac{2}{3}\tau^{-}.
Table 3: $L^{2}$ errors and convergence orders ($\tau^{-}/\tau^{+}=10^{12}/1$) for Example 4.2.

N   | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10  | 1.23E-02 | -      | 5.80E-03 | -      | 6.54E-03 | -
20  | 4.00E-03 | 1.6279 | 1.57E-03 | 1.8822 | 1.64E-03 | 1.9962
40  | 1.03E-03 | 1.9515 | 4.07E-04 | 1.9519 | 4.55E-04 | 1.8471
80  | 2.68E-04 | 1.9493 | 9.93E-05 | 2.0349 | 1.15E-04 | 1.9853
160 | 6.83E-05 | 1.9706 | 2.61E-05 | 1.9251 | 3.02E-05 | 1.9303
Table 4: $L^{2}$ errors and convergence orders ($\tau^{-}/\tau^{+}=1/10^{12}$) for Example 4.2.

N   | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10  | 3.15E-03 | -      | 1.37E-02 | -      | 8.52E-03 | -
20  | 7.70E-04 | 2.0327 | 4.24E-03 | 1.6935 | 2.32E-03 | 1.8730
40  | 2.22E-04 | 1.7951 | 9.79E-04 | 2.1154 | 6.08E-04 | 1.9346
80  | 5.74E-05 | 1.9508 | 2.75E-04 | 1.8321 | 1.56E-04 | 1.9637
160 | 1.45E-05 | 1.9761 | 6.53E-05 | 2.0745 | 4.02E-05 | 1.9545
[Figure 7: Comparison between exact and DNN-FD solutions for Example 4.2 (N=80). (a) $\tau^{-}/\tau^{+}=10^{12}/1$; (b) the decay of the loss functions.]

This experiment has nonhomogeneous jump conditions, so the requirements on the numerical algorithm are higher and stricter. First, we present the convergence orders with the large jump ratios ($\tau^{-}/\tau^{+}=10^{12}/1$ and $\tau^{-}/\tau^{+}=1/10^{12}$) in Table 3 and Table 4 respectively. It can be seen that the convergence orders for the case of nonhomogeneous jump conditions are second order. Fig. 7a shows the comparison between the exact solution and the numerical solution for the large jump ratio when N=80. In Fig. 7b, we plot the decay of the $L^{2}$ norm error between the DNN solution and the exact solution during the training process with the large jump ratio ($\tau^{-}/\tau^{+}=10^{12}/1$) when N=80 (case 2).

Second, to compare with the method in the literature [58], we also calculate the results of this experiment with the jump ratio $\tau^{-}/\tau^{+}=10^{7}/1$. In Fig. 7b, we plot the decay of the loss functions during the training process with jump ratios $\tau^{-}/\tau^{+}=10^{7}/1$ and $\tau^{-}/\tau^{+}=10^{12}/1$ when N=80; dealing with the smaller jump ratio is simpler and more efficient. Finally, both methods can calculate homogeneous and nonhomogeneous degenerate problems in one dimension, and the coefficients can be constant, variable, or singular. The advantage of the DNN-FD method is that it handles a bigger jump ratio in the coefficients than the method in [58], and it extends to two-dimensional degenerate interfaces with large jump ratios, as shown in the next section. This example takes approximately 1298 seconds when N=160, showing that the current method behaves essentially the same whether the jump conditions are homogeneous or not.

4.3 2D degenerate interface with nonhomogeneous jump conditions

Example 4.3. In this example, we consider the interface problem with nonhomogeneous jump conditions. The exact solution is [27]

u(\bm{x})=\left\{\begin{array}{l}u^{-}(\bm{x})=x_{1}^{2}+x_{2}^{2}+2,\quad\bm{x}\in\Omega^{-},\\ u^{+}(\bm{x})=1-x_{1}^{2}-x_{2}^{2},\quad\bm{x}\in\Omega^{+}.\end{array}\right.

The coefficient $\beta$ is

\beta=\left\{\begin{array}{l}\beta^{-}=\tau^{-}(-\cos(x_{1}^{2}+x_{2}^{2}-(0.5)^{2})+1),\quad\bm{x}\in\Omega^{-},\\ \beta^{+}=\tau^{+}(3-x_{1}x_{2}),\quad\bm{x}\in\Omega^{+},\end{array}\right.

where $\Omega^{-}=\{\bm{x}:|\bm{x}|<0.5\}$, $\Omega^{+}=\Omega\backslash\Omega^{-}$, $\Omega=[-1,1]\times[-1,1]$, and $r=\sqrt{x_{1}^{2}+x_{2}^{2}}$. The exact interface is the zero level set of the following level set function,

\phi(\bm{x})=x_{1}^{2}+x_{2}^{2}-(0.5)^{2}.
Table 5: $L^{2}$ errors and convergence orders ($\tau^{-}/\tau^{+}=10^{10}/1$) for Example 4.3.

N       | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10×10   | 1.45E-02 | -      | 1.15E-02 | -      | 8.16E-03 | -
20×20   | 4.19E-03 | 1.7978 | 3.32E-03 | 1.7981 | 2.45E-03 | 1.7335
40×40   | 1.09E-03 | 1.9396 | 8.56E-04 | 1.9553 | 6.29E-04 | 1.9636
80×80   | 2.83E-04 | 1.9458 | 2.23E-04 | 1.9366 | 1.61E-04 | 1.9598
160×160 | 7.40E-05 | 1.9367 | 5.73E-05 | 1.9651 | 4.15E-05 | 1.9598
[Figure 8: Comparison between exact and DNN-FD solutions for Example 4.3 when N=160 ($\tau^{-}/\tau^{+}=1/10^{10}$). (a) DNN-FD solution; (b) exact solution.]
Table 6: $L^{2}$ errors and convergence orders ($\tau^{-}/\tau^{+}=1/10^{10}$) for Example 4.3.

N       | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10×10   | 2.45E-02 | -      | 1.37E-02 | -      | 2.52E-02 | -
20×20   | 6.12E-03 | 2.0020 | 3.70E-03 | 1.8898 | 6.22E-03 | 2.0208
40×40   | 1.53E-03 | 2.0000 | 9.55E-04 | 1.9564 | 1.54E-03 | 2.0114
80×80   | 3.82E-04 | 2.0001 | 2.52E-04 | 1.9202 | 3.84E-04 | 2.0059
160×160 | 9.57E-05 | 2.0000 | 5.92E-05 | 2.0905 | 9.59E-05 | 2.0028
[Figure 9: The decay of the loss functions for Example 4.3 (N=160). (a) $\tau^{-}/\tau^{+}=10^{10}/1$; (b) $\tau^{-}/\tau^{+}=1/10^{10}$.]

We reconstruct the example from the literature [27] so that it degenerates near the interface. It is a two-dimensional degenerate elliptic equation with nonhomogeneous jump conditions. The network uses 6 intermediate layers; the width of each layer is 15 and the number of sampled interior points is 2000. In running the SGD method, we generate a new batch every 10 update steps. The numerical results of the present method for the large jump ratios ($\tau^{-}/\tau^{+}=10^{10}/1$ and $\tau^{-}/\tau^{+}=1/10^{10}$) are shown in Table 5 and Table 6 respectively. It can be seen that the convergence orders for the case of nonhomogeneous jump conditions are second order. Fig. 8 shows the comparison between the exact solution and the numerical solution for the large jump ratio ($\tau^{-}/\tau^{+}=1/10^{10}$) when N=160. In Fig. 9, we plot the decay of the loss functions during the training process with the large jump ratios when N=160. The two-dimensional case is more difficult than the one-dimensional case and requires more sampling points, but there is no essential difference in the method. The error between the DNN solution and the exact solution is also reduced to approximately $O(10^{-4})$ near the interface. This example shows that the method extends effectively to two-dimensional and even higher-dimensional degenerate interface problems, and can also handle coefficients with large jump ratios.

4.4 2D nondegenerate interface with homogeneous jump conditions

Example 4.4. In this example, we consider the nondegenerate interface problem with high-contrast diffusion coefficients and homogeneous jump conditions. The exact solution is [24]

u(\bm{x})=\left\{\begin{array}{l}u^{-}(\bm{x})=\frac{r^{3}}{\beta^{-}},\quad\bm{x}\in\Omega^{-},\\ u^{+}(\bm{x})=\frac{r^{3}}{\beta^{+}}+\left(\frac{1}{\beta^{-}}-\frac{1}{\beta^{+}}\right)(0.5)^{3},\quad\bm{x}\in\Omega^{+},\end{array}\right.

where $\Omega^{-}=\{\bm{x}:|\bm{x}|<0.5\}$, $\Omega^{+}=\Omega\backslash\Omega^{-}$, $\Omega=[-1,1]\times[-1,1]$, and $r=\sqrt{x_{1}^{2}+x_{2}^{2}}$. The exact interface is the zero level set of the following level set function,

\phi(\bm{x})=x_{1}^{2}+x_{2}^{2}-(0.5)^{2}.
Table 7: $L^{2}$ errors and convergence orders ($\beta^{-}/\beta^{+}=10^{10}/1$) for Example 4.4.

N       | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10×10   | 1.44E-02 | -      | 4.36E-02 | -      | 1.37E-03 | -
20×20   | 3.56E-03 | 2.0221 | 1.15E-03 | 1.9252 | 3.27E-03 | 2.0646
40×40   | 8.98E-04 | 1.9891 | 2.96E-04 | 1.9548 | 8.26E-04 | 1.9857
80×80   | 2.26E-04 | 1.9891 | 2.08E-05 | 1.9727 | 7.55E-04 | 1.9849
160×160 | 5.68E-05 | 1.9926 | 1.91E-05 | 1.9840 | 5.27E-05 | 1.9869
[Figure 10: Comparison between exact and DNN-FD solutions for Example 4.4 when N=160 ($\beta^{-}/\beta^{+}=10^{10}/1$). (a) DNN-FD solution; (b) exact solution.]
Table 8: $L^{2}$ errors and convergence orders ($\beta^{-}/\beta^{+}=1/10^{10}$) for Example 4.4.

N       | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10×10   | 2.12E-02 | -      | 1.10E-03 | -      | 1.43E-02 | -
20×20   | 5.47E-03 | 1.9531 | 2.74E-03 | 2.0102 | 3.59E-03 | 1.9984
40×40   | 1.33E-03 | 2.0349 | 6.91E-04 | 1.9912 | 8.96E-04 | 2.0032
80×80   | 3.26E-04 | 2.0310 | 1.73E-05 | 1.9975 | 2.35E-04 | 1.9304
160×160 | 7.91E-05 | 2.0471 | 4.30E-05 | 2.0075 | 6.14E-05 | 1.9364
[Figure 11: Comparison between exact and DNN-FD solutions for Example 4.4 when N=160 ($\beta^{-}/\beta^{+}=1/10^{10}$). (a) DNN-FD solution; (b) exact solution.]

The method used in this paper can compute not only degenerate problems but also nondegenerate problems. The numerical results of the present method for the large jump ratios ($\beta^{-}/\beta^{+}=10^{10}/1$ and $\beta^{-}/\beta^{+}=1/10^{10}$) are shown in Table 7 and Table 8 respectively. It can be seen easily that the numerical solution has second-order convergence in the $L^{2}$ norm.

Fig. 10 and Fig. 11 show the comparison between the exact solution and the numerical solution for the large jump ratios $\beta^{-}/\beta^{+}=10^{10}/1$ and $\beta^{-}/\beta^{+}=1/10^{10}$ when N=160 respectively. Owing to the numerical method applied on the regular domains, the accuracy of this method is higher than that in [24], and because of the fully decoupled format it can handle problems with higher-contrast coefficients and larger jump ratios.

4.5 2D nondegenerate flower shape interface

Example 4.5. In this example, we consider the flower-shape interface problem. The exact solution is [27]

u(\bm{x})=\left\{\begin{array}{l}u^{-}(\bm{x})=7x_{1}^{2}+7x_{2}^{2}+6,\quad\bm{x}\in\Omega^{-},\\ u^{+}(\bm{x})=5-5x_{1}^{2}-5x_{2}^{2},\quad\bm{x}\in\Omega^{+}.\end{array}\right.

The coefficient $\beta$ is

\beta=\left\{\begin{array}{l}\beta^{-}=\left(x_{1}^{2}-x_{2}^{2}+3\right)/7,\quad\bm{x}\in\Omega^{-},\\ \beta^{+}=(x_{1}x_{2}+2)/5,\quad\bm{x}\in\Omega^{+}.\end{array}\right.

The exact interface is the zero level set of the following level set function,

\phi=(x_{1}-0.02\sqrt{5})^{2}+(x_{2}-0.02\sqrt{5})^{2}-(0.5+0.2\sin(5\theta))^{2},
\text{with }\left\{\begin{array}{l}x(\theta)=0.02\sqrt{5}+(0.5+0.2\sin(5\theta))\cos(\theta),\\ y(\theta)=0.02\sqrt{5}+(0.5+0.2\sin(5\theta))\sin(\theta),\end{array}\right.\quad\theta\in[0,2\pi).
Table 9: $L^{2}$ errors and convergence orders of the DNN-FD method for Example 4.5.

N       | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10×10   | 1.03E-02 | -      | 1.36E-02 | -      | 1.30E-02 | -
20×20   | 2.63E-03 | 1.9754 | 3.61E-03 | 1.9145 | 3.51E-03 | 1.8912
40×40   | 6.56E-04 | 1.9834 | 8.85E-04 | 2.0285 | 8.98E-04 | 1.9652
80×80   | 1.66E-04 | 2.0000 | 2.19E-04 | 2.0098 | 2.32E-04 | 1.9522
160×160 | 4.16E-05 | 1.9979 | 5.18E-05 | 2.0847 | 5.96E-05 | 1.9609
[Figure 12: Comparison between exact and DNN-FD solutions for Example 4.5 when N=160. (a) DNN-FD solution; (b) exact solution.]
[Figure 13: The diagram of sampling points for Example 4.5. (a) Sampling points in $\Omega^{-}$; (b) sampling points in $\Omega^{+}$.]

The peculiarity of this example is its complex smooth interface, designed to examine the performance of the DNN-FD method in dealing with geometric irregularities. Our method also has advantages for complex interface problems, and becomes simple and efficient by applying a deep neural network near the interface. We present a grid refinement analysis in Table 9, which successfully reaches second order. Fig. 13 shows the sampling points used by the method; as can be seen from the figure, we place more sampling points near the parts of the curve with large curvature, and likewise where the interface is singular or non-smooth. We take the points by sections, based on different degeneracies, large jump ratios, and other conditions, so as to capture the properties of the interface well. Fig. 12 shows the comparison between the exact solution and the numerical solution when N=160.

4.6 2D nondegenerate happy-face interface

Example 4.6. In this example, we consider the following more general self-adjoint elliptic interface problem,

-\nabla\cdot(\beta(\bm{x})\nabla u(\bm{x}))+\sigma(\bm{x})u(\bm{x})=f(\bm{x}),\quad\text{ in }\Omega.

The example has a happy-face interface and the coefficients $\beta^{\pm}$ are symmetric positive definite matrices. The exact solution is [47, 27]

u(\bm{x})=\left\{\begin{array}{l}u^{-}(\bm{x})=7x_{1}^{2}+7x_{2}^{2}+1,\quad\bm{x}\in\Omega^{-},\\ u^{+}(\bm{x})=5-5x_{1}^{2}-5x_{2}^{2},\quad\bm{x}\in\Omega^{+}.\end{array}\right.

The coefficient $\beta$ is

\beta^{+}(\bm{x})=\left(\begin{array}{ll}x_{1}x_{2}+2&x_{1}x_{2}+1\\ x_{1}x_{2}+1&x_{1}x_{2}+3\end{array}\right),\quad\beta^{-}(\bm{x})=\left(\begin{array}{ll}x_{1}^{2}-x_{2}^{2}+3&x_{1}^{2}-x_{2}^{2}+1\\ x_{1}^{2}-x_{2}^{2}+1&x_{1}^{2}-x_{2}^{2}+4\end{array}\right).

The exact interface can be viewed in the literature [27]. The other coefficient $\sigma$ is

\sigma(\bm{x})=\left\{\begin{array}{l}\sigma^{-}(\bm{x})=x_{1}x_{2}+1,\quad\bm{x}\in\Omega^{-},\\ \sigma^{+}(\bm{x})=x_{1}^{2}+x_{2}^{2}-2,\quad\bm{x}\in\Omega^{+}.\end{array}\right.
Table 10: $L^{2}$ errors and convergence orders of the DNN-FD method for Example 4.6.

N       | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
10×10   | 1.50E-02 | -      | 7.13E-02 | -      | 2.37E-02 | -
20×20   | 3.63E-03 | 2.0449 | 1.31E-03 | 2.4440 | 7.46E-03 | 1.6659
40×40   | 1.42E-03 | 1.3528 | 4.75E-04 | 1.4647 | 1.36E-03 | 2.4552
80×80   | 3.64E-04 | 1.9644 | 1.21E-04 | 1.9729 | 3.23E-04 | 2.0752
160×160 | 9.15E-05 | 1.9942 | 2.97E-05 | 2.0255 | 8.36E-05 | 1.9503
[Figure 14: Comparison between exact and DNN-FD solutions for Example 4.6 when N=160. (a) DNN-FD solution; (b) exact solution.]

The difficulty of this example is that the interface has kinks around the ears and the mouth. We present the convergence results in Table 10. The numerical results indicate that the DNN-FD solution always converges to the exact solution with second-order accuracy. The exact solution and the numerical solution are compared in Fig. 14 when N=160.

4.7 2D nondegenerate sharp-edged interface

Example 4.7. In this example, we consider the nonsmooth interface problem. The exact solution is [48, 28]

u(\bm{x})=\left\{\begin{array}{l}u^{-}(\bm{x})=7x_{1}^{2}+7x_{2}^{2}+6,\\ u^{+}(\bm{x})=\begin{cases}x_{1}+x_{2}+1,&\text{ if }x_{1}+x_{2}>0,\\ \sin(x_{1}+x_{2})+\cos(x_{1}+x_{2}),&\text{ if }x_{1}+x_{2}\leq 0.\end{cases}\end{array}\right.

The coefficient $\beta$ is

\beta=\left\{\begin{array}{l}\beta^{-}=\left(x_{1}^{2}-x_{2}^{2}+3\right)/7,\quad\bm{x}\in\Omega^{-},\\ \beta^{+}=8,\quad\bm{x}\in\Omega^{+}.\end{array}\right.

The exact interface is the zero level set of the following level set function,

\varphi(\bm{x})=\begin{cases}x_{2}-2x_{1},&\text{ if }x_{1}+x_{2}>0,\\ x_{2}+x_{1}/2,&\text{ if }x_{1}+x_{2}\leq 0.\end{cases}
Table 11: $L^{2}$ errors and convergence orders of the DNN-FD method for Example 4.7.

N       | $\|u_{h}-u\|_{L^{2}(\Omega_{1})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega_{2})}$ | Order  | $\|u_{h}-u\|_{L^{2}(\Omega)}$ | Order
20×20   | 2.48E-02  | -      | 2.55E-02  | -      | 1.99E-02 | -
40×40   | 5.89E-03  | 2.0780 | 6.13E-03  | 2.0584 | 5.52E-03 | 1.8529
80×80   | 1.50E-03  | 1.9669 | 1.56E-03  | 1.9718 | 1.41E-03 | 1.9680
160×160 | 3.85E-04  | 1.9667 | 3.99E-04  | 1.9710 | 3.53E-04 | 1.9982
320×320 | 9.38E-05  | 2.0388 | 9.742E-05 | 2.0348 | 9.30E-05 | 1.9246
Table 12: $L^{\infty}$ errors and convergence orders of the DNN-FD method for Example 4.7.

N       | $\|u_{h}-u\|_{L^{\infty}(\Omega)}$ | Order  | IFVE [48] Order
20×20   | 1.51E-02 | -      | -
40×40   | 3.64E-03 | 2.0544 | 1.3132
80×80   | 1.09E-03 | 1.7943 | 1.0505
160×160 | 3.06E-04 | 1.7793 | 1.0106
320×320 | 8.96E-05 | 1.7716 | 1.0139
Figure 15: Comparison between exact and DNN-FD solutions for Example 4.7 when N=320 ((a) DNN-FD solution; (b) exact solution).

For nonsmooth interface problems, the method of this paper can also be applied; the numerical results of the current method are given in Table 11. The grid refinement analysis in Table 11 successfully achieves second order; in other words, the proposed method is not sensitive to how the grid resolves the solution and the interface. In Table 12, we also report the L^∞ errors and their logarithmic ratios (observed orders). Although the scheme of [48] is a second-order one and spends much expensive work on the interface, it can hardly obtain satisfactory results because of the nonsmoothness of the interface; moreover, the solution u has a singularity at (0,0) with blow-up derivatives. Our method retains approximately second-order convergence, and the numerical results are much better than those of the IFVE method. Fig.15 shows the comparison between the exact solution and the numerical solution when N=320.

4.8 2D nondegenerate five-pointed star interface

Example 4.8. In this example, we consider the five-pointed star interface problem. The exact solution is[28]

u(\bm{x})=\begin{cases}u^{-}(\bm{x})=8, & \bm{x}\in\Omega^{-},\\ u^{+}(\bm{x})=x_{1}^{2}+x_{2}^{2}+\sin(x_{1}+x_{2}), & \bm{x}\in\Omega^{+}.\end{cases}

The coefficient β\beta is

\beta=\begin{cases}\beta^{-}=1, & \bm{x}\in\Omega^{-},\\ \beta^{+}=2+\sin(x_{1}+x_{2}), & \bm{x}\in\Omega^{+}.\end{cases}

The exact interface is the zero level set of the following level set function,

\phi(r,\theta)=\begin{cases}\dfrac{R\sin(\theta_{t}/2)}{\sin(\theta_{t}/2+\theta-\theta_{r}-2\pi(i-1)/5)}-r, & \theta_{r}+\dfrac{\pi(2i-2)}{5}\leqslant\theta<\theta_{r}+\dfrac{\pi(2i-1)}{5},\\[2mm] \dfrac{R\sin(\theta_{t}/2)}{\sin(\theta_{t}/2-\theta+\theta_{r}-2\pi(i-1)/5)}-r, & \theta_{r}+\dfrac{\pi(2i-3)}{5}\leqslant\theta<\theta_{r}+\dfrac{\pi(2i-2)}{5},\end{cases}

with \theta_{t}=\pi/5, \theta_{r}=\pi/7, R=6/7, and i=1,2,3,4,5.
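
Transcribing ϕ(r,θ) into code only requires reducing the polar angle to the star's ten angular sectors. The following Python sketch (scalar inputs) wraps θ into [θ_r, θ_r+2π) and lets i run to 6 to cover the wrap-around sector, which is legitimate because sin is 2π-periodic; this bookkeeping is our implementation choice, not something prescribed above:

import numpy as np

THETA_T, THETA_R, R = np.pi / 5.0, np.pi / 7.0, 6.0 / 7.0

def phi_star(r, theta):
    # Five-pointed star level set of Example 4.8 (scalar r, theta).
    a = THETA_R + np.mod(theta - THETA_R, 2.0 * np.pi)  # wrapped angle
    s = R * np.sin(THETA_T / 2.0)
    for i in range(1, 7):  # i = 6 duplicates i = 1 modulo 2*pi
        b = a - THETA_R - 2.0 * np.pi * (i - 1) / 5.0
        if THETA_R + np.pi * (2 * i - 2) / 5 <= a < THETA_R + np.pi * (2 * i - 1) / 5:
            return s / np.sin(THETA_T / 2.0 + b) - r
        if THETA_R + np.pi * (2 * i - 3) / 5 <= a < THETA_R + np.pi * (2 * i - 2) / 5:
            return s / np.sin(THETA_T / 2.0 - b) - r

# sanity check: the star vertex at angle theta_r lies on the interface
assert abs(phi_star(R, THETA_R)) < 1e-12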

Table 13: L^2 errors and convergence orders of the DNN-FD method for Example 4.8.

             \|u_h-u\|_{L^2(\Omega_1)}     \|u_h-u\|_{L^2(\Omega_2)}     \|u_h-u\|_{L^2(\Omega)}
N            Error       Order             Error       Order             Error       Order
20 × 20      2.57E-02    -                 1.39E-02    -                 2.09E-02    -
40 × 40      6.13E-03    2.0584            3.54E-03    1.9708            5.84E-03    1.8436
80 × 80      1.56E-03    1.9718            8.57E-04    2.0487            1.48E-03    1.9716
160 × 160    3.99E-04    1.9710            2.17E-04    1.9818            3.74E-04    1.9898
320 × 320    9.74E-05    2.0348            5.36E-05    2.0160            9.79E-05    1.9364
Figure 16: Comparison between exact and DNN-FD solutions for Example 4.8 when N=320 ((a) DNN-FD solution; (b) exact solution).

This example presents a more difficult challenge: the interface consists of several sharp-edged, nonsmooth pieces. After suitable processing, our method can also be applied to such complex nonsmooth interfaces as the five-pointed star. The numerical results of the current method are reported in Table 13. It can be seen that, even as the nonsmoothness of the interface changes, our method always maintains second-order accuracy. The exact solution and the numerical solution are compared in Fig.16 when N=320.

4.9 2D degenerate five-pointed star interface

Example 4.9. In this example, we consider the degenerate five-pointed star interface problem. The exact solution is[28]

u(\bm{x})=\begin{cases}u^{-}(\bm{x})=6+\sin(2\pi x_{1})\sin(2\pi x_{2}), & \bm{x}\in\Omega^{-},\\ u^{+}(\bm{x})=x_{1}^{2}+x_{2}^{2}+\sin(x_{1}+x_{2}), & \bm{x}\in\Omega^{+}.\end{cases}

The coefficient β\beta is

\beta=\begin{cases}\beta^{-}=(x_{1}-\frac{6}{7})^{2}+(x_{2}-\frac{6}{7})^{2}, & \bm{x}\in\Omega^{-},\\ \beta^{+}=(x_{1}-\frac{6\sin(\pi/10)}{7\sin(\pi/3)})^{2}+(x_{2}-\frac{6\sin(\pi/10)}{7\sin(\pi/3)})^{2}, & \bm{x}\in\Omega^{+}.\end{cases}

The exact interface is the same as in the previous example.

Table 14: L^2 errors and convergence orders of the DNN-FD method for Example 4.9.

             \|u_h-u\|_{L^2(\Omega_1)}     \|u_h-u\|_{L^2(\Omega_2)}     \|u_h-u\|_{L^2(\Omega)}
N            Error       Order             Error       Order             Error       Order
20 × 20      1.30E-02    -                 1.50E-02    -                 1.88E-02    -
40 × 40      3.54E-03    1.8772            3.63E-03    2.0499            5.10E-03    1.8842
80 × 80      7.97E-04    2.1508            1.42E-03    1.3528            1.24E-03    2.0385
160 × 160    2.49E-04    1.6769            3.64E-04    1.9644            3.02E-04    2.0391
320 × 320    6.11E-05    2.0268            9.25E-05    1.9795            8.11E-05    1.8973
Figure 17: Comparison between exact and DNN-FD solutions for Example 4.9 when N=320 ((a) DNN-FD solution; (b) exact solution).

In this example, we modify the problem from the original literature[28] so as to combine degeneracy with a nonsmooth interface: the degenerate points are two vertices of the five-pointed star, one on the positive and one on the negative subdomain. Furthermore, because the solution of the problem is nonlinear, the difficulty of this example increases once again. The choice of the activation function is also changed here, and the selected nonlinear activation function approximates the solution of the problem well. The numerical results of the current method are shown in Table 14 and exhibit second-order accuracy in the L^2 norm. Fig.17 shows the comparison between the exact solution and the numerical solution when N=320.

4.10 2D degenerate interface with large jump conditions

Example 4.10. This example augments Example 4.9 with a large jump ratio. The boundary condition and the source function are chosen so that the exact solution is[58]

u(\bm{x})=\begin{cases}7x_{1}^{2}+7x_{2}^{2}+6, & \bm{x}\in\Omega^{-},\\ x_{1}^{2}+x_{2}^{2}+\sin(x_{1}+x_{2}), & \bm{x}\in\Omega^{+}.\end{cases}

The coefficient β\beta is

\beta=\begin{cases}\beta^{-}=\tau^{-}\big((x_{1}-\frac{6}{7})^{2}+(x_{2}-\frac{6}{7})^{2}\big), & \bm{x}\in\Omega^{-},\\ \beta^{+}=\tau^{+}\big((x_{1}-\frac{6\sin(\pi/10)}{7\sin(\pi/3)})^{2}+(x_{2}-\frac{6\sin(\pi/10)}{7\sin(\pi/3)})^{2}\big), & \bm{x}\in\Omega^{+}.\end{cases}

The exact interface is the same five-pointed star as in Example 4.8, i.e., the zero level set of \phi(r,\theta) with \theta_{t}=\pi/5, \theta_{r}=\pi/7, R=6/7, and i=1,2,3,4,5.
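
Both branches of β vanish quadratically at fixed points, so the problem stays degenerate for every choice of τ^±; the jump ratio only rescales the two branches. A minimal sketch, taking τ^-/τ^+ = 1:10^10 as in Table 15:

import numpy as np

tau_minus, tau_plus = 1.0, 1.0e10  # jump ratio 1:10^10 (Table 15)
c_minus = 6.0 / 7.0
c_plus = 6.0 * np.sin(np.pi / 10.0) / (7.0 * np.sin(np.pi / 3.0))

def beta_minus(x1, x2):
    # branch on Omega^-: vanishes at (c_minus, c_minus)
    return tau_minus * ((x1 - c_minus) ** 2 + (x2 - c_minus) ** 2)

def beta_plus(x1, x2):
    # branch on Omega^+: vanishes at (c_plus, c_plus)
    return tau_plus * ((x1 - c_plus) ** 2 + (x2 - c_plus) ** 2)

assert beta_minus(c_minus, c_minus) == 0.0  # degeneracy persists for any tau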

Figure 18: The DNN-FD solution for Example 4.10 when N=320 (\tau^{-}/\tau^{+}=1:10^{10}).
Table 15: L^2 errors and convergence orders of the DNN-FD method for Example 4.10 (\tau^{-}/\tau^{+}=1:10^{10}).

             \|u_h-u\|_{L^2(\Omega_1)}     \|u_h-u\|_{L^2(\Omega_2)}     \|u_h-u\|_{L^2(\Omega)}
N            Error       Order             Error       Order             Error       Order
20 × 20      1.30E-02    -                 1.50E-02    -                 1.88E-02    -
40 × 40      3.54E-03    1.8772            3.63E-03    2.0499            5.10E-03    1.9842
80 × 80      7.67E-04    2.1508            1.42E-03    1.3528            1.24E-03    2.0325
160 × 160    1.49E-04    1.6769            3.64E-04    1.9644            3.02E-04    1.8991
320 × 320    5.12E-05    2.0268            9.25E-05    1.9795            8.11E-05    1.8273
Table 16: L^2 errors and convergence orders of the DNN-FD method for Example 4.10 (\tau^{-}/\tau^{+}=10^{10}:1).

             \|u_h-u\|_{L^2(\Omega_1)}     \|u_h-u\|_{L^2(\Omega_2)}     \|u_h-u\|_{L^2(\Omega)}
N            Error       Order             Error       Order             Error       Order
20 × 20      1.29E-02    -                 1.50E-02    -                 1.88E-02    -
40 × 40      3.54E-03    1.8772            3.63E-03    2.0499            5.10E-03    1.8842
80 × 80      7.97E-04    2.1508            5.12E-03    1.3528            1.24E-03    1.9385
160 × 160    2.49E-04    1.6769            3.62E-04    1.9644            3.02E-04    1.8591
320 × 320    6.11E-05    2.0268            8.15E-05    1.9795            8.11E-05    1.7903

Our method can also be applied to the five-pointed star interface with large jump ratios. The numerical results of the current method are given in Table 15 and Table 16. Even with a nonsmooth interface and very big jump ratios, our method always maintains second-order accuracy. The numerical solution is shown in Fig.18 when N=320.

4.11 2D interface problem with non-analytical solution

Example 4.11. In this example, we consider the five-pointed star interface problem with a non-analytical solution, constructed from Example 4.8. The coefficient \beta and the exact interface (the five-pointed star level set with \theta_{t}=\pi/5, \theta_{r}=\pi/7, R=6/7, and i=1,2,3,4,5) are the same as in Example 4.8.

Table 17: L^2 errors and convergence orders of f and f_h for Example 4.11.

             \|f_h-f\|_{L^2(\Omega_1)}     \|f_h-f\|_{L^2(\Omega_2)}     \|f_h-f\|_{L^2(\Omega)}
N            Error       Order             Error       Order             Error       Order
20 × 20      2.57E-02    -                 1.39E-02    -                 2.09E-02    -
40 × 40      6.13E-03    1.5584            3.54E-03    1.4708            5.84E-03    1.4436
80 × 80      1.56E-03    1.3718            8.57E-04    1.5487            1.48E-03    1.4716
160 × 160    3.99E-04    1.4710            2.17E-04    1.4818            3.74E-04    1.5898
320 × 320    9.74E-05    1.4348            5.36E-05    1.5160            9.79E-05    1.5364
Figure 19: The DNN-FD solution for Example 4.11 when N=320.

We replace the source term on \Omega^{-} by f^{-}(\bm{x})=|\bm{x}-\bm{x}_{0}|(1+2\log|\bm{x}-\bm{x}_{0}|), where \phi(\bm{x}_{0})=0. This example presents a more difficult challenge: the interface consists of several sharp-edged nonsmooth pieces and the problem has no analytical solution. Our method can also be applied to this example; the numerical results are given in Table 17, where f_{h} denotes the right-hand side recomputed from the numerical solution u_{h}. Since the equation lacks an analytical solution, we use the L^2 errors and convergence orders of f_{h} as a reference for the stability of the computation; the order stays stable around a constant, which confirms the feasibility of the method. The numerical solution is shown in Fig.19 when N=320.
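
A sketch of this consistency check, assuming a uniform grid with spacing h, a nodal coefficient array B = β(x_{ij}), and a conservative five-point discretization applied at interior nodes away from the interface (the notation here is ours, not the paper's):

import numpy as np

def residual_rhs(U, B, h):
    # Recompute f_h = -div(beta grad u_h) from the numerical solution U
    # with face-averaged coefficients and central differences.
    F = np.zeros_like(U)
    for i in range(1, U.shape[0] - 1):
        for j in range(1, U.shape[1] - 1):
            be = 0.5 * (B[i + 1, j] + B[i, j])
            bw = 0.5 * (B[i - 1, j] + B[i, j])
            bn = 0.5 * (B[i, j + 1] + B[i, j])
            bs = 0.5 * (B[i, j - 1] + B[i, j])
            F[i, j] = -(be * (U[i + 1, j] - U[i, j])
                        - bw * (U[i, j] - U[i - 1, j])
                        + bn * (U[i, j + 1] - U[i, j])
                        - bs * (U[i, j] - U[i, j - 1])) / h ** 2
    return F

# ||residual_rhs(U, B, h) - f||_{L^2} then plays the role of the errors in Table 17.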

4.12 2D linear elasticity interface problem

Example 4.12. Finally, we consider an example with physical significance: a linear elasticity PDE with a discontinuous stress tensor,

-\nabla\cdot\mathbb{T}=f(\bm{x},u) \quad \text{in } \Omega^{-}\cup\Omega^{+},
[u]=w \quad \text{on } \Gamma,
[\mathbb{T}\cdot\bm{n}]=v \quad \text{on } \Gamma,
u=g \quad \text{on } \partial\Omega.

One application of the linear elasticity problem is to model the shape and location of fibroblast cells under stress. Let \mathbf{u}=(u_{1},u_{2})^{T} denote the displacement field. Then, the strain tensor is

\sigma=\frac{1}{2}\left(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}\right).

The elasticity tensor \mathbb{T} is a linear transformation on tensors; in the isotropic case, we have

\mathbb{T}\sigma=\lambda\operatorname{Tr}(\sigma)\mathbf{1}+2\mu\left(\sigma+\sigma^{T}\right),

where \lambda and \mu are the Lamé constants, \operatorname{Tr}(\cdot) is the trace operator, and \mathbf{1} is the identity matrix. These parameters satisfy the relations

\mu=\frac{E}{2(1+\nu)},\quad\lambda=\frac{E\nu}{(1+\nu)(1-2\nu)},

where E is Young's modulus, \nu is Poisson's ratio, and \mu is the shear modulus. The interface is defined in polar coordinates by

r=0.5+\frac{\sin 5\theta}{7}.
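
A point can be classified against this star-shaped curve through its polar coordinates. A minimal sketch, with the convention (assumed here) that r < 0.5 + sin(5θ)/7 marks Ω^-:

import numpy as np

def in_omega_minus(x1, x2):
    # points whose radius lies below the interface curve are taken as Omega^-
    r = np.hypot(x1, x2)
    theta = np.arctan2(x2, x1)
    return r < 0.5 + np.sin(5.0 * theta) / 7.0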

We set the computational domain \Omega=[-1,1]\times[-1,1]. The Dirichlet boundary condition and homogeneous jump conditions are prescribed in this example. We then choose two groups of Poisson's ratio and shear modulus as follows[10]:

\nu=\begin{cases}\nu^{-}=0.24 & \text{in }\Omega^{-},\\ \nu^{+}=0.20 & \text{in }\Omega^{+},\end{cases}\qquad \mu=\begin{cases}\mu^{-}=2000000 & \text{in }\Omega^{-},\\ \mu^{+}=1500000 & \text{in }\Omega^{+},\end{cases} \quad (4.1)

and

\nu=\begin{cases}\nu^{-}=0.24 & \text{in }\Omega^{-},\\ \nu^{+}=0.00024 & \text{in }\Omega^{+},\end{cases}\qquad \mu=\begin{cases}\mu^{-}=2000000 & \text{in }\Omega^{-},\\ \mu^{+}=1500000 & \text{in }\Omega^{+}.\end{cases} \quad (4.2)
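
Since (4.1) and (4.2) specify (\nu, \mu) directly, the remaining elasticity parameters follow by inverting the relations above: E = 2\mu(1+\nu) and \lambda = 2\mu\nu/(1-2\nu). A minimal helper:

def lame_from_nu_mu(nu, mu):
    # invert mu = E/(2(1+nu)) and lambda = E*nu/((1+nu)(1-2nu))
    E = 2.0 * mu * (1.0 + nu)
    lam = 2.0 * mu * nu / (1.0 - 2.0 * nu)
    return E, lam

E_minus, lam_minus = lame_from_nu_mu(0.24, 2.0e6)  # Omega^- in (4.1)
E_plus, lam_plus = lame_from_nu_mu(0.20, 1.5e6)    # Omega^+ in (4.1)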
Figure 20: The DNN-FD solution for Example 4.12 with (4.1) when N=320.
Table 18: L^2 errors and convergence orders with (4.1) for Example 4.12.

             \|u_h-u\|_{L^2(\Omega_1)}     \|u_h-u\|_{L^2(\Omega_2)}     \|u_h-u\|_{L^2(\Omega)}
N            Error       Order             Error       Order             Error       Order
20 × 20      5.99E-03    -                 1.60E-03    -                 3.39E-03    -
40 × 40      1.45E-04    1.9267            4.77E-04    1.7577            8.92E-04    1.9234
80 × 80      3.66E-04    1.9746            1.42E-05    1.7503            1.99E-04    2.1647
160 × 160    1.04E-04    1.9919            2.83E-05    2.3223            4.79E-05    2.0076
320 × 320    2.50E-05    2.0653            7.61E-06    1.8912            1.34E-05    1.8996
Table 19: L^2 errors and convergence orders with (4.2) for Example 4.12.

             \|u_h-u\|_{L^2(\Omega_1)}     \|u_h-u\|_{L^2(\Omega_2)}     \|u_h-u\|_{L^2(\Omega)}
N            Error       Order             Error       Order             Error       Order
20 × 20      4.08E-03    -                 1.82E-03    -                 1.38E-03    -
40 × 40      1.01E-04    2.0108            3.25E-04    2.4939            3.20E-04    2.1162
80 × 80      2.23E-04    2.1886            8.59E-05    2.2130            6.35E-05    2.3367
160 × 160    5.17E-05    2.1108            2.36E-05    1.9278            1.50E-05    2.0828
320 × 320    1.40E-06    1.8868            5.26E-06    2.1720            3.53E-06    2.0864

The network uses 6 intermediate layers, each of width 20, and the learning rate \eta is 5\times 10^{-4}. In Fig.20, we plot the profiles of the DNN-FD solution, namely the displacements in the x_{1} and x_{2} coordinates, respectively. The corresponding numerical results are shown in Table 18 and Table 19; the DNN-FD solutions exhibit second-order accuracy in the L^2 norm.
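
For reproducibility, the following is a minimal PyTorch sketch of a subdomain network with the stated shape; the tanh activation, the Adam optimizer, and the body of the loss function are our assumptions here, since the text only fixes the depth, the width, and the learning rate:

import torch

# 6 hidden layers of width 20, 2D input, scalar output
layers = [torch.nn.Linear(2, 20), torch.nn.Tanh()]
for _ in range(5):
    layers += [torch.nn.Linear(20, 20), torch.nn.Tanh()]
layers += [torch.nn.Linear(20, 1)]
net = torch.nn.Sequential(*layers)
optimizer = torch.optim.Adam(net.parameters(), lr=5e-4)

def train_step(x, loss_fn):
    # x: collocation points on the singular subdomain; loss_fn assembles
    # the PDE residual plus boundary/interface terms (problem dependent).
    optimizer.zero_grad()
    loss = loss_fn(net, x)
    loss.backward()
    optimizer.step()
    return loss.item()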

5 Conclusions

Numerical methods for solving nonlinear degenerate interface problems are one of the fundamental issues in scientific computation, and it is challenging to design effective and robust fully decoupled numerical methods for such problems. In this paper, a fully decoupled finite difference method based on a deep neural network is proposed for solving degenerate interface problems in both 1D and 2D. It is shown that uniform grids can be adopted to solve degenerate PDEs with interfaces. There are no unknown augmented parameters in the discrete schemes, and no extra conditions or extra work are required to design the numerical approximation algorithms. In fact, the augmented variables are obtained by the DNN technique, so that the degenerate interface problem is completely decoupled into two independent subproblems, in contrast to other degenerate or singular problems. The accuracy of the proposed fully decoupled algorithms has been demonstrated on various examples, including degenerate and nondegenerate cases. In particular, the fully decoupled structure of the algorithm makes it easy to handle jump ratios ranging from the semi-decoupled \mathbf{BIG} jump case (such as 10^{7}:1 or 1:10^{7}) to the fully decoupled \mathbf{VERY} \mathbf{BIG} jump case (such as 10^{12}:1 or 1:10^{12}). An interesting and typical sharp edge example with a degenerate five-pointed star interface shows that our approach works very well for these very hard problems. The numerical examples confirm the effectiveness of the fully decoupled algorithms for solving degenerate interface problems.

Acknowledgments.

This work is partially supported by the National Natural Science Foundation of China (grant No. 11971241).

References

  • [1] L. Adams and Z. Li. The immersed interface/multigrid methods for interface problems. SIAM Journal on Scientific Computing, 24(2):463–479, 2002.
  • [2] J. Albright, Y. Epshteyn, M. Medvinsky, and Q. Xia. High-order numerical schemes based on difference potentials for 2d elliptic problems with material interfaces. Applied Numerical Mathematics, 111:64–91, 2017.
  • [3] T. Arbogast and M. F. Wheeler. A nonlinear mixed finite element method for a degenerate parabolic equation arising in flow in porous media. SIAM Journal on Numerical Analysis, 33(4):1669–1687, 1996.
  • [4] S. Baharlouei, R. Mokhtari, and F. Mostajeran. Dnn-hdg: A deep learning hybridized discontinuous galerkin method for solving some elliptic problems. Engineering Analysis with Boundary Elements, 151:656–669, 2023.
  • [5] W. Bao, Y. Cai, X. Jia, and Q. Tang. Numerical methods and comparison for the dirac equation in the nonrelativistic limit regime. Journal of Scientific Computing, 71(3):1094–1134, 2017.
  • [6] J. Beale and A. Layton. On the accuracy of finite difference methods for elliptic problems with interfaces. Communications in Applied Mathematics and Computational Science, 1(1):91–119, 2007.
  • [7] J. Beale and W. Ying. Solution of the dirichlet problem by a finite difference analog of the boundary integral equation. Numerische Mathematik, 141(3):605–626, 2019.
  • [8] J. Bedrossian, J. H. von Brecht, S. Zhu, E. Sifakis, and J. M. Teran. A second order virtual node method for elliptic problems with interfaces and irregular domains. Journal of Computational Physics, 229(18):6405–6426, 2010.
  • [9] F. Bernis and A. Friedman. Higher order nonlinear degenerate parabolic equations. Journal of Differential Equations, 83(1):179–206, 1990.
  • [10] B. Wang, K.-L. Xia, and G.-W. Wei. Matched interface and boundary method for elasticity interface problems. Journal of Computational and Applied Mathematics, 285:203–225, 2015.
  • [11] Z. Cai, C. He, and S. Zhang. Discontinuous finite element methods for interface problems: robust a priori and a posteriori error estimates. SIAM Journal on Numerical Analysis, 55(1):400–418, 2017.
  • [12] W. Cao, X. Zhang, Z. Zhang, and Q. Zou. Superconvergence of immersed finite volume methods for one-dimensional interface problems. Journal of Scientific Computing, 73(2):543–565, 2017.
  • [13] S. Chen and J. Shen. Enriched spectral methods and applications to problems with weakly singular solutions. Journal of Scientific Computing, 77(3):1468–1489, 2018.
  • [14] Z. Chen and J. Zou. Finite element methods and their convergence for elliptic and parabolic interface problems. Numerische Mathematik, 79(2):175–202, 1998.
  • [15] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine learning, pages 160–167, 2008.
  • [16] M. J. Del Razo and R. J. LeVeque. Numerical methods for interface coupling of compressible and almost incompressible media. SIAM Journal on Scientific Computing, 39(3):B486–B507, 2017.
  • [17] Q. Du, M. Gunzburger, R. B. Lehoucq, and K. Zhou. Analysis and approximation of nonlocal diffusion problems with volume constraints. SIAM Review, 54(4):667–696, 2012.
  • [18] R. E. Ewing, Z. Li, T. Lin, and Y. Lin. The immersed finite volume element methods for the elliptic interface problems. Mathematics and Computers in Simulation, 50(1-4):63–76, 1999.
  • [19] M. Gunzburger, X. He, and B. Li. On stokes–ritz projection and multistep backward differentiation schemes in decoupling the stokes–darcy model. SIAM Journal on Numerical Analysis, 56(1):397–427, 2018.
  • [20] B.-Y. Guo and L.-L. Wang. Jacobi interpolation approximations and their applications to singular differential equations. Advances in Computational Mathematics, 14(3):227–276, 2001.
  • [21] R. Guo, T. Lin, and Y. Lin. Recovering elastic inclusions by shape optimization methods with immersed finite elements. Journal of Computational Physics, 404:109123, 2020.
  • [22] J. Han, A. Jentzen, et al. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Communications in Mathematics and Statistics, 5(4):349–380, 2017.
  • [23] A. Handa, M. Bloesch, V. Pătrăucean, S. Stent, J. McCormac, and A. Davison. gvnn: Neural network library for geometric computer vision. In European Conference on Computer Vision, pages 67–82. Springer, 2016.
  • [24] C.-Y. He, X.-Z. Hu, and L. Mu. A mesh-free method using piecewise deep neural network for elliptic interface problems. Journal of Computational and Applied Mathematics, 412:114358, 2022.
  • [25] J. He, L. Li, J. Xu, and C. Zheng. Relu deep neural networks and linear finite elements. arXiv preprint arXiv:1807.03973, 2018.
  • [26] X. He, T. Lin, and Y. Lin. Interior penalty bilinear ife discontinuous galerkin methods for elliptic equations with discontinuous coefficient. Journal of Systems Science and Complexity, 23(3):467–483, 2010.
  • [27] S. Hou and X.-D. Liu. A numerical method for solving variable coefficient elliptic equation with interfaces. Journal of Computational Physics, 202(2):411–445, 2005.
  • [28] S. Hou, W. Wang, and L. Wang. Numerical method for solving matrix coefficient elliptic equation with sharp-edged interfaces. Journal of Computational Physics, 229(19):7162–7179, 2010.
  • [29] W.-F. Hu, T.-S. Lin, and M.-C. Lai. A discontinuity capturing shallow neural network for elliptic interface problems. Journal of Computational Physics, 469:111576, 2022.
  • [30] P. Huang, H. Wu, and Y. Xiao. An unfitted interface penalty finite element method for elliptic interface problems. Computer Methods in Applied Mechanics and Engineering, 323:439–460, 2017.
  • [31] H.-F. Ji, F. Wang, J.-R. Chen, and Z.-L. Li. An immersed cr-p0 element for stokes interface problems and the optimal convergence analysis. Computer Methods in Applied Mechanics and Engineering, 399:115306, 2022.
  • [32] W. Jiang, W. Bao, C. V. Thompson, and D. J. Srolovitz. Phase field approach for simulating solid-state dewetting problems. Acta Materialia, 60(15):5578–5592, 2012.
  • [33] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [34] I. E. Lagaris, A. Likas, and D. I. Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5):987–1000, 1998.
  • [35] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • [36] R. J. LeVeque and Z. Li. The immersed interface method for elliptic equations with discontinuous coefficients and singular sources. SIAM Journal on Numerical Analysis, 31(4):1019–1044, 1994.
  • [37] Z. Li, T. Lin, and X. Wu. New cartesian grid methods for interface problems using the finite element formulation. Numerische Mathematik, 96(1):61–98, 2003.
  • [38] B. Lusch, J. N. Kutz, and S. L. Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications, 9(1):1–10, 2018.
  • [39] Y. Pao. Adaptive pattern recognition and neural networks. Reading, MA (US); Addison-Wesley Publishing Co., Inc., 1989.
  • [40] W. Ren and X.-P. Wang. An iterative grid redistribution method for singular problems in multiple dimensions. Journal of Computational Physics, 159(2):246–273, 2000.
  • [41] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
  • [42] J. Shen, T. Tang, and L.-L. Wang. Spectral methods: algorithms, analysis and applications, volume 41. Springer Science & Business Media, 2011.
  • [43] J. Shen and Y. Wang. Müntz Galerkin methods and applications to mixed dirichlet–neumann boundary value problems. SIAM Journal on Scientific Computing, 38(4):A2357–A2381, 2016.
  • [44] H. Sun and D. L. Darmofal. An adaptive simplex cut-cell method for high-order discontinuous galerkin discretizations of elliptic interface problems and conjugate heat transfer problems. Journal of Computational Physics, 278:445–468, 2014.
  • [45] C. Wang and R. Du. Approximate controllability of a class of semilinear degenerate systems with convection term. Journal of Differential Equations, 254(9):3665–3689, 2013.
  • [46] C. Wang and R. Du. Carleman estimates and null controllability for a class of degenerate parabolic equations with convection terms. SIAM Journal on Control and Optimization, 52(3):1457–1480, 2014.
  • [47] Q. Wang, J. Xie, Z. Zhang, and L. Wang. Bilinear immersed finite volume element method for solving matrix coefficient elliptic interface problems with non-homogeneous jump conditions. Computers & Mathematics with Applications, 86:1–15, 2021.
  • [48] Q. Wang, Z. Zhang, and L. Wang. New immersed finite volume element method for elliptic interface problems with non-homogeneous jump conditions. Journal of Computational Physics, 427:110075, 2021.
  • [49] Z. Wang and Z. Zhang. A mesh-free method for interface problems using the deep learning approach. Journal of Computational Physics, 400:108963, 2020.
  • [50] D. Wu, J. Yue, G. Yuan, and J. Lv. Finite volume element approximation for nonlinear diffusion problems with degenerate diffusion coefficients. Applied Numerical Mathematics, 140:23–47, 2019.
  • [51] K. Xia, M. Zhan, and G.-W. Wei. Mib galerkin method for elliptic interface problems. Journal of Computational and Applied Mathematics, 272:195–220, 2014.
  • [52] M. Xu, L. Zhang, and E. Tohidi. A fourth-order least-squares based reproducing kernel method for one-dimensional elliptic interface problems. Applied Numerical Mathematics, 162:124–136, 2021.
  • [53] D. Yarotsky. Error bounds for approximations with deep relu networks. Neural Networks, 94:103–114, 2017.
  • [54] B. Yu et al. The deep ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 6(1):1–12, 2018.
  • [55] Z. Zhang, P. Rosakis, T. Y. Hou, and G. Ravichandran. A minimal mechanosensing model predicts keratocyte evolution on flexible substrates. Journal of the Royal Society Interface, 17(166):20200175, 2020.
  • [56] M. Zhao, W. Ying, J. Lowengrub, and S. Li. An efficient adaptive rescaling scheme for computing moving interface problems. Communications in Computational Physics, 21(3):679–691, 2017.
  • [57] S. Zhao. High order matched interface and boundary methods for the helmholtz equation in media with arbitrarily curved interfaces. Journal of Computational Physics, 229(9):3155–3170, 2010.
  • [58] T. Zhao, K. Ito, and Z. Zhang. Semi-decoupling hybrid asymptotic and augmented finite volume method for nonlinear singular interface problems. Journal of Computational and Applied Mathematics, 396:113606, 2021.
  • [59] Y. Zhou, S. Zhao, M. Feig, and G.-W. Wei. High order matched interface and boundary method for elliptic equations with discontinuous coefficients and singular sources. Journal of Computational Physics, 213(1):1–30, 2006.
  • [60] H. Zhu and C. Xu. A fast high order method for the time-fractional diffusion equation. SIAM Journal on Numerical Analysis, 57(6):2829–2849, 2019.
  • [61] L. Zhu, Z. Zhang, and Z. Li. An immersed finite volume element method for 2d pdes with discontinuous coefficients and non-homogeneous jump conditions. Computers & Mathematics with Applications, 70(2):89–103, 2015.