Decoupling Numerical Method Based on Deep Neural Network for Nonlinear Degenerate Interface Problems
Abstract Interface problems describe many fundamental physical phenomena and are widely applied in engineering. However, it is challenging to develop efficient, fully decoupled numerical methods for degenerate interface problems, in which
the coefficient of the PDE is discontinuous and greater than or equal to zero on the interface. The main motivation of this paper is to construct fully decoupled numerical methods for nonlinear degenerate interface problems with “double singularities”. An efficient fully decoupled numerical method is proposed: the scheme combines a deep neural network on the singular subdomain with a finite difference method on the regular subdomain. The key of the new approach is to split the nonlinear degenerate partial differential equation with interface into two independent boundary value problems based on deep learning. The outstanding advantages of the proposed scheme are that the convergence order on the whole domain is determined by the finite difference scheme on the regular subdomain, and that very large jump ratios can be handled for both degenerate and non-degenerate interface problems. The expansion of the solution does not contain any undetermined parameters. In this way, two independent nonlinear systems are constructed on the regular subdomains and can be computed in parallel. The flexibility, accuracy and efficiency of the method are validated by various experiments in both 1D and 2D. In particular, our method is suitable not only for the degenerate interface case but also for the non-degenerate interface case. Application examples with complicated multi-connected and sharp-edge interfaces, including degenerate and nondegenerate cases, are also presented.
Key words nonlinear degenerate interface problems; deep neural network; fully decoupled method; very large jump ratio; convergence order; sharp edge interface
Mathematics Subject Classification 34B16, 35R05, 65M85, 65N06, 68T99
1 Introduction
Nonlinear degenerate interface problems describe many fundamental physical phenomena in chemical and mechanical engineering, physics and many other applications[16, 57, 36, 44, 40, 21]. Standard interface problems have attracted great interest in numerical computation, for example the finite element method[21, 3, 11, 14, 51], the finite difference method[1, 59, 7, 6], the finite volume element method [47, 18, 12, 61, 48], the spectral method[42, 32, 13], the least-squares method [52] and references therein. There has also been a great deal of rigorous mathematical theory and numerical analysis for degenerate PDEs[45, 46, 17, 5, 43, 50, 20, 9, 19, 60]. To the best of our knowledge, degenerate interface problems have received much less attention so far; only a few notable approaches can be found in the literature for handling degenerate PDEs with interfaces[58, 59, 8]. As is well known, the difficulty lies in the “double singularities” of nonlinear degenerate interface problems, namely, degeneracy and interface. Generally speaking, the most expensive part of numerical schemes for standard sharp interface problems[28, 30, 26, 31] is approximating the jump conditions accurately; many such methods are interesting, but the techniques used to treat the jump conditions are quite complicated. Our approach, based on a deep neural network, uses a different, simple and natural technique to treat the singularities compared with the above references, and hence yields an efficient numerical method for nonlinear degenerate interface problems. In fact, the challenge in numerically simulating nonlinear degenerate interface problems is to design methods that not only reduce the effect of the singularities at the degenerate points but also are less dependent on, or independent of, the jump conditions. Because nonlinear degenerate interface problems possess “double singularities”, extremely fine grids, such as adaptive or graded meshes, are usually required to reduce the effect of the singularities; with traditional numerical methods it is essentially impossible to solve nonlinear degenerate interface problems on uniform grids. The main goal of this paper is to present an efficient, fully decoupled finite difference method on uniform grids, based on a deep neural network, for solving nonlinear degenerate interface problems.
On the other hand, deep neural network (DNN) models have achieved great success in artificial intelligence, including high-dimensional problems in computer vision, natural language processing, time series analysis, and pattern and speech recognition. It is worth noting that, even though there are universal approximation results for single-layer neural networks, the approximation theory of DNNs still remains largely open. However, this should not prevent us from applying deep learning to other problems such as numerical weather forecasting, petroleum engineering, turbulent flow and interface problems. There are two main techniques for solving PDEs with deep learning, both of which parameterize the solution of the PDE by a deep neural network (DNN). In the first, a universal approximator based on a neural network together with point collocation is used to transform the PDE into an unconstrained minimization problem. In the second, the original problem is transformed into an optimization problem in variational form, with the trial functions represented by deep neural networks. Recently, we have noticed some gratifying works that use mesh-free methods with DNN models to solve PDEs and interface problems[29, 24, 49, 4]. In contrast, we use a structured-mesh method with deep learning to deal with degenerate interface problems, which is challenging and always of great interest. Although boundary conditions are absent on the singular sub-domains, which is known to be an extreme ill-posedness, it is shown that the DNN approach still has merits within a structured-grid method. In addition, in [58] we used a hybrid asymptotic and augmented compact finite volume method to realize a semi-decoupled numerical method on a uniform Cartesian mesh for a 1D degenerate interface problem. This inspires us to develop a fully decoupled numerical method for degenerate PDEs with interfaces. Although there have been many nice works on interface problems[37, 27, 56, 8, 59, 28, 30, 26, 49, 47, 21], there are very few fully decoupled numerical methods on uniform grids for such interface problems, let alone for the interesting degenerate interface problems.
In this paper, we focus on constructing fully decoupled numerical algorithms based on deep learning for solving degenerate interface problems. The method not only effectively reduces the influence of the degeneracy and the interface but also provides accurate solutions on a uniform Cartesian mesh. We construct two DNN structures near the interface instead of on the whole domain, and find the optimal solution by minimizing a mean squared error loss that consists of the equation and the interface conditions; the two parts are linked by the normal derivative jump condition. We use the DNN to treat the considered problem on the singular sub-domains near the interface to obtain a solution there, and then obtain two independent, decoupled boundary value sub-problems without interfaces on the regular sub-domains. These two nonlinear systems can be computed in parallel. The proposed approach is simple and easy to implement, it saves a lot of effort in handling the jump conditions, and it allows existing methods to be used for solving the nonlinear sub-problems without interfaces. The choice of the singular sub-domains is natural since we use uniform grids, and programming the new scheme is straightforward because the algorithm is fully decoupled. Although deep learning has shown remarkable success in many hard problems of artificial intelligence, its limited approximability on uniform grids means that the two general boundary value sub-problems are needed to obtain satisfactory approximations of the solutions of such nonlinear degenerate interface problems. This limitation turns out to be a blessing in disguise: if deep learning were able to strictly decouple the degenerate interface problem at the interface into two degenerate PDEs, we would probably obtain nonlinear ill-conditioned systems for the corresponding discrete sub-problems, and we would then have to look for other special methods to treat the degenerate PDE or interface problems, as in the literature[38, 58] and references therein.
The purpose of the paper is to develop a new fully decoupled numerical method based on the DNN technique that not only effectively reduces the influence of the singularities and the interface, but also provides a new way to realize a completely decoupled method, with ideas different from the existing methods for treating degenerate interface problems. It does not need any extra effort to switch between the degenerate interface case and the general interface case. The proposed approach has the advantage of fully decoupling the problem into two problems without interfaces on uniform grids. Since our fully decoupled method is independent of the interface and the jump conditions, it not only results in two independent sub-problems, but also easily handles very large jump ratios. In addition, the computational cost is almost the same for the homogeneous and non-homogeneous jump cases, which numerically demonstrates the fully decoupled property of our method. The method of this paper is sufficiently robust and handles both the 1D and 2D cases; in particular, it easily handles hard problems such as sharp-edged interface problems. Our method applies robustly and efficiently to both general interface problems and degenerate interface problems, whereas an effective method for general interface problems is not necessarily suitable for such nonlinear degenerate interface problems. It is demonstrated that our method is a simple and direct way to deal with quite hard problems. It should be mentioned that the convergence order of the scheme on the entire domain for such degenerate PDEs with interfaces is determined by the convergence order of the sub-problems on the regular sub-domains. Numerical experiments show that the proposed approach effectively approximates the solutions of such hard degenerate interface problems, and the numerical results show great improvement over existing methods for hard cases[2]. From the method in [58] we know that it is impossible there to split degenerate or general interface problems into two independent boundary value problems; nevertheless, our algorithms are completely decoupled for degenerate interface problems thanks to the use of deep learning. Although there are a few analytical results, the reason why deep neural networks coupled with traditional numerical methods perform so well for degenerate interface problems still largely remains a mystery. This encourages us to consider the theoretical approximation analysis in the future.
The rest of the paper is organized as follows. In Section 2, we give some preliminaries about deep neural networks. In Section 3, we describe the model problem, the fully decoupled DNN-FD method, the deep neural network structure and the finite difference scheme. We present numerical experiments, including some interesting models from mathematical physics, in Section 4. Concluding remarks are given in the final section.
2 Deep Neural Network
The definition and attributes of the deep neural network (DNN), particularly its approximation property, are briefly discussed in this section [49].
In order to define a DNN, we need two ingredients. The first is a (vector) linear transform $T(x)=Wx+b$, where $W\in\mathbb{R}^{m\times n}$ is a matrix and $b\in\mathbb{R}^{m}$ is a vector. The second is a nonlinear activation function $\sigma$. The rectified linear unit (ReLU), a commonly used activation function, is defined as $\mathrm{ReLU}(x)=\max(0,x)$ [35]. The exponential linear unit (ELU) will be used as the activation function in this paper, defined as $\mathrm{ELU}(x)=x$ for $x>0$ and $\mathrm{ELU}(x)=\alpha(e^{x}-1)$ for $x\le 0$; it is mainly used to avoid the vanishing gradient problem (Fig.1). The (vector) activation function is obtained by applying the activation function in an element-wise manner.


With these definitions, a continuous function can be defined by a composition of linear transforms and activation functions, i.e.,

$$f(x;\theta)=W_{L}\,\sigma\big(W_{L-1}\cdots\sigma(W_{1}x+b_{1})\cdots+b_{L-1}\big)+b_{L}, \tag{2.1}$$

where $W_{l}$ and $b_{l}$, $l=1,\dots,L$, are undetermined matrices and vectors respectively, $\sigma$ is the element-wise activation function, and the dimensions of $W_{l}$ and $b_{l}$ are chosen so that (2.1) is meaningful. All indeterminate coefficients (e.g., $W_{l}$ and $b_{l}$) in (2.1) are collected into a parameter vector $\theta\in\Theta$, where $\theta$ is a high-dimensional vector and $\Theta$ is the parameter space. The DNN representation of a continuous function can be viewed as
$$u(x)\approx f(x;\theta),\qquad \theta\in\Theta. \tag{2.2}$$
The set of all functions expressible by the DNN parametrized by $\theta\in\Theta$ forms the DNN approximation class. The approximation property of the DNN, which is relevant to the study of a DNN model’s expressive power, has been discussed in other papers[25, 53]. To accelerate the training of the neural network, we use the Adam optimizer [33], a variant of the stochastic gradient descent (SGD) method [41], in the two-dimensional case.
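To make the preceding definitions concrete, the following sketch builds a fully connected network with ELU activations and attaches the Adam optimizer. It is a minimal illustration under assumptions not taken from the paper (PyTorch as the framework, and the depth, width and learning rate shown below); it is not the authors' implementation.

```python
# Minimal sketch: a fully connected DNN with ELU activations and an Adam
# optimizer, as discussed in this section. Depth, width and learning rate
# are illustrative choices, not values prescribed by the paper.
import torch
import torch.nn as nn

class DNN(nn.Module):
    def __init__(self, in_dim=2, width=15, depth=6, out_dim=1):
        super().__init__()
        layers = [nn.Linear(in_dim, width), nn.ELU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.ELU()]
        layers += [nn.Linear(width, out_dim)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x has shape (batch, in_dim); the output approximates the solution at x
        return self.net(x)

model = DNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam variant of SGD [33]
```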

3 2D Degenerate Elliptic Interface Problem
3.1 Problem description
Consider the following nonlinear degenerate elliptic equation with the interface,
(3.1) |
where the computational region is a bounded domain in the plane with a Lipschitz boundary, and the interface is a closed curve that divides the domain into two disjoint sub-domains; the two jump functions are defined only along the interface. The nonlinear term contains the unknown solution and may take different nonlinear forms on the two sides of the interface. The diffusion coefficient is weakly degenerate (its degenerate points lie on the interface) and may possess other poor properties near the interface. The two jump conditions prescribe the differences of the limiting values, from the two sides of the interface, of the solution and of its co-normal derivative, respectively. Finally, a given function prescribes the boundary condition on the outer boundary.
3.2 DNN-FD method
In this research, we focus on using the DNN to develop a fully decoupled numerical method for solving degenerate interface problems. First, we divide the domain into uniform Cartesian meshes and use the DNN to solve the examined problem on the singular sub-domains near the interface; we then extract two decoupled boundary value sub-problems, with no interface, on the regular sub-domains. These two nonlinear systems can be computed in parallel by the finite difference method,

(3.2) |
(3.3) |
where the data on the artificial boundaries of the two regular sub-domains (shown in Fig.3) are the output of the deep neural network constructed in the next section.
The proposed method has the advantage of totally decoupling the original problem while using uniform grids. Because our fully decoupled technique is independent of the interface and the jump conditions, it not only yields two nondegenerate sub-problems but also easily handles interface problems with large jump ratios. The method handles both the 1D and 2D cases, and it is simple to deal with difficulties such as sharp-edged interfaces. While an effective approach for general interface problems is not necessarily suitable for such nonlinear degenerate interface problems, our method applies robustly and efficiently to both general and degenerate interface problems.
3.2.1 Deep Neural Network Structure
In recent years, deep neural networks have shown strong abilities in various fields[54, 23, 39, 15], mainly reflected in their nonlinear fitting ability, high-dimensional data processing, fault tolerance and feature extraction. Here, we apply a DNN on the element mesh near the interface to handle the nonlinearity, degeneracy and interface singularity of the original problem.
We apply the DNN on the banded degenerate domain composed of the element grids near the interface, see Fig.4. We construct the DNN structure on this domain instead of on the whole region to approximate the solution there. The reason is that we want to resolve the singularity on the interface through the approximation ability of the DNN, and to avoid any influence of the regular domains on the accuracy of the DNN; the accuracy on the regular domains can then be improved by better numerical methods. The problem is naturally separated into two nonsingular sub-problems[34, 22, 49],
(3.4) |
(3.5) |
where the exact interface is the zero level set of a level set function; the auxiliary function is an extension of the solution near the interface, defined through the Euclidean distance to the interface, and it is obtained from the deep neural networks. Equation (3.5) is constructed to ensure the uniqueness of the solution. Similarly, depending on the shape of the interface, the extension is constructed correspondingly. If the first jump condition across the interface is homogeneous, a single network function can be used to approximate the solution.

The structure of the DNN with four hidden layers is given in Fig.2. The sampling points are of two types: interior points chosen randomly on the degenerate domains, and the nodes of the element grids. In order to define the discrete loss function, all sampling points need to satisfy the first condition in (3.1),
(3.6) |
(3.7) |
The nodes also need to meet the jump conditions across the interface,
(3.8) |
(3.9) |
This structure is designed to resolve the singularity and geometric irregularity on the interface. If we sampled points directly on the interface, the separated sub-problems would also be degenerate.
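As a small illustration of how sampling points can be drawn on a banded sub-domain near the interface, the sketch below keeps random points whose level-set value is small. The circular level set, the band width and the point count are assumptions made here for illustration; the paper's interfaces and sampling rules differ from example to example.

```python
# Minimal sketch: random interior sampling restricted to a narrow band around
# the zero level set of an (assumed) circular level-set function.
import numpy as np

def level_set(x, y, r0=0.5):
    # illustrative circle of radius r0; the paper uses various level-set functions
    return x**2 + y**2 - r0**2

def sample_band(n_points, band=0.1, lo=-1.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n_points:
        x, y = rng.uniform(lo, hi, size=2)
        if abs(level_set(x, y)) < band:   # keep only points close to the interface
            pts.append((x, y))
    return np.array(pts)

interior_pts = sample_band(2000)   # e.g. 2000 interior points, as in the 2D tests
```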
In particular, there are two cases for the nodes. In the first case, the intersection of the interface and the grid is not a grid node, as shown in Fig.4; we then handle it via the nodes close to the intersection in the horizontal or vertical direction,
(3.10) |
(3.11) |
In the second case, the interface intersects the grid exactly at a node; we then deal with it through the four nodes around it,
(3.12) |
(3.13)
Now we are ready to define the total discrete loss function as follows:
(3.14) |
where the weights are used to handle problems with large jump ratios, so that the terms of the discrete loss function are of comparable orders of magnitude. After the gradient of the loss with respect to the parameters is approximated, each component of the parameter vector is updated as
$$\theta_{k}\leftarrow\theta_{k}-\eta\,\frac{\partial \mathcal{L}}{\partial \theta_{k}}, \tag{3.15}$$
where $\theta_{k}$ is any component of the parameter vector $\theta$, $\mathcal{L}$ is the total discrete loss (3.14), and $\eta$ is the learning rate. For the sake of simplicity, the learning rate is kept at a fixed default value unless otherwise specified.
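The following sketch assembles a toy version of the weighted discrete loss (3.14) and performs one plain gradient step of the form (3.15). Everything specific in it is an assumption made for illustration (PyTorch, two small ELU networks, a linear model equation with constant coefficients, homogeneous jumps, random dummy sampling points, and hand-picked weights and learning rate); it is not the authors' code, and in practice the Adam optimizer [33] replaces the plain step.

```python
# Minimal sketch of the weighted loss (3.14) and one update of the form (3.15).
import torch
import torch.nn as nn

# two small ELU networks, one per side of the interface (widths are illustrative)
net_p = nn.Sequential(nn.Linear(2, 15), nn.ELU(), nn.Linear(15, 15), nn.ELU(), nn.Linear(15, 1))
net_m = nn.Sequential(nn.Linear(2, 15), nn.ELU(), nn.Linear(15, 15), nn.ELU(), nn.Linear(15, 1))

def grad(u, x):
    # first derivatives of the scalar output u with respect to the inputs x
    return torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]

def residual(net, x, beta, f):
    # residual of the toy equation -div(beta grad u) = f at the sampled points x
    u = net(x)
    g = grad(u, x)
    flux = beta(x) * g
    div = grad(flux[:, :1], x)[:, :1] + grad(flux[:, 1:], x)[:, 1:]
    return -div - f(x)

# dummy coefficients, data and sampling points (illustrative only)
beta_p = lambda x: 1.0 + 0.0 * x[:, :1]
beta_m = lambda x: 10.0 + 0.0 * x[:, :1]           # toy jump ratio of 10
f      = lambda x: torch.ones_like(x[:, :1])
phi_u  = lambda x: torch.zeros_like(x[:, :1])      # homogeneous jump of the solution
x_p = torch.rand(100, 2, requires_grad=True)       # interior points, one side
x_m = torch.rand(100, 2, requires_grad=True)       # interior points, other side
x_g = torch.rand(20, 2, requires_grad=True)        # grid nodes near the interface
n_g = torch.rand(20, 2)
n_g = n_g / n_g.norm(dim=1, keepdim=True)          # unit normals at those nodes

w_r, w_u, w_q = 1.0, 1.0, 1.0                      # weights balancing the loss terms
eta = 1e-3                                         # learning rate in (3.15)

loss = (w_r * (residual(net_p, x_p, beta_p, f) ** 2).mean()            # cf. (3.6)
        + w_r * (residual(net_m, x_m, beta_m, f) ** 2).mean()          # cf. (3.7)
        + w_u * ((net_p(x_g) - net_m(x_g) - phi_u(x_g)) ** 2).mean()   # cf. (3.8)
        + w_q * ((((beta_p(x_g) * grad(net_p(x_g), x_g)
                    - beta_m(x_g) * grad(net_m(x_g), x_g)) * n_g)
                  .sum(1, keepdim=True)) ** 2).mean())                  # cf. (3.9), zero flux jump

loss.backward()
with torch.no_grad():                              # one plain step; Adam is used in practice
    for theta in list(net_p.parameters()) + list(net_m.parameters()):
        theta -= eta * theta.grad
        theta.grad = None
```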
3.2.2 Finite Difference Scheme
On the regular domains, we can use better numerical methods to improve the accuracy on the whole region. Here we use the finite difference method[6]. Take one of these sub-domains as an example.
Suppose that the sub-domain is covered by a uniform grid of nodes with mesh sizes $h_{x}$ and $h_{y}$ in the $x$ and $y$ directions, respectively. By the Taylor formula, the first-order and second-order partial derivatives of the function at a node are usually approximated by the following first-order and second-order central difference quotients, respectively,
$$\frac{\partial u}{\partial x}(x_{i},y_{j})\approx\frac{u_{i+1,j}-u_{i-1,j}}{2h_{x}},\qquad \frac{\partial u}{\partial y}(x_{i},y_{j})\approx\frac{u_{i,j+1}-u_{i,j-1}}{2h_{y}}, \tag{3.16}$$
$$\frac{\partial^{2} u}{\partial x^{2}}(x_{i},y_{j})\approx\frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{h_{x}^{2}},\qquad \frac{\partial^{2} u}{\partial y^{2}}(x_{i},y_{j})\approx\frac{u_{i,j+1}-2u_{i,j}+u_{i,j-1}}{h_{y}^{2}}, \tag{3.17}$$
where $u_{i,j}$ denotes the approximate value of the function at the node $(x_{i},y_{j})$.
For equation (II), the difference quotients are used to approximate the partial derivatives at the nodes, and the following difference equations are obtained on the domain:
(3.18) |
where . By substituting (3.16) and (3.17) into (3.18), we can get
(3.19)
where , , , , .
After discretizing the boundary value conditions, we can get
(3.20) |
Finally, the following iterative method is used to solve (3.19): set an initial value and construct the iteration sequence according to the following formula:
(3.21)
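The sketch below illustrates the flavor of the regular-subdomain solve: a five-point central-difference discretization combined with a fixed-point iteration that lags the nonlinear term, in the spirit of (3.18)-(3.21). The constant coefficient, the cubic nonlinearity, the zero Dirichlet data and the mesh parameters are assumptions made here; the actual scheme uses the problem's variable coefficients and boundary data supplied by the DNN.

```python
# Minimal sketch: five-point finite differences for -u_xx - u_yy + u**3 = f on a
# rectangle with zero Dirichlet data, solved by lagging the nonlinear term.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx, ny = 40, 40
hx, hy = 1.0 / nx, 1.0 / ny
f = np.ones((ny - 1) * (nx - 1))          # source values at the interior nodes
u = np.zeros_like(f)                      # initial guess

# second-order central differences (cf. (3.16)-(3.17)) assembled as sparse matrices
Ix, Iy = sp.identity(nx - 1), sp.identity(ny - 1)
Dxx = sp.diags([1, -2, 1], [-1, 0, 1], shape=(nx - 1, nx - 1)) / hx**2
Dyy = sp.diags([1, -2, 1], [-1, 0, 1], shape=(ny - 1, ny - 1)) / hy**2
A = -(sp.kron(Iy, Dxx) + sp.kron(Dyy, Ix)).tocsc()   # discrete -Laplacian

for it in range(50):                      # fixed-point iteration, cf. (3.21)
    u_new = spla.spsolve(A, f - u**3)     # nonlinear term lagged at the old iterate
    if np.max(np.abs(u_new - u)) < 1e-10:
        break
    u = u_new
```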
4 Numerical examples
In this section, we present some numerical results to illustrate the expected convergence rates for different configurations. The convergence order of the approximate solutions, as measured by the errors, is computed as
$$\text{Order}=\log_{2}\left(\frac{\|u_{h}-u\|}{\|u_{h/2}-u\|}\right),$$
where $u_{h}$ is the numerical solution with space step size $h$ and $u$ is the analytical solution.
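As a small arithmetic illustration of this definition, the snippet below computes orders from errors on successively halved meshes. The error values are placeholders, not results from the paper.

```python
# Minimal sketch: convergence orders from errors on successively halved meshes.
import math

errors = {10: 1.2e-2, 20: 3.1e-3, 40: 7.8e-4, 80: 2.0e-4, 160: 5.0e-5}  # placeholder ||u_h - u||
Ns = sorted(errors)
for coarse, fine in zip(Ns, Ns[1:]):
    order = math.log2(errors[coarse] / errors[fine])   # order = log2(E_h / E_{h/2})
    print(f"N = {fine:4d}   error = {errors[fine]:.2e}   order = {order:.2f}")
```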
4.1 1D degenerate interface with homogeneous jump conditions
Example 4.1. The degenerate differential equation with the homogeneous interface condition will be solved in , and the interface point . The boundary condition and the source function are chosen so that the exact solution is[58]
The coefficient is
Hence, the interface jump conditions,
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 | ||||||
20 | ||||||
40 | ||||||
80 | ||||||
160 |


N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 | ||||||
20 | ||||||
40 | ||||||
80 | ||||||
160 |


We test the current method on the classical interface problem with homogeneous jump conditions. The network uses 4 intermediate layers, the width of each layer is 6, and the number of sampling points is 202, including 200 interior points and two grid nodes. The numerical results of the current method for very large jump ratios are shown in Table 1 and Table 2, respectively. It can be seen clearly that the convergence orders of the numerical solution reach second order in the listed norms. Fig.5 shows the comparison between the exact solution and the numerical solution for very large jump ratios when N=160. In Fig.6, we present the decay of the loss function during the training process; eventually the error between the DNN solution and the exact solution near the interface is reduced to a small magnitude.
Many other well-known methods report numerical results only for moderate jump ratios for one-dimensional or two-dimensional interface problems[27], whereas the method of this paper can handle much larger jump ratios. The time required by the deep neural network to approximate the solution near the interface is approximately 1263 seconds when N=160.
4.2 1D degenerate interface with nonhomogeneous jump conditions
Example 4.2. In this example, the computational domain and interface (a point) are the same as in the previous example. The source function is chosen such that the exact solution is as follows[58]:
The coefficient is
The experiment satisfies the following jump conditions,
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 | ||||||
20 | ||||||
40 | ||||||
80 | ||||||
160 |
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 | ||||||
20 | ||||||
40 | ||||||
80 | ||||||
160 |


This experiment has nonhomogeneous jump conditions, which places higher and stricter requirements on the numerical algorithms. First, we present the convergence orders for large jump ratios in Table 3 and Table 4, respectively. It can be seen that the convergence orders for the case of nonhomogeneous jump conditions are second order. Fig.7a shows the comparison between the exact solution and the numerical solution for the large jump ratio when N=80. In Fig.7b, we plot the decay of the norm error between the DNN solution and the exact solution during the training process with the large jump ratio when N=80 (case 2).
Second, to compare with the method in the literature[58], we also compute this experiment with the smaller jump ratio used there. In Fig.7b, we plot the decay of the loss functions during the training process with the two jump ratios when N=80; dealing with a smaller jump ratio is simpler and more efficient. Both methods can compute homogeneous and nonhomogeneous degenerate problems in one dimension, and the coefficients can be constant, variable, or singular. The advantage of the DNN-FD method is that it handles a much larger jump ratio of the coefficients than the method in[58]. The method can also be extended to two-dimensional degenerate interfaces with large jump ratios, as shown in the next section. This example takes approximately 1298 seconds when N=160, showing that the cost of the current method is essentially the same whether the jump conditions are homogeneous or not.
4.3 2D degenerate interface with nonhomogeneous jump conditions
Example 4.3. In this example, we consider the interface problem with nonhomogeneous jump conditions. The exact solution is[27]
The coefficient is
where , and . The exact interface is the zero level set of the following level set function,
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 10 | ||||||
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 |


N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 10 | ||||||
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 |


We reconstruct the example from the literature[27] so that it degenerates near the interface; it is a two-dimensional degenerate elliptic equation with nonhomogeneous jump conditions. The network uses 6 intermediate layers, the width of each layer is 15, and the number of interior sampling points is 2000. In the running of the SGD method, we generate a new batch every 10 updating steps. The numerical results of the present method for large jump ratios are shown in Table 5 and Table 6, respectively. It can be seen that the convergence orders for the case of nonhomogeneous jump conditions are second order. Fig.8 shows the comparison between the exact solution and the numerical solution for the large jump ratio when N=160. In Fig.9, we plot the decay of the loss functions during the training process with large jump ratios when N=160. The two-dimensional case is more difficult than the one-dimensional case and requires more sampling points, but there is no essential difference in the method. The error between the DNN solution and the exact solution near the interface is again reduced to a small magnitude. This example shows that the method can be effectively extended to two-dimensional or even higher-dimensional degenerate interface problems, and can also effectively handle coefficients with large jump ratios.
4.4 2D nondegenerate interface with homogeneous jump conditions
Example 4.4. In this example, we consider the nondegenerate interface problem with high-contrast diffusion coefficients and homogeneous jump conditions. The exact solution is[24]
where , and . The exact interface is the zero level set of the following level set function,
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 10 | ||||||
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 |


N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 10 | ||||||
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 |


The method used in this paper can compute not only degenerate problems but also nondegenerate problems. The numerical results of the present method for large jump ratios are shown in Table 7 and Table 8, respectively. It can be seen easily that the numerical solution has second-order convergence in the discrete norms.
Fig.10 and Fig.11 show the comparison between the exact solution and the numerical solution for the two large jump ratios when N=160, respectively. Due to the application of accurate numerical methods on the regular domains, the accuracy of this method is higher than that in [24], and because of the fully decoupled format, it can handle problems with higher-contrast coefficients and larger jump ratios.
4.5 2D nondegenerate flower shape interface
Example 4.5. In this example, we consider the flower shape interface problem. The exact solution is[27]
The coefficient is
The exact interface is the zero level set of the following level set function,
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 10 | ||||||
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 |




The peculiarity of this example is that the problem has a complex smooth interface; it is designed to examine the performance of the DNN-FD method in dealing with geometric irregularities. Our method also has advantages in dealing with complex interface problems, and it becomes simple and efficient by applying a deep neural network near the interface. We present a grid refinement analysis in Table 9, which successfully reaches second order. Fig.13 shows the sampling points used by our method; it can be seen that more sampling points are placed near the parts of the curve with large curvature. Similarly, when dealing with singularity and non-smoothness of the interface, we place more sampling points, distributing the points piecewise according to the degeneracy, the jump ratio and other conditions so as to capture the properties of the interface well (see the sketch below). Fig.12 shows the comparison between the exact solution and the numerical solution when N=160.
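A possible realization of this section-based sampling is sketched below for a parametric flower-shaped interface: parameter sections around the petal tips, where the curve bends most, receive proportionally more random samples. The curve, the section count and the boost factor are illustrative assumptions, not the authors' sampling routine.

```python
# Minimal sketch: piecewise (section-based) sampling along a flower-shaped curve,
# with more samples allocated to the sections of largest bending.
import numpy as np

def flower(t, r0=0.5, a=0.15, k=5):
    r = r0 + a * np.cos(k * t)
    return np.c_[r * np.cos(t), r * np.sin(t)]

def sample_by_sections(n_total, n_sections=20, boost=3.0, k=5, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 2.0 * np.pi, n_sections + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    # crude importance: heavier weight near petal tips and valleys of cos(k*t)
    weights = 1.0 + (boost - 1.0) * np.abs(np.cos(k * mids))
    counts = np.maximum(1, np.round(n_total * weights / weights.sum()).astype(int))
    ts = np.concatenate([rng.uniform(lo, hi, c)
                         for lo, hi, c in zip(edges[:-1], edges[1:], counts)])
    return flower(ts)

pts_near_interface = sample_by_sections(500)
```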
4.6 2D nondegenerate happy-face interface
Example 4.6. In this example, we consider the following more general self-adjoint elliptic interface problem,
The example is a happy-face interface and the coefficients are symmetric positive definite matrices. The exact solution is[47, 27]
The coefficient is
The exact interface can be viewed in the literature[27]. The other coefficient is
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
10 10 | ||||||
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 |


The difficulty of this example is that the interface has kinks around the ears and the mouth. We present the convergence results in Table 10; numerical results indicate that the DNN-FD solution always converges to the exact solution with second-order accuracy. The exact solution and the numerical solution are compared in Fig.14 when N=160.
4.7 2D nondegenerate sharp-edged interface
Example 4.7. In this example, we consider the nonsmooth interface problem. The exact solution is[48, 28]
The coefficient is
The exact interface is the zero level set of the following level set function,
N | Error | Order | Error | Order | Error | Order | |
---|---|---|---|---|---|---|---|
20 20 | |||||||
40 40 | |||||||
80 80 | |||||||
160 160 | |||||||
320 320 |
IFVE[48] | |||
---|---|---|---|
N | Error | Order | Order |
20 20 | |||
40 40 | |||
80 80 | |||
160 160 | |||
320 320 |


For nonsmooth interface problems the method of this paper can also be applied; the numerical results of the current method are given in Table 11, where the grid refinement analysis successfully achieves second order. In other words, the proposed method is not sensitive to the grid for either the solution or the interface. In Table 12, we also calculate the logarithmic ratios of the errors. Although the scheme in [48] is second order and spends a great deal of effort on the interface, it is hard for it to obtain satisfactory results because of the nonsmoothness of the interface; moreover, the solution has a singularity with blow-up derivatives. Our method attains approximately second-order convergence, and the numerical results are much better than those of the IFVE method. Fig.15 shows the comparison between the exact solution and the numerical solution when N=320.
4.8 2D nondegenerate five-pointed star interface
Example 4.8. In this example, we consider the five-pointed star interface problem. The exact solution is[28]
The coefficient is
The exact interface is the zero level set of the following level set function,
with
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 | ||||||
320 320 |


This example presents a more difficult challenge, namely an interface consisting of several sharp-edged nonsmooth pieces, such as the five-pointed star interface; our method can still be applied after special processing for such complex nonsmooth interfaces. The numerical results of the current method are shown in Table 13. It can be seen that even when the non-smoothness of the interface changes, our method always maintains second-order accuracy. The exact solution and the numerical solution are compared in Fig.16 when N=320.
4.9 2D degenerate five-pointed star interface
Example 4.9. In this example, we consider the degenerate five-pointed star interface problem. The exact solution is[28]
The coefficient is
The exact interface is the same as in the previous example.
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 | ||||||
320 320 |


In this example, we reconstruct the example from the original literature[28] into one combining degenerate and nonsmooth interface problems, where the degenerate points are, respectively, two vertices of the five-pointed star on the positive and negative domains. Furthermore, because the equation is nonlinear in the solution, the difficulty of this example increases once again. The choice of the activation function is also changed, and the selected nonlinear activation function offers a good approximation to the solution of the problem. The numerical results of the current method are shown in Table 14. The experimental results show second-order accuracy in the discrete norms. Fig.17 shows the comparison between the exact solution and the numerical solution when N=320.
4.10 2D degenerate interface with large jump conditions
Example 4.10. This example is based on the addition of a large jump ratio to Example 4.9. The boundary condition and the source function are chosen so that the exact solution is[58]
The coefficient is
The exact interface is the zero level set of the following level set function,
with

N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 | ||||||
320 320 |
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 | ||||||
320 320 |
Our method can also be applied to the five-pointed star interface with large jump ratios. The numerical results of the current method are shown in Table 15 and Table 16. It can be seen that even when the non-smoothness of the interface changes, our method always maintains second-order accuracy. The numerical solution is shown in Fig.18 when N=320.
4.11 2D interface problem with non-analytical solution
Example 4.11. In this example, we consider the five-pointed star interface problem with a non-analytical solution, which is constructed from Example 4.8. The coefficient is
The exact interface is the zero level set of the following level set function,
with
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 | ||||||
320 320 |

We modify the problem data so that no analytical solution is available. This example presents a more difficult challenge: the interface consists of several sharp-edged nonsmooth pieces and the problem has no analytical solution. Our method can also be applied to this example. The numerical results of the current method are shown in Table 17, where the reference right-hand side is calculated from the numerical solution. Due to the lack of an analytical solution, we define residual-based errors and convergence orders of the equation as a reference for stability during the computation. These values remain stable around a constant, confirming the feasibility of the method. The numerical solution is shown in Fig.19 when N=320.
4.12 2D Linear elasticity interface problem
Example 4.12. Finally, we consider an example with physical significance: a linear elasticity PDE with a discontinuous stress tensor, as follows,
One application of the linear elasticity problem is to model the shape and location of fibroblast cells under stress. Let $\mathbf{u}$ denote the displacement field. Then the strain tensor is
$$\varepsilon(\mathbf{u})=\frac{1}{2}\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{T}\right),$$
and the elasticity tensor is a linear transformation on strain tensors. In the isotropic case, the stress is
$$\sigma(\mathbf{u})=2\mu\,\varepsilon(\mathbf{u})+\lambda\,\mathrm{tr}\big(\varepsilon(\mathbf{u})\big)\,I,$$
where $\lambda$ and $\mu$ are the Lamé constants, $\mathrm{tr}$ is the trace operator, and $I$ is the identity matrix. The Lamé constants are related in the usual way to the Young modulus $E$ and the Poisson ratio $\nu$; a small computational sketch of this conversion is given after (4.2). The interface is defined in polar coordinates
We set the computational domain, and the Dirichlet boundary condition and homogeneous jump conditions are prescribed in this example. Then we choose two groups of the Poisson ratio and the shear modulus as follows[10]
(4.1) |
and
(4.2) |
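As a small illustration of the parameter relationships mentioned above, the sketch below converts a Young modulus and Poisson ratio into the Lamé constants. The formula for the first Lamé constant assumes the usual three-dimensional/plane-strain convention, which is a common but here unverified assumption; the shear modulus formula is standard in either convention, and the example values are not the parameter groups (4.1)-(4.2).

```python
# Minimal sketch: Lamé constants from the Young modulus E and the Poisson ratio nu.
def lame_constants(E, nu):
    mu = E / (2.0 * (1.0 + nu))                     # shear modulus (second Lamé constant)
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))  # first Lamé constant (3D / plane strain)
    return lam, mu

# illustrative contrast across the interface (not the values used in the experiments)
lam_in,  mu_in  = lame_constants(E=1.0,  nu=0.3)
lam_out, mu_out = lame_constants(E=10.0, nu=0.3)
```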

N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 | ||||||
320 320 |
N | Error | Order | Error | Order | Error | Order |
---|---|---|---|---|---|---|
20 20 | ||||||
40 40 | ||||||
80 80 | ||||||
160 160 | ||||||
320 320 |
The network uses 6 intermediate layers, the width of each layer is 20, and the learning rate is fixed. In Fig.20, we plot the profiles of the DNN-FD solution, which are the displacements in the two coordinate directions, respectively. The corresponding numerical results are shown in Table 18 and Table 19. We find that the DNN-FD solutions have second-order accuracy in the discrete norms.
5 Conclusions.
Numerical methods for solving nonlinear degenerate interface problems are one of the fundamental issues in scientific computing, and it is challenging to design effective and robust fully decoupled numerical methods for such degenerate interface problems. In this paper, a fully decoupled finite difference method based on a deep neural network is proposed for solving degenerate interface problems in both 1D and 2D. It is shown that uniform grids can be used to solve degenerate PDEs with interfaces. There are no unknown augmented parameters in the discrete schemes, and no extra conditions or work are required to design the numerical approximation algorithms. In fact, the augmented data are obtained by the DNN technique, and the degenerate interface problem is completely decoupled into two independent sub-problems, in contrast to the treatment of other degenerate or singular problems. The accuracy of the proposed fully decoupled algorithms has been demonstrated by solving various examples, including degenerate and nondegenerate cases. In particular, the fully decoupled property of the algorithm makes the method capable of easily handling the jump ratio, from the values reachable by the semi-decoupled approach to the much larger values reachable by the fully decoupled approach. An interesting typical sharp-edge example with a degenerate five-pointed star interface shows that our approach works very well for such very hard problems. Numerical examples confirm the effectiveness of the fully decoupled algorithms for solving degenerate interface problems.
Acknowledgments.
This work is partially supported by the National Natural Science Foundation of China (grant No. 11971241).
References
- [1] L. Adams and Z. Li. The immersed interface/multigrid methods for interface problems. SIAM Journal on Scientific Computing, 24(2):463–479, 2002.
- [2] J. Albright, Y. Epshteyn, M. Medvinsky, and Q. Xia. High-order numerical schemes based on difference potentials for 2d elliptic problems with material interfaces. Applied Numerical Mathematics, 111:64–91, 2017.
- [3] T. Arbogast and M. F. Wheeler. A nonlinear mixed finite element method for a degenerate parabolic equation arising in flow in porous media. SIAM Journal on Numerical Analysis, 33(4):1669–1687, 1996.
- [4] S. Baharlouei, R. Mokhtari, and F. Mostajeran. Dnn-hdg: A deep learning hybridized discontinuous galerkin method for solving some elliptic problems. Engineering Analysis with Boundary Elements, 151:656–669, 2023.
- [5] W. Bao, Y. Cai, X. Jia, and Q. Tang. Numerical methods and comparison for the dirac equation in the nonrelativistic limit regime. Journal of Scientific Computing, 71(3):1094–1134, 2017.
- [6] J. Beale and A. Layton. On the accuracy of finite difference methods for elliptic problems with interfaces. Communications in Applied Mathematics and Computational Science, 1(1):91–119, 2007.
- [7] J. Beale and W. Ying. Solution of the dirichlet problem by a finite difference analog of the boundary integral equation. Numerische Mathematik, 141(3):605–626, 2019.
- [8] J. Bedrossian, J. H. von Brecht, S. Zhu, E. Sifakis, and J. M. Teran. A second order virtual node method for elliptic problems with interfaces and irregular domains. Journal of Computational Physics, 229(18):6405–6426, 2010.
- [9] F. Bernis and A. Friedman. Higher order nonlinear degenerate parabolic equations. Journal of Differential Equations, 83(1):179–206, 1990.
- [10] B. Wang, K.-L. Xia, and G.-W. Wei. Matched interface and boundary method for elasticity interface problems. Journal of Computational and Applied Mathematics, 285:203–225, 2015.
- [11] Z. Cai, C. He, and S. Zhang. Discontinuous finite element methods for interface problems: robust a priori and a posteriori error estimates. SIAM Journal on Numerical Analysis, 55(1):400–418, 2017.
- [12] W. Cao, X. Zhang, Z. Zhang, and Q. Zou. Superconvergence of immersed finite volume methods for one-dimensional interface problems. Journal of Scientific Computing, 73(2):543–565, 2017.
- [13] S. Chen and J. Shen. Enriched spectral methods and applications to problems with weakly singular solutions. Journal of Scientific Computing, 77(3):1468–1489, 2018.
- [14] Z. Chen and J. Zou. Finite element methods and their convergence for elliptic and parabolic interface problems. Numerische Mathematik, 79(2):175–202, 1998.
- [15] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine learning, pages 160–167, 2008.
- [16] M. J. Del Razo and R. J. LeVeque. Numerical methods for interface coupling of compressible and almost incompressible media. SIAM Journal on Scientific Computing, 39(3):B486–B507, 2017.
- [17] Q. Du, M. Gunzburger, R. B. Lehoucq, and K. Zhou. Analysis and approximation of nonlocal diffusion problems with volume constraints. SIAM Review, 54(4):667–696, 2012.
- [18] R. E. Ewing, Z. Li, T. Lin, and Y. Lin. The immersed finite volume element methods for the elliptic interface problems. Mathematics and Computers in Simulation, 50(1-4):63–76, 1999.
- [19] M. Gunzburger, X. He, and B. Li. On stokes–ritz projection and multistep backward differentiation schemes in decoupling the stokes–darcy model. SIAM Journal on Numerical Analysis, 56(1):397–427, 2018.
- [20] B.-Y. Guo and L.-L. Wang. Jacobi interpolation approximations and their applications to singular differential equations. Advances in Computational Mathematics, 14(3):227–276, 2001.
- [21] R. Guo, T. Lin, and Y. Lin. Recovering elastic inclusions by shape optimization methods with immersed finite elements. Journal of Computational Physics, 404:109123, 2020.
- [22] J. Han, A. Jentzen, et al. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Communications in Mathematics and Statistics, 5(4):349–380, 2017.
- [23] A. Handa, M. Bloesch, V. Pătrăucean, S. Stent, J. McCormac, and A. Davison. gvnn: Neural network library for geometric computer vision. In European Conference on Computer Vision, pages 67–82. Springer, 2016.
- [24] C.-Y. He, X.-Z. Hu, and L. Mu. A mesh-free method using piecewise deep neural network for elliptic interface problems. Journal of Computational and Applied Mathematics, 412:114358, 2022.
- [25] J. He, L. Li, J. Xu, and C. Zheng. Relu deep neural networks and linear finite elements. arXiv preprint arXiv:1807.03973, 2018.
- [26] X. He, T. Lin, and Y. Lin. Interior penalty bilinear ife discontinuous galerkin methods for elliptic equations with discontinuous coefficient. Journal of Systems Science and Complexity, 23(3):467–483, 2010.
- [27] S. Hou and X.-D. Liu. A numerical method for solving variable coefficient elliptic equation with interfaces. Journal of Computational Physics, 202(2):411–445, 2005.
- [28] S. Hou, W. Wang, and L. Wang. Numerical method for solving matrix coefficient elliptic equation with sharp-edged interfaces. Journal of Computational Physics, 229(19):7162–7179, 2010.
- [29] W.-F. Hu, T.-S. Lin, and M.-C. Lai. A discontinuity capturing shallow neural network for elliptic interface problems. Journal of Computational Physics, 469:111576, 2022.
- [30] P. Huang, H. Wu, and Y. Xiao. An unfitted interface penalty finite element method for elliptic interface problems. Computer Methods in Applied Mechanics and Engineering, 323:439–460, 2017.
- [31] H.-F. Ji, F.Wang, J.-R. Chen, and Z.-L. Li. An immersed cr-p0 element for stokes interface problems and the optimal convergence analysis. Computer Methods in Applied Mechanics and Engineering, 399:115306, 2022.
- [32] W. Jiang, W. Bao, C. V. Thompson, and D. J. Srolovitz. Phase field approach for simulating solid-state dewetting problems. Acta Materialia, 60(15):5578–5592, 2012.
- [33] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- [34] I. E. Lagaris, A. Likas, and D. I. Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5):987–1000, 1998.
- [35] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
- [36] R. J. LeVeque and Z. Li. The immersed interface method for elliptic equations with discontinuous coefficients and singular sources. SIAM Journal on Numerical Analysis, 31(4):1019–1044, 1994.
- [37] Z. Li, T. Lin, and X. Wu. New cartesian grid methods for interface problems using the finite element formulation. Numerische Mathematik, 96(1):61–98, 2003.
- [38] B. Lusch, J. N. Kutz, and S. L. Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications, 9(1):1–10, 2018.
- [39] Y. Pao. Adaptive pattern recognition and neural networks. Reading, MA (US); Addison-Wesley Publishing Co., Inc., 1989.
- [40] W. Ren and X.-P. Wang. An iterative grid redistribution method for singular problems in multiple dimensions. Journal of Computational Physics, 159(2):246–273, 2000.
- [41] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
- [42] J. Shen, T. Tang, and L.-L. Wang. Spectral methods: algorithms, analysis and applications, volume 41. Springer Science & Business Media, 2011.
- [43] J. Shen and Y. Wang. Muntz Galerkin methods and applications to mixed dirichlet–neumann boundary value problems. SIAM Journal on Scientific Computing, 38(4):A2357–A2381, 2016.
- [44] H. Sun and D. L. Darmofal. An adaptive simplex cut-cell method for high-order discontinuous galerkin discretizations of elliptic interface problems and conjugate heat transfer problems. Journal of Computational Physics, 278:445–468, 2014.
- [45] C. Wang and R. Du. Approximate controllability of a class of semilinear degenerate systems with convection term. Journal of Differential Equations, 254(9):3665–3689, 2013.
- [46] C. Wang and R. Du. Carleman estimates and null controllability for a class of degenerate parabolic equations with convection terms. SIAM Journal on Control and Optimization, 52(3):1457–1480, 2014.
- [47] Q. Wang, J. Xie, Z. Zhang, and L. Wang. Bilinear immersed finite volume element method for solving matrix coefficient elliptic interface problems with non-homogeneous jump conditions. Computers & Mathematics with Applications, 86:1–15, 2021.
- [48] Q. Wang, Z. Zhang, and L. Wang. New immersed finite volume element method for elliptic interface problems with non-homogeneous jump conditions. Journal of Computational Physics, 427:110075, 2021.
- [49] Z. Wang and Z. Zhang. A mesh-free method for interface problems using the deep learning approach. Journal of Computational Physics, 400:108963, 2020.
- [50] D. Wu, J. Yue, G. Yuan, and J. Lv. Finite volume element approximation for nonlinear diffusion problems with degenerate diffusion coefficients. Applied Numerical Mathematics, 140:23–47, 2019.
- [51] K. Xia, M. Zhan, and G.-W. Wei. Mib galerkin method for elliptic interface problems. Journal of Computational and Applied Mathematics, 272:195–220, 2014.
- [52] M. Xu, L. Zhang, and E. Tohidi. A fourth-order least-squares based reproducing kernel method for one-dimensional elliptic interface problems. Applied Numerical Mathematics, 162:124–136, 2021.
- [53] D. Yarotsky. Error bounds for approximations with deep relu networks. Neural Networks, 94:103–114, 2017.
- [54] B. Yu et al. The deep ritz method: a deep learning-based numerical algorithm for solving variational problems. Communications in Mathematics and Statistics, 6(1):1–12, 2018.
- [55] Z. Zhang, P. Rosakis, T. Y. Hou, and G. Ravichandran. A minimal mechanosensing model predicts keratocyte evolution on flexible substrates. Journal of the Royal Society Interface, 17(166):20200175, 2020.
- [56] M. Zhao, W. Ying, J. Lowengrub, and S. Li. An efficient adaptive rescaling scheme for computing moving interface problems. Communications in Computational Physics, 21(3):679–691, 2017.
- [57] S. Zhao. High order matched interface and boundary methods for the helmholtz equation in media with arbitrarily curved interfaces. Journal of Computational Physics, 229(9):3155–3170, 2010.
- [58] T. Zhao, K. Ito, and Z. Zhang. Semi-decoupling hybrid asymptotic and augmented finite volume method for nonlinear singular interface problems. Journal of Computational and Applied Mathematics, 396:113606, 2021.
- [59] Y. C. Zhou, S. Zhao, M. Feig, and G.-W. Wei. High order matched interface and boundary method for elliptic equations with discontinuous coefficients and singular sources. Journal of Computational Physics, 213(1):1–30, 2006.
- [60] H. Zhu and C. Xu. A fast high order method for the time-fractional diffusion equation. SIAM Journal on Numerical Analysis, 57(6):2829–2849, 2019.
- [61] L. Zhu, Z. Zhang, and Z. Li. An immersed finite volume element method for 2d pdes with discontinuous coefficients and non-homogeneous jump conditions. Computers & Mathematics with Applications, 70(2):89–103, 2015.