
Solving Partial Differential Equations on Evolving Surfaces via the Constrained Least-Squares and Grid-Based Particle Method

Ningchen Ying Department of Mathematics, the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. Email: [email protected]    Shingyu Leung Department of Mathematics, the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. Email: [email protected]
Abstract

We present a framework for solving partial differential equations on evolving surfaces. Based on the grid-based particle method (GBPM) [18], the method can naturally resample the surface even under large deformations induced by the motion law. We introduce a new component in the local reconstruction step of the algorithm and demonstrate numerically that the modification can improve computational accuracy when a region of large curvature develops during the evolution. The method also incorporates a recently developed constrained least-squares ghost sample points (CLS-GSP) formulation [37, 38], which can lead to a better-conditioned discretized matrix for computing some surface differential operators. The proposed framework can incorporate many methods and link various approaches to the same problem. Several numerical experiments are carried out to show the accuracy and effectiveness of the proposed method.

1 Introduction

This paper proposes a new numerical method to solve a class of parabolic partial differential equations (PDEs) on evolving surfaces. In particular, we consider the following equation

\frac{Du}{Dt}+u\nabla_{\Gamma}\cdot\mathbf{v}+\nabla_{\Gamma}\cdot\mathbf{q}=f\text{\; on\; }\Gamma\,, \qquad (1)

where $\Gamma$ is a surface evolving under the velocity field $\mathbf{v}$, $\frac{D}{Dt}$ is the material derivative given by $\frac{D}{Dt}=\frac{\partial}{\partial t}+\mathbf{v}\cdot\nabla$, the vector $\mathbf{q}$ represents a flux and $f$ is a source term. This equation can also be rewritten as

u_{t}+V\frac{\partial u}{\partial\mathbf{n}}-\kappa Vu+\nabla_{\Gamma}\cdot(u\mathbf{T})+\nabla_{\Gamma}\cdot\mathbf{q}=f\text{\; on\; }\Gamma\,,

where the quantity $\kappa$ is the mean curvature of the surface, $V$ and $\mathbf{T}$ are the normal velocity of the interface motion and the velocity tangential to the interface, respectively, satisfying the decomposition $\mathbf{v}=V\mathbf{n}+\mathbf{T}$. For motions in the normal direction, the equation can be further reduced to

u_{t}+V\frac{\partial u}{\partial\mathbf{n}}-\kappa Vu+\nabla_{\Gamma}\cdot\mathbf{q}=f\text{\; on\; }\Gamma\,.

We can use various choices of the flux function $\mathbf{q}$. Here we only consider two cases with the diffusive flux given by $\mathbf{q}=-\mathcal{D}\nabla_{\Gamma}w$, where $w$ is a function depending on the quantity $u$ defined on the interface and $\mathcal{D}>0$ is a diffusion parameter. In particular, if we choose $w=u$, equation (1) leads to the advection-diffusion equation with a source term $f$ given by

\frac{Du}{Dt}+u\nabla_{\Gamma}\cdot\mathbf{v}-\mathcal{D}\Delta_{\Gamma}u=f\,. \qquad (2)

Another widely used choice of the flux is $w=-\mu\Delta_{\Gamma}u+\psi^{\prime}(u)$, where $\psi(u)$ is the so-called double-well potential function and $\mu>0$ physically represents a chemical potential in modeling multiphase fluid flows. This leads to the Cahn-Hilliard equation on moving surfaces

\frac{Du}{Dt}+u\nabla_{\Gamma}\cdot\mathbf{v}-\mathcal{D}\Delta_{\Gamma}\left(-\mu\Delta_{\Gamma}u+\psi^{\prime}(u)\right)=f\,.

These equations have many applications and are especially important in the biological, physical, and engineering sciences. For example, the growth of a solid tumor has been modeled by a reaction-diffusion system coupled with a curvature-dependent evolution law [5, 3, 9]. The Cahn-Hilliard equation has been used for simulating pattern formation and selection [24, 1]. The advection-diffusion equation can model multiphase interfacial flows with a surfactant, as discussed in [36, 35].

There are various difficulties in developing a numerical strategy for these problems. The representation of the interface itself already plays an essential role since the topology of the surface might change during the evolution. A typical Lagrangian method based on a triangulation of the interface might require remeshing when the dynamics involve significant deformation, which can lead to computational challenges in the implementation. Mesh surgery might also be needed to carefully reconnect the triangulation of the surface when there is any topological change in the evolution. Implicit methods based on the Eulerian formulation have recently become more attractive. One of the most widely used representations is based on the level set method [27, 26]. The evolution of the interface is modeled by the numerical solution of a Hamilton-Jacobi equation, which is a nonlinear first-order PDE. More recently, we have developed a grid-based particle method (GBPM) and a cell-based particle method (CBPM) in [18, 13] to represent and model the interface dynamics. These grid-based or cell-based representations sample the surface by meshless Lagrangian particles according to an underlying uniform Eulerian mesh. This automatically provides a set of quasi-uniform sampling points on the interface. As the interface evolves according to the motion law, these unconnected, non-parametrized sampling points are continuously updated and can naturally capture changes in the interface topology.

The second phase in the development is to have a numerical approach for solving PDEs on a static surface given by a specific interface representation. For example, based on the implicit Eulerian representation, some projection operators have been introduced to extend a PDE off the interface to a small neighborhood [2, 11]. Without the projection operator, the extension idea has also been used in [34] for solving the surface eikonal equation using the fast sweeping method [32, 39]. Based on the level set representation, a so-called closest point method (CPM) has been developed in [30, 22, 23, 21] by replacing the surface PDE by an evolution equation approach. Some methods have been proposed for general surfaces represented by point clouds. For example, there are methods based on the radial basis functions formulation [29, 31], a local mesh method [14], a virtual grid difference method [33], etc.

A more challenging task is to couple the above two components, for instance through operator splitting, so that one can develop a numerical method for solving PDEs on evolving surfaces. Some fully explicit Lagrangian methods have been developed in [7, 1, 6] based on a finite element formulation on the triangulated surface. A fully implicit Eulerian method based on the level set method has been proposed in [36]. A numerical approach based on the GBPM has been discussed in [19, 16]. Coupling the GBPM with the CPM can also solve PDEs on moving surfaces [28].

In this paper, we develop a general framework for solving PDEs on evolving surfaces represented by the GBPM. We closely follow the original GBPM in [18, 19, 16] for both the interface representation and the evolution. Yet, we discuss an improvement in the implementation of the local reconstruction. In the original algorithm, we must collect a small set of neighbors for the local interface reconstruction, and these neighboring sample points must be chosen carefully for an accurate approximation. One constraint is that we must sample the same interface segment. When the curvature of the interface is too large compared to the underlying mesh size, one might collect footpoints from different segments of the interface, resulting in an incorrect local reconstruction of the surface. In the original work, we proposed checking whether the normal vectors associated with these footpoints point in a similar direction. When the interface develops a high-curvature region relative to the size of the underlying mesh, however, this check does not sufficiently penalize faraway sampling points. This paper introduces a simple weighting strategy that naturally reduces the contributions from footpoints located close to a different segment.

The second main contribution of the paper is to incorporate a CLS-GSP approach for approximating local derivatives [37, 38]. In the original GBPM implementation in [18], the surface differential operator is approximated simply by polynomial least-squares fitting. Although local approximation accuracy is relatively easy to achieve with local polynomial reconstruction, in some applications it is more important to construct a discretization on the sampling points such that the resulting linear system can be inverted efficiently and stably. For example, when applied to the Laplace-Beltrami operator, one can obtain an M-matrix system using a constrained quadratic programming optimization technique to enforce consistency and diagonal dominance after the discretization [20]. In a recent work [33], we have developed a modified virtual grid difference (MVGD) discretization. It leads to a more diagonally dominant and better-conditioned linear system, which can be efficiently and robustly solved by any existing fast solver for elliptic PDEs. In this work, we propose to follow the CLS approach for approximating local derivatives as developed in [37, 38]. Numerically, the method can give a better-conditioned discretization of the Laplace-Beltrami operator.

The framework that we are proposing integrates a large class of methods. This allows us to link various current approaches to the same problem. For example, we will also demonstrate that the so-called local tangential lifting (LTL) method [4] and a recently developed CLS method with RBF [37, 38] can be easily incorporated with our framework.

This paper is organized as follows. We first summarize the GBPM as introduced in [18, 17, 16] in Section 2 and then introduce a new modification of the local reconstruction step at the end of the section. In Section 3, we introduce our proposed framework for various local reconstructions and explain how the method couples with the original GBPM. In Section 4, we show various numerical experiments to demonstrate the effectiveness and accuracy of the proposed approach. Finally, we summarize our proposed method and discuss some possible future work.

2 Grid Based Particle Method (GBPM)

2.1 The original formulation

This section briefly summarizes the Grid Based Particle Method (GBPM) proposed and developed in a series of studies [18, 17, 16]. We refer interested readers to these references and the works that follow them. The overall algorithm of the GBPM is given as follows, and a graphical illustration is given in Figure 1.

Algorithm:

  1. Initialization [Figure 1 (a)]. Collect all grid points in a small neighborhood (computational tube) of the interface. From each of these grid points, compute the closest point on the interface. We call these grid points active and their corresponding particles on the interface footpoints. The interface is represented by these meshless footpoints (particles).

  2. Motion [Figure 1 (b)]. Move all footpoints according to a given motion law.

  3. Re-Sampling [Figure 1 (c)]. For each active grid point, re-compute the closest point to the interface reconstructed locally by those particles after the motion in step 2.

  4. Updating the computational tube [Figure 1 (d-e)]. Activate any grid point with an active neighbor and find their corresponding footpoints. Then, inactivate grid points that are far away from the interface.

  5. Adaptation (optional). Locally refine the underlying grid cell if necessary.

  6. Iteration. Repeat steps 2-5 until the final computation time.

Figure 1: Grid Based Particle Method. (a) Initialization, (b) after motion, (c) after re-sampling, (d) after activating new grid points with their footpoints, (e) after inactivating grid points with their footpoints. Blue circles denote active grid points, and red squares denote their corresponding footpoints on the interface. In practice, we do not use such a thick computational tube. We pick it here to show the effect of each step in our algorithm clearly.

In the GBPM, we represent the interface by meshless particles associated with an underlying Eulerian mesh. Each sampling particle on the interface is chosen to be the closest point from each underlying grid point in a small neighborhood of the interface. This one-to-one correspondence gives each particle an Eulerian reference during the evolution.

In the first step, we define an initial computational tube for active grid points and use their corresponding closest points as the sampling particles for the interface. A grid point is called active if its distance to the interface is smaller than a given tube radius, and we collect all active grid points in a set denoted by $\Gamma$. We associate with each active grid point the corresponding closest point on the interface. This particle is called the footpoint associated with this active grid point. Furthermore, we can also compute and store certain Lagrangian information of the interface at the footpoints, including the normal, the curvature, and a parametrization, which will be helpful in various applications.

This representation is illustrated in Figure 1(a) using a circular manifold as an example. We plot the underlying mesh in a solid line, all active grids using small circles, and their associated footpoints on the manifold using squares. To each grid point near the interface (blue circles), we associate a footpoint on the interface (red squares). The relationship of each of these pairs is shown by a solid line link. To track the motion of the interface, we move all the sampling particles according to a given motion law. This motion law can be very general. Suppose the interface is moved under an external velocity field. Since we have a collection of particles on the interface, we move these points just like all other particle-based methods, which is simple and computationally efficient. We can solve a set of ordinary differential equations using a high-order scheme, which gives a very accurate interface location. For more complicated motions, the velocity may depend on the geometry of the interface. This can be done through local interface reconstruction.

It should be noted that a footpoint after the motion may no longer be the closest point on the interface to its associated active grid point. For example, Figure 1 (b) shows the location of all particles on the interface after the constant motion $\mathbf{u}=(1,1)^{T}$ with a small time step. As we can see, these particles on the interface are no longer the closest points from the active grid points to the interface. More importantly, the motion may cause the original footpoints to become unevenly distributed along the interface. This may introduce stiffness when particles move toward each other and a large error when particles move apart. To maintain a quasi-uniform distribution of particles, we need to resample the interface by recomputing the footpoints and updating the set of active grid points ($\Gamma$) during the evolution. During this resampling process, we locally reconstruct the interface, which involves communications among different particles on the interface. This local reconstruction also provides geometric and Lagrangian information at the recomputed footpoints on the interface. The key step in the method is a least-squares approximation of the interface using polynomials at each particle in a local coordinate system, $\{(\mathbf{n}^{\prime})^{\perp},\mathbf{n}^{\prime}\}$, with the associated particle as the origin. Using this local reconstruction, we find the closest point from this active grid point to the local approximation of the interface. This gives the new footpoint location. Further, we also compute and update any necessary geometric and Lagrangian information, such as the normal vector, the curvature, and possibly an updated interface parametrization at this new footpoint.

2.2 A new weighting strategy in the local reconstruction

We approximate the surface locally using a polynomial from each grid point $\mathbf{p}$. For example, in three dimensions, we could use a second-order polynomial given by

h(x,y)=a_{0,0}+a_{1,0}x+a_{0,1}y+a_{2,0}x^{2}+a_{1,1}xy+a_{0,2}y^{2}\,,

where $(x,y,h(x,y))$ are coordinates in the local coordinate system $\{(\mathbf{n}^{\prime})^{\perp},\mathbf{n}^{\prime}\}$. The original idea in [18] fits the polynomial using a collection of $(n+1)$ footpoints, $(x_{i},y_{i},h(x_{i},y_{i}))$ with $i=0,1,\cdots,n$, in the neighborhood of $\mathbf{p}$. With these points, we determine the least-squares solution of the linear system:

\begin{pmatrix}1&x_{0}&y_{0}&x_{0}^{2}&x_{0}y_{0}&y_{0}^{2}\\ 1&x_{1}&y_{1}&x_{1}^{2}&x_{1}y_{1}&y_{1}^{2}\\ 1&x_{2}&y_{2}&x_{2}^{2}&x_{2}y_{2}&y_{2}^{2}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 1&x_{n}&y_{n}&x_{n}^{2}&x_{n}y_{n}&y_{n}^{2}\end{pmatrix}\begin{pmatrix}a_{0,0}\\ a_{1,0}\\ a_{0,1}\\ a_{2,0}\\ a_{1,1}\\ a_{0,2}\end{pmatrix}=\begin{pmatrix}h(x_{0},y_{0})\\ h(x_{1},y_{1})\\ h(x_{2},y_{2})\\ \vdots\\ h(x_{n},y_{n})\end{pmatrix}\,.

This set of footpoints in the local reconstruction must be chosen carefully. As discussed in the original GBPM formulation, this set of points has to satisfy the following conditions. The first one is that these points have to be distinct. In the previous formulation, we required that these footpoints be separated by at least a certain distance on the local tangent plane. This constraint avoids placing excessive weight on nearly coincident sampling points. The second constraint is that we would like to sample the same segment of the interface. When the curvature of the interface is too large compared to the underlying mesh size, one might collect footpoints from different interface segments, which will result in an incorrect local reconstruction of the surface. In the original work, we proposed checking whether the normal vectors associated with these footpoints point in a similar direction. In particular, each of these footpoints $\mathbf{x}_{i}$ has an associated vector $\mathbf{n}^{\prime}_{i}$ corresponding to the surface normal before the motion, and we checked whether $\mathbf{n}^{\prime}_{i}\cdot\mathbf{n}^{\prime}_{j}>\cos\theta_{\max}$ for some angle $\theta_{\max}$.

The above strategies impose, in some sense, hard constraints in the selection phase: when choosing the subset of footpoints in the local reconstruction step, we either use a neighboring sampling point or eliminate it. This paper proposes relaxing this by imposing some soft constraints. The first condition on the well-separation can be partly handled by the moving least squares (MLS) method [15], which introduces a weighting of the footpoints based on the distance between points on the tangent plane. We will not concentrate on this part. To avoid information from different segments of the surface, instead of simply discarding those points whose normals deviate by an angle larger than $\theta_{\max}$, we propose to add a weighting in the least squares to further emphasize the contribution from footpoints with similar normal directions. Mathematically, we let $\mathbf{x}_{0}=(x_{0},y_{0},h(x_{0},y_{0}))$ be the closest footpoint to the grid point $\mathbf{p}$ and denote the normal vectors at the footpoints before the motion by $\mathbf{n}^{\prime}_{i}$, $i=0,1,\cdots,n$. We now introduce a weight $c_{i}$ for each collected footpoint given by

c_{i}=\left\{\begin{array}{cl}\mathbf{n}^{\prime}_{0}\cdot\mathbf{n}^{\prime}_{i}&\mbox{ if $\mathbf{n}^{\prime}_{0}\cdot\mathbf{n}^{\prime}_{i}>\cos\theta_{\max}$,}\\ 0&\mbox{ otherwise.}\end{array}\right. \qquad (3)

Once we have this set of weights, the $i$-th equation in the least-squares system will be modified to

c_{i}\begin{pmatrix}1&x_{i}&y_{i}&x_{i}^{2}&x_{i}y_{i}&y_{i}^{2}\end{pmatrix}\begin{pmatrix}a_{0,0}\\ a_{1,0}\\ a_{0,1}\\ a_{2,0}\\ a_{1,1}\\ a_{0,2}\end{pmatrix}=c_{i}\,h(x_{i},y_{i})\,.
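To make the weighted reconstruction concrete, the following is a minimal sketch in Python/NumPy. The array layout, the function name, and the default $\theta_{\max}=\pi/4$ are illustrative assumptions rather than the exact implementation used in our experiments.

```python
import numpy as np

def fit_local_quadratic(points, normals, theta_max=np.pi / 4):
    """Weighted least-squares fit of h(x,y) = a00 + a10*x + a01*y + a20*x^2 + a11*x*y + a02*y^2.

    points  : (n+1, 3) array of footpoints (x_i, y_i, h_i) in the local coordinate
              system; row 0 is the footpoint closest to the grid point p.
    normals : (n+1, 3) array of (pre-motion) unit normals n'_i at these footpoints.
    Returns the six polynomial coefficients.
    """
    x, y, h = points[:, 0], points[:, 1], points[:, 2]
    # Soft constraint (3): weight each footpoint by how well its normal agrees
    # with the normal of the reference footpoint x_0.
    c = normals @ normals[0]
    c[c <= np.cos(theta_max)] = 0.0
    # Vandermonde-type matrix for the quadratic basis {1, x, y, x^2, xy, y^2}.
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    # Scale each equation by its weight c_i and solve in the least-squares sense.
    coeffs, *_ = np.linalg.lstsq(c[:, None] * A, c * h, rcond=None)
    return coeffs  # (a00, a10, a01, a20, a11, a02)
```

The new footpoint is then obtained by finding the closest point from the grid point to the graph of this fitted polynomial, exactly as in the original GBPM resampling step.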

3 Our proposed framework

This section discusses our proposed framework for solving PDEs on an evolving surface. We first discuss the mathematical background of the problem and list the quantities we need to compute. Then, we introduce our CLS approach for the local approximation.

3.1 Background

Let $\Gamma$ be a regular surface in $\mathbb{R}^{3}$ parametrized by $\mathbf{x}(u,v)$. Mathematically, the surface gradient of a smooth function $\xi$ is given by

\nabla_{\Gamma}\xi=\frac{\xi_{u}G-\xi_{v}F}{EG-F^{2}}\mathbf{x}_{u}+\frac{\xi_{v}E-\xi_{u}F}{EG-F^{2}}\mathbf{x}_{v}\,,

where $E$, $F$ and $G$ are the coefficients of the first fundamental form of the regular surface $\Gamma$, and $\xi_{u}=\frac{\partial}{\partial u}\xi(\mathbf{x}(u,v))$ and $\xi_{v}=\frac{\partial}{\partial v}\xi(\mathbf{x}(u,v))$. For any local vector field $X=A\mathbf{x}_{u}+B\mathbf{x}_{v}$, the surface divergence is defined as

\nabla_{\Gamma}\cdot X=\frac{1}{\sqrt{EG-F^{2}}}\left[\frac{\partial}{\partial u}\left(A\sqrt{EG-F^{2}}\right)+\frac{\partial}{\partial v}\left(B\sqrt{EG-F^{2}}\right)\right]\,.

The surface Laplacian (Laplace-Beltrami) operator is defined as $\Delta_{\Gamma}\xi=\nabla_{\Gamma}\cdot(\nabla_{\Gamma}\xi)$. In the local representation, it can be expressed as

\Delta_{\Gamma}\xi=\frac{1}{\sqrt{EG-F^{2}}}\left[\frac{\partial}{\partial u}\left(\frac{G}{\sqrt{EG-F^{2}}}\xi_{u}\right)-\frac{\partial}{\partial u}\left(\frac{F}{\sqrt{EG-F^{2}}}\xi_{v}\right)+\frac{\partial}{\partial v}\left(\frac{E}{\sqrt{EG-F^{2}}}\xi_{v}\right)-\frac{\partial}{\partial v}\left(\frac{F}{\sqrt{EG-F^{2}}}\xi_{u}\right)\right]\,.

Therefore, to design a numerical method for computing any differential operator on the surface, we have to approximate the first fundamental form of the surface and also provide a way to approximate the derivatives in the local coordinate systems.

Figure 2: The local reconstruction step involves a construction of the tangent plane using a normal vector $\mathbf{n}$ estimated from the data and a projection step of sampled points collected from a local neighborhood.

3.2 The CLS-GSP reconstruction

The first issue is to design a good local representation of the surface $\Gamma$. One natural representation is the local tangential projection shown in Figure 2. We assume that the interface is represented by a set of sampling points. These sampling points can be obtained from various approaches: for example, the GBPM representation summarized above, a typical point cloud dataset, or an interface represented by a triangulation. The representation itself does not play a crucial role in the following discussions. Assume further that, for any individual sampling point $\mathbf{y}^{*}$, we can collect a set of $M$ neighboring sampling points denoted by $\{\mathbf{y}_{j}:j=1,\cdots,M\}$. This is certainly easier when the interface is represented by a triangulation since we have the connectivities of all vertices (the sampling points) on the interface. For the GBPM representation, we can collect these neighboring points through the underlying mesh. A k-nearest-neighbor (KNN) search can also be applied to point cloud datasets.

For the sampling point $\mathbf{y}^{*}$, we let $\mathbf{n}$ be the corresponding (approximated) normal vector of the surface. With $\mathbf{n}$ and its tangent plane, we introduce a new local coordinate system $(x,y,z)$ with the $z$-direction representing the component in the $\mathbf{n}$-direction, so that $\mathbf{y}^{*}$ becomes $(\mathbf{x}_{0},0)$ in the new coordinate system. In this local coordinate system, the surface can be locally approximated by a graph $z=h(x,y)$ with data $h(x_{j},y_{j})=z_{j}$. Once we have this local approximation $h$, we can approximate the derivatives $h_{x},h_{y},h_{xx},h_{xy}$ and $h_{yy}$ for the first fundamental form in the surface differential operators.

Numerically, this leads to one of the following problems. The first one is an interpolation problem: given $(x_{j},y_{j},z_{j})$ for $j=1,\cdots,n$, determine an interpolant $h(x,y)$ that passes through all data points, i.e., $h(x_{j},y_{j})=z_{j}$. One idea is to use polynomial interpolation. For example, when the neighbors are carefully chosen such that $n=6$, we can use a second-order polynomial to interpolate these data points. Local error estimates are easier to obtain when the interpolation uses a polynomial basis. However, we generally have no control over the distribution of $(x_{j},y_{j})$ obtained from the projection of the sampling points onto the local tangent plane. In particular, some of these projected points could be very close to each other or could even collide on the tangent plane. Selecting a subset of points that leads to a well-posed interpolation problem might therefore be challenging, and we do not consider this class of approach further.

The second is the least-squares approximation problem: given $(x_{j},y_{j},z_{j})$ for $j=1,\cdots,n$, determine a function $h(x,y)$ that best explains the data points. Mathematically, we obtain the set of parameters $\alpha_{i}$ that minimizes the mismatch between the data points and $\sum_{\psi_{i}\in\mathcal{B}}\alpha_{i}\psi_{i}(\mathbf{x})$, where $\mathcal{B}$ is the set of basis functions, i.e. we solve

\min_{\alpha_{i}}\sum_{j=0}^{M}\left[h(\mathbf{x}_{j})-\sum_{\psi_{i}\in\mathcal{B}}\alpha_{i}\psi_{i}(\mathbf{x}_{j})\right]^{2}\,.

The choice of the basis is clearly not unique. For example, the original GBPM developed in [18, 17, 16] chose the standard polynomial basis for $\mathcal{B}$. Of course, one could also replace the standard least-squares approximation with a weighted least-squares approach so that points far away from $\mathbf{x}_{0}$ contribute less than those nearby.

In a recent work [37, 38], we developed a general strategy to impose a hard constraint in the least-squares reconstruction by enforcing that the reconstruction passes through $(\mathbf{x}_{0},h(\mathbf{x}_{0}))$ exactly. We call this the CLS approach. Under this constraint, the least-squares method can be modified to fit the function value $h(\mathbf{x})-h(\mathbf{x}_{0})$ with functions of the form $\sum_{\psi_{i}\in\mathcal{B}}\beta_{i}[\psi_{i}(\mathbf{x})-\psi_{i}(\mathbf{x}_{0})]$. This requires only a simple modification of the standard least-squares approximation method, and we give more details below. Let $\mathcal{L}$ be some linear differential operator. We can follow a similar idea to obtain an approximation of $\mathcal{L}(h(\mathbf{x}_{0}))$ based on the CLS, i.e.

\mathcal{L}(h(\mathbf{x}_{0}))=\sum_{j=1}^{M}w_{j}(h(\mathbf{x}_{j})-h(\mathbf{x}_{0}))=\sum_{j=0}^{M}w_{j}h(\mathbf{x}_{j})\,,

with the coefficient $w_{0}=-\sum_{j=1}^{M}w_{j}$.

The choice of the basis is not unique. In this paper, we present two different sets of bases and give the technical implementation details. The first is the simple polynomial basis, which leads to an approximation similar to the original GBPM method for PDEs but with an extra hard constraint at the target location during reconstruction. The second follows our previous development in [37, 38], which uses radial basis functions. Different bases will give slightly different results. We are not trying to conclude which of the two basis sets yields better numerical results in all applications; instead, we show some examples to better explain the proposed framework.

3.2.1 Standard polynomial basis

The first set of basis functions under consideration is the standard polynomial basis. The weights $w_{j}$ in the numerical approximation $\mathcal{L}(h(\mathbf{x}_{0}))=\sum_{j=0}^{M}w_{j}h(\mathbf{x}_{j})$ satisfy

\left(w_{1},w_{2},\cdots,w_{M}\right)=\left(\left\{\mathcal{L}P_{j}(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{x}_{0}}\right\}_{j=1}^{k}\right)C^{+}\,,

where $k$ depends on the order of the polynomials in the basis $\mathcal{B}$, the matrix $C$ collects the polynomial basis functions evaluated at the sampling points, and $C^{+}$ is the pseudo-inverse of $C$, which can be easily computed using the SVD or the QR algorithm. Here we only consider second-order least-squares polynomial fitting at $\mathbf{x}_{0}=\mathbf{0}$; a similar formulation can be easily extended to any higher-order polynomial fitting. Set

G_{1}=\begin{pmatrix}\left\{\partial_{x}P_{j}(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{j=1}^{k}\\ \left\{\partial_{y}P_{j}(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{j=1}^{k}\\ \left\{\partial_{xx}P_{j}(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{j=1}^{k}\\ \left\{\partial_{xy}P_{j}(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{j=1}^{k}\\ \left\{\partial_{yy}P_{j}(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{j=1}^{k}\end{pmatrix}=\begin{pmatrix}1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&2&0&0\\ 0&0&0&1&0\\ 0&0&0&0&2\end{pmatrix},

where $P_{j}(\mathbf{x})\in\{x,y,x^{2},xy,y^{2}\}$. Then we have

\begin{pmatrix}w_{1}^{1}&w_{2}^{1}&\cdots&w_{M}^{1}\\ w_{1}^{2}&w_{2}^{2}&\cdots&w_{M}^{2}\\ w_{1}^{3}&w_{2}^{3}&\cdots&w_{M}^{3}\\ w_{1}^{4}&w_{2}^{4}&\cdots&w_{M}^{4}\\ w_{1}^{5}&w_{2}^{5}&\cdots&w_{M}^{5}\end{pmatrix}=G_{1}\begin{pmatrix}x_{1}&y_{1}&x_{1}^{2}&x_{1}y_{1}&y_{1}^{2}\\ x_{2}&y_{2}&x_{2}^{2}&x_{2}y_{2}&y_{2}^{2}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ x_{M}&y_{M}&x_{M}^{2}&x_{M}y_{M}&y_{M}^{2}\end{pmatrix}^{+}.
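As a minimal sketch (Python/NumPy), and under the assumption that the neighbors have already been projected onto the local tangent plane with the target point at the origin, the weight matrix above can be assembled as follows; the function name and array layout are illustrative only.

```python
import numpy as np

def poly_cls_weights(pts):
    """Derivative weights for the constrained least-squares polynomial fit.

    pts : (M, 2) array of neighbor coordinates (x_j, y_j) on the local tangent
          plane, measured relative to the target point x_0 = 0.
    Returns a (5, M) array whose rows give the weights for h_x, h_y, h_xx,
    h_xy, h_yy at x_0, applied to the differences h(x_j) - h(x_0).
    """
    x, y = pts[:, 0], pts[:, 1]
    # Basis {x, y, x^2, xy, y^2}: the constant is absorbed by the constraint
    # that the reconstruction passes through (x_0, h(x_0)) exactly.
    C = np.column_stack([x, y, x**2, x * y, y**2])   # M-by-5
    # Derivatives of the basis functions evaluated at the origin (matrix G_1).
    G1 = np.diag([1.0, 1.0, 2.0, 1.0, 2.0])
    return G1 @ np.linalg.pinv(C)                    # 5-by-M weight matrix

# Usage: poly_cls_weights(pts) @ (h_vals - h0) approximates
# (h_x, h_y, h_xx, h_xy, h_yy) at the target point.
```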

This numerical scheme coincides with the so-called local tangential lifting (LTL) method developed in [4]. Some nice properties can be proved under the standard polynomial basis. We do not give the full numerical details of this approach but refer interested readers to the reference and the works that follow it.

3.2.2 Radial basis functions

An interesting choice of basis is the radial basis functions (RBF). Since the projections of the sampling points from the local neighborhood could collide on the tangent plane, the standard RBF interpolation can sometimes become numerically unstable. In this paper, we follow [37, 38] and apply the CLS-GSP method to approximate the local surface and any function defined on the evolving manifold.

Figure 3: The local samplings of the function. The black dot denotes the target point $\mathbf{x}_{0}$. The blue squares are the projections of the $n$ nearest scattered data points onto the local tangent plane at $\mathbf{x}_{0}$. The red triangles are the ghost sample points surrounding the target point $\mathbf{x}_{0}$ on the tangent plane.

We propose to convert the interpolation problem in the standard RBF method to a regression problem on a set of prescribed sample locations. A simple two-dimensional demonstration is shown in Figure 3. Because we do not require the interpolant to match the function values at all sampling points, we relax the constraint by minimizing the mismatch at a different set of $d$ sampling locations in the least-squares sense. We call this set of new sampling locations the ghost sample points (shown as red triangles) and denote them by $\{\hat{\mathbf{x}}_{i}\}$. The choice of these ghost sample points is not unique. Here, for any target point $\mathbf{x}_{0}$, we propose to pick $d$ ghost sample points uniformly surrounding it. In this paper, we collect 16 local neighbors for each sampling point in the various local reconstructions (i.e., $n=16$). In Figure 3, the projections of these sampling points onto the local tangent plane are plotted using blue squares and the black circle (representing the target location $\mathbf{x}_{0}$ itself). We then place 8 ghost sample points (i.e. $d=8$) distributed uniformly at a distance $r=\frac{1}{2}\max_{i}\|\mathbf{x}_{0}-\mathbf{x}_{i}\|$ from the target point $\mathbf{x}_{0}$ (plotted by the red triangles).

Following the idea of the CLS, we enforce that the least-squares reconstruction passes through the function value $(\mathbf{x}_{0},h(\mathbf{x}_{0}))$ exactly. Mathematically, this constraint implies

h(\mathbf{0})=\sum_{i=1}^{d}\lambda_{i}\psi(\|\hat{\mathbf{x}}_{i}\|)+\sum_{j=0}^{k}\mu_{j}P_{j}(\mathbf{0})=\sum_{i=1}^{d}\lambda_{i}\psi(\|\hat{\mathbf{x}}_{i}\|)+\mu_{0}\,,

where $P_{j}$ denotes the polynomial basis. If we define

v(\mathbf{x})=h(\mathbf{x})-h(\mathbf{0})=\sum_{i=1}^{d}\lambda_{i}\left(\psi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)-\psi(\|\hat{\mathbf{x}}_{i}\|)\right)+\sum_{j=1}^{k}\mu_{j}P_{j}(\mathbf{x})\,,

we find that $v(\mathbf{0})\equiv 0$ for any choice of $\lambda_{i}$ and $\mu_{j}$. In other words, any function of the form

\sum_{i=1}^{d}\lambda_{i}\left(\psi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)-\psi(\|\hat{\mathbf{x}}_{i}\|)\right)+\sum_{j=1}^{k}\mu_{j}P_{j}(\mathbf{x})+h(\mathbf{0})

passes through $(\mathbf{x}_{0},h(\mathbf{x}_{0}))$ exactly for any $\lambda_{i}$ and $\mu_{j}$. Then, we follow an idea similar to the LS-RBF to determine the coefficients $\lambda_{i}$ and $\mu_{j}$ in the least-squares sense. We redefine a new kernel function by

\varphi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)=\psi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)-\psi(\|\hat{\mathbf{x}}_{i}\|)\,.

Then, the coefficients can be determined by obtaining the least squares solution to the following system of linear equations

\begin{pmatrix}D&\begin{matrix}x_{1}&y_{1}&\cdots\\ \vdots&\vdots&\\ x_{M}&y_{M}&\cdots\end{matrix}\\ \begin{matrix}\hat{x}_{1}&\cdots&\hat{x}_{d}\\ \hat{y}_{1}&\cdots&\hat{y}_{d}\\ \vdots&&\vdots\end{matrix}&0\end{pmatrix}\begin{pmatrix}\lambda_{1}\\ \vdots\\ \lambda_{d}\\ \mu_{1}\\ \vdots\\ \mu_{k}\end{pmatrix}=\begin{pmatrix}v(\mathbf{x}_{1})\\ \vdots\\ v(\mathbf{x}_{M})\\ 0\\ \vdots\\ 0\end{pmatrix}, \qquad (4)

where $D_{p,i}=\varphi(\|\mathbf{x}_{p}-\hat{\mathbf{x}}_{i}\|)$ for $p=1,\cdots,M$ and $i=1,\cdots,d$.

Further, to obtain an approximation to the differential operator, we first let $\hat{D}$ be the coefficient matrix on the left-hand side of equation (4). Then we have

\left(\hat{w}_{1},\cdots,\hat{w}_{M},\hat{w}_{M+1},\cdots,\hat{w}_{M+k}\right)=\left(\left\{\mathcal{L}\varphi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)\Big{|}_{\mathbf{x}=\mathbf{x}_{0}}\right\}_{i=1}^{d},\left\{\mathcal{L}P_{j}(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{x}_{0}}\right\}_{j=1}^{k}\right)\hat{D}^{+}\,, \qquad (5)

where $\hat{D}^{+}$ denotes the pseudo-inverse of the matrix $\hat{D}$. Note that

\mathcal{L}\varphi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)\Big{|}_{\mathbf{x}=\mathbf{x}_{0}}=\mathcal{L}\phi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)\Big{|}_{\mathbf{x}=\mathbf{x}_{0}}

for any differential operator $\mathcal{L}$, since the two kernels differ only by a constant. With the weights obtained, we have

\mathcal{L}h(\mathbf{x})\Big{|}_{\mathbf{x}=\mathbf{x}_{0}}=\sum_{p=1}^{M}\hat{w}_{p}v(\mathbf{x}_{p})=\sum_{p=1}^{M}\hat{w}_{p}h(\mathbf{x}_{p})-\left(\sum_{p=1}^{M}\hat{w}_{p}\right)h(\mathbf{x}_{0})\,.

In particular, if we use the Gaussian kernel given by $\phi(r)=\exp[-(\varepsilon r)^{2}]$ and set

G_{2}=\begin{pmatrix}\left\{\partial_{x}\varphi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{i=1}^{d}\\ \left\{\partial_{y}\varphi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{i=1}^{d}\\ \left\{\partial_{xx}\varphi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{i=1}^{d}\\ \left\{\partial_{xy}\varphi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{i=1}^{d}\\ \left\{\partial_{yy}\varphi(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|)\Big{|}_{\mathbf{x}=\mathbf{0}}\right\}_{i=1}^{d}\end{pmatrix}=\begin{pmatrix}\left\{2\hat{x}_{i}\varepsilon^{2}\exp[{-(\varepsilon\|\hat{\mathbf{x}}_{i}\|)^{2}}]\right\}_{i=1}^{d}\\ \left\{2\hat{y}_{i}\varepsilon^{2}\exp[{-(\varepsilon\|\hat{\mathbf{x}}_{i}\|)^{2}}]\right\}_{i=1}^{d}\\ \left\{(-2\varepsilon^{2}+4\hat{x}_{i}^{2}\varepsilon^{4})\exp[{-(\varepsilon\|\hat{\mathbf{x}}_{i}\|)^{2}}]\right\}_{i=1}^{d}\\ \left\{4\hat{x}_{i}\hat{y}_{i}\varepsilon^{4}\exp[{-(\varepsilon\|\hat{\mathbf{x}}_{i}\|)^{2}}]\right\}_{i=1}^{d}\\ \left\{(-2\varepsilon^{2}+4\hat{y}_{i}^{2}\varepsilon^{4})\exp[{-(\varepsilon\|\hat{\mathbf{x}}_{i}\|)^{2}}]\right\}_{i=1}^{d}\end{pmatrix}\,,

we can approximate the coefficients by

\begin{pmatrix}\hat{w}_{1}^{1}&\hat{w}_{2}^{1}&\cdots&\hat{w}_{M}^{1}&\hat{w}_{M+1}^{1}&\cdots&\hat{w}_{M+k}^{1}\\ \hat{w}_{1}^{2}&\hat{w}_{2}^{2}&\cdots&\hat{w}_{M}^{2}&\hat{w}_{M+1}^{2}&\cdots&\hat{w}_{M+k}^{2}\\ \hat{w}_{1}^{3}&\hat{w}_{2}^{3}&\cdots&\hat{w}_{M}^{3}&\hat{w}_{M+1}^{3}&\cdots&\hat{w}_{M+k}^{3}\\ \hat{w}_{1}^{4}&\hat{w}_{2}^{4}&\cdots&\hat{w}_{M}^{4}&\hat{w}_{M+1}^{4}&\cdots&\hat{w}_{M+k}^{4}\\ \hat{w}_{1}^{5}&\hat{w}_{2}^{5}&\cdots&\hat{w}_{M}^{5}&\hat{w}_{M+1}^{5}&\cdots&\hat{w}_{M+k}^{5}\end{pmatrix}=\begin{pmatrix}G_{2}&G_{1}\end{pmatrix}\hat{D}^{+}\,.

Omitting the last $k$ columns of coefficients, we have

\begin{pmatrix}h_{x}(\mathbf{x}_{0})\\ h_{y}(\mathbf{x}_{0})\\ h_{xx}(\mathbf{x}_{0})\\ h_{xy}(\mathbf{x}_{0})\\ h_{yy}(\mathbf{x}_{0})\end{pmatrix}=\begin{pmatrix}\hat{w}_{1}^{1}&\hat{w}_{2}^{1}&\cdots&\hat{w}_{M}^{1}\\ \hat{w}_{1}^{2}&\hat{w}_{2}^{2}&\cdots&\hat{w}_{M}^{2}\\ \hat{w}_{1}^{3}&\hat{w}_{2}^{3}&\cdots&\hat{w}_{M}^{3}\\ \hat{w}_{1}^{4}&\hat{w}_{2}^{4}&\cdots&\hat{w}_{M}^{4}\\ \hat{w}_{1}^{5}&\hat{w}_{2}^{5}&\cdots&\hat{w}_{M}^{5}\end{pmatrix}\begin{pmatrix}h(\mathbf{x}_{1})-h(\mathbf{x}_{0})\\ h(\mathbf{x}_{2})-h(\mathbf{x}_{0})\\ \vdots\\ h(\mathbf{x}_{M})-h(\mathbf{x}_{0})\end{pmatrix}\,.
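A minimal sketch (Python/NumPy) of how these weights could be assembled for the Gaussian kernel is given below. The shape parameter value, the placement of the ghost sample points, and the quadratic polynomial tail are assumptions chosen for illustration; they are not necessarily the exact settings used in our experiments.

```python
import numpy as np

def gsp_weights(pts, eps=2.0, d=8):
    """CLS-GSP weights for (h_x, h_y, h_xx, h_xy, h_yy) at the target x_0 = 0.

    pts : (M, 2) projected neighbor coordinates on the local tangent plane.
    eps : shape parameter of the Gaussian kernel (assumed value).
    Returns a (5, M) weight matrix applied to the differences h(x_j) - h(x_0).
    """
    M = len(pts)
    # Ghost sample points: d points distributed uniformly on a circle of radius r.
    r = 0.5 * np.max(np.linalg.norm(pts, axis=1))
    ang = 2 * np.pi * np.arange(d) / d
    ghost = r * np.column_stack([np.cos(ang), np.sin(ang)])

    def poly(p):
        # Quadratic polynomial tail without the constant term (k = 5).
        x, y = p[:, 0], p[:, 1]
        return np.column_stack([x, y, x**2, x * y, y**2])

    def gauss(rr):
        return np.exp(-(eps * rr) ** 2)

    # Shifted kernel phi(||x - xhat_i||) = psi(||x - xhat_i||) - psi(||xhat_i||).
    dist = np.linalg.norm(pts[:, None, :] - ghost[None, :, :], axis=2)
    D = gauss(dist) - gauss(np.linalg.norm(ghost, axis=1))[None, :]
    # Full CLS-GSP system matrix D-hat of equation (4).
    Dhat = np.block([[D, poly(pts)],
                     [poly(ghost).T, np.zeros((5, 5))]])

    # Derivatives of the shifted Gaussian kernel at x = 0 (matrix G_2).
    gx, gy = ghost[:, 0], ghost[:, 1]
    e0 = gauss(np.linalg.norm(ghost, axis=1))
    G2 = np.vstack([2 * gx * eps**2 * e0,
                    2 * gy * eps**2 * e0,
                    (-2 * eps**2 + 4 * gx**2 * eps**4) * e0,
                    4 * gx * gy * eps**4 * e0,
                    (-2 * eps**2 + 4 * gy**2 * eps**4) * e0])
    G1 = np.diag([1.0, 1.0, 2.0, 1.0, 2.0])   # derivatives of the polynomial tail
    W = np.hstack([G2, G1]) @ np.linalg.pinv(Dhat)
    return W[:, :M]                            # drop the last k columns
```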

Finally, the partial derivatives $\partial/\partial x$, $\partial/\partial y$, $\partial^{2}/\partial x^{2}$, $\partial^{2}/\partial y^{2}$ and $\partial^{2}/\partial x\partial y$ can be easily constructed using the same set of coefficients. Further, with both $h_{x}$ and $h_{y}$ approximated, we can determine the coefficients in the first fundamental form using

E=1+h_{x}^{2}\,,\quad F=h_{x}h_{y}\quad\mbox{and}\quad G=1+h_{y}^{2}\,.

Note that the coefficients $\hat{w}_{j}$ depend only on the distribution of the projected points on the local tangent plane but not on the function values of $h$. We can therefore apply the same set of coefficients $\hat{w}_{j}$ to compute $\xi_{x},\xi_{y},\xi_{xx},\xi_{xy}$ and $\xi_{yy}$ at $\mathbf{x}_{0}$. In particular, we have

\nabla_{\Gamma}\xi=\frac{\left(\xi_{x}(1+h_{y}^{2})-\xi_{y}h_{x}h_{y},\ \xi_{y}(1+h_{x}^{2})-\xi_{x}h_{x}h_{y},\ \xi_{x}h_{x}+\xi_{y}h_{y}\right)}{1+h_{x}^{2}+h_{y}^{2}}\,.
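As a small illustration (Python), the same weight matrix can be reused for a function $\xi$ sampled at the neighbors; the array layout and function name are assumptions, and the expression simply transcribes the formula above.

```python
import numpy as np

def surface_gradient(W, h_vals, xi_vals):
    """Surface gradient of xi at the target point in the local coordinate frame.

    W       : (5, M) weight matrix from the CLS or CLS-GSP reconstruction.
    h_vals  : h(x_j) - h(x_0) for the M neighbors (local graph heights).
    xi_vals : xi(x_j) - xi(x_0) for the same neighbors.
    """
    hx, hy = (W @ h_vals)[:2]        # first derivatives of the local graph h
    xix, xiy = (W @ xi_vals)[:2]     # first derivatives of the surface data xi
    g = 1.0 + hx**2 + hy**2          # EG - F^2 for the graph representation
    return np.array([xix * (1 + hy**2) - xiy * hx * hy,
                     xiy * (1 + hx**2) - xix * hx * hy,
                     xix * hx + xiy * hy]) / g
```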
Remark 1.

Indeed, to obtain a more accurate approximation of the surface differential operator, we need to collect more neighboring points for each sample point. If the sampling density of the interface cannot grow with the desired accuracy, we have to go further away to collect enough neighbors for the least-squares approximation. However, this lowers the accuracy of the local approximation of the interface. Therefore, we assume that the number of sampling points is large enough and that the sampling density is quasi-uniform.

Remark 2.

In the discussions above, we only require the fitting function to pass through the target point exactly. Except for the target point, the remaining samples are treated equally with the same weighting in the least-squares fitting. However, as discussed in Section 2.2, we can introduce an extra weighting to emphasize the contribution from each sample point based on the angle between its normal direction and that of the target point.

3.3 The overall algorithm for solving PDEs on evolving surfaces

To end this section, we summarize our overall approach for solving PDEs on evolving surfaces. The original GBPM handles both the interface sampling and the interface motion, while the proposed CLS methods provide the approximation of the surface, the local reconstruction of the function value $u$ on the interface, and the coefficients in the first fundamental form for the surface differential operators.

Algorithm:

  1. Initialization.

  2. Generate the surface operators.

     (a) Determine neighboring grids (inside the computational band) and collect their corresponding footpoints. Denote the number of points by $M$.

     (b) Construct the local coordinate system at each sampling point.

     (c) Apply the CLS method to construct the weights $w_{j}$ for $j=1,\cdots,M$.

     (d) Generate the surface differential operators.

  3. Update the function value by solving the time-dependent surface PDEs.

  4. Update the location of the footpoints according to the motion law.

  5. Follow the GBPM local reconstruction, including resampling new footpoints and updating the corresponding function values.

  6. Return to Step 2 and repeat until the stopping time.

4 Numerical experiments

In this section, we present several numerical tests to demonstrate the effectiveness of the proposed method. We will first solve some PDEs on static surfaces represented by the GBPM to show the performance of the CLS. Then, we add different motion laws to the interface for various applications. The examples in this section are arranged in the following way. We first look at the effect of the new weighting in the local reconstruction in Section 4.1 without considering any PDE on the evolving surface. Then, we concentrate on solving PDEs on a static surface sampled by the GBPM representation in Section 4.2. This section tests the ability and effectiveness of the newly proposed CLS-GSP method. We consider in Section 4.3 a simple example of solving an advection-diffusion equation on an evolving surface where the exact solution of the problem can be found. We then present simulations of various equations in Sections 4.4 to 4.6 to demonstrate the effectiveness of the proposed approach.

Figure 4: (Example 4.1) Motion under a single vortex with the reversible motion using the resolution $100^{3}$ at $t=0, 0.375, 0.75$, and $1.5$.
Figure 5: (Example 4.1) (a) The change in the number of footpoints at different times and (b) the error in the radius of the sphere $r-r_{0}$ at the final time.
Figure 6: (Example 4.1) (a) Number of footpoints during the evolution. (b) The surface after the final stage of the evolution without weighting in local reconstruction. (c) The surface after the final stage of the evolution with weighting in local reconstruction.

4.1 The modified GBPM

Our first example is the motion of a sphere under a single vortex flow [10, 12, 25, 18]. This is a challenging test case that can test the stability of our GBPM with the modified local reconstruction. The velocity field is given by

\begin{aligned}v_{1}(x,y,z)&=2\sin(\pi x)^{2}\sin(2\pi y)\sin(2\pi z)\cos\left(\frac{\pi t}{T}\right)\\ v_{2}(x,y,z)&=-\sin(2\pi x)\sin(\pi y)^{2}\sin(2\pi z)\cos\left(\frac{\pi t}{T}\right)\\ v_{3}(x,y,z)&=-\sin(2\pi x)\sin(2\pi y)\sin(\pi z)^{2}\cos\left(\frac{\pi t}{T}\right)\,,\end{aligned}

for some given final time $T$. The initial shape is a sphere centered at $(0.35,0.35,0.35)$ with radius $r_{0}=0.15$. In Figure 4, we compute the solution at $t=0, 0.375, 0.75$ and $1.5$ with $T=1.5$ using $\Delta x=0.01$. The mean distance from the final footpoints to the point $(0.35,0.35,0.35)$ at the final time is $0.1506$ with standard deviation $5.114\times 10^{-4}$. The largest absolute error in the location is $2.862\times 10^{-3}$. There are a total of $8530$ footpoints initially at $t=0$. The number gradually increases to a maximum of $13801$ at $t=0.75$ and finally drops to $8615$ at the final time. Figure 5 shows the detailed evolution of the number of footpoints and the error in the sphere's radius $r-r_{0}$ at the final time.
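For reference, a direct transcription of this velocity field in Python/NumPy is given below; the footpoints can then be advanced with any standard ODE integrator (the function name is ours).

```python
import numpy as np

def single_vortex_velocity(x, y, z, t, T):
    """Time-reversible single-vortex velocity field of Example 4.1."""
    c = np.cos(np.pi * t / T)
    v1 = 2 * np.sin(np.pi * x) ** 2 * np.sin(2 * np.pi * y) * np.sin(2 * np.pi * z) * c
    v2 = -np.sin(2 * np.pi * x) * np.sin(np.pi * y) ** 2 * np.sin(2 * np.pi * z) * c
    v3 = -np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y) * np.sin(np.pi * z) ** 2 * c
    return np.stack([v1, v2, v3], axis=-1)
```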

To see the effect of the new weighting (3) introduced in the local reconstruction, we compare two numerical solutions in Figure 6. In this example, we repeat a similar setup but choose a slightly smaller final time $T=1$. Figure 6 shows the changes in the number of footpoints and the solution at the final time $T=1$. For the original scheme without any weighting in the local reconstruction, we found that the mean distance from the final footpoints to the center at the final time is $0.1557$ with standard deviation $5.832\times 10^{-3}$. The modified local reconstruction gives a more accurate solution with a corresponding mean distance of $0.1502$ and standard deviation $2.247\times 10^{-4}$.

Figure 7: (Example 4.2) The solution of the Cahn-Hilliard equation on the unit sphere at (a) $t=0.1$, (b) $t=0.2$, (c) $t=0.5$, (d) $t=1$, (e) $t=3$ and (f) $t=9$.
Figure 8: (Example 4.2) The Turing pattern in the concentrations of (a-c) $u$ and (d-f) $w$ on the unit sphere at $t=10$ with (from left to right) $\gamma=30, 200$ and $500$.

4.2 The CLS method for solving PDEs on static surfaces represented by the GBPM

In this section, we give two numerical examples to show that the proposed CLS method works well in solving PDEs on surfaces sampled by the GBPM representations. The first example is the Cahn-Hilliard equation on the unit sphere. The surface Cahn-Hilliard equation has the form

u_{t}=\frac{\nu}{Pe}\Delta_{\Gamma}\left[g^{\prime}(u)\right]-\frac{\nu\,Cn^{2}}{Pe}\Delta_{\Gamma}^{2}u\,, \qquad (6)

where $Cn$ is the Cahn-Hilliard number, $Pe$ is the Peclet number and $\nu$ is a diffusion parameter. In this example, we solve the equation on a unit sphere with the parameters $Pe=\nu=1$, $Cn=0.06$, and the double-well potential function $g(u)=u^{2}(1-u)^{2}$. Our numerical solution at different times is shown in Figure 7. The initial condition is a uniform random perturbation of magnitude 0.01 around an average concentration of $u$ equal to 0.5.
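As a minimal sketch of the time marching, and only for illustration, an explicit Euler step of (6) could be written as below, assuming the Laplace-Beltrami operator has been assembled into a (sparse) matrix L from the CLS-GSP weights at the footpoints; the explicit treatment of the fourth-order term requires a very small time step.

```python
import numpy as np

def gprime(u):
    """Derivative of the double-well potential g(u) = u^2 (1 - u)^2."""
    return 2.0 * u * (1.0 - u) * (1.0 - 2.0 * u)

def cahn_hilliard_step(u, L, dt, nu=1.0, Pe=1.0, Cn=0.06):
    """One explicit Euler step of equation (6) on the current footpoints."""
    return u + dt * (nu / Pe * (L @ gprime(u)) - nu * Cn**2 / Pe * (L @ (L @ u)))
```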

In the second example, we simulate the Turing patterns on a static surface. Consider the following coupled system of equation (2) given by

\begin{aligned}\frac{Du}{Dt}+u\nabla_{\Gamma}\cdot\mathbf{v}-\mathcal{D}_{1}\Delta_{\Gamma}u&=f_{1}(u,w)\\ \frac{Dw}{Dt}+w\nabla_{\Gamma}\cdot\mathbf{v}-\mathcal{D}_{2}\Delta_{\Gamma}w&=f_{2}(u,w)\,,\end{aligned} \qquad (7)

with

f_{1}(u,w)=\gamma(a-u+u^{2}w)\,\mbox{ and }\,f_{2}(u,w)=\gamma(b-u^{2}w)\,. \qquad (8)

The parameters $\gamma$, $a$, and $b$ are all positive constants. In this example, we follow [1] and use the following parameters for the kinetics functions in (8): $a=0.1$, $b=0.9$, $\mathcal{D}_{1}=1$ and $\mathcal{D}_{2}=10$. All three cases use the same initial condition given by $(u,w)=(1,0.9)$ with a uniform random perturbation of magnitude 0.01, while the parameter $\gamma$ is varied over different scales. The grid size in the GBPM representation is $\Delta x=0.1$, and the stopping time is $t=10$.

Figure 8 shows our numerical solutions corresponding to various $\gamma$ at the fixed final time $t=10$. Even though the exact solution to these examples is not available, these results are similar to those obtained in various articles, including [3, 24, 1]. Numerically, the CLS method is flexible and matches the GBPM well in solving PDEs on a static surface.

Figure 9: (Example 4.3) The numerical solution of the advection-diffusion equation with the source term on an oscillating ellipsoid at (a) $t=\pi/8$, (b) $t=\pi/4$, (c) $t=3\pi/8$ and (d) $t=\pi/2$.
Figure 10: (Example 4.3) The $L^{\infty}$-errors in the numerical solutions of the advection-diffusion model on an oscillating ellipsoid over time computed on various meshes. (a) CLS-Polynomial, (b) CLS-RBF.
| $\Delta x$ | $t=0.01$ | order | $t=0.02$ | order | $t=0.03$ | order | $t=0.04$ | order |
|---|---|---|---|---|---|---|---|---|
| $0.2$ | $5.186\times 10^{-4}$ | - | $9.364\times 10^{-4}$ | - | $1.283\times 10^{-3}$ | - | $1.575\times 10^{-3}$ | - |
| $0.1$ | $1.299\times 10^{-4}$ | $2.000$ | $2.222\times 10^{-4}$ | $2.108$ | $3.012\times 10^{-4}$ | $2.130$ | $3.700\times 10^{-4}$ | $2.130$ |
| $0.05$ | $2.956\times 10^{-5}$ | $2.197$ | $5.201\times 10^{-5}$ | $2.136$ | $7.169\times 10^{-5}$ | $2.101$ | $8.927\times 10^{-5}$ | $2.072$ |
| $0.025$ | $6.909\times 10^{-6}$ | $2.139$ | $1.253\times 10^{-5}$ | $2.076$ | $1.769\times 10^{-5}$ | $2.026$ | $2.221\times 10^{-5}$ | $2.010$ |

Table 1: (Example 4.3, CLS-Polynomial) $L^{\infty}$-errors and some estimated orders of convergence at different times.
| $\Delta x$ | $t=0.01$ | order | $t=0.02$ | order | $t=0.03$ | order | $t=0.04$ | order |
|---|---|---|---|---|---|---|---|---|
| $0.2$ | $4.957\times 10^{-4}$ | - | $8.382\times 10^{-4}$ | - | $1.122\times 10^{-3}$ | - | $1.372\times 10^{-3}$ | - |
| $0.1$ | $1.177\times 10^{-4}$ | $2.106$ | $2.109\times 10^{-4}$ | $1.987$ | $2.939\times 10^{-4}$ | $1.909$ | $3.671\times 10^{-4}$ | $1.868$ |
| $0.05$ | $3.017\times 10^{-5}$ | $1.950$ | $5.563\times 10^{-5}$ | $1.896$ | $7.824\times 10^{-5}$ | $1.878$ | $9.980\times 10^{-5}$ | $1.873$ |
| $0.025$ | $1.007\times 10^{-5}$ | $1.497$ | $1.706\times 10^{-5}$ | $1.630$ | $2.313\times 10^{-5}$ | $1.691$ | $2.857\times 10^{-5}$ | $1.715$ |

Table 2: (Example 4.3, CLS-RBF) $L^{\infty}$-errors and some estimated orders of convergence at different times.

4.3 The advection-diffusion equation with a source term on an oscillating ellipsoid

This three-dimensional example is taken from [7, 8, 28] to test the accuracy of a PDE solver on moving surfaces. The time-dependent oscillating ellipsoid is given by

\Gamma(t)=\left\{\mathbf{x}=(x,y,z)\in\mathbb{R}^{3}\left|\,\frac{x^{2}}{a(t)}+y^{2}+z^{2}=1\right.\right\}\,,

where $a(t)=1+\sin 2t$. The surface $\Gamma$ can be explicitly parametrized as

\Gamma(t,\theta,\phi):=\left(\sqrt{a(t)}\cos\theta\cos\phi,\sin\theta\cos\phi,\sin\phi\right)\,.

The associated velocity field is given by $\mathbf{v}(\mathbf{x},t)=\frac{a^{\prime}(t)}{a(t)}(x,0,0)^{T}$. Therefore, the equation involves both the advection along the surface and the deformation by the motion in the normal direction. We choose the exact continuous solution $u(\mathbf{x},t)=e^{-6t}xy$ so that the corresponding source term on the right-hand side of the equation is given by

f(\mathbf{x},t)=\left\{-6+\frac{a^{\prime}(t)}{a(t)}\left(1-\frac{x_{1}^{2}}{2N}\right)+\frac{1+5a(t)+2a^{2}(t)}{N}-\frac{1-a(t)}{N^{2}}\left[x_{1}^{2}+a^{3}(t)\left(x_{2}^{2}+x_{3}^{2}\right)\right]\right\}u(\mathbf{x},t)

with $N=x^{2}+a^{2}(t)(y^{2}+z^{2})$. In this simulation, we choose the mesh size $\Delta x=0.1$ and a fixed time step size $\Delta t=0.1\,(\Delta x)^{2}$. We plot our solution at times $t=\pi/8, \pi/4, 3\pi/8$ and $\pi/2$ in Figure 9. Figure 10 gives the $L^{\infty}$-error in the numerical solution computed using the CLS with different mesh sizes at different times. Tables 1 and 2 show the errors in the computed solutions at times $t=0.01, 0.02, 0.03$ and $0.04$ computed on different meshes. We see that our numerical approach is roughly second-order accurate.
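A small sketch (Python/NumPy) of how the $L^{\infty}$-error against the manufactured solution can be measured on the footpoints is given below; the array layout is an assumption for illustration.

```python
import numpy as np

def exact_solution(x, y, t):
    """Manufactured exact solution u(x, t) = exp(-6 t) x y of Example 4.3."""
    return np.exp(-6.0 * t) * x * y

def linf_error(footpoints, u_num, t):
    """Maximum error over the footpoints; footpoints is an (N, 3) array and
    u_num is the computed solution at those points."""
    x, y = footpoints[:, 0], footpoints[:, 1]
    return np.max(np.abs(u_num - exact_solution(x, y, t)))
```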

Concerning the computational efficiency, the CPU time for one marching step on the mesh with $\Delta x=0.1$ is $3.1$ seconds, including $1.5$ seconds spent on the CLS reconstruction and $1.6$ seconds on the GBPM computations. This computational time is recorded on a 4-core laptop in MATLAB with a parallel module implemented.

Figure 11: (Example 4.4) The numerical solution $u$ at (a) $t=0.02$, (b) $t=0.04$, (c) $t=0.06$ and (d) $t=0.08$.

4.4 A strongly coupled flow on an evolving torus

In this case, the interface velocity is strongly coupled with the solution of the PDE. This example is taken from [28], where we evolve an initial torus

\left(1-\sqrt{x^{2}+y^{2}}\right)^{2}+z^{2}=0.3^{2}

according to the given velocity $\mathbf{v}=(0.1\kappa+5u)\mathbf{n}$, where $\kappa$ is the mean curvature, $\mathbf{n}$ is the unit normal vector, and $u$ is the solution to the PDE

u_{t}+V\frac{\partial u}{\partial\mathbf{n}}-\kappa Vu+\Delta_{\Gamma}u=0\text{\; on\; }\Gamma\,.

The initial value of $u$ is given by $u(\mathbf{x},0)=1+20xyz$. Figure 11 shows our computed solution of $u$ on the evolving surface at different times from $0.02$ to $0.08$ in increments of $0.02$. Even though the exact solution to this problem is unavailable, these numerical solutions are similar to those presented in [28].

Figure 12: (Example 4.5) The tumor surface and the concentration of the growth factor $u$ at (a) $t=0$, (b) $t=5$, (c) $t=6$, (d) $t=8$ and (e) $t=10$. (f) The corresponding footpoints at the final time $t=10$. The parameters used in this example are $\gamma=30$, $\mathcal{D}_{1}=1$, $\mathcal{D}_{2}=10$, $a=0.1$, $b=0.9$, $\bar{t}=5$, $\delta=0.1$, $\epsilon=0.01$ and the time marching step is 0.0002. The number of footpoints grows from $3810$ to $9730$ at the final time.
Figure 13: (Example 4.5) The tumor surface and the concentration of the growth factor $u$ at (a) $t=0$, (b) $t=5$, (c) $t=6$, (d) $t=8$ and (e) $t=10$. (f) The corresponding footpoints at the final time $t=10$. The parameters used in this example are $\gamma=100$, $\mathcal{D}_{1}=1$, $\mathcal{D}_{2}=10$, $a=0.1$, $b=0.9$, $\bar{t}=5$, $\delta=0.1$, $\epsilon=0.01$ and the time marching step is 0.0002. The number of footpoints grows from $3810$ to $10745$ at the final time.

4.5 Solid tumors growth

In this example, we compute the reaction-diffusion system (7) with an activator-depleted kinetics in the form of (8). The surface growth law is assumed to be in the form

\mathbf{v}=(-\epsilon\kappa+\delta u)\mathbf{n}\,,\quad t\geq\bar{t}\,,

where δ is the growth rate and the parameter ϵ models the surface tension of the tumor surface. In this system, the solution u denotes the growth-promoting factor and w is the corresponding growth-inhibiting factor [5]. Following the discussions in [3, 1, 9], we first start the numerical simulations from a stationary sphere in a pre-patterning stage (controlled by the parameter t̄). After the pre-patterning stage, the surface grows under the given motion law.
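As an illustration only, the following Python sketch captures the growth law with its pre-patterning switch; the helper names are our own, the default parameter values are those listed in the captions of Figures 12 and 13, and the update of the footpoint positions is a simple explicit step rather than the paper's actual marching scheme.

    import numpy as np

    def normal_speed(t, kappa, u, eps=0.01, delta=0.1, t_bar=5.0):
        # Pre-patterning stage: the sphere is kept stationary until t reaches t_bar.
        if t < t_bar:
            return np.zeros_like(u)
        # Growth stage: normal speed -eps*kappa + delta*u driven by the activator u.
        return -eps * kappa + delta * u

    def move_footpoints(points, normals, kappa, u, t, dt, **params):
        V = normal_speed(t, kappa, u, **params)
        return points + dt * V[:, None] * normals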

Figures 12 and 13 show the evolution of the tumor surface together with the growth factor concentration u for two different values of γ. In the pre-patterning stage t ∈ [0, t̄], the behavior of the solution u is the same as in the Turing pattern simulations of the previous section. After the pre-patterning stage, the surface evolves in the normal direction. Regions with a larger growth-factor concentration u evolve much faster than others, which leads to some interesting shape configurations.

Figure 14: (Example 4.6) The evolution of the interface and the solution u to the Cahn-Hilliard equation with γ = 100 at the times t = 0.2, 0.4, 0.6 and 1.0. The number of footpoints grows from (a) 11312 at t = 0 to (f) 22062 at the final time t = 1.

4.6 The Cahn-Hilliard equation on an ellipsoid

This example considers the homogeneous Cahn-Hilliard equation (6) on an evolving ellipsoid. An initial ellipsoid centered at the origin with semi-axes r_x = 1 and r_y = r_z = 0.8 is evolved in the normal direction according to the velocity 𝐯 = (0.01κ + 0.4u)𝐧, where κ is the mean curvature and 𝐧 is the unit normal direction. We choose the Peclet number Pr = 1, the diffusion parameter ν = 1, Cn = 0.03 and the double-well potential function g(u) = u²(1−u)². The initial value of u is taken to be Gaussian noise of magnitude 0.01 around the constant value 0.5. Figure 14 shows the numerical results computed on the mesh with Δx = 0.05 and the time marching step Δt = 0.0001.
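For concreteness, the next Python sketch (helper names are our own) shows how the initial data and the double-well terms of this example could be set up: u is seeded as 0.01 Gaussian noise around 0.5, and g(u) = u²(1−u)² enters the equation through its derivative.

    import numpy as np

    def initial_u(num_footpoints, seed=0):
        # 0.01 Gaussian noise around the constant value 0.5
        rng = np.random.default_rng(seed)
        return 0.5 + 0.01 * rng.standard_normal(num_footpoints)

    def double_well(u):
        # g(u) = u^2 (1 - u)^2
        return u**2 * (1.0 - u)**2

    def double_well_derivative(u):
        # g'(u) = 2u(1-u)(1-2u), the nonlinearity entering the chemical potential
        return 2.0 * u * (1.0 - u) * (1.0 - 2.0 * u)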

5 Conclusion

This paper proposes a new CLS method for solving PDEs on evolving surfaces. The method modifies the GBPM developed in [18, 17, 16] by enforcing an extra constraint in the local least-squares approximation. Unlike a recent approach in [33], where we introduce extra weighting at the target point through a virtual grid, the additional condition in the current approach requires that the least-squares reconstruction pass through the data point at the target location. This significantly improves the computational stability and accuracy. We have shown that this CLS approach fits extremely well with the GBPM for interface representation. The coupling of these two methods provides a computationally efficient and robust approach for solving PDEs on evolving surfaces.
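To convey the idea of this constraint in the simplest possible setting, the Python sketch below (a simplified illustration only, not the full CLS-GSP formulation of [37, 38]) fits a local quadratic in tangent-plane coordinates to neighboring data while forcing the fit to pass through the data value at the target footpoint: removing the constant term and fitting the differences u − u_target guarantees p(0, 0) = u_target.

    import numpy as np

    def constrained_quadratic_fit(xy, u, u_target):
        # xy: (m, 2) local coordinates of neighboring footpoints, target point at the origin
        # u:  (m,) data at the neighbors; u_target: data value at the target footpoint
        x, y = xy[:, 0], xy[:, 1]
        A = np.column_stack([x, y, x**2, x * y, y**2])       # no constant column
        coef, *_ = np.linalg.lstsq(A, u - u_target, rcond=None)
        # The reconstruction p(x, y) = u_target + coef . [x, y, x^2, xy, y^2]
        # satisfies p(0, 0) = u_target, i.e. it passes through the target data point.
        return coef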

Acknowledgment

Leung’s work was partly supported by the Hong Kong RGC grants 16303114 and 16309316.

References

  • [1] R. Barreira, C.M. Elliott, and A. Madzvamuse. The surface finite element method for pattern formation on evolving biological surfaces. J. Math. Biol., 63:1095–1119, 2011.
  • [2] M. Bertalmio, L.-T. Cheng, S. Osher, and G. Sapiro. Variational problems and partial differential equations on implicit surfaces. J. Comput. Phys., 174:759–780, 2001.
  • [3] M.A.J. Chaplain, M. Ganesh, and I.G. Graham. Spatio-temporal pattern formation on spherical surfaces: numerical simulation and application to solid tumour growth. J. Math. Biol., 42:387–423, 2001.
  • [4] S. G. Chen, M. H. Chi, and J. Y. Wu. High-order algorithms for Laplace-Beltrami operators and geometric invariants over curved surfaces. J. Sci. Comput., 65:839–865, 2015.
  • [5] E.J. Crampin, E.A. Gaffney, and P.K. Maini. Reaction and diffusion on growing domains: scenarios for robust pattern formation. Bull. Math. Biol., 61:1093–1120, 1999.
  • [6] G. Dziuk and C. M. Elliott. Finite element methods for surface PDEs. Acta. Numer., 22:289–396, 2013.
  • [7] G. Dziuk and C.M. Elliott. An Eulerian level set method for partial differential equations on evolving surfaces. Comput. Vis. Sci., 13:17–28, 2008.
  • [8] C.M. Elliott, B. Stinner, V. Styles, and R. Welford. Numerical computation of advection and diffusion on evolving diffuse interfaces. IMA J. Numer. Anal., 31:786–812, 2010.
  • [9] C.M. Elliott and V. Styles. An ALE ESFEM for solving PDEs on evolving surfaces. Milan J. Math., pages 1–33, 2012.
  • [10] D. Enright, R. Fedkiw, J. Ferziger, and I. Mitchell. A hybrid particle level set method for improved interface capturing. J. Comput. Phys., 183:83–116, 2002.
  • [11] J.B. Greer. An improvement of a recent Eulerian method for solving PDEs on general geometries. J. Sci. Comp., 29(3):321–352, 2005.
  • [12] S. Hieber and P. Koumoutsakos. A Lagrangian particle level set method. J. Comput. Phys., 210:342–367, 2005.
  • [13] S. Hon, S. Leung, and H.-K. Zhao. A cell based particle method for modeling dynamic interfaces. J. Comp. Phys., 272:279–306, 2014.
  • [14] R. Lai, J. Liang, and H. Zhao. A local mesh method for solving PDEs on point clouds. Inverse Problems and Imaging, 7(3):737–755, 2013.
  • [15] P. Lancaster and K. Salkauskas. Surfaces generated by moving least squares methods. Math. Comp., 37:141–158, 1981.
  • [16] S. Leung, J. Lowengrub, and H.K. Zhao. A grid based particle method for high order geometrical motions and local inextensible flows. J. Comput. Phys., 230:2540–2561, 2011.
  • [17] S. Leung and H.K. Zhao. A grid-based particle method for evolution of open curves and surfaces. J. Comput. Phys., 228:7706–7728, 2009.
  • [18] S. Leung and H.K. Zhao. A grid based particle method for moving interface problems. J. Comput. Phys., 228:2993–3024, 2009.
  • [19] S. Leung and H.K. Zhao. Gaussian beam summation for diffraction in inhomogeneous media based on the grid based particle method. Communications in Computational Physics, 8:758–796, 2010.
  • [20] J. Liang and H. Zhao. Solving partial differential equations on point clouds. SIAM J. on Scientific Computing, 35(3):A1461–A1486, 2013.
  • [21] C.B. Macdonald, J. Brandman, and S. Ruuth. Solving eigenvalue problems on curved surfaces using the closest point method. J. Comp. Phys., 230:7944–7956, 2011.
  • [22] C.B. Macdonald and S.J. Ruuth. Level set equations on surfaces via the closest point method. J. Sci. Comput., 35:219–240, 2008.
  • [23] C.B. Macdonald and S.J. Ruuth. The implicit closest point method for the numerical solution of partial differential equations on surfaces. SIAM J. Sci. Comput., 31:4330–4350, 2009.
  • [24] A. Madzvamuse and P. Maini. Velocity-induced numerical solutions of reaction-diffusion systems on continuously growing domains. J. Comput. Phys., 225:100–119, 2007.
  • [25] C. Min and F. Gibou. A second order accurate level set method on non-graded adaptive Cartesian grids. J. Comput. Phys., 225:300–321, 2007.
  • [26] S. J. Osher and R. P. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces. Springer-Verlag, New York, 2003.
  • [27] S. J. Osher and J. A. Sethian. Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys., 79:12–49, 1988.
  • [28] A. Petras and S.J. Ruuth. PDEs on moving surfaces via the closest point method and a modified grid based particle method. J. Comput. Phys., 312:139–156, 2016.
  • [29] C. Piret. The orthogonal gradients method: A radial basis functions method for solving partial differential equations on arbitrary surfaces. J. Comput. Phys., 231:4662–4675, 2012.
  • [30] S.J. Ruuth and B. Merriman. A simple embedding method for solving partial differential equations on surfaces. J. Comput. Phys., 227:1943–1961, 2008.
  • [31] V. Shankar, G. B. Wright, R. M. Kirby, and A. L. Fogelson. A radial basis function (RBF)-finite difference (FD) method for diffusion and reaction-diffusion equations on surfaces. J. Sci. Comput., 63(3):745–768, 2015.
  • [32] R. Tsai, L.T. Cheng, S. Osher, and H. K. Zhao. Fast sweeping method for a class of Hamilton-Jacobi equations. SIAM J. Numer. Anal., 41:673–694, 2003.
  • [33] M. Wang, S. Leung, and H. Zhao. Modified Virtual Grid Difference (MVGD) for Discretizing the Laplace-Beltrami Operator on Point Clouds. Submitted.
  • [34] T. Wong and S. Leung. A fast sweeping method for eikonal equations on implicit surfaces. Accepted by J. Sci. Comput., 2015.
  • [35] J. Xu, Z. Li, J. Lowengrub, and H. Zhao. A level set method for interfacial flows with surfactant. J. Comput. Phys., 212:590–616, 2006.
  • [36] J. Xu and H.K. Zhao. An Eulerian formulation for solving partial differential equations along a moving interface. J. Sci. Comput., 19:573–594, 2003.
  • [37] N. Ying. Numerical Methods for Partial Differential Equations from Interface Problems. HKUST PhD Thesis, 2017.
  • [38] N. Ying, K.L. Chu, and S. Leung. A Constrained Least-Squares Ghost Sample Points (CLS-GSP) Method for Differential Operators on Point Clouds. arXiv:2407.06467, 2024.
  • [39] H. K. Zhao. Fast sweeping method for eikonal equations. Math. Comp., 74:603–627, 2005.