
A Constrained Least-Squares Ghost Sample Points (CLS-GSP) Method for Differential Operators on Point Clouds

Ningchen Ying Department of Mathematics, the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. Email: [email protected]    Kwunlun Chu Department of Mathematics, Statistics and Insurance, Hang Seng University of Hong Kong, Hong Kong. Email: [email protected]    Shingyu Leung Department of Mathematics, the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. Email: [email protected]
Abstract

We introduce a novel meshless method called the Constrained Least-Squares Ghost Sample Points (CLS-GSP) method for solving partial differential equations on irregular domains or manifolds represented by randomly generated sample points. Our approach involves two key innovations. Firstly, we locally reconstruct the underlying function using a linear combination of radial basis functions centered at a set of carefully chosen ghost sample points that are independent of the point cloud samples. Secondly, unlike conventional least-squares methods, which minimize the sum of squared differences from all sample points, we regularize the local reconstruction by imposing a hard constraint to ensure that the least-squares approximation precisely passes through the center. This simple yet effective constraint significantly enhances the diagonal dominance and conditioning of the resulting differential matrix. We provide analytical proofs demonstrating that our method consistently estimates the exact Laplacian. Additionally, we present various numerical examples showcasing the effectiveness of our proposed approach in solving the Laplace/Poisson equation and related eigenvalue problems.

1 Introduction

Classical numerical methods for partial differential equations (PDEs) usually rely on a structured computational mesh. Finite difference methods (FDM) and finite element methods (FEM), in particular, assume a certain degree of connectivity among sample points and necessitate the computational domain to be properly discretized or triangulated. Due to the connectivity between these sample points, mesh surgery or developing a robust remeshing strategy may be necessary for mesh adaptation, which can be challenging, especially in high dimensions.

Meshless methods [2, 34, 10, 27], therefore, present a more computationally attractive alternative when we lack control over the allocation of sample points or when obtaining a good discretization of the computational domain is problematic. These meshless methods can be roughly classified into two groups. The first group is the so-called generalized finite difference (GFD) methods [12, 14], which extend the traditional FDM based on classical Taylor's expansion. Further studies on this method have been conducted in [3, 4]. A more popular group follows a similar approach but replaces the standard basis in Taylor's expansion with radial basis functions (RBF) [18, 13, 5]. The idea is to locally represent the surface by interpolating the data points using a set of RBFs centered at each sample point. These RBF interpolation methods have demonstrated the ability to achieve spectral accuracy under certain assumptions regarding the form of the bases and the properties of the underlying functions [6, 15]. To expedite computations, the so-called radial basis function finite-differences (RBF-FD) method was proposed by [32, 31, 36]. The idea is to replace the global interpolation described above with a local interpolation, such that the reconstruction and derivative depend only on values in a small neighborhood around the center of the approximation. The first step of this approach is to gather a subset of sample points around the center using any standard algorithm such as K-nearest neighbors (KNN). Then, the global RBF approach is restricted to this smaller subset of points.

This paper extends the work developed in [38] and introduces a novel meshless method that offers several key advantages. Firstly, the methods exhibit robustness when the given sample points are close to each other; they remain stable even if sample points become arbitrarily close or overlap. Secondly, they are local methods, meaning that the approximation to the difference operator depends only on sample points in a small neighborhood of the center. Therefore, the methods are computationally efficient and optimal in computational complexity. Finally, although the methods share some similarities with RBF methods, we must emphasize that our proposed methods differ from conventional RBF-type methods. Specifically, our methods do not interpolate data points but instead approximate the underlying function based on a least-squares approach.

We introduce our proposed approach as the constrained least-squares ghost sample points (CLS-GSP) method. It comprises two main components. Initially, we represent a local approximation as a linear combination of a set of RBFs. However, unlike typical RBF interpolation methods (or even RBF-FD methods), where the centers of these RBFs coincide with the sample locations, we suggest replacing these locations with a pre-chosen set of ghost sample points. These ghost sample points can be selected arbitrarily, provided they are well-separated around the center. This effectively mitigates the problem of collinearity of the basis functions and the ill-conditioning of the coefficient matrix. To enhance the robustness of the algorithm, we propose replacing the interpolation problem with a least-squares problem, minimizing the sum of squared mismatches between the local reconstruction and the function values at all sample points. This substitution allows for greater flexibility in the sampling locations, including the allowance for overlapping given sample points.

It is essential to reiterate that our method differs from typical RBF interpolation approaches in that our reconstruction does not interpolate all function values but only at the center location where the local approximation is sought. This implies that spectral accuracy cannot be expected in our method. One might argue against replacing interpolation with least-squares fitting, which sacrifices this spectral accuracy. However, spectral accuracy is contingent upon certain conditions being met [16, 11], including the smoothness of the underlying function and the distribution of sample points. Addressing the first concern, one might consider only linear elliptic problems, which imply a smooth solution. However, regarding the second issue, we typically have less control over the distribution of the point cloud and can only assume that the sampling density satisfies certain regularity conditions. Our proposed approach, therefore, offers a simple alternative to approximate derivatives of a function with reasonable accuracy in many applications where these assumptions are not satisfied.

The second major component of our proposed method emphasizes the contribution of the function value at the center location of each reconstruction. Inspired by the approach developed in a series of studies [23, 22, 33], we observe that increasing the weighting at the center point in the least-squares (LS) reconstruction improves both the accuracy and the diagonal dominance of the differential matrix (DM). In this work, we propose a straightforward idea to enforce a hard constraint: the least-squares reconstruction must pass through the function value at the center location. This concept shares similarities with the local tangential lifting method [37], developed for estimating surface curvature and function derivatives on triangulated surfaces. This method has also been recently applied to approximate differential operators and solve PDEs on surfaces [8, 7, 9], albeit solely for triangulated surfaces, whereas we do not require any connectivity in the sampling points.

This paper is organized as follows. In Section 2, we will discuss our proposed Least-Squares Ghost Sample Points (LS-GSP) and Constrained Least-Squares Ghost Sample Points (CLS-GSP) methods. This section will provide a more detailed comparison of our approach with various methods. Some theoretical properties of these methods will be provided in Section 3. In particular, we will examine their consistency when approximating the standard Laplacian. Some numerical experiments are presented in Section 4 to demonstrate the effectiveness and robustness of our proposed approach. Finally, we provide a summary and conclusion in Section 5.

2 Our Proposed Methods

We introduce two new ideas for approximating differential operators on randomly distributed sample points. Firstly, in Section 2.1, we introduce the idea of ghost sample points and their use in function reconstruction based on least-squares. Like other LS methods, this approach does not guarantee that the local representation will pass through any data point, particularly the center point. Consequently, the approximation to the difference operator may be inconsistent with the exact value. Therefore, we propose a further enhancement called the constrained least-squares ghost sample points (CLS-GSP) method in Section 2.2 to improve the reconstruction. Although we focus mostly on the two-dimensional case in the following discussions, generalizing to higher dimensions is straightforward.

Figure 1: The graphical representation of ghost sample points. The black dot represents the center point $\mathbf{x}_0$. Blue squares denote the $n$ nearest scattered data points around $\mathbf{x}_0$. Red triangles correspond to the proposed $d$ ghost sample points associated with the center point $\mathbf{x}_0$. Here, we pick $d=8$ ghost sample points uniformly distributed on a ball with a radius given by the mean distance to the sample points.

2.1 The Least-Squares Ghost Sample Points (LS-GSP) Method

Approximating a given differential operator at $\mathbf{x}=\mathbf{x}_0$ involves two primary steps. The first step entails introducing a set of ghost sample points, followed by a least-squares approximation based on RBFs centered at these ghost sample points.

We first gather a subset of sample points $\{\mathbf{x}_i, i=1,\ldots,n\}$ within a small neighborhood of $\mathbf{x}_0$. Within this neighborhood, we introduce a set of $d$ basis functions, each centered at a predefined location denoted by $\hat{\mathbf{x}}_m$ for $m=1,\ldots,d$. These $d$ locations are referred to as the ghost sample points associated with the center location $\mathbf{x}=\mathbf{x}_0$. A typical arrangement is illustrated in Figure 1. The selection of these locations offers considerable flexibility. This work suggests two simple choices for allocating these ghost sample points. The first one is to have $\{\hat{\mathbf{x}}_m, m=1,\ldots,8\}$ (i.e., $d=8$) evenly distributed on the boundary of a ball centered at $\mathbf{x}_0$ with the radius given by the average distance from the sample points $\{\mathbf{x}_i, i=1,\ldots,n\}$ to the center. The second one is to have them uniformly located in a small neighborhood centered at the center point. Figure 8(b) shows a distribution with 49 ghost sample points uniformly placed in a disc. This allows these ghost sample points to spread widely and evenly in the local neighborhood. Indeed, one might randomly select a certain fraction of sample points as ghost sample points and determine a stable local approximation of the function using cross-validation. This approach, however, might significantly increase computational time and will not be further investigated here.
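To make the first allocation strategy concrete, the following minimal NumPy sketch (the function name and the random test data are our own illustration, not part of the paper) places $d=8$ ghost sample points evenly on a circle whose radius equals the mean distance from the gathered neighbors to the center:

```python
import numpy as np

def ghost_points_on_circle(x0, neighbors, d=8):
    """Place d ghost sample points evenly on a circle centered at x0.

    The radius is the average distance from the gathered neighbors to x0,
    following the first allocation strategy described above.
    """
    r = np.mean(np.linalg.norm(neighbors - x0, axis=1))   # average distance
    angles = 2.0 * np.pi * np.arange(1, d + 1) / d        # evenly spaced angles
    return x0 + r * np.column_stack((np.cos(angles), np.sin(angles)))

# Example: d = 8 ghost points for 20 scattered neighbors around the origin
rng = np.random.default_rng(0)
neighbors = rng.uniform(-1.0, 1.0, size=(20, 2))
ghosts = ghost_points_on_circle(np.zeros(2), neighbors, d=8)
```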

The method for selecting these ghost sample points in higher dimensions can also be quite natural. For instance, in three dimensions, we might opt for a sphere of a given radius and uniformly distribute ghost sample points on its surface. Determining these locations can be achieved using solutions of the Thomson problem, conceptualizing each ghost sample point as a charged particle that repels others according to Coulomb's law. Explicit coordinates of some configurations are available and can be easily found in the public domain.

Next, we associate a radial basis function with each ghost sample point and reconstruct the underlying function in the space spanned by these basis functions. The major difference from the typical RBF approach is that we propose choosing $d\ll n$ so that the typical interpolation problem at $\{\mathbf{x}_i, i=1,\ldots,n\}$ becomes a least-squares problem. Specifically, we reconstruct the function in the following form:

u_c(\mathbf{x})=\sum_{m=1}^{d}\lambda_{m}\phi(\|\mathbf{x}-\hat{\mathbf{x}}_{m}\|)+\sum_{j=0}^{l}\mu_{j}P_{j}(\mathbf{x})\,, \qquad (1)

and aim to determine the unknown coefficients $\lambda_m$ and $\mu_j$ by minimizing the sum of the squared mismatches $\left[u_c(\mathbf{x}_i)-u(\mathbf{x}_i)\right]^2$ at the sample locations $\mathbf{x}_i$, along with the extra polynomial constraints as in RBF-FD. However, since we are considering a least-squares problem after all, our approach might seem to work even if we do not incorporate extra polynomials into the set of basis functions. There are two reasons why we propose to include them. Firstly, this allows our method to approach the standard RBF-FD as $d$ approaches $n$ and $\hat{\mathbf{x}}_m$ approaches $\mathbf{x}_i$. Therefore, our method recovers all the desirable properties, including the stability behavior of RBF-FD. The more significant reason, however, relates to the consistency of the approximation, which will be fully discussed in Section 3.

The choice of the mismatch between $u(\mathbf{x})$ and $u_c(\mathbf{x})$ is clearly not unique. In applications where the samples contain noise, it might be more desirable to emphasize local information by imposing soft constraints. For instance, we can introduce a weight function such as $\kappa(\mathbf{x}_i)=\exp(-\gamma\|\mathbf{x}_i\|)$ to assign smaller weights to those sample points far away from the center point. The corresponding minimization problem then becomes $\sum_{i=1}^{n}\kappa(\mathbf{x}_i)\left[u(\mathbf{x}_i)-u_c(\mathbf{x}_i)\right]^2$. However, the current paper will not further investigate this moving least-squares-type approach.

Mathematically, we find the least-squares solution to

\begin{pmatrix}\Phi&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}\begin{pmatrix}\boldsymbol{\lambda}\\ \boldsymbol{\mu}\end{pmatrix}=\begin{pmatrix}\mathbf{u}\\ \mathbf{0}\end{pmatrix}

where

\Phi=\begin{pmatrix}\phi\left(\|\mathbf{x}_{1}-\hat{\mathbf{x}}_{1}\|\right)&\ldots&\phi\left(\|\mathbf{x}_{1}-\hat{\mathbf{x}}_{d}\|\right)\\ \vdots&&\vdots\\ \phi\left(\|\mathbf{x}_{n}-\hat{\mathbf{x}}_{1}\|\right)&\ldots&\phi\left(\|\mathbf{x}_{n}-\hat{\mathbf{x}}_{d}\|\right)\end{pmatrix}\,,\quad\textbf{P}=\begin{pmatrix}1&x_{1}&y_{1}&x_{1}^{2}&x_{1}y_{1}&\ldots\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ 1&x_{n}&y_{n}&x_{n}^{2}&x_{n}y_{n}&\ldots\end{pmatrix}\,,
\hat{\textbf{P}}=\begin{pmatrix}1&\hat{x}_{1}&\hat{y}_{1}&\hat{x}_{1}^{2}&\hat{x}_{1}\hat{y}_{1}&\ldots\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ 1&\hat{x}_{d}&\hat{y}_{d}&\hat{x}_{d}^{2}&\hat{x}_{d}\hat{y}_{d}&\ldots\end{pmatrix}\,,\quad\boldsymbol{\lambda}=\begin{pmatrix}\lambda_{1}\\ \vdots\\ \lambda_{d}\end{pmatrix}\,,\quad\boldsymbol{\mu}=\begin{pmatrix}\mu_{0}\\ \vdots\\ \mu_{l}\end{pmatrix}\ \mbox{ and }\ \mathbf{u}=\begin{pmatrix}u(\mathbf{x}_{1})\\ \vdots\\ u(\mathbf{x}_{n})\end{pmatrix}\,.

The sizes of the matrices $\textbf{P}$ and $\hat{\textbf{P}}$ are $n\times(l+1)$ and $d\times(l+1)$, respectively, where $l+1$ is the total number of monomials of degree up to $k$ added to the set of basis functions. For example, if we include polynomials in 2 variables ($x$ and $y$) up to degree $k$, we have a total of $l+1=\frac{1}{2}(k+1)(k+2)$ elements $P_j$. When we have 3 variables ($x$, $y$, and $z$), we have $l+1=\frac{1}{6}(k^3+11k)+k^2+1$ elements $P_j$ for polynomials up to degree $k$.
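Both counts are instances of the binomial coefficient $\binom{k+D}{D}$ for monomials of degree at most $k$ in $D$ variables; a quick check (our own illustration, not part of the paper) confirms the two formulas above:

```python
from math import comb

def num_monomials(D, k):
    """Number of monomials of degree <= k in D variables: C(k + D, D)."""
    return comb(k + D, D)

for k in range(6):
    assert num_monomials(2, k) == (k + 1) * (k + 2) // 2            # 2D count
    assert num_monomials(3, k) == (k**3 + 11 * k) // 6 + k**2 + 1   # 3D count
```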

We introduce the second set of linear equations $\hat{\textbf{P}}^T\boldsymbol{\lambda}=\mathbf{0}$ to mimic the conventional RBF-FD method. In particular, if we choose $d=n$ and $\hat{\mathbf{x}}_i=\mathbf{x}_i$, the approach reduces back to the original RBF-FD method with the addition of polynomial basis functions. In this paper, however, we choose $d\ll n$, and the system becomes overdetermined since there are in total $d+l+1$ unknown coefficients, but the system contains $n+l+1$ equations. The least-squares coefficients $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ can be obtained by

\begin{pmatrix}\boldsymbol{\lambda}\\ \boldsymbol{\mu}\end{pmatrix}=\begin{pmatrix}\Phi&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}^{\dagger}\begin{pmatrix}\mathbf{u}\\ \mathbf{0}\end{pmatrix}

where $A^{\dagger}$ denotes the pseudoinverse of a matrix $A$.

Finally, we consider computing the derivative of the reconstruction. We denote the differential operator by $\mathcal{L}$. Then, applying the operator to the reconstruction and evaluating it at $\mathbf{x}=\mathbf{x}_0$, we have $\mathcal{L}u(\mathbf{x}_0)=\sum_{i=1}^{d}\lambda_i\,\mathcal{L}\phi(\|\mathbf{x}-\hat{\mathbf{x}}_i\|)\big|_{\mathbf{x}=\mathbf{x}_0}+\sum_{j=0}^{l}\mu_j\,\mathcal{L}P_j(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}_0}$ with $\lambda_i$ and $\mu_j$ determined by solving the above least-squares problem. Alternatively, we could rewrite the above procedure and solve the explicit problem $\mathcal{L}u(\mathbf{x}_0)=\sum_{i=1}^{n}w_i u(\mathbf{x}_i)$, where the weights $w_1,\ldots,w_n$ are given by the first $n$ components in

\begin{pmatrix}\Phi^{T}&\hat{\textbf{P}}\\ \textbf{P}^{T}&\mathbf{0}\end{pmatrix}^{\dagger}\begin{pmatrix}\mathcal{L}\boldsymbol{\phi}|_{\mathbf{x}=\mathbf{x}_{0}}\\ \mathcal{L}\boldsymbol{P}|_{\mathbf{x}=\mathbf{x}_{0}}\end{pmatrix}\,.
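As a concrete illustration of this weight computation, here is a minimal NumPy sketch for the LS-GSP Laplacian weights with a Gaussian kernel and a quadratic polynomial basis; the function name, the kernel choice, and the basis ordering are our own assumptions rather than a prescribed implementation:

```python
import numpy as np

def lsgsp_laplacian_weights(pts, ghosts, c):
    """LS-GSP weights w so that Lap u(x0) ~ sum_i w_i u(x_i).

    pts:    (n, 2) sample points, shifted so the center x0 is the origin
    ghosts: (d, 2) ghost sample points
    c:      shape parameter of the Gaussian kernel phi(r) = exp(-c r^2)
    """
    phi = lambda r: np.exp(-c * r**2)
    # 2D Laplacian of phi(||x - xhat||) evaluated at x = 0 (phi'' + phi'/r):
    lap_phi = lambda r: (4.0 * c**2 * r**2 - 4.0 * c) * np.exp(-c * r**2)
    # Quadratic monomial basis: 1, x, y, x^2, xy, y^2 (so l + 1 = 6)
    poly = lambda X: np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                                      X[:, 0]**2, X[:, 0] * X[:, 1], X[:, 1]**2])
    lap_poly = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 2.0])  # Laplacian of each monomial

    Phi = phi(np.linalg.norm(pts[:, None, :] - ghosts[None, :, :], axis=2))
    P, Phat = poly(pts), poly(ghosts)
    A = np.block([[Phi.T, Phat],
                  [P.T, np.zeros((P.shape[1], Phat.shape[1]))]])
    rhs = np.concatenate([lap_phi(np.linalg.norm(ghosts, axis=1)), lap_poly])
    w = np.linalg.pinv(A) @ rhs
    return w[:len(pts)]          # keep only the first n components
```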

2.2 The Constrained Least-Squares Ghost Sample Points (CLS-GSP) Method

The LS-GSP method introduced in the above section works reasonably well even if the given sampling locations $\mathbf{x}_i$ are badly distributed. However, we observe that the approximation to any linear operator might be inaccurate. A similar behavior has been observed in [23, 22, 33]. The main reason is that the least-squares approach does not necessarily put enough weight on the center location. This has motivated the development of the modified virtual grid difference (MVGD) method [33], in which we explicitly increase the least-squares weighting at the center location.

In this work, we propose to enforce that the least-squares reconstruction must pass through $(\mathbf{x}_0, u(\mathbf{x}_0))$ exactly. Mathematically, this constraint implies:

u(\mathbf{x}_{0})=\sum_{i=1}^{d}\lambda_{i}\phi(\|\mathbf{x}_{0}-\hat{\mathbf{x}}_{i}\|)+\sum_{j=0}^{l}\mu_{j}P_{j}(\mathbf{x}_{0})\,.

Without loss of generality, we assume $\mathbf{x}_0=\mathbf{0}$ for the remaining discussion; otherwise, we could apply the translation $\tilde{\mathbf{x}}=\mathbf{x}-\mathbf{x}_0$ to move the center point to the origin. The above equation then implies $u(\mathbf{0})=\sum_{i=1}^{d}\lambda_i\phi(\|\hat{\mathbf{x}}_i\|)+\mu_0$. By subtracting this equation from equation (1), the CLS-GSP method fits the function $u$ by:

u(\mathbf{x})=u(\mathbf{0})+\sum_{i=1}^{d}\lambda_{i}\left[\phi\left(\|\mathbf{x}-\hat{\mathbf{x}}_{i}\|\right)-\phi\left(\|\hat{\mathbf{x}}_{i}\|\right)\right]+\sum_{j=1}^{l}\mu_{j}P_{j}(\mathbf{x})\,.

We emphasize that the set of polynomial bases in CLS-GSP now excludes the constant basis $P_0(\mathbf{x})=1$ corresponding to the index $j=0$. Therefore, when we substitute $\mathbf{x}=\mathbf{0}$ into the right-hand side, the least-squares approximation gives the exact function value $u(\mathbf{0})$ regardless of the values of $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$.

Following a similar practice in the RBF community, we impose additional constraints

\sum_{i=1}^{d}\lambda_{i}P_{j}(\hat{\mathbf{x}}_{i})=0

for each $j=1,\ldots,l$ to improve the conditioning of the ultimate least-squares system. Replacing the previous set of basis functions with the modified basis $\psi_i(\mathbf{x})=\phi\left(\|\mathbf{x}-\hat{\mathbf{x}}_i\|\right)-\phi\left(\|\hat{\mathbf{x}}_i\|\right)$, we obtain the coefficients $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ by finding the least-squares solution of the following overdetermined system of size $(n+l)\times(d+l)$:

\begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}\begin{pmatrix}\boldsymbol{\lambda}\\ \boldsymbol{\mu}\end{pmatrix}=\begin{pmatrix}\mathbf{u}\\ \mathbf{0}\end{pmatrix}

where, with a slight abuse of notation, $\mathbf{u}$, $\textbf{P}$, and $\hat{\textbf{P}}$ are now given by

\boldsymbol{\Psi}=\begin{pmatrix}\psi_{1}(\mathbf{x}_{1})&\psi_{2}(\mathbf{x}_{1})&\ldots&\psi_{d}(\mathbf{x}_{1})\\ \psi_{1}(\mathbf{x}_{2})&\psi_{2}(\mathbf{x}_{2})&\ldots&\psi_{d}(\mathbf{x}_{2})\\ \vdots&&\vdots\\ \psi_{1}(\mathbf{x}_{n})&\psi_{2}(\mathbf{x}_{n})&\ldots&\psi_{d}(\mathbf{x}_{n})\end{pmatrix}\,,\quad\mathbf{u}=\begin{pmatrix}u(\mathbf{x}_{1})-u(\mathbf{0})\\ u(\mathbf{x}_{2})-u(\mathbf{0})\\ \vdots\\ u(\mathbf{x}_{n})-u(\mathbf{0})\end{pmatrix}\,,
\textbf{P}=\begin{pmatrix}P_{1}(\mathbf{x}_{1})&P_{2}(\mathbf{x}_{1})&\ldots&P_{l}(\mathbf{x}_{1})\\ P_{1}(\mathbf{x}_{2})&P_{2}(\mathbf{x}_{2})&\ldots&P_{l}(\mathbf{x}_{2})\\ \vdots&&\vdots\\ P_{1}(\mathbf{x}_{n})&P_{2}(\mathbf{x}_{n})&\ldots&P_{l}(\mathbf{x}_{n})\end{pmatrix}\quad\text{and}\quad\hat{\textbf{P}}=\begin{pmatrix}P_{1}(\hat{\mathbf{x}}_{1})&P_{2}(\hat{\mathbf{x}}_{1})&\ldots&P_{l}(\hat{\mathbf{x}}_{1})\\ P_{1}(\hat{\mathbf{x}}_{2})&P_{2}(\hat{\mathbf{x}}_{2})&\ldots&P_{l}(\hat{\mathbf{x}}_{2})\\ \vdots&&\vdots\\ P_{1}(\hat{\mathbf{x}}_{d})&P_{2}(\hat{\mathbf{x}}_{d})&\ldots&P_{l}(\hat{\mathbf{x}}_{d})\end{pmatrix}\,.

The solution can again be found by taking the pseudoinverse of the $(n+l)\times(d+l)$ matrix.

Finally, applying a differential operator $\mathcal{L}$ to a function defined at $\mathbf{x}=\mathbf{0}$, we have $\mathcal{L}u(\mathbf{0})\approx\sum_{i=1}^{n}w_i\left(u(\mathbf{x}_i)-u(\mathbf{0})\right)$, where $w_1,\ldots,w_n$ are determined by the first $n$ components in

\begin{pmatrix}\boldsymbol{\Psi}^{T}&\hat{\textbf{P}}\\ \textbf{P}^{T}&\mathbf{0}\end{pmatrix}^{\dagger}\begin{pmatrix}\mathcal{L}\boldsymbol{\psi}|_{\mathbf{x}=\mathbf{0}}\\ \mathcal{L}\boldsymbol{P}|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\,,\ \mbox{ where }\ \mathcal{L}\boldsymbol{\psi}|_{\mathbf{x}=\mathbf{0}}=\begin{pmatrix}\mathcal{L}\psi_{1}|_{\mathbf{x}=\mathbf{0}}\\ \vdots\\ \mathcal{L}\psi_{d}|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\quad\mbox{and}\quad\mathcal{L}\boldsymbol{P}|_{\mathbf{x}=\mathbf{0}}=\begin{pmatrix}\mathcal{L}P_{1}|_{\mathbf{x}=\mathbf{0}}\\ \vdots\\ \mathcal{L}P_{l}|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\,.
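The CLS-GSP weights can be computed by an almost identical sketch; compared with the LS-GSP code in Section 2.1, only the shifted basis $\psi_i$, the dropped constant monomial, and the right-hand side $u(\mathbf{x}_i)-u(\mathbf{0})$ change. Again, this is a minimal sketch under the same Gaussian-kernel assumption, not a prescribed implementation:

```python
import numpy as np

def clsgsp_laplacian_weights(pts, ghosts, c):
    """CLS-GSP weights w so that Lap u(0) ~ sum_i w_i (u(x_i) - u(0)).

    Assumes the center point has been translated to the origin.
    """
    phi = lambda r: np.exp(-c * r**2)
    lap_phi = lambda r: (4.0 * c**2 * r**2 - 4.0 * c) * np.exp(-c * r**2)
    # Quadratic basis WITHOUT the constant: x, y, x^2, xy, y^2 (l = 5)
    poly = lambda X: np.column_stack([X[:, 0], X[:, 1], X[:, 0]**2,
                                      X[:, 0] * X[:, 1], X[:, 1]**2])
    lap_poly = np.array([0.0, 0.0, 2.0, 0.0, 2.0])

    # Shifted basis psi_i(x) = phi(||x - xhat_i||) - phi(||xhat_i||)
    rhat = np.linalg.norm(ghosts, axis=1)
    Psi = phi(np.linalg.norm(pts[:, None, :] - ghosts[None, :, :], axis=2)) \
        - phi(rhat)[None, :]
    P, Phat = poly(pts), poly(ghosts)
    A = np.block([[Psi.T, Phat],
                  [P.T, np.zeros((P.shape[1], Phat.shape[1]))]])
    # The constant shift in psi_i drops out under the Laplacian,
    # so Delta psi_i at the origin equals Delta phi_i there.
    rhs = np.concatenate([lap_phi(rhat), lap_poly])
    return (np.linalg.pinv(A) @ rhs)[:len(pts)]
```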

2.3 Comparisons with Some Other Approaches

While our proposed ghost sample point methods incorporate RBFs and virtual grids, they differ from typical RBF approaches and from our previously developed MVGD method [33]. We now provide a detailed comparison of these approaches.

Conventional RBF algorithms utilize point clouds for interpolation of the underlying function. Subsequently, the interpolant undergoes analytical differentiation. In contrast, LS-GSP or CLS-GSP employs least-squares to reconstruct the underlying function by specifying the basis functions at predetermined locations, which may not align with the data points. Although RBF-FD narrows its focus to a subset of locations within the point cloud for interpolation, the basis functions remain centered at this subset. Consequently, the quality of interpolation could be heavily contingent upon the quality of the provided point cloud sampling and the subsampling step during neighborhood selection.

While MVGD also employs least-squares fitting, it utilizes a polynomial basis for the local approximation of the underlying function. Subsequently, finite difference techniques are applied to determine higher-order derivatives of the function, with any center function value substituted by the data value. For instance, assuming $\tilde{u}_i(x)$ represents the local least-squares approximation of the underlying function $u(x)$ at $x_i$, the Laplacian of the function is given by $\left[\tilde{u}_i(-\Delta x)-2u_i+\tilde{u}_i(\Delta x)\right]/\Delta x^2$, where $\Delta x$ controls the size of the virtual grid, and $u_i$ denotes the given function value at the data point $x_i$. There are multiple distinctions from the method proposed in this work. Firstly, we replace the polynomial least-squares fitting with an RBF basis centered at well-chosen locations. Consequently, the set of virtual grids (ghost sample points) is not used for finite difference calculations; instead, it already plays a role in the least-squares process. Secondly, unlike MVGD, CLS-GSP does not replace the finite difference value with any function value to enhance the diagonal dominance property; rather, we introduce a hard constraint to ensure that the least-squares reconstruction matches the function value at the center location.

3 A Consistency Analysis

To facilitate our discussion, we focus on the standard Laplace operator $\Delta$ due to its significance in numerous applications. While our discussion is limited to two dimensions, extending our approach to higher dimensions is straightforward. The consistency of any other linear differential operator $\mathcal{L}$ can be similarly established, although we will not delve into those details here.

In this section, we demonstrate the consistency of the CLS-GSP method. Under some regularity conditions, we will analytically show that the CLS-GSP method provides a consistent estimate of the standard Laplacian of a function with order $k-1$, where $k$ is the order of the polynomial basis included in the CLS-GSP method.

3.1 Approximating the Laplace Operator

Following the discussion above, the Laplace operator is approximated by

\Delta u(\mathbf{0})\approx\sum_{i=1}^{n}w_{i}\left(u(\mathbf{x}_{i})-u(\mathbf{0})\right)\quad\text{where}\quad\mathbf{w}=\begin{pmatrix}\boldsymbol{\Psi}^{T}&\hat{\textbf{P}}\\ \textbf{P}^{T}&\mathbf{0}\end{pmatrix}^{\dagger}\begin{pmatrix}\Delta\boldsymbol{\psi}|_{\mathbf{x}=\mathbf{0}}\\ \Delta\textbf{P}|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\,. \qquad (2)

The vector $\mathbf{w}$ has length $n+l$, where $n$ is the number of sample points and $l$ is the number of elements in the polynomial basis up to the $k$-th order in the representation. In approximating the derivative of $u$ at $\mathbf{0}$, on the other hand, we use only the first $n$ elements of $\mathbf{w}$. In particular, if we include only the linear basis ($k=1$ and $l=2$), we have the following system:

\begin{pmatrix}w_{1}\\ w_{2}\\ \vdots\\ w_{n}\\ w_{n+1}\\ w_{n+2}\end{pmatrix}=\begin{pmatrix}\psi_{1}(\mathbf{x}_{1})&\ldots&\psi_{1}(\mathbf{x}_{n})&\hat{x}_{1}&\hat{y}_{1}\\ \psi_{2}(\mathbf{x}_{1})&\ldots&\psi_{2}(\mathbf{x}_{n})&\hat{x}_{2}&\hat{y}_{2}\\ \vdots&&\vdots&\vdots&\vdots\\ \psi_{d}(\mathbf{x}_{1})&\ldots&\psi_{d}(\mathbf{x}_{n})&\hat{x}_{d}&\hat{y}_{d}\\ x_{1}&\ldots&x_{n}&0&0\\ y_{1}&\ldots&y_{n}&0&0\end{pmatrix}^{\dagger}\begin{pmatrix}\Delta\psi_{1}|_{\mathbf{x}=\mathbf{0}}\\ \Delta\psi_{2}|_{\mathbf{x}=\mathbf{0}}\\ \vdots\\ \Delta\psi_{d}|_{\mathbf{x}=\mathbf{0}}\\ 0\\ 0\end{pmatrix}

and the following system if we include up to the quadratic basis ($k=2$ and $l=5$):

\begin{pmatrix}w_{1}\\ w_{2}\\ \vdots\\ w_{n}\\ w_{n+1}\\ w_{n+2}\\ w_{n+3}\\ w_{n+4}\\ w_{n+5}\end{pmatrix}=\begin{pmatrix}\psi_{1}(\mathbf{x}_{1})&\ldots&\psi_{1}(\mathbf{x}_{n})&\hat{x}_{1}&\hat{y}_{1}&\hat{x}^{2}_{1}&\hat{x}_{1}\hat{y}_{1}&\hat{y}^{2}_{1}\\ \psi_{2}(\mathbf{x}_{1})&\ldots&\psi_{2}(\mathbf{x}_{n})&\hat{x}_{2}&\hat{y}_{2}&\hat{x}^{2}_{2}&\hat{x}_{2}\hat{y}_{2}&\hat{y}^{2}_{2}\\ \vdots&&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ \psi_{d}(\mathbf{x}_{1})&\ldots&\psi_{d}(\mathbf{x}_{n})&\hat{x}_{d}&\hat{y}_{d}&\hat{x}^{2}_{d}&\hat{x}_{d}\hat{y}_{d}&\hat{y}^{2}_{d}\\ x_{1}&\ldots&x_{n}&0&0&0&0&0\\ y_{1}&\ldots&y_{n}&0&0&0&0&0\\ x^{2}_{1}&\ldots&x^{2}_{n}&0&0&0&0&0\\ x_{1}y_{1}&\ldots&x_{n}y_{n}&0&0&0&0&0\\ y^{2}_{1}&\ldots&y^{2}_{n}&0&0&0&0&0\end{pmatrix}^{\dagger}\begin{pmatrix}\Delta\psi_{1}|_{\mathbf{x}=\mathbf{0}}\\ \Delta\psi_{2}|_{\mathbf{x}=\mathbf{0}}\\ \vdots\\ \Delta\psi_{d}|_{\mathbf{x}=\mathbf{0}}\\ 0\\ 0\\ 2\\ 0\\ 2\end{pmatrix}\,.

3.2 Consistency

In the following discussion on consistency, we fix the number of sample points $\mathbf{x}_i=(x_i,y_i)\in\mathbb{R}^2$, $i=1,\ldots,n$, while shrinking these $n$ points towards the origin by taking $h\mathbf{x}_i$ with $h$ approaching $0$. We define $r_i:=h\|\mathbf{x}_i\|_2=h\sqrt{x_i^2+y_i^2}$. For the ghost sample points chosen evenly distributed on the boundary of a ball, since the radius of this ball equals the average of $r_i$ for $i=1,\ldots,n$, the radius (denoted by $\hat{r}_d$) shrinks to $0$ at a rate of $O(h)$. For the second type of distribution, where the ghost sample points are chosen evenly within a disc, we can also assume that the radius of the disc is given by the average distance from the sample points to the origin. This gives the same estimate $\hat{r}_d=O(h)$.

Definition 1.

An estimate $\theta_h$ of a quantity $\theta$ is said to be consistent of order $k$ if $|\theta_h-\theta|=O(h^k)$ as $h$ approaches $0$.

The remainder of this section aims to demonstrate the following consistency property of the approximation to the Laplacian using the CLS-GSP method proposed in the previous section.

Theorem 1.

Assume $u\in C^{k+1}$. Additionally, we assume that the RBF $\phi$ has the form $\phi(r;c)=f(cr^2)$ with shape parameter $c$, where $f\in C^2[0,\infty)$. If the sample points $h\mathbf{x}_i$ are taken with $h\to 0$ while keeping $ch^2$ constant, and the polynomial basis contains elements up to order $k$, then the approximation (2) is consistent of order $k-1$, i.e., $\left|\mathbf{w}_A^T\mathbf{u}-\Delta u(\mathbf{0})\right|=O\left(h^{k-1}\right)$ as $h$ approaches $0$.

The rest of this section is devoted to the proof of Theorem 1. To begin with, we need the following two lemmas.

Lemma 1.

Denote the first $n$ entries and the remaining $l$ elements of $\mathbf{w}$ in (2) by $\mathbf{w}_A$ and $\mathbf{w}_B$, respectively, so that

\begin{pmatrix}\mathbf{w}_{A}\\ \mathbf{w}_{B}\end{pmatrix}=\begin{pmatrix}\boldsymbol{\Psi}^{T}&\hat{\textbf{P}}\\ \textbf{P}^{T}&\mathbf{0}\end{pmatrix}^{\dagger}\begin{pmatrix}\Delta\boldsymbol{\psi}|_{\mathbf{x}=\mathbf{0}}\\ \Delta\textbf{P}|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\,.

If the matrix $\begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}$ has full rank $d+l$, the weights $\mathbf{w}_A$ satisfy $\textbf{P}^T\mathbf{w}_A=\left.\Delta\textbf{P}\right|_{\mathbf{x}=\mathbf{0}}$.

Proof.

Since $\begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}$ has full rank, its transpose $\begin{pmatrix}\boldsymbol{\Psi}^{T}&\hat{\textbf{P}}\\ \textbf{P}^{T}&\mathbf{0}\end{pmatrix}$ also has full rank. This implies that the solution $\begin{pmatrix}\mathbf{w}_{A}\\ \mathbf{w}_{B}\end{pmatrix}$ must be a particular solution to the underdetermined system given by

\begin{pmatrix}\boldsymbol{\Psi}^{T}&\hat{\textbf{P}}\\ \textbf{P}^{T}&\mathbf{0}\end{pmatrix}\begin{pmatrix}\mathbf{w}_{A}\\ \mathbf{w}_{B}\end{pmatrix}=\begin{pmatrix}\Delta\boldsymbol{\psi}|_{\mathbf{x}=\mathbf{0}}\\ \Delta\textbf{P}|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\,.

The second block row implies $\textbf{P}^T\mathbf{w}_A=\left.\Delta\textbf{P}\right|_{\mathbf{x}=\mathbf{0}}$. This completes the proof. ∎

Lemma 2.

Assume the RBF has the form $\phi(r)=f(cr^2)$ with $f\in C^2[0,\infty)$, where $c$ is the shape parameter. If both matrices $\boldsymbol{\Psi}$ and $\begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}$ have full rank, then $\left\|\begin{pmatrix}\mathbf{w}_{A}\\ \mathbf{w}_{B}\end{pmatrix}\right\|=O\left(h^{-2}\right)$ in any vector norm as $h$ approaches $0$ while $ch^2$ is kept constant.

Most of the commonly used RBFs are of the form $f(cr^2)$ with a twice-differentiable $f$. These include the inverse quadratic (IQ), the multiquadric (MQ), the Gaussian (GA), and the inverse multiquadric (IMQ) kernels.

Proof.

We first rewrite the definition of $\mathbf{w}$ and obtain

\begin{pmatrix}\mathbf{w}_{A}\\ \mathbf{w}_{B}\end{pmatrix}=\left(\begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}^{\dagger}\right)^{T}\begin{pmatrix}\left.\Delta\boldsymbol{\psi}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta\textbf{P}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\,.

Therefore, it is sufficient to show the following two conditions: $\left\|\begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}^{\dagger}\right\|=O(1)$ and $\left\|\begin{pmatrix}\left.\Delta\boldsymbol{\psi}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta\textbf{P}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\right\|=O(h^{-2})$. For the first condition, we apply the expansion of the pseudoinverse $(A+B)^{\dagger}=A^{\dagger}-A^{\dagger}BA^{\dagger}+O(\|B\|^2)$ and take $A=\begin{pmatrix}\boldsymbol{\Psi}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{pmatrix}$ and $B=\begin{pmatrix}\mathbf{0}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}$. If the shape parameter $c$ satisfies $ch^2=\text{constant}$ as $h$ goes to $0$ (i.e., $c=O(h^{-2})$), we find that $A$ is independent of $h$ and the remaining part $\|B\|$ (without the zeroth-order polynomial) is of order $h$. This implies

\left\|\begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\mathbf{0}\end{pmatrix}^{\dagger}\right\|=\left\|\begin{pmatrix}\boldsymbol{\Psi}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{pmatrix}^{\dagger}\right\|+O(h)\,.

For the second condition, we first examine $\Delta\boldsymbol{\psi}(\mathbf{0})$. For each component, we have

\Delta\psi_{i}(\mathbf{0})=\left.\Delta\phi_{i}\right|_{\mathbf{x}=\mathbf{0}}=\left.\nabla\cdot\nabla\phi_{i}(r(\mathbf{x}))\right|_{\mathbf{x}=\mathbf{0}}=\left.\nabla\cdot(\phi_{i}^{\prime}(r)\nabla r)\right|_{\mathbf{x}=\mathbf{0}}=\left.\phi_{i}^{\prime\prime}(r)\,\nabla r\cdot\nabla r+\phi_{i}^{\prime}(r)\,\Delta r\right|_{\mathbf{x}=\mathbf{0}}\,.

Now, since $\nabla r\cdot\nabla r=1$ and $\Delta r=(D-1)r^{-1}$, where $D$ is the dimension, we obtain

\Delta\psi_{i}(\mathbf{0})=\left.\phi_{i}^{\prime\prime}(r)+\frac{(D-1)\,\phi_{i}^{\prime}(r)}{r}\right|_{r=\hat{r}_{i}}=\phi_{i}^{\prime\prime}(\hat{r}_{i})+\frac{(D-1)\,\phi_{i}^{\prime}(\hat{r}_{i})}{\hat{r}_{i}}=\frac{\hat{r}^{2}_{i}\phi_{i}^{\prime\prime}(\hat{r}_{i})+(D-1)\,\hat{r}_{i}\phi_{i}^{\prime}(\hat{r}_{i})}{\hat{r}^{2}_{i}}\,.

Now, the condition $\phi(r)=f(cr^2)$ with $f\in C^2[0,\infty)$ implies $r\phi^{\prime}(r)=2cr^2 f^{\prime}(cr^2)$ and $r^2\phi^{\prime\prime}(r)=2cr^2 f^{\prime}(cr^2)+4c^2r^4 f^{\prime\prime}(cr^2)$. These are all functions of $cr^2$. Therefore, if $c\hat{r}_i^2$ is constant, the quantity $\Delta\psi_i(\mathbf{0})$ can be expressed as a constant divided by $\hat{r}_i^2$. As all the ghost sample points shrink to the origin with speed $h$, the average radius $\hat{r}_i$ is a constant multiple of $h$. This implies that keeping $c\hat{r}_i^2$ constant is the same as keeping $ch^2$ constant. Therefore, the quantity $\Delta\psi_i(\mathbf{0})$ can also be expressed as a constant divided by $h^2$, and $\|\Delta\boldsymbol{\psi}(\mathbf{0})\|=O(h^{-2})$ when $h$ approaches zero with $ch^2$ constant.

For the term $\left.\Delta\textbf{P}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}$, we notice that only two polynomials in the basis (given by $x^2$ and $y^2$) have a nonzero Laplacian. In particular,

\left.\Delta P_{j}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}=\begin{cases}2&\text{if }P_{j}(\mathbf{x})=x_{1}^{2}\text{ or }P_{j}(\mathbf{x})=x_{2}^{2}\text{ or }\ldots\text{ or }P_{j}(\mathbf{x})=x_{D}^{2}\\ 0&\text{otherwise.}\end{cases}

For example, in two-dimensional space, if we take the basis to be $\{x,y,x^2,xy,y^2\}$, then

\left.\Delta\textbf{P}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}=\begin{pmatrix}\left.\Delta P_{1}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta P_{2}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta P_{3}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta P_{4}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta P_{5}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}=\begin{pmatrix}\left.\Delta x\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta y\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta x^{2}\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta xy\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta y^{2}\right|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}=\begin{pmatrix}0\\ 0\\ 2\\ 0\\ 2\end{pmatrix}\,.

Combining these two results, we have

\left\|\begin{pmatrix}\left.\Delta\boldsymbol{\psi}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\\ \left.\Delta\textbf{P}(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\end{pmatrix}\right\|=O(h^{-2})\,. ∎

Now, we are ready to present the proof of Theorem 1.

Proof of Theorem 1.

We write $\mathbf{w}_A^T\mathbf{u}=w_1(u_1-u_0)+w_2(u_2-u_0)+\cdots+w_n(u_n-u_0)$, where $u_i=u(\mathbf{x}_i)$ for $i=1,\ldots,n$ and $u_0=u(\mathbf{0})$. We first expand all $u_i$ around the origin up to the $(k+1)$-th order derivatives. For example, if polynomials up to second order are included ($k=2$) in the case of two dimensions ($D=2$), we have

\begin{aligned}\mathbf{w}_{A}^{T}\mathbf{u}=&\;(w_{1}x_{1}+\cdots+w_{n}x_{n})u_{x}+(w_{1}y_{1}+\cdots+w_{n}y_{n})u_{y}\\ &+\left(w_{1}x_{1}y_{1}+\cdots+w_{n}x_{n}y_{n}\right)u_{xy}\\ &+\frac{1}{2}\left(w_{1}x^{2}_{1}+\cdots+w_{n}x^{2}_{n}\right)u_{xx}+\frac{1}{2}\left(w_{1}y^{2}_{1}+\cdots+w_{n}y^{2}_{n}\right)u_{yy}\\ &+\frac{1}{6}\left(w_{1}x^{3}_{1}+\cdots+w_{n}x^{3}_{n}\right)u_{xxx}+\frac{1}{2}\left(w_{1}x^{2}_{1}y_{1}+\cdots+w_{n}x^{2}_{n}y_{n}\right)u_{xxy}\\ &+\frac{1}{2}\left(w_{1}x_{1}y^{2}_{1}+\cdots+w_{n}x_{n}y^{2}_{n}\right)u_{xyy}+\frac{1}{6}\left(w_{1}y^{3}_{1}+\cdots+w_{n}y^{3}_{n}\right)u_{yyy}\end{aligned}

where the first- and second-order derivatives $u_x,u_y,u_{xx},u_{xy},u_{yy}$ are evaluated at the origin and the third-order derivatives $u_{xxx},u_{xxy},u_{xyy},u_{yyy}$ are evaluated at some point $\boldsymbol{\xi}$ inside the circle enclosing all the sample points. This expansion can easily be extended to higher dimensions. Note that the coefficients of all derivatives of $u$ (up to the $k$-th order) are simply given by the components of $\textbf{P}^T\mathbf{w}_A$. Using Lemma 1 and an intermediate result in Lemma 2, we have

\left(\textbf{P}^{T}\mathbf{w}_{A}\right)_{j}=\left.\Delta P_{j}\right|_{\mathbf{x}=\mathbf{0}}=\begin{cases}2&\text{if }P_{j}(\mathbf{x})=x^{2}\text{ or }P_{j}(\mathbf{x})=y^{2}\\ 0&\text{otherwise}\end{cases}

for $j=1,\ldots,l$, up to the highest degree in the polynomial basis. Therefore, if we include the polynomial basis up to $k$-th order, then $\mathbf{w}_A^T\mathbf{u}$ can be expressed as

\mathbf{w}_{A}^{T}\mathbf{u}=u_{xx}+u_{yy}+\sum_{s=0}^{k+1}\sum_{i=1}^{n}\frac{1}{(k+1)!}\binom{k+1}{s}w_{i}x_{i}^{s}y_{i}^{k+1-s}\,\frac{\partial^{k+1}u}{\partial x^{s}\partial y^{k+1-s}}\,.

Hence, $\left|\mathbf{w}_{A}^{T}\mathbf{u}-\Delta u(\mathbf{0})\right|$ is bounded above by

\frac{2^{k+1}n}{(k+1)!}\left[\max_{i=1,\ldots,n}|w_{i}|\right]\left[\max_{i=1,\ldots,n}\max(|x_{i}|,|y_{i}|)\right]^{k+1}\left[\max_{s=0,\ldots,k+1}\left|\frac{\partial^{k+1}u}{\partial x^{s}\partial y^{k+1-s}}\right|_{\mathbf{x}=\boldsymbol{\xi}}\right]\,.

In general, for higher dimensions, we apply Taylor's expansion of the multivariable function $u$ up to the $(k+1)$-th order and cancel all the lower-order terms not of the form $P_j(\mathbf{x})=x_p^2$. Finally, we obtain the error bound for $\left|\mathbf{w}_{A}^{T}\mathbf{u}-\Delta u(\mathbf{0})\right|$ given by

\frac{nD^{k+1}}{(k+1)!}\left[\max_{i=1,\ldots,n}|w_{i}|\right]\left[\max_{\begin{subarray}{c}i=1,\ldots,n\\ p=1,\ldots,D\end{subarray}}|(\textbf{x}_{i})_{p}|\right]^{k+1}\left[\max_{\sum\alpha_{p}=k+1,\,\alpha_{p}\geq 0}\left|\frac{\partial^{k+1}u}{\partial x_{1}^{\alpha_{1}}\ldots\partial x_{D}^{\alpha_{D}}}\right|_{\mathbf{x}=\boldsymbol{\xi}}\right]

with $(\textbf{x}_i)_p$ being the $p$-th component of the $i$-th sample point $\textbf{x}_i$. Now, based on Lemma 2, we have $\max_{i=1,\ldots,n}|w_i|=O(h^{-2})$. Moreover, since the sample points shrink to the center with speed $h$, we have $\left[\max_{i,p}|(\textbf{x}_i)_p|\right]^{k+1}=O(h^{k+1})$. Therefore, we obtain the estimate $\left|\mathbf{w}_A^T\mathbf{u}-\left.\Delta u(\mathbf{x})\right|_{\mathbf{x}=\mathbf{0}}\right|=O(h^{k-1})$. ∎

Note that this error bound depends explicitly on the number of data points used in the approximation (i.e., $n$). It might seem that more data would yield less accuracy. However, we are considering the behavior of the approximation for a fixed number of data points. The situation is more complicated when comparing the corresponding behavior for two sets with different numbers of data points. If more data points are collected in the reconstruction, one has to go further away from the center to search for more neighbors, making the approximation less local. Therefore, intuitively, it indeed yields less accuracy. However, if we can control the average radius of these data points from the center as we increase $n$, one may be able to derive a tighter bound for the summation $\sum_{i=1}^{n}w_i x_i^s y_i^{k+1-s}$ independent of $n$. It is, therefore, less conclusive whether the error bound will actually increase as we increase $n$.

4 Numerical Examples

In this section, we will design several numerical examples to demonstrate the effectiveness and performance of the proposed CLS-GSP method in various applications.

Figure 2: (Example 4.1) A random sampling of the function (a) $u(x)=\cos 4x$ and (b) $u(x)=1-\operatorname{sgn}(x)x$. The red circles represent the random sample points chosen in the reconstruction.
Figure 3: (Example 4.1) Reconstructions of (a) a smooth and (b) a nonsmooth function using our CLS-GSP method with different RBF kernels. The solid black lines show the exact function, and the solid red lines show the original least-squares reconstruction.

4.1 Local Reconstruction

This section demonstrates the performance of the local reconstruction in our proposed CLS-GSP method. We randomly generate sample points in $[-1,1]$ with the center at the origin. In the first example, we consider a smooth function $\cos 4x$, as shown in Figure 2(a), and also a non-smooth function $u(x)=1-\operatorname{sgn}(x)x$, which has a kink at the origin, as shown in Figure 2(b). To emphasize the importance of the RBF kernels, we compare our performance with a standard second-order polynomial least-squares approach, as shown in Figure 3. The solution from the standard polynomial least-squares method deviates noticeably from the exact function since the method tries to balance the error from all sampling locations. The result is especially unsatisfactory since some sample points are far from the center $x_0=0$. Nevertheless, our CLS-GSP method, using either the Gaussian or the multiquadric basis, can recover the function well, especially in the area near the center $x_0=0$, even if the samples are not localized.
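For readers who wish to reproduce a reconstruction of this type, the following self-contained sketch fits the constrained representation from Section 2.2 to one realization of the smooth test case; the sample count, ghost-point locations, and shape parameter are our own illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, 30)         # random sample points in [-1, 1]
u = np.cos(4.0 * xs)                    # smooth test function, u(0) = 1
c = 4.0                                 # shape parameter (illustrative choice)
xhat = np.linspace(-0.8, 0.8, 9)        # 9 ghost sample points (illustrative)

phi = lambda r: np.exp(-c * r**2)       # Gaussian kernel
# Shifted basis psi_i(x) = phi(|x - xhat_i|) - phi(|xhat_i|)
Psi = phi(np.abs(xs[:, None] - xhat[None, :])) - phi(np.abs(xhat))[None, :]
P = np.column_stack([xs, xs**2])        # linear + quadratic, no constant term
Phat = np.column_stack([xhat, xhat**2])

# Overdetermined system (Psi P; Phat^T 0)(lambda; mu) = (u - u(0); 0)
A = np.block([[Psi, P], [Phat.T, np.zeros((2, 2))]])
b = np.concatenate([u - 1.0, np.zeros(2)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
lam, mu = coef[:9], coef[9:]

# Evaluate the reconstruction; it passes through (0, u(0)) by construction
xq = np.linspace(-1.0, 1.0, 201)
uc = (1.0
      + (phi(np.abs(xq[:, None] - xhat[None, :])) - phi(np.abs(xhat))[None, :]) @ lam
      + np.column_stack([xq, xq**2]) @ mu)
```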

Figure 4: (Section 4.2) Error in approximating the Laplacian of (a) $u_1(x,y)=\exp(-x^2-y^2)$ and (b) $u_2(x,y)=3\cos x-4\sin y$ at the origin for our CLS-GSP method with the second-, third- and fourth-order polynomial bases.

4.2 Consistency in Approximating the Laplacian

In the second example, we demonstrate the consistency of our CLS-GSP method applied to the Laplace operator $\Delta$ in $\mathbb{R}^2$. We examine the Laplacian of two functions, $u_1(x,y)=\exp(-x^2-y^2)$ and $u_2(x,y)=3\cos x-4\sin y$, at $(x,y)=(0,0)$, where the exact Laplacian is given by $-4$ and $-3$, respectively. We generate $20$ random sample points around the center $(0,0)$ and take $8$ ghost sample points uniformly on a circle centered at the origin, given by $\left\{\left(r\cos\frac{k\pi}{4},r\sin\frac{k\pi}{4}\right)\right\}_{k=1}^{8}$, where $r$ is chosen to be the average distance of the sample points from the origin. We apply our CLS-GSP method using the Gaussian (GA) kernel and polynomial bases up to the second, third, and fourth order. The approximation to the Laplacian is calculated using expression (2).

For the first test function $u_1(\mathbf{x})=\exp(-x^2-y^2)$, we refine the scaling parameter $h$ as defined in Section 3.2 while fixing $ch^2=0.1$, where $c$ is the shape parameter. The log-error plot is shown in Figure 4(a). According to Theorem 1, we expect our computed Laplacian to have $(k-1)$-th order consistency when polynomial bases up to $k$-th order are included. In Figure 4(a), when $h$ is small, the slopes of the log-error plot are approximately $2$, $2$, and $4$ for $k=2,3$, and $4$, respectively. This indicates an extra order of consistency when up to the second- or fourth-order polynomial bases are included. All the third-order and fifth-order partial derivatives of $u_1(x,y)$ at the origin are zero, which leads to the vanishing of the leading-order error term. However, for the second test function $u_2(\mathbf{x})=3\cos x-4\sin y$, the consistency behavior differs from that of $u_1$. Figure 4(b) shows the log-error for different $h$ when $ch^2$ is fixed to be $1$. The slopes are approximately $1$, $2$, and $3$ for $k=2,3$, and $4$, respectively. This observation aligns with Theorem 1, as some high-order derivatives of $u_2(x,y)$ are non-zero.
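The refinement study above can be reproduced with a short script; the sketch below follows the same construction as the CLS-GSP code in Section 2.2 (Gaussian kernel, quadratic basis, $k=2$) and shrinks a fixed point set while holding $ch^2=0.1$, with the random point cloud and all parameter values being our own illustrative choices:

```python
import numpy as np

def lap_weights(pts, ghosts, c):
    """CLS-GSP Laplacian weights at the origin (Gaussian kernel, k = 2)."""
    phi = lambda r: np.exp(-c * r**2)
    lap_phi = lambda r: (4.0 * c**2 * r**2 - 4.0 * c) * np.exp(-c * r**2)
    poly = lambda X: np.column_stack([X[:, 0], X[:, 1], X[:, 0]**2,
                                      X[:, 0] * X[:, 1], X[:, 1]**2])
    rhat = np.linalg.norm(ghosts, axis=1)
    Psi = phi(np.linalg.norm(pts[:, None, :] - ghosts[None, :, :], axis=2)) \
        - phi(rhat)[None, :]
    A = np.block([[Psi.T, poly(ghosts)], [poly(pts).T, np.zeros((5, 5))]])
    rhs = np.concatenate([lap_phi(rhat), [0.0, 0.0, 2.0, 0.0, 2.0]])
    return (np.linalg.pinv(A) @ rhs)[:len(pts)]

rng = np.random.default_rng(2)
base = rng.uniform(-1.0, 1.0, (20, 2))                 # fixed point cloud
u = lambda X: np.exp(-X[:, 0]**2 - X[:, 1]**2)         # u_1, with u_1(0, 0) = 1
for h in [0.2, 0.1, 0.05, 0.025]:
    pts = h * base                                     # shrink towards the origin
    r = np.mean(np.linalg.norm(pts, axis=1))
    ang = 2.0 * np.pi * np.arange(1, 9) / 8.0
    ghosts = r * np.column_stack([np.cos(ang), np.sin(ang)])
    w = lap_weights(pts, ghosts, c=0.1 / h**2)         # keep c h^2 = 0.1 fixed
    print(h, abs(w @ (u(pts) - 1.0) - (-4.0)))         # error against Delta u = -4
```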

Figure 5: (Section 4.3) Error in the local reconstruction for (a) the RBF-FD method and (b) the CLS-GSP method using the IQ-RBF. The white crosses and the green circles indicate the locations of the sample points and ghost points, respectively. The red dot is the origin.
$c\bar{r}^2$    Error in RBF-FD     Rank    Error in CLS-GSP    Rank
$10^{-8}$       4.7959×10^{-3}      12      3.9414×10^{-3}      10
$10^{-7}$       4.7959×10^{-3}      12      3.9414×10^{-3}      10
$10^{-6}$       4.7959×10^{-3}      12      3.9414×10^{-3}      10
$10^{-5}$       4.6101×10^{-3}      12      2.1457×10^{-3}      10
$10^{-4}$       1.3323×10^{-4}      12      2.3067×10^{-5}      10
$10^{-3}$       1.5363×10^{-5}      17      4.6044×10^{-6}      12
$10^{-2}$       4.6426×10^{-5}      24      4.1861×10^{-5}      13
$10^{-1}$       2.5375×10^{-4}      26      5.1265×10^{-4}      13
$10^{0}$        3.5709×10^{-3}      26      1.8594×10^{-3}      13
$10^{1}$        4.3898×10^{-3}      26      3.0790×10^{-3}      13
$10^{2}$        6.5927×10^{-3}      26      3.5674×10^{-3}      13
$10^{3}$        5.0129×10^{-3}      26      3.6423×10^{-3}      13
$10^{4}$        4.8179×10^{-3}      26      3.6509×10^{-3}      13
$10^{5}$        4.7981×10^{-3}      26      3.6518×10^{-3}      13
Table 1: (Section 4.3) The error in the numerical Laplacian and the rank of $\left(\begin{smallmatrix}\boldsymbol{\Phi}&\textbf{P}\\ \textbf{P}^{T}&\textbf{0}\end{smallmatrix}\right)$ in the RBF-FD method, and the rank of $\left(\begin{smallmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\textbf{0}\end{smallmatrix}\right)$ in the CLS-GSP method, for different shape parameters.
Figure 6: (Section 4.3) The error in the numerical Laplacian versus the shape parameter using the RBF-FD and CLS-GSP methods with 4 common types of basis functions. (a) The RBF-FD with $-8\leq\log(c\bar{r}^2)\leq 5$, (b) the CLS-GSP with $-8\leq\log(c\bar{r}^2)\leq 5$, (c) the RBF-FD with $-4\leq\log(c\bar{r}^2)\leq -1$, and (d) the CLS-GSP with $-4\leq\log(c\bar{r}^2)\leq -1$.
Figure 7: (Section 4.3) The rank of the corresponding matrices in (a) the RBF-FD and (b) the CLS-GSP method versus the shape parameter with 4 common types of basis functions.

4.3 Sensitivity of the Shape Parameter

In this section, we test the sensitivity of the shape parameter in both the RBF-FD and CLS-GSP methods. The condition for a certain matrix to be of full rank, as required in Lemma 1 and Theorem 1, depends on the choice of the shape parameter in the basis function. In this example, we generate $20$ sample points in $[-1,1]^2$ and examine the performance of both the RBF-FD method and the CLS-GSP method with different types of basis functions and different values of the shape parameter $c$. The underlying function is given by $u(x,y)=\exp(-0.25x)\cos(0.2y)$. The exact Laplacian at the origin is $0.0225$. We include the polynomial basis up to the second order in both methods. The ghost points in our proposed CLS-GSP method are chosen to be $\left\{\left(\bar{r}\cos\frac{k\pi}{4},\bar{r}\sin\frac{k\pi}{4}\right)\right\}_{k=1}^{8}$, where $\bar{r}=0.7682$ is the average distance of the $20$ sample points from the origin.

Figure 5 illustrates the error in the approximation obtained by both the conventional RBF approach and our CLS-GSP approach. Remarkably, with only $8$ basis functions in our CLS-GSP approach, we achieve an approximation of comparable accuracy to the typical RBF approach, which requires $20$ basis elements. To test the sensitivity, we maintain the same set of sampling points while varying the parameter $c$. Given that all four commonly used RBFs listed are functions of $cr^2$, it is natural to select $c$ such that $c\bar{r}^2$ remains constant. In our test case, we fix $c$ such that $c\bar{r}^2=10^k$, where $-8\leq k\leq 5$. To determine the optimal choice of the shape parameter, we analyze the error in the numerical Laplacian at the origin and monitor the ranks of the matrices

\begin{pmatrix}\boldsymbol{\Phi}&\textbf{P}\\ \textbf{P}^{T}&\textbf{0}\end{pmatrix}\ \mbox{ and }\ \begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\textbf{0}\end{pmatrix}

in the RBF-FD method and the CLS-GSP method, respectively. We compute the rank numerically by counting the number of singular values greater than the tolerance $\epsilon=10^{-10}$. The results for the IQ-RBF case are shown in Table 1. Initially, we observe that both methods yield large errors when $c$ is extremely large. Conversely, if the shape parameter $c$ is extremely small, both methods encounter a rank-deficiency problem, resulting in a large error in the numerical Laplacian.
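The numerical rank used throughout this test is simply a thresholded singular value count; a one-function sketch of this convention (the function name is our own):

```python
import numpy as np

def numerical_rank(A, tol=1e-10):
    """Count singular values above the tolerance used in the text."""
    return int(np.sum(np.linalg.svd(A, compute_uv=False) > tol))
```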

We repeat the same experiments to determine a suitable range of the shape parameter $c$ but replace the IQ-type basis with the MQ-, GA-, and IMQ-types, respectively. Figure 6(a-b) illustrates the errors in the numerical Laplacian as we vary the quantity $\log(c\bar{r}^2)$. Our proposed CLS-GSP method generally approximates the Laplacian more accurately than the RBF-FD method. Further zooming in on the region $c\bar{r}^2\in[10^{-4},10^{-1}]$, as shown in Figure 6(c-d), reveals that the fluctuation of the error in our proposed CLS-GSP method is smaller than that obtained by the RBF-FD method. This indicates that the CLS-GSP method is less sensitive to small changes in the shape parameter $c$.

Figure 7 illustrates the rank of the corresponding matrices in both methods versus the choice of the shape parameter $c$. Across all types of basis functions, the RBF-FD method exhibits a rank-deficiency problem (with full rank being $26$) when $\log_{10}(c\bar{r}^2)$ drops below roughly $-1.12$. In contrast, our proposed CLS-GSP method (specifically the GA-type) encounters a rank-deficiency problem (with full rank being $13$) only when $\log_{10}(c\bar{r}^2)$ drops below approximately $-2.5$. As discussed in the analysis in Section 3, achieving a full-rank matrix is crucial for obtaining the consistency result. Hence, our proposed CLS-GSP method offers a broader range of the shape parameter $c$ that ensures the necessary consistency in the numerical approximation of the required derivative.

Regarding the behavior of the GA-type basis functions, it is evident that the rank of the matrix $\begin{pmatrix}\boldsymbol{\Psi}&\textbf{P}\\ \hat{\textbf{P}}^{T}&\textbf{0}\end{pmatrix}$ decreases as we increase the parameter $c\bar{r}^2$ and, consequently, the shape parameter $c$. This phenomenon arises because the support of the Gaussian basis functions becomes excessively narrow for larger values of $c$. Specifically, when $\log_{10}(c\bar{r}^2)$ is around $2$, the corresponding shape parameter $c$ is approximately $170$. This indicates that the standard deviation of each Gaussian basis function is roughly $0.05$. Comparing this scale to the domain size, which is $O(1)$, it becomes clear that the GA-type basis functions struggle to provide an accurate approximation to the underlying function.

4.4 Two-Dimensional Poisson Equations

In this example, we apply the CLS-GSP method to solve the Poisson equation on various 2D domains:

\left\{\begin{array}{ll}\Delta u=f(x,y), & \text{ in }\Omega\\ u=g(x,y), & \text{ on }\partial\Omega\,.\end{array}\right.

To mitigate one-sided stencils near the boundary $\partial\Omega$, we incorporate a layer of boundary nodes outside the domain $\Omega$. In our approach, we presume that the function $g(x,y)$ is specified not only on the boundary $\partial\Omega$ but also analytically defined in the exterior, ensuring precise values when necessary. If $g$ is solely provided by an explicit expression on $\partial\Omega$, we determine the function value on these boundary nodes through a normal extension, ensuring $\mathbf{n}\cdot\nabla g=0$, where $\mathbf{n}$ represents the outward normal along $\partial\Omega$. This is achieved by projecting the boundary nodes onto the boundary. Furthermore, if data points are available on the boundary, the value assigned to a boundary node can be determined through additional interpolation on $\partial\Omega$.
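To make the normal extension concrete, consider a disc of radius $R$, where the closest-point projection onto $\partial\Omega$ is available in closed form. The following Python sketch (our own illustration; the function names are hypothetical) assigns values to the exterior boundary nodes:

```python
import numpy as np

def extend_g_to_exterior(nodes, g_on_boundary, R=2.0):
    """Normal extension for a disc of radius R: project each exterior
    node onto the circle x^2 + y^2 = R^2 and copy the boundary value
    there, so that n . grad(g) = 0 along the outward normal."""
    nodes = np.asarray(nodes, dtype=float)
    proj = R * nodes / np.linalg.norm(nodes, axis=1, keepdims=True)
    return np.array([g_on_boundary(p) for p in proj])
```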

Let $N$ denote the number of meshless sample points in the computational domain, and $N_{b}$ the number of boundary nodes outside the domain. Consequently, we obtain a DM of the Laplacian operator given by $L_{g}=\begin{pmatrix}L&L_{1}\end{pmatrix}$, where $L$ is an $N\times N$ matrix associated solely with the sample points within the computational domain, and $L_{1}$ is an $N\times N_{b}$ matrix containing the coefficients associated with the boundary nodes in $\Omega_{b}$. This operator acts on all $N+N_{b}$ points and returns the numerical Laplacian at the $N$ sample points within the computational domain. Set

\mathbf{f} = \left[f(x_{1},y_{1}),f(x_{2},y_{2}),\ldots,f(x_{N},y_{N})\right]^{T}
\mathbf{g} = \left[g(\hat{x}_{1},\hat{y}_{1}),g(\hat{x}_{2},\hat{y}_{2}),\ldots,g(\hat{x}_{N_{b}},\hat{y}_{N_{b}})\right]^{T}\,,

where $(x_{i},y_{i})$, $i=1,\ldots,N$, are sample points in $\Omega$ and $(\hat{x}_{i},\hat{y}_{i})$, $i=1,\ldots,N_{b}$, are boundary nodes in $\Omega_{b}$. Therefore, the solution $u$ at the interior sample points, denoted by $\mathbf{u}$, can be computed using $L\mathbf{u}+L_{1}\mathbf{g}=\mathbf{f}$, which implies that we solve $L\mathbf{u}=\mathbf{f}-L_{1}\mathbf{g}$.
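In code, once the global differential matrix $L_{g}=\begin{pmatrix}L&L_{1}\end{pmatrix}$ has been assembled, the solve reduces to a single sparse linear system. A minimal sketch (assuming $L_{g}$ is given; the assembly of the CLS-GSP weights is not shown) could read:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson(Lg, f_vals, g_vals, N):
    """Split Lg = [L | L1] into its interior and boundary blocks and
    solve L u = f - L1 g for the interior values u."""
    Lg = sp.csr_matrix(Lg)
    L, L1 = Lg[:, :N], Lg[:, N:]
    return spla.spsolve(L.tocsc(), f_vals - L1 @ g_vals)
```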

Refer to caption
Figure 8: (Example 4.4 on a disc) (a) The sample points. (b) The allocation of 49 ghost sample points. (c) The numerical solution to the Poisson equation on a disc.
Refer to caption
Figure 9: (Example 4.4: the numerical solution to the Poisson equation on an irregular domain) (a) Uniform sample points and (b) random sample points.

In the first domain, we consider a simple disc $\Omega=\{(x,y):x^{2}+y^{2}\leq 2^{2}\}$. Both functions $f(x,y)$ and $g(x,y)$ are determined from the exact solution $u(x,y)=1+\sin(4x)+\cos(3x)+\sin(2y)$. We select $N=3159$ random points from $x^{2}+y^{2}\leq 2^{2}$ and $N_{b}=1841$ random boundary nodes from $2^{2}<x^{2}+y^{2}\leq 2.5^{2}$ in the boundary domain $\Omega_{b}$. These points are plotted in Figure 8(a). Some sample points are located very close to each other, which could cause instability problems in a straightforward implementation of the RBF-FD method based on the $K$-nearest neighbors. In this example, we choose $n=60$ (i.e., we use 60 local neighbors for each sample point $\mathbf{x}_{0}$) to generate the weight vector of the Laplacian matrix at $\mathbf{x}_{0}$. We choose 49 ghost basis samples uniformly within a distance of $r=\frac{1}{2}\max_{i}d(\mathbf{x}_{0},\mathbf{x}_{i})$, as shown in Figure 8(b), and sketched in code below. The numerical solution is presented in Figure 8(c), and the $L_{\infty}$ error in the solution is around $4\times 10^{-3}$.
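The text does not prescribe a unique layout for the 49 ghost sample points; one plausible choice, sketched below in Python, is a $7\times 7$ Cartesian grid scaled so that every ghost point lies within the distance $r$ of the center $\mathbf{x}_{0}$ (treat the exact layout as an assumption on our part):

```python
import numpy as np

def ghost_points(x0, r, m=7):
    """Place m*m ghost sample points uniformly around the center x0,
    on a Cartesian grid scaled to fit inside the disc of radius r."""
    t = np.linspace(-r / np.sqrt(2), r / np.sqrt(2), m)
    gx, gy = np.meshgrid(t, t)
    return np.column_stack([gx.ravel(), gy.ravel()]) + np.asarray(x0)
```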

In the second test domain, we consider an irregular flower shape given by $\Omega=\{(\theta,r):r\leq r_{\max}(\theta)=1.4+0.4\sin(5\theta)+0.4\sin(2\theta),\ \theta\in[0,2\pi)\}$. The functions $f(x,y)$ and $g(x,y)$ are again determined from the same exact solution. In this example, we test two different sets of sample points. In the first sampling, we select 2677 samples uniformly inside the computational domain with 2066 boundary nodes chosen in $\Omega_{b}=\{(\theta,r):r_{\max}(\theta)<r\leq r_{\max}(\theta)+0.5,\ \theta\in[0,2\pi)\}$. The second case uses 2782 random samples in $\Omega$ with 2782 boundary nodes in $\Omega_{b}$. Figure 9 shows our computed solutions in this irregular domain using the uniform samples (in (a)) and the random samples (in (b)). The $L_{\infty}$ errors of these solutions are around $3\times 10^{-4}$ and $2\times 10^{-3}$ for the uniform and the random samplings, respectively.

Refer to caption
Figure 10: (Example 4.5) (a) The Eulerian approach to the Laplace-Beltrami operator. The interface is plotted as a black solid line. Between the two blue dashed lines is the tube area extended from the interface. The red lines show the normal directions at some points on the interface. Along the normal direction, we impose $\mathbf{n}\cdot\nabla f=0$. (b) The Lagrangian approach to the Laplace-Beltrami operator. The local reconstruction involves the construction of the tangent plane using a normal $\mathbf{n}$ estimated from the data and a projection step of the sampled points in a local neighborhood.

4.5 The Laplace-Beltrami Operator

There are several methods for approximating the Laplace-Beltrami (LB) operator on manifolds. One Eulerian approach involves extending a function defined on the $d$-dimensional manifold to the embedding space $\mathbb{R}^{d+1}$ with the constraint $\mathbf{n}\cdot\nabla f=0$, where $\mathbf{n}$ denotes the normal to the manifold $S$, as illustrated in Figure 10(a). Various methods have been developed within this Eulerian framework. For instance, an earlier approach by Greer [17] utilized the level set method [28] and the projection method. The closest point method (CPM) [26, 30, 25] has also been applied to this problem. Following the basic embedding concept outlined in [35] for surface eikonal equations, we have devised a similar straightforward embedding approach for the LB eigenproblem, as detailed in [21]. Additionally, methods like RBF-OGr [29] and RBF-FD are utilized within the RBF community. Another approach is the Lagrangian framework, which explicitly deals with sample points on the manifold. In this approach, the neighboring points of each sample point are projected onto the local tangent plane of the manifold. Subsequently, these methods apply the standard Laplacian on the tangent plane to compute the Laplace-Beltrami operator, incorporating local geometric information. A graphical illustration is provided in Figure 10(b). This technique finds widespread use in point cloud data analysis, as seen in the methods proposed in [1], as well as in some local mesh methods developed in [24, 20]. Other methods, such as the grid-based particle method (GBPM) and the cell-based particle method (CBPM) [23, 22, 19], and a virtual grid method [33], also employ similar principles.
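A common way to realize this Lagrangian projection step is to estimate the normal by a local principal component analysis and then drop the normal component of each neighbor. The sketch below is one such realization (an assumption on our part; the cited methods differ in their details):

```python
import numpy as np

def project_to_tangent_plane(x0, neighbors):
    """Estimate the normal at x0 as the direction of least variance of
    the centered neighbors (local PCA), then remove the normal
    component so the points live on the tangent plane at x0."""
    X = np.asarray(neighbors, dtype=float) - np.asarray(x0, dtype=float)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    n = Vt[-1]                      # smallest singular direction ~ normal
    return X - np.outer(X @ n, n)   # tangent-plane coordinates
```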

In this example, we focus on approximating the Laplace-Beltrami operator on a two-dimensional surface embedded in $\mathbb{R}^{3}$ and parameterized by $(s_{1},s_{2})$. The Laplace-Beltrami operator acting on a function $f:S\rightarrow\mathbb{R}$ is defined as $\Delta_{S}f=\sum_{\alpha,\beta=1}^{2}\frac{1}{\sqrt{g}}\frac{\partial}{\partial s_{\alpha}}\left(\sqrt{g}\,g^{\alpha\beta}\frac{\partial f}{\partial s_{\beta}}\right)$. Here, the metric $[g_{\alpha\beta}]$ is given by $g_{\alpha\beta}=\frac{\partial\mathbf{X}}{\partial s_{\alpha}}\cdot\frac{\partial\mathbf{X}}{\partial s_{\beta}}$, and its determinant is $g=g_{11}g_{22}-g_{12}g_{21}$. The matrix $[g^{\alpha\beta}]$ represents the inverse of $[g_{\alpha\beta}]$. It is important to note that the Laplace-Beltrami operator is geometrically intrinsic, meaning it is independent of the parametrization. Thus, computations based on local parametrizations remain valid and retain the geometric intrinsic properties.
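As a sanity check of this formula, one can verify symbolically that, on the unit sphere, the first spherical harmonic satisfies $\Delta_{S}\cos\theta=-2\cos\theta$, matching the first eigenvalue $\lambda_{1}=2$ used later. A short SymPy sketch (our own verification, not part of the proposed method):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
X = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])

# Metric g_{ab} = dX/ds_a . dX/ds_b for this parametrization
Xt, Xp = X.diff(th), X.diff(ph)
G = sp.Matrix([[Xt.dot(Xt), Xt.dot(Xp)],
               [Xp.dot(Xt), Xp.dot(Xp)]]).applyfunc(sp.simplify)
Ginv = G.inv()
sqrt_g = sp.sin(th)  # sqrt(det G) = sin(theta) on 0 < theta < pi

def laplace_beltrami(f):
    s = (th, ph)
    return sp.simplify(sum(
        sp.diff(sqrt_g * Ginv[a, b] * sp.diff(f, s[b]), s[a]) / sqrt_g
        for a in range(2) for b in range(2)))

print(laplace_beltrami(sp.cos(th)))  # prints -2*cos(theta)
```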

Refer to caption
Figure 11: (Example 4.5.1) (a) An illustration of the GBPM/CPM representation of a circle. The underlying mesh is plotted using the green lines. The activated grid points are plotted using blue circles, and their projections are shown as red squares. These red squares are the sample points of our manifold. (b) The GBPM sampling of the unit sphere. (c) The sparsity pattern of the discretized LB operator after applying the sparse reverse Cuthill-McKee ordering.

4.5.1 Eigenvalues of the LB Operator

In this example, we have employed the GBPM or CPM to sample the sphere. This involves selecting the points on the surface closest to the underlying mesh points near the interface. The illustration in Figure 11(a) plots a typical point cloud sampling of the interface/surface using the GBPM. Here, the underlying mesh is shown with solid lines, while all activated grid points near the interface (plotted as a circle) are represented by blue circles. The associated closest points on the interface are marked with red squares, and solid line segments show the correspondence between each pair. It is worth noting that the resulting point cloud from the GBPM may exhibit non-uniformity, as the closest points associated with two mesh points can be very close or even identical. A crucial parameter in generating the GBPM/CPM point cloud is the width of the computational band, which we have set to $1.5\Delta x$ in our numerical example. This implies that for each grid point within a distance of less than $1.5\Delta x$ from the interface, we compute its projection onto the manifold. These projected points serve as sample points for representing the interface.
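For the unit sphere, this GBPM/CPM sampling can be reproduced in a few lines. The sketch below (in Python, with an assumed grid extent of $[-1.5,1.5]^{3}$) projects every activated grid point onto its closest point on the sphere; the exact number of samples produced depends on the grid alignment:

```python
import numpy as np

def gbpm_sample_sphere(dx, band=1.5):
    """Collect all grid points within band*dx of the unit sphere and
    project them radially onto the surface (their closest points)."""
    t = np.arange(-1.5, 1.5 + dx, dx)
    gx, gy, gz = np.meshgrid(t, t, t, indexing='ij')
    pts = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
    dist = np.abs(np.linalg.norm(pts, axis=1) - 1.0)  # unsigned distance
    active = pts[dist < band * dx]
    return active / np.linalg.norm(active, axis=1, keepdims=True)
```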

Refer to caption
Figure 12: (Example 4.5.1) Eigenvalues of the discretized Laplace-Beltrami operator $-\Delta_{S}$ obtained using different approaches, plotted in the complex plane. There are, in total, $N=3795$ nodes in the sample. (a) The least-squares method using a second-order polynomial, as in [22]. (b) The proposed CLS-GSP method.

Here, we compute the eigenvalues of the Laplace-Beltrami operator on the unit sphere represented by the GBPM sampling. The sample points are shown in Figure 11(b).

First, we construct the DM of the operator $-\Delta_{S}$ using our proposed CLS-GSP method, denoted as $L$. Then, we compute all the eigenvalues $\lambda$ and eigenvectors $\nu$ of the matrix $L$ using the MATLAB function eig. These eigenvalues and eigenvectors serve as a discrete approximation to the eigenvalues and eigenfunctions of the continuous eigenvalue problem $-\Delta_{S}\nu=\lambda\nu$. After obtaining the computed eigenvalues, we plot them on the complex plane. Figure 12 shows the eigenvalues of $L$ from (a) the least-squares method as in the GBPM [22], and (b) our proposed CLS-GSP method.

In many applications, it is important that the real parts of all eigenvalues of the Laplace-Beltrami operator $-\Delta_{S}$ are non-negative. For example, when solving the diffusion equation $\frac{\partial u}{\partial t}=\Delta_{S}u$ on the manifold, an eigenvalue of $-\Delta_{S}$ with a negative real part leads to instability in the numerical solution. In the GBPM sampling, two sample points can collide at the same place on the surface, so their projections onto the local tangent plane can be arbitrarily close to each other. Nevertheless, our proposed CLS-GSP method remains robust in this example: all eigenvalues from the CLS-GSP discretization lie in the right half of the complex plane, showing that their real parts are non-negative.
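This non-negativity can be checked directly from the assembled matrix. A short diagnostic sketch (assuming the DM $L$ of $-\Delta_{S}$ is available as a dense array):

```python
import numpy as np

def spectrum_diagnostic(L):
    """Compute all eigenvalues of the DM and report how far the
    spectrum reaches into the left half of the complex plane."""
    lam = np.linalg.eigvals(np.asarray(L))
    print("most negative real part:", lam.real.min())
    return lam
```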

Refer to caption
Figure 13: (Example 4.5.2) First row: the $E_{2,m}$ and $E_{\infty,m}$ of the first seven eigenvalues for different discretizations. The dashed lines are the reference curves for second-order convergence. Second row: the $E_{2,m}$ and $E_{\infty,m}$ of the first seven eigenvalues for different sample sizes.
$\Delta x$ | 0.2 | 0.1 | 0.05 | 0.025
Samples | 984 | 3795 | 15127 | 60322
$\lambda_{1}=2$ | $1.47\times 10^{-2}$ | $3.89\times 10^{-3}$ | $9.53\times 10^{-4}$ | $2.38\times 10^{-4}$
$\lambda_{2}=6$ | $9.64\times 10^{-3}$ | $2.50\times 10^{-3}$ | $2.10\times 10^{-4}$ | $2.10\times 10^{-5}$
$\lambda_{3}=12$ | $3.01\times 10^{-2}$ | $6.23\times 10^{-3}$ | $1.42\times 10^{-3}$ | $4.33\times 10^{-4}$
$\lambda_{4}=20$ | $4.66\times 10^{-2}$ | $1.86\times 10^{-2}$ | $3.45\times 10^{-3}$ | $9.35\times 10^{-4}$
$\lambda_{5}=30$ | $8.80\times 10^{-2}$ | $2.42\times 10^{-2}$ | $5.62\times 10^{-3}$ | $1.47\times 10^{-3}$
$\lambda_{6}=42$ | $1.22\times 10^{-1}$ | $3.39\times 10^{-2}$ | $8.80\times 10^{-3}$ | $2.42\times 10^{-3}$
$\lambda_{7}=56$ | $1.92\times 10^{-1}$ | $4.67\times 10^{-2}$ | $1.18\times 10^{-2}$ | $3.02\times 10^{-3}$
Table 2: (Example 4.5.2) The $E_{2,m}$ of various eigenvalues of the Laplace-Beltrami operator on the unit sphere sampled by the GBPM at different resolutions.
$\Delta x$ | 0.2 | 0.1 | 0.05 | 0.025
Samples | 984 | 3795 | 15127 | 60322
$\lambda_{1}=2$ | $1.68\times 10^{-2}$ | $4.29\times 10^{-3}$ | $9.92\times 10^{-4}$ | $2.43\times 10^{-4}$
$\lambda_{2}=6$ | $1.52\times 10^{-2}$ | $3.64\times 10^{-3}$ | $2.84\times 10^{-4}$ | $4.15\times 10^{-5}$
$\lambda_{3}=12$ | $6.25\times 10^{-2}$ | $8.68\times 10^{-3}$ | $1.81\times 10^{-3}$ | $6.44\times 10^{-4}$
$\lambda_{4}=20$ | $6.72\times 10^{-2}$ | $3.45\times 10^{-2}$ | $5.07\times 10^{-3}$ | $1.57\times 10^{-3}$
$\lambda_{5}=30$ | $1.15\times 10^{-1}$ | $3.33\times 10^{-2}$ | $7.05\times 10^{-3}$ | $1.95\times 10^{-3}$
$\lambda_{6}=42$ | $1.61\times 10^{-1}$ | $4.26\times 10^{-2}$ | $1.32\times 10^{-3}$ | $4.16\times 10^{-3}$
$\lambda_{7}=56$ | $2.88\times 10^{-1}$ | $5.99\times 10^{-2}$ | $1.41\times 10^{-2}$ | $3.86\times 10^{-3}$
Table 3: (Example 4.5.2) The $E_{\infty,m}$ of various eigenvalues of the Laplace-Beltrami operator on the unit sphere sampled by the GBPM at different resolutions.

4.5.2 Convergence of the Eigenvalues of the Laplace-Beltrami Operator

In this part, we test the convergence of the computed eigenvalues of the LB operator on the unit sphere. On the unit sphere, the exact value of the $m$-th eigenvalue is known analytically: $\lambda_{m}=m(m+1)$ with multiplicity $2m+1$. We use the following normalized $L_{2}$ and $L_{\infty}$ errors to measure our algorithm's performance: $E_{2,m}=\sqrt{\frac{1}{2m+1}\sum_{i}\left(\frac{\lambda_{m,i}-\lambda_{m}}{\lambda_{m}}\right)^{2}}$ and $E_{\infty,m}=\max_{i}\left|\frac{\lambda_{m,i}-\lambda_{m}}{\lambda_{m}}\right|$. Figure 13 shows that the convergence rate of our algorithm is approximately second order in both error measures. It also shows that the error decreases as the GBPM sample size increases. Tables 2 and 3 detail the errors measured at different discretizations.
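For completeness, the two error measures can be evaluated by sorting the computed spectrum into clusters of size $2m+1$; with the zero eigenvalue of the constant mode included, the clusters $0,\dots,m-1$ contain $m^{2}$ eigenvalues in total (this bookkeeping is our own assumption). A sketch:

```python
import numpy as np

def eigenvalue_errors(lam_computed, m):
    """E_{2,m} and E_{inf,m} for the m-th eigenvalue lambda_m = m(m+1)
    (multiplicity 2m+1) of -Delta_S on the unit sphere."""
    lam_exact = m * (m + 1)
    lam_sorted = np.sort(np.asarray(lam_computed).real)
    cluster = lam_sorted[m*m : m*m + 2*m + 1]   # skip clusters 0..m-1
    rel = (cluster - lam_exact) / lam_exact
    return np.sqrt(np.mean(rel**2)), np.max(np.abs(rel))
```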

Refer to caption
Figure 14: (Example 4.5.3) (a) The 1st, 2nd, 3rd and 6th eigenfunctions of the Stanford bunny. (b) The 1st, 2nd, 6th and 16th eigenfunctions of the cow.

4.5.3 Eigenfunctions of the LB Operator on Manifolds

Our method's ability to handle manifolds represented by various samplings is a significant advantage. Figure 14 illustrates the first few eigenfunctions of two point-cloud datasets obtained from the Stanford 3D scanning repository and the public domain, showcasing the applicability of our approach across different datasets.

5 Conclusion

This paper introduced the constrained least-squares ghost sample points (CLS-GSP) method for approximating differential operators on domains sampled by a random set of sample points. Unlike traditional RBF interpolation methods, we formulated the problem as a least-squares one, offering more flexibility in allocating basis locations, especially when sample points are close or overlap. Additionally, we enhanced the approach presented in previous work by enforcing the local reconstruction to pass through the center data point, improving the accuracy and diagonal dominance of the resulting DM.

Given the challenge of designing optimal sample locations, we addressed the issue by introducing the concept of consistency and analyzing the behavior of the pointwise consistency error as sample locations approach the center. By incorporating polynomial basis functions up to the $k$-th order into the CLS-GSP basis, we demonstrated that the method estimates the Laplacian of a function consistently with order $k-1$.

Numerical experiments showcased the accuracy and stability of the proposed CLS-GSP method. Notably, the discretized Laplace-Beltrami operator obtained using CLS-GSP has eigenvalues whose real parts are non-negative, indicating stability for solving the diffusion equation on a manifold. Our approach presents a promising framework for robust numerical algorithms in manifold-based applications.

Acknowledgment

The work of Leung was supported in part by the Hong Kong RGC grant 16302223.

References

  • [1] M. Belkin, J. Sun, and Y. Wang. Constructing Laplace operator from point clouds in $\mathbb{R}^{d}$. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1031–1040. Society for Industrial and Applied Mathematics, 2009.
  • [2] T. Belytschko, Y. Krongauz, D. Organ, M. Fleming, and P. Krysl. Meshless methods: An overview and recent developments. Comput. Meth. Appl. Mech. Engrg., 139:3–47, 1996.
  • [3] J.J. Benito, F. Urena, and L. Gavete. Influence of several factors in the generalized finite difference method. Applied Mathematical Modelling, 25(12):1039–1053, 2001.
  • [4] J.J. Benito, F. Urena, and L. Gavete. Solving parabolic and hyperbolic equations by the generalized finite difference method. Journal of Computational and Applied Mathematics, 209(2):208–233, 2007.
  • [5] M.D. Buhmann. Radial Basis Functions. Cambridge University Press, 2003.
  • [6] M.D. Buhmann and N. Dyn. Spectral convergence of multiquadric interpolation. Proceedings of the Edinburgh Mathematical Society, 36(2):319–333, 1993.
  • [7] S. G. Chen, M. H. Chi, and J. Y. Wu. High-Order Algorithms for Laplace–Beltrami Operators and Geometric Invariants over Curved Surfaces. J. Sci. Comput., 65:839–865, 2015.
  • [8] S.-G. Chen and J.-Y. Wu. Discrete conservation laws on curved surfaces. SIAM J. Sci. Comput., 35(2):A719–A739, 2013.
  • [9] S.-G. Chen and J.-Y. Wu. Discrete conservation laws on evolving surfaces. SIAM J. Sci. Comput., 38(3):A1725–A1742, 2016.
  • [10] G.E. Fasshauer. Meshfree approximation methods with MATLAB. World Scientific, 2007.
  • [11] N. Flyer and E. Lehto. Rotational transport on a sphere: Local node refinement with radial basis functions. J. Comp. Phys., 229:1954–1969, 2010.
  • [12] B. Fornberg. Generation of finite difference formulas on arbitrarily spaced grids. Mathematics of computation, 51(184):699–706, 1988.
  • [13] B. Fornberg. A practical guide to pseudospectral methods. Cambridge University Press, 1996.
  • [14] B. Fornberg. Classroom note: Calculation of weights in finite difference formulas. SIAM review, 40(3):685–691, 1998.
  • [15] B. Fornberg and N. Flyer. Accuracy of radial basis function interpolation and derivative approximations on 1-D infinite grids. Advances in Computational Mathematics, 23(1-2):5–20, 2005.
  • [16] B. Fornberg, N. Flyer, and J.M. Russell. Comparisons between pseudospectral and radial basis function derivative approximations. IMA J. Numer. Anal., pages 1–28, 2008.
  • [17] J.B. Greer. An improvement of a recent Eulerian method for solving PDEs on general geometries. J. Sci. Comp., 29(3):321–352, 2005.
  • [18] R. Hardy. Multiquadric equations of topography and other irregular surfaces. Journal of Geophysical Research, 76:1905–1915, 1971.
  • [19] S. Hon, S. Leung, and H.-K. Zhao. A cell based particle method for modeling dynamic interfaces. J. Comp. Phys., 272:279–306, 2014.
  • [20] R. Lai, J. Liang, and H. Zhao. A local mesh method for solving PDEs on point clouds. Inverse Problems and Imaging, 7(3):737–755, 2013.
  • [21] Y.K. Lee and S. Leung. A Simple Embedding Method for the Laplace-Beltrami Eigenvalue Problem on Implicit Surfaces. Comm. App. Math. and Comp., 2023.
  • [22] S. Leung, J. Lowengrub, and H.K. Zhao. A grid based particle method for high order geometrical motions and local inextensible flows. J. Comput. Phys., 230:2540–2561, 2011.
  • [23] S. Leung and H.K. Zhao. A grid based particle method for moving interface problems. J. Comput. Phys., 228:2993–3024, 2009.
  • [24] J. Liang, R. Lai, T.W. Wong, and H. Zhao. Geometric understanding of point clouds using Laplace-Beltrami operator. Computer Vision and Pattern Recognition (CVPR), pages 214–221, 2012.
  • [25] C.B. Macdonald, J. Brandman, and S. Ruuth. Solving eigenvalue problems on curved surfaces using the closest point method. J. Comp. Phys., 230:7944–7956, 2011.
  • [26] C.B. Macdonald and S.J. Ruuth. Level set equations on surfaces via the closest point method. J. Sci. Comput., 35:219–240, 2008.
  • [27] V.P. Nguyen, T. Rabczuk, S. Bordas, and M. Duflot. Meshless methods: A review and computer implementation aspects. Mathematics and Computers in Simulation, 79:763–813, 2008.
  • [28] S. J. Osher and J. A. Sethian. Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys., 79:12–49, 1988.
  • [29] C. Piret. The orthogonal gradients method: A radial basis functions method for solving partial differential equations on arbitrary surfaces. J. Comput. Phys., 231:4662–4675, 2012.
  • [30] S.J. Ruuth and B. Merriman. A simple embedding method for solving partial differential equations on surfaces. J. Comput. Phys., 227:1943–1961, 2008.
  • [31] C. Shu, H. Ding, and K. S. Yeo. Local radial basis function-based differential quadrature method and its application to solve two-dimensional incompressible Navier-Stokes equations. Comput. Meth. Appl. Mech. Engrg., 192:941–954, 2003.
  • [32] J. G. Wang and G. R. Liu. A point interpolation meshless method based on radial basis functions. Int. J. Num. Meth. Eng., 54:1623–1648, 2002.
  • [33] M. Wang, S. Leung, and H. Zhao. Modified Virtual Grid Difference (MVGD) for Discretizing the Laplace-Beltrami Operator on Point Clouds. SIAM J. Sci. Comput., 40(1):A1–A21, 2018.
  • [34] H. Wendland. Scattered data approximation. Cambridge University Press, 2005.
  • [35] T. Wong and S. Leung. A fast sweeping method for eikonal equations on implicit surfaces. J. Sci. Comput., 67:837–859, 2016.
  • [36] G. B. Wright. Radial basis function interpolation: Numerical and analytical developments. Ph.D. thesis, University of Colorado, Boulder, 2003.
  • [37] J.-Y. Wu, M.-H. Chi, and S.-G. Chen. A local tangential lifting differential method for triangular meshes. Mathematics and Computers in Simulation, 80:2386–2402, 2010.
  • [38] N. Ying. Numerical Methods for Partial Differential Equations from Interface Problems. HKUST PhD Thesis, 2017.