
Numerical reconstruction for 3D nonlinear SAR imaging via a version of the convexification method

Vo Anh Khoa Department of Mathematics, Florida A&M University, Tallahassee, FL 32307, USA [email protected] Michael Victor Klibanov Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA [email protected] William Grayson Powell Department of Mathematics, University of Wisconsin Madison, Madison, WI 53706, USA [email protected]  and  Loc Hoang Nguyen Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA [email protected]
Abstract.

This work extends the applicability of our recent convexification-based algorithm for constructing images of the dielectric constant of buried or occluded targets. We are oriented toward the detection of explosive-like targets, such as antipersonnel land mines and improvised explosive devices, in non-invasive inspections of buildings. In our previous work, the method uses multiple source locations running along a line of sources to obtain a 2D image of the dielectric function. Mathematically, we solve a 1D coefficient inverse problem for a hyperbolic equation for each source location. Unlike conventional Born approximation-based techniques for synthetic-aperture radar, this method does not need any linearization. In this paper, we verify the method on several 3D numerical tests with simulated data. We also revisit the global convergence of the gradient descent method of our computational approach.

Key words and phrases:
SAR imaging, coefficient inverse problem, convexification, global convergence, simulated data, delay-and-sum procedure
1991 Mathematics Subject Classification:
78A46, 65L70, 65C20
This work was supported by US Army Research Laboratory and US Army Research Office grant W911NF-19-1-0044. V. A. Khoa was supported by the Faculty Research Awards Program (FRAP) at Florida A&M University, under the project #007633.

1. Introduction

Synthetic-aperture radar (SAR) imaging is a commonly used technique for reconstructing images of surfaces of planets and for detecting antipersonnel land mines and improvised explosive devices; cf., e.g., [2, 3, 7, 8] for essential background on SAR imaging. Conventional SAR imaging is based on linearization via the Born approximation [7]. While providing accurate images of shapes and locations of targets, the linearization makes it problematic to produce accurate values of their dielectric constants; see, e.g., [10]. However, since dielectric constants characterize the constituent materials of targets, it is clearly important to compute them accurately.

To address this problem, a new convexification-based nonlinear SAR imaging technique, CONSAR, was proposed in [10] for the first time. Testing of CONSAR on both computationally simulated and experimentally collected data has shown that dielectric constants of targets are indeed accurately computed [10, 19]. However, these publications are only about obtaining 2D SAR images in the so-called “slant range” plane. In this case both the point source and the radiating antenna move along an interval of a single straight line. Thus, unlike [10, 19], the current paper tests the CONSAR technique for 3D SAR images. In our experience, collection of experimental SAR data along just one line is time consuming [19]. Therefore, we consider here the case when the source and the detector move concurrently along only three (3) parallel lines. In principle, one should, of course, have many such lines to get accurate solutions. Hence, due to the obvious lack of information content in the input data, it is unrealistic to expect that the resulting SAR images would be accurate in terms of the shapes of targets. This is why the sizes of the computed images of our targets are quite different from the true ones. Nevertheless, we accurately image dielectric constants and locations of targets. Only computationally simulated data are considered here.

In this paper, our target application is the non-invasive inspection of buried targets, where one tries to image a buried or occluded target using backscattering data collected by, e.g., flying devices or roving vehicles (cf. [2, 9]). Knowledge of the dielectric constant of the target plays a vital role in the development of future classification algorithms that distinguish between explosives and clutter in military applications. Thus, it is anticipated that knowledge of dielectric constants of targets should decrease the false alarm rate.

In fact, conventional time-dependent SAR data, in which the source and the detector move concurrently along an interval of a straight line, depend on two variables. On the other hand, we want to image 3D objects. Therefore, the data for the corresponding Coefficient Inverse Problem (CIP) are underdetermined. This is why we work here with the above-mentioned, more informative case of three parallel lines. Although these data are also underdetermined, we still hope to reconstruct some approximations of 3D images of targets. It is worth mentioning that underdetermined or non-overdetermined CIPs are omnipresent in the applications mentioned above. One of the main challenges that researchers encounter in practice is finding a good approximation of the target’s dielectric constant. This is also a long-standing research project that the corresponding author and his research team have been working on for almost a decade; see, e.g., [11, 12, 13, 14, 15, 16, 17] for mathematical models and numerical methods to identify the material of buried targets.

Solving for the target’s dielectric constant in this scenario is a CIP for a hyperbolic PDE. It is well known that CIPs are both severely ill-posed and nonlinear, thus requiring the careful design of an appropriate numerical method. Very recently, we proposed in [10, 19] a new convexification-based algorithm to construct a 2D image of the target’s dielectric constant using SAR data. In fact, the approach of [10, 19] amounts to nonlinear SAR imaging, as opposed to the conventional linear one. We point out that the computational approach of [10, 19] to the SAR data is a heuristic one. Nevertheless, it was shown in [10] that this approach works well numerically for computationally simulated data. More encouragingly, it was demonstrated in [10, 19] that this approach works well for experimentally collected data. The same approach to the SAR data is used in this paper. Using multiple source locations running along an interval of a straight line, our computational approach is to solve a number of 1D CIPs.

To solve each of our CIPs numerically, we first transform the hyperbolic equation into a nonlocal nonlinear boundary value problem (BVP), based upon the original idea of [20]. An approximate solution of this BVP is then found by minimizing a suitable weighted Tikhonov-like cost functional, which is proved to be globally strictly convex. Its strict convexity is based on the presence of the Carleman Weight Function (CWF) in it. The CWF is the function which is used as the weight in the Carleman estimate for the principal part of the differential operator involved in that BVP. Ultimately, the 2D image which we construct for each of our three lines can be understood as a collection of 1D cross-sections of the so-called slant-range plane established by the “moving” source/antenna positions. To construct the 3D image, we combine those 2D ones; see sections 3 and 5 for details.

Conventional numerical methods for CIPs are based on minimizations of various least squares cost functionals, including the Tikhonov regularization functional; see, e.g., [4, 5, 6]. It is well known, however, that such a functional is typically non-convex and, therefore, has multiple local minima and ravines, which, in turn, plague the minimization procedure. The convexification method was initially proposed in [21, 22] with the goal of avoiding this phenomenon of multiple local minima and ravines of conventional least squares cost functionals for CIPs. While publications [21, 22] are purely theoretical, the paper [23] has generated many follow-up numerical studies of the convexification by this research group; see, e.g., the above cited publications of this research group and the references cited therein. We also refer to the recently published book [24].

The convexification uses a significantly modified idea of the Bukhgeim–Klibanov method [25] of 1981. However, in [25] and many follow-up publications, only proofs of global uniqueness and stability theorems for CIPs were obtained; see, e.g., [26, 27, 28, 29, 30] and the references cited therein. The convexification, on the other hand, goes in the numerical direction: it is designed for the construction of globally convergent numerical methods for CIPs. This is unlike conventional locally convergent numerical methods for CIPs, which are based on minimizations of those functionals.

As stated above, depending on the differential operator involved in the CIP, the convexification method is based on an appropriate Carleman Weight Function to construct a weighted Tikhonov-like cost functional. The Carleman Weight Function plays two roles. First, it convexifies the conventional Tikhonov functional. Second, it helps to control nonlinearities involved in the underlying CIP.

Definition 1. We call a numerical method for a CIP globally convergent if there is a theorem which claims that this method reaches at least one point in a sufficiently small neighborhood of the true solution of that CIP without any advanced knowledge of a small neighborhood of this solution.

The convexification is a globally convergent numerical method because one can prove the convergence of the scheme towards the true solution starting from any point of a given bounded set of a suitable Hilbert space, and the diameter of this set can be arbitrarily large.

As to the moving sources, we remark that in the publication [16] a version of the convexification method was developed for the first time, both analytically and numerically, for a 3D inverse medium problem for the Helmholtz equation with backscattering data generated by point sources running along a straight line at a fixed frequency. This result was then confirmed on experimental data in [17, 18]. In [16, 17, 18], we were working with data in the frequency domain, whereas the SAR data are time dependent. In addition, in the case of the SAR data, both the source and the detector move concurrently along a straight line. On the other hand, in [16, 17, 18] the source moves along a part of a straight line, whereas the detector moves independently along a part of a plane. We also refer to works of Novikov and his group [31, 32] for a different approach to CIPs with moving sources.

We also add that our recent publications [10, 19] about CONSAR were focusing only on the 2-D reconstructions. Therefore, the main difference between these papers and the current one is that here we obtain a 3-D image of the target dielectric constant, assuming that the data are collected from multiple lines of sources.

Our paper is organized as follows. In sections 2, 3 and 4, we revisit our setting for the CIP imaging and our convexification-based method proposed in [19]. In section 5, we verify the 3-D numerical performance of our method using computationally simulated data. Finally, we present concluding remarks in section 6.

2. Statement of the forward problem

Prior to the statement of the forward problem, we discuss the setting of SAR imaging. We focus only on stripmap SAR processing; see, e.g., [33] for details of this setup. With the non-invasive inspection of buildings and the detection of land mines in mind, the SAR device carries a radiating antenna and a receiver. As the antenna radiates pulses of the time-resolved component of the electric wave field, the receiver collects the backscattering signal. Our SAR device moves along a straight line. In this work, we assume that it moves along multiple parallel lines, which, we hope, may create a 3-D image of the target. It is worth mentioning that in our inverse setting in section 3 below, we assume that the antenna and the receiver coincide and form a point, which moves along certain parallel lines. This assumption is a typical one in SAR imaging; cf. [7]. Indeed, in practice the antenna and the receiver are rather close to each other, and the distance between them is much less than the distance between the line over which they concurrently move and the targets of interest. However, when generating the SAR data in computational simulations, we solve a forward problem in which we assume that the antenna is a disk.

We denote below $\mathbf{x}=(x,y,z)\in\mathbb{R}^{3}$. For $z^{0}>0$, we consider the source location $\mathbf{x}^{0}=(x^{0},y_{j}^{0},z^{0})\in L_{j}^{\text{src}}$ for $j=\overline{1,j_{0}}$, $j_{0}\in\mathbb{N}^{\ast}$. By choosing a certain value of $j_{0}$ and varying $x^{0}$ in the interval $(-L,L)$ for $L>0$, we define a finite set of lines $L_{j}^{\text{src}}$ of sources/receivers as follows:

(2.1) $L_{j}^{\text{src}}:=\left\{\mathbf{x}^{0}=\left(x^{0},y_{j}^{0},z^{0}\right)\in\mathbb{R}^{3}:x^{0}\in\left(-L,L\right),\;j=\overline{1,j_{0}}\right\}.$

Therefore, each line of sources $L_{j}^{\text{src}}$ is parallel to the $x$-axis. The antenna and the receiver coincide and are located at the point $\mathbf{x}^{0}=\left(x^{0},y_{j}^{0},z^{0}\right)\in L_{j}^{\text{src}}$. The number $y_{j}^{0}$ is the same for all points of $L_{j}^{\text{src}}$ and changes as the line $L_{j}^{\text{src}}$ changes. The number $z^{0}$ is the same for all lines (2.1). By letting the parameter $x^{0}$ vary in $\left(-L,L\right)$ in (2.1), we assume that the antenna/receiver point characterized by $x^{0}$ moves along each line $L_{j}^{\text{src}}$, $j=\overline{1,j_{0}}$.
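For illustration, the following minimal Python sketch builds the discrete antenna/receiver positions on the lines $L_{j}^{\text{src}}$ of (2.1); the helper name source_positions and the default values (taken from section 5.1) are our own illustrative choices, not part of the method itself.

```python
import numpy as np

def source_positions(L=0.7, n_sources=28, y_lines=(-1.75, -1.70, -1.65), z0=0.8):
    """Discrete antenna/receiver positions on the lines L_j^src of (2.1).

    Each line is parallel to the x-axis: x^0 runs over (-L, L), y_j^0 is fixed
    per line, and z^0 is shared by all lines.  The defaults mirror the
    dimensionless values of section 5.1 (three lines, 28 positions each).
    Returns an array of shape (number of lines, n_sources, 3)."""
    x0 = np.linspace(-L, L, n_sources)
    lines = []
    for yj in y_lines:
        pts = np.stack([x0, np.full_like(x0, yj), np.full_like(x0, z0)], axis=1)
        lines.append(pts)
    return np.array(lines)

# Example: three source lines, 28 positions each
print(source_positions().shape)   # (3, 28, 3)
```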

Let $R>0$. Our domain of interest is

$\Omega=(-R,R)^{3}\subset\mathbb{R}^{3}.$

This domain consists of two parts. The lower part $\Omega\cap\left\{z<0\right\}$ mimics the sandbox containing the unknown buried object. Meanwhile, the upper part $\Omega\cap\left\{z>0\right\}$ is assumed to be the air, where we move our SAR device. For simplicity, we assume that the interface between the air and the sand is located at $\left\{z=0\right\}$. Let $\varepsilon_{r}(\mathbf{x})\in C^{1}\left(\mathbb{R}^{3}\right)$ be the function that represents the spatially distributed dielectric constant. We assume that

(2.2) $\left\{\begin{array}{l}\varepsilon_{r}\left(\mathbf{x}\right)\geq 4\text{ for }\left\{z<0\right\}\cap\Omega,\\ \varepsilon_{r}\left(\mathbf{x}\right)=1\text{ for }\left\{z>0\right\}\cap\Omega.\end{array}\right.$

Physically, assumption (2.2) is reasonable. As in, e.g., [17], the dielectric constant of air/vacuum is identically one, whereas for other materials (for example, the dry sand relevant to the non-invasive inspections of our interest) the dielectric constant is larger than unity.

Figure 1. A schematic diagram of our SAR imaging setting. A flying device moves along multiple parallel straight lines in the $x$-direction to collect the backscattering data. On this device, we assume that the transmitter and the detector have the same position. The transmitter is a disk-shaped antenna with an elevation angle of $\pi/4$ (yellow part). For our purpose of mimicking land mine detection, we assume that the unknown object is buried in a dry sand region. The idea of our convexification-based method is to solve a 1D CIP for every source location. Under a certain elevation angle (red part), the solutions of these 1-D CIPs allow us to approximately reconstruct a 3-D image.

Let $T>0$ be the observation time. Practically, $T$ is determined by twice the distance from the line of sources to the furthest point of the sandbox. For every source location $\mathbf{x}^{0}\in L_{j}^{\text{src}}$ (see (2.1)), we consider the following forward problem:

(2.3) $\begin{cases}\nu^{-2}\left(\mathbf{x}\right)u_{tt}=\Delta u+F\left(\mathbf{x},\mathbf{x}^{0},t\right),\\ u\left(\mathbf{x},\mathbf{x}^{0},0\right)=u_{t}\left(\mathbf{x},\mathbf{x}^{0},0\right)=0,\\ \left.\left[\partial_{n}u\left(\mathbf{x},\mathbf{x}^{0},t\right)+\nu^{-1}\left(\mathbf{x}\right)u\left(\mathbf{x},\mathbf{x}^{0},t\right)\right]\right|_{\partial\Omega}=0,\\ \mathbf{x}^{0}\in L_{j}^{\text{src}},\;\mathbf{x}\in\Omega,\;t\in\left(0,T\right),\;j=\overline{1,j_{0}},\end{cases}$

where $n=n\left(\mathbf{x}\right)$, $\mathbf{x}\in\partial\Omega$, is the unit outward normal vector on $\partial\Omega$. In (2.3), $u(\mathbf{x},\mathbf{x}^{0},t)$ is the amplitude of a time-resolved component of the electric wave field. The function $\nu(\mathbf{x})=c_{0}/\sqrt{\varepsilon_{r}(\mathbf{x})}$ represents the speed of light in the medium, where $c_{0}$ is the speed of light in vacuum. Here, the source function $F$ is defined as

(2.4) $F\left(\mathbf{x},\mathbf{x}^{0},t\right)=\begin{cases}\operatorname{Re}\left(\chi_{D}\left(\mathbf{x},\mathbf{x}^{0}\right)e^{-\mathrm{i}\omega_{0}t}e^{-\mathrm{i}\alpha_{0}\left(t-\tau_{0}/2\right)^{2}}\right)&\text{if }t\in\left(0,\tau_{0}\right],\;\mathbf{x}^{0}\in L_{j}^{\text{src}},\\ 0&\text{if }t>\tau_{0},\end{cases}$

where $\mathrm{i}=\sqrt{-1}$. This source function models the linear modulated (chirp) pulse, where $\omega_{0}$ is the carrier frequency, $\alpha_{0}$ is the chirp rate [7], and $\tau_{0}$ is the pulse duration. The function $\chi_{D}$ in (2.4) represents the disk-shaped antenna of our interest with radius $D>0$ and thickness $\widetilde{D}>0$, centered at the point $\mathbf{x}^{0}\in L_{j}^{\text{src}}$. We rotate the disk via a coordinate transformation $\mathbf{x}\mapsto\mathbf{x}^{\prime}=(x^{\prime},y^{\prime},z^{\prime})$ defined by

(2.5) $x^{\prime}=x-x^{0},\quad y^{\prime}=\cos(\theta)(y-y_{j}^{0})-\sin(\theta)(z-z^{0}),\quad z^{\prime}=\sin(\theta)(y-y_{j}^{0})+\cos(\theta)(z-z^{0}),$

where $\theta$ is the elevation angle. Hereby, the function $\chi_{D}$ is given by

(2.6) $\chi_{D}(\mathbf{x},\mathbf{x}^{0})=\begin{cases}1&\text{if }\sqrt{{x^{\prime}}^{2}+{y^{\prime}}^{2}}<D\text{ and }|z^{\prime}|\leq\widetilde{D},\\ 0&\text{otherwise.}\end{cases}$
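As a hedged illustration of the source model (2.4)-(2.6), the sketch below evaluates the chirp pulse and the rotated disk indicator; the function names, the unit convention (time in seconds, lengths dimensionless as in (3.3)), and the default parameter values are our own assumptions based on section 5.1, not a prescription of the paper.

```python
import numpy as np

def chirp_pulse(t, omega0=6*np.pi*1e8, tau0=1e-9, alpha0=None):
    """Linear modulated (chirp) pulse of (2.4), assumed here with t in seconds:
    Re(exp(-i*omega0*t) * exp(-i*alpha0*(t - tau0/2)^2)) for 0 < t <= tau0, else 0."""
    if alpha0 is None:
        alpha0 = 8*np.pi*1e6 / tau0          # chirp rate, as chosen in section 5.1
    t = np.asarray(t, dtype=float)
    pulse = np.real(np.exp(-1j*omega0*t) * np.exp(-1j*alpha0*(t - tau0/2)**2))
    return np.where((t > 0) & (t <= tau0), pulse, 0.0)

def chi_D(x, x0, theta=np.pi/4, D=0.1, D_tilde=0.05):
    """Indicator chi_D of (2.6) for the disk antenna centered at the source x0,
    tilted by the elevation angle theta via the rotation (2.5)."""
    dx, dy, dz = x[0] - x0[0], x[1] - x0[1], x[2] - x0[2]
    yp = np.cos(theta)*dy - np.sin(theta)*dz     # y' of (2.5)
    zp = np.sin(theta)*dy + np.cos(theta)*dz     # z' of (2.5)
    return float(np.hypot(dx, yp) < D and abs(zp) <= D_tilde)
```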

We consider the so-called absorbing boundary conditions in (2.3). Mathematically, the solution to the wave equation is sought in the whole space. As introduced above, the computational domain $\Omega$ is, however, finite, and a boundary condition must be imposed together with the underlying PDE. Without such a condition, numerical simulations exhibit non-physical back reflections of the waves incident at the boundaries. Therefore, absorbing boundary conditions are used to minimize these spurious reflections; cf., e.g., [34] and the references cited therein.

3. Statement of the inverse problem

The application we have in mind is to identify the material of the buried object. Therefore, our inverse problem aims to compute the spatial distribution of the dielectric constant $\varepsilon_{r}(\mathbf{x})$ involved in the coefficient $\nu(\mathbf{x})$ of the PDE in (2.3).

3.1. Statement of the problem

SAR Inverse Problem. Suppose that the functions $G_{j}(\mathbf{x}^{0},t)=u(\mathbf{x}^{0},\mathbf{x}^{0},t)$ for $t\in\left(0,T\right)$, $\mathbf{x}^{0}\in L_{j}^{\text{src}}$ and $j=\overline{1,j_{0}}$ are given as our SAR data. Find a function $\varepsilon_{r}(\mathbf{x})\in C^{1}\left(\mathbb{R}^{3}\right)$ satisfying conditions (2.2).

Cf. [10, 19], our strategy is to image the unknown function on the so-called slant-range plane, denoted by $P$. We define this plane in the following manner. For each antenna/source location $\mathbf{x}_{j}^{0,m}=\left(x_{m}^{0},y_{j}^{0},z^{0}\right)\in L_{j}^{\text{src}}$, we define the central line of the antenna $CRL(\mathbf{x}_{j}^{0,m})$ that passes through the point $\mathbf{x}_{j}^{0,m}$, $m=1,\ldots,N$, where $N$ is the total number of locations of the source on the line $L_{j}^{\text{src}}$. Each line $CRL(\mathbf{x}_{j}^{0,m})$ is orthogonal to the $x$-axis and makes a certain angle with the plane $\left\{y=y_{j}^{0}\right\}$. Recall that the line $L_{j}^{\text{src}}$ is parallel to the $x$-axis; see (2.1). That angle, denoted by $\theta$, is called the elevation angle and is determined by the propagation direction of the radiated energy of the antenna; see (2.4)-(2.6). Henceforth, for each line $L_{j}^{\text{src}}$, the slant-range plane $P_{j}$ is defined as the plane passing through the central lines of the antennas and the line of sources $L_{j}^{\text{src}}$. Hence, multiple lines of sources give multiple slant-range planes. As a by-product, the solution obtained on this plane is called the slant-range image $\widetilde{\varepsilon}_{r,j}(\mathbf{x})$ of the unknown dielectric constant $\varepsilon_{r}(\mathbf{x})$ on the slant-range plane $P_{j}$. To obtain an approximate image of the function $\varepsilon_{r}(\mathbf{x})$, we then combine the slant-range images $\widetilde{\varepsilon}_{r,j}(\mathbf{x})$ on the planes $P_{j}$, $j=\overline{1,j_{0}}$. For clarity, we depict an illustration of our SAR imaging setting in Figure 1.

To obtain the image in the slant-range plane $P_{j}$, we solve many 1-D Coefficient Inverse Problems (CIPs) along the central lines of the antennas $CRL(\mathbf{x}_{j}^{0,m})$, $m=1,\ldots,N$. Let $\widetilde{\varepsilon}_{r,j,m}(\mathbf{x})$ be the solution of the CIP along the line $CRL(\mathbf{x}_{j}^{0,m})$. Then the resulting 2-D function $\widetilde{\varepsilon}_{r,j}(\mathbf{x})$ is obtained by averaging these functions over $m=1,\ldots,N$; see details in section 5.

3.2. 1D Coefficient Inverse Problems

We now present our computational approach to calculate the slant-range dielectric constant $\widetilde{\varepsilon}_{r,j}(\mathbf{x})$ on each slant-range plane $P_{j}$. For every $j=\overline{1,j_{0}}$, we consider the $m$-th source location $\mathbf{x}_{j}^{0,m}$ for $m=\overline{1,N}$. Introducing a pseudo variable $x\in\mathbb{R}$ and considering a smooth function $c(x)$ representing the unknown function $\widetilde{\varepsilon}_{r}(\mathbf{x})$ along each central line $CRL(\mathbf{x}_{j}^{0,m})$ of the antenna passing through the source $\mathbf{x}_{j}^{0,m}$, we can rescale so that $x\in(0,1)$, based upon the length of that central line. This way, we treat $c(x)$ as the solution of a 1-D CIP solved along each central line of the source $\mathbf{x}_{j}^{0,m}$.

By (2.2), for a number $\overline{c}\geq 4$ we consider the function $c\left(x\right)\in C^{3}\left(\mathbb{R}\right)$ such that

(3.1) $c\left(x\right)=\begin{cases}\in\left[4,\overline{c}\right]&\text{for }x\in\left(0,1\right),\\ 1&\text{for }x\leq 0\text{ and }x\geq 1.\end{cases}$

Hereby, we consider the following forward problem:

(3.2) $\begin{cases}\frac{c(x)}{c_{0}^{2}}u_{tt}=u_{xx}&\text{for }x\in\mathbb{R},\;t\in\left(0,T\right),\\ u\left(x,0\right)=0,\quad u_{t}\left(x,0\right)=\delta\left(x\right)&\text{for }x\in\mathbb{R}.\end{cases}$

In (3.2), the variable $t$ is measured in nanoseconds (ns), whilst the unit for $x$ is meters (m). Physically, the speed of light in vacuum is $c_{0}=0.3$ (m/ns). Considering the dimensionless variables $X$ and $\tau$,

(3.3) $X=x/(0.3\,\text{m})\text{ and }\tau=t/\text{ns},$

we arrive at

(3.4) $u_{XX}=(0.3\,\text{m})^{2}u_{xx},\quad u_{\tau\tau}=(1\,\text{ns})^{2}u_{tt}.$

Thus, the dimensionless regime of the forward problem (3.2) becomes:

(3.5) $\left\{\begin{array}{c}\widetilde{c}(X)u_{\tau\tau}=u_{XX},\quad\widetilde{c}(X)=c(0.3X),\\ u\left(X,0\right)=0,\;u_{\tau}\left(X,0\right)=\delta\left(X\right).\end{array}\right.$

The change of variables (3.3) means that $X=1$ corresponds to $x=0.3$ (m) in reality, and $\tau=1$ corresponds to $t=1$ (ns). We now state our coefficient inverse problem.

1-D Coefficient Inverse Problem (1-D CIP). Assume that the following functions $g_{0}\left(\tau\right)$ and $g_{1}\left(\tau\right)$ are given:

(3.6) $u\left(0,\tau\right)=g_{0}\left(\tau\right),\quad u_{X}\left(0,\tau\right)=g_{1}\left(\tau\right),\quad\tau\in\left(0,T\right).$

Determine the function $\widetilde{c}\left(X\right)$ in (3.5) such that the corresponding function $c\left(x\right)\in C^{3}\left(\mathbb{R}\right)$ satisfies conditions (3.1).

To solve this 1D CIP numerically, we employ the version of the convexification method of [10, 19]. First, we change variables as:

(3.7) $Y=Y(X)=\int_{0}^{X}\sqrt{\widetilde{c}(s)}\,ds.$

Observe that $dY/dX=\sqrt{\widetilde{c}(X)}\geq 1$. Consider the following functions:

(3.8) $w\left(Y,\tau\right)=u\left(X\left(Y\right),\tau\right)\widetilde{c}^{1/4}\left(X\left(Y\right)\right),\quad Q\left(Y\right)=\widetilde{c}^{-1/4}\left(X\left(Y\right)\right),\quad p\left(Y\right)=\frac{Q^{\prime\prime}\left(Y\right)}{Q\left(Y\right)}-2\left[\frac{Q^{\prime}\left(Y\right)}{Q\left(Y\right)}\right]^{2}.$

The function $p(Y)$ is smooth and $p(Y)=0$ for $Y<0$ and $Y>\sqrt{\overline{c}}$. Moreover, the following PDE with initial and boundary conditions holds:

(3.9) $w_{\tau\tau}=w_{YY}+p\left(Y\right)w\quad\text{for }Y\in\mathbb{R},\;\tau\in\left(0,\widetilde{T}\right),$
(3.10) $w\left(Y,0\right)=0,\quad w_{\tau}\left(Y,0\right)=\delta\left(Y\right)\quad\text{for }Y\in\mathbb{R},$
(3.11) $w\left(0,\tau\right)=g_{0}\left(\tau\right),\quad w_{Y}\left(0,\tau\right)=g_{1}\left(\tau\right)\quad\text{for }\tau\in\left(0,\widetilde{T}\right),$

where the number $\widetilde{T}\geq 2\sqrt{\overline{c}}$ depends on $T$.
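To make the change of variables concrete, the following minimal sketch evaluates the forward direction of the transformations (3.7)-(3.8) for a given smooth $\widetilde{c}$ on a grid starting at $X=0$; the helper name and the finite-difference details are our own illustrative choices, not part of the algorithm of [10, 19, 20].

```python
import numpy as np

def travel_time_variables(c_tilde, X):
    """Given c_tilde(X) on a grid X (with X[0] = 0), compute the travel-time
    variable Y(X) of (3.7), Q(Y) = c_tilde^{-1/4}, and
    p(Y) = Q''/Q - 2*(Q'/Q)^2 of (3.8), using simple finite differences."""
    c = c_tilde(X)
    # Y(X) = int_0^X sqrt(c_tilde(s)) ds, cumulative trapezoid rule
    Y = np.concatenate([[0.0],
                        np.cumsum(0.5*(np.sqrt(c[1:]) + np.sqrt(c[:-1]))*np.diff(X))])
    Q = c**(-0.25)
    dQ = np.gradient(Q, Y)       # Q'(Y) on the (non-uniform) Y grid
    d2Q = np.gradient(dQ, Y)     # Q''(Y)
    p = d2Q/Q - 2.0*(dQ/Q)**2
    return Y, Q, p
```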

Remark 3.1.

Cf. [10, 19], in this SAR setting the unknown dielectric constant $c(x)$ is assumed to have the smoothness $C^{3}(\mathbb{R})$. This assumption is used when we revisit our theorems below. In this regard, the function $p(Y)$ belongs to $C^{1}(\mathbb{R})$; see (3.8). The condition $\widetilde{T}\geq 2\sqrt{\overline{c}}$ is imposed because it guarantees the uniqueness of the CIP (3.9)–(3.11); see [35, Theorem 2.6 of Chapter 2]. As soon as the function $p(Y)$ is reconstructed, the original function $c(x)$ is recovered using (3.7) and (3.8) via the procedure described in [20, section 7.2].

From now on, we follow the transformation introduced in [20] to obtain a nonlocal nonlinear PDE for (3.9)–(3.11). When doing so, we take an arbitrary $b\geq\sqrt{\overline{c}}$ and then take $\mu\in(0,2\alpha b)$ for $\alpha\in(0,1/2)$. We introduce the following 2D regions:

(3.12) $\mathcal{K}:=\left\{\left(Y,\tau\right)\in\mathbb{R}^{2}:Y\in\left(0,b\right),\;\tau\in\left(0,\widetilde{T}\right)\right\},$
(3.13) $\mathcal{K}_{\alpha,\mu,b}:=\left\{\left(Y,\tau\right)\in\mathbb{R}^{2}:Y+\alpha\tau<2\alpha b-\mu\;\text{for }Y,\tau>0\right\}.$

Henceforth, we consider a new function $v(Y,\tau)$,

(3.14) $v(Y,\tau)=w(Y,\tau+Y)\in C^{3}(Y\geq 0,\tau\geq 0).$

Using the multivariable chain rule, we obtain the following PDE for $v(Y,\tau)$ from (3.9):

(3.15) $v_{YY}-2v_{Y\tau}+p\left(Y\right)v=0\quad\text{for }\left(Y,\tau\right)\in\mathcal{K}.$

Furthermore, by (3.11), it follows that $v\left(Y,0\right)=1/2$ for $Y\in\left(0,b\right)$, which then results in the fact that

(3.16) $p(Y)=4v_{Y\tau}(Y,0),\quad Y\in(0,b).$

Differentiating both sides of (3.15) with respect to $\tau$ and setting

(3.17) $V(Y,\tau)=v_{\tau}(Y,\tau)\in C^{2}(\overline{\mathcal{K}}),$

we obtain the following overdetermined boundary value problem for a nonlinear hyperbolic equation:

(3.18) $V_{YY}-2V_{Y\tau}+4V_{Y}\left(Y,0\right)V=0\quad\text{for }\left(Y,\tau\right)\in\mathcal{K},$
(3.19) $V\left(0,\tau\right)=q_{0}\left(\tau\right):=g_{0}^{\prime}\left(\tau\right)\quad\text{for }\tau\in\left(0,\widetilde{T}\right),$
(3.20) $V_{Y}\left(0,\tau\right)=q_{1}\left(\tau\right):=g_{0}^{\prime\prime}\left(\tau\right)+g_{1}^{\prime}\left(\tau\right)\quad\text{for }\tau\in\left(0,\widetilde{T}\right),$
(3.21) $V_{Y}\left(b,\tau\right)=0\quad\text{for }\tau\in\left(0,\widetilde{T}\right).$

Boundary condition (3.21) follows from the absorbing boundary condition, which was proven in [36] for our 1-D case.

Remark 3.2.

Essentially, our heuristic computational approach allows us to solve the non-overdetermined 3D CIP via the solutions of many overdetermined boundary value problems. The transformations we have presented are necessary because we want to use a suitable Carleman estimate. This estimate is applied to handle the nonlinearity involved in the PDE operator. Frequently, nonlinear terms are unavoidable when working on underdetermined and non-overdetermined CIPs (cf., e.g., [11, 15, 16]).

In our SAR perspective, we observe that after computing $V(Y,\tau)$ for $(Y,\tau)\in\mathcal{K}$ from (3.18)–(3.21), we then obtain

(3.22) $p(Y)=4V_{Y}(Y,0),\quad Y\in(0,b),$

by (3.16). Hence, the target unknown coefficient $c(x)$ of the original 1D CIP is obtained after recovering $Q(Y)$ from (3.8).

4. A version of the convexification method: theorems revisited

Consider the subspace $H_{0}^{k}(\mathcal{K})\subset H^{k}(\mathcal{K})$ for $k\in\mathbb{N}^{\ast}$. In this work, we introduce

$H_{0}^{2}\left(\mathcal{K}\right):=\left\{u\in H^{2}\left(\mathcal{K}\right):u\left(0,\tau\right)=u_{Y}\left(0,\tau\right)=0,\;u_{Y}\left(b,\tau\right)=0\right\},$
$H_{0}^{4}\left(\mathcal{K}\right):=H^{4}\left(\mathcal{K}\right)\cap H_{0}^{2}\left(\mathcal{K}\right).$

As mentioned above, several transformations lead us to an overdetermined BVP. Since this problem is nonlinear, any conventional optimization-based approach, including the Tikhonov regularization and the least squares method, may suffer from the phenomenon of local minima and ravines. This is the main reason which originally prompted the corresponding author to develop the convexification approach [21, 22]. The main ingredient of the method and its variants is the use of an appropriate Carleman Weight Function to “convexify” the cost functional. The associated high-order regularization term plays the role of controlling the nonlinear terms involved in the PDE operator. Thereby, one obtains a unique minimizer, and the global convergence of a gradient-like minimization procedure is attained. Depending on the principal parts of the differential operators, one can choose different Carleman Weight Functions; see, e.g., [24] for a variety of Carleman estimates for partial differential operators. In this work, we consider the following Carleman Weight Function for the linear operator $\partial_{Y}^{2}-2\partial_{Y}\partial_{\tau}$ (see (3.18)):

$\psi_{\lambda}(Y,\tau)=e^{-2\lambda(Y+\alpha\tau)},\quad\lambda\geq 1,\;\alpha\in(0,1/2).$

Denoting

(4.1) $\mathcal{S}(V)=V_{YY}-2V_{Y\tau}+4V_{Y}(Y,0)V,\quad(Y,\tau)\in\mathcal{K},$

we come up with the following weighted Tikhonov-like functional $J_{\lambda,\gamma}:H^{4}(\mathcal{K})\rightarrow\mathbb{R}_{+}$:

(4.2) $J_{\lambda,\gamma}\left(V\right)=\int_{\mathcal{K}}\left[\mathcal{S}\left(V\right)\right]^{2}\psi_{\lambda}\,dY\,d\tau+\gamma\left\|V\right\|_{H^{4}\left(\mathcal{K}\right)}^{2},\quad\gamma\in(0,1).$

We choose the high-order regularization term in $H^{4}(\mathcal{K})$ because of the embedding $H^{4}(\mathcal{K})\subset C^{2}(\overline{\mathcal{K}})$. This $C^{2}$ regularity is helpful in controlling the nonlinear term in (3.18); see [20, Theorem 2]. Following our convexification framework in, e.g., [16, 15, 20], we introduce the set $B\left(r,q_{0},q_{1}\right)$,

(4.3) $B\left(r,q_{0},q_{1}\right)=\left\{u\in H^{4}\left(\mathcal{K}\right):u\left(0,\tau\right)=q_{0}\left(\tau\right),\;u_{Y}\left(0,\tau\right)=q_{1}\left(\tau\right),\;u_{Y}\left(b,\tau\right)=0,\;\left\|u\right\|_{H^{4}\left(\mathcal{K}\right)}<r\right\},$

in which $r>0$ is an arbitrary number. However, in our computational practice we use the $H^{2}\left(\mathcal{K}\right)$-norm; see section 5.1.

Minimization Problem. Minimize the cost functional $J_{\lambda,\gamma}\left(V\right)$ defined in (4.2) on the set $\overline{B\left(r,q_{0},q_{1}\right)}$.

We are now in a position to revisit our analysis in [20, 36] of the convexification-based method for the nonlinear boundary value problem (3.18)–(3.21).

Theorem 4.1 (Carleman estimate [36]).

There exist constants $C=C(\alpha,\mathcal{K})>0$ and $\lambda_{0}=\lambda_{0}(\alpha,\mathcal{K})\geq 1$ such that for all functions $u\in H_{0}^{2}(\mathcal{K})$ and all $\lambda\geq\lambda_{0}$ the following Carleman estimate holds true:

$\int_{\mathcal{K}}\left(u_{YY}-2u_{Y\tau}\right)^{2}\psi_{\lambda}\,dY\,d\tau\geq C\int_{\mathcal{K}}\left(\lambda\left(u_{Y}^{2}+u_{\tau}^{2}\right)+\lambda^{3}u^{2}\right)\psi_{\lambda}\,dY\,d\tau+C\int_{0}^{b}\left(\lambda u_{Y}^{2}+\lambda^{3}u^{2}\right)\left(Y,0\right)e^{-2\lambda Y}\,dY-Ce^{-2\lambda\widetilde{T}}\int_{0}^{b}\left(\lambda u_{Y}^{2}+\lambda^{3}u^{2}\right)\left(Y,\widetilde{T}\right)dY.$
Theorem 4.2 (Global strict convexity [20]).

For any $\lambda,\gamma>0$ and all functions $V\in\overline{B\left(r,q_{0},q_{1}\right)}$, the cost functional $J_{\lambda,\gamma}$ defined in (4.2) has the Fréchet derivative $J_{\lambda,\gamma}^{\prime}\left(V\right)\in H_{0}^{4}\left(\mathcal{K}\right)$. Let $\lambda_{0}=\lambda_{0}(\alpha,\mathcal{K})\geq 1$ be the constant of Theorem 4.1 and let $\widetilde{T}\geq 2b$. Then there exists a sufficiently large number $\lambda_{1}=\lambda_{1}\left(\mathcal{K},r,\alpha\right)\geq\lambda_{0}$ such that for all $\lambda\geq\lambda_{1}$ and all $\gamma\in[2e^{-\lambda\alpha\widetilde{T}},1)$, the cost functional (4.2) is strictly convex on the set $\overline{B\left(r,q_{0},q_{1}\right)}$. Moreover, for all $V_{1},V_{2}\in\overline{B\left(r,q_{0},q_{1}\right)}$ the following estimate holds true:

(4.4) $J_{\lambda,\gamma}\left(V_{2}\right)-J_{\lambda,\gamma}\left(V_{1}\right)-J_{\lambda,\gamma}^{\prime}\left(V_{1}\right)\left(V_{2}-V_{1}\right)\geq Ce^{-2\lambda\left(2\alpha b-\mu\right)}\left\|V_{2}-V_{1}\right\|_{H^{1}\left(\mathcal{K}_{\alpha,\mu,b}\right)}^{2}+Ce^{-2\lambda\left(2\alpha b-\mu\right)}\left\|V_{2}\left(Y,0\right)-V_{1}\left(Y,0\right)\right\|_{H^{1}\left(0,2\alpha b-\mu\right)}^{2}+\frac{\gamma}{2}\left\|V_{2}-V_{1}\right\|_{H^{4}\left(\mathcal{K}\right)}^{2},$

where the constant $C=C\left(\alpha,\mu,b,r\right)>0$ depends only on the listed parameters. Furthermore, the functional $J_{\lambda,\gamma}\left(V\right)$ has a unique minimizer $V_{\min}$ on the set $\overline{B\left(r,q_{0},q_{1}\right)}$ and

$J_{\lambda,\gamma}^{\prime}\left(V_{\min}\right)\left(V-V_{\min}\right)\geq 0,\quad\forall V\in\overline{B\left(r,q_{0},q_{1}\right)}.$

By the conventional regularization theory (cf. [37]), the existence of ideal noiseless data $g_{0}^{\ast}\left(\tau\right),g_{1}^{\ast}\left(\tau\right)$ is assumed a priori. The existence of the corresponding exact coefficient $p^{\ast}\left(Y\right)$ and the exact function $w^{\ast}\left(Y,\tau\right)$ in (3.9)–(3.11) is assumed as well. Moreover, this function $p^{\ast}\left(Y\right)$ should also satisfy the conditions $p^{\ast}(Y)=0$ for $Y<0$ and $Y>\sqrt{\overline{c}}$, as does $p(Y)$. Having the function $w^{\ast}\left(Y,\tau\right)$, we apply the transformations (3.14), (3.17) above to obtain the corresponding function $V^{\ast}\left(Y,\tau\right)$. Henceforth, it is natural to consider the functions $q_{0}^{\ast}$ and $q_{1}^{\ast}$ as the noiseless counterparts of the data $q_{0}$ and $q_{1}$, respectively; see (3.19) and (3.20). Thus, we assume below that the function $V^{\ast}\in B(r,q_{0}^{\ast},q_{1}^{\ast})$. Hence, it follows from (3.18)–(3.21) and (4.1) that

$\mathcal{S}(V^{\ast})=0\quad\text{for }(Y,\tau)\in\mathcal{K},\quad\text{and}\quad p^{\ast}(Y)=4V_{Y}^{\ast}(Y,0),\quad Y\in(0,b).$

Consider a sufficiently small number $\sigma\in(0,1)$ that characterizes the noise level between the data $q_{0},q_{1}$ and $q_{0}^{\ast},q_{1}^{\ast}$. To obtain zero boundary conditions at $\left\{Y=0\right\}$ in (3.19) and (3.20), consider two functions $F\in B(r,q_{0},q_{1})$ and $F^{\ast}\in B(r,q_{0}^{\ast},q_{1}^{\ast})$. We assume that

(4.5) $\left\|F-F^{\ast}\right\|_{H^{4}(\mathcal{K})}<\sigma.$

Consider the functions $W,W^{\ast}$,

$W^{\ast}=V^{\ast}-F^{\ast},\quad W=V-F\in B_{0}(2r):=\left\{u\in H_{0}^{4}(\mathcal{K}):\left\|u\right\|_{H^{4}\left(\mathcal{K}\right)}<2r\right\}.$

Besides, the function $W+F\in B(3r,q_{0},q_{1})$ for all $W\in B_{0}(2r)$. We modify the cost functional $J_{\lambda,\gamma}\left(V\right)$ as

$\widetilde{J}_{\lambda,\gamma}(W)=J_{\lambda,\gamma}(W+F),\quad\forall W\in B_{0}(2r).$

It follows from Theorem 4.2 that the functional $\widetilde{J}_{\lambda,\gamma}$ is also globally strictly convex on the ball $\overline{B_{0}(2r)}$ for $\lambda\geq\lambda_{2}=\lambda_{1}\left(\mathcal{K},3r,\alpha\right)\geq\lambda_{1}\left(\mathcal{K},r,\alpha\right)$. Also, by Theorem 4.2, for each value of the parameter $\lambda\geq\lambda_{2}$, the functional $\widetilde{J}_{\lambda,\gamma}\left(W\right)$ has a unique minimizer $W_{\min,\lambda,\gamma}$ on the set $\overline{B_{0}(2r)}$ and

(4.6) $\widetilde{J}_{\lambda,\gamma}^{\prime}(W_{\min,\lambda,\gamma})\left(W-W_{\min,\lambda,\gamma}\right)\geq 0,\quad\forall W\in\overline{B_{0}(2r)}.$

It follows from the proof of Theorem 5 of [20] that inequality (4.6) plays an important role in the proof of Theorem 4.3 below.

Theorem 4.3 (Accuracy estimate of the minimizer [20]).

Assume that (4.5) holds true and let $\widetilde{T}\geq 4b$. We choose

$\beta=\frac{\alpha\left(\widetilde{T}-4b\right)+\mu}{2\left(2\alpha b-\mu\right)},\quad\rho=\frac{1}{2}\min\left\{\beta,1\right\}.$

Let $\lambda_{1}=\lambda_{1}\left(\mathcal{K},r,\alpha\right)$ be the number obtained in Theorem 4.2 and let $\lambda_{2}=\lambda_{1}\left(\mathcal{K},3r,\alpha\right)\geq\lambda_{1}\left(\mathcal{K},r,\alpha\right)$. Let $\sigma_{0}\in\left(0,1\right)$ be a sufficiently small number such that $\ln\sigma_{0}^{-1/\left(2\left(2\alpha b-\mu\right)\right)}\geq\lambda_{2}$. For any $\sigma\in(0,\sigma_{0})$, we choose

(4.7) $\lambda=\lambda\left(\sigma\right)=\ln\sigma^{-1/\left(2\left(2\alpha b-\mu\right)\right)}>\lambda_{2},$
(4.8) $\gamma=\gamma\left(\sigma\right)=2e^{-\lambda\alpha\widetilde{T}}=2\sigma^{\left(\alpha\widetilde{T}\right)/\left(2\left(2\alpha b-\mu\right)\right)},$

so that the regularization parameter $\gamma\in[2e^{-\lambda\alpha\widetilde{T}},1)$. Let $V_{\min,\lambda,\gamma}\left(Y,\tau\right)$ be the unique minimizer of the functional $J_{\lambda,\gamma}\left(V\right)$ on the set $B\left(r,q_{0},q_{1}\right)$. Let the function $p_{\min,\lambda,\gamma}\left(Y\right)$ be defined as $p_{\min,\lambda,\gamma}\left(Y\right)=4\partial_{Y}V_{\min,\lambda,\gamma}\left(Y,0\right)$; see (3.22). Then there exists a constant $C=C(\mathcal{K},\alpha,\mu,b,r)>0$ depending only on the listed parameters such that

(4.9) $\left\|V_{\min,\lambda,\gamma}-V^{\ast}\right\|_{H^{1}\left(\mathcal{K}_{\alpha,\mu,b}\right)}\leq C\sigma^{\rho},$
(4.10) $\left\|p_{\min,\lambda,\gamma}-p^{\ast}\right\|_{L^{2}\left(0,2\alpha b-\mu\right)}\leq C\sigma^{\rho}.$

Now, we state a theorem about the global convergence of the gradient descent method for the minimization of the functional $J_{\lambda,\gamma}\left(V\right)$. Let $\omega\in(0,1)$ be the step size of this method. Fix an arbitrary number $\vartheta\in(0,1/3)$. We take the starting point, denoted by $V^{(0)}$, to be an arbitrary point of the set $B\left(\vartheta r,q_{0},q_{1}\right)\subset B(r,q_{0},q_{1})$. Then the gradient descent method reads as:

(4.11) $V^{\left(n\right)}=V^{\left(n-1\right)}-\omega J_{\lambda,\gamma}^{\prime}\left(V^{\left(n-1\right)}\right),\quad n=1,2,\ldots,$

where $V^{(n)}$ approaches the minimizer $V_{\min,\lambda,\gamma}$ as $n$ grows. This scheme is well defined because $J_{\lambda,\gamma}^{\prime}\left(V^{(n-1)}\right)\in H_{0}^{4}(\mathcal{K})$; see Theorem 4.2. Hence, the functions $V^{(n)}$ satisfy the same boundary conditions as in (4.3) for all $n$.
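For concreteness, a minimal sketch of the iteration (4.11) is given below; J_grad stands for a user-supplied discretization of the Fréchet derivative $J_{\lambda,\gamma}^{\prime}$, and the stopping rule is our own illustrative choice.

```python
import numpy as np

def gradient_descent(J_grad, V0, omega=0.1, n_iter=200, tol=1e-8):
    """Gradient descent scheme (4.11).  Because J_grad(V) is assumed to lie in
    (a discretization of) H_0^4, subtracting it preserves the boundary data of
    the set (4.3) at every iteration."""
    V = V0.copy()
    for _ in range(n_iter):
        g = J_grad(V)
        V = V - omega * g
        if np.linalg.norm(g) < tol:   # illustrative stopping criterion
            break
    return V
```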

Below, we assume that our minimizer $V_{\min,\lambda,\gamma}$ belongs to the set $B\left(\vartheta r,q_{0},q_{1}\right)$, i.e. it is an interior point of the set $B\left(r,q_{0},q_{1}\right)$. This enabled us in [19] to prove the global convergence of the gradient descent scheme (4.11). This assumption is plausible because, by (4.9), the distance between the ideal solution $V^{\ast}$ and the minimizer $V_{\min,\lambda,\gamma}$ is small with respect to the noise level $\sigma\in(0,1)$, although only in the $H^{1}\left(\mathcal{K}_{\alpha,\mu,b}\right)$-norm rather than in the stronger $H^{4}\left(\mathcal{K}\right)$-norm; see (4.3).

Theorem 4.4 (Global convergence of the gradient descent method [19]).

Suppose that the parameters $\lambda$ and $\gamma$ are taken as in Theorem 4.2. Then there exists a sufficiently small constant $\omega_{0}\in(0,1)$ such that the sequence of the gradient descent method satisfies $\left\{V^{(n)}\right\}_{n=0}^{\infty}\subset B\left(r,q_{0},q_{1}\right)$ for every $\omega\in(0,\omega_{0})$. Moreover, for every $\omega\in(0,\omega_{0})$ there exists a number $\theta=\theta(\omega)\in(0,1)$ such that

$\left\|V_{\min,\lambda,\gamma}-V^{\left(n\right)}\right\|_{H^{4}\left(\mathcal{K}\right)}\leq\theta^{n}\left\|V_{\min,\lambda,\gamma}-V^{\left(0\right)}\right\|_{H^{4}\left(\mathcal{K}\right)},\quad n=1,2,\ldots$

Combining Theorems 4.3 and 4.4, we can prove that the functions $V^{(n)}$ converge to the ideal solution $V^{\ast}$ as the noise level $\sigma$ in the data tends to zero. In addition, we obtain the convergence of the corresponding sequence $\left\{p^{(n)}\right\}_{n=0}^{\infty}$ towards the ideal function $p^{\ast}$, via the expression $p^{(n)}(Y)=4\partial_{Y}V^{(n)}(Y,0)$ of (3.22).

Theorem 4.5 ([19]).

Suppose all conditions of Theorems 4.3 and 4.4 hold. Then the following convergence estimate is valid:

$\left\|V^{\ast}-V^{\left(n\right)}\right\|_{H^{1}\left(\mathcal{K}_{\alpha,\mu,b}\right)}+\left\|p^{\ast}-p^{\left(n\right)}\right\|_{L^{2}\left(0,2\alpha b-\mu\right)}\leq C\sigma^{\rho}+C\theta^{n}\left\|V_{\min,\lambda,\gamma}-V^{\left(0\right)}\right\|_{H^{4}\left(\mathcal{K}\right)},\quad n=1,2,\ldots,$

where the constant $C=C(\mathcal{K},\alpha,\mu,b,r)>0$ depends only on the listed parameters.

It is now worth mentioning that in our convergence results above, $r$ is arbitrary and the starting point $V^{(0)}$ is an arbitrary point of $B(\vartheta r,q_{0},q_{1})$ with $\vartheta$ fixed in $(0,1/3)$. Therefore, our numerical method for solving the underlying nonlinear inverse problem does not require any advanced knowledge of a small neighborhood of the true solution. In this sense, the method is globally convergent; see Definition 1 in the Introduction.

Remark 4.6.

Even though the values of $\lambda$ in our CWF should be large according to the theory, our rich computational experience with the convexification tells us that we can choose $\lambda$ between 1 and 3. We refer the reader to our previous publications on the convexification, see [15, 16, 17, 18, 19, 20, 38, 39] and some other references cited therein. In the present work, we choose $\lambda=1.05$, and it works well.

5. Numerical studies

To generate the data for the inverse problem, we have solved the forward problem (2.3)-(2.6) by the standard implicit scheme. Therefore, we describe in this section our numerical solution of only the inverse problem.

It is worth noting that we assume a buried object in the sand region $\left\{z<0\right\}\cap\Omega$ of our computational domain $\Omega$; see (2.2). The dielectric constant of this object is assumed to be different from that of the dry sand. Hence, the sand region is actually heterogeneous. The solution of problem (2.3) includes the signal from the sand, which causes a significant challenge when working on the inversion. In this paper, we follow the heuristic approach commenced in works of this research group with experimentally collected data [15, 17] to remove the sand signals. To do so, for each source location, we consider $u_{\text{ref}}(\mathbf{x},t)$ as the reference data. The reference data are generated for the case when an inclusion is not present in the simulated sandbox. This means that we take $\varepsilon_{r}(\mathbf{x})=4$ for $z\leq 0$ and $\varepsilon_{r}(\mathbf{x})=1$ for $z>0$; indeed, the dielectric constant of dry sand equals 4 (cf. [17, 18]). We remark that we only need to generate these reference data once for all numerical examples. Let $u^{\ast}(\mathbf{x},t)$ be the simulated data in the case when an inclusion is present in the sandbox. Then, our target data after filtering out the sand signals are defined by

$u_{\text{sc}}(\mathbf{x},t)=u^{\ast}(\mathbf{x},t)-u_{\text{ref}}(\mathbf{x},t),$

and this is the function we work with to solve our inverse problem.

5.1. Parameters

In this part, we show numerically how our proposed method creates 3-D images from SAR data taken on three lines $L_{j}^{\text{src}}$, $j=1,2,3$. Experimentally, the data collection is very expensive. Therefore, it is pertinent to use only three lines in these numerical studies. This prepares a playground for working with experimental SAR data in a future publication. Unlike [10], in which the Lippmann–Schwinger equation was used to generate the data, our SAR data are computed by the finite difference solver mentioned at the beginning of this section. Cf. [10, 19], we use the standard finite difference approximation to compute the minimizer of $J_{\lambda,\gamma}$ defined in (4.2). In this regard, discrete operators are formulated to approximate the operator $\mathcal{S}$. Since the integration in $J_{\lambda,\gamma}$ is over the two-dimensional rectangle $\mathcal{K}$, we introduce equidistant step sizes $\left(\Delta_{Y},\Delta_{\tau}\right)$ for the grid points $\left(Y,\tau\right)$ in that domain. Then the discrete solution is sought by minimizing the corresponding discrete cost functional with respect to the values of the function $V$ at the grid points. As in [10], we only consider the regularization term in the $H^{2}(\mathcal{K})$-norm for the numerical inversion, as compared with the $H^{4}(\mathcal{K})$-norm of the theory. This reduces the complexity of the computations and thus saves elapsed time.
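The sketch below indicates one way the discrete counterpart of the functional (4.2) could be assembled on the grid just described, with a crude finite-difference substitute for the $H^{2}(\mathcal{K})$ regularization term; it is an illustration of the structure only, not the authors' solver, and the quadrature and penalty details are our own assumptions.

```python
import numpy as np

def J_discrete(V, dY, dtau, lam=1.05, alpha=0.49, gamma=1e-10):
    """Finite-difference evaluation of a discrete analogue of (4.2).
    V[i, k] ~ V(Y_i, tau_k) on an equidistant grid with steps (dY, dtau)."""
    nY, nt = V.shape
    Y = np.arange(nY) * dY
    tau = np.arange(nt) * dtau
    # central differences for the operator S(V) of (4.1)
    V_Y  = np.gradient(V, dY, axis=0)
    V_YY = np.gradient(V_Y, dY, axis=0)
    V_Yt = np.gradient(V_Y, dtau, axis=1)
    S = V_YY - 2.0*V_Yt + 4.0*np.outer(V_Y[:, 0], np.ones(nt))*V
    # Carleman Weight Function psi_lambda(Y, tau) = exp(-2*lam*(Y + alpha*tau))
    psi = np.exp(-2.0*lam*(Y[:, None] + alpha*tau[None, :]))
    data_term = np.sum(S**2 * psi) * dY * dtau
    # crude H^2-type penalty: the function and its first and second differences
    V_t  = np.gradient(V, dtau, axis=1)
    V_tt = np.gradient(V_t, dtau, axis=1)
    reg = np.sum(V**2 + V_Y**2 + V_t**2 + V_YY**2 + V_Yt**2 + V_tt**2) * dY * dtau
    return data_term + gamma * reg
```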

Remark 5.1.

Below we work only with dimensionless variables; see (3.3). In this context, 1 in the dimensionless spatial variable is 0.3 (meter) in reality, and 1 in the dimensionless time variable is 1 (nanosecond).

As to our forward problem (2.3)-(2.6), we consider $\Omega=(-2,2)^{3}$. Essentially, our domain of interest is $(-0.6\text{ m},0.6\text{ m})^{3}$, which corresponds to a scanning ground region of $\left|0.6-(-0.6)\right|^{3}/2=0.864$ ($\text{m}^{3}$). For $L_{j}^{\text{src}}$, we take $j\in\overline{1,j_{0}}$ with $j_{0}=3$ lines of sources with the increment 0.05 (1.5 cm), which is the distance between the 3 lines of sources. The central source position of our entire configuration is located at $x^{0}=0$, $y_{2}^{0}=-1.7$, $z^{0}=0.8$. With the increment 0.05, our source $y$-locations are $y_{1}^{0}=-1.75$, $y_{2}^{0}=-1.7$, $y_{3}^{0}=-1.65$, respectively. The length of each of these lines equals 1.4 (0.42 m), i.e. $L=0.7$, and every source line has 28 source positions, i.e. $N=28$. Our circular disk antenna is taken with the diameter $D=0.1$ (3 cm), and its associated parameters are chosen as $\tau_{0}=1$ (ns), $\omega_{0}=6\pi\times 10^{8}$ (Hz), $\alpha_{0}=8\pi\times 10^{6}/\tau_{0}$ and $\widetilde{D}=h_{\mathbf{x}}=\left|\Omega\right|/(N_{\mathbf{x}}-1)$ with $N_{\mathbf{x}}=81$. As a result, the carrier wavelength is $\widetilde{\lambda}_{0}=2\pi c_{0}/\omega_{0}=1$ (m), and the Fresnel number $\text{Fr}\approx D^{2}/(\widetilde{\lambda}_{0}\left|z^{0}\right|)=1.25\times 10^{-3}\ll 1$ guarantees the far field zone; see the Fraunhofer condition in [42, section 7.1.3]. The refinement $N_{\mathbf{x}}$ is chosen because $h_{\mathbf{x}}=0.05$ is compatible with an increment of 1.5 (cm).

Figure 2. A typical sample of the time-dependent computationally simulated data for our Model 1. Here $\tau=t/\text{ns}$ is the dimensionless time, where $t$ is time in nanoseconds (ns); see (3.3). (a) The simulated data at the central source position $x^{0}=0$, $y^{0}_{2}=-1.7$ before truncation with $T=10$. (b) The corresponding simulated data for all source positions and 1000 time samples. (c) The simulated data after truncation. (d) The corresponding truncated data for all source positions and 1000 time samples.

For our numerical results, the parameters were chosen as

$\lambda=1.05,\quad\alpha=0.49,\quad\gamma=10^{-10},$

and the discretization $\left(\Delta_{Y},\Delta_{\tau}\right)=(8\times 10^{-3},8\times 10^{-3})$. As mentioned above, even though our theory is valid for large $\lambda$, we have observed numerically in all our previous publications on the convexification that $\lambda\in[1,3]$ works well; see our previous numerical results in, e.g., [15, 16, 17, 18, 19, 20] and [38, 39]. The elevation angle is taken as $\theta=\pi/4$. Similar to our first work on SAR [10], we multiply the simulated delay-and-sum data by a calibration factor $CF$; here, we choose $CF=1.75\times 10^{15}$. Since our method relies on solving 1D problems, which is an approximate model, we have no choice but to use this calibration factor.

5.2. Reconstruction results

We consider two models for our numerical testing. In both cases the objects are centered at the point $(0,0,-0.14)$.

  • Model 1: An ellipsoid with one semi-major axis and two semi-minor axes of lengths 0.2 (6 cm), 0.12 (3.6 cm) and 0.04 (1.2 cm) in the $x$, $y$ and $z$ directions, respectively. This ellipsoid has the dielectric constant $\varepsilon_{\text{true}}\left(\text{ellip}\right)=15$.

  • Model 2: A rectangular prism with dimensions $0.18\times 0.08\times 0.08$ (L$\times$W$\times$H), corresponding to $5.4\times 2.4\times 2.4$ in $\text{cm}^{3}$. It has the dielectric constant $\varepsilon_{\text{true}}\left(\text{prism}\right)=23.8$.

Remark 5.2.

We recall in the context of (3.4) that 1 in the dimensionless spatial variable is 0.3 (m) in reality. Therefore, the top of the surface of our simulated objects is close to the air/ground interface $\left\{z=0\right\}$, within $\left|-0.14+0.04\right|\times 30=3$ (cm) of burial depth, which is typical in de-mining operations; cf., e.g., [40].

These two examples model well the so-called “appearing” dielectric constant of the metallic objects we experimented with in [41]. It was established numerically in [41] that the range of the appearing dielectric constant of metals is $[10,30]$. Moreover, this range includes the dielectric constant 23.8 of the experimental water-filled glass bottle of [19]. Both metallic and glass-bodied land mines are frequently used on battlefields.

Suppose that our SAR data are $G(\mathbf{x}_{j}^{0,m},t)$ for each $j=\overline{1,j_{0}}$ and $m=\overline{1,N}$. Recall that $\mathbf{x}_{j}^{0,m}=\left(x_{m}^{0},y_{j}^{0},z^{0}\right)\in L_{j}^{\text{src}}$ is the source location number $m$ on the line of sources $L_{j}^{\text{src}}$ number $j$. Prior to extracting the data $g_{0}$ and $g_{1}$ in (3.6), we perform a data preprocessing procedure to filter out unwanted artifacts observed in the time-resolved signals. First, we eliminate small “peaks” of the signals by setting:

(5.1) $\widetilde{G}\left(\mathbf{x}_{j}^{0,m},t\right)=\begin{cases}0&\text{if }\left|G\left(\mathbf{x}_{j}^{0,m},t\right)\right|<0.1\max_{t\in\left[0,T\right],\,m\in[1,N]}\left|G\left(\mathbf{x}_{j}^{0,m},t\right)\right|,\\ G\left(\mathbf{x}_{j}^{0,m},t\right)&\text{otherwise}.\end{cases}$

Second, we use the built-in function findpeaks in MATLAB to count the number of peaks. Then, we deliberately keep only the first two peaks of the signals, because we believe that those are the most important ones for the object. The resulting signals are denoted by $\widehat{G}(\mathbf{x}_{j}^{0,m},t)$. We continue our preprocessing procedure by applying the delay-and-sum technique to $\widehat{G}(\mathbf{x}_{j}^{0,m},t)$ as in [10]. For brevity, we do not detail the internal steps of this data preprocessing procedure, but refer to [10, section III-A].
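A minimal sketch of these two preprocessing steps is shown below; scipy.signal.find_peaks plays the role of the MATLAB findpeaks call, and the way each trace is windowed around its first two peaks is our own assumption, since the text does not fix it.

```python
import numpy as np
from scipy.signal import find_peaks

def preprocess_line(G, keep_peaks=2, rel_threshold=0.1):
    """G[m, k]: signal at source position m and time sample k on one line of sources.
    Step 1: zero small peaks as in (5.1).  Step 2: keep only the part of each
    trace up to (and including) its first `keep_peaks` peaks."""
    G_tilde = np.where(np.abs(G) < rel_threshold*np.abs(G).max(), 0.0, G)
    G_hat = np.zeros_like(G_tilde)
    for m in range(G_tilde.shape[0]):
        peaks, _ = find_peaks(np.abs(G_tilde[m]))
        kept = peaks[:keep_peaks]
        if kept.size:
            G_hat[m, :kept[-1] + 1] = G_tilde[m, :kept[-1] + 1]
    return G_hat
```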

Now we focus on the preprocessing on the delay-and-sum data. In this context, for each line Ljscr,L_{j}^{\text{scr}}, j=1,2,3j=1,2,3 of source, the data are along the slant range planes which we reconstruct. Let PjP_{j} be the slant range plane which corresponds to the line of sources Ljscr,j=1,2,3.L_{j}^{\text{scr}},j=1,2,3. In this plane PjP_{j}, we, slightly abusing the same notations, consider xsx_{s} as the source number and ysy_{s} as the variable for the axis orthogonal to the line of source LjscrL_{j}^{\text{scr}}. Those ysy_{s} are actually the transformation of tt using time of arrival. Thereby, we denote the delay-and-sum data by SADj(xs,ys)\text{SAD}_{j}(x_{s},y_{s}). We now introduce a truncation to control the size of the computed object. The “heuristically” truncated delay-and-sum data are given by

(5.2) \text{SAD}_{j}^{\text{tr}}(x_{s},y_{s})=\begin{cases}\text{SAD}_{j}(x_{s},y_{s}),&\text{if }\text{SAD}_{j}(x_{s},y_{s})\geq 0.95\max_{x_{s},y_{s}}\left|\text{SAD}_{j}(x_{s},y_{s})\right|,\\ 0,&\text{otherwise}.\end{cases}

This way, we use only the absolute values of the truncated data as the data $g_{0}$ for each 1D inverse problem along the central line of the antenna. We also remark that the argument of [19, Remark 4] implies that the Neumann data $g_{1}$ equal the derivative of $g_{0}$, i.e. $g_{1}=g_{0}^{\prime}$. Our data preprocessing is exemplified in Figures 2a–2d; a short sketch of the truncation (5.2) and of forming $g_{0}$ and $g_{1}$ follows below.
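In MATLAB, these two operations may look as follows; the array SAD of delay-and-sum data on $P_{j}$ (sources along rows, $y_{s}$ along columns), the source index m and the grid step dy are our own notation, not notation used elsewhere in this paper, and the assignment of $g_{0}$ to one row of the truncated array is only one plausible interpretation of the text above.

SADtr = SAD;
SADtr(SAD < 0.95*max(abs(SAD(:)))) = 0;   % truncation (5.2)
g0 = abs(SADtr(m, :));                    % data g0 for the 1D problem of source number m
g1 = gradient(g0, dy);                    % Neumann data g1 = g0' via finite differences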

To illustrate how our forward model works, consider Model 1 as an example. This object is centered at $(0,0,-0.14)$. The central point of the source line with $y_{2}^{0}=-1.7$ is located at $(0,-1.7,0.8)$. Therefore, the distance between these two points is $\sqrt{(-1.7)^{2}+(0.8+0.14)^{2}}\approx 1.94$. Hence, the backscattering time-resolved signal should exhibit a bump at the point $1.94\times 2=3.88$ of the time domain. This is approximately what we observe in Figure 2a. More precisely, our computed signal bumps at the point $\tau=4.13$, which differs from $\tau=3.88$ by less than 10%. Given that we have the whole ellipsoid rather than just its central point, this difference is acceptable for our simulations and confirms the reliability of our forward solver. We could stop recording the wave after the object's bumps, i.e. after approximately the point $\tau=5$; see again Figure 2a. However, we enlarge the time domain up to the point $\tau=T=10$ to avoid any possible boundary reflections of the wave propagation. Moreover, we use 1000 time samples in Figure 2b to identify well the location of every wave with respect to $t$.
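This consistency check can be reproduced with a few lines of MATLAB, under the assumption of a unit wave speed in the dimensionless variables, so that the expected arrival time equals twice the source-to-target distance:

target = [0, 0, -0.14];                  % center of the ellipsoid of Model 1
source = [0, -1.7, 0.8];                 % central point of the line of sources
tau_expected = 2*norm(source - target);  % approximately 3.88, expected two-way travel time
tau_observed = 4.13;                     % location of the bump in Figure 2a
rel_diff = abs(tau_observed - tau_expected)/tau_expected;   % about 0.06, i.e. within 10%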

For each line of sources $L_{j}^{\text{scr}}$, $j=1,2,3$, we use the truncation (5.2) to form a rectangular area $(x_{s},y_{s})\in[s_{1}^{j},s_{2}^{j}]\times[l_{1}^{j},l_{2}^{j}]\subset P_{j}$. As mentioned above when introducing the variables $x_{s},y_{s}$, this rectangle lies in the slant range plane, where $y_{s}$ is obtained from $\tau$ via the times of arrival.

Remark 5.3.

The selection of this rectangular region explains why our computed objects in Figures 3b, 3c and Figures 4b, 4c have a rectangle-like shape. We do not focus on the reconstruction of the shape of the object, since this is a very challenging task. Rather, we are interested in the accuracy of the computed dielectric constant $\widetilde{\varepsilon}_{r}$ of our two models. We are also interested in the dimensions of the computed object, and the above-mentioned rectangular region helps control those dimensions.

Given a slant range plane $P_{j}$, we solve $N$ 1D CIPs for the $N$ sources $\mathbf{x}_{j}^{0,m}=\left(x_{m}^{0},y_{j}^{0},z^{0}\right)\in L_{j}^{\text{scr}}$, $m=1,...,N$. Let the corresponding discrete solutions, defined in $P_{j}$, be $\widetilde{\varepsilon}_{1,j}(x_{m}^{0},y_{s})$. Note again that, slightly abusing notation, the variable $y_{s}$ here denotes the grid points in $P_{j}$. To reduce artifacts, we average all the 1D solutions over our rectangular area $[s_{1}^{j},s_{2}^{j}]\times[l_{1}^{j},l_{2}^{j}]\subset P_{j}$. To do so, for each $j$, we first define the semi-discrete function $\widetilde{\varepsilon}_{2,j}(x_{m}^{0},y)$ for $x_{m}^{0}\in[s_{1}^{j},s_{2}^{j}]$, $y\in[l_{1}^{j},l_{2}^{j}]$ as

(5.3) \widetilde{\varepsilon}_{2,j}\left(x_{m}^{0},y\right)=\begin{cases}30,&\text{if }\max_{y_{s}\in[l_{1}^{j},l_{2}^{j}]}\left|\widetilde{\varepsilon}_{1,j}\left(x_{m}^{0},y_{s}\right)\right|>500,\\ \max_{y_{s}\in[l_{1}^{j},l_{2}^{j}]}\left|\widetilde{\varepsilon}_{1,j}\left(x_{m}^{0},y_{s}\right)\right|,&\text{otherwise,}\end{cases}

since our range of interest of the dielectric constant is $[10,30]$. Thus, for each line of sources $L_{j}^{\text{scr}}$, our computed slant-range solution in $P_{j}$, denoted by $\widetilde{\varepsilon}_{r,j}(x_{m}^{0},y_{s})$, is given by

(5.4) \widetilde{\varepsilon}_{r,j}\left(x_{m}^{0},y_{s}\right)=\begin{cases}\left|s_{2}^{j}-s_{1}^{j}\right|^{-1}\sum_{x_{m}^{0}\in\left[s_{1}^{j},s_{2}^{j}\right]}\widetilde{\varepsilon}_{2,j}\left(x_{m}^{0},y\right)&\text{in }\left[s_{1}^{j},s_{2}^{j}\right]\times\left[l_{1}^{j},l_{2}^{j}\right],\\ 1&\text{elsewhere}.\end{cases}

We define our computed slant-range solution in each slant range plane $P_{j}$ by (5.4) because, based on the above data preprocessing, we expect the inclusion to lie only within the rectangular area. Therefore, it is natural that the dielectric constant outside of that area should be unity. Finally, having all computed slant-range solutions $\widetilde{\varepsilon}_{r,1},\widetilde{\varepsilon}_{r,2},\widetilde{\varepsilon}_{r,3}$, we assign a single value of the final dielectric constant $\widetilde{\varepsilon}_{r}$ of our object by averaging the maximal values of those solutions. In particular, let $\widetilde{\varepsilon}_{\max,j}=\max_{(x_{m}^{0},y_{s})\in P_{j}}\widetilde{\varepsilon}_{r,j}(x_{m}^{0},y_{s})$ and let $\text{SL}\subset\mathbb{R}^{3}$ be the prism formed by the 3 slant-range planes $P_{j}$. We then compute

(5.5) \widetilde{\varepsilon}_{r}\left(x,y,z\right)=\begin{cases}\frac{1}{3}\left(\widetilde{\varepsilon}_{\max,1}+\widetilde{\varepsilon}_{\max,2}+\widetilde{\varepsilon}_{\max,3}\right)&\text{in }\Omega\cap\text{SL},\\ 1&\text{elsewhere}.\end{cases}

In addition, we perform a linear interpolation with respect to $j$ to improve the resolution of the images. A minimal sketch of the assembly steps (5.3)–(5.5) is given below.
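In this MATLAB sketch, eps1 denotes the array of 1D solutions $\widetilde{\varepsilon}_{1,j}(x_{m}^{0},y_{s})$ on one slant range plane $P_{j}$ (sources along rows, $y_{s}$ along columns), and idxS, idxY are the index sets of the rectangle $[s_{1}^{j},s_{2}^{j}]\times[l_{1}^{j},l_{2}^{j}]$; these names are our own assumptions, and for simplicity the normalization in (5.4) is taken here as the mean over the sources in that rectangle.

colmax = max(abs(eps1(idxS, idxY)), [], 2);   % max over y_s for each source, cf. (5.3)
colmax(colmax > 500) = 30;                    % cap blown-up values at 30, cf. (5.3)
epsr_j = ones(size(eps1));                    % unity background, cf. (5.4)
epsr_j(idxS, idxY) = mean(colmax);            % constant value inside the rectangle, cf. (5.4)
epsmax_j = max(epsr_j(:));                    % maximum over P_j, used in (5.5)
% The final 3D image (5.5) assigns (epsmax_1 + epsmax_2 + epsmax_3)/3 inside the
% intersection of Omega and SL and 1 elsewhere, followed by a linear interpolation in j.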

5.3. Computational results

We first present our results for the case of noiseless data and then for the case of noisy data.

5.3.1. Noiseless data

In the case of Model 1, our computed dielectric constant for the ellipsoid is $\widetilde{\varepsilon}_{r}(\text{ellip})=15.06$, compared with the true value $\varepsilon_{\text{true}}(\text{ellip})=15$, i.e. a relative error of 0.4%. For Model 2, the computed dielectric constant for the rectangular prism is $\widetilde{\varepsilon}_{r}(\text{prism})=22.97$, compared with the true value $\varepsilon_{\text{true}}(\text{prism})=23.8$, i.e. a relative error of about 3.5%. Therefore, our approximations of the dielectric constants are quite accurate.

We now comment on the dimensions of the computed images. Recall that, by Remark 5.3, our images have only rectangular shapes. One can see in Figures 3a and 3b that the dimensions of the computed ellipsoid of Model 1 are about $0.55\times 0.04\times 0.08$ (L$\times$W$\times$H). On the other hand, the dimensions of the true ellipsoid are $0.4\times 0.24\times 0.08$ (L$\times$W$\times$H). This means that the computed object is longer in length, smaller in width, and the same in height. Similarly, for Model 2, one can see in Figures 4a and 4b that the dimensions of the computed prism are $0.5\times 0.03\times 0.03$ (L$\times$W$\times$H), while the dimensions of the true prism are $0.36\times 0.16\times 0.16$ (L$\times$W$\times$H). Hence, the computed prism is longer in length but smaller in width and height than the true prism.

We remark that the isovalue used to display the dielectric constants (as well as the sizes of the imaged targets) in Figures 3b and 4b is approximately $\varepsilon_{\text{true}}-1$, because we want to see how close our approximate dielectric constant is to the true one.

Consider now Figures 3c and 4c, in which we display the 2D cross-sections of the computed images by the plane $\{y=0\}$. The computed location is not very accurate, since the forward solver introduces a difference of up to 10% in location. We hope to improve this in our future work.

5.3.2. Noisy data

While the above results are for the case of noiseless data, we now introduce noise in the data. We add random additive noise to the truncated simulated data $\widetilde{G}(\mathbf{x}_{j}^{0,m},t)$ (see (5.1)). In this regard, we introduce

(5.6) \widetilde{G}_{\text{noise}}\left(\mathbf{x}_{j}^{0,m},t\right)=\widetilde{G}\left(\mathbf{x}_{j}^{0,m},t\right)+\sigma\,\text{rand}\left(\mathbf{x}_{j}^{0,m},t\right)\max_{\mathbf{x}_{j}^{0,m},t}\left|\widetilde{G}\left(\mathbf{x}_{j}^{0,m},t\right)\right|,\quad j=1,2,3,\ m=1,...,N.

Here, $\sigma\in(0,1)$ represents the noise level and “rand” is a random number uniformly distributed in the interval $(-1,1)$. We now use the noisy data (5.6) for both Models 1 and 2 with $\sigma=0.05$, i.e. 5% noise; a short sketch of this noise model is given below.
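A minimal MATLAB sketch of (5.6) for a single preprocessed trace Gt (a vector of time samples of $\widetilde{G}$ at one source position) may read as follows; for simplicity, the maximum over all sources and times is replaced here by the maximum over this single trace, and the variable names are our own assumptions.

sigma  = 0.05;                              % 5% noise level
noise  = 2*rand(size(Gt)) - 1;              % uniformly distributed in (-1,1)
Gnoise = Gt + sigma*noise*max(abs(Gt));     % noisy data, cf. (5.6)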

The resulting computed dielectric constant for the ellipsoid is $\widetilde{\varepsilon}_{r}(\text{ellip})=13.52$, compared with the true value $\varepsilon_{\text{true}}(\text{ellip})=15$. For Model 2, our computed dielectric constant for the rectangular prism is $\widetilde{\varepsilon}_{r}(\text{prism})=22.08$, compared with the true value $\varepsilon_{\text{true}}(\text{prism})=23.8$. For both models, the images obtained from noisy data are about the same as those in Figures 3 and 4 for noiseless data.

(a) True. (b) Computed. (c) Reconstructed slant-range image.
Figure 3. Our 3-D reconstruction of the ellipsoid of Model 1 with the true dielectric constant $\varepsilon_{\text{true}}(\text{ellip})=15$. This is the case of noiseless data. The ellipsoid is buried in a sandbox with the dielectric constant $\varepsilon(\text{sand})=4$. The data are taken in the air with the dielectric constant $\varepsilon(\text{air})=1$. (a) True 3D image with dimensions $0.4\times 0.24\times 0.08$ (L$\times$W$\times$H). (b) Reconstructed image of the ellipsoid with $\widetilde{\varepsilon}_{\text{comp}}(\text{ellip})=15.06$. The isovalue 14 is used. The dimensions of the computed ellipsoid are $0.55\times 0.04\times 0.08$ (L$\times$W$\times$H). Hence, the computed object is longer in length, smaller in width, and the same in height as compared with the true ellipsoid. (c) The cross-section of the 3-D image by the plane $\{y=0\}$, superimposed with the true object (solid curve). The image with 5% noise in the data (see (5.6)) is about the same. The computed dielectric constant in the case of noisy data is $\widetilde{\varepsilon}_{\text{comp}}(\text{ellip})=13.52$.
(a) True. (b) Computed. (c) Reconstructed slant-range image.
Figure 4. Our 3-D reconstruction of the rectangular prism of Model 2 with the true dielectric constant $\varepsilon_{\text{true}}(\text{prism})=23.8$. This is the case of noiseless data. The prism is buried in a sandbox with the dielectric constant $\varepsilon(\text{sand})=4$. The data are taken in the air with the dielectric constant $\varepsilon(\text{air})=1$. (a) True 3D image with dimensions $0.18\times 0.08\times 0.08$ (L$\times$W$\times$H). (b) Reconstructed image of the prism with $\widetilde{\varepsilon}_{\text{comp}}(\text{prism})=22.97$. The isovalue 22.5 is used. The dimensions of the computed prism are $0.5\times 0.03\times 0.03$ (L$\times$W$\times$H). Hence, the computed prism is longer in length but smaller in width and height than the true prism. (c) The cross-section of the 3-D image by the plane $\{y=0\}$, superimposed with the true object (solid curve). The image with 5% noise in the data (see (5.6)) is about the same. The computed dielectric constant in the case of noisy data is $\widetilde{\varepsilon}_{\text{comp}}(\text{prism})=22.08$.

6. Concluding remarks

In summary, we have numerically tested the convexification-based nonlinear SAR (CONSAR) imaging technique for the reconstruction of images of the dielectric constant of buried objects in a three-dimensional setting. We have numerically observed that CONSAR provides accurate values of the dielectric constants of targets. However, the computed locations and sizes of the tested targets are not as accurate as the convexification usually provides for conventional CIPs; see, e.g., the works of this research group on applications of the convexification to experimental data [17, 18]. We hope to address these questions in follow-up publications.

Acknowledgment

Vo Anh Khoa acknowledges Professor Taufiquar Khan (University of North Carolina at Charlotte) for the moral support of his research career.

References

  • [2] M. Amin, Through-the-wall Radar Imaging, CRC Press, Boca Raton, FL, 2011.
  • [3] S. A. Carn, Application of synthetic aperture radar (SAR) imagery to volcano mapping in the humid tropics: a case study in East Java, Indonesia, Bulletin of Volcanology 61 (1-2) (1999) 92–105. doi:10.1007/s004450050265.
  • [4] G. Chavent, Nonlinear Least Squares for Inverse Problems: Theoretical Foundations and Step-by-Step Guide for Applications, Scientific Computation, Springer, New York, 2009.
  • [5] A. V. Goncharsky, S. Y. Romanov, Iterative methods for solving coefficient inverse problems of wave tomography in models with attenuation, Inverse Problems 33 (2017) 025003.
  • [6] A. V. Goncharsky, S. Y. Romanov, A method of solving the coefficient inverse problems of wave tomography, Computers and Mathematics with Applications 77 (2019) 967–980.
  • [7] M. Gilman, E. Smith, S. Tsynkov, Transionospheric Synthetic Aperture Imaging, Springer International Publishing, 2017. doi:10.1007/978-3-319-52127-5.
  • [8] S. Rotheram, J. T. Macklin, Inverse Methods for Ocean Wave Imaging by SAR, in: Inverse Methods in Electromagnetic Imaging, Springer Netherlands, 1985, pp. 907–930. doi:10.1007/978-94-009-5271-3_11.
  • [9] L. Nguyen, M. Ressler, J. Sichina, Sensing through the wall imaging using the Army Research Lab ultra-wideband synchronous impulse reconstruction (UWB SIRE) radar, in: K. I. Ranney, A. W. Doerry (Eds.), Radar Sensor Technology XII, SPIE, 2008. doi:10.1117/12.776869.
  • [10] M. V. Klibanov, A. V. Smirnov, V. A. Khoa, A. J. Sullivan, L. H. Nguyen, Through-the-wall nonlinear SAR imaging, IEEE Transactions on Geoscience and Remote Sensing 59 (9) (2021) 7475–7486. doi: 10.1109/TGRS.2021.3055805
  • [11] M. V. Klibanov, N. T. Thành, Recovering dielectric constants of explosives via a globally strictly convex cost functional, SIAM Journal on Applied Mathematics 75 (2) (2015) 518–537. doi:10.1137/140981198.
  • [12] N. T. Thành, L. Beilina, M. V. Klibanov, M. A. Fiddy, Imaging of buried objects from experimental backscattering time-dependent measurements using a globally convergent inverse algorithm, SIAM Journal on Imaging Sciences 8 (1) (2015) 757–786. doi:10.1137/140972469.
  • [13] A. E. Kolesov, M. V. Klibanov, L. H. Nguyen, D.-L. Nguyen, N. T. Thành, Single measurement experimental data for an inverse medium problem inverted by a multi-frequency globally convergent numerical method, Applied Numerical Mathematics 120 (2017) 176–196. doi:10.1016/j.apnum.2017.05.007.
  • [14] D.-L. Nguyen, M. V. Klibanov, L. H. Nguyen, M. A. Fiddy, Imaging of buried objects from multi-frequency experimental data using a globally convergent inversion method, Journal of Inverse and Ill-posed Problems 26 (4) (2018) 501–522. doi:10.1515/jiip-2017-0047.
  • [15] M. V. Klibanov, A. E. Kolesov, D.-L. Nguyen, Convexification method for an inverse scattering problem and its performance for experimental backscatter data for buried targets, SIAM Journal on Imaging Sciences 12 (1) (2019) 576–603. doi:10.1137/18m1191658.
  • [16] V. A. Khoa, M. V. Klibanov, L. H. Nguyen, Convexification for a three-dimensional inverse scattering problem with the moving point source, SIAM Journal on Imaging Sciences 13 (2) (2020) 871–904. doi:10.1137/19m1303101.
  • [17] V. A. Khoa, G. W. Bidney, M. V. Klibanov, L. H. Nguyen, L. H. Nguyen, A. J. Sullivan, V. N. Astratov, Convexification and experimental data for a 3D inverse scattering problem with the moving point source, Inverse Problems 36 (8) (2020) 085007. doi:10.1088/1361-6420/ab95aa.
  • [18] V. A. Khoa, G. W. Bidney, M. V. Klibanov, L. H. Nguyen, L. H. Nguyen, A. J. Sullivan, V. N. Astratov, An inverse problem of a simultaneous reconstruction of the dielectric constant and conductivity from experimental backscattering data, Inverse Problems in Science and Engineering (2020) 1–24. doi:10.1080/17415977.2020.1802447.
  • [19] M. V. Klibanov, V. A. Khoa, A. V. Smirnov, L. H. Nguyen, G. W. Bidney, L. H. Nguyen, A. J. Sullivan, V. N. Astratov, Convexification inversion method for nonlinear SAR imaging with experimentally collected data, Journal of Applied and Industrial Mathematics 15 (3) (2021) 413–436. doi:10.1134/S1990478921030054.
  • [20] A. V. Smirnov, M. V. Klibanov, A. J. Sullivan, L. H. Nguyen, Convexification for an inverse problem for a 1D wave equation with experimental data, Inverse Problems 36 (9) (2020) 095008. doi:10.1088/1361-6420/abac9a.
  • [21] M. V. Klibanov, O. V. Ioussoupova, Uniform strict convexity of a cost functional for three-dimensional inverse scattering problem, SIAM Journal on Mathematical Analysis 26 (1) (1995) 147–179. doi:10.1137/s0036141093244039.
  • [22] M. V. Klibanov, Global convexity in a three-dimensional inverse acoustic problem, SIAM Journal on Mathematical Analysis 28 (6) (1997) 1371–1388. doi:10.1137/s0036141096297364.
  • [23] A. B. Bakushinskii, M. V. Klibanov, N. A. Koshev, Carleman weight functions for a globally convergent numerical method for ill-posed Cauchy problems for some quasilinear PDEs, Nonlinear Analysis: Real World Applications 34 (2017) 201–224. doi:10.1016/j.nonrwa.2016.08.008.
  • [24] M. V. Klibanov, J. Li, Inverse Problems and Carleman Estimates, De Gruyter, 2021. doi:10.1515/9783110745481.
  • [25] A. L. Bukhgeim, M. V. Klibanov, Uniqueness in the large of a class of multidimensional inverse problems, Soviet Mathematics Doklady 17 (1981) 244–247.
  • [26] L. Beilina, M. V. Klibanov, Approximate Global Convergence and Adaptivity for Coefficient Inverse Problems, Springer US, 2012. doi:10.1007/978-1-4419-7805-9.
  • [27] M. Bellassoued, M. Yamamoto, Carleman Estimates and Applications to Inverse Problems for Hyperbolic Systems, Springer Japan, 2017. doi:10.1007/978-4-431-56600-7.
  • [28] M. V. Klibanov, A. A. Timonov, Carleman Estimates for Coefficient Inverse Problems and Numerical Applications, VSP, Utrecht, 2004. doi:10.1515/9783110915549.
  • [29] M. V. Klibanov, Carleman estimates for global uniqueness, stability and numerical methods for coefficient inverse problems, Journal of Inverse and Ill-Posed Problems 21 (4) (2013). doi:10.1515/jip-2012-0072.
  • [30] M. Yamamoto, Carleman estimates for parabolic equations and applications, Inverse Problems 25 (12) (2009) 123013. doi:10.1088/0266-5611/25/12/123013.
  • [31] N. V. Alekseenko, V. A. Burov, O. D. Rumyantseva, Solution of the three-dimensional acoustic inverse scattering problem. The modified Novikov algorithm, Acoustical Physics 54 (3) (2008) 407–419. doi:10.1134/s1063771008030172.
  • [32] R. Novikov, The $\bar{\partial}$-approach to approximate inverse scattering at fixed energy in three dimensions, International Mathematics Research Papers 2005 (6) (2005) 287–349. doi:10.1155/imrp.2005.287.
  • [33] G. A. Showman, Stripmap SAR, in: Principles of Modern Radar: Advanced techniques, Institution of Engineering and Technology, pp. 259–335. doi:10.1049/sbra020e_ch7.
  • [34] R. A. Renaut, Absorbing boundary conditions, difference operators, and stability, Journal of Computational Physics 100 (2) (1992) 434. doi:10.1016/0021-9991(92)90257-y.
  • [35] V. G. Romanov, Inverse Problems of Mathematical Physics, Walter de Gruyter GmbH & Co.KG, 2019.
  • [36] A. Smirnov, M. Klibanov, L. Nguyen, Convexification for a 1D hyperbolic coefficient inverse problem with single measurement data, Inverse Problems & Imaging 14 (5) (2020) 913–938. doi:10.3934/ipi.2020042.
  • [37] A. N. Tikhonov, Numerical Methods for the Solution of Ill-Posed Problems, Springer Netherlands, Dordrecht, 1995.
  • [38] M. V. Klibanov, A. E. Kolesov, A. Sullivan, L. Nguyen, A new version of the convexification method for a 1D coefficient inverse problem with experimental data, Inverse Problems 34 (11) (2018) 115014. doi:10.1088/1361-6420/aadbc6.
  • [39] M. V. Klibanov, A. E. Kolesov, Convexification of a 3-D coefficient inverse scattering problem, Computers & Mathematics with Applications 77 (6) (2019) 1681–1702. doi:10.1016/j.camwa.2018.03.016.
  • [40] D. J. Daniels, A review of GPR for landmine detection, Sensing and Imaging: An International Journal 7 (3) (2006) 90–123. doi:10.1007/s11220-006-0024-5.
  • [41] A. V. Kuzhuget, L. Beilina, M. V. Klibanov, A. Sullivan, L. Nguyen, M. A. Fiddy, Blind backscattering experimental data collected in the field and an approximately globally convergent inverse algorithm, Inverse Problems 28 (9) (2012) 095007. doi:10.1088/0266-5611/28/9/095007.
  • [42] A. Lipson, S. G. Lipson, H. Lipson, Optical Physics, Cambridge University Press, 2018.