
Quantum Simulation for Quantum Dynamics with Artificial Boundary Conditions

Shi Jin [email protected] School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai, 200240, P. R. China. Shanghai Artificial Intelligence Laboratory, Shanghai, China. Nana Liu [email protected] School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai, 200240, P. R. China. Shanghai Artificial Intelligence Laboratory, Shanghai, China. University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai 200240, China. Xiantao Li [email protected] Department of Mathematics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA Yue Yu [email protected] School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai, 200240, P. R. China.

Abstract

Quantum dynamics, typically expressed in the form of a time-dependent Schrödinger equation with a Hermitian Hamiltonian, is a natural application for quantum computing. However, when simulating quantum dynamics that involves the emission of electrons, it is necessary to use artificial boundary conditions (ABCs) to confine the computation to a fixed domain. The introduction of ABCs alters the Hamiltonian structure of the dynamics, and existing quantum algorithms cannot be directly applied since the evolution is no longer unitary. The current paper applies the recently introduced Schrödingerization method [JLY22a, JLY22b], which converts non-Hermitian dynamics into a Schrödinger form, to the artificial boundary problems. We implement this method for three types of ABCs, including the complex absorbing potential technique, perfectly matched layer methods, and the Dirichlet-to-Neumann approach. We analyze the query complexity of these algorithms and perform numerical experiments to demonstrate the validity of this approach. This helps to bridge the gap between available quantum algorithms and computational models for quantum dynamics in unbounded domains.

1 Introduction

Quantum computing is an emerging technology that harnesses the laws of quantum mechanics to deliver unprecedented computational power [Hid19, NC02, Pre18, RP11]. Quantum algorithms operate on an $n$-qubit Hilbert space of dimension $2^{n}$, offering vast multidimensional spaces for computational models. Hence, quantum computing has a unique capability to handle large-scale scientific computing problems. A natural application is the class of PDEs arising from time-dependent Schrödinger equations (TDSE), which follow unitary evolutions, so that the wave functions can be coherently represented on quantum computers. Known as Hamiltonian simulation, a variety of efficient algorithms have been developed toward this end [BCC+15, BCK15, BCC+17, LC17, LC19, CGJ19, KSB19, AFL21, JLY22b, AFL22, FLT23].

While the TDSE is formulated in the entire physical space, in practice the computation has to be done in a bounded domain, typically around locations where the electron density is concentrated. In situations where electrons are emitted outside the computational domain, such as in photoionization, or are drawn away, as in ionization processes, this approach can require an extraordinarily large computational domain. An artificial boundary condition (ABC) that absorbs outgoing wave packets and, more importantly, keeps the size of the computational domain to a minimum is frequently used to solve such problems. A correctly set up ABC yields the same result as if the computation were done in the infinite domain, e.g., see the survey paper [AAB+08].

Due to the introduction of the ABCs, the computer simulation now follows dynamics that are no longer unitary. As a result, existing Hamiltonian simulation techniques cannot be directly applied. The goal of this paper is to bridge this emerging gap, by mapping the non-Hamiltonian dynamics back to a Schrödinger equation or Hamiltonian system, to enable the immediate use of Hamiltonian simulation capabilities. This is accomplished by using the recently introduced Schrödingerization technique [JLY22a, JLY22b]. It is a generic framework that converts any linear partial (and ordinary) differential equation or dynamical system into a system of Schrödinger equations, or a Hamiltonian system, by going to a space one dimension higher via the warped phase transformation. A key advantage of the Schrödingerization approach is its simplicity and generality: the Hamiltonian under which the enlarged system evolves has a simple, explicit form. When a boundary value problem is considered, resulting in an inhomogeneous system that is not Hamiltonian, one only needs to introduce one auxiliary variable to make the system homogeneous, and the Schrödingerization process still applies [JLY22a, JLY22b].

In this paper, we explore three popular types of ABCs, namely the complex absorbing potential method, the perfectly matched layer method, and the Dirichlet-to-Neumann approach, and demonstrate their quantum implementation using the Schrödingerization approach.

Another potential route to implementing the ABCs is via quantum linear solvers (QLS) [HHL09, CKS17]. This can be done by first performing a spatial discretization that reduces the models to a system of linear ODEs, followed by time-marching schemes. Runge-Kutta methods, spectral (collocation) methods [CL19], multi-step methods [BCOW17], and Dyson series [Kro22] are some of the important existing methods. In some earlier works [CL19], the analysis for this type of algorithm assumes that the matrix $A$ in the linear ODE system is diagonalizable, and the final complexity involves the condition number of the eigenvector matrix. This was later improved by Krovi [Kro22], where the condition number is replaced by the following bound,

C(A)=\sup_{t\in[0,T]}\norm{\exp(At)}.

In the context of ABCs, due to the dissipative nature of the dynamics, the matrix $A$ is typically stable, in that the real parts of its eigenvalues are non-positive. Indeed, such a stability property is a key focus in the development and analysis of ABCs [EM79]. The stability implies that $C(A)=1$, thus completely removing the dependence of the complexity on the condition number.
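To make this concrete, here is a minimal numerical illustration (our own sketch, not from the cited works), assuming the generator has the dissipative form $A=-iH_{0}-H_{1}$ with Hermitian $H_{0}$ and positive semi-definite $H_{1}$, the structure that arises, e.g., from the CAP system (21) in Section 3.1:

```python
# Minimal check: for A = -i*H0 - H1 with Hermitian H0 and positive semi-definite
# H1, the propagator norm ||exp(A t)|| never exceeds 1, hence C(A) = 1.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 16
H0 = rng.standard_normal((n, n)); H0 = (H0 + H0.T) / 2   # Hermitian part
B = rng.standard_normal((n, n)); H1 = B @ B.T            # positive semi-definite damping
A = -1j * H0 - H1

norms = [np.linalg.norm(expm(t * A), 2) for t in np.linspace(0.0, 2.0, 21)]
print(max(norms))   # equals 1 at t = 0 and never grows beyond it (up to round-off)
```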

However, QLS-based methods still require more involved steps, such as a truncation of Taylor series expansions or the construction of a unitary operation associated with the inverse of a matrix. The latter is often realised through the block-encoding formalism [GSLW19]. While these results might give query and gate complexities similar to the Schrödingerization approach, it is not straightforward to show explicitly how the inverse is obtained in the block-encoding formalism, which makes it difficult to carry out in practice. This is in contrast to the Schrödingerization method, a much simpler scheme in which the Hamiltonian to be simulated is explicitly given, placing the problem directly in the realm of Hamiltonian simulation. The cost of the Schrödingerization method is also independent of the condition number.

We note that there are also very recent alternatives to QLS for simulating amplitude-encoded solutions of differential equations, for instance schemes based on block-encoding [ALWZ22] or linear combination of unitaries (LCU) [ALL23]. The application of these methods to ABCs could provide a complementary viewpoint to our methods here.

2 A brief review of existing boundary conditions

We first review several types of artificial boundary conditions that are commonly used in practice. The goal of these numerical techniques is to be able to simulate the dynamics in the entire space $\mathbb{R}^{d}$, i.e.,

i\frac{\partial}{\partial t}\psi(\bm{x},t)=\hat{H}\psi(\bm{x},t),\quad\bm{x}\in\mathbb{R}^{d}, (1)

where $\hat{H}$ is a Hamiltonian operator expressed in reduced units,

\hat{H}=-\frac{\nabla^{2}}{2}+V(\bm{x},t). (2)

We assume that the initial wave function $\psi(\bm{x},0)$ is compactly supported in a subdomain, here denoted by $\Omega\subset\mathbb{R}^{d}$. On the other hand, due to the initial momentum, or the influence of the external potential, the wave function can spread well beyond $\Omega$, as shown in Figure 1. To avoid the need for continuously expanding the computational domain, it is necessary to develop suitable boundary conditions that can propagate wave packets beyond the domain's boundaries. These conditions, known as artificial boundary conditions (ABCs), are intended to prevent boundary reflections, and hence the interference with the dynamics inside the domain.

Figure 1: Waves propagating outside the computational domain $\Omega=[-3,3]\times[-3,3]$.

2.1 Complex Absorbing Potentials

The idea of the complex absorbing potential (CAP) is to introduce an artificial potential with negative imaginary part to the TDSE. This modifies the TDSE (1) to,

i\frac{\partial}{\partial t}\psi(\bm{x},t)=\hat{H}\psi(\bm{x},t)+W(\bm{x})\psi(\bm{x},t),\quad\bm{x}\in D. (3)

Here $D\supset\Omega$ is a bigger domain that offers an absorbing layer, $D\backslash\Omega$, surrounding the computational domain $\Omega$. The complex potential $W(\bm{x})$ is selected so that it is zero in the computational domain $\Omega$, to keep the original dynamics unaltered, while in the surrounding absorbing layer it has a negative imaginary part whose magnitude increases slowly away from the boundary $\partial\Omega$ (see Figure 2 for an example). The hope is that the wave function is sufficiently damped by the time it reaches the outer boundary of the absorbing layer, at which point it can simply be set to zero. As a popular technique in computational chemistry, a wide variety of parametric forms for the imaginary potential have been proposed and extensively tested [MPNE04, YE18]. Some of the choices may include a non-zero real part in $W(\bm{x})$ to offer more flexibility.

Figure 2: An example of the complex absorbing potential $W(x,y)$ for a two-dimensional domain (the amplitude shown is $-\text{Im}\,W$). $\Omega=[-2.2,2.2]\times[-2.2,2.2]$ and $D=[-3,3]\times[-3,3]$. Motivated by the study in [RM96], we choose the polynomial form $W(x,y)=-10i(\absolutevalue{x}-2.2)_{+}^{2}-10i(\absolutevalue{y}-2.2)_{+}^{2}$.

From the computational perspective, the CAP method only introduces a modification to the diagonal elements of $\hat{H}$. The implementation is quite straightforward, and the boundary conditions outside the absorbing layer can be chosen as homogeneous Dirichlet boundary conditions.
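As a small illustration (our own sketch; the grid resolution is an assumption), the CAP profile of Figure 2 can be tabulated on the grid of $D$ and added to the diagonal of the discretized Hamiltonian:

```python
import numpy as np

# Sketch: the CAP of Figure 2, W(x,y) = -10i(|x|-2.2)_+^2 - 10i(|y|-2.2)_+^2,
# sampled on an N x N grid of D = [-3, 3]^2; W vanishes on Omega = [-2.2, 2.2]^2
# and only modifies the diagonal of the discretized Hamiltonian in (3).
N = 64
x = np.linspace(-3.0, 3.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
ramp = lambda s: np.maximum(np.abs(s) - 2.2, 0.0) ** 2     # (|s| - 2.2)_+^2
W = -10j * (ramp(X) + ramp(Y))                             # purely imaginary here
# Discretized CAP system: H_diag -> H_diag + W.reshape(-1) in the grid ordering.
```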

2.2 Perfectly Matched Layers methods

Another classical strategy is the perfectly matched layer (PML) [Ber94], where one first constructs a buffer layer so that the outgoing waves in the computational domain are exactly preserved (perfect matching). Although conceptually similar to CAP, the PML, at least at the continuous level, can theoretically absorb the outgoing waves. The most common mathematical approach to ensure the perfectly matching property is to introduce a complex stretching of the spatial coordinate to derive a modified equation in the buffer layer, and then the resulting models are discretized simultaneously in the implementation.

PML has applications in many areas of science and engineering, such as electromagnetics, acoustics, and fluid dynamics, where accurate simulation of wave propagation is critical for understanding and predicting physical phenomena. For the TDSE (1), the PML has been developed by Zheng [Zhe07] and Nissen and Kreiss [NK11]. We follow the derivation in [NK11], and to briefly demonstrate the formulation, we first consider the 1d Schrödinger equation,

i\partial_{t}\psi=-\frac{1}{2}\partial_{x}^{2}\psi+V(x)\psi. (4)

The derivation of a PML starts by replacing the time derivative with multiplication by $-i\omega$, with $\omega$ being the frequency variable. This changes the equation to,

\omega\psi=-\frac{1}{2}\partial_{x}^{2}\psi+V(x)\psi. (5)

In the second step, one performs a coordinate stretching,

\partial_{x}\longrightarrow\frac{1}{1+i\sigma(x)/\omega}\partial_{x},

which reduces (5) to

\omega\psi+i\sigma(x)\psi=-\frac{1}{2}\partial_{x}\left(\frac{1}{1+i\sigma(x)/\omega}\partial_{x}\psi\right)+\big(1+i\sigma(x)/\omega\big)V(x)\psi
=-\frac{\partial_{x}^{2}}{2}\psi+V(x)\psi+\frac{1}{2}\partial_{x}\left(\frac{i\sigma(x)/\omega}{1+i\sigma(x)/\omega}\partial_{x}\psi\right)+\frac{i\sigma(x)}{\omega}V(x)\psi.

The selection of the function $\sigma$ may follow a recipe similar to the one in the CAP method. Since we assume that $V=0$ outside the domain $\Omega$, we can drop the last term. We continue by defining an auxiliary function,

\chi=\frac{\sigma(x)/\omega}{1+i\sigma(x)/\omega}\partial_{x}\psi.

Finally, by converting $-i\omega$ back to the time derivative, we obtain a system of equations,

i\partial_{t}\psi=\hat{H}\psi-i\sigma(x)\psi+\frac{i}{2}\partial_{x}\chi, (6)
i\partial_{t}\chi=\sigma(x)\partial_{x}\psi-i\sigma(x)\chi.

We should point out that this idea of stretching the coordinate bears a close resemblance to the exterior complex scaling method [SE00, HRB07]. However, the corresponding modified equations are quite different from the ones derived here.

For a two-dimensional problem, one can repeat the same steps in the second coordinate and generalize the model as,

\left\{\begin{aligned} i\partial_{t}\psi=&\hat{H}\psi-i\sigma(x)\psi-i\sigma(y)\psi+\frac{i}{2}\partial_{x}\chi+\frac{i}{2}\partial_{y}\phi,\\ i\partial_{t}\chi=&\sigma(x)\partial_{x}\psi-i\sigma(x)\chi,\\ i\partial_{t}\phi=&\sigma(y)\partial_{y}\psi-i\sigma(y)\phi,\end{aligned}\right.\quad(x,y)\in D. (7)

In the interior of $\Omega$, we have $\sigma\equiv 0$, and thus the new equations (7) reduce to the TDSE (1). Meanwhile, outside the buffer layer, i.e., at $\partial D$, the wave function can be set to zero. Notice that the perfect matching property no longer holds after the numerical discretization. Therefore, a suitable choice of $\sigma$, together with a sufficiently large buffer layer, is crucial to the performance of a PML.
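As a concrete illustration (our own sketch, with an assumed grid and the quadratic damping profile used later in Section 4), the one-dimensional PML system (6) can be semi-discretized with central differences into a block system $\frac{d}{dt}[\bm{\psi};\bm{\chi}]=L[\bm{\psi};\bm{\chi}]$:

```python
import numpy as np

# Sketch: central-difference semi-discretization of the 1D PML system (6) on
# D = [-3, 3]; sigma is the quadratic ramp of Section 4 (an assumption here).
N, a, b = 128, -3.0, 3.0
x = np.linspace(a, b, N)
dx = x[1] - x[0]

off1 = np.full(N - 1, 0.5 / dx)
Dx = np.diag(off1, 1) - np.diag(off1, -1)                       # first derivative
off2 = np.full(N - 1, 1.0 / dx**2)
Dxx = np.diag(np.full(N, -2.0 / dx**2)) + np.diag(off2, 1) + np.diag(off2, -1)

V = np.where(np.abs(x) <= 1.0, np.sin(2 * np.pi * np.abs(x)), 0.0)   # sample potential
sigma = 10.0 * np.maximum(np.abs(x) - 2.2, 0.0) ** 2                 # zero inside Omega
S = np.diag(sigma)
H = -0.5 * Dxx + np.diag(V)

# From (6): d/dt psi = -i H psi - sigma psi + (1/2) d_x chi
#           d/dt chi = -i sigma d_x psi - sigma chi
L = np.block([[-1j * H - S, 0.5 * Dx],
              [-1j * (S @ Dx), -S]])
# Where sigma = 0 (inside Omega), chi stays zero and the psi-equation reduces to the TDSE.
```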

2.3 The Dirichlet-to-Neumann Approach

The third type of ABC is based on a Dirichlet-to-Neumann (DtN) map. One can first apply a semi-discrete scheme to the TDSE (1), e.g., a standard finite difference scheme for the kinetic energy. This still yields an infinite system of ODEs. The next step is a domain decomposition. Toward this end, we define $\Omega_{\text{I}}$ to be the set of grid points in $\Omega$, and similarly, let $\Omega_{\text{I\!I}}$ be the set of grid points outside the domain $\Omega$.

The semi-discrete model can be written in the following compact form:

\left\{\begin{aligned} i\dot{\bm{\psi}}_{\text{I}}(t)&=H_{\text{I,I}}(t)\bm{\psi}_{\text{I}}(t)+H_{\text{I,I\!I}}(t)\bm{\psi}_{\text{I\!I}}(t),\\ i\dot{\bm{\psi}}_{\text{I\!I}}(t)&=H_{\text{I\!I,I}}(t)\bm{\psi}_{\text{I}}(t)+H_{\text{I\!I,I\!I}}(t)\bm{\psi}_{\text{I\!I}}(t),\end{aligned}\right. (8)

where $\bm{\psi}_{\text{I}}=[\psi(\bm{x}_{k})]_{k\in\Omega_{\text{I}}}$ and $\bm{\psi}_{\text{I\!I}}=[\psi(\bm{x}_{k})]_{k\in\Omega_{\text{I\!I}}}$, with $\bm{\psi}_{\text{I}}\in\mathbb{C}^{n_{\text{I}}}$ and $\bm{\psi}_{\text{I\!I}}\in\mathbb{C}^{n_{\text{I\!I}}}$. We denote the discretization of the Hamiltonian operator in a partitioned form,

H=\begin{bmatrix}H_{\text{I,I}}&H_{\text{I,I\!I}}\\ H_{\text{I\!I,I}}&H_{\text{I\!I,I\!I}}\end{bmatrix}. (9)

Since the Schrödinger equation (1) has been discretized in space, we use an overdot to denote time derivatives hereafter. In addition, since we assumed $V=0$ in $\Omega_{\text{I\!I}}$, the operator $H_{\text{I\!I,I\!I}}$ only contains the kinetic energy (Laplacian).

For the grid points in the computational domain $\Omega_{\text{I}}$, one can separate out the grid points next to the boundary. Denote the set of those grid points by $\Gamma$:

\Gamma=\{\bm{x}_{j}\in\Omega_{\text{I}}:\ \text{there exists }\bm{x}_{k}\in\Omega_{\text{I\!I}}\text{ such that }H_{jk}\neq 0\}.

By reordering the grid points, one can arrange $\bm{\psi}_{\text{I}}$ as follows,

\bm{\psi}_{\text{I}}=\begin{bmatrix}\bm{\psi}_{\Gamma}\\ \bm{\psi}_{\text{I}\backslash\Gamma}\end{bmatrix}. (10)

As a result, the off-diagonal block of the Hamiltonian may be written as

H_{\text{I\!I,I}}=\big[H_{\text{I\!I,}\Gamma}\;\;H_{\text{I\!I},\text{I}\backslash\Gamma}\big]=\big[H_{\text{I\!I,}\Gamma}\;\;\bm{0}\big].

By using the Laplace transform, Wu and Li (see Section II in [WL20]) derived the following exact boundary condition:

\left\{\begin{aligned} \dot{\bm{\psi}}_{\text{I}}(t)=&-iH_{\text{I,I}}\bm{\psi}_{\text{I}}(t)-iE^{T}\bm{\phi}_{\Gamma}(t),\\ \bm{\phi}_{\Gamma}(t)=&\int_{0}^{t}\kappa(t-\tau)\bm{\psi}_{\Gamma}(\tau)d\tau.\end{aligned}\right. (11)

Here $E$ is a restriction operator to extract the components of a wave function that correspond to grid points at the boundary $\Gamma$ from a function defined in $\Omega_{\text{I}}$, i.e.,

\bm{\psi}_{\Gamma}=E\bm{\psi}_{\text{I}}.

Furthermore, $\bm{\phi}_{\Gamma}$ represents the influence of the wave function in $\Omega_{\text{I\!I}}$ on the wave function in $\Omega_{\text{I}}$,

\bm{\phi}_{\Gamma}=H_{\Gamma,\text{I\!I}}\bm{\psi}_{\text{I\!I}}(t).

But this connection is not needed in the implementation of (11).

The second equation in (11) is regarded as a discrete DtN map, where $\kappa(t)$ is the real-time kernel function corresponding to $K(s)$ in the Laplace domain, with

K(s)=-H_{\Gamma,\text{I\!I}}(H_{\text{I\!I},\text{I\!I}}-isI)^{-1}H_{\text{I\!I},\Gamma}.

Such ABCs can also be derived without the spatial discretization [HH04, AAB+08]. For example, for one-dimensional problems, the integral term can be expressed as a fractional derivative [Arn98, AAB+08]. But a discretization would be needed afterward.

Up to this point, the formulation is exact. In practice, $K(s)$ is often treated using a Padé approximation. For instance, a zeroth-order approximation involves selecting an interpolation point $s_{0}$ and letting

R=K(s_{0})=-H_{\Gamma,\text{I\!I}}\left(H_{\text{I\!I,I\!I}}-is_{0}I\right)^{-1}H_{\text{I\!I,}\Gamma}. (12)

Combining with (11), the ABC becomes,

\dot{\bm{\psi}}_{\text{I}}(t)=-iH_{\text{I,I}}\bm{\psi}_{\text{I}}(t)-iE^{T}RE\bm{\psi}_{\text{I}}(t). (13)

This approximation also adds a potential whose anti-Hermitian part is negative definite [WL20]; thus it can be regarded as a CAP method, although the Hermitian part is not necessarily zero and the anti-Hermitian part is usually not diagonal.

From the first-order Padé approximation, one obtains the ABC as follows,

\left\{\begin{aligned} \frac{d}{dt}\bm{\psi}_{\text{I}}(t)&=-iH_{\text{I,I}}\bm{\psi}_{\text{I}}(t)-iE^{T}\bm{\phi}_{\Gamma}(t),\\ \frac{d}{dt}\bm{\phi}_{\Gamma}(t)&=B\bm{\phi}_{\Gamma}(t)+AE\bm{\psi}_{\text{I}}(t).\end{aligned}\right. (14)

By a straightforward interpolation at $s=s_{0}$, one has (see the proof of Theorem 3.2 in [WL20]),

A=-iH_{\Gamma,\text{I\!I}}H_{\text{I\!I,}\Gamma},\qquad B=s_{0}I-AR^{-1}. (15)

We have presented the DtN approach as if $H_{\text{I\!I,I\!I}}$ were a finite matrix; one can take the infinite-volume limit by imposing a zero far-field condition. Another important observation is that the matrix $R$ in (12) involves the operator $H_{\text{I\!I,I\!I}}$ in the exterior, but the DtN map can be computed from a discrete boundary element equation [MR09, Li12], which only involves the wave function at the boundary. Such equations can be further simplified by seeking a sparse approximation of $R$ using least-squares [Li09].
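For concreteness, the following sketch (our own one-dimensional toy example with assumed grid sizes; in practice the exterior would be handled by the boundary element equation rather than simply truncated) forms $R=K(s_{0})$ from (12) and the first-order Padé matrices $A$ and $B$ from (15):

```python
import numpy as np

# Sketch (1D toy, free particle): finite-difference Hamiltonian on a large
# interval, partitioned into interior (I) and a truncated exterior (II). With
# nearest-neighbour coupling, Gamma is the single last interior grid point.
N_I, N_II, dx, s0 = 60, 400, 0.1, 1.0
N = N_I + N_II
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(np.full(N, 1.0 / dx**2)) + np.diag(off, 1) + np.diag(off, -1)

I_idx, II_idx = np.arange(N_I), np.arange(N_I, N)
H_I_I = H[np.ix_(I_idx, I_idx)]
H_II_II = H[np.ix_(II_idx, II_idx)]
H_G_II = H[np.ix_([N_I - 1], II_idx)]            # H_{Gamma,II}: a 1 x N_II row
H_II_G = H_G_II.conj().T

R = -H_G_II @ np.linalg.solve(H_II_II - 1j * s0 * np.eye(N_II), H_II_G)   # eq. (12)
A = -1j * (H_G_II @ H_II_G)                                               # eq. (15)
B = s0 * np.eye(1) - A @ np.linalg.inv(R)

# Zeroth-order ABC (13): d/dt psi_I = -i H_{I,I} psi_I - i E^T R E psi_I,
# where E restricts psi_I to Gamma (its last entry).
E = np.zeros((1, N_I)); E[0, -1] = 1.0
generator = -1j * H_I_I - 1j * (E.T @ R @ E)
```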

3 Quantum simulations via Schrödingerization

From the preceding section, we observe that the TDSE under the three types of ABCs can be expressed in the following general form,

\frac{d}{dt}\ket{\phi(t)}=-iH_{0}\ket{\phi(t)}-H_{1}\ket{\phi(t)}. (16)

Here $\ket{\phi(t)}$ is a quantum state that encodes the wave function, possibly together with auxiliary functions, in the subdomain. $H_{0}$ and $H_{1}$ are both Hermitian, and $H_{0}$ is a slight modification of the original Hamiltonian.

The Schrödingerization approach was first proposed in [JLY22b]. The key step is the warped phase transformation $\bm{v}(t,p)=e^{-p}\bm{\phi}(t)$, defined for $p>0$, with the initial data extended symmetrically to $p<0$. This transformation converts (16) into a system of linear convection equations:

\begin{cases}\frac{d}{dt}\bm{v}(t,p)=-iH_{0}\bm{v}+H_{1}\partial_{p}\bm{v},\\ \bm{v}(0,p)=e^{-|p|}\bm{\phi}(0).\end{cases} (17)

On the other hand, the solution $\bm{\phi}(t)$ can be restored by a simple integration,

\bm{\phi}(t)=\int_{0}^{\infty}\bm{v}(t,p)dp. (18)

A more intuitive view is obtained by discretizing the $p$ dimension and concatenating the corresponding function values for each $p$. Toward this end, we observe that $\bm{v}(t,p)$ decays exponentially in $p$. Hence we may choose a sufficiently large interval, such that,

\int_{0}^{\infty}\bm{v}(t,p)dp=\int_{a}^{b}\bm{v}(t,p)dp+\mathcal{O}(\epsilon).

Next, we choose sub-intervals of size $\Delta p=(b-a)/M$, with $M=2N$ an even number and the grid points denoted by $a=p_{0}<p_{1}<\cdots<p_{M}=b$. Then the mapping from $\bm{v}$ to $\bm{\phi}$ can be implemented using a quadrature formula,

\int_{a}^{b}\bm{v}(t,p)dp\approx\frac{\Delta p}{2}\left[\bm{v}(t,a)+\bm{v}(t,b)\right]+\Delta p\sum_{m=1}^{M-1}\bm{v}(t,p_{m}).

To compute $\bm{v}(t,p)$, let the vector $\bm{w}$ be the collection of the function $\bm{v}$ at these grid points, defined more precisely as follows,

\bm{w}=[\bm{w}_{1};\bm{w}_{2};\cdots;\bm{w}_{n}],

with “;” indicating the straightening of $\{\bm{w}_{i}\}_{i\geq 1}$ into a column vector. This can also be expressed as a superposition state using $\ket{k}$ as a new basis,

\bm{w}_{i}=\sum_{k}\bm{v}_{i}(t,p_{k})\ket{k}.

By applying the discrete Fourier transformation in the $p$ direction, one arrives at,

\frac{d}{dt}\bm{w}(t)=-i(H_{0}\otimes I)\bm{w}+i(H_{1}\otimes P_{\mu})\bm{w}. (19)

Here, $P_{\mu}$ is the matrix expression of the momentum operator $-i\partial_{p}$, given by

P_{\mu}=\Phi D_{\mu}\Phi^{-1},\qquad D_{\mu}=\text{diag}(\mu_{-N},\cdots,\mu_{N-1}),

where $\mu_{l}=2\pi l/(b-a)$ are the Fourier modes and

\Phi=(\phi_{jl})_{M\times M}=(\phi_{l}(p_{j}))_{M\times M},\qquad\phi_{l}(p)=e^{i\mu_{l}(p-a)}

is the matrix representation of the discrete Fourier transform.
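As a quick sanity check (our own sketch; the interval and resolution are assumptions), one can build $\Phi$ and $D_{\mu}$ explicitly and verify that $P_{\mu}=\Phi D_{\mu}\Phi^{-1}$ acts as $-i\partial_{p}$ on a Fourier mode:

```python
import numpy as np

# Build D_mu and Phi on an assumed p-interval [a, b) with M = 2N points, and check
# that P_mu = Phi D_mu Phi^{-1} reproduces -i d/dp on a single Fourier mode.
a, b, Nhalf = -5.0, 5.0, 32
M = 2 * Nhalf
dp = (b - a) / M
p = a + dp * np.arange(M)                              # p_0, ..., p_{M-1}
mu = 2 * np.pi * np.arange(-Nhalf, Nhalf) / (b - a)    # mu_{-N}, ..., mu_{N-1}
Phi = np.exp(1j * np.outer(p - a, mu))                 # Phi_{jl} = e^{i mu_l (p_j - a)}
P_mu = Phi @ np.diag(mu) @ np.linalg.inv(Phi)

l = 3
v = np.exp(1j * mu[Nhalf + l] * (p - a))               # -i d/dp v = mu_l v
print(np.linalg.norm(P_mu @ v - mu[Nhalf + l] * v))    # ~ machine precision
```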

At this point, we have successfully mapped the dynamics back to a Hamiltonian system. By a change of variables $\tilde{\bm{w}}=(I_{u}\otimes\Phi^{-1})\bm{w}$, where $I_{u}$ denotes the identity acting on the original variables, one has

\frac{d}{dt}\tilde{\bm{w}}(t)=-i(H_{0}\otimes I)\tilde{\bm{w}}+i(H_{1}\otimes D_{\mu})\tilde{\bm{w}}. (20)

This is more amenable to approximation by a quantum algorithm. In particular, if $H_{0}$ and $H_{1}$ are sparse, then (20) is a Schrödinger equation with the Hamiltonian $H=H_{0}\otimes I-H_{1}\otimes D_{\mu}$, which inherits the sparsity.
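The whole procedure can be exercised end to end on a small dense example. The following sketch (our own illustration; the random $H_{0}$, the positive semi-definite $H_{1}$, the truncated $p$-interval, and the trapezoidal weights are all assumptions) evolves (20) by a matrix exponential, recovers $\bm{\phi}(T)$ from (18), and compares against a direct solution of (16):

```python
import numpy as np
from scipy.linalg import expm

# End-to-end Schrödingerization sketch for a small dense system (16):
# H0 Hermitian, H1 positive semi-definite, truncated p-interval [a, b].
rng = np.random.default_rng(1)
n = 8
H0 = rng.standard_normal((n, n)); H0 = (H0 + H0.T) / 2
B = 0.3 * rng.standard_normal((n, n)); H1 = B @ B.T
phi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

a, b, Nhalf = -8.0, 8.0, 64
M = 2 * Nhalf
dp = (b - a) / M
p = a + dp * np.arange(M)
mu = 2 * np.pi * np.arange(-Nhalf, Nhalf) / (b - a)
Phi = np.exp(1j * np.outer(p - a, mu))

# Warped initial data (17), ordering (component) x (p-grid); w~ = (I (x) Phi^{-1}) w.
w0 = np.kron(phi0, np.exp(-np.abs(p)))
w0_tilde = np.kron(np.eye(n), np.linalg.inv(Phi)) @ w0

# Evolve (20) with the Hermitian Hamiltonian H = H0 (x) I - H1 (x) D_mu.
T = 0.7
Ham = np.kron(H0, np.eye(M)) - np.kron(H1, np.diag(mu))
wT_tilde = expm(-1j * T * Ham) @ w0_tilde

# Map back to the p-grid and recover phi(T) by quadrature over p >= 0, cf. (18).
vT = (np.kron(np.eye(n), Phi) @ wT_tilde).reshape(n, M)
mask = p >= 0.0
wts = np.full(mask.sum(), dp); wts[0] = wts[-1] = 0.5 * dp     # trapezoidal weights
phiT = vT[:, mask] @ wts

phi_ref = expm(T * (-1j * H0 - H1)) @ phi0                      # direct solution of (16)
print(np.linalg.norm(phiT - phi_ref) / np.linalg.norm(phi_ref)) # shrinks roughly like O(dp)
```

On refining $\Delta p$, the recovered solution approaches the direct one at the first-order rate noted in Section 5.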

3.1 Schrödingerization for the CAP method

As an example, we consider the CAP method (3) applied to the TDSE (1) in two dimensions ($\bm{x}=(x,y)$), although the extension to higher dimensions is straightforward. The five-point finite difference stencil is used to discretize the Laplacian on the grid points,

x_{0}<x_{1}<\cdots<x_{N}<x_{N+1},\qquad y_{0}<y_{1}<\cdots<y_{N}<y_{N+1}.

After a spatial discretization of (3), the semi-discrete system for the unknowns $\psi(x_{i},y_{j},t)$, $1\leq i,j\leq N$, has the following form:

\begin{cases}\frac{d}{dt}\bm{\psi}(t)=-iH_{0}\bm{\psi}(t)-H_{1}\bm{\psi}(t),\\ \bm{\psi}(0)=\bm{\psi}_{0},\end{cases} (21)

where $H_{0}$ is a real symmetric matrix corresponding to the discretization of $\hat{H}$, together with the real part of the absorbing potential $W(x,y)$, and $H_{1}=-\text{diag}(\text{Im}\,\bm{W})$ is a diagonal matrix given by the imaginary part of the complex potential, with $\bm{W}=\sum_{ij}W(x_{i},y_{j})\ket{i,j}$. In fact, let $D_{xx}$ be the one-dimensional difference matrix for the second-order derivative ($\partial_{xx}$) under homogeneous boundary conditions. Then

H_{0}=-\frac{1}{2}(D_{xx}\otimes I+I\otimes D_{xx})+\bm{V}+\text{diag}(\text{Re}\,\bm{W}),\qquad\bm{V}=\text{diag}(V(x_{i},y_{j})), (22)

where we have assumed the same partition along the $x$ and $y$ directions.
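A compact sketch of this assembly (our own illustration; the resolution, the CAP of Figure 2, and the potential of Section 4 are the assumed inputs) reads:

```python
import numpy as np

# Assemble the CAP matrices of (21)-(22) on an N x N interior grid of [-3, 3]^2.
N = 32
dx = 6.0 / (N + 1)
x = -3.0 + dx * np.arange(1, N + 1)
X, Y = np.meshgrid(x, x, indexing="ij")

off = np.full(N - 1, 1.0 / dx**2)
Dxx = np.diag(np.full(N, -2.0 / dx**2)) + np.diag(off, 1) + np.diag(off, -1)

r = np.sqrt(X**2 + Y**2)
V = np.where(r <= 1.0, np.sin(2 * np.pi * r), 0.0).reshape(-1)
W = (-10j * (np.maximum(np.abs(X) - 2.2, 0.0) ** 2
             + np.maximum(np.abs(Y) - 2.2, 0.0) ** 2)).reshape(-1)

I = np.eye(N)
H0 = -0.5 * (np.kron(Dxx, I) + np.kron(I, Dxx)) + np.diag(V + W.real)   # eq. (22)
H1 = -np.diag(W.imag)                                                    # diagonal, >= 0
# Semi-discrete CAP dynamics (21): d/dt psi = -i H0 psi - H1 psi.
```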

3.2 Schrödingerization for the PML method

The PML in (7) involves both first and second-order derivatives. By applying the central differences to these derivatives, we obtain a straightforward discretization,

\begin{cases}\partial_{t}\bm{\psi}=-iH_{V}\bm{\psi}-(I\otimes\bm{\sigma}_{x}+\bm{\sigma}_{y}\otimes I)\bm{\psi}+\frac{1}{2}(I\otimes D_{x})\bm{\chi}+\frac{1}{2}(D_{y}\otimes I)\bm{\phi},\\ \partial_{t}\bm{\chi}=-i(I\otimes(\bm{\sigma}_{x}D_{x}))\bm{\psi}-(I\otimes\bm{\sigma}_{x})\bm{\chi},\\ \partial_{t}\bm{\phi}=-i((\bm{\sigma}_{y}D_{y})\otimes I)\bm{\psi}-(\bm{\sigma}_{y}\otimes I)\bm{\phi},\end{cases}

where $D_{x}$ and $D_{xx}$ are the one-dimensional difference matrices for the finite-difference approximations of the first- and second-order derivatives ($\partial_{x}$ and $\partial_{xx}$) under homogeneous boundary conditions, $\bm{\sigma}_{x}=\text{diag}(\sigma(x_{1}),\cdots,\sigma(x_{N}))$ is a diagonal matrix ($\bm{\sigma}_{y}$ is defined analogously), and

H_{V}=-\frac{1}{2}(D_{xx}\otimes I+I\otimes D_{xx})+\bm{V},\qquad\bm{V}=\text{diag}(V(x_{i},y_{j})). (23)

Let $\bm{u}_{h}(t)=[\bm{\psi};\bm{\chi};\bm{\phi}]$. Then the above system can be written in matrix form,

\frac{d}{dt}\bm{u}_{h}=L_{h}\bm{u}_{h},

where

L_{h}=\begin{bmatrix}-iH_{V}-(I\otimes\bm{\sigma}_{x}+\bm{\sigma}_{y}\otimes I)&\frac{1}{2}(I\otimes D_{x})&\frac{1}{2}(D_{y}\otimes I)\\ -i(I\otimes(\bm{\sigma}_{x}D_{x}))&-(I\otimes\bm{\sigma}_{x})&O\\ -i((\bm{\sigma}_{y}D_{y})\otimes I)&O&-(\bm{\sigma}_{y}\otimes I)\end{bmatrix}. (24)

In accordance with (21), one has $H_{0}=-(L_{h}-L_{h}^{\dagger})/(2i)$ and $H_{1}=-(L_{h}+L_{h}^{\dagger})/2$.
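The block generator $L_{h}$ in (24) and its Hermitian/skew-Hermitian split can be assembled directly; the sketch below (our own illustration, with an assumed grid and the potential and damping profile of Section 4) makes the construction explicit:

```python
import numpy as np

# Assemble L_h of (24) on an N x N grid and split it into H0, H1 as above.
N = 24
dx = 6.0 / (N + 1)
x = -3.0 + dx * np.arange(1, N + 1)

off1 = np.full(N - 1, 0.5 / dx)
Dx = np.diag(off1, 1) - np.diag(off1, -1)
off2 = np.full(N - 1, 1.0 / dx**2)
Dxx = np.diag(np.full(N, -2.0 / dx**2)) + np.diag(off2, 1) + np.diag(off2, -1)

sig = np.diag(10.0 * np.maximum(np.abs(x) - 2.2, 0.0) ** 2)   # sigma_x = sigma_y here
I = np.eye(N)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2)
V = np.diag(np.where(r <= 1.0, np.sin(2 * np.pi * r), 0.0).reshape(-1))
HV = -0.5 * (np.kron(Dxx, I) + np.kron(I, Dxx)) + V           # eq. (23)

O = np.zeros((N * N, N * N))
Lh = np.block([
    [-1j * HV - (np.kron(I, sig) + np.kron(sig, I)), 0.5 * np.kron(I, Dx), 0.5 * np.kron(Dx, I)],
    [-1j * np.kron(I, sig @ Dx),                     -np.kron(I, sig),     O],
    [-1j * np.kron(sig @ Dx, I),                     O,                    -np.kron(sig, I)],
])                                                            # eq. (24)
H0 = -(Lh - Lh.conj().T) / 2j        # H0 = -(L_h - L_h^dag)/(2i), Hermitian
H1 = -(Lh + Lh.conj().T) / 2         # H1 = -(L_h + L_h^dag)/2, Hermitian
```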

3.3 Schrödingerization for the DtN approach

Before we apply the Schrödingerization procedure for the DtN method (14), we first introduce a change of variables, by defining,

\bm{\psi}_{\Gamma}(t)=\left(H_{\Gamma,\text{I\!I}}H_{\text{I\!I,}\Gamma}\right)^{-1/2}\bm{\phi}_{\Gamma}(t). (25)

The invertibility of the matrix can be seen from the linear independence of the rows of the matrix $H_{\Gamma,\text{I\!I}}$.

Direct substitutions into (14) yield,

\left\{\begin{aligned} \frac{d}{dt}\bm{\psi}_{\text{I}}(t)&=-iH_{\text{I,I}}\bm{\psi}_{\text{I}}(t)-i\Sigma_{\text{I},\Gamma}\bm{\psi}_{\Gamma}(t),\\ \frac{d}{dt}\bm{\psi}_{\Gamma}(t)&=-i\Sigma_{\text{I},\Gamma}^{T}\bm{\psi}_{\text{I}}(t)+\Sigma_{\Gamma,\Gamma}\bm{\psi}_{\Gamma}(t).\end{aligned}\right. (26)

Here we have defined matrices,

\begin{aligned} S&=H_{\Gamma,\text{I\!I}}H_{\text{I\!I,}\Gamma},\\ \Sigma_{\text{I},\Gamma}&=E^{T}S^{1/2},\\ \Sigma_{\Gamma,\Gamma}&=s_{0}I+iS^{1/2}R^{-1}S^{1/2}.\end{aligned} (27)

This notation is motivated by the notion of self-energy in electron transport [BMO+02].

More explicit expressions for these matrices can be obtained using the formula (12):

\begin{aligned} R&=-H_{\Gamma,\text{I\!I}}\left(H_{\text{I\!I,I\!I}}-is_{0}I\right)^{-1}H_{\text{I\!I,}\Gamma}\\ &=-H_{\Gamma,\text{I\!I}}\left(H_{\text{I\!I,I\!I}}^{2}+s_{0}^{2}I\right)^{-1}\left(H_{\text{I\!I,I\!I}}+is_{0}I\right)H_{\text{I\!I,}\Gamma}\\ &=-G_{0}-is_{0}G_{1}.\end{aligned}

Here, to arrive at the second line, we have used the trivial fact that $H_{\text{I\!I,I\!I}}$ commutes with $I$. In the third line, we have defined the matrices

G_{0}=H_{\Gamma,\text{I\!I}}\left(H_{\text{I\!I,I\!I}}^{2}+s_{0}^{2}I\right)^{-1}H_{\text{I\!I,I\!I}}H_{\text{I\!I,}\Gamma},\quad G_{1}=H_{\Gamma,\text{I\!I}}\left(H_{\text{I\!I,I\!I}}^{2}+s_{0}^{2}I\right)^{-1}H_{\text{I\!I,}\Gamma}. (28)

To separate out the Hermitian and skew-Hermitian parts of $\Sigma_{\Gamma,\Gamma}$, we notice that,

\frac{1}{2}\left(R^{-1}+\left(R^{-1}\right)^{\dagger}\right)=\frac{1}{2}R^{-1}\left(R^{\dagger}+R\right)\left(R^{-1}\right)^{\dagger}=-R^{-1}G_{0}\left(R^{-1}\right)^{\dagger}.

Similarly, we have,

\frac{1}{2i}\left(R^{-1}-\left(R^{-1}\right)^{\dagger}\right)=\frac{1}{2i}R^{-1}\left(R^{\dagger}-R\right)\left(R^{-1}\right)^{\dagger}=s_{0}R^{-1}G_{1}\left(R^{-1}\right)^{\dagger}.

Now we can apply the Schrödingerization procedure with the Hermitian and skew-Hermitian parts given by,

H_{0}=\left[\begin{array}{cc}H_{\text{I,I}}&\Sigma_{\text{I},\Gamma}\\ \Sigma_{\text{I},\Gamma}^{T}&S^{\frac{1}{2}}R^{-1}G_{0}\left(R^{-1}\right)^{\dagger}S^{\frac{1}{2}}\end{array}\right],\quad H_{1}=\left[\begin{array}{cc}\bm{0}&\bm{0}\\ \bm{0}&-s_{0}I+s_{0}S^{\frac{1}{2}}R^{-1}G_{1}\left(R^{-1}\right)^{\dagger}S^{\frac{1}{2}}\end{array}\right]. (29)
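Rather than hard-coding the individual blocks, one can also assemble the generator of (26) and split it into Hermitian and skew-Hermitian parts numerically, which can be used to cross-check the explicit blocks in (29). The sketch below (our own one-dimensional toy example; the grid sizes and the truncated exterior are assumptions) does exactly that:

```python
import numpy as np
from scipy.linalg import sqrtm

# Assemble S, Sigma_{I,Gamma}, Sigma_{Gamma,Gamma} of (27) for a 1D toy partition
# and split the generator of (26) into the Hermitian pair (H0, H1) of (16).
N_I, N_II, dx, s0 = 60, 400, 0.1, 1.0
N = N_I + N_II
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(np.full(N, 1.0 / dx**2)) + np.diag(off, 1) + np.diag(off, -1)

I_idx, II_idx = np.arange(N_I), np.arange(N_I, N)
H_I_I = H[np.ix_(I_idx, I_idx)]
H_II_II = H[np.ix_(II_idx, II_idx)]
H_G_II = H[np.ix_([N_I - 1], II_idx)]                 # Gamma = last interior point
H_II_G = H_G_II.conj().T
E = np.zeros((1, N_I)); E[0, -1] = 1.0

R = -H_G_II @ np.linalg.solve(H_II_II - 1j * s0 * np.eye(N_II), H_II_G)   # eq. (12)
S = H_G_II @ H_II_G
S_half = sqrtm(S)
Sigma_IG = E.T @ S_half                                                    # eq. (27)
Sigma_GG = s0 * np.eye(1) + 1j * S_half @ np.linalg.inv(R) @ S_half

# Generator of (26): d/dt [psi_I; psi_Gamma] = L [psi_I; psi_Gamma] = (-i H0 - H1)[...].
L = np.block([[-1j * H_I_I,             -1j * Sigma_IG],
              [-1j * Sigma_IG.conj().T,  Sigma_GG]])
H0 = 1j * (L - L.conj().T) / 2            # Hermitian part
H1 = -(L + L.conj().T) / 2                # Hermitian, dissipative part
```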

4 Numerical Results

We now present results from some numerical experiments. As a concrete example, we consider the TDSE in two dimensions and choose a square domain $\Omega:=[-3,3]\times[-3,3]$. Following the example in [Zhe07], we choose the initial condition as follows,

\psi_{0}(x,y)=\begin{cases}1+\cos(\pi r)+i(\cos(2\pi r)-1),&r\leq 1,\\ 0,&\text{otherwise}.\end{cases}

Here $r=\sqrt{x^{2}+y^{2}}$. For the potential, we choose $V(x,y)=\sin(2\pi r)$ in the unit disc and extend it to zero outside.
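For reference, this initial data and potential can be tabulated on a uniform grid as follows (the resolution is an assumption):

```python
import numpy as np

# The initial data psi_0 and potential V of this section on an N x N grid of
# Omega = [-3, 3]^2 (N is an assumed resolution).
N = 64
x = np.linspace(-3.0, 3.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2)

psi0 = np.where(r <= 1.0,
                1.0 + np.cos(np.pi * r) + 1j * (np.cos(2 * np.pi * r) - 1.0),
                0.0)
V = np.where(r <= 1.0, np.sin(2 * np.pi * r), 0.0)
```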

In the numerical experiments, we solve (20) using the matrix exponential:

\tilde{\bm{w}}(t)=e^{-iHt}\tilde{\bm{w}}(0).

This is implemented on a classical computer using MATLAB's built-in function expm.m. In the implementation, we take $N_{x}=N_{p}=64$ and $p\in[L,R]=[-5,5]$. It is important to note, however, that the purpose of these experiments is to demonstrate that equation (20), obtained from the Schrödingerization, captures the dynamics under the ABCs. A quantum implementation of (20) would be much more scalable than a classical one, owing to its much weaker dependence on the dimension.

We first examine the CAP method. In these experiments, the imaginary potential $W(x,y)$ is chosen as the sum of $w_{1}(x,y)$ and $w_{2}(x,y)$, with

w_{1}(x,y)=\begin{cases}-10i(|x|-2.2)^{2},&|x|>2.2,\\ 0,&\mbox{otherwise},\end{cases}\qquad w_{2}(x,y)=\begin{cases}-10i(|y|-2.2)^{2},&|y|>2.2,\\ 0,&\mbox{otherwise},\end{cases}

as shown in Figure 2. The results from simulating the Schrödingerized system (20) at $t=0.3$, $0.6$, and $0.9$ are displayed in Figure 3(d-f); they are very similar to the ones given in Figure 3(a-c), obtained from a direct simulation of the original form (3). This verifies the correctness of the Schrödingerization approach.

Figure 3: Snapshots of the real parts of $\psi$ computed using the CAP method at $T=0.3$, $0.6$, and $0.9$. Top (a-c): a direct implementation of (3); bottom (d-f): implementation of the Schrödingerization form (20).

Now we turn to the PML method. We implemented (7) on the same test problem, and the results are depicted in Figure 4. Here the function $\sigma$ is chosen as

\sigma(x)=\begin{cases}10(x-2.2)^{2},&x>2.2,\\ 10(-2.2-x)^{2},&x<-2.2,\\ 0,&\text{otherwise}.\end{cases}
Figure 4: Snapshots of the real parts of $\psi$ computed using the PML method at $T=0.3$, $0.6$, and $0.9$. (a-c): the original form; (d-f): the Schrödingerization form.

To implement the DtN approach, one can solve a discrete boundary element equation [WL20] to determine the matrix $R$ in (12), which then determines the self-energy in (27). Since our purpose is to test the Schrödingerization approach, we compute $R$ from (12) by choosing a relatively large domain $D\supset\Omega$ and simply setting $\Omega_{\text{I\!I}}=D\backslash\Omega_{\text{I}}$. For the test example, it suffices to choose $D=[-6,6]^{2}$.

Since all the matrices ($H_{\text{I},\text{I}}$, $H_{\text{I},\text{I\!I}}$, and $H_{\text{I\!I},\text{I\!I}}$) can be directly extracted from the Hamiltonian matrix on $D$, we can compute the matrices in (27) by direct matrix inversions and multiplications. In (15), we take $s_{0}=1$. The direct simulation with the DtN ABC can be done by solving the ODEs (26). The results are displayed in the top row of Figure 5. Meanwhile, a quantum algorithm would solve (20) with the two matrices $H_{0}$ and $H_{1}$ from (29). We observe that the classical algorithm is able to maintain the propagating pattern of the wavefront without boundary reflections. The solution based on the Schrödingerization shows a similar performance. The slight difference can be attributed to the discretization in the $p$ space.

Figure 5: Snapshots of the real parts of $\psi$ computed using the DtN method at $T=0.3$, $0.6$, and $0.9$. (a-c): the original form; (d-f): the Schrödingerization form.

5 The implementation complexity of the quantum algorithms

Once the quantum dynamics with ABCs has been turned into the Hamiltonian system (20), one can apply a Hamiltonian simulation algorithm to produce the wave function $\ket{\psi(T)}$. Due to the possibly time-dependent potential $V(\bm{x},t)$, a Hamiltonian simulation algorithm for time-dependent Hamiltonians $H(t)$ should be considered. To assess the algorithmic complexity associated with the implementation of the ABCs, we use the recent results by Berry et al. [BCS+20, Theorem 10], although other algorithms can also be used for the assessment. Here we simply highlight the query complexity.

Theorem 5.1.

The TDSE (1) with an $s$-sparse Hamiltonian $H(t)$ can be simulated from $t=0$ to $t=T$ within error $\epsilon$ with query complexity

\mathcal{O}\left(s\norm{H}_{\text{max,1}}\frac{\log(s\norm{H}_{\text{max,1}}/\epsilon)}{\log\log(\norm{H}_{\text{max,1}}/\epsilon)}\right). (30)

Here the norm is defined as,

\norm{H}_{\text{max,1}}=\int_{0}^{T}\norm{H}_{\text{max}}(t)dt,\quad\norm{H}_{\text{max}}(t):=\max_{i,j}\absolutevalue{H_{i,j}(t)}. (31)
Table 1: Summary of the complexity of implementing the time-dependent Schrödinger equation (20) obtained from the Schrödingerization of the three types of ABCs. The $\widetilde{\mathcal{O}}$ notation suppresses logarithmic factors.
Artificial Boundary Condition | Query Complexity
Complex Absorbing Potential (CAP) | $\widetilde{\mathcal{O}}\left(\frac{T}{\Delta x^{2}}+\norm{V}_{\text{max},1}+\frac{T\norm{W}_{\text{max}}}{\Delta x^{2}}\right)$
Perfectly Matched Layers (PML) | $\widetilde{\mathcal{O}}\left(\frac{T}{\Delta x^{2}}+\norm{V}_{\text{max},1}+\frac{T\norm{\sigma}_{\text{max}}}{\Delta x^{3}}\right)$
Dirichlet-to-Neumann Map (DtN) | $\widetilde{\mathcal{O}}\left(s_{\Sigma}\left(\frac{T}{\Delta x^{2}}+\norm{V}_{\text{max},1}\right)\right)$

We now discuss the complexity of implementing the time-dependent Schrödinger equation (20) obtained from the Schrödingerization of the three types of ABCs. Since the discretization error for equation (17) is $\mathcal{O}(\Delta p)$ (because $e^{-|p|}$ is only continuous but not continuously differentiable in $p$) and the discretization in real space has error $\mathcal{O}(\Delta x^{2})$, we set

\Delta p=\mathcal{O}\big(\Delta x^{2}\big) (32)

to make the two discretization errors comparable.

The CAP method. For the CAP method, in light of (22), we obtain directly that

\|H_{0}(t)\|_{\text{max}}\leq\frac{2}{\Delta x^{2}}+\max_{i,j}|V(x_{i},y_{j},t)|+\max_{i,j}|\text{Re}\,W(x_{i},y_{j})|,\qquad\|H_{1}\|_{\text{max}}\leq\max_{i,j}|\text{Im}\,W(x_{i},y_{j})|.

Thus we can conclude that

\|H(t)\|_{\text{max}}=\|H_{0}(t)\otimes I-H_{1}\otimes D_{\mu}\|_{\text{max}}\leq\frac{2}{\Delta x^{2}}+\norm{V}_{\text{max}}(t)+\frac{\norm{W}_{\text{max}}}{\Delta x^{2}},

where we have used $\|D_{\mu}\|_{\text{max}}=\mathcal{O}(1/\Delta p)$ together with (32). Meanwhile, the sparsity of $H$ is $s(H)=\mathcal{O}(1)$ (for $d$ dimensions, it is $\mathcal{O}(d)$). Combining these estimates and integrating over $[0,T]$ gives the total complexity, as shown in the first row of Table 1.
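A quick arithmetic check of this estimate is given below (our own sketch; a one-dimensional analogue with assumed grid sizes and $\Delta p=\Delta x^{2}$). The computed $\norm{H}_{\text{max}}$ and the right-hand side agree up to the $\mathcal{O}(1)$ constant (a factor of $\pi$) hidden in $\norm{D_{\mu}}_{\text{max}}=\mathcal{O}(1/\Delta p)$:

```python
import numpy as np

# 1D analogue of the CAP estimate: build H = H0 x I - H1 x D_mu explicitly
# (small sizes so the dense Kronecker product fits in memory) and compare
# max_{ij}|H_{ij}| with 2/dx^2 + ||V||_max + ||W||_max/dx^2.
n, M = 64, 32
x = np.linspace(-3.0, 3.0, n); dx = x[1] - x[0]; dp = dx**2
off = np.full(n - 1, 1.0 / dx**2)
Dxx = np.diag(np.full(n, -2.0 / dx**2)) + np.diag(off, 1) + np.diag(off, -1)
V = np.where(np.abs(x) <= 1.0, np.sin(2 * np.pi * np.abs(x)), 0.0)
W_im = -10.0 * np.maximum(np.abs(x) - 2.2, 0.0) ** 2

H0 = -0.5 * Dxx + np.diag(V)
H1 = -np.diag(W_im)
mu = 2 * np.pi * np.arange(-M // 2, M // 2) / (M * dp)      # |mu| <= pi/dp = pi/dx^2
H = np.kron(H0, np.eye(M)) - np.kron(H1, np.diag(mu))

print(np.abs(H).max())                                      # ||H||_max
print(2 / dx**2 + np.abs(V).max() + np.abs(W_im).max() / dx**2)
```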


The PML method. We examine the magnitude of the Hamiltonian in (20) from (24) and (23),

\begin{aligned}\|H_{0}(t)\|_{\text{max}}&=\norm{\tfrac{1}{2}(L_{h}-L_{h}^{\dagger})}_{\text{max}}\leq\|H_{V}\|_{\text{max}}+\left(\|\bm{\sigma}_{x}\|_{\text{max}}+\|\bm{\sigma}_{y}\|_{\text{max}}\right)\left(\|D_{x}\|_{\text{max}}+\|D_{y}\|_{\text{max}}\right)\\ &\leq\frac{1}{\Delta x^{2}}+\max_{i,j}|V(x_{i},y_{j},t)|+4\max_{i}|\sigma(x_{i})|\frac{1}{\Delta x}.\end{aligned}

Similarly, we have,

\|H_{1}\|_{\text{max}}=\norm{\tfrac{1}{2}(L_{h}+L_{h}^{\dagger})}_{\text{max}}\leq(\|\bm{\sigma}_{x}\|+\|\bm{\sigma}_{y}\|)(\|D_{x}\|+\|D_{y}\|)\leq 4\max_{i}|\sigma(x_{i})|\frac{1}{\Delta x}.

These two estimates give

\|H\|_{\text{max}}\lesssim\frac{1}{\Delta x^{2}}+\norm{V}_{\text{max}}+\norm{\sigma}_{\text{max}}\frac{1}{\Delta x\Delta p}.

Finally, one can also deduce from (24) and (23) that the sparsity is $s(H)=\mathcal{O}(1)$. The total complexity is shown in the second row of Table 1.


The DtN method. The query complexity of the DtN method is provided in the third row of Table 1. To prove this estimate, we follow equations (27) and (29). First, we notice that $S=H_{\Gamma,\text{I\!I}}H_{\text{I\!I,}\Gamma}$ in (27) is sparse and its elements are of order $\frac{1}{\Delta x^{4}}$. Therefore, $\norm{\Sigma_{\text{I},\Gamma}}_{\text{max}}=\mathcal{O}\left(\frac{1}{\Delta x^{2}}\right)$. Following the same observation, we deduce that $\norm{R}_{\text{max}}=\mathcal{O}\left(\frac{1}{\Delta x^{2}}\right)$. Using the formulas (28) that we derived in the previous section, we find that $\norm{G_{0}}_{\text{max}}=\mathcal{O}\left(\frac{1}{\Delta x^{2}}\right)$, while $\norm{G_{1}}_{\text{max}}=\mathcal{O}(1)$.

Inserting these estimates into (29), we have

\|H(t)\|_{\text{max}}=\mathcal{O}\left(\frac{1}{\Delta x^{2}}+\norm{V}_{\text{max}}+\frac{1}{\Delta p}\right).
Figure 6: The effective sparsity of $\Sigma_{\Gamma,\Gamma}$. Left: entries satisfying $\absolutevalue{\Sigma_{ij}}>0.01\norm{\Sigma_{\Gamma,\Gamma}}_{\text{max}}$ (one-percent cut); right: entries satisfying $\absolutevalue{\Sigma_{ij}}>0.02\norm{\Sigma_{\Gamma,\Gamma}}_{\text{max}}$ (two-percent cut). Here $\norm{\Sigma_{\Gamma,\Gamma}}_{\text{max}}:=\max_{i,j}\absolutevalue{\Sigma_{ij}}$.

The other factor that contributes to the complexity is the sparsity of the matrices in (27). Numerical observations suggest that these matrices are close to identity matrices. For example, for the matrix $\Sigma_{\Gamma,\Gamma}$ in (27), we plot in Figure 6 its entries after removing those that are much smaller than its maximum norm. We let $s_{\Sigma}$ denote this effective sparsity, and the overall complexity is linear in $s_{\Sigma}$.
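The effective sparsity itself is straightforward to estimate numerically; a minimal sketch (our own, applied here to a synthetic matrix purely for illustration) counts, per row, the entries above a given fraction of $\norm{\Sigma_{\Gamma,\Gamma}}_{\text{max}}$:

```python
import numpy as np

# Effective sparsity: the maximum number of entries per row exceeding a small
# fraction of the matrix max-norm (the cut-offs match Figure 6).
def effective_sparsity(Sigma, cut=0.01):
    thresh = cut * np.abs(Sigma).max()
    return int(np.count_nonzero(np.abs(Sigma) > thresh, axis=1).max())

# Illustration with a synthetic matrix that is close to the identity:
m = 100
d = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
Sigma = np.eye(m) + 0.05 / (1.0 + d)
print(effective_sparsity(Sigma, 0.01), effective_sparsity(Sigma, 0.02))
```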

6 Conclusion

In this article, we have used the Schrödingerization approach recently introduced in [JLY22a, JLY22b] for quantum dynamics with artificial boundary conditions (ABCs). While quantum dynamics with ABCs is no longer a Hermitian Hamiltonian system, which is the form most natural for quantum simulation, the Schrödingerization approach makes it so in a simple fashion. We give the implementation details and estimate the computational complexities for three representative ABCs, including the complex absorbing potential method, perfectly matched layer methods, and the Dirichlet-to-Neumann approach. Our numerical experiments validate this approach, demonstrating that the Schrödingerized systems yield the same results as the original dynamics with ABCs.

Our approach provides simple quantum simulation algorithms for quantum dynamics in unbounded domains. As pointed out in [JLY22a, JLY22b], the Schrödingerization approach is not only applicable to quantum dynamics with ABCs; it also applies to general linear partial differential equations with physical boundary conditions, as well as to such equations with interface conditions. These will be the subject of our future research.

Acknowledgements

SJ was partially supported by the NSFC grant No. 12031013, the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and the Innovation Program of Shanghai Municipal Education Commission (No. 2021-01-07-00-02-E00087). NL acknowledges funding from the Science and Technology Program of Shanghai, China (21JC1402900). YY was partially supported by the China Postdoctoral Science Foundation (no. 2022M712080). XL was supported by a Seed Grant from the Institute of Computational and Data Science (ICDS) at Penn State.

References

  • [AAB+08] Xavier Antoine, Anton Arnold, Christophe Besse, Matthias Ehrhardt, and Achim Schädle. A review of transparent and artificial boundary conditions techniques for linear and nonlinear Schrödinger equations. Communications in Computational Physics, 4(4):729–796, 2008.
  • [AFL21] Dong An, Di Fang, and Lin Lin. Time-dependent unbounded hamiltonian simulation with vector norm scaling. Quantum, 5:459, 2021.
  • [AFL22] Dong An, Di Fang, and Lin Lin. Time-dependent hamiltonian simulation of highly oscillatory dynamics and superconvergence for schrödinger equation. Quantum, 6:690, 2022.
  • [ALL23] Dong An, Jin-Peng Liu, and Lin Lin. Linear combination of Hamiltonian simulation for non-unitary dynamics with optimal state preparation cost. arXiv preprint arXiv:2303.01029, 2023.
  • [ALWZ22] Dong An, Jin-Peng Liu, Daochen Wang, and Qi Zhao. A theory of quantum differential equation solvers: limitations and fast-forwarding. arXiv preprint arXiv:2211.05246, 2022.
  • [Arn98] Anton Arnold. Numerically absorbing boundary conditions for quantum evolution equations. VLSI design, 6(1-4):313–319, 1998.
  • [BCC+15] Dominic W Berry, Andrew M Childs, Richard Cleve, Robin Kothari, and Rolando D Somma. Simulating Hamiltonian Dynamics with a truncated Taylor series. Physical Review Letters, 114(9), 2015.
  • [BCC+17] Dominic W Berry, Andrew M Childs, Richard Cleve, Robin Kothari, and Rolando D Somma. Exponential improvement in precision for simulating sparse Hamiltonians. Forum of Mathematics, Sigma, 5, 2017.
  • [BCK15] Dominic W Berry, Andrew M Childs, and Robin Kothari. Hamiltonian simulation with nearly optimal dependence on all parameters. In Proceedings of the 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2015), 2015.
  • [BCOW17] Dominic W Berry, Andrew M Childs, Aaron Ostrander, and Guoming Wang. Quantum algorithm for linear differential equations with exponentially improved dependence on precision. Communications in Mathematical Physics, 356(3):1057–1081, 2017.
  • [BCS+20] Dominic W Berry, Andrew M Childs, Yuan Su, Xin Wang, and Nathan Wiebe. Time-dependent Hamiltonian simulation with $L^{1}$-norm scaling. Quantum, 4:254, 2020.
  • [Ber94] Jean-Pierre Berenger. A perfectly matched layer for the absorption of electromagnetic waves. Journal of Computational Physics, 114(2):185–200, 1994.
  • [BMO+02] Mads Brandbyge, José-Luis Mozos, Pablo Ordejón, Jeremy Taylor, and Kurt Stokbro. Density-functional method for nonequilibrium electron transport. Physical Review B, 65(16):165401, 2002.
  • [CGJ19] Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. The power of block-encoded matrix powers: Improved regression techniques via faster Hamiltonian simulation. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019), 2019.
  • [CKS17] Andrew M Childs, Robin Kothari, and Rolando D Somma. Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM Journal on Computing, 46(6):1920–1950, 2017.
  • [CL19] Andrew M Childs and Jin-Peng Liu. Quantum spectral methods for differential equations. arXiv preprint arXiv:1901.00961, 2019.
  • [EM79] Bjorn Engquist and Andrew Majda. Radiation boundary conditions for acoustic and elastic wave calculations. Communications on pure and applied mathematics, 32:313–357, 1979.
  • [FLT23] Di Fang, Lin Lin, and Yu Tong. Time-marching based quantum solvers for time-dependent linear differential equations. Quantum, 7:955, 2023.
  • [GSLW19] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 193–204, 2019.
  • [HH04] Houde Han and Zhongyi Huang. Exact artificial boundary conditions for the Schrödinger equation in $R^{2}$. Communications in Mathematical Sciences, 2(1):79–94, 2004.
  • [HHL09] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for solving linear systems of equations. Physical Review Letters, 103(15):150502, October 2009. arXiv: 0811.3171.
  • [Hid19] J. D Hidary. Quantum Computing: An Applied Approach. Springer, 2019.
  • [HRB07] F He, C Ruiz, and A Becker. Absorbing boundaries in numerical solutions of the time-dependent schrödinger equation on a grid using exterior complex scaling. Physical Review A, 75(5):053407, 2007.
  • [JLY22a] Shi Jin, Nana Liu, and Yue Yu. Quantum simulation of partial differential equations via Schrödingerisation. arXiv:2212.13969, 2022.
  • [JLY22b] Shi Jin, Nana Liu, and Yue Yu. Quantum simulation of partial differential equations via Schrödingerisation: technical details. arXiv:2212.14703, 2022.
  • [Kro22] Hari Krovi. Improved quantum algorithms for linear and nonlinear differential equations. arXiv preprint arXiv:2202.01054, 2022.
  • [KSB19] Mária Kieferová, Artur Scherer, and Dominic W Berry. Simulating the dynamics of time-dependent Hamiltonians with a truncated Dyson series. Physical Review A, 99(4):042314, 2019.
  • [LC17] Guang Hao Low and Isaac L Chuang. Optimal Hamiltonian simulation by quantum signal processing. Physical Review Letters, 118(1), 2017.
  • [LC19] Guang Hao Low and Isaac L Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, 2019.
  • [Li09] Xiantao Li. Efficient boundary conditions for molecular statics models of solids. Physical Review B, 80(10):104112, 2009.
  • [Li12] Xiantao Li. An atomistic-based boundary element method for the reduction of molecular statics models. Computer Methods in Applied Mechanics and Engineering, 225:1–13, 2012.
  • [MPNE04] JG Muga, JP Palao, B Navarro, and IL Egusquiza. Complex absorbing potentials. Physics Reports, 395(6):357–426, 2004.
  • [MR09] Per-Gunnar Martinsson and Gregory J Rodin. Boundary algebraic equations for lattice problems. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 465(2108):2489–2503, 2009.
  • [NC02] Michael A Nielsen and Isaac Chuang. Quantum Computation and Quantum Information. American Association of Physics Teachers, 2002.
  • [NK11] Anna Nissen and Gunilla Kreiss. An optimized perfectly matched layer for the Schrödinger equation. Communications in Computational Physics, 9(1):147–179, 2011.
  • [Pre18] J. Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, 2018.
  • [RM96] UV Riss and H-D Meyer. Investigation on the reflection and transmission properties of complex absorbing potentials. The Journal of Chemical Physics, 105(4):1409–1419, 1996.
  • [RP11] E. G Rieffel and W. H Polak. Quantum Computing: A Gentle Introduction. MIT Press, 2011.
  • [SE00] EY Sidky and BD Esry. Boundary-free propagation with the time-dependent schrödinger equation. Physical Review Letters, 85(24):5086, 2000.
  • [WL20] Xiaojie Wu and Xiantao Li. Absorbing boundary conditions for the time-dependent Schrödinger-type equations in $R^{3}$. Physical Review E, 101(1):013304, 2020.
  • [YE18] Youliang Yu and BD Esry. An optimized absorbing potential for ultrafast, strong-field problems. Journal of Physics B: Atomic, Molecular and Optical Physics, 51(9):095601, 2018.
  • [Zhe07] Chunxiong Zheng. A perfectly matched layer approach to the nonlinear Schrödinger wave equations. Journal of Computational Physics, 227(1):537–556, 2007.