
Weak formulation and spectral approximation of a Fokker-Planck equation for neural ensembles

Ling Yan, Pei Zhang, Yanli Wang, Zhennan Zhou

Ling Yan: Beijing Computational Science Research Center, Beijing, China, 100193, email: [email protected]
Pei Zhang: Beijing Computational Science Research Center, Beijing, China, 100193, email: [email protected]
Yanli Wang: Beijing Computational Science Research Center, Beijing, China, 100193, email: [email protected]
Zhennan Zhou: Institute for Theoretical Sciences, Westlake University, Hangzhou, China, 310030, email: [email protected]
Abstract

In this paper, we focus on efficiently and flexibly simulating the Fokker-Planck equation associated with the Nonlinear Noisy Leaky Integrate-and-Fire (NNLIF) model, which reflects the dynamic behavior of neuron networks. We apply the Galerkin spectral method to discretize the spatial domain by constructing a variational formulation that satisfies complex boundary conditions. Moreover, the boundary conditions in the variational formulation include only zeroth-order terms, with first-order conditions being naturally incorporated. This allows the numerical scheme to be further extended to an excitatory-inhibitory population model with synaptic delays and refractory states. Additionally, we establish the consistency of the numerical scheme. Experimental results, including accuracy tests, blow-up events, and periodic oscillations, validate the properties of our proposed method.

Key words: nonlinear noisy leaky integrate-and-fire model; Fokker-Planck equation; neuron network; spectral-Galerkin method.

Mathematics Subject Classification: 35Q92; 65M70; 92B20

1 Introduction

With the development of neuroscience, the analysis of large-scale neuron network models has attracted significant attention. In recent years, an increasing number of researchers have begun to employ mathematical theories and tools to analyze the activity of simulated neurons. Various stochastic methods [19, 20] are frequently used to simulate these mathematical models. However, the computational complexity escalates as the number of neurons in the network increases. To address this issue, researchers developed mean-field theory [21, 13, 23], which has become the foundation of a growing number of models [14, 15, 22].

One of the most classic models in computational neuroscience is the Nonlinear Noisy Leaky Integrate-and-Fire (NNLIF) model, which uses partial differential equations to describe the dynamics of a single neuron's membrane potential V(t) through statistical quantities, such as the distribution of the membrane potential and the mean firing rate. The NNLIF model captures the temporal evolution of the membrane potential, accounting for both its decay and reset mechanisms. In the absence of external stimuli, the membrane potential decays exponentially towards the resting potential V_L, and we assume V_L=0 for the sake of simplicity in this paper. When a neuron receives excitatory synaptic inputs from other neurons, its voltage continues to increase. Once the membrane potential reaches the threshold voltage V_F, it does not increase further but instead instantaneously drops to the reset voltage V_R. This process repeats continuously as the neuron responds to synaptic inputs.

To characterize the dynamic behavior of the neuron population, the evolution equation for the probability density function, also known as the Fokker-Planck equation, has been derived as in [21, 2, 3]

\begin{cases}\partial_{t}p(v,t)+\partial_{v}\big[h(v,N(t))p(v,t)\big]-a(N(t))\partial_{vv}p(v,t)=0,\quad v\in(-\infty,V_{F}]\setminus\{V_{R}\},\\ p(v,0)=p^{0}(v),\quad p(-\infty,t)=p(V_{F},t)=0,\\ p(V^{-}_{R},t)=p(V^{+}_{R},t),\quad\partial_{v}p(V^{-}_{R},t)=\partial_{v}p(V^{+}_{R},t)+\frac{N(t)}{a},\end{cases} (1.1)

where

\begin{cases}N(t)=-a(N(t))\partial_{v}p(V_{F},t),\\ h(v,N(t))=-v+bN(t),\\ a(N(t))=a_{0}+a_{1}N(t).\end{cases} (1.2)

Here, the density function p(v,t) represents, for a given time t, the probability of finding a neuron at voltage v. The connectivity parameter b is positive for excitatory networks and negative for inhibitory networks, h(v,N(t)) is the drift coefficient, and a(N(t)) is the diffusion coefficient. N(t) is the mean firing rate of the neurons at time t; it determines the magnitude of the flux shift, which results in the complicated boundary condition at v=V_R. The density function p(v,t) satisfies the mass conservation

\int_{-\infty}^{V_{F}}p(v,t)\mathrm{d}v=\int_{-\infty}^{V_{F}}p^{0}(v)\mathrm{d}v=1. (1.3)

In addition, we suppose the initial value p^{0}(v) satisfies the following assumption [9].

Assumption 1 (Assumption (\mathbf{H})).

p^{0}(v)\in C^{1}\big((-\infty,V_{R})\cup(V_{R},V_{F}]\big)\cap L^{1}\big((-\infty,V_{R})\cup(V_{R},V_{F})\big) is non-negative with p^{0}(V_{F})=0 and

\lim_{v\to-\infty}\partial_{v}p^{0}(v)=0. (1.4)

In recent years, there have been a number of significant developments on the NNLIF model, including theoretical analysis and numerical simulation. In [3, 9], the authors investigate analytical properties of the solutions, steady states, and the finite-time blow-up phenomenon for average-excitatory or average-inhibitory networks. Building upon this, multiple-population models have been studied in [7, 10] in various aspects, including stationary solutions and global solution behavior. Additionally, some researchers focus on variants of the NNLIF model, such as models with refractory states and synaptic delays [10, 6]. Besides, there has been a growing interest in more complex solution behavior, such as the emergence of cascade assemblies [19, 20] and periodic oscillations [2, 1], and a generalized solution of the NNLIF model has been proposed to allow blow-up events by introducing a dilated time scale [11], which may serve as a foundational tool for analyzing these complex dynamics in the PDE models.

Due to the nonlinear drift and diffusion terms and the unique flux shift mechanism, it is challenging to investigate the Fokker-Planck equation (1.1) with analytical techniques. Therefore, numerical methods are often needed to approximate its exact solution. More specifically, we seek an efficient and flexible numerical method that not only handles the complex boundary conditions and nonlinear terms, but is also applicable to various forms of the Fokker-Planck equations for neuron networks, including an excitatory-inhibitory population model with synaptic delays and refractory states.

Various numerical methods have been proposed to simulate the NNLIF PDE model. In [4], a finite difference method, which combines the WENO construction and the Chang-Cooper method, is applied to this model together with the explicit third-order Runge-Kutta method. In [25], a comparison was made between a finite element method using Euler's backward difference scheme and the WENO finite difference method, with the finite element method showing potential advantages for nonlinear noisy LIF models. Additionally, a discontinuous Galerkin method for the variant model with delay and refractory periods is proposed in [24] with justified stability properties.

In recent years, developing structure-preserving methods for this model has emerged as a new trend. A conservative and conditionally positivity-preserving scheme in the framework of the finite difference method is proposed in [17], and the semi-discrete scheme is also proved to satisfy a discrete relative entropy estimate. Based on [17], an unconditionally positivity-preserving and asymptotic-preserving numerical scheme is proposed in [16], and a specialized multi-scale solver, integrating macroscopic and microscopic solvers, is presented in [12] along with a rigorous numerical analysis of the hybridization of both solvers.

To further reduce the computational complexity, there is also a trend to reduce the complex PDE dynamics to a hierarchy of moment equations, which naturally yields a computational tool. A moment method based on the maximum entropy principle is introduced in [28] to approximate the master equation of interacting neurons. In [29], an asymptotic-preserving numerical method in the framework of the spectral method, free of the above limitation, is introduced, where the basis functions are constructed to satisfy the dynamic boundary conditions exactly, and spectral convergence is validated. However, unnecessary boundary conditions are imposed in its variational formulation, and the extension of the method in [29] in its current form is rather limited.

In this paper, we focus on developing a flexible spectral-Galerkin method for solving the Fokker-Planck equation (1.1) in a semi-unbounded region. Spectral methods have demonstrated significant advantages for simulating the PDE model, as shown in [29], particularly in terms of spectral accuracy. The method in [29] primarily utilizes the Legendre spectral-Galerkin method to solve a specific variational formulation of the Fokker-Planck equation. In this variational formulation, three types of boundary conditions are imposed: the Dirichlet boundary conditions, the continuity condition, and the dynamical derivative conditions. However, due to the strict constraints imposed by the derivative conditions, it is challenging to extend such a method to models with more complex derivative boundary conditions, such as population models with synaptic delays and refractory periods. To overcome these limitations, we propose a Laguerre-Legendre spectral-Galerkin method (LLSGM) based on an alternative variational form. We reconstruct the weak formulation so that it includes only the essential boundary conditions, while the dynamical derivative boundary conditions are naturally implied. Consequently, the proposed numerical method can be readily extended to models with different derivative boundary conditions. Additionally, we demonstrate the consistency of the numerical scheme with the Fokker-Planck equation, ensuring the reliability of the computational results.

We also carry out extensive numerical experiments with the proposed numerical method. We test the convergence in the temporal and spatial directions and the ability to capture the blow-up events in finite time. Additionally, we extend the proposed method to the excitatory-inhibitory populations with synaptic delays and refractory states and observe the periodic oscillations. The results indicate that the proposed method can be effectively applied to the variants of the Fokker-Planck equation (1.1). We stress that, in contrast to the spectral method in [29], the variational formulation presented here includes only zeroth-order boundary conditions, while the derivative boundary conditions in (1.1) are implicitly contained within the weak form, ensuring the general applicability of the method.

The organization of the paper is as follows. In Sec. 2, we introduce the weak solution and the classical solution of the Fokker-Planck equation (1.1) and establish their conditional equivalence. In Sec. 3, we construct the spectral approximation based on the weak solution, where we introduce the basis functions used to discretize the spatial direction and the semi-implicit scheme employed for discretizing the temporal direction. Furthermore, the consistency of the numerical scheme is verified. In Sec. 4, we extend our method to the two-population model with synaptic delays and refractory periods. In Sec. 5, we present numerical experiments, including the one-population model and the two-population model.

2 Weak solution

In this section, we present a detailed description of both the weak solution and the classical solution to the PDE model (1.1)-(1.2). Additionally, we demonstrate the equivalence between the weak solution and the classical solution under certain conditions, establishing a critical theoretical foundation for the proposed numerical method in later sections.

First of all, we consider the definition of classical solution as presented in [18].

Definition 2.1 (classical solution).

(p(v,t),N(t)) is a classical solution of the system (1.1)-(1.2) on the time interval [0,T], for any given 0<T<+\infty, with the initial value satisfying Assumption 1 ((\mathbf{H})), under the following conditions:

  • (1) N(t) is a continuous function for t\in[0,T];

  • (2) p(v,t) is continuous in the region \{(v,t):-\infty<v\leq V_{F},\,t\in[0,T]\};

  • (3) \partial_{vv}p(v,t) and \partial_{t}p(v,t) are continuous in the region \big\{(v,t):v\in(-\infty,V_{R})\cup(V_{R},V_{F}],\,t\in(0,T]\big\};

  • (4) \partial_{v}p(V_{R}^{-},t), \partial_{v}p(V_{R}^{+},t) and \partial_{v}p(V_{F}^{-},t) are well defined for t\in(0,T];

  • (5) for t\in(0,T], \partial_{v}p(v,t)\to 0 as v\to-\infty;

  • (6) p(v,t) satisfies (1.1)-(1.2) and p(v,0)=p^{0}(v) for v\in(-\infty,V_{R})\cup(V_{R},V_{F}].

Here, the fifth condition involves the limit of the derivative at negative infinity. For simplicity, we will use the notation \partial_{v}p(-\infty,t)=0 to denote

\lim_{v\to-\infty}\partial_{v}p(v,t)=0. (2.1)

Based on the classical solution p(v,t) defined in Def. 2.1, we derive a weak formulation by multiplying both sides of (1.1) with a test function \phi(v)\in C_{0}^{\infty}\big((-\infty,V_{F}]\big), where v\partial_{v}\phi(v), \partial_{v}\phi(v) and v\phi(v)\in L^{\infty}\big((-\infty,V_{F}]\big). We then perform integration by parts to obtain

\begin{split}\int^{V_{F}}_{-\infty}\Big[\partial_{t}p(v,t)\phi(v)-h(v,N(t))p(v,t)\partial_{v}\phi(v)+a(N(t))\partial_{v}p(v,t)\partial_{v}\phi(v)\Big]\mathrm{d}v\\ +\Big(h(v,N(t))p(v,t)\phi(v)\big|_{-\infty}^{V_{R}^{-}}+h(v,N(t))p(v,t)\phi(v)\big|_{V_{R}^{+}}^{V_{F}}\Big)\\ -\Big(a(N(t))\partial_{v}p(v,t)\phi(v)\big|_{-\infty}^{V_{R}^{-}}+a(N(t))\partial_{v}p(v,t)\phi(v)\big|_{V_{R}^{+}}^{V_{F}}\Big)=0.\end{split} (2.2)

Defining the operator \mathcal{B}(p,\phi) as

\mathcal{B}(p,\phi)=\int^{V_{F}}_{-\infty}\Big[\partial_{t}p(v,t)\phi(v)-h(v,N(t))p(v,t)\partial_{v}\phi(v)+a(N(t))\partial_{v}p(v,t)\partial_{v}\phi(v)\Big]\mathrm{d}v, (2.3)

and applying the boundary conditions in (1.1)-(1.2), (2.2) reduces to

\mathcal{B}(p,\phi)+a(N(t))\partial_{v}p(V_{F},t)\big[\phi(V_{R})-\phi(V_{F})\big]=0, (2.4)

where p(v,t) satisfies

p(-\infty,t)=0,\qquad p(V_{F},t)=0,\qquad p(V^{-}_{R},t)=p(V^{+}_{R},t). (2.5)
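For completeness, the boundary-term bookkeeping behind this reduction can be spelled out; the following short computation (our own rearrangement, using only the conditions in (1.1)-(1.2), the decay p(-\infty,t)=0 together with the boundedness of v\phi, and \partial_{v}p(-\infty,t)=0) recovers the single boundary term in (2.4):

\begin{split}h\,p\,\phi\big|_{-\infty}^{V_{R}^{-}}+h\,p\,\phi\big|_{V_{R}^{+}}^{V_{F}}&=h(V_{R},N(t))\big[p(V_{R}^{-},t)-p(V_{R}^{+},t)\big]\phi(V_{R})+h(V_{F},N(t))\,p(V_{F},t)\,\phi(V_{F})=0,\\ -a\,\partial_{v}p\,\phi\big|_{-\infty}^{V_{R}^{-}}-a\,\partial_{v}p\,\phi\big|_{V_{R}^{+}}^{V_{F}}&=-a(N(t))\big[\partial_{v}p(V_{R}^{-},t)-\partial_{v}p(V_{R}^{+},t)\big]\phi(V_{R})-a(N(t))\partial_{v}p(V_{F},t)\phi(V_{F})\\ &=-N(t)\phi(V_{R})-a(N(t))\partial_{v}p(V_{F},t)\phi(V_{F})=a(N(t))\partial_{v}p(V_{F},t)\big[\phi(V_{R})-\phi(V_{F})\big],\end{split}

where the last equality uses N(t)=-a(N(t))\partial_{v}p(V_{F},t).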

Thus, the weak solutions are introduced formally as below.

Definition 2.2 (weak solution).

Define the trial function space

U(\Omega)=\Big\{p(v)\in\mathbb{H}^{1}(\Omega)~\big|~p(V_{F})=p(-\infty)=0,\,p(V_{R}^{-})=p(V_{R}^{+})\Big\}, (2.6)

where \Omega=(-\infty,V_{R})\cup(V_{R},V_{F}). Let p(v,t)\in C^{1}\big([0,T];U(\Omega)\big), N(t)\in C([0,T]) and p(v,0)=p^{0}(v) satisfy Assumption 1 ((\mathbf{H})) for v\in\Omega. Additionally, assume that p(v,t) is differentiable at V_{F}, V_{R}^{+} and V_{R}^{-}.

Then (p(v,t),N(t)) is a weak solution of (1.1)-(1.2) if for any test function \phi(v)\in\mathbb{H}^{1}\big((-\infty,V_{F}]\big) with v\partial_{v}\phi\in\mathbb{H}^{1}\big((-\infty,V_{F}]\big), \phi(v) and v\phi(v)\in L^{\infty}\big((-\infty,V_{F}]\big), the equation

\mathcal{B}(p,\phi)+a(N(t))\partial_{v}p(V_{F},t)\big[\phi(V_{R})-\phi(V_{F})\big]=0 (2.7)

is satisfied for all t\in(0,T].

Here, C^{1}\big([0,T];U(\Omega)\big) represents the space comprising all continuous functions f(v,t) from [0,T] to U(\Omega) with

\left\|f(t)\right\|_{C([0,T];X)}:=\max_{0\leq t\leq T}\left\|f(t)\right\|_{X}<\infty, (2.8)

whose first derivatives in time are also continuous with

\left\|\partial_{t}f(t)\right\|_{C([0,T];X)}:=\max_{0\leq t\leq T}\left\|\partial_{t}f(t)\right\|_{X}<\infty, (2.9)

where X=U(\Omega). The set of bounded functions is denoted by L^{\infty}\big((-\infty,V_{F}]\big), and \mathbb{H}^{1}\big((-\infty,V_{F}]\big) is the abbreviation of the classical Sobolev space W^{1,2}\big((-\infty,V_{F}]\big). It should be noted that the boundary conditions satisfied by the weak solution in Def. 2.2 include only the zeroth-order boundary conditions, while the derivative boundary condition in (1.1) is implied within the weak formulation as a natural boundary condition. This treatment of boundary conditions simplifies the variational formulation of the problem (1.1) by removing the need to enforce derivative conditions at the boundaries.

Remark 2.1.

The model extension discussed in Sec. 4 further supports the reasonableness of retaining only the zeroth-order boundary conditions as essential boundary conditions. This demonstrates that the numerical method based on the weak solution presented in Sec. 3 is both practical and effective.

Before constructing the numerical scheme, we first demonstrate the relationship between the weak solution in Def. 2.2 and the classical solution in Def. 2.1.

Theorem 2.1 (Equivalence to the classical solution).

Given p(v,t)\in C^{1}\big([0,T];U(\Omega)\big), if (p(v,t),N(t)) is a classical solution of (1.1)-(1.2), then (p(v,t),N(t)) is a weak solution. Conversely, if (p(v,t),N(t)) is a weak solution of (1.1)-(1.2) and p(v,t)\in C^{1}\big((0,T];C^{2}((-\infty,V_{R})\cup(V_{R},V_{F}])\big), then (p(v,t),N(t)) is a classical solution of (1.1)-(1.2).

Proof.

First, it is obvious that if p(v,t)\in C^{1}\big([0,T];U(\Omega)\big) and (p(v,t),N(t)) is a classical solution of (1.1)-(1.2), then (p(v,t),N(t)) is also a weak solution.

Then, given that p(v,t)\in C^{1}\big((0,T];C^{2}((-\infty,V_{R})\cup(V_{R},V_{F}])\big), it suffices to verify that the weak solution (p(v,t),N(t)) satisfies the conditions in Def. 2.1. It is easy to see from Def. 2.2 that conditions (1)-(4) in Def. 2.1 hold. Hence, it only remains to prove that conditions (5)-(6) in Def. 2.1 are valid.

It follows from (2.6) that

\big[h(v,N(t))p(v,t)\phi(v)\big]\big|_{-\infty}^{V_{R}^{-}}+\big[h(v,N(t))p(v,t)\phi(v)\big]\big|_{V_{R}^{+}}^{V_{F}}=0. (2.10)

Then, using integration by parts and (2.10), (2.7) is rewritten as

\begin{split}\int_{-\infty}^{V_{F}}\mathcal{F}[p(v,t)]\phi(v)\mathrm{d}v+a(N(t))\phi(V_{R})\big[\partial_{v}p(V_{R}^{-},t)-\partial_{v}p(V_{R}^{+},t)+\partial_{v}p(V_{F},t)\big]\\ -a(N(t))\partial_{v}p({-\infty},t)\phi({-\infty})=0,\end{split} (2.11)

where

\mathcal{F}[p(v,t)]=\partial_{t}p(v,t)+\partial_{v}\big[h(v,N(t))p(v,t)\big]-a(N(t))\partial_{vv}p(v,t). (2.12)

Taking

\phi(v)\in\mathbb{W}_{1}=\left\{\phi(v)\in\mathbb{W}(-\infty,V_{F}):\phi(V_{R})=\phi(-\infty)=0\right\}, (2.13)

where

\mathbb{W}(-\infty,V_{F})=\left\{\phi(v)\in\mathbb{H}^{1}(-\infty,V_{F}]:v\partial_{v}\phi\in\mathbb{H}^{1}(-\infty,V_{F}]\ \text{and}\ \phi,v\phi\in L^{\infty}(-\infty,V_{F}]\right\}, (2.14)

(2.11) reduces to

\int_{-\infty}^{V_{F}}\mathcal{F}[p(v,t)]\phi(v)\mathrm{d}v=0. (2.15)

Due to the continuity of \mathcal{F}[p(v,t)] and the arbitrariness of \phi(v), we obtain

\mathcal{F}[p(v,t)]=0. (2.16)

Next, taking

\phi(v)\in\mathbb{W}_{2}=\left\{\phi(v)\in\mathbb{W}(-\infty,V_{F}):\phi(V_{R})=0\right\} (2.17)

and substituting (2.16) into (2.11), the following identity holds

a(N(t))\partial_{v}p(-\infty,t)\phi(-\infty)=0, (2.18)

which implies

\partial_{v}p(-\infty,t)=0. (2.19)

Finally, applying (2.16) and (2.19) to (2.11), we can derive

a(N(t))\phi(V_{R})\left[\partial_{v}p(V_{R}^{-},t)-\partial_{v}p(V_{R}^{+},t)+\partial_{v}p(V_{F},t)\right]=0. (2.20)

Hence, due to the arbitrariness of \phi(V_{R}), it holds that

\begin{split}\partial_{v}p(V_{R}^{-},t)&=\partial_{v}p(V_{R}^{+},t)-\partial_{v}p(V_{F},t)\\ &=\partial_{v}p(V_{R}^{+},t)+\frac{N(t)}{a(N(t))}.\end{split} (2.21)

Consequently, from (2.16), (2.19) and (2.21), we can deduce conditions (5) - (6) in Def. 2.1 hold, and the proof is completed.

3 Numerical scheme

In this section, we develop a numerical scheme for the Fokker-Planck equation using a spectral method combined with semi-implicit time discretization to obtain a fully discrete scheme. Additionally, we demonstrate the consistency of the numerical scheme with the Fokker-Planck equation in the nonlinear case.

3.1 Weak formulation and basis function

In this subsection, we introduce the weak formulation based on (2.4) and construct a set of basis functions. Based on (2.4), the variational formulation of (1.1) reads

\begin{cases}\text{Find $p(v,t)\in W$ such that}\\ \mathcal{B}(p,\phi)+a(N(t))\partial_{v}p(V_{F},t)\left[\phi(V_{R})-\phi(V_{F})\right]=0,\quad\forall\phi\in W,\end{cases} (3.1)

where W\subseteq U is the trial function space, with U defined in (2.6). Since we employ the Galerkin spectral method for spatial discretization, the test function space coincides with the trial function space W.

The trial function space W is spanned by a set of basis functions, and one must carefully select the basis functions to handle the complex boundary conditions (2.5) of the problem (1.1). Denoting the set of basis functions of the trial function space W by \{\psi_{k}\}_{k=1}^{\infty}, we can expand the approximate solution p(v,t)\in W as

p(v,t)=\sum_{k=1}^{\infty}u_{k}(t)\psi_{k}(v). (3.2)

To satisfy the continuity requirement of the density function p(v,t) at V_R, we consider a subset of basis functions designed to satisfy the boundary conditions (2.5), complemented by another subset designed to satisfy homogeneous conditions. Specifically, let W=W_{1}+W_{2}, where any density function p(v,t)\in W_{1} satisfies the boundary conditions (2.5) and any p(v,t)\in W_{2} satisfies the following homogeneous boundary conditions

p(-\infty,t)=0,\qquad p(V_{F},t)=0,\qquad p(V^{-}_{R},t)=p(V^{+}_{R},t)=0. (3.3)

This decomposition enables us to handle the complex boundary conditions more effectively while enhancing the flexibility of the numerical method. This flexibility primarily lies in the choice of the basis function in W_{1}, which addresses the boundary conditions, while W_{2} is used to improve accuracy within the interval.

To keep the dimension of W_{1} as low as possible, we observe that the four boundary values appearing in the boundary conditions (2.5) are the values of the density function p(v,t) at V_{F}, V_{R}^{+}, V_{R}^{-}, and negative infinity, while (2.5) contains only three conditions. Hence, a single basis function g_{1}(v) is sufficient, and we define W_{1} as

W_{1}=\mathrm{span}\left\{g_{1}(v)\right\}, (3.4)

where the basis function should satisfy

\begin{cases}g_{1}(-\infty)=0,\\ g_{1}(V_{R})=1,\end{cases}\ v\in(-\infty,V_{R}),\qquad\begin{cases}g_{1}(V_{F})=0,\\ g_{1}(V_{R})=1,\end{cases}\ v\in(V_{R},V_{F}]. (3.5)

Specifically, we can choose a simple form for the basis function g_{1}(v): a piecewise function composed of an exponentially decaying part and a linear part,

g_{1}(v)=\begin{cases}e^{-\beta(V_{R}-v)/2},\quad v\in(-\infty,V_{R}),\\ \frac{v-V_{F}}{V_{R}-V_{F}},\quad v\in(V_{R},V_{F}],\end{cases} (3.6)

where \beta is an adjustable parameter. It should be noted that the basis function g_{1}(v) can also be chosen in other forms.
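As an illustration, a minimal sketch of this particular choice of g_{1} is given below; the sample values of V_{R}, V_{F} and \beta, and the function names, are our own assumptions and not part of the scheme.

import numpy as np

def g1(v, V_R=1.0, V_F=2.0, beta=1.0):
    """Piecewise W_1 basis function from (3.6): exponential decay on
    (-inf, V_R), linear on (V_R, V_F]."""
    v = np.asarray(v, dtype=float)
    left = np.exp(-beta * (V_R - v) / 2.0)      # branch for v < V_R
    right = (v - V_F) / (V_R - V_F)             # branch for V_R <= v <= V_F
    return np.where(v < V_R, left, right)

# quick check of the boundary values (3.5): g1(V_R) = 1 and g1(V_F) = 0
print(g1([1.0, 2.0]))   # -> [1. 0.]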

Next, we proceed to the construction of W_{2}. Since the domain \Omega=(-\infty,V_{R})\cup(V_{R},V_{F}) is disconnected at the point V_{R}, handling the homogeneous boundary conditions (3.3) is non-trivial. To address this issue, we divide the domain \Omega into the two subintervals (-\infty,V_{R}) and (V_{R},V_{F}). Specifically, we define

W_{2}\subseteq\hat{P}_{\infty}(-\infty,V_{R})+P_{\infty}(V_{R},V_{F}), (3.7)

where

\hat{P}_{\infty}(-\infty,V_{R})=\left\{e^{-\frac{-v+V_{R}}{2}}f~\big|~f\in P_{\infty}(-\infty,V_{R})\right\} (3.8)

and P_{\infty}(a,b) represents the set of all real polynomials over the interval (a,b). According to the homogeneous boundary conditions (3.3), the values of the density function p(v,t) at the boundaries of the intervals (-\infty,V_{R}) and (V_{R},V_{F}) should be zero. This implies that the basis functions must vanish at these boundary points. Thus, W_{2} can be rewritten in the following form

W_{2}=\hat{X}_{(-\infty,V_{R})}+X_{(V_{R},V_{F})}, (3.9)

where

\hat{X}_{(-\infty,V_{R})}=\big\{\varphi\in\hat{P}_{\infty}(-\infty,V_{R}):\varphi(-\infty)=\varphi(V_{R})=0\big\}, (3.10)
X_{(V_{R},V_{F})}=\big\{\varphi\in P_{\infty}(V_{R},V_{F}):\varphi(V_{R})=\varphi(V_{F})=0\big\}.

Here, we consider the construction of basis functions in \hat{X}_{(-\infty,V_{R})} and X_{(V_{R},V_{F})}. Because \hat{X}_{(-\infty,V_{R})} is defined on a semi-infinite domain, we utilize Laguerre functions to span \hat{X}_{(-\infty,V_{R})}. The generalized Laguerre functions [27] of degree n are defined by

\hat{\ell}^{(\alpha)}_{n}(x)=e^{-x/2}\ell^{(\alpha)}_{n}(x)=\frac{x^{-\alpha}e^{x/2}}{n!}\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}(x^{n+\alpha}e^{-x}),\quad x\in\mathbb{R}_{+},\quad\alpha>-1,\quad n\geq 0, (3.11)

where \ell^{(\alpha)}_{n}(x) are the Laguerre polynomials. These Laguerre functions are orthogonal with respect to the weight function \hat{\omega}_{\alpha}(x)=x^{\alpha}, namely,

\int_{0}^{+\infty}\hat{\ell}^{(\alpha)}_{n}(x)\hat{\ell}^{(\alpha)}_{m}(x)\hat{\omega}_{\alpha}(x)\mathrm{d}x=\gamma_{n}^{(\alpha)}\delta_{mn}, (3.12)

where

\gamma_{n}^{(\alpha)}=\frac{\Gamma(n+\alpha+1)}{n!}. (3.13)

The three-term recurrence relation of Laguerre functions is

\hat{\ell}^{(\alpha)}_{0}(x)=e^{-x/2},\qquad\hat{\ell}^{(\alpha)}_{1}(x)=(\alpha+1-x)e^{-x/2},
(n+1)\hat{\ell}^{(\alpha)}_{n+1}(x)=(2n+\alpha+1-x)\hat{\ell}^{(\alpha)}_{n}(x)-(n+\alpha)\hat{\ell}^{(\alpha)}_{n-1}(x). (3.14)

Since \hat{\ell}^{(\alpha)}_{n}(x) is defined on (0,+\infty), we construct the basis functions h_{k}^{L}(v) defined on (-\infty,V_{R}) through the coordinate transformation x=-v+V_{R},

h_{k}^{L}(v)=\hat{\ell}_{k}^{(0)}(-v+V_{R})+a_{1}\hat{\ell}_{k+1}^{(0)}(-v+V_{R}), (3.15)

where

\hat{\ell}_{n}^{(0)}(0)=1,\quad\lim_{v\to-\infty}\hat{\ell}_{n}^{(0)}(-v+V_{R})=0. (3.16)

Because the solution satisfies p(-\infty,t)=p(V_{R}^{-},t)=0, the basis functions h_{k}^{L}(v) should satisfy the conditions

h_{k}^{L}(-\infty)=0,\qquad h_{k}^{L}(V_{R}^{-})=0. (3.17)

Therefore, we get a_{1}=-1.
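As a sketch (the helper names are ours; \alpha=0 as in (3.15)), the left-interval basis functions h_{k}^{L} can be evaluated directly from the three-term recurrence (3.14) after the change of variable x=-v+V_{R}:

import numpy as np

def laguerre_hat(n, x, alpha=0.0):
    """Generalized Laguerre functions \\hat{ell}_n^{(alpha)}(x) via the
    three-term recurrence (3.14)."""
    x = np.asarray(x, dtype=float)
    l_prev = np.exp(-x / 2.0)                       # \hat{ell}_0
    if n == 0:
        return l_prev
    l_curr = (alpha + 1.0 - x) * np.exp(-x / 2.0)   # \hat{ell}_1
    for k in range(1, n):
        l_next = ((2*k + alpha + 1.0 - x) * l_curr - (k + alpha) * l_prev) / (k + 1.0)
        l_prev, l_curr = l_curr, l_next
    return l_curr

def h_L(k, v, V_R=1.0):
    """Basis on (-inf, V_R): h_k^L(v) = \\hat{ell}_k(-v+V_R) - \\hat{ell}_{k+1}(-v+V_R),
    i.e. (3.15) with a_1 = -1, so that h_k^L(V_R) = 0."""
    x = -np.asarray(v, dtype=float) + V_R
    return laguerre_hat(k, x) - laguerre_hat(k + 1, x)

print(h_L(3, [1.0, 0.0, -5.0]))   # first entry (v = V_R) is 0 by construction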

For the bounded region on the right side of V_{R}, we employ basis functions that differ from those used in the left interval (-\infty,V_{R}). Specifically, Legendre polynomials L_{k} are chosen to construct the basis functions on the interval (V_{R},V_{F}). Since the domain of the Legendre polynomials is [-1,1], the coordinate transformation

x(v)=\frac{v-\frac{V_{F}+V_{R}}{2}}{\frac{V_{F}-V_{R}}{2}},\quad v\in(V_{R},V_{F}) (3.18)

is necessary. The basis functions in the interval (V_{R},V_{F}) are then given by

h_{k}^{R}(v)=\mathcal{H}_{k}(v)+b_{1}\mathcal{H}_{k+1}(v)+b_{2}\mathcal{H}_{k+2}(v), (3.19)

where

\mathcal{H}_{k}(v)=L_{k}(x(v)). (3.20)

Similarly, from (3.9) and (3.10), we obtain

h_{k}^{R}(V_{R}^{+})=0,\qquad h_{k}^{R}(V_{F})=0. (3.21)

By substituting (3.21) into (3.19), we get b_{1}=0 and b_{2}=-1.
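Similarly, a minimal sketch of the right-interval basis (3.19) with b_{1}=0 and b_{2}=-1, using the affine map (3.18) and numpy's Legendre evaluation (the helper names and sample endpoint values are our own assumptions):

import numpy as np
from numpy.polynomial import legendre

def h_R(k, v, V_R=1.0, V_F=2.0):
    """Basis on (V_R, V_F): h_k^R(v) = L_k(x(v)) - L_{k+2}(x(v)),
    with x(v) the affine map (3.18) from (V_R, V_F) onto (-1, 1)."""
    v = np.asarray(v, dtype=float)
    x = (v - 0.5 * (V_F + V_R)) / (0.5 * (V_F - V_R))
    ck = np.zeros(k + 3); ck[k] = 1.0           # coefficients selecting L_k
    ck2 = np.zeros(k + 3); ck2[k + 2] = 1.0     # coefficients selecting L_{k+2}
    return legendre.legval(x, ck) - legendre.legval(x, ck2)

# h_k^R vanishes at both endpoints since L_k(±1) - L_{k+2}(±1) = (±1)^k - (±1)^{k+2} = 0
print(h_R(2, [1.0, 2.0, 1.5]))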

In conclusion, we have designed a suitable set of basis functions for the trial function space W as follows

\left\{\psi_{k}\right\}^{\infty}_{k=1}=\left\{g_{1},h_{0}^{L},h_{1}^{L},\cdots,h_{M-1}^{L},\cdots,h_{0}^{R},h_{1}^{R},\cdots,h_{M-1}^{R},\cdots\right\}. (3.22)

Then the solution of (3.1) can be written as

p(v,t)=\sum_{k=1}^{\infty}u_{k}(t)\psi_{k}(v). (3.23)

3.2 Fully discrete scheme

In this section, we introduce a Laguerre-Legendre spectral-Galerkin method (LLSGM) for the spatial discretization of the Fokker-Planck equation (1.1), transforming it into a system of ordinary differential equations (ODEs) in time. We then derive the fully discrete scheme for the weak formulation in two steps. First, we construct a finite-dimensional trial function space and apply the spectral method for spatial discretization. In the second step, a semi-implicit method is used for temporal discretization, resulting in a nonlinear fully discrete scheme.

To obtain a finite-dimensional approximation of the trial function space W, we denote the approximation space W_{M} as

W_{M}=W_{1}+W_{2}^{M}, (3.24)

where W_{1} is the one-dimensional space above and W_{2}^{M} is the truncation of W_{2}. From (3.9) and (3.10), W_{2}^{M} takes the form

W_{2}^{M}=\hat{X}_{M(-\infty,V_{R})}+X_{M(V_{R},V_{F})}, (3.25)

where

\hat{X}_{M(-\infty,V_{R})}=\big\{\varphi\in\hat{P}_{M}(-\infty,V_{R}):\varphi(-\infty)=\varphi(V_{R})=0\big\}, (3.26)
X_{M(V_{R},V_{F})}=\big\{\varphi\in P_{M}(V_{R},V_{F}):\varphi(V_{R})=\varphi(V_{F})=0\big\}.

The notation \hat{P}_{M}(-\infty,V_{R}) represents

\hat{P}_{M}(-\infty,V_{R})=\left\{e^{-\frac{-v+V_{R}}{2}}f~\big|~f\in P_{M}(-\infty,V_{R})\right\} (3.27)

and P_{M}(a,b) denotes the set of all real polynomials of degree at most M on the interval (a,b). In W_{1}, a single basis function g_{1}(v) satisfies the boundary conditions (3.5). Therefore, the dimension of the approximation space W_{M} is 2M+1, and the numerical solution p_{M}(v,t) can be expressed as

p_{M}(v,t)=\sum_{k=1}^{2M+1}\hat{u}_{k}(t)\psi_{k}(v), (3.28)

where

\left\{\psi_{k}\right\}^{2M+1}_{k=1}=\left\{g_{1},h_{0}^{L},h_{1}^{L},\cdots,h_{M-1}^{L},h_{0}^{R},h_{1}^{R},\cdots,h_{M-1}^{R}\right\}. (3.29)

According to the spectral-Galerkin method in [26], the variational formulation obtained from (3.1) reads

\begin{cases}\text{Find $p_{M}(v,t)\in W_{M}$ such that}\\ \mathcal{B}(p_{M},\phi)+a(N(t))\partial_{v}p_{M}(V_{F},t)\left[\phi(V_{R})-\phi(V_{F})\right]=0,\quad\forall\phi\in W_{M},\end{cases} (3.30)

where W_{M} is a subspace of W. Substituting the numerical solution (3.28) into the variational formulation (3.30), we obtain the semi-discrete form

H\frac{\mathrm{d}\hat{\boldsymbol{u}}}{\mathrm{d}t}+A\hat{\boldsymbol{u}}-bN(t)B\hat{\boldsymbol{u}}+a(N(t))C\hat{\boldsymbol{u}}+a(N(t))D\hat{\boldsymbol{u}}=0, (3.31)

where

\begin{split}&\hat{\boldsymbol{u}}=\big(\hat{u}_{1}(t),\hat{u}_{2}(t),\hat{u}_{3}(t),\ldots,\hat{u}_{2M+1}(t)\big)^{T},\\ &H_{jk}=\int_{-\infty}^{V_{F}}\psi_{k}\psi_{j}\,\mathrm{d}v,\quad A_{jk}=\int_{-\infty}^{V_{F}}v\psi_{k}\partial_{v}\psi_{j}\,\mathrm{d}v,\\ &B_{jk}=\int_{-\infty}^{V_{F}}\psi_{k}\partial_{v}\psi_{j}\,\mathrm{d}v,\quad C_{jk}=\int_{-\infty}^{V_{F}}\partial_{v}\psi_{k}\partial_{v}\psi_{j}\,\mathrm{d}v,\\ &D_{jk}=\partial_{v}\psi_{k}(V_{F})\left[\psi_{j}(V_{R})-\psi_{j}(V_{F})\right],\quad j,k=1,2,\cdots,2M+1.\end{split} (3.32)
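As an illustration of how these matrices can be assembled in practice, the following minimal sketch computes the mass matrix H by quadrature on the two subintervals, for a tiny stand-in basis (the function names, the sample values of V_{R}, V_{F}, \beta, and the truncation to two basis functions are our own assumptions, not part of the scheme). The matrices A, B, C are assembled analogously with v\psi_{k}, \partial_{v}\psi_{j}, etc., in the integrands, and D only involves boundary values of \psi and \partial_{v}\psi.

import numpy as np
from scipy.integrate import quad

V_R, V_F, beta = 1.0, 2.0, 1.0             # assumed sample values

def g1(v):                                  # the W_1 basis function (3.6)
    return np.exp(-beta*(V_R - v)/2.0) if v < V_R else (v - V_F)/(V_R - V_F)

def hL0(v):                                 # h_0^L from (3.15) with a_1 = -1
    if v >= V_R:
        return 0.0
    x = -v + V_R
    return np.exp(-x/2.0) - (1.0 - x)*np.exp(-x/2.0)

basis = [g1, hL0]                           # tiny truncation, only for illustration

def mass_entry(psi_k, psi_j):
    """H_jk = int_{-inf}^{V_F} psi_k psi_j dv, split at V_R as in (3.9)."""
    left, _ = quad(lambda v: psi_k(v)*psi_j(v), -np.inf, V_R)
    right, _ = quad(lambda v: psi_k(v)*psi_j(v), V_R, V_F)
    return left + right

H = np.array([[mass_entry(pk, pj) for pk in basis] for pj in basis])
print(H)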

The mean firing rate N(t) is obtained from

N(t)=-a(N(t))\sum_{k=1}^{2M+1}\hat{u}_{k}(t)\partial_{v}\psi_{k}(V_{F}). (3.33)

The initial coefficients can be determined by the projection method,

\int_{-\infty}^{V_{F}}\sum_{k=1}^{2M+1}\hat{u}_{k}(0)\psi_{k}(v)\psi_{j}(v)\mathrm{d}v=\int_{-\infty}^{V_{F}}p^{0}(v)\psi_{j}(v)\mathrm{d}v,\quad j=1,2,\cdots,2M+1. (3.34)
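In matrix form, (3.34) is simply the linear system H\hat{\boldsymbol{u}}(0)=\boldsymbol{b} with b_{j}=\int_{-\infty}^{V_{F}}p^{0}\psi_{j}\,\mathrm{d}v. A minimal sketch, assuming the mass matrix H has already been assembled (e.g. by the quadrature above) and p0 is a callable for the initial density, is:

import numpy as np
from scipy.integrate import quad

def initial_coefficients(basis, p0, H, V_R, V_F):
    """Projection (3.34): solve H u0 = b with b_j = int_{-inf}^{V_F} p0 psi_j dv."""
    b = np.array([quad(lambda v: p0(v)*psi(v), -np.inf, V_R)[0]
                  + quad(lambda v: p0(v)*psi(v), V_R, V_F)[0] for psi in basis])
    return np.linalg.solve(H, b)

Since the basis functions are linearly independent, H is symmetric positive definite, so a Cholesky solve could equally be used.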

To achieve a balance between stability and efficiency in temporal discretization, we employ a semi-implicit scheme, a technique well-suited for spectral methods. The linear part of the equation is treated implicitly to enhance stability, while the nonlinear part is treated explicitly to avoid the complexity of solving nonlinear equations. Then the fully discretized form of the variational formulation (3.30) is as follows

\left(\frac{1}{\Delta t}H+A-bN^{n}B+a^{n}C+a^{n}D\right)\hat{\boldsymbol{u}}^{n+1}=\frac{1}{\Delta t}H\hat{\boldsymbol{u}}^{n}, (3.35)

where

N^{n}=-a(N^{n})\sum_{k=1}^{2M+1}\hat{u}^{n}_{k}\partial_{v}\psi_{k}(V_{F}). (3.36)
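For concreteness, one time step of (3.35)-(3.36) can be sketched as follows. The matrices H, A, B, C, D are assumed to have been assembled from (3.32), dpsi_VF stores the values \partial_{v}\psi_{k}(V_{F}), and N^{n} is recovered from (3.36) in closed form, N^{n}=-a_{0}s/(1+a_{1}s) with s=\sum_{k}\hat{u}^{n}_{k}\partial_{v}\psi_{k}(V_{F}) (the same algebra as in (3.56) below); all variable names are ours.

import numpy as np

def semi_implicit_step(u, H, A, B, C, D, dpsi_VF, dt, b, a0, a1):
    """Advance the coefficient vector u = \\hat{u}^n by one step of (3.35)."""
    s = dpsi_VF @ u                      # sum_k u_k^n d/dv psi_k(V_F)
    N = -a0 * s / (1.0 + a1 * s)         # mean firing rate N^n from (3.36)
    a = a0 + a1 * N                      # diffusion coefficient a(N^n)
    lhs = H / dt + A - b * N * B + a * C + a * D
    rhs = H @ u / dt
    return np.linalg.solve(lhs, rhs), N

Because the nonlinear quantities N^{n} and a(N^{n}) are frozen at the previous time level, each step only requires one linear solve with a matrix of size (2M+1)\times(2M+1).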

3.3 Consistency of the numerical scheme

In the preceding subsections, we derived the variational formulation and developed a numerical scheme for the Fokker-Planck equation (1.1). Note that the exact solution, truncated to the same order, does not satisfy the numerical scheme (3.30). However, we shall show that it satisfies (3.30) in the limit as the dimension of the approximation space goes to infinity.

It is clear that

p(v,t)=\sum_{k=1}^{\infty}u_{k}(t)\psi_{k}(v) (3.37)

satisfies the variational formulation (3.1). Define

\mathcal{L}(p,\phi)=\mathcal{B}(p,\phi)+a(N(t))\partial_{v}p(V_{F},t)\left[\phi(V_{R})-\phi(V_{F})\right], (3.38)

we have

\mathcal{L}(p,\phi)=0,\quad\forall\phi\in W. (3.39)

From the variational formulation (3.30), we have

\mathcal{L}(p_{M},\phi)=0,\quad\forall\phi\in W_{M}, (3.40)

where

p_{M}=\sum_{k=1}^{2M+1}\hat{u}_{k}(t)\psi_{k}(v). (3.41)

Obviously, if we truncate the exact solution p(v,t) in (3.37) to

\tilde{p}_{M}=\sum_{k=1}^{2M+1}u_{k}(t)\psi_{k}(v), (3.42)

then

p(v,t)=\sum_{k=1}^{\infty}u_{k}(t)\psi_{k}(v)=\tilde{p}_{M}+\sum_{k=2M+2}^{\infty}u_{k}(t)\psi_{k}(v). (3.43)

It is straightforward to see that \tilde{p}_{M} does not satisfy the variational formulation (3.30). Specifically, the coefficients u_{k} of \tilde{p}_{M} are the same as those in (3.37) for the first 2M+1 terms, but differ from the coefficients \hat{u}_{k} calculated from the variational formulation (3.30), as shown in (3.28). Therefore,

\mathcal{L}(\tilde{p}_{M},\phi)\neq 0,\quad\forall\phi\in W_{M}. (3.44)

However, we claim that \tilde{p}_{M} satisfies the variational formulation (3.30) in the limit M\to+\infty. Namely, for any fixed M_{1}>0, any test function \phi_{i}\in W_{M_{1}}, and any M>M_{1},

\mathcal{L}(\tilde{p}_{M},\phi_{i})=\mathcal{B}(\tilde{p}_{M},\phi_{i})+\tilde{a}(\tilde{N}(t))\partial_{v}\tilde{p}_{M}(V_{F},t)\left[\phi_{i}(V_{R})-\phi_{i}(V_{F})\right]\to 0, (3.45)

where

\begin{cases}\tilde{N}(t)=-\tilde{a}(\tilde{N}(t))\partial_{v}\tilde{p}_{M}(V_{F},t),\\ \tilde{h}(v,\tilde{N}(t))=-v+b\tilde{N}(t),\\ \tilde{a}(\tilde{N}(t))=a_{0}+a_{1}\tilde{N}(t).\end{cases} (3.46)

Let us denote the above assertion as the consistency of the numerical scheme (3.30). Consequently, we can summarize the above discussion as the following theorem.

Theorem 3.1 (Consistency).

Assume that

\lim_{M\to+\infty}\left\|p-\tilde{p}_{M}\right\|_{L^{2}(-\infty,V_{F}]}=0,\qquad\lim_{M\to+\infty}\left\|\partial_{t}(p-\tilde{p}_{M})\right\|_{L^{2}(-\infty,V_{F}]}=0,\qquad\lim_{M\to+\infty}\left\|\partial_{v}(p-\tilde{p}_{M})\right\|_{L^{\infty}(V_{R},V_{F}]}=0, (3.47)

then the numerical scheme (3.30) is consistent with the differential equation (1.1), that is, (3.45)-(3.46) hold as M\to\infty.

Proof.

Denoting \sum_{k=2M+2}^{\infty}u_{k}(t)\psi_{k}(v) by R_{M}(v,t), we rewrite p(v,t) as

p(v,t)=\tilde{p}_{M}(v,t)+R_{M}(v,t). (3.48)

Here, our goal is to show

\lim_{M\to+\infty}\mathcal{L}(\tilde{p}_{M},\phi_{i})=0,\quad\phi_{i}\in W_{M_{1}}. (3.49)

First, we observe that

\mathcal{L}(p,\phi_{i})=0,\quad\phi_{i}\in W_{M_{1}}, (3.50)

since W_{M_{1}}\subseteq W in the variational formulation (3.1). Subtracting this identity, we get

\begin{split}\mathcal{L}(\tilde{p}_{M},\phi_{i})=&\ \mathcal{L}(\tilde{p}_{M},\phi_{i})-\mathcal{L}(p,\phi_{i})\\ =&\int_{-\infty}^{V_{F}}\partial_{t}(\tilde{p}_{M}-p)\phi_{i}+(hp-\tilde{h}\tilde{p}_{M})\partial_{v}\phi_{i}+(\tilde{a}\partial_{v}\tilde{p}_{M}-a\partial_{v}p)\partial_{v}\phi_{i}\,\mathrm{d}v\\ &+\big[\tilde{a}\partial_{v}\tilde{p}_{M}(V_{F},t)-a\partial_{v}p(V_{F},t)\big]\big[\phi_{i}(V_{R})-\phi_{i}(V_{F})\big].\end{split} (3.51)

To facilitate the comparison of the terms in (3.51), we introduce intermediate terms as follows

\begin{split}\mathcal{L}(\tilde{p}_{M},\phi_{i})=&\int_{-\infty}^{V_{F}}\partial_{t}(\tilde{p}_{M}-p)\phi_{i}\,\mathrm{d}v+\int_{-\infty}^{V_{F}}\big(hp-\tilde{h}p+\tilde{h}p-\tilde{h}\tilde{p}_{M}\big)\partial_{v}\phi_{i}\,\mathrm{d}v\\ &+\int_{-\infty}^{V_{F}}\big(\tilde{a}\partial_{v}\tilde{p}_{M}-\tilde{a}\partial_{v}p+\tilde{a}\partial_{v}p-a\partial_{v}p\big)\partial_{v}\phi_{i}\,\mathrm{d}v\\ &+\big[\tilde{a}\partial_{v}\tilde{p}_{M}(V_{F},t)-\tilde{a}\partial_{v}p(V_{F},t)+\tilde{a}\partial_{v}p(V_{F},t)-a\partial_{v}p(V_{F},t)\big]\left[\phi_{i}(V_{R})-\phi_{i}(V_{F})\right].\end{split} (3.52)

After simplification, we group the consistency error terms and obtain

\mathcal{L}(\tilde{p}_{M},\phi_{i})=\sum_{k=1}^{7}I_{k}, (3.53)

where

\begin{split}I_{1}&=\int_{-\infty}^{V_{F}}\partial_{t}(\tilde{p}_{M}-p)\phi_{i}\,\mathrm{d}v,\quad I_{2}=\int_{-\infty}^{V_{F}}(h-\tilde{h})p\,\partial_{v}\phi_{i}\,\mathrm{d}v,\quad I_{3}=\int_{-\infty}^{V_{F}}\tilde{h}(p-\tilde{p}_{M})\partial_{v}\phi_{i}\,\mathrm{d}v,\\ I_{4}&=\int_{-\infty}^{V_{F}}\tilde{a}(\partial_{v}\tilde{p}_{M}-\partial_{v}p)\partial_{v}\phi_{i}\,\mathrm{d}v,\quad I_{5}=\int_{-\infty}^{V_{F}}(\tilde{a}-a)\partial_{v}p\,\partial_{v}\phi_{i}\,\mathrm{d}v,\\ I_{6}&=\tilde{a}\big[\partial_{v}\tilde{p}_{M}(V_{F},t)-\partial_{v}p(V_{F},t)\big]\left[\phi_{i}(V_{R})-\phi_{i}(V_{F})\right],\\ I_{7}&=(\tilde{a}-a)\partial_{v}p(V_{F},t)\left[\phi_{i}(V_{R})-\phi_{i}(V_{F})\right].\end{split} (3.54)

Since

N(t)=-a\partial_{v}p(V_{F},t)=-\big(a_{0}+a_{1}N(t)\big)\partial_{v}p(V_{F},t), (3.55)

we have

N(t)=\frac{-a_{0}\partial_{v}p(V_{F},t)}{1+a_{1}\partial_{v}p(V_{F},t)}. (3.56)

Therefore, defining

\tilde{N}(t)=\frac{-a_{0}\partial_{v}\tilde{p}_{M}(V_{F},t)}{1+a_{1}\partial_{v}\tilde{p}_{M}(V_{F},t)}, (3.57)

it holds that

\begin{split}h-\tilde{h}&=\frac{ba_{0}\big(\partial_{v}\tilde{p}_{M}(V_{F},t)-\partial_{v}p(V_{F},t)\big)}{\big(1+a_{1}\partial_{v}p(V_{F},t)\big)\big(1+a_{1}\partial_{v}\tilde{p}_{M}(V_{F},t)\big)},\\ \tilde{a}-a&=\frac{a_{1}a_{0}\big(\partial_{v}p(V_{F},t)-\partial_{v}\tilde{p}_{M}(V_{F},t)\big)}{\big(1+a_{1}\partial_{v}p(V_{F},t)\big)\big(1+a_{1}\partial_{v}\tilde{p}_{M}(V_{F},t)\big)}.\end{split} (3.58)

Hence, from the error estimation (3.47), there exist constants M_{0}>0 and M_{0}^{\prime}>0 such that

\left\|h-\tilde{h}\right\|_{L^{\infty}(V_{R},V_{F}]}\leq M_{0}\left\|\partial_{v}(\tilde{p}_{M}-p)\right\|_{L^{\infty}(V_{R},V_{F}]},\qquad\left\|\tilde{a}-a\right\|_{L^{\infty}(V_{R},V_{F}]}\leq M^{\prime}_{0}\left\|\partial_{v}(p-\tilde{p}_{M})\right\|_{L^{\infty}(V_{R},V_{F}]}. (3.59)

Namely,

\lim_{M\to+\infty}\left\|h-\tilde{h}\right\|_{L^{\infty}(V_{R},V_{F}]}=0,\qquad\lim_{M\to+\infty}\left\|\tilde{a}-a\right\|_{L^{\infty}(V_{R},V_{F}]}=0. (3.60)

Hence, it is easy to prove that

\begin{split}|I_{6}|&\leq|\tilde{a}|\,\big|\phi_{i}(V_{R})-\phi_{i}(V_{F})\big|\,\big|\partial_{v}(\tilde{p}_{M}-p)(V_{F},t)\big|\leq|\tilde{a}|\,\big|\phi_{i}(V_{R})-\phi_{i}(V_{F})\big|\,\left\|\partial_{v}(\tilde{p}_{M}-p)\right\|_{L^{\infty}(V_{R},V_{F}]},\\ |I_{7}|&\leq\left\|\tilde{a}-a\right\|_{L^{\infty}(V_{R},V_{F}]}\,\big|\partial_{v}p(V_{F},t)\big|\,\big|\phi_{i}(V_{R})-\phi_{i}(V_{F})\big|,\end{split} (3.61)

which implies that

\lim_{M\to+\infty}|I_{6}|=0,\quad\lim_{M\to+\infty}|I_{7}|=0. (3.62)

From the definition of the weak solution, we know that \phi_{i}(v)\in H^{1}((-\infty,V_{F}]). From Hölder's inequality, we get

\begin{split}|I_{1}|&=\left|\int_{-\infty}^{V_{F}}\partial_{t}(\tilde{p}_{M}-p)\phi_{i}\,\mathrm{d}v\right|\leq\left(\int_{-\infty}^{V_{F}}\big(\partial_{t}(\tilde{p}_{M}-p)\big)^{2}\mathrm{d}v\right)^{\frac{1}{2}}\left(\int_{-\infty}^{V_{F}}\phi_{i}^{2}\,\mathrm{d}v\right)^{\frac{1}{2}}\\ &=\left\|\partial_{t}(\tilde{p}_{M}-p)\right\|_{L^{2}(-\infty,V_{F}]}\left(\int_{-\infty}^{V_{F}}\phi_{i}^{2}\,\mathrm{d}v\right)^{\frac{1}{2}}.\end{split} (3.63)

Therefore, we obtain

\lim_{M\to+\infty}|I_{1}|=0. (3.64)

Since \phi_{i}(v)\in H^{1}((-\infty,V_{F}]), we have \partial_{v}\phi_{i}\in L^{2}((-\infty,V_{F}]). Then, from (3.60) and Hölder's inequality, we obtain

|I_{2}|=\left|\int_{-\infty}^{V_{F}}\big[(h-\tilde{h})p\big]\partial_{v}\phi_{i}\,\mathrm{d}v\right|\leq\left\|h-\tilde{h}\right\|_{L^{\infty}(V_{R},V_{F}]}\int_{-\infty}^{V_{F}}\big|p\,\partial_{v}\phi_{i}\big|\,\mathrm{d}v (3.65)

and

\lim_{M\to+\infty}|I_{2}|=0. (3.66)

From the definition of the weak solution, we know that v\partial_{v}\phi_{i}\in H^{1}((-\infty,V_{F}]). Therefore, applying Hölder's inequality, we get

\begin{split}|I_{3}|&=\left|\int_{-\infty}^{V_{F}}\tilde{h}(p-\tilde{p}_{M})\partial_{v}\phi_{i}\,\mathrm{d}v\right|=\left|\int_{-\infty}^{V_{F}}\left(-v\partial_{v}\phi_{i}+b\tilde{N}(t)\partial_{v}\phi_{i}\right)(p-\tilde{p}_{M})\,\mathrm{d}v\right|\\ &\leq\left(\int_{-\infty}^{V_{F}}\left(-v\partial_{v}\phi_{i}+b\tilde{N}(t)\partial_{v}\phi_{i}\right)^{2}\mathrm{d}v\right)^{\frac{1}{2}}\left(\int_{-\infty}^{V_{F}}(p-\tilde{p}_{M})^{2}\,\mathrm{d}v\right)^{\frac{1}{2}},\end{split} (3.67)

which can be written as

|I_{3}|\leq\left(\int_{-\infty}^{V_{F}}\left(-v\partial_{v}\phi_{i}+b\tilde{N}(t)\partial_{v}\phi_{i}\right)^{2}\mathrm{d}v\right)^{\frac{1}{2}}\left\|p-\tilde{p}_{M}\right\|_{L^{2}(-\infty,V_{F}]}. (3.68)

Thus, we have

\lim_{M\to+\infty}|I_{3}|=0. (3.69)

Similarly, we can obtain

\begin{split}|I_{4}|&=\left|\int_{-\infty}^{V_{F}}\tilde{a}(\partial_{v}\tilde{p}_{M}-\partial_{v}p)\partial_{v}\phi_{i}\,\mathrm{d}v\right|=\left|\int_{-\infty}^{V_{F}}(a_{0}+a_{1}\tilde{N}(t))\partial_{v}(\tilde{p}_{M}-p)\partial_{v}\phi_{i}\,\mathrm{d}v\right|\\ &\leq\left(\int_{-\infty}^{V_{F}}\left((a_{0}+a_{1}\tilde{N}(t))\partial_{v}\phi_{i}\right)^{2}\mathrm{d}v\right)^{\frac{1}{2}}\left\|\partial_{v}(\tilde{p}_{M}-p)\right\|_{L^{2}(-\infty,V_{F}]},\\ |I_{5}|&=\left|\int_{-\infty}^{V_{F}}(\tilde{a}-a)\partial_{v}p\,\partial_{v}\phi_{i}\,\mathrm{d}v\right|=|\tilde{a}-a|\left|\int_{-\infty}^{V_{F}}\partial_{v}p\,\partial_{v}\phi_{i}\,\mathrm{d}v\right|\leq\left\|\tilde{a}-a\right\|_{L^{\infty}(V_{R},V_{F}]}\int_{-\infty}^{V_{F}}\big|\partial_{v}p\,\partial_{v}\phi_{i}\big|\,\mathrm{d}v.\end{split} (3.70)

Therefore, we get

\lim_{M\to+\infty}|I_{4}|=0,\quad\lim_{M\to+\infty}|I_{5}|=0. (3.71)

In conclusion, we have

\lim_{M\to+\infty}I_{k}=0,\quad k=1,2,\cdots,7. (3.72)

Accordingly, substituting (3.72) into (3.53), we obtain

\lim_{M\to+\infty}\mathcal{L}(\tilde{p}_{M},\phi_{i})=\lim_{M\to+\infty}\sum_{k=1}^{7}I_{k}=0. (3.73)

Namely, the numerical scheme (3.30) is consistent with the Fokker-Planck equation (1.1). The proof is completed.

4 Model extensions

In the previous section, we introduced a fully discrete numerical scheme for the Fokker-Planck equation (1.1). Thanks to the flexibility of the LLSGM, this method is readily applicable to more complex models. In this section, we consider a more realistic NNLIF model (see e.g. [8]), featuring two populations, synaptic delays, and refractory periods, and its numerical simulation is more challenging. To verify the flexibility of the proposed numerical method, we explore the extended models from a numerical perspective.

4.1 Model introduction

In the one-population model, we examine the simplest scenario in which all neurons are considered either excitatory or inhibitory. However, in large-scale neuron networks, the dynamical behavior becomes more intricate due to the interplay between two distinct populations of neurons, excitatory and inhibitory. To accurately capture the dynamics of such a neuron network, the model (1.1) has been extended to incorporate both an excitatory population (E-P) and an inhibitory population (I-P). The extended system comprises two probability density functions, p_E(v,t) for the excitatory population and p_I(v,t) for the inhibitory population, each governed by a Fokker-Planck equation [7] analogous to (1.1), as follows

\partial_{t}p_{\alpha}+\partial_{v}\big[h^{\alpha}\big(v,N_{E}(t),N_{I}(t)\big)p_{\alpha}(v,t)\big]-a_{\alpha}\big(N_{E}(t),N_{I}(t)\big)\partial_{vv}p_{\alpha}(v,t)=N_{\alpha}(t)\delta(v-V_{R}), (4.1)

where

N_{\alpha}(t)=-a_{\alpha}\left(N_{E}(t),N_{I}(t)\right)\partial_{v}p_{\alpha}\left(V_{F},t\right),\quad\alpha=E,I, (4.2)

is the mean firing rate of population \alpha. The drift and diffusion coefficients are

\begin{split}h^{\alpha}\left(v,N_{E},N_{I}\right)&=-v+b_{E}^{\alpha}N_{E}-b_{I}^{\alpha}N_{I}+\left(b_{E}^{\alpha}-b_{E}^{E}\right)\nu_{E,\text{ext}},\\ a_{\alpha}\left(N_{E},N_{I}\right)&=\mathrm{d}_{E}^{\alpha}\nu_{E,\text{ext}}+\mathrm{d}_{E}^{\alpha}N_{E}+\mathrm{d}_{I}^{\alpha}N_{I},\end{split} (4.3)

where \nu_{E,\text{ext}}\geq 0 describes the external inputs originating from excitatory neurons. The synaptic strength between the two populations is governed by the connectivity parameters b^{\alpha}_{\beta}\geq 0 and d^{\alpha}_{\beta}\geq 0, \alpha,\beta=E,I, which correspond to the parameters b and a_{1} in (1.1), respectively. Specifically, these parameters describe the strength of synaptic connections from neurons in population \beta to neurons in population \alpha. In contrast, the connectivity parameter b in the one-population model primarily represents the interactions between neurons within the same population, without distinguishing between excitatory and inhibitory neurons.

In addition to the interactions between the two populations, synaptic delays, which describe the transmission delay of a spike, are introduced to enhance the realism of the model. When a neuron in population \beta fires, its effect on neurons in population \alpha is delayed by a constant time interval D^{\alpha}_{\beta}. The Fokker-Planck equations with synaptic delays are expressed as

\partial_{t}p_{\alpha}+\partial_{v}\big[h^{\alpha}\big(v,N_{E}(t-D_{E}^{\alpha}),N_{I}(t-D_{I}^{\alpha})\big)p_{\alpha}\big]-a_{\alpha}\big(N_{E}(t-D_{E}^{\alpha}),N_{I}(t-D_{I}^{\alpha})\big)\partial_{vv}p_{\alpha}=N_{\alpha}(t)\delta(v-V_{R}). (4.4)

After firing, neurons enter a refractory period, during which they are temporarily unresponsive to inputs. Over time, neurons gradually recover from the refractory state and become responsive to inputs again. The mean recovery rate from the refractory period is denoted by M_{\alpha}(t). This mechanism is modeled by the following differential equation for the probability R_{\alpha} of neurons being in the refractory state,

\frac{\mathrm{d}R_{\alpha}(t)}{\mathrm{d}t}=N_{\alpha}(t)-M_{\alpha}(t). (4.5)

There are multiple choices for M_{\alpha}(t) [1, 5]. In this section, we consider

M_{\alpha}(t)=\frac{R_{\alpha}(t)}{\tau_{\alpha}}, (4.6)

where \tau_{\alpha} measures the mean duration of the refractory period. Therefore, the corresponding Fokker-Planck equation, which includes the recovery mechanism, is given by

\partial_{t}p_{\alpha}+\partial_{v}\big[h^{\alpha}\big(v,N_{E}(t-D_{E}^{\alpha}),N_{I}(t-D_{I}^{\alpha})\big)p_{\alpha}\big]-a_{\alpha}\big(N_{E}(t-D_{E}^{\alpha}),N_{I}(t-D_{I}^{\alpha})\big)\partial_{vv}p_{\alpha}=M_{\alpha}(t)\delta(v-V_{R}). (4.7)

The source term on the right-hand side reflects the fact that, after a neuron fires, its membrane potential is immediately reset to V_R and the neuron stays in the refractory period for some duration.

Compared to the previous NNLIF model (1.1), this mesoscopic model includes two Fokker-Planck equations that govern the evolution of the probability density functions p_{\alpha}(v,t), along with two ordinary differential equations (ODEs) that describe the evolution of the refractory states R_{\alpha} of the neurons. By posing the same equation on (-\infty,V_{R})\cup(V_{R},V_{F}] and incorporating the jump conditions, Dirichlet boundary conditions, and initial conditions, we obtain the nonlinear system [10]

\begin{cases}\partial_{t}p_{\alpha}+\partial_{v}\big[h^{\alpha}\big(v,N_{E}(t-D_{E}^{\alpha}),N_{I}(t-D_{I}^{\alpha})\big)p_{\alpha}\big]-a_{\alpha}\big(N_{E}(t-D_{E}^{\alpha}),N_{I}(t-D_{I}^{\alpha})\big)\partial_{vv}p_{\alpha}=0,\\ \dfrac{\mathrm{d}R_{\alpha}(t)}{\mathrm{d}t}=N_{\alpha}(t)-M_{\alpha}(t),\\ N_{\alpha}(t)=-a_{\alpha}\big(N_{E}(t-D_{E}^{\alpha}),N_{I}(t-D_{I}^{\alpha})\big)\partial_{v}p_{\alpha}(V_{F},t),\\ p_{\alpha}(v,0)=p^{0}_{\alpha}(v)\geq 0,\quad R_{\alpha}(0)=R^{0}_{\alpha}\geq 0,\\ p_{\alpha}(-\infty,t)=p_{\alpha}(V_{F},t)=0,\quad p_{\alpha}(V_{R}^{-},t)=p_{\alpha}(V_{R}^{+},t),\quad\partial_{v}p_{\alpha}(V_{R}^{-},t)-\partial_{v}p_{\alpha}(V_{R}^{+},t)=\dfrac{M_{\alpha}(t)}{a_{\alpha}(N_{E}(t),N_{I}(t))}.\end{cases} (4.8)

Since the number of neurons remains unchanged, the total mass should be conserved, as shown below

\int_{-\infty}^{V_{F}}p_{\alpha}(v,t)\mathrm{d}v+R_{\alpha}(t)=\int_{-\infty}^{V_{F}}p_{\alpha}^{0}(v)\mathrm{d}v+R_{\alpha}^{0}=1,\quad\forall t\geq 0,\quad\alpha=E,I. (4.9)

Despite previous theoretical and numerical studies of this model [8, 10, 24], several properties remain unexplored. This study further investigates the periodic solution phenomena of this model.

4.2 Numerical scheme

We now elaborate the numerical scheme for (4.8), employing the spectral method to discretize space and a semi-implicit scheme for the time discretization of the Fokker-Planck equations. Concurrently, the ODEs are solved using the forward Euler method.

Thanks to the derivation in Sec. 2, the variational formulation of equation (4.8) can be naturally obtained as follows

\begin{cases}\text{Given $p^{0}_{\alpha}(v)\in W$, find $p_{\alpha}(v,t)\in W$, $\alpha=E,I$, such that}\\ \displaystyle\int_{-\infty}^{V_{F}}\Big(\partial_{t}p_{\alpha}(v,t)\phi(v)-h^{\alpha}p_{\alpha}(v,t)\partial_{v}\phi(v)+a_{\alpha}\partial_{v}p_{\alpha}(v,t)\partial_{v}\phi(v)\Big)\mathrm{d}v-a_{\alpha}\partial_{v}p_{\alpha}(V_{F},t)\phi(V_{F})-M_{\alpha}(t)\phi(V_{R})=0,\quad\forall\phi\in W,\\ \displaystyle\frac{\mathrm{d}R_{\alpha}(t)}{\mathrm{d}t}=N_{\alpha}(t)-M_{\alpha}(t),\\ p_{\alpha}(v,0)=p^{0}_{\alpha}(v),\quad R_{\alpha}(0)=R^{0}_{\alpha}.\end{cases} (4.10)

Here WW is the trial function space. Similar to the fully discrete scheme in Sec. 3.2, we first truncate the trial function space WW to obtain WMW_{M}, and then the numerical solution pα,Mp_{\alpha,M} is expanded as

p_{\alpha,M}(v,t)=\sum_{k=1}^{2M+1}u_{\alpha,k}(t)\psi_{k}(v),\quad\forall\psi_{k}\in W_{M}. \quad (4.11)

The initial condition for the expansion coefficients {uα,k(0)}k=12M+1\{u_{\alpha,k}(0)\}_{k=1}^{2M+1} can be obtained by the least-squares approximation

\int_{-\infty}^{V_{F}}\sum_{k=1}^{2M+1}u_{\alpha,k}(0)\psi_{k}(v)\psi_{j}(v)\mathrm{d}v=\int_{-\infty}^{V_{F}}p^{0}_{\alpha}(v)\psi_{j}(v)\mathrm{d}v,\quad j=1,2,\cdots,2M+1. \quad (4.12)
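The linear system (4.12) is simply the L^2 projection of the initial datum onto the truncated space. A minimal sketch of this step is given below, assuming a generic basis and quadrature rule; the actual LLSGM basis functions and the associated quadrature are those constructed in Sec. 3.

import numpy as np

def initial_coefficients(p0, basis, nodes, weights):
    # p0: callable initial density; basis: list of 2M+1 callables psi_k (placeholders
    # for the LLSGM basis); nodes, weights: numpy arrays of a quadrature rule on (-infty, V_F].
    Psi = np.array([psi(nodes) for psi in basis])      # (2M+1) x n_quad evaluations
    mass = Psi @ (weights[:, None] * Psi.T)            # mass matrix: int psi_j psi_k dv
    rhs = Psi @ (weights * p0(nodes))                  # right-hand side of (4.12)
    return np.linalg.solve(mass, rhs)                  # coefficients u_{alpha,k}(0)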

Similarly, we choose the semi-implicit scheme in the time direction; namely, the nonlinear part is treated explicitly and the other parts are treated implicitly. Notice that the drift coefficients hαh^{\alpha} and diffusion coefficients aαa_{\alpha} in the model involve synaptic delay terms that depend on historical information, so we define the delayed time index

n_{\beta}^{\alpha}=\max\left(0,\,n-\frac{D_{\beta}^{\alpha}}{\Delta t}\right). \quad (4.13)

Consequently, one should adjust the parameters to ensure that nβαn_{\beta}^{\alpha} is an integer in numerical experiments. After time discretization, we obtain the fully discrete scheme as follows, for α=E,I\alpha=E,I,

\begin{cases}
\displaystyle\left(\frac{1}{\Delta t}H+A-V_{\alpha}(N_{E}^{n_{E}^{\alpha}},N_{I}^{n_{I}^{\alpha}})B+a_{\alpha}(N_{E}^{n_{E}^{\alpha}},N_{I}^{n_{I}^{\alpha}})C+a_{\alpha}(N_{E}^{n_{E}^{\alpha}},N_{I}^{n_{I}^{\alpha}})G\right)\boldsymbol{u}^{n+1}_{\alpha}=\frac{1}{\Delta t}H\boldsymbol{u}^{n}_{\alpha}+M^{n}_{\alpha}F,\\
R^{n+1}_{\alpha}=R^{n}_{\alpha}+\Delta t(N^{n}_{\alpha}-M^{n}_{\alpha}),
\end{cases} \quad (4.14)

where

\boldsymbol{u}^{n}_{\alpha}=\big(u_{\alpha,1}(t^{n}),u_{\alpha,2}(t^{n}),\cdots,u_{\alpha,2M+1}(t^{n})\big)^{T}, \quad (4.15)
R^{n}_{\alpha}=R_{\alpha}(t^{n}),
M_{\alpha}^{n}=\frac{R_{\alpha}^{n}}{\tau_{\alpha}},
G_{jk}=\partial_{v}\psi_{k}(V_{F})\,\psi_{j}(V_{F}),\quad j,k=1,2,\cdots,2M+1,
F=\big(\psi_{1}(V_{R}),\psi_{2}(V_{R}),\cdots,\psi_{2M+1}(V_{R})\big)^{T},

and the matrices H,A,B,CH,A,B,C are defined in (3.32). The mean firing rate can be obtained by solving the following equations

N^{n}_{E}=-a_{E}(N_{E}^{n_{E}^{\alpha}},N_{I}^{n_{I}^{\alpha}})\sum_{k=1}^{2M+1}u^{n}_{E,k}\partial_{v}\psi_{k}(V_{F}), \quad (4.16)
N^{n}_{I}=-a_{I}(N_{E}^{n_{E}^{\alpha}},N_{I}^{n_{I}^{\alpha}})\sum_{k=1}^{2M+1}u^{n}_{I,k}\partial_{v}\psi_{k}(V_{F}).

The synaptic delay terms in (4.14) are mainly reflected in Vα,aαV_{\alpha},a_{\alpha} as

V_{\alpha}(N_{E}^{n_{E}^{\alpha}},N_{I}^{n_{I}^{\alpha}})=b^{\alpha}_{E}N_{E}^{n_{E}^{\alpha}}-b^{\alpha}_{I}N_{I}^{n_{I}^{\alpha}}+(b^{\alpha}_{E}-b^{E}_{E})\nu_{E,\text{ext}}, \quad (4.17)
a_{\alpha}(N_{E}^{n_{E}^{\alpha}},N_{I}^{n_{I}^{\alpha}})=\mathrm{d}_{E}^{\alpha}\nu_{E,\text{ext}}+\mathrm{d}_{E}^{\alpha}N_{E}^{n_{E}^{\alpha}}+\mathrm{d}_{I}^{\alpha}N_{I}^{n_{I}^{\alpha}}.
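For concreteness, one time step of (4.14) for a single population can be organized as in the following sketch, where the matrices H, A, B, C, G and the vector F are assumed to be pre-assembled as in (3.32) and (4.15), and the delayed firing rates are read from stored history according to (4.13); all names are placeholders rather than the actual implementation.

import numpy as np

def delayed_index(n, D, dt):
    # Delayed time index (4.13); D is assumed to be an integer multiple of dt.
    return max(0, n - round(D / dt))

def firing_rate(u_n, a_coef, dpsi_VF):
    # Mean firing rate (4.16): N^n = -a_alpha * sum_k u_{alpha,k}^n * d_v psi_k(V_F),
    # where dpsi_VF collects the basis derivatives at V_F.
    return -a_coef * float(dpsi_VF @ u_n)

def step_population(u_n, R_n, N_n, M_n, H, A, B, C, G, F, dt, V_coef, a_coef):
    # One semi-implicit step of (4.14): the coefficients V_coef = V_alpha and
    # a_coef = a_alpha are evaluated explicitly from the delayed firing rates,
    # while the linear spatial operator acts implicitly on u^{n+1}.
    lhs = H / dt + A - V_coef * B + a_coef * (C + G)
    rhs = H @ u_n / dt + M_n * F
    u_np1 = np.linalg.solve(lhs, rhs)
    R_np1 = R_n + dt * (N_n - M_n)      # forward Euler for the refractory state
    return u_np1, R_np1

In the two-population simulations, the caller evaluates V_alpha and a_alpha from the stored firing-rate history at the indices delayed_index(n, D_E^alpha, dt) and delayed_index(n, D_I^alpha, dt) according to (4.17), and sets M_alpha^n = R_alpha^n / tau_alpha as in (4.15) (or M_alpha = N_alpha when the refractory period vanishes, as in the accuracy tests below).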

5 Numerical tests

In this section, several numerical tests are presented to validate LLSGM. First, the convergence rates of the numerical schemes proposed in (3.35) for the one-population model and (4.14) for the two-population model are studied, followed by a stability study in Sec. 5.1. Then, the efficiency of LLSGM is tested, including a comparison with the finite difference method [17] and the spectral method [29] in Sec. 5.2. Moreover, the blow-up phenomena for both the one-population and two-population models are investigated in Sec. 5.3. Finally, the numerical experiments are extended to the two-population model with synaptic delays and refractory states in Sec. 5.4, where transitions between periodic oscillations, steady states, and blow-up events are explored.

In all numerical tests, VFV_{F} and VRV_{R} are fixed as VF=2V_{F}=2 and VR=1V_{R}=1, and the initial distribution is chosen as the Gaussian function

p_{G}(v)=\frac{1}{\sqrt{2\pi}\,\sigma_{0}M_{0}}\,\mathrm{e}^{-\frac{(v-v_{0})^{2}}{2\sigma_{0}^{2}}}, \quad (5.1)

where v0v_{0} and σ02\sigma_{0}^{2} are the mean and variance, respectively. Here, M0M_{0} is a normalization factor, which makes pG(v)p_{G}(v) satisfy

\int_{-\infty}^{V_{F}}p_{G}(v)\,\mathrm{d}v=1. \quad (5.2)

In the numerical tests, v0v_{0} and σ0\sigma_{0} are specified for each problem, and M0M_{0} is then determined by the normalization condition (5.2).
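Since the restriction of a Gaussian to (-infty, V_F] integrates to its cumulative distribution function evaluated at V_F, the factor M_0 is available in closed form. A short sketch with V_F = 2 as fixed above:

from math import erf, sqrt, pi, exp

def normalization_factor(v0, sigma0, VF=2.0):
    # M_0 = int_{-infty}^{V_F} (2*pi*sigma0^2)^(-1/2) * exp(-(v-v0)^2/(2*sigma0^2)) dv,
    # i.e. the Gaussian CDF at V_F, so that p_G in (5.1) satisfies (5.2).
    return 0.5 * (1.0 + erf((VF - v0) / (sqrt(2.0) * sigma0)))

def p_G(v, v0, sigma0, VF=2.0):
    M0 = normalization_factor(v0, sigma0, VF)
    return exp(-(v - v0) ** 2 / (2.0 * sigma0 ** 2)) / (sqrt(2.0 * pi) * sigma0 * M0)

# e.g. the setting of Sec. 5.1.1: p_G(v, v0=-1.0, sigma0=sqrt(0.5))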

5.1 Study of the order of accuracy and the stability test

In this section, the order of accuracy of LLSGM, including the temporal and spatial convergence for the one-population model (1.1) and (1.2), is first tested, followed by the temporal and spatial convergence orders of LLSGM for the two-population model. Finally, the stability of LLSGM for the one-population model is examined by varying the expansion number MM and the time-step length Δt\Delta t.

5.1.1 Order of accuracy for one-population model

In this section, the convergence order for LLSGM in temporal and spatial discretization is studied. The initial distribution in (5.1) is set as v0=1v_{0}=-1 and σ02=0.5\sigma_{0}^{2}=0.5 with the parameters in (1.1) chosen as a0=1,a1=0.1a_{0}=1,a_{1}=0.1 and b=0b=0.

The convergence of the explicit-implicit scheme (3.35) in the temporal discretization is tested by calculating the L2L^{2} and LL^{\infty} errors between the numerical solutions obtained with different time-step lengths and the reference solution. Here, the reference solution is obtained by the finite-difference method (FDM) proposed in [17] with a sufficiently fine mesh. In the simulation, the expansion number MM is set as M=16M=16, and the final time is t=0.2t=0.2. Tab. 1 shows the errors together with the convergence orders calculated by (5.3)

O_{\Delta t,L^{l}}=\log_{2}\frac{\left\|p_{2\Delta t}-p_{r}\right\|_{l}}{\left\|p_{\Delta t}-p_{r}\right\|_{l}}, \quad (5.3)

where prp_{r} denotes the reference solution and pΔtp_{\Delta t} is the numerical solution with the time-step length Δt\Delta t. For the numerical solution, the time-step lengths are set as Δt=0.04,0.02,0.01\Delta t=0.04,0.02,0.01 and 0.0050.005. Tab. 1 indicates the first-order convergence of (3.35) in the temporal direction.

\Delta t    \|p_{\Delta t}-p_{r}\|_{2}    O_{\Delta t,L^{2}}    \|p_{\Delta t}-p_{r}\|_{\infty}    O_{\Delta t,L^{\infty}}
0.04 4.58e-03 —— 3.89e-03 ——
0.02 2.36e-03 0.95 2.02e-03 0.95
0.01 1.20e-03 0.97 1.04e-03 0.97
0.005 6.09e-04 0.98 5.31e-04 0.96
Table 1: (Order of accuracy in Sec. 5.1.1) The first-order convergence of the explicit-implicit scheme (3.35) in the temporal discretization for the one-population model.
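The convergence orders reported above can be reproduced directly from the tabulated errors via (5.3); for instance, for the L^2 column of Tab. 1:

import numpy as np

errors_L2 = np.array([4.58e-03, 2.36e-03, 1.20e-03, 6.09e-04])   # dt = 0.04, 0.02, 0.01, 0.005
orders = np.log2(errors_L2[:-1] / errors_L2[1:])                  # formula (5.3)
# recovers orders close to 1, matching Tab. 1 up to the rounding of the tabulated errors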

Then the convergence of LLSGM (3.35) in space is studied, where the L2L^{2} errors between the numerical solutions obtained with different expansion numbers MM and the reference solution are calculated. Here, the same reference solution obtained by FDM is utilized. For the numerical solution, the time-step length is fixed as Δt=0.001\Delta t=0.001, and the expansion number varies from M=3M=3 to 1212. Here, we want to emphasize that the number of basis functions is 2M+12M+1. The corresponding error is calculated as

O_{M,L^{2}}=\ln\left\|p_{M}-p_{r}\right\|_{2}, \quad (5.4)

where pMp_{M} is the numerical solution with expansion number MM. Fig. LABEL:fig:ex1_spatial_one shows the error obtained by (5.4) for different MM, where the numerical results for odd and even MM are plotted separately. In Fig. LABEL:fig:ex1_spatial_one, we observe a linear relation between the error (5.4) and MM, which indicates the spectral convergence of LLSGM.
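The quantity (5.4) used in these plots can be evaluated on a quadrature grid. A minimal sketch, with the basis functions, the reference solution, and the quadrature rule as placeholders:

import numpy as np

def log_L2_error(u_M, basis, p_ref, nodes, weights):
    # O_{M,L2} in (5.4): logarithm of the L2 distance between the spectral solution
    # p_M = sum_k u_k psi_k and the reference solution, evaluated on quadrature nodes.
    p_M = sum(u * psi(nodes) for u, psi in zip(u_M, basis))
    return float(np.log(np.sqrt(np.dot(weights, (p_M - p_ref(nodes)) ** 2))))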

5.1.2 Order of accuracy for two-population model

diffusion coefficient    a = 1
external input    \nu_{E,\text{ext}} = 0
refractory periods    \tau_{E} = \tau_{I} = 0
initial distribution    [v_{0}^{E},(\sigma_{0}^{E})^{2}] = [-1,0.5]
    [v_{0}^{I},(\sigma_{0}^{I})^{2}] = [0,0.25]
synaptic delays    D_{E}^{E} = D_{E}^{I} = D_{I}^{E} = D_{I}^{I} = 0
connectivity parameters    [b_{E}^{E},b_{E}^{I}] = [0.5,0.5]
    [b_{I}^{E},b_{I}^{I}] = [0.75,0.25]
Table 2: (Order of accuracy in Sec. 5.1.2) Computational parameters in (4.8) for the two-population model.

In this section, the convergence order of LLSGM applied to the two-population model is studied, focusing on both temporal and spatial discretization. For the two-population model, the same Gaussian function (5.1) is utilized for the initial distribution of both excitatory and inhibitory populations with the mean and variance denoted by v0αv_{0}^{\alpha} and (σ0α)2,α=E,I(\sigma_{0}^{\alpha})^{2},\alpha=E,I. Here, we set Mα=NαM_{\alpha}=N_{\alpha} and the other parameters in (4.8) are shown in Tab. 2.

Excitatory population    \|p^{E}_{\Delta t}-p^{E}_{r}\|_{2}    O^{E}_{\Delta t,L^{2}}    \|p^{E}_{\Delta t}-p^{E}_{r}\|_{\infty}    O^{E}_{\Delta t,L^{\infty}}
Δt=0.04\Delta t=0.04 4.16e-03 —— 3.52e-03 ——
Δt=0.02\Delta t=0.02 2.15e-03 0.95 1.83e-03 0.94
Δt=0.01\Delta t=0.01 1.09e-03 0.97 9.46e-04 0.95
Δt=0.005\Delta t=0.005 5.54e-04 0.98 4.81e-04 0.98
Inhibitory population    \|p^{I}_{\Delta t}-p^{I}_{r}\|_{2}    O^{I}_{\Delta t,L^{2}}    \|p^{I}_{\Delta t}-p^{I}_{r}\|_{\infty}    O^{I}_{\Delta t,L^{\infty}}
Δt=0.04\Delta t=0.04 1.02e-02 —— 1.12e-02 ——
Δt=0.02\Delta t=0.02 5.28e-03 0.96 5.72e-03 0.97
Δt=0.01\Delta t=0.01 2.68e-03 0.98 2.87e-03 0.99
Δt=0.005\Delta t=0.005 1.35e-03 0.99 1.42e-03 1.01
Table 3: (Order of accuracy in Sec. 5.1.2) The first-order convergence of the scheme (4.14) in the temporal discretization for the two-population model.

Similar to the one-population model, to assess the temporal convergence of the numerical scheme (4.14), the L2L^{2} and LL^{\infty} errors between the numerical solutions obtained with different time-step lengths and the reference solution are calculated. Here, the reference solution is obtained by the finite-difference method [17] with a sufficiently fine mesh. In the simulation, the expansion number is fixed as M=16M=16. The errors and the convergence orders calculated by (5.3) for both the excitatory and inhibitory populations at t=0.2t=0.2 are shown in Tab. 3, where the time-step lengths utilized to obtain the numerical solutions are Δt=0.04,0.02,0.01\Delta t=0.04,0.02,0.01 and 0.0050.005. The results clearly demonstrate the first-order convergence of the scheme (4.14) in the temporal direction.

Next, the spatial convergence of LLSGM (4.14) is investigated by computing the L2L^{2} error between numerical solutions obtained with different expansion numbers MM and the same reference solution. The L2L^{2} error is calculated by (5.4). In this simulation, the time step length is fixed at Δt=0.001\Delta t=0.001, while the expansion number MM varies from 33 to 1212. Fig. LABEL:fig:ex2_spatial displays the error obtained by (5.4) with odd and even MM plotted separately, where a linear relationship between the error (5.4) and MM is revealed, demonstrating the spectral convergence of LLSGM for the more complex two-population model (4.8).

5.1.3 Stability test for one-population model

In this section, we assess the stability of LLSGM applied to the one-population model (1.1). The initial distribution (5.1) is set as v0=1v_{0}=-1 and σ02=0.5\sigma_{0}^{2}=0.5, and the parameters in (1.1) are a0=1,a1=0.1a_{0}=1,a_{1}=0.1 and b=0b=0. Here, the reference solution is obtained by the finite difference method. The L2L^{2} error between the numerical solution and the reference solution at t=0.2t=0.2 for different expansion numbers MM and time-step lengths Δt\Delta t is shown in Tab. 4. The L2L^{2} error remains bounded and does not exhibit significant increases with changes in MM and Δt\Delta t, demonstrating the numerical stability of LLSGM.

M \ \Delta t    0.1    0.05    0.025    0.0125    0.0063    0.0032
3 1.63e-01 1.69e-01 1.73e-01 1.75e-01 1.76e-01 1.76e-01
4 6.96e-02 6.86e-02 6.81e-02 6.79e-02 6.77e-02 6.77e-02
5 6.27e-02 6.44e-02 6.55e-02 6.61e-02 6.65e-02 6.66e-02
6 2.04e-02 1.85e-02 1.78e-02 1.75e-02 1.74e-02 1.74e-02
7 2.01e-02 1.91e-02 1.90e-02 1.90e-02 1.91e-02 1.92e-02
8 1.11e-02 6.80e-03 4.81e-03 4.06e-03 3.81e-03 3.74e-03
9 1.09e-02 6.58e-03 4.69e-03 4.07e-03 3.92e-03 3.89e-03
10 1.06e-02 5.91e-03 3.45e-03 2.36e-03 1.97e-03 1.85e-03
11 1.05e-02 5.68e-03 3.03e-03 1.71e-03 1.15e-03 9.56e-04
12 1.05e-02 5.68e-03 3.02e-03 1.67e-03 1.06e-03 8.38e-04
13 1.05e-02 5.64e-03 2.94e-03 1.51e-03 7.76e-04 4.17e-04
Table 4: (Stability test in Sec. 5.1.3) The L2L^{2} error of LLSGM with different MM and Δt\Delta t.

5.2 Efficiency comparison

In this section, the efficiency of LLSGM is validated by comparing it with the finite difference method (FDM) proposed in [17] and the Legendre-Galerkin method (LGM) in [29]. All simulations are performed on an Intel Core i5-1135G7 CPU @ 2.40 GHz.

In the simulation, we consider the initial distribution (5.1) with v0=0v_{0}=0 and σ02=0.25\sigma_{0}^{2}=0.25. The time-step length is fixed at Δt=107\Delta t=10^{-7} so that the time-discretization error is negligible compared to the spatial-discretization error. The final time is t=0.5t=0.5 and the parameters in (1.1) are set as a=1a=1 and b=0.5b=0.5. Here, the reference solution prp_{r} for LLSGM and LGM is the respective numerical solution with M=30M=30. For FDM, the reference solution prp_{r} is the numerical solution obtained with mesh size h=1128h=\frac{1}{128}.

The L2L^{2} error and the computational time for LLSGM, LGM, and FDM are presented in Tab. 5. For the same expansion number, both the error and the computational time of LLSGM are slightly smaller than those of LGM in [29]. Moreover, to reach a comparable level of numerical error, LLSGM requires significantly less computational time than FDM, and this advantage becomes more pronounced as the accuracy requirement increases. In particular, when the numerical error reaches the order of 10510^{-5}, the computational time of FDM is more than ten times that of LLSGM.

LLSGM LGM FDM
M    \|p_{M}-p_{r}\|_{2}    CPU time (s)    M    \|p_{M}-p_{r}\|_{2}    CPU time (s)    h    \|p_{h}-p_{r}\|_{2}    CPU time (s)
4 3.55e-02 30.98 4 6.18e-02 38.85 1/4 3.75e-03 47.72
8 6.72e-03 47.55 8 9.51e-03 63.97 1/8 1.12e-03 93.72
12 1.33e-04 62.75 12 4.04e-04 73.48 1/16 3.12e-04 219.30
16 2.11e-05 83.09 16 7.23e-05 96.21 1/32 8.16e-05 536.48
20 1.96e-06 106.14 20 2.26e-06 139.73 1/64 1.98e-05 4741.07
Table 5: (Efficiency test in Sec. 5.2) The computational time of LLSGM, LGM and FDM for (1.1).

Next, for the more complex two-population model (4.8), the computational time required by LLSGM and FDM to capture periodic solutions is compared. In the simulation, the final time is set as t=0.5t=0.5 and the time-step length is Δt=107\Delta t=10^{-7}, while the other parameters are listed in Tab. 6. The corresponding computational time is provided in Tab. 7. Significantly less time is required by LLSGM compared to FDM, demonstrating its greater efficiency in this more complex scenario.

diffusion coefficient    a = 1
external input    \nu_{E,\text{ext}} = 20
refractory periods    \tau_{E} = \tau_{I} = 0.025
initial distribution    [v_{0}^{E},(\sigma_{0}^{E})^{2}] = [-1,0.5]
    [v_{0}^{I},(\sigma_{0}^{I})^{2}] = [-1,0.5]
synaptic delays    D_{E}^{E} = D_{E}^{I} = D_{I}^{E} = D_{I}^{I} = 0.1
connectivity parameters    [b_{E}^{E},b_{E}^{I}] = [3.5,4]
    [b_{I}^{E},b_{I}^{I}] = [0.75,3]
Table 6: (Efficiency test in Sec. 5.2) Computational parameters in (4.8) for the two-population model with delays and refractory states.
LLSGM    M    \|p^{E}_{M}-p^{E}_{r}\|_{2}    \|p^{I}_{M}-p^{I}_{r}\|_{2}    CPU time (s)
4 3.97e-01 2.32e-01 65.72
8 2.12e-02 2.27e-03 91.12
12 1.47e-03 3.06e-04 122.29
16 1.41e-04 2.78e-05 163.61
FDM    h    \|p_{h}^{E}-p_{r}^{E}\|_{2}    \|p_{h}^{I}-p_{r}^{I}\|_{2}    CPU time (s)
1/4 2.78e-02 1.97e-01 137.34
1/8 8.02e-03 7.74e-02 213.06
1/16 2.19e-03 1.94e-02 509.48
1/32 7.78e-04 4.54e-03 1259.75
1/64 3.31e-04 9.04e-04 11375.21
Table 7: (Efficiency test in Sec. 5.2) The computation time of LLSGM and FDM for (4.8).

5.3 Blow-up events

In this section, the blow-up events are first studied with LLSGM for the one-population model, and the occurrence of blow-up phenomena in the two-population model is then examined. Blow-up events have been widely studied for the NNLIF model, where the solution may blow up in finite time.

5.3.1 One-population model

In this section, the blow-up events are studied in detail with LLSGM for the one-population model. The studies in [3, 29] show that blow-up events occur if the parameter bb in (1.1) is sufficiently large. In the simulation, the parameters are set as a0=1a_{0}=1, a1=0a_{1}=0 and b=3b=3, and the scheme (3.35) is utilized with the expansion number M=16M=16 and the time-step length fixed as Δt=0.001\Delta t=0.001.

The evolution of the mean firing rate N(t)N(t) is plotted in Fig. LABEL:fig:blow_up1_N, and the profile of the density function p(v,t)p(v,t) in vv at t=2.95,3.15t=2.95,3.15 and 3.353.35 is shown in Fig. LABEL:fig:blow_up1_p. It can be observed that the density function becomes increasingly concentrated around VRV_{R} and develops a sharp peak as the firing rate N(t)N(t) increases rapidly.

5.3.2 Two-population model

In this section, we extend the study of blow-up events to the excitatory-inhibitory population model (4.8) using LLSGM. As shown in [7, 10], blow-up events can occur for the two-population model when the connectivity parameter bEEb_{E}^{E} becomes sufficiently large. In the simulation, we set Mα=NαM_{\alpha}=N_{\alpha}, and the scheme (4.14) is adopted with the expansion number M=16M=16 and a fixed time-step length Δt=0.001\Delta t=0.001. The detailed parameters in (4.8) are listed in Tab. 8.

diffusion coefficient    a = 1
external input    \nu_{E,\text{ext}} = 0
refractory periods    \tau_{E} = \tau_{I} = 0
initial distribution    [v_{0}^{E},(\sigma_{0}^{E})^{2}] = [-1,0.5]
    [v_{0}^{I},(\sigma_{0}^{I})^{2}] = [-1,0.5]
synaptic delays    D_{E}^{E} = D_{E}^{I} = D_{I}^{E} = D_{I}^{I} = 0
connectivity parameters    [b_{E}^{E},b_{E}^{I}] = [3,0.5]
    [b_{I}^{E},b_{I}^{I}] = [0.75,0.25]
Table 8: (Blow-up events for the two-population model in Sec. 5.3.2) Computational parameters in (4.8) for the two-population model.

Fig. LABEL:fig:blow_up2E_N and Fig. LABEL:fig:blow_up2I_N illustrate the evolution of the mean firing rates NE(t)N_{E}(t) and NI(t)N_{I}(t), while Fig. LABEL:fig:blow_up2E_p and Fig. LABEL:fig:blow_up2I_p show the profiles of the density functions pE(v,t)p_{E}(v,t) and pI(v,t)p_{I}(v,t) at different times. As depicted in Fig. LABEL:fig:blow_up2, the two populations blow up almost simultaneously. The concentration of the density functions around VRV_{R}, as shown in Fig. LABEL:fig:blow_up2E_p and Fig. LABEL:fig:blow_up2I_p, provides a clear indication of the blow-up events. Compared to the inhibitory population, the density function of the excitatory population accumulates more distinctly at the reset voltage VRV_{R}, resulting in a sharper peak.

5.4 Two-population model with delays and refractory states

In this section, we perform numerical experiments to investigate the dynamics of the two-population model (4.8), focusing on the significant roles of refractory states and synaptic delays. These factors are critical in shaping the collective behavior of neuron populations and are known to influence the emergence of periodic solutions, as shown in [10]. However, the interactions between excitatory and inhibitory populations have been comparatively less explored.

To better understand the interaction between excitatory and inhibitory populations in neuron networks, we specifically consider a scenario where an inhibitory population, exhibiting periodic oscillations under external stimuli due to synaptic delays and refractory states [10], is coupled with an excitatory population. By varying the connectivity between the two populations, we aim to observe the dynamic behavior of the system, including transitions between periodic oscillations, steady states, and blow-up events.

In the simulation, the expansion number is fixed at M=16M=16, with a time-step length of Δt=107\Delta t=10^{-7}, and the other parameters are listed in Tab. 6. Since blow-up events occur for sufficiently large excitatory synaptic strength bEEb_{E}^{E} [7], we systematically vary bEEb_{E}^{E} while keeping the other parameters fixed. For numerical convenience, the synaptic delays are chosen to be integer multiples of the time-step length. When the excitatory synaptic strength bEEb_{E}^{E} is small, the dynamic behavior of the system is expected to resemble that of a single inhibitory population. As bEEb_{E}^{E} increases, the system is anticipated to transition to a steady state. For sufficiently large values of bEEb_{E}^{E}, blow-up events are expected to occur in the system.

Results from the numerical experiments confirm the expected transitions. When the excitatory synaptic strength bEEb_{E}^{E} is relatively small, the inhibitory population dominates the system, and the firing rates Nα(t)N_{\alpha}(t) exhibit periodic oscillations, as shown in Fig. LABEL:fig_periodic_NE and Fig. LABEL:fig_periodic_NI for bEE=3.5b_{E}^{E}=3.5.

As bEEb_{E}^{E} increases, the influence of the excitatory population intensifies, gradually weakening the dominance of the inhibitory population and shifting the system towards a steady state. In this case, both the excitatory firing rate NE(t)N_{E}(t) and the inhibitory firing rate NI(t)N_{I}(t) increase without blowing up and settle to constant values after a certain time, as depicted in Fig. LABEL:fig_steady_NE and Fig. LABEL:fig_steady_NI for bEE=3.82b_{E}^{E}=3.82. This transition occurs because the enhanced excitatory synaptic strength leads to a more balanced interplay between the excitatory and inhibitory populations.

Finally, when bEEb_{E}^{E} is sufficiently large, the excitatory population dominates the system and triggers a blow-up of the whole system, as observed in Fig. LABEL:fig_blowup_NE and Fig. LABEL:fig_blowup_NI for bEE=4b_{E}^{E}=4. The firing rates Nα(t)N_{\alpha}(t) grow without bound, consistent with the numerical experiments in [10].

6 Conclusion

In this work, we have presented an efficient numerical scheme based on the spectral method to solve the Fokker-Planck equation on the semi-unbounded domain associated with the Nonlinear Noisy Leaky Integrate-and-Fire model. The scheme is consistent with the Fokker-Planck equation and exhibits spectral convergence in the spatial direction. A comparison of computational time at the same accuracy level demonstrates the high efficiency of LLSGM. Additionally, we have successfully extended the method to an excitatory-inhibitory population model with synaptic delays and refractory periods and explored the transitions between periodic solutions, steady states, and blow-up events. In the future, we may develop a more stable high-order numerical scheme in time and investigate more complex properties of the Fokker-Planck equation.

Acknowledgements

We thank Prof. Jie Shen from Eastern Institute of Technology and Prof. Jiwei Zhang from Wuhan University for their valuable suggestions. The work of Zhennan Zhou is partially supported by the National Key R&D Program of China (Project No. 2021YFA1001200, 2020YFA0712000), and the National Natural Science Foundation of China (Grant No. 12031013, 12171013). The work of Yanli Wang is partially supported by the President Foundation of China Academy of Engineering Physics (YZJJZQ2022017) and the National Natural Science Foundation of China (Grant No. 12171026, U2230402 and 12031013).

References

  • [1] N. Brunel. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8(3):183–208, 2000.
  • [2] N. Brunel and V. Hakim. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11(7):1621–1671, 1999.
  • [3] M. Cáceres, J. Carrillo, and B. Perthame. Analysis of nonlinear noisy integrate & fire neuron models: blow-up and steady states. The Journal of Mathematical Neuroscience, 1(7):1–33, 2011.
  • [4] M. Cáceres, J. Carrillo, and L. Tao. A numerical solver for a nonlinear Fokker-Planck equation representation of neuronal network dynamics. Journal of Computational Physics, 230(4):1084–1099, 2011.
  • [5] M. Cáceres and B. Perthame. Beyond blow-up in excitatory integrate and fire neuronal networks: refractory period and spontaneous activity. Journal of Theoretical Biology, 350:81–89, 2014.
  • [6] M. Cáceres, P. Roux, D. Salort, and R. Schneider. Global-in-time solutions and qualitative properties for the NNLIF neuron model with synaptic delay. Communications in Partial Differential Equations, 44(12):1358–1386, 2019.
  • [7] M. Cáceres and R. Schneider. Blow-up, steady states and long time behaviour of excitatory-inhibitory nonlinear neuron models. Kinetic and Related Models, 10(3):587–612, 2016.
  • [8] M. Cáceres and R. Schneider. Towards a realistic NNLIF model: analysis and numerical solver for excitatory-inhibitory networks with delay and refractory periods. ESAIM: Mathematical Modelling and Numerical Analysis, 52(5):1733–1761, 2018.
  • [9] J. Carrillo, M. González, M. Gualdani, and M. Schonbek. Classical solutions for a nonlinear Fokker-Planck equation arising in computational neuroscience. Communications in Partial Differential Equations, 38(3):385–409, 2013.
  • [10] M. Cáceres and R. Schneider. Analysis and numerical solver for excitatory-inhibitory networks with delay and refractory periods. ESAIM: Mathematical Modelling and Numerical Analysis, 52(5):1733–1761, 2018.
  • [11] X. Dou and Z. Zhou. Dilating blow-up time: a generalized solution of the NNLIF neuron model and its global well-posedness. arXiv preprint arXiv:2206.06972, 2022.
  • [12] Z. Du, Y. Xie, and Z. Zhou. A synchronization-capturing multiscale solver to the noisy integrate-and-fire neuron networks. Multiscale Modeling & Simulation, 22(1):561–587, 2024.
  • [13] G. Dumont and P. Gabriel. The mean-field equation of a leaky integrate-and-fire neural network: measure solutions and steady states. Nonlinearity, 33(12):6381, 2020.
  • [14] G. Dumont, J. Henry, and C. Tarniceriu. Noisy threshold in neuronal models: connections with the noisy leaky integrate-and-fire model. Journal of Mathematical Biology, 73(6):1413–1436, 2016.
  • [15] G. Dumont, J. Henry, and C. Tarniceriu. Oscillations in a fully connected network of leaky integrate-and-fire neurons with a poisson spiking mechanism. Journal of Nonlinear Science, 34(1):18, 2024.
  • [16] Q. He, J. Hu, and Z. Zhou. A structure preserving numerical scheme for Fokker-Planck equations of structured neural networks with learning rules. SIAM Journal on Scientific Computing, 44(4):B1045–B1067, 2022.
  • [17] J. Hu, J. Liu, Y. Xie, and Z. Zhou. A structure preserving numerical scheme for Fokker-Planck equations of neuron networks: numerical analysis and exploration. Journal of Computational Physics, 433:110195, 2021.
  • [18] J. Liu, Z. Wang, Y. Zhang, and Z. Zhou. Rigorous justification of the Fokker-Planck equations of neural networks based on an iteration perspective. SIAM Journal on Mathematical Analysis, 54(1):1270–1312, 2022.
  • [19] K. Newhall, G. Kovačič, P. Kramer, and D. Cai. Cascade-induced synchrony in stochastically driven neuronal networks. Physical Review E, 82(4):041903, 2010.
  • [20] K. Newhall, G. Kovačič, P. Kramer, D. Zhou, A. Rangan, and D. Cai. Dynamics of current-based, Poisson driven, integrate-and-fire neuronal networks. Communications in Mathematical Sciences, 8(2):541–600, 2010.
  • [21] D. Nykamp and D. Tranchina. A population density approach that facilitates large-scale modeling of neural networks: analysis and an application to orientation tuning. Journal of Computational Neuroscience, 8:19–50, 2000.
  • [22] D. Nykamp and D. Tranchina. A population density approach that facilitates large-scale modeling of neural networks: extension to slow inhibitory synapses. Neural Computation, 13(3):511–546, 2001.
  • [23] A. Renart, N. Brunel, and X. Wang. Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. Computational neuroscience: A comprehensive approach, pages 431–490, 2004.
  • [24] D. Sharma and P. Singh. Discontinuous Galerkin approximation for excitatory-inhibitory networks with delay and refractory periods. International Journal of Modern Physics C, 31(03):2050041, 2020.
  • [25] D. Sharma, P. Singh, R. Agarwal, and M. Koksal. Numerical approximation for nonlinear noisy leaky integrate-and-fire neuronal model. Mathematics, 7(4):363, 2019.
  • [26] J. Shen. Efficient spectral-Galerkin method I. Direct solvers of second- and fourth-order equations using Legendre polynomials. SIAM Journal on Scientific Computing, 15(6):1489–1505, 1994.
  • [27] J. Shen, T. Tang, and L. Wang. Spectral methods: algorithms, analysis and applications, volume 41. Springer Science & Business Media, 2011.
  • [28] J. Zhang, Y. Shao, A. Rangan, and L. Tao. A coarse-graining framework for spiking neuronal networks: from strongly-coupled conductance-based integrate-and-fire neurons to augmented systems of ODEs. Journal of Computational Neuroscience, 46:211–232, 2019.
  • [29] P. Zhang, Y. Wang, and Z. Zhou. A spectral method for a Fokker-Planck equation in neuroscience with applications in neuron networks with learning rules. Communications in Computational Physics, 35(1):70–106, 2024.