
1 Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, 100871, China;
2 Yuanpei College, Peking University, Beijing, 100871, China;
3 School of Mathematical Sciences, Peking University, Beijing, 100871, China;
4 Department of Mathematics, University of California, Berkeley, CA 94720, USA;
5 Center for Quantitative Biology, Peking University, Beijing, 100871, China;
6 Courant Institute of Mathematical Sciences, New York University, NY 10003, USA;
7 Department of Mathematics and Statistics, University of Massachusetts Amherst, MA 01003, USA.
∗ Equal contribution to this paper.
† Corresponding authors:
L. Tao, email: [email protected];
Z.-C. Xiao, email: [email protected];
Y. Li, email: [email protected]

Learning biological neuronal networks with artificial neural networks: neural oscillations

Ruilin Zhang1,2,∗    Zhongyi Wang1,3,∗    Tianyi Wu1,3    Yuhang Cai4    Louis Tao1,5,†    Zhuo-Cheng Xiao6,†    Yao Li7,†
(Received: date / Accepted: date / Edited: date)
Abstract

First-principles-based models have been extremely successful in providing crucial insights and predictions for complex biological functions and phenomena. However, they can be hard to build and expensive to simulate for complex living systems. On the other hand, modern data-driven methods thrive at modeling many types of high-dimensional and noisy data. Still, the training and interpretation of these data-driven models remain challenging. Here, we combine the two types of methods to model stochastic neuronal network oscillations. Specifically, we develop a class of first-principles-based artificial neural networks to provide faithful surrogates of the high-dimensional, nonlinear oscillatory dynamics produced by neural circuits in the brain. Furthermore, when the training data set is enlarged within a range of parameter choices, the artificial neural networks become generalizable over these parameters, covering cases in distinctly different dynamical regimes. Altogether, our work opens a new avenue for modeling complex neuronal network dynamics with artificial neural networks.

Keywords:
Artificial neural network · Gamma oscillations · Data-driven methods · Generalization
journal: Journal of Mathematical Biology

1 Introduction

The last few decades have seen the rapid development of first-principles-based mathematical models for studying living systems. Built on a collection of a priori physiological and physical principles, such mathematical models can offer significant advantages in understanding, reproducing, and predicting complex biological phenomena. However, first-principles-based models can be prohibitively expensive to build due to the large number of parameters and variables characterizing the complexity of biological details, e.g., multiple time scales and complicated interactions between biological elements, among others. Alternatively, modern data-driven models focusing on phenomenological or empirical observations are gaining ground in mathematical biology, in that they are designed to deal with high-dimensional and noisy data janes2006data (1, 2, 3, 4, 5). However, one is still faced with the daunting task of making sense of the coordinates and parameters of data-driven models to identify interpretable and biologically meaningful features.

In this study, we investigate how the combination of the two classes of methods can be used to study spiking neuronal networks (SNNs). SNNs are capable of producing highly nonlinear, high-dimensional, and multi-timescale dynamics, and have been widely used to investigate cortical functions and their computational principles (see, e.g., ghosh2009spiking (6, 7, 8, 9, 10)). First-principles-based model reduction methods such as coarse-graining and mean-field theories have been developed to better understand SNN dynamics wilson1972excitatory (11, 12, 13, 14, 15, 16). On the other hand, artificial neural networks (ANNs, and their offspring, deep neural networks, or DNNs) are a modern data-driven method inspired by the nervous system. DNNs have been extremely successful in both engineering applications (image processing, natural language processing, etc.) and applied mathematics (parameter estimation, numerical ordinary/partial differential equations, inverse problems, etc.); see aggarwal2018neural (17, 18, 19, 20, 21), for instance. In particular, it has been shown recently that DNNs can approximate both finite- and infinite-dimensional operators well barron1994approximation (22, 23). The idea of using DNN surrogates to replace the firing-rate mappings of SNNs was first explored in zhang2020dnn (24). This motivates us to propose a first-principles-based deep-learning framework that replaces high-dimensional mappings within neuronal network dynamics with artificial neural networks.

The neuroscience problem we address in this paper is $\gamma$-band oscillations, a type of 30-90 Hz oscillatory physiological dynamics prominent in many brain regions HenrieShapley2005 (25, 26, 27, 28, 29, 30, 31, 32, 33). Remarkably, previous studies showed that $\gamma$-oscillations can be produced in simple, idealized SNN models involving only two neural populations, excitatory (E) and inhibitory (I) chariker2018rhythm (34, 35, 36, 37). More specifically, due to transient noise and/or external stimulus, highly correlated spiking patterns (previously termed multiple-firing events, or MFEs) are repeatedly produced by the competition between the E and I populations, involving the interplay between multiple timescales. MFEs are a type of stochastic, high-dimensional emergent phenomenon, with rapid and transient dynamical features that are sensitive to the biophysical parameters of the network. Therefore, it is a very challenging task to build model reductions that can provide biological insights into $\gamma$-oscillations over a wide range of parameter regimes.

This paper explores learning the complex $\gamma$-oscillations with first-principles-based DNNs. Our previous study revealed that the complex $\gamma$-oscillatory dynamics can be captured by a Poincaré map $F$ projecting the network state at the initiation of one MFE to that at the next initiation cai2021model (15). However, $F$ is a high-dimensional mapping that depends on the biophysical parameters of the SNN and is thus very hard to analyze. Instead, we approximate $F$ by (A) using coarse-graining (CG) and the discrete cosine transform (DCT) to reduce the dimensionality of the state space, and (B) exploiting the representation power of DNNs. Specifically, DNNs provide a unified framework for varying SNN model parameters, revealing the potential of generalization to different dynamical regimes of the emergent network oscillations. Despite the significant underlying noise and the drastic dimensional reductions, our DNNs successfully capture the main features of the $\gamma$-oscillations in SNNs. This effectively makes the DNN a surrogate of the true biological neuronal network.

The organization of this paper is as follows. Section 2 introduces the neuronal network model that serves as the ground truth. Section 3 describes MFEs and the algorithm for capturing them. Section 4 discusses how to set up the training set for the artificial neural networks. The main results are presented in Section 5. Section 6 contains the conclusions and discussion.

2 Neuronal network model description

Throughout this manuscript, we study SNN dynamics with a Markovian integrate-and-fire (MIF) neuronal network model. This model imitates a small, local circuit of the brain and shares many features of local circuits in living brains, including extensive recurrent interactions between neurons, leading to the emergence of γ\gamma-oscillations. We will evaluate the performance of DNNs based on their predictive power of dynamics produced by the MIF model.

2.1 A Markovian spiking neuronal network

We consider a simple spiking network consisting of $N_E$ excitatory (E) neurons and $N_I$ inhibitory (I) neurons, homogeneously connected. The membrane potential ($V$) of each neuron is governed by Markovian dynamics, with the following assumptions:

  1. $V$ takes values in a finite, discrete state space;

  2. For neuron $i$, $V_i$ is driven by both external and recurrent E/I inputs from other neurons, through the effect of arriving spikes (or action potentials);

  3. A spike is released from neuron $i$ when $V_i$ is driven to the firing threshold. Immediately after that, neuron $i$ enters the refractory state before resetting to the rest state;

  4. For each spike released by neuron $i$, a set of postsynaptic neurons is chosen randomly; their membrane potentials are driven by this spike.

We now explain these assumptions in detail.

Single neuron dynamics. Let us index the $N_E$ excitatory neurons from $1,\cdots,N_E$, and the $N_I$ inhibitory neurons from $N_E+1,\cdots,N_E+N_I$. For neuron $i$ ($i=1,\ldots,N_E+N_I$), the membrane potential $V_i$ lies in a discrete state space $\Gamma$,

$V_i\in\Gamma:=\{-M_r,-M_r+1,\cdots,-1,0,1,\cdots,M\}\cup\{\mathcal{R}\},$

where $-M_r$, $M$, and $\mathcal{R}$ are the inhibitory reversal potential, the spiking threshold, and the refractory state, respectively. Once a neuron is driven to the threshold $M$, its membrane potential $V_i$ enters the refractory state $\mathcal{R}$. After an exponentially distributed waiting time with mean $\tau_{\mathcal{R}}$, $V_i$ is reset to the rest state 0.

Within the state space $\Gamma$, $V_i$ is driven by the external and recurrent inputs to neuron $i$. However, while a neuron is in the refractory state $\mathcal{R}$, it does not respond to any stimuli. The external stimulus (i.e., input from outside the network itself) serves as an analog of feedforward sensory input, e.g., from the thalamus or from other brain regions. In this paper, the external inputs to individual neurons are modeled as series of impulsive kicks whose arrival times are drawn from independent, identical Poisson processes. The rates of the Poisson processes, $\lambda^{E,I}$, are constant across the E and I populations. Each kick received by neuron $i$ increases $V_i$ by 1.

An E/I neuron spikes when its membrane potential $V_i$ reaches the threshold $M$, sending an E/I kick to its postsynaptic neurons (the choice of which will be discussed momentarily). Each recurrent E spike received by neuron $i$ takes effect on $V_i$ after an independent, exponentially distributed time delay $\tau\sim{\rm Exp}(\tau^{E})$. The excitatory spikes received by neuron $i$ that have not yet taken effect form a pending E-spike pool of size $H_i^E$. Therefore, $H_i^E$ increases by 1 when an E kick arrives at neuron $i$, and drops by 1 when a pending spike takes effect. The same applies to I spikes: the size of the pending I-spike pool is $H_i^I$, and the waiting times of the pending I-spikes follow $\tau\sim{\rm Exp}(\tau^{I})$. In summary, the state of neuron $i$ is described by the triplet

$(V_i,H_i^E,H_i^I).$

We note that the pool sizes HiEH_{i}^{E} and HiIH_{i}^{I} may be viewed as E and I synaptic currents of neuron ii in the classical leaky integrate-and-fire neuron model gerstner2014neuronal (38).

Impacts of spikes. The effects of recurrent E and I spikes on membrane potentials are different. When a pending E-spike takes effect on a type-$Q$ neuron ($Q\in\{E,I\}$), $V_i$ is increased by $[S^{Q,E}]+u^E$, where

$u^E\sim{\rm Bernoulli}(p)\quad\textrm{and}\quad p=S^{Q,E}-[S^{Q,E}],$

and $[\cdot]$ denotes the floor function. Likewise, when a pending I-spike takes effect, $V_i$ is decreased by $[S^{Q,I}]+u^I$, where

$u^I\sim{\rm Bernoulli}(q)\quad\textrm{and}\quad q=S^{Q,I}-[S^{Q,I}].$

$V_i$ is strictly bounded within the state space $\Gamma$: should $V_i$ exceed $M$ after an E-spike increment, the neuron spikes immediately and $V_i$ is set to $\mathcal{R}$; should $V_i$ go below $-M_r$ due to an I-spike, it stays at $-M_r$ instead.
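For concreteness, this probabilistic rounding of a non-integer synaptic weight can be sketched in a few lines. The following is a minimal Python sketch; the function names are ours, and using the magnitude of the (negative) I weight for the decrement is our assumption about the sign convention.

```python
import numpy as np

rng = np.random.default_rng(0)

def synaptic_increment(S, rng):
    """Integer increment for a pending spike of (possibly non-integer)
    weight S: floor(|S|) plus a Bernoulli draw on the fractional part."""
    mag = abs(S)
    base = int(np.floor(mag))
    return base + (rng.random() < mag - base)

def apply_pending_spike(V, S, M=100, M_r=66, rng=rng):
    """Apply one pending spike of weight S (S > 0: E-spike, S < 0: I-spike)
    to membrane potential V, respecting the bounds of the state space."""
    if S > 0:
        V = V + synaptic_increment(S, rng)
        return min(V, M)      # reaching M triggers a spike in the full model
    else:
        V = V - synaptic_increment(S, rng)
        return max(V, -M_r)   # V cannot fall below the reversal potential
```

On average the increment equals $|S|$, so the expected synaptic drive matches the non-integer weight while the state space stays discrete.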

A homogeneous network architecture. We close our description of the MIF model by clarifying the choice of network architecture. Instead of using a predetermined architecture with fixed synaptic connections, the postsynaptic neurons of each spike are decided on-the-fly. That is, a new set of postsynaptic neurons is chosen independently for each spike. More specifically, when a type-$Q'$ neuron spikes, each neuron in population $Q$, excluding the spiking neuron itself, is targeted with probability $P^{QQ'}$ ($Q,Q'\in\{E,I\}$). The motivation for this simplification is analytical and computational convenience: it makes neurons interchangeable within each subtype, and it is standard in many previous theoretical studies cai2006kinetic (14, 12, 11, 13, 15, 38).

To summarize, the state space of the network is denoted as $\mathbf{\Omega}$. A network state $\omega\in\mathbf{\Omega}$ consists of $3(N_E+N_I)$ components

$\omega=(V_1,\cdots,V_{N_E},V_{N_E+1},\cdots,V_{N_E+N_I},\ H^E_1,\cdots,H^E_{N_E},H^E_{N_E+1},\cdots,H^E_{N_E+N_I},\ H^I_1,\cdots,H^I_{N_E},H^I_{N_E+1},\cdots,H^I_{N_E+N_I}).$ (1)

2.2 Parameters used for simulations

The choices of parameters are adopted from our previous studies li2019stochastic (16, 15, 39). For all SNN parameters used in the simulations, we list their definitions and values in Table 1. Here, we remark that the projection probabilities between neurons, $P^{QQ'}$ ($Q,Q'\in\{E,I\}$), are chosen to match anatomical data from the macaque visual cortex; see chariker2016orientation (10) for reference. Also, $\tau^E<\tau^I$, since it is known that the Glu-AMPA receptors act faster than the GABA receptors, both on a time scale of milliseconds koch1999biophysics (40).

On the other hand, four parameters concerning the recurrent synaptic coupling strengths will be varied in this study, because the SNN dynamics is sensitive to them and they are hard to measure directly with current experimental methods chariker2016orientation (10, 41). The range of the tested parameter set $\mathbf{\Theta}\subset\mathbb{R}^4$, spanning $(S^{EE},S^{EI},S^{IE},S^{II})$, is also given in Table 1.

Parameter group | Parameter | Meaning | Value
Network architecture | $N_E$ | Number of E cells | 300
Network architecture | $N_I$ | Number of I cells | 100
Network architecture | $P^{EE}$ | E-to-E coupling probability | 0.15
Network architecture | $P^{EI}$ | I-to-E coupling probability | 0.50
Network architecture | $P^{IE}$ | E-to-I coupling probability | 0.50
Network architecture | $P^{II}$ | I-to-I coupling probability | 0.40
Network architecture | $S^{EE}$ | E-to-E synaptic weight | [3.5, 4.5]
Network architecture | $S^{EI}$ | I-to-E synaptic weight | [-2.5, -1.5]
Network architecture | $S^{IE}$ | E-to-I synaptic weight | [2.5, 3.5]
Network architecture | $S^{II}$ | I-to-I synaptic weight | [-2.5, -1.5]
Neuronal physiology | $M$ | Threshold potential | 100
Neuronal physiology | $-M_r$ | Inhibitory reversal potential | -66
Neuronal physiology | $\tau^{\mathcal{R}}$ | Expectation of refractory period | 3 ms
Neuronal physiology | $\tau^{E}$ | Expectation of E-spike pending time | 2 ms
Neuronal physiology | $\tau^{I}$ | Expectation of I-spike pending time | 4 ms
Neuronal physiology | $\lambda^{E}$ | Total external spikes/s to E | 3 kHz
Neuronal physiology | $\lambda^{I}$ | Total external spikes/s to I | 3 kHz
Table 1: Parameters of the network architecture (top) and individual neuronal physiology (bottom). Symbols, meanings, and values of the relevant parameters are listed.
Figure 1: Divergence between the SSA and tau-leaping methods. A. Raster plot from SSA (blue) and the tau-leaping method (orange) using the same seed. B. $H^{EE}$ trajectories from SSA and the tau-leaping method. C-D. Voltage trajectories of an E/I neuron from SSA and the tau-leaping method. Vertical red/black lines indicate the beginnings/endings of MFEs. (Solid: SSA; dashed: tau-leaping.)

3 Multiple-firing events

In many brain regions, electrophysiological recordings of the local field potential reveal temporal oscillations with power peaking in the $\gamma$-frequency band (30-90 Hz) HenrieShapley2005 (25, 26, 27, 28, 29, 30, 31, 32, 33). Because these coherent rhythms are believed to play a crucial role in cognitive computations, there has been much work on understanding their mechanisms in different brain areas, in disparate dynamical regimes, and within various brain states azouz2000dynamic (42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54).

To explain the neural basis of $\gamma$-oscillations, a series of theoretical models have found transient, nearly synchronous collective spiking patterns that emerge from the tight competition between the E and I populations whittington2000inhibition (55, 36, 56, 57, 34). More specifically, rapid and coherent firing of many neurons occurs within a short interval, and such spiking patterns recur with $\gamma$-band frequencies in a stochastic fashion. This phenomenon is termed multiple-firing events (MFEs). MFEs are triggered by a chain reaction initiated by recurrent excitation and terminated by the accumulation of inhibitory synaptic currents. In this scenario, the $\gamma$-oscillations appear in electrophysiological recordings as the local changes of the electric field generated by MFEs. We refer readers to rangan2013emergent (36, 57, 34, 16, 15, 39) for further discussions of MFEs.

The alternation of fast and slow phases is one of the most significant dynamical features of MFEs (Fig. 1). Namely, at the beginning of an MFE, a chain reaction leads to a transient state where many neurons fire collectively in a short time interval (the fast phase). On the other hand, during the interval between two successive MFEs (an inter-MFE interval, or IMI), the neuronal network exhibits low firing rates while the system recovers from the extensive firings via a relatively quiescent period (the slow phase). These two phases can be discriminated by the temporal coherence of spikes in the raster plot where the time and location of spikes are indicated (Fig. 1A, blue dots). Here, MFEs and IMIs are separated by vertical lines - we will discuss the method of capturing MFEs momentarily in Sect. 3.2.

The complexity of MFEs is partially reflected by their sensitivity to spike-timing fluctuations xiao2022multilevel (58). Specifically, during an MFE, missing or misplacing even a small number of spikes can significantly alter the path of the network dynamics. Furthermore, the high dimensionality of the state space $\mathbf{\Omega}$ and the high degree of nonlinearity make MFEs hard to analyze. On the other hand, the network dynamics during IMIs are more robust to noise and exhibit low-dimensional features cai2021model (15, 39).

3.1 Spike-timing sensitivity of transient dynamics

We illustrate this slow-fast dichotomy through a comparison between two different numerical simulations, one using the stochastic simulation algorithm (SSA) and another one using tau-leaping (Fig. 1).

Stochastic simulation algorithm vs. tau-leaping. In SSA, the timings of state transitions in phase space are exact, since each next transition is generated by the minimum of finitely many independent, exponentially distributed random variables (see Appendix). In contrast, algorithms simulating random processes with fixed time steps, such as tau-leaping, introduce errors to event times at every step. In a tau-leaping simulation with a fixed step size $\Delta t$, the effects of spikes (interactions among neurons) are processed only at the end of each time step. Hence, within each time step, all events are uncorrelated, and their consequences are held off until the next update. The precision of SSA is therefore limited only by the floating-point precision of the implementation (C++ in our case), which is much higher than that of tau-leaping.

Comparison. Intuitively, tau-leaping can capture the slow network dynamics during IMIs well, but not the fast chain reactions during MFEs. The detailed comparison is depicted in Fig. 1, where we choose $\Delta t=0.5$ ms for the tau-leaping method. To constrain differences induced by stochastic fluctuations, we couple the two simulations with the same external noise, leaving only the intrinsic noise (the random waiting times of pending spikes and refractory states) generated within the algorithms themselves. The raster plots of spiking events produced by the two methods diverge rapidly during MFEs (Fig. 1A; blue: SSA, orange: tau-leaping), since the crucial coherence between firing events is not accurately captured by tau-leaping. This point is also supported by the comparison of single-neuron voltage traces (Fig. 1C, D): although well aligned at the beginning, the spike timings of the neurons are strongly affected by the transient fluctuations during MFEs.

In addition to the firing events, we also illustrate the comparison between pending spikes. Here we define

$H^{Q'Q}=\sum_{i\in Q}H_i^{Q'},\quad Q,\,Q'\in\{E,I\},$

i.e., the total number of pending $Q'$-spikes for type-$Q$ neurons. Fig. 1B demonstrates the divergence of $H^{EE}$ collected from the two simulations. Because of the identical external noise, the trajectories from the two methods are similar during the first hundred milliseconds. However, the accumulation of errors in the first fast phase quickly causes a large divergence. Other pending-spike statistics show similar disagreement between the two methods (data not shown).

Therefore, in the rest of this paper, we use SSA to simulate the Markov process of the SNN dynamics.

Figure 2: The amount of recurrent excitation serves as the indicator of MFEs. A-B. Number of E spikes caused by recurrent excitation within each 0.25 ms interval. $(S^{EE},S^{IE},S^{EI},S^{II})=(4,3,-2.2,-2)$ for A; $(4.44,2.88,-2.22,-1.70)$ for B. C-D. Raster plots of the two simulations. Vertical red and black lines indicate the beginnings and endings of MFEs, respectively.

3.2 Capturing MFEs from network dynamics

The first step in investigating MFEs is their accurate detection from the temporal dynamics of the SNN. Due to the lack of a rigorous definition in previous studies, we develop an algorithm to capture MFEs based on the amount of recurrent excitation (Algorithm 1), splitting the spiking patterns into consecutive phases of MFEs and IMIs.

The algorithm capturing MFEs. As discussed above, cascades of recurrent excitation are causally related to MFEs. Therefore, given a temporal spiking pattern generated by the SNN, Algorithm 1 detects the initiation of a candidate MFE with the following criteria:

  • A second E spike is triggered under the influence of an E-to-E excitation;

  • The second E spike should occur within a 4 ms interval following the first one (twice the excitatory synaptic timescale $\tau^E=2$ ms).

After the cascade of spiking events, the candidate MFE is deemed terminated if no additional E spike takes place within a 4 ms time window. Furthermore, to exclude isolated firing events clustered by chance, we apply merging and filtering processes to the MFE candidates. More specifically, consecutive MFE candidates are merged into one if they are separated by less than 2 ms, and candidates are eliminated if their duration is less than 5 ms or the number of spikes involved is less than 5. We comment that the filtering threshold is chosen based on the size of the 400-neuron network; a different filtering standard, presented below, is employed for larger networks.

Algorithm 1, based on the timing of recurrent E-spikes, is employed throughout this manuscript to detect MFEs. Two examples, for different choices of parameters, are illustrated in Fig. 2, where the initiations and terminations of MFEs are indicated by red and black vertical lines. In the rest of the paper, we denote the initiation and termination times of the $m$-th MFE as $t_m$ and $t'_m$. Therefore, the $m$-th MFE interval is $[t_m,t'_m]$, whereas the following IMI is $[t'_m,t_{m+1}]$.

Algorithm 1 Capturing MFEs
procedure Recording, Merging and Filtering MFEs
     repeat
         A new event in the simulation happens
         if State == Recording MFE then
             if # of E spikes within the previous 4 ms < 2 then
                 State ← Not in MFE
                 Record the post-MFE network state ω
             end if
         else
             if two E spikes occur within 4 ms then
                 State ← Recording MFE
                 Record the network state ω at the first E spike as the pre-MFE state
             end if
         end if
     until current time > termination time
     Merge any consecutive MFE candidates separated by < 2 ms
     for i = 1 to number of merged candidates do
         if (duration of candidate i) > 5 ms && (number of spikes) > threshold && (peak H^{EE}) > (100 + starting H^{EE}) then
             Register candidate i as an MFE; output all recorded network states
         end if
     end for
end procedure
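For readers who prefer code, the recording/merging/filtering logic of Algorithm 1 can be sketched as follows. This is a minimal Python sketch operating on an array of E-spike times (in ms); the causal E-to-E condition of the full algorithm is approximated here by simple inter-spike-interval thresholds, and the $H^{EE}$-based filtering criterion is omitted.

```python
import numpy as np

def detect_mfes(e_spike_times, start_window=4.0, end_window=4.0,
                merge_gap=2.0, min_len=5.0, min_spikes=5):
    """Return a list of (start, end) MFE intervals, in ms."""
    t = np.sort(np.asarray(e_spike_times, dtype=float))
    candidates, start = [], None
    for i in range(1, len(t)):
        gap = t[i] - t[i - 1]
        if start is None and gap <= start_window:
            start = t[i - 1]                         # candidate MFE initiation
        elif start is not None and gap > end_window:
            candidates.append((start, t[i - 1]))     # candidate terminated
            start = None
    if start is not None:
        candidates.append((start, t[-1]))

    # Merge consecutive candidates separated by less than `merge_gap` ms.
    merged = []
    for s, e in candidates:
        if merged and s - merged[-1][1] < merge_gap:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))

    # Filter by duration and spike count.
    return [(s, e) for s, e in merged
            if e - s >= min_len
            and np.sum((t >= s) & (t <= e)) >= min_spikes]
```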

4 Learning MFEs with artificial neural networks

Viewed within the framework of random dynamical systems, SNN dynamics exhibiting $\gamma$-oscillations can be effectively represented by a Poincaré map wu2022multi (39). Consider a solution map of the SNN dynamics depicted in Sect. 2,

$\omega_t=\Phi^{\theta}_t(\omega_0,\xi),$

where $\omega_0\in\mathbf{\Omega}$ is the initial condition of the SNN, $\theta\in\mathbf{\Theta}$ indicates the SNN parameters, and $\xi$ is the realization of all external/internal noise during the simulation. (The solution map satisfies $\Phi_{t+s}=\Phi_t\circ\Phi_s$.) To study the recurrence of MFEs, we focus on the time sections $\{t_m\}$ at which $\omega_{t_m}$ returns to the initiation of the $m$-th MFE. Therefore, we define a pseudo-Poincaré map:

$\omega_{t_{m+1}}=F^{\theta}(\omega_{t_m})=\Phi^{\theta}_{t_{m+1}-t_m}(\omega_{t_m},\xi).$ (2)

We note that $F^{\theta}$ is not a Poincaré map in the rigorous sense, since the initiation of MFEs depends on the temporal dynamics within a short interval (Algorithm 1).

The central goal of this paper is to investigate $F^{\theta}$. It is generally impractical to study $F^{\theta}$ analytically due to the high dimensionality of $\mathbf{\Omega}$. To overcome this difficulty, we propose a first-principles-based deep learning framework.

4.1 Dissecting $F^{\theta}$ into slow/fast phases

The first step is to rephrase our goal guided by the SNN dynamics. In the regime of $\gamma$-oscillations, the SNN dynamics is composed of the regular alternation of slow and fast phases, though the duration and detailed dynamics of each phase may vary. Therefore, the pseudo-Poincaré map $F^{\theta}$ is equivalent to the composition of two maps:

$F^{\theta}=F^{\theta}_2\circ F^{\theta}_1,\quad F^{\theta}_{1,2}:\mathbf{\Omega}\rightarrow\mathbf{\Omega}.$ (3)

More specifically, $F^{\theta}_1$ maps the network state at the initiation of an MFE ($\omega_{t_m}$) to that at its termination ($\omega_{t'_m}$), while $F^{\theta}_2$ maps the network state from the beginning of an IMI ($\omega_{t'_m}$) to the initiation of the next MFE ($\omega_{t_{m+1}}$). We refer to $F^{\theta}_1$ and $F^{\theta}_2$ as the MFE and IMI mappings, respectively. To summarize, the dynamical flow of $\Phi_t$ is equivalent to:

$\cdots\;\omega_{t_m}\xrightarrow[\text{MFE}]{F^{\theta}_1}\omega_{t'_m}\xrightarrow[\text{IMI}]{F^{\theta}_2}\omega_{t_{m+1}}\xrightarrow[\text{MFE}]{F^{\theta}_1}\omega_{t'_{m+1}}\;\cdots$

Our previous studies demonstrated that the slow and relatively quiescent dynamics during IMIs can be well captured by classical coarse-graining methods, i.e., the IMI mapping $F^{\theta}_2$ may be represented by the evolution of certain Fokker-Planck equations cai2006kinetic (14). However, this is not the case for the MFE mapping $F^{\theta}_1$, due to its highly transient and nonlinear dynamics. Instead, we turn to deep neural networks (DNNs) because of their success in tackling high-dimensional and nonlinear problems. In the rest of this section, our goal is to train a DNN with relevant data to generate surrogates of $F^{\theta}_1$. Our DNN has a feedforward structure with 4 layers, consisting of 512, 512, 512, and 128 neurons, respectively.
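As a concrete illustration, a minimal PyTorch sketch of such a feedforward network is given below. The layer widths follow the text; the input/output dimensions (the 50-dimensional coarse-grained state introduced in Sect. 4.2, plus the two spike counts in the output) and the ReLU activation are our assumptions, not specifications from the paper.

```python
import torch
import torch.nn as nn

class MFESurrogate(nn.Module):
    """Feedforward surrogate of the MFE mapping: 50-d pre-MFE state in,
    50-d post-MFE state plus the E/I spike counts (52 values) out."""
    def __init__(self, in_dim=50, out_dim=52):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)
```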

4.2 First-principle-based reductions of the problem

Instead of requiring the DNN to learn the full map F1θF^{\theta}_{1} and directly link the network state from ωtm\omega_{t_{m}} to ωtm\omega_{t^{\prime}_{m}}, we prepare a training set 𝒯trainθ\mathcal{T}^{\theta}_{\rm{train}} with first-principle-based dimensional reduction, noise elimination, and enlargement for robustness to facilitate the training process.

4.2.1 Coarse-graining SNN states

Approximating the MFE mapping $F^{\theta}_1$ with a DNN (or any statistical machine learning method) immediately faces the curse of dimensionality: $F^{\theta}_1$ maps the $3N$-dimensional space $\mathbf{\Omega}$ to itself, where $N=N_E+N_I$. To cope with the large $N$, we propose a coarse-grained model reduction based on a priori physiological information.

Instead of enumerating the actual state of every neuron, we assume the E and I neuronal populations form two ensembles. That is, the state of a type-$Q$ neuron can be viewed as randomly drawn from a distribution $\bm{\rho}^Q(v,H^E,H^I)$ of the $Q$-ensemble, where $Q\in\{E,I\}$. Furthermore, note that there is no fixed architecture of synaptic connections in the SNN (see Sect. 2): any neuron in the same population has the same probability of being targeted when a spike occurs. Therefore, it is reasonable to decorrelate $v$ and $(H^E,H^I)$, i.e.,

$\bm{\rho}^Q(v,H^E,H^I)\sim\bm{p}^Q(v)\cdot\bm{q}^Q(H^E,H^I),$ (4)

where 𝒑Q\bm{p}^{Q} and 𝒒Q\bm{q}^{Q} are marginal distributions of 𝝆Q\bm{\rho}^{Q}.

More specifically, $\bm{p}^E(v)$ and $\bm{p}^I(v)$ give the distributions of neuronal voltages for the two populations on a partition of the voltage space $\Gamma$:

$\Gamma=\Gamma_1\cup\Gamma_2\cup\cdots\cup\Gamma_{22}\cup\Gamma_{\mathcal{R}}=[-M_r,0)\cup[0,5)\cup\cdots\cup[M-5,M=100)\cup\{\mathcal{R}\}.$ (5)

On the other hand, the distribution of pending spikes $\bm{q}^Q(H^E,H^I)$ is effectively represented by the total numbers of pending spikes summed over each population, $H^{QE}$ and $H^{QI}$. To summarize, we define a coarse-graining mapping $\mathcal{C}:\mathbf{\Omega}\mapsto\tilde{\mathbf{\Omega}}$ by projecting the network state $\omega$ onto a 50-dimensional coarse-grained state $\tilde{\omega}$:

$\mathcal{C}(\omega)=\tilde{\omega}=\left(\bm{p}^E(v),\bm{p}^I(v),H^{EE},H^{EI},H^{IE},H^{II}\right)=\left(n^E_1,n^E_2,\cdots,n^E_{22},n^E_R,n^I_1,n^I_2,\cdots,n^I_{22},n^I_R,H^{EE},H^{EI},H^{IE},H^{II}\right).$ (6)

Here, $n^Q_k$ denotes the number of type-$Q$ neurons whose potentials lie in $\Gamma_k$, and $\tilde{\mathbf{\Omega}}\subset\mathbb{R}^{50}$ represents the coarse-grained state space. The precise definition of $n^Q_k$ is given in the Appendix.
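A minimal Python sketch of the coarse-graining map $\mathcal{C}$ is given below. Refractory neurons are marked by NaN voltages (a convention of this sketch), and the bin edges used here, one bin for $[-M_r,0)$ followed by 5-unit bins on $[0,100)$, are our reading of Eq. 5; the exact partition into 22 voltage bins per population should follow the paper.

```python
import numpy as np

def coarse_grain(V, HE, HI, N_E, M_r=66, M=100):
    """Project full network state (V, HE, HI) onto the coarse-grained state.

    V, HE, HI: length-(N_E + N_I) arrays of membrane potentials and
    pending E/I pool sizes; V = np.nan marks a refractory neuron.
    """
    edges = np.concatenate(([-M_r], np.arange(0, M + 5, 5)))  # voltage bins
    state = []
    for idx in (slice(0, N_E), slice(N_E, None)):   # E population, then I
        v = V[idx]
        refractory = np.isnan(v)
        counts, _ = np.histogram(v[~refractory], bins=edges)
        state.extend(counts.tolist() + [int(refractory.sum())])
    # Population-summed pending spikes: H^{EE}, H^{EI}, H^{IE}, H^{II}.
    state += [HE[:N_E].sum(), HI[:N_E].sum(), HE[N_E:].sum(), HI[N_E:].sum()]
    return np.array(state, dtype=float)
```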

We now train a DNN to learn the coarse-grained mapping $\widetilde{F}_1^{\theta}$ acting on the coarse-grained network states, $\tilde{\omega}_{t'_m}=\widetilde{F}_1^{\theta}(\tilde{\omega}_{t_m})$, where $\tilde{\omega}_t=\mathcal{C}(\omega_t)$. That is, for a fixed set of SNN parameters $\theta$, the DNN forms a mapping

$\widehat{F}^{\theta,\vartheta}_1:\tilde{\mathbf{\Omega}}\mapsto\tilde{\mathbf{\Omega}},$

where $\vartheta$ denotes the trainable parameters of the DNN. From a simulation of the SNN dynamics producing $M$ MFEs, each sample of the training set is composed of

  • Input: $x_m=\tilde{\omega}_{t_m}$, the coarse-grained network state at the initiation of the $m$-th MFE, and

  • Output: $y_m=(\tilde{\omega}_{t'_m},\mathrm{Sp}_E,\mathrm{Sp}_I)$, the coarse-grained state at the $m$-th MFE termination and the numbers of E/I spikes during the MFE ($m\in\{1,2,\ldots,M\}$).

The training process aims for the optimal ϑ\vartheta to minimize the following L2L^{2} loss function

$\mathcal{L}^{\theta}(\vartheta)=\frac{1}{M}\sum_{m=1}^{M}\|y_m-\widehat{F}^{\theta,\vartheta}_1(x_m)\|^2_{L^2},$ (7)

and we hope to obtain the optimal coarse-grained MFE mapping $\widehat{F}^{\theta}_1$ as an effective surrogate of $\widetilde{F}^{\theta}_1$.
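A minimal training loop for this optimization problem might look as follows (PyTorch sketch; the optimizer, learning rate, batch size, and number of epochs are our assumptions, as the paper does not specify them).

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

def train(model, x_train, y_train, epochs=100, lr=1e-3, batch=256):
    """Minimize the mean squared (L2) loss of Eq. (7) over the training set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    loader = DataLoader(TensorDataset(x_train, y_train),
                        batch_size=batch, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)   # L2 loss between prediction and target
            loss.backward()
            opt.step()
    return model
```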

4.2.2 Pre-processing: Eliminating high frequency noises

Using a discrete cosine transform (DCT), we pre-process the training data $(x,y)$ by eliminating the noisy dimensions in the voltage distributions $\bm{p}^Q$, i.e., the high-frequency modes.

The raw distributions of membrane potentials 𝒑Q(v)\bm{p}^{Q}(v) contain significant high-frequency components (Fig. 3A). This is partially due to the stochasticity and the small number of neurons in the MIF model. Unfortunately, the high-frequency components could lead to “conservative” predictions, i.e., the training of DNN converges to the averages of post-MFE voltage distributions to minimize θ(ϑ)\mathcal{L}^{\theta}(\vartheta). Therefore, different inputs would yield similar outputs.

To resolve this issue, we apply a DCT to remove the high-frequency components from the training data $(x,y)$, preserving only the first eight modes. This is sufficient to discriminate a pair of neurons whose membrane potentials satisfy $|v_1-v_2|>5$, where $v_1,v_2\in\Gamma$. For $|v_1-v_2|<5$, the two neurons are placed either in the same bin or in two consecutive bins after coarse-graining; therefore, the numbers of E spikes needed to recruit them into the upcoming MFE differ by at most 1.

More specifically, in the training set, 1.86×1051.86\times 10^{5} pairs of network states (x,y)(x,y) are collected from a 5000-second simulation of the SNN network (see Eq. 6). For the voltage distribution components of each state, 𝒑E(v)\bm{p}^{E}(v) and 𝒑I(v)\bm{p}^{I}(v), we remove the high-frequency components by

$\widehat{\bm{p}^Q}=\mathcal{F}_c^{-1}\circ T\circ\mathcal{F}_c\{\bm{p}^Q\},\quad Q\in\{E,I\}.$ (8)

Here, $\mathcal{F}_c$ denotes the DCT operator, and $T$ is a truncation matrix preserving the first eight components. Finally, in the training data $(x,y)$, the coarse-grained network state $\tilde{\omega}$ is replaced by

$\tilde{\omega}=\left(\widehat{\bm{p}^E}(v),\widehat{\bm{p}^I}(v),H^{EE},H^{EI},H^{IE},H^{II}\right).$

Fig. 3AB depicts an example of pre-processing a voltage distribution. We leave the rest of the details regarding the DCT methods to the Appendix.
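The pre-processing of Eq. 8 can be implemented in a few lines, e.g., with SciPy's DCT routines. The sketch below assumes an orthonormal type-II DCT; the paper does not specify the DCT variant.

```python
import numpy as np
from scipy.fft import dct, idct

def smooth_voltage_distribution(p, n_keep=8):
    """Keep only the first `n_keep` DCT modes of a binned voltage
    distribution p (Eq. 8) and transform back."""
    modes = dct(np.asarray(p, dtype=float), norm='ortho')  # F_c{p}
    modes[n_keep:] = 0.0                                   # truncation T
    return idct(modes, norm='ortho')                       # F_c^{-1}
```

Applying this to each of $\bm{p}^E(v)$ and $\bm{p}^I(v)$ yields the pre-processed coarse-grained state used in the training data.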

4.2.3 Robustness of training

The robustness of DNNs has been addressed in many previous studies goodfellow2014explaining (59, 60). One approach to improving robustness is data augmentation, in which the training set is enlarged by adding modified copies of existing data shorten2019survey (61). Motivated by this idea, we build a training set $\mathcal{T}^{\theta}_{\rm train}$ that accounts for more irregular inputs than the SNN-produced network states, helping the trained DNN generate realistic and robust predictions of MFE dynamics. Based on the pre-processed network states $\tilde{\omega}$, we increase the variability of the first eight frequency modes of $\widehat{\bm{p}^Q}(v)$, since they carry the most salient information revealed by the DCT.

𝒯trainθ\mathcal{T}^{\theta}_{\rm{train}} consists of pairs of pre/post-MFE network states (xm,ym)(x_{m},y_{m}). A candidate initial state close to a pre-MFE state xmx_{m} in 𝒯trainθ\mathcal{T}^{\theta}_{\rm{train}} is generated as follows:

  1. The voltage distributions $\widehat{\bm{p}^Q}(v)$. We first collect the empirical distributions of all frequency modes (i.e., of each entry of $\mathcal{F}_c\{\bm{p}^Q\}$) from the $1.86\times 10^5$ pre-processed pre-MFE network states. The empirical distributions are fitted by combinations of Gaussian and exponential distributions (see, e.g., Fig. 3CD). We then increase the variances of the fitted distributions by a factor of three and sample the 2nd-8th frequency modes from them. The first mode, on the other hand, is sampled from the distribution with the original variance, since it indicates the number of neurons outside the refractory state. Finally, the higher-order DCT modes are treated as noise and truncated.

  2. The pending spikes $H^{Q'Q}$. Likewise, the same “fit-expand-sample” operations used for the 2nd-8th frequency modes of $\widehat{\bm{p}^Q}(v)$ are applied to sample the numbers of pending spikes.

The factor of three is an arbitrary choice intended to cover the variability of the MFE dynamics produced by the SNN, and we leave a more systematic investigation of this choice to future work.
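The “fit-expand-sample” enlargement can be sketched as follows (Python). For brevity, this sketch fits every mode with a Gaussian, whereas the paper also uses exponential fits; the factor-of-three variance expansion is applied to modes 2-8 only.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment_modes(dct_modes, expand=3.0, rng=rng):
    """Sample one augmented candidate from a collection of pre-MFE states.

    dct_modes: array of shape (n_states, 8), the first eight DCT modes of
    the pre-processed voltage distributions.
    """
    mu = dct_modes.mean(axis=0)
    sd = dct_modes.std(axis=0)
    scale = np.ones_like(sd)
    scale[1:] = np.sqrt(expand)        # expand the variance of modes 2-8 only
    return rng.normal(mu, sd * scale)  # one sampled candidate (8 modes)
```

Each candidate generated this way is then run through a short (5 ms) SNN simulation, and only those that actually trigger an MFE are admitted into the training set, as described below.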

It is also important to make sure all training data are authentic abstractions of SNN states in MFE dynamics, i.e., an MFE can be triggered in a short-time simulation (5ms) from a pre-MFE state. Therefore, we perform a simulation that begins with each initial state generated by the enlargement above, and collect the corresponding pre-MFE states (xx) and post-MFE states (yy) if an MFE emerges.

In summary, the enlarged 𝒯trainθ\mathcal{T}^{\theta}_{\rm{train}} consists of 3×1053\times 10^{5} pairs of (xm,ym)(x_{m},y_{m}):

$\mathcal{T}^{\theta}_{\rm train}=\left\{(x_m,y_m):m=1,2,\ldots,3\times 10^5\right\},$

where 50%50\% of the data comes directly from network simulation (after pre-processing of DCT), and the rest comes from the data augmentation process.

Figure 3: Pre-processing and enlarging the training data-set. A. An example of pre-MFE E-neuron voltage distribution 𝒑E(v)\bm{p}^{E}(v) before DCT + iDCT. For clarity, the number of refractory neurons is depicted by the rightmost bin; B. The pre-processed 𝒑E^(v)\widehat{\bm{p}^{E}}(v) after DCT + iDCT; C. The distribution of the 2nd DCT mode of 𝒑E^(v)\widehat{\bm{p}^{E}}(v); D. The distribution of the second DCT mode of 𝒑I^(v)\widehat{\bm{p}^{I}}(v).

4.3 Generalization for different SNN parameters

Varying the synaptic coupling strengths. Trained with data from a dynamical system parameterized by $\theta$, the DNN produces a surrogate mapping $\widehat{F}^{\theta}_1$. Recall that $\theta$ is a parameter point in a 4D cube of recurrent synaptic coupling strengths

$\mathbf{\Theta}=\left\{(S^{EE},S^{IE},S^{EI},S^{II})\in[3.5,4.5]\times[2.5,3.5]\times[-2.5,-1.5]\times[-2.5,-1.5]\right\}.$

We treat $\widehat{F}^{\theta}_1$ as a milestone, and further propose a parameter-generic MFE mapping

$\widehat{F}_1:\tilde{\mathbf{\Omega}}\times\mathbf{\Theta}\mapsto\tilde{\mathbf{\Omega}}.$

That is, given any point $\theta\in\mathbf{\Theta}$ and a pre-MFE state, the DNN predicts the post-MFE state. The optimization problem and loss function are analogous to Eq. 7.

The training set $\mathcal{T}_{\rm train}$ for the parameter-generic MFE mapping consists of SNN states from 20000 different parameter points randomly drawn from $\mathbf{\Theta}$. To ensure reasonable SNN dynamics, each parameter point $\theta$ is screened by the following criteria:

$(f_E,f_I)=L(\theta),\qquad f_E\leq 50\ \text{Hz and}\ f_I\leq 100\ \text{Hz},$

where $L(\theta)$ is a simple linear formula estimating the firing rates of the E/I populations from the recurrent synaptic weights (for details see the Appendix and li2019well (37)). For each accepted parameter point, we perform a 500 ms simulation of the SNN and collect 20 pairs of pre-processed, coarse-grained pre- and post-MFE states (see Sect. 4.2).
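The parameter-sampling loop can be sketched as follows (Python). Here `estimate_rates` is a placeholder for the linear estimate $L(\theta)$, whose actual form is given in the Appendix and li2019well (37); its implementation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_parameters(n_points, estimate_rates, rng=rng):
    """Rejection-sample parameter points from Theta using the screening
    criteria f_E <= 50 Hz and f_I <= 100 Hz."""
    low = np.array([3.5, 2.5, -2.5, -2.5])    # (S^EE, S^IE, S^EI, S^II)
    high = np.array([4.5, 3.5, -1.5, -1.5])
    accepted = []
    while len(accepted) < n_points:
        theta = rng.uniform(low, high)
        fE, fI = estimate_rates(theta)        # stands in for L(theta)
        if fE <= 50.0 and fI <= 100.0:
            accepted.append(theta)
    return np.array(accepted)
```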

Varying the network size. Our method is generic to $\gamma$-oscillations produced by SNNs of different sizes. We demonstrate this on a 4000-neuron SNN (3000 E and 1000 I neurons) sharing all parameters with the previous 400-neuron SNN, except that the synaptic weights ($S^{QQ'}$) are one-tenth of the values listed in Table 1. This rescaling of the synaptic weights controls the total recurrent E/I synaptic drive received by each neuron and allows the different network models to have the same mean-field limit. While deferring the DNN predictions of SNN dynamics to Sect. 5, we here note the minor modifications needed to adapt our methods to the 4000-neuron SNN.

First, the filtering threshold of MFEs is increased from 5 to 50 spikes when collecting MFE-related data (see Sect. 3.2), since the MFEs are larger. Second, because of the central limit theorem, less intrinsic noise is observed in the SNN dynamics. This leads to an immediate side effect: the span of pre/post-MFE network states collected from SNN simulations is relatively narrower within $\mathbf{\Omega}$, i.e., the training data is less “general”. To compensate, we expand the span of $\mathcal{T}^{\theta}_{\rm train}$ more aggressively. That is, during the enlargement step of the training set, the components of pre-MFE states are sampled from distributions with larger variance expansions.

4.4 Training result

The trained DNNs provide faithful surrogate MFE mappings for both F^1θ\widehat{F}^{\theta}_{1} and F^1\widehat{F}_{1}.

Parameter-specific MFE mappings. We first illustrate $\widehat{F}^{\theta}_1$ at a particular parameter point

$\theta=(S^{EE},S^{IE},S^{EI},S^{II})=(4,3,-2.2,-2).$

We test the predictive power of F^1θ\widehat{F}^{\theta}_{1} on a testing set

$\mathcal{T}^{\theta}_{\rm test}=\left\{(x'_m,y'_m):m=1,2,\ldots,M=6\times 10^4\right\},$

where $(x'_m,y'_m)$ are coarse-grained pre/post-MFE states (without pre-processing) collected from SNN simulations. In Fig. 4A, one example of the DNN prediction of the post-MFE $\bm{p}^E(v)$ is compared to the SNN simulation result starting from the same $x'_m$. Also, the comparison between predicted and simulated E/I spike numbers during MFEs is depicted in Fig. 4B. To demonstrate the accuracy of $\widehat{F}^{\theta}_1$, Fig. 4C depicts the $L^2$ losses of different components of the post-MFE states. The $L^2$ loss of the predicted $\bm{p}^Q(v)$ is $\sim$4, while the averaged $L^2$ difference between the post-MFE voltage distributions in the testing set is

$\underset{1\leq m,\ell\leq M}{\mathrm{mean}}\ \|\bm{p}^E_m(v)-\bm{p}^E_{\ell}(v)\|^2+\|\bm{p}^I_m(v)-\bm{p}^I_{\ell}(v)\|^2\approx 20.$

Notably, DCT pre-processing effectively improves the DNN predictions of voltage distributions by reducing the $L^2$ loss from $\sim$10 to $\sim$4. A similar comparison holds for the prediction of the pending-spike numbers in the post-MFE states (Fig. 4C inset).

Parameter-generic MFE mappings. Likewise, by labeling the training data with the synaptic coupling parameters, $\widehat{F}_1$ also provides faithful predictions of the SNN dynamics in different parameter regimes. Fig. 4D shows a similar comparison of the $L^2$ losses of different components of the post-MFE states.

Figure 4: DNN predictions of post-MFE states. A-C: Mapping $\widehat{F}^{\theta}_1$ for $\theta=(S^{EE},S^{IE},S^{EI},S^{II})=(4,3,-2.2,-2)$. A. Left: a pre-MFE $\bm{p}^E(v)$; right: post-MFE $\bm{p}^E(v)$ produced by the ANN (blue) vs. spiking network simulations (orange). B. Comparison of E and I spike numbers during MFEs, ANN predictions vs. SSA simulations. The distributions are depicted by every 10th contour of the maximum in a ks-density estimate. C. $L^2$ losses of the predicted post-MFE $\bm{p}^Q(v)$ and pending-spike numbers $H$. Blue: auto-$L^2$ differences between training data; orange: averaged $L^2$ loss without pre-processing; yellow: averaged $L^2$ loss with pre-processing. D. $L^2$ losses for the parameter-generic mapping $\widehat{F}_1$; the meaning of the bars is the same as in C.

5 Producing a surrogate for SNN

Here, we depict how our ANN predictions provide a surrogate of the spiking network dynamics. We focus on the algorithm to replace FθF^{\theta} for a fixed θ\theta. The algorithm for parameter-generic FF is analogous.

Recall that the SNN dynamics is divided into a fast phase (MFEs) and a slow phase (IMIs). Our first-principles-based DNN framework produces a surrogate mapping $\widehat{F}^{\theta}_1$, which can replace the pseudo-Poincaré map $F^{\theta}$ once complemented by $F^{\theta}_2$. We approximate the IMI mapping $F^{\theta}_2$ by evolving the SNN with a tau-leaping method, thereby producing a surrogate of the full SNN dynamics that alternates between the two phases.

To reproduce the IMI dynamics, we first initialize the network state $\omega$ from the coarse-grained post-MFE state $\tilde{\omega}$ predicted by $\widehat{F}^{\theta}_1$. Since neurons in the MIF model are exchangeable, $\omega$ can be randomly sampled from $\mathcal{C}^{-1}(\tilde{\omega})$: we evenly assign voltages to the type-$Q$ neurons based on $\widehat{\bm{p}^Q}$, while same-category pending spikes stay “pooled” and are assigned to neurons interchangeably. After that, the network state $\omega$ is evolved by a tau-leaping method with 1-ms timesteps. This process is terminated if more than three E-to-E spikes or six E spikes occur within 1 ms, after which the next MFE is deemed to start and the network state $\omega$ is fed to $\widehat{F}^{\theta}_1$ for another round of prediction. (This criterion for MFE initiation differs from Algorithm 1; it is chosen to ensure robust MFE detection despite the lack of network-state information within each timestep of the tau-leaping simulation.) The loop $F^{\theta}$ alternating between $F^{\theta}_1$ and $F^{\theta}_2$ is thus closed. The mathematical description of the SNN surrogate algorithm is summarized as follows (a minimal code sketch follows the list):

  • (i) $(\tilde{\omega}_{t'_m},\mathrm{Sp}_E,\mathrm{Sp}_I)=\widehat{F}^{\theta}_1(x_m)$, where $\tilde{\omega}_{t'_m}$ is the coarse-grained post-MFE state.

  • (ii) Sample $\omega_{t'_m}$ from $\mathcal{C}^{-1}(\tilde{\omega}_{t'_m})$.

  • (iii) Evolve the network dynamics with the tau-leaping method and initial condition $\omega_{t'_m}$. Stop the simulation at the initiation of the next MFE, with terminal value $\omega_{t_{m+1}}$.

  • (iv) $x_{m+1}=\mathcal{C}(\omega_{t_{m+1}})$.

  • (v) Repeat (i)-(iv) with $m=m+1$.
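The following is a minimal Python sketch of this loop. `dnn_predict`, `sample_full_state`, `tau_leaping_until_mfe`, and `coarse_grain` are placeholders for $\widehat{F}^{\theta}_1$, sampling from $\mathcal{C}^{-1}$, the 1-ms tau-leaping evolution with the MFE-initiation criterion above, and the map $\mathcal{C}$ of Eq. 6, respectively.

```python
def surrogate_dynamics(x0, n_mfes, dnn_predict, sample_full_state,
                       tau_leaping_until_mfe, coarse_grain):
    """Alternate DNN-predicted MFEs with tau-leaping IMIs for n_mfes cycles."""
    x = x0
    spike_counts = []
    for _ in range(n_mfes):
        post_state, sp_E, sp_I = dnn_predict(x)      # (i) DNN: post-MFE state
        spike_counts.append((sp_E, sp_I))
        omega = sample_full_state(post_state)        # (ii) sample from C^{-1}
        omega_next = tau_leaping_until_mfe(omega)    # (iii) IMI, 1-ms steps
        x = coarse_grain(omega_next)                 # (iv) next pre-MFE state
    return spike_counts                              # (v) loop closes
```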

We demonstrate the fidelity of this surrogate by producing raster plots of the SNN dynamics, labeling all spiking events by neuron index vs. time, in Figs. 5 and 6. While the spiking events during IMIs are given by the tau-leaping simulation, the spiking patterns during MFEs consist of events uniformly randomly assigned to neurons and to times within the MFE interval (with the total numbers of spikes $(\mathrm{Sp}_E,\mathrm{Sp}_I)$ predicted by our DNN). The durations of the MFEs are randomly sampled from the empirical distribution collected from the SNN simulations. In the rest of the section, we give the details of our surrogate SNN dynamics constructed upon $\widehat{F}^{\theta}_1$ and $\widehat{F}_1$.

Figure 5: Surrogates of spiking network dynamics produced by the parameter-specific MFE mapping $\widehat{F}^{\theta}_1$. A, C, E: resembling a 400-neuron spiking network; B, D, F: resembling a 4000-neuron network. A, B: Examples of pre- and post-MFE voltage distributions $\bm{p}^E$ and $\bm{p}^I$ in the surrogate dynamics. C, D: 10th level curves of the first two principal components of $\bm{p}^E$ and $\bm{p}^I$. (Blue: 20k examples from the enlarged training set; red: 2k examples from the surrogate dynamics.) E, F: Raster plots of the simulated surrogate dynamics and the real dynamics starting from the same initial profiles.
Figure 6: Surrogates of spiking network dynamics produced by the parameter-generic MFE mapping F^1\widehat{F}_{1}. A-F are in parallel to Fig. 5.

Parameter-specific predictions. We use the enlarged training set described in Section 4 to generate $\widehat{F}^{\theta}_1(x_m)$. Here, $\theta=(S^{EE},S^{IE},S^{EI},S^{II})=(4,3,-2.2,-2)$ for the 400-neuron SNN, and the synaptic weights are scaled down by a factor of 10 for the 4000-neuron SNN.

We first focus on the surrogate dynamics of the 400-neuron SNN. Fig. 5A gives examples of pre-MFE voltage distributions $\bm{p}^Q$ generated by the tau-leaping simulation and post-MFE $\bm{p}^Q$ generated by the DNN predictions. We further compare the distributions of the first two principal components of $\bm{p}^Q$ occurring in the surrogate dynamics and in the training set (Fig. 5C; red: surrogate dynamics, blue: training set). On the plane of the first two principal components, the voltage distributions produced by the surrogate dynamics are distributed consistently with the training set, suggesting that the surrogate dynamics align very well with the ground-truth network dynamics. The red/blue contours indicate every tenth of the level curves of the ks-density of the distributions. We also compare the raster plots of the two dynamics, where the initial SNN conditions are the same (Fig. 5E).

The right half of Fig. 5 depicts the surrogate dynamics of the 4000-neuron SNN model. The biased distribution of the principal components of $\bm{p}^Q$ is probably due to (A) the relatively narrower training set (see the discussion in Sect. 4.3), and (B) the large 1 ms timestep in tau-leaping.

Parameter-generic predictions. The surrogate dynamics generated by the parameter-generic surrogate MFE mapping $\widehat{F}_1$ and tau-leaping are depicted in Fig. 6, whose panels are analogous to those of Fig. 5. (The “real” raster plot of SNN dynamics in Fig. 6E is the same as in Fig. 5E.) The comparable results demonstrate that the DNNs produce faithful surrogates of the SNN dynamics. Interestingly, by comparing panels C and D of Figs. 5 and 6, we find that the principal components of $\bm{p}^Q$ in the surrogate dynamics generated by $\widehat{F}_1$ are much less biased than those generated by $\widehat{F}^{\theta}_1$. While a detailed explanation is beyond the scope of this study, our conjecture is that the more general training set of $\widehat{F}_1$ helps it deal with the less regular pre-MFE network states occurring in the surrogate dynamics.

6 Discussions and conclusions

In this paper, we build an artificial neural network (ANN) surrogate of γ\gamma-dynamics arising from a biological neuronal circuit. The neuronal circuit model is a stochastic integrate-and-fire network that has been well-studied. Similar to many other models li2019well (37, 35, 36), it can exhibit semi-synchronous spiking activities called the multiple-firing events (MFEs), which are transient and highly nonlinear emergent phenomena of spiking neuronal networks.

In our study, the sensitive and transient MFE dynamics are represented by the MFE mappings that project pre-MFE network states to post-MFE network states. The MFE mappings are faithfully approximated by ANNs, despite the significant intrinsic noise in the model. On the other hand, the slower and quieter dynamics between consecutive MFEs are evolved by standard tau-leaping simulations. Remarkably, a surrogate of the spiking network dynamics is produced by combining ANN approximations and tau-leaping simulations, generating firing patterns consistent with the spiking network dynamics. Furthermore, the ANN surrogate can be generalized to a wide range of synaptic coupling strengths.

This paper explores the methodology of learning biological neural circuits with ANNs. In this study, the biggest challenges of developing a successful ANN surrogate are A. processing the high-dimensional, noisy data and B. building a representative training set. Both challenges are addressed by the first-principle-based model reduction techniques, i.e., coarse-graining and discrete cosine transform. The model reductions remove the excessive intrinsic noise. The training set collects network states from simulations of SNNs and is enlarged to represent a broader class of voltage distributions. Therefore, the training set covers the “rare” voltage distributions occurring in spiking network dynamics with low probabilities.

Future work. The idea of ANN surrogates elaborated in this paper can be extended and applied to other network models. First, many models of brain dynamics share the difficulties of dimensionality, robustness, and generalizability. Therefore, we propose to extend our ideas to model more sophisticated dynamical phenomena of the brain, such as other types of neural oscillations, and to neural circuits with more complicated network architectures. Furthermore, the power of ANNs to handle large data sets may allow us to extend our framework to deal with experimental data directly. In general, we are motivated by the recent success in demonstrating the capability of deep neural networks to represent infinite-dimensional maps li2020fourier (62, 63, 64). Therefore, in future work, we suggest exploring more complex network structures (e.g., the DeepONet lu2021learning (63)) to build mappings between the states of neural circuits.

Another interesting but challenging issue is the interpretability of ANN surrogates, e.g., relating statistics, dynamical features, and architectures of the spiking network models to the ANNs. A potentially viable approach is to map neurons in the spiking networks to artificial neurons and then examine the connection weights after training. However, this idea will likely require ANNs more complicated than the simple feedforward DNN considered here. To achieve this goal, one may consider ANNs with different architectures, such as ResNet or LSTM he2016deep (65, 66). These studies may shed some light on how the dynamics and information flow in neural systems are represented in ANNs.

Acknowledgements

This work was partially supported by the National Science and Technology Innovation 2030 Major Program through grant 2022ZD0204600 (R.Z., Z.W., T.W., L.T.) and the Natural Science Foundation of China through grants 31771147 (R.Z., Z.W., T.W., L.T.) and 91232715 (L.T.). Z.-C.X. is supported by the Courant Institute of Mathematical Sciences through a Courant Instructorship. Y.L. is supported by NSF DMS-1813246 and NSF DMS-2108628.

7 Appendix

7.1 Tau-leaping and SSA algorithms

In this manuscript, the SNN dynamics are designed to be a Markovian process in the phase space $\mathbf{\Omega}$. The simulations are carried out by two algorithms: tau-leaping and the stochastic simulation algorithm (SSA). The key difference is that the tau-leaping method processes the events that happen during a time step $\tau$ in bulk, while SSA simulates the evolution event by event. Of the two, tau-leaping can be faster (with a properly chosen $\tau$), while SSA is more precise, its precision limited only by the floating-point arithmetic of the implementation. Below we illustrate both with a generic Markov jump process.

Algorithms. Consider $X(t)=\{x_1(t),x_2(t),\ldots,x_N(t)\}$, where $X(t)$ takes values in a discrete state space

$S=\{s_1,s_2,\cdots,s_M\}\subset\mathbb{R}^N.$

The transition from state $X$ to state $s_i$ at time $t$ is denoted $T_{s_i}^t(X)$ and takes an exponentially distributed waiting time with rate $\lambda_{s_i\leftarrow X}$. Here, $s_i\in S(X)$, the set of states adjacent to $X$ with non-zero transition rate. For simplicity, we assume $\lambda_{s_i\leftarrow X}$ does not explicitly depend on $t$ except through $X(t)$.

Tau-leaping only considers X(t) on the time grid t=jh, for j=0,1,\ldots,T/h, assuming that at most one state transition occurs within each step:

P(X^{(j+1)h}=s_{i}\mid X^{jh})=\begin{cases}h\,\lambda_{s_{i}\leftarrow X^{jh}}&s_{i}\in S(X^{jh}),\\ 1-h\sum_{s_{k}\in S(X^{jh})}\lambda_{s_{k}\leftarrow X^{jh}}&s_{i}=X^{jh},\\ 0&\text{otherwise}.\end{cases}

On the other hand, SSA simulates the trajectory as a composition of individual transitions:

X(T)=T_{X_{k}}^{t_{k}}\circ T_{X_{k-1}}^{t_{k-1}}\circ\cdots\circ T_{X_{1}}^{t_{1}}(X(0)),

i.e., starting from X(0), X transitions to X_{1},X_{2},\ldots,X_{k}=X(T) at times 0<t_{1}<t_{2}<\cdots<t_{k}<T.

For t_{\ell}<t<t_{\ell+1}, the waiting time until the next transition is sampled from \mathrm{Exp}\big(\sum_{s_{i}\in S(X(t))}\lambda_{s_{i}\leftarrow X(t)}\big). That is, for independent, exponentially distributed random variables

\tau_{i}\sim\mathrm{Exp}(\lambda_{s_{i}\leftarrow X(t)}),

we have

t_{\ell+1}-t_{\ell}=\min_{s_{i}\in S(X(t))}\tau_{i}\sim\mathrm{Exp}\Big(\sum_{s_{i}\in S(X(t))}\lambda_{s_{i}\leftarrow X(t)}\Big).

Therefore, in each step of an SSA simulation, the system state evolves forward by an exponentially distributed random time whose rate is the sum of the rates of all exponential “clocks”. The next state s_{i} is then chosen at random with probability proportional to its transition rate \lambda_{s_{i}\leftarrow X(t)}.
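To make the two schemes concrete, the following Python sketch simulates a generic Markov jump process with both algorithms. It is an illustration only: the rate function rates(state), which returns a dictionary {next state: rate}, is a hypothetical stand-in for the SNN-specific transition rates, and the birth-death example at the end is not the spiking network model.

import numpy as np

rng = np.random.default_rng(0)

def ssa(x0, rates, T):
    """SSA: simulate the jump process event by event up to time T."""
    t, x = 0.0, x0
    while True:
        r = rates(x)                              # {candidate next state: rate}
        states, lam = list(r), np.array(list(r.values()), dtype=float)
        total = lam.sum()
        if total == 0.0:
            return x
        t += rng.exponential(1.0 / total)         # waiting time ~ Exp(sum of rates)
        if t > T:
            return x
        x = states[rng.choice(len(states), p=lam / total)]  # next state chosen with prob. ~ rate

def tau_leaping(x0, rates, T, h):
    """Tau-leaping: at most one transition per step of length h."""
    x = x0
    for _ in range(int(T / h)):
        r = rates(x)
        states, lam = list(r), np.array(list(r.values()), dtype=float)
        p_move = h * lam                          # per-step transition probabilities
        p_stay = 1.0 - p_move.sum()               # h must be small enough that p_stay >= 0
        idx = rng.choice(len(states) + 1, p=np.append(p_move, p_stay))
        if idx < len(states):
            x = states[idx]
    return x

# Toy birth-death chain with hypothetical rates (not the SNN model):
birth, death = 2.0, 0.1
toy_rates = lambda n: {n + 1: birth, n - 1: death * n} if n > 0 else {n + 1: birth}
print(ssa(5, toy_rates, T=10.0), tau_leaping(5, toy_rates, T=10.0, h=0.01))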

Implementation on spiking networks. We note that X(t) changes when

  1. neuron i receives an external input (v_{i} increases by 1, possibly entering \mathcal{R});

  2. neuron i receives a spike (H^{E}_{i} or H^{I}_{i} increases by 1);

  3. a pending spike takes effect on neuron i (v_{i} goes up or down according to the synaptic strengths);

  4. neuron i exits the refractory state (v_{i} goes from \mathcal{R} to 0).

The corresponding transition rates are either given directly (\lambda^{E} and \lambda^{I}) or are the inverses of the physiological time scales (\tau^{E}, \tau^{I}, and \tau^{\mathcal{R}}). In an SSA simulation, when a state transition elicits a spike in a neuron, the synaptic outputs generated by this spike are immediately added to the pool of pending effects of the corresponding type, and the neuron enters the refractory state. In a tau-leaping simulation, by contrast, the spikes are recorded but their synaptic outputs are processed in bulk at the end of each time step; therefore, all events within the same time step are treated as uncorrelated.
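For illustration, the fragment below sketches how one tau-leaping step could defer the synaptic outputs of spikes to the end of the step, in contrast to SSA, where each spike is resolved immediately. All names, effect sizes, the all-to-all coupling, and the use of np.nan to encode the refractory state \mathcal{R} are simplifying assumptions for a single homogeneous population, not the exact implementation used in this paper.

import numpy as np

rng = np.random.default_rng(1)

def tau_leaping_step(v, H, h, lam_ext, tau_E, S, M=100):
    """One tau-leaping step (loose sketch): external inputs and pending effects
    are processed in bulk, and new spikes only feed the pending pool at the end.

    v : membrane potentials (floats; np.nan marks the refractory state R)
    H : per-neuron counts of pending excitatory effects
    """
    active = ~np.isnan(v)
    # external inputs arriving during the step (each raises v by 1)
    v[active] += rng.poisson(lam_ext * h, size=active.sum())
    # pending effects delivered during the step (each effect has rate 1/tau_E)
    delivered = rng.binomial(H, min(h / tau_E, 1.0))
    v[active] += S * delivered[active]
    H = H - delivered
    # spikes are recorded now, but their outputs join the pending pool only
    # at the END of the step, so events within the step remain uncorrelated
    spiked = np.zeros_like(active)
    spiked[active] = v[active] >= M
    H = H + spiked.sum()                  # assuming all-to-all coupling for simplicity
    v[spiked] = np.nan                    # spiking neurons enter R
    return v, H, spiked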

7.2 The coarse-graining mapping

Here we give the definition of the coarse-graining mapping \mathcal{C} in Eq. 6. For

\omega=(V_{1},\cdots,V_{N_{E}},V_{N_{E}+1},\cdots,V_{N_{E}+N_{I}},\ H^{E}_{1},\cdots,H^{E}_{N_{E}},H^{E}_{N_{E}+1},\cdots,H^{E}_{N_{E}+N_{I}},\ H^{I}_{1},\cdots,H^{I}_{N_{E}},H^{I}_{N_{E}+1},\cdots,H^{I}_{N_{E}+N_{I}}),

we define

\mathcal{C}(\omega)=\tilde{\omega}=(n^{E}_{1},n^{E}_{2},\cdots,n^{E}_{22},n^{E}_{R},\ n^{I}_{1},n^{I}_{2},\cdots,n^{I}_{22},n^{I}_{R},\ H^{EE},H^{EI},H^{IE},H^{II}),

where,

n^{E}_{i}=\sum_{j=1}^{N_{E}}\mathbf{1}_{\Gamma_{i}}(V_{j}),\quad\text{for }i=1,\cdots,22;
n^{E}_{R}=\sum_{j=1}^{N_{E}}\mathbf{1}_{\{\mathcal{R}\}}(V_{j});
n^{I}_{i}=\sum_{j=N_{E}+1}^{N_{E}+N_{I}}\mathbf{1}_{\Gamma_{i}}(V_{j}),\quad\text{for }i=1,\cdots,22;
n^{I}_{R}=\sum_{j=N_{E}+1}^{N_{E}+N_{I}}\mathbf{1}_{\{\mathcal{R}\}}(V_{j});

and

H^{EE}=\sum_{j=1}^{N_{E}}H_{j}^{E};\qquad H^{IE}=\sum_{j=N_{E}+1}^{N_{E}+N_{I}}H_{j}^{E};
H^{EI}=\sum_{j=1}^{N_{E}}H_{j}^{I};\qquad H^{II}=\sum_{j=N_{E}+1}^{N_{E}+N_{I}}H_{j}^{I}.

Here, \mathbf{1}_{A}(a) is the indicator function of the set A, i.e., \mathbf{1}_{A}(a)=1 for all a\in A and \mathbf{1}_{A}(a)=0 otherwise. \Gamma_{i} is a subset of the state space of the membrane potential,

\Gamma_{i}=[-15+5i,-10+5i)\cap\Gamma.
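A direct Python transcription of the mapping \mathcal{C} might look as follows; the bin edges (22 bins of width 5 from -10 to 100) follow the definition of \Gamma_{i} above, while encoding \mathcal{R} as np.nan is an assumption made for illustration.

import numpy as np

def coarse_grain(V, H_E, H_I, N_E, N_I):
    """Map the full state omega = (V, H^E, H^I) to the coarse-grained state.

    V, H_E, H_I : arrays of length N_E + N_I (E neurons first, then I neurons);
                  the refractory state R is encoded as np.nan in V.
    """
    edges = np.arange(-10, 105, 5)                     # Gamma_i = [-15+5i, -10+5i), i = 1..22

    def bin_counts(v):
        counts, _ = np.histogram(v[~np.isnan(v)], bins=edges)
        return np.append(counts, np.isnan(v).sum())    # last entry is n_R

    n_E = bin_counts(V[:N_E])
    n_I = bin_counts(V[N_E:N_E + N_I])
    H_EE, H_IE = H_E[:N_E].sum(), H_E[N_E:].sum()      # pending E effects on E / I neurons
    H_EI, H_II = H_I[:N_E].sum(), H_I[N_E:].sum()      # pending I effects on E / I neurons
    return np.concatenate([n_E, n_I, [H_EE, H_EI, H_IE, H_II]])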

7.3 Pre-processing surrogate data: discrete cosine transform

Here we explain how the discrete cosine transform (DCT) is used in the pre-processing step. For an input probability mass vector

\bm{p}=(n_{1},n_{2},\cdots,n_{22}),

its DCT output \mathcal{F}_{c}(\bm{p})=(c_{1},c_{2},\cdots,c_{22}) is given by

c_{k}=\sqrt{\frac{2}{N}}\sum_{l=1}^{N}\frac{n_{l}}{\sqrt{1+\delta_{k1}}}\cos\left(\frac{\pi}{2N}(2l-1)(k-1)\right),\qquad N=22, (9)

where \delta_{k1} is the Kronecker delta. The iDCT mapping \mathcal{F}^{-1}_{c} is defined as the inverse function of \mathcal{F}_{c}.
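Assuming the normalization above is the standard orthonormal DCT-II (which the prefactor \sqrt{2/N} suggests), the transform and its inverse can be computed directly with scipy; this is a sketch rather than the authors' implementation.

import numpy as np
from scipy.fft import dct, idct

def dct_preprocess(p):
    """Orthonormal DCT-II of a 22-bin probability mass vector (cf. Eq. 9)."""
    return dct(np.asarray(p, dtype=float), type=2, norm='ortho')

def idct_postprocess(c):
    """The inverse mapping F_c^{-1}: recover the mass vector from its coefficients."""
    return idct(np.asarray(c, dtype=float), type=2, norm='ortho')

p = np.random.default_rng(2).random(22)
p /= p.sum()
assert np.allclose(idct_postprocess(dct_preprocess(p)), p)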

7.4 The linear formula for firing rates

When preparing the parameter-generic training set, we use simple linear formulas to estimate the firing rates of E and I neurons (f_{E} and f_{I}; see li2019stochastic (16, 37)). We take \theta\in\Theta for the synaptic coupling strengths, while the other constants are the same as in Table 1.

f_{E}=\frac{\lambda^{E}(M+C^{II})-\lambda^{I}C^{EI}}{(M-C^{EE})(M+C^{II})+C^{EI}C^{IE}},
f_{I}=\frac{\lambda^{I}(M-C^{EE})+\lambda^{E}C^{IE}}{(M-C^{EE})(M+C^{II})+C^{EI}C^{IE}},

where

C^{EE}=N^{E}P^{EE}S^{EE},\quad C^{IE}=N^{E}P^{IE}S^{IE},
C^{EI}=N^{I}P^{EI}S^{EI},\quad C^{II}=N^{I}P^{II}S^{II}.
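As a sketch, the linear formulas can be evaluated directly as below; the numerical values of M, \lambda, N, P, and S are hypothetical placeholders, not the entries of Table 1, and the synaptic strengths are entered as magnitudes.

def linear_firing_rates(M, lam_E, lam_I, N_E, N_I, P, S):
    """Linear mean-field estimates of (f_E, f_I).

    P, S : dicts keyed by 'EE', 'IE', 'EI', 'II' (projection probabilities and
           synaptic strengths, the latter given as magnitudes)."""
    # C^{XY} = N^{Y} P^{XY} S^{XY}: the second letter is the presynaptic population
    C = {k: (N_E if k[1] == 'E' else N_I) * P[k] * S[k] for k in ('EE', 'IE', 'EI', 'II')}
    D = (M - C['EE']) * (M + C['II']) + C['EI'] * C['IE']
    f_E = (lam_E * (M + C['II']) - lam_I * C['EI']) / D
    f_I = (lam_I * (M - C['EE']) + lam_E * C['IE']) / D
    return f_E, f_I

# Hypothetical example values (not from Table 1):
P = {'EE': 0.15, 'IE': 0.5, 'EI': 0.5, 'II': 0.4}
S = {'EE': 4.0, 'IE': 3.0, 'EI': 2.2, 'II': 2.0}
print(linear_firing_rates(M=100, lam_E=3.0, lam_I=3.0, N_E=300, N_I=100, P=P, S=S))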

7.5 Deep network architecture in this paper

Artificial neural networks (ANNs) are models resembling biological neuronal networks. In general, they consist of interconnected computational units that send their outputs to target units. Many different architectures are possible; in this paper, we adopt the feedforward architecture, one of the simplest (Fig. 7).

Figure 7: Diagram of a feedforward ANN.

A feedforward ANN has a layered structure, where units in the i-th layer drive the (i+1)-th layer through a weight matrix \bm{W}_{i} and a bias vector \vec{b}_{i}. Computation proceeds from one layer to the next: the “input layer” takes the input \vec{x} and sends \bm{W}_{1}\vec{x}+\vec{b}_{1} to the first “hidden layer”; the first hidden layer then sends \bm{W}_{2}f_{1}(\bm{W}_{1}\vec{x}+\vec{b}_{1})+\vec{b}_{2} to the next layer, and so on, until the “output layer” produces the output vector \vec{y}. In this paper, we implemented a feedforward ANN with 4 layers of 512, 512, 512, and 128 neurons, respectively. We chose the Leaky ReLU function with the default negative slope of 0.01 as the activation function f_{1}(\cdot).

Feedforward ANNs are trained with the back-propagation (BP) algorithm. Let \mathcal{NN}(x) denote the prediction of the ANN for input x, and L(\cdot) the loss function. For each entry (x,y) in the training data, we minimize the loss l=L(y-\mathcal{NN}(x)) by following its gradients with respect to the entries of W_{i} and b_{i}. The gradients are computed first for the last layer’s W_{n} and b_{n}, then “propagated back” to adjust the W_{i} and b_{i} of the earlier layers. We chose the mean-square error as our loss function, i.e., L(\cdot)=\|\cdot\|_{L^{2}}^{2}.
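A minimal PyTorch sketch of this architecture and one training step is given below; the input and output dimensions, the optimizer, and the learning rate are assumptions made for illustration, as they are not specified in this section.

import torch
import torch.nn as nn

# Hypothetical input/output sizes (e.g., a pre-MFE coarse-grained state in,
# post-MFE voltage distributions out); the true sizes depend on the mapping learned.
model = nn.Sequential(
    nn.Linear(50, 512), nn.LeakyReLU(0.01),
    nn.Linear(512, 512), nn.LeakyReLU(0.01),
    nn.Linear(512, 512), nn.LeakyReLU(0.01),
    nn.Linear(512, 128), nn.LeakyReLU(0.01),
    nn.Linear(128, 46),
)
loss_fn = nn.MSELoss()                                     # L(.) = ||.||_{L^2}^2
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer choice is an assumption

def train_step(x, y):
    """One back-propagation step on a batch (x, y)."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()      # gradients flow from the output layer back to earlier layers
    optimizer.step()
    return loss.item()

x, y = torch.randn(64, 50), torch.randn(64, 46)            # random stand-in data
print(train_step(x, y))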

Figure 8: Left: pre-, post-, and predicted MFE profiles without DCT + iDCT; Middle: pre-, post-, and predicted MFE profiles with DCT + iDCT; Right: pre-, post-, and predicted MFE profiles with network parameters as additional inputs to the ANN. (Left and Middle: S^{EE}, S^{IE}, S^{EI}, S^{II} = 4, 3, -2.2, -2; Right: 3.82, 3.24, -2.05, -1.87.)

7.6 Pre-processing in ANN predictions

As supplementary information for Fig. 4, Fig. 8 compares how ANNs predict post-MFE voltage distributions \bm{p}^{E} and \bm{p}^{I} in three different settings. In each panel divided by red lines, the left column gives an example of pre-MFE voltage distributions, while the right column compares the corresponding post-MFE voltage distributions collected from ANN predictions (red) vs. SSA simulations. Results from ANNs without pre-processing, with pre-processing, and from the parameter-generic ANN are depicted in the left, middle, and right panels, respectively.

7.7 Principal components of voltage distributions

The voltage distribution vectors of the form below are used to plot the distributions in the phase space shown in the middle panels of Fig. 5 and Fig. 6.

(n^{E}_{1},n^{E}_{2},\cdots,n^{E}_{22},n^{E}_{R},\ n^{I}_{1},n^{I}_{2},\cdots,n^{I}_{22},n^{I}_{R})

The vectors from the training set (colored blue in the figures) are used to generate a basis of the phase space via the svd function in numpy.linalg in Python. The first two rows of V^{\top} are the first two principal components (PCs) of the space. The scores of the training-set vectors and of the approximated results are the dot products of these vectors with the normalized PCs.
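A minimal numpy sketch of this projection is given below; training_vectors is a hypothetical array holding one 46-dimensional distribution vector per row, and no mean-centering is applied, following the description above.

import numpy as np

# Hypothetical stack of voltage-distribution vectors from the training set
training_vectors = np.random.default_rng(3).random((500, 46))

# SVD of the training vectors; the rows of Vt are the right singular vectors
_, _, Vt = np.linalg.svd(training_vectors, full_matrices=False)
pc1, pc2 = Vt[0], Vt[1]        # first two principal components (already unit norm)

def pc_scores(vectors):
    """Scores = dot products of the vectors with the first two normalized PCs."""
    return np.stack([vectors @ pc1, vectors @ pc2], axis=1)

scores_train = pc_scores(training_vectors)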

The ksdensity function in Matlab (with default settings) is used to estimate the kernel-smoothed density of the profile distribution from the data points generated above. The contours show levels at each tenth of the maximal height (with a 0.1% offset so that the peak itself is visible).
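The same density estimate can be reproduced in Python with scipy's gaussian_kde as a stand-in for Matlab's ksdensity; the sketch below reuses scores_train from the previous snippet and is only meant to illustrate the contour levels.

import numpy as np
from scipy.stats import gaussian_kde

kde = gaussian_kde(scores_train.T)                       # kernel-smoothed density of the scores
xg, yg = np.mgrid[scores_train[:, 0].min():scores_train[:, 0].max():100j,
                  scores_train[:, 1].min():scores_train[:, 1].max():100j]
density = kde(np.vstack([xg.ravel(), yg.ravel()])).reshape(xg.shape)
levels = np.arange(0.1, 1.0001, 0.1) * density.max()    # each tenth of the maximal height
# e.g., matplotlib.pyplot.contour(xg, yg, density, levels=levels) would draw the contours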

References

  • (1) Kevin A Janes and Michael B Yaffe “Data-driven modelling of signal-transduction networks” In Nature reviews Molecular cell biology 7.11 Nature Publishing Group, 2006, pp. 820–828
  • (2) Jan Hasenauer, Nick Jagiella, Sabrina Hross and Fabian J Theis “Data-driven modelling of biological multi-scale processes” In Journal of Coupled Systems and Multiscale Dynamics 3.2 American Scientific Publishers, 2015, pp. 101–121
  • (3) Dörte Solle et al. “Between the poles of data-driven and mechanistic modeling for process operation” In Chemie Ingenieur Technik 89.5 Wiley Online Library, 2017, pp. 542–561
  • (4) Rachael E Jack, Carlos Crivelli and Thalia Wheatley “Data-driven methods to diversify knowledge of human psychology” In Trends in cognitive sciences 22.1 Elsevier, 2018, pp. 1–5
  • (5) Mohammed AlQuraishi and Peter K Sorger “Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms” In Nature methods 18.10 Nature Publishing Group, 2021, pp. 1169–1180
  • (6) Samanwoy Ghosh-Dastidar and Hojjat Adeli “Spiking neural networks” In International journal of neural systems 19.04 World Scientific, 2009, pp. 295–308
  • (7) Filip Ponulak and Andrzej Kasinski “Introduction to spiking neural networks: Information processing, learning and applications.” In Acta neurobiologiae experimentalis 71.4, 2011, pp. 409–433
  • (8) Sou Nobukawa, Haruhiko Nishimura and Teruya Yamanishi “Chaotic resonance in typical routes to chaos in the Izhikevich neuron model” In Scientific reports 7.1 Nature Publishing Group, 2017, pp. 1–9
  • (9) Christoph Börgers and Nancy Kopell “Synchronization in networks of excitatory and inhibitory neurons with sparse, random connectivity” In Neural computation 15.3 MIT Press, 2003, pp. 509–538
  • (10) Logan Chariker, Robert Shapley and Lai-Sang Young “Orientation selectivity from very sparse LGN inputs in a comprehensive model of macaque V1 cortex” In Journal of Neuroscience 36.49 Soc Neuroscience, 2016, pp. 12368–12384
  • (11) Hugh R Wilson and Jack D Cowan “Excitatory and inhibitory interactions in localized populations of model neurons” In Biophysical journal 12.1 Elsevier, 1972, pp. 1–24
  • (12) Nicolas Brunel and Vincent Hakim “Fast global oscillations in networks of integrate-and-fire neurons with low firing rates” In Neural computation 11.7 MIT Press, 1999, pp. 1621–1671
  • (13) Michael A Buice and Jack D Cowan “Field-theoretic approach to fluctuation effects in neural networks” In Physical Review E 75.5 APS, 2007, pp. 051919
  • (14) David Cai, Louis Tao, Aaditya V Rangan and David W McLaughlin “Kinetic theory for neuronal network dynamics” In Communications in Mathematical Sciences 4.1 International Press of Boston, 2006, pp. 97–127
  • (15) Yuhang Cai, Tianyi Wu, Louis Tao and Zhuo-Cheng Xiao “Model Reduction Captures Stochastic Gamma Oscillations on Low-Dimensional Manifolds” In Frontiers in Computational Neuroscience Frontiers, 2021, pp. 74
  • (16) Yao Li and Hui Xu “Stochastic neural field model: multiple firing events and correlations” In Journal of mathematical biology 79.4 Springer, 2019, pp. 1169–1204
  • (17) Charu C Aggarwal “Neural networks and deep learning” In Springer 10 Springer, 2018, pp. 978–3
  • (18) Jürgen Schmidhuber “Deep learning in neural networks: An overview” In Neural networks 61 Elsevier, 2015, pp. 85–117
  • (19) Ki H Chon and Richard J Cohen “Linear and nonlinear ARMA model parameter estimation using an artificial neural network” In IEEE transactions on biomedical engineering 44.3 IEEE, 1997, pp. 168–174
  • (20) Housen Li, Johannes Schwab, Stephan Antholzer and Markus Haltmeier “NETT: Solving inverse problems with deep neural networks” In Inverse Problems 36.6 IOP Publishing, 2020, pp. 065005
  • (21) Maziar Raissi, Paris Perdikaris and George E Karniadakis “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations” In Journal of Computational physics 378 Elsevier, 2019, pp. 686–707
  • (22) Andrew R Barron “Approximation and estimation bounds for artificial neural networks” In Machine learning 14.1 Springer, 1994, pp. 115–133
  • (23) Nikola Kovachki, Samuel Lanthaler and Siddhartha Mishra “On universal approximation and error bounds for Fourier Neural Operators” In Journal of Machine Learning Research 22 JMLR Press, 2021
  • (24) Yaoyu Zhang and Lai-Sang Young “DNN-assisted statistical analysis of a model of local cortical circuits” In Scientific reports 10.1 Nature Publishing Group, 2020, pp. 1–16
  • (25) J Andrew Henrie and Robert Shapley “LFP power spectra in V1 cortex: the graded effect of stimulus contrast” In Journal of Neurophysiology 94.1 American Physiological Society, 2005, pp. 479–490
  • (26) Michael Brosch, Eike Budinger and Henning Scheich “Stimulus-related gamma oscillations in primate auditory cortex” In Journal of Neurophysiology 87.6 American Physiological Society Bethesda, MD, 2002, pp. 2715–2725
  • (27) Markus Bauer, Robert Oostenveld, Maarten Peeters and Pascal Fries “Tactile spatial attention enhances gamma-band activity in somatosensory cortex and reduces low-frequency activity in parieto-occipital areas” In Journal of Neuroscience 26.2 Soc Neuroscience, 2006, pp. 490–501
  • (28) T.J. Buschman and E.K. Miller “Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices” In Science 315, 2007, pp. 1860–1862
  • (29) W Pieter Medendorp et al. “Oscillatory activity in human parietal and occipital cortex shows hemispheric lateralization and memory effects in a delayed double-step saccade task” In Cerebral Cortex 17.10 Oxford University Press, 2007, pp. 2364–2374
  • (30) Marijn Wingerden, Martin Vinck, Jan V Lankelma and Cyriel MA Pennartz “Learning-associated gamma-band phase-locking of action–outcome selective neurons in orbitofrontal cortex” In Journal of Neuroscience 30.30 Soc Neuroscience, 2010, pp. 10025–10038
  • (31) J. Csicsvari, B. Jamieson, K. Wise and G. Buzsáki “Mechanisms of gamma oscillations in the hippocampus of the behaving rat” In Neuron 37, 2003, pp. 311–322
  • (32) Andrei T Popescu, Daniela Popa and Denis Paré “Coherent gamma oscillations couple the amygdala and striatum during learning” In Nature Neuroscience 12.6 Nature Publishing Group, 2009, pp. 801–807
  • (33) Matthijs AA Van Der Meer and A David Redish “Low and high gamma oscillations in rat ventral striatum have distinct relationships to behavior, reward, and spiking activity on a learned spatial decision task” In Frontiers in Integrative Neuroscience 3 Frontiers, 2009, pp. 9
  • (34) Logan Chariker, Robert Shapley and Lai-Sang Young “Rhythm and synchrony in a cortical network model” In Journal of Neuroscience 38.40 Soc Neuroscience, 2018, pp. 8621–8634
  • (35) Jiwei Zhang, Douglas Zhou, David Cai and Aaditya V Rangan “A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony” In Journal of computational neuroscience 37.1 Springer, 2014, pp. 81–104
  • (36) Aaditya V Rangan and Lai-Sang Young “Emergent dynamics in a model of visual cortex” In Journal of computational neuroscience 35.2 Springer, 2013, pp. 155–167
  • (37) Yao Li, Logan Chariker and Lai-Sang Young “How well do reduced models capture the dynamics in models of interacting neurons?” In Journal of mathematical biology 78.1 Springer, 2019, pp. 83–115
  • (38) Wulfram Gerstner, Werner M Kistler, Richard Naud and Liam Paninski “Neuronal dynamics: From single neurons to networks and models of cognition” Cambridge University Press, 2014
  • (39) Tianyi Wu et al. “Multi-band oscillations emerge from a simple spiking network” In arXiv preprint arXiv:2206.14942 (Under review by Chaos), 2022 URL: https://arxiv.org/abs/2206.14942
  • (40) Christof Koch “Biophysics of computations” Oxford University Press New York, 1999
  • (41) Zhuo-Cheng Xiao, Kevin K Lin and Lai-Sang Young “A data-informed mean-field approach to mapping of cortical parameter landscapes” In PLoS Computational Biology 17.12 Public Library of Science San Francisco, CA USA, 2021, pp. e1009718
  • (42) Rony Azouz and Charles M Gray “Dynamic spike threshold reveals a mechanism for synaptic coincidence detection in cortical neurons in vivo” In Proceedings of the National Academy of Sciences 97.14 National Acad Sciences, 2000, pp. 8110–8115
  • (43) R. Azouz and C.M. Gray “Adaptive coincidence detection and dynamic gain control in visual cortical neurons in vivo” In Neuron 37, 2003, pp. 513–523
  • (44) Axel Frien et al. “Fast oscillations display sharper orientation tuning than slower components of the same recordings in striate cortex of the awake monkey” In European Journal of Neuroscience 12.4 Wiley Online Library, 2000, pp. 1453–1465
  • (45) Thilo Womelsdorf et al. “Orientation selectivity and noise correlation in awake monkey area V1 are modulated by the gamma cycle” In Proceedings of the National Academy of Sciences 109.11 National Acad Sciences, 2012, pp. 4302–4307
  • (46) Jing Liu and William T Newsome “Local field potential in cortical area MT: stimulus tuning and behavioral correlations” In Journal of Neuroscience 26.30 Soc Neuroscience, 2006, pp. 7779–7790
  • (47) P. Fries, J.H. Reynolds, A.E. Rorie and R. Desimone “Modulation of oscillatory neuronal synchronization by selective visual attention” In Science 291, 2001, pp. 1560–1563
  • (48) Pascal Fries, Thilo Womelsdorf, Robert Oostenveld and Robert Desimone “The effects of visual stimulation and selective visual attention on rhythmic neuronal synchronization in macaque area V4” In Journal of Neuroscience 28.18 Soc Neuroscience, 2008, pp. 4823–4835
  • (49) Elizabeth P Bauer, Rony Paz and Denis Paré “Gamma oscillations coordinate amygdalo-rhinal interactions during learning” In Journal of Neuroscience 27.35 Soc Neuroscience, 2007, pp. 9369–9379
  • (50) Bijan Pesaran et al. “Temporal structure in neuronal activity during working memory in macaque parietal cortex” In Nature Neuroscience 5.8 Nature Publishing Group, 2002, pp. 805–811
  • (51) T. Womelsdorf et al. “Modulation of neuronal interactions through neuronal synchronization” In Science 316, 2007, pp. 1609–1612
  • (52) Erol Başar “A review of gamma oscillations in healthy subjects and in cognitive impairment” In International Journal of Psychophysiology 90.2, 2013, pp. 99–117 DOI: https://doi.org/10.1016/j.ijpsycho.2013.07.005
  • (53) John H Krystal et al. “Impaired tuning of neural ensembles and the pathophysiology of schizophrenia: a translational and computational neuroscience perspective” In Biological Psychiatry 81.10 Elsevier, 2017, pp. 874–885
  • (54) Alexandra J Mably and Laura Lee Colgin “Gamma oscillations in cognitive disorders” In Current Opinion in Neurobiology 52 Elsevier, 2018, pp. 182–187
  • (55) Miles A Whittington et al. “Inhibition-based rhythms: experimental and mathematical observations on network dynamics” In International Journal of Psychophysiology 38.3 Elsevier, 2000, pp. 315–336
  • (56) Roger D Traub et al. “Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts” In Journal of Neurophysiology 93.4 American Physiological Society, 2005, pp. 2194–2232
  • (57) Logan Chariker and Lai-Sang Young “Emergent spike patterns in neuronal populations” In Journal of computational neuroscience 38.1 Springer, 2015, pp. 203–220
  • (58) Zhuo-Cheng Xiao and Kevin K Lin “Multilevel monte carlo for cortical circuit models” In Journal of Computational Neuroscience 50.1 Springer, 2022, pp. 9–15
  • (59) Ian J Goodfellow, Jonathon Shlens and Christian Szegedy “Explaining and harnessing adversarial examples” In arXiv preprint arXiv:1412.6572, 2014
  • (60) Xiaoyong Yuan, Pan He, Qile Zhu and Xiaolin Li “Adversarial examples: Attacks and defenses for deep learning” In IEEE transactions on neural networks and learning systems 30.9 IEEE, 2019, pp. 2805–2824
  • (61) Connor Shorten and Taghi M Khoshgoftaar “A survey on image data augmentation for deep learning” In Journal of big data 6.1 SpringerOpen, 2019, pp. 1–48
  • (62) Zongyi Li et al. “Fourier neural operator for parametric partial differential equations” In arXiv preprint arXiv:2010.08895, 2020
  • (63) Lu Lu et al. “Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators” In Nature Machine Intelligence 3.3 Nature Publishing Group, 2021, pp. 218–229
  • (64) Sifan Wang, Hanwen Wang and Paris Perdikaris “Learning the solution operator of parametric partial differential equations with physics-informed DeepONets” In Science advances 7.40 American Association for the Advancement of Science, 2021, pp. eabi8605
  • (65) Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun “Deep residual learning for image recognition” In Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778
  • (66) Sepp Hochreiter and Jürgen Schmidhuber “Long short-term memory” In Neural computation 9.8 MIT Press, 1997, pp. 1735–1780