Model Order Reduction in Neuroscience
Abstract
The human brain contains on the order of $10^{11}$ neurons, each forming on the order of $10^4$ connections, synapses, with other neurons. Most sensory, cognitive and motor functions of our brains depend on the interaction of large populations of neurons. In recent years, many technologies have been developed for recording large numbers of neurons either sequentially or simultaneously. Increases in computational power and algorithmic developments have enabled advanced analyses of neuronal populations in parallel with the rapid growth in quantity and complexity of the recorded neuronal activity. Recent studies have made use of dimensionality and model order reduction techniques to extract coherent features which are not apparent at the level of individual neurons. It has been observed that neuronal activity evolves on low-dimensional subspaces. The aim of model reduction of large-scale neuronal networks is the accurate and fast prediction of patterns and their propagation in different areas of the brain. Spatiotemporal features of the brain activity are identified on low-dimensional subspaces with methods such as
dynamic mode decomposition (DMD), proper orthogonal decomposition (POD), discrete empirical interpolation (DEIM) and combined parameter and state reduction.
In this paper, we give an overview of the dimensionality reduction and model order reduction techniques currently used in neuroscience.
Keywords: neuroscience, dimensionality reduction, proper orthogonal decomposition, discrete empirical interpolation, dynamic mode decomposition, state and parameter estimation.
Classification [MSC 2010]: 93A15, 92C55, 37M10, 37M99, 37N40, 65R32.
1 Introduction
Due to advances in recording and imaging technologies, the number of recorded signals from brain cells has increased significantly in the last few years. The recorded spatio-temporal neural activity gives rise to networks with complex dynamics. In neuroscience, molecular- and cellular-level details are incorporated into large-scale models of the brain in order to reproduce phenomena such as learning and behavior.
The rapid growth of simultaneous neuronal recordings in scale and resolution brings challenges for the analysis of the neuronal population activity. New computational approaches have to be developed to analyze, visualize, and understand large-scale recordings of neural activity. While algorithmic developments and the availability of significantly more computing power have enabled the analysis of larger neuronal networks, these advances cannot keep pace with the increasing size and complexity of the recorded activity.
The activity of complex networks of neurons can often be described by relatively few distinct patterns. Model order reduction techniques enable us to identify the coherent spatial–temporal patterns.
The presence or absence of a neural mechanism can be analyzed for neuronal populations. Dimensionality reduction methods [6] are data-driven statistical techniques for forming and evaluating hypotheses about population activity structure; they are summarized in Section 2. One of the goals of neuroscience is the fast and accurate prediction of potential propagation in neurons. The differential equations describing the propagation of potential in neurons are the Hodgkin–Huxley equations [12], which consist of a coupled system of ordinary and partial differential equations (ODEs and PDEs). The dimension of the associated discretized systems is very large for accurately simulating neurons with realistic morphological structure and synaptic inputs. In Section 3 we present two model order reduction approaches based on POD and DEIM [5] which can accurately predict the potential propagation in large-scale neuronal networks, leading to important speedups [17, 16, 2]. Using functional neuroimaging data from electroencephalography (EEG) or functional magnetic resonance imaging (fMRI), the effective connectivity between different regions of the brain can be inferred by dynamic causal modeling (DCM) [7]. Effective connectivity is parameterized in terms of coupling among unobserved brain states and neuronal activity in different regions. In Section 4 we describe a combined state and parameter reduction for parameter estimation and identification [10] to extract effective connectivity in neuronal networks from measured data, such as EEG or fMRI recordings. In Section 5 the data-driven, equation-free model order reduction method dynamic mode decomposition (DMD) is described for identifying sleep spindle networks [3]. Reduced order models with POD and DEIM and four variants of them are presented for neuronal synaptic plasticity and neuronal spiking networks in Section 6.
2 Dimensionality reduction methods
Coordination of responses across neurons exists only at the level of the population and not at the level of single neurons. The presence or absence of a neural mechanism can be analyzed for neuronal populations. Dimensionality reduction methods are data-driven statistical techniques for forming and evaluating hypotheses about population activity structure.
They produce low-dimensional representations of high-dimensional data with the aim of extracting coherent patterns which preserve or highlight some feature of interest in the data [6]. The recorded activity of $N$ neurons is likely not independent across neurons, because the neurons belong to a common network of neuronal populations. From the high-dimensional data of neuronal recordings, a smaller number $K < N$ of explanatory variables is extracted with the help of dimensionality reduction methods. The explanatory variables are not directly observed and are therefore referred to as latent variables. The latent variables define a $K$-dimensional space representing coherent patterns of the noisy neural activity of the $N$ neurons.
There exist several dimensionality reduction methods, which differ in the statistical interpretation of the preserved and discarded features of the neuronal populations. Following [6], where further references about the methods can be found, we summarize the most commonly used ones.
Principal component and factor analysis: The most widely known technique to extract coherent patterns from high-dimensional data is modal decomposition. A particularly popular modal decomposition technique is principal component analysis (PCA), which derives modes ordered by their ability to account for energy or variance in the data. However, PCA is a static technique that does not explicitly model temporal dynamics of time-series data, so it often performs poorly in reproducing dynamic data, such as recordings of neural activity. The low-dimensional space identified by PCA captures variance of all types, including firing rate variability and spiking variability, whereas factor analysis (FA) discards the independent variance of each neuron and preserves the variance that is shared across neurons.
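As an illustration, the following minimal Python sketch (our own example, not taken from [6]; the data, dimensions and variable names are invented) contrasts PCA and FA on synthetic population activity with shared latent structure and neuron-specific noise:

```python
# Minimal sketch: PCA vs. factor analysis on synthetic population activity.
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
T, N, K = 500, 80, 3                         # time bins, neurons, latent dims
Z = rng.standard_normal((T, K))              # shared latent trajectories
W = rng.standard_normal((K, N))              # loading of latents onto neurons
noise = rng.standard_normal((T, N)) * rng.uniform(0.5, 2.0, N)  # per-neuron noise
X = Z @ W + noise                            # observed "firing rates"

pca = PCA(n_components=K).fit(X)             # captures shared + independent variance
fa = FactorAnalysis(n_components=K).fit(X)   # discards per-neuron independent variance

print("PCA explained variance ratio:", pca.explained_variance_ratio_)
Z_fa = fa.transform(X)                       # latent trajectories, shape (T, K)
```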
Time series methods: The temporal dynamics of the population activity can be identified if the data come from a time series. The most commonly used time series methods for dimensionality reduction of neural recordings are: hidden Markov models (HMM) [18], kernel smoothing followed by a static dimensionality reduction method, Gaussian process factor analysis (GPFA) [35], latent linear dynamical systems (LDS) [4] and latent nonlinear dynamical systems (NLDS) [26]. They produce latent neural trajectories that capture the shared variability across neurons. The HMM is applied when a jump between discrete states of neurons exists; the other methods identify smooth changes in firing rates over time.
Methods with dependent variables: In many neuronal recordings, the high-dimensional firing rate space is associated with labels of one or more dependent variables, such as stimulus identity, decision identity or a time index. In this case dimensionality reduction aims to project the data such that differences in these dependent variables are preserved. Linear discriminant analysis (LDA) can be used to find a low-dimensional projection in which the groups to which the data points belong are well separated.
Nonlinear dimensionality reduction methods:
All the previous methods assume a linear relationship between the latent and observed variables. When the data lie on a low-dimensional, nonlinear manifold in the high-dimensional space, a linear method may require more latent variables than the number of true dimensions of the data. The most frequently used methods to identify nonlinear manifolds are Isomap [31] and locally linear embedding (LLE) [28]. Because the nonlinear methods use local neighborhoods to estimate the structure of the manifold, and population responses may not evenly explore the high-dimensional space, these methods should be used with care.
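The following minimal sketch (illustrative, not from the cited works; the spiral data is an assumption) shows both methods recovering a one-dimensional nonlinear manifold that a linear method would need three latent variables to describe:

```python
# Minimal sketch: nonlinear manifold recovery with Isomap and LLE.
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 3 * np.pi, 400))
# population responses lying on a 1D spiral embedded in 3D "firing-rate space"
X = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
X += 0.01 * rng.standard_normal(X.shape)

iso = Isomap(n_neighbors=10, n_components=1).fit_transform(X)
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=1).fit_transform(X)
# Both should recover a coordinate monotone in t; the neighborhood size must
# be chosen with care when the data do not evenly cover the manifold.
```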
3 Proper orthogonal decomposition (POD) and discrete empirical interpolation (DEIM) for the Hodgkin–Huxley model
One of the goals of neuroscience is the fast and accurate prediction of potential propagation in neurons. The propagation of the potential in neurons is described by the Hodgkin–Huxley (HH) cable equations [12], which consist of a coupled system of ordinary (ODEs) and partial differential equations (PDEs). Accurate simulation of the morphology, kinetics and synaptic inputs of neurons requires the solution of large systems of nonlinear ODEs. The complexity of the models is determined by the density of synapses along the dendritic length (on the order of one synapse per micron). For one synapse per micron on a cell with several millimeters of dendrite, discretized into thousands of compartments each carrying several variables, the result is a coupled nonlinear system of ODEs of very large dimension [17, 16]. To recover the complex dynamics, efficient reduced order neuronal models are developed using POD and DEIM from snapshots of the coupled PDEs and ODEs discretized in space and time [17, 16, 2]. In this section we describe two of them; they differ in the formulation of the HH cable equation and of the equations for the gating variables.
3.1 Morphologically accurate reduced order modeling
The neuronal full order models (FOMs) in [17, 16] consist of branched dendritic neurons meeting at the soma, where the network has $B$ branches. It is assumed that branch $b$ carries $C$ distinct ionic currents with associated conductance densities $\bar g_{bc}$ and reversal potentials $E_c$. The kinetics of current $c$ on branch $b$ is governed by the gating variables $w_{bcj}$. When subjected to input at synapses, the nonlinear HH cable equation for the transmembrane potential $v_b(x,t)$, together with the equations for the gating variables, is given by (see [2], Fig. 1, for a model network with three cables)
$$a_b C_m \frac{\partial v_b}{\partial t} = \frac{1}{2 R_i} \frac{\partial}{\partial x}\Big( a_b^2 \frac{\partial v_b}{\partial x} \Big) - a_b \sum_{c=1}^{C} \bar g_{bc} \Big( \prod_{j} w_{bcj}^{q_{cj}} \Big) (v_b - E_c) - \sum_{s} g_{bs}(t)\, \delta(x - x_{bs})\, (v_b - E_{bs}), \qquad (1)$$

$$\frac{\partial w_{bcj}}{\partial t} = \frac{w_{cj,\infty}(v_b) - w_{bcj}}{\tau_{cj}(v_b)}, \qquad (2)$$
where $g_{bs}(t)$ is the time course, $x_{bs}$ the spatial location, and $E_{bs}$ the reversal potential of the $s$th synapse on branch $b$. The remaining variables and parameters in (1) are described in [17, 16].
These branch potentials interact at junction points, where the soma is one such junction at which the dendrites join. Continuity of the potential and current balance at the soma lead to a common somatic potential, denoted by $v_\sigma$. The networked form of (1) then becomes
$$a_b C_m \frac{\partial v_b}{\partial t} = \frac{1}{2 R_i} \frac{\partial}{\partial x}\Big( a_b^2 \frac{\partial v_b}{\partial x} \Big) - a_b \sum_{c=1}^{C} \bar g_{bc} \Big( \prod_{j} w_{bcj}^{q_{cj}} \Big) (v_b - E_c) - \sum_{s} g_{bs}(t)\, \delta(x - x_{bs})\, (v_b - E_{bs}), \quad v_b\big|_{\mathrm{soma}} = v_\sigma, \qquad (3)$$

$$\frac{\partial w_{bcj}}{\partial t} = \frac{w_{cj,\infty}(v_b) - w_{bcj}}{\tau_{cj}(v_b)}, \qquad b = 1, \dots, B, \qquad (4)$$

together with a current-balance condition at the soma.
When the cell is partitioned into $N$ compartments, with $C$ distinct ionic currents per compartment and $G$ gating variables per current, the following nonlinear ODEs are obtained
$$C_m\, \dot v(t) = H v(t) - \Phi(w(t)) \,.\!*\, \big(v(t) - E^{\mathrm{ion}}\big) - g^{\mathrm{syn}}(t) \,.\!*\, \big(v(t) - E^{\mathrm{syn}}\big), \qquad (5)$$

$$\dot w(t) = \big(w_\infty(v(t)) - w(t)\big) \,./\, \tau(v(t)), \qquad (6)$$
where $H$ is the Hines matrix [11] and the 'dot' operator, $.*$, denotes element-wise multiplication. $E^{\mathrm{ion}}$ and $E^{\mathrm{syn}}$ are the vectors of channel and synaptic reversal potentials, respectively.
Eq. (5) is discretized in time by a second-order discretization scheme [11].
In [16], the POD and DEIM modes are constructed using snapshots of the states and of the nonlinear terms at a set of discrete times.
The reduced membrane potentials are approximated in the POD basis, and the reduced gating variables are obtained after applying DEIM to the nonlinear terms. The reduced order model in [16] preserves the spatial precision of synaptic input and accurately captures the subthreshold and spiking behaviors.
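To make the POD–DEIM construction concrete, the following minimal Python sketch (our illustration, not the code of [16]; the snapshot matrices S and F are synthetic) builds a POD basis from state snapshots and selects DEIM interpolation indices for the nonlinearity with the greedy algorithm of [5]:

```python
# Minimal sketch: snapshot-based POD and greedy DEIM index selection.
import numpy as np

def pod_basis(S, k):
    """Leading k left singular vectors of the snapshot matrix S (n x m)."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :k]

def deim_indices(U):
    """Greedy DEIM interpolation indices for a basis U (n x k), cf. [5]."""
    n, k = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(U[idx, :j], U[idx, j])   # interpolate new column
        r = U[:, j] - U[:, :j] @ c                   # residual at all points
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# usage on synthetic snapshots of a state v and a nonlinearity f(v)
rng = np.random.default_rng(2)
S = rng.standard_normal((1000, 60))      # state snapshots
F = np.tanh(S)                           # nonlinear-term snapshots
V = pod_basis(S, 10)                     # POD basis for the state
U = pod_basis(F, 10)                     # POD basis for the nonlinearity
P = deim_indices(U)                      # evaluate f only at these 10 points
# DEIM approximation: f(v) ~ U @ inv(U[P, :]) @ f(v)[P]
```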
In [17] a linearized quasi-active reduced neuronal model is constructed using balanced truncation and approximation of transfer functions. These ROMs preserve the input–output relationship but reproduce only subthreshold dynamics.
3.2 Energy stable neuronal reduced order modeling
In [1, 2] a different form of the HH cable equation and of the ODEs for the gating variables is considered. The intracellular potential $v$ and three gating variables $m$, $h$ and $n$ describe the activation and deactivation of the ion channels: $m$ and $h$ for the sodium channels and $n$ for the potassium channels. For a single cable in the computational domain, the distribution of the potential $v(x,t)$ is governed by [1, 2]

$$C_m \frac{\partial v}{\partial t} = \beta \frac{\partial^2 v}{\partial x^2} - g(m,h,n)\, v + s(m,h,n,x,t), \qquad (7)$$
where $a$ is the radius of the neuron, $C_m$ the specific membrane capacitance, and $\beta = a/(2\rho)$ the ratio with the axial resistivity $\rho$. The conductance $g$ is a polynomial of the gating variables,

$$g(m,h,n) = \bar g_{\mathrm{Na}}\, m^3 h + \bar g_{\mathrm{K}}\, n^4 + \bar g_{\mathrm{L}}, \qquad (8)$$
with the source term

$$s(m,h,n,x,t) = \bar g_{\mathrm{Na}}\, m^3 h\, E_{\mathrm{Na}} + \bar g_{\mathrm{K}}\, n^4 E_{\mathrm{K}} + \bar g_{\mathrm{L}}\, E_{\mathrm{L}} + i(x,t), \qquad (9)$$

where $E_{\mathrm{Na}}$, $E_{\mathrm{K}}$ and $E_{\mathrm{L}}$ are the equilibrium potentials, and $i(x,t)$ is the input current applied at the location $x_0$,

$$i(x,t) = I(t)\, \delta(x - x_0). \qquad (10)$$
The nonlinear ODEs for the gating variables are given by

$$\frac{dm}{dt} = \alpha_m(v)(1-m) - \beta_m(v)\, m, \quad \frac{dh}{dt} = \alpha_h(v)(1-h) - \beta_h(v)\, h, \quad \frac{dn}{dt} = \alpha_n(v)(1-n) - \beta_n(v)\, n. \qquad (11)$$
Expressions for $\alpha$, $\beta$ and the boundary conditions can be found in [2].
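For concreteness, the following minimal Python sketch integrates the space-clamped counterpart of (7)–(11) (the diffusion term dropped), using the classical Hodgkin–Huxley rate functions and textbook parameter values; these are our assumptions, not the values used in [2]:

```python
# Minimal sketch: space-clamped HH dynamics with standard rate functions.
import numpy as np
from scipy.integrate import solve_ivp

a_m = lambda v: 0.1 * (25 - v) / (np.exp((25 - v) / 10) - 1)
b_m = lambda v: 4.0 * np.exp(-v / 18)
a_h = lambda v: 0.07 * np.exp(-v / 20)
b_h = lambda v: 1.0 / (np.exp((30 - v) / 10) + 1)
a_n = lambda v: 0.01 * (10 - v) / (np.exp((10 - v) / 10) - 1)
b_n = lambda v: 0.125 * np.exp(-v / 80)

gNa, gK, gL = 120.0, 36.0, 0.3             # mS/cm^2 (textbook values)
ENa, EK, EL, Cm = 115.0, -12.0, 10.6, 1.0  # mV, uF/cm^2, rest shifted to 0

def rhs(t, y, I):
    v, m, h, n = y
    g = gNa * m**3 * h + gK * n**4 + gL                    # conductance, cf. (8)
    s = gNa * m**3 * h * ENa + gK * n**4 * EK + gL * EL    # source term, cf. (9)
    dv = (-g * v + s + I) / Cm
    dm = a_m(v) * (1 - m) - b_m(v) * m                     # gating ODEs, cf. (11)
    dh = a_h(v) * (1 - h) - b_h(v) * h
    dn = a_n(v) * (1 - n) - b_n(v) * n
    return [dv, dm, dh, dn]

sol = solve_ivp(rhs, (0, 50), [0.0, 0.05, 0.6, 0.32], args=(10.0,), max_step=0.01)
```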
In [1, 2], a model network with three cables connected to a soma is used. The equations governing the potential propagation in the network of neuron cables (dendrites and/or axons), marked with the superscript $k$, are given as

$$C_m \frac{\partial v^k}{\partial t} = \beta^k \frac{\partial^2 v^k}{\partial x^2} - g(m^k,h^k,n^k)\, v^k + s^k(m^k,h^k,n^k,x,t), \qquad (12)$$

$$\frac{dm^k}{dt} = \alpha_m(v^k)(1-m^k) - \beta_m(v^k)\, m^k, \quad \text{and similarly for } h^k, n^k, \qquad (13)$$

for $k = 1, 2, 3$, together with the boundary conditions.
The semi-discrete forms of these equations are approximated using energy-stable summation-by-parts operators [1, 2] for the model network. The reduced order bases (ROBs) for multiple cables of identical lengths are assembled into a network in block form. The block structure of the ROB allows a flexible structure-preserving model reduction approach with an independent approximation in each cable; the energy stability and accuracy properties follow from this block structure.
The computation of the time-varying reduced variables in the gating equations at every time step is costly because it scales with the dimension of the FOM. A nonnegative variant of the discrete empirical interpolation method (NNDEIM) is developed in [2] to preserve the structure and energy stability properties of the equations.
The capability of the greedy-based approach to generate accurate predictions in large-scale neuronal networks is demonstrated for a system with a large number of degrees of freedom (dofs). The state-variable ROBs of POD modes, together with the nonnegative ROBs of NNDEIM modes, are constructed using a greedy approach to predict the potential variation at the soma. The resulting speedup is considerably larger than with Galerkin projection alone, without NNDEIM.
4 Combined state and parameter reduction for dynamic causal modelling
In neuroscience, the effective connectivity between different regions of the brain is inferred from neuroimaging data of EEG or fMRI recordings using the method of dynamic causal modeling (DCM) [7]. Effective connectivity is parameterized in terms of coupling among unobserved brain states and neuronal activity in different regions. In DCM, the neuronal activity of the observed brain regions is represented as a SISO (single input, single output) linear state-space system
$$\dot x(t) = A_\theta\, x(t) + B_\theta\, u(t), \qquad (14)$$
with the parameterized connectivity matrix $A_\theta$ and the external input matrix $B_\theta$.
Linearization of the nonlinear DCM hemodynamic forward sub-model (balloon model) [7], which transforms the neuronal activity into the measured BOLD (blood oxygen level dependent) response, around the equilibrium results in the following single input, single output (SISO) system:
$$\dot s(t) = x(t) - \kappa\, s(t) - \gamma\, f(t), \qquad (15)$$

$$\dot f(t) = s(t), \qquad (16)$$

$$\dot \nu(t) = \tau^{-1}\big(f(t) - \alpha^{-1}\, \nu(t)\big), \qquad (17)$$

$$\dot q(t) = \tau^{-1}\Big(\big(1 + \rho^{-1}(1-\rho)\ln(1-\rho)\big)\, f(t) + \big(1 - \alpha^{-1}\big)\, \nu(t) - q(t)\Big), \qquad (18)$$

$$y(t) = V_0\big((k_2 - k_3)\, \nu(t) - (k_1 + k_2)\, q(t)\big), \qquad (19)$$

where $s$, $f$, $\nu$ and $q$ denote the deviations from equilibrium of the vasodilatory signal, the blood flow, the blood volume and the deoxyhemoglobin content.
The fMRI measurements of each brain region are reflected by the output variable $y(t)$. For the meaning of the variables and parameters in (15)–(19) we refer to [10, 9]. The linearized forward sub-models are embedded into the fMRI connectivity model
$$\frac{d}{dt}\begin{pmatrix} x(t) \\ h(t) \end{pmatrix} = \begin{pmatrix} A_\theta & 0 \\ \sum_{i=1}^{N} E_{ii} \otimes b_h & I_N \otimes A_h \end{pmatrix} \begin{pmatrix} x(t) \\ h(t) \end{pmatrix} + \begin{pmatrix} B_\theta \\ 0 \end{pmatrix} u(t), \qquad (20)$$

$$y(t) = \begin{pmatrix} 0 & I_N \otimes c_h^\top \end{pmatrix} \begin{pmatrix} x(t) \\ h(t) \end{pmatrix}, \qquad (21)$$

where $h(t)$ collects the hemodynamic states of the $N$ regions, $A_h$, $b_h$, $c_h$ are the system matrices of the linearized sub-model (15)–(19), and $E_{ii}$ denotes the Kronecker matrix with its only non-zero element located at the $(i,i)$ component.
The linearized state-space forward model (20) and (21) corresponds to a multiple input, multiple output (MIMO) system
$$\dot x(t) = A(\theta)\, x(t) + B\, u(t), \qquad y(t) = C\, x(t), \qquad (22)$$
where $x(t) \in \mathbb{R}^n$ is the internal state, $u(t)$ the external input, $y(t)$ the observed output, and $\theta \in \mathbb{R}^p$ are the parameters describing the different conditions.
For a large number of parameters, the computational cost of inferring the parameters and states is very high. In [10, 8] a combined state and parameter model order reduction is developed for parameter estimation and identification to extract effective connectivity. The inversion procedure consists of two phases, an offline and an online phase. In the offline phase, the underlying parameterized model is reduced jointly in states and parameters. In the online phase, the reduced order model's parameters are estimated to fit the observed experimental data. Using state and parameter reduction, the computational cost of the inversion is reduced. The simultaneous reduction of state and parameter space is based on Galerkin projections with orthogonal matrices $V \in \mathbb{R}^{n \times r}$ for the states and $P \in \mathbb{R}^{p \times q}$ for the parameters. The reduced model is of much lower order than the original full order model. The reduced states $x_r$ and the reduced parameters $\theta_r$ are computed as
$$\dot x_r(t) = V^\top A(P\,\theta_r)\, V\, x_r(t) + V^\top B\, u(t), \qquad y_r(t) = C\, V\, x_r(t), \qquad (23)$$

with the reduced initial condition $x_{r,0} = V^\top x_0$ and the reduced components $x_r = V^\top x$, $\theta_r = P^\top \theta$.
In the online phase, the optimization-based inverse problem is combined with the reduction of the state and parameter space. The inversion is based on the generalized data-driven optimization approach of [23] to construct the ROMs, enhanced with a Monte-Carlo method to speed up the computations. The state projection $V$ and the parameter projection $P$ are determined iteratively by a greedy algorithm, maximizing the error between the high-fidelity original and the low-dimensional reduced model in a Bayesian setting.
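The following minimal Python sketch (our illustration, not the optmor code [8]; the affine parameterization and all dimensions are assumptions) shows the joint Galerkin projection of (23) for a system with $A(\theta) = A_0 + \sum_i \theta_i A_i$:

```python
# Minimal sketch: combined state and parameter Galerkin projection, cf. (23).
import numpy as np

rng = np.random.default_rng(3)
n, p, r, q = 200, 20, 10, 5               # full/reduced state and parameter dims
A0 = -np.eye(n)
Ai = [rng.standard_normal((n, n)) / n for _ in range(p)]
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # state projection (offline)
P, _ = np.linalg.qr(rng.standard_normal((p, q)))   # parameter projection (offline)

def reduced_system(theta_r):
    """Assemble reduced (A_r, B_r, C_r) for reduced parameters theta_r.
    In practice the projected blocks V.T @ Ai @ V are precomputed offline."""
    theta = P @ theta_r                            # lift reduced parameters
    A = A0 + sum(t * M for t, M in zip(theta, Ai))
    return V.T @ A @ V, V.T @ B, C @ V             # Galerkin projection

Ar, Br, Cr = reduced_system(np.zeros(q))           # cheap online evaluation
```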
Numerical experiments with the DCM model in [23] show that a high-dimensional neuronal network system is efficiently inverted due to the short offline durations. In the offline phase, the Monte-Carlo enhanced method requires more than an order of magnitude less offline time than the original and the data-misfit enhanced methods. In the online phase, the reduced order model yields a speedup of about an order of magnitude compared to the full-order inversion. The output error of the data-misfit enhanced method is close to that of the full order method. The output errors of the Monte-Carlo method decrease with an increasing number of simulations but do not reach the output error of the full order system. The source code is available in MATLAB [8].
5 Dynamic mode decomposition
Dynamic mode decomposition (DMD) is a data-driven, equation-free ROM technique [20]. It was initially developed to reduce high-dimensional dynamic data obtained from experiments and simulations in fluid mechanics to a small number of coupled spatial–temporal modes [29, 30]. DMD was applied to explore spatial–temporal patterns in large-scale neuronal recordings in [3]. DMD can be interpreted as a combination of the discrete Fourier transform (DFT) in time and principal component analysis (PCA) [15] in space.
Both PCA and independent component analysis (ICA) [13] are static techniques, which perform poorly in reproducing dynamic data, such as recordings of neural activity.
The data are taken from electrocorticography (ECoG) recordings: voltages from $n$ channels of an electrode array, sampled every $\Delta t$. The measurements at snapshot $k$ are arranged into column vectors $\mathbf{x}_k$. From $m$ snapshots in time, two data matrices shifted by $\Delta t$ are constructed,

$$X = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_{m-1} \end{bmatrix}, \qquad X' = \begin{bmatrix} \mathbf{x}_2 & \mathbf{x}_3 & \cdots & \mathbf{x}_m \end{bmatrix}. \qquad (24)$$
These matrices are assumed to be related linearly in time,

$$X' \approx A X. \qquad (25)$$

The DMD of the data matrix pair $X$ and $X'$ is given by the eigendecomposition of $A = X' X^\dagger$, where the pseudoinverse $X^\dagger$ is computed using the singular value decomposition $X = U \Sigma V^*$ of the data matrix. The spatio-temporal modes are computed by the exact DMD algorithm [32].
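The following minimal Python sketch (our illustration; the synthetic data and the rank are assumptions) implements the exact DMD algorithm [32] for a channels-by-snapshots data matrix:

```python
# Minimal sketch: exact DMD of the pair (X, X') with X' ~ A X, cf. (24)-(25).
import numpy as np

def dmd(X, Xp, r):
    """Rank-r exact DMD: eigenvalues lam and modes Phi of A = X' X^+."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ Xp @ Vh.conj().T / s           # projected operator
    lam, W = np.linalg.eig(Atilde)                       # DMD eigenvalues
    Phi = Xp @ Vh.conj().T @ np.diag(1 / s) @ W / lam    # exact DMD modes
    return lam, Phi

# usage: 32 channels carrying two oscillations at 12 Hz and 7 Hz
rng = np.random.default_rng(4)
dt = 1e-3
t = np.arange(1000) * dt
P4 = rng.standard_normal((32, 4))                        # 4 spatial patterns
D = (P4[:, :1] * np.sin(2*np.pi*12*t) + P4[:, 1:2] * np.cos(2*np.pi*12*t)
     + P4[:, 2:3] * np.sin(2*np.pi*7*t) + P4[:, 3:4] * np.cos(2*np.pi*7*t))
lam, Phi = dmd(D[:, :-1], D[:, 1:], r=4)
freqs = np.abs(np.angle(lam)) / (2 * np.pi * dt)         # ~12 Hz and ~7 Hz
```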
Because DMD does not contain explicit spatial relationships between neighboring measurements, traveling waves occurring in neuronal networks cannot be captured well with a few coherent modes. DMD is therefore used as a windowed technique, with a temporal window size constrained by a lower bound, as for the discrete Fourier transform (DFT). In contrast to fluid dynamics, where the number of measurement locations usually far exceeds the number of snapshots, in neuroscience the electrode arrays have only tens of channels $n$, while the windowed recordings contain a much larger number of snapshots $m$, so that $n \ll m$. The number of nonzero singular values of $X$ is at most $\min(n, m-1)$, which restricts the maximum number of DMD modes and eigenvalues to $n$; consequently, the dynamics can only be captured over a small number of snapshots. The rank mismatch is resolved by appending time-shifted versions of the data to the snapshot measurements. The augmented data matrix is given as

$$X_{\mathrm{aug}} = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_{m-h} \\ \mathbf{x}_2 & \mathbf{x}_3 & \cdots & \mathbf{x}_{m-h+1} \\ \vdots & \vdots & & \vdots \\ \mathbf{x}_h & \mathbf{x}_{h+1} & \cdots & \mathbf{x}_{m-1} \end{bmatrix}. \qquad (26)$$
The augmented matrices $X_{\mathrm{aug}}$ and $X'_{\mathrm{aug}}$ are Hankel matrices, constant along each skew diagonal, as in the Eigensystem Realization Algorithm (ERA) [14]. The number of stacks $h$ is chosen large enough to resolve the rank mismatch. A measure for determining the optimal number of stacks is the relative approximation error

$$E = \frac{\lVert X' - A X \rVert_F}{\lVert X' \rVert_F},$$

where $\lVert \cdot \rVert_F$ is the Frobenius norm. The approximation error decreases with an increasing number of stacks until it reaches a plateau, after which the DMD accuracy does not significantly increase.
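A minimal sketch of this shift-stacking and of the error measure as a function of the stack number $h$ (our illustration, not the code of [3]; the data are random placeholders):

```python
# Minimal sketch: Hankel augmentation for DMD of data with few channels.
import numpy as np

def augment(D, h):
    """Stack h time-shifted copies of the data D (n x m) into Hankel form (26)."""
    n, m = D.shape
    return np.vstack([D[:, j:m - h + j + 1] for j in range(h)])

def dmd_error(D, h, r):
    """Relative Frobenius error ||X' - A X||_F / ||X'||_F for h stacks."""
    Daug = augment(D, h)
    X, Xp = Daug[:, :-1], Daug[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A = Xp @ Vh.conj().T @ np.diag(1 / s) @ U.conj().T   # low-rank A = X' V S^-1 U*
    return np.linalg.norm(Xp - A @ X) / np.linalg.norm(Xp)

rng = np.random.default_rng(5)
D = rng.standard_normal((8, 400))          # n = 8 channels, m = 400 snapshots
errs = {h: dmd_error(D, h, r=16) for h in (1, 2, 4, 8)}  # pick h at the plateau
```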
DMD is applied in [3] as an automated approach to reliably detect and analyze the spatial localization and frequencies of sleep spindle networks from human ECoG recordings. A MATLAB implementation is available at github.com/bwbrunton/dmd-neuro/.
6 Reduced order modeling of biophysical neuronal networks
Recently, reduced order models for ODEs of the form

$$\dot x(t) = A\, x(t) + f(x(t)) + B\, u(t) \qquad (27)$$

have been constructed using POD and DEIM to investigate the input–output behavior of neuronal networks in the brain [22, 21], where $x(t)$ and $u(t)$ are the state and input variables, respectively.
The model in [22] is based on the chemical reactions of molecules in synapses, which are the intercellular information transfer points of neurons. The signaling pathways in striatal synaptic plasticity are modeled in [19]. This model describes how certain molecules, which are a prerequisite for learning in the brain, act in synapses. The stoichiometric equations obey the law of mass action, which leads to a deterministic system of ODEs of the form (27). The state of the control system (27) is a collection of ions, molecules, and proteins that act in neuronal synapses. The linear part of (27) is sparse and the nonlinearities are quadratic. The time-dependent stimulus $u(t)$ consists of molecules that are important for neuronal excitability and plasticity: calcium and glutamate. A minimal sketch of how mass-action kinetics leads to such quadratic ODEs is given below.
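The following Python sketch illustrates the mass-action structure with a single hypothetical bimolecular binding reaction driven by a calcium stimulus; the species, rate constants and stimulus are invented for illustration and are not the pathway of [19]:

```python
# Minimal sketch: mass-action kinetics Ca + P <-> CaP yields quadratic ODEs
# of the form (27), with a time-dependent calcium stimulus as input.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 2.0, 0.5                          # forward/backward rate constants

def rhs(t, y, u):
    ca, p, cap = y
    flux = k1 * ca * p - k2 * cap          # quadratic nonlinearity from mass action
    return [-flux + u(t), -flux, flux]     # d[Ca]/dt, d[P]/dt, d[CaP]/dt

u = lambda t: 1.0 if 1.0 < t < 1.5 else 0.0    # brief calcium stimulus
sol = solve_ivp(rhs, (0, 10), [0.1, 1.0, 0.0], args=(u,), max_step=0.01)
```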
In [21], a nonlinear biophysical network model is considered, describing
synchronized population bursting behavior of heterogeneous
pyramidal neurons in the brain [27]. Neurons communicate by changing their
membrane voltage to create action potentials (spikes),
propagating from cell to cell. Spiking is the fundamental
method of sensory information processing in the brain, and
synchronized spiking is an emergent property of biological
neuronal networks. The ODE system (27) in [21] collects the states of a network of neurons, each modeled by several ODEs, leading in total to a large system of ODEs. Each cell is modeled with Hodgkin–Huxley equations, with only two compartments (dendrite and soma), and these compartments have different ion channels. The state variables include the voltages of the somatic and dendritic compartments, the dendritic calcium concentration, and synaptic and ion channel gating variables; the input is an injected current. Additionally, the soma compartment voltages are coupled to the dendritic compartments of randomly chosen cells. This networking of the output of cells as input to other cells is key to producing synchronized population behavior. The nonlinearities are of Hodgkin–Huxley type, i.e., exponential functions as well as cubic and quartic polynomials.
In [22], POD+DEIM was applied to the data-driven biological model (27) of plasticity in the brain. The POD-DEIM ROMs reduce the simulation time significantly, and the error between the original and the reduced order solutions can be tuned by adjusting the numbers of POD and DEIM basis vectors independently. When the ROMs are trained on a matching time interval, accurate results are obtained. However, generalizing the reduced model to longer time intervals is challenging, which is characteristic of all nonlinear models.
In [21], the network model (27) is reduced with the localized DEIM (LDEIM) [24], discrete adaptive POD (DAPOD) [33, 34], and adaptive DEIM (ADEIM) [25]; DEIM and its variants are used in combination with POD. The ROMs require a large number of POD and DEIM basis vectors to accurately reproduce the input–output behavior. In this model, every cell is heterogeneous in its parameters and there are also jump/reset conditions, factors that pose additional challenges to the reduction methods. Nevertheless, the ROMs in [21] were able to replicate the emergent synchronized population activity of the original network model. DAPOD and ADEIM perform best in preserving the spiking activity of the original network model, but ADEIM is too slow and does not allow low enough dimensions to offset the computational costs of online adaptivity. DAPOD is able to find a lower-dimensional POD basis online than the other methods find offline, but its runtime is close to that of the original model.
References
- [1] D. Amsallem and J. Nordström. High-order accurate difference schemes for the Hodgkin–Huxley equations. Journal of Computational Physics, 252:573 – 590, 2013.
- [2] D. Amsallem and J. Nordström. Energy stable model reduction of neurons by nonnegative discrete empirical interpolation. SIAM Journal on Scientific Computing, 38(2):B297–B326, 2016.
- [3] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz. Extracting spatial–temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. Journal of Neuroscience Methods, 258:1 – 15, 2016.
- [4] L. Buesing, J. H. Macke, and M. Sahani. Spectral learning of linear dynamics from generalised-linear observations with application to neural population data. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1682–1690. Curran Associates, Inc., 2012.
- [5] S. Chaturantabut and D. Sorensen. Nonlinear model reduction via discrete empirical interpolation. SIAM Journal on Scientific Computing, 32(5):2737–2764, 2010.
- [6] J. P. Cunningham and B. M. Yu. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11):1500–1509, 2014.
- [7] K.J. Friston, L. Harrison, and W. Penny. Dynamic causal modelling. NeuroImage, 19(4):1273 – 1302, 2003.
- [8] C. Himpe. optmor - optimization-based model order reduction (version 1.2), 2015.
- [9] C. Himpe. Combined State and Parameter Reduction: For Nonlinear Systems with an Application in Neuroscience. Internationaler Fachverlag für Wissenschaft & Praxis, 2016.
- [10] C. Himpe and M. Ohlberger. Data-driven combined state and parameter reduction for inverse problems. Advances in Computational Mathematics, 41(5):1343–1364, 2015.
- [11] M. Hines. Efficient computation of branched nerve equations. Int J Biomed Comput., 15(1):69–76, 1984.
- [12] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Bulletin of Mathematical Biology, 52(1):25–71, Jan 1990.
- [13] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4):411 – 430, 2000.
- [14] J.-N. Juang and R. S. Pappa. An eigensystem realization algorithm for modal parameter identification and model reduction. Journal of Guidance, Control, and Dynamics, 8(5):620–627, 1985.
- [15] I. T. Jolliffe. Principal component analysis. Springer Series in Statistics. Springer-Verlag, New York, 2005.
- [16] A. R. Kellems, S. Chaturantabut, D. C. Sorensen, and S. J. Cox. Morphologically accurate reduced order modeling of spiking neurons. Journal of Computational Neuroscience, 28(3):477–494, 2010.
- [17] A. R. Kellems, D. Roos, N. Xiao, and S. J. Cox. Low-dimensional, morphologically accurate models of subthreshold membrane potential. Journal of Computational Neuroscience, 27(2):161, 2009.
- [18] C. Kemere, G. Santhanam, B. M. Yu, A. Afshar, S. I. Ryu, T. H. Meng, and K. V. Shenoy. Detecting neural-state transitions using hidden Markov models for motor cortical prostheses. Journal of Neurophysiology, 100(4):2441–2452, 2008.
- [19] B. Kim, SL. Hawes, F. Gillani, LJ. Wallace, and KT. Blackwell. Signaling pathways involved in striatal synaptic plasticity are sensitive to temporal pattern and exhibit spatial specificity. PLoS Comput Biol, 9(3):e1002953, 2013.
- [20] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor. Dynamic mode decomposition: Data-driven modeling of complex systems. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2016.
- [21] M. Lehtimäki, L. Paunonen, and M.-L. Linne. Projection-based order reduction of a nonlinear biophysical neuronal network model. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 1–6, 2019.
- [22] M. Lehtimäki, L. Paunonen, S. Pohjolainen, and M.-L. Linne. Order reduction for a signaling pathway model of neuronal synaptic plasticity. IFAC-PapersOnLine, 50(1):7687 – 7692, 2017. 20th IFAC World Congress.
- [23] C. Lieberman, K. Willcox, and O. Ghattas. Parameter and state model reduction for large-scale statistical inverse problems. SIAM Journal on Scientific Computing, 32(5):2523–2542, 2010.
- [24] B. Peherstorfer, D. Butnaru, K. Willcox, and H.-J. Bungartz. Localized discrete empirical interpolation method. SIAM J. Sci. Comput., 36(1):A168–A192, 2014.
- [25] B. Peherstorfer and K. Willcox. Online adaptive model reduction for nonlinear systems via low-rank updates. SIAM J. Sci. Comput., 37(4):A2123–A2150, 2015.
- [26] B. Petreska, B. M Yu, J. P Cunningham, S. Gopal, Stephen I. Y., K. V. Shenoy, and M. Sahani. Dynamical segmentation of single trials from population neural data. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 756–764. Curran Associates, Inc., 2011.
- [27] P. F. Pinsky and J. Rinzel. Intrinsic and network rhythmogenesis in a reduced traub model for CA3 neurons. Journal of Computational Neuroscience, 1(1):39–60, 1994.
- [28] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
- [29] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. S. Henningson. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics, 641:115–127, 2009.
- [30] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010.
- [31] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
- [32] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz. On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics, 1(2):391–421, 2014.
- [33] M. Yang and A. Armaou. Dissipative distributed parameter systems on-line reduction and control using DEIM/APOD combination. In 2018 Annual American Control Conference (ACC), pages 2557–2562, 2018.
- [34] M. Yang and A. Armaou. Revisiting APOD accuracy for nonlinear control of transport reaction processes: A spatially discrete approach. Chemical Engineering Science, 181:146 – 158, 2018.
- [35] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology, 102(1):614–635, 2009.