MAP-SNN: Mapping Spike Activities with Multiplicity, Adaptability, and Plasticity into Bio-Plausible Spiking Neural Networks
Abstract
Spiking Neural Networks (SNNs) are considered more biologically realistic and power-efficient, as they imitate the fundamental mechanisms of the human brain. Recently, backpropagation (BP) based SNN learning algorithms that utilize deep learning frameworks have achieved good performance. However, bio-interpretability is partially neglected in those BP-based algorithms. Toward bio-plausible BP-based SNNs, we consider three properties in modeling spike activities: Multiplicity, Adaptability, and Plasticity (MAP). In terms of multiplicity, we propose a Multiple-Spike Pattern (MSP) with multiple spike transmission to strengthen model robustness in discrete time-iteration. To realize adaptability, we adopt Spike Frequency Adaptation (SFA) under MSP to decrease spike activities for improved efficiency. For plasticity, we propose a trainable convolutional synapse that models the spike response current to enhance the diversity of spiking neurons for temporal feature extraction. The proposed SNN model achieves competitive performance on the neuromorphic datasets N-MNIST and SHD. Furthermore, experimental results demonstrate that the proposed three aspects are significant for the iterative robustness, spike efficiency, and temporal feature extraction capability of spike activities. In summary, this work proposes a feasible scheme for bio-inspired spike activities with MAP, offering a new neuromorphic perspective for embedding biological characteristics into spiking neural networks.
1 Introduction
Motivated by biological plausibility, the Spiking Neural Network (SNN) was introduced as a noise-robust third-generation neural network Maass (1997). An SNN transmits discrete action potentials (spikes) through adaptive synapses to process information, similar to the communication scheme of the brain. Therefore, the exploration of SNNs is anticipated to help reveal the working mechanisms of the mind and intelligence Ghosh-Dastidar and Adeli (2009). Besides, the event-driven characteristic of SNNs makes them potentially energy-efficient on emerging neuromorphic hardware and relatively mature neuromorphic sensors Vanarse et al. (2016).
However, designs and analyses of SNN training algorithms are challenging. The asynchronous and discrete computing in SNN makes it difficult to apply the mature backpropagation (BP) technique for practical training Pfeiffer and Pfeil (2018). A pseudo-derivative method is introduced in recent work to overcome the non-differentiable problem, allowing SNN to be directly trained using BP Wu et al. (2018). Those BP-based SNNs utilize the basic concept of Recurrent Neural Network (RNN) by converting spiking neurons into an iterative model and simulating neural activities with discrete time-iteration. With BP-based learning algorithms, SNN models can be implemented on a larger scale under mature deep learning frameworks to achieve better performances Wu et al. (2019); Woźniak et al. (2020).
At present, some bio-inspired SNNs reveal the potential of biological characteristics for better performance, such as Lateral Interactions Cheng et al. (2020). Inspired by neuroscience, this work focuses on the neuromorphic properties of spike activities and proposes a feasible bio-plausible scheme with a Multiple-Spike Pattern (MSP), Spike Frequency Adaptation (SFA), and a Convolutional Synapse (ConvSyn), advancing BP-based SNNs toward the goal of neuromorphic computing. In terms of multiplicity, the multiple-spike pattern allows spiking neurons to transmit multiple spikes within the minimal iterative step length. Furthermore, we adopt the SFA mechanism under MSP to realize adaptability for higher efficiency of spike activities. Compared with the single-spike pattern, the proposed multiple-spike pattern with the SFA mechanism alleviates the problem caused by discrete time-iteration and yields better model stability under different step lengths. Besides, inspired by synaptic plasticity, this work proposes a convolutional synapse model that imitates the bio-electric synapse to convert incoming spike trains into pre-synaptic currents, further enhancing the temporal extraction ability of spike activities.
We test the proposed model on two neuromorphic datasets: N-MNIST and SHD Orchard et al. (2015); Cramer et al. (2020). The experimental results show that the proposed model can achieve competitive performance on both N-MNIST and SHD. Furthermore, comparative and analytical experiments demonstrate that the proposed scheme has more robust model stability under different iterative step lengths, fewer but practical spike activities, and better model performance for temporal feature extraction in neuromorphic tasks.
Our main contributions are four-fold:
1. This work reveals the distinction between discrete iterative simulation and biological networks. To the best of our knowledge, this study is the first to discuss the discretization problem in time-iteration and to raise the question of model robustness under different iterative step lengths for BP-based SNN algorithms.
2. This work explores the importance of modeling spike activities and shows researchers in neuromorphic computing more possibilities for embedding biological properties into SNNs.
3. This work proposes the Multiple-Spike Pattern for robust iterative training in Spiking Neural Networks, providing a potential direction for SNN algorithm development.
4. This work proposes a Convolutional Synapse that models biological properties using mature convolution operations, making the SNN algorithm more compatible with deep learning frameworks.
2 Related Work
SNNs are computational models that consist of spiking neurons and interconnecting synapses with adjustable scalar weights. In this section, we discuss spiking neuron models and learning rules of synapses related to our proposed techniques.
2.1 Spiking Neuron Models
Spiking neurons receive temporal input signals and generate a spike when the membrane potential reaches a threshold. Multiple neuron models have been proposed to describe neural spike activities, including the Rate, McCulloch-Pitts, Hodgkin-Huxley, and FitzHugh-Nagumo models Gerstner and Kistler (2002). However, such complex neural models with extensive biological details incur high computational costs. Recently, the leaky integrate-and-fire (LIF) model has drawn much attention. The LIF model captures the intuitive properties of external input accumulating charge across a leaky cell membrane with a clear threshold Tavanaei et al. (2019). An explicitly iterative version of the LIF model is now generally utilized in deep learning frameworks Wu et al. (2018); Cheng et al. (2020), allowing discrete neural spike activities in deep SNNs. (The mathematical derivation of the iterative LIF model is included in the supplementary material.)
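For concreteness, the following is a minimal sketch of one discrete-time LIF update in PyTorch under the single-spike pattern; the decay factor, threshold, and reset-by-subtraction convention are generic illustrative choices and stand in for the exact derivation given in the supplementary material.

```python
import torch

def lif_step(v, x, tau=0.8, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire neuron.

    v    : membrane potential from the previous step (tensor)
    x    : pre-synaptic input current at the current step (tensor)
    tau  : decay factor modeling the membrane leak (assumed value)
    v_th : firing threshold (assumed value)
    Returns the updated potential and a binary spike tensor (single-spike pattern).
    """
    v = tau * v + x                    # leaky integration of the input current
    spike = (v >= v_th).float()        # fire when the threshold is reached
    v = v - spike * v_th               # subtract the consumed potential after firing
    return v, spike

# Toy usage: simulate 10 steps for a layer of 4 neurons driven by random input.
v = torch.zeros(4)
for t in range(10):
    v, s = lif_step(v, torch.rand(4))
```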

2.2 Learning Rules in SNNs
The strengths of synapses are modeled as scalar weights in SNNs, which can be dynamically adjusted following a specific learning rule. The learning rules of SNNs are actively explored and can be roughly grouped into three directions: conversion-based methods that map trained ANNs to SNNs Han et al. (2020); supervised learning with spikes that directly trains SNNs using variations of error backpropagation Lee et al. (2016); Wu et al. (2018); and local learning rules at synapses, such as schemes exploring spike-timing-dependent plasticity (STDP) Song et al. (2000). Recent works have successfully applied the backpropagation algorithm to SNNs by defining pseudo-derivatives for the non-differentiable spike activities Lee et al. (2016); Wu et al. (2018); Tavanaei et al. (2019); Cheng et al. (2020). These BP-based SNNs can be viewed as extensions of traditional Recurrent Neural Networks (RNNs): they apply error backpropagation through time and follow gradient descent to adjust the connection weights. BP-based algorithms can take advantage of mature deep learning frameworks for network design and operating efficiency, and have therefore become an essential branch of SNN algorithm research.
2.3 Biological Properties in BP-based SNNs
As a main target of neuromorphic computing, BP-based SNNs combined with biological characteristics have attracted considerable attention. Recently, some works have revealed the potential of biological characteristics in SNNs to improve performance, such as Lateral Interactions for intra-layer connections Cheng et al. (2020) and a delayed Spike Response Model (SRM) for synaptic expressions Shrestha and Orchard (2018), providing a good entry point for training BP-based SNNs with bio-interpretability.
3 Methodology
As shown in Figure 2, we model the spike activities guided by the MAP principles. The implementation of our proposed model, along with the motivation and benefits of each improvement, is presented in this section. (The derivation details of the formulas can be found in the supplement.)

3.1 Multiplicity with Multiple-Spike Pattern
In this part, the discretization problem of iterative models is discussed first. Then, the multiple-spike pattern that represents the model’s multiplicity is presented.
3.1.1 Problem under Discrete-Time-Iteration
BP-based SNNs simulate spike activities by discrete time-iteration. However, the discreteness causes considerable problems. Within recursive time-iterations, a minimum iterative step length needs to be determined for the simulation. The mismatch between continuous and discrete spike activities is illustrated in Figure 1. For distinction, we define BP-based SNNs with binary digit transmission as the Single-Spike Pattern (SSP). With SSP, modeling spike activities becomes problematic as the iterative step length grows larger: since models with SSP represent spike activities as binary sequences, only one spike activity can be handled per iterative step. Under this circumstance, the temporal feature is restricted by lost spikes. Therefore, in discrete time-iteration, a careful selection of the iterative step length is always an indispensable part of SSP. Inspired by spike multiplicity, we propose a Multiple-Spike Pattern (MSP) to ease the problem by replenishing spike activities within each iterative step, as shown in Figure 1.
3.1.2 Multiple-Spike Pattern for Discrete Iteration
As compensation for the difference from neurons' natural behavior under such discrete iteration, MSP uses integers to represent the intensity of neural spike activities, allowing multiple spikes to be transmitted within one iterative step. The comparison between the SSP and the MSP is shown in Figure 3(a) and Figure 3(b). By replenishing the spike number within iterative steps, the multiple-spike pattern avoids the spike-loss problem and achieves potentially higher expressiveness of spike activities during the time iteration.
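As a small illustration of the difference, the sketch below bins a continuous spike train into iterative steps; the spike times and step length are made up for the example. Under SSP each step stores at most a binary value, so spikes are lost at coarse step lengths, whereas MSP stores the integer count and preserves them.

```python
import numpy as np

def to_ssp_and_msp(spike_times, t_total, dt):
    """Bin a continuous spike train (times in ms) into iterative steps of length dt."""
    n_steps = int(np.ceil(t_total / dt))
    counts = np.zeros(n_steps, dtype=int)
    for t in spike_times:
        counts[int(t // dt)] += 1
    ssp = (counts > 0).astype(int)   # single-spike pattern: binary, extra spikes are lost
    msp = counts                     # multiple-spike pattern: integer spike count per step
    return ssp, msp

spike_times = [1.2, 1.7, 1.9, 6.4, 6.6]            # five spikes in a 10 ms window
ssp, msp = to_ssp_and_msp(spike_times, 10.0, 2.0)  # coarse step length of 2 ms
print(ssp)  # [1 0 0 1 0]  -> only 2 of the 5 spikes survive
print(msp)  # [3 0 0 2 0]  -> all 5 spikes are preserved
```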
3.1.3 Equivalence under Multiple-Spike Pattern
The MSP proposed in this work can be equivalently converted into the SSP by reducing the time scale correspondingly. One example is shown in Figure 3(c), where the two patterns result in the same spike activities during a specific time interval. Furthermore, under this equivalence, the multiple-spike pattern implements the same neuron activity with a more relaxed selection of step lengths, which allows the model to remain stable for arbitrary iterative step lengths.
3.2 Adaptability with Spike Frequency Adaptation
3.2.1 Spike Frequency Adaptation for Spike Activities
Spike-frequency adaptation (SFA) is a biological neural phenomenon describing how a neuron's firing frequency decreases over time when it is stimulated with constant input. The phenomenon occurs in both vertebrates and invertebrates, in peripheral and central neurons, and plays an essential role in neural information processing Benda and Herz (2003). The SFA mechanism leads to non-linearity in spike activities and enriches the temporal feature carried by a single spike. Specifically, Adibi et al. (2013) suggest that the SFA mechanism in real neurons, such as those in the whisker sensory cortex, helps improve the information capacity of a single spike as measured by the average mutual information (MI). Therefore, this work adopts the SFA mechanism with MSP for higher efficiency in spike transmission.

3.2.2 Iterative LIF with SFA
By unifying the accumulation activity and the spike activity, the iterative LIF model with SFA is defined as:

$$u^{t} = \tau \left(u^{t-1} - u_{o}^{t-1}\right) + I^{t} \tag{1}$$

Here $u^{t}$ is the neuron's membrane potential, $u_{o}^{t-1}$ is the consumed membrane potential that produces multiple spike activities, $I^{t}$ is the normalized pre-synaptic input current, and $\tau$ is the decay factor describing the leaky activity of spiking neurons.
$$a^{t} = \frac{u^{t}}{u_{th}} \tag{2}$$

$$o^{t} = \left\lfloor \log_{\beta}\!\left(1 + (\beta - 1)\, a^{t}\right) \right\rfloor \tag{3}$$

$$u_{o}^{t} = u_{th}\, \frac{\beta^{\,o^{t}} - 1}{\beta - 1} \tag{4}$$

Here $u_{th}$ is the threshold basis of the neuron, $a^{t}$ is the estimated intensity of spike activity, $o^{t}$ is the integer number of output spike activities, and $\beta$ is the inhibition coefficient that controls the temporary raising of the threshold, making the intensity of spike activity drop exponentially.
3.2.3 Adaptability with Fewer Spike Activities
As shown in Figure 4, when the membrane potential increases, the intensity of spike activity gradually deviates from linearity, showing adaptability to the current input. In this case, the total number of spike activities decreases, and each spike activity brings more features, potentially saving computation operations with less spike transmission while maintaining high performance.
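A minimal sketch of one iterative step of the LIF neuron with SFA under MSP is given below, assuming the geometric threshold-raising rule of Eqs. (1)-(4); the constants `tau`, `u_th`, and `beta` are illustrative values rather than the paper's tuned hyperparameters.

```python
import math
import torch

def lif_sfa_step(u, u_o, x, tau=0.8, u_th=1.0, beta=2.0):
    """One iterative step of the LIF neuron with SFA under the multiple-spike pattern.

    u    : membrane potential from the previous step
    u_o  : membrane potential consumed by the previous step's spikes
    x    : normalized pre-synaptic input current at this step
    tau  : decay factor; u_th : threshold basis; beta : inhibition coefficient (> 1)
    Returns the new potential, the newly consumed potential, and the integer spike count.
    """
    u = tau * (u - u_o) + x                                            # Eq. (1): leak, reset, input
    a = torch.clamp(u, min=0.0) / u_th                                 # Eq. (2): estimated intensity
    o = torch.floor(torch.log1p((beta - 1.0) * a) / math.log(beta))    # Eq. (3): integer spike count
    u_o = u_th * (beta ** o - 1.0) / (beta - 1.0)                      # Eq. (4): consumed potential
    return u, u_o, o

# Toy usage: stronger inputs produce more spikes, but sub-linearly due to adaptation.
u = torch.zeros(4); u_o = torch.zeros(4)
u, u_o, o = lif_sfa_step(u, u_o, torch.tensor([0.5, 1.2, 3.5, 8.0]))
print(o)  # tensor([0., 1., 2., 3.])
```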

3.2.4 Pseudo-Derivative of Spike Activity
In order to apply backpropagation, we assign a particular pseudo-derivative as follows:

$$\frac{\partial o^{t}}{\partial u^{t}} = 1 \tag{5}$$
This pseudo-derivative provides a unit vector for gradient descent without complicated computations.
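One straightforward way to realize this pseudo-derivative in PyTorch is a custom autograd function that floors the estimated intensity in the forward pass and passes the gradient through unchanged in the backward pass; this is a generic sketch of the idea rather than the exact implementation released with the paper.

```python
import torch

class SpikeCount(torch.autograd.Function):
    """Integer spike count with a straight-through pseudo-derivative of 1."""

    @staticmethod
    def forward(ctx, intensity):
        # Forward: non-differentiable floor to an integer number of spikes.
        return torch.floor(intensity)

    @staticmethod
    def backward(ctx, grad_output):
        # Backward: treat the operation as the identity (derivative = 1).
        return grad_output

intensity = torch.tensor([0.3, 1.7, 2.9], requires_grad=True)
spikes = SpikeCount.apply(intensity)
spikes.sum().backward()
print(spikes)          # tensor([0., 1., 2.])
print(intensity.grad)  # tensor([1., 1., 1.]) -- the unit pseudo-derivative
```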
3.3 Plasticity with Convolutional Synapse
3.3.1 Modeling Spike Activity through Electric Synapse
In biological neural networks with electric synapses, a spike is considered to be generated in the soma, transmitted through the axon to the synapse, and converted into an electric current flowing into the connected neuron's dendrites. To describe spike transmission through electric synapses, Gerstner and Kistler (2002) propose the Spike Response Model (SRM), which transforms spike activities into current signals flowing into post-synaptic dendrites, defined as:
$$c(t) = (\varepsilon \ast o)(t) = \int_{0}^{\infty} \varepsilon(s)\, o(t - s)\, \mathrm{d}s \tag{6}$$

Here $o(t)$ denotes the spike activities, $c(t)$ is the spike response signal transmitted from the axon terminal to the dendrite over time, and $\varepsilon(t)$ is the spike response kernel relating current intensities with spike activities.
3.3.2 Potential Plasticity in Spike Response Model (SRM)
The SRM provides richer temporal information for the network by allowing a certain spike activity to have a varying effect over time. However, the constant parameters of the response kernel are usually pre-defined as a "ground truth" before training, which limits the potential diversity and plasticity of the SRM. Shrestha and Orchard (2018) first considered the plasticity of the SRM by setting the response delay as a learnable parameter, which unsurprisingly improved performance. As shown in Figure 5, this work further frees the shape parameters for better plasticity, allowing both shape parameters and the delay parameter of the response kernel to be learnable during training. In this case, the plasticity of spike activity allows each neuron to learn different temporal features, improving the complexity and fitting ability of the model.
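A natural parameterization that matches this description is a difference-of-exponentials response with a learnable delay; the sketch below uses that form purely as an assumption about the kernel shape, with two trainable shape parameters and one trainable delay.

```python
import torch
import torch.nn as nn

class ResponseKernel(nn.Module):
    """Learnable spike response kernel: difference of exponentials with a delay.

    tau_rise and tau_decay (shape parameters) and delay are trainable, so the
    temporal response profile can be adjusted by backpropagation during training.
    """

    def __init__(self, tau_rise=1.0, tau_decay=4.0, delay=0.0):
        super().__init__()
        self.tau_rise = nn.Parameter(torch.tensor(float(tau_rise)))
        self.tau_decay = nn.Parameter(torch.tensor(float(tau_decay)))
        self.delay = nn.Parameter(torch.tensor(float(delay)))

    def forward(self, t):
        """Evaluate the kernel on a grid of time points t (tensor, in ms)."""
        s = torch.clamp(t - self.delay, min=0.0)  # shift by the learnable delay
        return torch.exp(-s / self.tau_decay) - torch.exp(-s / self.tau_rise)

kernel = ResponseKernel()
t = torch.arange(0.0, 20.0, 1.0)
print(kernel(t))  # rises to a peak, then decays toward zero
```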

3.3.3 Spike Response Model as 1-D Convolution
As shown in Figure 5, the SRM kernels used for spike activities rise to a peak and subsequently decrease toward zero. Therefore, it is possible to ignore the long-time tail of the spike response to reduce computational complexity. This work computes the SRM with a one-dimensional convolution operation over a valid time window, making the SRM more compatible with today's deep learning frameworks. The convolution operation follows as:
$$c^{t} = \sum_{s=0}^{\lfloor T_{w}/\Delta t \rfloor} k(s\,\Delta t)\; o^{\,t-s} \tag{7}$$

Here $k$ is the one-dimensional convolutional kernel of spike responses, modeled with three learnable variables: the two shape parameters and the delay parameter. $T_{w}$ is the time-window constant that describes the necessary scope of the spike response, defined in Eq. (8) from the minimal iterative step length $\Delta t$ and the convolution kernel size $N_{k}$:

$$T_{w} = N_{k} \cdot \Delta t \tag{8}$$
3.3.4 Dendrites with Negative Masking Filter
In spiking neurons, dendrites receive signals from pre-synaptic axon terminals and integrate all currents for the neural soma. In this work, we set up a negative masking filter based on an activation function Ramachandran et al. (2018) to shield negative integrated input currents, improving the stability of the membrane potential in the LIF model.
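Putting the pieces of Section 3.3 together, the sketch below samples a learnable response kernel on the iterative time grid, applies it as a grouped 1-D convolution over the time window, and masks negative integrated currents; the per-channel kernel sharing, the kernel shape, the window size, and the use of ReLU as the negative masking filter are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvSynapse(nn.Module):
    """Convolutional synapse: turns incoming spike counts into pre-synaptic currents."""

    def __init__(self, n_channels, kernel_size=16, dt=1.0):
        super().__init__()
        self.n_channels = n_channels
        self.kernel_size = kernel_size
        # Learnable shape (tau_rise, tau_decay) and delay parameters, one set per channel.
        self.tau_rise = nn.Parameter(torch.ones(n_channels))
        self.tau_decay = nn.Parameter(torch.ones(n_channels) * 4.0)
        self.delay = nn.Parameter(torch.zeros(n_channels))
        # Time window T_w = kernel_size * dt, sampled at the minimal iterative step length.
        self.register_buffer("t_grid", torch.arange(kernel_size, dtype=torch.float32) * dt)

    def forward(self, spikes):
        """spikes: (batch, channels, time) integer spike counts (MSP)."""
        s = torch.clamp(self.t_grid[None, :] - self.delay[:, None], min=0.0)
        kernel = torch.exp(-s / self.tau_decay[:, None]) - torch.exp(-s / self.tau_rise[:, None])
        kernel = kernel.flip(-1).unsqueeze(1)                  # (channels, 1, kernel_size), causal
        current = F.conv1d(
            F.pad(spikes.float(), (self.kernel_size - 1, 0)),  # left-pad to keep the output causal
            kernel, groups=self.n_channels)
        return F.relu(current)                                 # negative masking filter on dendrites

syn = ConvSynapse(n_channels=3)
spike_counts = torch.randint(0, 3, (1, 3, 50)).float()
current = syn(spike_counts)   # (1, 3, 50) non-negative pre-synaptic currents
print(current.shape)
```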
4 Experiment
4.1 Experiment Setting
The proposed model is built on the deep learning framework PyTorch (https://pytorch.org/), and the weights are initialized using PyTorch's default xavier_normal_ method (https://pytorch.org/docs/master/nn.init.html). Besides, we use Adam as the optimizer and Cross-Entropy as the criterion during training. The hyperparameters of the experimental settings are included in the supplementary materials together with the source code.
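For reference, a minimal training setup matching these settings might look as follows; the network body, layer sizes, and learning rate are placeholders, as the actual hyperparameters are listed in the supplementary material.

```python
import torch
import torch.nn as nn

# Placeholder network: the actual MAP-SNN layers replace this sequential stack.
model = nn.Sequential(nn.Linear(700, 400), nn.ReLU(), nn.Linear(400, 10))

# Xavier-normal initialization for all weight matrices, as in the experimental setting.
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_normal_(m.weight)
        nn.init.zeros_(m.bias)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate is an assumed value
criterion = nn.CrossEntropyLoss()
```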
To evaluate the performance of the proposed SNN model, we selected two neuromorphic datasets: N-MNIST Orchard et al. (2015) and SHD Cramer et al. (2020). They serve as benchmarks for classification error rates in neuromorphic tasks, including the ablation experiments. In addition, we set up control experiments to analyze and discuss the contribution of the three characteristics (MAP) to model performance.
4.2 Classification on Neuromorphic Datasets
To clearly demonstrate the reliability of our approaches, we train our SNN models on spike-based datasets for image and sound classification and compare the achieved error rates with related work on SNN algorithms.
N-MNIST is a neuromorphic dataset of handwritten digits containing 60,000 training samples and 10,000 test samples. The samples of N-MNIST are event-based spike signals, captured by recording digit images displayed on an LCD screen with a Dynamic Vision Sensor (DVS). Spiking Heidelberg Digits (SHD) is a spike-based speech dataset consisting of recordings of the spoken digits 0 to 9 in both English and German. The audio recordings are converted into spikes using an artificial inner-ear model, yielding temporal features over 700 input channels, with 8,156 training samples and 2,264 test samples.
Table 1: Classification error rates on N-MNIST.

Model | Size of Hidden Layer | Error Rate (%)
---|---|---
Spiking-MLP Cohen et al. (2016) | 10000 | 8.13
Spiking-CNN Neil and Liu (2016) | - | 4.28
LSTM Neil et al. (2016) | - | 2.95
Phased-LSTM Neil et al. (2016) | - | 2.62
MLP Lee et al. (2016) | 800 | 2.20
Spiking-MLP Lee et al. (2016) | - | 1.26
STBP Wu et al. (2018) | - | 1.22
Spiking-MLP Fang et al. (2021) | - | 1.60
this work (SSP) | - | 1.60
this work (ConvSyn) | - | 1.43
this work (MSP) | - | 1.11
MAP-SNN (MSP+ConvSyn) | - | 1.06
Table 2: Classification error rates on SHD.

Model | Size of Hidden Layer | Error Rate (%)
---|---|---
Spiking-MLP Cramer et al. (2020) | - | 52.5
SNN-base Cramer et al. (2020) | - | 28.6
R-SNN Cramer et al. (2020) | - | 16.8
R-SNN Zenke and Vogels (2021) | - | 18.0
SRNN Yin et al. (2020) | - | 15.6
Spiking-MLP Fang et al. (2021) | - | 14.3
this work (SSP) | - | 36.1
this work (ConvSyn) | - | 33.0
this work (MSP) | - | 17.1
MAP-SNN (MSP+ConvSyn) | - | 13.0
We compare the obtained optimal model performance with state-of-the-art SNN models, for N-MNIST in Table 1 and for SHD in Table 2, including ablation experiments with MSP and ConvSyn alone. The experimental results show that MAP-SNN reaches an error rate of 1.06% on N-MNIST and 13.0% on SHD, the best performance among SNN-based algorithms under the same Multilayer Perceptron (MLP) structure. Furthermore, we observe that MSP and ConvSyn each improve model accuracy independently and can also be combined for significantly better performance, which supports the complementarity of the MAP properties.

4.3 Analysis and Discussion
To explore the potential of the proposed MSP, SFA, and ConvSyn, we carry out control experiments on the N-MNIST and SHD datasets and discuss the impact of the MAP properties on model performance.
4.3.1 The Impact of Multiplicity on Discrete Iteration
The selection of the minimal iterative step length influences model performance in discrete iterative models. For completeness of the analysis, we examine this instability in ablation experiments by building control experiments under the MLP architecture with different iterative step lengths, as shown in Figure 6(a) and Figure 6(b). The experiments are based on N-MNIST and SHD, respectively, where the unified network structure is 34×34-200-10 on N-MNIST and 700-400-10 on SHD. With the additional properties, the error rates of the model improve significantly: compared with the SSP benchmark, our MAP-SNN with complementary MSP and ConvSyn reduces error rates on both N-MNIST and SHD, which demonstrates the reliability of the proposed methods. Furthermore, the model trained with MSP keeps almost constant error rates across different iterative step lengths, supporting that multiplicity alleviates the discretization problem and improves the model's stability on time-iteration with arbitrary steps.
4.3.2 The Impact of Adaptability on Spike Efficiency
To demonstrate the effectiveness of SFA in spike reduction, we establish a set of controlled experiments on the SHD dataset with the 700-400-10 MLP structure. Figure 6(c) shows the error rates and spike counts during training for models in both SFA mode and Linear mode. The experimental results show that SFA effectively suppresses spike activities while slightly improving model accuracy. In this case, the reduced signal transmission decreases the amount of computation in the synapses, which is significant for saving power on neuromorphic hardware based on spike transmission.
4.3.3 The Impact of Plasticity on Feature Extraction
To highlight the importance of plasticity for feature extraction, we set up a control experiment comparing the trainable ConvSyn with an untrainable SRM. As shown in Figure 6(d), the experiment is run on SHD with the 700-400-10 MLP structure, showing the changes in model error rate and loss over the training epochs. The experimental results show that plasticity allows the model to converge faster and reach a lower error rate, demonstrating the advantage of ConvSyn in temporal feature extraction. We conclude that plasticity helps shorten the training process and improves the model's performance.
5 Conclusions
Inspired by the biological MAP properties of spikes, we model spike activities with MSP, SFA, and ConvSyn toward bio-plausible SNNs with better performance. Experimental results confirm the superiority of the proposed model. This work demonstrates the potency of effectively modeling spike activities, offering a unique perspective for researchers to re-examine the significance of biological facts.
References
- Adibi et al. [2013] Mehdi Adibi, Colin W. G. Clifford, and Ehsan Arabzadeh. Informational Basis of Sensory Adaptation: Entropy and Single-Spike Efficiency in Rat Barrel Cortex. J. Neurosci., 33(37):14921–14926, 2013.
- Benda and Herz [2003] Jan Benda and Andreas V. M. Herz. A Universal Model for Spike-Frequency Adaptation. Neural Computation, 15(11):2523–2564, 2003.
- Cheng et al. [2020] Xiang Cheng, Yunzhe Hao, Jiaming Xu, and Bo Xu. LISNN: Improving Spiking Neural Networks with Lateral Interactions for Robust Object Recognition. In IJCAI, pages 1519–1525, July 2020.
- Cohen et al. [2016] Gregory K. Cohen, Garrick Orchard, Sio-Hoi Ieng, Jonathan Tapson, Ryad B. Benosman, and André van Schaik. Skimming Digits: Neuromorphic Classification of Spike-Encoded Images. Frontiers in Neuroscience, 10, 2016.
- Cramer et al. [2020] Benjamin Cramer, Yannik Stradmann, Johannes Schemmel, and Friedemann Zenke. The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, pages 1–14, 2020.
- Fang et al. [2021] Haowen Fang, Brady Taylor, Ziru Li, Zaidao Mei, Hai Helen Li, and Qinru Qiu. Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning. In DAC, pages 361–366, 2021.
- Gerstner and Kistler [2002] Wulfram Gerstner and Werner M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, August 2002.
- Ghosh-Dastidar and Adeli [2009] Samanwoy Ghosh-Dastidar and Hojjat Adeli. Spiking neural networks. Int. J. Neur. Syst., 19(04):295–308, 2009.
- Han et al. [2020] Bing Han, Gopalakrishnan Srinivasan, and Kaushik Roy. RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network. In CVPR, pages 13558–13567, 2020.
- Lee et al. [2016] Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. Training Deep Spiking Neural Networks Using Backpropagation. Frontiers in Neuroscience, 10, 2016.
- Maass [1997] Wolfgang Maass. Networks of spiking neurons: The third generation of neural network models. Neural Networks, 10(9):1659–1671, 1997.
- Neil and Liu [2016] Daniel Neil and Shih-Chii Liu. Effective sensor fusion with event-based sensors and deep network architectures. In ISCAS, pages 2282–2285, 2016.
- Neil et al. [2016] Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu. Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences. In NIPS, volume 29, 2016.
- Orchard et al. [2015] Garrick Orchard, Ajinkya Jayawant, Gregory K. Cohen, and Nitish Thakor. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades. Frontiers in Neuroscience, 9, 2015.
- Pfeiffer and Pfeil [2018] Michael Pfeiffer and Thomas Pfeil. Deep Learning With Spiking Neurons: Opportunities and Challenges. Frontiers in Neuroscience, 12, 2018.
- Ramachandran et al. [2018] Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Workshop Track Proceedings. OpenReview.net, 2018.
- Shrestha and Orchard [2018] Sumit Bam Shrestha and Garrick Orchard. SLAYER: Spike Layer Error Reassignment in Time. In NIPS, volume 31, 2018.
- Song et al. [2000] Sen Song, Kenneth D. Miller, and L. F. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci, 3(9):919–926, September 2000.
- Tavanaei et al. [2019] Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, and Anthony Maida. Deep learning in spiking neural networks. Neural Networks, 111:47–63, 2019.
- Vanarse et al. [2016] Anup Vanarse, Adam Osseiran, and Alexander Rassau. A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors. Frontiers in Neuroscience, 10, 2016.
- Woźniak et al. [2020] Stanisław Woźniak, Angeliki Pantazi, Thomas Bohnstingl, and Evangelos Eleftheriou. Deep learning incorporating biologically inspired neural dynamics and in-memory computing. Nat Mach Intell, 2(6):325–336, June 2020.
- Wu et al. [2018] Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, and Luping Shi. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks. Frontiers in Neuroscience, 12, 2018.
- Wu et al. [2019] Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Yuan Xie, and Luping Shi. Direct Training for Spiking Neural Networks: Faster, Larger, Better. In AAAI, volume 33, pages 1311–1318, July 2019.
- Yin et al. [2020] Bojian Yin, Federico Corradi, and Sander M. Bohté. Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks. In ICONS, pages 1–8, 2020.
- Zenke and Vogels [2021] Friedemann Zenke and Tim P. Vogels. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Computation, 33(4):899–925, 2021.