
Supplementary Materials for Evolving Connectivity for Recurrent Spiking Neural Networks

Appendix A Detailed Experiment Settings

A.1 Hardware

All experiments were conducted on a single GPU server with the specifications listed below. All efficiency experiments, including wall-clock time measurements, were executed on a single GPU of this server.

  • 8x NVIDIA Titan RTX GPU (24GB VRAM)

  • 2x Intel(R) Xeon(R) Silver 4110 CPU

  • 252GB Memory

A.2 Hyperparameters

This section provides the hyperparameters for our framework, Evolving Connectivity (EC), and for the baselines, Evolution Strategies (ES) and Surrogate Gradient (SG). All methods use the same hyperparameter set across the three locomotion tasks. The neural network settings are also listed below. After each table, we include a brief, illustrative code sketch showing how its key settings enter the corresponding method.

Table 1: Hyperparameters for Evolving Connectivity (EC)

  Hyperparameter               Value
  Population size N            10240
  Learning rate η              0.15
  Exploration probability ϵ    10⁻³
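To make the role of these values concrete, the following is a minimal, illustrative sketch of a NES-style update on a connection-probability matrix, assuming Bernoulli-sampled connectivity masks, rank-normalized fitness, and probabilities clipped to [ϵ, 1−ϵ]. The function names, the placeholder fitness, and the small matrix size are assumptions made for illustration, not the released implementation.

```python
# Illustrative sketch only (not the released EC code): a NES-style update on
# connection probabilities, showing where N, eta, and eps from Table 1 appear.
import jax
import jax.numpy as jnp

N, ETA, EPS = 10240, 0.15, 1e-3   # Table 1 values
D = 32                            # small matrix for illustration (paper: 256 neurons)

def ec_step(key, theta):
    """One hypothetical update of the connection-probability matrix theta."""
    k_mask, k_fit = jax.random.split(key)
    # Sample a population of binary connectivity masks from Bernoulli(theta).
    masks = jax.random.bernoulli(k_mask, theta, (N, D, D)).astype(jnp.float32)
    # Placeholder fitness; in practice each mask parameterizes an RSNN policy
    # that is rolled out in the environment to obtain an episodic return.
    fitness = jax.random.normal(k_fit, (N,))
    # Rank-normalize fitness to reduce sensitivity to reward scale.
    f_tilde = jnp.argsort(jnp.argsort(fitness)) / (N - 1) - 0.5
    # Monte-Carlo estimate of the ascent direction and clipped update, keeping
    # every connection probability at least eps away from 0 and 1.
    grad = jnp.mean(f_tilde[:, None, None] * (masks - theta), axis=0)
    return jnp.clip(theta + ETA * grad, EPS, 1.0 - EPS)

theta = ec_step(jax.random.PRNGKey(0), jnp.full((D, D), 0.5))
```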
Table 2: Hyperparameters for Evolution Strategies (ES)

  Hyperparameter                 Value    Target
  Population size N              10240    All
  Learning rate η                0.15     RSNN
                                 0.01     Deep RNN
  Noise standard deviation σ     0.3      RSNN
                                 0.02     Deep RNN
  Weight decay                   0.1      All
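Similarly, the sketch below shows one possible OpenAI-style ES step with antithetic sampling, using the RSNN values from Table 2 (η = 0.15, σ = 0.3, weight decay 0.1). The flattened parameter dimension, the placeholder fitness, and the way weight decay is folded into the update are illustrative assumptions.

```python
# Illustrative sketch only (not the baseline code): an OpenAI-style ES step
# with antithetic sampling, using the RSNN values from Table 2.
import jax
import jax.numpy as jnp

N, ETA, SIGMA, WD = 10240, 0.15, 0.3, 0.1
D = 1024                          # flattened parameter dimension, illustrative

def es_step(key, params):
    """One hypothetical ES update of a flattened parameter vector."""
    k_noise, k_fit = jax.random.split(key)
    # Antithetic perturbations: N/2 noise vectors, each used with both signs.
    half = jax.random.normal(k_noise, (N // 2, D))
    noise = jnp.concatenate([half, -half], axis=0)
    # Placeholder fitness of each perturbed candidate params + SIGMA * noise.
    fitness = jax.random.normal(k_fit, (N,))
    f_tilde = jnp.argsort(jnp.argsort(fitness)) / (N - 1) - 0.5
    # Gradient estimate of expected fitness, with weight decay as an L2 pull.
    grad = jnp.mean(f_tilde[:, None] * noise, axis=0) / SIGMA
    return params + ETA * (grad - WD * params)

params = es_step(jax.random.PRNGKey(0), jnp.zeros(D))
```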
Table 3: Hyperparameters for Surrogate Gradient (SG)

  Hyperparameter                    Value
  Proximal Policy Optimization (PPO)
  Batch size                        2048
  BPTT length                       16
  Learning rate η                   3×10⁻⁴
  Clip gradient norm                0.5
  Discount γ                        0.99
  GAE λ                             0.95
  PPO clip                          0.2
  Value loss coefficient            1.0
  Entropy coefficient               10⁻³
  Surrogate Gradient
  Surrogate function                SuperSpike
  Surrogate function parameter β    10
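The SuperSpike entry refers to the surrogate derivative 1/(β|v| + 1)², which is used in the backward pass in place of the Heaviside step's zero-almost-everywhere derivative. Below is a minimal JAX sketch of this substitution with β = 10; the threshold placement at 0 and the normalization of the membrane potential are assumptions made for illustration.

```python
# Illustrative sketch only: a Heaviside spike function whose backward pass uses
# the SuperSpike surrogate derivative 1 / (beta * |v| + 1)^2 with beta = 10.
import jax
import jax.numpy as jnp

BETA = 10.0

@jax.custom_vjp
def spike(v):
    # Forward pass: emit a spike where the (centered) membrane potential
    # exceeds the threshold.
    return (v > 0.0).astype(v.dtype)

def spike_fwd(v):
    return spike(v), v

def spike_bwd(v, g):
    # Backward pass: replace the step function's derivative with the
    # SuperSpike surrogate.
    return (g / (BETA * jnp.abs(v) + 1.0) ** 2,)

spike.defvjp(spike_fwd, spike_bwd)

# Gradients now flow through the discrete spiking nonlinearity.
grads = jax.grad(lambda v: spike(v).sum())(jnp.array([-0.2, 0.0, 0.3]))
```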
Table 4: Settings of neural networks

  Hyperparameter                           Value
  Recurrent Spiking Neural Network (RSNN)
  Number of neurons d_h                    256
  Excitatory ratio                         50%
  Simulation time per environment step     16.6 ms
  Simulation timestep Δt                   0.5 ms
  Synaptic time constant τ_syn             5.0 ms
  Membrane time constant τ_m               10.0 ms
  Output time constant τ_out               10.0 ms
  Input membrane resistance R_in¹          0.1 · τ_m · √(2/d_in)
  Hidden membrane resistance R_h¹          1.0 · (τ_m/τ_syn) · √(2/d_h)
  Output membrane resistance R_out¹        5.0 · τ_out · √(2/d_h)
  Gated Recurrent Unit (GRU)
  Hidden size                              256
  Long Short-Term Memory (LSTM)
  Hidden size                              128

  ¹ Resistances are set following (snn_initialization) to preserve variance.
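As a reference for how the RSNN constants interact, the following is a minimal sketch of one common exponential-Euler discretization of current-based leaky integrate-and-fire dynamics using the Table 4 values. The unit threshold, the reset rule, the input dimensionality, and the omission of the Dale's-law sign constraints and the leaky readout are simplifying assumptions, not the exact model used in the paper.

```python
# Illustrative sketch only: an exponential-Euler discretization of current-based
# LIF dynamics with the Table 4 time constants and resistance scalings.
import jax
import jax.numpy as jnp

DT, TAU_SYN, TAU_M = 0.5, 5.0, 10.0        # ms, from Table 4
D_IN, D_H = 17, 256                        # input size is task-specific (assumed)
R_IN = 0.1 * TAU_M * jnp.sqrt(2.0 / D_IN)  # input membrane resistance
R_H = 1.0 * (TAU_M / TAU_SYN) * jnp.sqrt(2.0 / D_H)  # hidden membrane resistance

ALPHA_SYN = jnp.exp(-DT / TAU_SYN)   # synaptic current decay per 0.5 ms step
ALPHA_M = jnp.exp(-DT / TAU_M)       # membrane potential decay per 0.5 ms step

def lif_step(state, x, w_in, w_rec):
    """One 0.5 ms update of synaptic currents, membrane potentials, and spikes."""
    i, v, s = state
    i = ALPHA_SYN * i + R_IN * (x @ w_in) + R_H * (s @ w_rec)
    v = ALPHA_M * v + (1.0 - ALPHA_M) * i
    s = (v > 1.0).astype(v.dtype)    # fire on crossing a unit threshold (assumed)
    v = v * (1.0 - s)                # reset fired neurons to zero (assumed)
    return (i, v, s), s

k_in, k_rec = jax.random.split(jax.random.PRNGKey(0))
w_in = jax.random.normal(k_in, (D_IN, D_H))
w_rec = jax.random.normal(k_rec, (D_H, D_H))
state = (jnp.zeros(D_H),) * 3
xs = jnp.zeros((33, D_IN))           # ~16.6 ms of input at 0.5 ms resolution
state, spikes = jax.lax.scan(lambda c, x: lif_step(c, x, w_in, w_rec), state, xs)
```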

Appendix B License

In this work, we utilized the locomotion tasks and physics simulator from Brax (brax), as well as the JAX framework (jax). Both are released under the Apache 2.0 License.

Appendix C Source Code

Upon publication, we will release the source code and trained 1-bit RSNN models used in this paper to ensure reproducibility and facilitate further research, in line with NeurIPS’s commitment to open and reproducible research. The code and data will be made available through a public repository (e.g., GitHub, GitLab, or Bitbucket) and will include detailed documentation and instructions for use.