Supplementary Materials for Evolving Connectivity for Recurrent Spiking Neural Networks
Appendix A Detailed Experiment Settings
A.1 Hardware
All experiments were conducted on a single GPU server with the following specifications. All efficiency experiments, including wall-clock time measurements, were executed on a single GPU of this server.
- 8× NVIDIA Titan RTX GPUs (24 GB VRAM each)
- 2× Intel(R) Xeon(R) Silver 4110 CPUs
- 252 GB memory
A.2 Hyperparameters
In this section, we provide the hyperparameters for our framework, Evolving Connectivity (EC), and the baselines, Evolution Strategies (ES) and Surrogate Gradient (SG). All methods use the same hyperparameter set across the three locomotion tasks. The neural network settings are listed in the last table below.
EC hyperparameters:

| Hyperparameter | Value |
| --- | --- |
| Population size | 10240 |
| Learning rate | 0.15 |
| Exploration probability | |
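For illustration, the sketch below shows one way the population size and learning rate above can enter a natural-evolution-strategies-style update over Bernoulli connection probabilities, the kind of 1-bit connectivity search EC performs. The network size, fitness function, and rank-based fitness shaping are placeholder assumptions, and the exploration-probability mechanism is omitted; this is not the released implementation.

```python
import jax
import jax.numpy as jnp

# Placeholder sizes; the real search space has one probability per synapse.
NUM_CONNECTIONS = 64 * 64   # toy network size, for illustration only
POP_SIZE = 10240            # "Population size" from the table above
LR = 0.15                   # "Learning rate" from the table above

def fitness_fn(mask):
    """Placeholder fitness: stands in for the episodic return of an RSNN
    whose 1-bit connectivity is given by `mask`."""
    return -jnp.mean(mask)

@jax.jit
def ec_step(theta, key):
    """One illustrative update of the connection probabilities `theta`."""
    # Sample a population of binary connectivity masks ~ Bernoulli(theta).
    masks = jax.random.bernoulli(
        key, theta, shape=(POP_SIZE, NUM_CONNECTIONS)).astype(jnp.float32)
    # Evaluate every candidate (in practice, parallel environment rollouts).
    fitness = jax.vmap(fitness_fn)(masks)
    # Rank-normalize fitness so the update is insensitive to its scale.
    ranks = jnp.argsort(jnp.argsort(fitness))
    weights = ranks / (POP_SIZE - 1) - 0.5
    # NES-style update direction for Bernoulli parameters
    # (the natural gradient is proportional to masks - theta).
    grad = jnp.mean(weights[:, None] * (masks - theta), axis=0)
    # Gradient ascent on the probabilities, kept inside [0, 1].
    return jnp.clip(theta + LR * grad, 0.0, 1.0)

key = jax.random.PRNGKey(0)
theta = jnp.full((NUM_CONNECTIONS,), 0.5)
theta = ec_step(theta, key)
```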
ES hyperparameters:

| Hyperparameter | Value | Target |
| --- | --- | --- |
| Population size | 10240 | All |
| Learning rate | 0.15 | RSNN |
| | 0.01 | Deep RNN |
| Noise standard deviation | 0.3 | RSNN |
| | 0.02 | Deep RNN |
| Weight decay | 0.1 | All |
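Analogously, a minimal Evolution Strategies update using the RSNN values above might look as follows. The parameter count and fitness function are placeholders, and antithetic sampling with rank-based fitness shaping are common ES choices assumed here for illustration, not necessarily the exact configuration of our baseline.

```python
import jax
import jax.numpy as jnp

NUM_PARAMS = 4096   # toy parameter count, for illustration only
POP_SIZE = 10240    # "Population size"
LR = 0.15           # "Learning rate" (RSNN setting)
SIGMA = 0.3         # "Noise standard deviation" (RSNN setting)
WEIGHT_DECAY = 0.1  # "Weight decay"

def fitness_fn(params):
    """Placeholder fitness; in practice an environment rollout."""
    return -jnp.sum(params ** 2)

@jax.jit
def es_step(params, key):
    """One illustrative ES update with antithetic Gaussian perturbations."""
    eps = jax.random.normal(key, (POP_SIZE // 2, NUM_PARAMS))
    eps = jnp.concatenate([eps, -eps], axis=0)           # mirrored noise
    fitness = jax.vmap(fitness_fn)(params + SIGMA * eps)
    # Rank-based fitness shaping.
    ranks = jnp.argsort(jnp.argsort(fitness))
    weights = ranks / (POP_SIZE - 1) - 0.5
    # ES gradient estimate, followed by an update with weight decay.
    grad = jnp.mean(weights[:, None] * eps, axis=0) / SIGMA
    return params + LR * (grad - WEIGHT_DECAY * params)

key = jax.random.PRNGKey(0)
params = jnp.zeros((NUM_PARAMS,))
params = es_step(params, key)
```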
SG hyperparameters:

| Hyperparameter | Value |
| --- | --- |
| Proximal Policy Optimization (PPO) | |
| Batch size | 2048 |
| BPTT length | 16 |
| Learning rate | |
| Clip gradient norm | 0.5 |
| Discount factor γ | 0.99 |
| GAE λ | 0.95 |
| PPO clip ε | 0.2 |
| Value loss coefficient | 1.0 |
| Entropy coefficient | |
| Surrogate gradient | |
| Surrogate function | SuperSpike |
| Surrogate function parameter | 10 |
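The SuperSpike surrogate replaces the undefined derivative of the spike threshold with 1 / (β·|v| + 1)², where β = 10 is the surrogate function parameter listed above. The sketch below shows a generic surrogate-gradient spike function in JAX; the threshold convention (spiking at v > 0) is an illustrative assumption, not the exact neuron model of this paper.

```python
import jax
import jax.numpy as jnp

BETA = 10.0  # "Surrogate function parameter" from the table above

@jax.custom_vjp
def spike(v):
    """Heaviside spike nonlinearity on the (centered) membrane potential."""
    return (v > 0.0).astype(v.dtype)

def spike_fwd(v):
    return spike(v), v

def spike_bwd(v, grad_out):
    # SuperSpike surrogate derivative: 1 / (beta * |v| + 1)^2.
    surrogate = 1.0 / (BETA * jnp.abs(v) + 1.0) ** 2
    return (grad_out * surrogate,)

spike.defvjp(spike_fwd, spike_bwd)

# Gradients now flow through the spike via the surrogate:
v = jnp.linspace(-1.0, 1.0, 5)
grads = jax.vmap(jax.grad(spike))(v)
```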
Network settings:

| Hyperparameter | Value |
| --- | --- |
| Recurrent Spiking Neural Network (RSNN) | |
| Number of neurons | 256 |
| Excitatory ratio | |
| Simulation time per environment step | 16.6 ms |
| Simulation timestep | 0.5 ms |
| Synaptic time constant | 5.0 ms |
| Membrane time constant | 10.0 ms |
| Output time constant | 10.0 ms |
| Input membrane resistance¹ | |
| Hidden membrane resistance¹ | |
| Output membrane resistance¹ | |
| Gated Recurrent Unit (GRU) | |
| Hidden size | 256 |
| Long short-term memory (LSTM) | |
| Hidden size | 128 |

¹ Resistance is set following (snn_initialization) to preserve variance.
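To make the RSNN timing constants concrete: with a 0.5 ms timestep, each 16.6 ms environment step corresponds to roughly 33 simulation steps, and the per-step synaptic and membrane decay factors are exp(−0.5/5.0) ≈ 0.905 and exp(−0.5/10.0) ≈ 0.951. The sketch below discretizes a generic current-based LIF layer with these constants; the threshold, reset rule, and weight initialization are illustrative assumptions rather than the exact RSNN used in this work.

```python
import jax
import jax.numpy as jnp

# Timing constants from the RSNN settings table (milliseconds).
DT = 0.5                              # simulation timestep
TAU_SYN = 5.0                         # synaptic time constant
TAU_MEM = 10.0                        # membrane time constant
STEPS_PER_ENV_STEP = int(16.6 / DT)   # ~33 simulation steps per environment step

# Per-step exponential decay factors of the discretized dynamics.
ALPHA_SYN = jnp.exp(-DT / TAU_SYN)    # ~0.905
ALPHA_MEM = jnp.exp(-DT / TAU_MEM)    # ~0.951

def lif_step(state, spikes_in, weights, threshold=1.0):
    """One step of a current-based LIF layer (illustrative only)."""
    i_syn, v_mem = state
    # Synaptic current: exponential decay plus weighted input spikes.
    i_syn = ALPHA_SYN * i_syn + weights @ spikes_in
    # Membrane potential: leaky integration of the synaptic current.
    v_mem = ALPHA_MEM * v_mem + (1.0 - ALPHA_MEM) * i_syn
    # Fire where the threshold is crossed, then reset those membranes.
    spikes_out = (v_mem > threshold).astype(v_mem.dtype)
    v_mem = jnp.where(spikes_out > 0, 0.0, v_mem)
    return (i_syn, v_mem), spikes_out

# Example usage with 256 neurons, as listed in the table.
key = jax.random.PRNGKey(0)
n = 256
weights = jax.random.normal(key, (n, n)) / jnp.sqrt(n)
state = (jnp.zeros(n), jnp.zeros(n))
state, out = lif_step(state, jnp.zeros(n), weights)
```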
Appendix B License
In this work, we utilized the locomotion tasks and physics simulator from Brax (brax), as well as the JAX framework (jax). Both are released under the Apache License 2.0.
Appendix C Source Code
Upon publication, we will release the source code and trained 1-bit RSNN models used in this paper, in line with NeurIPS's commitment to open and reproducible research. The code and data will be made available through a public repository (e.g., GitHub, GitLab, or Bitbucket) and will include detailed documentation and instructions for use.