
Expressivity of Neural Networks with Random Weights and Learned Biases

Ezekiel Williams
Mathematics and Statistics
Université de Montréal
Montréal, Québec, Canada
[email protected]
Avery Hee-Woon Ryoo
Computer Science
Université de Montréal
Montréal, Québec, Canada
[email protected]
Thomas Jiralerspong
Computer Science
Université de Montréal
Montréal, Québec, Canada
[email protected]
Alexandre Payeur
Mathematics and Statistics
Université de Montréal
Montréal, Québec, Canada
[email protected]
Matthew G. Perich
Neuroscience
Université de Montréal
Montréal, Québec, Canada
[email protected]
Luca Mazzucato
Biology, Physics, and Mathematics
University of Oregon
Eugene, Oregon, United States
[email protected]
Guillaume Lajoie
Mathematics and Statistics
Université de Montréal
Montréal, Québec, Canada
[email protected]
Corresponding author; † denotes equal contribution; ⋆ denotes co-senior authors; the second line of each author block gives the department.
Abstract

Landmark universal function approximation results for neural networks with trained weights and biases provided impetus for the ubiquitous use of neural networks as learning models in Artificial Intelligence (AI) and neuroscience. Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can similarly be learned by tuning smaller subsets of parameters, for example the output weights, within randomly initialized networks. Motivated by the fact that biases can be interpreted as biologically plausible mechanisms for adjusting unit outputs in neural networks, such as tonic inputs or activation thresholds, we investigate the expressivity of neural networks with random weights where only biases are optimized. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only. We further show that an equivalent result holds for recurrent neural networks predicting dynamical system trajectories. Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as for AI, where they shed light on multi-task methods such as bias fine-tuning and unit masking.

1 Introduction

1.1 Context and Motivation

The universal approximation theorems Hornik et al. (1989); Funahashi (1989); Hornik (1991) of the late 20th century highlighted the expressivity of neural network models, i.e. their ability to approximate or express a broad class of functions through tuning of weights and biases, heralding the central role neural networks play in Machine Learning (ML) and neuroscience today. Since these foundational studies, a rich literature has explored the limits of neural network expressivity by finding smaller parameter subsets whose tuning still yields approximation of wide classes of functions or dynamics. Prior work has explored the approximation capabilities of feed-forward networks (FFNs) and recurrent neural networks (RNNs) where only the output weights are trained Rosenblatt et al. (1962); Rahimi and Recht (2008); Ding et al. (2014); Neufeld and Schmocker (2023); Jaeger and Haas (2004); Sussillo and Abbott (2009); Gonon et al. (2023); Hart et al. (2021), and deep FFNs where only subsets of parameters Rosenfeld and Tsotsos (2019), normalization parameters Burkholz (2023); Giannou et al. (2023), or binary masks (over either units or parameters) are trained Malach et al. (2020). Recently, a study has also explored the approximation abilities of transformers where only the context is tuned Petrov et al. (2024). Here, motivated by recent insights from neuroscience, we initiate the investigation of expressivity in both FFNs and RNNs where only biases are trained, an avenue that remains largely unexplored.

Learned biases are of fundamental relevance to neuroscience and ML. In the latter domain, recent work has highlighted the optimization or careful selection of bias vectors: fine-tuning of biases for multi-task learning Zaken et al. (2021) and the context tokens used for in-context learning Von Oswald et al. (2023) can both be viewed as methods of carefully setting the bias in a model, with all other connectivity parameters fixed, in order to perform a task. In neuroscience, single neurons employ diverse mechanisms beyond synaptic plasticity to adjust their response to inputs. Importantly, some of these mechanisms can be construed as manipulating the firing onset of neurons which, in a firing rate model, is represented by the bias. These include shunting inhibition Holt and Koch (1997), threshold adaptation Azouz and Gray (2000), and a host of other mechanisms that participate in shaping the input-output transfer function of neurons (reviewed in Ferguson and Cardin (2020); see also Mazzucato et al. (2019)). Experimental evidence also suggests that bias-related signals play a role in learning Zhang and Linden (2003); Sehgal et al. (2013); despite this, most work modelling learning in neuroscience has focused on synapses.

If tuning the biases of a neural network can only span a reduced set of functions, or output dynamics, this would solidify the role of synaptic plasticity as the critical component in biological and artificial learning. Conversely, if one can express arbitrary dynamics solely by changing biases, this would motivate deeper investigation of when and how non-synaptic plasticity mechanisms might shoulder some of the effort of learning. In this paper we address the question of the expressivity of bias learning. In a regime where all weight parameters are randomly initialized and frozen, and only hidden-layer biases are optimized, we provide theoretical guarantees demonstrating that:

  1. feed-forward neural networks with wide hidden layers are universal function approximators with high probability;

  2. RNNs with wide hidden layers can arbitrarily approximate finite-time trajectories from smooth dynamical systems with high probability.

We provide empirical support for, and a deeper interrogation of, these theoretical findings with an array of numerical experiments.

1.2 Related Works

Neuroscience. Evidence from experimental and theoretical neuroscience shows that cortical circuits flexibly regulate their dynamical phases and toggle between different regimes solely by changing the distribution of input biases to a circuit. Bias-induced modulations can explain the context-dependent effects of expectation Mazzucato et al. (2019), movements Wyrick and Mazzucato (2021), and arousal Papadopoulos et al. (2024) on sensory processing across modalities and brain areas. Changes in the bias distribution modulate the gain of the input-output neuronal transfer function Mazzucato et al. (2019). Moreover, two bias-related biological mechanisms, neural firing threshold and network inputs, have also been noted for their ability to shape network dynamics: theoretical work shows that threshold heterogeneity can improve network capacity Gast et al. (2024, 2023), while a mix of theory and experimental evidence suggests a role for inputs from upstream brain areas in reconfiguring circuit dynamics on fast timescales Perich et al. (2018); Remington et al. (2018). Within the RNN reservoir computing approach to modelling in neuroscience, where recurrent weights are random and fixed, bias modulations can toggle between multiple phases (including fixed-point, chaotic, and multistable regimes) and, strikingly, enable RNN multi-tasking in the absence of any parameter optimization Ogawa et al. (2023). A repertoire of dynamical motifs can also be generated in RNN reservoirs with dynamic feedback loops Logiaco et al. (2021). However, in all these studies input biases or other parameters were either i.i.d. generated or set to specific values, but never learned. The role of plastic biases has not yet been investigated and is the main focus of our efforts below.

Machine Learning. We highlight two bodies of literature in ML/AI that are related to this study. Many efforts have explored neural networks that are trained to quickly meta-learn new tasks via dynamics in activation space alone, without any adaptation of weights (see e.g. Feldkamp et al. (1997); Klos et al. (2020); Cotter and Conwell (1990, 1991)). Like our work, this thread of research proposes a mechanism by which a network might adapt to a new task without changing weights. However, prior work differs from the current study in that it requires an initial meta-training of all parameters in a network, weights included, before operating in the "fast learning" regime where network variables maintain context information that allows the networks to rapidly adapt to new tasks. In some cases these context variables can be thought of as biases Cotter and Conwell (1991). The closest ML contributions to our work are on the topic of masking, particularly the Strong Lottery Ticket Hypothesis (SLTH). This hypothesis conjectured that a given desired parameterization of a neural network can be found as a sub-network in a larger, appropriately initialized, network Ramanujan et al. (2020). The SLTH is typically formulated with respect to weights, i.e. a subnetwork is defined by deleting weights from the full network. However, a few studies have investigated the SLTH when subnetworks are constructed by deleting units Malach et al. (2020). While our study is different at face value, focusing on function approximation via bias optimization rather than on finding "lottery ticket" subnetworks, a key step in our analytic derivations relies on masking in a fashion analogous to proofs of the unit-masking version of the SLTH. Thus, our work also provides two results that may be of interest to SLTH theory: first, an alternative proof (complementing the first proof in Malach et al. (2020)) of the SLTH over units for single-hidden-layer neural networks; second, a first proof of the SLTH over units for RNNs (see Section §2 for more details). Flavours of the lottery ticket hypothesis for RNNs have been explored empirically Yu et al. (2019); García-Arias et al. (2021); Schlake et al. (2022), but we have not encountered a proof, over weights or units, in the literature.

2 Theory Results

2.1 Feed-forward neural networks

This section studies the single-hidden-layer FFN, whose output is given by

y_{n}(x,\theta)=\sum_{i=1}^{n}A_{:i}\phi(B_{i:}x+b_{i}), \qquad (1)

with $A\in\mathbb{R}^{l\times n}$, $B\in\mathbb{R}^{n\times d}$, $b\in\mathbb{R}^{n}$, and $\theta=\{B,A,b\}$. We shall investigate the approximation properties of this neural network when all the weights in $B$ and $A$ are fixed, sampled uniformly from the $n(l+d)$-dimensional centered hypercube with half-edges of length $\gamma$, and only $b$ is tuned. We begin by outlining the activation function assumptions necessary for our theoretical results.
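For concreteness, the following is a minimal PyTorch sketch of the model in Eq. 1 with frozen random weights and a trainable bias; the module name, hyper-parameters, and initialization scale are our own illustration, not the paper's released code.

```python
import torch
import torch.nn as nn

class BiasOnlyFFN(nn.Module):
    """Single-hidden-layer network of Eq. (1): y = A phi(Bx + b).

    A and B are sampled uniformly from [-gamma, gamma] and frozen
    (registered as buffers); only the hidden bias b is trainable.
    """
    def __init__(self, d, n, l, gamma=0.1):
        super().__init__()
        self.register_buffer("B", torch.empty(n, d).uniform_(-gamma, gamma))
        self.register_buffer("A", torch.empty(l, n).uniform_(-gamma, gamma))
        self.b = nn.Parameter(torch.empty(n).uniform_(-gamma, gamma))  # only trainable parameter

    def forward(self, x):
        h = torch.relu(x @ self.B.T + self.b)  # phi(Bx + b), with ReLU as the suitable activation
        return h @ self.A.T                    # A phi(Bx + b)
```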

Definition 1.

The function $\phi$ is a suitable activation if, when employed in the neural network of Eq. 1, it allows for universal approximation of the following kind: for any continuous $h:U\to\mathbb{R}^{l}$ and any $\epsilon>0$, $\exists n\in\mathbb{N}$ and parameters $\theta$ s.t. $\int_{U}||h(x)-y_{n}(x,\theta)||\,dx\leq\epsilon$, where $U\subset\mathbb{R}^{d}$ is compact and $||\cdot||$ is the 1-norm, as it will be throughout the paper.

From the universal approximation theorems of 1993 Leshno et al. (1993); Hornik (1993), a sufficient condition for $\phi$ to be a suitable activation is that it is non-polynomial. In this paper we conceptualize universal approximation as the approximation of continuous functions on compact sets with respect to an $L^{1}$ functional norm, but we remark that the literature has also studied other conditions on $h$ (for example, continuity and measurability) and other forms of convergence (for example, with respect to the $L^{\infty}$ norm). For a review of the literature, see Pinkus (1999).

Definition 2.

A suitable activation $\phi$ is referred to as a $\gamma$-parameter bounding activation if it allows for universal approximation even when each individual parameter, e.g. an element of a weight matrix or bias vector, is bounded in absolute value by $\gamma$.

Proposition 1.

The ReLU and the Heaviside step function are $\gamma$-parameter bounding activations for any $\gamma>0$.

The proof is given in Appendix B for completeness. A key subtlety of parameter bounding is that the bound is on individual, scalar parameters. Thus, as a network grows in width, the bias vector and weight matrix norms will still grow accordingly. This may be important, as research suggests that band-limited parameters cannot universally approximate, at least for certain activation types Li et al. (2023). We leave it to future work to determine which other activations are parameter bounding.

We make one final definition:

Definition 3.

If $\phi$ is a $\gamma$-parameter bounding activation, is continuous, and if $\exists\tau\in\mathbb{R}$ such that $\phi(x)=0$ for all $x<\tau$, then we say that $\phi$ is a $\gamma$-bias-learning activation.

Obviously the ReLU is a bias-learning activation. We leave the study of discontinuous functions like the Heaviside to future work. We conclude with the main result of this section. Define $p_{R}$ to be the uniform distribution on $[-R,+R]$, where $\gamma<R<\infty$.

Theorem 1.

Assume that $\phi$ is $\gamma$-bias learning and, for compact $U\subset\mathbb{R}^{d}$, $h:U\to\mathbb{R}^{l}$ is continuous. Then for any degree of accuracy $\epsilon>0$ and probability of error $\delta\in(0,1)$, there exists a hidden-layer width $m\in\mathbb{N}$ and bias vector $b\in\mathbb{R}^{m}$ such that, with probability $1-\delta$, a neural network given by Eq. 1 with each individual weight sampled from $p_{R}$ approximates $h$ with error less than $\epsilon$.

Corollary 1.

Assume that $d=l$, i.e. the input and output spaces of $h$ have the same dimension. Then the results of Theorem 1 also hold for single-hidden-layer res-nets.

Proof Intuition: We provide intuition for the proof of Theorem 1 and its Corollary, whose details can be found in the Appendix. According to the universal approximation theorem, given a continuous function, we can find a one-hidden-layer network, $\mathcal{N}_{1}$, that is close to that function in the $L^{1}$ norm on the (compact) space of its inputs. If $\mathcal{N}_{1}$ has been constructed using $\gamma$-parameter bounding activation functions, then we know that each parameter will lie in the interval $[-\gamma,\gamma]$. Next, we construct a second network, $\mathcal{N}_{2}$, to approximate $\mathcal{N}_{1}$ by randomly sampling each of its parameters, weight or bias, from $p_{R}$. For $\mathcal{N}_{2}$ to approximate $\mathcal{N}_{1}$, each parameter of $\mathcal{N}_{2}$ should fall within a tiny window around an analogous parameter in $\mathcal{N}_{1}$. This window must have half-length less than $\epsilon$ to yield the desired error bound. Without loss of generality we can assume $\epsilon<R-\gamma$. Then, if we sample parameters uniformly on $[-R,R]$, there will be a non-zero probability that a given parameter of $\mathcal{N}_{2}$ ends up within the tiny $\epsilon$-window centered at the corresponding parameter value of $\mathcal{N}_{1}$; because $\epsilon<R-\gamma$, the $\epsilon$-window cannot stretch outside the support of the distribution. If we randomly sample a very large number of units to construct the hidden layer of $\mathcal{N}_{2}$, the probability of finding a subnetwork of $\mathcal{N}_{2}$ corresponding to $\mathcal{N}_{1}$ can be made arbitrarily close to $1$. If the activation function is bias-learning, we can use the biases to pick out this subnetwork by setting the biases of all other units appropriately below the threshold given in Definition 3. We stress that this proof requires exceedingly large hidden-layer widths.
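The masking step at the heart of this argument is easy to check numerically for the ReLU: pushing a unit's bias far below any achievable pre-activation silences it, so biases alone can carve out a subnetwork. A toy sketch (all values illustrative):

```python
import torch

torch.manual_seed(0)
n, d = 8, 3
B = torch.empty(n, d).uniform_(-1.0, 1.0)  # frozen random input weights
b = torch.empty(n).uniform_(-1.0, 1.0)     # biases, the only thing we may change
x = torch.randn(16, d)                     # a batch of inputs

# Suppose the "good" subnetwork N_1 corresponds to units {0, 2, 5} of N_2.
keep = torch.zeros(n, dtype=torch.bool)
keep[[0, 2, 5]] = True

# Silence every other unit by driving its bias below any achievable
# pre-activation (Definition 3 with ReLU: phi(z) = 0 for z < 0).
b_masked = b.clone()
b_masked[~keep] = -1e6

h = torch.relu(x @ B.T + b_masked)
assert torch.all(h[:, ~keep] == 0)  # masked units never fire
```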

2.2 Recurrent neural networks

This section studies a discrete-time RNN given by

r_{t}=-\alpha r_{t-1}+\beta\phi(Wr_{t-1}+Bx_{t-1}+b),\qquad y_{t}=Cr_{t}, \qquad (2)

where $r_{t}\in\mathbb{R}^{m}$ for all $0\leq t\leq T$ for some $T\in\mathbb{N}$, $\alpha$ and $\beta$ control the time scale of the dynamics, $W\in\mathbb{R}^{m\times m}$, $C\in\mathbb{R}^{l\times m}$, and $B$ and $b$ are as in the previous section. The parameters are now $\theta=\{W,B,C,b\}$. The time-dependent input $x_{t}$ belongs to a compact subset $U_{x}\subset\mathbb{R}^{d}$ for all $t$. Note that when $\alpha=0$, $\beta=1$ one gets the standard vanilla RNN formulation; alternatively, $\alpha$ and $\beta$ can be set to approximate continuous-time dynamics using Euler's method.
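A minimal sketch of a rollout of Eq. 2 with frozen random weights, again in PyTorch; the function name and defaults are our own.

```python
import torch

def rnn_rollout(x, r0, W, B, C, b, alpha=0.0, beta=1.0):
    """Eq. (2): r_t = -alpha r_{t-1} + beta phi(W r_{t-1} + B x_{t-1} + b), y_t = C r_t.

    x: (T, d) input sequence; r0: (m,) initial hidden state. Returns (T, l) outputs.
    alpha = 0, beta = 1 recovers the vanilla RNN; other settings approximate
    continuous-time dynamics via Euler's method. Only b would be trained.
    """
    r, ys = r0, []
    for t in range(x.shape[0]):
        r = -alpha * r + beta * torch.relu(W @ r + B @ x[t] + b)
        ys.append(C @ r)
    return torch.stack(ys)
```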

We will approximate the following class of dynamical systems by learning only biases:

z_{t+1}=F(z_{t},x_{t}),\qquad y_{t}=Qz_{t},\qquad z_{0}\in U_{z}, \qquad (3)

where $t$ and $x_{t}$ are as defined for the RNN, $F:U_{z}\times U_{x}\to\mathbb{R}^{s}$ is continuous, and $Q\in\mathbb{R}^{l\times s}$. Because we build from the classic universal approximation results, we must work with functions operating on compact sets. To guarantee that this is the case we make several more assumptions about the dynamical system. First, $U_{z}\subset\mathbb{R}^{s}$ is assumed to be a compact invariant set of the dynamical system: if the system is in $U_{z}$ it remains there for all $t$, for all inputs in $U_{x}$. Second, we assume that the dynamical system is well-defined on a slightly larger compact set $\tilde{U}_{z}\times U_{x}$, where $\tilde{U}_{z}=\{z_{0}+c_{0}:z_{0}\in U_{z},\;||c_{0}||<c\}$ for some $c>0$, with $\tilde{U}_{z}\supset U_{z}$. Letting $p_{R}$ be defined as in Section 2.1, the main result of this section is:

Theorem 2.

Consider the RNN in Eq. 2 with $\phi$ a $\gamma$-bias learning activation, and with input, output, and recurrent weight parameters for each hidden unit sampled from $p_{R}$. We can find a hidden-layer width, a bias vector, and a hidden-state initial condition for the RNN such that, with probability arbitrarily close to $1$, the RNN approximates finite trajectories of the dynamical system defined in Eq. 3 to below any positive, non-zero, error.

This result is weaker than that of the previous section in that it establishes point-wise, rather than $L^{1}$, convergence. We conjecture that it can be extended straightforwardly, but we leave this to future work.

Proof Intuition: The complete proof is in the Appendix; here we provide intuition. As in Theorem 1, the proof proceeds in two steps. First, the dynamical system is approximated by an RNN, $\mathcal{R}_{1}$, using universal approximation theory for RNNs (see e.g., Schäfer and Zimmermann (2006)). $\mathcal{R}_{1}$ is then approximated by a much wider, random RNN, $\mathcal{R}_{2}$, with parameters sampled from $p_{R}$. Analogous to Theorem 1, we show that for very large hidden widths of $\mathcal{R}_{2}$ one can find a sub-network of hidden units in $\mathcal{R}_{2}$ that approximates $\mathcal{R}_{1}$.

3 Numerical Results

3.1 Multi-task Learning with Bias-learning Feed-forward Networks

We first validated the theory by checking whether a single-hidden-layer, bias-learning FFN could perform digit recognition on the MNIST dataset Deng (2012) increasingly well as its hidden layer was widened. Initial weights were sampled from a uniform distribution on $[-0.1,0.1]$ and then frozen. As expected, the network learned the task and validation accuracy increased with layer width. The largest gains in performance occurred before 5000 units (Fig. 1A).
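A sketch of this bias-only training setup, reusing the hypothetical BiasOnlyFFN module sketched in Section 2.1; batch size and learning rate are illustrative (20 epochs and Adam follow the Fig. 1 caption).

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

model = BiasOnlyFFN(d=28 * 28, n=5000, l=10, gamma=0.1)
# model.parameters() yields only the bias b, so only biases are optimized.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

train = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=128, shuffle=True)

for epoch in range(20):
    for imgs, labels in loader:
        loss = loss_fn(model(imgs.flatten(1)), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
```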

Figure 1: A. Validation accuracy on MNIST vs. layer width. B. Validation accuracy on multiple image classification tasks for bias-only (blue) and fully-trained (orange) networks. In panels A and B, training was for 20 epochs; error bars over 5 runs are omitted because the standard errors are of order $10^{-3}$. C. Top: K-means clustering of Task Variance (TV) reveals task-specific clusters. Bottom: Spearman correlation between TV and bias vectors (mean across neurons in each cluster). In this and all other figures, Adam Kingma and Ba (2014) was used for optimization.

Intuitively, bias learning should allow a single random set of weights to be used to learn multiple tasks by simply optimizing task-specific bias vectors. We confirmed this intuition by training a single-hidden-layer FFN with 32,000 hidden units on 7 different tasks: MNIST Deng (2012), KMNIST Clanuwat et al. (2018), Fashion MNIST Xiao et al. (2017), Ethiopic-MNIST, Vai-MNIST, and Osmanya-MNIST from Afro-MNIST Wu et al. (2020), and Kannada-MNIST Prabhu (2019). All tasks involved classifying 28×28 grayscale images into 10 classes. The random weights were fixed across tasks while different biases were learned, as sketched below. We compared bias learning against a fully-trained neural network of the same size and architecture (Fig. 1B). We found that the bias-only network achieved similar performance to the fully-trained network on most tasks (only significantly worse on KMNIST). A crucial difference here is that the networks had matched size and architecture, so that the number of trainable parameters in the bias-only network (32,000 parameters) was several orders of magnitude smaller than in the fully-trained case (25,440,010 parameters). Notably, a different set of biases was learned for each task. We conclude that bias-only learning in FFNs is a viable avenue to perform multi-tasking with randomly initialized and fixed weights, but that it requires a much wider hidden layer than fully trained networks.
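A sketch of the multi-task protocol: one frozen random weight set, one learned bias vector per task. Here `train_bias_only` and `loaders` are hypothetical helpers (e.g., the training loop sketched above and per-task data loaders).

```python
import torch

tasks = ["mnist", "kmnist", "fashion", "ethiopic", "vai", "osmanya", "kannada"]
model = BiasOnlyFFN(d=28 * 28, n=32000, l=10, gamma=0.1)

learned_biases = {}
for t in tasks:
    model.b.data.uniform_(-0.1, 0.1)      # fresh bias; weights A, B stay frozen
    train_bias_only(model, loaders[t])    # hypothetical: the loop sketched above
    learned_biases[t] = model.b.detach().clone()

# At test time, multi-tasking amounts to swapping the task's learned bias
# into the shared random weights.
model.b.data.copy_(learned_biases["kmnist"])
```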

Next, we investigated the neural mechanism underlying bias learning of multiple tasks. We probed the task-specific functional organization of the hidden units by estimating single-unit Task Variance (TV) Yang et al. (2019), defined as the variance of a hidden unit's activation across the test set for each task. The TV provides a measure of the extent to which a given hidden unit contributes to a given task. A unit with high TV in a particular task has responses that vary across stimuli, suggesting that the unit is recruited for solving that task. A unit with high TV in one task and low TV in all others is specialized to that particular task. We clustered the hidden units' TVs using K-means clustering ($K$ chosen by cross-validation) on the vector of TVs for each unit and found that distinct functional clusters of hidden units emerged (Fig. 1C). Most units reflected strong task specialization, i.e., they were only used for specific tasks (e.g., cluster 3 for KMNIST and cluster 10 for Osmanya). Others were used for many or all tasks (e.g., clusters 1 and 8), although a smaller fraction of clusters exhibited such non-selective activation patterns. Overall, we conclude that multi-task bias-learning leads to the emergence of task-specific functional organization.
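The TV analysis can be sketched as follows; `acts[t]` is a hypothetical array of hidden activations on task t's test set, and the per-unit normalization of TV is our assumption about the preprocessing.

```python
import numpy as np
from sklearn.cluster import KMeans

# acts[t]: (num_test_samples, num_hidden) hidden activations on task t's test set,
# collected with the bias vector learned for task t.
tv = np.stack([acts[t].var(axis=0) for t in tasks], axis=1)  # (units, tasks)
tv = tv / (tv.max(axis=1, keepdims=True) + 1e-12)            # per-unit normalization (assumed)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(tv)    # K chosen by cross-validation
```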

We finally explored the relationship between the bias of a hidden unit and its task variance (TV). If the neural networks are using biases to shut off units, analogous to the intuition in our theory work (Section 2.1), then network units that do not actively participate in a task should be quieter due to a low bias value learned during training on that particular task. In other words, this intuition suggests that units should exhibit a correlation between bias and TV, especially in task-specific clusters. In our experiments, all clusters did exhibit the statistical trend of a positive correlation between bias and TV, although to a varying degree across clusters (see numbers at the bottom of Fig. 1C).
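Continuing the sketch above, one plausible version of this per-cluster correlation follows; the exact pairing of biases with TVs is our assumption, and `biases[t]` is the hypothetical learned bias vector for task t.

```python
import numpy as np
from scipy.stats import spearmanr

for k in range(labels.max() + 1):
    idx = labels == k
    # Pool (bias, TV) pairs across tasks for the units in cluster k.
    bs = np.concatenate([biases[t][idx] for t in tasks])
    tvs = np.concatenate([tv[idx, j] for j, t in enumerate(tasks)])
    rho, p = spearmanr(bs, tvs)
    print(f"cluster {k}: Spearman rho = {rho:.2f} (p = {p:.3g})")
```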

3.2 Relationship Between Bias Learning and Masked Learning in FFNs

Figure 2: Comparing bias and masked learning on the same weights. A. Learning curves for bias (orange) and mask (black) learning. Inset: we observed a trend where bias-learning achieved roughly 1 percentage point higher accuracy on MNIST than mask-learning (0.915 ± 0.0028 SD bias vs. 0.905 ± 0.0028 SD mask). B. Histograms of hidden unit variances, calculated over 10,000 MNIST samples, for bias-trained (orange) and mask-trained (black) networks. Histogram counts are log-scaled. C. Scatter plot of the hidden unit variances from B, taking only units that are non-zero/not approximately zero in the mask-/bias-trained networks; bias-trained on the x-axis and mask-trained on the y-axis. Plot A shows mean ± 1 SD over 4 mask-trained/bias-trained nets. Plots B and C are both from one representative network trained with biases and with masks. Learned parameters were initialized uniformly on $[-0.1,0.1]$, weights uniformly on $[-\frac{1}{\sqrt{d}},\frac{1}{\sqrt{d}}]$; all other hyper-parameters can be found in the codebase.

As our theory shows that bias-learning networks can universally approximate simply by turning units off, we wished to test whether bias learning performs similarly to learning masks, and to what extent the solutions learned by these approaches differ. We compared mask training to bias learning on networks with the same random input/output weight matrices. For mask training, we approximated binary masks with 'soft' sigmoid masks by optimizing the sigmoid parameters. The sigmoid was steepened over the course of training to approximately learn a mask, and at test time the slope of the sigmoid was made steep enough to effectively binarize the learned soft mask (a sketch follows below). We compared masks learned in this fashion with learned biases on single-hidden-layer ReLU networks with 10,000 hidden units. We observed a trend of bias training slightly improving upon mask training (Fig. 2A), an expected increase given the added flexibility of biases over masks. Further research is needed to determine whether this trend is reliable across datasets and network parametrizations, and whether there are scenarios where one style of learning works better or worse.
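A minimal sketch of the soft-mask network, under our assumptions about where the mask is applied and how the steepness is annealed (the exact schedule lives in the paper's codebase):

```python
import torch

class SoftMaskFFN(torch.nn.Module):
    """Hidden units gated by a 'soft' sigmoid mask sigma(beta * s); only s is learned.

    beta is annealed upward during training so the mask approaches binary,
    and is set very large at test time to effectively binarize it.
    """
    def __init__(self, d, n, l):
        super().__init__()
        bound = 1 / d**0.5  # weight scale from the Fig. 2 caption
        self.register_buffer("B", torch.empty(n, d).uniform_(-bound, bound))
        self.register_buffer("A", torch.empty(l, n).uniform_(-bound, bound))
        self.s = torch.nn.Parameter(torch.empty(n).uniform_(-0.1, 0.1))
        self.beta = 1.0  # sigmoid steepness

    def forward(self, x):
        mask = torch.sigmoid(self.beta * self.s)
        return (mask * torch.relu(x @ self.B.T)) @ self.A.T

# e.g., model.beta *= 1.05 each epoch during training (an assumed schedule);
# set model.beta = 1e4 at test time to binarize the learned mask.
```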

Next, we compared the solutions found via bias and mask learning. We calculated the variance of each hidden unit across 10,000 MNIST images, in both the bias- and mask-trained paradigms, as a measure of the hidden-layer representation of MNIST. The histogram of these variances over hidden units is plotted for trained masks/biases (black/orange) for the same network weights (Fig. 2B). The mask-trained networks found slightly sparser solutions (more zeros, far-left histogram bin) and performed the task with lower unit variance values (see middle/right of histogram) compared to the bias-trained networks. We also plotted the hidden unit variances for units in mask- and bias-trained networks against one another and computed their correlation (Fig. 2C). The scatter plot and correlation were calculated after removing unit variances that were zero in the mask-trained net and sufficiently close to zero (variance less than the mean variance divided by 100) in the bias-trained network. Despite the differences observed in the histogram, the correlation coefficient was 0.461 ± 0.022 (mean ± SD across $n=4$ networks), suggesting that there is some overlap in the way these two methods solve MNIST.

3.3 Bias-learning of Dynamical Systems with Recurrent Neural Networks

3.3.1 Autonomous dynamical systems

Figure 3: Learning autonomous dynamical systems. A. Cosine generated by a bias-learning recurrent network with 200 hidden units (dashed orange) and its target (solid black). B. Eigenvalue spectra of the recurrent weight matrix (left) and of the Jacobian at the start of training (right, grey squares) and mid-training (right, orange circles), when the network produced a decaying oscillation with period 23.75, close to the target period of 25. Units' activity approached a fixed point, with respect to which the Jacobian was computed. Later in training, the rightmost eigenvalues approached 1 in magnitude and their phases were such that the oscillation had period $\sim 25$. C. Van der Pol oscillator (target in solid black; with dynamics $\ddot{x}=\mu(1-x^{2})\dot{x}-x$, for $\mu=2$) generated by the bias-learning recurrent network with 675 hidden units for a recurrent gain of 1 (dashed orange; see panel D) and a gain of 0.9 (dashed dark orange). Output represents the position of the oscillator, rescaled to [-1, 1]. D. (Left) Sensitivity to the distribution of recurrent weights. The fully-trained and bias-learning networks had the same number of learnable parameters (the hidden-layer size of the fully-trained network was 25). The initial recurrent weight matrix had elements sampled from $(g/\sqrt{m})\,\mathcal{N}(0,1)$, where $g$ is the gain (Gain recurrent init.). Error bars denote SEM for $n=10$. (Right) Schematics of the fully-trained (top) and bias-learning (bottom) autonomous RNNs. Colored links denote trained weights. In all panels, output matrix elements were $\sim\mathcal{N}(0,1/m^{2})$ and biases $b\sim\mathcal{U}(0,1)$.

We studied the expressivity of bias learning in RNNs trained to generate linear and nonlinear dynamical systems autonomously (i.e., with $x_{t}\equiv 0$ in Eq. 2). We found that RNNs with fixed, random Gaussian weights and trained biases were able to generate a simple cosine function (Fig. 3A). We then elucidated the mechanism underlying RNN bias learning by comparing the Jacobian matrix after learning with the random recurrent weight matrix (which was held fixed during learning). We found that although the random weight matrix maintained a fixed, circular eigenvalue distribution (Fig. 3B, left), learning the biases shaped the Jacobian matrix to develop complex-conjugate pairs of large eigenvalues underlying the oscillations (Fig. 3B, right). Therefore, bias learning relies strongly on the ability to shape the "effective connectivity matrix", i.e. the Jacobian, which involves the derivative of the activation as well as the recurrent weight matrix.
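A sketch of this eigenvalue analysis for the autonomous map in Eq. 2, using autograd; `W`, `b`, and the fixed point `r_star` stand in for quantities taken from a trained network.

```python
import torch

m = 200
W = torch.randn(m, m) / m**0.5   # frozen random recurrent weights (illustrative)
b = torch.randn(m)               # stand-in for learned biases

def jacobian_at(r_star, W, b, alpha=0.0, beta=1.0):
    """Jacobian of the autonomous update r -> -alpha r + beta phi(W r + b),
    evaluated at r_star (e.g., a fixed point the activity settled into)."""
    def step(r):
        return -alpha * r + beta * torch.relu(W @ r + b)
    return torch.autograd.functional.jacobian(step, r_star)

r_star = torch.zeros(m)            # stand-in for the fixed point found by simulation
J = jacobian_at(r_star, W, b)
eigvals = torch.linalg.eigvals(J)  # complex-conjugate pairs near magnitude 1 drive oscillations
```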

We next investigated whether bias learning relies on the statistics of the fixed recurrent weights. In light of Fig. 3B, we hypothesized that bias learning would be affected by changes in the weight distribution, because bias learning can only control the derivative. We initialized an i.i.d. Gaussian-distributed weight matrix $W_{ij}\sim\mathcal{N}(0,g^{2}/N)$, where $g$ is referred to as its 'gain'. We then trained bias-learning networks to generate a van der Pol oscillator (Fig. 3C). We found that bias learning required a large enough gain (at least $g=1$) and failed for $g<1$ (Fig. 3D). This was not purely due to a restricted dynamic range of the network activity, since the network was able to reproduce the first peak of the oscillator before flatlining (Fig. 3C). In contrast, fully-trained networks with the same number of training parameters (Fig. 3D) were not sensitive to the value of the gain at initialization. According to our theory, because the ReLU is $\gamma$-parameter bounding, a bias-learning RNN should be able to produce the correct output even at low gain, but this could require a prohibitively large hidden layer. This result thus highlights that, when the hidden-layer size is fixed, the initial distribution of weights limits the capability of bias-learning networks.
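The gain-scaled initialization used here amounts to the following sketch (sizes and sweep values illustrative):

```python
import torch

m = 675  # hidden units, as in Fig. 3C
for g in (0.8, 0.9, 1.0, 1.2):             # sweep the gain
    W = (g / m**0.5) * torch.randn(m, m)   # W_ij ~ N(0, g^2 / m)
    # ...freeze W and train biases only, as in the sketches above...
```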

3.3.2 Non-autonomous dynamical systems

Finally, we explored the capabilities of an RNN trained on a non-autonomous dynamical system, namely the $x$-dimension of the Lorenz system, with the other dimensions unobserved (see the Fig. 4 caption for more details). As in the autonomous case, only the biases were trained and the weights were initialized randomly; here, however, the networks also received an external input. The objective was formulated as follows: given a recurrent state and the value of the dynamical system at some time point $t$, predict the future value at $t+\tau$, where the offset ($\tau=27$) was chosen to be the half-width at half-maximum of the $x$ auto-correlation function. The numerical integration time step was 0.01; each model had a hidden-layer width of 1024 units and was trained on different windows of 1080 (i.e., $40\tau$) time points.
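A sketch of the data generation and τ-offset targets described here; Lorenz parameters and Euler integration follow the Fig. 4 caption, while the helper names and trajectory length are ours.

```python
import numpy as np

def lorenz_x(T, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system from (0, 1, 0); return the x trace."""
    z = np.array([0.0, 1.0, 0.0])
    xs = np.empty(T)
    for t in range(T):
        dx = sigma * (z[1] - z[0])
        dy = z[0] * (rho - z[2]) - z[1]
        dz = z[0] * z[1] - beta * z[2]
        z = z + dt * np.array([dx, dy, dz])
        xs[t] = z[0]
    return xs

tau = 27                                # half-width at half-max of the x auto-correlation
xs = lorenz_x(20000)
inputs, targets = xs[:-tau], xs[tau:]   # given x_t, predict x_{t + tau}
```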

Figure 4: Learning non-autonomous dynamical systems. A. The outputs of the fully-trained (top) and bias-only (bottom) networks on a window of a Lorenz system unseen during training ($\dot{x}=\sigma(y-x)$, $\dot{y}=x(\rho-z)-y$, $\dot{z}=xy-\beta z$). The system was generated using Euler's method with a step size $\Delta t$ of 0.01 from an initialization at (0, 1, 0), with $\sigma=10$, $\rho=28$, and $\beta=\frac{8}{3}$. Standard deviation error bars were computed over 5 seeds, but are not visible. B. Generalization to sequences longer (4320 time points) than those trained on. C. The output of the bias-only network diverges from the ground-truth signal when using its own outputs as context (starting from the grey line).

We found that both the fully-trained and bias-only networks accurately predicted future points of the system, evidenced by a consistent $R^{2}$ metric of >0.99 ($n=5$) on a window of the generated Lorenz attractor held out from training (Fig. 4A). Furthermore, the models showed remarkable stability when predicting windows several times longer than the ones used during training, continuing to reconstruct the system without a notable change in accuracy (Fig. 4B). However, when the networks were fed their own previous predictions as input, their prediction accuracy decreased, demonstrating the devastating effect of small compounding deviations propagated through time (Fig. 4C).

4 Discussion

In this paper, we presented theoretical results demonstrating that feed-forward and recurrent neural networks with fixed random weights but learnable biases can approximate arbitrary functions with high probability. We showcased the expressivity of bias-learned networks in auto-regressive modelling, multi-task learning, dynamic pattern generation, and dynamical system forecasting. Finally, we interrogated the representations learned by bias learning via analysis of task specialization, comparison with mask-learning, and an eigenvalue analysis in recurrent networks. In what follows we discuss key insights, limitations, and future directions.

Our results highlight three key insights. First, certain activation functions enable our universal approximation results when weights are drawn from a uniform distribution on a hyper-cube with any strictly positive edge length. Characterizing which activation functions do or do not support this property (we speculate that non-differentiable points may be an important component), and the link between hyper-cube edge length and network scaling (Remark 1 on Lemma B.4 in the Appendix), both seem worthy of future work. Second, in the context of multi-tasking, we showed that bias learning finds solutions that rely on task-selective clusters (Fig. 1), similar to the fully trained case Yang et al. (2019). Third, our finding that bias learning yields solutions that are different from, though related to, those of mask learning (Fig. 2) suggests that further investigation of our method might shed light on this and other non-synaptic learning approaches.

Our three main study limitations inspire directions for future research. First, the mathematical convergence results for dynamical systems are only point-wise over initial conditions, and hold only for finite-time trajectories. We believe that one might strengthen these results at least to $L^{p}$ convergence over trajectories, and that one might overcome the finite-time limitation by studying convergence in stationary distribution. Second, our theory massively overestimates how wide the hidden layer of a bias-learned network must be to reach a desired level of error (Remark 2 on Lemma B.4 in the Appendix). This limitation might be addressed by drawing on scaling arguments from the SLTH literature (see e.g. Malach et al. (2020)). Third, a potential confounding factor in our comparison of bias and mask learning is that our mask learning approach used a learning schedule for the steepness parameter of the soft masks. It is possible that the altered learning dynamics due to this scheduling contributed to mask and bias learning finding different solutions. Addressing this confound is an important direction for future work.

Future directions should focus on capturing greater biological detail and on better hidden-layer scaling. Experimental Ferguson and Cardin (2020) and theoretical work Wyrick and Mazzucato (2021); Ogawa et al. (2023) showed that neural pathways that modulate biases, like firing threshold or tonic inputs, may affect other neuronal properties, like neuronal input-output gain. As our proofs rely on masking, they demonstrate universal approximation not just for bias learning but for any learned mechanism that can mask neurons, possibly including gain modulation. Exploring the flexibility of paradigms where gain (Stroud et al. (2018)) and biases are learned in concert is an interesting way to exploit this. Moreover, the observed distribution of synaptic weights in the brain is not uniform but long-tailed Song et al. (2005), and searching for weight distributions that improve expressivity and hidden-layer scaling is an exciting future direction. If hidden-layer scaling can be improved, bias-learning might open up options for temporal credit assignment methods that currently perform poorly due to the $N^{2}$ scaling of synapses. Alternatively, it would be fascinating if, regardless of whether the learned parameters are weights or biases, one needs roughly the same number of parameters to achieve a given degree of task performance.

Combined with decades-old results on adaptive behaviour in networks without weight changes Cotter and Conwell (1990), and related findings in ML (see in-context learning Brown et al. (2020)), we hope that this study will inspire more research on learning phenomena that transcend synaptic adaptation.

Code Availability

Code will be released upon publication.

Acknowledgements

E.W. was supported by an NSERC CGS D scholarship, and wishes to thank the other members of the Lajoie lab for support and for helpful discussions. T.J. was supported by an NSERC CGS M scholarship. M.G.P. was supported by a grant from the Fonds de recherche du Québec – Santé (chercheurs-boursiers en intelligence artificielle). L.M. was partially supported by National Institutes of Health grants R01NS118461, R01MH127375 and R01DA055439 and National Science Foundation CAREER Award 2238247. G.L. acknowledges the CIFAR and Canada chair programs.

References

  • Azouz and Gray [2000] Rony Azouz and Charles M Gray. Dynamic spike threshold reveals a mechanism for synaptic coincidence detection in cortical neurons in vivo. Proceedings of the National Academy of Sciences, 97(14):8110–8115, 2000.
  • Brown et al. [2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
  • Burkholz [2023] Rebekka Burkholz. Batch normalization is sufficient for universal function approximation in cnns. In The Twelfth International Conference on Learning Representations, 2023.
  • Clanuwat et al. [2018] Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical japanese literature. arXiv preprint arXiv:1812.01718, 2018.
  • Cotter and Conwell [1990] N.E. Cotter and P.R. Conwell. Fixed-weight networks can learn. In 1990 IJCNN International Joint Conference on Neural Networks, pages 553–559 vol.3, June 1990. doi: 10.1109/IJCNN.1990.137898.
  • Cotter and Conwell [1991] Neil E Cotter and Peter R Conwell. Learning algorithms and fixed dynamics. In IJCNN-91-Seattle International Joint Conference on Neural Networks, volume 1, pages 799–801. IEEE, 1991.
  • Deng [2012] Li Deng. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012.
  • Ding et al. [2014] Shifei Ding, Xinzheng Xu, and Ru Nie. Extreme learning machine and its applications. Neural Computing and Applications, 25:549–556, 2014.
  • Feldkamp et al. [1997] L. A. Feldkamp, G. V. Puskorius, and P. C. Moore. Adaptive behavior from fixed weight networks. Information Sciences, 98(1):217–235, May 1997. ISSN 0020-0255. doi: 10.1016/S0020-0255(96)00216-2. URL https://www.sciencedirect.com/science/article/pii/S0020025596002162.
  • Ferguson and Cardin [2020] Katie A Ferguson and Jessica A Cardin. Mechanisms underlying gain modulation in the cortex. Nature Reviews Neuroscience, 21(2):80–92, 2020.
  • Funahashi [1989] Ken-Ichi Funahashi. On the approximate realization of continuous mappings by neural networks. Neural networks, 2(3):183–192, 1989.
  • García-Arias et al. [2021] Ángel López García-Arias, Masanori Hashimoto, Masato Motomura, and Jaehoon Yu. Hidden-fold networks: Random recurrent residuals using sparse supermasks. arXiv preprint arXiv:2111.12330, 2021.
  • Gast et al. [2023] Richard Gast, Sara A Solla, and Ann Kennedy. Macroscopic dynamics of neural networks with heterogeneous spiking thresholds. Physical Review E, 107(2):024306, 2023.
  • Gast et al. [2024] Richard Gast, Sara A Solla, and Ann Kennedy. Neural heterogeneity controls computations in spiking neural networks. Proceedings of the National Academy of Sciences, 121(3):e2311885121, 2024.
  • Giannou et al. [2023] Angeliki Giannou, Shashank Rajput, and Dimitris Papailiopoulos. The expressive power of tuning only the normalization layers. arXiv preprint arXiv:2302.07937, 2023.
  • Gonon et al. [2023] Lukas Gonon, Lyudmila Grigoryeva, and Juan-Pablo Ortega. Approximation bounds for random neural networks and reservoir systems. The Annals of Applied Probability, 33(1):28–69, 2023.
  • Hart et al. [2021] Allen G Hart, James L Hook, and Jonathan HP Dawes. Echo state networks trained by tikhonov least squares are l2(μ) approximators of ergodic dynamical systems. Physica D: Nonlinear Phenomena, 421:132882, 2021.
  • Holt and Koch [1997] Gary R Holt and Christof Koch. Shunting inhibition does not have a divisive effect on firing rates. Neural computation, 9(5):1001–1013, 1997.
  • Hornik [1991] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
  • Hornik [1993] Kurt Hornik. Some new results on neural network approximation. Neural networks, 6(8):1069–1072, 1993.
  • Hornik et al. [1989] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359–366, 1989.
  • Jaeger and Haas [2004] Herbert Jaeger and Harald Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. science, 304(5667):78–80, 2004.
  • Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Klos et al. [2020] Christian Klos, Yaroslav Felipe Kalle Kossio, Sven Goedeke, Aditya Gilra, and Raoul-Martin Memmesheimer. Dynamical Learning of Dynamics. Physical Review Letters, 125(8), August 2020. ISSN 0031-9007, 1079-7114. doi: 10.1103/PhysRevLett.125.088103. URL https://link.aps.org/doi/10.1103/PhysRevLett.125.088103.
  • Leshno et al. [1993] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural networks, 6(6):861–867, 1993.
  • Li et al. [2023] Ming Li, Sho Sonoda, Feilong Cao, Yu Guang Wang, and Jiye Liang. How powerful are shallow neural networks with bandlimited random weights? In International Conference on Machine Learning, pages 19960–19981. PMLR, 2023.
  • Logiaco et al. [2021] Laureline Logiaco, LF Abbott, and Sean Escola. Thalamic control of cortical dynamics in a model of flexible motor sequencing. Cell reports, 35(9), 2021.
  • Malach et al. [2020] Eran Malach, Gilad Yehudai, Shai Shalev-Schwartz, and Ohad Shamir. Proving the lottery ticket hypothesis: Pruning is all you need. In International Conference on Machine Learning, pages 6682–6691. PMLR, 2020.
  • Mazzucato et al. [2019] Luca Mazzucato, Giancarlo La Camera, and Alfredo Fontanini. Expectation-induced modulation of metastable activity underlies faster coding of sensory stimuli. Nature neuroscience, 22(5):787–796, 2019.
  • Neufeld and Schmocker [2023] Ariel Neufeld and Philipp Schmocker. Universal approximation property of random neural networks. arXiv preprint arXiv:2312.08410, 2023.
  • Ogawa et al. [2023] Shun Ogawa, Francesco Fumarola, and Luca Mazzucato. Multitasking via baseline control in recurrent neural networks. Proceedings of the National Academy of Sciences, 120(33):e2304394120, 2023.
  • Papadopoulos et al. [2024] Lia Papadopoulos, Suhyun Jo, Kevin Zumwalt, Michael Wehr, David A McCormick, and Luca Mazzucato. Modulation of metastable ensemble dynamics explains optimal coding at moderate arousal in auditory cortex. arXiv preprint arXiv:2404.03902, 2024.
  • Perich et al. [2018] Matthew G. Perich, Juan A. Gallego, and Lee E. Miller. A neural population mechanism for rapid learning. Neuron, 100(4):964–976.e7, 2018. ISSN 0896-6273. doi: https://doi.org/10.1016/j.neuron.2018.09.030.
  • Petrov et al. [2024] Aleksandar Petrov, Philip HS Torr, and Adel Bibi. Prompting a pretrained transformer can be a universal approximator. arXiv preprint arXiv:2402.14753, 2024.
  • Pinkus [1999] Allan Pinkus. Approximation theory of the mlp model in neural networks. Acta numerica, 8:143–195, 1999.
  • Prabhu [2019] Vinay Uday Prabhu. Kannada-mnist: A new handwritten digits dataset for the kannada language. arXiv preprint arXiv:1908.01242, 2019.
  • Rahimi and Recht [2008] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. Advances in neural information processing systems, 21, 2008.
  • Ramanujan et al. [2020] Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, and Mohammad Rastegari. What’s hidden in a randomly weighted neural network? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11893–11902, 2020.
  • Remington et al. [2018] Evan D. Remington, Devika Narain, Eghbal A. Hosseini, and Mehrdad Jazayeri. Flexible sensorimotor computations through rapid reconfiguration of cortical dynamics. Neuron, 98(5):1005–1019.e5, 2018. ISSN 0896-6273. doi: https://doi.org/10.1016/j.neuron.2018.05.020.
  • Rosenblatt et al. [1962] Frank Rosenblatt et al. Principles of neurodynamics: Perceptrons and the theory of brain mechanisms, volume 55. Spartan books Washington, DC, 1962.
  • Rosenfeld and Tsotsos [2019] Amir Rosenfeld and John K Tsotsos. Intriguing properties of randomly weighted networks: Generalizing while learning next to nothing. In 2019 16th conference on computer and robot vision (CRV), pages 9–16. IEEE, 2019.
  • Schäfer and Zimmermann [2006] Anton Maximilian Schäfer and Hans Georg Zimmermann. Recurrent neural networks are universal approximators. In Artificial Neural Networks–ICANN 2006: 16th International Conference, Athens, Greece, September 10-14, 2006. Proceedings, Part I 16, pages 632–640. Springer, 2006.
  • Schlake et al. [2022] Georg Stefan Schlake, Jan David Hüwel, Fabian Berns, and Christian Beecks. Evaluating the lottery ticket hypothesis to sparsify neural networks for time series classification. In 2022 IEEE 38th International Conference on Data Engineering Workshops (ICDEW), pages 70–73. IEEE, 2022.
  • Sehgal et al. [2013] Megha Sehgal, Chenghui Song, Vanessa L Ehlers, and James R Moyer Jr. Learning to learn–intrinsic plasticity as a metaplasticity mechanism for memory formation. Neurobiology of learning and memory, 105:186–199, 2013.
  • Song et al. [2005] Sen Song, Per Jesper Sjöström, Markus Reigl, Sacha Nelson, and Dmitri B Chklovskii. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS biology, 3(3):e68, 2005.
  • Stroud et al. [2018] Jake P Stroud, Mason A Porter, Guillaume Hennequin, and Tim P Vogels. Motor primitives in space and time via targeted gain modulation in cortical networks. Nature neuroscience, 21(12):1774–1783, 2018.
  • Sussillo and Abbott [2009] David Sussillo and Larry F Abbott. Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4):544–557, 2009.
  • Von Oswald et al. [2023] Johannes Von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. Transformers learn in-context by gradient descent. In International Conference on Machine Learning, pages 35151–35174. PMLR, 2023.
  • Wu et al. [2020] Daniel J Wu, Andrew C Yang, and Vinay U Prabhu. Afro-mnist: Synthetic generation of mnist-style datasets for low-resource languages, 2020.
  • Wyrick and Mazzucato [2021] David Wyrick and Luca Mazzucato. State-dependent regulation of cortical processing speed via gain modulation. Journal of Neuroscience, 41(18):3988–4005, 2021.
  • Xiao et al. [2017] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
  • Yang et al. [2019] Guangyu Robert Yang, Madhura R Joglekar, H Francis Song, William T Newsome, and Xiao-Jing Wang. Task representations in neural networks trained to perform many cognitive tasks. Nature neuroscience, 22(2):297–306, 2019.
  • Yu et al. [2019] Haonan Yu, Sergey Edunov, Yuandong Tian, and Ari S Morcos. Playing the lottery with rewards and multiple languages: lottery tickets in rl and nlp. arXiv preprint arXiv:1906.02768, 2019.
  • Zaken et al. [2021] Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199, 2021.
  • Zhang and Linden [2003] Wei Zhang and David J Linden. The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nature Reviews Neuroscience, 4(11):885–900, 2003.

Throughout the appendix, results are restated for ease of reference. We will always take $||\cdot||$ to be the 1-norm unless stated otherwise.

Appendix A Random Neural Network Formulation

The proofs that follow revolve around masked, random neural networks:

\tilde{r}_{m}^{\mathcal{M}}=-\alpha r_{0}+\mathcal{M}\odot\beta\phi(Wr_{0}+Bx+b),\qquad \tilde{y}_{m}^{\mathcal{M}}=A\tilde{r}_{m}^{\mathcal{M}}, \qquad (A.1)

where $0\leq\alpha<\infty$, $0<\beta<\infty$, $m\in\mathbb{N}$, $r_{0},\tilde{r}_{m}^{\mathcal{M}}\in\mathbb{R}^{m}$, $x\in\mathbb{R}^{d}$, $\tilde{y}_{m}^{\mathcal{M}}\in\mathbb{R}^{l}$, $\mathcal{M}\in\{0,1\}^{m}$, and all other matrices and vectors have real elements with the dimensions required by the above definitions. We assume that $\phi$ is $\gamma$-parameter bounding and that each individual (scalar) parameter, be it weight or bias, is sampled randomly, before masking, from a uniform distribution on $[-\bar{\gamma},\bar{\gamma}]$ (note that here we use $\bar{\gamma}$ where we used $R$ in the main text). In this way the parameters are random variables with compact support. If $\mathcal{M}=\mathbf{1}$ then we drop the superscript. To account for feed-forward neural networks we simply assume that $W$ is the zero matrix.

W.l.o.g. assume there are $n$ non-zero elements in $\mathcal{M}$. We construct $W^{\mathcal{M}}\in\mathbb{R}^{n\times n}$, the recurrent matrix restricted to participating (non-masked) hidden units, by beginning with $W$ and deleting the $i^{th}$ row and $i^{th}$ column of the matrix if $\mathcal{M}_{i}=0$. We construct $B^{\mathcal{M}}\in\mathbb{R}^{n\times d}$, $A^{\mathcal{M}}\in\mathbb{R}^{l\times n}$, and $b^{\mathcal{M}}\in\mathbb{R}^{n}$ by deleting the $i^{th}$ row of $B$, the $i^{th}$ column of $A$, and the $i^{th}$ element of $b$ if $\mathcal{M}_{i}=0$.

Consider the case where the $i^{th}$ element of $r_{0}$ is 0 whenever $\mathcal{M}_{i}=0$. Then, regardless of whether Eq. A.1 represents a feed-forward network or the transition function for an RNN, the masked units will always be zero. We can thus simply track the $n$ units that correspond to 1's in $\mathcal{M}$, as the outputs $y^{\mathcal{M}}$ depend solely on these. We observe that the behaviour of these units can be described by the following network:

r_{m}^{\mathcal{M}}=-\alpha r_{0}+\beta\phi(W^{\mathcal{M}}r_{0}+B^{\mathcal{M}}x+b^{\mathcal{M}}),\qquad y_{m}^{\mathcal{M}}=A^{\mathcal{M}}r_{m}^{\mathcal{M}}. \qquad (A.2)

It is networks of the form of Eq. A.2 that will be the primary subject of study in what follows. Note that the '~' over the $r$ is dropped to denote the fact that $r$ is a different vector on account of dropping the zero units. In the feed-forward case we use subscripts, as above, to denote hidden-layer width. Whenever we discuss RNNs or dynamical systems we instead use the subscript to denote time.

Appendix B Proofs from Section 2.1

Proposition 1 (restated). The ReLU and the Heaviside step function are $\gamma$-parameter bounding activations for any $\gamma>0$.

Proof.

We prove this solely for the ReLU, as the logic for the Heaviside is effectively the same. Let $\phi$ thus be a ReLU. First, observe the following useful property: for all $\alpha>0$ we have $\alpha\phi(x)=\phi(\alpha x)$. From this, consider the neural network of hidden-layer width $n$ with ReLU activations, $y_{n}(\theta)$, and observe:

y_{n}(\theta)=\alpha^{2}\sum_{i=1}^{n}\frac{A_{:i}}{\alpha}\phi\Big(\frac{B_{i:}}{\alpha}x+\frac{b_{i}}{\alpha}\Big)=\alpha^{2}y_{n}\Big(\frac{\theta}{\alpha}\Big). \qquad (B.1)

Moreover, if $\alpha\in\mathbb{N}$ we have

y_{n}(\theta)=\sum_{i=1}^{\alpha^{2}n}\tilde{A}_{:i}\phi\big(\tilde{B}_{i:}x+\tilde{b}_{i}\big)=y_{\alpha^{2}n}(\tilde{\theta}), \qquad (B.2)

where $\tilde{\theta}=[\frac{\theta_{1}}{\alpha},\dots,\frac{\theta_{n}}{\alpha},\dots,\frac{\theta_{1}}{\alpha},\dots,\frac{\theta_{n}}{\alpha}]$, so that each element is simply a re-scaled and repeated version of the original parameters; we have $\alpha^{2}$ repeats of each term to account for the $\alpha^{2}$ factor in Eq. B.1.

Now, given an arbitrary compact set $U\subset\mathbb{R}^{d}$, continuous function $h:\mathbb{R}^{d}\to\mathbb{R}^{l}$, and $\epsilon>0$, by universal approximation theory (see e.g. Leshno et al. [1993], Hornik [1993]) we can find $n$ such that

\int_{U}||h(x)-y_{n}(x,\theta)||\,dx\leq\epsilon \qquad (B.3)

holds. Because $n$ is finite, we can bound every individual (scalar) parameter by some sufficiently large $M$. Suppose we want the parameters to be bounded instead by $\gamma$, with $M>\gamma>0$. If we select $\alpha\in\mathbb{N}$ s.t. $\alpha>\frac{M}{\gamma}$ then we can find $y_{\alpha^{2}n}(x,\tilde{\theta})$ such that $y_{\alpha^{2}n}(x,\tilde{\theta})=y_{n}(x,\theta)$. Thus we have found a parameter-bounding ReLU neural network satisfying Eq. B.3, completing the proof. ∎

Remark: The intuition behind this result, for the ReLU, is credited to a reply to the Universal Approximation Theorem with Bounded Parameters question on Mathematics Stack Exchange.
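The rescale-and-repeat identity behind Eqs. B.1–B.2 can be checked numerically; a minimal sketch (dimensions and α arbitrary):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(0)

n, d, l, alpha = 4, 3, 2, 5.0
A, B, b = rng.normal(size=(l, n)), rng.normal(size=(n, d)), rng.normal(size=n)
x = rng.normal(size=d)

y = A @ relu(B @ x + b)
# Rescale-and-repeat network: alpha^2 * n units, each parameter divided by alpha.
y_tilde = sum((A / alpha) @ relu((B / alpha) @ x + b / alpha)
              for _ in range(int(alpha) ** 2))
assert np.allclose(y, y_tilde)  # equality up to floating point
```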

The following lemma constitutes the core of Theorem 1. It shows that one can achieve universal approximation, in the sense needed for the theorem, using masking. The theorem then follows by manipulating biases to achieve masking. As mentioned in the main text, the following lemma is closely related to past results on the SLTH over units for MLPs with one hidden layer. Specifically, we believe that one should be able to prove an analogue of our theorem by combining Theorem 3.2 of Malach et al. [2020] with Theorem 1 of Rahimi and Recht [2008] (or a similar result on learning with random networks). We leave the details to future studies.

Lemma 1.

Let $h:U\to\mathbb{R}^{l}$ be a continuous function on compact support $U\subset\mathbb{R}^{d}$. Then for any $\epsilon>0$, $\delta\in(0,1)$, we can find a layer width $m\in\mathbb{N}$ such that, with probability at least $1-\delta$, $\exists\mathcal{M}\in\{0,1\}^{m}$ satisfying the following:

\int_{U}||h(x)-y_{m}^{\mathcal{M}}(x)||\,dx\leq\epsilon. \qquad (B.4)
Proof.

First, we find a neural network with parameters that approximate the desired function $h$. Given the assumptions on $\phi$, we can use Proposition D.1 to find $n$ and parameters $\theta^{*}=\{A^{*},B^{*},b^{*}\}$ such that

\int_{U}||h(x)-y_{n}(x,\theta^{*})||\,dx\leq\frac{\epsilon}{2}, \qquad (B.5)

because UU is compact and hh is continuous.

We make a brief comment about the domain of a given activation function in $y_{n}$. $\phi$ operates on the compact domain $\{ux+b:u\in[-\bar{\gamma},\bar{\gamma}]^{d},\,x\in U,\,b\in[-\bar{\gamma},\bar{\gamma}]\}$, as a consequence of the compactness of the support of the parameters and of the assumed compactness of $U$. By the assumptions on $\phi$, it is Lipschitz and bounded on this domain; we denote the Lipschitz constant and the bound by $K_{\phi}$ and $M_{\phi}$, respectively. We further define $M_{x}$ to be a bound for $x$ on $U$, and $|U|$ to be the value $\int_{U}dx$, which is finite by the boundedness of the domain.

Next, we construct a masked random network that approximates $y_{n}$ with high probability. By Lemma 3, we can find a random feed-forward neural network of hidden layer width $m$ such that a mask $\mathcal{M}$ exists satisfying $|\theta^{*}_{i}-\theta_{i}^{\mathcal{M}}|<\varepsilon$ for arbitrary $\varepsilon>0$. In particular, we can choose $\varepsilon$ as:

$$|\theta^{*}_{i}-\theta_{i}^{\mathcal{M}}|<\varepsilon=\frac{\epsilon}{2|U|\max\big(nl(K_{\phi}\bar{\gamma}[1+M_{x}]+M_{\phi}),\,1\big)} \qquad (B.6)$$

for all $i$, with probability at least $1-\delta$. On the event, of probability at least $1-\delta$, that a mask satisfying the above error bound exists, we get

$$\|y_{m}^{\mathcal{M}}(x)-y_{n}(x)\|\leq nl\big[K_{\phi}\bar{\gamma}(1+M_{x})+M_{\phi}\big]\varepsilon\leq\frac{\epsilon}{2|U|}, \qquad (B.7)$$

where, in addition to Eq. B.6, we use the assumptions on $\phi$ and $U$ stated before the start of the proof, together with repeated application of the triangle inequality (see Lemma 4 for the derivation of the above bound).

Integrating the error over the domain and using Eq. B.5 and Eq. B.7 gives

$$\int_{U}\|h(x)-y_{m}^{\mathcal{M}}(x)\|\,dx\leq\int_{U}\|h(x)-y_{n}(x)\|\,dx+\int_{U}\|y_{m}^{\mathcal{M}}(x)-y_{n}(x)\|\,dx\leq\epsilon, \qquad (B.8)$$

with probability at least $1-\delta$. ∎

Theorem 1 (restated).

Proof.

Observe that, once we have chosen an $m$ satisfying the desiderata of Lemma 1, because $\phi$ is assumed to be $\gamma$-bias-learning, $m$ is finite, and all variables that make up the input of $\phi$ are bounded, we can implement the mask by setting $b_{i}$ to be very negative for every $i$ such that $\mathcal{M}_{i}=0$. For every $i$ such that $\mathcal{M}_{i}=1$ we simply leave $b_{i}$ at its original randomly chosen value. ∎
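The following minimal Python sketch (ours; the bias value used is one sufficient choice, not the paper's) illustrates this bias-implemented masking for a one-hidden-layer ReLU network with bounded inputs and parameters:

```python
# Setting a sufficiently negative bias silences a ReLU unit for all admissible x,
# so bias learning can emulate any unit mask M exactly.
import numpy as np

rng = np.random.default_rng(1)
d, n, l = 3, 8, 2
gamma_bar, M_x = 1.0, 5.0                    # parameter bound and per-coordinate input bound

A = rng.uniform(-gamma_bar, gamma_bar, (l, n))
B = rng.uniform(-gamma_bar, gamma_bar, (n, d))
b = rng.uniform(-gamma_bar, gamma_bar, n)
mask = rng.integers(0, 2, n).astype(bool)    # the mask M from Lemma 1

# Pre-activations satisfy |B[i] @ x| <= gamma_bar * d * M_x, so this bias is
# negative enough to zero every masked unit on the whole input domain.
b_learned = np.where(mask, b, -(gamma_bar * d * M_x + gamma_bar + 1.0))

relu = lambda z: np.maximum(z, 0.0)
for _ in range(100):
    x = rng.uniform(-M_x, M_x, d)
    y_bias = A @ relu(B @ x + b_learned)                 # bias-learned network
    y_mask = A[:, mask] @ relu(B[mask] @ x + b[mask])    # explicitly masked subnetwork
    assert np.allclose(y_bias, y_mask)
```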

Corollary 2.

Assume $d=l$, that is, the output and input spaces are the same. Then the results of Lemma 1 and Theorem 1 also hold for res-nets, that is, networks whose output is of the form $x+y_{m}^{\mathcal{M}}(x)$.

Proof.

This follows by observing that $h(x)-x$ is also a continuous function, and then replacing $h(x)$ with $h(x)-x$ in Eq. B.4 and rearranging. ∎

Remark: While the error can be made arbitrarily small, zero error itself cannot be achieved. This is because our proof relies on first approximating the given continuous function with a neural network with all parameters tuned, and then approximating this second network using bias-learning to pick out a matching sub-network from a large random reservoir; the probability of perfectly matching the fully tuned network with the bias-learned network is zero. This could be addressed by using an integral representation for continuous functions instead of directly using a finite-width neural network to approximate the given function (see e.g. Rahimi and Recht [2008], Li et al. [2023]). As one will see below, this remark also applies to the recurrent neural network result.

Appendix C Proof from Section 2.2

Analogous to the section containing the feed-forward proofs, we first state and prove a lemma which comprises the core of the proof for recurrent neural networks. This lemma shows that one can achieve universal approximation with high probability using masking in a randomly initialized RNN, and in this way provides a proof of the SLTH over units for RNNs. The proof of the main theorem in this section then follows quite straightforwardly.

Lemma 2.

Consider a discrete time, partially observed dynamical system of the form of, and satisfying the same conditions as, the one in Eq. 3. Let $0<T<\infty$, initial condition $z_{0}\in U_{z}$, input $x_{t}\in U_{x}$ $\forall t$, $\epsilon>0$, and $\delta\in(0,1)$. Then we can find an RNN initial condition $r_{0}$ and a layer width $m\in\mathbb{N}$ such that, with probability at least $1-\delta$, $\exists\,\mathcal{M}\in\{0,1\}^{m}$ satisfying the following:

$$\sum_{t=1}^{T}\|y_{t}-y_{t}^{\mathcal{M}}\|<\epsilon. \qquad (C.1)$$
Proof.

In what follows, W.L.O.G., we assume $\epsilon<1$. It is well known that we can approximate this dynamical system arbitrarily well with an RNN (Schäfer and Zimmermann [2006]); we provide a simple proof of this in Proposition 3. In particular, for arbitrary $1>\epsilon>0$ we can find an RNN of the form in Eq. 2, with hidden layer width $n\in\mathbb{N}$ and output $\hat{y}$, satisfying:

$$\sum_{t=1}^{T}\|\hat{y}_{t}-y_{t}\|<\frac{\epsilon}{2}, \qquad (C.2)$$

for any initial condition of the dynamical system selected within the invariant set $U_{z}$. We note that this is point-wise convergence. It can be shown (see Prop. 3) that the hidden states of this RNN remain in a compact set, $U_{r}$, when approximating finite time trajectories of the original dynamical system with initial conditions in $U_{z}$. Given this compactness, we can then show that the following set

$$\tilde{U}=\big\{u(r+r^{\prime})+vx+b:\ u\in[-\bar{\gamma},\bar{\gamma}]^{n},\ v\in[-\bar{\gamma},\bar{\gamma}]^{d},\ r\in U_{r},\ x\in U,\ b\in[-\bar{\gamma},\bar{\gamma}],\ r^{\prime}\in\mathbb{R}^{n},\ \|r^{\prime}\|\leq 1\big\} \qquad (C.3, C.4)$$

is itself compact, by the compactness of the sets from which it is formed. It will turn out that this set contains the arguments of $\phi$ that appear in the proof. By this compactness and the assumptions on $\phi$, we observe that $\phi$ has Lipschitz constant $K_{\phi}$ on $\tilde{U}$. The size of $U_{r}$ (and thus of $\tilde{U}$) and $K_{\phi}$ will, in general, depend upon the first approximating neural network. By the compactness of $U$ and $U_{r}$, we can bound $x$ and $r$ on these sets, yielding bounds $R_{x}$ and $R_{r}$ respectively.

Let the parameters of the above-defined approximating RNN be given by $\theta^{*}=\{A^{*},W^{*},B^{*},b^{*}\}$. Then by Lemma 3 we can find a random RNN of hidden width $m$ and with parameters $\theta$ such that a mask $\mathcal{M}$ exists satisfying

$$|\theta^{*}_{i}-\theta_{i}^{\mathcal{M}}|<\varepsilon=\frac{\epsilon}{2lT\big[\max(\bar{\gamma},1)\max\big(n\beta K_{\phi}\tilde{R}\sum_{t^{\prime}=0}^{T-1}(\alpha+\bar{\gamma}\beta K_{\phi}n)^{t^{\prime}},\,1\big)+R_{r}\big]}, \qquad (C.5)$$

for all $i$, with probability at least $1-\delta$, where $\tilde{R}=R_{x}+R_{r}+1$. One can show using induction (see Lemma 5 for the derivation) that if $|\theta^{*}_{i}-\theta_{i}^{\mathcal{M}}|<\varepsilon$ for all $i$ we get

$$\sum_{t=1}^{T}\|y_{t}^{\mathcal{M}}-\hat{y}_{t}\|<lT\big[\bar{\gamma}n\beta K_{\phi}\tilde{R}\sum_{t^{\prime}=0}^{T-1}(\alpha+\bar{\gamma}\beta K_{\phi}n)^{t^{\prime}}+R_{r}\big]\varepsilon. \qquad (C.6)$$

The triangle inequality applied to $\|y_{t}^{\mathcal{M}}-\hat{y}_{t}+\hat{y}_{t}-y_{t}\|$, along with Equations C.2, C.5, and C.6, completes the proof. ∎

Theorem 2 (restated).

Proof.

This follows directly from Lemma 2, by observing that one can replace the mask by simply setting biases to some sufficiently low value. ∎

Appendix D Supplementary Lemmas

The following result is well known in the literature; see e.g. Proposition 1 of Leshno et al. [1993].

Proposition 2.

For any continuous $h:U\to\mathbb{R}^{l}$ on a compact $U\subset\mathbb{R}^{d}$, and any $\epsilon>0$, $\exists\,n\in\mathbb{N}$ and parameters $\theta^{*}$ s.t.

$$\int_{U}\|h(x)-y_{n}(x,\theta^{*})\|\,dx\leq\epsilon. \qquad (D.1)$$
Corollary 3.

The above holds if we restrict the output weight matrix of the neural network to have rank equal to the output dimension.

Proof.

This is because the set of full rank matrices is dense in $\mathbb{R}^{m\times n}$ for $m,n\in\mathbb{N}$. ∎

Consider matrices $W^{*}\in\mathbb{R}^{n\times n}$, $B^{*}\in\mathbb{R}^{n\times d}$, $A^{*}\in\mathbb{R}^{l\times n}$, and vector $b^{*}\in\mathbb{R}^{n}$. We can vectorize and concatenate their elements into the single long vector $\theta^{*}\in\mathbb{R}^{\pi}$, where $\pi=n(n+d+l+1)$. Assume that $|\theta_{i}^{*}|<\gamma$ for all $i$.

Next, construct $W\in\mathbb{R}^{m\times m}$, $B\in\mathbb{R}^{m\times d}$, $A\in\mathbb{R}^{l\times m}$, and vector $b\in\mathbb{R}^{m}$, by sampling each element randomly from a uniform distribution on $[-\bar{\gamma},\bar{\gamma}]$, where $\bar{\gamma}=\gamma+\Delta\gamma$ for $\Delta\gamma>0$. We analogously group these into a single vector, $\theta\in\mathbb{R}^{m(m+d+l+1)}$. Observe that for each $\mathcal{M}\in\{0,1\}^{m}$ we can construct sub-matrices of $W$, $B$, $A$, and a sub-vector of $b$ by deleting column and row pairs in $W$, rows in $B$, columns in $A$, and elements of $b$ whose indices correspond to $i\in\{1,\dots,m\}$ such that $\mathcal{M}_{i}=0$. For a given $\mathcal{M}$, we define $\theta^{\mathcal{M}}$ to be the vector constructed by flattening and concatenating these sub-matrices and sub-vector.
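For concreteness, a Python sketch of this sub-network construction (ours; all names are illustrative) is:

```python
# Build theta^M from a unit mask: delete row/column pairs of W, rows of B,
# columns of A, and entries of b wherever M_i = 0, then flatten and concatenate.
import numpy as np

rng = np.random.default_rng(2)
m, d, l, gamma_bar = 10, 3, 2, 1.0
W = rng.uniform(-gamma_bar, gamma_bar, (m, m))
B = rng.uniform(-gamma_bar, gamma_bar, (m, d))
A = rng.uniform(-gamma_bar, gamma_bar, (l, m))
b = rng.uniform(-gamma_bar, gamma_bar, m)

def theta_of_mask(mask):
    idx = np.flatnonzero(mask)
    return np.concatenate([
        W[np.ix_(idx, idx)].ravel(),  # surviving row/column pairs of W
        B[idx, :].ravel(),            # surviving rows of B
        A[:, idx].ravel(),            # surviving columns of A
        b[idx],                       # surviving biases
    ])

mask = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0], dtype=bool)  # n = 4 active units
n = int(mask.sum())
assert theta_of_mask(mask).size == n * (n + d + l + 1)       # = pi, as above
```

With this construction in hand, we have the following lemma: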

Lemma 3.

For $\theta^{*}$, defined above, and arbitrary $\varepsilon>0$, $\delta\in(0,1)$, we can find $m>n$ such that, with probability at least $1-\delta$, $\exists\,\mathcal{M}\in\{0,1\}^{m}$ with only $n$ non-zero elements such that $|\theta_{i}^{*}-\theta_{i}^{\mathcal{M}}|<\varepsilon$ for all $i\in\{1,\dots,\pi\}$. In particular, any $m\geq\frac{n\log\delta}{\log[1-(\epsilon/\bar{\gamma})^{\pi}]}$ will satisfy the result, where $\epsilon=\min(\varepsilon,\Delta\gamma)$.

Proof.

In what follows we set $\epsilon=\min(\varepsilon,\Delta\gamma)$. This simplifies the probability bound derived below, because it means the probability of falling within an $\epsilon$ window of a desired parameter does not change even if the desired parameter is very close to its bound, $\pm\gamma$. We will refer to the event that the desiderata of the lemma are satisfied for $\epsilon$, rather than $\varepsilon$, as $A_{1}$; that is: $\exists\,\mathcal{M}\in\{0,1\}^{m}$ with only $n$ non-zero elements such that $|\theta_{i}^{*}-\theta_{i}^{\mathcal{M}}|<\epsilon$ for all $i\in\{1,\dots,\pi\}$. The event that the desiderata are not satisfied is $A_{1}^{c}$.

Assume that $m^{\star}=kn$ for some $k\in\mathbb{N}^{+}$. Consider the ‘block’ mask $\mathcal{M}^{k_{1}}$ s.t. $\mathcal{M}_{i}^{k_{1}}=1$ only for $i\in\{(k_{1}-1)n+1,\dots,k_{1}n\}$, with $0<k_{1}\leq k$. Note that the $n$ elements selected by these block masks are non-overlapping for two different $k_{1}$. Let $A_{2}$ be the event that some block mask satisfies the desiderata of the lemma with error $\epsilon$. Clearly $A_{2}\subset A_{1}\implies A_{1}^{c}\subset A_{2}^{c}\implies P(A_{1}^{c})\leq P(A_{2}^{c})$. $A_{2}^{c}$ is the event that no block mask satisfies the desiderata. Observe that

$$P(A_{2}^{c})=P\bigg[\bigcap_{k_{1}=1}^{k}\{k_{1}^{\mathrm{th}}\ \mathrm{block\ mask\ doesn't\ work}\}\bigg]=\prod_{k_{1}=1}^{k}P(\{k_{1}^{\mathrm{th}}\ \mathrm{block\ mask\ doesn't\ work}\})=\prod_{k_{1}=1}^{k}\big[1-P(\{k_{1}^{\mathrm{th}}\ \mathrm{block\ mask\ works}\})\big]=\bigg[1-\Big(\frac{\epsilon}{\bar{\gamma}}\Big)^{\pi}\bigg]^{m^{\star}/n}, \qquad (D.2)$$

which follows from the fact that the elements of the matrices are independently sampled, so the sub-matrices selected by a given block mask are independent of those associated with any other block mask. By making $m^{\star}$ very large we can make $P(A_{2}^{c})$ arbitrarily small. Because $P(A_{1}^{c})\leq P(A_{2}^{c})$, and the desiderata of the lemma with error $\epsilon$ fail only on $A_{1}^{c}$, the result follows by selecting $m=m^{\star}$ such that $P(A_{2}^{c})\leq\delta$; a sufficient mask then exists with probability at least $1-\delta$. Lastly, because we have found a mask that satisfies per-parameter error $\epsilon$, and because $\epsilon\leq\varepsilon$, we have proved the lemma. ∎
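The per-block success probability used in Eq. D.2 is easy to check by simulation; the following snippet (ours) keeps $\pi$ artificially small so the event is observable:

```python
# Monte Carlo check that a block of pi i.i.d. Uniform[-gamma_bar, gamma_bar]
# parameters lands within eps of a fixed target with probability (eps/gamma_bar)^pi
# (valid because |theta*_i| < gamma and eps <= Delta-gamma, so the window never clips).
import numpy as np

rng = np.random.default_rng(3)
pi, gamma, d_gamma, eps = 3, 0.7, 0.3, 0.3
gamma_bar = gamma + d_gamma
theta_star = rng.uniform(-gamma, gamma, pi)

blocks = rng.uniform(-gamma_bar, gamma_bar, (500_000, pi))
hits = np.all(np.abs(blocks - theta_star) < eps, axis=1).mean()
print(hits, (eps / gamma_bar) ** pi)   # e.g. ~0.027 vs 0.027
```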

Remark 1: We note that, in Eq. D.2, $n$ will likely also depend implicitly on $\gamma$. If $\gamma$ is very small then we will need to stack many ReLUs on top of each other to attain a large enough dynamic range to approximate the desired function (see §2.1), leading to a larger number of units. Conversely, if $\gamma$ is very large we will need to sample a large number of units before we get parameters appropriately close to the desired subnetwork configuration. This suggests the existence of a sweet spot in the value of $\gamma$, which we leave for future work to explore.

Remark 2: We observe that this bound appears to be very weak. For example, if one wished to use it to find a masked network matching an MLP with input, hidden, and output dimensions of only $1$, $3$, and $1$ respectively, with a per-parameter error of $\epsilon=0.05$, an error probability of $\delta=0.1$, and $\gamma=0.1$, this bound would suggest we need a hidden layer of $m\geq 8.34\times 10^{12}$ neurons in the bias learning network. In light of the numerical experiments, it is clear that while the math here provides proofs of existence for bias learning, it does not say anything useful about the required scaling of the hidden layer width.
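To reproduce the flavour of this calculation, one can evaluate the width bound of Lemma 3 directly; the snippet below (ours) does so for the quoted values, taking $\pi=n(d+l+1)$ for the feed-forward case (no recurrent weights). Remark 2 does not state $\Delta\gamma$, so we show two choices; the order of magnitude is extremely sensitive to it:

```python
# Width bound of Lemma 3: m >= n log(delta) / log(1 - (eps/gamma_bar)^pi),
# with eps = min(epsilon, Delta-gamma). Delta-gamma below is assumed, not from the paper.
import numpy as np

n, d, l = 3, 1, 1
pi = n * (d + l + 1)                    # feed-forward case: no W, so pi = n(d + l + 1)
epsilon, delta, gamma = 0.05, 0.1, 0.1

for d_gamma in (0.05, 0.005):
    gamma_bar = gamma + d_gamma
    eps = min(epsilon, d_gamma)
    m = n * np.log(delta) / np.log(1.0 - (eps / gamma_bar) ** pi)
    print(f"Delta-gamma = {d_gamma}: m >= {m:.2e}")
# Delta-gamma = 0.05 gives m >= 1.36e+05; Delta-gamma = 0.005 gives m >= 5.48e+12,
# the latter on the astronomical scale quoted in Remark 2.
```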

For the following proposition, we take the discrete time dynamical system that we wish to approximate to be as in Eq. 3.

Proposition 3.

For finite $0<T<\infty$, any initial condition $z_{0}\in U_{z}$, input $x_{t}\in U_{x}$ $\forall t$, $\epsilon>0$, and any $|\alpha|<\infty$, $\beta>0$, we can find an RNN of the style of Eq. 2, of hidden width $n\in\mathbb{N}$, and an initial value $r_{0}$ for the RNN such that:

$$\sum_{t=1}^{T}\|\hat{y}_{t}-y_{t}\|<\epsilon, \qquad (D.3)$$

where $\hat{y}_{t}$ is the output of the RNN and $y_{t}$ is that of the dynamical system. Moreover, when approximating the dynamical system in this way the RNN hidden states will remain in a compact set, which we denote $U_{r}$.

The main portion of this result is well known; see e.g. Schäfer and Zimmermann [2006]. For completeness, we provide a self-contained proof below.

Proof.

In what follows, W.L.O.G., we will assume that the error bound $\epsilon$ is smaller than a constant $c>0$, introduced below. We want to approximate the dynamical system:

$$z_{t+1}=F(z_{t},x_{t}),\quad y_{t}=Cz_{t},\quad z_{0}\in U_{z}, \qquad (D.4)$$

defined on the set $\tilde{U}_{z}=\{z_{0}+c_{0}:z_{0}\in U_{z},\ \|c_{0}\|<c\}$, where $U_{z}$ is an invariant set (see §2.2).

We define the set:

$$U_{zx}=\{[z+c_{0}\ \ x]:z\in U_{z},\ x\in U,\ \|c_{0}\|<c\}. \qquad (D.5)$$

Importantly, this set is compact given the compactness assumptions on $U$ and $U_{z}$. Also note that, by the assumptions on $F$ (see §2.2), it will be $K_{F}$-Lipschitz on this compact set for some constant $K_{F}$. We can thus use Corollary 3 to find a neural network of hidden dimension $n\in\mathbb{N}$ that approximates $F$ with a maximum-rank output matrix, $A$. We write this neural network:

$$\hat{z}=-\alpha z+\beta A\phi(Wz+Bx+b)=\hat{F}(z,x), \qquad (D.6)$$

assuming $z\in U_{z}$ and $x\in U$, with $A\in\mathbb{R}^{s\times n}$, $W\in\mathbb{R}^{n\times s}$, $B\in\mathbb{R}^{n\times d}$, and $b\in\mathbb{R}^{n}$. In particular, for arbitrary $\epsilon$ with $0<\epsilon<c$, we can choose this network such that:

$$\|\hat{F}(z,x)-F(z,x)\|<\varepsilon=\frac{\epsilon}{T\max(R_{C}\sum_{t=0}^{T-1}K_{F}^{t},\,1)}, \qquad (D.7)$$

where $R_{C}=\|C\|$. Fix $T\geq 1$. To prove that we can approximate the underlying dynamical system, we use induction starting at time $t=1$. The base case will be

$$\|\hat{z}_{1}-z_{1}\|=\|\hat{F}(z_{0},x_{0})-F(z_{0},x_{0})\|\leq\varepsilon, \qquad (D.8)$$

by our choice of $n$ and initial condition, and because $[z_{0}\ \ x_{0}]^{\top}\in U_{zx}$. Importantly, because $\varepsilon<c$, this means that $[\hat{z}_{1}\ \ x_{1}]^{\top}\in U_{zx}$.

For $t=1$ we have $\varepsilon=\sum_{t^{\prime}=0}^{t-1}K_{F}^{t^{\prime}}\varepsilon$. We thus make the induction hypothesis that $\|\hat{z}_{t}-z_{t}\|<\sum_{t^{\prime}=0}^{t-1}K_{F}^{t^{\prime}}\varepsilon$ and that $[\hat{z}_{t}\ \ x_{t}]^{\top}\in U_{zx}$. If $T=1$ we are finished. If $T>1$ we take $1\leq t<T$ and use this hypothesis to prove the induction step:

$$\|\hat{z}_{t+1}-z_{t+1}\|\leq\|\hat{F}(\hat{z}_{t},x_{t})-F(\hat{z}_{t},x_{t})\|+\|F(\hat{z}_{t},x_{t})-F(z_{t},x_{t})\|\leq\varepsilon+K_{F}\|\hat{z}_{t}-z_{t}\|\leq\varepsilon\sum_{t^{\prime}=0}^{t}K_{F}^{t^{\prime}}<\frac{c}{T}. \qquad (D.9, D.10)$$

Because $\frac{c}{T}\leq c$, $[\hat{z}_{t+1}\ \ x_{t+1}]^{\top}\in U_{zx}$. Then

$$\sum_{t=1}^{T}\|\hat{y}_{t}-y_{t}\|\leq R_{C}T\varepsilon\sum_{t=0}^{T-1}K_{F}^{t}\leq\epsilon. \qquad (D.11)$$

While we have approximated the dynamical system, it is not yet in the standard rate-style RNN form. However, we can obtain the rate form by changing from tracking $\hat{z}$ to a different dynamical variable, $r_{t}\in\mathbb{R}^{n}$. Because $A$ is assumed to be of maximum rank, we can find a minimal norm $r_{0}\in\mathbb{R}^{n}$ such that $z_{0}=Ar_{0}$, with $\|r_{0}\|\leq\|r^{\prime}\|$ for any $r^{\prime}$ that also satisfies $z_{0}=Ar^{\prime}$. Take this as the initial value for an RNN with dynamics:

$$r_{t+1}=-\alpha r_{t}+\beta\phi(WAr_{t}+Bx_{t}+b). \qquad (D.12)$$

It is easy to see, by induction, that $\hat{z}_{t}=Ar_{t}$ for all $0\leq t\leq T$: multiplying Eq. D.12 by $A$ recovers Eq. D.6 with $\hat{z}_{t}=Ar_{t}$. It follows that $\hat{y}_{t}=CAr_{t}$ $\forall t$. Thus, this RNN approximates the original partially observed dynamical system.
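This change of variables is simple to verify numerically; the following sketch (ours, with arbitrary parameter values) confirms that the rate dynamics of Eq. D.12 track the $z$-dynamics of Eq. D.6 exactly when $z_{0}=Ar_{0}$:

```python
# With z0 = A r0, multiplying the r-update (Eq. D.12) by A recovers the z-update
# (Eq. D.6), so z_hat_t = A r_t along the whole trajectory.
import numpy as np

rng = np.random.default_rng(4)
s, n, d, T, alpha, beta = 2, 6, 3, 20, 0.3, 0.8
phi = np.tanh

A, W = rng.normal(size=(s, n)), rng.normal(size=(n, s))
B, b = rng.normal(size=(n, d)), rng.normal(size=n)
xs = rng.normal(size=(T, d))

r = rng.normal(size=n)
z = A @ r                                                   # z0 = A r0
for x in xs:
    z = -alpha * z + beta * A @ phi(W @ z + B @ x + b)      # Eq. D.6
    r = -alpha * r + beta * phi(W @ (A @ r) + B @ x + b)    # Eq. D.12
    assert np.allclose(z, A @ r)
```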

Lastly, we explain why the hidden states of the RNN remain in a compact set. For all $1\leq t\leq T$ and any choice of $z_{0}\in U_{z}$, repeatedly applying Eq. D.12 shows that $r_{t}$ is a fixed continuous function of the initial condition and the inputs, $r_{t}\equiv\Phi(z_{0},x_{0},\dots,x_{t-1})$; $\Phi$ is continuous by the continuity of the activation function and the boundedness of the parameters. Thus, the set $U_{r}^{(1)}=\{r:r=\Phi(z_{0},x_{0},\dots,x_{t-1}),\ z_{0}\in U_{z},\ x_{s}\in U_{x}\ \forall\,0\leq s<t\}$ contains all $r_{t}$ and is compact, as the arguments of $\Phi$ are taken from compact sets.

It remains to treat the initial values of the RNN, $r_{0}$. These are defined implicitly by $Ar_{0}=z_{0}$, $z_{0}\in U_{z}$. We will show that we can span all $z_{0}\in U_{z}$ by selecting $r_{0}$ from a compact set. By assumption $A$ is full rank, so $\mathrm{col}(A)=\mathbb{R}^{s}$ and, thus, we can find a subset of the columns of $A$ that form a basis for $\mathbb{R}^{s}$. Let these columns be $\{A_{:i}\}_{i\in\nu}$. Construct a matrix $A_{\nu}$ by deleting the columns whose indices are not contained in $\nu$. Because the columns of $A_{\nu}$ form a basis, the matrix is invertible. Define $U_{r}^{(0)}=\{r:r_{i}=\tilde{r}_{i}\ \mathrm{if}\ i\in\nu,\ r_{i}=0\ \mathrm{if}\ i\notin\nu,\ \tilde{r}=A_{\nu}^{-1}z_{0},\ z_{0}\in U_{z}\}$. For each $z_{0}\in U_{z}$ we have an initial condition $r_{0}\in U_{r}^{(0)}$ and, moreover, $r_{0}$ is a continuous mapping of $z_{0}$ (via $A_{\nu}^{-1}$). Since $z_{0}$ is taken from a compact set, $U_{r}^{(0)}$ must also be compact.

Finally, define $U_{r}=U_{r}^{(0)}\cup U_{r}^{(1)}$. From the above, this contains all hidden states of the RNN for the finite time trajectories of interest, from any initial condition. As a union of two compact sets it is also compact. We use $U_{r}$ in Lemma 2. We note that the size of this set will likely depend, via the dependency on $A$, on the function being approximated and on the neural network hidden layer size $n$. ∎

Lemma 4.

Eq. B.7 holds under the assumptions of Lemma 1.

Proof.

This follows by adding and subtracting $A_{ij}^{\mathcal{M}}\phi(B_{j:}x+b_{j})$ inside the summations in the norm of the expanded neural network expressions, and then successively bounding terms using the assumptions on the domain and the activation function:

$$\|y_{m}^{\mathcal{M}}-y_{n}\|\leq\sum_{i=1}^{l}\sum_{j=1}^{n}\Big|A_{ij}^{\mathcal{M}}\big[\phi(B_{j:}^{\mathcal{M}}x+b_{j}^{\mathcal{M}})-\phi(B_{j:}x+b_{j})\big]-(A_{ij}-A_{ij}^{\mathcal{M}})\phi(B_{j:}x+b_{j})\Big| \qquad (D.13)$$
$$\leq\sum_{i=1}^{l}\sum_{j=1}^{n}\Big[\bar{\gamma}K_{\phi}\big|(B_{j:}^{\mathcal{M}}-B_{j:})x+b_{j}^{\mathcal{M}}-b_{j}\big|+\varepsilon M_{\phi}\Big] \qquad (D.14)$$
$$\leq nl\big[\bar{\gamma}K_{\phi}(\varepsilon M_{x}+\varepsilon)+\varepsilon M_{\phi}\big]=nl\big[\bar{\gamma}K_{\phi}(M_{x}+1)+M_{\phi}\big]\varepsilon. \qquad (D.15)$$
∎
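A numerical spot-check of this bound (ours; we take $\phi=\tanh$, for which $K_{\phi}=M_{\phi}=1$, and interpret $M_{x}$ as a bound on $\|x\|_{1}$, so that the per-unit step $|(B_{j:}^{\mathcal{M}}-B_{j:})x|\leq\varepsilon M_{x}$ holds exactly) is:

```python
# Perturb every parameter of a one-hidden-layer network by at most eps and check
# the l1 output deviation against nl[gamma_bar*K_phi*(M_x + 1) + M_phi]*eps.
import numpy as np

rng = np.random.default_rng(5)
d, n, l, gamma_bar, M_x, eps = 3, 6, 2, 1.0, 2.0, 1e-3
K_phi = M_phi = 1.0
phi = np.tanh

# Sample theta* so that the eps-perturbed parameters stay within [-gamma_bar, gamma_bar].
A = rng.uniform(-gamma_bar + eps, gamma_bar - eps, (l, n))
B = rng.uniform(-gamma_bar + eps, gamma_bar - eps, (n, d))
b = rng.uniform(-gamma_bar + eps, gamma_bar - eps, n)
dA, dB, db = (eps * rng.uniform(-1, 1, p.shape) for p in (A, B, b))

bound = n * l * (gamma_bar * K_phi * (M_x + 1) + M_phi) * eps
worst = 0.0
for _ in range(1000):
    x = rng.uniform(-M_x / d, M_x / d, d)       # ensures ||x||_1 <= M_x
    y0 = A @ phi(B @ x + b)
    y1 = (A + dA) @ phi((B + dB) @ x + b + db)
    worst = max(worst, np.abs(y1 - y0).sum())
assert worst <= bound
print(worst, bound)                             # the bound holds with room to spare
```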

Lemma 5.

Eq. C.6 holds under the assumptions of Lemma 2.

Proof.

We have:

$$\|r_{0}^{\mathcal{M}}-r_{0}\|=0, \qquad (D.16)$$
$$\|r_{1}^{\mathcal{M}}-r_{1}\|\leq\alpha\|r_{0}^{\mathcal{M}}-r_{0}\|+\beta K_{\phi}\big[\bar{\gamma}n\|r_{0}^{\mathcal{M}}-r_{0}\|+\varepsilon n(R_{r}+R_{x}+1)\big] \qquad (D.17)$$
$$=\beta K_{\phi}n\tilde{R}\varepsilon\leq 1, \qquad (D.18)$$

where we used that $r_{0}=r_{0}^{\mathcal{M}}$, so both are contained in $U_{r}$ and the arguments of $\phi$ are contained in $\tilde{U}$, allowing us to use the Lipschitz constant $K_{\phi}$. We note that the final inequality follows from the choice of $\varepsilon$, and means that, since $r_{1}\in U_{r}$, $r_{1}^{\mathcal{M}}=r_{1}+r^{\prime}$ with $\|r^{\prime}\|\leq 1$. This means that for the next step forward in time the arguments of $\phi$ will once again be in $\tilde{U}$, so the Lipschitz condition continues to hold. Using $t=1$ as a base case, we make the induction hypothesis:

$$\|r_{t}^{\mathcal{M}}-r_{t}\|\leq n\beta K_{\phi}\tilde{R}\sum_{t^{\prime}=0}^{t-1}(\alpha+\beta K_{\phi}n\bar{\gamma})^{t^{\prime}}\varepsilon, \qquad (D.19)$$
$$\|r_{t}^{\mathcal{M}}-r_{t}\|\leq 1. \qquad (D.20)$$

Using the induction hypothesis, it is straightforward to prove the induction step, and to show that $\|r_{t+1}^{\mathcal{M}}-r_{t+1}\|$ remains smaller than $1$, so that the arguments of $\phi$ remain in $\tilde{U}$. Proving the induction step and then bounding the output matrix elements by $\bar{\gamma}$ yields the result. ∎
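As with Lemma 4, the recursion can be checked numerically. The sketch below (ours) assumes an RNN update of the form $r_{t+1}=-\alpha r_{t}+\beta\phi(Wr_{t}+Bx_{t}+b)$ (mirroring Eqs. D.6 and D.12), measures deviations in the $\ell_{1}$ norm, and confirms that an $\varepsilon$-perturbation of all parameters inflates the hidden-state error at most geometrically, with rate $|\alpha|+\beta K_{\phi}n\bar{\gamma}$, as in Eq. D.19:

```python
# Hidden-state error growth under an eps-perturbation of all RNN parameters,
# compared against the geometric bound of Eq. D.19 (with phi = tanh, K_phi = 1).
import numpy as np

rng = np.random.default_rng(6)
n, d, T = 5, 2, 15
alpha, beta, gamma_bar, eps, K_phi = 0.2, 0.5, 0.8, 1e-4, 1.0
phi = np.tanh

W = rng.uniform(-gamma_bar + eps, gamma_bar - eps, (n, n))
B = rng.uniform(-gamma_bar + eps, gamma_bar - eps, (n, d))
b = rng.uniform(-gamma_bar + eps, gamma_bar - eps, n)
dW, dB, db = (eps * rng.uniform(-1, 1, p.shape) for p in (W, B, b))
xs = rng.uniform(-1, 1, (T, d))

# Unperturbed run, to obtain the trajectory bounds R_r and R_x (l1 norms).
r, traj = np.zeros(n), []
for x in xs:
    r = -alpha * r + beta * phi(W @ r + B @ x + b)
    traj.append(r)
R_r = max(np.abs(v).sum() for v in traj)
R_x = max(np.abs(x).sum() for x in xs)
R_tilde = R_r + R_x + 1.0
rate = abs(alpha) + beta * K_phi * n * gamma_bar

r = rM = np.zeros(n)
for t, x in enumerate(xs, start=1):
    r = -alpha * r + beta * phi(W @ r + B @ x + b)
    rM = -alpha * rM + beta * phi((W + dW) @ rM + (B + dB) @ x + b + db)
    bound = n * beta * K_phi * R_tilde * eps * sum(rate**k for k in range(t))
    assert np.abs(rM - r).sum() <= bound
```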