On Sparsity in Overparametrised Shallow ReLU Networks
Abstract
The analysis of neural network training beyond their linearization regime remains an outstanding open question, even in the simplest setup of a single hidden-layer. The limit of infinitely wide networks provides an appealing route forward through the mean-field perspective, but a key challenge is to bring learning guarantees back to the finite-neuron setting, where practical algorithms operate.
Towards closing this gap, and focusing on shallow neural networks, in this work we study the ability of different regularisation strategies to capture solutions requiring only a finite number of neurons, even in the infinitely wide regime. Specifically, we consider (i) a form of implicit regularisation obtained by injecting noise into training targets [Blanc et al. 19], and (ii) the variation-norm regularisation [Bach 17], compatible with the mean-field scaling. Under mild assumptions on the activation function (satisfied for instance by ReLUs), we establish that both schemes are minimised by functions having only a finite number of neurons, irrespective of the amount of overparametrisation. We study the consequences of this property and describe the settings where one form of regularisation is favorable over the other.
1 Introduction
Supervised Learning in high dimensions remains an active research area, in part due to a confluence of several challenges (approximation-theoretic, computational and statistical) in which the curse of dimensionality needs to be simultaneously avoided. Linear learning based on RKHS theory suffers in the high-dimensional regime due to its lack of approximation power, which motivates the study of non-linear learning methods. Amongst them, arguably the simplest is given by single hidden-layer neural networks, trained with some form of gradient descent scheme. Despite its simplicity relative to its deep counterparts, learning with shallow neural networks is still not fully understood from a theoretical standpoint, specifically with regards to giving positive results that explain their advantage over linear learning methods in realistic settings.
An important theme in the recent analysis of neural networks is overparametrisation. By letting the number of neurons grow to infinity, one can obtain optimization, approximation and generalization guarantees beyond the linear learning regime [CB20, MMN18, RVE18], at the expense of computational efficiency. In this context, a key goal is to exhibit the right combination of neural architecture and learning algorithm that preserves the guarantees of infinitely-wide neural networks with polynomially-sized networks. We note that our study is only tangentially related to the so-called double-descent phenomena recently reported in those same models [BHMM19, BHX19, MM19], since we include regularisation from the onset. Regularisation can take two flavors: implicit (by injecting noise) or explicit (by adding a term to the empirical loss function). Both are key devices to give statistical guarantees of generalisation, and in this work we study their potential computational benefit in the overparametrised regime.
The mean-field regime of overparametrised shallow networks defines solutions in terms of measures over parameters. A natural form of explicit regularisation is then given by the variation norm [Bac17], which penalises a continuous equivalent of an $\ell_1$ norm over the weights, and is equivalent to the path-norm [NTS15] and the Barron norm [MWE19], as well as to the popular 'weight decay' for homogeneous activations such as the ones we consider here. Injecting noise into gradient descent dynamics is another classic form of implicit regularization, under appropriate ergodicity conditions that enable the forgetting of initial conditions. In this work, and inspired by [BGVV19], we focus on a specific form of noise, namely noise added to the training labels. The authors interpreted the noisy gradient dynamics as an approximate Ornstein-Uhlenbeck process that 'walks' around the manifold of solutions with zero training error (which has positive dimension due to overparametrisation), and 'locks' into solutions that minimise a certain implicit regulariser, given by the average squared norm of the network gradients over the data. They leveraged such implicit regularisation to explain the 'simple' structure of solutions in specific cases, such as univariate inputs or a single datapoint.
We encapsulate these two regularisation instances into a general family of regularisers that, combined with mild assumptions on the activation function, produce optimisation problems over the space of probability measures that can only be minimised if the measure has sparse support (Theorem 4.2). This gives a novel perspective on the overparametrised regime under these forms of regularisation, connecting it to a finite-dimensional linear program, whereby the infinitude of parameters required to give optimization guarantees might be relaxed to a finite (albeit possibly exponentially large in dimension) quantity.
Related Work:
Our work fits into a recent line of work that attempts to uncover the interplay between neural network overparametrisation, optimization and generalisation, dating back at least to [Bar97]. The implicit bias brought by different flavors of gradient descent in models involving neural networks has since been thoroughly explored [SHN+18, GWB+17, SESS19, NTS15, LMZ17, BGVV19, CB20, NMB+18, AS17, BO97]. Authors have also leveraged the special structure of ReLU activations [MBG18, WTP+19]. Closest to our setup are [BGVV19, EP20], who separately study the two forms of regularisation considered in this work. Our characterisation is consistent with theirs, but extends to a general high-dimensional setting and provides a unified picture. Finally, [BELM20] recently studied sparse solutions of the TV-regularised problem, establishing that for generic data one can find solutions with nearly optimal cost and sparsity; we instead focus on the exact minimisers of the constrained and penalised objectives, and leave the connections for future work.
2 Preliminaries
2.1 Shallow Neural Networks and Variation-Norm Spaces
We consider a shallow neural network of hidden-layer width $m$ as the following function:
$$f_m(x) \;=\; \frac{1}{m}\sum_{j=1}^{m} c_j\,\sigma\big(\langle a_j, x\rangle + b_j\big), \qquad (1)$$
where $x\in\mathbb{R}^d$ is the feature of the input data, $\sigma(\langle a,x\rangle+b)$ is the neuron constructed from an activation function $\sigma:\mathbb{R}\to\mathbb{R}$, and $\theta_j=(a_j,b_j,c_j)$ with $a_j\in\mathbb{R}^d$ and $b_j,c_j\in\mathbb{R}$ are the parameters of the $j$-th neuron. As observed by several authors [MMN18, CB18b, RVE18, SS18], such a neural network can be equivalently represented in terms of a probability measure $\mu$ over $\Theta:=\mathbb{R}^{d+2}$ as $f(x;\mu)=\int_\Theta\Phi(x,\theta)\,d\mu(\theta)$, where
$$\Phi(x,\theta) \;:=\; c\,\sigma\big(\langle a,x\rangle+b\big), \qquad \theta=(a,b,c)\in\Theta, \qquad (2)$$
and $\mu_m:=\frac{1}{m}\sum_{j=1}^{m}\delta_{\theta_j}$ is the empirical measure of the parameters $(\theta_1,\dots,\theta_m)$.
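As an illustration of this reparametrisation, the following minimal Python sketch (names and shapes are our own, purely illustrative) checks numerically that the finite-width network (1) coincides with the integral representation (2) evaluated against the empirical measure $\mu_m$:

```python
import numpy as np

def phi(x, theta):
    """Single neuron Phi(x, theta) = c * relu(<a, x> + b), with theta = (a, b, c)."""
    a, b, c = theta
    return c * np.maximum(0.0, a @ x + b)

def network(x, thetas):
    """Finite-width network (1): mean-field average of m neurons."""
    return np.mean([phi(x, th) for th in thetas])

def network_from_measure(x, atoms, weights):
    """Integral representation (2): <Phi(x, .), mu> for a discrete measure mu."""
    return sum(w * phi(x, th) for th, w in zip(atoms, weights))

rng = np.random.default_rng(0)
d, m = 5, 20
thetas = [(rng.normal(size=d), rng.normal(), rng.normal()) for _ in range(m)]
x = rng.normal(size=d)

# The empirical measure mu_m puts mass 1/m on each parameter vector.
assert np.isclose(network(x, thetas), network_from_measure(x, thetas, [1.0 / m] * m))
```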
Let us denote by $\mathcal{P}(\Theta)$ the space of probability measures over $\Theta$. In the following, we write $\langle\varphi,\mu\rangle:=\int_\Theta\varphi(\theta)\,d\mu(\theta)$ for a Borel-measurable test function $\varphi$ and $\mu\in\mathcal{P}(\Theta)$. Let $V:\Theta\to\mathbb{R}_+$ be a non-negative potential function satisfying a coercivity condition such as $V(\theta)\to\infty$ as $\|\theta\|\to\infty$. The set of functions representable as (1) defines a metric space $\mathcal{F}_V$ with norm
$$\|f\|_V \;:=\; \inf\Big\{\langle V,\mu\rangle \;:\; \mu\in\mathcal{P}(\Theta),\; f(x)=\langle\Phi(x,\cdot),\mu\rangle \;\;\forall x\Big\}. \qquad (3)$$
When $V(\theta)=|c|\,\|(a,b)\|$, the resulting space $\mathcal{F}_1$ is the so-called variation-norm space [Bac17]. One can verify [MWE19] that $V_p(\theta)=\big(|c|\,\|(a,b)\|\big)^p$ gives the same functional characterisation for any $p\ge 1$. Moreover, if $\Phi$ is 2-homogeneous w.r.t. $\theta$ (such as the ReLU $\sigma(t)=\max(0,t)$, for which $\Phi(x;\lambda\theta)=\lambda^2\,\Phi(x;\theta)$ for $\lambda>0$), then one can also verify that the weight-decay potential $V(\theta)=\|\theta\|^2=|c|^2+\|(a,b)\|^2$ also defines the same space (by first projecting the measure to the unit sphere and then lifting it again as a point mass in the $c$-direction, using the fact that $\mu$ has mass $1$; see [MWE19], Theorem 1). Such apparent flexibility is a consequence of the fact that $\mu$, the underlying object parametrising the integral representation (3), is in fact a lifted version of a more 'fundamental' object $\nu\in\mathcal{M}(\mathbb{S}^d)$, the space of signed Radon measures over the unit $d$-dimensional sphere $\mathbb{S}^d\subset\mathbb{R}^{d+1}$. The measures $\mu$ and $\nu$ are related via the projection
$$\langle\varphi,\nu\rangle \;=\; \int_{\Theta} c\,\|(a,b)\|\;\varphi\!\Big(\frac{(a,b)}{\|(a,b)\|}\Big)\,d\mu(\theta) \qquad (4)$$
for all continuous test functions $\varphi:\mathbb{S}^d\to\mathbb{R}$. One can verify [Chi19] that for the previous choice of potential $V(\theta)=|c|\,\|(a,b)\|$ one has $\|f\|_V=\min\big\{\|\nu\|_{TV}\,:\,f(x)=\int_{\mathbb{S}^d}\sigma(\langle\bar\theta,(x,1)\rangle)\,d\nu(\bar\theta)\big\}$, where $(x,1)\in\mathbb{R}^{d+1}$ denotes the input in homogeneous coordinates and $\|\nu\|_{TV}$ is the Total-Variation norm of $\nu$ [Bac17].
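The projection (4) is easy to instantiate for discrete lifted measures. The following illustrative sketch (our own naming; ReLU activation and the potential $V(\theta)=|c|\,\|(a,b)\|$) maps each lifted atom to a direction on $\mathbb{S}^d$ with a signed mass, and checks that both the represented function and the cost $\langle V,\mu\rangle=\|\nu\|_{TV}$ are preserved:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

def project_to_sphere(atoms, masses):
    """Project mu = sum_j masses[j] * delta_{(a_j, b_j, c_j)} onto a signed measure nu
    on the sphere: direction (a, b)/||(a, b)||, signed mass masses[j] * c_j * ||(a_j, b_j)||.
    This uses the 1-homogeneity of the ReLU."""
    dirs, signed = [], []
    for (a, b, c), w in zip(atoms, masses):
        ab = np.append(a, b)
        r = np.linalg.norm(ab)
        dirs.append(ab / r)
        signed.append(w * c * r)
    return dirs, signed

def f_lifted(x, atoms, masses):
    return sum(w * c * relu(a @ x + b) for (a, b, c), w in zip(atoms, masses))

def f_sphere(x, dirs, signed):
    x1 = np.append(x, 1.0)                      # homogeneous coordinates (x, 1)
    return sum(s * relu(th @ x1) for th, s in zip(dirs, signed))

rng = np.random.default_rng(1)
d, m = 4, 10
atoms = [(rng.normal(size=d), rng.normal(), rng.normal()) for _ in range(m)]
masses = [1.0 / m] * m
dirs, signed = project_to_sphere(atoms, masses)

x = rng.normal(size=d)
assert np.isclose(f_lifted(x, atoms, masses), f_sphere(x, dirs, signed))
# <V, mu> with V(theta) = |c| ||(a,b)|| equals the total-variation norm of nu.
cost_lifted = sum(w * abs(c) * np.linalg.norm(np.append(a, b)) for (a, b, c), w in zip(atoms, masses))
assert np.isclose(cost_lifted, sum(abs(s) for s in signed))
```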
This variation-norm space $\mathcal{F}_1$ contains any RKHS whose kernel is generated as an expectation over features $\sigma(\langle\bar\theta,(x,1)\rangle)$ with a base measure over $\mathbb{S}^d$, but it provides crucial advantages over these RKHS at approximating certain non-smooth, high-dimensional functions having some 'hidden' low-dimensional structure [Bac17]. This motivates the study of overparametrised shallow networks with the $1/m$ scaling as in (1), as opposed to the NTK scaling $1/\sqrt{m}$ [JGH18].
As we will see in Section 3, other spaces arise naturally when training overparametrised neural networks with noisy dynamics, corresponding to different choices of potential function $V$. In the following, we are going to focus on the ReLU case where $\sigma(t)=\max(0,t)$.
2.2 Empirical Risk Minimization and Representer Theorem
Given samples $\{(x_i,y_i)\}_{i\le n}\subset\mathbb{R}^d\times\mathbb{R}$ for a regression task, learning in $\mathcal{F}_V$ in the interpolant regime (corresponding to overparametrised models) amounts to the following problem
$$\min_{f\in\mathcal{F}_V}\;\|f\|_V \quad \text{s.t.} \quad f(x_i)=y_i,\;\; i=1,\dots,n. \qquad (5)$$
This problem can be equivalently expressed as an optimization problem in $\mathcal{P}(\Theta)$:
$$\min_{\mu\in\mathcal{P}(\Theta)}\;\langle V,\mu\rangle \quad \text{s.t.} \quad \langle\Phi(x_i,\cdot),\mu\rangle = y_i,\;\; i=1,\dots,n. \qquad (6)$$
Even though $\mathcal{F}_V$ is not a RKHS, its convex structure also provides a Representer Theorem [Zuh48, FJ75, Bac17, BCC+19]: the extreme points of the solution set of (6) consist of point masses of support at most $n$. In other words, there exist minimisers of the variation norm that require at most $n$ neurons to interpolate the data. However, in contrast with the RKHS representer theorem, these neurons are not explicit in terms of the datapoints, and cannot be directly characterised from (6), since this problem generally lacks unicity.
2.3 Training Overparametrised Neural Networks and Wasserstein Gradient Flows
Consider the penalised objective over measures, for a regularisation parameter $\lambda>0$:
$$F_\lambda(\mu) := \frac{1}{2n}\sum_{i=1}^{n}\big(y_i - \langle\Phi(x_i,\cdot),\mu\rangle\big)^2 + \lambda\,\langle V,\mu\rangle. \qquad (7)$$
Notice that for empirical measures $\mu_m$ corresponding to an $m$-width shallow network, the loss $F_\lambda(\mu_m)$ is precisely the loss
$$L(\theta_1,\dots,\theta_m) := \frac{1}{2n}\sum_{i=1}^{n}\Big(y_i - \frac{1}{m}\sum_{j=1}^{m}\Phi(x_i,\theta_j)\Big)^2 + \frac{\lambda}{m}\sum_{j=1}^{m} V(\theta_j) \qquad (8)$$
with respect to the parameters $(\theta_1,\dots,\theta_m)$. In that case, the regularisation term corresponds to weight decay for $V(\theta)=\|\theta\|^2$, and to the so-called path norm [NTS15] for $V(\theta)=|c|\,\|(a,b)\|$. As also argued in [NTS14], these two regularisation terms capture the same implicit bias in the functional space.
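For the 2-homogeneous ReLU, this equivalence can be seen by optimising over the rescaling symmetry $(a,b,c)\mapsto(\lambda a,\lambda b,c/\lambda)$, which leaves each neuron unchanged for every $\lambda>0$:
$$\min_{\lambda>0}\Big\{\frac{c^2}{\lambda^2}+\lambda^2\,\|(a,b)\|^2\Big\} \;=\; 2\,|c|\,\|(a,b)\|, \qquad \text{attained at } \lambda^2=\frac{|c|}{\|(a,b)\|},$$
so minimising the weight-decay penalty along each invariant orbit recovers (twice) the potential $V(\theta)=|c|\,\|(a,b)\|$.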
Performing gradient descent on (8) with respect to the parameters $(\theta_1,\dots,\theta_m)$ induces gradient dynamics on the functional $F_\lambda$ over $\mathcal{P}(\Theta)$, with respect to the Wasserstein metric [MMN18, CB18b, RVE18, SS18], thus mapping a Euclidean non-convex problem (8) to a non-Euclidean convex one (7). The gradient dynamics of overparametrised neural networks in the infinite-width limit can thus be analysed as the mean-field limit of such Wasserstein gradient flows, leading to global convergence for problems such as (7) under appropriate homogeneity assumptions on $\Phi$ [CB18b, Chi19]. However, such convergence results are qualitative and only hold in the limit $m\to\infty$, without extra structure on the minimisers of $F_\lambda$ such as being point masses; this is precisely the structure that our main results from Section 4 provide.
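For concreteness, the particle viewpoint can be sketched in a few lines of Python (our own illustrative code, with the weight-decay potential and hypothetical hyperparameters): plain gradient descent on the finite-width penalised loss (8) is the $m$-particle discretisation whose $m\to\infty$ limit is the Wasserstein gradient flow on (7).

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)
drelu = lambda z: (z > 0).astype(float)

def grad_step(A, B, C, X, y, lam, lr):
    """One gradient-descent step on the finite-width penalised loss (8),
    with weight-decay potential V(theta) = ||theta||^2 and mean-field scaling 1/m.
    A: (m, d) inner weights, B: (m,) biases, C: (m,) outer weights."""
    m, n = A.shape[0], X.shape[0]
    Z = X @ A.T + B[None, :]                    # (n, m) pre-activations
    res = relu(Z) @ C / m - y                   # residuals f(x_i) - y_i
    S = drelu(Z) * res[:, None]                 # (n, m) active residuals
    gC = relu(Z).T @ res / (n * m) + 2 * lam / m * C
    gA = C[:, None] * (S.T @ X) / (n * m) + 2 * lam / m * A
    gB = C * S.sum(axis=0) / (n * m) + 2 * lam / m * B
    return A - lr * gA, B - lr * gB, C - lr * gC

# Each row of (A, B, C) is one "particle" theta_j = (a_j, b_j, c_j).
rng = np.random.default_rng(2)
d, n, m = 3, 50, 200
X = rng.normal(size=(n, d)); y = np.sin(X[:, 0])
A, B, C = rng.normal(size=(m, d)), rng.normal(size=m), rng.normal(size=m)
for _ in range(500):
    A, B, C = grad_step(A, B, C, X, y, lam=1e-3, lr=0.5)
```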
2.4 Implicit Regularization
A popular alternative to the explicit variation-norm regularisation is to rely on the particular choice of optimization scheme to provide an implicit regularisation effect. While gradient descent enjoys powerful implicit regularisation for logistic regression [SHN+18, CB20] by favoring max-margin solutions, and for linear least-squares regression by choosing solutions of minimal norm, the situation for non-linear least-squares is less clear.
Indeed, in those cases, the dynamics do not typically forget the initial conditions, making them very sensitive to initialisation and producing substantial qualitative changes, as illustrated by the lazy dynamics [CB18a]. A powerful alternative is thus given by algorithms that inject noise into the dynamics, such as SGD or Langevin Dynamics. The latter was studied in the framework of overparametrised shallow networks in [MMN18]. This dynamics leads to an associated cost of the form:
$$F_\beta(\mu) := F_\lambda(\mu) + \beta^{-1} H(\mu), \qquad (9)$$
where $H(\mu)$ is the relative entropy of $\mu$. Minimizers of this functional have a smooth density that solves the associated elliptic Euler-Lagrange equation. They are therefore not sparse, due to the entropic regularisation.
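One way to see this, informally and under standard regularity assumptions from the mean-field literature [MMN18]: the first-order optimality condition of (9) forces a Gibbs-type density,
$$\mu^\star(d\theta)\;\propto\;\exp\!\Big(-\beta\,\frac{\delta F_\lambda}{\delta\mu}(\theta;\mu^\star)\Big)\,d\theta,$$
which is everywhere positive and hence never sparse.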
3 Implicit Regularisation by Label Noise
The goal of this section is to establish a link between the type of noise-induced regularisation phenomenon established in [BGVV19, Theorem 2] and the variation-norm spaces defined in Section 2.1.
3.1 Results in Blanc et al.
Blanc et al. [BGVV19] propose Algorithm 1 for training a neural network on a dataset $\{(x_i,y_i)\}_{i\le n}$. The algorithm is an SGD-type scheme in which a small, independent perturbation of the label is freshly sampled at each gradient step.
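For concreteness, here is a minimal sketch of such a label-noise scheme (our own illustrative rendering, with hypothetical step size and noise level, not a verbatim transcription of Algorithm 1): at every step a fresh $\pm\delta$ perturbation is added to the sampled label before taking the gradient step.

```python
import numpy as np

def label_noise_sgd(grad_loss, theta, X, y, steps, lr=1e-2, delta=0.1, seed=0):
    """SGD where each sampled label is perturbed by fresh +/- delta noise.
    grad_loss(theta, x, y) must return the gradient of the per-example squared loss."""
    rng = np.random.default_rng(seed)
    n = len(y)
    for _ in range(steps):
        i = rng.integers(n)                        # sample one datapoint
        eps = delta * rng.choice([-1.0, 1.0])      # fresh label perturbation
        theta = theta - lr * grad_loss(theta, X[i], y[i] + eps)
    return theta

# Example with a single ReLU neuron theta = (a, b, c) and squared loss.
relu = lambda z: np.maximum(0.0, z)
def grad_loss(theta, x, y):
    a, b, c = theta[:-2], theta[-2], theta[-1]
    z = a @ x + b
    r = c * relu(z) - y                            # residual
    return np.concatenate([r * c * (z > 0) * x, [r * c * (z > 0), r * relu(z)]])

rng = np.random.default_rng(1)
d = 3
X, y = rng.normal(size=(20, d)), rng.normal(size=20)
theta = label_noise_sgd(grad_loss, rng.normal(size=d + 2), X, y, steps=2000)
```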
Denote by $X=(x_1,\dots,x_n)$ the input data and let $\hat{\mathbb{E}}_x$ denote the empirical average over the observed datapoints. Let $f_\theta$ denote the network (1) with parameters $\theta=(\theta_1,\dots,\theta_m)$. The main result therein [BGVV19, Theorem 2] states that, for large times (depending, amongst others, on the noise size and the number of parameters), the solution concentrates at local minimizers (amongst the zero-error solutions) of the effective regularization loss:
$$R(\theta) := \hat{\mathbb{E}}_x\,\big\|\nabla_\theta f_\theta(x)\big\|^2. \qquad (10)$$
The RHS of (10) is now an integral over the counting measure of the parameters, and therefore, up to a constant depending only on $m$,
$$R(\theta) \;\propto\; \big\langle V_R,\, \mu_m\big\rangle, \qquad V_R(\theta) := \hat{\mathbb{E}}_x\,\big\|\nabla_\theta \Phi(x,\theta)\big\|^2. \qquad (11)$$
3.2 An equivalent formulation in the ReLU case
As we have seen in the previous section, the training algorithm with label noise minimizes $\langle V_R,\mu\rangle$. In the ReLU case we can explicitly write:
$$V_R(\theta) = \hat{\mathbb{E}}_x\Big[\,\sigma(\langle a,x\rangle+b)^2 + c^2\,\mathbf{1}_{\{\langle a,x\rangle+b>0\}}\big(\|x\|^2+1\big)\Big]. \qquad (12)$$
The key realization is that, in order to minimize the value of (11), the algorithm will exploit the degeneracy $c\,\sigma(\langle a,x\rangle+b) = (c/\lambda)\,\sigma(\lambda\langle a,x\rangle+\lambda b)$ for $\lambda>0$. In other words, we may substitute $V_R$ for its infimum $\tilde V$ over this rescaling orbit without effectively changing the problem. A straightforward computation shows that $\tilde V(\theta)$ is equal to:
$$\tilde V(\theta) = 2\,|c|\,W(a,b), \qquad W(a,b) := \Big(\hat{\mathbb{E}}_x\big[\sigma(\langle a,x\rangle+b)^2\big]\Big)^{1/2}\Big(\hat{\mathbb{E}}_x\big[\mathbf{1}_{\{\langle a,x\rangle+b>0\}}(\|x\|^2+1)\big]\Big)^{1/2}. \qquad (13)$$
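Explicitly, writing $r(a,b)^2 := \hat{\mathbb{E}}_x[\sigma(\langle a,x\rangle+b)^2]$ and $s(a,b)^2 := \hat{\mathbb{E}}_x[\mathbf{1}_{\{\langle a,x\rangle+b>0\}}(\|x\|^2+1)]$, the rescaling $(a,b,c)\mapsto(\lambda a,\lambda b,c/\lambda)$ turns (12) into $\lambda^2 r(a,b)^2 + (c^2/\lambda^2)\,s(a,b)^2$, and a one-line AM-GM computation gives
$$\min_{\lambda>0}\Big\{\lambda^2\, r(a,b)^2 + \frac{c^2}{\lambda^2}\, s(a,b)^2\Big\} \;=\; 2\,|c|\, r(a,b)\, s(a,b), \qquad \text{attained at } \lambda^2 = \frac{|c|\,s(a,b)}{r(a,b)},$$
which is exactly (13).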
There are two crucial properties of the potential function $\tilde V$. First, it is invariant under the degeneracy described above: if we take a measure $\mu$ and modify it with the rescaling above to obtain $\tilde\mu$, both of them will have the same loss. The second is that the weight $W$ appearing in (13) is convex, and, as long as the neurons can 'see' enough of the dataset, effectively strictly convex. To make this precise, we start with two definitions:
Definition 3.1.
A neuron with parameters $(a,b)$ is at the bulk of the dataset whenever the map $(a,b)\mapsto\big(\sigma(\langle a,x_i\rangle+b)\big)_{i=1}^{n}$ is locally injective.
Definition 3.2.
A weight $W$ that is 1-homogeneous in $(a,b)$ is effectively strictly convex if whenever
$$W\big((a,b)+(a',b')\big) = W(a,b) + W(a',b') \qquad (14)$$
holds for some parameters $(a,b)$ and $(a',b')$, the parameters $(a,b)$ and $(a',b')$ are proportional to each other.
With these definitions in hand, we can state the main result of this section, a convexity result for the regularization loss (see the Appendix for a proof):
Proposition 3.3.
For any dataset $\{x_i\}_{i\le n}$, the associated weight $W$ in (13) is convex in $(a,b)$. Moreover, if the neuron with parameters $(a,b)$ is at the bulk of the dataset, the weight is effectively strictly convex in a neighbourhood of $(a,b)$.
4 Minimizers to the convex-type TV have sparse support
In this section we will first focus on understanding the solutions of the regularised interpolation problem (6), for potential functions of the form $V(\theta)=|c|\,W(a,b)$ for a given regularization weight $W$. As discussed so far, this problem is motivated by the case $W(a,b)=\|(a,b)\|_2$, corresponding to variation-norm learning [Bac17] (which, as seen in [OWSS19], is also related to a Tikhonov-type regularization). Another example is the implicit regularization scheme discussed in Section 3.
By the rescaling trick (see the discussion in Section 3.2) it suffices to consider the case where $W$ is homogeneous of degree $1$, in the sense that for $\lambda>0$ we have $W(\lambda a,\lambda b)=\lambda\,W(a,b)$. These weights can never be strictly convex (a desirable property when looking for uniqueness or sparsity of limits), because 1-homogeneous functions are never strictly convex: $W\big(\tfrac12(a,b)+\tfrac12(2a,2b)\big) = \tfrac32 W(a,b) = \tfrac12 W(a,b) + \tfrac12 W(2a,2b)$. This obstruction, however, appears only from the parameter perspective and not from the neuron perspective, because proportional parameters also represent proportional neurons. This is a motivation for the effective convexity definition in Section 3.2.
Proposition 3.3 in Section 3.2 shows that the implicit regularization term that appeared in [BGVV19] is also effectively strictly convex. The fact that the TV weight $W(a,b)=\|(a,b)\|_2$ is effectively strictly convex is a direct consequence of Minkowski's inequality. The definition of effective strict convexity motivates an analogous definition of sparsity for lifted measures:
Definition 4.1.
A lifted measure $\mu\in\mathcal{P}(\Theta)$ is effectively supported on $k$ points if the support of $\mu$ is contained in the union of at most $k$ half-planes of the form:
$$H_{\bar\theta} := \big\{(\lambda\,\bar\theta,\,c)\;:\;\lambda\ge 0,\; c\in\mathbb{R}\big\}, \qquad \bar\theta=(a,b)\in\mathbb{S}^{d}. \qquad (15)$$
If a lifted measure is effectively supported on $k$ points, then (by construction) there is a lifted measure supported on exactly $k$ points that represents the same function. With this definition in hand, we are ready to state the main result of this section (see Figure 1 for an illustration motivating the theorem):
Theorem 4.2.
Let $W$ be a 1-homogeneous, effectively strictly convex regularization weight and $V(\theta)=|c|\,W(a,b)$ the associated potential. Then the solutions of the problem (6) are all effectively supported on a finite number of points. The number of points in the solution is bounded by the number of cells of the hyperplane arrangement induced by the dataset (defined below).
The proof of the theorem has two steps. In the first step we split the parameter space into cells where the parameters interact in a particularly nice way. This cell decomposition is reminiscent of the "open sectors" described in [MBG18] (the cells defined in this work are the closures of the open sectors defined by Maennel et al.). The second step is to use the effective strict convexity to show that, on each of those cells, optimizing measures are effectively supported on at most one point.
Cell decomposition of the parameter space
Fix a dataset $X=(x_1,\dots,x_n)$, with $x_i\in\mathbb{R}^d$. The dataset splits the parameter space $\mathbb{R}^{d+1}$ of $(a,b)$ into a finite number of convex cones that we will call cells, such that two neurons belong to the same cell iff they are active on the same datapoints. We will consider closed cells, and therefore there will be overlap at the boundary of the cells. By identifying each datapoint $x_i$ with its dual hyperplane $\{(a,b):\langle a,x_i\rangle+b=0\}$, there is a one-to-one correspondence between these cells and the regions of the resulting hyperplane arrangement; their number $N_{\mathrm{cells}}$ is thus $O(n^{d+1})$ for generic datasets.
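As an illustration (a small self-contained Python sketch with our own naming), the cell of a neuron is determined by its activation pattern on the dataset, so neurons can be grouped into cells simply by hashing that pattern:

```python
import numpy as np

def activation_pattern(ab, X):
    """Which datapoints a neuron (a, b) activates on. Two neurons lie in the same
    cell iff their patterns agree (boundary ties are ignored in this sketch)."""
    a, b = ab[:-1], ab[-1]
    return tuple((X @ a + b > 0).astype(int))

def group_by_cell(neurons, X):
    cells = {}
    for ab in neurons:
        cells.setdefault(activation_pattern(ab, X), []).append(ab)
    return cells

rng = np.random.default_rng(3)
d, n, m = 2, 6, 100
X = rng.normal(size=(n, d))
neurons = [rng.normal(size=d + 1) for _ in range(m)]       # stacked (a, b)
cells = group_by_cell(neurons, X)
print(f"{m} neurons fall into {len(cells)} non-empty cells of the "
      f"arrangement of {n} dual hyperplanes in R^{d + 1}")
```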
A key property of the cells is that the map $(a,b)\mapsto\sigma(\langle a,x_i\rangle+b)$ is linear when restricted to a single cell. From the convexity, the cone property, and the restricted linearity property, one can extract the following result:
Lemma 4.3 (Discrete version).
If two parameters $(a,b)$ and $(a',b')$ belong to the same cell, then for all the datapoints $x_i$ it holds that
$$\sigma\big(\langle a+a',x_i\rangle+b+b'\big) = \sigma\big(\langle a,x_i\rangle+b\big) + \sigma\big(\langle a',x_i\rangle+b'\big). \qquad (16)$$
Using induction, plus standard limiting arguments this lemma can be upgraded to the measure-valued case:
Lemma 4.3 (Continuous version).
If two measures $\mu_1,\mu_2$ are supported on the same cell $\mathcal{C}$, then
$$\langle\Phi(x_i,\cdot),\mu_1\rangle = \langle\Phi(x_i,\cdot),\mu_2\rangle \qquad (17)$$
for all datapoints $x_i$ whenever the moment estimates hold:
$$\int_{\mathcal{C}} c\,(a,b)\;d\mu_1(\theta) \;=\; \int_{\mathcal{C}} c\,(a,b)\;d\mu_2(\theta). \qquad (18)$$
Effective convexity - measure case.
Now we want to show that the effective convexity concentrates the mass on each cell. We will first study measures that are already concentrated on a single cell, and then patch the result. A first step is that effective convexity for $W$ induces a similar notion of effective convexity for $V$:
Lemma 4.4 (Lifting of the effective convexity).
Let $W$ be a 1-homogeneous, effectively strictly convex regularization weight. Let $\theta_1=(a_1,b_1,c_1)$ and $\theta_2=(a_2,b_2,c_2)$ in $\Theta$ with $c_1,c_2>0$, and let $V(\theta)=|c|\,W(a,b)$. Then
$$V(\theta_1)+V(\theta_2) \;=\; W\big(c_1(a_1,b_1)+c_2(a_2,b_2)\big) \qquad (19)$$
if and only if there exist constants $\lambda_1,\lambda_2>0$ such that $\lambda_1(a_1,b_1)=\lambda_2(a_2,b_2)$.
In other words, the only scenario where there is no strict convexity is when testing on parameters that represent neurons proportional to each other, since the condition $\lambda_1(a_1,b_1)=\lambda_2(a_2,b_2)$ is equivalent to saying that the two ReLU neurons share the same hyperplane.
Strictly convex functions have at most one minimizer, and, in the same way we bootstrapped the discrete, single-variable information into its continuous counterpart in Lemma 4.3, we can see that:
Lemma 4.5.
Let $W$ be a 1-homogeneous, effectively strictly convex regularization weight, and fix a data cell $\mathcal{C}$. Amongst all the measures which are supported on $\mathcal{C}$ and share the same moment estimates (eq. (18)), the minimizers of $\langle V,\cdot\rangle$ are effectively supported on one point.
Proof of Theorem 4.2.
We can write our measure as $\mu=\sum_k\mu_k$, where each $\mu_k$ is supported on the closure of a cell. By the previous results, on each cell the associated measure has to be effectively supported on at most one point. ∎
Figure 1.a: Data-space perspective: each ReLU separates the data space into two half-spaces, the active half-space and the non-active one. Given a subset $S$ of datapoints (in orange), there is at most one ReLU active at all points of $S$ but non-active at all points outside $S$.
Figure 1.b: Parameter-space perspective: each datapoint induces a hyperplane in the parameter space, namely the set of ReLU parameters that have the datapoint at the hinge. There is at most one neuron (in red) in each cell generated by these hyperplanes (see 1.c).
Figure 1.c: Single-cell optimizers: if a cell contains more than one neuron, substituting all the neurons in the cell by a single neuron at their center of mass does not affect the predictions on the data, but yields a gain on the regularizer.
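The collapse of Figure 1.c can also be checked numerically. The sketch below (our own illustration; it assumes all collapsed neurons share the same activation pattern and have positive outer weights) replaces the neurons of a cell by a single neuron carrying their aggregate mass in the $(c\,a,\,c\,b)$ coordinates, leaving the predictions on the data unchanged while not increasing the TV cost:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

def predict(X, neurons):
    """neurons: list of (a, b, c); returns sum_j c_j * relu(<a_j, x_i> + b_j)."""
    return sum(c * relu(X @ a + b) for a, b, c in neurons)

def collapse_cell(neurons):
    """Merge into one neuron with the same first moment sum_j c_j * (a_j, b_j)."""
    v = sum(c * np.append(a, b) for a, b, c in neurons)
    return [(v[:-1], v[-1], 1.0)]

def tv_cost(neurons):
    return sum(abs(c) * np.linalg.norm(np.append(a, b)) for a, b, c in neurons)

rng = np.random.default_rng(4)
d, n = 3, 8
X = rng.normal(size=(n, d))
a0, b0 = rng.normal(size=d), rng.normal()
# Several neurons in the same cell: small perturbations of (a0, b0), with c_j > 0.
cell = [(a0 + 0.01 * rng.normal(size=d), b0 + 0.01 * rng.normal(), rng.uniform(0.5, 1.5))
        for _ in range(5)]
merged = collapse_cell(cell)

same_pattern = all(np.array_equal(X @ a + b > 0, X @ a0 + b0 > 0) for a, b, _ in cell + merged)
if same_pattern:   # only then is Phi linear on the cell and the collapse exact
    assert np.allclose(predict(X, cell), predict(X, merged))
    assert tv_cost(merged) <= tv_cost(cell) + 1e-12
```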
Recovering Discrete Minimisers by Projecting into the Sphere:
Since the constrained optimization problem (6) is defined over probability measures in the lifted space $\mathcal{P}(\Theta)$, there is a price to pay to recover discrete minimizers, namely the notion of 'effective' support introduced in Definition 4.1. In the TV case, this aspect disappears as soon as one looks at the equivalent problem in the space $\mathcal{M}(\mathbb{S}^d)$,
$$\min_{\nu\in\mathcal{M}(\mathbb{S}^d)} \|\nu\|_{TV} \quad \text{s.t.} \quad \int_{\mathbb{S}^d}\sigma\big(\langle\bar\theta,(x_i,1)\rangle\big)\,d\nu(\bar\theta) = y_i,\; i=1,\dots,n, \qquad (20)$$
as shown in the following corollary:
Corollary 4.6.
Let $W(a,b)=\|(a,b)\|_2$. Then every minimiser $\nu^\star$ of the projected problem (20) is supported on a finite number of points, with at most one point per cell of the dual hyperplane arrangement.
Unicity of Supports Across all minimisers:
We have so far proven that solutions to the constrained minimization are effectively supported on at most a single point per cell. However, a strictly stronger result holds:
Theorem 4.7.
Let $\mathcal{C}$ be a cell. Let $\mu_1,\mu_2$ be zero-error minimizers of (6) with non-zero support on $\mathcal{C}$. Then both $\mu_1$ and $\mu_2$ are effectively supported on the same single point of $\mathcal{C}$.
Proof sketch.
By contradiction, let $\mu_1,\mu_2$ have different (effective) supports on the same cell. By effective strict convexity, the average $\tfrac12(\mu_1+\mu_2)$ would also be a minimizer, supported on two points of $\mathcal{C}$, which would contradict Lemma 4.5. ∎
Penalized minimization problem:
In a parallel fashion to the problem presented in this section, one can study the penalized version (7). All the sparsity results of this section (Theorems 4.2, 4.7 and Corollary 4.6 in particular, but also suitable variations of the intermediate lemmas) transfer to the penalised case for arbitrary $\lambda>0$, as explained in the appendix:
Corollary 4.8.
Under the same assumptions of Theorem 4.7 and for any $\lambda>0$, all minimisers of (7) are effectively supported on a finite number of points. For the corresponding problem expressed in the projected measure $\nu\in\mathcal{M}(\mathbb{S}^d)$, all minimisers are supported on a finite number of points.
Implicit versus TV regularisation:
We can make the following observations on the relationship between the two forms of regularisation:
• The implicit regularization is translation invariant, while TV regularization is not. One could remove the bias dependence of the TV regulariser (and consider $W(a,b)=\|a\|$, but in this case Theorems 4.2 and 4.7 become false). This is not a drawback for all applications (for example in image processing, where a translation of the input corresponds to a color change).
• Implicit regularization is (in principle) more costly to compute, especially whenever the number of datapoints is large.
From discrete solutions to sparse solutions:
Corollary 4.6 and Theorem 4.7 allow us to turn the original infinite-dimensional problem into an $N_{\mathrm{cells}}$-dimensional linear problem with $n$ linear constraints, where the only free parameters are the amounts of (signed) mass given to each cell. Recall that $N_{\mathrm{cells}}$ corresponds to the number of cells of the hyperplane arrangement determined by the datapoints in the projective space.
Focusing on the TV case, let $\nu^\star=\sum_k c^\star_k\,\delta_{\bar\theta_k}$ be a minimiser of (20). Observe that, as a consequence of Theorem 4.7, the locations $\{\bar\theta_k\}_{k\le N_{\mathrm{cells}}}$ are shared across every minimiser (but unknown a priori). Its coefficients are thus solutions of the linear program
$$\min_{c\in\mathbb{R}^{N_{\mathrm{cells}}}} \|c\|_1 \quad \text{s.t.} \quad Ac=y, \qquad (21)$$
where $A\in\mathbb{R}^{n\times N_{\mathrm{cells}}}$ is the matrix $A_{i,k}=\sigma(\langle\bar\theta_k,(x_i,1)\rangle)$. From the Representer theorem, we know that there exists a solution of (21) of support at most $n$, and in the generic case this solution is the sparsest one, i.e. it solves the $\ell_0$-minimisation counterpart of (21).
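The $\ell_1$ problem (21) is a standard linear program once one splits $c=c^{+}-c^{-}$ with $c^{\pm}\ge 0$. The sketch below (illustrative only; the random matrix stands in for the cell-neuron features $A_{i,k}=\sigma(\langle\bar\theta_k,(x_i,1)\rangle)$) solves it with scipy:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||c||_1 s.t. A c = y by writing c = cp - cm with cp, cm >= 0."""
    n, N = A.shape
    cost = np.ones(2 * N)                         # sum(cp) + sum(cm) = ||c||_1 at optimum
    res = linprog(cost, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(5)
n, N = 10, 40                                      # n data constraints, N cells
A = rng.normal(size=(n, N))
y = A @ (rng.normal(size=N) * (rng.random(N) < 0.1))   # target generated by a sparse c
c_hat = l1_min(A, y)
print("support size:", int(np.sum(np.abs(c_hat) > 1e-8)), "(representer bound:", n, ")")
```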
A natural question is thus to understand under what conditions on the data the $\ell_1$ and $\ell_0$ problems share the same minimisers. This is an instance of Compressed Sensing, which could be analysed by studying properties of the sensing matrix $A$, such as the Restricted Isometry Property (leveraging random datasets) or the Nullspace Property. This question has also been recently studied in [EP20]. It is out of the scope of the present work and is left for future research.
Consequences for Gradient-Descent Optimization:
With abuse of notation, let
$$F_\lambda(\nu) := \frac{1}{2n}\sum_{i=1}^{n}\Big(y_i - \int_{\mathbb{S}^d}\sigma\big(\langle\bar\theta,(x_i,1)\rangle\big)\,d\nu(\bar\theta)\Big)^2 + \lambda\,\|\nu\|_{TV} \qquad (22)$$
be the penalized form of the regularised problem, expressed in terms of the Radon measure $\nu\in\mathcal{M}(\mathbb{S}^d)$. The ability of gradient descent to minimise such problems was recently studied in [Chi19], leading to a positive result under certain assumptions, notably that minimisers are discrete and 'feel' a positive curvature around their support. The local convexity analysis that resulted in the discrete characterisation of solutions for this penalised problem (Lemma 4.5) brings precisely this geometric property around minimisers, suggesting a favorable case for gradient descent on sufficiently overparametrised (but finite) shallow ReLU networks. However, we note that (i) our ReLU setup does not have the appropriate smoothness to directly apply the results from [Chi19], and (ii) the finite-particle guarantees would still be at best exponential in the dimension. We leave these considerations for future work.
5 Conclusions
In this work, we have studied the structure of the minimisers in overparametrised shallow ReLU networks under a broad umbrella of implicit and explicit regularisation strategies, including the popular weight decay and noisy gradient dynamics through label noise. Our main result leverages the underlying convexity generated by the regularisation to show that the resulting learning objectives only admit sparse minimisers, irrespective of the amount of overparametrisation.
This fact effectively maps the task of learning in variation-norm ReLU spaces (which are infinite-dimensional) into a finite-dimensional linear program. However, such a linear program is determined by the hyperplane arrangement generated by the finite dataset after dualization, and is thus of size exponential in the dimension. As such, it does not directly solve the computational barrier present in the mean-field formulations of sparse measure optimization [Chi19], which is unavoidable in certain data regimes under the general SQ-model [GGJ+20].
As future work, it would be interesting to further explore the connections with the recent ‘memorization’ constructions of [BELM20], and to study favorable conditions on the datapoints that could break the exponential size of the linear program.
References
- [AS17] Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667, 2017.
- [Bac17] Francis Bach. Breaking the curse of dimensionality with convex neural networks. The Journal of Machine Learning Research, 18(1):629–681, 2017.
- [Bar97] Peter L Bartlett. For valid generalization the size of the weights is more important than the size of the network. In Advances in neural information processing systems, pages 134–140, 1997.
- [BCC+19] Claire Boyer, Antonin Chambolle, Yohann De Castro, Vincent Duval, Frédéric De Gournay, and Pierre Weiss. On representer theorems and convex regularization. SIAM Journal on Optimization, 29(2):1260–1281, 2019.
- [BELM20] Sébastien Bubeck, Ronen Eldan, Yin Tat Lee, and Dan Mikulincer. Network size and weights size for memorization with two-layers neural networks. arXiv preprint arXiv:2006.02855, 2020.
- [BGVV19] Guy Blanc, Neha Gupta, Gregory Valiant, and Paul Valiant. Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process. arXiv:1904.09080 [cs, stat], April 2019. arXiv: 1904.09080.
- [BHMM19] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine learning practice and the bias-variance trade-off. arXiv:1812.11118 [cs, stat], September 2019. arXiv: 1812.11118.
- [BHX19] Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. arXiv:1903.07571 [cs, stat], March 2019. arXiv: 1903.07571.
- [BO97] Siegfried Bös and Manfred Opper. Dynamics of training. In Advances in Neural Information Processing Systems, pages 141–147, 1997.
- [CB18a] Lenaic Chizat and Francis Bach. A note on lazy training in supervised differentiable programming. arXiv preprint arXiv:1812.07956, 2018.
- [CB18b] Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. In Advances in Neural Information Processing Systems, pages 3036–3046, 2018.
- [CB20] Lénaïc Chizat and Francis Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. arXiv preprint arXiv:2002.04486, 2020.
- [Chi19] Lenaic Chizat. Sparse optimization on measures with over-parameterized gradient descent. arXiv preprint arXiv:1907.10300, 2019.
- [EP20] Tolga Ergen and Mert Pilanci. Convex geometry and duality of over-parameterized neural networks. arXiv preprint arXiv:2002.11219, 2020.
- [FJ75] SD Fisher and Joseph W Jerome. Spline solutions to l1 extremal problems in one and several variables. Journal of Approximation Theory, 13(1):73–83, 1975.
- [GGJ+20] S. Goel, A. Gollakota, Z. Jin, S. Karmalkar, and A. Klivans. Superpolynomial lower bounds for learning one-layer neural networks using gradient descent. ICML, 2020.
- [GWB+17] Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems, pages 6151–6159, 2017.
- [JGH18] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pages 8571–8580, 2018.
- [LMZ17] Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. arXiv preprint arXiv:1712.09203, 2017.
- [MBG18] Hartmut Maennel, Olivier Bousquet, and Sylvain Gelly. Gradient Descent Quantizes ReLU Network Features. arXiv:1803.08367 [cs, stat], March 2018. arXiv: 1803.08367.
- [MM19] Song Mei and Andrea Montanari. The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355, 2019.
- [MMN18] Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018.
- [MWE19] Chao Ma, Lei Wu, and Weinan E. Barron spaces and the compositional function spaces for neural network models. arXiv preprint arXiv:1906.08039, 2019.
- [NMB+18] Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, and Ioannis Mitliagkas. A modern take on the bias-variance tradeoff in neural networks. arXiv preprint arXiv:1810.08591, 2018.
- [NTS14] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014.
- [NTS15] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-Based Capacity Control in Neural Networks. arXiv:1503.00036 [cs, stat], April 2015. arXiv: 1503.00036.
- [OWSS19] Greg Ongie, Rebecca Willett, Daniel Soudry, and Nathan Srebro. A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case. arXiv:1910.01635 [cs, stat], October 2019. arXiv: 1910.01635.
- [RVE18] Grant Rotskoff and Eric Vanden-Eijnden. Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks. In Advances in Neural Information Processing Systems, pages 7146–7155, 2018.
- [SESS19] Pedro Savarese, Itay Evron, Daniel Soudry, and Nathan Srebro. How do infinite width bounded norm networks look in function space? In Conference on Learning Theory, pages 2667–2690, 2019.
- [SHN+18] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018.
- [SS18] Justin Sirignano and Konstantinos Spiliopoulos. Dgm: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339–1364, 2018.
- [WTP+19] Francis Williams, Matthew Trager, Daniele Panozzo, Claudio Silva, Denis Zorin, and Joan Bruna. Gradient dynamics of shallow univariate relu networks. In Advances in Neural Information Processing Systems, pages 8376–8385, 2019.
- [Zuh48] S. Zuhovickii. Remarks on problems in approximation theory. Mat. Zbirnik KDU, 1948.
Appendix A Additional Proofs
Proof of Proposition 3.3.
Proof.
Convexity of the first factor of $W$ in (13), $W_r(a,b):=\big(\hat{\mathbb{E}}_x[\sigma(\langle a,x\rangle+b)^2]\big)^{1/2}$, follows from the convexity of the ReLU together with Minkowski's inequality: for $t\in[0,1]$,
$$W_r\big(t(a,b)+(1-t)(a',b')\big) \;\le\; \Big(\hat{\mathbb{E}}_x\big[\big(t\,\sigma(\langle a,x\rangle+b)+(1-t)\,\sigma(\langle a',x\rangle+b')\big)^2\big]\Big)^{1/2} \;\le\; t\,W_r(a,b)+(1-t)\,W_r(a',b'), \qquad (23)$$
where the first inequality follows from the convexity of the ReLU, and the second from Minkowski's inequality. (The second factor of $W$ in (13) is $0$-homogeneous in $(a,b)$ and constant on each activation cell, so it does not affect these convexity properties locally.) Strict convexity follows similarly: local injectivity implies that, unless $(a,b)$ is proportional to $(a',b')$, the vector $\big(\sigma(\langle a,x_i\rangle+b)\big)_{i\le n}$ is not proportional to the vector $\big(\sigma(\langle a',x_i\rangle+b')\big)_{i\le n}$. In that case, by the (sharp) Minkowski inequality, the second inequality has to be strict. ∎
Proof of Lemma 4.3 (Discrete version)
Proof.
Let $x_i$ be a datapoint. If $(a,b)$ and $(a',b')$ belong to the same cell, there are two options:
Both $\langle a,x_i\rangle+b\le 0$ and $\langle a',x_i\rangle+b'\le 0$. In this case, $\sigma(\langle a+a',x_i\rangle+b+b') = 0 = \sigma(\langle a,x_i\rangle+b)+\sigma(\langle a',x_i\rangle+b')$.
Both $\langle a,x_i\rangle+b\ge 0$ and $\langle a',x_i\rangle+b'\ge 0$. In this case, we have $\sigma(\langle a+a',x_i\rangle+b+b') = \langle a+a',x_i\rangle+b+b' = \sigma(\langle a,x_i\rangle+b)+\sigma(\langle a',x_i\rangle+b')$.
∎
Proof of Lemma 4.3 (Continuous version)
Proof.
Let $x_i$ be a datapoint. As in the discrete case, we have two options. In the trivial case, the datapoint is inactive on the cell, in the sense that $\sigma(\langle a,x_i\rangle+b)=0$ for all $(a,b)$ in the cell; in this case the equality is always true, since both sides vanish. In the non-zero (linear) case, all neurons in the cell are active. In this case we may omit the ReLU itself, and the equality we need to show is:
$$\int_{\mathcal{C}} c\,\big(\langle a,x_i\rangle+b\big)\,d\mu_1(\theta) \;=\; \int_{\mathcal{C}} c\,\big(\langle a,x_i\rangle+b\big)\,d\mu_2(\theta); \qquad (24)$$
by linearity of the integral this is equivalent to
$$\Big\langle \int_{\mathcal{C}} c\,(a,b)\,d\mu_1(\theta),\;(x_i,1)\Big\rangle \;=\; \Big\langle \int_{\mathcal{C}} c\,(a,b)\,d\mu_2(\theta),\;(x_i,1)\Big\rangle, \qquad (25)$$
and now the result follows directly from the hypothesis. ∎
Proof of Lemma 4.4
Lemma A.1.
Let $\theta_1=(a_1,b_1,c_1)$ and $\theta_2=(a_2,b_2,c_2)$ belong to the same cell and not represent the same neuron, and let $\mu:=\tfrac12(\delta_{\theta_1}+\delta_{\theta_2})$. Then there is a probability measure $\mu'$ concentrated at a single point such that the estimates (18) hold for $\mu$ and $\mu'$, but $\langle V,\mu'\rangle<\langle V,\mu\rangle$. (There is a typo in the statement of Lemma 4.4 in the main text in how the interpolation in the lifted space is performed; it has been addressed here.)
Proof.
Assume without loss of generality that $c_1$ and $c_2$ have the same (positive) sign, since otherwise we can transfer mass from one neuron to the other. Since $\theta_1$ and $\theta_2$ do not represent the same neuron, by the effective strict convexity of $W$ we have
$$t\,V(\theta_1)+(1-t)\,V(\theta_2) \;>\; W\big(t\,c_1(a_1,b_1)+(1-t)\,c_2(a_2,b_2)\big) \qquad (26)$$
for any $t\in(0,1)$.
Pick $t=\tfrac12$ and set $\mu':=\delta_{\theta'}$ with $\theta':=\big(\tfrac12(c_1 a_1+c_2 a_2),\,\tfrac12(c_1 b_1+c_2 b_2),\,1\big)$. Then we easily verify that $\mu$ and $\mu'$ verify the same moment estimates (18), and $\langle V,\mu'\rangle<\langle V,\mu\rangle$ by (26).
∎
Proof of Lemma 4.5
Proof.
Assume minimizers exist, as otherwise the result is vacuously true.
Given a minimizer $\mu$, there is a measure $\mu'$ that has the same mass and first moment as $\mu$, but is concentrated at a single point (the center of mass). By convexity, $\mu'$ has to be a minimizer of the loss amongst all measures with the same first-moment estimate (by Jensen), so it suffices to show that $\mu'$ is effectively unique. This is a consequence of effective convexity: if $\eta$ is another minimizer, $\eta$ has to be supported on parameters that represent the same neuron as $\mu'$, because otherwise we can build an even better measure:
Let $\bar\mu$ be the collapse of $\tfrac12(\mu'+\eta)$ to a single point, as in Lemma A.1.
Then $\bar\mu$ has the same moment estimates as $\mu'$ and $\eta$, but unless $\eta$ is effectively supported at the same point as $\mu'$, it must be (by strict convexity) that
$$\langle V,\bar\mu\rangle \;<\; \tfrac12\,\langle V,\mu'\rangle + \tfrac12\,\langle V,\eta\rangle, \qquad (27)$$
which contradicts the hypothesis that $\eta$ is a minimizer. ∎
Proof of Theorem 4.7
Proof.
Let $\mu_1,\mu_2$ be two minimizing measures. By convexity, $\tfrac12(\mu_1+\mu_2)$ must also be a minimizer. Lemma 4.5 then guarantees that $\tfrac12(\mu_1+\mu_2)$ is effectively supported on at most one point per cell. Therefore, if $\mu_1$ and $\mu_2$ both have support on the same cell, it must be effectively on the same point. ∎
Proof of Corollary 4.8
Proof.
Let $\mu^\star$ be a minimizer of the penalized problem
$$\min_{\mu\in\mathcal{P}(\Theta)}\; \ell\big(\langle\Phi(x_1,\cdot),\mu\rangle,\dots,\langle\Phi(x_n,\cdot),\mu\rangle;\,y\big) + \lambda\,\langle V,\mu\rangle \qquad (28)$$
for some similarity loss function $\ell$. Then $\mu^\star$ must also be a minimizer of the following restricted problem (note that this is not the traditional penalized-constrained duality):
$$\min_{\mu\in\mathcal{P}(\Theta)}\; \langle V,\mu\rangle \quad \text{s.t.} \quad \langle\Phi(x_i,\cdot),\mu\rangle = \langle\Phi(x_i,\cdot),\mu^\star\rangle,\;\; i=1,\dots,n; \qquad (29)$$
this is because a minimizer must remain a minimizer if we further constrain the feasible set.