A Novel Explanation Against Linear Neural Networks
Abstract
Linear regression and neural networks are widely used to model data. Neural networks distinguish themselves from linear regression through their use of activation functions, which enable them to model nonlinear functions. The standard argument for these activation functions is that without them, neural networks can only model a line. However, in this paper we propose a novel explanation for the impracticality of neural networks without activation functions, or linear neural networks (LNNs): they actually reduce both training and testing performance. Having more parameters makes LNNs harder to optimize, and thus they require more training iterations than linear regression to even potentially converge to the optimal solution. We prove this hypothesis through an analysis of the optimization of an LNN and rigorous testing comparing the performance of LNNs and linear regression on synthetic, noisy datasets.
1 Introduction
Neural networks [1] distinguish themselves from linear regression by their ability to model nonlinear data. This capability comes from their nonlinear activation functions. The standard explanation against neural networks without such activation functions, which we refer to as linear neural networks (LNNs), is that they can only model lines and thus yield no benefit compared to linear regression.
In this paper, we propose a novel reason for the impracticality of LNNs: LNNs actually perform worse than linear regression, despite modeling the same form of data. The excess of parameters in LNNs corrupts the optimization process, preventing LNN training from reaching the optimal solution. We test our hypothesis through a walkthrough of the optimization procedure of an LNN and through experiments on synthetic datasets of varying noisiness.
2 Methods
If we have a univariate dataset $x$ and associated labels $y$, assuming the relationship between $x$ and $y$ is linear, a linear regression model given by the equation $\hat{y} = wx + b$ can be created, where $\hat{y}$ is the prediction for the input $x$. If this model were fully optimized, $w$ and $b$ would be the weight and bias, respectively, that minimize the mean of the squared residuals.
Neural networks for univariate data can similarly be constructed as follows. The output vector for the first layer is given by $h_1 = w_1 x + b_1$, where $w_i$ and $b_i$ denote the weight and bias for the $i$th layer. The output of an LNN with a second layer would then be $\hat{y} = w_2(w_1 x + b_1) + b_2$, or equivalently $\hat{y} = w_2 w_1 x + w_2 b_1 + b_2$.
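For concreteness, the two model forms can be sketched in PyTorch as below. This is a minimal sketch: the width-1 linear layers mirror the scalar notation above, but the exact architecture is an assumption rather than a reference implementation.

```python
import torch
import torch.nn as nn

# Linear regression: y_hat = w * x + b
lin_reg = nn.Linear(1, 1)

# Two-layer LNN: y_hat = w2 * (w1 * x + b1) + b2, with no activation in between
lnn_2 = nn.Sequential(
    nn.Linear(1, 1),   # first layer: h1 = w1 * x + b1
    nn.Linear(1, 1),   # second layer applied directly to h1
)

x = torch.randn(8, 1)                       # a small batch of univariate inputs
print(lin_reg(x).shape, lnn_2(x).shape)     # both produce (8, 1) predictions
```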
LNNs require iterative optimization, such as Gradient Descent (GD), to optimally adjust their parameters. GD updates each of the current parameters based on the derivative of the objective function with respect to that parameter. Given a learning rate $\eta$ and any parameter $\theta_t$ at time step $t$, GD will update the parameter to $\theta_{t+1}$ as follows: $\theta_{t+1} = \theta_t - \eta \frac{\partial L}{\partial \theta_t}$. In our case, the objective function is the mean squared error (MSE), given by $L = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2$. The derivatives used to optimize the linear regression parameters through such optimization are shown in Equation 1.
$$\frac{\partial L}{\partial w} = \frac{2}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)\,x_i, \qquad \frac{\partial L}{\partial b} = \frac{2}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i) \tag{1}$$
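To make Equation 1 concrete, the following NumPy sketch runs manual gradient descent for linear regression on a toy dataset; the dataset, learning rate, and iteration count are illustrative choices, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = 3.0 * x + 1.5                           # noiseless toy labels with w = 3.0, b = 1.5

w, b, lr = 0.0, 0.0, 0.1                    # initial parameters and learning rate
for _ in range(200):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)   # dL/dw from Equation 1
    grad_b = 2 * np.mean(y_hat - y)         # dL/db from Equation 1
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))             # converges to values close to 3.0 and 1.5
```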
LNN optimization is more cumbersome because of the increased number of parameters. For the two-layered LNN given by $\hat{y} = w_2 w_1 x + w_2 b_1 + b_2$, the optimal parameter solution is for $w_2 w_1 = w$ and $w_2 b_1 + b_2 = b$, so that the LNN's prediction function simplifies to the optimal linear regression model $\hat{y} = wx + b$. Because the derivative of any parameter depends on the parameters of the other layers, this solution is harder to reach. Given the derivative of $L$ with respect to $w_1$ used to optimize $w_1$:
$$\frac{\partial L}{\partial w_1} = \frac{2}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)\,w_2\,x_i \tag{2}$$
we can see that the next step of $w_1$ taken by GD would be based on the other, currently suboptimal parameters $w_2$, $b_1$, and $b_2$. In order for the optimal solution to be reached, the new value of $w_1$, calculated from a suboptimal $w_2$, and $w_2$ itself have to align such that their product is $w$. This will realistically only happen if the LNN begins training with a parameter initialization where $w_2 w_1$ is already (close to) $w$. Parameters are initialized randomly, so this particular arrangement is extremely unlikely. The high interdependency between parameters and their movements across iterations makes it difficult for an LNN's parameters to arrive at the optimal solution. Note that these same dynamics apply to the optimization of the bias parameters. Through this demonstration, it can be seen how this problem is further exacerbated if the LNN has more layers, and thus more parameters.
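These coupled updates can be simulated directly. The following NumPy sketch (our own toy setup; the seed, learning rate, and iteration count are illustrative) applies GD to a two-layer LNN with scalar parameters, where the gradient of $w_1$ at every step uses the current, generally suboptimal $w_2$, exactly as in Equation 2.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100)
y = 3.0 * x + 1.5                                  # optimal solution: w = 3.0, b = 1.5

# Random initialization of the two-layer LNN's scalar parameters.
w1, b1, w2, b2 = rng.standard_normal(4)
lr = 0.01

for _ in range(2000):
    y_hat = w2 * (w1 * x + b1) + b2
    err = y_hat - y
    grad_w1 = 2 * np.mean(err * w2 * x)            # Equation 2: depends on the current w2
    grad_b1 = 2 * np.mean(err * w2)
    grad_w2 = 2 * np.mean(err * (w1 * x + b1))     # depends on the current w1 and b1
    grad_b2 = 2 * np.mean(err)
    w1, b1 = w1 - lr * grad_w1, b1 - lr * grad_b1
    w2, b2 = w2 - lr * grad_w2, b2 - lr * grad_b2

# Only these products matter for the prediction; they must drift toward 3.0 and 1.5.
print(round(w2 * w1, 3), round(w2 * b1 + b2, 3))
```

Because only the products $w_2 w_1$ and $w_2 b_1 + b_2$ determine the prediction, the individual parameters must co-adapt across many iterations before those products can settle near the optimal values.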
3 Experiments
We compare the performance of linear regression and LNNs from 2 to 10 layers on synthetic datasets with varying levels of noise.
Data
For simplicity, all of the data in our experiments is univariate. Note that even if our data were multivariate, the same results would occur, as linear regression and LNNs on multivariate data essentially operate the same way across each dimension.
We first sample the input data vector $x$ from a standard normal distribution. We randomly sample scalars $w_{\text{true}}$ and $b_{\text{true}}$ from the same distribution as the respective true weight and bias parameters of the data. This gives us $y$, the label vector, equal to $w_{\text{true}} x + b_{\text{true}}$. Because no realistic data is perfectly linear, we add noise to our dataset. We sample noise $\epsilon$ from a standard normal distribution and then scale the noise to the magnitude of the pre-existing data by multiplying it by the expectation of $|y|$. This scaled noise is then multiplied by a noise coefficient $c$, which controls the extent to which the labels are corrupted by noise. Finally, this noise, scaled to the magnitude of the dataset, is added to the pre-existing labels to give the noisy labels $y_{\text{noisy}}$. In equation form, our noisy labels are given by:
$$y_{\text{noisy}} = y + c \cdot \mathbb{E}\!\left[\,|y|\,\right] \cdot \epsilon \tag{3}$$
For the new noisy dataset, the new optimal weight is denoted as $w^*$ and the optimal bias as $b^*$.
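A sketch of this data-generation procedure is given below. The function and variable names are illustrative, and the use of the mean absolute label as the magnitude term reflects our reading of Equation 3 rather than the original generation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_labels(x, w_true, b_true, noise_coef, rng):
    """Corrupt linear labels following Equation 3."""
    y = w_true * x + b_true                        # clean linear labels
    eps = rng.standard_normal(x.shape[0])          # raw noise from N(0, 1)
    return y + noise_coef * np.mean(np.abs(y)) * eps

w_true, b_true = rng.standard_normal(2)            # true parameters of the data
x_train = rng.standard_normal(1000)                # 1000-length training inputs
x_test = rng.standard_normal(200)                  # 200-length evaluation inputs
y_train = make_noisy_labels(x_train, w_true, b_true, noise_coef=0.15, rng=rng)
y_test = make_noisy_labels(x_test, w_true, b_true, noise_coef=0.15, rng=rng)
```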
Results
We compare the performance of a linear regression model to that of LNNs with 2 to 10 layers. For each experiment, using the aforementioned data procedure, we generate a 1000-length data and label vector for model training and a 200-length data and label vector for model evaluation. Both datasets are generated with the same noise coefficient. We first train each model on the training data to convergence. At each iteration, we track the model's MSE on the train and test datasets.
Additionally, we track the deviation of the model's parameters from the optimal weight and bias at each iteration. We calculate the deviation of a given model's parameters from the optimal solution by first applying the Normal Equation, a closed-form solution, to the training data to solve for the optimal weight $w^*$ and optimal bias $b^*$. Because every model here is a linear function, we can collapse each model to an equivalent weight and bias and then measure the model's optimal parameter deviation as the distance between these collapsed parameters and $(w^*, b^*)$. Over the iterations, this deviation should decrease.
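One way to implement this measurement is sketched below. The Normal Equation solve is standard; the collapse routine assumes a stack of width-1 linear layers, and the absolute-difference distance is our assumption about the exact metric.

```python
import numpy as np

def normal_equation(x, y):
    """Solve the Normal Equation for the optimal weight and bias on univariate data."""
    X = np.stack([x, np.ones_like(x)], axis=1)      # design matrix [x, 1]
    w_star, b_star = np.linalg.solve(X.T @ X, X.T @ y)
    return w_star, b_star

def collapse_to_linear(model):
    """Collapse a stack of 1x1 linear layers into one equivalent weight and bias."""
    w, b = 1.0, 0.0
    for layer in model:                             # assumes an nn.Sequential of Linear(1, 1)
        wi, bi = layer.weight.item(), layer.bias.item()
        w, b = wi * w, wi * b + bi                  # compose y = wi * (w * x + b) + bi
    return w, b

def param_deviation(model, w_star, b_star):
    """Assumed distance: sum of absolute differences from (w*, b*)."""
    w, b = collapse_to_linear(model)
    return abs(w - w_star) + abs(b - b_star)
```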
We perform this experiment 100 times for each of the noise coefficient values 0.05, 0.15, 0.3, and 0.5. We write our models in PyTorch [2] and train them with SGD [3] using a learning rate of 0.001. We report the mean and standard deviation of the test MSE (across all 100 experiments) for all models and noise coefficients in Table 1. Figure 1 shows the average optimal parameter deviation throughout training over the 100 experiments for each model at a fixed noise coefficient. Figure 2 shows the sharp increases in MSE as the LNN parameter count (or number of layers) increases across all noise levels.
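For reference, a minimal version of such a training loop might look as follows. The learning rate of 0.001 is from the text; the full-batch use of SGD, the iteration budget, and the toy dataset are assumptions for illustration.

```python
import torch
import torch.nn as nn

def make_lnn(n_layers):
    """Build an LNN as a stack of 1x1 linear layers with no activations."""
    return nn.Sequential(*[nn.Linear(1, 1) for _ in range(n_layers)])

def train(model, x, y, iters=20000, lr=0.001):
    """Full-batch training with SGD on the MSE objective."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(iters):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# Toy stand-in for one generated dataset (see the data sketch above).
x = torch.randn(1000, 1)
y = 2.0 * x + 0.5 + 0.1 * torch.randn(1000, 1)
model = train(make_lnn(3), x, y)
```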
[Figure 1: Average optimal parameter deviation throughout training for each model.]
Table 1: Test MSE (mean ±standard deviation over 100 experiments) by noise coefficient.

| Model | 0.05 | 0.15 | 0.30 | 0.50 |
|---|---|---|---|---|
| LinReg | 0.0028 ±0.005 | 0.0197 ±0.025 | 0.086449 ±0.1197 | 0.2840 ±0.4667 |
| LNN-2 | 0.003 ±0.006 | 0.020 ±0.025 | 0.086451 ±0.1197 | 0.2842 ±0.4668 |
| LNN-3 | 0.004 ±0.007 | 0.023 ±0.04 | 0.09 ±0.1194 | 0.2844 ±0.4665 |
| LNN-4 | 0.05 ±0.27 | 0.03 ±0.05 | 0.101 ±0.13 | 0.30 ±0.47 |
| LNN-5 | 0.08 ±0.28 | 0.09 ±0.26 | 0.196 ±0.42 | 0.36 ±0.61 |
| LNN-6 | 0.21 ±0.55 | 0.19 ±0.58 | 0.26 ±0.59 | 0.55 ±0.9 |
| LNN-7 | 0.39 ±0.85 | 0.40 ±0.98 | 0.52 ±1.02 | 0.82 ±1.32 |
| LNN-8 | 0.69 ±1.48 | 0.74 ±1.14 | 0.61 ±0.87 | 1.01 ±1.35 |
| LNN-9 | 0.87 ±1.27 | 0.74 ±1.08 | 0.72 ±1.06 | 1.08 ±1.45 |
| LNN-10 | 0.98 ±1.35 | 0.90 ±1.33 | 0.94 ±1.17 | 1.10 ±1.296 |

Discussion
The optimal parameter solution is reached only by linear regression and LNNs with few layers. LNNs with more layers typically converge to increasingly suboptimal solutions despite being given an excessive number of iterations. This highlights the empirical difficulty that excess parameters introduce into optimization, with both training and testing performance suffering.
4 Conclusion
We propose a novel explanation against neural networks without activation functions. We demonstrate the superiority of linear regression over linear neural networks through an analysis of their optimization. We validate this analysis by testing linear regression and LNNs on different levels of noise across 100 datasets for each level. We conclude that LNNs perform worse in training and testing than linear regression due to the more difficult optimization caused by their excess parameters.
References
- [1] Warren S. McCulloch and Walter Pitts, “A logical calculus of the ideas immanent in nervous activity”, In The Bulletin of Mathematical Biophysics 5, Springer, 1943, pp. 115–133
- [2] Adam Paszke et al., “PyTorch: An imperative style, high-performance deep learning library”, In Advances in Neural Information Processing Systems 32, 2019
- [3] Herbert Robbins and Sutton Monro, “A stochastic approximation method”, In The Annals of Mathematical Statistics, JSTOR, 1951, pp. 400–407