Model Compression Method for S4 with Diagonal State Space Layers using Balanced Truncation
Abstract
To implement deep learning models on edge devices, model compression methods have been widely recognized as useful. However, it remains unclear which model compression methods are effective for Structured State Space Sequence (S4) models incorporating Diagonal State Space (DSS) layers, which are tailored for processing long-sequence data. In this paper, we propose using the balanced truncation, a prevalent model reduction technique in control theory, applied specifically to the DSS layers of pre-trained S4 models, as a novel model compression method. Moreover, we propose using the reduced model parameters obtained by the balanced truncation as initial parameters of S4 models with DSS layers during the main training process. Numerical experiments demonstrate that our trained models combined with the balanced truncation surpass conventionally trained models with Skew-HiPPO initialization in accuracy, even with fewer parameters. Furthermore, our observations reveal a positive correlation: higher accuracy in the original model consistently leads to increased accuracy in models trained using our model compression method, suggesting that our approach effectively leverages the strengths of the original model.
Index Terms:
Balanced truncation, Deep learning, Diagonal state space model, Model compression
I Introduction
In recent years, deep learning models have garnered substantial attention due to their versatility across a range of applications, including sequence prediction, natural language translation, speech recognition, and audio generation [1, 2, 3]. These models’ ability to understand and predict sequential data underpins their success in these domains. A critical aspect of these models’ effectiveness is their capacity to capture dependencies between sequential data points, a fundamental requirement for achieving high levels of performance in tasks involving time series or sequential input. For instance, the Transformer [4] is effective in capturing short-range dependencies in sequential data, and achieved a state-of-the-art BLEU score of 41.0 on the WMT 2014 English-to-French translation task. However, the Transformer’s ability to capture long-range dependencies in time series data is limited, leading to the loss of temporal information due to its permutation-invariant self-attention mechanism [5].
Contrary to the limitations observed in the Transformer model, the Structured State Space Sequence (S4) model, as introduced in [6], demonstrates exceptional capability in capturing long-range dependencies within sequential data. This effectiveness is largely attributed to the innovative use of HiPPO initialization [7], a technique specifically designed to enhance model performance by leveraging the principles of the state space model (SSM) from control theory. Notably, the S4 model has been shown to surpass conventional models, including the Transformer, in Long Range Arena (LRA) tasks [8], signifying a substantial advancement in handling sequential data. Further refinement of the S4 architecture led to the introduction of the Diagonal State Space (DSS) layers [9], offering a simplified yet effective version of the original S4 model, maintaining its high performance with a more streamlined architecture. In addition to the original and simplified S4 models, several deep learning models related to the SSM, such as H3 [10], Hyena [11], S4D [12], S4ND [13], S5 [14], SSSD [15], and Mamba [16], have been proposed. Generally, in tasks requiring long-range dependency modeling, these deep learning models tend to perform better with a larger number of parameters.
However, deep learning models, which have a vast number of parameters, demand considerable computational resources for inference, thereby limiting their practical and sustainable use. For example, in Edge Intelligence (EI) [17, 18], data from individual devices are processed both in the cloud and locally on each device (Fig. 1). EI devices, such as sensors in factories, have limited computational resources and power consumption constraints. This limitation poses a challenge for performing inference using deep learning models with numerous parameters. Therefore, it is crucial to achieve optimal performance using models with fewer parameters and reduced computational costs in EI applications.
When considering the application of deep learning models in EI, the implementation of model compression techniques is essential [19, 20, 21, 22]. For example, such techniques include:
• Pruning, which removes redundant weights or connections from deep learning models [23, 24].
• Quantization, which reduces the number of bits used to represent the weights and activations in deep learning models [25].
• Knowledge distillation, which transfers the knowledge of a large teacher model to a smaller student model [26, 27, 28, 29].
However, the effectiveness of these model compression techniques in deep models that incorporate SSMs remains unclear.
Thus, our goal is to provide a novel and effective model compression method tailored for S4 models with DSS layers, especially for deployment in EI scenarios. To achieve this, we exploit the fact that DSS layers are built on SSMs, which enables the use of various well-established model reduction methods [30, 31, 32, 33, 34, 35, 36]. In this study, we employ the balanced truncation method [33], a widely used approach in control theory. To train the original model, we use Skew-HiPPO initialization [12, 9], which consistently outperforms random initialization.
The contributions of this paper are summarized as follows. To reduce computational costs during inference, we introduce a novel model compression method that applies the balanced truncation technique to DSS layers in pre-trained S4 models. Moreover, we propose using the reduced model parameters obtained by the balanced truncation as initial parameters of S4 models with DSS layers during the main training process. As demonstrated in Section VI, our trained models combined with the balanced truncation achieved superior accuracy on LRA tasks compared to conventionally trained models using Skew-HiPPO initialization as described in [12, 9], even with fewer parameters. While [37] reports that dimension reduction has minimal impact on the performance of the MultiHyena variant of the Hyena model, our findings with the S4 model underscore a critical distinction: reducing the state dimension and retraining yields a significant performance improvement.
The paper is organized as follows. In Section II, we introduce the balanced truncation method for the reduction of state space models and explain the HiPPO matrix used in Skew-HiPPO initialization. In Section III, we present a deep learning model with DSS layers. Section IV describes existing training methods for this model, and discusses the model’s computational cost during inference. To address the issue of computational cost, in Section V, we propose a model compression method using the balanced truncation method for SSMs. In Section VI, the results of numerical experiments are presented. Finally, in Section VII, we discuss the effectiveness of the proposed method based on the results of the numerical experiments, clarify the limitations of our work, and outline future work.
Notation
• $\mathbb{R}$ and $\mathbb{C}$ denote the sets of real and complex numbers, respectively.
• For $a \in \mathbb{C}$, $|a|$ denotes the absolute value of $a$.
• For $v \in \mathbb{C}^{n}$, $\|v\|$ denotes the Euclidean norm of $v$, i.e., $\|v\| = \sqrt{|v_1|^2 + \cdots + |v_n|^2}$.
• $M^{\top}$ and $M^{*}$ denote the transpose and complex conjugate transpose of a matrix $M$, respectively.
• $f|_{[0,t]}$ denotes the function that coincides with $f$, a function defined on $[0,\infty)$, on its domain $[0,t]$.
• $\deg p$ represents the degree of a polynomial $p$.
• $i$ denotes the imaginary unit.
• $\mathrm{diag}(a_1, \dots, a_n)$ is a diagonal matrix with $a_1, \dots, a_n$ as diagonal elements.
• $\mathrm{Re}(a)$ and $\mathrm{Im}(a)$ are the real and imaginary parts of $a \in \mathbb{C}$, respectively.
• $u \odot v$ represents the Hadamard (element-wise) product of vectors $u$ and $v$.
• $\mathcal{U}(a, b)$ denotes the uniform distribution on $[a, b]$.
• $\mathcal{N}(\mu, \sigma^2)$ is a normal distribution with mean $\mu$ and variance $\sigma^2$.
II Preliminaries
In this section, we introduce a state space model (SSM), an important component of the DSS layer. We also present the balanced truncation method [33], a reduction method for SSMs employed in our training method. Furthermore, we explain the HiPPO matrix [6] utilized in Skew-HiPPO initialization [12, 9], which enhances the performance of trained models.
II-A State space model (SSM)
A crucial component of a deep learning model discussed in our study, as outlined in Section III, is the hidden layer known as the DSS layer. This layer is defined by using the SSM
$$\frac{dx(t)}{dt} = Ax(t) + Bu(t), \qquad y(t) = Cx(t), \tag{1}$$
where $u(t) \in \mathbb{R}$, $y(t) \in \mathbb{C}$, and $x(t) \in \mathbb{C}^{n}$ denote the input, output, and state, respectively, and $A \in \mathbb{C}^{n \times n}$, $B \in \mathbb{C}^{n \times 1}$, $C \in \mathbb{C}^{1 \times n}$. The matrix $A$ denotes a state transition matrix, which describes the internal influence on the time evolution of the internal state $x(t)$. The matrix $B$ is an input matrix, which describes how the external input $u(t)$ affects the internal state $x(t)$. The matrix $C$ serves as an output matrix, which describes how the internal state $x(t)$ is transformed into the observable output $y(t)$.
II-B Balanced truncation method
To address computational issues arising from the large state dimension $n$ of SSM (1), we consider using the balanced truncation method [33], a reduction technique. This method focuses on the controllability and observability of SSM (1) to derive another SSM with dimension $r < n$, which gives almost the same output as (1):
$$\frac{dx_r(t)}{dt} = A_r x_r(t) + B_r u(t), \qquad y_r(t) = C_r x_r(t), \tag{2}$$
where $A_r \in \mathbb{C}^{r \times r}$, $B_r \in \mathbb{C}^{r \times 1}$, and $C_r \in \mathbb{C}^{1 \times r}$. Below is a brief explanation of the balanced truncation method. For more detailed information, refer to Appendix A.
For SSM (1) with asymptotic stability, where all the real parts of the eigenvalues of matrix $A$ are negative, the controllability Gramian $P$ and observability Gramian $Q$ are defined as
$$P = \int_0^{\infty} e^{At} B B^{*} e^{A^{*}t}\, dt, \tag{3}$$
$$Q = \int_0^{\infty} e^{A^{*}t} C^{*} C e^{At}\, dt. \tag{4}$$
These are the unique solutions to the Lyapunov equations
$$AP + PA^{*} + BB^{*} = 0, \tag{5}$$
$$A^{*}Q + QA + C^{*}C = 0, \tag{6}$$
as shown in [38, Theorem 4.1].
In the balanced truncation method, the subspace spanned by eigenvectors corresponding to small eigenvalues of the controllability Gramian $P$ and the observability Gramian $Q$ is ignored. The minimum input energy required to achieve $x(0) = x_0$ from the initial condition $x(-\infty) = 0$ is expressed using the controllability Gramian as
$$\min_{u} \int_{-\infty}^{0} |u(t)|^{2}\, dt = x_0^{*} P^{-1} x_0. \tag{7}$$
Thus, eigenvectors corresponding to smaller eigenvalues of $P$ correspond to directions in the state space that are less influenced by the input $u$. On the other hand, when $x(0) = x_0$ and $u(t) \equiv 0$ for $t \ge 0$, the output energy can be expressed using the observability Gramian as
$$\int_0^{\infty} |y(t)|^{2}\, dt = x_0^{*} Q x_0. \tag{8}$$
Thus, eigenvectors corresponding to smaller eigenvalues of $Q$ correspond to directions in the state space that have less impact on the output $y$.
A coordinate transformation $\tilde{x}(t) = T x(t)$ is applied to SSM (1), obtaining another SSM whose controllability Gramian and observability Gramian coincide as a diagonal matrix $\Sigma$:
$$\frac{d\tilde{x}(t)}{dt} = \tilde{A}\tilde{x}(t) + \tilde{B}u(t), \qquad y(t) = \tilde{C}\tilde{x}(t), \tag{9}$$
where $\tilde{A} = TAT^{-1}$, $\tilde{B} = TB$, and $\tilde{C} = CT^{-1}$. SSM (9) is referred to as the balanced realization of SSM (1). Here, the diagonal elements of $\Sigma$ are denoted as $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$, which are called the Hankel singular values, satisfying $\sigma_n > 0$ under the assumption that SSM (1) is controllable and observable. By partitioning the matrices $\tilde{A}$, $\tilde{B}$, $\tilde{C}$ in SSM (9) as
$$\tilde{A} = \begin{pmatrix} \tilde{A}_{11} & \tilde{A}_{12} \\ \tilde{A}_{21} & \tilde{A}_{22} \end{pmatrix}, \quad \tilde{A}_{11} \in \mathbb{C}^{r \times r}, \tag{10}$$
$$\tilde{B} = \begin{pmatrix} \tilde{B}_{1} \\ \tilde{B}_{2} \end{pmatrix}, \quad \tilde{B}_{1} \in \mathbb{C}^{r \times 1}, \tag{11}$$
$$\tilde{C} = \begin{pmatrix} \tilde{C}_{1} & \tilde{C}_{2} \end{pmatrix}, \quad \tilde{C}_{1} \in \mathbb{C}^{1 \times r}, \tag{12}$$
we define the parameters of reduced model (2) as
$$A_r = \tilde{A}_{11}, \quad B_r = \tilde{B}_{1}, \quad C_r = \tilde{C}_{1}. \tag{13}$$
The resulting system (2) with (13) can be interpreted as a reduced SSM of SSM (1), obtained by truncating the state space associated with the smaller Hankel singular values $\sigma_{r+1}, \dots, \sigma_n$, which correspond to the subspace spanned by eigenvectors that are less influenced by the input or have less influence on the output. Moreover, if SSM (1) is asymptotically stable, the reduced SSM (2) with (13) is also asymptotically stable, as shown in [38, Proposition 4.15].
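As a concrete illustration of the procedure in (3)-(13), the following minimal Python sketch reduces a single asymptotically stable SSM $(A, B, C)$ by square-root balanced truncation using SciPy; the function name `balanced_truncation` and the square-root construction via Cholesky factors of the Gramians are our implementation choices, not code from the original work.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce an asymptotically stable SSM (A, B, C) to state dimension r."""
    # Gramians from the Lyapunov equations (5)-(6): AP + PA* + BB* = 0, A*Q + QA + C*C = 0.
    P = solve_continuous_lyapunov(A, -B @ B.conj().T)
    Q = solve_continuous_lyapunov(A.conj().T, -C.conj().T @ C)
    # Factor the Gramians as P = Lp Lp*, Q = Lq Lq*.
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    # SVD of Lq* Lp yields the Hankel singular values and the balancing transformation.
    U, s, Vh = svd(Lq.conj().T @ Lp)
    S_r = np.diag(1.0 / np.sqrt(s[:r]))
    T_inv = Lp @ Vh.conj().T[:, :r] @ S_r        # reduced coordinates -> original coordinates
    T = S_r @ U[:, :r].conj().T @ Lq.conj().T    # original coordinates -> reduced coordinates
    return T @ A @ T_inv, T @ B, C @ T_inv, s    # (A_r, B_r, C_r) of (13) and Hankel singular values
```

The returned vector `s` holds the Hankel singular values $\sigma_1 \ge \cdots \ge \sigma_n$, the quantities plotted in Fig. 5 and appearing in the error bound of Appendix A.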
II-C HiPPO matrix
For the model explained in Section III, the parameters of the matrices $A$, $B$, and $C$ in the SSM (1) are trained using a suitable optimization algorithm. The initialization of matrix $A$ significantly influences the performance of trained models, as it sets the initial state for the optimization process. The High-order Polynomial Projection Operators (HiPPO) matrix [7] is derived from a method for online compression of continuous signals using projections onto subspaces spanned by polynomial bases. It is well established that the HiPPO matrix is an effective choice for the initial $A$ [6, 12, 9].
The derivation of the HiPPO matrix is explained below. With respect to a measure $\mu$ on $[0, \infty)$, let
$$L_2(\mu) := \left\{ f : [0, \infty) \to \mathbb{R} \,\middle|\, \int_0^{\infty} |f(x)|^{2}\, d\mu(x) < \infty \right\}. \tag{14}$$
The inner product and norm on $L_2(\mu)$ are defined as
$$\langle f, g \rangle_{\mu} := \int_0^{\infty} f(x) g(x)\, d\mu(x), \tag{15}$$
$$\| f \|_{\mu} := \sqrt{\langle f, f \rangle_{\mu}}, \tag{16}$$
respectively.
For an input signal $u$ defined on $[0, \infty)$, the history $u|_{[0,t]}$ at each time $t$ is approximated by projecting it onto a subspace spanned by polynomial bases, and the corresponding coefficient vector $x(t) \in \mathbb{R}^{n}$ represents the history of the input signal. This compression is useful because storing $u|_{[0,t]}$ directly requires a significant amount of memory. Thus, at each time $t$, $x(t)$ contains sufficient information to reconstruct $u|_{[0,t]}$, even though it requires less memory compared to directly storing $u|_{[0,t]}$.
The vector $x(t)$ is expressed as the optimal solution to a convex optimization problem, which is defined by a measure $\mu^{(t)}$ on $[0, t]$ and an orthogonal polynomial basis $\{p_0^{(t)}, \dots, p_{n-1}^{(t)}\}$ of the subspace $\mathcal{G}$ of $L_2(\mu^{(t)})$ (i.e., $\mathcal{G} = \mathrm{span}\{p_0^{(t)}, \dots, p_{n-1}^{(t)}\}$). The optimization problem is
$$x(t) = \operatorname*{arg\,min}_{c \in \mathbb{R}^{n}} \left\| u|_{[0,t]} - \sum_{k=0}^{n-1} c_k\, p_k^{(t)} \right\|_{\mu^{(t)}}. \tag{17}$$
If $\{p_0^{(t)}, \dots, p_{n-1}^{(t)}\}$ is a normalized orthogonal basis, i.e., $\langle p_j^{(t)}, p_k^{(t)} \rangle_{\mu^{(t)}} = 1$ if $j = k$ and $0$ otherwise, the optimal solution is given by
$$x_k(t) = \left\langle u|_{[0,t]},\, p_k^{(t)} \right\rangle_{\mu^{(t)}}, \quad k = 0, 1, \dots, n-1. \tag{18}$$
The vector $x(t)$ defines the approximation of $u|_{[0,t]}$, thus it retains information necessary for reconstructing the history of the input at time $t$. This property of memorizing the input history in the state vector will be useful for modeling long sequential data, as capturing dependencies in sequential data requires referencing information from previous inputs to compute each output at every time step. Moreover, by Equation (18), the measure $\mu^{(t)}$ represents the importance of each time step when compressing the history $u|_{[0,t]}$. The vector $x(t)$ satisfies the differential equation
$$\frac{dx(t)}{dt} = A(t)\, x(t) + B(t)\, u(t), \tag{19}$$
where $A(t) \in \mathbb{R}^{n \times n}$ and $B(t) \in \mathbb{R}^{n \times 1}$ depend on the polynomial basis $\{p_k^{(t)}\}$ and the measure $\mu^{(t)}$. Unlike SSM (1) introduced in Subsection II-A, $A(t)$ and $B(t)$ are time-dependent.
For HiPPO-LegS, which is a variant of HiPPO [7], the measure $\mu^{(t)}$ is defined as the scaled Legendre (LegS) measure
$$\mu^{(t)}(dx) = \frac{1}{t}\, \mathbb{1}_{[0,t]}(x)\, dx. \tag{20}$$
This assigns uniform importance to the entire history $[0, t]$ at each time $t$. Furthermore, the polynomial basis is the normalized orthogonal basis
$$p_k^{(t)}(x) = \sqrt{2k+1}\; P_k\!\left( \frac{2x}{t} - 1 \right), \quad k = 0, 1, \dots, n-1, \tag{21}$$
where $P_k$ is the Legendre polynomial
$$P_k(x) = \frac{1}{2^{k}\, k!}\, \frac{d^{k}}{dx^{k}} \left( x^{2} - 1 \right)^{k}. \tag{22}$$
In this case, as shown in [7, Appendix D.3], $x(t)$ satisfies Equation (19) with
$$A(t) = \frac{1}{t}\, A_{\mathrm{HiPPO}}, \tag{23}$$
$$(A_{\mathrm{HiPPO}})_{jk} = \begin{cases} -\sqrt{(2j+1)(2k+1)} & (j > k), \\ -(j+1) & (j = k), \\ 0 & (j < k), \end{cases} \tag{24}$$
$$B(t) = \frac{1}{t}\, B_{\mathrm{HiPPO}}, \qquad (B_{\mathrm{HiPPO}})_{j} = \sqrt{2j+1}. \tag{25}$$
The matrix of Equation (24) is called the HiPPO matrix. For the SSM incorporating the HiPPO matrix, the state vector $x(t)$ retains information about the history of the input $u$ at each time $t$ [7].
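For concreteness, the HiPPO-LegS matrix and input vector of Equations (24)-(25), as reconstructed above, can be built with a few lines of NumPy; the function name `make_hippo_legs` is ours.

```python
import numpy as np

def make_hippo_legs(n):
    """Construct the n x n HiPPO-LegS matrix (Eq. (24)) and the input vector (Eq. (25))."""
    A = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j > k:
                A[j, k] = -np.sqrt((2 * j + 1) * (2 * k + 1))
            elif j == k:
                A[j, k] = -(j + 1)
            # j < k: the entry stays 0
    B = np.sqrt(2 * np.arange(n) + 1.0).reshape(n, 1)
    return A, B

A_hippo, B_hippo = make_hippo_legs(4)  # small example instance
```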
III Deep learning model employed in this study
The deep learning model employed in this study, proposed in [9], has the structure illustrated in Fig. 2. In this paper, the model is referred to as S4 with DSS layers, despite being named DSS by the authors of [9]. The input layer receives sequential data and outputs $H$ features as 1-dimensional sequential data. This conversion adapts data from various formats to the DSS layer's input format, as detailed in Subsection III-A. The term $H$ denotes the hidden size, representing the number of features processed by the DSS layer. Finally, the output layer converts the $H$ features of 1-dimensional sequential data into the model's final output format. For further details, refer to Appendix B.
III-A Diagonal State Space layer
The most important component of the deep learning model illustrated in Fig. 2 is the DSS layer. As shown in Fig. 3, the DSS layer consists of:
• $H$ independent DSS models,
• $H$ nonlinear connection blocks,
• a linear combination block.
The details of each of these components are explained below.
By restricting the matrix $A$ in (1) to be diagonal, assuming that the diagonal elements do not lie on the imaginary axis, the DSS model is defined by the following discretization for a sample time $\Delta > 0$:
$$x_k = \bar{A} x_{k-1} + \bar{B} u_k, \qquad y_k = \bar{C} x_k, \tag{26}$$
where $\bar{A} = e^{A\Delta}$, $\bar{B} = (\bar{A} - I) A^{-1} B$, and $\bar{C} = C$. The diagonal elements of the matrix $\bar{A}$ do not lie on the unit circle in the complex plane, due to the assumption on the matrix $A$.
The nonlinear connection block receives the input $(u_1, \dots, u_L)$ and the output $(y_1, \dots, y_L)$ from the DSS model and outputs 1-dimensional sequential data
$$\tilde{y}_k = \mathrm{GELU}\big(\mathrm{Re}(y_k) + D u_k\big), \quad k = 1, \dots, L, \tag{27}$$
where $D \in \mathbb{R}$, and GELU [39] is a nonlinear activation function expressed as
$$\mathrm{GELU}(x) = x\, \Phi(x), \tag{28}$$
where $\Phi(x)$ is the cumulative distribution function of the standard normal distribution. This approach is expected to enhance the performance of the model. Note that $\mathrm{Re}(y_k)$ is used in Equation (27) since $y_k$ can be a complex number.
Finally, the 1-dimensional sequential data $\tilde{y}^{(1)}, \dots, \tilde{y}^{(H)}$ outputted from each nonlinear connection block are mixed to obtain the final output of the DSS layer, resulting in $H$ 1-dimensional sequential data $z^{(1)}, \dots, z^{(H)}$. With parameters of weight $W \in \mathbb{R}^{H \times H}$ and bias $b \in \mathbb{R}^{H}$, the output is expressed as
$$\begin{pmatrix} z^{(1)} \\ \vdots \\ z^{(H)} \end{pmatrix} = W \begin{pmatrix} \tilde{y}^{(1)} \\ \vdots \\ \tilde{y}^{(H)} \end{pmatrix} + b\, \mathbb{1}, \tag{29}$$
where $\mathbb{1}$ is a 1-dimensional sequential data of the same length as $\tilde{y}^{(h)}$ with all elements $1$.
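To make the data flow of Fig. 3 concrete, the sketch below runs one DSS layer with $H$ channels in recurrent (one-by-one) mode, following our reconstruction of Equations (26)-(29); the helper names and the skip parameter `D` are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.special import erf

def gelu(x):
    # GELU(x) = x * Phi(x), with Phi the standard normal CDF (Eq. (28)).
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def dss_layer_step(u, state, Abar, Bbar, C, D, W, b):
    """One time step of a DSS layer with H channels and state dimension n.
    u: (H,) real inputs; state: (H, n) complex states; Abar, Bbar, C: (H, n) complex;
    D, b: (H,) real; W: (H, H) real."""
    state = Abar * state + Bbar * u[:, None]        # per-channel diagonal recurrence, Eq. (26)
    y = np.real(np.sum(C * state, axis=1))          # y_k = C x_k, real part taken as in Eq. (27)
    y_tilde = gelu(y + D * u)                       # nonlinear connection block (assumed form)
    z = W @ y_tilde + b                             # linear combination block, Eq. (29)
    return z, state
```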
III-B DSS$_{\mathrm{EXP}}$ and DSS$_{\mathrm{SOFTMAX}}$
The output of DSS (26) can be calculated as
$$y_k = \sum_{j=0}^{k} h_j\, u_{k-j}, \tag{30}$$
with $h_j := \bar{C}\bar{A}^{j}\bar{B}$, which is referred to as the impulse response of DSS (26). Given a sample time $\Delta > 0$, $h_j$ is determined by $(A, B, C)$, and different sets of parameters may result in the same sequence of impulse responses. In fact, the impulse responses
$$h_0,\ h_1,\ \dots,\ h_{L-1} \tag{31}$$
can be the same for different $(A, B, C)$, as described below [9].
Proposition 1.
Suppose that the parameters $(A, B, C)$ of DSS (26) with $A = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ are given, and let $h_0, h_1, \dots, h_{L-1}$ be the corresponding impulse responses (31). Then, there exist $w, \tilde{w} \in \mathbb{C}^{1 \times n}$ satisfying the following equations:
(a) $h_j = \sum_{m=1}^{n} w_m\, k^{\mathrm{exp}}_{m,j}$;
(b) $h_j = \sum_{m=1}^{n} \tilde{w}_m\, k^{\mathrm{softmax}}_{m,j}$;
where
$$k^{\mathrm{exp}}_{m,j} = \frac{e^{\lambda_m \Delta} - 1}{\lambda_m}\, e^{\lambda_m \Delta j}, \tag{32}$$
$$k^{\mathrm{softmax}}_{m,j} = \frac{e^{\lambda_m \Delta j}}{\sum_{r=0}^{L-1} e^{\lambda_m \Delta r}}, \tag{33}$$
$$m = 1, \dots, n, \qquad j = 0, 1, \dots, L-1. \tag{34}$$
Proposition 1 implies that, under weak assumptions, the impulse responses of DSS (26) can be achieved with a special structure of $(A, B, C)$. DSS (26) with the structure stated in Proposition 1(a) is referred to as DSS$_{\mathrm{EXP}}$, and DSS (26) with the structure stated in Proposition 1(b) is referred to as DSS$_{\mathrm{SOFTMAX}}$ [9]. DSS$_{\mathrm{EXP}}$ and DSS$_{\mathrm{SOFTMAX}}$ offer different approaches to modeling the impulse responses, with potential implications for the performance and interpretability of the DSS model. In the following sections, we utilize DSS$_{\mathrm{EXP}}$ or DSS$_{\mathrm{SOFTMAX}}$ as DSS (26).
IV Existing training methods and limitations
In the training of deep learning models, the goal is to minimize the loss function $\mathcal{L}(\theta)$ with respect to the training dataset $\mathcal{D} = \{(u^{(i)}, d^{(i)})\}_{i}$. Here, $(u^{(i)}, d^{(i)})$ represents a pair of an input $u^{(i)}$ and its desired output $d^{(i)}$, and $\theta$ denotes the parameters of the model. The model's output for an input $u^{(i)}$ with parameters $\theta$ is denoted as $f(u^{(i)}; \theta)$. For each input $u^{(i)}$, a loss function $\ell(d^{(i)}, f(u^{(i)}; \theta))$ is defined to measure the difference between the desired output $d^{(i)}$ and the model's output $f(u^{(i)}; \theta)$. The loss function for the entire training dataset is expressed as $\mathcal{L}(\theta) = \sum_{i} \ell(d^{(i)}, f(u^{(i)}; \theta))$, where the summation is over all training examples. As an algorithm for minimizing the loss function $\mathcal{L}(\theta)$, we can consider using AdamW [40].
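As a reminder of how this objective is minimized in practice, here is a minimal PyTorch training loop with AdamW [40]; `model`, `train_loader`, and `loss_fn` are placeholders for the S4-with-DSS-layers model, the dataset iterator, and the per-example loss, which the paper does not specify at this level of detail.

```python
import torch

def train(model, train_loader, loss_fn, epochs=10, lr=1e-3, weight_decay=0.01):
    """Minimize the total loss over the training dataset with AdamW."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    for _ in range(epochs):
        for u, d in train_loader:            # pairs of input and desired output
            optimizer.zero_grad()
            loss = loss_fn(model(u), d)      # per-batch loss between f(u; theta) and d
            loss.backward()                  # gradients of the loss w.r.t. the parameters theta
            optimizer.step()                 # AdamW parameter update
    return model
```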
IV-A Training parameters within the Diagonal State Space layer
Among the parameters trained in our deep learning model, those in the DSS layer include the parameters of each DSS, as well as those of the nonlinear connection blocks and the linear combination block.
For DSS$_{\mathrm{EXP}}$ defined in Subsection III-B, the parameters of each DSS are defined as
$$\Lambda_{\mathrm{re}},\ \Lambda_{\mathrm{im}} \in \mathbb{R}^{n}, \quad w \in \mathbb{C}^{1 \times n}, \quad \Delta > 0, \tag{35}$$
where
$$\lambda_m = -\exp\!\big((\Lambda_{\mathrm{re}})_m\big) + i\,(\Lambda_{\mathrm{im}})_m, \quad m = 1, \dots, n. \tag{36}$$
For DSS$_{\mathrm{EXP}}$, the parameters $\Lambda_{\mathrm{re}}$ and $\Lambda_{\mathrm{im}}$ are trained to determine $A = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$.
For DSS$_{\mathrm{SOFTMAX}}$ defined in Subsection III-B, the parameters of each DSS are defined as
$$\Lambda_{\mathrm{re}},\ \Lambda_{\mathrm{im}} \in \mathbb{R}^{n}, \quad w \in \mathbb{C}^{1 \times n}, \quad \Delta > 0, \tag{37}$$
where
$$\lambda_m = (\Lambda_{\mathrm{re}})_m + i\,(\Lambda_{\mathrm{im}})_m, \quad m = 1, \dots, n. \tag{38}$$
Similarly to DSS$_{\mathrm{EXP}}$, the parameters $\Lambda_{\mathrm{re}}$ and $\Lambda_{\mathrm{im}}$ are trained to determine $A$ for DSS$_{\mathrm{SOFTMAX}}$, with a different expression (38) for $\lambda_m$.
IV-B Initialization of the DSS Layer
The performance of S4 with DSS layers is sensitive to the initialization of the state matrix $A$. To obtain an effective initial value for $A$, the HiPPO matrix is decomposed into a normal matrix and a low-rank matrix. This decomposition allows for a more structured and interpretable initialization of the state matrix $A$, which can improve the performance of the model. The eigenvalues of the normal matrix are employed to initialize the diagonal elements of $A$.
In more detail, the HiPPO matrix is defined as explained in Subsection II-C:
$$(A_{\mathrm{HiPPO}})_{jk} = \begin{cases} -\sqrt{(2j+1)(2k+1)} & (j > k), \\ -(j+1) & (j = k), \\ 0 & (j < k). \end{cases} \tag{39}$$
For SSM (1) incorporating this HiPPO matrix, the state $x(t)$ retains information about the history of the input $u$ [7]. The HiPPO matrix can be decomposed into a normal matrix and a low-rank matrix as
$$A_{\mathrm{HiPPO}} = A_{\mathrm{HiPPO}}^{\mathrm{normal}} - P_{\mathrm{HiPPO}}\, P_{\mathrm{HiPPO}}^{\top}, \tag{40}$$
where $P_{\mathrm{HiPPO}}$ and $A_{\mathrm{HiPPO}}^{\mathrm{normal}}$ are defined as
$$(P_{\mathrm{HiPPO}})_{j} = \sqrt{j + \tfrac{1}{2}}, \tag{41}$$
$$(A_{\mathrm{HiPPO}}^{\mathrm{normal}})_{jk} = \begin{cases} -\frac{1}{2}\sqrt{(2j+1)(2k+1)} & (j > k), \\ -\frac{1}{2} & (j = k), \\ \frac{1}{2}\sqrt{(2j+1)(2k+1)} & (j < k). \end{cases} \tag{42}$$
This $A_{\mathrm{HiPPO}}^{\mathrm{normal}}$ is a normal matrix. Under the assumption that $\lambda_1^{\mathrm{H}}, \dots, \lambda_n^{\mathrm{H}}$ are the eigenvalues of $A_{\mathrm{HiPPO}}^{\mathrm{normal}}$ with positive imaginary parts, the initial values of $\Lambda_{\mathrm{re}}$ and $\Lambda_{\mathrm{im}}$ are defined as follows:
• For DSS$_{\mathrm{EXP}}$,
$$(\Lambda_{\mathrm{re}})_m = \log\!\big({-\mathrm{Re}(\lambda_m^{\mathrm{H}})}\big), \qquad (\Lambda_{\mathrm{im}})_m = \mathrm{Im}(\lambda_m^{\mathrm{H}}). \tag{43}$$
• For DSS$_{\mathrm{SOFTMAX}}$,
$$(\Lambda_{\mathrm{re}})_m = \mathrm{Re}(\lambda_m^{\mathrm{H}}), \qquad (\Lambda_{\mathrm{im}})_m = \mathrm{Im}(\lambda_m^{\mathrm{H}}). \tag{44}$$
Using $\Lambda_{\mathrm{re}}$ and $\Lambda_{\mathrm{im}}$, the matrix $A$ is initialized as $A = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, where each $\lambda_m$ is derived from Equation (36) for DSS$_{\mathrm{EXP}}$ or Equation (38) for DSS$_{\mathrm{SOFTMAX}}$. This process is known as the Skew-HiPPO initialization [12, 9]. Other parameters within the DSS are randomly sampled, as detailed in Section VI. According to [9], models utilizing Skew-HiPPO initialization demonstrate superior prediction accuracy compared to those initialized with randomly sampled values.
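A compact NumPy sketch of the Skew-HiPPO initialization, under our reconstruction of Equations (39)-(44), is shown below; using a normal-HiPPO matrix of size $2n$ and the eigenvalue bookkeeping are our own assumptions.

```python
import numpy as np

def skew_hippo_init(n):
    """Return n complex eigenvalues used to initialize the diagonal state matrix A."""
    N = 2 * n  # larger normal-HiPPO matrix so that n eigenvalues with positive imaginary part exist
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    # Normal part of the HiPPO-LegS matrix, Eq. (42).
    A_normal = np.where(j > k, -0.5 * np.sqrt((2 * j + 1) * (2 * k + 1)), 0.0)
    A_normal += np.where(j < k, 0.5 * np.sqrt((2 * j + 1) * (2 * k + 1)), 0.0)
    A_normal -= 0.5 * np.eye(N)
    eigvals = np.linalg.eigvals(A_normal)
    # Keep the n eigenvalues with the largest (positive) imaginary parts.
    lam = eigvals[np.argsort(-eigvals.imag)][:n]
    return lam  # DSS_EXP then uses log(-Re(lam)) and Im(lam) as in Eq. (43)
```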
IV-C Computational cost of the DSS layer output
When the input is entered one by one or all at once, reducing $H$ (the hidden size) and $n$ (the state dimension) facilitates a reduction in the computational cost of the DSS layer output. In fact, the time and space complexities of the DSS layer output are as illustrated in Table I and Table II, respectively, as discussed below. Here, the length of the 1-dimensional input sequence is denoted as $L$.
In the case where the input $u_k$ at each time step is entered one by one into DSS (26), the output $y_k$ can be computed using the previous state vector $x_{k-1}$ according to (26). The time and space complexities per step are both $O(n)$. The time complexity for processing the entire input of length $L$ is $O(nL)$, and the space complexity involves overwriting $x_k$ at each step, thus remaining $O(n)$. For the nonlinear connection block, both the time and space complexities per step are $O(1)$. The time complexity for processing the entire input of length $L$ is $O(L)$, and the space complexity involves overwriting at each step, thus remaining $O(1)$. Regarding the following linear combination block, the time complexity per step is $O(H^2)$, and the space complexity is $O(H)$. The time complexity for processing the entire input of length $L$ is $O(H^2 L)$, and the space complexity involves overwriting at each step, thus remaining $O(H)$. Therefore, adding up $H$ DSS units, $H$ nonlinear connection blocks, and one linear combination block, the time complexity of the DSS layer output when the input is entered one by one is $O((nH + H^2)L)$, and the space complexity is $O(nH)$ (Table I).
In the case where the whole input of length $L$ is entered all at once, the output
$$(y_1, \dots, y_L) \approx (h_0, h_1, \dots, h_{L-1}) * (u_1, \dots, u_L), \tag{45}$$
where $*$ denotes convolution, can be efficiently computed. In fact, leveraging the fast Fourier transform [41] implies that, for each DSS, the time complexity of this convolution is $O(L \log L)$ and the space complexity is $O(L)$. Furthermore, the computation is parallelizable. Besides, the impulse response $h_j$ in Equation (45) can be easily computed. In fact, for a diagonal matrix $\bar{A}$ with diagonal elements $\bar{\lambda}_1, \dots, \bar{\lambda}_n$, $h_j$ can be calculated as
$$h_j = \bar{C}\bar{A}^{j}\bar{B} = \sum_{m=1}^{n} \bar{C}_m\, \bar{\lambda}_m^{\,j}\, \bar{B}_m. \tag{46}$$
The computation time for the sequence of impulse responses $h_0, \dots, h_{L-1}$ is $O(nL)$. As for the nonlinear connection block, the time and space complexities are both $O(L)$. Regarding the following linear combination block, the time complexity is $O(H^2 L)$, and the space complexity is $O(HL)$. Therefore, adding up $H$ DSS units, $H$ nonlinear connection blocks, and one linear combination block, the time complexity of the DSS layer output when the whole input is entered all at once is $O(H(n + \log L + H)L)$, and the space complexity is $O(HL)$ (Table II).
Additionally, Equation (45) is an approximation that holds when DSS (26) is asymptotically stable and $L$ is sufficiently large. The exact output of DSS (26) is given by (30). However, when (26) is asymptotically stable and $L$ is sufficiently large, $h_j \approx 0$ for $j \ge L$, making the approximation in (45) valid.
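The sketch below illustrates the all-at-once computation of a single DSS output via Equation (46) and the FFT; zero-padding to length $2L$ and the function name `dss_conv_output` are our implementation choices.

```python
import numpy as np

def dss_conv_output(lam_bar, B_bar, C_bar, u):
    """All-at-once output of one DSS for a length-L input u.
    lam_bar, B_bar, C_bar: (n,) complex arrays; u: (L,) real array."""
    L = len(u)
    k = np.arange(L)
    # Impulse responses h_j = sum_m C_m * lam_m**j * B_m, Eq. (46): O(nL) time.
    h = (C_bar * B_bar) @ (lam_bar[:, None] ** k[None, :])
    # Convolution via FFT with zero-padding: O(L log L) time, Eq. (45).
    y = np.fft.ifft(np.fft.fft(h, 2 * L) * np.fft.fft(u, 2 * L))[:L]
    return np.real(y)
```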
TABLE I: Time and space complexities of the DSS layer output when the input is entered one by one.

| | Time | Space |
|---|---|---|
| DSS (1 unit) | $O(nL)$ | $O(n)$ |
| Nonlinear connection (1 unit) | $O(L)$ | $O(1)$ |
| Linear combination | $O(H^2 L)$ | $O(H)$ |
| Total | $O((nH + H^2)L)$ | $O(nH)$ |
TABLE II: Time and space complexities of the DSS layer output when the whole input is entered all at once.

| | Time | Space |
|---|---|---|
| DSS (1 unit) | $O((n + \log L)L)$ | $O(L)$ |
| Nonlinear connection (1 unit) | $O(L)$ | $O(L)$ |
| Linear combination | $O(H^2 L)$ | $O(HL)$ |
| Total | $O(H(n + \log L + H)L)$ | $O(HL)$ |
IV-D Issues for practical applications
Let us consider the issues that hinder the application of S4 with DSS layers to EI [17, 18], as explained in Section I. These issues include memory constraints, computational complexity, and the trade-offs between model performance and resource efficiency.
The following can be concluded from the arguments of Subsection IV-C:
• When processing the entire input of large length $L$ at once, the application of S4 with DSS layers to EI is challenging, even if $n$ and $H$ of the trained model are sufficiently small. This is because the space complexity is $O(HL)$ (Table II), making it difficult to conduct inference in devices with small-capacity memory (e.g., sensors set in factories).
• When processing the input one-by-one, which corresponds to $L = 1$ at each step, S4 with DSS layers can be applied to EI if $n$ and $H$ of the trained model are sufficiently small. Specifically, the time and space complexities are those shown in Table I. That is, when $L = 1$, the time complexity is $O(nH + H^2)$, and the space complexity is $O(nH)$. This implies that even for very large input sequences, inference can be conducted in devices with small-capacity memory.
In summary, to apply S4 with DSS layers to EI, the input needs to be processed one-by-one, and it is desirable to keep the values of $n$ and $H$ as small as possible. However, excessively small values of $n$ and $H$ may limit the model's capacity to capture complex patterns in the data, leading to a deterioration in performance.
V Proposed Model Compression Method
In this section, to address the issues discussed in Subsection IV-D, we propose an effective model compression method for S4 with DSS layers, aiming to reduce the computational cost of inference with one-by-one processing. Specifically, this method enables the acquisition of parameter values that achieve higher accuracy compared to existing methods when training models with DSS layers of the same $n$ and $H$.
The following procedure is our proposed model compression method.
1. Apply the balanced truncation method, as explained in Subsection II-B, to a large-scale DSS that is part of a trained model.
2. Retrain the model using the reduced DSS obtained in step 1) for improved initialization.
In more detail, our proposed method consists of the following Pre-Training, DSS Reduction, Parameter Extraction, and Main Training, illustrated in Fig. 4.
Pre-Training
First, we train S4 with DSS layers whose DSSs have a large state dimension $n$, using the existing training method with the Skew-HiPPO initialization described in Section IV.
DSS Reduction
Next, we apply the balanced truncation method explained in Subsection II-B to the SSM of each DSS in the Pre-Trained model, obtaining the reduced SSM (2) with state dimension $r < n$.
Parameter Extraction
Assuming $A_r$ is diagonalizable, we can transform the reduced SSM (2) into the DSS form (26), as detailed below. There exist a diagonal matrix $\Lambda_r = \mathrm{diag}(\lambda_1, \dots, \lambda_r) \in \mathbb{C}^{r \times r}$ and an invertible matrix $V \in \mathbb{C}^{r \times r}$ satisfying $A_r = V \Lambda_r V^{-1}$. Using a coordinate transformation $z(t) = V^{-1} x_r(t)$, we obtain the new SSM
$$\frac{dz(t)}{dt} = \Lambda_r z(t) + V^{-1} B_r u(t), \qquad y_r(t) = C_r V z(t), \tag{47}$$
which is equivalent to the reduced SSM (2). Consequently, the transformation allows expressing the reduced SSM (2) in the explicitly diagonal form of DSS, as shown in (47).
From Proposition 1, the impulse response of DSS (47) with state dimension $r$ can also be realized by DSS$_{\mathrm{EXP}}$ or DSS$_{\mathrm{SOFTMAX}}$.
• For DSS$_{\mathrm{EXP}}$, the parameters are determined as
$$\mathrm{diag}(\lambda_1, \dots, \lambda_r) = \Lambda_r, \tag{48}$$
$$(\Lambda_{\mathrm{re}})_m = \log\!\big({-\mathrm{Re}(\lambda_m)}\big), \quad (\Lambda_{\mathrm{im}})_m = \mathrm{Im}(\lambda_m), \tag{49}$$
$$w_m = (C_r V)_m\, (V^{-1} B_r)_m. \tag{50}$$
• For DSS$_{\mathrm{SOFTMAX}}$, the parameters are determined as
$$\mathrm{diag}(\lambda_1, \dots, \lambda_r) = \Lambda_r, \tag{51}$$
$$(\Lambda_{\mathrm{re}})_m = \mathrm{Re}(\lambda_m), \quad (\Lambda_{\mathrm{im}})_m = \mathrm{Im}(\lambda_m), \tag{52}$$
$$\tilde{w}_m = (C_r V)_m\, (V^{-1} B_r)_m\, \frac{e^{\lambda_m \Delta L} - 1}{\lambda_m}. \tag{53}$$
For the vectors $w$ and $\tilde{w}$ of (50) and (53), refer to the proof of Proposition 1 in [9, Appendix A.1].
Main Training
As detailed in Subsection IV-A, $\Lambda_{\mathrm{re}}$, $\Lambda_{\mathrm{im}}$, and $w$ are training parameters within DSS$_{\mathrm{EXP}}$ and DSS$_{\mathrm{SOFTMAX}}$. Here, they are initialized with the corresponding values obtained by Parameter Extraction. It is important to note that the state dimension, previously denoted as $n$, is adjusted to $r$ in the context of this initialization. All other parameters maintain their values as obtained from the Pre-Training phase, ensuring consistency in the model's initialization process.
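The overall pipeline of Fig. 4 can be summarized by the following high-level sketch, which reuses the `train` and `balanced_truncation` sketches given earlier; `model.dss_layers`, `dss.state_space`, and `dss.reinitialize` are hypothetical interfaces standing in for the paper's Pre-Training, DSS Reduction, Parameter Extraction, and Main Training steps.

```python
import numpy as np

def compress_and_retrain(model, train_loader, loss_fn, r):
    """Proposed pipeline: Pre-Training, DSS Reduction, Parameter Extraction, Main Training."""
    model = train(model, train_loader, loss_fn)                    # Pre-Training (Section IV)
    for layer in model.dss_layers:                                 # hypothetical attribute
        for dss in layer.dss_units:                                # hypothetical attribute
            A, B, C = dss.state_space()                            # (A, B, C) of SSM (1)
            A_r, B_r, C_r, _ = balanced_truncation(A, B, C, r)     # DSS Reduction (Subsection II-B)
            lam, V = np.linalg.eig(A_r)                            # diagonalization, Eq. (47)
            w = (C_r @ V).ravel() * (np.linalg.inv(V) @ B_r).ravel()  # assumed DSS_EXP weights, Eq. (50)
            dss.reinitialize(lam, w)                               # hypothetical: set Lambda and w, n -> r
    return train(model, train_loader, loss_fn)                     # Main Training
```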
VI Numerical Experiments
To evaluate the proposed method, we employed tasks of LRA [8], which is available at https://github.com/google-research/long-range-arena. The benchmark includes sequence data ranging from 1,000 to 16,000 in length and evaluates the model’s ability to capture long-range dependencies required for learning long sequences. In the text classification task, we classify movie reviews in the Internet Movie Database (IMDb) review dataset [42] as negative or positive. Tables III and IV summarize the statistics of the text classification dataset, including the counts and lengths of the raw data sequences. These sequences are truncated or padded as necessary to ensure consistent input lengths.
The experiments were conducted on a machine running Windows 10, equipped with 64 GB of memory and an 11th Gen Intel Core i9-11980HK CPU. The model training and evaluation code was implemented in Python using PyTorch 1.11.0 and TensorFlow 2.12.0, and executed with the NVIDIA RTX A3000 Laptop GPU.
| | Train | Test |
|---|---|---|
| Number of examples | 12,500 | 12,500 |
| Max text length | 8,969 | 6,385 |
| Min text length | 52 | 32 |
| Avg text length | 1302.97904 | 1285.14968 |
| | Train | Test |
|---|---|---|
| Number of examples | 12,500 | 12,500 |
| Max text length | 13,704 | 12,988 |
| Min text length | 70 | 65 |
| Avg text length | 1347.16024 | 1302.43512 |
VI-A Comparison with existing training methods
Table V and Table VI show the accuracy of models obtained through various training methods, using DSS$_{\mathrm{EXP}}$ and DSS$_{\mathrm{SOFTMAX}}$ respectively, where $n$ denotes the dimension of the state vector of each DSS. The number of DSS layers and the hidden size $H$ are fixed to the same values in all settings. The columns labeled “before” and “after” denote the accuracy of the model with the initial parameter values and the accuracy of the model after training from that initial state, respectively.
In the “HiPPO” column on the left, the Skew-HiPPO initialization explained in Subsection IV-B was used to initialize the state matrix $A$ of each DSS. In the middle column “Random”, the initial values of $A$ were randomly sampled; the real and imaginary parts of the diagonal elements of $A$ were each drawn from fixed distributions (with different choices for DSS$_{\mathrm{EXP}}$ and DSS$_{\mathrm{SOFTMAX}}$). Other parameters within DSS were randomly sampled for both “HiPPO” and “Random”; in particular, the real and imaginary parts of each element in $w$ were randomly sampled as well.
TABLE V: Accuracy on the text classification task (DSS$_{\mathrm{EXP}}$).

| $n$ | | HiPPO | Random | Proposed Method |
|---|---|---|---|---|
| 128 | before | 0.5000 | 0.4998 | |
| | after | 0.7997 | 0.7731 | |
| 64 | before | 0.4967 | 0.4982 | |
| | after | 0.8012 | 0.7968 | |
| 32 | before | 0.5008 | 0.4978 | |
| | after | 0.7990 | 0.8100 | |
| 16 | before | 0.4936 | 0.4970 | 0.8310 |
| | after | 0.8310 | 0.8071 | 0.8396 |
| 8 | before | 0.4996 | 0.5028 | 0.5011 |
| | after | 0.8182 | 0.8115 | 0.8376 |
| 4 | before | 0.4960 | 0.4964 | 0.5002 |
| | after | 0.8042 | 0.8155 | 0.8418 |
TABLE VI: Accuracy on the text classification task (DSS$_{\mathrm{SOFTMAX}}$).

| $n$ | | HiPPO | Random | Proposed Method |
|---|---|---|---|---|
| 128 | before | 0.4984 | 0.5000 | 0.8216 |
| | after | 0.8216 | 0.7540 | 0.8359 |
| 64 | before | 0.4993 | 0.5000 | 0.5013 |
| | after | 0.8158 | 0.7544 | 0.8382 |
| 32 | before | 0.4994 | 0.4999 | 0.5000 |
| | after | 0.8026 | 0.7496 | 0.8370 |
| 16 | before | 0.5011 | 0.4994 | 0.5005 |
| | after | 0.8190 | 0.7461 | 0.8402 |
| 8 | before | 0.5000 | 0.5072 | 0.5036 |
| | after | 0.8071 | 0.7722 | 0.8412 |
| 4 | before | 0.4940 | 0.5000 | 0.5012 |
| | after | 0.7948 | 0.7819 | 0.8377 |
For DSS$_{\mathrm{SOFTMAX}}$, it has been reported in [9] that “HiPPO” using Skew-HiPPO initialization achieves higher accuracy after training compared to “Random” using randomly sampled initial values. The results in Table VI are consistent: “HiPPO” achieves higher accuracy after training compared to “Random” for each $n$, while the accuracies before training are similar. For DSS$_{\mathrm{EXP}}$, the same trend was observed for almost all $n$, as shown in Table V.
The “Proposed Method” column on the right describes our approach, which uses a reduced SSM obtained from the Pre-Trained models with $n = 16$ (DSS$_{\mathrm{EXP}}$) and $n = 128$ (DSS$_{\mathrm{SOFTMAX}}$) through the balanced truncation method to initialize Main Training. In Table V, the “Proposed Method” entries for $n = 128$, $n = 64$, and $n = 32$ are blank, because the balanced truncation does not permit expanding the state dimension beyond the original size of $n = 16$.
Before Main Training, the accuracy of models using the “Proposed Method” is comparable to those using “Random” and “HiPPO” for each $n$, excluding $n = 16$ for DSS$_{\mathrm{EXP}}$ and $n = 128$ for DSS$_{\mathrm{SOFTMAX}}$. However, after Main Training, the accuracy of “Proposed Method” exceeded that of “HiPPO” for each $n$. This result is noteworthy because the “Proposed Method” tends to outperform “HiPPO” after Main Training, despite having similar accuracies before the training.
The following points are particularly noteworthy.
• For DSS$_{\mathrm{EXP}}$ shown in Table V, the highest accuracy after Main Training with “Proposed Method” was 0.8418 at $n = 4$. Notably, this exceeded the accuracy after training with “HiPPO” at $n = 16$, which was 0.8310, despite having a smaller $n$ while maintaining the same hidden size $H$.
• For DSS$_{\mathrm{SOFTMAX}}$ shown in Table VI, the highest accuracy after training with “Proposed Method” was 0.8412 at $n = 8$. Notably, this exceeded the accuracy after training with “HiPPO” at $n = 128$, which was 0.8216, despite having a smaller $n$ while maintaining the same hidden size $H$.
In summary, the initial parameters obtained by reducing the Pre-Trained DSS of $n = 16$ for DSS$_{\mathrm{EXP}}$ and $n = 128$ for DSS$_{\mathrm{SOFTMAX}}$ appear to be effective in enhancing the accuracy of the trained model compared to the initial parameters given by the Skew-HiPPO initialization. Similar trends are observed in the ListOps task and the text retrieval task of LRA, where our method enhanced the accuracy of the trained model. For detailed results, refer to Appendix C.
VI-B Relationship between accuracy of Pre-Trained models and models after Main Training
Table VII shows the accuracy of models after Main Training when initialized with different Pre-Trained models. We obtained Pre-Trained models with DSS$_{\mathrm{SOFTMAX}}$ of $n = 128$ and $n = 80$, and utilized the reduced models for improved initialization of Main Training. The accuracy of the models after Main Training is in the columns “Proposed Method ($n = 128$)” and “Proposed Method ($n = 80$)”. Both “Proposed Method ($n = 128$)” and “Proposed Method ($n = 80$)” followed the trend observed in Subsection VI-A, where “Proposed Method” achieved higher accuracy than “HiPPO” for each $n$.
The accuracy of the Pre-Trained model for $n = 128$ is 0.8216, which is higher than 0.8098 at $n = 80$. As for models after Main Training, the accuracy of “Proposed Method ($n = 128$)” surpasses that of “Proposed Method ($n = 80$)” for each $n$. This suggests that higher accuracy of the Pre-Trained model leads to higher accuracy of the model obtained through Main Training.
TABLE VII: Accuracy after Main Training with different Pre-Trained models on the text classification task.

| $n$ | HiPPO | Proposed Method ($n = 128$) | Proposed Method ($n = 80$) |
|---|---|---|---|
| 128 | 0.8216 | 0.8359 | |
| 80 | 0.8098 | 0.8390 | 0.8343 |
| 64 | 0.8158 | 0.8382 | 0.8224 |
| 32 | 0.8026 | 0.8370 | 0.8255 |
| 16 | 0.8190 | 0.8402 | 0.8294 |
| 8 | 0.8071 | 0.8412 | 0.8334 |
| 4 | 0.7948 | 0.8377 | 0.8317 |
VI-C Non-triviality of the obtained results
The Hankel singular values illustrated in Fig. 5 highlight the non-triviality of the results presented in Tables V and VI from a system-theoretic perspective. These values were derived from the SSM parameters of the Pre-Trained model using DSS$_{\mathrm{SOFTMAX}}$ with $n = 128$. Specifically, the Hankel singular values were computed for each SSM in every DSS layer. The detailed computational method is described in Appendix A.
As explained in Section II-B and Appendix A, the Hankel singular values can reveal the important directions in the state space from the controllability and observability perspective. That is, if the Hankel singular values are relatively large, the corresponding directions are relatively important. Notably, Fig. 5 shows that almost all directions in the 128-dimensional state space are important, because there are few significantly small Hankel singular values. Therefore, reducing the dimensionality of the Pre-Trained model from $n = 128$ would be expected to significantly deteriorate its performance. This expectation is consistent with the results shown in Table VI. In fact, the accuracy of the Pre-Trained model with $n = 128$ was 0.8216, but immediately after reduction the accuracy dropped to around 0.50 (the “before” entries of the “Proposed Method” column in Table VI). Nevertheless, after Main Training, the accuracy of the reduced models improved to at least 0.8359. This improvement in accuracy is not predicted by the theoretical analysis of balanced truncation introduced in Appendix A and is a non-trivial result.
VII Conclusion
We developed a new model compression method specifically for S4 models with DSS layers, using the balanced truncation method [33]. This approach not only reduces the number of parameters but also enhances model performance. We proposed using the reduced model parameters obtained by the balanced truncation as initial parameters for the main training process. Our experiments demonstrated that the proposed method achieves superior accuracy on Long Range Arena (LRA) tasks compared to conventionally trained models using the Skew-HiPPO initialization, even with fewer parameters. Moreover, we observed a positive correlation between the accuracy of Pre-Trained models and their accuracy after Main Training.
The primary limitation of this study lies in the scope of tasks and datasets used for evaluation. While the LRA tasks provide a robust benchmark for long-range dependency modeling, further validation on diverse datasets and real-world applications is necessary. Additionally, the underlying principles of the proposed method remain unclear, which limits the understanding of why this approach is effective.
The following are interesting future directions:
• Future research should investigate the underlying principles of the proposed method, aiming to enhance the development of more effective training methods for deep learning models with DSS layers.
• Reference [23] has shown that combining various model compression methods can yield better results. Investigating whether combining our proposed model compression method based on the balanced truncation with other compression techniques can improve performance is an interesting and promising direction for future work.
• Expanding the scope of evaluation to include real-time deployment scenarios in EI applications will provide more comprehensive insights into the method's practical viability. This can help demonstrate how the reduced models can be effectively used in resource-constrained environments.
• Another possible direction is to explore the applicability of the proposed compression approach to other deep learning models deployed in resource-constrained environments, such as physics-informed neural networks used as surrogate models [43, 44].
Acknowledgment
This work was supported by the Japan Society for the Promotion of Science KAKENHI under Grant 23H03680.
Appendix A Details of the balanced truncation method
As mentioned in Section II-B, the eigenvectors of the controllability Gramian $P$ and the observability Gramian $Q$ of SSM (1) provide important directions in the state space from the perspectives of controllability and observability. Thus, we can adopt an approach that reduces dimensions along directions that are not significant. However, the eigenvectors of $P$ and $Q$ do not coincide in general. This means that, in general, it is impossible to uniquely determine the directions to be ignored based solely on the information from the original controllability Gramian $P$ and observability Gramian $Q$.
To overcome this problem, we apply a coordinate transformation $\tilde{x}(t) = T x(t)$ to SSM (1) to obtain the new SSM (9). Then, the corresponding controllability and observability Gramians of SSM (9) become $T P T^{*}$ and $(T^{-1})^{*} Q T^{-1}$, respectively. Thus, if we can find $T$ satisfying
$$T P T^{*} = (T^{-1})^{*} Q T^{-1} = \Sigma, \tag{54}$$
the controllability and observability Gramians of the transformed SSM (9) will coincide, even if the original Gramians of SSM (1) do not.
To find $T$ satisfying (54), we perform the eigenvalue decomposition of the Hermitian positive definite matrix $P^{1/2} Q P^{1/2}$ to obtain
$$P^{1/2} Q P^{1/2} = U \Sigma^{2} U^{*}, \tag{55}$$
where $P^{1/2}$ is the square root matrix of $P$, $U$ is a unitary matrix, and $\Sigma^{2}$ is a diagonal matrix with positive diagonal elements satisfying $\sigma_1^{2} \ge \sigma_2^{2} \ge \cdots \ge \sigma_n^{2}$. Defining
$$T = \Sigma^{1/2} U^{*} P^{-1/2}, \tag{56}$$
we get
$$T P T^{*} = (T^{-1})^{*} Q T^{-1} = \Sigma. \tag{57}$$
We call $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$ the Hankel singular values of SSM (1).
Thus, the balanced truncation method consists of the following procedure:
1. Compute the controllability Gramian $P$ and the observability Gramian $Q$ by solving the Lyapunov equations (5) and (6).
2. Compute the balancing transformation $T$ via (55) and (56), and form the balanced realization (9).
3. Partition the balanced realization as in (10)-(12) and truncate it to obtain the reduced SSM (2) with (13).
The reduced SSM (2) is preferable when it closely approximates the original large-scale SSM (1) in terms of the norm of the difference in their transfer functions. In fact, the transfer functions $G(s)$ and $G_r(s)$ of original SSM (1) and reduced SSM (2) are defined by
$$G(s) = C(sI - A)^{-1}B, \qquad G_r(s) = C_r(sI - A_r)^{-1}B_r, \tag{58}$$
respectively. The energy of the difference between original SSM (1) output $y$ and reduced SSM (2) output $y_r$ can be evaluated by
$$\sqrt{\int_0^{\infty} |y(t) - y_r(t)|^{2}\, dt} \;\le\; \|G - G_r\|_{H_\infty} \sqrt{\int_0^{\infty} |u(t)|^{2}\, dt}, \tag{59}$$
where $\|\cdot\|_{H_\infty}$ denotes the $H_\infty$ norm. That is, if the input energy $\int_0^{\infty}|u(t)|^{2}\,dt$ and $\|G - G_r\|_{H_\infty}$ are sufficiently small, the output error energy is also sufficiently small. Moreover, when using the balanced truncation method, $\|G - G_r\|_{H_\infty}$ is bounded by
$$\|G - G_r\|_{H_\infty} \le 2(\sigma_{r+1} + \sigma_{r+2} + \cdots + \sigma_n), \tag{60}$$
assuming that $\sigma_r \neq \sigma_{r+1}$. Thus, if the Hankel singular values $\sigma_{r+1}, \dots, \sigma_n$ are small, then $\|G - G_r\|_{H_\infty}$ will also be small. The proofs for the above claims can be found in [38, Chapter 4].
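To make the bound (60) tangible, the following sketch (reusing `balanced_truncation` from the example in Section II-B) compares a frequency-sampled approximation of $\|G - G_r\|_{H_\infty}$ with $2(\sigma_{r+1} + \cdots + \sigma_n)$ for a randomly generated stable system; the test system itself is arbitrary and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 16, 4
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # shift eigenvalues to make A stable
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

A_r, B_r, C_r, hsv = balanced_truncation(A, B, C, r)

# Frequency-sampled approximation of ||G - G_r|| in the H-infinity norm.
omegas = np.logspace(-3, 3, 2000)
err = max(
    abs((C @ np.linalg.solve(1j * w * np.eye(n) - A, B)
         - C_r @ np.linalg.solve(1j * w * np.eye(r) - A_r, B_r)).item())
    for w in omegas
)
print(err <= 2.0 * hsv[r:].sum())  # the bound (60) should hold (True)
```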
Appendix B Details of the deep learning model
In the deep learning model employed in this study, residual connections [45] and normalization layers are positioned before and after the DSS layer. In residual connections, a path bypassing one or more layers is created, as illustrated in Fig. 6 and Fig. 7, and the output of the bypassed layer is added to it. The normalization layer can be placed before the DSS layer (Prenorm, Fig. 6) or after the residual connection (Postnorm, Fig. 7). In the case of prenorm, a normalization layer is also placed before the output layer. Normalization layers such as batch normalization [46] or layer normalization [47] are used, which contributes to the stability and acceleration of training. Additionally, residual connections prevent the gradient vanishing and exploding problems.
Appendix C Other results
In addition to the text classification task, explained in Section VI, we confirmed that our proposed method improves performance in the ListOps task and the text retrieval task of LRA [8], as shown in Table VIII and Table IX, respectively. Here, the number of DSS layers and the hidden size $H$ are fixed across all settings. In the ListOps task, a numerical expression structured with operators MAX, MEAN, MEDIAN, SUM_MOD and parentheses is the input, and its value is the output. For instance,
$$\texttt{[MAX 2 9 [MEDIAN 4 7 5 ] 0 ]} \;\longrightarrow\; 9. \tag{61}$$
The maximum length of input is 2000, and the output values range from 0 to 9. In the text retrieval task, we estimate the similarity between two papers and determine if there is a citation link. The length of each paper is 4000, and the total input length is 8000.
TABLE VIII: Accuracy on the ListOps task.

| $n$ | | HiPPO | Random | Proposed Method |
|---|---|---|---|---|
| 64 | before | 0.0715 | 0.0810 | 0.5180 |
| | after | 0.5180 | 0.4020 | 0.5300 |
| 32 | before | 0.1705 | 0.1195 | 0.3610 |
| | after | 0.4920 | 0.4035 | 0.5205 |
| 16 | before | 0.0760 | 0.1780 | 0.1290 |
| | after | 0.4745 | 0.4001 | 0.5390 |
| 8 | before | 0.0715 | 0.0960 | 0.1825 |
| | after | 0.4025 | 0.4300 | 0.5250 |
| 4 | before | 0.0825 | 0.1210 | 0.1725 |
| | after | 0.4250 | 0.4440 | 0.5175 |
TABLE IX: Accuracy on the text retrieval task.

| $n$ | | HiPPO | Random | Proposed Method |
|---|---|---|---|---|
| 64 | before | 0.4943 | 0.5068 | 0.8189 |
| | after | 0.8189 | 0.7830 | 0.8345 |
| 32 | before | 0.4937 | 0.4976 | 0.5807 |
| | after | 0.8217 | 0.7812 | 0.8302 |
| 16 | before | 0.5055 | 0.4932 | 0.5399 |
| | after | 0.8071 | 0.7907 | 0.8251 |
| 8 | before | 0.4939 | 0.4939 | 0.5064 |
| | after | 0.8212 | 0.7944 | 0.8270 |
| 4 | before | 0.4939 | 0.4939 | 0.5062 |
| | after | 0.8136 | 0.7893 | 0.8313 |
As in Subsection VI-A, the left column “HiPPO”, using the Skew-HiPPO initialization, achieved higher accuracy after training compared to the middle column “Random”, using randomly sampled initial values, for almost all $n$, as shown in Table VIII and Table IX.
In the “Proposed Method” column on the right, Main Training was initialized using a reduced model obtained from Pre-Trained models of $n = 64$. The accuracy of models before Main Training with “Proposed Method” was comparable to that of “Random” and “HiPPO” for each $n$ excluding $n = 64$. However, after the training, the accuracy of “Proposed Method” exceeded that of “HiPPO” for each $n$.
The following points are particularly noteworthy.
• For the ListOps task, the highest accuracy after Main Training with “Proposed Method” was 0.5390 at $n = 16$. Notably, this exceeded the accuracy after training with “HiPPO” at $n = 64$, which was 0.5180, despite the smaller $n$ and the same hidden size $H$.
• For the text retrieval task, the accuracy after Main Training with “Proposed Method” was 0.8313 at $n = 4$, which exceeded the accuracy after training with “HiPPO” at $n = 64$ and $n = 32$, despite the smaller $n$ and the same hidden size $H$.
Consequently, the initial parameters obtained by reducing the Pre-Trained DSS of $n = 64$ appear to be effective in enhancing the accuracy of the trained model compared to the initial parameters given by the Skew-HiPPO initialization.
References
- [1] B. Lim and S. Zohren, “Time-series forecasting with deep learning: a survey,” Philosophical Transactions of the Royal Society A, vol. 379, no. 2194, p. 20200209, 2021.
- [2] A. Mehrish, N. Majumder, R. Bharadwaj, R. Mihalcea, and S. Poria, “A review of deep learning techniques for speech processing,” Information Fusion, p. 101869, 2023.
- [3] A. K. Pandey and S. S. Roy, “Natural language generation using sequential models: A survey,” Neural Processing Letters, vol. 55, no. 6, pp. 7709–7742, 2023.
- [4] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proceedings of Advances in neural information processing systems, 2017.
- [5] A. Zeng, M. Chen, L. Zhang, and Q. Xu, “Are transformers effective for time series forecasting?” in Proceedings of the AAAI conference on artificial intelligence, vol. 37, no. 9, 2023, pp. 11 121–11 128.
- [6] A. Gu, K. Goel, and C. Ré, “Efficiently modeling long sequences with structured state spaces,” in International Conference on Learning Representations, 2022.
- [7] A. Gu, T. Dao, S. Ermon, A. Rudra, and C. Ré, “Hippo: Recurrent memory with optimal polynomial projections,” in Proceedings of Advances in neural information processing systems, 2020, pp. 1474–1487.
- [8] Y. Tay, M. Dehghani, S. Abnar, Y. Shen, D. Bahri, P. Pham, J. Rao, L. Yang, S. Ruder, and D. Metzler, “Long range arena: A benchmark for efficient transformers,” in International Conference on Learning Representations, 2021.
- [9] A. Gupta, A. Gu, and J. Berant, “Diagonal state spaces are as effective as structured state spaces,” in Proceedings of Advances in Neural Information Processing Systems, 2022, pp. 22 982–22 994.
- [10] D. Y. Fu, T. Dao, K. K. Saab, A. W. Thomas, A. Rudra, and C. Re, “Hungry hungry hippos: Towards language modeling with state space models,” in The Eleventh International Conference on Learning Representations, 2023.
- [11] M. Poli, S. Massaroli, E. Nguyen, D. Y. Fu, T. Dao, S. Baccus, Y. Bengio, S. Ermon, and C. Ré, “Hyena hierarchy: Towards larger convolutional language models,” arXiv preprint arXiv:2302.10866, 2023.
- [12] A. Gu, K. Goel, A. Gupta, and C. Ré, “On the parameterization and initialization of diagonal state space models,” in Advances in Neural Information Processing Systems, 2022, pp. 35 971–35 983.
- [13] E. Nguyen, K. Goel, A. Gu, G. Downs, P. Shah, T. Dao, S. Baccus, and C. Ré, “S4nd: Modeling images and videos as multidimensional signals with state spaces,” in Proceedings of Advances in neural information processing systems, 2022, pp. 2846–2861.
- [14] J. T. Smith, A. Warrington, and S. Linderman, “Simplified state space layers for sequence modeling,” in The Eleventh International Conference on Learning Representations, 2023.
- [15] J. M. Lopez Alcaraz and N. Strodthoff, “Diffusion-based time series imputation and forecasting with structured state space models,” Transactions on machine learning research, pp. 1–36, 2023.
- [16] A. Gu and T. Dao, “Mamba: Linear-time sequence modeling with selective state spaces,” arXiv preprint arXiv:2312.00752, 2023.
- [17] K. Cao, Y. Liu, G. Meng, and Q. Sun, “An overview on edge computing research,” IEEE Access, vol. 8, pp. 85 714–85 728, 2020.
- [18] Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, and J. Zhang, “Edge intelligence: Paving the last mile of artificial intelligence with edge computing,” Proceedings of the IEEE, vol. 107, no. 8, pp. 1738–1762, 2019.
- [19] T. Choudhary, V. Mishra, A. Goswami, and J. Sarangapani, “A comprehensive survey on model compression and acceleration,” Artificial Intelligence Review, vol. 53, pp. 5113–5155, 2020.
- [20] H. Djigal, J. Xu, L. Liu, and Y. Zhang, “Machine and deep learning for resource allocation in multi-access edge computing: A survey,” IEEE Communications Surveys & Tutorials, vol. 24, no. 4, pp. 2449–2494, 2022.
- [21] M. S. Murshed, C. Murphy, D. Hou, N. Khan, G. Ananthanarayanan, and F. Hussain, “Machine learning at the network edge: A survey,” ACM Computing Surveys, vol. 54, no. 8, pp. 1–37, 2021.
- [22] N. Tekin, A. Aris, A. Acar, S. Uluagac, and V. C. Gungor, “A review of on-device machine learning for iot: An energy perspective,” Ad Hoc Networks, p. 103348, 2023.
- [23] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding,” in 4th International Conference on Learning Representations, 2016.
- [24] Y. LeCun, J. Denker, and S. Solla, “Optimal brain damage,” Advances in neural information processing systems, vol. 2, 1989.
- [25] A. Gholami, S. Kim, Z. Dong, Z. Yao, M. W. Mahoney, and K. Keutzer, “A survey of quantization methods for efficient neural network inference,” in Low-Power Computer Vision. Chapman and Hall/CRC, 2022, pp. 291–326.
- [26] J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge distillation: A survey,” International Journal of Computer Vision, vol. 129, no. 6, pp. 1789–1819, 2021.
- [27] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, 2015.
- [28] Z. Hao, J. Guo, K. Han, H. Hu, C. Xu, and Y. Wang, “Revisit the power of vanilla knowledge distillation: from small scale to large scale,” Advances in Neural Information Processing Systems, vol. 36, 2023.
- [29] T. Huang, Y. Zhang, M. Zheng, S. You, F. Wang, C. Qian, and C. Xu, “Knowledge diffusion for distillation,” Advances in Neural Information Processing Systems, vol. 36, 2023.
- [30] A. C. Antoulas, Approximation of large-scale dynamical systems. SIAM, 2005.
- [31] A. Astolfi, “Model reduction by moment matching for linear and nonlinear systems,” IEEE Transactions on Automatic Control, vol. 55, no. 10, pp. 2321–2336, 2010.
- [32] S. Gugercin, A. C. Antoulas, and C. Beattie, “$\mathcal{H}_2$ model reduction for large-scale linear dynamical systems,” SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 2, pp. 609–638, 2008.
- [33] B. Moore, “Principal component analysis in linear systems: Controllability, observability, and model reduction,” IEEE transactions on automatic control, vol. 26, no. 1, pp. 17–32, 1981.
- [34] K. Sato, “Riemannian optimal model reduction of linear port-hamiltonian systems,” Automatica, vol. 93, pp. 428–434, 2018.
- [35] K. Sato, “Riemannian optimal model reduction of stable linear systems,” IEEE Access, vol. 7, pp. 14 689–14 698, 2019.
- [36] K. Sato, “Reduced model reconstruction method for stable positive network systems,” IEEE Transactions on Automatic Control, vol. 68, no. 9, pp. 5616–5623, 2023.
- [37] S. Massaroli, M. Poli, D. Fu, H. Kumbong, R. Parnichkun, D. Romero, A. Timalsina, Q. McIntyre, B. Chen, A. Rudra et al., “Laughing hyena distillery: Extracting compact recurrences from convolutions,” in Advances in Neural Information Processing Systems, 2023.
- [38] G. E. Dullerud and F. Paganini, A course in robust control theory: a convex approach. Springer Science & Business Media, 2000.
- [39] D. Hendrycks and K. Gimpel, “Gaussian error linear units (gelus),” arXiv preprint arXiv:1606.08415, 2016.
- [40] I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” arXiv preprint arXiv:1711.05101, 2017.
- [41] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to algorithms, third edition. MIT press, 2009.
- [42] A. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, “Learning word vectors for sentiment analysis,” in Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, 2011, pp. 142–150.
- [43] J. Donnelly, A. Daneshkhah, and S. Abolfathi, “Physics-informed neural networks as surrogate models of hydrodynamic simulators,” Science of the Total Environment, vol. 912, p. 168814, 2024.
- [44] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational physics, vol. 378, pp. 686–707, 2019.
- [45] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- [46] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International conference on machine learning, 2015, pp. 448–456.
- [47] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016.