
A Functional approach for Two Way Dimension Reduction in Time Series

Aniruddha Rajendra Rao
Industrial AI Lab, Hitachi America, Ltd. R&D
Santa Clara, CA
   Haiyan Wang
Industrial AI Lab, Hitachi America, Ltd. R&D
Santa Clara, CA
   Chetan Gupta
Industrial AI Lab, Hitachi America, Ltd. R&D
Santa Clara, CA
Abstract

The rise in data has led to the need for dimension reduction techniques, especially in the area of non-scalar variables, including time series, natural language processing, and computer vision. In this paper, we specifically investigate dimension reduction for time series through functional data analysis. Current methods for dimension reduction in functional data, namely functional principal component analysis and functional autoencoders, are limited to linear mappings or to scalar latent representations of the time series, which is restrictive. In real data applications, the nature of the data is much more complex. We propose a non-linear function-on-function approach, consisting of a functional encoder and a functional decoder, that uses continuous hidden layers of continuous neurons to learn the structure inherent in functional data and thereby addresses the aforementioned concerns of the existing approaches. Our approach yields a low-dimensional latent representation by reducing both the number of functional features and the number of timepoints at which the functions are observed. The effectiveness of the proposed model is demonstrated through multiple simulations and real data examples.

{Aniruddha.Rao, Haiyan.Wang, Chetan.Gupta}@hal.hitachi.com

Keywords: Functional Data Analysis, Deep Learning, Autoencoder, Functional Neural Network, Time series, Dimension Reduction.

1 Introduction

Nowadays, with the rapid advancements in information technology, data are continuously collected at an astonishing frequency in many fields, ranging from sensors installed on industrial equipment to records of economic factors, individual health, and environmental exposures. The exponentially growing volume of data poses challenges for storage, transfer, and analysis.

Dimension reduction plays a key role in solving this problem by reducing the data in a meaningful manner. The objective of dimension reduction is to mathematically reduce the dimensionality by projecting the data to a lower dimensional subspace with minimum loss of information. The learned mathematical mapping plays an effective role in not only sorting out which variables are important but also how they interact with each other. Dimension reduction also helps us to deal with the curse of dimensionality, ease the transfer of information, reduce storage requirements, and reduce computation time. The usefulness of dimension reduction makes it an active area of research. Particularly, in the last few years, building dimension reduction models has proven to be beneficial in many fields, like time series, text, image, and video. This has attracted the interest of researchers from the scientific community, especially where non-scalar types of data have become prevalent [27, 13, 24, 8, 21]. Unfortunately, there is still a lot to be done in dimension reduction for time series data.

The purpose of this paper is to consider the problem of dimension reduction in time series data, which occurs frequently in many areas of interest in industry and research like economics, meteorology, finance, health, sensors, etc. Examples include weekly temperature, hourly sensor data, daily stock returns, and monthly blood pressure readings of a patient. This is a difficult and interesting problem, as a time series consists of random observations of the same variables at different timestamps, and the observations are neither independent nor linearly related because of the intricate temporal correlations. Mathematically, we are building a mapping from multiple chronologically measured numerical variables within a certain time interval $\mathcal{S}$ to a few chronologically measured numerical variables within the same time interval $\mathcal{S}$.

In this paper, we consider a general setting similar to an autoencoder, where the input and output time series are the same and we learn the dimension-reduced form of the time series from the latent space representation. Standard machine learning methods for dimension reduction, such as Principal Component Analysis (PCA) and Autoencoders (AE), are designed to work with data recorded at regularly spaced points in a finite-dimensional space. These methods have the following limitations: they ignore intricate temporal correlations, they are unable to capture complex relations, and they rely on a scalar representation of the time series data [2].

Functional data analysis (FDA) [15, 4, 9] treats such time series as functions over a continuum that are intrinsically infinite dimensional. Functional principal component analysis (FPCA) [15, 4, 9] is the functional counterpart of PCA from the multivariate setting. FPCA is a useful dimension reduction tool that frequently serves as a key component in many functional analyses [3, 5]. It allows unsupervised learning of low-dimensional representations of the time series data by considering each entire time series as an individual functional sample. FPCA overcomes some of the limitations mentioned above, but it learns a linear representation of the data and therefore suffers from underfitting when the underlying mapping is complex. There is also a sufficient dimension reduction branch of FDA [10, 11] that learns a non-linear low-dimensional latent representation of the data, but these methods are supervised, which limits their applicability. Moreover, they either do not work or do not perform well in the case of multiple functional features.

Deep learning approaches have become very popular in the past few years. They offer some of the state-of-the-art methods for learning nonlinear representations of multiple types of data [23, 7]. More recently, deep learning for functional data [18, 26, 25, 22, 17] has gained a lot of momentum. The Functional Autoencoder (FAE) [8], inspired by Rossi et al. [19, 20], introduced dimension reduction for functional data using deep learning by adapting the AE to the functional setting, and showed that FPCA is a special case of FAE under certain conditions. While FAE has been shown to beat many state-of-the-art methods [8], it unfortunately does not capture the true nature of functional data: only some of its layers have weight functions, and the low-dimensional latent representation of the time series is scalar, which is restrictive. So far, all the methods we discussed have some kind of structural limitation, and to the best of our knowledge, current works tackle either decreasing the number of functional features or decreasing the number of timepoints at which the curves are observed, but there is very limited work on solving both together.

In this paper, we formulate the dimension reduction for the time series problem from the functional data analysis perspective. We innovatively identify a general mathematical mapping between the functional features, based on which a non-linear approach for two-way dimension reduction is proposed. Specifically, we introduce Bi-Functional Autoencoders (BFAE), a novel generalization of traditional autoencoders to functional data settings. The contributions of this paper are summarized as follows:

  • We propose a general functional mapping from multivariate temporal variables to themselves that embraces Functional Auto-encoders (FAE) as a special case.

  • We propose a new model that generalizes the well known neural network autoencoders to the functional data setting while preserving the functional nature of the data to address dimension reduction in time series.

  • We explain how our approach can capture non-linear relations while reducing the dimensions in two ways (the number of features and the number of timepoints), which is why it is called the Bi-Functional Autoencoder (BFAE).

  • We demonstrate the effectiveness of the proposed approach in learning low dimensional non-linear representation of the time series through simulation experiments and two real-world examples.

2 Preliminaries

2.1 Notations and Prior Art

This paper aims to develop an approach to map and represent the multiple time series variables (features) in a compact and efficient dimension-reduced manner by leveraging the temporal dependencies within and between the involved variables. We begin by introducing useful notations, discussing the prior arts and the problem definition, before proceeding to give the details of Bi-Functional Autoencoders (BFAE). Let us assume that for the $i^{th}$ ($i\in\{1,2,...,N\}$) independent subject, we have $R$ features that are continuously recorded within a compact time interval $\mathcal{S}\subseteq\mathbf{R}$. In particular, the observed timepoints of the $r^{th}$ feature for the $i^{th}$ subject are given by an $M^{(i,r)}_{s}$-dimensional vector $\mathbf{S}^{(i,r)}=[S^{(i,r)}_{1},...,S^{(i,r)}_{j},...,S^{(i,r)}_{M^{(i,r)}_{s}}]^{T}$, with $M^{(i,r)}_{s}$ representing the number of timepoints in the time series and $S^{(i,r)}_{j}\in\mathcal{S}$ for $i=1,...,N$; $r=1,...,R$; $j=1,...,M^{(i,r)}_{s}$. The corresponding time series are denoted as $\mathbf{X}^{(i,r)}=[X^{(i,r)}_{1},...,X^{(i,r)}_{j},...,X^{(i,r)}_{M^{(i,r)}_{s}}]^{T}$. The subscript $M^{(i,r)}_{s}$ reflects the fact that the measuring timestamps may vary across features and subjects. For the applicability of prior arts like PCA and AE, the data need to be regular time series; therefore, we assume the number of timepoints across all features and subjects is $M$. The observed data can be denoted as $\{\mathbf{X}^{(i,1)},...,\mathbf{X}^{(i,R)}\}_{i=1}^{N}$. We then concatenate the $R$ temporal features as $\mathbf{X}^{(i)}=[\mathbf{X}^{(i,1)T},...,\mathbf{X}^{(i,R)T}]^{T}$. Given samples $\{\mathbf{X}^{(i)}\}_{i=1}^{N}$, the problem definition can be stated as an unsupervised nonlinear multi-dimensional functional representation learning problem, formally defined as follows:

$$\mathbf{Z}^{(i)}=F(\mathbf{X}^{(i)}) \qquad (1)$$

where $\mathbf{Z}^{(i)}=\{\mathbf{Z}^{(i,1)},...,\mathbf{Z}^{(i,R^{\prime})}\}_{i=1}^{N}$ is a latent representation of the time series $\mathbf{X}^{(i)}$ with $R^{\prime}<R$.
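For concreteness, the sketch below illustrates the vector-valued view that this formulation (and prior arts such as PCA and AE) operates on. The array sizes are hypothetical and chosen only for exposition; they are not part of the formulation itself.

```python
import numpy as np

# Hypothetical sizes for illustration: N subjects, R temporal features,
# each observed at the same M timepoints (the regularity assumed by PCA/AE).
N, R, M = 100, 10, 50
X = np.random.randn(N, R, M)      # X[i, r, :] is the r-th time series of subject i

# PCA/AE operate on the concatenated vectors X^(i) = [X^(i,1)^T, ..., X^(i,R)^T]^T,
# i.e. each subject is treated as one long scalar-valued vector (the Eq. (1) setting).
X_flat = X.reshape(N, R * M)      # input to PCA or a standard autoencoder
```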

This problem formulation has a few disadvantages. Biases may be introduced by the data pre-processing required to obtain the same number of timepoints for each time series, and these widely used prior arts have their own limitations in solving dimension reduction problems for time series data because they treat the observations as scalar values. These multivariate approaches are unable to capture the temporal dependencies among $\mathbf{X}^{(i,r)}$, $r=1,...,R$.

2.2 Functional Data Analysis and an alternate Formulation

So far we have viewed the dimension reduction problem from a multivariate time series point of view. Let us look at an alternative problem formulation using functional data analysis (FDA). In FDA, we learn from random functions, which are dynamically varying data over a continuum. FDA has found applications in a wide variety of fields, like time series, sensor data, image, and spatial data [14, 9]. We can use this ability of FDA to deal with continuous underlying curves $X^{(i,r)}(s)$, $s\in\mathcal{S}$, that are observed at discrete timepoints along a time interval. In functional analysis, the input or the output, if not both, has to be functional curves. For given functional features $\{X^{(i,1)}(s),...,X^{(i,R)}(s),\, s\in\mathcal{S}\}_{i=1}^{N}$, our problem of finding a latent representation of the data can be stated as follows:

$$\mathbf{Z}^{(i)}(t)=F(\mathbf{X}^{(i)}(s)), \qquad (2)$$

where $\mathbf{Z}^{(i)}(t)=\{\mathbf{Z}^{(i,1)}(t),...,\mathbf{Z}^{(i,R^{\prime})}(t),\, t\in\mathcal{S}\}_{i=1}^{N}$ is a latent representation of the time series $\mathbf{X}^{(i)}(s)$ with $R^{\prime}<R$, and each latent time series is observed at $M^{\prime}$ timepoints ($M^{\prime}\leq M$).

The popularity of FDA in time series applications is evident from the recent boom of interest in this area of research. The smoothness assumption in FDA can be relaxed, as shown in [28], allowing us to directly apply different FDA techniques to time series data. Also, compared to common sequential models (RNNs, LSTMs), which learn parameters that are fixed over time, FDA enables us to learn feature effects that change over the interval $\mathcal{S}$ and to build more efficient models [25]. In the functional field, FPCA [6, 1] can encode the information in a low-dimensional latent space, but the mapping is linear. This can be a limiting factor because, in many real-world applications, the multiple functional features can have complex relations not just among themselves but across different timepoints as well.

The growing interest in deep learning and FDA has resulted in multiple deep functional models [19, 26, 16] for different analytical tasks. The FAE [8] is a deep learning approach for dimension reduction that generalizes the autoencoder from the vector space to the functional setting. In this network, only the first and last layers can accommodate functions, with the help of the functional neurons given by [19, 20]; the rest of the network consists of traditional neurons. Because of this, FAE can only give a latent representation of functional data in scalar form, which is limiting. It is very common for functional features to have complex relations that cannot be embedded in a scalar form; thus, the scalar representation of functional data is inadequate.

In the next section, we propose a non-linear function-on-function autoencoder that leverages the power of fully connected Neural Networks while keeping the functional identity of the time series.

3 Proposed Bi-Functional Autoencoder Model

3.1 Our approach

Figure 1: General architecture for our proposed Bi-Functional Autoencoder

The main goal of this paper is to learn a mapping $\mathbf{f}(\cdot)$ from the time series data to a low-dimensional latent space, as shown in Figure 1. To achieve this goal we make use of the idea of an autoencoder together with Functional Neural Networks [19, 16, 17]. An autoencoder (AE) is a popular dimension reduction technique that uses an encoder to learn a non-linear latent representation in an $R^{\prime}$-dimensional space $\mathbf{R}^{R^{\prime}}$ from an $R$-dimensional vector-valued input space $\mathbf{R}^{R}$ (where $R^{\prime}<R$). The decoder part of the AE maps the learned latent representation back to the $R$-dimensional vector-valued input space $\mathbf{R}^{R}$. The input and output of an AE are the same, and we learn by measuring the reconstruction error of the output compared to the input. We want to use this idea in a functional setting, where the encoder maps from an $R$-dimensional functional space to an $R^{\prime}$-dimensional functional space and the decoder maps the $R^{\prime}$-dimensional functional space back to the $R$-dimensional functional space. We make use of the continuous neurons defined in [16, 17] to achieve this task in the functional setting. The continuous neurons take in functional inputs and produce a functional output, thus preserving the functional nature of the data.

The framework of our Bi-Functional Autoencoder (BFAE), shown in Figure 1, develops a continuous mapping from layer to layer by using multiple continuous hidden layers composed of continuous neurons as defined in [16, 17]. In Figure 1 there are three types of layers: an input layer, continuous hidden layers, and a continuous output layer. The input layer takes in the functional features, the continuous hidden layers consist of multiple continuous neurons, and the continuous output layer provides the reconstructed input functional features using continuous neurons. The model learns with the help of functional weights that are continuous over time (or some other continuum). These functional weights consist of both univariate and bivariate functions, that is, we learn functions as well as surfaces. The key idea is to preserve the functional nature of the data throughout the whole reconstruction process across the network. This gives a richer structure for transforming the data into a low-dimensional latent representation while exploiting the domain information and continuity of the functional features. We define the $l^{th}$ continuous hidden layer and its $r^{th}$ continuous neuron as:

$$H^{(i,r)}_{(l)}(s)=\sigma\Big(b^{(r)}_{(l)}(s)+\sum_{j=1}^{J}\int w^{(r,j)}_{(l)}(s,t)\,H^{(i,j)}_{(l-1)}(t)\,dt\Big) \qquad (3)$$

where $l=1,2,3,...,L$, $H^{(i,r)}_{(0)}(s)=X^{(i,r)}(s)$, $H^{(i,r)}_{(L)}(s)=\widehat{X^{(i,r)}(s)}$, $b^{(r)}_{(l)}(s)\in\mathcal{L}^{2}(\mathcal{S})$ is the unknown univariate intercept function, $w^{(r,j)}_{(l)}\in\mathcal{L}^{2}(\mathcal{S}\times\mathcal{S})$ is the bivariate parameter function connecting the $j^{th}$ continuous neuron of the $(l-1)^{th}$ layer to the $r^{th}$ continuous neuron of the $l^{th}$ continuous hidden layer, and $\sigma(\cdot)$ is a standard activation function. The value of $J$, which can be viewed as the number of features in each continuous hidden layer, is fixed to $R$ for the input layer and the continuous output layer; for the remaining continuous hidden layers, $J$ can be tuned to any scalar value. Similarly, the timepoints at which the output of the continuous neurons is observed are fixed for the input layer and the continuous output layer, and can be changed for the continuous neurons in the remaining continuous hidden layers.
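As an illustration only, the continuous neuron in Equation (3) can be approximated on a fixed time grid by replacing the integral with a quadrature sum. The following sketch uses a simple Riemann-type rule and illustrative array shapes; these choices are for exposition and are not a prescribed implementation.

```python
import numpy as np

def continuous_layer(H_prev, W, b, t_grid, activation=np.tanh):
    """One continuous hidden layer (Eq. 3), discretized on a time grid.

    H_prev : (J, M_in) array    -- the J incoming functional neurons evaluated at t_grid
    W      : (R, J, M_out, M_in) array -- bivariate weight functions w(s, t) on a grid
    b      : (R, M_out) array   -- univariate intercept functions b(s) on the output grid
    t_grid : (M_in,) array      -- timepoints at which the incoming functions are observed
    Returns an (R, M_out) array: the R outgoing functional neurons on their own grid.
    """
    dt = np.gradient(t_grid)                        # quadrature weights approximating dt
    # approximate the integral over t of w(s, t) * H_prev(t) dt by a weighted sum
    integral = np.einsum("rjst,jt,t->rs", W, H_prev, dt)
    return activation(b + integral)
```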

We now describe how the network works. We use the continuous neurons defined by Equation 3 to go from the input layer to a dimension-reduced form of the input at an intermediate continuous hidden layer (say the $l^{\prime th}$ layer). This is the encoder part of the BFAE approach, called the functional encoder. The low-dimensional representation of the inputs at the $l^{\prime th}$ continuous hidden layer is $\mathbf{Z}^{(i)}(s)=H^{(i)}_{(l^{\prime})}(s)$, where $\mathbf{Z}^{(i)}(s)=\{\mathbf{Z}^{(i,1)}(s),...,\mathbf{Z}^{(i,R^{\prime})}(s),\, s\in\mathcal{S}\}_{i=1}^{N}$ and $H^{(i)}_{(l^{\prime})}(s)=\{H^{(i,1)}_{(l^{\prime})}(s),...,H^{(i,R^{\prime})}_{(l^{\prime})}(s),\, s\in\mathcal{S}\}_{i=1}^{N}$, with $R^{\prime}<R$ and the functions $\mathbf{Z}^{(i)}$ observed at $M^{\prime}\leq M$ timepoints. From this latent representation of the input, we move towards the continuous output layer, using the continuous neurons in the remaining continuous hidden layers, to obtain the reconstructed functional input values. This part of our approach is called the functional decoder. We have therefore developed a mapping from an $R$-dimensional functional space to an $R^{\prime}$-dimensional functional space using the functional encoder, and the functional decoder maps the $R^{\prime}$-dimensional functional space back to the $R$-dimensional functional space, as sketched below. This latent representation $\mathbf{Z}^{(i)}(s)$ is very flexible in nature: we can set the number of functional features to $R^{\prime}$ and the number of observed timepoints to $M^{\prime}$, according to the task at hand.
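Continuing the discretized sketch from above (again illustrative, not a prescribed implementation), a BFAE forward pass simply chains such layers, and the output of the chosen intermediate layer plays the role of the latent curves $\mathbf{Z}^{(i)}(s)$.

```python
def bfae_forward(X, params, grids, latent_layer):
    """Forward pass of a discretized BFAE: reconstruction plus latent curves.

    X            : (R, M) array of input functional features on the original grid
    params       : list of (W, b) pairs, one per continuous layer
    grids        : list of input time grids, one per layer (grids[0] is the original grid)
    latent_layer : index l' of the continuous hidden layer whose output is taken
                   as the latent representation Z (R' curves on M' timepoints)
    """
    H, Z = X, None
    for l, (W, b) in enumerate(params):
        act = (lambda a: a) if l == len(params) - 1 else np.tanh   # linear output layer
        H = continuous_layer(H, W, b, grids[l], activation=act)
        if l == latent_layer:
            Z = H                    # functional encoder output: the latent curves
    return H, Z                      # H is the reconstructed input \hat{X}
```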

For simplicity, let us assume that Figure 1 has $R$ functional features in the input layer and $L$ continuous hidden layers, each with $J$ incoming and $R$ outgoing connections. BFAE is similar to an autoencoder in nature, but for functional data we have to define the number of continuous hidden layers, the number of continuous neurons in each continuous hidden layer, and the number of timepoints at which the functions are observed for each continuous neuron. The activation function used in the continuous neurons is ReLU, tanh, sigmoid/logistic, or linear (for the continuous output layer). The forward propagation of the network follows directly from Equation 3. In the backpropagation phase, we learn the functional surfaces (bivariate) and the functional intercepts (univariate) directly. We need functional gradients to learn the functional parameters of the BFAE network; these functional gradients measure the change in a functional with respect to a change in a function on which the functional depends.

We use the functional gradients given below to optimize the network's functional parameters. The optimization approach we use is traditional gradient descent. The necessary assumptions and mathematical tools are discussed in [19, 26, 12]. We use Fréchet derivatives, from the calculus of variations, to compute the functional gradients. The partial derivatives needed for backpropagation are as follows:

$$\begin{split}
\frac{\partial H^{(i,r)}_{(l)}(s)}{\partial b^{(r)}_{(l)}(s)}&=\sigma^{\prime}\Big(b^{(r)}_{(l)}(s)+\sum_{j=1}^{J}\int w^{(r,j)}_{(l)}(s,t)\,H^{(i,j)}_{(l-1)}(t)\,dt\Big)\\
\frac{\partial H^{(i,r)}_{(l)}(s)}{\partial w^{(r,j)}_{(l)}(s,t)}&=\sigma^{\prime}\Big(b^{(r)}_{(l)}(s)+\sum_{j^{\prime}=1}^{J}\int w^{(r,j^{\prime})}_{(l)}(s,t^{\prime})\,H^{(i,j^{\prime})}_{(l-1)}(t^{\prime})\,dt^{\prime}\Big)\times\int\frac{\partial}{\partial w^{(r,j)}_{(l)}(s,t)}\,w^{(r,j)}_{(l)}(s,t)\,H^{(i,j)}_{(l-1)}(t)\,dt\\
&=\sigma^{\prime}\Big(b^{(r)}_{(l)}(s)+\sum_{j^{\prime}=1}^{J}\int w^{(r,j^{\prime})}_{(l)}(s,t^{\prime})\,H^{(i,j^{\prime})}_{(l-1)}(t^{\prime})\,dt^{\prime}\Big)\times H^{(i,j)}_{(l-1)}(t)
\end{split} \qquad (4)$$

where $l=1,2,3,...,L$, $H^{(i,r)}_{(0)}(s)=X^{(i,r)}(s)$, $H^{(i,r)}_{(L)}(s)=\widehat{X^{(i,r)}(s)}$, and $\sigma^{\prime}(\cdot)$ is the first derivative of $\sigma(\cdot)$. In the backpropagation phase, we pass backward through the network from the continuous output layer to the input layer and calculate the partial derivatives of the continuous neurons with respect to the bivariate functional weights and the functional intercepts. The loss function that we minimize is $\mathcal{L}(\theta)$, where $\theta$ is the collection of all functional parameters in the BFAE network. The loss function is given as follows:

$$\mathcal{L}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\sum_{r=1}^{R}\int\Big(X^{(i,r)}(s)-\widehat{X^{(i,r)}(s)}\Big)^{2}ds \qquad (5)$$
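On discretized curves, the integral in Equation (5) can again be approximated numerically. A minimal sketch (using the trapezoidal rule, our own choice) is shown below; the square root of the same quantity gives the RMSE of Equation (7) used later in the experiments.

```python
def reconstruction_loss(X, X_hat, t_grid):
    """Mean integrated squared reconstruction error (Eq. 5) on discretized curves.

    X, X_hat : (N, R, M) arrays of true and reconstructed curves
    t_grid   : (M,) array of observation timepoints
    """
    sq_err = (X - X_hat) ** 2
    integrated = np.trapz(sq_err, t_grid, axis=-1)     # integrate over time: shape (N, R)
    loss = integrated.sum(axis=1).mean()               # sum over features, average over subjects
    return loss                                        # np.sqrt(loss) corresponds to Eq. (7)
```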

The number of continuous hidden layers, the number of continuous neurons in each continuous hidden layer, and the number of observed timepoints for each continuous neuron can be treated as hyperparameters (see the sketch below). We can also adjust our approach to accommodate irregular functional data or scalar variables. All of this gives our approach a lot of flexibility to work with different kinds of problems and to adapt to the downstream process for different analytical tasks like prediction, classification, clustering, forecasting, and more.
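To make these hyperparameter choices concrete, the helper below (our own illustrative addition, not part of the method's specification) initializes discretized functional weights for a given choice of neurons per layer and timepoints per layer, matching the shapes used in the earlier sketches.

```python
def init_bfae_params(layer_sizes, grid_sizes, seed=0):
    """Initialize discretized functional weights for a BFAE configuration.

    layer_sizes : neurons per layer, e.g. [R, R_prime, R] (input ... output)
    grid_sizes  : timepoints per layer, e.g. [M, M_prime, M]
    Layer l maps layer_sizes[l] curves on grid_sizes[l] points to
    layer_sizes[l+1] curves on grid_sizes[l+1] points.
    """
    rng = np.random.default_rng(seed)
    params = []
    for l in range(len(layer_sizes) - 1):
        W = 0.1 * rng.standard_normal((layer_sizes[l + 1], layer_sizes[l],
                                       grid_sizes[l + 1], grid_sizes[l]))
        b = np.zeros((layer_sizes[l + 1], grid_sizes[l + 1]))
        params.append((W, b))
    return params

# Example: reduce 10 functional features on 50 timepoints to 4 latent curves on 10 timepoints
params = init_bfae_params(layer_sizes=[10, 4, 10], grid_sizes=[50, 10, 50])
```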

3.2 Connection of our approach to prior arts

Some of the current dimension reduction methods can be represented as special cases of BFAE. As is well known, a single-layer autoencoder (AE) with a linear activation function is analogous to PCA. In the same way, we can adjust the parameters of our model so that it acts like FPCA: FPCA is a special case of BFAE when the model has linear activation functions, a single continuous hidden layer, and functional weights constrained to be orthonormal. The Functional Autoencoder (FAE) paper [8] discusses how AE is a special case of FAE, since FAE replaces the scalar weights and inner products of the AE with functional weights and inner products. In the same manner, FAE is a special case of our approach, BFAE: if we specify the functional encoder to learn a scalar ($M^{\prime}=1$) latent representation of the functional features, our approach essentially acts like FAE. Therefore, our approach generalizes most of the existing methods, including PCA, FPCA, AE, and FAE.

4 Numerical Experiments

In this section, we apply the proposed model to multiple simulation scenarios and two real-world problems. We compare our results against several state-of-the-art approaches, like PCA, AE, and FPCA, to show the effectiveness of BFAE. We demonstrate the following objectives through our results: 1) our approach is effectively trained by the derived functional gradients; 2) BFAE enables learning of the relations between the functional features; 3) our approach outperforms the other methods.

4.1 Simulations

In the first experiment, we consider an individual predictor function (i.e. $R=1$) observed on a dense and regular grid. We generate $N=100, 1000$ iid random curves $\{X^{(1)}(t),\cdots,X^{(N)}(t)\}$ from a Gaussian process with mean 0 and covariance given as follows:

$$C_{X}(t,s)=\frac{\sigma^{2}}{\Gamma(\nu)2^{\nu-1}}\left(\frac{\sqrt{2\nu}\,|t-s|}{\rho}\right)^{\nu}K_{\nu}\left(\frac{\sqrt{2\nu}\,|t-s|}{\rho}\right) \qquad (6)$$

This is the Matérn covariance function, where $K_{\nu}$ is the modified Bessel function of the second kind, and we set $\rho=0.5$, $\nu=5/2$, and $\sigma^{2}=1$. The curves are realized at $M=50, 250$ equally spaced timepoints on the interval $[0,1]$. We have specifically chosen a case where $N<M$ to check whether it causes any issues for any of the approaches.
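One possible way to reproduce this simulation setting is sketched below: build the Matérn covariance of Equation (6) on the grid and sample from the corresponding multivariate normal. The noise level added to the curves is our own assumption, as it is not specified above.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_cov(t, s, rho=0.5, nu=2.5, sigma2=1.0):
    """Matérn covariance (Eq. 6) evaluated at all pairs of timepoints in t and s."""
    d = np.abs(np.subtract.outer(t, s))
    d[d == 0.0] = 1e-10                        # avoid 0 * inf on the diagonal
    scaled = np.sqrt(2 * nu) * d / rho
    return sigma2 / (gamma(nu) * 2 ** (nu - 1)) * scaled ** nu * kv(nu, scaled)

# N iid Gaussian-process curves observed at M equally spaced timepoints in [0, 1]
N, M = 1000, 50
grid = np.linspace(0.0, 1.0, M)
C = matern_cov(grid, grid)
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=np.zeros(M), cov=C, size=N)    # shape (N, M)
X_noisy = X + 0.1 * rng.standard_normal(X.shape)                # noise level is an assumption
```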

After generating the curves from the Gaussian process, we add some random noise ($\epsilon_{i}(t)$) to them. The results are averaged over 100 simulations, and we divide the 100 samples into 80 for training and 20 for testing; the same split ratio is used in the case of 1000 samples. We report the Root Mean Square Error (RMSE) for the reconstruction of the functional features, given by Equation (7); for the scalar approaches we report the standard RMSE, ignoring the temporal aspect. For our approach in this first experiment, we consider multiple combinations of continuous hidden layers ($L=1, 3$) and only one continuous neuron ($J=1$). The grid for the latent representation in the continuous neurons has either $M$ or $M^{\prime}=M/5$ timepoints. We keep the network of AE similar to BFAE, and for PCA and FPCA we follow the 99% explained-variance rule. Note that $M^{\prime}$ can be any scalar value in the range $(1, M)$; we select this particular value ($M^{\prime}=M/5$) to show the flexibility of our approach.

$$RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\sum_{r=1}^{R}\int\Big(X^{(i,r)}(s)-\widehat{X^{(i,r)}(s)}\Big)^{2}ds} \qquad (7)$$
Method      | N=100, M=50 | N=100, M=250 | N=1000, M=50 | N=1000, M=250
PCA         | 0.483       | 0.418        | 0.469        | 0.398
AE          | 0.409       | 0.443        | 0.390        | 0.406
FPCA        | 0.390       | 0.351        | 0.356        | 0.343
BFAE        | 0.301       | 0.265        | 0.102        | 0.098
BFAE (M')   | 0.331       | 0.274        | 0.117        | 0.104
Table 1: Comparing RMSE of different methods for capturing a single time series function.

We observe from Table 1 that PCA and AE perform similarly on the temporal curves. There is some gain in performance for FPCA, but our approach performs the best irrespective of the sample size ($N$) and the number of timepoints ($M$). The difference in performance increases as the sample size and the number of timepoints increase; since our approach is a deep learning model, it learns better with more information. When we set the number of observed timepoints to $M^{\prime}$, our approach loses some performance because we are restricting the true data to a smaller latent space. However, even in this scenario of representing the true curves using 10 or 50 timepoints rather than 50 or 250, we still perform better than all the other approaches.

Figure 2: Curve reconstruction comparison for different methods when $N=1000$, $R=1$, and $M=50$

Figure 2 shows the reconstructed curves of a random sample for the different approaches under the simulation setting $N=1000$, $R=1$, and $M=50$. We see a similar story to Table 1: PCA performs poorly, and its reconstructed shape is very different from the truth, as PCA is a linear approach and cannot capture the temporal information in the data. AE and FPCA perform better than PCA by getting the general shape correct but are still far from the truth. Our approach, BFAE, irrespective of the number of timepoints observed, reconstructs the truth well and captures the shape of the curve correctly over the whole time interval.

We extend the first experiment to include more functional features. In the second experiment, we increase the number of features to $R=10$. We follow the same procedure to generate these 10 curves for the same set of values of $N$ and $M$. The results discussed below are an average of 100 simulations after adding random noise and using the same training/testing splits. For our approach, we consider multiple combinations of continuous hidden layers ($L=1, 3$) and continuous neurons ($J=1, 2, 4$), and we set $R^{\prime}=4$. The grid for the latent representation in the continuous neurons has either $M$ or $M^{\prime}=M/5$ timepoints.

Method      | N=100, M=50 | N=100, M=250 | N=1000, M=50 | N=1000, M=250
PCA         | 1.527       | 1.525        | 1.518        | 1.518
AE          | 0.471       | 0.490        | 0.489        | 0.484
FPCA        | 0.386       | 0.417        | 0.365        | 0.404
BFAE        | 0.328       | 0.292        | 0.109        | 0.132
BFAE (M')   | 0.359       | 0.314        | 0.147        | 0.169
Table 2: Comparing RMSE of different methods for capturing multiple ($R=10$) time series functions.

Table 2 shows that the increase in the number of functional features results in a deterioration in performance for PCA. While the other two approaches perform better than PCA, our approach performs the best irrespective of the sample size ($N$) and the number of timepoints ($M$). The difference in performance again increases as the sample size and the number of timepoints increase. We observe that the errors have increased marginally compared to the previous table, as we have added more features. The behavior when decreasing the number of timepoints to $M^{\prime}$ is similar as well.

4.2 Real Data applications

We consider two real data sets. The first is the speech recognition data from TIMIT (available at http://statweb.stanford.edu/tibs/Ele-mStatLearn/), where we have speech signals for different phonemes. Audio information is available abundantly, but it is measured at a very high frequency, requiring large storage capacity; a low-dimensional latent representation of such data would be very useful in industry. The other example deals with the relation between electricity demand and temperature in the city of Adelaide, Australia. This data is important because of the high costs related to the storage of electricity, and understanding the effect of temperature on demand can lead to information-based operational steps.

For the speech recognition data, we set up the experiment similarly to [10, 16], where we have voice signals ($R=1$) for two phonemes (the response), transcribed as follows: "aa" as the vowel in "dark" and "ao" as the first vowel in "water". We build a classifier with the help of a Functional Linear Model (FLM) after applying dimension reduction to the voice signals. Figure 3 shows the two phoneme curves, computed from a log-periodogram of length 150 ($M=150$), and illustrates that the two groups are difficult to separate. Each phoneme's information is recorded over 150 points; reducing such data is beneficial, especially with the potential to extend this low-dimensional representation to much longer voice signals. We have 800 functional samples, which we split into 640 samples for training and 160 samples for testing.

The Adelaide data (available in the fds package in R) contain the temperature and electricity demand records from 7/6/1997 to 3/31/2007 (508 weeks), measured at a half-hourly rate ($M=48$). We consider each day of the week as a feature ($R=7$) and map the relation between temperature and demand using an FLM [4, 9, 14]. Before we use FDA to model this relation, we use BFAE to represent the temperature curves in a compact, low-dimensional form. We split the data into 400 samples for training and 108 samples for testing. Figure 4 shows the half-hourly temperature and electricity demand (in Megawatts) for all 508 weeks. We can see from the figure that the temperature curve for each day of the week follows a similar pattern, and learning that pattern can lead to a reduction in dimension with minimum loss of information.

Method      | Phonemes | Adelaide
PCA         | 14.960   | 18.633
AE          | 1.790    | 2.354
FPCA        | 1.242    | 0.636
BFAE        | 1.126    | 0.581
BFAE (M')   | 1.128    | 0.615
Table 3: Comparing RMSE of different methods for capturing the prediction functions for the real data.

We can observe from Table 3 that PCA again has difficulties in reducing the information for both real data sets. While AE and FPCA perform better than PCA, BFAE performs the best. For the phonemes data, we even reduced the number of timepoints to $M^{\prime}=30$, and BFAE still performs better than the other approaches. The difference between BFAE with $M$ timepoints and with $M^{\prime}$ timepoints is negligible, indicating that the low-dimensional representation is very rich in information. For the Adelaide data, we reduce the number of features to $R^{\prime}=4$ and the number of timepoints to $M^{\prime}=12$, and our approach with $M$ timepoints produces the best results. Our intuition for reducing the temperature features is validated by Figure 4 and Table 3.

Modeling results can be seen in Table 4, where we build a classifier for the phonemes data to predict whether a voice signal is "aa" or "ao". For the Adelaide data, we map the temperature information for the 7 days of a week to the electricity demand for the 7 days of that same week. Both models are built from the original data and from the low-dimensional latent representation learned with BFAE. We omit the other approaches here because the goal is to demonstrate that our representation yields modeling results at least as good as the original data by capturing the signal. We can see from Table 4 that our approach not only models the data well but also performs as well as the original data and has a lower tendency to overfit, as the representation contains less noise. The error increases in both cases as we reduce the timepoints to $M^{\prime}$, but the performance is still competitive.

Data     | Split | Original | BFAE    | BFAE (M')
Phonemes | train | 0.175    | 0.185   | 0.205
Phonemes | test  | 0.200    | 0.190   | 0.210
Adelaide | train | 188.363  | 164.105 | 199.758
Adelaide | test  | 220.739  | 184.518 | 215.294
Table 4: Comparison of errors (classification error for Phonemes and RMSE for Adelaide) of different methods for modeling the response.

Overall, our approach does the best job of reducing the information into a low-dimensional latent space while still maintaining competitive performance at modeling different tasks compared to the original data. We are able to represent the information in a compact, data-rich manner and to reconstruct it back to the original scale without much loss of information.

Figure 3: Voice signal curves for the Phonemes data
Figure 4: Day curves (Monday to Sunday, top to bottom) of temperature and electricity demand for Adelaide

5 Conclusions

In this paper, we proposed a novel model for multivariate time series dimension reduction. Our approach performs two-way dimension reduction by reducing both the number of features and the number of timepoints at which the time series is observed. The proposed Bi-Functional Autoencoder (BFAE) reduces the input into a low-dimensional latent representation using a functional encoder and reconstructs the information using a functional decoder. Our proposed approach has many advantages over current methods, including its ability to represent time series data in a flexible manner, capture correlations among features that vary over time, deal with different kinds of data, and capture non-linear relations. Along with superior simulation results, we showed the proficiency of our approach in two real-world examples in comparison with several common practices in the prior art. Our approach produced smaller reconstruction errors and modeled the data as well as the original information. We expect the proposed model to be widely used in diverse real-world problems where the goals are to reduce the transfer load of huge amounts of time series data, store large amounts of time series information effectively, and reduce the computation cost of different analytical tasks while retaining performance.

References

  • [1] Daniel Backenroth, Jeff Goldsmith, Michelle D. Harran, Juan C. Cortes, John W. Krakauer, and Tomoko Kitago. Modeling motor learning using heteroscedastic functional principal components analysis. Journal of the American Statistical Association, 113(523):1003–1015, 2018. PMID: 30416231.
  • [2] Wei Chen, Hythem Sidky, and Andrew L. Ferguson. Capabilities and limitations of time-lagged autoencoders for slow mode discovery in dynamical systems. The Journal of Chemical Physics, 151(6):064123, 2019.
  • [3] Jeng-Min Chiou, Yu-Ting Chen, and Ya-Fang Yang. Multivariate functional principal component analysis: A normalization approach. Statistica Sinica, pages 1571–1596, 2014.
  • [4] F. Ferraty and Y. Romain. The Oxford Handbook of Functional Data Analysis. Oxford University Press, 01 2011.
  • [5] Clara Happ and Sonja Greven. Multivariate functional principal component analysis for data observed on different (dimensional) domains. Journal of the American Statistical Association, 113(522):649–659, 2018.
  • [6] Clara Happ and Sonja Greven. Multivariate functional principal component analysis for data observed on different (dimensional) domains. Journal of the American Statistical Association, 113(522):649–659, 2018.
  • [7] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
  • [8] Tsung-Yu Hsieh, Yiwei Sun, Suhang Wang, and Vasant Honavar. Functional Autoencoders for Functional Data Representation Learning, pages 666–674. arxiv, 2021.
  • [9] Piotr Kokoszka and Matthew Reimherr. Introduction to Functional Data Analysis. New York: Chapman and Hall/CRC, 2018.
  • [10] Bing Li and Jun Song. Nonlinear sufficient dimension reduction for functional data. Annals of Statistics, 45(3):1059–1095, June 2017. Copyright: Copyright 2017 Elsevier B.V., All rights reserved.
  • [11] Bing Li and Jun Song. Dimension reduction for functional data based on weak conditional moments. The Annals of Statistics, 50(1):107 – 128, 2022.
  • [12] Peter J. Olver. Introduction to the calculus of variations. In Introduction to the Calculus of Variations, 2014.
  • [13] Stefan Petscharnig, Mathias Lux, and Savvas Chatzichristofis. Dimensionality reduction for image features using deep learning and autoencoders. In Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing, CBMI ’17, New York, NY, USA, 2017. Association for Computing Machinery.
  • [14] James O Ramsay. Functional data analysis. Wiley Online Library, 2006.
  • [15] J.O. Ramsay and B.W. Silverman. Functional Data Analysis. Springer series in statistics. Springer, 1997.
  • [16] Aniruddha Rajendra Rao and Matthew Reimherr. Modern non-linear function-on-function regression, 2021.
  • [17] Aniruddha Rajendra Rao and Matthew Reimherr. Non-linear functional modeling using neural networks, 2021.
  • [18] Aniruddha Rajendra Rao, Qiyao Wang, Haiyan Wang, Hamed Khorasgani, and Chetan Gupta. Spatio-temporal functional neural networks. arXiv preprint arXiv:2009.05665, 2020.
  • [19] Fabrice Rossi, Brieuc Conan-Guez, and François Fleuret. Functional data analysis with multi layer perceptrons. In Proceedings of IJCNN, pages 2843–2848. Citeseer, 2002.
  • [20] Fabrice Rossi, Nicolas Delannay, Brieuc Conan-Guez, and Michel Verleysen. Representation of functional data in neural networks. Neurocomputing, 64:183–210, 2005.
  • [21] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using lstms. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, page 843–852. JMLR.org, 2015.
  • [22] Barinder Thind, Kevin Multani, and Jiguo Cao. Deep learning with functional inputs, 2020.
  • [23] Michael Tschannen, Olivier Frederic Bachem, and Mario Lučić. Recent advances in autoencoder-based representation learning. In Bayesian Deep Learning Workshop, NeurIPS, 2018.
  • [24] Laurens van der Maaten, Eric Postma, and H. Herik. Dimensionality reduction: A comparative review. Journal of Machine Learning Research - JMLR, 10, 01 2007.
  • [25] Qiyao Wang, Haiyan Wang, Chetan Gupta, Aniruddha Rajendra Rao, and Hamed Khorasgani. A non-linear function-on-function model for regression with time series data. In 2020 IEEE International Conference on Big Data (Big Data), pages 232–239, 2020.
  • [26] Qiyao Wang, Shuai Zheng, Ahmed Farahat, Susumu Serita, Takashi Saeki, and Chetan Gupta. Multilayer perceptron for sparse functional data. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–10. IEEE, 2019.
  • [27] Yasi Wang, Hongxun Yao, and Sicheng Zhao. Auto-encoder based dimensionality reduction. Neurocomputing, 184:232–242, 2016. RoLoD: Robust Local Descriptors for Computer Vision 2014.
  • [28] Fang Yao, Hans-Georg Müller, and Jane-Ling Wang. Functional data analysis for sparse longitudinal data. Journal of the American Statistical Association, 100(470):577–590, 2005.