
Mining of Switching Sparse Networks for Missing Value Imputation in Multivariate Time Series

Kohei Obata, Koki Kawabata, Yasuko Matsubara, and Yasushi Sakurai. SANKEN, Osaka University, Osaka, Japan.
(2024)
Abstract.

Multivariate time series data suffer from the problem of missing values, which hinders the application of many analytical methods. To achieve the accurate imputation of these missing values, exploiting inter-correlation by employing the relationships between sequences (i.e., a network) is as important as the use of temporal dependency, since a sequence normally correlates with other sequences. Moreover, exploiting an adequate network depending on time is also necessary since the network varies over time. However, in real-world scenarios, we normally know neither the network structure nor when the network changes beforehand. Here, we propose a missing value imputation method for multivariate time series, namely MissNet, that is designed to exploit temporal dependency with a state-space model and inter-correlation by switching sparse networks. The network encodes conditional independence between features, which helps us understand the important relationships for imputation visually. Our algorithm, which scales linearly with the length of the data, alternately infers networks and fills in missing values using the networks while discovering the switching of the networks. Extensive experiments demonstrate that MissNet outperforms the state-of-the-art algorithms for multivariate time series imputation and provides interpretable results.

Multivariate time series, Missing value imputation, Network inference, State-space model, Graphical lasso
Journal year: 2024. Copyright: ACM licensed. Conference: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24), August 25–29, 2024, Barcelona, Spain. DOI: 10.1145/3637528.3671760. ISBN: 979-8-4007-0490-1/24/08. CCS: Information systems – Data mining; Mathematics of computing – Time series analysis.

1. Introduction

Figure 1. An illustrative example of a multivariate time series including missing blocks, where each time point of the data is allocated to one of two regimes, each with a distinct network whose edges show relationships between features.

With the development of the Internet of Things (IoT), multivariate time series are generated in many real-world applications, such as motion capture (Qin et al., 2020) and health monitoring (Chambon et al., 2018). However, there are inevitably many values missing from these data, with many possible causes (e.g., sensor malfunction). As most algorithms assume an intact input when building models, missing value imputation is indispensable for real-world applications (Shadbahr et al., 2023; Berrevoets et al., 2023).

In time series data, missing values often occur consecutively, leading to a missing block in a sequence, and can happen simultaneously in multiple sequences (Liu et al., 2020). To effectively reconstruct missing values from such partially observed data, we must exploit both temporal dependency, by taking account of past and future values in the sequence, and inter-correlation, by using the relationships between different sequences (i.e., a network) (Cini et al., 2021; Yi et al., 2016). Here, a network does not necessarily mean the spatial proximity of sensors but rather underlying connectivity (e.g., Pearson correlation or partial correlation). Moreover, as time series data are normally non-stationary, so is the network; an adequate network must be exploited depending on time (Tomasi et al., 2018, 2021). We collectively refer to a group of time points with the same network as a “regime”. Fig. 1 shows an illustrative example where missing blocks randomly exist in a multivariate time series consisting of three features (i.e., A, B, and C). Each time point belongs to either of two regimes with different networks (i.e., #1 and #2), where the thickness of an edge indicates the strength of the interplay between features. It is appropriate to use the values of feature C to impute the block missing from feature B in regime #1 since the network has an edge between B and C. On the other hand, in regime #2, it is preferable to use feature A, as the network suggests. Nevertheless, in real-world scenarios, we often have no information about the data; that is, we do not know the structure of the network, let alone when the network changes. Thus, given a partially observed multivariate time series, how can we infer networks and allocate each time point to the correct regime? How can we achieve an accurate imputation exploiting both temporal dependency and inter-correlation?

There have been many studies on time series missing value imputation with machine learning and deep learning models (Khayati et al., 2020). Many of them employ time-variant latent/hidden states, as used in a State-Space Model (SSM) or a Recurrent Neural Network (RNN), to learn the dynamic patterns of time series (Li et al., 2009; Chen and Sun, 2021; Cao et al., 2018; Yoon et al., 2018b). However, they do not make full use of inter-correlation, even though the sequences of a multivariate time series usually interact. Although they implicitly capture inter-correlation in latent space, they potentially capture spurious correlations in the presence of missing values, leading to inaccurate imputation. To tackle this problem, several studies explicitly utilize the network (Cai et al., 2015a, b; Hairi et al., 2019; Jing et al., 2021; Cini et al., 2021; Wang et al., 2023a). However, they require a predefined network, even though we rarely know it in advance.

In this work, we propose MissNet, Mining of switching sparse Networks for multivariate time series imputation, which repeatedly infers sparse networks from imputed data and fills in missing values using the networks while allocating each time point to a regime. (Our source code and datasets are publicly available: https://github.com/KoheiObata/MissNet.) Specifically, our model has three components: (1) a regime-switching model based on a discrete Markov process for detecting the change points of the network; (2) an imputation model based on an SSM for exploiting temporal dependency, and inter-correlation through the latent space and the networks; (3) a network inference model for inferring sparse networks via graphical lasso, where each network encodes pairwise conditional independencies among features and the lasso penalty helps avoid capturing spurious correlations. Our proposed algorithm maximizes the joint probability distribution over the above components.

Our method has the following desired properties:

  • Effective: MissNet, which exploits both temporal dependency and inter-correlation by inferring switching sparse networks, outperforms the state-of-the-art algorithms for missing value imputation for multivariate time series.

  • Scalable: Our proposed algorithm scales linearly with the length of the time series and is thus applicable to long-range time series.

  • Interpretable: MissNet discovers regimes with distinct sparse networks, which help us interpret data, e.g., what is the key relationship in the data, and why does a particular regime distinguish itself from others?

2. Related work

We review previous studies closely related to our work. Table 1 shows the relative advantages of MissNet. Current approaches fall short with respect to at least one of these desired characteristics.

Time series missing value imputation. Missing value imputation for time series is a very rich topic (Khayati et al., 2020). We roughly classify missing value imputation methods as Matrix Factorization (MF)-based, SSM-based, and Deep Learning (DL)-based approaches.

MF-based methods, such as SoftImpute (Mazumder et al., 2010) based on Singular Value Decomposition (SVD), recover missing values from low-dimensional embedding matrices of partially observed data (Papadimitriou et al., 2005; Khayati et al., 2014; Zhang and Balzano, 2016; Yi et al., 2016). For example, SoRec (Ma et al., 2008), proposed as a recommendation system, constrains MF with a predefined network to exploit inter-correlation. Since MF is limited in capturing temporal dependency, TRMF (Yu et al., 2016) uses an Auto-Regressive (AR) model and imposes temporal smoothness on MF.

SSMs, such as Linear Dynamical Systems (LDS) (Ghahramani, 1998), use latent space to capture temporal dependency, where the data point depends on all past data points (Rogers et al., 2013; Dabrowski et al., 2018; Chen and Sun, 2021; Hairi et al., 2019). To fit more complex time series, Switching LDS (SLDS) (Pavlovic et al., 1999; Fox et al., 2008) switches multiple LDS models. SSM-based methods, such as DynaMMo (Li et al., 2009), focus on capturing the dynamic patterns in time series rather than inter-correlation implicitly captured through the latent space. To use the underlying connectivity in multivariate time series, DCMF (Cai et al., 2015a), and its tensor extension Facets (Cai et al., 2015b) use SSM constrained with a predefined network, which is effective, especially when the missing rate is high. However, they assume that the network is accurately known and fixed, while it is usually unknown and may change over time in real-world scenarios.

Recently, extensive research has focused on DL-based methods, employing techniques including graph neural networks (Jing et al., 2021; Cini et al., 2021; Fan et al., 2023; Ren et al., 2023), self-attention (Shukla and Marlin, 2021; Du et al., 2023), and, most recently, diffusion models (Alcaraz and Strodthoff, 2022; Tashiro et al., 2021; Wang et al., 2023b), to harness their high model capacity (Yoon et al., 2018a; Bansal et al., 2021). For example, BRITS (Cao et al., 2018) and M-RNN (Yoon et al., 2018b) impute missing values according to hidden states from bidirectional RNN. To utilize dynamic inter-correlation in time series, POGEVON (Wang et al., 2023a) requires a sequence of networks and imputes missing values in time series and missing edges in the networks, assuming that the network varies over time. Although DL-based methods can handle complex data, the imputation quality depends heavily on the size and the selection of the training dataset.

Table 1. Capabilities of MissNet and of Matrix Factorization (MF)-, State-Space Model (SSM)-, Deep Learning (DL)-, and Graphical Lasso (GL)-based approaches.

Compared methods: SoftImpute (Mazumder et al., 2010) (MF), SoRec (Ma et al., 2008) (MF), TRMF (Yu et al., 2016) (MF), SLDS (Pavlovic et al., 1999) (SSM), DynaMMo (Li et al., 2009) (SSM), DCMF (Cai et al., 2015a) (SSM), BRITS (Cao et al., 2018) (DL), POGEVON (Wang et al., 2023a) (DL), TICC (Hallac et al., 2017b) (GL), MMGL (Tozzo et al., 2020) (GL), and MissNet.

Compared capabilities: temporal dependency, inter-correlation, time-varying inter-correlation, missing value imputation, segmentation, and sparse network inference. MissNet is the only method that covers all of them.

Sparse network inference. From another perspective, our method infers sparse networks from time series containing missing values and discovers regimes (i.e., clusters) based on the networks. Inferring a sparse inverse covariance matrix (i.e., network) from data helps us to understand feature dependency in a statistical way. Graphical lasso (Friedman et al., 2008), which maximizes the Gaussian log-likelihood under an $\ell_{1}$-norm penalty, is one of the most commonly used techniques for estimating a sparse network from static data. Städler and Bühlmann (Städler and Bühlmann, 2012a) tackled inferring a sparse network from partially observed data based on conditional probability. However, the network varies over time (Namaki et al., 2011; Monti et al., 2014; Malte Londschien and Bühlmann, 2021); thus, TVGL (Hallac et al., 2017a) infers time-varying networks by encouraging similarity between the networks of neighboring segments. To infer time-varying networks in the presence of missing values, MMGL (Tozzo et al., 2020), which employs TVGL, uses the expectation-maximization (EM) algorithm to alternate between inferring time-varying networks and imputing missing values based on conditional probability, under the condition that each segment has the same observed features. Discovering clusters based on networks (Obata et al., 2024), as in TICC (Hallac et al., 2017b) and TAGM (Tozzo et al., 2021), provides interpretable results that traditional clustering methods cannot find. However, these methods cannot handle missing values.

As a consequence, none of the previous studies have addressed missing value imputation for multivariate time series by employing sparse network inference and segmentation based on the network.

3. Preliminaries

3.1. Problem definition

In this paper, we focus on the task of multivariate time series missing value imputation. We use a multivariate time series with $N$ features and $T$ timesteps, $\bm{X}=\{\bm{x}_{1},\bm{x}_{2},\dots,\bm{x}_{T}\}\in\mathbb{R}^{N\times T}$. To represent the missing values in $\bm{X}$, we introduce the indicator matrix $\bm{W}\in\mathbb{R}^{N\times T}$, where $\bm{W}_{i,t}$ indicates the availability of feature $i$ at timestep $t$: $\bm{W}_{i,t}=0$ if $\bm{X}_{i,t}$ is missing and $\bm{W}_{i,t}=1$ if it is observed. Thus, the observed entries can be described as $\tilde{\bm{X}}=\bm{W}\circ\bm{X}$, where $\circ$ is the Hadamard product. Our problem is formally written as follows:

Problem 1 (Multivariate Time Series Missing Value Imputation).

Given: a partially observed multivariate time series $\tilde{\bm{X}}$; Recover: its missing parts indicated by $\bm{W}$.
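As a concrete illustration of this notation, the following minimal NumPy sketch builds $\bm{W}$ and $\tilde{\bm{X}}$ for a toy series (array names and sizes are ours, purely for illustration):

```python
import numpy as np

N, T = 3, 8                                      # features x timesteps
X = np.random.randn(N, T)                        # fully observed ground truth
W = (np.random.rand(N, T) > 0.3).astype(float)   # indicator matrix: 1 = observed, 0 = missing

X_tilde = W * X                                  # observed entries, i.e., W ∘ X (Hadamard product)
missing_positions = np.argwhere(W == 0)          # the entries the imputation must recover
```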

3.2. Graphical lasso

We use graphical lasso (Friedman et al., 2008) to infer the network for each regime. Given $\bm{X}$, graphical lasso estimates the sparse Gaussian inverse covariance matrix (i.e., network) $\bm{\Theta}\in\mathbb{R}^{N\times N}$, also known as the precision matrix. The network encodes pairwise conditional independencies among the $N$ features; e.g., if $\bm{\Theta}_{i,j}=0$, then features $i$ and $j$ are conditionally independent given the values of all the other features. The optimization problem is expressed as follows:

(1) $\textrm{minimize}_{\bm{\Theta}\in S^{p}_{++}}\;\lambda\|\bm{\Theta}\|_{od,1}-\sum_{t=1}^{T}ll(\bm{x}_{t},\bm{\Theta}),$

(2) $ll(\bm{x}_{t},\bm{\Theta})=-\frac{1}{2}(\bm{x}_{t}-\bm{\rho})^{\prime}\bm{\Theta}(\bm{x}_{t}-\bm{\rho})+\frac{1}{2}\log\det(\bm{\Theta})-\frac{N}{2}\log(2\pi),$

where $\bm{\Theta}$ must be symmetric positive definite ($S^{p}_{++}$), $ll(\bm{x}_{t},\bm{\Theta})$ is the log-likelihood, and $\bm{\rho}\in\mathbb{R}^{N}$ is the empirical mean of $\bm{X}$. $\lambda\geq 0$ is a hyperparameter that determines the sparsity level of the network, and $\|\cdot\|_{od,1}$ denotes the off-diagonal $\ell_{1}$-norm. This is a convex optimization problem that can be solved via the alternating direction method of multipliers (ADMM) (Boyd et al., 2011).
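For readers who want to experiment, the same kind of sparse precision matrix can be estimated with scikit-learn's GraphicalLasso on complete data (a hedged sketch; its alpha parameter plays the role of $\lambda$, whereas MissNet solves Eq. (1) itself via ADMM):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))        # T x N matrix of complete observations

gl = GraphicalLasso(alpha=0.1)           # alpha controls the sparsity level (analogous to lambda)
gl.fit(X)

Theta = gl.precision_                    # sparse inverse covariance matrix (the network)
edges = np.argwhere(np.abs(np.triu(Theta, k=1)) > 1e-6)  # conditionally dependent feature pairs
```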

4. Proposed MissNet

In this section, we present our proposed MissNet for missing value imputation. We give the formal formulation of the model and then provide the detailed algorithm to learn the model.

4.1. Optimization model

MissNet infers sparse networks and fills in missing values using the networks while discovering regimes. We first introduce three interacting components of our model: a regime-switching model, an imputation model, and a network inference model. Then, we define the optimization formulation.

Regime-switching model. MissNet describes the change of networks by regime-switching. Let $K$ be the number of regimes, $\bm{F}=\{\bm{f}_{1},\dots,\bm{f}_{T}\}\in\mathbb{R}^{K\times T}$ be the regime assignments, and $\bm{f}_{t}$ be a one-hot vector (which we use interchangeably with the index of the regime, i.e., the $\bm{f}_{t}^{th}$-regime) indicating that $\tilde{\bm{x}}_{t}\in\mathbb{R}^{N}$ belongs to the $\bm{f}_{t}^{th}$-regime. We assume regime-switching to be a discrete first-order Markov process:

(3) $p(\bm{f}_{1})=\bm{\pi}_{0},\quad p(\bm{f}_{t+1}|\bm{f}_{t})=\bm{\Pi}_{\bm{f}_{t+1},\bm{f}_{t}},$

where $\bm{\Pi}\in\mathbb{R}^{K\times K}$ is the Markov transition matrix and $\bm{\pi}_{0}\in\mathbb{R}^{K}$ is the initial state distribution.
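To make the generative assumption concrete, here is a short sketch of sampling regime assignments from this first-order Markov process ($\bm{\pi}_{0}$ and $\bm{\Pi}$ below are toy values, not learned parameters; $\bm{\Pi}$ is indexed as $\bm{\Pi}_{\bm{f}_{t+1},\bm{f}_{t}}$, so its columns sum to one):

```python
import numpy as np

K, T = 2, 10
pi0 = np.array([0.5, 0.5])                # initial regime distribution pi_0
Pi = np.array([[0.95, 0.10],              # Pi[k, l] = p(f_{t+1} = k | f_t = l)
               [0.05, 0.90]])

rng = np.random.default_rng(1)
f = np.zeros(T, dtype=int)
f[0] = rng.choice(K, p=pi0)
for t in range(1, T):
    f[t] = rng.choice(K, p=Pi[:, f[t - 1]])  # column f[t-1] gives the next-regime probabilities
```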

Imputation model. MissNet imputes missing values by exploiting temporal dependency and the inter-correlation indicated by the networks. We assume that the temporal dependency is consistent throughout all regimes and is captured in the latent space of an SSM, which allows us to consider long-term dependency. We define the latent states $\bm{Z}=\{\bm{z}_{1},\bm{z}_{2},\dots,\bm{z}_{T}\}\in\mathbb{R}^{L\times T}$ corresponding to $\tilde{\bm{X}}$, where $L$ is the number of latent dimensions, and $\bm{z}_{t+1}\in\mathbb{R}^{L}$ is linear in $\bm{z}_{t}$ through the transition matrix $\bm{B}\in\mathbb{R}^{L\times L}$ with covariance $\bm{\Sigma}_{\bm{Z}}$, as shown in Eq. (5). As defined in Eq. (4), the first latent state $\bm{z}_{1}$ is determined by the initial state $\bm{z}_{0}$ and the covariance $\bm{\Psi}_{0}$.

(4) $\bm{z}_{1}\sim\mathcal{N}(\bm{z}_{0},\bm{\Psi}_{0}),$
(5) $\bm{z}_{t+1}|\bm{z}_{t}\sim\mathcal{N}(\bm{B}\bm{z}_{t},\bm{\Sigma}_{\bm{Z}}).$

We define the observation $\tilde{\bm{x}}_{t}$ assigned to the $\bm{f}_{t}^{th}$-regime as linear in the latent state $\bm{z}_{t}$ through the observation matrix of the $\bm{f}_{t}^{th}$-regime, $\bm{U}^{(\bm{f}_{t})}\in\mathbb{R}^{N\times L}$, with covariance $\bm{\Sigma}_{\bm{X}^{(\bm{f}_{t})}}$:

(6) $\tilde{\bm{x}}_{t}|\bm{z}_{t},\bm{U}^{(\bm{f}_{t})},\bm{f}_{t}\sim\mathcal{N}(\bm{U}^{(\bm{f}_{t})}\bm{z}_{t},\bm{\Sigma}_{\bm{X}^{(\bm{f}_{t})}}).$

MissNet captures inter-correlation by adding a constraint on $\bm{U}^{(k)}$. We assume that the contextual matrix of the $k^{th}$-regime, $\bm{S}^{(k)}\in\mathbb{R}^{N\times N}$, encodes the inter-correlation of the $\bm{X}$ belonging to the $k^{th}$-regime ($1\leq k\leq K$). We define the contextual latent factor of the $k^{th}$-regime, $\bm{V}^{(k)}\in\mathbb{R}^{L\times N}$, and the $j$-th column ($1\leq j\leq N$) of the contextual matrix, $\bm{s}^{(k)}_{j}$, as linear in $\bm{v}^{(k)}_{j}$ through the observation matrix $\bm{U}^{(k)}$ with covariance $\bm{\Sigma}_{\bm{S}^{(k)}}$, as formulated in Eq. (7). In Eq. (8), we place zero-mean spherical Gaussian priors on $\bm{v}^{(k)}_{j}$ under the assumption that $-1\leq\bm{S}^{(k)}_{i,j}\leq 1$.

(7) $\bm{s}^{(k)}_{j}|\bm{v}^{(k)}_{j},\bm{U}^{(k)}\sim\mathcal{N}(\bm{U}^{(k)}\bm{v}^{(k)}_{j},\bm{\Sigma}_{\bm{S}^{(k)}}),$
(8) $\bm{v}^{(k)}_{j}\sim\mathcal{N}(0,\bm{\Sigma}_{\bm{V}^{(k)}}).$

Since many entries are missing, to avoid our model overfitting the observed entries, we simplify the covariances by assuming that all noises are independent and identically distributed (i.i.d.). Thus, the parameters $\bm{\Sigma}_{\bm{Z}},\bm{\Sigma}_{\bm{X}^{(k)}},\bm{\Sigma}_{\bm{S}^{(k)}},\bm{\Sigma}_{\bm{V}^{(k)}}$ are reduced to $\sigma_{\bm{Z}}^{2},\sigma_{\bm{X}^{(k)}}^{2},\sigma_{\bm{S}^{(k)}}^{2},\sigma_{\bm{V}^{(k)}}^{2}$, respectively (i.e., $\bm{\Sigma}_{\bm{Z}}=\sigma_{\bm{Z}}^{2}\bm{I}$, $\bm{\Sigma}_{\bm{X}^{(k)}}=\sigma_{\bm{X}^{(k)}}^{2}\bm{I}$, $\bm{\Sigma}_{\bm{S}^{(k)}}=\sigma_{\bm{S}^{(k)}}^{2}\bm{I}$, and $\bm{\Sigma}_{\bm{V}^{(k)}}=\sigma_{\bm{V}^{(k)}}^{2}\bm{I}$).

With the above imputation model, the imputed time series $\hat{\bm{x}}_{t}$ at timestep $t$ is recovered as follows:

(9) $\hat{\bm{x}}_{t}=\bm{W}_{:,t}\circ\tilde{\bm{x}}_{t}+(1-\bm{W}_{:,t})\circ\bm{U}^{(\bm{f}_{t})}\bm{z}_{t}.$
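Eq. (9) is a simple masked combination; a direct NumPy transcription for one timestep (the arguments stand in for the learned quantities):

```python
import numpy as np

def impute_step(x_tilde_t, w_t, U_k, z_t):
    """Eq. (9): keep observed values, fill missing ones with the regime's reconstruction."""
    reconstruction = U_k @ z_t                 # U^(f_t) z_t
    return w_t * x_tilde_t + (1 - w_t) * reconstruction
```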

Network inference model. MissNet infers the network for each regime to exploit inter-correlation. We place a Gaussian distribution with an $\ell_{1}$-norm constraint on the imputed data belonging to each regime, which allows us to estimate the networks accurately:

(10) $\hat{\bm{x}}_{t}|\bm{f}_{t}\sim\mathcal{N}(\bm{\rho}^{(\bm{f}_{t})},\bm{\Theta}^{(\bm{f}_{t})-1}),\quad\text{s.t. }\|\bm{\Theta}^{(\bm{f}_{t})}\|_{od,1}\leq\frac{e}{\lambda},$

where $e$ is any constant value for convenience, and $\lambda$ is a hyperparameter that controls the sparsity of the network (i.e., inverse covariance matrix) $\bm{\Theta}^{(\bm{f}_{t})}$ through the $\ell_{1}$-norm, which helps avoid capturing spurious correlations.

Optimization formulation. Our goal is to estimate the model parameters $\theta=\{\bm{B},\bm{z}_{0},\bm{\Psi}_{0},\sigma_{\bm{Z}},\dot{\sigma_{\bm{X}}},\dot{\sigma_{\bm{S}}},\dot{\sigma_{\bm{V}}},\dot{\bm{\rho}},\dot{\bm{\Theta}},\bm{\pi}_{0},\bm{\Pi}\}$ and find the latent factors $\bm{Z},\dot{\bm{V}},\dot{\bm{U}},\dot{\bm{S}},\bm{F}$, where a letter with a dot indicates a set of $K$ vectors/matrices/scalars (e.g., $\dot{\bm{S}}=\{\bm{S}^{(k)}\}_{k=1}^{K}$), that maximize the following joint probability distribution:

$\mathop{\rm arg\,max}\;p(\tilde{\bm{X}},\bm{Z},\dot{\bm{V}},\dot{\bm{U}},\dot{\bm{S}},\bm{F})=\underbrace{p(\bm{z}_{1})\prod_{t=2}^{T}p(\bm{z}_{t}|\bm{z}_{t-1})}_{\text{temporal dependency}}\underbrace{\prod_{t=1}^{T}p(\tilde{\bm{x}}_{t}|\bm{z}_{t},\bm{U}^{(\bm{f}_{t})},\bm{f}_{t})}_{\text{time series}}\underbrace{\prod_{k=1}^{K}\Big\{\prod_{j=1}^{N}p(\bm{v}^{(k)}_{j})\prod_{j=1}^{N}p(\bm{s}^{(k)}_{j}|\bm{U}^{(k)},\bm{v}^{(k)}_{j})\Big\}}_{\text{network constraint}}\underbrace{p(\bm{f}_{1})\prod_{t=2}^{T}p(\bm{f}_{t}|\bm{f}_{t-1})}_{\text{regime-switching}}\underbrace{\prod_{t=1}^{T}p(\hat{\bm{x}}_{t}|\bm{\rho}^{(\bm{f}_{t})},\bm{\Theta}^{(\bm{f}_{t})},\bm{f}_{t})}_{\text{network inference}},\quad\underbrace{\big(\text{s.t. }\|\bm{\Theta}^{(k)}\|_{od,1}\leq\tfrac{e}{\lambda}\big)}_{\text{network sparsity}}.$
Figure 2. Graphical model of MissNet at each iteration.

4.2. Algorithm

It is difficult to find the global optimal solution of Eq. (4.1) for the following reasons: (i) as a constraint for $\dot{\bm{U}}$, $\dot{\bm{S}}$ has to be fixed and encode inter-correlation; (ii) $\dot{\bm{U}}$, $\bm{Z}$ and $\bm{F}$ jointly determine $\tilde{\bm{X}}$, and $\dot{\bm{U}}$ and $\dot{\bm{V}}$ jointly determine $\dot{\bm{S}}$; (iii) calculating the correct $\bm{F}$ is NP-hard. Hence, we aim to find a local optimum instead, following the EM algorithm, where the graphical model for each iteration is shown in Fig. 2. Specifically, to address the aforementioned difficulties, we employ the following steps: (i) we consider $\dot{\bm{S}}$ to be observed in each iteration, and we update it at the end of the iteration using $\dot{\bm{\Theta}}$; (ii) we regard $\dot{\bm{U}}$ as a model parameter, so that $\{\tilde{\bm{X}},\bm{Z},\bm{F}\}$ are independent of $\{\dot{\bm{S}},\dot{\bm{V}}\}$, and we alternate the inference of $\{\bm{Z},\bm{F}\}$ and $\dot{\bm{V}}$ with the update of the model parameters; (iii) we employ a Viterbi approximation and infer the most likely $\bm{F}$.

4.2.1. E-step

Given $\dot{\bm{U}}$, we can infer $\{\bm{Z},\bm{F}\}$ and $\dot{\bm{V}}$ independently.

Let $\bm{o}_{t}$ denote the indices of the observed entries of $\tilde{\bm{x}}_{t}$. The observed-only data $\bar{\bm{x}}_{t}$ and the corresponding observed-only observation matrix $\bar{\bm{U}}^{(k)}_{t}$ are defined as follows:

$\bm{o}_{t}=\{i\,|\,\bm{W}_{i,t}>0,\;i=1,\dots,N\},$
(12) $\bar{\bm{x}}_{t}=\tilde{\bm{x}}_{t}(\bm{o}_{t}),\quad\bar{\bm{U}}^{(k)}_{t}=\bm{U}^{(k)}(\bm{o}_{t},:).$

Inferring $\bm{F}$ and $\bm{Z}$. $\bm{F}$ and $\bm{Z}$ are coupled and so must be jointly determined. We first use a Viterbi approximation to find the most likely regime assignments $\bm{F}$ that maximize the log-likelihood of Eq. (4.1). The likelihood terms obtained during the calculation of $\bm{F}$ also act as a Kalman filter (forward algorithm). Then, we infer $\bm{Z}$ with a Rauch-Tung-Striebel (RTS) smoother (backward algorithm).

In the Viterbi approximation, finding $\bm{F}$ requires the partial cost $J_{t,t-1,k,l}$ incurred when the switch is to regime $k$ at time $t$ from regime $l$ at time $t-1$. To calculate the partial cost, we define the following LDS state and variance terms:

$\bm{\mu}_{t|t-1,k,l}=\bm{B}\bm{\mu}_{t-1|t-1,k,l},$
$\bm{\mu}_{t|t,k,l}=\bm{\mu}_{t|t-1,k,l}+\bm{D}_{t,k,l}(\bar{\bm{x}}_{t}-\bar{\bm{U}}^{(k)}_{t}\bm{B}\bm{\mu}_{t|t-1,k,l}),$
$\bm{\Psi}_{t|t-1,k,l}=\bm{B}\bm{\Psi}_{t-1|t-1,k,l}\bm{B}^{\prime}+\sigma_{\bm{Z}}^{2}\bm{I},$
$\bm{\Psi}_{t|t,k,l}=(\bm{I}-\bm{D}_{t,k,l}\bar{\bm{U}}^{(k)}_{t})\bm{\Psi}_{t-1|t-1,k,l},$
(13) $\bm{D}_{t,k,l}=\bm{\Psi}_{t|t-1,k,l}\bar{\bm{U}}^{(k)^{\prime}}_{t}(\bar{\bm{U}}^{(k)}_{t}\bm{\Psi}_{t|t-1,k,l}\bar{\bm{U}}^{(k)^{\prime}}_{t}+\sigma_{\bm{X}^{(k)}}^{2}\bm{I})^{-1},$

with the initial state:

$\bm{\mu}_{1|1,k}=\bm{z}_{0}+\bm{D}_{1,k}(\bar{\bm{x}}_{1}-\bar{\bm{U}}^{(k)}_{1}\bm{z}_{0}),$
$\bm{\Psi}_{1|1,k}=(\bm{I}-\bm{D}_{1,k}\bar{\bm{U}}^{(k)}_{1})\bm{\Psi}_{0},$
(14) $\bm{D}_{1,k}=\bm{\Psi}_{0}\bar{\bm{U}}^{(k)^{\prime}}_{1}(\bar{\bm{U}}^{(k)}_{1}\bm{\Psi}_{0}\bar{\bm{U}}^{(k)^{\prime}}_{1}+\sigma_{\bm{X}^{(k)}}^{2}\bm{I})^{-1},$

where $\bm{\mu}_{t|t-1,k,l}$ and $\bm{\mu}_{t|t,k,l}$ are the one-step-predicted LDS state and the best-filtered state estimate at $t$, respectively, given that the switch is in regime $k$ at time $t$ and in regime $l$ at time $t-1$ and only the first $t-1$ measurements are known. Similar definitions are used for $\bm{\Psi}_{t|t-1,k,l}$ and $\bm{\Psi}_{t|t,k,l}$.

The partial cost is obtained by calculating the logarithm of Eq. (4.1) related to $\bm{f}_{t}$:

(15) $J_{t,t-1,k,l}=\frac{1}{2}(\bar{\bm{x}}_{t}-\bar{\bm{U}}^{(k)}_{t}\bm{\mu}_{t|t-1,k,l})^{\prime}\bm{R}(\bar{\bm{x}}_{t}-\bar{\bm{U}}^{(k)}_{t}\bm{\mu}_{t|t-1,k,l})-\frac{1}{2}\log\det(\bm{R})+\frac{L}{2}\log(2\pi)+\frac{1}{2}(\hat{\bm{x}}_{t}-\bm{\rho}^{(k)})^{\prime}\bm{\Theta}^{(k)}(\hat{\bm{x}}_{t}-\bm{\rho}^{(k)})-\frac{1}{2}\log\det(\bm{\Theta}^{(k)})+\frac{N}{2}\log(2\pi)-\log(\bm{\Pi}_{k,l}),$
$\bm{R}=(\bar{\bm{U}}^{(k)}_{t}\bm{\Psi}_{t|t-1,k,l}\bar{\bm{U}}^{(k)^{\prime}}_{t}+\sigma_{\bm{X}^{(k)}}^{2}\bm{I})^{-1}.$

Once all the partial costs are obtained, it is well known how to apply Viterbi inference to a discrete Markov process to obtain the most likely regime assignments $\bm{F}$ (Viterbi, 1967).
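A minimal Viterbi decoder over such partial costs is sketched below (assuming J0 holds the costs at $t=1$ and J[t-1, k, l] the cost of switching from regime $l$ at $t-1$ to regime $k$ at $t$; smaller cost means higher likelihood):

```python
import numpy as np

def viterbi_regimes(J0, J):
    """Return the minimum-cost regime path f_1..f_T given per-step partial costs."""
    K = J0.shape[0]
    T = J.shape[0] + 1
    cost = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    cost[0] = J0
    for t in range(1, T):
        total = J[t - 1] + cost[t - 1][None, :]   # total[k, l]: best cost ending in l, then moving to k
        back[t] = np.argmin(total, axis=1)
        cost[t] = np.min(total, axis=1)
    f = np.zeros(T, dtype=int)
    f[-1] = int(np.argmin(cost[-1]))
    for t in range(T - 2, -1, -1):                # backtrack the most likely assignments
        f[t] = back[t + 1, f[t + 1]]
    return f
```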

Then, we infer $\bm{Z}$. Let the posteriors of $\bm{z}_{t}$ be as follows:

$p(\bm{z}_{t}|\tilde{\bm{x}}_{1},\dots,\tilde{\bm{x}}_{t})=\mathcal{N}(\bm{z}_{t}|\bm{\mu}_{t},\bm{\Psi}_{t}),$
(16) $p(\bm{z}_{t}|\tilde{\bm{x}}_{1},\dots,\tilde{\bm{x}}_{T})=\mathcal{N}(\bm{z}_{t}|\bm{\hat{\mu}}_{t},\bm{\hat{\Psi}}_{t}).$

We have already obtained $\bm{\mu}_{t},\bm{\Psi}_{t},\bm{\Psi}_{t|t-1}$ using $\bm{F}$; thus, in practice, we have already run a Kalman filter. We then apply the RTS smoother to infer $\bm{Z}$:

$\bm{\hat{\mu}}_{t}=\bm{\mu}_{t}+\bm{C}_{t}(\bm{\hat{\mu}}_{t+1}-\bm{B}\bm{\mu}_{t}),$
$\bm{\hat{\Psi}}_{t}=\bm{\Psi}_{t}+\bm{C}_{t}(\bm{\hat{\Psi}}_{t+1}-\bm{\Psi}_{t|t-1})\bm{C}^{\prime}_{t},$
(17) $\bm{C}_{t}=\bm{\Psi}_{t}\bm{B}^{\prime}(\bm{\Psi}_{t|t-1})^{-1},$
$\mathbb{E}[\bm{z}_{t}]=\bm{\hat{\mu}}_{t},$
$\mathbb{E}[\bm{z}_{t}\bm{z}^{\prime}_{t-1}]=\bm{\hat{\Psi}}_{t}\bm{C}^{\prime}_{t}+\bm{\hat{\mu}}_{t}\bm{\hat{\mu}}^{\prime}_{t-1},$
(18) $\mathbb{E}[\bm{z}_{t}\bm{z}^{\prime}_{t}]=\bm{\hat{\Psi}}_{t}+\bm{\hat{\mu}}_{t}\bm{\hat{\mu}}^{\prime}_{t}.$

Inferring $\dot{\bm{V}}$. We apply Bayes' theorem to Eqs. (7) and (8) to obtain the posteriors $p(\bm{v}^{(k)}_{j}|\bm{s}^{(k)}_{j})=\mathcal{N}(\bm{v}^{(k)}_{j}|\bm{\nu}^{(k)}_{j},\bm{\Upsilon}^{(k)})$:

$\bm{\nu}^{(k)}_{j}=\bm{M}^{(k)-1}\bm{U}^{(k)^{\prime}}\bm{s}^{(k)}_{j},$
$\bm{\Upsilon}^{(k)}=\sigma_{\bm{S}^{(k)}}^{2}\bm{M}^{(k)-1},$
$\bm{M}^{(k)}=\bm{U}^{(k)^{\prime}}\bm{U}^{(k)}+\sigma_{\bm{V}^{(k)}}^{-2}\sigma_{\bm{S}^{(k)}}^{2}\bm{I},$
$\mathbb{E}[\bm{v}^{(k)}_{j}]=\bm{\nu}^{(k)}_{j},$
(19) $\mathbb{E}[\bm{v}^{(k)}_{j}\bm{v}^{(k)^{\prime}}_{j}]=\bm{\Upsilon}^{(k)}+\bm{\nu}^{(k)}_{j}\bm{\nu}^{(k)^{\prime}}_{j}.$
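Since Eq. (19) is a standard Bayesian linear-Gaussian posterior, it translates directly into a few lines of NumPy (a sketch; the names are ours):

```python
import numpy as np

def posterior_v(U, s_j, sigma_S2, sigma_V2):
    """Posterior mean and covariance of v_j given s_j, following Eq. (19)."""
    L = U.shape[1]
    M = U.T @ U + (sigma_S2 / sigma_V2) * np.eye(L)   # M^(k)
    M_inv = np.linalg.inv(M)
    nu = M_inv @ U.T @ s_j                            # posterior mean nu_j^(k)
    Upsilon = sigma_S2 * M_inv                        # posterior covariance Upsilon^(k)
    return nu, Upsilon
```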

4.2.2. M-step

After obtaining $\{\bm{Z},\bm{F}\}$ and $\dot{\bm{V}}$, we update the model parameters to maximize the expectation of the log-likelihood:

(20) $\theta^{new}=\mathop{\rm arg\,max}\limits_{\theta}Q(\theta,\theta^{old}),$
$Q(\theta,\theta^{old})=\mathbb{E}_{\bm{Z},\bm{F},\dot{\bm{V}}|\theta^{old}}[\log p(\tilde{\bm{X}},\bm{Z},\dot{\bm{V}},\dot{\bm{U}},\dot{\bm{S}},\bm{F}|\theta)]-\sum_{k=1}^{K}\lambda\|\bm{\Theta}^{(k)}\|_{od,1},$

where we incorporate the $\ell_{1}$-norm constraint.

Parameters for the imputation model, $\dot{\bm{U}}$ and $\{\bm{B},\bm{z}_{0},\bm{\Psi}_{0},\sigma_{\bm{Z}},\dot{\sigma_{\bm{X}}},\dot{\sigma_{\bm{S}}},\dot{\sigma_{\bm{V}}}\}$, can be obtained by taking the derivative of $Q(\theta,\theta^{old})$. For $\bm{U}^{(k)}$, we update each row $\bm{U}^{(k)}_{i,:}$ individually:

(21) $(\bm{U}^{(k)}_{i,:})^{new}=\bm{A_{1}}^{(k)}\bm{A_{2}}^{(k)-1},$
$\bm{A_{1}}^{(k)}=\alpha\sigma_{\bm{S}^{(k)}}^{-2}\sum_{j=1}^{N}\bm{S}^{(k)}_{i,j}\mathbb{E}[\bm{v}^{(k)^{\prime}}_{j}]+(1-\alpha)\sigma_{\bm{X}^{(k)}}^{-2}\sum_{t=1,\bm{f}_{t}\in k}^{T}\bm{W}_{i,t}\tilde{\bm{X}}_{i,t}\mathbb{E}[\bm{z}^{\prime}_{t}],$
$\bm{A_{2}}^{(k)}=\alpha\sigma_{\bm{S}^{(k)}}^{-2}\sum_{j=1}^{N}\mathbb{E}[\bm{v}^{(k)}_{j}\bm{v}^{(k)^{\prime}}_{j}]+(1-\alpha)\sigma_{\bm{X}^{(k)}}^{-2}\sum_{t=1,\bm{f}_{t}\in k}^{T}\bm{W}_{i,t}\mathbb{E}[\bm{z}_{t}\bm{z}^{\prime}_{t}],$

where $0\leq\alpha\leq 1$ is a hyperparameter that trades off the contributions of inter-correlation and temporal dependency. The details of updating $\{\bm{B},\bm{z}_{0},\bm{\Psi}_{0},\sigma_{\bm{Z}},\dot{\sigma_{\bm{X}}},\dot{\sigma_{\bm{S}}},\dot{\sigma_{\bm{V}}}\}$ are presented in Appendix A.3.

For the network inference model, we calculate $\hat{\bm{X}}$ with Eq. (22) and then update $\bm{\rho}^{(k)}$ by calculating the empirical mean of the $\hat{\bm{X}}$ belonging to the $k^{th}$-regime, and $\bm{\Theta}^{(k)}$ by solving the graphical lasso problem shown in Eq. (23) via ADMM.

(22) $\hat{\bm{x}}_{t}=\bm{W}_{:,t}\circ\tilde{\bm{x}}_{t}+(1-\bm{W}_{:,t})\circ(\bm{U}^{(\bm{f}_{t})})^{new}\bm{\hat{\mu}}_{t},$
(23) $\textrm{minimize}_{\bm{\Theta}^{(k)}\in S^{p}_{++}}\;\lambda\|\bm{\Theta}^{(k)}\|_{od,1}-\sum_{t=1,\bm{f}_{t}\in k}^{T}ll(\hat{\bm{x}}_{t},\bm{\Theta}^{(k)}).$

For the regime-switching model, the initial state distribution and the Markov transition matrix are updated as follows:

(24) $\bm{\pi}_{0}=\bm{f}_{1},\quad\bm{\Pi}=\bigg(\sum_{t=2}^{T}\bm{f}_{t}\bm{f}_{t-1}^{\prime}\bigg)\,\textrm{diag}\bigg(\sum_{t=2}^{T}\bm{f}_{t}\bigg)^{-1}.$
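These updates are simple counting operations over the one-hot assignments; a literal transcription of Eq. (24) (F is the $K\times T$ one-hot matrix $\bm{F}$):

```python
import numpy as np

def update_switching_params(F):
    """Eq. (24): initial regime distribution and transition matrix from one-hot assignments."""
    pi0 = F[:, 0]
    counts = F[:, 1:] @ F[:, :-1].T                  # sum_t f_t f_{t-1}'
    scale = np.diag(1.0 / F[:, 1:].sum(axis=1))      # diag(sum_{t=2}^T f_t)^{-1}
    Pi = counts @ scale
    return pi0, Pi
```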

4.2.3. Update $\dot{\bm{S}}$

We update $\bm{S}^{(k)}$ at the end of each iteration. As shown in Eq. (25), we define the off-diagonal elements of $\bm{S}^{(k)}$ as the partial correlations calculated from the network $\bm{\Theta}^{(k)}$ so as to encode the inter-correlation.

(25) $\bm{S}^{(k)}_{i,j}=\begin{cases}1&(i=j)\\-\frac{\bm{\Theta}^{(k)}_{i,j}}{\sqrt{\bm{\Theta}^{(k)}_{i,i}}\sqrt{\bm{\Theta}^{(k)}_{j,j}}}&(i\neq j)\end{cases}.$
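Eq. (25) in NumPy, converting an inferred precision matrix into the contextual matrix of partial correlations:

```python
import numpy as np

def contextual_matrix(Theta):
    """Eq. (25): unit diagonal, negated and normalized off-diagonals of Theta."""
    d = np.sqrt(np.diag(Theta))
    S = -Theta / np.outer(d, d)    # -(Theta_ij / sqrt(Theta_ii * Theta_jj))
    np.fill_diagonal(S, 1.0)
    return S
```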
Algorithm 1 MissNet ($\tilde{\bm{X}},\bm{W},L,K,\alpha,\lambda$)
1: Input: (a) partially observed multivariate time series $\tilde{\bm{X}}$, (b) indicator matrix $\bm{W}$, and (c) hyperparameters $L,K,\alpha,\lambda$
2: Output: latent factors $\bm{Z},\dot{\bm{V}},\dot{\bm{U}},\dot{\bm{S}},\bm{F}$, model parameters $\theta$, and $\hat{\bm{X}}$
3: Initialize $\hat{\bm{X}}$ with linear interpolation, and initialize $\bm{Z},\dot{\bm{V}},\dot{\bm{U}},\dot{\bm{S}},\bm{F}$, and $\theta$;
4: repeat
5:   for $t=1:T$ do
6:     for $k=1:K$ do
7:       for $l=1:K$ do
8:         Calculate partial cost $J_{t,t-1,k,l}$ using Eq. (15)
9:       end for
10:     end for
11:   end for
12:   Infer $\bm{F}$ and obtain $\bm{\mu}_{t},\bm{\Psi}_{t}$ based on Eqs. (13), (14)
13:   for $t=T:1$ do
14:     Infer $\bm{\hat{\mu}}_{t},\bm{\hat{\Psi}}_{t},\mathbb{E}[\bm{z}_{t}\bm{z}^{\prime}_{t-1}],\mathbb{E}[\bm{z}_{t}\bm{z}^{\prime}_{t}]$ by Eqs. (17), (18)
15:   end for
16:   for $j=1:N$ do
17:     Infer $\mathbb{E}[\bm{v}^{(k)}_{j}],\mathbb{E}[\bm{v}^{(k)}_{j}\bm{v}^{(k)^{\prime}}_{j}]$ using Eq. (19)
18:   end for
19:   Set $\bm{Z}=\{\bm{\hat{\mu}}_{1},\dots,\bm{\hat{\mu}}_{T}\}$, $\{\bm{V}^{(k)}=\{\bm{\nu}^{(k)}_{1},\dots,\bm{\nu}^{(k)}_{N}\}\}_{k=1}^{K}$
20:   Update $\dot{\bm{U}}$, $\theta$ and $\hat{\bm{X}}$ by Eqs. (21) - (24)
21:   Update $\dot{\bm{S}}$ based on Eq. (25)
22: until convergence;
23: return $\{\bm{Z},\dot{\bm{V}},\dot{\bm{U}},\dot{\bm{S}},\bm{F},\theta,\hat{\bm{X}}\}$;

4.2.4. Overall algorithm

We present the overall algorithm, shown as Alg. 1, for obtaining a local optimal solution of Eq. (4.1). Given a partially observed multivariate time series $\tilde{\bm{X}}$, an indicator matrix $\bm{W}$, the dimension of the latent state $L$, the number of regimes $K$, the network parameter $\alpha$, and the sparsity parameter $\lambda$, our algorithm aims to find the latent factors $\bm{Z},\dot{\bm{V}},\dot{\bm{U}},\dot{\bm{S}},\bm{F}$, the other model parameters in $\theta$, and the imputed time series $\hat{\bm{X}}$.

The MissNet algorithm starts by initializing $\hat{\bm{X}}$ with linear interpolation and by randomly initializing $\bm{Z},\dot{\bm{V}},\dot{\bm{U}},\dot{\bm{S}},\bm{F}$, and $\theta$ (Line 3). Then, it alternately updates the latent factors and parameters until they converge. In each iteration, we consider $\dot{\bm{S}}$ to be given and $\dot{\bm{U}}$ to be a model parameter. Within an iteration, we first conduct a Viterbi approximation to calculate the most likely regime assignments $\bm{F}$ (Lines 5-12). Then, we infer the expectations of $\bm{Z}$ and $\dot{\bm{V}}$ (Lines 13-15 and Lines 16-18), update $\dot{\bm{U}}$ and the model parameters $\theta$ (Line 20), and, at the end of the iteration, update $\dot{\bm{S}}$ (Line 21).

4.3. Complexity analysis

Lemma 1.

The time complexity of MissNet is $O(\#iter\cdot(K^{2}\sum_{t=1}^{T}(L^{3}+L^{2}N_{t}+LN_{t}^{2}+N_{t}^{3})+KL^{2}N^{2}+KTL^{2}N+KN^{3}))$.

Proof.

See Appendix A.2. ∎

$N_{t}$ denotes the number of observed features of $\bar{\bm{x}}_{t}$. Note that $K^{2}\sum_{t=1}^{T}(L^{3}+L^{2}N_{t}+LN_{t}^{2}+N_{t}^{3})$ is upper bounded by $K^{2}TN^{3}$. In practice, the length of the time series ($T$) is often orders of magnitude greater than the number of features ($N$). Hence, the actual running time of MissNet is dominated by the terms related to $T$, which are linear in $T$.

Lemma 2.

The space complexity of MissNet is $O(TN+K^{2}TL^{2}+KL^{2}N+KN^{2})$.

Proof.

See Appendix A.2. ∎

5. Experiments

Figure 3. RMSE of (a), (b) Synthetic ($N=50$), (c)–(k) MotionCapture ($N=123$), and (l) Motes ($N=54$) datasets. Panels: (a) PatternA ($T=1000$), (b) PatternB ($T=1000$), (c) Run ($T=145$), (d) Bouncy walk ($T=644$), (e) Swing shoulder ($T=804$), (f) Walk slow ($T=1223$), (g) Mawashigeri ($T=1472$), (h) Jump distance ($T=547$), (i) Wave hello ($T=299$), (j) Football throw ($T=1091$), (k) Boxing ($T=773$), (l) Motes ($T=336$).
Figure 4. Critical difference diagram of real-world datasets.

In this section, we empirically evaluate our approach against state-of-the-art baselines on 12 datasets. We present experimental results for the following questions:
Q1. Effectiveness: How accurate is the proposed MissNet for recovering missing values?
Q2. Scalability: How does the proposed algorithm scale?
Q3. Interpretability: How can MissNet help us understand the data?

5.1. Experimental setup

5.1.1. Datasets

We use the following datasets.

Synthetic. We generate two types of synthetic data, PatternA and PatternB, five times each, by defining $\bm{Z}$ and $\dot{\bm{U}}$. We set $T=1000$, $N=50$ and $L=10$ (Appendix B.1). PatternA has one regime ($K=1$), and in PatternB ($K=2$), two regimes switch every $200$ timesteps.

MotionCapture. This dataset contains nine types of full-body motions from the MotionCapture database (http://mocap.cs.cmu.edu). Each motion measures the positions of 41 bones in the human body, resulting in a total of 123 features (X, Y, and Z coordinates).

Motes. This dataset consists of temperature measurements from the 54 sensors deployed in the Intel Berkeley Research Lab (https://db.csail.mit.edu/labdata/labdata.html). We use hourly data for the first two weeks (03-01 to 03-14). Originally, 9.6% of the data is missing, including a blackout from 03-10 to 03-11 during which all values are missing.

5.1.2. Data preprocessing

We generate synthetic missing blocks, each with a length of $0\sim 5\%$ of the data length, and place them randomly until the total missing rate reaches $\{10,20,\dots,80\%\}$. Thus, a missing block can be longer than $0.05T$ when blocks overlap. An additional $10\%$ of missing values are added for hyperparameter tuning. Each feature of each dataset is normalized independently using a z-score so that it has zero mean and unit variance.
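The block-masking procedure described above can be sketched as follows (our illustrative reading of the protocol; the block-length cap and target rates are the settings stated in the text):

```python
import numpy as np

def add_missing_blocks(W, target_rate, max_block_frac=0.05, seed=0):
    """Mask random blocks (each up to max_block_frac * T long) until the missing rate is reached."""
    rng = np.random.default_rng(seed)
    N, T = W.shape
    while (W == 0).mean() < target_rate:
        i = rng.integers(N)                                        # feature to corrupt
        length = rng.integers(1, max(1, int(max_block_frac * T)) + 1)
        start = rng.integers(0, T - length + 1)
        W[i, start:start + length] = 0                             # overlapping blocks merge into longer gaps
    return W
```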

5.1.3. Comparison methods

We compare our method with state-of-the-art imputation methods ranging from classical baselines (Linear and Quadratic), MF-based methods (SoftImpute, CDRec and TRMF), SSM-based methods (DynaMMo and DCMF), to DL-based methods (BRITS, SAITS and TIDER).

  • Linear/Quadratic (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html) use linear/quadratic equations to interpolate missing values.

  • SoftImpute (Mazumder et al., 2010) first fills in missing values with zero, then alternates between recovering the missing values and updating the SVD using the recovered matrix.

  • CDRec (Khayati et al., 2014) is a memory-efficient algorithm that iterates centroid decomposition (CD) and missing value imputation until they converge.

  • TRMF (Yu et al., 2016) is based on MF that imposes temporal dependency among the data points with the AR model.

  • DynaMMo (Li et al., 2009) first fills in missing values using linear interpolation and then uses the EM algorithm to iteratively recover missing values and update the LDS model.

  • DCMF (Cai et al., 2015a) adds a contextual constraint to SSM and captures inter-correlation by a predefined network. As suggested in the original paper, we give the cosine similarity between each pair of time series calculated after linear interpolation as a predefined network. This method is similar to MissNet if we set K=1K=1, employ a predefined network that is fixed throughout the algorithm, and eliminate the effect of regime-switching and network inference models from MissNet.

  • BRITS (Cao et al., 2018) imputes missing values according to hidden states from bidirectional RNN.

  • SAITS (Du et al., 2023) is a self-attention-based model that jointly optimizes imputation and reconstruction to perform the missing value imputation of multivariate time series.

  • TIDER (LIU et al., 2022) learns disentangled seasonal and trend representations by employing a neural network combined with MF.

5.1.4. Hyperparameter setting

For MissNet, we use latent dimensions of 10, 30, and 15 for Synthetic, MotionCapture, and Motes, respectively, and we set $\lambda=1.0,\alpha=0.5$ for all datasets. We set the correct number of regimes on the Synthetic datasets; we vary $K=\{1,2,3\}$ for the other datasets. We list the detailed hyperparameter settings for the baselines in Appendix B.2.

5.1.5. Evaluation metric

To evaluate effectiveness, we use the Root Mean Square Error (RMSE) over the artificially masked entries whose ground-truth values are observed (Cai et al., 2015a).
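Concretely, the error is computed only over the artificially masked entries whose ground truth is available (a minimal sketch):

```python
import numpy as np

def rmse_on_masked(X_true, X_imputed, eval_mask):
    """RMSE over entries that were artificially removed (eval_mask == 1) but have ground truth."""
    diff = (X_true - X_imputed)[eval_mask == 1]
    return float(np.sqrt(np.mean(diff ** 2)))
```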

5.2. Results

5.2.1. Q1. Effectiveness

We show the effectiveness of MissNet over baselines in missing value imputation.

Synthetic. Fig. 3 (a) and (b) show the results obtained with the Synthetic datasets. SSM- and MF-based methods perform worse with PatternB than with PatternA due to the increased complexity of the data. DL-based methods, especially BRITS, are less affected thanks to their high modeling power. MissNet significantly outperforms DCMF for PatternB although it produces similar results for PatternA. This is because DCMF fails to capture inter-correlation with PatternB, since it can only use one predefined network and cannot accommodate a change of network. Meanwhile, MissNet can capture the inter-correlation of two different regimes thanks to our regime-switching model. However, MissNet fails to discover the correct transition when the missing rate exceeds 70%, and its RMSE becomes similar to that of DCMF.

MotionCapture and Motes. The results for the MotionCapture and Motes datasets are shown in Fig. 3 (c)–(l). We can see that MissNet and DCMF consistently outperform the other baselines thanks to their ability to exploit inter-correlation explicitly.

Fig. 4 shows the corresponding critical difference diagram for all missing rates based on the Wilcoxon-Holm method (Ismail Fawaz et al., 2019), where methods that are not connected by a bold line differ significantly in average rank. This confirms that MissNet significantly outperforms the other methods, including DCMF, in average rank. Our algorithm, which repeatedly infers networks, and the use of the $\ell_{1}$-norm enable the inference of networks adequate for imputation, contributing to better results than DCMF, which uses cosine similarity as a predefined network that may contain spurious correlations in the presence of missing values. Note that MissNet and DCMF exhibit only minor differences when the missing rate is low (10%) because a plausible predefined network can be calculated from the observed data.

The classical Linear and Quadratic baselines are unsuitable for imputing missing blocks since they impute missing values mostly from neighboring observed points and cannot capture temporal patterns when there are large gaps. DL-based methods lack sufficient training data and are not suitable for the data we use here, making them perform particularly poorly at high missing rates, as also noted in (Khayati et al., 2020). The MF-based methods SoftImpute and CDRec have a higher RMSE than the SSM-based methods since they do not model the temporal dynamics of the data. TRMF utilizes temporal dependency with the AR model; however, it can only capture the lags specified by the hyperparameter of the AR model. SSM-based methods are superior in imputation to the other groups owing to their ability to capture temporal dependency in latent space. DynaMMo implicitly captures inter-correlation in latent space; therefore, it is no match for MissNet or DCMF.

Fig. 5 demonstrates the results for the MotionCapture Run dataset (missing rate $=60\%$). We compare the imputation results for the sensor at RKNE provided by the top five methods in terms of average rank, including MissNet, in Fig. 5 (a). BRITS and SoftImpute fail to capture the dynamics of the time series while providing a good fit to the observed values. The imputation of DynaMMo is smooth, but some parts are imprecise since it cannot explicitly exploit inter-correlation. MissNet and DCMF can effectively exploit other observed features associated with RKNE, thereby accurately imputing missing values where the other methods fail (e.g., $T=20\sim 40$, $60\sim 80$). Fig. 5 (b) shows the sensor network of Y-coordinate values obtained by MissNet plotted on the human body, where a green/yellow dot (node) indicates a sensor placed on the front/back of the body, and the thickness and color (blue/red) of the edges are the value and sign (positive/negative) of the partial correlations, respectively. We can see that sensors located close together have edges, meaning they are conditionally dependent given all the other sensor values. For example, the sensor at RKNE has edges to RTHI, RSHN, LKNE, and LTHI. These sensors are located near RKNE and show similar dynamics, so it is reasonable to consider them connected. Since MissNet can infer such a meaningful network from partially observed data, the imputation of MissNet is more accurate than that of DCMF.

Figure 5. Case study on MotionCapture Run dataset: (a) imputation results at RKNE; (b) Y-network of MissNet.
Figure 6. Hyperparameter sensitivity results: (a) impact of $L$; (b) impact of $\alpha$; (c) impact of $\lambda$.

5.2.2. Hyperparameter sensitivity

We take the Motes dataset and show the impact of the hyperparameters: the latent dimension $L$, the network parameter $\alpha$, and the sparsity parameter $\lambda$. We report the mean RMSE over all missing rates.

Latent dimension. Fig. 6 (a) shows the impact of $L$. As $L$ becomes larger, the model fits the observed data more closely. The RMSE decreases steadily as $L$ increases and stabilizes after $15$. This shows that MissNet does not overfit the observed data even for a large $L$.

Network parameter. $\alpha$ determines the contributions of inter-correlation and temporal dependency to learning $\dot{\bm{U}}$. If $\alpha=0$, the contextual matrix $\dot{\bm{S}}$ is ignored. If $\alpha=1$, only $\dot{\bm{S}}$ is considered for learning $\dot{\bm{U}}$. Fig. 6 (b) shows the results of varying $\alpha$; they are robust except when $\alpha=1$ (RMSE $=0.76$). We can see that $\alpha=0.4$ gives the best result, indicating that both temporal dependency and inter-correlation are important for precise imputation.

Sparsity parameter. $\lambda$ controls the sparsity of the networks $\dot{\bm{\Theta}}$ through the $\ell_{1}$-norm. The larger $\lambda$ becomes, the sparser the networks become, so MissNet considers only strong interplay; by contrast, when $\lambda$ is small, MissNet also considers insignificant interplays. Fig. 6 (c) shows the impact of $\lambda$. We can see that the sparsity of the networks affects the accuracy, and the best $\lambda$ lies between $0.1$ and $10$. Thus, the $\ell_{1}$-norm constraint helps MissNet exploit important relationships.

Figure 7. Scalability of MissNet.

5.2.3. Q2. Scalability

We test the scalability of the MissNet algorithm by varying the data length ($T$) of PatternA. Fig. 7 shows the computation time for one iteration plotted against the data length. As the figure shows, our proposed MissNet algorithm scales linearly with the data length $T$.

Figure 8. Case study on Motes dataset: (a) results of regime assignments; (b) regime #1, number of edges: 116; (c) regime #2, number of edges: 134.

5.2.4. Q3. Interpretability

We demonstrate how MissNet helps us understand data. We have already shown an example with the MotionCapture Run dataset in Fig. 5 (b), where MissNet provides an interpretable network. Here, we demonstrate the results of MissNet ($K=2$) on the Motes dataset (missing rate $=30\%$). Fig. 8 (a) shows the regime assignments $\bm{F}$; MissNet mostly assigns night hours to regime #1 and working hours (about 9 am to 10 pm) to regime #2, suggesting that they have different networks. Fig. 8 (b) and (c) show the networks for regimes #1 and #2 obtained by MissNet plotted on the building layout. The sensor numbers in the figure are plotted at the actual sensor locations. As we can see, the two regimes have different networks, and a common feature is that neighboring sensors tend to form edges, which aligns with our expectations, considering that the sensors measure temperature and, thus, neighboring sensors correlate. The network of regime #2 has more edges than that of regime #1, and the edges are 1.2 times longer on average, which might be caused by air convection due to the movement of people during working hours. Consequently, MissNet provides regime assignments and sparse networks, which help us understand how the data can be separated and which relationships are important for imputation.

6. Conclusion

In this paper, we proposed an effective missing value imputation method for multivariate time series, namely MissNet, which captures temporal dependency through its latent space and inter-correlation through the inferred networks while discovering regimes. Our proposed method has the following properties: (a) Effective: it outperforms the state-of-the-art algorithms for multivariate time series imputation. (b) Scalable: the computation time of MissNet scales linearly with the length of the data. (c) Interpretable: it provides sparse networks and regime assignments, which help us understand the important relationships for imputation visually. Our extensive experiments demonstrated the above properties of MissNet.

Acknowledgements.
The authors would like to thank the anonymous referees for their valuable comments and helpful suggestions. This work was supported by JSPS KAKENHI Grant-in-Aid for Scientific Research Number JP21H03446, JP22K17896, NICT JPJ012368C03501, JST-AIP JPMJCR21U4, JST CREST JPMJCR23M3.

References

  • Alcaraz and Strodthoff (2022) Juan Miguel Lopez Alcaraz and Nils Strodthoff. 2022. Diffusion-based time series imputation and forecasting with structured state space models. arXiv preprint arXiv:2208.09399 (2022).
  • Bansal et al. (2021) Parikshit Bansal, Prathamesh Deshpande, and Sunita Sarawagi. 2021. Missing Value Imputation on Multidimensional Time Series. Proc. VLDB Endow. 14, 11 (jul 2021), 2533–2545.
  • Berrevoets et al. (2023) Jeroen Berrevoets, Fergus Imrie, Trent Kyono, James Jordon, and Mihaela van der Schaar. 2023. To impute or not to impute? missing data in treatment effect estimation. In International Conference on Artificial Intelligence and Statistics. PMLR, 3568–3590.
  • Boyd et al. (2011) Stephen P. Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. 2011. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 3, 1 (2011), 1–122.
  • Cai et al. (2015a) Yongjie Cai, Hanghang Tong, Wei Fan, and Ping Ji. 2015a. Fast mining of a network of coevolving time series. In Proceedings of the 2015 SIAM International Conference on Data Mining. SIAM, 298–306.
  • Cai et al. (2015b) Yongjie Cai, Hanghang Tong, Wei Fan, Ping Ji, and Qing He. 2015b. Facets: Fast comprehensive mining of coevolving high-order time series. In KDD. 79–88.
  • Cao et al. (2018) Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, and Yitan Li. 2018. Brits: Bidirectional recurrent imputation for time series. Advances in neural information processing systems 31 (2018).
  • Chambon et al. (2018) Stanislas Chambon, Mathieu N Galtier, Pierrick J Arnal, Gilles Wainrib, and Alexandre Gramfort. 2018. A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26, 4 (2018), 758–769.
  • Chen and Sun (2021) Xinyu Chen and Lijun Sun. 2021. Bayesian temporal factorization for multidimensional time series prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 9 (2021), 4659–4673.
  • Cini et al. (2021) Andrea Cini, Ivan Marisca, and Cesare Alippi. 2021. Filling the g_ap_s: Multivariate time series imputation by graph neural networks. arXiv preprint arXiv:2108.00298 (2021).
  • Dabrowski et al. (2018) Joel Janek Dabrowski, Ashfaqur Rahman, Andrew George, Stuart Arnold, and John McCulloch. 2018. State Space Models for Forecasting Water Quality Variables: An Application in Aquaculture Prawn Farming (KDD ’18). Association for Computing Machinery, New York, NY, USA, 177–185.
  • Du et al. (2023) Wenjie Du, David Côté, and Yan Liu. 2023. Saits: Self-attention-based imputation for time series. Expert Systems with Applications 219 (2023), 119619.
  • Fan et al. (2023) Yangxin Fan, Xuanji Yu, Raymond Wieser, David Meakin, Avishai Shaton, Jean-Nicolas Jaubert, Robert Flottemesch, Michael Howell, Jennifer Braid, Laura Bruckman, Roger French, and Yinghui Wu. 2023. Spatio-Temporal Denoising Graph Autoencoders with Data Augmentation for Photovoltaic Data Imputation. Proc. ACM Manag. Data 1, 1, Article 50 (may 2023), 19 pages. https://doi.org/10.1145/3588730
  • Fox et al. (2008) Emily Fox, Erik Sudderth, Michael Jordan, and Alan Willsky. 2008. Nonparametric Bayesian learning of switching linear dynamical systems. Advances in neural information processing systems 21 (2008).
  • Friedman et al. (2008) Jerome Friedman, Trevor Hastie, and Robert Tibshirani. 2008. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9, 3 (2008), 432–441.
  • Ghahramani (1998) Zoubin Ghahramani. 1998. Learning dynamic Bayesian networks. Springer Berlin Heidelberg, Berlin, Heidelberg, 168–197.
  • Hairi et al. (2019) N Hairi, Hanghang Tong, and Lei Ying. 2019. NetDyna: mining networked coevolving time series with missing values. In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 503–512.
  • Hallac et al. (2017a) David Hallac, Youngsuk Park, Stephen P. Boyd, and Jure Leskovec. 2017a. Network Inference via the Time-Varying Graphical Lasso. In KDD. 205–213.
  • Hallac et al. (2017b) David Hallac, Sagar Vare, Stephen P. Boyd, and Jure Leskovec. 2017b. Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data. In KDD. 215–223.
  • Ismail Fawaz et al. (2019) Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. 2019. Deep learning for time series classification: a review. Data mining and knowledge discovery 33, 4 (2019), 917–963.
  • Jing et al. (2021) Baoyu Jing, Hanghang Tong, and Yada Zhu. 2021. Network of tensor time series. In Proceedings of the Web Conference 2021. 2425–2437.
  • Khayati et al. (2014) Mourad Khayati, Michael Böhlen, and Johann Gamper. 2014. Memory-efficient centroid decomposition for long time series. In 2014 IEEE 30th International Conference on Data Engineering. IEEE, 100–111.
  • Khayati et al. (2020) Mourad Khayati, Alberto Lerner, Zakhar Tymchenko, and Philippe Cudré-Mauroux. 2020. Mind the gap: an experimental evaluation of imputation of missing values techniques in time series. In Proceedings of the VLDB Endowment, Vol. 13. 768–782.
  • Kolar and Xing (2012) Mladen Kolar and Eric P Xing. 2012. Estimating sparse precision matrices from data with missing values. In International Conference on Machine Learning. 635–642.
  • Li et al. (2009) Lei Li, James McCann, Nancy S Pollard, and Christos Faloutsos. 2009. Dynammo: Mining and summarization of coevolving sequences with missing values. In KDD. 507–516.
  • Liu et al. (2022) Shuai Liu, Xiucheng Li, Gao Cong, Yile Chen, and Yue Jiang. 2022. Multivariate Time-series Imputation with Disentangled Temporal Representations. In The Eleventh International Conference on Learning Representations.
  • Liu et al. (2020) Yuehua Liu, Tharam Dillon, Wenjin Yu, Wenny Rahayu, and Fahed Mostafa. 2020. Missing value imputation for industrial IoT sensor data with large gaps. IEEE Internet of Things Journal 7, 8 (2020), 6855–6867.
  • Ma et al. (2008) Hao Ma, Haixuan Yang, Michael R Lyu, and Irwin King. 2008. Sorec: social recommendation using probabilistic matrix factorization. In Proceedings of the 17th ACM conference on Information and knowledge management. 931–940.
  • Malte Londschien and Bühlmann (2021) Malte Londschien, Solt Kovács, and Peter Bühlmann. 2021. Change-Point Detection for Graphical Models in the Presence of Missing Values. Journal of Computational and Graphical Statistics 30, 3 (2021), 768–779.
  • Mazumder et al. (2010) Rahul Mazumder, Trevor Hastie, and Robert Tibshirani. 2010. Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research 11 (2010), 2287–2322.
  • Mohan et al. (2014) Karthik Mohan, Palma London, Maryam Fazel, Daniela Witten, and Su-In Lee. 2014. Node-Based Learning of Multiple Gaussian Graphical Models. J. Mach. Learn. Res. 15, 1 (jan 2014), 445–488.
  • Monti et al. (2014) Ricardo Pio Monti, Peter Hellyer, David Sharp, Robert Leech, Christoforos Anagnostopoulos, and Giovanni Montana. 2014. Estimating time-varying brain connectivity networks from functional MRI time series. NeuroImage 103 (2014), 427–443.
  • Namaki et al. (2011) A. Namaki, A.H. Shirazi, R. Raei, and G.R. Jafari. 2011. Network analysis of a financial market based on genuine correlation and threshold method. Physica A: Statistical Mechanics and its Applications 390, 21 (2011), 3835–3841.
  • Obata et al. (2024) Kohei Obata, Koki Kawabata, Yasuko Matsubara, and Yasushi Sakurai. 2024. Dynamic Multi-Network Mining of Tensor Time Series. In Proceedings of the ACM on Web Conference 2024 (WWW ’24). 4117–4127.
  • Papadimitriou et al. (2005) Spiros Papadimitriou, Jimeng Sun, and Christos Faloutsos. 2005. Streaming pattern discovery in multiple time-series. In Proceedings of the 31st International Conference on Very Large Data Bases (VLDB).
  • Pavlovic et al. (1999) Vladimir Pavlovic, James M Rehg, Tat-Jen Cham, and Kevin P Murphy. 1999. A dynamic Bayesian network approach to figure tracking using learned dynamic models. In Proceedings of the seventh IEEE international conference on computer vision, Vol. 1. IEEE, 94–101.
  • Qin et al. (2020) Zhen Qin, Yibo Zhang, Shuyu Meng, Zhiguang Qin, and Kim-Kwang Raymond Choo. 2020. Imaging and fusing time series for wearable sensor-based human activity recognition. Information Fusion 53 (2020), 80–87.
  • Ren et al. (2023) Xiaobin Ren, Kaiqi Zhao, Patricia Riddle, Katerina Taškova, Lianyan Li, and Qingyi Pan. 2023. DAMR: Dynamic Adjacency Matrix Representation Learning for Multivariate Time Series Imputation. SIGMOD (2023). https://doi.org/10.1145/3589333
  • Rogers et al. (2013) Mark Rogers, Lei Li, and Stuart J Russell. 2013. Multilinear dynamical systems for tensor time series. Advances in Neural Information Processing Systems 26 (2013).
  • Shadbahr et al. (2023) Tolou Shadbahr, Michael Roberts, Jan Stanczuk, Julian Gilbey, Philip Teare, Sören Dittmer, Matthew Thorpe, Ramon Viñas Torné, Evis Sala, Pietro Lió, Mishal Patel, Jacobus Preller, Ian Selby, Anna Breger, Jonathan R. Weir-McCall, Effrossyni Gkrania-Klotsas, Anna Korhonen, Emily Jefferson, Georg Langs, Guang Yang, Helmut Prosch, Judith Babar, Lorena Escudero Sánchez, Marcel Wassin, Markus Holzer, Nicholas Walton, Pietro Lió, James H. F. Rudd, Tuomas Mirtti, Antti Sakari Rannikko, John A. D. Aston, Jing Tang, and Carola-Bibiane Schönlieb. 2023. The impact of imputation quality on machine learning classifiers for datasets with missing values. Communications Medicine 3, 1 (2023).
  • Shukla and Marlin (2021) Satya Narayan Shukla and Benjamin M Marlin. 2021. Multi-time attention networks for irregularly sampled time series. arXiv preprint arXiv:2101.10318 (2021).
  • Städler and Bühlmann (2012a) Nicolas Städler and Peter Bühlmann. 2012a. Missing values: sparse inverse covariance estimation and an extension to sparse regression. Statistics and Computing 22 (2012), 219–235.
  • Städler and Bühlmann (2012b) Nicolas Städler and Peter Bühlmann. 2012b. Missing values: sparse inverse covariance estimation and an extension to sparse regression. Statistics and Computing 22 (2012), 219–235.
  • Tashiro et al. (2021) Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. 2021. Csdi: Conditional score-based diffusion models for probabilistic time series imputation. Advances in Neural Information Processing Systems 34 (2021), 24804–24816.
  • Tomasi et al. (2021) Federico Tomasi, Veronica Tozzo, and Annalisa Barla. 2021. Temporal Pattern Detection in Time-Varying Graphical Models. In ICPR. 4481–4488.
  • Tomasi et al. (2018) Federico Tomasi, Veronica Tozzo, Saverio Salzo, and Alessandro Verri. 2018. Latent Variable Time-varying Network Inference. In KDD. 2338–2346. https://doi.org/10.1145/3219819.3220121
  • Tozzo et al. (2021) Veronica Tozzo, Federico Ciech, Davide Garbarino, and Alessandro Verri. 2021. Statistical Models Coupling Allows for Complex Local Multivariate Time Series Analysis. In KDD. 1593–1603.
  • Tozzo et al. (2020) Veronica Tozzo, Davide Garbarino, and Annalisa Barla. 2020. Missing Values in Multiple Joint Inference of Gaussian Graphical Models. In Proceedings of the 10th International Conference on Probabilistic Graphical Models (Proceedings of Machine Learning Research, Vol. 138), Manfred Jaeger and Thomas Dyhre Nielsen (Eds.). PMLR, 497–508.
  • Viterbi (1967) Andrew Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory 13, 2 (1967), 260–269.
  • Wang et al. (2023a) Dingsu Wang, Yuchen Yan, Ruizhong Qiu, Yada Zhu, Kaiyu Guan, Andrew Margenot, and Hanghang Tong. 2023a. Networked time series imputation via position-aware graph enhanced variational autoencoders. In KDD. 2256–2268.
  • Wang et al. (2023b) Xu Wang, Hongbo Zhang, Pengkun Wang, Yudong Zhang, Binwu Wang, Zhengyang Zhou, and Yang Wang. 2023b. An Observed Value Consistent Diffusion Model for Imputing Missing Values in Multivariate Time Series. In KDD. 2409–2418.
  • Yi et al. (2016) Xiuwen Yi, Yu Zheng, Junbo Zhang, and Tianrui Li. 2016. ST-MVL: filling missing values in geo-sensory time series data. In Proceedings of the 25th International Joint Conference on Artificial Intelligence.
  • Yoon et al. (2018a) Jinsung Yoon, James Jordon, and Mihaela Schaar. 2018a. Gain: Missing data imputation using generative adversarial nets. In International conference on machine learning. PMLR, 5689–5698.
  • Yoon et al. (2018b) Jinsung Yoon, William R Zame, and Mihaela van der Schaar. 2018b. Estimating missing data in temporal data streams using multi-directional recurrent neural networks. IEEE Transactions on Biomedical Engineering 66, 5 (2018), 1477–1490.
  • Yu et al. (2016) Hsiang-Fu Yu, Nikhil Rao, and Inderjit S. Dhillon. 2016. Temporal Regularized Matrix Factorization for High-dimensional Time Series Prediction. In Advances in Neural Information Processing Systems 28.
  • Zhang and Balzano (2016) Dejiao Zhang and Laura Balzano. 2016. Global convergence of a grassmannian gradient descent algorithm for subspace estimation. In Artificial Intelligence and Statistics. PMLR, 1460–1468.

Appendix A Proposed model

A.1. Symbols

Table 2 lists the main symbols we use throughout this paper.

A.2. Complexity analysis

Proof of Lemma 1.

Proof.

The overall time complexity comprises four parts, obtained by taking the most time-consuming computation in each iteration and assuming N>Nt,LN>N_{t},L: the inference of 𝒁\bm{Z} and 𝑭\bm{F} costs O(K2t=1T(L3+L2Nt+LNt2+Nt3))O(K^{2}\sum_{t=1}^{T}(L^{3}+L^{2}N_{t}+LN_{t}^{2}+N_{t}^{3})), related to Eq. (13) and Eq. (15); the inference of 𝑽˙\dot{\bm{V}} costs O(KL2N2)O(KL^{2}N^{2}) (Eq. (19)); the M step costs O(KTL2N)O(KTL^{2}N), dominated by the calculation of 𝑼˙\dot{\bm{U}} (Eq. (21)); and the update of 𝚯˙\dot{\bm{\Theta}} costs O(KN3)O(KN^{3}) (Eq. (23)). Thus, the overall time complexity is O(#iter(K2t=1T(L3+L2Nt+LNt2+Nt3)+KL2N2+KTL2N+KN3))O(\#iter\cdot(K^{2}\sum_{t=1}^{T}(L^{3}+L^{2}N_{t}+LN_{t}^{2}+N_{t}^{3})+KL^{2}N^{2}+KTL^{2}N+KN^{3})). ∎
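As a rough, purely illustrative check of which term dominates (our own numbers, not taken from the paper), consider the synthetic setting of Appendix B with N = 50, L = 10, K = 2, T = 1000, and take N_t = N as a fully observed worst case:

```python
# Per-iteration cost of the four terms in Lemma 1 (illustration only; N_t = N assumed).
N, L, K, T = 50, 10, 2, 1000
e_step  = K**2 * T * (L**3 + L**2 * N + L * N**2 + N**3)  # inference of Z and F
v_step  = K * L**2 * N**2                                 # inference of V
m_step  = K * T * L**2 * N                                # calculation of U
th_step = K * N**3                                        # update of Theta
print(e_step, v_step, m_step, th_step)
# 624000000, 500000, 10000000, 250000 -> the E step dominates the per-iteration cost.
```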

Proof of Lemma 2.

Proof.

The space complexity comprises three parts: storing the input dataset 𝑿\bm{X} requires O(TN)O(TN); the intermediate values in the E step require O(K2TL2+KL2N)O(K^{2}TL^{2}+KL^{2}N); and storing the parameter set requires O(KN2)O(KN^{2}). Thus, the overall space complexity is O(TN+K2TL2+KL2N+KN2)O(TN+K^{2}TL^{2}+KL^{2}N+KN^{2}). ∎

Table 2. Symbols and definitions.
Symbol Definition
𝑿\bm{X} Multivariate time series 𝑿={𝒙1,𝒙2,,𝒙T}N×T\bm{X}=\{\bm{x}_{1},\bm{x}_{2},\dots,\bm{x}_{T}\}\in\mathbb{R}^{N\times T}
𝑾\bm{W} Indicator matrix 𝑾N×T\bm{W}\in\mathbb{R}^{N\times T}
𝑿~\tilde{\bm{X}} Partially observed multivariate time series 𝑿~=𝑾𝑿\tilde{\bm{X}}=\bm{W}\circ\bm{X}
𝑿^\hat{\bm{X}} Imputed multivariate time series
𝒁\bm{Z} Time series latent states 𝒁L×T\bm{Z}\in\mathbb{R}^{L\times T}
𝑽(k)\bm{V}^{(k)} Contextual latent factor of kthk^{th}-regime 𝑽(k)L×N\bm{V}^{(k)}\in\mathbb{R}^{L\times N}
𝑺(k)\bm{S}^{(k)} Contextual matrix of kthk^{th}-regime 𝑺(k)N×N\bm{S}^{(k)}\in\mathbb{R}^{N\times N}
𝑩\bm{B} Transition matrix 𝑩L×L\bm{B}\in\mathbb{R}^{L\times L}
𝑼(k)\bm{U}^{(k)} Observation matrix of kthk^{th}-regime 𝑼(k)N×L\bm{U}^{(k)}\in\mathbb{R}^{N\times L}
𝑭\bm{F} Regime assignments 𝑭K×T\bm{F}\in\mathbb{R}^{K\times T}
𝝆(k)\bm{\rho}^{(k)} Mean vector of kthk^{th}-regime 𝝆(k)N\bm{\rho}^{(k)}\in\mathbb{R}^{N}
𝚯(k)\bm{\Theta}^{(k)} Inverse covariance matrix (i.e., network) of kthk^{th}-regime 𝚯(k)N×N\bm{\Theta}^{(k)}\in\mathbb{R}^{N\times N}
NN Number of features
TT Number of timesteps
LL Number of latent dimensions
KK Number of regimes
α\alpha Trade-off between temporal dependency and inter-correlation
λ\lambda Parameter for 1\ell_{1}-norm that regulates network sparsity

A.3. Updating parameters

The parameters are updated as follows:

𝑩new\displaystyle\bm{B}^{new} =(t=2T𝔼[𝒛t𝒛t1])(t=2T𝔼[𝒛t1𝒛t1])1,\displaystyle=\bigg{(}\sum_{t=2}^{T}\mathbb{E}[\bm{z}_{t}\bm{z}^{\prime}_{t-1}]\bigg{)}\bigg{(}\sum_{t=2}^{T}\mathbb{E}[\bm{z}_{t-1}\bm{z}^{\prime}_{t-1}]\bigg{)}^{-1},
𝒛0new\displaystyle\bm{z}_{0}^{new} =𝔼[𝒛1],\displaystyle=\mathbb{E}[\bm{z}_{1}],
𝚿0new\displaystyle\bm{\Psi}_{0}^{new} =𝔼[𝒛1𝒛1]𝔼[𝒛1]𝔼[𝒛1],\displaystyle=\mathbb{E}[\bm{z}_{1}\bm{z}^{\prime}_{1}]-\mathbb{E}[\bm{z}_{1}]\mathbb{E}[\bm{z}^{\prime}_{1}],
(σ𝒁2)new=\displaystyle(\sigma_{\bm{Z}}^{2})^{new}= 1(T1)Ltr(t=2T𝔼[𝒛t𝒛t]𝑩newt=2T𝔼[𝒛t1𝒛t]),\displaystyle\frac{1}{(T-1)L}\textrm{tr}\bigg{(}\sum_{t=2}^{T}\mathbb{E}[\bm{z}_{t}\bm{z}^{\prime}_{t}]-\bm{B}^{new}\sum_{t=2}^{T}\mathbb{E}[\bm{z}_{t-1}\bm{z}^{\prime}_{t}]\bigg{)},
(σ𝑿(k)2)new=\displaystyle(\sigma_{\bm{X}^{(k)}}^{2})^{new}= 1t=1(𝒇t=k)Ti=1N𝑾i,t\displaystyle\frac{1}{\sum_{t=1(\bm{f}_{t}=k)}^{T}\sum_{i=1}^{N}\bm{W}_{i,t}}
[t=1(𝒇t=k)Ttr((𝑼¯t(k))new𝔼[𝒛t𝒛t](𝑼¯t(k))new)\displaystyle\bigg{[}\sum_{t=1(\bm{f}_{t}=k)}^{T}tr\bigg{(}(\bar{\bm{U}}^{(k)}_{t})^{new}\mathbb{E}[\bm{z}_{t}\bm{z}^{\prime}_{t}](\bar{\bm{U}}^{(k)}_{t})^{new^{\prime}}\bigg{)}
+t=1(𝒇t=k)T((𝒙¯t)𝒙¯t2(𝒙¯t)((𝑼¯t(k))new𝔼[𝒛t]))],\displaystyle+\sum_{t=1(\bm{f}_{t}=k)}^{T}\bigg{(}(\bar{\bm{x}}_{t})^{\prime}\bar{\bm{x}}_{t}-2(\bar{\bm{x}}_{t})^{\prime}((\bar{\bm{U}}^{(k)}_{t})^{new}\mathbb{E}[\bm{z}_{t}])\bigg{)}\bigg{]},
(σ𝑺(k)2)new=\displaystyle(\sigma_{\bm{S}^{(k)}}^{2})^{new}= 1N2[j=1N(𝒔j(k)𝒔j(k)2𝒔j(k)((𝑼(k))new𝔼[𝒗j(k)]))\displaystyle\frac{1}{N^{2}}\bigg{[}\sum_{j=1}^{N}\bigg{(}\bm{s}^{(k)^{\prime}}_{j}\bm{s}^{(k)}_{j}-2\bm{s}^{(k)^{\prime}}_{j}((\bm{U}^{(k)})^{new}\mathbb{E}[\bm{v}^{(k)}_{j}])\bigg{)}
+tr((𝑼(k))new(j=1N𝔼[𝒗j(k)𝒗j(k)])(𝑼(k)new))],\displaystyle+tr\bigg{(}(\bm{U}^{(k)})^{new}(\sum_{j=1}^{N}\mathbb{E}[\bm{v}^{(k)}_{j}\bm{v}^{(k)^{\prime}}_{j}])(\bm{U}^{(k)new})^{\prime}\bigg{)}\bigg{]},
(26) (σ𝑽(k)2)new=\displaystyle(\sigma_{\bm{V}^{(k)}}^{2})^{new}= 1NLj=1Ntr(𝔼[𝒗j(k)𝒗j(k)]).\displaystyle\frac{1}{NL}\sum_{j=1}^{N}\textrm{tr}(\mathbb{E}[\bm{v}^{(k)}_{j}\bm{v}^{(k)^{\prime}}_{j}]).
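To make the M step above concrete, the following is a minimal NumPy sketch of the transition-matrix update for B^new (our own illustration, not the authors' implementation); it assumes the smoothed second moments E[z_t z'_{t-1}] and E[z_{t-1} z'_{t-1}] have already been obtained in the E step (e.g., by Kalman smoothing, which is not shown).

```python
import numpy as np

def update_transition_matrix(Ez_cross, Ez_prev):
    """B^new = (sum_t E[z_t z'_{t-1}]) (sum_t E[z_{t-1} z'_{t-1}])^{-1}, t = 2..T.

    Ez_cross: iterable of L x L arrays E[z_t z'_{t-1}]
    Ez_prev : iterable of L x L arrays E[z_{t-1} z'_{t-1}]
    """
    num = sum(Ez_cross)  # sum_t E[z_t z'_{t-1}]
    den = sum(Ez_prev)   # sum_t E[z_{t-1} z'_{t-1}]
    # Solve B @ den = num instead of forming an explicit inverse of den.
    return np.linalg.solve(den.T, num.T).T

# Toy usage: if E[z_t z'_{t-1}] = 0.9 * E[z_{t-1} z'_{t-1}], the update recovers B close to 0.9 * I.
rng = np.random.default_rng(0)
Ez_prev = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(4)]
Ez_cross = [0.9 * m for m in Ez_prev]
B_new = update_transition_matrix(Ez_cross, Ez_prev)
```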

Appendix B Experiments

B.1. Synthetic data generation

We first generate a latent factor containing a linear trend, a sinusoidal seasonal pattern, and noise, 𝒁i,t=sin(2πβTt)+γt+η\bm{Z}_{i,t}=\sin(2\pi\frac{\beta}{T}t)+\gamma t+\eta, s.t. 1<β<20,0.3<|γ|<1,η𝒩(0,0.3)1<\beta<20,0.3<|\gamma|<1,\eta\sim\mathcal{N}(0,0.3), where 𝒁L×T\bm{Z}\in\mathbb{R}^{L\times T}. We then project the latent factor with the observation matrix, 𝑿=𝑼(k)𝒁\bm{X}=\bm{U}^{(k)}\bm{Z}, where 𝑿N×T\bm{X}\in\mathbb{R}^{N\times T}, and 𝑼(k)N×L\bm{U}^{(k)}\in\mathbb{R}^{N\times L} is built from a random graph created as follows (Mohan et al., 2014):

  1. Set 𝑼(k)\bm{U}^{(k)} equal to the adjacency matrix of an Erdős-Rényi directed random graph, where every edge has a 20%20\% chance of being selected.

  2. Set each selected edge 𝑼i,j(k)\bm{U}^{(k)}_{i,j}\sim Uniform([0.6,0.3][0.3,0.6])([-0.6,-0.3]\cup[0.3,0.6]), where 𝑼i,j(k)\bm{U}^{(k)}_{i,j} denotes the weight between variables ii and jj.

We set T=1000,N=50,L=10T=1000,N=50,L=10. We generate two types of synthetic data, PatternA and PatternB, five times each, where K=1,2K=1,2, respectively. In PatternB, the regime switches every 200200 timesteps.
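Below is a minimal NumPy sketch of this generation procedure (our own illustration, not the authors' code); the function name, the entry-wise 20% edge sampling for U^(k), and the block-wise regime switching are assumptions made for concreteness, and the injection of missing blocks is not shown.

```python
import numpy as np

def generate_synthetic(T=1000, N=50, L=10, K=2, regime_len=200, seed=0):
    """Sketch of PatternB-style data (PatternA corresponds to K=1)."""
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    # Latent factor: sinusoidal seasonality + linear trend + Gaussian noise.
    beta = rng.uniform(1, 20, size=L)
    gamma = rng.uniform(0.3, 1.0, size=L) * rng.choice([-1, 1], size=L)
    Z = (np.sin(2 * np.pi * np.outer(beta, t) / T)
         + np.outer(gamma, t)
         + rng.normal(0, 0.3, size=(L, T)))            # Z in R^{L x T}
    # One observation matrix per regime: each entry kept with probability 20%,
    # with weights drawn uniformly from [-0.6, -0.3] U [0.3, 0.6].
    U = np.zeros((K, N, L))
    for k in range(K):
        mask = rng.random((N, L)) < 0.2
        weights = rng.uniform(0.3, 0.6, size=(N, L)) * rng.choice([-1, 1], size=(N, L))
        U[k] = mask * weights
    # Project the latent factor, switching the regime every `regime_len` steps.
    X = np.empty((N, T))
    for start in range(0, T, regime_len):
        k = (start // regime_len) % K
        X[:, start:start + regime_len] = U[k] @ Z[:, start:start + regime_len]
    return X, Z, U

X, Z, U = generate_synthetic()
```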

B.2. Hyperparameters

We describe the hyperparameters of the baselines. For the Synthetic datasets, we set the latent dimension to 1010 for all baselines. For a fair comparison, we set the latent dimension of the SSM-based methods to the same value as MissNet. For the MF-based methods, we vary the latent dimension over {3,5,10,15,20,30,40}\{3,5,10,15,20,30,40\}. We vary the AR parameter of TRMF over {[1,2,3,4,5],[1,24]}\{[1,2,3,4,5],[1,24]\}. To train the DL-based methods, we additionally mask 10%10\% of the data as missing values for model training. We vary the window size over {16,32,T}\{16,32,T\}. All other hyperparameters follow the original codes.
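For reference, the search space above can be summarized as the following hypothetical grid (the group labels and key names are ours, not identifiers used by the baselines):

```python
baseline_grids = {
    "SSM-based": {"latent_dim": "same as MissNet"},
    "MF-based":  {"latent_dim": [3, 5, 10, 15, 20, 30, 40]},
    "TRMF":      {"ar_lags": [[1, 2, 3, 4, 5], [1, 24]]},
    "DL-based":  {"window_size": [16, 32, "T"], "extra_training_missing_rate": 0.10},
}
```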

Appendix C Discussion

While MissNet achieves superior performance against state-of-the-art baselines in missing value imputation, we here discuss two of its limitations, concerning sparse network inference and data size.

As mentioned in Section 5.2.1, MissNet fails to discover the correct transition when the missing rate exceeds 70%70\%. However, we claim that this failure is reasonable; rather, correctly discovering the transitions at missing rates of up to 60%60\% is itself valuable. Several studies (Tozzo et al., 2020; Kolar and Xing, 2012; Städler and Bühlmann, 2012b) have tackled sparse network inference in the presence of missing values. They aim to infer the correct network and thus use only the observed values for network inference. Since observing a complete pair of values is rare at a high missing rate, it is difficult to infer the correct network; accordingly, the maximum missing rate in their experiments is 30%30\%. Although their experimental settings differ from ours, this indicates that sparse network inference in the presence of missing values is itself a challenging task.

As shown in the experiments, MissNet performs well even with a relatively small number of samples (TT) and a large number of features (NN), since MissNet is a parametric model and assumes sparse networks to capture inter-correlation. This cannot be achieved by DL models, which contain a massive number of parameters and therefore require a large TT, especially when NN is large, since all the relationships between features need to be learned. However, unlike DL models, increasing the number of samples may not greatly improve MissNet's performance, as it has far fewer parameters than DL models, even though switching sparse networks increases the model's flexibility.