Real-time Anomaly Detection for Multivariate Data Streams
Abstract.
We present a real-time multivariate anomaly detection algorithm for data streams based on the Probabilistic Exponentially Weighted Moving Average (PEWMA). Our formulation is resilient to abrupt transient shifts, abrupt distributional shifts, and gradual distributional shifts in the data. The anomaly detection routines use an incremental online algorithm to handle streams. Furthermore, the proposed algorithm works in an unsupervised manner, eliminating the need for labeled examples, and remains resilient in the face of concept drift.
1. Introduction
Anomaly detection is the task of identifying patterns that depict abnormal behavior. Outliers can arise from human or equipment errors, faulty systems, and other causes. Anomaly detection is well suited to imbalanced data, where the goal is to predict the behavior of the minority class. There are numerous applications, including detecting defaults on loans, fraud detection, and network intrusion detection. An anomaly detection algorithm can work in unsupervised, supervised, or hybrid mode.
There are different types of anomalies, described as follows:
• Point Anomaly: the algorithm decides whether a single instance is an anomaly with respect to the entire data set.
• Contextual Anomaly: a data instance can be anomalous based on its context (attributes and position in the stream) and the proximity of the chosen anomaly. This anomaly type is ideal for multivariate data; e.g., in the snapshot reading of a machine, an attribute of a single data point may seem abnormal but can be normal behavior when the entire data is considered.
• Collective Anomaly: the algorithm decides that a set of data points are anomalies as a group, even though individually these data points exhibit normal behavior.
Anomaly detection algorithms can operate in the following settings:
• Static: These algorithms work on static datasets; every item is loaded into memory at once to perform the computation.
• Online: These algorithms work on real-time data streams; items are incrementally loaded into memory and processed in chunks.
• Static + Online: The model operates in two stages: initial parameters are estimated in the static setting and then incrementally updated as more data arrives. Our work is of this type.
PEWMA was introduced in (Carter and Streilein, 2012) for online anomaly detection on univariate time series. Drawing inspiration from that work, we provide extensions to support real-time anomaly detection on multivariate data streams.
2. Background
An anomaly detection algorithm must identify outliers across the several kinds of signal changes common in time-dependent data: abrupt transient shifts, abrupt distributional shifts, and gradual distributional shifts (Carter and Streilein, 2012), labeled "A", "B", and "C" in Figure 1 respectively.

Online algorithms are useful for real-time applications, as they operate incrementally on data streams. These algorithms receive input incrementally and decide based on updated parameters that convey the current state of the data stream. This philosophy contrasts with offline algorithms, which assume the entire data set is available in memory; the issue with an offline algorithm is that the data must fit in memory. An online algorithm should be both time- and space-efficient.
Anomaly detection algorithms can work in modes such as diagnosis and accommodation (Hodge and Austin, 2004). Firstly, the diagnosis method finds the outliers in the data and removes them from the data sample to avoid skewing the distribution. This method is applicable when the distribution of expected behaviors is known; the outliers are excluded when estimating the parameters of the distribution (Hodge and Austin, 2004). Secondly, the accommodation method finds the outliers in the data and includes them when estimating the parameters of the statistical model. The accommodation method is suitable for data streams, as it accounts for the effect of concept drift (Ditzler and Polikar, 2013).
Muth (Muth, 1960) laid the foundation for exponential smoothing by showing that it provides optimal estimates for a random walk observed with noise. Further work provided a statistical framework for exponential smoothing, leading to the creation of linear models (Box and Jenkins, 1976; Roberts, 1982); unfortunately, this does not apply to nonlinear models. Yule postulated a formulation of time series as a realization of a stochastic process (Yule, 1927). The Exponentially Weighted Moving Average (EWMA) is ideal for keeping a set of running moments of a data stream. EWMA is a smoothing technique that uses a forgetting parameter \alpha to modify the influence of the most recent item in the data stream, as shown in Equation 1. This smoothing makes the estimate volatile under abrupt transient changes and unfit for distributional shifts.

(1) \hat{\mu}_t = \alpha \hat{\mu}_{t-1} + (1 - \alpha) X_t
EWMA's limitations motivated the Probabilistic Exponentially Weighted Moving Average (PEWMA). PEWMA (Carter and Streilein, 2012) improves on EWMA by adding a weighting parameter \beta that incorporates the probability P_t of the data under the model, as shown in Equation 2. PEWMA works for every shift: abrupt transient, abrupt distributional, and gradual distributional. PEWMA and EWMA both make use of a damped window (Salehi and Rashidi, 2018).

(2) \hat{\mu}_t = \alpha_t \hat{\mu}_{t-1} + (1 - \alpha_t) X_t, \qquad \alpha_t = \alpha (1 - \beta P_t)
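To make the contrast concrete, the following is a minimal sketch of the two update rules on a univariate stream; the Gaussian density used for P_t follows Algorithm 1 in Section 3.1, and all names are ours:

```python
import math

def ewma_update(mu, x, alpha):
    """One EWMA step with a fixed forgetting factor alpha (Equation 1)."""
    return alpha * mu + (1.0 - alpha) * x

def pewma_update(mu, sigma, x, alpha, beta):
    """One PEWMA step: alpha is damped by the probability of x (Equation 2)."""
    z = (x - mu) / sigma if sigma > 0 else 0.0
    p = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # Gaussian density of x
    alpha_t = alpha * (1.0 - beta * p)
    # Probable points (large p) shrink alpha_t and pull the estimate toward x;
    # improbable points (p near 0) leave alpha_t near alpha, so anomalies
    # barely move the estimate.
    return alpha_t * mu + (1.0 - alpha_t) * x
```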
3. Method
Our formulation comprises PEWMA in Subsection 3.1, an online covariance matrix in Subsection 3.2, and an online inverse covariance matrix based on the Sherman–Morrison formula in Subsection 3.3.
Our implementation of the online covariance matrix builds on the work of (Igel et al., 2006). We simplify the algorithm by ignoring the details of evolutionary computation in that paper: the transition from one generation to the next described there is equivalent to moving from one state to another, which is analogous to how online algorithms handle dynamic changes as new data enters the stream. Cholesky decomposition is used extensively in the algorithms.
3.1. Probabilistic Exponentially Weighted Moving Average (PEWMA)
The PEWMA (Carter and Streilein, 2012) algorithm works in the accommodation mode. The routine shown in Algorithm 1 allows for the concept drift (Ditzler and Polikar, 2013) that occurs in data streams by updating the set of parameters that convey the state of the stream.
Algorithm 1: PEWMA update on the arrival of X_t.
Z_t = (X_t - \hat{\mu}_t) / \hat{\sigma}_t //incremental Z score
P_t = \frac{1}{\sqrt{2\pi}} \exp(-Z_t^2 / 2) //probability density function
\alpha_t = 1 - 1/t if t < T //weighting factor (training phase)
\alpha_t = \alpha (1 - \beta P_t) otherwise //probabilistic weighting factor
s_1 \leftarrow \alpha_t s_1 + (1 - \alpha_t) X_t //moving average
s_2 \leftarrow \alpha_t s_2 + (1 - \alpha_t) X_t^2 //moving average (second moment)
\hat{\mu}_{t+1} = s_1 //incremental mean
\hat{\sigma}_{t+1} = \sqrt{s_2 - s_1^2} //incremental standard deviation
The parameters of the anomaly detection algorithm consist of the current data X_t, the mean of the data \hat{\mu}_t, the current standard deviation \hat{\sigma}_t, the probability density P_t, the mean of the next data \hat{\mu}_{t+1} (incremental aggregate), the next standard deviation \hat{\sigma}_{t+1} (incremental aggregate), the training length T, and the current position t in the stream. Initialize the process by setting the initial data for training the model, s_1 = X_1 and s_2 = X_1^2.
Our work made use of fixed values for the parameters \alpha, \beta, and the detection threshold. These thresholds are chosen based on the criterion that outliers lie more than three times the standard deviation from the mean in normally distributed data.
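The following is a minimal sketch of Algorithm 1 in Python for a univariate stream. The default values of alpha, beta, and the training length T are illustrative assumptions only, not the tuned values used in our experiments:

```python
import math

class PEWMA:
    """Sketch of Algorithm 1: univariate PEWMA in accommodation mode."""

    def __init__(self, x0, alpha=0.99, beta=0.5, T=30):
        # alpha, beta, T are illustrative defaults (assumptions, see Section 3.1)
        self.alpha, self.beta, self.T = alpha, beta, T
        self.s1, self.s2 = x0, x0 * x0       # running first and second moments
        self.mean, self.std = x0, 0.0
        self.t = 1

    def update(self, x):
        z = (x - self.mean) / self.std if self.std > 0 else 0.0  # incremental Z score
        p = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)    # probability density
        if self.t < self.T:
            a = 1.0 - 1.0 / self.t                               # training phase
        else:
            a = self.alpha * (1.0 - self.beta * p)               # probabilistic weighting
        self.s1 = a * self.s1 + (1.0 - a) * x                    # moving average
        self.s2 = a * self.s2 + (1.0 - a) * x * x
        self.mean = self.s1                                      # incremental mean
        self.std = math.sqrt(max(self.s2 - self.s1 ** 2, 0.0))   # incremental std
        self.t += 1
        return abs(z)
```

A point is flagged as anomalous when the returned score exceeds the chosen threshold, e.g. three under the three-sigma criterion above.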
3.2. Online Covariance matrix
(1) Estimate the covariance matrix for the initial data X \in \mathbb{R}^{n \times d}, where n is the number of samples and d is the number of dimensions, as shown in Equation 3.

(3) C_0 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T
(2) Perform Cholesky factorization on the initial covariance matrix (positive definite), as shown in Equation 4.

(4) C_0 = A_0 A_0^T
(3) The updated covariance in the presence of new data is equivalent to the weighted average of the past covariance without the updated data and the covariance of the transformed input, as shown in Equation 5.

(5) C_{t+1} = \alpha C_t + \beta v_t v_t^T

where v_t, understood in our implementation as the centered current data, is v_t = x_t - \bar{x}_t, and \alpha and \beta are positive scalar values.
(4) Increment the Cholesky factor of the covariance matrix as shown in Equation 6, where z_t = A_t^{-1} v_t.

(6) A_{t+1} = \sqrt{\alpha}\, A_t + \frac{\sqrt{\alpha}}{\|z_t\|^2} \left( \sqrt{1 + \frac{\beta}{\alpha} \|z_t\|^2} - 1 \right) v_t z_t^T
(5) There are difficulties with setting the values of \alpha and \beta directly as explicit exponential-moving-average coefficients. We instead set them from the statistics of the data stream, as shown in Equation 7, where t is the current number of samples seen, initialized to n, the size of the original data in the static setting.

(7) \alpha_t = \frac{t}{t+1}, \qquad \beta_t = \frac{1}{t+1}

A code sketch of the complete procedure follows this list.
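The sketch below maintains the mean and the Cholesky factor per Equations 3 to 6; the count-based weights in update() follow our reading of Equation 7, and all names are ours:

```python
import numpy as np

class OnlineCovariance:
    """Sketch of Section 3.2: maintain a Cholesky factor A with C = A A^T."""

    def __init__(self, X0):
        X0 = np.asarray(X0, dtype=float)
        self.n = X0.shape[0]                  # size of the static batch
        self.mean = X0.mean(axis=0)
        C0 = np.cov(X0, rowvar=False)         # Equation 3 (must be positive definite)
        self.A = np.linalg.cholesky(C0)       # Equation 4: C0 = A A^T

    def update(self, x):
        v = np.asarray(x, dtype=float) - self.mean      # centered current data
        alpha = self.n / (self.n + 1.0)                 # Equation 7 weights (our reading)
        beta = 1.0 / (self.n + 1.0)
        z = np.linalg.solve(self.A, v)                  # z = A^{-1} v, no explicit inverse
        nz2 = z @ z
        k = (np.sqrt(alpha) / nz2) * (np.sqrt(1.0 + (beta / alpha) * nz2) - 1.0)
        self.A = np.sqrt(alpha) * self.A + k * np.outer(v, z)   # Equation 6
        self.mean += v / (self.n + 1.0)                 # incremental mean (Equation 14)
        self.n += 1

    @property
    def cov(self):
        return self.A @ self.A.T              # Equation 5, reassembled
```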
3.3. Online Inverse Covariance matrix
(1) Estimate the covariance matrix for the initial data X \in \mathbb{R}^{n \times d}, where n is the number of samples and d is the number of dimensions, as shown in Equation 8.

(8) C_0 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T

Invert the covariance matrix, as shown in Equation 9.

(9) C_0^{-1} = \left( \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T \right)^{-1}
(2) Perform Cholesky factorization on the initial covariance matrix, as shown in Equation 10.

(10) C_0 = A_0 A_0^T, \qquad C_0^{-1} = A_0^{-T} A_0^{-1}
(3) Increment the inverse of the Cholesky factor of the covariance matrix. Writing the update of Equation 6 in product form gives Equations 11 and 12, where z_t = A_t^{-1} v_t.

(11) A_{t+1} = \sqrt{\alpha}\, A_t \left( I + c\, z_t z_t^T \right)

(12) A_{t+1}^{-1} = \frac{1}{\sqrt{\alpha}} \left( I + c\, z_t z_t^T \right)^{-1} A_t^{-1}

Let us fix c = \frac{1}{\|z_t\|^2} \left( \sqrt{1 + \frac{\beta}{\alpha} \|z_t\|^2} - 1 \right). The resulting simplification using the Sherman–Morrison formula reduces the expression to Equation 13.

(13) A_{t+1}^{-1} = \frac{1}{\sqrt{\alpha}} \left( A_t^{-1} - \frac{1}{\|z_t\|^2} \left( 1 - \frac{1}{\sqrt{1 + \frac{\beta}{\alpha} \|z_t\|^2}} \right) z_t \left( z_t^T A_t^{-1} \right) \right)
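A minimal sketch of Equation 13 is shown below, assuming \alpha and \beta are supplied by the caller as in Equation 7; all names are ours:

```python
import numpy as np

def update_inverse_cholesky(Ainv, v, alpha, beta):
    """Update A^{-1} (where C = A A^T) for C' = alpha*C + beta*v v^T via the
    Sherman-Morrison identity (Equation 13), avoiding a full O(d^3)
    re-factorization."""
    z = Ainv @ v                          # z = A^{-1} v
    nz2 = z @ z                           # ||z||^2
    c = (1.0 - 1.0 / np.sqrt(1.0 + (beta / alpha) * nz2)) / nz2
    return (Ainv - c * np.outer(z, z @ Ainv)) / np.sqrt(alpha)
```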
3.4. Online Multivariate Anomaly Detection
The probability density function utilizes ideas from hypothesis testing: deciding on a threshold sets the confidence level that determines the acceptance and rejection regions of the Gaussian distribution curve.
(1) Use the covariance matrix C and the inverse covariance matrix C^{-1}, maintained as in Subsections 3.2 and 3.3.
(2) We increment the mean vector as new data arrives. Together with the covariance matrix C, this captures a number of the dynamics of the system. Let t represent the current count of data before the new data has arrived, x_{t+1} the new data, and \bar{x}_t the moving average, as shown in Equation 14.

(14) \bar{x}_{t+1} = \frac{t\, \bar{x}_t + x_{t+1}}{t + 1}
(3) Set a threshold to determine the acceptance and rejection regions. Items in the acceptance region are considered to be normal behavior, as shown in Equation 15.

(15) p(x) = \frac{1}{(2\pi)^{d/2} |C|^{1/2}} \exp\left( -\frac{1}{2} (x - \bar{x})^T C^{-1} (x - \bar{x}) \right)

where \bar{x} is the mean vector, C is the covariance matrix, |C| is the determinant of the matrix, x is the data vector, and d is the dimension of x. A code sketch of the scoring step follows this list.
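The sketch below works in log space for numerical stability and assumes the inverse Cholesky factor maintained in Subsection 3.3; all names are ours:

```python
import numpy as np

def gaussian_log_score(x, mean, Ainv):
    """Log of the density in Equation 15 using C^{-1} = Ainv^T Ainv.

    The Mahalanobis term is ||A^{-1}(x - mean)||^2, and because Ainv is
    triangular, log|C| = -2 * sum(log|diag(Ainv)|); nothing is re-inverted.
    """
    z = Ainv @ (np.asarray(x, dtype=float) - mean)
    d2 = z @ z                                        # squared Mahalanobis distance
    d = len(z)
    log_det_C = -2.0 * np.sum(np.log(np.abs(np.diag(Ainv))))
    return -0.5 * (d * np.log(2.0 * np.pi) + log_det_C + d2)
```

A point falls in the rejection region, and is flagged as anomalous, when its log score drops below the log of the chosen threshold.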
4. Experiment
We provide a set of detailed experiments on the proposed algorithms in different realistic scenarios, while maintaining the statistical framework, with its theoretical guarantees, provided by (Carter and Streilein, 2012).
We have experimented to determine the usefulness of our algorithm by creating a simulation with 10,000,000 random vectors of dimension 15. Repeated trials show that our algorithm is not sensitive to initialization seeds or to the dimensions of the matrix. This requirement was a deciding factor in the choice of the evaluation metric shown in Equation 16; we provide more information on the metric in Section 5.
Our experiments examine the effect of varying the size of the initial static window versus the update window, as shown in Subsections 4.1 and 4.2.

4.1. Experiment 1
We evaluate the trade-off between the static window and the update window. The experiment setup is as follows:
• Split the data into 5 segments. Train on the 1st segment (static), update the covariance on the 2nd (online), compare with the static covariance, and calculate the error.
• Train on segments 1 and 2 (static), update the covariance on the 3rd (online), compare with the static covariance, and calculate the error.
• Train on segments 1, 2, and 3 (static), update the covariance on the 4th (online), compare with the static covariance, and calculate the error.
• Train on segments 1, 2, 3, and 4 (static), update the covariance on the 5th (online), compare with the static covariance, and calculate the error.
Figure 2 contains the experimental result. A code sketch of this procedure follows.
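The sketch assumes the OnlineCovariance class sketched in Section 3 and the aad metric defined in Section 5; the segment bookkeeping is ours:

```python
import numpy as np

def experiment_1(X, n_segments=5):
    """Grow the static window by one segment at a time, update online on the
    next segment, and compare against the batch covariance with AAD."""
    segments = np.array_split(np.asarray(X, dtype=float), n_segments)
    errors = []
    for k in range(1, n_segments):
        online = OnlineCovariance(np.vstack(segments[:k]))  # static phase
        for x in segments[k]:                               # online phase
            online.update(x)
        batch = np.cov(np.vstack(segments[:k + 1]), rowvar=False)
        errors.append(aad(online.cov, batch))               # Equation 16
    return errors
```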

4.2. Experiment 2
The experiment setup is as follows:
• Split the data into 5 segments.
• Train on the 1st segment (static), update the covariance on the remaining segments (2, 3, 4, 5) (online), compare with the static covariance, and calculate the error on segments (2, 3, 4, 5).
• Train on segments 1 and 2 (static), update the covariance on the remaining segments (3, 4, 5) (online), compare with the static covariance, and calculate the error on segments (3, 4, 5).
• Train on segments 1, 2, and 3 (static), update the covariance on the remaining segments (4, 5) (online), compare with the static covariance, and calculate the error on segments (4, 5).
• Train on segments 1, 2, 3, and 4 (static), update the covariance on the remaining segment (5) (online), compare with the static covariance, and calculate the error on segment (5).
Figure 3 contains the experimental result.
5. Result Analysis
Our random matrices get flattened to vectors and utilized as input. The length of the flattened vector is used as a normalization factor to make the loss metric agnostic to the dimension of the matrix. The loss function used in the evaluation is the Absolute Average Deviation (AAD) because it gives a tighter bound on the error compared to the mean squared error (MSE) or the mean absolute deviation (MAD), as shown in Equation 16. We take the average of the residuals divided by the ground truth for every sample in our evaluation set. If the residual is close to zero, it contributes almost nothing to the measure; on the contrary, if the residual is high, we want to know the difference from the ground truth.

(16) \mathrm{AAD} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{\hat{y}_i - y_i}{y_i} \right|

where \hat{y}_i is the predicted value, y_i is the ground truth, and N is the length of the flattened matrix.
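A minimal sketch of the metric, assuming nonzero ground-truth entries:

```python
import numpy as np

def aad(pred, truth):
    """Absolute Average Deviation (Equation 16) over flattened matrices:
    the mean of |prediction - truth| / |truth| across all N entries."""
    p = np.asarray(pred, dtype=float).ravel()
    t = np.asarray(truth, dtype=float).ravel()
    return float(np.mean(np.abs((p - t) / t)))
```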
We observe that building the model with more data in the initial (static) phase leads to lower errors than having less data in the initial phase and using more of the data for updates. The observation matches our intuition: when operating in online mode, you tend to use a smaller storage space, but there is still a performance trade-off when compared to batch mode.
The error at the beginning of training is significant in both charts. This insight shows that, rather than performing the expensive operation of converting a covariance matrix to be positive definite, it is better to initialize with random matrices that are already positive definite. The estimates then converge as more data arrives.
The success of the experiments gives us confidence that our multivariate anomaly detection algorithm has characteristics similar to the univariate case described in (Carter and Streilein, 2012).
6. Conclusion
There is no generic anomaly detection algorithm that works for every possible task. The underlying assumption in this work is that the features in use capture relevant information about the underlying dynamics of the system. Our proposed implementation is an anomaly detection algorithm for handling multivariate streams, even with challenging shifts. In future work, we will extend it to support non-stationary distributions in multivariate data streams.
References
- Box and Jenkins (1976) George E. P. Box and Gwilym M. Jenkins. 1976. Time Series Analysis: Forecasting and Control. Holden-Day.
- Carter and Streilein (2012) Kevin M. Carter and William W. Streilein. 2012. Probabilistic reasoning for streaming anomaly detection. In Proceedings of the Statistical Signal Processing Workshop. 377–380.
- Ditzler and Polikar (2013) Gregory Ditzler and Robi Polikar. 2013. Incremental Learning of Concept Drift from Streaming Imbalanced Data. IEEE Transactions on Knowledge and Data Engineering 25, 10 (2013), 2283–2301.
- Hodge and Austin (2004) Victoria Hodge and Jim Austin. 2004. A Survey of Outlier Detection Methodologies. Artificial Intelligence Review 22, 2 (2004), 85–126.
- Igel et al. (2006) Christian Igel, Thorsten Suttorp, and Nikolaus Hansen. 2006. A computational efficient covariance matrix update and a (1+1)-CMA for evolution strategies. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO. Washington, USA.
- Muth (1960) John F. Muth. 1960. Optimal Properties of Exponentially Weighted Forecasts. J. Amer. Statist. Assoc. 55, 290 (1960), 299–306.
- Roberts (1982) S. A. Roberts. 1982. A General Class of Holt-Winters Type Forecasting Models. Journal of Management Science 28, 7 (1982), 808–820.
- Salehi and Rashidi (2018) Mahsa Salehi and Lida Rashidi. 2018. A Survey on Anomaly Detection in Evolving Data: With Application to Forest Fire Risk Prediction. SIGKDD Explorations Newsletter 20, 1 (2018), 13–23.
- Yule (1927) G. Udny Yule. 1927. On a Method of Investigating Periodicities in Disturbed Series, with Special Reference to Wolfer’s Sunspot Numbers. Philosophical Transactions of the Royal Society of London 226 (1927), 267–298.