
Efficient Traffic State Forecasting using Spatio-Temporal Network Dependencies: A Sparse Graph Neural Network Approach

Bin Lei, Shaoyi Huang, Caiwen Ding, and Monika Filipovska. †Corresponding author. Bin Lei, Shaoyi Huang, and Caiwen Ding are with the Department of Computer Science and Engineering at the University of Connecticut, Storrs, CT, USA. Monika Filipovska is with the Department of Civil and Environmental Engineering at the University of Connecticut, Storrs, CT, USA. (E-mail: {bin.lei, shaoyi.huang, caiwen.ding, monika.filipovska}@uconn.edu).
Abstract

Traffic state prediction in a transportation network is paramount for effective traffic operations and management, as well as informed user and system-level decision-making. However, long-term traffic prediction (beyond 30 minutes into the future) remains challenging in current research. In this work, we integrate the spatio-temporal dependencies of the transportation network, obtained from network modeling, with the graph convolutional network (GCN) and the graph attention network (GAT). To further tackle the dramatic computation and memory cost of the giant model size (i.e., number of weights) resulting from multiple cascaded layers, we propose sparse training to mitigate the training cost while preserving the prediction accuracy. Sparse training maintains a fixed number of nonzero weights in each layer at every training iteration.

We consider the problem of long-term traffic speed forecasting for a real large-scale transportation network using data from the California Department of Transportation (Caltrans) Performance Measurement System (PeMS). Experimental results show that the proposed GCN-STGT and GAT-STGT models achieve low prediction errors on short-, mid- and long-term prediction horizons of 15, 30 and 45 minutes, respectively. Using our sparse training, we can train from scratch with high sparsity (e.g., up to 90%), equivalent to a 10× reduction in floating point operations (FLOPs) and computational cost, using the same number of epochs as dense training, and arrive at a model with very small accuracy loss compared with the original dense training.

Index Terms:
Graph Neural Network, Spatio-Temporal, Traffic State Forecasting, Sparse Training, Efficient

I Introduction

Traffic state forecasting is a central problem in Intelligent Transportation Systems (ITS), since predictions of the state of traffic in a transportation network are paramount for effective traffic operations and management, as well as informed user and system-level decision-making [1]. In recent years, traffic forecasting has been enabled by the increasing quantity and quality of available traffic data. However, accurate modeling and prediction of traffic state, especially across large-scale networks, is a highly nonlinear and complex problem. The network performance is susceptible to fluctuations due to a combination of factors including traffic incidents, weather conditions, work zones, user choices, etc. [2]. Furthermore, the effects of these factors on traffic spread spatially through the transportation network, and often in a temporally lagged manner as traffic propagates through the network.

Short-term traffic state prediction approaches are common in the literature, with both statistical and machine learning methods performing well for 1 to 5-minute prediction horizons [3]. However, longer-term traffic prediction, for example beyond 30 minutes into the future, presents a more difficult problem. This is expected, since it is more difficult to anticipate how traffic patterns will spread spatially and temporally over longer prediction horizons, and the potential effects of a range of measurable and immeasurable external factors become more uncertain [4].

In the literature on traffic state prediction, common early approaches come from time-series analysis [5, 6, 7]. The immediate limitation of time-series approaches is their singular focus on the temporal component, without accounting for spatial effects. Thus, time-series-based approaches are often applied to individual road segments or small isolated sequences of road segments. However, previous work has shown the importance of incorporating both spatially and temporally lagged features in traffic prediction [8, 9]. Furthermore, with the growing availability of traffic data, machine learning methods are becoming more common and necessary for traffic prediction tasks, due to their ability to capture more complex phenomena.

This study considers the problem of long-term traffic speed forecasting for a real large-scale transportation network, via an approach that integrates the understanding of spatio-temporal dependencies in the transportation network from network modeling, together with graph-based neural network approaches suitable for large-scale data analysis and complex modeling. Several strategies have been proposed for modeling temporal dynamics and spatial dependencies in the literature, and we focus on two architectures based on graph neural networks (GNNs): the graph convolutional network (GCN) [10] and the graph attention network (GAT) [11]. An important consideration for implementing GNN models on real-world large-scale graphs for internet-of-things (IoT) applications is the outsized training computational cost as GNN model sizes continue to grow. Training such large-scale GNNs requires high-end servers with expensive GPUs that are difficult to maintain. A recent study shows that it can take 1600 GPU days to train a GNN model with 10.7 million parameters [12]. The computational challenge of training massive GNNs with billions of parameters on large-scale graphs is an important and emerging problem in the machine learning community.

The contribution of this paper is three-fold. (1) It proposes a spatio-temporal GNN-based approach for traffic prediction that uses the underlying structure of the transportation network to extract spatio-temporal features from traffic speed data throughout the network. (2) To accelerate GNN training and inference, we use a sparse training method, which converts the computation from dense to sparse by removing a large portion of the weights (e.g., up to 90%), thus reducing the memory consumption and the total amount of computation with small accuracy loss. (3) The numerical experiments demonstrate the applicability of the proposed approach to a large-scale transportation network and its transferability to time periods not seen during training.

II Background

II-A Statistical Approaches for Traffic Prediction

Traffic state forecasting problems focus on predicting traffic during a given time interval or prediction horizon in terms of aggregate macroscopic characteristics of traffic flow: speed, density, or flow. Speed is most often used as the quantity of interest in prediction, as a more user-centric measure of traffic state compared to flow or density. Prediction of traffic flow levels is also common as it can offer insights into the level of congestion. Earlier studies model the evolution of traffic state over time as a time series, as introduced by Ahmed and Cook [5], and use time-series forecasting methods for prediction such as ARIMA [6, 13, 7] and Kalman filtering methods [14]. These approaches initially focused on univariate prediction at a given location, and were later extended to multivariate problems that aim to simultaneously capture traffic state at various locations with approaches such as space-time ARIMA (STARIMA) [15, 16], and combined with adaptive Kalman filtering [17].

II-B Machine Learning for Traffic Prediction

With the increase of data availability in transportation systems, data-driven machine learning approaches for traffic forecasting have become more common in recent years. Many of the initial studies focus on short-term prediction, such as [18], which uses multi-layer perceptron models as the simplest form of neural networks, and [19, 20], which employ deep belief networks. Wu and Tan [21] present an approach for short-term traffic flow forecasting with spatio-temporal correlation in a hybrid deep-learning framework combining convolutional neural networks (CNN) and long short-term memory (LSTM) networks. Tahlyan et al. [22] present a meta-learner ensemble framework for real-time short-term traffic speed forecasting. Some recent studies also focus on longer-term prediction, such as [23] presenting a deep learning stacked auto-encoder approach for a prediction horizon of up to 60 minutes, and [24] using an LSTM neural network to capture long-term temporal dependence.

While all of the studies referenced above consider traffic prediction from a primarily temporal aspect at isolated locations, a few recent studies consider the problem of spatio-temporal traffic state prediction. Li et al.[25] propose a diffusion convolutional recurrent neural network (RNN) to capture spatial dependency in traffic state predictions. Yu et al. [26] consider the problem of large-scale network-wide traffic speed prediction and implement a spatiotemporal recurrent convolutional network (SRCN) approach for short term prediction. Ma et al. [27] approach a similar problem using a convolutional neural-network (CNN) model aiming to capture spatiotemporal traffic dynamics, which was shown to be well suited for short to medium-term prediction of 10 to 20 minutes.

The recent emergence of GNNs has helped advance the methods for spatio-temporal traffic forecasting across networks. For mid- and long-term prediction tasks across space, [28] introduce a graph convolutional network (GCN), referred to as spatio-temporal GCN (STGCN), while [29] and [30] both present a spatial-temporal graph attention network (STGAT) approach. The key difference between the STGCN and STGAT approaches is the use of a graph convolutional network (GCN) in the former and a graph attention network (GAT) in the latter. GAT modifies the convolution operation in a GCN with an attention mechanism whose purpose is to perform node classification of graph-structured data by computing hidden representations of each node in the graph by attending over the node’s neighbors. Thus, GAT performs the functions of a GCN while attuning itself to the particular features that are most important in the network. GAT also promises efficient (parallelizable) operations, application to graph nodes with different degrees, and applicability to inductive learning problems including models that are generalizable to previously unseen graphs. Details on the GAT architecture can be found in the original work [31].

An important limitation of the spatio-temporal models in recent studies [28, 29, 30] is that they do not take into account the underlying transportation road network. Instead, they construct artificial distance-based adjacency matrices using Euclidean distance to infer the connections between locations and generate the graphs. Thus, the spatial component does not account for the structure of the transportation network and the actual flow of traffic over space, but is simply based on the spatial proximity of the locations where data is observed.

II-C Weight Sparsification in Graph Neural Networks

State-of-the-art deep neural networks have large model sizes and computational requirements, which limits their ability to provide a user-friendly experience [32, 33, 34, 35, 36]. To address these challenges, weight sparsification has been investigated. The key idea is to represent a neural network with a much simpler model (setting a group of weight values to zero), thereby bringing acceleration in computation and reduction in memory [37, 38, 39, 40, 41, 42, 43]. Recent work by Chen et al. [44] proposes a weight sparsification framework for GNNs, called Unified GNN Sparsification (UGS), that prunes the input graph as well as the model weights using the well-studied lottery ticket hypothesis pruning method [45]. However, these works mainly focus on inference efficiency and often require additional training epochs.

III Problem Definition

The problem considered in this paper is as follows. Given the observations of aggregate traffic speeds across a number of spatially distributed road segments in a transportation network for a historical time horizon $H$, the goal is to predict the aggregate traffic speed for the same locations over a future time horizon $T$. The road segments at which traffic speeds are observed are connected to one another to form the transportation network on which traffic flows between these spatially distributed locations. Towards this goal, we define the underlying transportation network as a graph $G(N,A,W)$, where $N$ is the set of nodes (i.e., vertices), $A$ is the set of links (i.e., edges) connecting those vertices, and $W\in\mathbb{R}^{|N|\times|N|}$ is a weighted adjacency matrix for the network. Typical representations of transportation networks model the intersections as nodes connected by road segments, which are modeled as links in the graph. However, in defining this problem, there is a need to model the traffic monitoring stations (i.e., sensors at fixed locations) on road segments as the set of nodes $N$ and use the links $A$ to model the connections between those road segments. Therefore, $G(N,A,W)$ is a transformed version of the road network, so that the nodes $i\in N$ correspond to the $|N|$ observation locations, i.e., the stations yielding traffic speed measurements, and the links $(i,j)\in A$ are constructed where there are direct station-to-station connections on the road network. Thus, if $(i,j)\in A$, then the stations $i$ and $j$ are consecutive or connected by a consecutive sequence of road segments on which traffic flows directly from $i$ to $j$, with no intermediate stations along the way. The weight matrix supplies additional information regarding the connections between nodes. Firstly, let $w_{ij}\in W$ denote the $(i,j)$ element of the $|N|\times|N|$ matrix, with

w_{ij} = \begin{cases} w(i,j), & \text{if } (i,j)\in A \\ 0, & \text{otherwise} \end{cases}    (1)

where the function $w\colon (i,j)\in A \rightarrow \mathbb{R}$ is a measure of closeness between the connected nodes $i$ and $j$. The measure of closeness is often inversely related to distance, denoting the relatedness between observations at nodes $i$ and $j$.

To define this problem mathematically, suppose that time is discretized into short fixed-duration intervals $\delta$, referred to as time steps, so that the historical horizon $H$ consists of $|H|$ time steps and the prediction horizon $T$ consists of $|T|$ time steps. Suppose $N$, as defined above, is the set of locations where traffic is observed, numbering $|N|$ locations. Then an observation of traffic speeds for a single time step $t$ is a vector of size $|N|$, denoted $\boldsymbol{V}_{t}=[v_{t}^{1},\dots,v_{t}^{|N|}]\in\mathbb{R}^{|N|}$, where $v_{t}^{i}\in\mathbb{R}$ is a scalar value for the speed at location $i\in N$. Thus, at a current time $t$, the goal is to find $\hat{\boldsymbol{V}}_{t+1},\hat{\boldsymbol{V}}_{t+2},\dots,\hat{\boldsymbol{V}}_{t+|T|}$ based on the observed $\boldsymbol{V}_{t-|H|+1},\dots,\boldsymbol{V}_{t-1},\boldsymbol{V}_{t}$ so as to maximize

\log P\left(\boldsymbol{V}_{t+1},\dots,\boldsymbol{V}_{t+|T|} \,\middle|\, \boldsymbol{V}_{t-|H|+1},\dots,\boldsymbol{V}_{t-1},\boldsymbol{V}_{t}\right)    (2)

where $P(\cdot)$ denotes the unknown, dynamic, data-generating process.

IV Methods

IV-A Preliminaries

As seen above, traffic state prediction can be seen as a time-series forecasting problem, where the traffic state in future time intervals is predicted based on observations during past time intervals. However, traffic in the transportation network evolves over both space and time, and temporal patterns are not independent of spatial dependencies due to the flow of traffic across the network. This paper presents a deep learning approach that predicts traffic state across a transportation network over medium- to long-term prediction horizons. The proposed solution approach should be able to capture and employ knowledge of the spatio-temporal dependencies in traffic state, thus it uses the graph representation of the underlying transportation network as the basis for the GNN-based model. The transportation network is transformed into a graph, as described in the previous section, with a weighted adjacency matrix determined via Equation 1, where the weights are computed as $w(i,j)=e^{-\omega d_{ij}}$, with $d_{ij}$ the road-network distance between stations $i,j\in N$ and $\omega$ a scaling weight factor.
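As an illustration, the following is a minimal Python sketch of how the weighted adjacency matrix of Equation 1 could be assembled from a list of station-to-station links with known road-network distances; the link list, the distances, and the value of the scaling factor omega below are placeholders rather than the values used in our experiments.

```python
import numpy as np

def build_weight_matrix(num_nodes, links, omega=0.05):
    """Assemble the weighted adjacency matrix W of Equation (1).

    links: iterable of (i, j, d_ij) tuples, where d_ij is the road-network
           distance between stations i and j and (i, j) is a link in A.
    omega: scaling weight factor in w(i, j) = exp(-omega * d_ij).
    """
    W = np.zeros((num_nodes, num_nodes))
    for i, j, d_ij in links:
        W[i, j] = np.exp(-omega * d_ij)  # closeness decays with road distance
    return W

# Hypothetical toy network: 3 stations, two directed station-to-station links
W = build_weight_matrix(3, [(0, 1, 1.2), (1, 2, 0.8)])
```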

IV-B Proposed Model Architecture

The proposed spatio-temporal GNN-based traffic prediction approach (STGT) combines neural network components intended to capture spatial and temporal features and patterns in traffic data. The input to the model is the multi-dimensional time series of the traffic speeds observed over the past $|H|$ time intervals in a historical time horizon across the $|N|$ observation locations, denoted $\boldsymbol{V}_{t}=[v_{t}^{1},\dots,v_{t}^{|N|}]\in\mathbb{R}^{|N|}$. This input feeds into a spatial GNN-based block that extracts and learns the spatial features. We implement two versions of the model, where the spatial GNN-based component can be a GCN or a GAT block, referred to as GCN-STGT and GAT-STGT, respectively. This is followed by an RNN block using a two-layer long short-term memory (LSTM) network to perform temporal feature extraction. Finally, the RNN output passes through a fully connected network to generate the final predictions. The model architecture is shown in Figure 1, and the details of the model are described below.
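To make the dataflow concrete, the sketch below outlines one possible PyTorch realization of the GCN-STGT variant (a graph-convolutional transform, a two-layer LSTM, and a fully connected output layer). The layer sizes, the toy graph, and the simple A·XW form of the graph convolution are illustrative assumptions, not the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class GCNSTGT(nn.Module):
    """Illustrative GCN-STGT: graph convolution -> two-layer LSTM -> fully connected output."""

    def __init__(self, num_nodes, in_feats, gcn_out, lstm_hidden, horizon):
        super().__init__()
        self.gcn_linear = nn.Linear(in_feats, gcn_out)   # shared feature transform
        self.lstm = nn.LSTM(num_nodes * gcn_out, lstm_hidden,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, num_nodes * horizon)
        self.num_nodes, self.horizon = num_nodes, horizon

    def forward(self, x, adj):
        # x: (batch, seq_len, num_nodes, in_feats); adj: (num_nodes, num_nodes)
        h = torch.relu(torch.einsum('nm,bsmf->bsnf', adj, self.gcn_linear(x)))
        h = h.reshape(h.size(0), h.size(1), -1)          # flatten nodes for the LSTM
        out, _ = self.lstm(h)                            # temporal feature extraction
        pred = self.fc(out[:, -1])                       # predict from the last step
        return pred.view(-1, self.horizon, self.num_nodes)

# Hypothetical toy run: 12 past time steps, 4 stations, 3-step prediction horizon
model = GCNSTGT(num_nodes=4, in_feats=1, gcn_out=8, lstm_hidden=32, horizon=3)
y = model(torch.randn(2, 12, 4, 1), torch.rand(4, 4))   # y: (2, 3, 4)
```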

Figure 1: Spatio-Temporal Graph Convolutional Network Model Architecture

IV-B1 Spatio-temporal Representation of Traffic Data

While GNN models are able to leverage node features to capture the spatial dependency of a graph, there are demonstrated challenges in employing them with traffic data. Specifically, Zhang et al. [29] show that it can be difficult to find a proper feature representation of time-series data over space and propose a Speed-to-Vector (Speed2Vec) data embedding mechanism to define feasible feature representations of the time-series data to be used within GNNs. The present study adopts the Speed2Vec representation, where the speed observations at a given node in a fixed-duration historical time horizon are taken as the hidden feature at a time step, embedded in a vector $h_{t}=[v_{t-F+1},v_{t-F+2},\dots,v_{t}]$, where $h_{t}\in\mathbb{R}^{F}$ and $F$ is the duration of the historical time horizon in terms of number of time steps. Then, these vectors are used to create the network-wide input $H_{S}^{N}$ to the spatial GNN block, such that:

H_{S}^{N} = \begin{bmatrix} h_{1}^{1} & \cdots & h_{S}^{1} \\ \vdots & \ddots & \vdots \\ h_{1}^{|N|} & \cdots & h_{S}^{|N|} \end{bmatrix}    (3)

where $H_{S}^{N}\in\mathbb{R}^{S\times|N|\times F}$, $S$ is the length of the temporal sequence, and $|N|$ is the number of nodes (i.e., stations generating traffic observations). The historical horizon duration $F$ can vary depending on the specific dataset and application of the approach. For example, if 1-minute observations are collected, the historical horizon might contain a larger number of time steps than when 5-minute data are available. Similarly, the historical horizon might need to be adjusted depending on the desired prediction horizon $T$, as specified in the problem definition.
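The following is a minimal sketch of how the Speed2Vec-style input tensor of Equation 3 could be constructed from a matrix of speed observations; the function and variable names are illustrative, and the window lengths are placeholders.

```python
import numpy as np

def speed2vec(speeds, F, S):
    """Build the Speed2Vec input tensor H_S^N of Equation (3).

    speeds: array of shape (num_time_steps, num_nodes) of speed observations.
    F:      length of the historical window embedded as the node feature h_t.
    S:      length of the temporal sequence fed to the spatial block.
    Returns an array of shape (S, num_nodes, F).
    """
    T, N = speeds.shape
    assert T >= F + S - 1, "not enough observations for the requested windows"
    return np.stack([speeds[s:s + F].T for s in range(S)])   # (S, N, F)

# Hypothetical toy example: 20 time steps, 3 stations, F = 12, S = 6
H = speed2vec(np.random.rand(20, 3), F=12, S=6)   # shape (6, 3, 12)
```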

IV-B2 Spatial Feature Extraction

The proposed model architecture is tested with two types of GNNs for spatial feature extraction: a GCN and a GAT block. The GAT version is implemented according to the original GAT formulation [11] and employs a multi-head attention mechanism to enable the model to jointly learn spatial dependencies through multiple attention blocks. The GAT uses the provided graph structure, in this case the transportation road network, and performs self-attention over the nodes $i\in N$ to compute the so-called attention coefficients $e_{ij}$ for each of the neighboring nodes $j\in N$ s.t. $(i,j)\in A$, which indicate the importance of node $j$'s features to node $i$. Therefore, the GAT learns a weight matrix for the relatedness between the graph nodes. The GCN architecture, on the other hand, is simpler, in that the weight matrix is taken as provided to the model, in this case the inverse-distance-based weight matrix defined previously. Comparing the performance of the GCN-STGT and GAT-STGT model architectures allows for an understanding of how much of the attention information can be captured by the interpretable inverse-distance-based weight matrix in GCN-STGT, and how much improvement can be brought by the attention mechanism employed in GAT-STGT.
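For intuition, the sketch below shows how the unnormalized attention coefficients $e_{ij}$ of a single GAT head could be computed following the original formulation in [11] (a shared linear transform, concatenation of the two node embeddings, and a LeakyReLU); the per-node softmax normalization, multi-head aggregation, and all names below are simplifications or assumptions rather than our exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gat_attention_coefficients(h, edges, W, a):
    """Unnormalized attention coefficients e_ij for a single GAT head.

    h:     node features of shape (num_nodes, in_feats)
    edges: list of (i, j) pairs with (i, j) in A
    W:     nn.Linear mapping in_feats -> out_feats (shared transform)
    a:     attention vector of shape (2 * out_feats,)
    """
    z = W(h)                                     # transformed node features
    e = {}
    for i, j in edges:
        # e_ij = LeakyReLU(a^T [W h_i || W h_j]); softmax over j follows in a full layer
        e[(i, j)] = F.leaky_relu(torch.dot(a, torch.cat([z[i], z[j]])))
    return e

# Hypothetical toy graph with 3 nodes and 2 directed edges
e = gat_attention_coefficients(torch.randn(3, 4), [(0, 1), (1, 2)],
                               nn.Linear(4, 8, bias=False), torch.randn(16))
```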

IV-B3 Temporal Feature Extraction

The traffic state data, with its distinct time-series format is suitable for use with recurrent neural networks (RNNs) which can be leveraged to learn temporal dependencies for time-series prediction. In particular, LSTM networks are the most commonly used RNNs, especially when long-term dependencies need to be captured [46]. The LSTM architecture uses gating units and cell states for the flow of information for long-term time series prediction. While the cell states store memory information and pass through all the time iterations, the gating units are used to decide whether to add or remove information for a cell state. Additionally, LSTM uses so-called forget gates which decide which information can be removed from a cell state. More details and the mathematical expressions of an LSTM with a forget gate are presented in the original work by Gers et al. [47].

IV-C Sparse Model Structure

To reduce the number of arithmetic operations to be performed and the number of weights to be stored in GCN-STGT and GAT-STGT, we sparsify the model by removing some of the neuronal connections. In general, sparsifying the model can reduce the computing time significantly with almost no loss of accuracy [48]. The details of the approach are shown in Figure 2: first, a random sparse model is initialized; then, at regular intervals, the $n$ weights with the smallest absolute magnitudes are dropped, while $n$ new connections with the largest absolute gradients are grown.

Figure 2: Iterative drop & grow based sparse training process.

Dynamic sparse training is the process of training with a fixed number of nonzero weights in each neural network layer. We use a toy example to illustrate the sparse training dataflow. For simplicity, we use a matrix of size $4\times 4$ to represent a weight tensor in the neural network. Sparse training comprises the following four steps:

  • Step 1: The weight tensor is randomly sparsified as $W^{0}$ at a given sparsity $S=0.5$, which means that 50% of the weights are deactivated (set to zero) while the others remain active (non-zero).

  • Step 2: The sparsified tensor is trained for $\Delta T-1$ iterations, where $\Delta T$ is the drop-and-grow frequency. During these $\Delta T-1$ iterations, the non-zero elements of the weight tensor are updated following the standard training process, while the zero elements remain zero. At the $i$-th iteration, the weight tensor is denoted $W^{i}$ and the gradient $G^{i}$.

  • Step 3: At the $\Delta T$-th iteration, we first drop the $k$ weights that are closest to zero, i.e., we set the weights with the $k$ smallest absolute magnitudes to zero ($k=2$ in the toy example).

  • Step 4: We then grow the weights with the $k$ largest absolute gradients back to nonzero (these weights are updated to nonzero values in the following weight-update iteration). Throughout the process, the number of active weights is kept constant, i.e., the number of newly activated (non-zero) weights equals the number of previously deactivated (zero) weights.

Steps 2-4 are repeated until the end of training.

See Algorithm 1 for the detailed procedure.

Algorithm 1 Sparse training algorithm
1: Input: weights of each layer $W^{l}$, sparse mask of each layer $M^{l}$, iteration counter $N$, update frequency $f$, drop rate $k$, sparsity rate $d$
2: Output: updated sparse mask $M^{l}_{new}$, updated weight matrix $W^{l}_{new}$
3: if $N == 0$ then
4:     $M^{l} \leftarrow \mathrm{Initialize}()$
5: end if
6: if $N \bmod f == 0$ then
7:     for each layer $\in$ model do
8:         $W^{l}_{Sorted} \leftarrow \mathrm{Sort}(W^{l}.\mathrm{view}(-1))$
9:         for $M^{l}_{ij} \in M^{l}$ do
10:            if $W^{l}_{ij} \geq W^{l}_{Sorted}[\mathrm{len}(W^{l}_{Sorted}) \times k]$ then
11:                $M^{l}_{ij} \leftarrow 1$
12:            else
13:                $M^{l}_{ij} \leftarrow 0$
14:            end if
15:        end for
16:    end for
17:    for each layer $\in$ model do
18:        $g^{W^{l}} \leftarrow W^{l}.\mathrm{grad.clone}()$
19:        $g^{W^{l}}_{Sorted} \leftarrow \mathrm{Sort}(g^{W^{l}}.\mathrm{view}(-1))$
20:        $g^{W^{l}}_{Sorted} \leftarrow g^{W^{l}}_{Sorted} \odot M^{l}$
21:        for $M^{l}_{ij} \in M^{l}$ do
22:            if $g^{W^{l}}_{ij} \geq g^{W^{l}}_{Sorted}[\mathrm{len}(g^{W^{l}}_{Sorted}) \times (1-k)]$ then
23:                $M^{l}_{ij} \leftarrow 1$
24:            end if
25:        end for
26:    end for
27:    for each layer $\in$ model do
28:        $W^{l} \leftarrow W^{l} \odot M^{l}$
29:    end for
30: end if
31: return $W^{l}, M^{l}$
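For readers who prefer code, the following is a simplified PyTorch sketch of the drop-and-grow update in Algorithm 1 for a single layer; the top-k selection by weight magnitude and by gradient magnitude follows the same logic as Algorithm 1, but the function and variable names are illustrative and omit details of our actual implementation.

```python
import torch

def drop_and_grow(weight, grad, sparsity, drop_rate):
    """One simplified drop-and-grow update (Algorithm 1) for a single layer.

    weight:    weight tensor in which zeros mark inactive connections
    grad:      dense gradient of the same shape, available at this iteration
    sparsity:  target fraction of zero weights, kept constant
    drop_rate: fraction of active weights dropped and regrown
    """
    num_active = int(weight.numel() * (1.0 - sparsity))
    num_drop = int(num_active * drop_rate)

    # Drop: keep only the (num_active - num_drop) largest-magnitude weights.
    mask = torch.zeros(weight.numel(), dtype=torch.bool)
    mask[torch.topk(weight.abs().flatten(), num_active - num_drop).indices] = True

    # Grow: reactivate the num_drop inactive positions with the largest gradients;
    # they start at zero and receive nonzero values in the next weight update.
    scores = grad.abs().flatten().clone()
    scores[mask] = -1.0                           # exclude already-active positions
    mask[torch.topk(scores, num_drop).indices] = True

    new_weight = weight.flatten() * mask.to(weight.dtype)   # zero out dropped weights
    return new_weight.view_as(weight), mask.view_as(weight)

# Hypothetical toy usage on a 4x4 weight tensor at 50% sparsity
w = torch.randn(4, 4) * (torch.rand(4, 4) > 0.5).float()
w, m = drop_and_grow(w, torch.randn(4, 4), sparsity=0.5, drop_rate=0.5)
```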

V Experiments

V-A Data Description

The numerical experiments use data from the California Department of Transportation (Caltrans) Performance Measurement System (PeMS), which collects real-time information from nearly 40,000 sensor stations across the freeway system of California and provides an Archived Data User Service (ADUS) containing over 10 years of data for historical analysis. Traffic state prediction in this study is performed using the 5-minute station data, which include aggregate information regarding the traffic speed, flow, and occupancy on road segments at sensor locations. Specifically, we select the stations located in District 7, encompassing the Los Angeles metropolitan area, shown on the map in Fig. 3.

The primary data for this analysis were the 5-minute data from district 7 stations for May and June of 2019. Data for six additional months were also obtained to test the transferability of the model across time periods when significant changes in demand and traffic patterns may have occurred. The station metadata for district 7 were used to obtain the location, corresponding road and direction of travel for each sensor station. This information was combined with the California Road System (CRS) web map and functional classification of roads.

The original data contained information from a total of 4,854 sensor stations within the area of District 7. In the pre-processing stage, two types of missing data patterns were found: (1) some days had no observations (or had only NaN-valued observations) for all or most stations, and (2) certain stations had missing data for multiple full days in the specified time period. The data were cleaned by first removing all days where the information for more than half of the stations was missing, and then keeping only the set of working stations across the remaining days. After processing, 2,471 working stations remained to be used across all of the 8 months, with a few days of data removed for each month as needed. The locations of these working stations are represented with the yellow pins on the map in Figure 3.

The process for combining the PeMS Station Metadata with the CRS data to create a graph structure based on the underlying network structure is outlined briefly, following the approach described in the problem definition. First, a set of nodes was created at all road intersections to capture the connectedness of the road network. Second, detectors were matched to the nearest location on the corresponding road, and the graph nodes $N$ were created at these locations. Third, a set of graph edges $A$ was created along the roads and across intersections to connect any two adjacent detector locations. The roads connecting the stations are shown as purple lines in Figure 3, where only relevant roads containing at least one sensor station are displayed. This information was used to generate the weighted adjacency matrix $W$, with weights computed for each pair of connected nodes as described previously. The weights were computed using distances along the network roads (i.e., not Euclidean or aerial distances), for the adjacent stations only.
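As a simplified illustration of the edge-construction step, the sketch below connects consecutive detectors along each freeway and direction using their postmile positions; the column names are hypothetical placeholders for the PeMS station metadata fields, and the sketch omits the connections across intersections captured by the full CRS-based procedure.

```python
import pandas as pd

def station_edges(meta):
    """Connect consecutive detector stations along each freeway and direction.

    meta: DataFrame with hypothetical columns 'station_id', 'freeway',
          'direction' and 'abs_postmile' taken from the PeMS station metadata.
    Returns a list of (i, j, d_ij) tuples with road distances in miles.
    """
    edges = []
    for _, grp in meta.groupby(['freeway', 'direction']):
        grp = grp.sort_values('abs_postmile')
        ids, pm = grp['station_id'].tolist(), grp['abs_postmile'].tolist()
        for a, b, d0, d1 in zip(ids[:-1], ids[1:], pm[:-1], pm[1:]):
            edges.append((a, b, d1 - d0))   # traffic flows a -> b along the road
    return edges

# The resulting edge list could then populate the weighted adjacency matrix W.
```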

Figure 3: Locations of the 2,471 working sensor stations in District 7 on the corresponding road network

V-B Experiment Design

The numerical experiments are designed for training, validation, and testing of the proposed STGT approach using 60 minutes of historical speed data from all 2,471 working sensors as input for the prediction of the traffic speed over the next 15 to 45 minutes across the same stations. Specifically, the models of the two types, GCN-STGT and GAT-STGT, were trained for short-, mid- and long-term prediction horizons of 15, 30 and 45 minutes, respectively. The models are trained on data from the 53 available days in May and June of 2019, which will be referred to as the training time period 0, abbreviated TP0. Training is performed across 200 epochs with validation, and the models are then tested on a separate subset of the data not used for training or validation.

We conduct our GNN sparse training on an Intel Xeon Gold 5218 machine at 2.30 GHz running Ubuntu 18.04, using an Nvidia Quadro RTX 6000 GPU with 24 GB of GPU memory. We use Python 3.10.8, CUDA 11.6, and PyTorch 1.12.1 to set up the code. We set the total number of epochs to 200 and the batch size to 242. For sparse training, the model is initialized with the Erdős–Rényi-Kernel (ERK) sparsity distribution [49]. We set the drop rate to 0.5 and the drop-and-grow frequency to 1000 batch iterations. We set the model sparsity to 0.025, 0.05, 0.1, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 0.9, 0.95 and 0.975 to evaluate the performance of the sparse training method and the floating point operations (FLOPs) compared with the dense model. The performance of the approach is measured using the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Squared Error (RMSE).

To further evaluate the performance of the models on unseen and potentially different traffic patterns, the models are tested via zero-shot transfer with data from different time periods. A total of 3 testing scenarios (time periods) are created:

  • Testing time period 1 (TP1) during July and August 2019, immediately following the training period. Shifts in traffic patterns are possible in this period due to changes in lifestyle and travel needs during the summer.

  • Testing time period 2 (TP2) during December 2019 and January 2020, i.e. the following winter season. Though weather changes may not be significant in southern California, shifts in traffic patterns could occur due to seasonal shifts in travel demand.

  • Testing time period 3 (TP3) during May and June 2020, exactly a year after TP0 which would be expected to be most similar to TP0 due to the typical seasonality in transportation data. However, some important changes could have occurred due to the impacts of the COVID-19 pandemic causing shifts in activity and travel patterns.

V-C Evaluation Results

The primary results from the numerical experiments are shown in Tables I and II, where each table includes the error measures for GCN-STGT and GAT-STGT with three different prediction horizons and across all four time periods described above, for the original models and with a 90% sparsity level, respectively. Overall, comparing the error values between GCN-STGT and GAT-STGT, it can be observed that the use of GAT helps reduce the MAPE by about 1 percentage point in most cases. This indicates that the inverse-distance-based weights derived from the road network are able to capture some of the spatial dependencies and allow for a relatively well-performing model; however, the attention mechanism can further improve its performance. Comparing the values in Tables I and II, we observe a very small increase in error values, indicating that the models' accuracy is preserved while introducing a significant level of sparsity. For the sake of brevity, the error measurements with respect to sparsity are shown for only two cases, the GCN-STGT and GAT-STGT models with a 45-minute prediction horizon, in Figures 4 and 5, respectively.

V-C1 Temporal Prediction Performance

In Table I, before introducing model sparsity, each model's performance can be compared across the three prediction horizons for a given dataset. For TP0, the GCN-STGT prediction error is highest for the 45-minute horizon; however, the performance is not significantly different between the three prediction horizons. With the GAT-STGT approach, the prediction errors are lower than those achieved by GCN-STGT, and the differences in the prediction errors for the horizons in the range from 15 to 45 minutes are no longer significant. These results indicate that the proposed approach is suitable for both mid- and long-term traffic prediction.

Considering the zero-shot transfer tests with the datasets from the different time periods, some interesting observations can be made. As expected, errors are lowest when testing on a subset of the data within the original time period (TP0), with MAPE averaging close to 4% with GCN-STGT and 3% with GAT-STGT. However, the models' transferability differs between the three testing time periods. Namely, the MAPE values for TP3 are consistently the lowest, averaging close to 10% with GCN-STGT and 9% with GAT-STGT. This indicates that zero-shot transfer is possible for TP3. However, the MAPE values for TP2 are close to 22% with GCN-STGT and 21% with GAT-STGT, while for TP1 the highest error values are observed, with MAPE near 24% for both model types. This indicates that the models may need to be retrained for TP1 and TP2. This lack of transferability over time, especially for the nearest time period (TP1), reinforces the need for a model that can be trained efficiently, which in this case is made possible by the sparse training techniques discussed in the following section.

V-C2 Sparse Training Accuracy Evaluation

Tables I and II show the comparison of GCN-STGT and GAT-STGT on TP1, TP2 and TP3 for three different prediction horizons (15 min, 30 min, 45 min), with 0% sparsity and with a sparsity level of 90%, respectively. We use three error measurements: MAE, RMSE and MAPE. The test results on the three datasets show that, in most cases, the average error of the GAT-STGT method is lower than that of the GCN-STGT method. The difference between the two approaches is more significant for the 30-minute and 15-minute predictions.

Using typical training time (a total of 200 training epochs), there is almost no accuracy loss compared to the dense model, even at a sparsity of 90%, on all three datasets. For the GCN-STGT method, the error increases only slightly until the sparsity reaches 85%. For the GAT-STGT method, the accuracy-sparsity curve exhibits the Occam's Hill property [50], where accuracy first increases with increasing sparsity and then decreases. Therefore, weight sparsification is favorable for this specific task.

TABLE I: Comparison of the error values of GCN-STGT and GAT-STGT for different prediction horizons with 0% sparsity
     Dataset      Error      GCN-STGT      GAT-STGT
     45min      30min      15min      45min      30min      15min
     Training (TP0)      MAE      2.053      1.945      1.988      1.829      1.870      1.951
     RMSE      4.325      3.77      3.897      3.657      3.861      3.803
     MAPE      4.194      3.350      3.407      3.121      3.379      3.319
     Time period 1      MAE      8.051      10.67      9.515      8.082      7.585      7.468
     RMSE      11.95      12.78      13.21      10.46      11.76      11.69
     MAPE      24.73      24.02      24.91      24.58      24.16      23.85
     Time period 2      MAE      7.410      9.618      8.687      7.280      6.910      6.798
     RMSE      11.27      11.56      12.39      10.63      10.87      10.83
     MAPE      22.03      21.00      22.11      21.76      21.28      21.12
     Time period 3      MAE      4.246      5.450      5.091      4.056      4.043      4.053
     RMSE      7.031      7.698      7.484      6.249      6.753      6.653
     MAPE      9.978      10.01      10.48      9.273      9.250      9.216
TABLE II: Comparison of the error values of GCN-STGT and GAT-STGT for different prediction horizons with 90% sparsity
     Dataset      Error      GCN-STGT      GAT-STGT
     45min      30min      15min      45min      30min      15min
     Training (TP0)      MAE      2.786      2.664      2.318      3.189      2.500      2.146
     RMSE      4.850      4.776      4.575      5.404      4.616      4.621
     MAPE      5.647      5.418      4.938      6.319      5.247      4.787
     Time period 1      MAE      8.273      10.07      9.947      8.246      8.463      8.608
     RMSE      12.14      14.85      13.85      12.589      12.26      12.36
     MAPE      24.51      26.25      25.41      24.87      25.10      24.88
     Time period 2      MAE      7.477      10.25      9.214      7.468      7.633      7.873
     RMSE      11.24      13.89      12.93      11.10      11.29      11.44
     MAPE      21.71      23.67      22.88      21.71      22.15      22.09
     Time period 3      MAE      4.552      6.024      5.471      4.563      4.642      4.789
     RMSE      6.954      8.054      7.843      6.956      7.040      7.130
     MAPE      9.900      12.19      11.03      9.921      10.09      10.23

Figures 4 and 5 evaluate the accuracy of sparse training at different model sparsity levels. We observe that the error is generally stable until the sparsity reaches 85% or even 90%. The turning point (marked with red circles) appears when the sparsity reaches about 85%. Therefore, as long as we keep the sparsity below 85%, there is negligible impact on the accuracy of the trained model, while we can achieve significant reductions in computation and memory footprint on the computing system.

Figure 4: Mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE) vs. sparsity for testing time periods 1, 2 and 3, using GCN-STGT with a 45-minute prediction horizon
Figure 5: Mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE) vs. sparsity for testing time periods 1, 2 and 3, using GAT-STGT with a 45-minute prediction horizon

V-C3 Training FLOPs Evaluation

We use the number of floating point operations (FLOPs) to compare the amount of computation between the sparse training version and the dense training version (i.e., 0% sparsity) for both GCN-STGT and GAT-STGT. The ratio of FLOPs of the sparse model to the dense model obeys the following relationship:

\frac{FLOPs_{sparse}}{FLOPs_{dense}} = \frac{3\Delta T - 3d\Delta T - 2d + 3}{3(\Delta T + 1)}    (4)

where $\Delta T$ is the drop-and-grow frequency and $d$ is the sparsity rate.

The calculation of the FLOPs is shown below. Let

\Theta^{flop}_{M_{1\times j},\,M_{j\times 1}} = \xi    (6)

where $\Theta^{flop}_{M_{1\times j},\,M_{j\times 1}}$ denotes the asymptotic bound on the FLOPs of the multiplication between a matrix $M_{1\times j}$ and a matrix $M_{j\times 1}$. Then

\Theta^{flop}_{M_{i\times j},\,M_{j\times k}} = ik\xi    (7)

For a simple fully connected layer,

\Theta^{flop}_{\textit{fully connected}} = IO\xi    (8)

where $I$ is the input dimensionality and $O$ is the output dimensionality. For convolutional kernels,

\Theta^{flop}_{\textit{convolutional}} = L_{h}L_{w}C_{in}C_{out}K^{2}\xi    (9)

where $L_{h}$, $L_{w}$ and $C_{in}$ are the height, width and number of channels of the input feature map, $K$ is the kernel width (assumed to be symmetric), and $C_{out}$ is the number of output channels.

Training a neural network consists mainly of three steps: computing the loss, computing the gradients, and updating the weights. In this estimation, it is assumed that all three steps require the same number of FLOPs. Thus, one iteration of sparse training costs $3f_{s}$ FLOPs, where $f_{s}$ is the FLOPs of one such step in sparse training, and one iteration of dense training costs $3f_{d}$ FLOPs, where $f_{d}$ is the FLOPs of one such step in dense training. Because the dense gradients must be computed to update the connections every $\Delta T$ iterations, the average FLOPs per iteration of our model are

FLOPs = \frac{3f_{s}\Delta T + 2f_{s} + f_{d}}{\Delta T + 1}    (10)

When the fully connected layers have more nodes and a larger number of layers,

\Theta^{flop}_{\textit{model}} = \sum_{i=1}^{L} I_{i}O_{i}\xi    (11)

where $i$ denotes the $i$-th layer. For the sparse model,

\Theta^{flop}_{\textit{model}} = \sum_{i=1}^{L} I_{i}O_{i}\xi(1-d)    (12)

where $d$ denotes the sparsity rate. Thus the relationship between $f_{d}$ and $f_{s}$ is approximately

f_{s} = f_{d}\times(1-d)    (13)

Hence,

FLOPs = f_{d}\,\frac{3\Delta T - 3d\Delta T - 2d + 3}{\Delta T + 1}    (14)
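As a worked example of Equation 4, with the drop-and-grow frequency $\Delta T = 1000$ used in our experiments and a sparsity of $d = 0.9$, the ratio evaluates to roughly 0.10, i.e., about a 10× reduction in training FLOPs; the short snippet below reproduces this arithmetic.

```python
def flops_ratio(delta_t, d):
    """Sparse-to-dense training FLOPs ratio from Equation (4)."""
    return (3 * delta_t - 3 * d * delta_t - 2 * d + 3) / (3 * (delta_t + 1))

print(flops_ratio(1000, 0.9))   # ~0.100, i.e. roughly a 10x reduction
```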

The ratio of FLOPs of the sparse models with respect to different sparsity levels is shown in Fig. 6. For a fixed update frequency (every 1,000 iterations in our model), the sparsity is linearly related to the FLOPs. According to Fig. 4 and Fig. 5, our sparse training method stabilizes the prediction accuracy at sparsity levels of up to 90%. Therefore, we can achieve a 10× FLOPs reduction throughout the training process when the number of training epochs is kept the same. This can increase the speed of operations by nearly ten times.

Figure 6: The relationship between sparsity and FLOPs ratio for sparse training.

VI Conclusion

In this work, to mitigate the long-term traffic prediction challenge, we integrate the spatio-temporal dependencies of the transportation network, obtained from network modeling, with the GCN and GAT. To further tackle the dramatic computation and memory cost of the giant model size (i.e., number of weights) resulting from multiple cascaded layers, we propose sparse training to mitigate the training cost while preserving the prediction accuracy. We test the proposed methods on real large-scale transportation network data from the Archived Data User Service (ADUS) of the California Department of Transportation (Caltrans) Performance Measurement System (PeMS). Experimental results show that the proposed GCN-STGT and GAT-STGT methods achieve very low prediction errors for short-, mid- and long-term prediction horizons. Using our sparse training, we can train from scratch with high sparsity (e.g., 90%), equivalent to a 10× saving in computational cost using the same number of epochs as dense training, and arrive at a model with very small accuracy loss compared with the original dense training.

References

  • [1] T. Luttrell, W. Sampson, D. Ismart, and D. Matherly, “Predicting performance with traffic analysis tools,” tech. rep., United States. Federal Highway Administration, 2008.
  • [2] Y. Chen, J. Kim, and H. S. Mahmassani, “Operational scenario definition in traffic simulation-based decision support systems: Pattern recognition using a clustering algorithm,” Journal of Transportation Engineering, Part A: Systems, vol. 145, no. 4, p. 04019008, 2019.
  • [3] A. M. Nagy and V. Simon, “Survey on traffic prediction in smart cities,” Pervasive and Mobile Computing, vol. 50, pp. 148–163, 2018.
  • [4] M. Filipovska, H. S. Mahmassani, and A. Mittal, “Estimation of path travel time distributions in stochastic time-varying networks with correlations,” Transportation Research Record, vol. 2675, no. 11, pp. 498–508, 2021.
  • [5] M. S. Ahmed and A. R. Cook, Analysis of freeway traffic time-series data by using Box-Jenkins techniques. No. 722, 1979.
  • [6] B. M. Williams and L. A. Hoel, “Modeling and forecasting vehicular traffic flow as a seasonal arima process: Theoretical basis and empirical results,” Journal of transportation engineering, vol. 129, no. 6, pp. 664–672, 2003.
  • [7] H. Tavana and H. S. Mahmassani, “Estimation and application of dynamic speed-density relations by using transfer function models,” Transportation Research Record, vol. 1710, no. 1, pp. 47–57, 2000.
  • [8] M. Filipovska, H. S. Mahmassani, and A. Mittal, “Prediction and mitigation of flow breakdown occurrence for weather affected networks: Case study of chicago, illinois,” Transportation Research Record: Journal of the Transportation Research Board, 2019.
  • [9] M. Filipovska and H. S. Mahmassani, “Traffic flow breakdown prediction using machine learning approaches,” Transportation research record, vol. 2674, no. 10, pp. 560–570, 2020.
  • [10] J. Bruna, W. Zaremba, A. Szlam, and Y. Lecun, “Spectral networks and locally connected networks on graphs international conference on learning representations (iclr2014),” CBLS, April, 2014.
  • [11] P. Velickovic et al., “Graph attention networks,” ArXiv, vol. abs/1710.10903, 2018.
  • [12] W. Hu, M. Shuaibi, A. Das, S. Goyal, A. Sriram, J. Leskovec, D. Parikh, and C. L. Zitnick, “Forcenet: A graph neural network for large-scale quantum calculations,” arXiv preprint arXiv:2103.01436, 2021.
  • [13] M. Van Der Voort, M. Dougherty, and S. Watson, “Combining kohonen maps with arima time series models to forecast traffic flow,” Transportation Research Part C: Emerging Technologies, vol. 4, no. 5, pp. 307–318, 1996.
  • [14] J. Whittaker, S. Garside, and K. Lindveld, “Tracking and predicting a network traffic process,” International Journal of Forecasting, vol. 13, no. 1, pp. 51–61, 1997.
  • [15] A. Stathopoulos and M. G. Karlaftis, “A multivariate state space approach for urban traffic flow modeling and prediction,” Transportation Research Part C: Emerging Technologies, vol. 11, no. 2, pp. 121–135, 2003.
  • [16] Y. Kamarianakis and P. Prastacos, “Forecasting traffic flow conditions in an urban network: Comparison of multivariate and univariate approaches,” Transportation Research Record, vol. 1857, no. 1, pp. 74–84, 2003.
  • [17] J. Guo, W. Huang, and B. M. Williams, “Adaptive kalman filter approach for stochastic short-term traffic flow rate prediction and uncertainty quantification,” Transportation Research Part C: Emerging Technologies, vol. 43, pp. 50–64, 2014.
  • [18] S. H. Hosseini, B. Moshiri, A. Rahimi-Kian, and B. N. Araabi, “Short-term traffic flow forecasting by mutual information and artificial neural networks,” in 2012 IEEE International Conference on Industrial Technology, pp. 1136–1141, IEEE, 2012.
  • [19] Y. Jia, J. Wu, and Y. Du, “Traffic speed prediction using deep learning method,” in 2016 IEEE 19th international conference on intelligent transportation systems (ITSC), pp. 1217–1222, IEEE, 2016.
  • [20] W. Huang, G. Song, H. Hong, and K. Xie, “Deep architecture for traffic flow prediction: deep belief networks with multitask learning,” IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 5, pp. 2191–2201, 2014.
  • [21] Y. Wu and H. Tan, “Short-term traffic flow forecasting with spatial-temporal correlation in a hybrid deep learning framework,” arXiv preprint arXiv:1612.01022, 2016.
  • [22] D. Tahlyan, E. Kim, and H. S. Mahmassani, “A meta-learner ensemble framework for real-time short-term traffic speed forecasting,” in Transportation Research Board 100th Annual Meeting, pp. 1–17, Transportation Research Board USA, 2021.
  • [23] Y. Lv, Y. Duan, W. Kang, Z. Li, and F.-Y. Wang, “Traffic flow prediction with big data: a deep learning approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 2, pp. 865–873, 2014.
  • [24] X. Ma, Z. Tao, Y. Wang, H. Yu, and Y. Wang, “Long short-term memory neural network for traffic speed prediction using remote microwave sensor data,” Transportation Research Part C: Emerging Technologies, vol. 54, pp. 187–197, 2015.
  • [25] Y. Li, R. Yu, C. Shahabi, and Y. Liu, “Diffusion convolutional recurrent neural network: Data-driven traffic forecasting,” arXiv preprint arXiv:1707.01926, 2017.
  • [26] H. Yu, Z. Wu, S. Wang, Y. Wang, and X. Ma, “Spatiotemporal recurrent convolutional networks for traffic prediction in transportation networks,” Sensors, vol. 17, no. 7, p. 1501, 2017.
  • [27] X. Ma, Z. Dai, Z. He, J. Ma, Y. Wang, and Y. Wang, “Learning traffic as images: a deep convolutional neural network for large-scale transportation network speed prediction,” Sensors, vol. 17, no. 4, p. 818, 2017.
  • [28] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,” arXiv preprint arXiv:1709.04875, 2017.
  • [29] C. Zhang, J. James, and Y. Liu, “Spatial-temporal graph attention networks: A deep learning approach for traffic forecasting,” IEEE Access, vol. 7, pp. 166246–166256, 2019.
  • [30] X. Kong, W. Xing, X. Wei, P. Bao, J. Zhang, and W. Lu, “Stgat: spatial-temporal graph attention networks for traffic flow forecasting,” IEEE Access, vol. 8, pp. 134363–134372, 2020.
  • [31] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, “Graph attention networks,” arXiv preprint arXiv:1710.10903, 2017.
  • [32] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
  • [33] T. B. Brown, B. P. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krüger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. J. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, “Language models are few-shot learners,” 2020.
  • [34] S. Huang, D. Xu, I. Yen, Y. Wang, S.-E. Chang, B. Li, S. Chen, M. Xie, S. Rajasekaran, H. Liu, et al., “Sparse progressive distillation: Resolving overfitting under pretrain-and-finetune paradigm,” in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 190–200, 2022.
  • [35] S. Huang, N. Liu, Y. Liang, H. Peng, H. Li, D. Xu, M. Xie, and C. Ding, “An automatic and efficient bert pruning for edge ai systems,” in 2022 23rd International Symposium on Quality Electronic Design (ISQED), pp. 1–6, IEEE, 2022.
  • [36] S. Huang, S. Chen, H. Peng, D. Manu, Z. Kong, G. Yuan, L. Yang, S. Wang, H. Liu, and C. Ding, “Hmc-tran: A tensor-core inspired hierarchical model compression for transformer-based dnns on gpu,” in Proceedings of the 2021 on Great Lakes Symposium on VLSI, pp. 169–174, 2021.
  • [37] S. Chen, S. Huang, S. Pandey, B. Li, G. R. Gao, L. Zheng, C. Ding, and H. Liu, “Et: re-thinking self-attention for transformer models on gpus,” in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–18, 2021.
  • [38] H. Peng, S. Huang, T. Geng, A. Li, W. Jiang, H. Liu, S. Wang, and C. Ding, “Accelerating transformer-based deep learning models on fpgas using column balanced block pruning,” in 2021 22nd International Symposium on Quality Electronic Design (ISQED), pp. 142–148, IEEE, 2021.
  • [39] H. Peng, S. Huang, S. Chen, B. Li, T. Geng, A. Li, W. Jiang, W. Wen, J. Bi, H. Liu, et al., “A length adaptive algorithm-hardware co-design of transformer on fpga through sparse attention and dynamic pipelining,” in Proceedings of the 59th ACM/IEEE Design Automation Conference, pp. 1135–1140, 2022.
  • [40] P. Qi, E. H.-M. Sha, Q. Zhuge, H. Peng, S. Huang, Z. Kong, Y. Song, and B. Li, “Accelerating framework of transformer by hardware design and model compression co-optimization,” in 2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD), pp. 1–9, IEEE, 2021.
  • [41] P. Qi, Y. Song, H. Peng, S. Huang, Q. Zhuge, and E. H.-M. Sha, “Accommodating transformer onto fpga: Coupling the balanced model compression and fpga-implementation optimization,” in Proceedings of the 2021 on Great Lakes Symposium on VLSI, pp. 163–168, 2021.
  • [42] H. Peng, D. Gurevin, S. Huang, T. Geng, W. Jiang, O. Khan, and C. Ding, “Towards sparsification of graph neural networks,” arXiv preprint arXiv:2209.04766, 2022.
  • [43] D. Manu, S. Huang, C. Ding, and L. Yang, “Co-exploration of graph neural network and network-on-chip design using automl,” in Proceedings of the 2021 on Great Lakes Symposium on VLSI, pp. 175–180, 2021.
  • [44] T. Chen, Y. Sui, X. Chen, A. Zhang, and Z. Wang, “A unified lottery ticket hypothesis for graph neural networks,” in ICML, 2021.
  • [45] J. Frankle and M. Carbin, “The lottery ticket hypothesis: Finding sparse, trainable neural networks,” in ICLR, 2018.
  • [46] Y. Yu, X. Si, C. Hu, and J. Zhang, “A review of recurrent neural networks: Lstm cells and network architectures,” Neural computation, vol. 31, no. 7, pp. 1235–1270, 2019.
  • [47] F. A. Gers, J. Schmidhuber, and F. Cummins, “Learning to forget: Continual prediction with lstm,” Neural computation, vol. 12, no. 10, pp. 2451–2471, 2000.
  • [48] T. Hoefler, D. Alistarh, T. Ben-Nun, N. Dryden, and A. Peste, “Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks.,” J. Mach. Learn. Res., vol. 22, no. 241, pp. 1–124, 2021.
  • [49] U. Evci, T. Gale, J. Menick, P. S. Castro, and E. Elsen, “Rigging the lottery: Making all tickets winners,” in ICML, pp. 2943–2952, PMLR, 2020.
  • [50] C. Rasmussen and Z. Ghahramani, “Occam’s razor,” ICLR, vol. 13, 2000.

VII Biography Section

[Uncaptioned image] Bin Lei is a Master's student in the Department of Computer Science and Engineering at the University of Connecticut, Storrs, Connecticut, USA. His research interests include machine learning and algorithms.
[Uncaptioned image] Shaoyi Huang (Student Member, IEEE) is a third-year Ph.D. student in the Department of Computer Science and Engineering at the University of Connecticut, Storrs, Connecticut, USA. Her research interests include deep learning, efficient AI, software/hardware co-design, natural language processing, and computer vision.
[Uncaptioned image] Caiwen Ding is an assistant professor in the Department of Computer Science & Engineering at the University of Connecticut. He received his Ph.D. degree from Northeastern University (NEU), Boston in 2019. His interests include machine learning systems; computer architecture and heterogeneous computing (FPGAs/GPUs); computer vision, natural language processing; non-von Neumann computing and neuromorphic computing; privacy-preserving machine learning.
[Uncaptioned image] Monika Filipovska is an assistant professor in the Department of Civil and Environmental Engineering at the University of Connecticut. She received her Ph.D. in transportation systems from Northwestern University. Her research focuses on predictive and prescriptive analytics for dynamic transportation networks and intelligent transportation systems, including applications of emerging vehicle and infrastructure technologies.