STC-Flow: Spatio-temporal Context-aware Optical Flow Estimation
Abstract
In this paper, we propose a spatio-temporal contextual network, STC-Flow, for optical flow estimation. Unlike previous optical flow estimation approaches with local pyramidal feature extraction and multi-level correlation, we propose a contextual relation exploration architecture that captures rich long-range dependencies in the spatial and temporal dimensions. Specifically, STC-Flow contains three key context modules — a pyramidal spatial context module, a temporal context correlation module and a recurrent residual contextual upsampling module, which build relationships in the feature extraction, correlation, and flow reconstruction stages, respectively. Experimental results indicate that the proposed scheme achieves state-of-the-art performance among two-frame based methods on the Sintel dataset and the KITTI 2012/2015 datasets.
I Introduction
Optical flow estimation is an important yet challenging problem in the field of video analytics. Recently, deep learning based approaches have been extensively exploited to estimate optical flow via convolutional neural networks (CNNs). Despite the great efforts and rapid developments, the advances are not as significant as those in single-image based computer vision tasks. The main reason is that optical flow is not directly measurable in the wild, and it is challenging to model motion dynamics with pixel-wise correspondences between two consecutive frames that may contain variable motion displacements; thus optical flow estimation requires efficient feature representations to match different moving objects or scenes.

Conventional methods, such as DeepFlow [1] and EpicFlow [2], estimate optical flow with handcrafted algorithms that match features between two frames. Most of these methods, however, are complicated, have heavy computational complexity, and usually fail for motions with large displacements. CNN-based methods, which usually utilize encoder-decoder architectures with pyramidal feature extraction and flow reconstruction, such as FlowNet [3], SPyNet [4] and PWC-Net [5], boost the state-of-the-art performance of optical flow estimation and outperform conventional methods. However, lower-level features contain rich details but have small receptive fields, which is not effective for capturing large motion displacements, while higher-level features highlight the overall outlines or shapes of objects with fewer details, which may cause misalignment of objects with complex shapes or non-rigid motions. It is therefore essential to capture context information with a large receptive field and long-range dependencies, so as to build a global relationship for each stage of the CNN.
In this paper, as shown in Figure 1, we propose an end-to-end architecture for optical flow estimation with a jointly and effectively learned spatio-temporal contextual network. To build relationships in the feature extraction, correlation and flow reconstruction stages, respectively, the network contains three key context modules: (a) the pyramidal spatial context module, which enhances the discriminative ability of feature representations in the spatial dimension; (b) the temporal context correlation module, which models the global spatio-temporal relationships of the cost volume calculated by the correlation operation; and (c) the recurrent residual contextual upsampling module, which leverages the underlying content of the predicted flow field between adjacent levels to learn high-frequency features and preserve edges within a large receptive field.
In summary, the main contributions of this work are three-fold:
• We propose a general contextual attention framework for efficient feature representation learning, which supports multiple inputs and complicated target operations.
• Based on this framework, we propose corresponding context modules for the feature extraction, correlation and optical flow reconstruction stages.
• Our network achieves state-of-the-art performance on the Sintel and KITTI datasets for two-frame based optical flow estimation.
II Related Work
Optical flow estimation. Inspired by the success of CNNs, various deep networks for optical flow estimation have been proposed. Dosovitskiy et al. [3] establish FlowNet, an important CNN-based exploration of optical flow estimation with an encoder-decoder architecture, of which two variants, FlowNetS and FlowNetC, are proposed with simple operations. However, the number of parameters is large, and the correlation computation is heavy. Ilg et al. [6] propose a cascaded network based on FlowNetS and FlowNetC with milestone performance, at the cost of a huge number of parameters and expensive computational complexity.
To reduce the number of parameters, Ranjan et al. [4] present the compact SPyNet with a spatial pyramid for multi-level representation learning. Hui et al. [7] propose LiteFlowNet and Sun et al. [5] propose PWC-Net, which pioneer the trend toward lightweight optical flow estimation networks. LiteFlowNet [7] involves cascaded flow inference with flow warping and feature matching. PWC-Net [5] utilizes pyramidal feature extraction and feature warping to construct the cost volume, and uses a context network for optical flow refinement. HD3 [8] decomposes the full match density into hierarchical features to estimate local matching, with heavy computational complexity. IRR [9] involves an iterative residual refinement scheme and integrates occlusion prediction as additional auxiliary supervision. SelFlow [10] uses reliable flow predictions from non-occluded pixels to learn optical flow for hallucinated occlusions from multiple frames, yielding better performance.

Context modeling in neural networks. Context modeling has been successfully applied to capture long-range dependencies. Since a typical convolution operator has a local receptive field, context learning allows an individual element to be affected by aggregating information from all elements. Many recent works utilize spatial self-attention to emphasize the features of key local regions [11, 12]. The object relation module [13] extends the original attention to geometric relationships and can be applied to improve object detection and other tasks. DANet [14] and CBAM [15] introduce channel-wise attention via the self-attention mechanism. The global context network [16] effectively models global context with a lightweight architecture. The non-local network [17] aggregates spatial and temporal long-range dependencies across video frames via self-attention blocks embedded in a 3D CNN.
For the optical flow estimation task, spatial contextual information helps to refine details and deal with occlusion. PWC-Net [5] contains a context network with stacked dilated convolution layers for flow post-processing. In LiteFlowNet [7], a flow regularization layer is applied to ameliorate the issue of outliers and fake edges. IRR [9] utilizes bilateral filters to refine blurry flow and occlusion. Nevertheless, previous work on context modeling in optical flow estimation mainly focuses on spatial features. For motion context modeling, it is essential to provide an elegant framework that explores both spatial and temporal information. Accordingly, our network introduces spatial and temporal context modules, and also introduces a recurrent context module to upsample the spatial features of the predicted flow field.
III STC-Flow
Given a pair of video frames, scenes or objects are diverse in motion velocity and direction in the temporal dimension, and in scale, viewpoint and luminance in the spatial dimension. The convolution operations in CNNs process only a local neighborhood, so stacks of convolutions still yield a limited receptive field. Pixels belonging to the same object share similar texture features even when their motions differ; these textures introduce false-positive correlations, which result in wrong optical flow predictions.
To address this issue, our method, i.e. STC-Flow, models contextual information by building global associations of intra-/extra-features with the attention mechanism in spatial and temporal dimensions respectively. The network could adaptively aggregate long-range contextual information, thus optimizing feature representation in feature extraction, correlation, and reconstruction stages, as shown in Figure 2. In this section, we first introduce the contextual attention framework with single or multiple inputs for efficient feature representation learning. Based on the framework, we then propose three key context modules: pyramidal spatial context (PSC) module, temporal context correlation (TCC) module and recurrent residual contextual upsampling (RRCU) module for modeling contextual information.
III-A Contextual Attention Framework
Analysis on Attention Mechanism. To capture long-range dependencies and model contextual details of single images or video clips, the basic non-local network [17] aggregates pixel-wise information via a self-attention mechanism. We denote $x$ and $y$ as the input and output signals, e.g. a single image or a video clip. The non-local block can be expressed as follows:
$y_i = \frac{1}{\mathcal{C}(x)} \sum_{\forall j} f(x_i, x_j)\, g(x_j)$   (1)
where $i$ and $j$ are the indices of the target position and of all possible enumerated positions, respectively. $f(x_i, x_j)$ denotes the relationship between positions $i$ and $j$, which is normalized by a factor $\mathcal{C}(x)$, and $g(x_j)$ is a linear embedding of the input at position $j$. The matrix multiplication operation is utilized to strengthen the details of each query position. Embedded Gaussian is a widely-used instantiation of $f$, which computes similarity in an embedding space and is normalized with a softmax function over all positions $j$, i.e. a soft selection of positions for each query. The non-local block with embedded Gaussian is shown in Figure 3(b), and is expressed as follows:
$y_i = \sum_{\forall j} \frac{\exp\big((W_\theta x_i)^{\top} W_\phi x_j\big)}{\sum_{\forall k} \exp\big((W_\theta x_i)^{\top} W_\phi x_k\big)}\, W_g x_j$   (2)
where $W_\theta$, $W_\phi$ and $W_g$ are linear transformation matrices.
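For concreteness, here is a minimal PyTorch sketch of the embedded-Gaussian non-local block in Eq. (2); the channel reduction and layer names are our assumptions rather than the exact configuration of [17].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block of Eq. (2); the channel reduction is an assumed choice."""
    def __init__(self, channels, inter_channels=None):
        super().__init__()
        inter_channels = inter_channels or channels // 2
        self.theta = nn.Conv2d(channels, inter_channels, kernel_size=1)  # W_theta
        self.phi = nn.Conv2d(channels, inter_channels, kernel_size=1)    # W_phi
        self.g = nn.Conv2d(channels, inter_channels, kernel_size=1)      # W_g
        self.out = nn.Conv2d(inter_channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        theta = self.theta(x).flatten(2).transpose(1, 2)      # B x HW x C'
        phi = self.phi(x).flatten(2)                          # B x C' x HW
        g = self.g(x).flatten(2).transpose(1, 2)              # B x HW x C'
        attn = F.softmax(theta @ phi, dim=-1)                 # B x HW x HW, softmax over positions j
        y = (attn @ g).transpose(1, 2).reshape(b, -1, h, w)   # aggregate values from all positions
        return x + self.out(y)                                # residual connection
```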

Why attention for optical flow estimation? Here, we discuss the relation between the correlation operation in optical flow estimation and the matrix multiplication in the self-attention mechanism. We aim to explore the contextual information from the input pairs. Denote the feature pair by $X^1$ and $X^2$. To distinguish the features from different input paths, we mark the size of features with height $H$, width $W$ and channel number $C$ ($X^1, X^2 \in \mathbb{R}^{H \times W \times C}$), and denote the position coordinate by $\mathbf{x}$ and the channel index by $c$. As the key function in optical flow estimation, the “correlation” operation between two patches of the feature pair, $X^1(\mathbf{x}_1)$ and $X^2(\mathbf{x}_2)$, is defined as follows for temporal modeling:
$c(\mathbf{x}_1, \mathbf{x}_2) = \sum_{\mathbf{o} \in [-d, d] \times [-d, d]} \big\langle X^1(\mathbf{x}_1 + \mathbf{o}),\ X^2(\mathbf{x}_2 + \mathbf{o}) \big\rangle$   (3)
where $c(\mathbf{x}_1, \mathbf{x}_2)$ denotes the cost volume calculated via correlation, and $\mathbf{o}$ denotes the offset of the correlation operation within the search region with maximum displacement $d$. Considering the matrix multiplication in the attention mechanism shown in Figure 4, the order of the two matrices in the multiplication leads to a great disparity of interpretation with respect to the displacements of correlation.
Discussions. In Figure 4(a), the expression is defined as $X^1 (X^2)^{\top}$, where $X^1, X^2 \in \mathbb{R}^{HW \times C}$ are the reshaped features. If $X^1 = X^2$, this operation strengthens the detail features of each position by aggregating information across channels from other positions, which corresponds to spatial attention integration at full resolution and is utilized in the basic non-local block [17]. However, if $X^1 \neq X^2$, following the definition of the cost volume, only the diagonal elements present the correlation with no displacement. On the contrary, the expression in Figure 4(b) is defined as $(X^1)^{\top} X^2$, which is a global correlation representation among channels aggregated at full resolution, and is essential to the naive correlation operation between feature pairs. For the different matrix multiplication forms, the attention maps capture dependencies with corresponding meanings in spatial features and temporal dynamics, which enhance the representation for input feature extraction and correlation calculation, respectively.
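To make the two views explicit, the sketch below contrasts the windowed correlation of Eq. (3) with the full-resolution matrix-multiplication view; the max displacement of 4 follows Section III-C, while the function names and normalization are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cost_volume(f1, f2, max_disp=4):
    """Windowed correlation of Eq. (3): channel-wise inner product for each offset o."""
    b, c, h, w = f1.shape
    f2_pad = F.pad(f2, [max_disp] * 4)
    vols = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = f2_pad[:, :, dy:dy + h, dx:dx + w]
            vols.append((f1 * shifted).sum(dim=1, keepdim=True) / c)   # normalized inner product
    return torch.cat(vols, dim=1)            # B x (2d+1)^2 x H x W

def full_resolution_correlation(f1, f2):
    """Matrix-multiplication view: every position of f1 against every position of f2."""
    b, c, h, w = f1.shape
    x1 = f1.flatten(2).transpose(1, 2)       # B x HW x C
    x2 = f2.flatten(2)                       # B x C x HW
    return (x1 @ x2) / c                     # B x HW x HW; the diagonal is the zero-displacement correlation
```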


Lite matrix multiplication. Considering the runtime of flow prediction, the matrix multiplication in the contextual attention block needs to be simplified to reduce computational complexity. As shown in Figure 5, exploiting the neighbor similarity of images or frame pairs, we propose a polyphase decomposition and reconstruction scheme to simplify the matrix multiplication operation, which obtains a better approximation than a naive downsampling-upsampling scheme and reduces the computational complexity compared to direct multiplication. Denote the polyphase decomposition factor as $k$ ($k \geq 1$). Given a reshaped feature $X \in \mathbb{R}^{HW \times C}$, the FLOPs of the entire multiplication are reduced from $\mathcal{O}((HW)^2 C)$ to $\mathcal{O}((HW)^2 C / k^2)$. The comparison of different factors is presented in Section IV.
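As one concrete illustration of this scheme, the sketch below decomposes the spatial grid into k x k strided phases, performs the attention matmul inside each phase, and re-interleaves the results; the use of pixel (un)shuffle for the decomposition and the self-attention form are our assumptions, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def lite_self_attention(x, k=2):
    """Polyphase-decomposed attention matmul, a sketch (H and W must be divisible by k).

    The H x W grid is split into k*k strided phases, the (HW/k^2) x (HW/k^2) attention is
    computed inside each phase, and the outputs are re-interleaved, so the matmul FLOPs
    drop by roughly a factor of k^2.
    """
    b, c, h, w = x.shape
    # polyphase decomposition: B x C x H x W -> (B*k*k) x C x (H/k) x (W/k)
    phases = F.pixel_unshuffle(x, k).reshape(b, c, k * k, h // k, w // k)
    phases = phases.permute(0, 2, 1, 3, 4).reshape(b * k * k, c, h // k, w // k)
    q = phases.flatten(2).transpose(1, 2)                   # (Bk^2) x hw x C
    attn = F.softmax(q @ phases.flatten(2), dim=-1)         # (Bk^2) x hw x hw
    y = (attn @ q).transpose(1, 2).reshape(b, k * k, c, h // k, w // k)
    # polyphase reconstruction back to B x C x H x W
    y = y.permute(0, 2, 1, 3, 4).reshape(b, c * k * k, h // k, w // k)
    return F.pixel_shuffle(y, k)
```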
Contextual Attention Framework. In general, the input of a CNN module is not limited to a single feature along a single path, and the attention block needs to be adapted to more than one input feature, e.g. the two input features of the correlation operation. As shown in Figure 3(a), the components of the attention block can be abstracted as follows:
• Attention aggregation: aggregates the attention integration feature into the intrinsic feature representation in each corresponding dimension, where the intrinsic representation often adopts basic operators such as interpolation, convolution and transposed convolution.
• Context transformation: transforms the aggregated attention via a 1×1 or 1D convolution, obtaining the contextual attention feature over all positions and channels.
• Target fusion: aggregates the output feature of the target operation with the contextual attention, where the target operation is the main function that attains the objective from the input features.
Denote $\{X^n\}$ as the multiple input features. We regard this abstraction as a contextual attention framework defined as follows:
$Y = \mathcal{F}_t\Big(\mathcal{T}(\{X^n\}),\ \alpha \cdot \mathcal{F}_a\big(\mathcal{A}(\{X^n\}), \{X^n\}\big)\Big)$   (4)
where $\mathcal{F}_a$ and $\mathcal{F}_t$ are the fusion operations for attention aggregation and target fusion, $\mathcal{T}$ and $\mathcal{A}$ denote the target operation and the attention integration for the input features, and $\alpha$ is the factor of the linear transformation. The non-local block and other attention modules are specific forms of the contextual attention block with a single input feature, e.g. $\{X^n\} = \{X\}$, where $\mathcal{T}$ is the all-pass (identity) function in the non-local block.
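The abstraction can be written as a generic module whose slots are filled per stage; the sketch below uses our own slot names and is only one way to realize Eq. (4).

```python
import torch.nn as nn

class ContextualAttentionBlock(nn.Module):
    """Abstract form of Eq. (4), a sketch; the slot names below are our notation.

    target_op : the target operation T on the inputs (e.g. identity, correlation).
    attention : the attention integration A over the inputs.
    aggregate : the attention aggregation F_a back to the feature representation.
    transform : the context transformation (e.g. a 1x1 or 1D convolution, scaled by alpha).
    fuse      : the target fusion F_t of the target output with the contextual attention.
    """
    def __init__(self, target_op, attention, aggregate, transform, fuse):
        super().__init__()
        self.target_op, self.attention = target_op, attention
        self.aggregate, self.transform, self.fuse = aggregate, transform, fuse

    def forward(self, *inputs):
        target = self.target_op(*inputs)                             # T({X^n})
        context = self.aggregate(self.attention(*inputs), inputs)    # F_a(A({X^n}), {X^n})
        return self.fuse(target, self.transform(context))            # F_t(., transformed context)
```

The PSC, TCC and RRCU modules in the following subsections can be read as instantiations of these five slots.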
III-B Pyramidal Spatial Context Module

Inspired by the non-local network and the global context network, we propose a pyramidal spatial context module with a tight dual-attention block to enhance the discriminative ability of feature representations in both the spatial position and channel dimensions. As shown in Figure 6, given a local feature $X_l$ at stage $l$, the calculation of the spatial context module is formulated as:
$Y_l = X_l + W_p\, \hat{\mathcal{A}}^{pos}_l(X_l) + W_c\, \hat{\mathcal{A}}^{chn}_l(X_l)$   (5)
where $\hat{\mathcal{A}}^{pos}_l$ and $\hat{\mathcal{A}}^{chn}_l$ are the contextual attention at stage $l$ fused with that of stage $l-1$, which aggregates context from different granularities:
$\hat{\mathcal{A}}^{pos}_l(X_l) = \big[\mathcal{A}^{pos}(X_l),\ \downarrow\!\hat{\mathcal{A}}^{pos}_{l-1}(X_{l-1})\big], \quad \hat{\mathcal{A}}^{chn}_l(X_l) = \big[\mathcal{A}^{chn}(X_l),\ \downarrow\!\hat{\mathcal{A}}^{chn}_{l-1}(X_{l-1})\big]$   (6)
where “$\downarrow$” denotes max-pooling, and “$[\cdot,\cdot]$” denotes the concatenation operator. $\mathcal{A}^{pos}$ and $\mathcal{A}^{chn}$ are the attention integrations in the position and channel dimensions, defined as follows (with $X$ reshaped to $\mathbb{R}^{HW \times C}$) to learn the spatial and channel interdependencies:
$\mathcal{A}^{pos}(X) = \operatorname{softmax}\big(X X^{\top}\big)\, X, \quad \mathcal{A}^{chn}(X) = X\, \operatorname{softmax}\big(X^{\top} X\big)$   (7)
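A minimal sketch of one PSC stage under the reconstruction above is given below; the position/channel attention follows the standard dual-attention form, and the channel bookkeeping for the cross-stage concatenation is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def position_attention(x):
    """Position attention integration: each position aggregates features from all positions."""
    b, c, h, w = x.shape
    q = x.flatten(2).transpose(1, 2)                      # B x HW x C
    attn = F.softmax(q @ x.flatten(2), dim=-1)            # B x HW x HW
    return (attn @ q).transpose(1, 2).reshape(b, c, h, w)

def channel_attention(x):
    """Channel attention integration: each channel aggregates features from all channels."""
    b, c, h, w = x.shape
    key = x.flatten(2)                                    # B x C x HW
    attn = F.softmax(key @ key.transpose(1, 2), dim=-1)   # B x C x C
    return (attn @ key).reshape(b, c, h, w)

class PSCModule(nn.Module):
    """Pyramidal spatial context at one stage, a sketch of Eqs. (5)-(7).

    Attention maps from the previous (finer) stage are max-pooled, concatenated with the
    current ones, reduced by 1x1 convolutions and added back to the input feature.
    prev_channels must match the channel width of the previous stage's returned attention.
    """
    def __init__(self, channels, prev_channels=0):
        super().__init__()
        self.conv_p = nn.Conv2d(channels + prev_channels, channels, kernel_size=1)
        self.conv_c = nn.Conv2d(channels + prev_channels, channels, kernel_size=1)

    def forward(self, x, prev_p=None, prev_c=None):
        a_p, a_c = position_attention(x), channel_attention(x)
        if prev_p is not None:   # aggregate context of a different granularity, Eq. (6)
            a_p = torch.cat([a_p, F.max_pool2d(prev_p, 2)], dim=1)
            a_c = torch.cat([a_c, F.max_pool2d(prev_c, 2)], dim=1)
        a_p, a_c = self.conv_p(a_p), self.conv_c(a_c)     # context transformation
        return x + a_p + a_c, a_p, a_c                    # target fusion, Eq. (5)
```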
III-C Temporal Context Correlation Module

While the spatial context module learns query-independent context relationships at the feature extraction stage, the temporal context correlation module is adopted to model the relationships in the correlation calculation. Following the analysis of matrix multiplication above, the full-resolution correlation is utilized to describe the global context of the correlation operation. As shown in Figure 7(a), given the local feature pair $X^1_l$ and $X^2_l$ from feature extraction, the contextual correlation is formulated as:
$\hat{c}_l = c\big(X^1_l, X^2_l\big) + W_z\, \mathcal{A}^{tmp}\big(X^1_l, X^2_l\big)$   (8)
where $\mathcal{A}^{tmp}$ is the temporal attention integration with the “cross-attention” mechanism, which is defined as follows:
$\mathcal{A}^{tmp}\big(X^1, X^2\big) = \operatorname{softmax}\big(X^1 (X^2)^{\top}\big)\, g\big([X^1, X^2]\big)$   (9)
Notice that the linear transformation $g$ is modeled by a 3D convolution followed by a $1\times1$ convolution, which aims to explore the temporal information across the time dimension. Since the max displacement of the correlation is set to 4, the kernel of the 3D convolution needs to cover all frames in the temporal dimension, and its height and width are greater than or equal to the max displacement, i.e. 5 in the proposed module.
The TCC module is a flexible correlation operator, and it can also be applied to the PWC module in PWC-Net [5] as a “Contextual PWC” module, to learn long-range dependencies between the reference feature and the warped feature.
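The sketch below gives one possible instantiation of Eqs. (8)-(9) under the assumptions stated there; all layer widths are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCCModule(nn.Module):
    """Temporal context correlation, one possible instantiation of Eqs. (8)-(9).

    The windowed cost volume (target operation) is fused with a cross-attention term:
    positions of frame 1 attend to positions of frame 2, and the value path applies a
    3D convolution over the stacked frame pair (kernel covering both frames, spatial
    extent 5 >= max displacement 4) followed by a 1x1 projection into cost-volume channels.
    """
    def __init__(self, channels, max_disp=4):
        super().__init__()
        self.max_disp = max_disp
        num_corr = (2 * max_disp + 1) ** 2
        self.value3d = nn.Conv3d(channels, channels, kernel_size=(2, 5, 5), padding=(0, 2, 2))
        self.proj = nn.Conv2d(channels, num_corr, kernel_size=1)

    def correlation(self, f1, f2):
        b, c, h, w = f1.shape
        f2_pad = F.pad(f2, [self.max_disp] * 4)
        vols = [(f1 * f2_pad[:, :, dy:dy + h, dx:dx + w]).sum(1, keepdim=True) / c
                for dy in range(2 * self.max_disp + 1)
                for dx in range(2 * self.max_disp + 1)]
        return torch.cat(vols, dim=1)                            # B x (2d+1)^2 x H x W

    def forward(self, f1, f2):
        b, c, h, w = f1.shape
        corr = self.correlation(f1, f2)                          # naive correlation (target operation)
        q = f1.flatten(2).transpose(1, 2)                        # B x HW x C
        attn = F.softmax(q @ f2.flatten(2) / c ** 0.5, dim=-1)   # B x HW x HW cross-attention map
        v = self.value3d(torch.stack([f1, f2], dim=2))           # B x C x 1 x H x W
        v = v.squeeze(2).flatten(2).transpose(1, 2)              # B x HW x C
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return corr + self.proj(ctx)                             # fusion into the cost volume, Eq. (8)
```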
III-D Recurrent Residual Contextual Upsampling

Different from the spatial and temporal context representation modeling, the reconstruction context learning is a detail-aware operation to learn high-frequency features and preserve edges within a large receptive field. In view of the multi-stage structure of reconstruction, we propose an efficient recurrent module for upsampling, which leverages the underlying content information between the current stage and the previous stage.
The predicted optical flow $f_l$ at stage $l$ and the flow upsampled from stage $l+1$ are first encoded by a $1\times1$ convolution with shared weights, and the residual at the smaller size is calculated from the encoder outputs. Denote this residual as $r_l$. Context modeling is then applied to $r_l$ to explore the upsampling attention kernels for each corresponding source position, and the kernels are fused back onto the bilinearly interpolated residual. Finally, the refined residual feature is reassembled with the interpolated flow to obtain the refined upsampled flow with rich details. The architecture is illustrated in Figure 8, and the formulation is expressed as follows:
$\hat{f}_{l\uparrow} = \mathcal{B}(f_l) + \mathcal{K}(r_l) * \mathcal{B}(r_l)$   (10)
where “$*$” denotes the position-wise convolution operator, and $\mathcal{B}$ here is a bilinear interpolation operator for $\times 2$ upsampling. $\mathcal{K}(r_l)$ denotes the adaptive attention kernels that model the detail context, defined as follows:
$\mathcal{K}(r_l) = \operatorname{softmax}\big(\mathcal{PS}_s(W_k\, r_l)\big)$   (11)
where $\mathcal{PS}_s$ denotes the “Pixel Shuffle” [18] operator for sub-pixel convolution, which reconstructs the sub-pixel information and preserves edges and textures. $s$ is the upsampling factor, and here $s = 2$.
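A minimal sketch consistent with the reconstruction of Eqs. (10)-(11) above; the encoder width, kernel size, and the exact way the reassembled residual is added back are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RRCUModule(nn.Module):
    """Recurrent residual contextual upsampling, a sketch of Eqs. (10)-(11).

    The current flow and the flow upsampled from the coarser stage are encoded by a shared
    1x1 convolution; their residual drives the prediction of per-position sub-pixel
    (pixel-shuffle) kernels, which reassemble the bilinearly interpolated residual on top
    of the bilinearly interpolated flow. Widths and kernel size are illustrative.
    """
    def __init__(self, enc_channels=16, kernel_size=3, scale=2):
        super().__init__()
        self.scale, self.k = scale, kernel_size
        self.encoder = nn.Conv2d(2, enc_channels, kernel_size=1)   # shared for both flows
        self.decoder = nn.Conv2d(enc_channels, 2, kernel_size=1)   # back to flow space
        self.kernel_pred = nn.Conv2d(enc_channels, kernel_size ** 2 * scale ** 2, kernel_size=1)

    def forward(self, flow, coarse_flow_up):
        # residual between the shared encodings of the two flows
        r = self.encoder(flow) - self.encoder(coarse_flow_up)                  # B x E x H x W
        # adaptive attention kernels via pixel shuffle, Eq. (11)
        kernels = F.softmax(F.pixel_shuffle(self.kernel_pred(r), self.scale), dim=1)  # B x k^2 x sH x sW
        # bilinear interpolation of flow and residual, then position-wise reassembly, Eq. (10)
        up_flow = F.interpolate(flow, scale_factor=self.scale, mode='bilinear', align_corners=False)
        up_res = F.interpolate(self.decoder(r), scale_factor=self.scale,
                               mode='bilinear', align_corners=False)
        patches = F.unfold(up_res, self.k, padding=self.k // 2)                # B x 2*k^2 x (sH*sW)
        b, _, uh, uw = up_flow.shape
        patches = patches.reshape(b, 2, self.k ** 2, uh, uw)
        detail = (patches * kernels.unsqueeze(1)).sum(dim=2)                   # K(r) * B(r)
        return up_flow + detail
```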
III-E Overall Architecture
Given the proposed contextual attention modules, we now describe the overall architecture of STC-Flow. The input is a frame pair $I_1$ and $I_2$ of size $H \times W \times 3$, and the goal of STC-Flow is to obtain the optical flow field of size $H \times W \times 2$. The contextual representations are modeled via three key components, i.e. the pyramidal spatial context (PSC) module, the temporal context correlation (TCC) module, and the recurrent residual contextual upsampling (RRCU) module, to leverage long-range dependency relationships in feature extraction, correlation and flow reconstruction, respectively. The entire network is trained jointly, as shown in Figure 2.
Since PWC-Net [5] and LiteFlowNet [7] provide superior performance with lightweight architectures, we take a simplified version of PWC-Net, with layer reduction in feature extraction and reconstruction, as the baseline of our STC-Flow. For successive image/frame pairs, the backbone network with PSC outputs pyramidal feature maps for each image. The feature maps of each stage are converted to cost volumes via the correlation operation, and the cost volumes are decoded and reconstructed to predict optical flow, assisted by TCC. With the guidance of backbone features and warping alignments, the predicted flow field goes through the RRCU module to obtain the refined flow.
IV Experiments
In this section, we introduce the implementation details, and evaluate our method on public optical flow benchmarks, including MPI Sintel [19], KITTI 2012 [20] and KITTI 2015 [21], and compare it with state-of-the-art methods.
IV-A Implementation and training details
We take a simplified version of PWC-Net, with the same number of stages and layer reduction in feature extraction and reconstruction. PSC and RRCU modules are utilized at stages 3, 4 and 5 for feature extraction and reconstruction, respectively. The TCC module is applied at stages 3, 4, 5 and 6 for the correlation of feature pairs or warped features. The training loss weights among stages are 0.32, 0.08, 0.02, 0.01 and 0.005. We first train the models on the FlyingChairs dataset [3] using the L2 loss and the $S_{long}$ learning rate schedule, with random flipping and cropping of size 448×384, as introduced by [6]. Secondly, we fine-tune the models on the FlyingThings3D dataset [22] using the $S_{fine}$ schedule with a cropping size of 768×384. Finally, the model is fine-tuned on the Sintel and KITTI datasets using the generalized Charbonnier function $\rho(x) = (x^2 + \epsilon^2)^{q}$ as the robust training loss. We use both the clean and final passes of the training data throughout the Sintel fine-tuning process, with a cropping size of 768×384, and we use the mixed KITTI 2012 and 2015 training data for the KITTI fine-tuning process, with a cropping size of 896×320.
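For reference, here is a sketch of the multi-scale training loss described above; the exponent q and epsilon of the Charbonnier penalty are assumed values (the text only gives the general form), and the ground-truth downscaling convention is ours.

```python
import torch
import torch.nn.functional as F

# Multi-scale training loss, a sketch: L2 for pre-training, generalized Charbonnier for
# fine-tuning. The stage weights follow the paper; q and eps are assumed values.
STAGE_WEIGHTS = [0.32, 0.08, 0.02, 0.01, 0.005]

def multiscale_loss(flow_preds, flow_gt, finetune=False, q=0.4, eps=0.01):
    """flow_preds: list of B x 2 x H_l x W_l predictions, ordered to match STAGE_WEIGHTS."""
    total = 0.0
    for weight, pred in zip(STAGE_WEIGHTS, flow_preds):
        # downsample the ground truth to the prediction resolution and rescale its values
        scale = pred.shape[-1] / flow_gt.shape[-1]
        gt = F.interpolate(flow_gt, size=pred.shape[-2:], mode='bilinear',
                           align_corners=False) * scale
        diff = pred - gt
        if finetune:
            err = (diff.abs().sum(dim=1) + eps) ** q     # generalized Charbonnier penalty
        else:
            err = diff.norm(p=2, dim=1)                  # L2 (end-point error) loss
        total = total + weight * err.mean()
    return total
```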
TABLE I(a): Ablation study of the PSC module.
Model | Sintel Clean (AEE) | Sintel Final (AEE) | KITTI 2012 (AEE) | KITTI 2015 (AEE) | KITTI 2015 (Fl-all) |
---|---|---|---|---|---|
baseline | 2.924 | 4.088 | 4.621 | 11.743 | 36.53% |
w. PSC3 | 2.802 | 3.891 | 4.565 | 11.031 | 35.37% |
w. PSC3-4 | 2.747 | 3.873 | 4.545 | 10.677 | 34.84% |
w. PSC3-5 | 2.741 | 3.864 | 4.494 | 10.332 | 34.45% |
w. 2D-NL3-5 | 2.785 | 3.968 | 4.523 | 10.482 | 34.76% |
Full model | 2.412 | 3.601 | 4.196 | 10.181 | 32.23% |
TABLE I(b): Ablation study of the TCC module.
Model | Sintel Clean (AEE) | Sintel Final (AEE) | KITTI 2012 (AEE) | KITTI 2015 (AEE) | KITTI 2015 (Fl-all) |
---|---|---|---|---|---|
baseline | 2.924 | 4.088 | 4.621 | 11.743 | 36.53% |
w. TCC6 | 2.787 | 3.863 | 4.523 | 10.712 | 35.59% |
w. TCC3-6 | 2.641 | 3.780 | 4.389 | 10.313 | 34.58% |
w. 2D-NL3-6 | 2.764 | 3.869 | 4.498 | 10.564 | 35.25% |
w. 3D-NL3-6 | 2.635 | 3.745 | 4.393 | 10.324 | 34.63% |
Full model | 2.412 | 3.601 | 4.196 | 10.181 | 32.23% |
TABLE I(c): Ablation study of the RRCU module.
Model | Sintel Clean (AEE) | Sintel Final (AEE) | KITTI 2012 (AEE) | KITTI 2015 (AEE) | KITTI 2015 (Fl-all) |
---|---|---|---|---|---|
baseline | 2.924 | 4.088 | 4.621 | 11.743 | 36.53% |
w. RRCU | 2.696 | 3.794 | 4.432 | 10.332 | 34.65% |
TCC+RRCU | 2.567 | 3.722 | 4.368 | 10.295 | 33.89% |
Full model | 2.412 | 3.601 | 4.196 | 10.181 | 32.23% |
IV-B Ablation Study
To demonstrate the effectiveness of each contextual attention module in our network, we conduct a rigorous ablation study of PSC, TCC and RRCU, as shown in Table I and Figure 9. We observe that these modules capture clear semantic information with long-range dependencies. The baseline is trained on FlyingChairs and fine-tuned on FlyingThings3D. We also discuss the efficacy of the lite matrix multiplication in Table II.
Pyramidal spatial context module. STC-Flow utilizes the PSC module at levels 3, 4 and 5. Table I(a) demonstrates that using the PSC module improves the performance on both the Sintel and KITTI datasets, since this module enhances the ability to discriminate feature texture in the feature extraction stage; PSC at stage 3 is the most beneficial, since the low-level discriminative details matter.
Temporal context correlation module. The TCC module describes the relationship of the correlation with spatial and temporal context. In Table I(b), we compare the performance of our network using the TCC module against the naive correlation operator, and also against a 2D non-local block on the concatenated features and a 3D non-local block on the feature pair. The results demonstrate that fusing the correlation with spatial and temporal context is better than the correlation alone. Notice that the 3D non-local blocks perform slightly better on Sintel, but with heavy computational complexity; TCC achieves comparable performance with far fewer FLOPs.
Recurrent residual contextual upsampling. We utilize the RRCU module to learn high-frequency context features and preserve edges. In Table I(c), we compare our method using RRCU against a single transposed convolution, which demonstrates that reconstruction context learning preserves details and improves performance.
Lite matrix multiplication. Lite matrix multiplication is an efficient scheme to reduce the computational complexity. We compare the performance of this scheme with different polyphase decomposition factors $k$ on the Sintel training data. As shown in Table II, lite matrix multiplication has a marginal influence on AEE but increases the frame rate noticeably. Considering the trade-off between accuracy and runtime, we select $k = 2$ for the full model.
TABLE II: Comparison of different polyphase decomposition factors $k$ on the Sintel training data.
$k$ | AEE/SSIM (Clean) | AEE/SSIM (Final) | Runtime (fps) |
---|---|---|---|
1 | 2.407/— | 3.588/— | 20 |
2 | 2.412/0.9765 | 3.601/0.9982 | 22 |
4 | 2.515/0.9061 | 3.856/0.8990 | 25 |


TABLE III: Comparison with state-of-the-art methods (AEE unless marked Fl-all; entries in parentheses are results on data the method was fine-tuned on).
Method | Sintel Clean (train) | Sintel Clean (test) | Sintel Final (train) | Sintel Final (test) | KITTI 2012 (train) | KITTI 2012 (test) | KITTI 2015 (train, AEE) | KITTI 2015 (train, Fl-all) | KITTI 2015 (test, Fl-all) |
---|---|---|---|---|---|---|---|---|---|
DeepFlow [1] | 2.66 | 5.38 | 3.57 | 7.21 | 4.48 | 5.8 | 10.63 | 26.52% | 29.18% |
EpicFlow [2] | 2.27 | 4.12 | 3.56 | 6.29 | 3.09 | 3.8 | 9.27 | 27.18% | 27.10% |
FlowFields [23] | 1.86 | 3.75 | 3.06 | 5.81 | 3.33 | 3.5 | 8.33 | 24.43% | — |
FlowNetS [3] | 4.50 | 7.42 | 5.45 | 8.43 | 8.26 | — | — | — | — |
FlowNetS-ft [3] | (3.66) | 6.96 | (4.44) | 7.76 | 7.52 | 9.1 | — | — | — |
FlowNetC [3] | 4.31 | 7.28 | 5.87 | 8.81 | 9.35 | — | — | — | — |
FlowNetC-ft [3] | (3.78) | 6.85 | (5.28) | 8.51 | 8.79 | — | — | — | — |
FlowNet2 [6] | 2.02 | 3.96 | 3.54 | 6.02 | 4.01 | — | 10.08 | 29.99% | — |
FlowNet2-ft [6] | (1.45) | 4.16 | (2.19) | 5.74 | 3.52 | — | 9.94 | 28.02% | — |
SPyNet [4] | 4.12 | 6.69 | 5.57 | 8.43 | 9.12 | — | — | — | — |
SPyNet-ft [4] | (3.17) | 6.64 | (4.32) | 8.36 | 3.36 | 4.1 | — | — | 35.07% |
LiteFlowNet [7] | 2.48 | — | 4.04 | — | 4.00 | — | 10.39 | 28.50% | — |
LiteFlowNet-ft [7] | (1.35) | 4.54 | (1.78) | 5.38 | (1.05) | 1.6 | (1.62) | (5.58%) | (9.38%) |
PWC-Net [5] | 2.55 | — | 3.93 | — | 4.14 | — | 10.35 | 33.67% | — |
PWC-Net-ft [5] | (2.02) | 4.39 | (2.08) | 5.04 | (1.45) | 1.7 | (2.16) | (9.80%) | 9.60% |
SelFlow-ft [10] | (1.68) | 3.74 | (1.77) | 4.26 | (0.76) | 1.5 | (1.18) | — | 8.42% |
IRR-PWC-ft [9] | (1.92) | 3.84 | (2.51) | 4.58 | — | — | (1.63) | (5.32%) | 7.65% |
HD3-ft [8] | (1.70) | 4.79 | (1.17) | 4.67 | (0.81) | 1.4 | (1.31) | (4.10%) | 6.55% |
STC-Flow (ours) | 2.41 | — | 3.60 | — | 4.20 | — | 10.18 | 32.23% | — |
STC-Flow-ft (Ours) | (1.36) | 3.52 | (1.73) | 4.87 | (0.98) | 1.5 | (1.46) | (5.43%) | 7.99% |
IV-C Comparison with State-of-the-art Methods
As shown in Table III, we achieve comparable quantitative results on the Sintel and KITTI datasets with state-of-the-art methods. Some visualization results are shown in Figure 10. STC-Flow performs best in terms of AEE among these methods on the Sintel Clean pass. We can see that finer details are well preserved via context modeling of spatial and temporal long-range relationships, with fewer artifacts and lower end-point error. In addition, our method is based on only two frames without additional information (such as the occlusion maps of IRR [9] or additional datasets), yet it outperforms the state-of-the-art multi-frame method SelFlow [10]. Moreover, STC-Flow is lightweight, with far fewer parameters, i.e. about 9M instead of 110M for FlowNet2 [6] and 40M for HD3 [8]. We believe that our flexible scheme can help achieve better performance for other baseline networks, including multi-frame based methods.
V Conclusion
To explore motion context information for accurate optical flow estimation, we have proposed a spatio-temporal context-aware network, STC-Flow. It contains three context modules for the feature extraction, correlation and optical flow reconstruction stages, i.e. the pyramidal spatial context (PSC) module, the temporal context correlation (TCC) module, and the recurrent residual contextual upsampling (RRCU) module, respectively. We have validated the effectiveness of each component. Our proposed scheme achieves state-of-the-art performance without using multi-frame input or additional information.
References
- [1] P. Weinzaepfel, J. Revaud, Z. Harchaoui et al., “Deepflow: Large displacement optical flow with deep matching,” in ICCV, 2013.
- [2] J. Revaud, P. Weinzaepfel, Z. Harchaoui et al., “Epicflow: Edge-preserving interpolation of correspondences for optical flow,” in CVPR, 2015.
- [3] A. Dosovitskiy, P. Fischer, E. Ilg et al., “Flownet: Learning optical flow with convolutional networks,” in ICCV, 2015.
- [4] A. Ranjan and M. J. Black, “Optical flow estimation using a spatial pyramid network,” in CVPR, 2017.
- [5] D. Sun, X. Yang, M.-Y. Liu et al., “Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume,” in CVPR, 2018.
- [6] E. Ilg, N. Mayer, T. Saikia et al., “Flownet 2.0: Evolution of optical flow estimation with deep networks,” in CVPR, 2017.
- [7] T.-W. Hui, X. Tang, and C. Change Loy, “Liteflownet: A lightweight convolutional neural network for optical flow estimation,” in CVPR, 2018.
- [8] Z. Yin, T. Darrell, and F. Yu, “Hierarchical discrete distribution decomposition for match density estimation,” in CVPR, 2019.
- [9] J. Hur and S. Roth, “Iterative residual refinement for joint optical flow and occlusion estimation,” in CVPR, 2019.
- [10] P. Liu, M. R. Lyu, I. King et al., “Selflow: Self-supervised learning of optical flow,” in CVPR, 2019.
- [11] Y. Du, C. Yuan, B. Li et al., “Interaction-aware spatio-temporal pyramid attention networks for action classification,” in ECCV, 2018.
- [12] H. Zhang, I. Goodfellow, D. N. Metaxas et al., “Self-attention generative adversarial networks,” in ICML, 2019.
- [13] H. Hu, J. Gu, Z. Zhang et al., “Relation networks for object detection,” in CVPR, 2018.
- [14] J. Fu, J. Liu, H. Tian et al., “Dual attention network for scene segmentation,” in CVPR, 2018.
- [15] S. Woo, J. Park, J. Lee et al., “Cbam: Convolutional block attention module,” in ECCV, 2018.
- [16] Y. Cao, J. Xu, S. Lin et al., “Gcnet: Non-local networks meet squeeze-excitation networks and beyond,” in CVPR, 2019.
- [17] X. Wang, R. Girshick, A. Gupta et al., “Non-local neural networks,” in CVPR, 2018.
- [18] W. Shi, J. Caballero, F. Huszar et al., “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in CVPR, 2016.
- [19] D. J. Butler, J. Wulff, G. B. Stanley et al., “A naturalistic open source movie for optical flow evaluation,” in ECCV, 2012.
- [20] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in CVPR, 2012.
- [21] M. Menze and A. Geiger, “Object scene flow for autonomous vehicles,” in CVPR, 2015.
- [22] N. Mayer, E. Ilg, P. Hausser et al., “A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation,” in CVPR, 2016.
- [23] C. Bailer, B. Taetz, and D. Stricker, “Flow fields: Dense correspondence fields for highly accurate large displacement optical flow estimation,” in ICCV, 2015.