Light Field Raindrop Removal via 4D Re-sampling
Abstract
Light Field Raindrop Removal (LFRR) aims to restore the background areas obscured by raindrops in the Light Field (LF). Compared with a single image, the LF provides more abundant information by regularly and densely sampling the scene. Since raindrops have larger disparities than the background in the LF, the majority of texture details occluded by raindrops are visible in other views. In this paper, we propose a novel LFRR network that directly utilizes the complementary pixel information of raindrop-free areas in the input raindrop LF, and which consists of a re-sampling module and a refinement module. Specifically, the re-sampling module generates a new LF which is less polluted by raindrops through re-sampling position prediction and the proposed 4D interpolation. The refinement module improves the restoration of the completely occluded background areas and corrects the pixel errors caused by the 4D interpolation. Furthermore, we carefully build the first real scene LFRR dataset for model training and validation. Experiments demonstrate that the proposed method can effectively remove raindrops and achieves state-of-the-art performance in both background restoration and view consistency maintenance.
Keywords:
Light Fields, Raindrop Removal, Deep Neural Network

1 Introduction
Image raindrop removal refers to the restoration of background areas obscured by raindrops in an image, which benefits many high-level computer vision applications, such as object detection [2, 27] and autonomous driving [18]. Most existing raindrop removal approaches are based on a single 2D image [13, 6, 4, 22, 19]. However, due to the lack of texture details in the occluded areas, 2D approaches depend excessively on the surrounding pixels or the inpainting patterns learned from the training set when restoring the background, which leads to unnatural and unreliable results.
The Light Field (LF) has recently received great attention due to its capability of providing abundant complementary information and implicit scene depth information. The LF consists of multiple scene views generated by regularly and densely sampling in the camera plane. Compared with the 2D image, the LF has two angular dimensions in addition to two spatial dimensions, so the LF can also be described as a 4D sampling of the scene. In the LF, the distance a pixel moves between adjacent views is called the pixel's disparity. Since raindrops are closer to the camera lens than the background, raindrops have larger disparities than the background in the LF. Therefore, as shown in the left instance of Fig. 1, most texture details of the background areas obscured by raindrops are visible in other views. In addition to the complementary information between views, the structural characteristics of the LF make the raindrop removal targets of different views highly similar and correlated, which benefits the design of the learning objective. Thus, we introduce the LF into the raindrop removal task to go beyond the limitation of the single image.

In this paper, we propose a novel convolutional neural network to address the Light Field Raindrop Removal (LFRR) problem. On one hand, we directly select raindrop-free areas from other views to restore the polluted areas by re-sampling the input raindrop LF. On the other hand, we utilize a convolution module to implicitly refine the re-sampled LF. In the re-sampling stage, the model selects pixels from the raindrop LF and generates a new LF with less raindrop pollution through re-sampling position prediction and the proposed 4D interpolation. The re-sampling strategy not only leverages the complementary pixel information between different views, but also effectively reduces the dependence on implicit residual prediction. In the refinement stage, the model generates a residual map to improve the restoration of the completely occluded areas and to correct the errors caused by the 4D interpolation in the re-sampled LF. Considering that the areas with large differences between the raindrop LF and the re-sampled LF coincide with the areas that need refinement, we devise a novel difference guided encode block, adopted in the refinement module, to leverage the guidance of these differences.
Furthermore, we carefully build the first real scene LFRR dataset, which contains a variety of indoor and outdoor scenes, to meet the needs of model training and validation. Experimental results demonstrate that our approach can effectively remove raindrops in the LF and achieves state-of-the-art performance in both background restoration and view consistency maintenance.
In summary, our main contributions are as follows:
1. In order to go beyond the limitation of a single image, we introduce the LF into the raindrop removal field, which provides more abundant scene information and enhances raindrop removal performance.
2. We propose a novel LFRR network that re-samples the raindrop LF and refines the re-sampled LF. We design a novel difference guided encode block for more effective feature encoding during refinement.
3. We build the first real scene LFRR dataset, including various indoor and outdoor scenes, to meet the needs of model training and validation.
2 Related Work
In this section, we briefly summarize the related single image raindrop removal methods and LF processing methods.
2.1 Single Image Raindrop Removal
Due to the lack of texture details in occluded areas, the single image raindrop removal problem is highly challenging. Progress on this task had stagnated until the rapid development of deep learning in recent years.
[3] used three convolutional layers to remove raindrops and generated outputs with relatively poor quality. [12] proposed a generative adversarial network for raindrop removal, where the generative network produces an attention map via an attentive-recurrent network and applies this raindrop attention mask to generate a raindrop-free image through a contextual auto-encoder. [14] proposed a network in which a shape-driven attention module exploits the physical shape properties of raindrops, including closedness and roundness, and channel attention refines the features relevant to the background layer or the raindrop layer. [16] designed an uncertainty guided multi-scale attention network which benefits from the exploration of the blur level of raindrops, and utilized several basic modules to explore the inherent correlations of similar raindrop patterns across scales.
2.2 Light Field Processing
Since there are no existing LFRR algorithms to serve as references, we briefly present some approaches from related LF-based fields in this subsection.
The LF is 4D data with unique structural characteristics. A popular and general research topic is to design appropriate modules to extract effective information from the LF. [23] proposed a spatial-angular separable convolution module, in which two convolution layers separately and serially extract features in the spatial and angular dimensions. [20] designed an interaction mechanism to incorporate decoupled spatial and angular information; the angular feature in this model is extracted by a no-padding convolution layer which contains the global angular information. [17] devised a module named Multi-Dimension Fusion Block (MDFB) which utilizes four parallel convolution layers to extract features from sub-aperture images, micro-lens images, and horizontal and vertical epipolar-plane images (EPIs), respectively. The MDFB is also adopted in our model for LF feature extraction.
The LF spatial super-resolution (LFSSR) task has many similarities with LFRR, and LFSSR approaches can be conveniently adapted to the LFRR task. Firstly, both LFRR and LFSSR process all views of the LF, while many typical LF tasks only deal with the central view of the LF, such as saliency detection [8], depth estimation [1] and occlusion removal [26]. Secondly, both are low-level image restoration tasks and aim to enhance the quality of the input LF. Thirdly, both rely on the complementary information and structural characteristics that exist within and between views of the LF. Besides, existing LFSSR approaches up-sample features and LFs by spatial interpolation, transposed convolution or pixel shuffle; by removing the up-sampling operations, most LFSSR approaches can be utilized to handle the LFRR task.

3 Light Field Raindrop Dataset
In this section, we introduce the LFRR dataset construction in detail. Each raindrop removal pair consists of two LFs with the same background, where one is degraded by raindrops and the other is raindrop-free. The Lytro Illum camera is used for the LF acquisition. In order to eliminate the harmful influence caused by refraction differences between glasses, different from [12] which uses two pieces of glass for the acquisition, we prepare only one piece of transparent glass with 4.5 mm thickness. We first shoot through the clean glass, then spray water droplets on the glass, and shoot again. These two shots are completed with the same camera parameters, such as focal length, ISO, and so on. In this way, we obtain a pair of LF raw data. We set the distance between the glass and the camera lens to vary from 1 to 4 cm to generate diverse raindrop images. To ensure that the two LFs have the same background layer, we take measures to alleviate camera motion and scene motion. For the former, we use a tripod to fix the camera and a shutter cable to control the shooting. For the latter, we shoot static scenes on a windless and cloudless sunny day. The LF view images are generated from the LF raw data by the open-source LF toolbox without any post-processing, such as color correction or histogram equalization.
After careful shooting, decoding and selection, we obtained 32 pairs of high-quality LFRR data, including various background scenes and raindrops. Some samples are shown in Fig. 2. The angular resolution is , while the spatial resolution is . Among them, 24 LF pairs constitute the training set, and the remaining 8 LF pairs serve as the validation set. We will make this dataset public once our paper is accepted.
4 Methodology
4.1 Overall Architecture
The goal of LFRR is to restore the background areas obscured by raindrops in each view of the LF. The input LF with raindrops is represented as $L_r \in \mathbb{R}^{U \times V \times H \times W \times 3}$, where $U \times V$ and $H \times W$ are the angular and spatial resolutions, respectively, and 3 refers to the RGB color channels. The overall architecture is divided into two parts: the re-sampling module (RSM) and the refinement module (RM), as shown in Fig. 3.

For each pixel in $L_r$, the RSM first predicts a re-sampling position, and then carries out the proposed 4D interpolation on $L_r$ according to the re-sampling positions to obtain the re-sampled LF $L_s$, which is also the initial raindrop removal output:

$$L_s = f_{RSM}(L_r). \qquad (1)$$

The RM extracts features from $L_r$ and $L_s$ and predicts a residual map $R$ to refine $L_s$:

$$R = f_{RM}(L_r, L_s). \qquad (2)$$

The final raindrop removal output $L_f$ is the summation of $L_s$ and $R$:

$$L_f = L_s + R, \qquad (3)$$

where $L_s$, $R$ and $L_f$ have the same size as $L_r$.
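To make the data flow of Eqs. (1)-(3) concrete, the following is a minimal PyTorch-style sketch, assuming hypothetical module classes for the RSM and RM (their internal designs are described in the following subsections); it illustrates the pipeline under these assumptions rather than reproducing the exact implementation.

```python
import torch
import torch.nn as nn

class LFRRNet(nn.Module):
    """Sketch of the overall two-part architecture: re-sample, then refine."""

    def __init__(self, rsm: nn.Module, rm: nn.Module):
        super().__init__()
        self.rsm = rsm  # re-sampling module: position prediction + 4D interpolation
        self.rm = rm    # refinement module: predicts a residual map from (L_r, L_s)

    def forward(self, lf_rain: torch.Tensor):
        # lf_rain: (U, V, H, W, 3) raindrop light field L_r
        lf_sampled = self.rsm(lf_rain)            # Eq. (1): initial raindrop removal output L_s
        residual = self.rm(lf_rain, lf_sampled)   # Eq. (2): residual map R
        lf_final = lf_sampled + residual          # Eq. (3): final output L_f
        return lf_sampled, lf_final
```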
4.2 Re-Sampling Module
4.2.1 Motivation
One unique characteristic of the raindrop removal task is that raindrops are closer to the camera and have a shallower depth than the background. In the LF, the shallower the depth of an object, the larger the disparity it has. Therefore, in the raindrop LF $L_r$, most texture details obscured by raindrops in one view are clear in the corresponding areas of other views, as shown in Fig. 1. Thus, we propose a novel re-sampling strategy to select pixels from $L_r$ and generate a new LF $L_s$ with less raindrop degradation.
4.2.2 Re-sampling Position Prediction
The first step of the RSM is to calculate the correct re-sampling position for each point. Specifically, the re-sampling position $P$ is the summation of the initial position $P_0$ and the re-sampling position offset $\Delta P$ to be predicted.

In the LF, the initial position of each pixel is represented by a vector of length four, including two angular coordinates and two spatial coordinates. For example, the coordinates of the upper-left starting point of the upper-left view and its adjacent point on the right are $(0, 0, 0, 0)$ and $(0, 0, 0, 1)$, respectively. Thus, the available $P_0$ is the coordinate set of the whole LF, with size $U \times V \times H \times W \times 4$.

Then, the core task of this step is to calculate the position offset relative to the initial positions for each point in $L_r$. Here, the Multi-Dimension Fusion Block (MDFB) [17] is adopted as the feature extraction block, and MDFBs are cascaded in serial as the encoder. Next, the offset $\Delta P$ is predicted by two 2D convolutional layers from the features extracted by the encoder:

$$\Delta P = f_{offset}(f_{enc}(L_r)), \qquad (4)$$

where $f_{enc}$ denotes the cascaded MDFB encoder, $f_{offset}$ denotes the two convolutional layers, and $\Delta P$ has the same size as $P_0$. The re-sampling is then conducted on the irregular offset locations $P = P_0 + \Delta P$. Moreover, we constrain the re-sampling positions in $P$ to lie within the valid coordinate range of the LF, hence every re-sampling point is valid.
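As a concrete illustration of this step, the sketch below builds the initial coordinate grid $P_0$, adds a predicted offset tensor, and clamps the result to the valid coordinate range. The function name and tensor layout are assumptions for exposition; the offsets themselves would come from the MDFB encoder and the convolutional layers of Eq. (4).

```python
import torch

def resampling_positions(offsets: torch.Tensor, U: int, V: int, H: int, W: int) -> torch.Tensor:
    """Compute P = clamp(P_0 + delta_P) for one light field.

    offsets: (U, V, H, W, 4) predicted offsets, channels ordered as (du, dv, dh, dw).
    Returns absolute 4D re-sampling positions of the same shape.
    """
    axes = [torch.arange(n, dtype=offsets.dtype, device=offsets.device) for n in (U, V, H, W)]
    p0 = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)   # initial positions P_0
    p = p0 + offsets                                                 # P = P_0 + delta_P
    upper = torch.tensor([U - 1, V - 1, H - 1, W - 1],
                         dtype=offsets.dtype, device=offsets.device)
    # clamp each coordinate so that every re-sampling point stays inside the LF
    return torch.minimum(torch.maximum(p, torch.zeros_like(upper)), upper)
```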
4.2.3 4D Interpolation
As the offset is typically fractional, the re-sampling is implemented via an interpolation operation. Since the pixel location in the LF is four-dimensional, 4D interpolation is necessary. We separate the 4D interpolation into two steps: spatial bilinear interpolation and angular bilinear interpolation, as shown in Fig. 4. The result of the 4D interpolation is the initial raindrop removal output $L_s$. The 4D interpolation is defined as

$$L'(q_a, p_s) = \sum_{q_s} K_s(q_s, p_s)\, L_r(q_a, q_s), \qquad L_s(p_0) = \sum_{q_a} K_a(q_a, p_a)\, L'(q_a, p_s), \qquad (5)$$

where $p = (p_a, p_s)$ denotes a 4D fractional location in $P$ corresponding to the pixel at initial position $p_0$, $q_s$ enumerates all integral spatial locations around $p_s$ for the spatial bilinear interpolation, $L'$ is the spatial bilinear interpolation result computed from $L_r$, $q_a$ enumerates all integral angular locations around $p_a$ for the angular bilinear interpolation, $K_s$ and $K_a$ are the spatial and angular bilinear interpolation kernels, respectively, and $L_s(p_0)$ is the angular bilinear interpolation result computed from $L'$. $K_s$ and $K_a$ are both two-dimensional, and each is separated into two 1D kernels as

$$K_s(q_s, p_s) = k(q_s^x, p_s^x)\, k(q_s^y, p_s^y), \qquad K_a(q_a, p_a) = k(q_a^u, p_a^u)\, k(q_a^v, p_a^v), \qquad (6)$$

where $k(a, b) = \max(0, 1 - |a - b|)$. Note that the order of the spatial interpolation and the angular interpolation does not affect the 4D interpolation result.
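Because the kernels in Eq. (6) factorize over dimensions, the separable interpolation of Eq. (5) can equivalently be written as a weighted sum over the 2^4 integer corners surrounding each fractional 4D position, with weights given by products of the 1D hat kernels $k$. The sketch below follows that formulation, assuming a single light field laid out as (U, V, H, W, C) and positions already clamped as described above; it is a readability-oriented sketch rather than an optimized implementation.

```python
import itertools
import torch

def interp_4d(lf: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
    """Separable 4D (spatial + angular) bilinear re-sampling of a light field.

    lf:  (U, V, H, W, C) raindrop light field L_r.
    pos: (U, V, H, W, 4) fractional re-sampling positions P, clamped to the valid range.
    Returns the re-sampled light field L_s of the same shape as lf.
    """
    sizes = lf.shape[:4]
    lo = pos.floor()
    out = torch.zeros_like(lf)
    for corner in itertools.product((0, 1), repeat=4):
        q = lo + torch.tensor(corner, dtype=pos.dtype, device=pos.device)  # integer corner around pos
        # product of 1D hat kernels k(a, b) = max(0, 1 - |a - b|) over the four dimensions
        weight = (1.0 - (q - pos).abs()).clamp(min=0).prod(dim=-1)         # (U, V, H, W)
        idx = q.long().clamp(min=0)
        for d, s in enumerate(sizes):          # clamp indices only for safe gathering;
            idx[..., d].clamp_(max=s - 1)      # out-of-range corners already have zero weight
        gathered = lf[idx[..., 0], idx[..., 1], idx[..., 2], idx[..., 3]]
        out = out + weight.unsqueeze(-1) * gathered
    return out
```

Summing the sixteen corners this way gives the same result as performing the spatial bilinear step followed by the angular bilinear step (or vice versa), consistent with the order-independence noted above.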
4.3 Refinement Module
There are two weaknesses in the re-sampling strategy. On one hand, it is difficult for the pixel re-sampling strategy to recover regions that are completely polluted by raindrops in all views. On the other hand, since the re-sampling positions are fractional, the 4D interpolation inevitably introduces slight pixel errors, e.g., pixels in clean backgrounds are slightly affected by surrounding pixels.
The RM is strongly coupled with the RSM and addresses these shortcomings of the RSM. As shown in the bottom part of Fig. 3, the RM first extracts features from $L_r$ and $L_s$, and then generates a residual map $R$ to refine $L_s$. The raindrop LF $L_r$ is introduced to the RM to ensure information integrity.
Specifically, $L_r$ and $L_s$ are first separately fed to a weight-shared 2D convolution layer, which works on the spatial dimensions, to generate the initial features $F_r$ and $F_s$. Considering that the difference between the two streams reflects the areas of raindrops and wrongly re-sampled backgrounds, this difference is beneficial for identifying the areas to be refined. Therefore, we propose a novel Difference Guided Encode Block (DGEB) that focuses more on the areas needing improvement by leveraging the guidance of the difference between $F_r$ and $F_s$.
As shown in the gray box of Fig. 3, the first DGEB takes the pair of $F_r$ and $F_s$ as inputs to achieve interaction. Firstly, $F_r$ and $F_s$ are separately encoded by a weight-shared MDFB, and the generated features are the encoded features $E_r$ and $E_s$. Then, the difference feature $D$ is calculated by element-wise subtraction between $E_r$ and $E_s$. Next, the difference feature is fed to two convolution layers to produce two single-channel attention maps $A_r$ and $A_s$. Finally, these two attention maps are separately fused with $E_r$ and $E_s$ by element-wise multiplication and summation. The outputs of the first DGEB are denoted as $F_r^1$ and $F_s^1$. In the RM, we cascade DGEBs for feature extraction and interaction, i.e., the outputs of a DGEB form the inputs of its subsequent DGEB, and the final output features are represented as $\hat{F}_r$ and $\hat{F}_s$. In summary, the DGEB can be formulated as
$$
\begin{aligned}
E_r &= f_{MDFB}(F_r^{i-1}), \quad E_s = f_{MDFB}(F_s^{i-1}), \\
D &= E_r - E_s, \\
F_r^{i} &= E_r \odot A_r + E_r, \quad F_s^{i} = E_s \odot A_s + E_s,
\end{aligned} \qquad (7)
$$

where $F_r^{i}$ and $F_s^{i}$ represent the output features corresponding to $F_r$ and $F_s$ of the $i$-th DGEB, respectively (with $F_r^{0} = F_r$ and $F_s^{0} = F_s$), $f_{MDFB}$ represents the weight-shared MDFB in the DGEB, and $A_r$ and $A_s$ are the single-channel attention maps predicted from the difference feature $D$ by the two convolution layers.
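A possible PyTorch realization of the DGEB is sketched below. The MDFB is treated as an external, weight-shared encoder module, the features are simplified to ordinary (B, C, H, W) maps, and each attention branch ends with a sigmoid; the exact convolution configuration and activation are assumptions, since only two convolution layers producing single-channel attention maps are specified above.

```python
import torch
import torch.nn as nn

class DGEB(nn.Module):
    """Sketch of the Difference Guided Encode Block (Eq. 7)."""

    def __init__(self, mdfb: nn.Module, channels: int):
        super().__init__()
        self.mdfb = mdfb  # weight-shared encoder, applied to both streams

        def att_branch():
            return nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Conv2d(channels, 1, 3, padding=1),
                nn.Sigmoid(),  # assumed activation for the attention map
            )

        self.att_r, self.att_s = att_branch(), att_branch()

    def forward(self, f_r: torch.Tensor, f_s: torch.Tensor):
        # f_r, f_s: (B, C, H, W) features of the raindrop LF and the re-sampled LF
        e_r, e_s = self.mdfb(f_r), self.mdfb(f_s)      # weight-shared encoding
        diff = e_r - e_s                               # highlights raindrops and wrongly re-sampled areas
        a_r, a_s = self.att_r(diff), self.att_s(diff)  # single-channel attention maps
        out_r = e_r * a_r + e_r                        # element-wise multiplication and summation
        out_s = e_s * a_s + e_s
        return out_r, out_s
```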
The outputs $\hat{F}_r$ and $\hat{F}_s$ are then concatenated along the channel dimension. Inspired by [5], the Squeeze-and-Excitation Block (SEB) is adopted here for better information fusion. Suppose the input feature is $F$; the calculation process of the SEB is

$$F_{out} = F \odot \sigma(f_{FC}(GAP(F))), \qquad (8)$$

where $GAP(\cdot)$ is the global spatial average pooling, $f_{FC}(\cdot)$ refers to a fully-connected layer, and $\sigma(\cdot)$ is the sigmoid function. The residual map $R$ is produced from the fused features by 2D spatial convolution layers. The final raindrop removal output $L_f$ is the summation of $L_s$ and $R$.
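For completeness, here is a minimal SE-style fusion block consistent with Eq. (8); the bottleneck structure and reduction ratio are assumptions borrowed from the original SE design [5].

```python
import torch
import torch.nn as nn

class SEB(nn.Module):
    """Squeeze-and-Excitation fusion sketch for the concatenated DGEB outputs."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (B, C, H, W) concatenation of the two refined feature streams
        squeezed = f.mean(dim=(2, 3))                          # global spatial average pooling
        gates = self.fc(squeezed).unsqueeze(-1).unsqueeze(-1)  # per-channel excitation weights
        return f * gates                                       # channel-wise re-weighting
```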
4.4 Learning Objective
Besides the widely used Mean Absolute Error (MAE), we utilize the SSIM function [21] and the EPI gradient loss [7] as additional objectives for the generated LF. These three loss functions are denoted as $\mathcal{L}_{MAE}$, $\mathcal{L}_{SSIM}$ and $\mathcal{L}_{EPI}$. The combination of the MAE loss and the SSIM loss preserves the per-pixel similarity as well as the local structure, while the EPI gradient loss is beneficial for enhancing the view consistency of the output LF.

Given the raindrop-free ground truth $G$ and the model's output $O$, the learning objective is defined as

$$\mathcal{L}(O, G) = \mathcal{L}_{MAE}(O, G) + \lambda_1 \mathcal{L}_{SSIM}(O, G) + \lambda_2 \mathcal{L}_{EPI}(O, G), \qquad (9)$$

where $\lambda_1$ and $\lambda_2$ denote the hyper-parameters used to balance the losses, which are set to 0.1 and 1 in the experiments.
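A sketch of the combined objective of Eq. (9) is given below; `ssim_fn` and `epi_grad_fn` stand in for external SSIM and EPI-gradient-loss implementations, and the assignment of 0.1 and 1 to the two weights follows the order stated above but is otherwise an assumption.

```python
import torch
import torch.nn.functional as F

def lfrr_loss(output: torch.Tensor, target: torch.Tensor,
              ssim_fn, epi_grad_fn, lam1: float = 0.1, lam2: float = 1.0) -> torch.Tensor:
    """Combined MAE + SSIM + EPI-gradient objective (Eq. 9).

    output, target: predicted and ground-truth light fields with identical shapes.
    """
    l_mae = F.l1_loss(output, target)        # per-pixel similarity
    l_ssim = 1.0 - ssim_fn(output, target)   # local structure preservation
    l_epi = epi_grad_fn(output, target)      # view consistency on EPI gradients
    return l_mae + lam1 * l_ssim + lam2 * l_epi
```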
The whole training process of the proposed model is divided into two stages.

In the first stage, both $L_s$ and $L_f$ are supervised by the raindrop-free ground truth $G$. The supervision of $L_s$ is necessary during the initial training, because it enables the model to effectively search for raindrop-free backgrounds in $L_r$. The learning objective of this stage is

$$\mathcal{L}_{stage1} = \mathcal{L}(L_f, G) + \beta\, \mathcal{L}(L_s, G), \qquad (10)$$

where the weight hyper-parameter $\beta$ is set to 0.5 in the experiments. When the model converges, the training enters the second stage.
In the second stage, we remove the supervision of $L_s$ so that the model focuses on learning the prediction of the final result $L_f$. The learning objective of the second stage is

$$\mathcal{L}_{stage2} = \mathcal{L}(L_f, G). \qquad (11)$$

When the model converges again, the training ends. The second training stage brings a slight performance improvement (about 0.15 dB PSNR).
5 Experiment
5.1 Comparison Methods
We compare the proposed model with six 2D image rain removal methods: AttGAN [12], RSECAN [10], SDA [14], PIDN [15], DualRes [11] and MSPIR [24]. Among them, RSECAN, PIDN and MSPIR are rain streak removal algorithms. Because these three algorithms do not model the physical properties of rain streaks but instead improve the design of the convolutional networks, all of them can be transferred to the raindrop removal task.
For a more convincing comparison, four LF spatial super-resolution methods are adapted to the LFRR task: SAS [23], MDFN [17], InterNet [20] and MEGNet [25]. As mentioned in Section 2.2, algorithms in the LF spatial super-resolution domain reconstruct all views of the LF by extracting complementary information between different views, which shares many similarities with the LFRR problem. Besides, these four methods can be conveniently adapted to the LFRR problem by removing their up-sampling operations.
5.2 Implementation Details
We train and validate models on the proposed LFRR dataset. For 2D methods, the views in the LFs are separated, e.g., a pair of LFs with angular resolution is equivalent to 2D image pairs. We augment the training data for 2D methods by random cropping, flipping, rotating and resizing. The popular LF super-resolution methods are carefully adjusted by removing their up-sampling operations. Apart from selecting appropriate initial learning rates, we maintain the original configurations of the algorithms involved in the comparisons. The quantitative evaluation is done in terms of two metrics: PSNR and SSIM, both calculated on the luminance (the Y channel of the YCbCr space).
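For reference, a sketch of the luminance-channel PSNR used for evaluation is given below; it assumes RGB inputs normalized to [0, 1] and the standard BT.601 luma weights, which may differ slightly from the exact conversion used in the evaluation code.

```python
import torch

def psnr_y(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """PSNR computed on the Y (luminance) channel of RGB tensors shaped (..., 3) in [0, 1]."""
    weights = torch.tensor([0.299, 0.587, 0.114], dtype=pred.dtype, device=pred.device)
    y_pred = (pred * weights).sum(dim=-1)     # RGB -> Y (BT.601 luma approximation)
    y_gt = (gt * weights).sum(dim=-1)
    mse = torch.mean((y_pred - y_gt) ** 2)
    return 10.0 * torch.log10(1.0 / mse)      # peak value 1.0 for normalized inputs
```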
The proposed model is implemented in PyTorch. All experiments are run on one NVIDIA A4000 GPU. The designed model has cascaded MDFBs and DGEBs, and each MDFB or DGEB has the same number of input and output channels. In training, we keep the angular size and randomly crop patches of spatial size as inputs. To relieve the over-fitting problem while protecting the structural characteristics of LFs, we augment the training set by random global flipping and rotating for 4D approaches. The proposed model is trained with the Adam [9] optimizer with a batch size of 1. The initial learning rate is , and the number of epochs is set to . We use a cosine annealing strategy to adjust the learning rate during training, and set the final minimum learning rate to 0. The total training time of the proposed model is approximately 2 days.
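The structure-preserving augmentation mentioned above can be sketched as follows: flips must be applied to the angular and spatial axes jointly, otherwise the disparity relations between views are destroyed. This is a minimal illustrative sketch for a light field stored as (U, V, H, W, C); a diagonal transpose is used here in place of rotation, as both preserve the LF structure, and the exact augmentation set in the experiments may differ.

```python
import torch

def augment_lf_pair(lf_rain: torch.Tensor, lf_clean: torch.Tensor):
    """Random global flips/transpose that preserve the 4D light field structure."""
    if torch.rand(1) < 0.5:                                # horizontal flip: V and W together
        lf_rain, lf_clean = lf_rain.flip(dims=(1, 3)), lf_clean.flip(dims=(1, 3))
    if torch.rand(1) < 0.5:                                # vertical flip: U and H together
        lf_rain, lf_clean = lf_rain.flip(dims=(0, 2)), lf_clean.flip(dims=(0, 2))
    if torch.rand(1) < 0.5:                                # diagonal transpose: swap (U, V) and (H, W) jointly
        lf_rain = lf_rain.transpose(0, 1).transpose(2, 3)
        lf_clean = lf_clean.transpose(0, 1).transpose(2, 3)
    return lf_rain, lf_clean
```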
5.3 Qualitative Evaluation
Fig. 5 visualizes some results of the five top-ranking methods.
In terms of image restoration quality, the 2D methods generate significant artifacts and wrong textures, and even adversely affect raindrop-free backgrounds, which can be attributed to the limited scale of the training set and the lack of details in the occluded background. For the other 4D methods, the restored background areas are not smooth and natural enough: they rely on convolutional neural networks to implicitly predict the residual between the input and the output, which is unstable. By comparison, our model first removes raindrops by directly utilizing useful pixel information in the raindrop LF, which reduces the dependence on, and the difficulty of, residual learning. In Fig. 5, our results show the clearest and most realistic details in the restored backgrounds.
Because LFRR deals with all views of the LF, view consistency is also an important measurement. In Fig. 5, we show horizontal EPIs corresponding to the dotted lines for view consistency comparisons. While the EPIs of other methods contain noisy artifacts, our EPIs have the most consistent lines compared with the ground truth.
Table 1: Quantitative comparison. AttGAN to MSPIR are 2D methods; SAS to MEGNet are 4D methods.

| Metrics | AttGAN (CVPR18) | RSECAN (ECCV18) | SDA (ICCV19) | PIDN (CVPR19) | DualRes (CVPR19) | MSPIR (CVPR21) | SAS (TIP18) | MDFN (Arxiv20) | InterNet (ECCV20) | MEGNet (TIP21) | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | 30.3670 | 31.8507 | 27.0175 | 31.7731 | 31.8626 | 33.0273 | 36.2761 | 37.4302 | 37.6900 | 37.1023 | 37.9671 |
| SSIM | 0.9341 | 0.9393 | 0.9098 | 0.9317 | 0.9431 | 0.9440 | 0.9575 | 0.9710 | 0.9716 | 0.9681 | 0.9734 |
| Params | 6.2M | 0.26M | 7.2M | 0.17M | 10M | 3.6M | 0.74M | 0.48M | 8M | 0.42M | 0.4M |
5.4 Quantitative Evaluation
The quantitative comparisons are shown in Table 1. Our model achieves the best performance in both PSNR and SSIM with light-weight parameters. It is worth noting that, compared with the second best method InterNet, our model generates better results with only one twentieth of its learnable parameters.
According to Table 1, the overall performance of the 2D methods is not as good as that of the 4D methods. 2D models benefit from more available datasets and a higher upper limit on model parameters than 4D models. However, when trained on the current small-scale training set, these 2D methods have two obvious disadvantages. Firstly, because the texture details covered by raindrops in a single image are completely lost, 2D methods rely more on the inpainting patterns learned from the training data when restoring the occluded background areas, while the small-scale training set provides few inpainting patterns. Secondly, compared with the LF based methods, 2D methods lack the scene structure characteristics of the LF and are therefore weaker in the design of the learning objective.
It can be seen that SDA performs worse than the other 2D approaches. SDA introduces the edge map as an additional input and builds a module to exploit the physical shape properties of raindrops, including closedness and roundness. However, since the camera focuses on the background when shooting, the raindrop areas in our dataset exhibit severe blur and grid effects, which poses great difficulty for SDA. Besides, some scenes in our dataset contain rain streaks, which are also hard for SDA to deal with.
Compared with 2D inputs, the LF with its unique structural characteristics provides abundant complementary information, which makes 4D approaches depend more on the input LF than on the inpainting patterns learned from the training set. Therefore, 4D methods can achieve strong raindrop removal performance even when trained on a small-scale dataset.
Table 2: Ablation results. Spa and Ang denote spatial and angular interpolation in the re-sampling module; MDFB and DGEB denote the encode block used in the refinement module.

| Spa | Ang | MDFB | DGEB | PSNR | SSIM |
| --- | --- | --- | --- | --- | --- |
| ✓ | | | ✓ | 36.7829 | 0.9655 |
| | ✓ | | ✓ | 37.8123 | 0.9725 |
| ✓ | ✓ | | | 36.6814 | 0.9679 |
| ✓ | ✓ | ✓ | | 37.9210 | 0.9725 |
| ✓ | ✓ | | ✓ | 37.9671 | 0.9734 |
6 Ablation Study
6.1 Effects of Re-sampling Patterns
Apart from the 4D re-sampling in both spatial and angular dimensions mentioned above, re-sampling can be separately conducted in the spatial or angular dimensions. Since the re-sampling module and the refinement module are strongly coupled, we conduct two ablation experiments with the refinement module to evaluate re-sampling patterns:
- 2D spatial re-sampling with the refinement module.
- 2D angular re-sampling with the refinement module.
In both cases, the model predicts a 2D location offset, then performs bilinear interpolation on $L_r$ in the corresponding dimensions according to the predicted re-sampling positions. The results are listed in the first two lines of Table 2.
It is hard to recover the occluded background areas by spatial re-sampling due to the lack of texture details, and the ineffective spatial re-sampling also hinders the subsequent processing. As shown in the first row of Table 2, the performance of the model using spatial re-sampling is the worst among the re-sampling patterns.
When re-sampling only in the angular dimensions, the model can obtain effective information from other views to supplement background details. However, due to the disparity and occlusions in the background, it is difficult for the angular re-sampling strategy to accurately recover the details of occlusion boundaries in the background. By comparison, 4D re-sampling alleviates this problem by providing a wider search space. In the second row of Table 2, the model using angular re-sampling performs better than the spatial re-sampling model and worse than the 4D re-sampling model, which not only verifies the effectiveness of angular re-sampling, but also confirms the superiority of 4D re-sampling. It is worth mentioning that the proposed model using only angular re-sampling already performs better than all approaches involved in the comparisons.
6.2 Effects of the Refinement Module
As mentioned in Section 4.3, the RM is designed to solve the shortcomings of the re-sampling strategy. In this subsection, we conduct two ablation experiments to separately verify the effectiveness of the RM and the DGEB:
- Abandon the refinement module.
- Replace the DGEB in the refinement module with the MDFB.
For the former ablation experiment, we enlarge the channel number of the RSM to ensure a parameter quantity similar to the complete model. The corresponding results are listed in the third and fourth lines of Table 2. Without the RM, the model's performance decreases considerably, e.g., a PSNR decline of about 1.3 dB is observed, which illustrates that the RM is essential. The comparison between the fourth and fifth lines shows that the DGEB brings slight PSNR and SSIM improvements.
6.3 Performance Analysis
In this subsection, we discuss more about the performance of each module.
In Fig. 6, we show a visual instance of the 4D re-sampling procedure. It illustrates that points in raindrop areas can re-sample pixels with less raindrop pollution from other views. According to the offset list, 4D re-sampling in raindrop areas is dominated by the offsets in the angular dimensions, while the spatial offsets expand the sampling range on the basis of the angular offsets, which accords with our analysis of spatial and angular re-sampling in Section 6.1. The re-sampled result shows that the re-sampling module directly utilizes the complementary pixel information between different views to produce a good initial raindrop removal result.
In Fig. 7, more intermediate results produced by our model are visualized. We can see that the residual map $R$ generated by the refinement module plays an auxiliary role in making $L_s$ more natural by providing some high-frequency information, which illustrates that the 4D re-sampling strategy reduces the dependence on, and the difficulty of, implicit residual learning.
7 Conclusions
In this paper, we illustrate the advantages of the light field in the raindrop removal task compared with a single image. We propose a novel light field raindrop removal approach which consists of a re-sampling module and a refinement module. The re-sampling module generates a new light field with less raindrop pollution through position prediction and 4D interpolation, while the refinement module improves the re-sampled light field. Furthermore, we build the first real scene light field raindrop removal dataset. Experiments demonstrate the effectiveness of our model.
References
- [1] Chen, J., Zhang, S., Lin, Y.: Attention-based multi-level fusion network for light field depth estimation. In: Proc. of AAAI (2021)
- [2] Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable convolutional networks. In: Proc. of ICCV (2017)
- [3] Eigen, D., Krishnan, D., Fergus, R.: Restoring an image taken through a window covered with dirt or rain. In: Proc. of ICCV (2013)
- [4] Fu, X., Huang, J., Zeng, D., Huang, Y., Ding, X., Paisley, J.: Removing rain from single images via a deep detail network. In: Proc. of CVPR (2017)
- [5] Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proc. of CVPR (2018)
- [6] Jiang, K., Wang, Z., Yi, P., Chen, C., Huang, B., Luo, Y., Ma, J., Jiang, J.: Multi-scale progressive fusion network for single image deraining. In: Proc. of CVPR (2020)
- [7] Jin, J., Hou, J., Yuan, H., Kwong, S.: Learning light field angular super-resolution via a geometry-aware network. In: Proc. of AAAI (2020)
- [8] Jing, D., Zhang, S., Cong, R., Lin, Y.: Occlusion-aware bi-directional guided network for light field salient object detection. In: Proc. of ACM MM (2021)
- [9] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- [10] Li, X., Wu, J., Lin, Z., Liu, H., Zha, H.: Recurrent squeeze-and-excitation context aggregation net for single image deraining. In: Proc. of ECCV (2018)
- [11] Liu, X., Suganuma, M., Sun, Z., Okatani, T.: Dual residual networks leveraging the potential of paired operations for image restoration. In: Proc. of CVPR (2019)
- [12] Qian, R., Tan, R.T., Yang, W., Su, J., Liu, J.: Attentive generative adversarial network for raindrop removal from a single image. In: Proc. of CVPR (2018)
- [13] Quan, R., Yu, X., Liang, Y., Yang, Y.: Removing raindrops and rain streaks in one go. In: Proc. of CVPR (2021)
- [14] Quan, Y., Deng, S., Chen, Y., Ji, H.: Deep learning for seeing through window with raindrops. In: Proc. of ICCV (2019)
- [15] Ren, D., Zuo, W., Hu, Q., Zhu, P., Meng, D.: Progressive image deraining networks: A better and simpler baseline. In: Proc. of CVPR (2019)
- [16] Shao, M.W., Li, L., Meng, D.Y., Zuo, W.M.: Uncertainty guided multi-scale attention network for raindrop removal from a single image. IEEE Transactions on Image Processing (2021)
- [17] Sun, Q., Zhang, S., Chang, S., Zhu, L., Lin, Y.: Multi-dimension fusion network for light field spatial super-resolution using dynamic filters. arXiv preprint arXiv:2008.11449 (2020)
- [18] Wang, H., Wu, Y., Li, M., Zhao, Q., Meng, D.: A survey on rain removal from video and single image. arXiv preprint arXiv:1909.08326 (2019)
- [19] Wang, T., Yang, X., Xu, K., Chen, S., Zhang, Q., Lau, R.W.: Spatial attentive single-image deraining with a high quality real rain dataset. In: Proc. of CVPR (2019)
- [20] Wang, Y., Wang, L., Yang, J., An, W., Yu, J., Guo, Y.: Spatial-angular interaction for light field image super-resolution. In: Proc. of ECCV (2020)
- [21] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing (2004)
- [22] Yasarla, R., Patel, V.M.: Uncertainty guided multi-scale residual learning-using a cycle spinning cnn for single image de-raining. In: Proc. of CVPR (2019)
- [23] Yeung, H.W.F., Hou, J., Chen, X., Chen, J., Chen, Z., Chung, Y.Y.: Light field spatial super-resolution using deep efficient spatial-angular separable convolution. IEEE Transactions on Image Processing (2018)
- [24] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.H., Shao, L.: Multi-stage progressive image restoration. In: Proc. of CVPR (2021)
- [25] Zhang, S., Chang, S., Lin, Y.: End-to-end light field spatial super-resolution network using multiple epipolar geometry. IEEE Transactions on Image Processing (2021)
- [26] Zhang, S., Shen, Z., Lin, Y.: Removing foreground occlusions in light field using micro-lens dynamic filter. In: Proc. of IJCAI (2021)
- [27] Zhu, X., Hu, H., Lin, S., Dai, J.: Deformable convnets v2: More deformable, better results. In: Proc. of CVPR (2019)