
Tao Wang, Ziqian Shao, and Tong Lu: Nanjing University, Nanjing, 210023, China. E-mail: [email protected], [email protected], [email protected]
Kaihao Zhang: Harbin Institute of Technology, Shenzhen, 518055, China. E-mail: [email protected]
Wenhan Luo: Hong Kong University of Science and Technology, Hong Kong. E-mail: [email protected]
Bjorn Stenger: Rakuten Institute of Technology, Japan. E-mail: [email protected]
Tae-Kyun Kim: Imperial College London, London, UK & KAIST, Daejeon, South Korea. E-mail: [email protected]
Wei Liu: Tencent, Shenzhen, 518107, China. E-mail: [email protected]
Hongdong Li: Australian National University, Australia. E-mail: [email protected]

GridFormer: Residual Dense Transformer with Grid Structure for Image Restoration in Adverse Weather Conditions

Tao Wang · Kaihao Zhang · Ziqian Shao · Wenhan Luo · Bjorn Stenger · Tong Lu · Tae-Kyun Kim · Wei Liu · Hongdong Li
(Received: date / Accepted: date)
Abstract

Image restoration in adverse weather conditions is a difficult task in computer vision. In this paper, we propose a novel transformer-based framework called GridFormer which serves as a backbone for image restoration under adverse weather conditions. GridFormer is designed in a grid structure using a residual dense transformer block, and it introduces two core designs. First, it uses an enhanced attention mechanism in the transformer layer. The mechanism includes stages of the sampler and compact self-attention to improve efficiency, and a local enhancement stage to strengthen local information. Second, we introduce a residual dense transformer block (RDTB) as the final GridFormer layer. This design further improves the network’s ability to learn effective features from both preceding and current local features. The GridFormer framework achieves state-of-the-art results on five diverse image restoration tasks in adverse weather conditions, including image deraining, dehazing, deraining & dehazing, desnowing, and multi-weather restoration. The source code and pre-trained models are available at https://github.com/TaoWangzj/GridFormer.

1 Introduction

Capturing high-quality images in adverse weather conditions like rain, haze, and snow is a challenging task due to the complex degradations that occur in such conditions, including color distortion, blur, noise, low contrast, and other issues that directly lower visual quality. Furthermore, such degradation can lead to difficulties in downstream computer vision tasks such as object recognition and scene understanding itti1998model ; carion2020end .

Traditional methods for image restoration in adverse weather conditions often rely on handcrafted priors such as smoothness and dark channel, with linear transformations roth2005fields ; garg2005does ; he2010single ; chen2013generalized . However, these methods are limited in their ability to address complex weather conditions due to poor prior generalization. Recently, convolutional neural network (CNN) based methods have been proposed to handle the problems of image deraining fu2017clearing ; wang2019spatial ; you2015adherent , dehazing cai2016dehazenet ; ren2016single ; zhang2018densely , and desnowing liu2018desnownet ; li2019stacked ; zhang2021deep . These methods focus on learning a mapping from the weather-degraded image to the restored image using specific architectural designs, such as residual learning liu2019dual ; jiang2020multi , multi-scale or multi-stage networks dong2020multi ; zhang2021deep , dense connections liu2019griddehazenet ; zhang2021deep , GAN structures qu2019enhanced ; jaw2020desnowgan , and attention mechanisms zhang2020pyramid ; zamir2021multi . However, these methods are often designed for a single specific task and may not work well for multi-weather restoration.

Figure 1: Comparison results for image restoration in adverse weather conditions. Results on (top) weather-specific restoration, and (bottom) multi-weather restoration tasks, showing state-of-the-art performance in terms of PSNR.

Recently, a new approach has emerged to address the challenge of multi-weather restoration in a unified architecture li2020all ; valanarasu2022transweather ; li2022all ; ozdenizci2023restoring . The pioneering work of Li et al.li2020all proposes a multi-encoder and decoder network, with each encoder dedicated to processing one type of degradation. The network is optimized using neural architecture search. Subsequent works have borrowed this structure to improve multi-weather restoration performance. For instance, Valanarasu et al.valanarasu2022transweather introduced the TransWeather network that employs self-attention for multi-weather restoration. Although TransWeather is more efficient than the task-specific encoder network, its performance is constrained by its inadequate exploitation of feature fusion across different scales in the network. Recently, some works have focused on designing general backbone networks that exploit multi-scale features for vision tasks. For example, HRNet wang2020deep and HRFormer NEURIPS2021_3bbfdde8 are built with a multi-resolution parallel design to learn high-resolution representations. RevCol cai2022reversible adopts a design based on columns (each column is a subnetwork), which aims to learn disentangled representations. These methods work well on human pose estimation, semantic segmentation, object detection, etc. However, there are currently no specifically designed transformer-based methods that effectively utilize such features to recover degraded images under severe weather conditions.

In this paper, we propose GridFormer, a transformer-based network for image restoration in adverse weather conditions. GridFormer uses residual dense transformer blocks (RDTB) embedded in a grid structure to exploit hierarchical image features. The RDTB, as the key unit of the GridFormer, contains compact-enhanced transformer layers with dense connections, and local feature fusion with local skip connections. The compact-enhanced transformer layer employs a sampler and compact self-attention for efficiency and a local enhancement stage for strengthening local details. We evaluate GridFormer on weather degradation benchmarks, including RainDrop qian2018attentive , SOTS-indoor li2018benchmarking , Haze4K liu2021synthetic , Outdoor-Rain li2019heavy , and Snow100K liu2018desnownet , see Fig. 1.

In summary, the contributions of this work are three-fold:

  • Unified Framework: We propose a novel and unified framework called GridFormer, which is tailored specifically for image restoration under adverse weather conditions. This innovative framework seamlessly integrates residual dense transformer blocks (RDTBs) with a grid structure, creating a comprehensive architecture. Notably, incorporating RDTBs within a grid structure enables GridFormer to capture hierarchical image features efficiently. The grid structure facilitates the integration of contextual information from various spatial scales, enhancing the network’s ability to restore images effectively.

  • Compact-enhanced Self-Attention: GridFormer introduces the compact-enhanced self-attention mechanism, a critical contribution. This mechanism enhances the local modeling capacity of transformer units, enabling GridFormer to capture fine-grained details in adverse weather conditions while improving network efficiency.

  • State-of-the-art Performance: We show the general applicability of our GridFormer by applying it to five diverse image restoration tasks in adverse weather conditions, including image deraining, image dehazing, image deraining & dehazing, desnowing, and multi-weather restoration. Our GridFormer achieves a new state-of-the-art on both weather-specific and multi-weather restoration tasks.

The remainder of this paper is organized as follows: Sec. 2 discusses related work. Sec. 3 introduces our proposed method. Then, experimental results are reported and analyzed in Sec. 4. Sec. 5 discusses limitations and future work. Finally, Sec. 6 concludes the paper.

2 Related Work

The proposed method is related to image restoration in adverse weather conditions and transformer architecture, which are reviewed in the following.

2.1 Restoration in Adverse Weather Conditions

Image restoration in adverse weather conditions is the task of restoring a high-quality image under weather-related foreground degradations like rain, fog, and snow. Specifically, it typically includes image deraining ba2022not ; kang2011automatic ; li2019heavy ; yang2019joint ; wang2020model , image dehazing berman2016non ; cai2016dehazenet ; ren2018gated ; liu2019griddehazenet , image desnowing ren2017video ; liu2018desnownet ; zhang2021deep , and multi-weather restoration li2020all ; valanarasu2022transweather ; ozdenizci2023restoring . Traditional model-based methods he2010single ; luo2015removing ; zhu2017joint focus on exploring appropriate weather-related priors to address the image restoration problem. However, there has been a surge in the number of data-driven methods proposed in recent years. Next, we mainly discuss these data-driven methods in detail.

Deraining: The task of removing rain streaks from images has been approached using a deep network called DerainNet, proposed by Fu et al.fu2017clearing . This approach learns the nonlinear mapping between clean and rainy detail layers. Several techniques have been proposed to improve performance, such as the recurrent context aggregation in RESCAN li2018recurrent , spatial attention in SPANet wang2019spatial , the multi-stream dense architecture in DID-MDN zhang2018density , the conditional GAN-based method in zhang2019image , and conditional variational deraining based on VAEs du2020conditional . Another approach to image deraining is removing raindrops. Yamashita et al.yamashita2005removal developed a stereo system to detect and remove raindrops, while You et al.you2015adherent proposed a motion-based method. Qian et al.qian2018attentive developed a raindrop removal benchmark and proposed an attentive GAN. Quan et al.quan2019deep introduced an image-to-image CNN with an embedded attention mechanism to recover rain-free images, and Liu et al.liu2019dual designed a dual residual network to remove raindrops. Zhang et al.zhang2021multifocal proposed a multifocal attention-based cross-scale network that employs spatial and channel attention to explore cross-scale correlations between rain streaks and background for image deraining. Recent works aim to remove both streaks and raindrops from images simultaneously quan2021removing ; xiao2022image .

Dehazing: Two pioneering methods for image dehazing are DehazeNet cai2016dehazenet and MSCNN ren2016single , which first estimate the transmission map and generate haze-free images using an atmosphere scattering model narasimhan2000chromatic . AOD-Net li2017aod represents another advancement, which estimates a single variable combining the transmission map and atmospheric light. DCPDN zhang2018densely employs two sub-networks to estimate the transmission map and the atmospheric light, respectively. Recent works have focused on directly restoring clear images from hazy images, using attention mechanisms qin2020ffa ; zhang2020pyramid , multi-scale structures dong2020multi ; liu2019griddehazenet , GAN structures qu2019enhanced , and transformers song2022vision . The network in liu2019griddehazenet is the method most similar to our GridFormer. However, GridFormer differs from liu2019griddehazenet in several ways. First, GridFormer is the first transformer-based method for image restoration in adverse weather conditions, whereas liu2019griddehazenet is a CNN-based method specifically designed for image dehazing; GridFormer is therefore more general in terms of its utility. Second, in each GridFormer layer, we design a novel compact-enhanced transformer layer and integrate it in a residual dense manner. This promotes feature reuse and consequently enhances feature representation, whereas liu2019griddehazenet uses existing residual dense blocks in its network. Finally, extensive experiments demonstrate the superior performance of GridFormer compared to the method in liu2019griddehazenet .

Desnowing: In DesnowNet liu2018desnownet , translucency and residual generation modules were employed to restore image details. Li et al.li2019stacked proposed a stacked dense network with a multi-scale structure. Chen et al.chen2020jstasr introduced a desnowing method called JSTASR, which is specifically developed for size- and transparency-aware snow removal. They used a joint scale and transparency-aware adversarial loss to improve the quality of the desnowed images. Li et al.li2020all adopted the network architecture search technique to obtain excellent results. Zhang et al.zhang2021deep proposed a dense multi-scale desnowing network that incorporates learned semantic and geometric priors. More recently, some works chen2022snowformer ; zhang2022desnowformer have explored the transformer architecture and further improved the performance.

Multi-weather restoration: Beyond the above task-specific image restoration methods, recent works li2020all ; valanarasu2022transweather ; li2022all attempt to address multi-weather restoration in a single architecture. Li et al.li2020all proposed the All-in-One network with a multi-encoder and decoder structure to restore images degraded by multiple adverse weather conditions. Specifically, they adopt separate encoders for different weather degradations and resort to neural architecture search to seek the best task-specific encoder. In li2022all , the all-in-one restoration network consists of a contrastive degraded encoder and a degradation-guided restoration network. Valanarasu et al.valanarasu2022transweather proposed an end-to-end multi-weather image restoration model named TransWeather that achieves high performance on multi-weather restoration. The core insights in TransWeather are the intra-path transformer block and a transformer decoder with learnable weather-type embeddings. In this paper, our work aligns with this direction and focuses on designing a general model to address the multi-weather restoration problem. In addition, there are methods aimed at designing effective network architectures for image restoration. For example, MPRNet zamir2021multi and MAXIM tu2022maxim are general image restoration methods that have also been successful in addressing a range of adverse weather conditions. MIMOUNet cho2021rethinking adopts an encoder-decoder-based U-shaped network with multiple inputs and multiple outputs to achieve image deblurring. In our method, we apply a coarse-to-fine strategy to the transformer network within the grid structure for image restoration under adverse weather conditions.

Figure 2: GridFormer architecture. It consists of a grid head, a grid fusion module, and a grid tail. The pyramid degraded images $\mathbf{X}_{0},\mathbf{X}_{1},\mathbf{X}_{2}$ are first fed into the grid head to extract hierarchical initial features $\mathbf{F}_{0},\mathbf{F}_{1},\mathbf{F}_{2}$. The initial features are further refined by the grid fusion module to generate features $\hat{\mathbf{F}}_{0},\hat{\mathbf{F}}_{1},\hat{\mathbf{F}}_{2}$. Finally, the grid tail reconstructs clear images $\hat{\mathbf{X}}_{0},\hat{\mathbf{X}}_{1},\hat{\mathbf{X}}_{2}$.

2.2 Vision Transformers in Image Restoration

Recently, vision transformers have witnessed great success in low-level image restoration. Specifically, inspired by the seminal work in vaswani2017attention , Chen et al.chen2021pre proposed an Image Processing Transformer (IPT) for general image restoration, which employs a special multi-head and multi-tail structure to adapt to specific image restoration tasks. However, IPT requires costly pre-training on large-scale datasets. Further, SwinIR liang2021swinir and Uformer wang2022uformer modify the original Swin Transformer block and obtain good performance with relatively low computational cost. In particular, SwinIR stacks the proposed residual transformer blocks to extract deep features for image reconstruction. Uformer adopts a U-shape structure, embedding the proposed LeWin transformer blocks to predict residual images. Yao et al.yao2022dense adopted the LeWin transformer block as a basic unit and introduced dense residual skip connections to propose a transformer-based dense residual skip-connection network called DenSformer for image denoising. Liang et al.liang2022drt proposed a recursive transformer, which first introduces a recursive local window-based self-attention structure in the network. A recent method, Restormer zamir2021restormer , a multi-scale hierarchical transformer architecture, has also achieved strong performance on image restoration tasks, e.g., deraining. Inspired by the success of these methods, we propose a general grid framework with novel transformer blocks to restore images in adverse weather conditions. SwinIR and DenSformer are the methods most closely related to our GridFormer. However, while SwinIR fuses Swin Transformer and convolutional layers in its residual Swin Transformer block, our GridFormer's residual dense block more effectively enhances feature reuse. Unlike DenSformer's dense residual transformer block, our approach is characterized by the unique compact-enhanced self-attention mechanism, local feature fusion, and local skip connections within the residual dense transformer block.

3 Method

To explore the potential of transformers for image restoration in adverse weather conditions, we propose GridFormer, built by embedding residual dense transformer blocks in a grid structure. The motivation and overall architecture of the proposed GridFormer are first introduced in Sec. 3.1, and then the core component (i.e., the residual dense transformer block) of our GridFormer is discussed in Sec. 3.2. Finally, the loss functions are presented in Sec. 3.3.

3.1 Motivation and Architecture

Motivation. Our motivation arises from the urgent need for techniques that restore images captured in unfavorable weather conditions. Weather-related factors, such as haze, rain, and snow, significantly impact the quality and perception of images, which in turn affects various practical applications such as surveillance, autonomous driving, and outdoor photography. The main objective of developing the proposed GridFormer is to address the persistent challenges caused by adverse weather conditions on image quality. Our goal is to create an image restoration framework that effectively handles a range of adverse weather scenarios, thereby enhancing the quality of images affected by these conditions.

Architecture. As shown in Fig. 2, GridFormer contains three paths from the weather-degraded images to the recovered ones, where each path conducts restoration at a different image resolution. In GridFormer, the higher-resolution paths continuously interact with the lower-resolution paths to remove weather degradation accurately, while the lower-resolution paths provide useful global information owing to larger receptive fields. Each path is composed of seven GridFormer layers. Different paths are interlinked with a down-sampling layer, an up-sampling layer, and weighted attention fusion units to compose the columns of GridFormer. Thanks to the grid structure with three rows and seven columns, information from different resolutions can be shared effectively. Specifically, GridFormer consists of three parts: grid head (GH), grid fusion module (GFM), and grid tail (GT). We present the details of each part in the following.

Grid head. To extract initial multi-resolution features, we use a grid head architecture to process pyramid input images in parallel. Every path in the grid head consists of a feature embedding layer, achieved by $3\times 3$ convolutions, and a GridFormer layer. As shown in Fig. 2, given a weather-degraded image $\mathbf{X}_{0}$, the grid head extracts hierarchical features $\mathbf{F}=\left\{\mathbf{F}_{0},\mathbf{F}_{1},\mathbf{F}_{2}\right\}$ in different channels (i.e., $C$, $2C$, and $4C$) from the pyramid images $\mathbf{X}=\left\{\mathbf{X}_{0},\mathbf{X}_{1},\mathbf{X}_{2}\right\}$ ($1/2$ and $1/4$ scales for $\mathbf{X}_{1}$ and $\mathbf{X}_{2}$). In our experiments, we use $C=48$. The grid head computation can be defined as:

$$\mathbf{F}_{i}=\begin{cases}\mathrm{GFL}_{i}\left(\mathrm{E}_{i}(\mathbf{X}_{0})\right),&i=0\\ \mathrm{GFL}_{i}(\mathrm{E}_{i}(\mathbf{X}_{i}))+(\mathbf{F}_{i-1})_{\downarrow},&i=1,2\end{cases} \qquad (1)$$

where $i$ is the $i$-th network path, and $\mathrm{E}_{i}$ is the feature embedding layer. The $\downarrow$ symbol denotes the down-sampling layer, where we use a $3\times 3$ convolution with a pixel-unshuffle operation shi2016real to halve the features in the spatial dimensions while doubling the channels. $\mathrm{GFL}$ is a GridFormer layer that is mainly built from residual dense transformer blocks.
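To make the data flow of Eq. (1) concrete, a minimal PyTorch sketch is given below. It wires three embedding-plus-layer paths with pixel-unshuffle down-sampling between them; the class and argument names (GridHeadSketch, make_layer) are illustrative assumptions, and the GridFormer layer itself is left as a placeholder rather than the released implementation.

```python
import torch
import torch.nn as nn


class GridHeadSketch(nn.Module):
    """Sketch of the grid head in Eq. (1): three parallel paths embed the
    pyramid inputs with 3x3 convolutions, pass them through a (placeholder)
    GridFormer layer, and add the down-sampled features of the path above."""

    def __init__(self, base_channels=48, make_layer=nn.Identity):
        super().__init__()
        C = base_channels
        # 3x3 feature embedding per path, producing C, 2C, 4C channels
        self.embed = nn.ModuleList(
            [nn.Conv2d(3, C * 2 ** i, 3, padding=1) for i in range(3)])
        self.layers = nn.ModuleList([make_layer() for _ in range(3)])
        # down-sampling: 3x3 conv + pixel-unshuffle halves H, W and doubles channels
        self.down = nn.ModuleList([
            nn.Sequential(nn.Conv2d(C * 2 ** i, C * 2 ** i // 2, 3, padding=1),
                          nn.PixelUnshuffle(2))
            for i in range(2)])

    def forward(self, pyramid):
        # pyramid = [X0, X1, X2], with X1 and X2 at 1/2 and 1/4 scale
        feats = []
        for i, x in enumerate(pyramid):
            f = self.layers[i](self.embed[i](x))
            if i > 0:
                f = f + self.down[i - 1](feats[i - 1])  # Eq. (1), i = 1, 2
            feats.append(f)
        return feats  # F0, F1, F2 with C, 2C, 4C channels


pyramid = [torch.randn(1, 3, 64, 64), torch.randn(1, 3, 32, 32), torch.randn(1, 3, 16, 16)]
print([f.shape for f in GridHeadSketch()(pyramid)])  # 48, 96, 192 channels
```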

Figure 3: Grid unit structure and information flow. (a) The structure of a single grid unit, which comprises four parts: the down-sampling layer, the GridFormer layer, the up-sampling layer, and attention fusion operations. RDTL refers to the proposed residual dense transformer layer. (b) Information flow of grid units in the fusion module.

Grid fusion module. To fully integrate the hierarchical features of different rows and columns in the network, we propose a grid fusion module between the grid head and the grid tail. The structure of the proposed grid fusion module is organized into a 2D grid pattern. As illustrated in Fig. 2, the fusion module is designed in a grid-like structure of three rows and five columns. In particular, each row contains five consecutive GridFormer layers that keep the feature dimension constant. Along the column axis, depending on the position in the grid, we resort to down-sampling or up-sampling layers to change the size of the feature maps for feature fusion. Fig. 3 (a) shows a representative grid unit in the fusion module. The GridFormer layer is a dense structure consisting of three residual dense transformer layers (RDTL) and a $1\times 1$ convolution, which will be discussed in the next subsection. The down-sampling and up-sampling layers are symmetrical and use a $3\times 3$ convolution with a pixel-shuffle or pixel-unshuffle operation shi2016real to change the feature dimensions. In addition, considering that features of different scales may not be equally important, we use a simple weighted attention fusion strategy to fuse features from the row and column dimensions, as sketched below. Inspired by zheng2022t ; wang2022ultra , we first generate two trainable weights for different features, where each weight is an $n$-dimensional vector ($n$ is the number of feature channels). We then add these weighted features to derive the fused features. Grid units in the grid fusion module provide different information flows for feature fusion, as shown in Fig. 3 (b), which guides the network to produce better restored results by combining complementary information.
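A minimal sketch of such a weighted attention fusion unit is given below, assuming the simplest form consistent with the description above (one trainable per-channel weight vector per incoming feature); the class name and interface are illustrative, not the released code.

```python
import torch
import torch.nn as nn


class WeightedAttentionFusion(nn.Module):
    """Sketch of the weighted attention fusion between grid rows and columns:
    each incoming feature is re-scaled by its own trainable n-dimensional
    weight vector (n = number of channels) before summation."""

    def __init__(self, channels):
        super().__init__()
        self.w_row = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.w_col = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, feat_row, feat_col):
        # feat_row: feature flowing along the row
        # feat_col: feature arriving from the neighboring row via down-/up-sampling
        return self.w_row * feat_row + self.w_col * feat_col
```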

Figure 4: The structure of the proposed Residual Dense Transformer Block (RDTB). It includes three residual dense transformer layers, a $1\times 1$ convolution for local feature fusion, and a local skip connection for local residual learning. The residual dense transformer layer is mainly built from the proposed compact-enhanced transformer layer, which contains the compact-enhanced self-attention and an FFN.
Figure 5: Right: schematic illustration of the proposed Compact-enhanced Transformer Layer, consisting of a compact-enhanced attention and a Feed-Forward Network (FFN). Left: the compact-enhanced attention layer, which contains three steps: feature sampling, compact self-attention, and local enhancement. $H$, $W$, and $C$ denote the height, width, and number of feature channels, respectively. $r$ is the feature sampling rate. © and $\oplus$ refer to concatenation and element-wise summation operations, respectively.

Grid tail. To further improve the quality of the recovered images, we design a grid tail module to predict multi-scale outputs. The structure of the grid tail is symmetrical to that of the grid head. Specifically, each path is composed of a GridFormer layer, a $3\times 3$ convolution, and a long skip connection for image reconstruction. The skip connection is used to transmit input information directly to the grid tail module, which maintains the color and detail of the original image. The complete process is formulated as:

$$\hat{\mathbf{X}}_{i}=\mathrm{C}_{i}(\mathrm{GFL}_{i}(\hat{\mathbf{F}}_{i}))+\mathbf{X}_{i},\quad i\in\{0,1,2\}, \qquad (2)$$

where $\hat{\mathbf{X}}_{i}$ is the final result of GridFormer on the $i$-th path, $\mathrm{C}_{i}$ is a $3\times 3$ convolution, and $\hat{\mathbf{F}}_{i}, i\in\{0,1,2\}$ is the output feature of the grid fusion module. To optimize the network parameters, we train GridFormer using a combination of two losses, a multi-scale Charbonnier loss charbonnier1994two and a perceptual loss johnson2016perceptual , where the weight of the perceptual loss is set to $0.1$. Next, we detail the core component, the residual dense transformer block, that is used to build the elemental layer of GridFormer.
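The following sketch illustrates one grid-tail path implementing Eq. (2); the class name, the placeholder GridFormer layer, and the three-channel output are assumptions made for illustration.

```python
import torch.nn as nn


class GridTailPath(nn.Module):
    """Sketch of one grid-tail path in Eq. (2): a GridFormer layer, a 3x3
    convolution back to RGB, and a long skip connection from the degraded
    input, which preserves the color and detail of the original image."""

    def __init__(self, channels, make_layer=nn.Identity):
        super().__init__()
        self.layer = make_layer()
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, fused_feature, degraded_image):
        return self.to_rgb(self.layer(fused_feature)) + degraded_image
```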

3.2 Residual Dense Transformer Block

Previous works huang2017densely ; zhang2018residual ; liu2019griddehazenet ; zhang2021deep ; zheng2022t have shown that using dense connections has many advantages: mitigating the vanishing gradient problem, encouraging feature reuse, and enhancing information propagation. Accordingly, we propose to design the transformer with dense connections to build the basic GridFormer layers. Specifically, we propose residual dense transformer blocks (RDTB) to compose GridFormer using different settings. As illustrated in Fig. 4, an RDTB contains densely connected transformer layers, local feature fusion, and local residual learning. When implementing the dense connections, we incorporate three residual dense transformer layers (RDTL) with the growth rate set to 16. This implies that each individual RDTL generates 16 new feature maps, which are subsequently concatenated with the feature maps received from the preceding layers. Within each RDTL, we use several compact-enhanced transformer layers (CETL) with a ReLU glorot2011deep activation function to extract features, and adopt a $1\times 1$ convolution to ensure the same number of channels for input and output features. For local feature fusion and local residual learning, we introduce a $1\times 1$ convolution and a local skip connection in the RDTB to control the final output.
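The dense wiring of the RDTB can be sketched as follows; the plain convolution-plus-ReLU stand-in for an RDTL and the constructor interface are illustrative assumptions rather than the actual compact-enhanced transformer layers.

```python
import torch
import torch.nn as nn


def conv_relu(in_ch, out_ch):
    # Placeholder for a residual dense transformer layer (RDTL); in GridFormer
    # each RDTL stacks compact-enhanced transformer layers with a ReLU and a
    # 1x1 convolution instead of this plain conv block.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))


class ResidualDenseTransformerBlock(nn.Module):
    """Sketch of the RDTB wiring: three densely connected layers, each adding
    `growth_rate` (16) new feature maps, followed by a 1x1 convolution for
    local feature fusion and a local skip connection for residual learning."""

    def __init__(self, channels, growth_rate=16, num_layers=3, make_rdtl=conv_relu):
        super().__init__()
        self.layers = nn.ModuleList([
            make_rdtl(channels + i * growth_rate, growth_rate)
            for i in range(num_layers)])
        self.fusion = nn.Conv2d(channels + num_layers * growth_rate, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # dense connection: every layer sees all preceding feature maps
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fusion(torch.cat(feats, dim=1))  # local residual learning


block = ResidualDenseTransformerBlock(channels=48)
print(block(torch.randn(1, 48, 32, 32)).shape)  # torch.Size([1, 48, 32, 32])
```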

Directly applying transformers vaswani2017attention ; dosovitskiyimage to our grid network would lead to high computational overhead. We thus develop a cost-effective compact-enhanced attention, with sampler and compact self-attention stages to improve efficiency, as well as a local enhancement stage to strengthen local information in the transformer. Fig. 5 illustrates the detailed structure of the proposed compact-enhanced attention.

Feature sampling. We first design a sampler to produce down-sampled input tokens for the subsequent self-attention computation. The sampler is built from an average pooling layer with stride $r$. The sampler not only increases the receptive field to observe more information, but also enhances the invariance of the input tokens. In addition, the resulting lower-resolution features reduce the computation of subsequent layers. The feature sampling step is formulated as:

$$\mathbf{Z}=\mathrm{Avg}_{r}(\mathbf{Z}_{in}), \qquad (3)$$

where $\mathbf{Z}_{in}\in\mathbb{R}^{H\times W\times C}$ represents the input token and $\mathbf{Z}\in\mathbb{R}^{\frac{H}{r}\times\frac{W}{r}\times C}$ is the output token. $\mathrm{Avg}_{r}$ indicates the average pooling operation with stride $r$. In the experiments, we empirically set $r$ to $4$, $2$, and $2$ in the three rows of GridFormer layers, respectively (see Sec. 4.4).
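A minimal example of the sampler in Eq. (3), assuming a plain average-pooling implementation; the function name and the toy tensor shapes are for illustration only.

```python
import torch
import torch.nn.functional as F


def feature_sampler(z_in: torch.Tensor, r: int) -> torch.Tensor:
    """Eq. (3): average pooling with stride r maps H x W x C to H/r x W/r x C."""
    return F.avg_pool2d(z_in, kernel_size=r, stride=r)


z_in = torch.randn(1, 48, 64, 64)   # toy input token
z = feature_sampler(z_in, r=4)      # r = 4 in the first row of GridFormer layers
print(z.shape)                      # torch.Size([1, 48, 16, 16])
```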

Compact self-attention. Given a feature of dimensions $H\times W\times C$, recent low-level transformer-based methods valanarasu2022transweather ; lee2022knn ; wang2022uformer explore the long-range dependence between keys and queries to calculate an $N\times N$ attention map ($N=H\times W$), which leads to high complexity and fails to model global information along the channel dimension. Thus, for more efficient computation in self-attention, we resort to a different strategy. Specifically, as illustrated in Fig. 5, for an output feature $\mathbf{Z}\in\mathbb{R}^{\frac{H}{r}\times\frac{W}{r}\times C}$ from the sampler, we first apply a split operation along the channel dimension to produce $\mathbf{z}_{1}\in\mathbb{R}^{\frac{H}{r}\times\frac{W}{r}\times\frac{C}{2}}$ and $\mathbf{z}_{2}\in\mathbb{R}^{\frac{H}{r}\times\frac{W}{r}\times\frac{C}{2}}$. We then apply a convolution layer with a reshape operation on $\mathbf{z}_{1}$ and $\mathbf{z}_{2}$, which projects them into Queries ($\mathbf{q}_{1},\mathbf{q}_{2}\in\mathbb{R}^{\frac{C}{2}\times\frac{HW}{r^{2}}}$), Keys ($\mathbf{k}_{1},\mathbf{k}_{2}\in\mathbb{R}^{\frac{C}{2}\times\frac{HW}{r^{2}}}$), and Values ($\mathbf{v}_{1},\mathbf{v}_{2}\in\mathbb{R}^{\frac{C}{2}\times\frac{HW}{r^{2}}}$), respectively. Inspired by existing methods petit2021u ; susladkar2023gafnet ; gu2023adafuse ; zhang2024cf , we exchange the Values produced by the two branches when performing multi-head self-attention, which improves the interaction between $\mathbf{z}_{1}$ and $\mathbf{z}_{2}$. Compared with exchanging Queries for feature interaction in cross-attention petit2021u ; zhang2024cf , our approach exchanges the Values for interaction and feature fusion, which we find beneficial for restoration performance (see Sec. 4.4). Finally, we obtain the result $\hat{\mathbf{Z}}$ by concatenating the outputs of the two multi-head self-attention branches and reshaping their dimensions. The proposed compact self-attention mechanism can be formulated as:

$$\hat{\mathbf{Z}}=\left[\operatorname{softmax}_{1}\left(\frac{\mathbf{q}_{1}\mathbf{k}_{1}^{\top}}{\sqrt{d_{k_{1}}}}\right)\mathbf{v}_{2},\;\operatorname{softmax}_{2}\left(\frac{\mathbf{q}_{2}\mathbf{k}_{2}^{\top}}{\sqrt{d_{k_{2}}}}\right)\mathbf{v}_{1}\right], \qquad (4)$$

where $[\cdot]$ indicates the concatenation operation. The major computational overhead in transformers arises from the self-attention (SA) layer. In recent transformer-based methods that employ spatial modeling for SA, the complexity of the key-query dot-product interaction grows quadratically with the spatial resolution of the input, i.e., $O(N\times N)$. Our proposed compact self-attention addresses this by performing SA across channels instead of the spatial dimension, computing the cross-covariance across channels to produce an attention map that implicitly encodes the global context. Consequently, our compact self-attention generates an attention map of size $\mathbb{R}^{C\times C}$, instead of the huge regular attention map of size $\mathbb{R}^{N\times N}$, and thus successfully reduces complexity.
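The following sketch captures the channel split, Value exchange, and channel-wise attention described above, assuming a single attention head per branch and $1\times 1$ convolutions for the Query/Key/Value projections; the class name and these simplifications are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn


class CompactSelfAttention(nn.Module):
    """Sketch of compact self-attention (Eq. 4): the sampled feature is split
    along channels into z1 and z2, each branch produces Queries/Keys/Values,
    the branches exchange their Values, and attention is computed across
    channels, giving a (C/2) x (C/2) map per branch instead of an N x N map."""

    def __init__(self, channels):
        super().__init__()
        assert channels % 2 == 0
        half = channels // 2
        self.qkv1 = nn.Conv2d(half, half * 3, kernel_size=1)
        self.qkv2 = nn.Conv2d(half, half * 3, kernel_size=1)

    @staticmethod
    def _attend(q, k, v):
        # channel attention: (c x n) @ (n x c) -> c x c, then applied to values
        attn = torch.softmax(q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5), dim=-1)
        return attn @ v

    def forward(self, z):
        b, c, h, w = z.shape
        z1, z2 = z.chunk(2, dim=1)
        q1, k1, v1 = self.qkv1(z1).flatten(2).chunk(3, dim=1)  # each: b x c/2 x hw
        q2, k2, v2 = self.qkv2(z2).flatten(2).chunk(3, dim=1)
        # exchange Values between the two branches for feature interaction
        out = torch.cat([self._attend(q1, k1, v2),
                         self._attend(q2, k2, v1)], dim=1)
        return out.reshape(b, c, h, w)


attn = CompactSelfAttention(channels=48)
print(attn(torch.randn(1, 48, 16, 16)).shape)  # torch.Size([1, 48, 16, 16])
```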

Local enhancement. As shown in Fig. 5, we add a local feature enhancement stage at the tail of the compact self-attention. This stage consists of a deconvolution (transposed convolution) for local feature propagation and a $1\times 1$ convolution for local fusion:

$$\mathbf{Z}_{out}=\mathrm{Conv}_{1\times 1}(\mathrm{Deconv}(\hat{\mathbf{Z}})), \qquad (5)$$

where $\mathbf{Z}_{out}$ is the final output, and $\mathrm{Conv}_{1\times 1}$ and $\mathrm{Deconv}$ are $1\times 1$ convolution and deconvolution layers, respectively.
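A sketch of the local enhancement stage in Eq. (5) is given below; the assumption that the transposed convolution also undoes the stride-r sampling is ours and made for illustration.

```python
import torch
import torch.nn as nn


class LocalEnhancement(nn.Module):
    """Sketch of the local enhancement stage in Eq. (5): a transposed
    convolution for local feature propagation (assumed here to also reverse
    the stride-r sampling) followed by a 1x1 convolution for local fusion."""

    def __init__(self, channels, r=4):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(channels, channels, kernel_size=r, stride=r)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, z_hat):
        return self.fuse(self.deconv(z_hat))


z_hat = torch.randn(1, 48, 16, 16)
print(LocalEnhancement(48, r=4)(z_hat).shape)  # torch.Size([1, 48, 64, 64])
```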

3.3 Loss Function

Inspired by existing works yin2023multiscale ; valanarasu2022transweather ; ye2022perceiving ; jiang2020decomposition ; li2022two ; yu2022frequency ; ali2023vision ; hsu2023wavelet ; qiao2023dual , we use a loss function combining the Charbonnier loss charbonnier1994two and the perceptual loss wang2018esrgan to train our GridFormer. We regard the Charbonnier loss as a pixel-wise loss, which is used between the recovered images and the ground truth images at each scale, and the perceptual loss is used to help our model produce visually pleasing results. The Charbonnier loss is defined as:

$$\mathcal{L}_{\text{char}}=\frac{1}{3}\sum_{k=0}^{2}\sqrt{\|\hat{\mathbf{X}}_{k}-\mathbf{I}_{k}\|^{2}+\varepsilon^{2}}, \qquad (6)$$

where $\hat{\mathbf{X}}_{k}$ and $\mathbf{I}_{k}$ refer to the restored image and ground-truth image respectively, and $k$ represents the index of the image scale level in our GridFormer. The constant $\varepsilon$ is empirically set to $10^{-3}$. For the perceptual loss, following previous work wang2018esrgan , we adopt a pre-trained VGG19 simonyan2014very to extract the perceptual features from the $\mathrm{Conv5\_4}$ layer of VGG19, and then use the $L_{1}$ loss function to compute the difference between the perceptual features of the restored images and their corresponding ground truths. This effective perceptual loss focuses on capturing high-level semantic information, resulting in sharper edges and visually appealing outcomes, all while ensuring computational efficiency wang2018esrgan . Specifically, the perceptual loss is as follows:

$$\mathcal{L}_{per}=\frac{1}{3}\sum_{k=0}^{2}\frac{1}{CHW}\|\phi(\hat{\mathbf{X}}_{k})-\phi(\mathbf{I}_{k})\|_{1}, \qquad (7)$$

where $C$, $H$, and $W$ denote the dimensions of the feature map obtained from the $\mathrm{Conv5\_4}$ layer of the pretrained VGGNet $\phi$.

The final loss function $\mathcal{L}$ to train our proposed GridFormer is as follows:

$$\mathcal{L}=\mathcal{L}_{\text{char}}+\alpha\mathcal{L}_{\text{per}}, \qquad (8)$$

where $\mathcal{L}_{\text{char}}$ denotes the Charbonnier loss and $\mathcal{L}_{\text{per}}$ is the perceptual loss. $\alpha$ is a hyper-parameter used to balance the two losses. In our experiments, it is empirically set to $0.1$.
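A sketch of the combined objective in Eqs. (6)-(8) is given below, assuming a per-pixel form of the Charbonnier loss and torchvision's VGG19 features up to conv5_4; normalization details may differ from the released training code.

```python
import torch
import torch.nn as nn
import torchvision


class GridFormerLoss(nn.Module):
    """Sketch of the training objective in Eqs. (6)-(8): a multi-scale
    Charbonnier loss (per-pixel form assumed) plus a VGG19 conv5_4 perceptual
    loss weighted by alpha = 0.1."""

    def __init__(self, alpha=0.1, eps=1e-3):
        super().__init__()
        self.alpha, self.eps = alpha, eps
        # VGG19 features up to conv5_4 (index 34), frozen
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:35].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg

    def forward(self, restored, targets):
        # restored / targets: lists of images at the three scales (k = 0, 1, 2)
        char, per = 0.0, 0.0
        for x_hat, gt in zip(restored, targets):
            char += torch.sqrt((x_hat - gt) ** 2 + self.eps ** 2).mean()
            per += nn.functional.l1_loss(self.vgg(x_hat), self.vgg(gt))
        n = len(restored)
        return char / n + self.alpha * per / n
```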

3.4 Differences from Existing Methods

While HRNet wang2020deep , HRFormer NEURIPS2021_3bbfdde8 , and RevCol cai2022reversible utilize a grid-like structure, they differ from our GridFormer in several respects. First, GridFormer captures multi-scale features directly from the pixel level, in contrast to HRNet and HRFormer, which perform multi-scale feature extraction at the feature level, and RevCol, which does not incorporate a multi-scale mechanism. Second, GridFormer integrates a new self-attention mechanism to fuse multi-scale features more effectively. This sets it apart from HRNet, HRFormer, and RevCol, which do not employ compact self-attention in their feature fusion processes. Third, our network is specifically designed for image restoration under adverse weather conditions, striving to produce images of superior quality. Unlike HRNet, HRFormer, and RevCol, which are not engineered for this challenge, our network architecture is uniquely suited to tackle the complexities inherent in this task.

4 Experiments and Analysis

We evaluate our GridFormer for several image restoration tasks in severe weather conditions, including (1) image dehazing, (2) image desnowing, (3) raindrop removal, (4) image deraining and dehazing, and (5) multi-weather restoration. Specifically, in this section, we first introduce datasets, the implementation details of our GridFormer, and the comparison methods. Then, we show the restoration results of our GridFormer and the comparison with the state-of-the-art methods. Finally, we conduct extensive ablation studies to verify the effectiveness of modules in our GridFormer.

4.1 Experimental Setup

We evaluate GridFormer on several image restoration tasks under severe weather conditions.

Datasets. For image dehazing, the first setting uses ITS li2018benchmarking to train the model and test it on indoor SOTS li2018benchmarking . Another setting is training and testing on Haze4K liu2021synthetic covering both indoor and outdoor scenes. Desnowing is evaluated on Snow100K liu2018desnownet . RainDrop qian2018attentive is used for raindrop removal, and Outdoor-Rain li2019heavy is used for image deraining and dehazing. For multi-weather restoration, we train the model on a combination of images degraded in adverse weather conditions similar to ozdenizci2023restoring . Table 1 lists the datasets used for the different tasks. In the following, we introduce the dataset and experimental details for specific tasks for image restoration in adverse weather conditions.

Image dehazing. Following liu2019griddehazenet ; qin2020ffa ; song2022vision ; tu2022maxim , we conduct our experiments on the RESIDE li2018benchmarking and Haze4K liu2021synthetic datasets. Specifically, for the RESIDE dataset, we adopt the Indoor Training Set (ITS) to train the model and test it on the indoor set of the SOTS dataset. ITS contains 13,990 indoor image pairs and the indoor set of SOTS includes 500 indoor image pairs. For the Haze4K dataset, we follow the previous work ye2021perceiving . Haze4K contains 3,000 hazy and haze-free image pairs for training and 1,000 for testing. The Haze4K dataset is more challenging as it considers both indoor and outdoor scenes.

Table 1: Dataset summary on five tasks of image restoration in adverse weather conditions.
Task | Dataset | #Train | #Test
Image Dehazing | ITS li2018benchmarking | 13,990 | 0
Image Dehazing | SOTS-Indoor li2018benchmarking | 0 | 500
Image Dehazing | Haze4K liu2021synthetic | 3,000 | 1,000
Image Desnowing | Snow100K liu2018desnownet | 50,000 | 0
Image Desnowing | Snow100K-S liu2018desnownet | 0 | 16,611
Image Desnowing | Snow100K-L liu2018desnownet | 0 | 16,801
Raindrop Removal | RainDrop qian2018attentive | 861 | 0
Raindrop Removal | RainDrop-Test qian2018attentive | 0 | 51
Image Deraining & Dehazing | Outdoor-Rain li2019heavy | 9,000 | 0
Image Deraining & Dehazing | Outdoor-Rain-Test li2019heavy | 0 | 750
Multi-weather Restoration | All-weather li2022all | 18,069 | 0
Multi-weather Restoration | Snow100K-S liu2018desnownet | 0 | 16,611
Multi-weather Restoration | Snow100K-L liu2018desnownet | 0 | 16,801
Multi-weather Restoration | RainDrop-Test qian2018attentive | 0 | 51
Multi-weather Restoration | Outdoor-Rain-Test li2019heavy | 0 | 750

Image desnowing. For this task, we use the popular Snow100K dataset liu2018desnownet for training and evaluating the proposed method. Snow100K contains 50,000 training and 50,000 testing images. The testing set has three subsets, i.e., Snow100K-S/M/L, which refer to different snowflake sizes (light/mid/heavy). Snow100K-S, Snow100K-M, and Snow100K-L have 16,611, 16,588, and 16,801 image pairs, respectively. In our experiments, we keep the same setup as ozdenizci2023restoring . Specifically, we use the training set to train our model and evaluate the proposed method on Snow100K-S and Snow100K-L.

Raindrop removal. Consistent with previous works qian2018attentive ; li2022all ; valanarasu2022transweather ; ozdenizci2023restoring , we adopt a representative RainDrop dataset qian2018attentive for raindrop removal. The RainDrop dataset includes 861 synthetic raindrop training images and 58 images for testing.

Image deraining and dehazing. For this task, we train our GridFormer with the Outdoor-Rain dataset li2019heavy , which considers dense synthetic rain streaks and provides realistic scene views. Therefore, this dataset is designed to solve the joint problem of image deraining and dehazing. It consists of 9,000 images for training and 750 for testing.

Multi-weather restoration. Following the previous works li2022all ; valanarasu2022transweather ; ozdenizci2023restoring , we use a mixed dataset called All-weather, in which the training set contains 18,069 images sampled from Snow100K liu2018desnownet , Raindrop qian2018attentive , and Outdoor-Rain li2019heavy . We use the Snow100K-S/L test sets to evaluate the model's performance for the image desnowing task. In addition, we adopt the testing sets of the RainDrop dataset and Outdoor-Rain dataset to evaluate the model's performance for the raindrop removal task and image deraining & dehazing task, respectively.

Figure 6: Dehazing comparison on SOTS-indoor. From left to right are the input images, results of DCP he2010single , DehazeNet cai2016dehazenet , FFA-Net qin2020ffa , GCANet chen2019gated , GridDehazeNet liu2019griddehazenet , MSBDN dong2020multi , our GridFormer, and ground truth images, respectively. The images restored by GridFormer are more clear and closer to the ground truth. Zoom in for details.
Figure 7: Desnowing comparison on Snow100K-S test set. From left to right are the input images, results of DesnowNet liu2018desnownet , DDMSNet zhang2021deep , SnowDiff64 ozdenizci2023restoring , SnowDiff128 ozdenizci2023restoring , our GridFormer, and ground truth images, respectively. Zoom in for details.
Figure 8: Desnowing comparison on Snow100K-L test set. From left to right are the input images, results of DesnowNet liu2018desnownet , DDMSNet zhang2021deep , SnowDiff64 ozdenizci2023restoring , SnowDiff128 ozdenizci2023restoring , our GridFormer, and ground truth images, respectively. Zoom in for details.

Implementation details. We implemented GridFormer in PyTorch, using the AdamW optimizer loshchilovdecoupled with $\beta_{1}=0.9$ and $\beta_{2}=0.999$. The learning rate is set to $3\times 10^{-4}$ and decreased to $10^{-6}$ using the cosine annealing decay strategy loshchilov2016sgdr . For each task, we train the model with different numbers of iterations and patch sizes. At training time we use random horizontal and vertical flips for data augmentation. Following the setup in xiao2022image ; valanarasu2022transweather ; tu2022maxim , we evaluate the performance by PSNR and SSIM calculated in RGB space for image dehazing, and on the Y channel for other tasks.
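A minimal sketch of this optimization setup is shown below; the stand-in model, the toy training loop, and the total iteration count passed to the scheduler are placeholders, not the released training script.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)                 # stand-in for GridFormer
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=200_000, eta_min=1e-6)           # T_max = total iterations (assumed)

for step in range(3):                                 # skeleton of the training loop
    optimizer.zero_grad()
    loss = model(torch.randn(1, 3, 64, 64)).mean()    # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()
```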

Comparison methods. The comparison methods for the image dehazing task are traditional method DCP he2010single , CNN-based methods DehazeNet cai2016dehazenet , MSCNN ren2016single , AOD-Net li2017aod , GFN ren2016single , GCANet chen2019gated , GridDehazeNet liu2019griddehazenet , MSBDN dong2020multi , PFDN dong2020physics , FFA-Net qin2020ffa , and AECR-Net wu2021contrastive , and recent transformer-based methods DehazeF-B song2022vision , and MAXIM-2S tu2022maxim . For the task of image desnowing, the comparison methods are SPANet wang2019spatial , JSTASR chen2020jstasr , RESCAN li2018recurrent , DesnowNet liu2018desnownet , DDMSNet zhang2021deep , SnowDiff64 ozdenizci2023restoring , and SnowDiff128 ozdenizci2023restoring . As for the raindrop removal task, the comparison methods are pix2pix isola2017image , DuRN liu2019dual , RaindropAttn quan2019deep , AttentiveGAN qian2018attentive , IDT xiao2022image , RainDropDiff64 ozdenizci2023restoring , and RainDropDiff128 ozdenizci2023restoring . The comparison methods for the image deraining and dehazing task are CycleGAN zhu2017unpaired , pix2pix isola2017image , HRGAN li2019heavy , PCNet jiang2021rain , MPRNet zamir2021multi , RainHazeDiff64 ozdenizci2023restoring , and RainHazeDiff128 ozdenizci2023restoring . Finally, the comparison methods for the multi-weather restoration task are All-in-One li2020all , TransWeather valanarasu2022transweather , Restormer zamir2021restormer , WeatherDiff64 ozdenizci2023restoring , and WeatherDiff128 ozdenizci2023restoring .

Table 2: Dehazing results on SOTS-indoor and Haze4K. Bold and underlined fonts denote the best and second-best results, respectively.
Type | Method | SOTS-Indoor li2018benchmarking (PSNR↑ / SSIM↑) | Haze4K liu2021synthetic (PSNR↑ / SSIM↑)
Dehazing Task | DCP he2010single | 16.62 / 0.818 | 14.01 / 0.760
Dehazing Task | DehazeNet cai2016dehazenet | 19.82 / 0.821 | 19.12 / 0.840
Dehazing Task | MSCNN ren2016single | 19.84 / 0.833 | 14.01 / 0.510
Dehazing Task | AOD-Net li2017aod | 32.33 / 0.950 | 27.17 / 0.898
Dehazing Task | GFN ren2016single | 22.30 / 0.880 | - / -
Dehazing Task | GCANet chen2019gated | 30.23 / 0.980 | - / -
Dehazing Task | GridDehazeNet liu2019griddehazenet | 32.16 / 0.984 | 23.29 / 0.930
Dehazing Task | MSBDN dong2020multi | 33.67 / 0.985 | 22.99 / 0.850
Dehazing Task | PFDN dong2020physics | 32.68 / 0.976 | - / -
Dehazing Task | FFA-Net qin2020ffa | 36.39 / 0.989 | 26.96 / 0.950
Dehazing Task | AECR-Net wu2021contrastive | 37.17 / 0.990 | - / -
Dehazing Task | DehazeF-B song2022vision | 37.84 / 0.994 | - / -
Dehazing Task | MAXIM-2S tu2022maxim | 38.11 / 0.991 | - / -
Dehazing Task | GridFormer | 42.34 / 0.994 | 33.27 / 0.986
Table 3: Desnowing results on Snow100K-S/L. Bold and underlined fonts denote best and second-best results, respectively.
Type | Method | Snow100K-S liu2018desnownet (PSNR↑ / SSIM↑) | Snow100K-L liu2018desnownet (PSNR↑ / SSIM↑)
Desnowing Task | SPANet wang2019spatial | 29.92 / 0.8260 | 23.70 / 0.7930
Desnowing Task | JSTASR chen2020jstasr | 31.40 / 0.9012 | 25.32 / 0.8076
Desnowing Task | RESCAN li2018recurrent | 31.51 / 0.9032 | 26.08 / 0.8108
Desnowing Task | DesnowNet liu2018desnownet | 32.33 / 0.9500 | 27.17 / 0.8983
Desnowing Task | DDMSNet zhang2021deep | 34.34 / 0.9445 | 28.85 / 0.8772
Desnowing Task | SnowDiff64 ozdenizci2023restoring | 36.59 / 0.9626 | 30.43 / 0.9145
Desnowing Task | SnowDiff128 ozdenizci2023restoring | 36.09 / 0.9545 | 30.28 / 0.9000
Desnowing Task | GridFormer | 38.89 / 0.9698 | 33.09 / 0.9340
Multi-weather Restoration | All-in-One li2020all | - / - | 28.33 / 0.8820
Multi-weather Restoration | TransWeather valanarasu2022transweather | 32.51 / 0.9341 | 29.31 / 0.8879
Multi-weather Restoration | Restormer zamir2021restormer | 36.08 / 0.9591 | 30.28 / 0.9124
Multi-weather Restoration | WeatherDiff64 ozdenizci2023restoring | 35.83 / 0.9566 | 30.09 / 0.9041
Multi-weather Restoration | WeatherDiff128 ozdenizci2023restoring | 35.02 / 0.9516 | 29.58 / 0.8941
Multi-weather Restoration | GridFormer-S | 36.68 / 0.9602 | 30.78 / 0.9167
Multi-weather Restoration | GridFormer | 37.46 / 0.9640 | 31.71 / 0.9231

4.2 Experimental Results

Dehazing results. We perform image dehazing on different datasets to evaluate the performance of GridFormer. We compare the performance of GridFormer with various methods, including traditional prior-based methods, CNN-based methods, and recent transformer-based methods. Table 2 shows the quantitative results in terms of PSNR and SSIM. It shows that GridFormer achieves the best performance on the indoor subset of SOTS regarding all metrics. In particular, GridFormer obtains a significant gain of 4.23 dB in PSNR compared to the second-best method MAXIM-2S tu2022maxim .

We further compare the performance on the more challenging Haze4K dataset, which includes more realistic images from both indoor and outdoor scenarios. GridFormer obtains the best performance in terms of all metrics on this dataset as well. Fig. 6 provides a visual comparison for the SOTS indoor dataset. The recovered images by GridFormer contain finer details and are closer to the ground truth.

Figure 9: Raindrop removal results on RainDrop test set. From left to right are the input images, results of RaindropAttn quan2019deep , AttentiveGAN qian2018attentive , RainDropDiff64 ozdenizci2023restoring , RainDropDiff128 ozdenizci2023restoring , our GridFormer, and ground truth images, respectively. Zoom in for details.
Figure 10: Visual results of deraining & dehazing on Outdoor-Rain test set. From left to right are the input images, results of HRGAN li2019heavy , MPRNet zamir2021multi , RainHazeDiff64 ozdenizci2023restoring , RainHazeDiff128 ozdenizci2023restoring , our GridFormer, and ground truth images, respectively. Zoom in for details.

Desnowing results. We evaluate the desnowing performance on the public Snow100K dataset liu2018desnownet . The test set is divided into three subsets according to the particle size: Snow100K-S, Snow100K-M, and Snow100K-L. We select Snow100K-S and Snow100K-L for testing. Table 3 shows the quantitative results. On the Snow100K-S subset, GridFormer outperforms the diffusion-based method SnowDiff64 ozdenizci2023restoring by 2.3 dB and by 0.0072 in terms of PSNR and SSIM. As for the most difficult Snow100K-L subset, GridFormer still gains an improvement of 2.66 dB and 0.0195 in terms of PSNR and SSIM compared to the second-best method SnowDiff64. Fig. 7 and Fig. 8 provide the visual comparisons, showing that GridFormer is effective in removing image corruption due to snow while producing perceptually pleasing results.

Figure 11: Multi-weather Restoration comparison on Snow100K-S, Snow100K-L, RainDrop, Outdoor-Rain datasets. From left to right are the input images, results of TransWeather valanarasu2022transweather , Restormer zamir2021restormer , WeatherDiff64 ozdenizci2023restoring , WeatherDiff128 ozdenizci2023restoring , our GridFormer, and ground truth images, respectively. Zoom in for details.
Table 4: RainDrop removal results on RainDrop test set. Bold and underlined fonts denote best and second-best results, respectively.
Type | Method | RainDrop qian2018attentive PSNR↑ | SSIM↑
RainDrop Removal | pix2pix isola2017image | 28.02 | 0.8547
RainDrop Removal | DuRN liu2019dual | 31.24 | 0.9259
RainDrop Removal | RaindropAttn quan2019deep | 31.44 | 0.9263
RainDrop Removal | AttentiveGAN qian2018attentive | 31.59 | 0.9170
RainDrop Removal | IDT xiao2022image | 31.87 | 0.9313
RainDrop Removal | RainDropDiff64 ozdenizci2023restoring | 32.29 | 0.9422
RainDrop Removal | RainDropDiff128 ozdenizci2023restoring | 32.43 | 0.9334
RainDrop Removal | GridFormer | 32.92 | 0.9400
Multi-weather Restoration | All-in-One li2020all | 31.12 | 0.9268
Multi-weather Restoration | TransWeather valanarasu2022transweather | 30.17 | 0.9157
Multi-weather Restoration | Restormer zamir2021restormer | 30.91 | 0.9282
Multi-weather Restoration | WeatherDiff64 ozdenizci2023restoring | 29.64 | 0.9312
Multi-weather Restoration | WeatherDiff128 ozdenizci2023restoring | 29.66 | 0.9225
Multi-weather Restoration | GridFormer-S | 31.02 | 0.9301
Multi-weather Restoration | GridFormer | 32.39 | 0.9362

RainDrop removal results. In Table 4 we present the quantitative results for raindrop removal on the RainDrop dataset. For an extensive comparison, we compare GridFormer with seven different methods: pix2pix isola2017image , DuRN liu2019dual , RaindropAttn quan2019deep , AttentiveGAN qian2018attentive , IDT xiao2022image , RainDropDiff64 ozdenizci2023restoring , and RainDropDiff128 ozdenizci2023restoring . The results show that GridFormer is competitive. Specifically, GridFormer achieves the best performance in terms of PSNR and achieves almost the same level of performance as the state-of-the-art method RainDropDiff64 ozdenizci2023restoring in terms of SSIM, with a difference of 0.0022. A visual comparison of the results on RainDrop is provided in Fig. 9. It shows that our method can remove raindrops successfully and generate realistic images.

Table 5: Image deraining & dehazing results on the Outdoor-Rain test set. The MACs of each model are measured on a $256\times 256$ image.
Type | Method | Outdoor-Rain li2019heavy PSNR↑ | SSIM↑ | Param/MACs
Deraining & Dehazing | CycleGAN zhu2017unpaired | 17.62 | 0.6560 | 7.84M/42.38G
Deraining & Dehazing | pix2pix isola2017image | 19.09 | 0.7100 | 54.41M/18.15G
Deraining & Dehazing | HRGAN li2019heavy | 21.56 | 0.8550 | 25.11M/34.93G
Deraining & Dehazing | PCNet jiang2021rain | 26.19 | 0.9015 | 627.56K/268.45G
Deraining & Dehazing | MPRNet zamir2021multi | 28.03 | 0.9192 | 3.64M/148.55G
Deraining & Dehazing | RainHazeDiff64 ozdenizci2023restoring | 28.38 | 0.9320 | 82.92M/475.16G
Deraining & Dehazing | RainHazeDiff128 ozdenizci2023restoring | 26.84 | 0.9152 | 85.56M/263.45G
Deraining & Dehazing | GridFormer | 28.49 | 0.9213 | 30.12M/251.35G
Multi-weather Restoration | All-in-One li2020all | 24.71 | 0.8980 | 44.00M/12.26G
Multi-weather Restoration | TransWeather valanarasu2022transweather | 28.83 | 0.9000 | 21.90M/5.64G
Multi-weather Restoration | Restormer zamir2021restormer | 30.21 | 0.9208 | 26.10M/140.99G
Multi-weather Restoration | WeatherDiff64 ozdenizci2023restoring | 29.64 | 0.9312 | 82.92M/475.16G
Multi-weather Restoration | WeatherDiff128 ozdenizci2023restoring | 29.72 | 0.9216 | 85.56M/263.45G
Multi-weather Restoration | GridFormer-S | 30.48 | 0.9313 | 14.83M/133.24G
Multi-weather Restoration | GridFormer | 31.87 | 0.9335 | 30.12M/251.35G

Deraining and dehazing results. For the image deraining and dehazing task, we conduct experiments on the Outdoor-Rain dataset li2019heavy . This dataset has 9,000 pairs of images for training and 750 pairs for testing, where degraded images are synthesized considering rain and haze simultaneously. The comparisons between GridFormer and other state-of-the-art methods are reported in Table 5. GridFormer outperforms other competitors in terms of PSNR and ranks second in terms of SSIM. More specifically, GridFormer achieves 0.11 dB and 0.46 dB improvements in PSNR when compared to RainHazeDiff64 ozdenizci2023restoring and MPRNet zamir2021multi . Fig. 10 shows the visual comparison, indicating that GridFormer can handle haze and rainfall scenarios well at the same time and generate vivid results.

Table 6: Cross-dataset evaluation. Models are trained only on the All-weather dataset and directly applied to the Rain100L and Test100 benchmark datasets. Bold and underlined fonts denote the best and second-best results, respectively.
Method | Rain100L yang2017deep (PSNR↑ / SSIM↑) | Test100 zhang2019image (PSNR↑ / SSIM↑)
TransWeather | 30.33 / 0.9365 | 24.20 / 0.8317
Restormer | 27.08 / 0.8432 | 23.28 / 0.7136
WeatherDiff64 | 27.46 / 0.8534 | 23.13 / 0.7091
WeatherDiff128 | 27.56 / 0.8552 | 23.26 / 0.7255
GridFormer-S | 33.21 / 0.9541 | 27.10 / 0.8713
GridFormer | 34.24 / 0.9649 | 29.26 / 0.8912
Figure 12: Exemplar results on a real-world image. From left to right and top to bottom: input, Restormer, TransWeather, WeatherDiff64, WeatherDiff128, and GridFormer.
Figure 13: In each sub-image, the top images are captured in haze, rain, and snow weather conditions. The bottom images are recovered by our GridFormer. We report the detection confidences of these images, which shows that our GridFormer as a pre-processing tool benefits the task of object detection.

Multi-weather restoration results. We further explore the potential of GridFormer for multi-weather restoration. Specifically, we first train our model on the mixed dataset sampled from the Snow100K liu2018desnownet , Raindrop qian2018attentive , and Outdoor-Rain li2019heavy datasets. Then, we evaluate our model on the Snow100K-S/L test sets, the RainDrop test dataset, and the Outdoor-Rain test dataset. We choose four representative multi-weather restoration methods for comparison: the All-in-One network li2020all is a CNN-based method, TransWeather is based on transformers, and WeatherDiff64 and WeatherDiff128 are diffusion models. Tables 3, 4, and 5 summarize the quantitative results. GridFormer achieves the best performance in all weather conditions. We present visual comparisons in Fig. 11. Images produced by GridFormer exhibit fewer artifacts and are closer to the ground truth compared to other methods. In an additional experiment, we set $C=32$ in the grid head to construct a tiny variant of the network called GridFormer-S for comparison. The results show that our methods achieve competitive results with lower complexity and fewer parameters, see Table 5.

Cross-dataset evaluation. To further verify the models’ performance across different datasets, we conduct a cross-dataset evaluation on different SOTA methods. To be specific, the models (i.e., TransWeather, Restormer, WeatherDiff64, WeatherDiff128, GridFormer-S, and GridFormer) are trained on the All-weather dataset, and then directly applied to the specific deraining datasets Rain100L yang2017deep and Test100 zhang2019image for testing. Experimental results in Table 6 show that our GridFormer-S and GridFormer outperform other approaches.

Table 7: Memory consumption and inference time of different methods evaluated on $512\times 512$ resolution images.
Method | Platform | Memory (MB) | Time (ms)
Restormer | PyTorch | 30031.50 | 321.6
WeatherDiff64 | PyTorch | 6307.53 | 101232.1
WeatherDiff128 | PyTorch | 7941.03 | 133557.7
GridFormer-S | PyTorch | 20793.94 | 165.0
GridFormer | PyTorch | 28461.94 | 259.1

Performance in real-world scenarios. To further verify the effectiveness of the proposed method in real-world scenarios, we conduct a qualitative comparison on a real-world hazy image from the Internet. The comparison result is shown in Fig. 12. Compared with current state-of-the-art methods, our method effectively removes the haze and produces a clear result, showing that it also outperforms current methods in real-world scenarios.

Efficiency comparison. We also analyze the efficiency of our models. Table 7 shows the memory consumption and inference time of different methods at $512\times 512$ resolution. Specifically, we choose three top-performing SOTA methods (i.e., Restormer, WeatherDiff64, and WeatherDiff128) for comparison. Compared with these methods, our GridFormer-S exhibits the lowest inference time. In addition, GridFormer-S and GridFormer are competitive in terms of memory consumption.

Figure 14: The top images are captured in haze, rain, and snow weather conditions; the bottom images are recovered by our GridFormer. We show the segmentation results for these images, demonstrating that GridFormer, used as a pre-processing tool, benefits image segmentation.
Figure 15: In each sub-image, the top images are captured in haze, rain, and snow weather conditions; the bottom images are recovered by our GridFormer. We show the image captioning results for these images, demonstrating that GridFormer, used as a pre-processing tool, benefits image captioning.

4.3 Applications

Image restoration in adverse weather conditions enhances image content and can be easily incorporated as a pre-processing step for high-level vision tasks. We therefore investigate the potential of GridFormer to improve object detection, image segmentation, and image captioning in adverse weather scenes. For object detection, we consider both synthetic and real-world images. Fig. 13 shows the detection results obtained with the Google Vision API. We observe that haze, rain, and snow greatly reduce detection accuracy, leading to more missed detections, more false detections, and lower detection confidence. In contrast, the detection accuracy and confidence on images recovered by GridFormer improve significantly over those on the weather-degraded images. Fig. 14 shows segmentation results obtained with the state-of-the-art Segment Anything Model kirillov2023segment . Our restoration results yield higher segmentation accuracy, indicating that GridFormer effectively facilitates subsequent segmentation. Finally, Fig. 15 presents image captioning results produced by the BLIP model li2022blip . BLIP generates more detailed captions when given our restored images, further validating the effectiveness of GridFormer.
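As an example of the captioning pipeline, the sketch below runs an off-the-shelf BLIP checkpoint from Hugging Face on a degraded image and on its restored counterpart. The checkpoint name and file paths are assumptions; the exact BLIP variant used for Fig. 15 is not fixed by this illustration.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

def caption(image_path):
    """Generate a caption for a single image file."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = captioner.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

# Placeholder paths: compare captions before and after restoration.
print(caption("snowy_input.png"))        # caption on the weather-degraded image
print(caption("gridformer_output.png"))  # caption on the restored image
```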

Table 8: Ablation studies on the proposed compact-enhanced self-attention (CESA). FS, CS, and LE refer to the feature sampling, channel split, and local enhancement operations in CESA, respectively. MACs are measured on 256×256 images.
FS   CS   LE   Param/MACs        RainDrop PSNR/SSIM   SOTS-Indoor PSNR/SSIM
✗    ✗    ✗    38.08M/322.26G    29.93/0.8710         36.89/0.9631
✓    ✗    ✗    34.91M/237.79G    30.82/0.9012         37.81/0.9831
✓    ✓    ✗    26.88M/227.65G    31.98/0.9294         40.51/0.9921
✓    ✓    ✓    30.12M/251.35G    32.57/0.9365         41.84/0.9932

4.4 Ablation Study

We conduct extensive ablation studies to verify the proposed compact-enhanced self-attention, the residual dense transformer block, the grid structure, and the adopted loss functions. Specifically, we perform the ablations on raindrop removal and dehazing to analyze the performance of GridFormer. Each model is trained for 2×10^5 iterations with a batch size of 12 on the RainDrop dataset qian2018attentive and the ITS dataset li2018benchmarking , respectively. We then evaluate each model on the RainDrop test set and the SOTS-Indoor test set. The detailed results are presented as follows.
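The fixed-budget training used for these ablations can be sketched as below; the optimizer, learning rate, and plain L1 objective are illustrative assumptions and not the exact recipe used to train the full model.

```python
import torch
import torch.nn.functional as F
from torch.optim.lr_scheduler import CosineAnnealingLR

def train_ablation_variant(model, loader, total_iters=200_000, device="cuda"):
    """Train one ablation variant for a fixed number of iterations."""
    model.train().to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)  # assumed settings
    scheduler = CosineAnnealingLR(optimizer, T_max=total_iters, eta_min=1e-6)
    data_iter = iter(loader)
    for _ in range(total_iters):
        try:
            degraded, clean = next(data_iter)
        except StopIteration:            # restart the loader once it is exhausted
            data_iter = iter(loader)
            degraded, clean = next(data_iter)
        restored = model(degraded.to(device))
        loss = F.l1_loss(restored, clean.to(device))  # placeholder objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()
    return model
```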

A. Compact-enhanced self-attention. We verify the impact of the feature sampling, channel split, and local enhancement operations in compact-enhanced self-attention. Table 8 shows the comparison results. Applying the feature sampling (FS) and channel split (CS) operations yields gains of 0.89 dB and 1.16 dB on the RainDrop dataset (0.92 dB and 2.70 dB on the SOTS-Indoor dataset), respectively, while significantly reducing the computational complexity. Adding the local enhancement (LE) operation brings further gains of 0.59 dB and 1.33 dB on RainDrop and SOTS-Indoor, respectively. These results confirm the effectiveness of the three operations. We also conduct an additional ablation on the RainDrop dataset to verify the effectiveness of exchanging the Values for feature interaction and fusion in our compact-enhanced self-attention. Specifically, we instead exchange only the Queries between 𝐳1 and 𝐳2 to investigate the impact on performance. Exchanging the Queries yields a PSNR of 31.81 dB, which is inferior to exchanging the Values (32.57 dB). These results show that the Value exchange enhances interaction and feature fusion, thereby contributing to improved restoration performance. Furthermore, the choice of the stride parameter r, as described in formula 3, affects both the computational complexity and the performance of the model. We therefore evaluate different settings of r in GridFormer. Table 9 shows that the [4,2,2] setting achieves a better trade-off between computational cost and performance.
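To make the three operations concrete, the sketch below shows a simplified attention module that (i) average-pools the keys and values with stride r (feature sampling), (ii) splits the channels into two groups and exchanges their Values (channel split with Value exchange), and (iii) adds a depth-wise convolution branch (local enhancement). The layer sizes and the use of nn.MultiheadAttention are simplifying assumptions, not the exact GridFormer implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompactEnhancedSelfAttentionSketch(nn.Module):
    """Simplified illustration of feature sampling (FS), channel split (CS)
    with Value exchange, and local enhancement (LE)."""

    def __init__(self, dim=32, num_heads=4, r=2):
        super().__init__()
        self.r = r
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)
        # LE: depth-wise convolution branch that strengthens local information
        self.local = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.attn1 = nn.MultiheadAttention(dim // 2, num_heads, batch_first=True)
        self.attn2 = nn.MultiheadAttention(dim // 2, num_heads, batch_first=True)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W); H, W divisible by r
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        k = F.avg_pool2d(k, self.r)             # FS: sample keys with stride r
        v = F.avg_pool2d(v, self.r)             # FS: sample values with stride r
        q1, q2 = q.chunk(2, dim=1)              # CS: split channels into two groups
        k1, k2 = k.chunk(2, dim=1)
        v1, v2 = v.chunk(2, dim=1)

        def tokens(t):                          # (B, C', H', W') -> (B, N, C')
            return t.flatten(2).transpose(1, 2)

        # exchange the Values between the two groups for feature interaction
        o1, _ = self.attn1(tokens(q1), tokens(k1), tokens(v2))
        o2, _ = self.attn2(tokens(q2), tokens(k2), tokens(v1))
        out = torch.cat([o1, o2], dim=2).transpose(1, 2).reshape(b, c, h, w)
        return self.proj(out + self.local(x))   # LE branch added before projection

# Example: 32-channel feature map with stride-4 feature sampling.
y = CompactEnhancedSelfAttentionSketch(dim=32, num_heads=4, r=4)(torch.randn(1, 32, 64, 64))
```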

Table 9: Ablation studies on different settings of r in the three rows of GridFormer layers, where r denotes the stride of the average pooling operation in the feature sampling of the proposed compact-enhanced transformer layer. [4,2,2] means r is set to 4, 2, and 2 in the three rows of GridFormer layers, respectively.
Setting of r   Param/MACs        RainDrop PSNR/SSIM   SOTS-Indoor PSNR/SSIM
[2,2,2]        29.53M/213.01G    30.93/0.8991         33.81/0.9631
[4,4,4]        48.63M/356.57G    31.62/0.9112         40.31/0.9901
[4,2,2]        30.12M/251.35G    31.98/0.9294         40.51/0.9921
Table 10: Ablation studies on the proposed residual dense transformer block (RDTB). DC, LF, and LSC denote dense connections, local fusion with a 1×1 convolution, and the local skip connection in RDTB, respectively. Components are added progressively. MACs are measured on 256×256 images.
DC   LF   LSC   Param/MACs        RainDrop PSNR/SSIM   SOTS-Indoor PSNR/SSIM
✗    ✗    ✗     27.99M/253.57G    31.48/0.9187         40.32/0.9901
✓    ✗    ✗     32.78M/284.87G    32.07/0.9284         40.45/0.9912
✓    ✓    ✗     30.12M/251.35G    32.05/0.9298         40.87/0.9926
✓    ✓    ✓     30.12M/251.35G    32.57/0.9365         41.84/0.9932
Table 11: Ablation study on different grid configurations. r and c denote the numbers of rows and columns of the model.
Grid Setting   Param (M)   MACs (G)   RainDrop PSNR/SSIM   SOTS-Indoor PSNR/SSIM
r=1, c=3       0.81        51.24      30.07/0.9163         38.02/0.9879
r=1, c=4       1.01        64.01      30.44/0.9198         38.21/0.9880
r=1, c=5       1.21        76.77      30.54/0.9199         38.61/0.9885
r=1, c=6       1.41        89.54      30.62/0.9205         38.78/0.9891
r=2, c=3       2.64        80.48      31.21/0.9280         39.68/0.9900
r=2, c=4       3.50        103.98     31.23/0.9281         39.73/0.9901
r=2, c=5       4.51        129.52     31.37/0.9294         40.21/0.9915
r=2, c=6       5.37        153.01     31.58/0.9311         40.50/0.9918
r=3, c=3       13.96       125.38     31.67/0.9339         40.79/0.9918
r=3, c=4       19.09       166.01     31.73/0.9356         41.13/0.9929
r=3, c=5       24.99       210.72     31.89/0.9359         41.01/0.9926
r=3, c=6       30.12       251.35     32.57/0.9365         41.84/0.9932
r=4, c=6       150.86      410.09     32.05/0.9361         40.85/0.9921

B. Residual dense transformer block. To demonstrate the effectiveness of the proposed residual dense transformer block, we conduct ablation studies on three factors: (1) dense connections (DC), (2) local fusion with a 1×1 convolution (LF), and (3) the local skip connection (LSC). Specifically, we analyze different models by progressively adding these components. The results are shown in Table 10. Each component improves performance, with dense connections contributing the most.
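For clarity, the sketch below captures how the three ablated ingredients fit together: dense connections feed each layer with the concatenation of all preceding features, a 1×1 convolution fuses the concatenated features locally, and a local skip connection adds the block input back to the output. The inner transformer layer is replaced by a small convolutional stub, so this illustrates the connectivity rather than the actual RDTB implementation.

```python
import torch
import torch.nn as nn

class RDTBConnectivitySketch(nn.Module):
    """Dense connections (DC), 1x1 local fusion (LF), and local skip connection (LSC)."""

    def __init__(self, dim=32, num_layers=4):
        super().__init__()
        # stub layer: in the real block this would be a compact-enhanced transformer layer
        def make_layer(in_ch):
            return nn.Sequential(nn.Conv2d(in_ch, dim, 3, padding=1), nn.GELU())
        # DC: layer i consumes the concatenation of the input and all earlier outputs
        self.layers = nn.ModuleList(make_layer(dim * (i + 1)) for i in range(num_layers))
        # LF: 1x1 convolution that fuses all concatenated local features
        self.fuse = nn.Conv2d(dim * (num_layers + 1), dim, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connections
        out = self.fuse(torch.cat(feats, dim=1))            # local fusion
        return out + x                                      # local skip connection

# Example: the block preserves the spatial size and channel count of its input.
y = RDTBConnectivitySketch(dim=32)(torch.randn(1, 32, 64, 64))
```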

C. Exploring different configurations of the grid structure of GridFormer. To comprehensively understand the impact of GridFormer's grid structure, we conduct ablation experiments that vary the number of rows and columns. Each row in our GridFormer framework corresponds to a distinct scale, while the columns in the grid fusion module act as conduits that facilitate information exchange across scales. This grid structure therefore strongly influences the information interchange among the grid units within the grid fusion module. In this study, we vary the number of rows from 1 to 4 and the number of columns among 3, 4, 5, and 6. The results for the different configurations are shown in Table 11. Increasing r and c improves performance, while the overhead grows accordingly. The performance peaks at r=3 and c=6, so we select these values in our final model.
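The role of rows and columns can be illustrated with the toy grid below, where each row processes features at half the resolution of the row above and each column lets information flow from finer to coarser scales. The cells and the additive cross-scale fusion are deliberate simplifications standing in for the GridFormer layers and fusion modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGrid(nn.Module):
    """Toy r x c grid: rows index scales, columns exchange information across scales."""

    def __init__(self, dim=32, rows=3, cols=6):
        super().__init__()
        self.rows, self.cols = rows, cols
        self.cells = nn.ModuleList(
            nn.ModuleList(
                nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.GELU())
                for _ in range(cols))
            for _ in range(rows))

    def forward(self, x):          # x: (B, dim, H, W); H, W divisible by 2**(rows-1)
        # one feature stream per row, obtained by progressively halving the resolution
        feats = [F.avg_pool2d(x, 2 ** i) if i else x for i in range(self.rows)]
        for j in range(self.cols):  # columns: one processing/fusion step per column
            for i in range(self.rows):
                out = self.cells[i][j](feats[i])
                if i > 0:           # fuse information arriving from the finer scale above
                    out = out + F.avg_pool2d(feats[i - 1], 2)
                feats[i] = out
        return feats[0]             # finest-scale stream after the last column

# Example with the r=3, c=6 configuration selected in Table 11.
y = ToyGrid(dim=32, rows=3, cols=6)(torch.randn(1, 32, 64, 64))
```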

D. Other GridFormer components. The skip connection from the input image and the perceptual loss also contribute to improving the performance. Without the skip connection from the input image, the PSNR drops from 32.57 dB to 31.85 dB on the RainDrop test set. Training GridFormer without the perceptual loss results in a PSNR of 32.72 dB on the RainDrop test set.
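A minimal version of the perceptual loss in the spirit of Johnson et al. is sketched below, together with the conceptual form of the input skip connection; the chosen VGG layer, the omitted input normalization, and the loss weighting are assumptions rather than the settings used for GridFormer.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class VGGPerceptualLoss(nn.Module):
    """L1 distance between VGG-16 features of the restored and clean images."""

    def __init__(self):
        super().__init__()
        # truncate VGG-16 at relu3_3 (feature index 16); ImageNet normalization omitted
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg.eval()
        self.l1 = nn.L1Loss()

    def forward(self, restored, clean):
        return self.l1(self.vgg(restored), self.vgg(clean))

# Conceptual form of the skip connection from the input image: the network
# predicts a correction that is added back to the degraded input, i.e.
#   restored = degraded + network(degraded)
```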

5 Limitations and Future Work

As a new backbone, GridFormer outperforms previous methods for image restoration in adverse weather conditions, but there is still room for improvement. For example, applying a pre-training strategy chen2021pre or a contrastive learning technique wu2021contrastive to GridFormer could further unlock its potential. In addition, we currently fuse multi-scale features with simple weighted attention zheng2022t ; wang2022ultra ; this fusion could be improved by designing dedicated modules with more sophisticated attention mechanisms qin2020ffa ; song2022vision . Finally, GridFormer has only been evaluated on images, and whether it can handle video restoration remains an open question. Extending GridFormer to video restoration in adverse weather conditions is therefore an important direction for future work.

6 Conclusion

In this paper, we propose GridFormer, a unified Transformer architecture for image restoration in adverse weather conditions. It adopts a grid structure to facilitate information communication across different streams and makes full use of the hierarchical features from the input images. In addition, to build the basic layer of GridFormer, we propose a compact-enhanced transformer layer and integrate it in a residual dense manner, which encourages feature reuse and enhances feature representation. Comprehensive experiments show that GridFormer significantly surpasses state-of-the-art methods, producing good results on both weather-specific and multi-weather restoration tasks.

Acknowledgement

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62372223, 62372480), in part by the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515012839), in part by the Shenzhen Science and Technology Program (No. JSGG20220831093004008), and in part by the China Mobile Zijin Innovation Institute (No. NR2310J7M).

Data Availability Statement

The datasets generated and/or analyzed during the current study are available in the WeatherDiffusion repository at https://github.com/IGITUGraz/WeatherDiffusion.

References

  • (1) Ali, A.M., Benjdira, B., Koubaa, A., El-Shafai, W., Khan, Z., Boulila, W.: Vision transformers in image restoration: A survey. Sensors 23(5), 2385 (2023)
  • (2) Ba, Y., Zhang, H., Yang, E., Suzuki, A., Pfahnl, A., Chandrappa, C.C., de Melo, C.M., You, S., Soatto, S., Wong, A., et al.: Not just streaks: Towards ground truth for single image deraining. In: Proceedings of European Conference on Computer Vision, pp. 723–740 (2022)
  • (3) Berman, D., Avidan, S., et al.: Non-local image dehazing. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1674–1682 (2016)
  • (4) Cai, B., Xu, X., Jia, K., Qing, C., Tao, D.: Dehazenet: An end-to-end system for single image haze removal. IEEE Transactions on Image Processing 25(11), 5187–5198 (2016)
  • (5) Cai, Y., Zhou, Y., Han, Q., Sun, J., Kong, X., Li, J., Zhang, X.: Reversible column networks. In: Proceedings of International Conference on Learning Representations (2022)
  • (6) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proceedings of European Conference on Computer Vision, pp. 213–229 (2020)
  • (7) Charbonnier, P., Blanc-Feraud, L., Aubert, G., Barlaud, M.: Two deterministic half-quadratic regularization algorithms for computed imaging. In: Proceedings of International Conference on Image Processing, pp. 168–172 (1994)
  • (8) Chen, D., He, M., Fan, Q., Liao, J., Zhang, L., Hou, D., Yuan, L., Hua, G.: Gated context aggregation network for image dehazing and deraining. In: Proceedings of IEEE Winter Conference on Applications of Computer Vision, pp. 1375–1383 (2019)
  • (9) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 12299–12310 (2021)
  • (10) Chen, S., Ye, T., Liu, Y., Chen, E., Shi, J., Zhou, J.: Snowformer: Scale-aware transformer via context interaction for single image desnowing. arXiv preprint arXiv:2208.09703 (2022)
  • (11) Chen, W.T., Fang, H.Y., Ding, J.J., Tsai, C.C., Kuo, S.Y.: Jstasr: Joint size and transparency-aware snow removal algorithm based on modified partial convolution and veiling effect removal. In: Proceedings of European Conference on Computer Vision, pp. 754–770 (2020)
  • (12) Chen, Y.L., Hsu, C.T.: A generalized low-rank appearance model for spatio-temporally correlated rain streaks. In: Proceedings of IEEE International Conference on Computer Vision, pp. 1968–1975 (2013)
  • (13) Cho, S.J., Ji, S.W., Hong, J.P., Jung, S.W., Ko, S.J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4641–4650 (2021)
  • (14) Dong, H., Pan, J., Xiang, L., Hu, Z., Zhang, X., Wang, F., Yang, M.H.: Multi-scale boosted dehazing network with dense feature fusion. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2157–2167 (2020)
  • (15) Dong, J., Pan, J.: Physics-based feature dehazing networks. In: Proceedings of European Conference on Computer Vision, pp. 188–204 (2020)
  • (16) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proceedings of International Conference on Learning Representations (2021)
  • (17) Du, Y., Xu, J., Zhen, X., Cheng, M.M., Shao, L.: Conditional variational image deraining. IEEE Transactions on Image Processing 29, 6288–6301 (2020)
  • (18) Fu, X., Huang, J., Ding, X., Liao, Y., Paisley, J.: Clearing the skies: A deep network architecture for single-image rain removal. IEEE Transactions on Image Processing 26(6), 2944–2956 (2017)
  • (19) Garg, K., Nayar, S.K.: When does a camera see rain? In: Proceedings of IEEE International Conference on Computer Vision, pp. 1067–1074 (2005)
  • (20) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. In: Proceedings of International Conference on Artificial Intelligence and Statistics, pp. 315–323 (2011)
  • (21) Gu, X., Wang, L., Deng, Z., Cao, Y., Huang, X., Zhu, Y.m.: Adafuse: Adaptive medical image fusion based on spatial-frequential cross attention. arXiv preprint arXiv:2310.05462 (2023)
  • (22) He, K., Sun, J., Tang, X.: Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(12), 2341–2353 (2010)
  • (23) Hsu, W.Y., Chang, W.C.: Wavelet approximation-aware residual network for single image deraining. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023)
  • (24) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
  • (25) Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017)
  • (26) Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(11), 1254–1259 (1998)
  • (27) Jaw, D.W., Huang, S.C., Kuo, S.Y.: Desnowgan: An efficient single image snow removal framework using cross-resolution lateral connection and gans. IEEE Transactions on Circuits and Systems for Video Technology 31(4), 1342–1350 (2020)
  • (28) Jiang, K., Wang, Z., Yi, P., Chen, C., Han, Z., Lu, T., Huang, B., Jiang, J.: Decomposition makes better rain removal: An improved attention-guided deraining network. IEEE Transactions on Circuits and Systems for Video Technology 31(10), 3981–3995 (2020)
  • (29) Jiang, K., Wang, Z., Yi, P., Chen, C., Huang, B., Luo, Y., Ma, J., Jiang, J.: Multi-scale progressive fusion network for single image deraining. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 8346–8355 (2020)
  • (30) Jiang, K., Wang, Z., Yi, P., Chen, C., Wang, Z., Wang, X., Jiang, J., Lin, C.W.: Rain-free and residue hand-in-hand: A progressive coupled network for real-time image deraining. IEEE Transactions on Image Processing 30, 7404–7418 (2021)
  • (31) Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Proceedings of European Conference on Computer Vision, pp. 694–711 (2016)
  • (32) Kang, L.W., Lin, C.W., Fu, Y.H.: Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing 21(4), 1742–1755 (2011)
  • (33) Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023)
  • (34) Lee, H., Choi, H., Sohn, K., Min, D.: Knn local attention for image restoration. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2139–2149 (2022)
  • (35) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 17452–17462 (2022)
  • (36) Li, B., Peng, X., Wang, Z., Xu, J., Feng, D.: Aod-net: All-in-one dehazing network. In: Proceedings of IEEE International Conference on Computer Vision, pp. 4770–4778 (2017)
  • (37) Li, B., Ren, W., Fu, D., Tao, D., Feng, D., Zeng, W., Wang, Z.: Benchmarking single-image dehazing and beyond. IEEE Transactions on Image Processing 28(1), 492–505 (2018)
  • (38) Li, J., Li, D., Xiong, C., Hoi, S.: Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In: Proceedings of International Conference on Machine Learning, pp. 12888–12900 (2022)
  • (39) Li, P., Yun, M., Tian, J., Tang, Y., Wang, G., Wu, C.: Stacked dense networks for single-image snow removal. Neurocomputing 367, 152–163 (2019)
  • (40) Li, R., Cheong, L.F., Tan, R.T.: Heavy rain image restoration: Integrating physics model and conditional adversarial learning. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1633–1642 (2019)
  • (41) Li, R., Tan, R.T., Cheong, L.F.: All in one bad weather removal using architectural search. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3175–3185 (2020)
  • (42) Li, X., Hua, Z., Li, J.: Two-stage single image dehazing network using swin-transformer. IET Image Processing 16(9), 2518–2534 (2022)
  • (43) Li, X., Wu, J., Lin, Z., Liu, H., Zha, H.: Recurrent squeeze-and-excitation context aggregation net for single image deraining. In: Proceedings of European Conference on Computer Vision, pp. 254–269 (2018)
  • (44) Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In: Proceedings of IEEE International Conference on Computer Vision, pp. 1833–1844 (2021)
  • (45) Liang, Y., Anwar, S., Liu, Y.: Drt: A lightweight single image deraining recursive transformer. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 589–598 (2022)
  • (46) Liu, X., Ma, Y., Shi, Z., Chen, J.: Griddehazenet: Attention-based multi-scale network for image dehazing. In: Proceedings of IEEE International Conference on Computer Vision, pp. 7314–7323 (2019)
  • (47) Liu, X., Suganuma, M., Sun, Z., Okatani, T.: Dual residual networks leveraging the potential of paired operations for image restoration. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 7007–7016 (2019)
  • (48) Liu, Y., Zhu, L., Pei, S., Fu, H., Qin, J., Zhang, Q., Wan, L., Feng, W.: From synthetic to real: Image dehazing collaborating with unlabeled real data. In: Proceedings of ACM International Conference on Multimedia, pp. 50–58 (2021)
  • (49) Liu, Y.F., Jaw, D.W., Huang, S.C., Hwang, J.N.: Desnownet: Context-aware deep network for snow removal. IEEE Transactions on Image Processing 27(6), 3064–3073 (2018)
  • (50) Loshchilov, I., Hutter, F.: Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983 (2016)
  • (51) Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Proceedings of International Conference on Learning Representations (2019)
  • (52) Luo, Y., Xu, Y., Ji, H.: Removing rain from a single image via discriminative sparse coding. In: Proceedings of IEEE International Conference on Computer Vision, pp. 3397–3405 (2015)
  • (53) Narasimhan, S.G., Nayar, S.K.: Chromatic framework for vision in bad weather. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 598–605 (2000)
  • (54) Özdenizci, O., Legenstein, R.: Restoring vision in adverse weather conditions with patch-based denoising diffusion models. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023)
  • (55) Petit, O., Thome, N., Rambour, C., Themyr, L., Collins, T., Soler, L.: U-net transformer: Self and cross attention for medical image segmentation. In: Proceedings of Machine Learning in Medical Imaging, pp. 267–276 (2021)
  • (56) Qian, R., Tan, R.T., Yang, W., Su, J., Liu, J.: Attentive generative adversarial network for raindrop removal from a single image. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2482–2491 (2018)
  • (57) Qiao, Y., Huo, Z., Meng, S.: Dual-route synthetic-to-real adaption for single image dehazing. IET Image Processing (2023)
  • (58) Qin, X., Wang, Z., Bai, Y., Xie, X., Jia, H.: Ffa-net: Feature fusion attention network for single image dehazing. In: Proceedings of AAAI Conference on Artificial Intelligence, pp. 11908–11915 (2020)
  • (59) Qu, Y., Chen, Y., Huang, J., Xie, Y.: Enhanced pix2pix dehazing network. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 8160–8168 (2019)
  • (60) Quan, R., Yu, X., Liang, Y., Yang, Y.: Removing raindrops and rain streaks in one go. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 9147–9156 (2021)
  • (61) Quan, Y., Deng, S., Chen, Y., Ji, H.: Deep learning for seeing through window with raindrops. In: Proceedings of IEEE International Conference on Computer Vision, pp. 2463–2471 (2019)
  • (62) Ren, W., Liu, S., Zhang, H., Pan, J., Cao, X., Yang, M.H.: Single image dehazing via multi-scale convolutional neural networks. In: Proceedings of European Conference on Computer Vision, pp. 154–169 (2016)
  • (63) Ren, W., Ma, L., Zhang, J., Pan, J., Cao, X., Liu, W., Yang, M.H.: Gated fusion network for single image dehazing. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3253–3261 (2018)
  • (64) Ren, W., Tian, J., Han, Z., Chan, A., Tang, Y.: Video desnowing and deraining based on matrix decomposition. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 4210–4219 (2017)
  • (65) Roth, S., Black, M.J.: Fields of experts: A framework for learning image priors. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 860–867 (2005)
  • (66) Shi, W., Caballero, J., Huszár, F., Totz, J., Aitken, A.P., Bishop, R., Rueckert, D., Wang, Z.: Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1874–1883 (2016)
  • (67) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • (68) Song, Y., He, Z., Qian, H., Du, X.: Vision transformers for single image dehazing. arXiv preprint arXiv:2204.03883 (2022)
  • (69) Susladkar, O., Deshmukh, G., Makwana, D., Mittal, S., Teja, R., Singhal, R.: Gafnet: A global fourier self attention based novel network for multi-modal downstream tasks. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision, pp. 5242–5251 (2023)
  • (70) Tu, Z., Talebi, H., Zhang, H., Yang, F., Milanfar, P., Bovik, A., Li, Y.: Maxim: Multi-axis mlp for image processing. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 5769–5780 (2022)
  • (71) Valanarasu, J.M.J., Yasarla, R., Patel, V.M.: Transweather: Transformer-based restoration of images degraded by adverse weather conditions. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2353–2363 (2022)
  • (72) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proceedings of Advances in Neural Information Processing Systems (2017)
  • (73) Wang, H., Xie, Q., Zhao, Q., Meng, D.: A model-driven deep neural network for single image rain removal. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3103–3112 (2020)
  • (74) Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., et al.: Deep high-resolution representation learning for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 43(10), 3349–3364 (2020)
  • (75) Wang, T., Yang, X., Xu, K., Chen, S., Zhang, Q., Lau, R.W.: Spatial attentive single-image deraining with a high quality real rain dataset. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 12270–12279 (2019)
  • (76) Wang, T., Zhang, K., Shen, T., Luo, W., Stenger, B., Lu, T.: Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In: Proceedings of AAAI Conference on Artificial Intelligence (2023)
  • (77) Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: Esrgan: Enhanced super-resolution generative adversarial networks. In: Proceedings of the European Conference on Computer Vision Workshops (2018)
  • (78) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 17683–17693 (2022)
  • (79) Wu, H., Qu, Y., Lin, S., Zhou, J., Qiao, R., Zhang, Z., Xie, Y., Ma, L.: Contrastive learning for compact single image dehazing. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 10551–10560 (2021)
  • (80) Xiao, J., Fu, X., Liu, A., Wu, F., Zha, Z.J.: Image de-raining transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)
  • (81) Yamashita, A., Tanaka, Y., Kaneko, T.: Removal of adherent waterdrops from images acquired with stereo camera. In: Proceedings of International Conference on Intelligent Robots and Systems, pp. 400–405 (2005)
  • (82) Yang, W., Tan, R.T., Feng, J., Guo, Z., Yan, S., Liu, J.: Joint rain detection and removal from a single image with contextualized deep networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 42(6), 1377–1393 (2019)
  • (83) Yang, W., Tan, R.T., Feng, J., Liu, J., Guo, Z., Yan, S.: Deep joint rain detection and removal from a single image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1357–1366 (2017)
  • (84) Yao, C., Jin, S., Liu, M., Ban, X.: Dense residual transformer for image denoising. Electronics 11(3), 418 (2022)
  • (85) Ye, T., Jiang, M., Zhang, Y., Chen, L., Chen, E., Chen, P., Lu, Z.: Perceiving and modeling density is all you need for image dehazing. In: Proceedings of European Conference on Computer Vision (2021)
  • (86) Ye, T., Zhang, Y., Jiang, M., Chen, L., Liu, Y., Chen, S., Chen, E.: Perceiving and modeling density for image dehazing. In: Proceedings of European Conference on Computer Vision, pp. 130–145 (2022)
  • (87) Yin, X., Tu, G., Chen, Q.: Multiscale depth fusion with contextual hybrid enhancement network for image dehazing. IEEE Transactions on Instrumentation and Measurement (2023)
  • (88) You, S., Tan, R.T., Kawakami, R., Mukaigawa, Y., Ikeuchi, K.: Adherent raindrop modeling, detection and removal in video. IEEE Transactions on Pattern Analysis and Machine Intelligence 38(9), 1721–1733 (2015)
  • (89) Yu, H., Zheng, N., Zhou, M., Huang, J., Xiao, Z., Zhao, F.: Frequency and spatial dual guidance for image dehazing. In: Proceedings of European Conference on Computer Vision, pp. 181–198 (2022)
  • (90) Yuan, Y., Fu, R., Huang, L., Lin, W., Zhang, C., Chen, X., Wang, J.: Hrformer: High-resolution vision transformer for dense predict. In: Proceedings of Advances in Neural Information Processing Systems, pp. 7281–7293 (2021)
  • (91) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 5728–5739 (2022)
  • (92) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.H., Shao, L.: Multi-stage progressive image restoration. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 14821–14831 (2021)
  • (93) Zhang, F., Chen, G., Wang, H., Zhang, C.: Cf-dan: Facial-expression recognition based on cross-fusion dual-attention network. Computational Visual Media pp. 1–16 (2024)
  • (94) Zhang, H., Patel, V.M.: Densely connected pyramid dehazing network. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3194–3203 (2018)
  • (95) Zhang, H., Patel, V.M.: Density-aware single image de-raining using a multi-stream dense network. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 695–704 (2018)
  • (96) Zhang, H., Sindagi, V., Patel, V.M.: Image de-raining using a conditional generative adversarial network. IEEE Transactions on Circuits and Systems for Video Technology 30(11), 3943–3956 (2019)
  • (97) Zhang, K., Li, R., Yu, Y., Luo, W., Li, C.: Deep dense multi-scale network for snow removal using semantic and depth priors. IEEE Transactions on Image Processing 30, 7419–7431 (2021)
  • (98) Zhang, T., Jiang, N., Lin, J., Lin, J., Zhao, T.: Desnowformer: an effective transformer-based image desnowing network. In: Proceedings of IEEE International Conference on Visual Communications and Image Processing, pp. 1–5 (2022)
  • (99) Zhang, X., Wang, T., Wang, J., Tang, G., Zhao, L.: Pyramid channel-based feature attention network for image dehazing. Computer Vision and Image Understanding 197, 103003 (2020)
  • (100) Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2472–2481 (2018)
  • (101) Zhang, Z., Zhu, Y., Fu, X., Xiong, Z., Zha, Z.J., Wu, F.: Multifocal attention-based cross-scale network for image de-raining. In: Proceedings of the 29th ACM International Conference on Multimedia, pp. 3673–3681 (2021)
  • (102) Zheng, L., Li, Y., Zhang, K., Luo, W.: T-net: Deep stacked scale-iteration network for image dehazing. IEEE Transactions on Multimedia (2022)
  • (103) Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of IEEE International Conference on Computer Vision, pp. 2223–2232 (2017)
  • (104) Zhu, L., Fu, C.W., Lischinski, D., Heng, P.A.: Joint bi-layer optimization for single-image rain streak removal. In: Proceedings of IEEE International Conference on Computer Vision, pp. 2526–2534 (2017)