Multi-Expert Adaptive Selection: Task-Balancing for All-in-One Image Restoration
Abstract
The use of a single image restoration framework to achieve multi-task image restoration has garnered significant attention from researchers. However, several practical challenges remain, including meeting the specific and simultaneous demands of different tasks, balancing relationships between tasks, and effectively utilizing task correlations in model design. To address these challenges, this paper explores a multi-expert adaptive selection mechanism. We begin by designing a feature representation method that accounts for both the pixel channel level and the global level, encompassing low-frequency and high-frequency components of the image. Based on this method, we construct a multi-expert selection and ensemble scheme. This scheme adaptively selects the most suitable expert from the expert library according to the content of the input image and the prompts of the current task. It not only meets the individualized needs of different tasks but also achieves balance and optimization across tasks. By sharing experts, our design promotes interconnections between different tasks, thereby enhancing overall performance and resource utilization. Additionally, the multi-expert mechanism effectively eliminates irrelevant experts, reducing interference from them and further improving the effectiveness and accuracy of image restoration. Experimental results demonstrate that our proposed method is both effective and superior to existing approaches, highlighting its potential for practical applications in multi-task image restoration. The source code of the proposed method is available at https://github.com/zhoushen1/MEASNet.
Index Terms:
Image Restoration, All-in-One Framework, Expert Selection.
I Introduction
As an inverse problem in the field of computer vision, image restoration aims to restore high-quality and clear images from input images affected by various degradation factors such as haze, rain streaks, noise, etc. Given its critical role in numerous downstream tasks, such as image fusion [1, 2, 3], target recognition [4, 5], and detection [6, 7], this technology has attracted widespread attention from researchers. Although many methods have shown excellent performance in their respective fields, such as denoising [8, 9, 10], deblurring [11, 12, 13], deraining [14, 15, 16, 17], and dehazing [18, 19, 20, 21], they are often limited to dealing with a single type of degradation problem. When faced with different types of degradation or varying degrees of degradation, these methods often struggle to provide satisfactory results. To address the above challenges, researchers have begun exploring and designing universal models that can adapt to various image restoration tasks, achieving some promising research results [22, 23]. Although these methods perform well, their generality is still limited by the adaptability of the network architecture, and separate training is still necessary for each restoration task. This means that such general frameworks are not suitable for handling multiple tasks simultaneously without sacrificing single-task performance.

To solve the above problems, researchers have begun exploring the design of an All-in-One framework that can simultaneously handle multiple image restoration tasks with a single model. The key distinction from the aforementioned general model methods is that this framework can accommodate the requirements of multiple tasks without requiring retraining after the model is pre-trained. Therefore, the key to studying such methods lies in balancing the relationships between different tasks within the same framework. According to the characteristics of existing methods, current All-in-One image restoration methods can be roughly divided into three categories: Prior Knowledge-Based Image Restoration (PKIR) [25, 26, 27, 28, 29, 30], Architecture Search and Feature Modulation-Based Image Restoration (AS-FM-IR) [31, 32, 33, 34, 35, 36], and Prompt-Based Image Restoration (PromptIR) [37, 38, 39, 40, 41]. PKIR guides the image restoration process by utilizing external prior knowledge of images under different tasks, enabling the model to handle multiple types of image degradation simultaneously. However, such methods are limited by the accuracy and effectiveness of the prior knowledge of the input degraded image. If the prior knowledge does not match the characteristics of the actual image, the quality of the restored image may be significantly impacted.
Compared to PKIR, AS-FM-IR is also more prevalent in the “All-in-One” image restoration framework. In AS-FM-IR, architecture search-based methods aim to optimize relationships between different tasks by finding the most suitable network components for each task [31, 32, 33]. Although these methods have shown effectiveness in practice, they require searching for network components based on the specific characteristics of the input image during deployment, which increases the complexity of model testing. On the other hand, feature modulation methods mainly generate a specific set of parameters based on the input image to adjust the network’s output features to better adapt to each image restoration task [34, 35, 36]. Although they have shown potential in addressing various types of image degradation problems, generating ideal modulation parameters remains a significant challenge when faced with complex and ever-changing real-world scenarios. The core idea of PromptIR methods is to dynamically generate a set of corresponding prompt information based on the input text prompts, using this information to assist the model in processing specific image restoration tasks more accurately [37, 38, 39, 40, 41]. Through this approach, the model can achieve collaborative processing of multiple image restoration tasks with the assistance of prompts, ultimately improving overall processing efficiency and restoration quality. Although PromptIR techniques have shown potential and advantages, the effectiveness of such methods is still largely limited by the quality of generated prompts.
Although the aforementioned methods have made some research progress, achieving multi-task image restoration within a unified framework still faces numerous challenges. For instance, correlations among similar restoration tasks may exist, and leveraging these correlations effectively in model design to enhance overall performance and resource utilization efficiency remains a pressing concern. To tackle these challenges, we explore a multi-expert adaptive selection mechanism that coordinates different experts to cater to the diverse requirements of various tasks. To accomplish this objective, we devise a feature representation method based on the single pixel channel level of the image as well as the global level of the low-frequency and high-frequency components of the image. This method comprehensively captures the feature information of the image and provides robust support for subsequent selection and integration of experts.
Based on the aforementioned feature representations, we develop a multi-expert selection and ensemble scheme. As illustrated in Fig. 1, our proposed scheme exhibits notable distinctions from existing expert selection methods. It adaptively selects the most appropriate experts from the expert library for the current task and image content. This mechanism not only satisfies the individualized requirements of different tasks for network structures but also achieves a balance and optimization among tasks, ensuring that they can share expert resources without mutual interference. Furthermore, since expert selection is contingent upon image content, different tasks can establish correlations via shared experts. This mechanism enables the model to share knowledge and learn from each other when tackling diverse tasks, ultimately enhancing the overall restoration effect. In this process, the multi-expert mechanism also has the capability to eliminate irrelevant expert models, effectively mitigating interference between tasks. Experimental results demonstrate that the method proposed in this paper is not only effective but also possesses significant advantages over existing methods, exhibiting superior restoration performance and enhanced generalization ability in the domain of multi-task image restoration.
In summary, the contributions and advantages of the proposed method are primarily reflected in the following aspects:
• A multi-expert adaptive selection mechanism is designed to adaptively select the most suitable experts for the current task and image content, taking into account the characteristics of the input image and the prompts of specific tasks. This design not only meets the personalized requirements of different tasks for network structures but also achieves balance and optimization among tasks, ensuring that they can coordinate and share network resources without interfering with each other.
• Considering the complementarity between local and global image features, we design a feature representation method that integrates the channel-level information of individual pixels with the global-level low-frequency and high-frequency components. This innovative approach captures image features more comprehensively, providing strong support for subsequent expert selection and ensemble processes.
• Since expert selection is contingent upon image content, associations between different tasks are established through the sharing of experts. This mechanism enables the model to learn from and share knowledge across diverse tasks, ultimately enhancing overall performance and improving the efficiency of expert utilization.
• This multi-expert mechanism also possesses the ability to exclude experts that are irrelevant to the current task, effectively mitigating interference between tasks and further enhancing the effectiveness and accuracy of image restoration. Experimental results demonstrate that, compared to existing methods, this approach exhibits superior restoration performance and improved generalization ability in the field of multi-task image restoration.
The remaining content of this paper is arranged as follows: Section II reviews the methods related to the paper. Section III provides a detailed introduction to the proposed method. Section IV presents the experimental results and analysis. Finally, Section V summarizes and discusses the proposed method.
II Related Work
II-A Task-Specific Image Restoration
Image restoration is a significant topic in the field of computer vision. Due to the complexity and uncertainty of the degradation process, traditional image restoration methods often rely on manually designed features and prior knowledge to construct restoration models [42, 8, 19, 43, 44]. While these methods can achieve pleasing results on specific datasets, their performance is often limited when dealing with more diverse and complex degraded images in the real world. With the rapid development of deep learning, image restoration based on convolutional neural networks (CNNs) has received widespread attention [45, 11, 9, 20, 46, 47]. Numerous effective methods have been proposed for various tasks, including image deblurring [11, 12], image denoising [9, 48], image dehazing [20, 47, 49], image deraining [14, 15], and image desnowing [50, 51]. Representative methods include DnCNN for denoising [9], MSPF for deraining [14], DCMPNet for dehazing [20], and DTCW for desnowing [51]. However, CNNs have limitations in modeling long-range dependencies in images. To address this issue, researchers have introduced Transformers [52] to the field of image restoration, proposing methods such as SwinIR [53] and Restormer [23]. These Transformer-based methods can better restore details and clarity in images with complex textures and structures.
However, existing deep learning-based methods, while performing well on specific tasks, still show certain limitations when dealing with different types of image degradation using a unified framework. Each type of image degradation has its unique characteristics, requiring targeted network designs and optimization strategies. To enhance the generality of these methods, some studies have begun to explore new model design strategies. For instance, Wang et al. proposed Uformer [22], Liang et al. proposed SwinIR [53], and Zamir et al. proposed Restormer [23]. These methods further enhance the performance and stability of image restoration by introducing new network structures and optimization algorithms. Although these methods have shown effectiveness in various image restoration tasks, addressing the specific needs of each task within a unified framework remains challenging. For example, denoising tasks may focus more on restoring local textures, while deblurring tasks may emphasize restoring global structures. This indicates that achieving multi-task recovery within a single framework, while preventing the performance of individual task recovery from degrading, remains a significant challenge.

II-B All-in-One Image Restoration
II-B1 Image Restoration via Knowledge-Based Methods
In All-in-One image restoration frameworks, leveraging external knowledge is a common strategy for addressing conflicts arising from the inconsistent demands of diverse restoration tasks on a unified framework [25, 26, 27, 28, 29, 30]. Specifically, Chen et al. [25] transferred the task-specific prior knowledge contained in multiple single-task teacher networks to a student network via knowledge transfer and contrastive regularization, enabling compatibility with various degraded image restoration tasks. Wang et al. [26], on the other hand, utilized a codebook trained on high-quality images to replace the features of degraded regions in images, effectively addressing the incompatibility among specific requirements for the restoration model. Zhang et al. [54] revealed the intrinsic connections between multiple types of degradation by employing the underlying physical principles of different degradation types and a learnable principal component analysis method. They further utilized these connections to construct a dynamic routing mechanism to remove unknown degradation. Lastly, Jiang et al. [27] constructed a correlation between degraded images and predefined degradation descriptions, retrieved the text descriptions of degraded images, and used them as generation conditions for diffusion models, ultimately achieving multi-task image restoration.
Similarly, Lin et al. [28] proposed a method that maps degraded images into a text space. They obtained text descriptions of clear images by removing degradation-related words and used these descriptions as generation conditions for diffusion models. This approach achieved joint restoration of images with various degradation types. Tan et al. [29], on the other hand, incorporated the CLIP Weather Prior embedding module into their image restoration model. This allowed the model to extract prior information related to specific degradation types from input samples using the CLIP image encoder. Based on this information, the model could dynamically adjust its internal parameters, effectively addressing and restoring various types of image degradation. Luo et al. [30] took a different approach by fine-tuning the CLIP image encoder to predict high-quality feature embeddings from various degraded images. They used this embedding as a generation constraint for diffusion models, achieving the joint execution of different degradation restoration tasks. However, despite their effectiveness, these methods rely heavily on external prior knowledge. This reliance may reduce their flexibility in facing unknown or changing conditions to some extent.
II-B2 Image Restoration via ArchSearch and Feature Modulation
In addition to the above methods, architecture search (ArchSearch) and feature modulation are also quite common in the ’all-in-one’ image restoration framework. ArchSearch effectively addresses the issue of inconsistency between different task requirements in the overall network framework design. Specifically, Chen et al. [31] constructed a set of decoders tailored for specific tasks. They intelligently searched for and selected the most appropriate decoder based on the degradation type of the input image to reconstruct the restored image, significantly enhancing the model’s ability to handle a wide range of degraded images. Park et al. [32] proposed a multi-degradation adaptive classifier and utilized it to select suitable filters as convolution kernels for efficient feature extraction. Zhu et al. [33] employed a two-stage training strategy. This strategy initially learns the general features of degradation and then delves into specific features. Based on this, it searches for the convolutional layer that best matches the current task from a pool of convolutional layers, enabling the model to flexibly adapt to various types of degraded images. Yang et al. [55] constructed an expert library containing various convolution kernels. They introduced a question-answering model to extract degradation information and locate degradation positions from input images. This information then guides the selection process of convolution kernels. Although these methods are effective, the search process is relatively complex, which hinders the practical deployment and application of the model.
In terms of feature modulation, Li et al. [56] employed a contrastive learning strategy to learn degradation representations from degraded images. These representations were then used to modulate features within the restoration network. Wei et al. [34] effectively extracted information on degradation types and severity by combining edge quality ranking loss with contrast loss. This approach enabled the generation of parameters for affine transformation of features, achieving affine modulation. Cui et al. [35] utilized a frequency mining module for spectral decomposition of degraded images, extracting both low-frequency and high-frequency components. These components were then modulated through a frequency modulation module. Chen et al. [36] pre-trained a general image restoration model using synthesized degraded images. The model was then fine-tuned with adapters to meet the requirements of specific image restoration tasks. Although these methods have demonstrated effectiveness in addressing various types of degradation, they often struggle to generate ideal modulation parameters in more complex situations. This limitation can lead to modulation outcomes that fail to meet specific task requirements.
II-B3 Image Restoration via Prompt Learning
With advancements in prompt learning research, researchers have begun exploring its use to address the challenge of coordinating multiple tasks within a unified framework. Specifically, Potlapalli et al. [37] introduced learnable prompts to implicitly capture various types of degradation information and modulate features accordingly. Ma et al. [38] employed degradation-aware visual prompts to encode different types of image degradation information, using linear weighting to control the restoration process. Li et al. [39] integrated degradation-aware prompts and restoration prompts into a general restoration prompt and utilized a prompt-feature interaction module to modulate degradation-related features. Kong et al. [40] adopted a sequential learning strategy to optimize restoration objectives and resolve conflicts during training, enhancing the network’s adaptability to different restoration tasks through an explicit adaptive prompting mechanism. Marcos et al. [41] proposed a text-guided image restoration model that uses human instructions as prompts to guide the restoration process. While prompt-based image restoration methods have demonstrated some effectiveness, they are often limited by the quality of the generated prompts. Unlike these methods, our approach leverages adaptive selection and comprehensive utilization of multiple experts. It considers both the connections and differences between various tasks, ultimately achieving superior performance in joint restoration.
III The Proposed Method
III-A Overview
Our core goal is to build an All-in-One image restoration framework that can effectively recover clear images from degraded images without relying on prior information about the degradation of the input image. As shown in Fig. 2, the proposed framework consists of two main components: (i) Task-Specific Prompt-Guided Multi-Expert Selection and Ensemble (STP-G-MESE) and (ii) Feature Decomposition and Multi-Expert Ensemble (FD-MEE). The STP-G-MESE component primarily selects the most suitable expert for the restoration of the current pixel based on features across different channels. This module adaptively generates task-related prompts based on the degradation of the input image and selects the most appropriate experts for the current image based on these prompts and image content, effectively mitigating interference from different tasks.
In contrast, FD-MEE extracts image features from a global perspective. It fully explores the information contained within the entire image to achieve a comprehensive representation of the global features. Technically, FD-MEE decouples the input features into high-frequency and low-frequency components and selects the most suitable experts from the expert library based on these components. This approach allows the features of different frequency components to play their respective roles in the restoration process. Our method integrates pixel-level features across different channels and global image features within a single framework, resulting in an effective representation of image content. Notably, the experts used in this framework consist of multi-layer perceptrons (MLPs) with varying parameters, which endows the framework with robust learning and adaptation capabilities.
III-B Specific-Task Prompts Guide Multi-Expert Selection and Ensemble
STP-G-MESE mainly consists of two core components: Task-Specific Prompt Generation (TSPG) and Expert Selection and Ensemble. The TSPG component generates prompt information that is highly relevant to the task from the input image, providing a key basis for subsequent expert selection. Guided by TSPG, the Expert Selection and Ensemble component identifies the most suitable experts from the expert library for the current sample recovery task. The features output by these selected experts are then integrated to comprehensively leverage their strengths and enhance image quality restoration.
III-B1 Task-Specific Prompt Generation
To effectively handle various types of image restoration tasks within a unified framework, this paper proposes an innovative strategy that utilizes multi-expert collaboration. This strategy aims to coordinate and address the unique requirements of each task within the model. Selecting an appropriate expert network to assist with specific image restoration tasks is crucial for the overall performance of the model. Indeed, the performance of a trained model is influenced not only by the type of task but also by the specific content of the image. To comprehensively address the impact of these two core factors on image restoration performance, this paper designs a TSPG mechanism. This mechanism generates prompt information that is closely related to specific tasks. The detailed implementation process is illustrated in Fig.2(a).

Let the image to be restored be $I_d \in \mathbb{R}^{H \times W \times 3}$. In the Task-Specific Prompt Generation (TSPG) module, $I_d$ first undergoes convolution processing, followed by global average pooling, and finally passes through the Softmax layer to produce $w \in \mathbb{R}^{C}$. Here, $C$ represents the number of channels in the feature map output of the convolutional layers. To address the specific needs of image restoration, we introduce a set of task-specific and learnable prompts to assist in expert selection. While prompts can be explicitly defined for each task, different image restoration tasks are interrelated rather than independent. Therefore, this interrelation must be thoroughly considered when designing the task-specific prompt generation mechanism.

To address this issue, we propose a method for generating prompts that meet specific task requirements from a set of Task-Related Basic Prompts (TRB Prompts). Specifically, we use the output of the Softmax layer, $w$, as the weights for the task-related basic prompts, to combine and construct prompts for this specific task. Assuming the TRB prompts are $\{p_1, p_2, \dots, p_N\}$, the prompt $P$ for the input image $I_d$ can be expressed as:

$P = \sum_{n=1}^{N} w_n \, p_n, \qquad (1)$

where $N$ is the number of TRB prompts (the length of $w$). For ease of calculation in the subsequent process, we set $N$ to be equal to the number of channels $C$ of the feature maps output by the convolutional layer. We then broadcast $P$ as $\hat{P} \in \mathbb{R}^{H \times W \times C}$ to match the feature map output by the convolutional layer in the multi-expert selection and ensemble module.
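To make the computation above concrete, the following PyTorch sketch traces the TSPG pipeline (convolution, global average pooling, Softmax, and the weighted combination of TRB prompts in Eq. (1)); the layer choices, channel width, and the assumption that each TRB prompt has length $C$ are illustrative rather than the released implementation.

```python
import torch
import torch.nn as nn

class TSPG(nn.Module):
    """Sketch of Task-Specific Prompt Generation: conv -> GAP -> Softmax -> weighted TRB prompts."""
    def __init__(self, in_ch=3, feat_ch=48):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1)
        # N Task-Related Basic (TRB) prompts; N is set equal to the channel number C (= feat_ch).
        self.trb_prompts = nn.Parameter(torch.randn(feat_ch, feat_ch))  # (N, C)

    def forward(self, x):                          # x: (B, 3, H, W) degraded image
        feat = self.conv(x)                        # (B, C, H, W) content features F
        w = feat.mean(dim=(2, 3)).softmax(dim=-1)  # (B, N) GAP + Softmax weights
        prompt = w @ self.trb_prompts              # Eq. (1): P = sum_n w_n * p_n, shape (B, C)
        # Broadcast P spatially so it matches the feature map for later concatenation.
        prompt_map = prompt[:, :, None, None].expand_as(feat)
        return feat, prompt_map
```

In use, `feat` and `prompt_map` would be concatenated along the channel dimension to form the routing input described in the next subsection.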
III-B2 Multi-Expert Selection and Ensemble
The performance of the model is not only affected by the task category but also deeply influenced by the characteristics of the image content itself. Therefore, when selecting suitable experts for different tasks, we need to carefully consider both the characteristics of the tasks and the uniqueness of the image content. Since $w$ is obtained by applying global average pooling followed by the Softmax layer to the convolutional features of the degraded image, it contains information about image degradation. Thus, the prompt $P$ created based on $w$ naturally embeds this degradation information, allowing $\hat{P}$ to serve as prompts for specific tasks in the image restoration process. Additionally, the image features $F$ output by the convolutional layer in the Multi-Expert Selection and Ensemble (MESE) module are rich in specific content information of the image and can serve as prompts for image content, aiding in the more accurate selection of suitable experts.

Specifically, we concatenate $\hat{P}$ and $F$ to obtain $F_c = \mathrm{Cat}(\hat{P}, F)$, where $\mathrm{Cat}(\cdot)$ represents the concatenation operation. To select suitable experts to handle pixels at different positions, we need to synthesize $F_c$ along the channel dimension to evaluate which expert should be selected for quality restoration of the pixel at each position. To achieve this goal, we introduce learnable expert prompts $E_p$, where the number of prompts equals the total number of experts $M$. Using these expert prompts, the features are integrated along the channel dimension, and the integrated results are fed into the Softmax function to obtain the demand for each expert at every pixel position. As shown in Fig. 3, this process can be formulated as:

$D = \mathrm{Softmax}(F_c E_p), \qquad (2)$

The value $D_{i,j,m}$ at position $(i, j)$ in the $m$-th channel of $D$ represents the degree to which the pixel at that position requires the $m$-th expert. The higher the value, the greater the demand for the $m$-th expert to restore that pixel. Instead of using all experts based on their demand degrees, we select the top-$K$ experts with the highest demand degrees to participate in pixel feature recovery. This approach effectively enhances the feature recovery ability of the selected experts and reduces interference between unrelated tasks. Additionally, because the same experts may be selected for correlated but different image restoration tasks, these tasks can establish associations through shared experts. This association is beneficial for improving the performance of the current task by leveraging insights from other related tasks.

Let $f_{i,j}$ denote the feature of the pixel at position $(i, j)$ in $F_c$, and let the experts applicable to $f_{i,j}$ be denoted as $E_k$ with $k \in \Omega_{i,j}$, where $\Omega_{i,j}$ is the index set of the selected top-$K$ experts. The processed features are represented as $E_k(f_{i,j})$. To reflect the importance of different experts, we apply the weights stored in $D$ for the selected experts to modulate the features:

$\hat{f}_{i,j} = \sum_{k \in \Omega_{i,j}} D_{i,j,k} \, E_k(f_{i,j}), \qquad (3)$
Since the above operation only considers the spatial correlation of the pixel and does not account for the correlation between pixel feature vectors, we add a transformer layer after the multiple experts to explore the correlation between different pixels.
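A minimal sketch of the pixel-level routing in Eqs. (2)-(3) is given below, assuming MLP experts and a dot product between pixel features and the learnable expert prompts; the exact expert architecture and projection used in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class PixelExpertRouting(nn.Module):
    """Sketch of MESE at the pixel level: demand map (Eq. 2), top-K selection, weighted ensemble (Eq. 3)."""
    def __init__(self, channels=96, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.expert_prompts = nn.Parameter(torch.randn(channels, num_experts))  # one prompt per expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(channels, channels), nn.GELU(), nn.Linear(channels, channels))
            for _ in range(num_experts)
        ])

    def forward(self, f):                                     # f: (B, C, H, W), concat of prompt map and features
        b, c, h, w = f.shape
        tokens = f.permute(0, 2, 3, 1).reshape(-1, c)         # one token per pixel
        demand = (tokens @ self.expert_prompts).softmax(-1)   # Eq. (2): per-pixel demand for each expert
        top_idx = demand.topk(self.top_k, dim=-1).indices     # keep the K most-demanded experts per pixel
        out = torch.zeros_like(tokens)
        for k, expert in enumerate(self.experts):
            mask = (top_idx == k).any(dim=-1)                 # pixels routed to expert k
            if mask.any():
                gate = demand[mask, k:k + 1]                  # demand degree D_{i,j,k} as the mixing weight
                out[mask] = out[mask] + gate * expert(tokens[mask])   # Eq. (3)
        out = out.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return out, demand.reshape(b, h, w, -1)               # demand map reused by the balancing loss
```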

According to the above process, the importance level of the $m$-th expert for all pixels on a feature map of size $H \times W$ can be described by summing all the values on the $m$-th channel of $D$:

$Imp_m = \sum_{i=1}^{H} \sum_{j=1}^{W} D_{i,j,m}, \qquad (4)$

Assuming that the pixel at $(i, j)$ selects the $m$-th expert, we set the value at the $m$-th channel of position $(i, j)$ in $D$ to 1, and treat the positions where the $m$-th expert is not selected as 0, thus forming a new tensor $\tilde{D}$. The total number of times the $m$-th expert is selected can be expressed as:

$Cnt_m = \sum_{i=1}^{H} \sum_{j=1}^{W} \tilde{D}_{i,j,m}, \qquad (5)$

Let the means and standard deviations of the sequences $\{Imp_m\}_{m=1}^{M}$ and $\{Cnt_m\}_{m=1}^{M}$ be $(\mu_{Imp}, \sigma_{Imp})$ and $(\mu_{Cnt}, \sigma_{Cnt})$, respectively. To ensure that all experts can be selected by pixels with the same probability, we introduce the coefficient of variation squared loss [57] to optimize the relevant parameters:

$\mathcal{L}_{cv} = \frac{\sigma_{Imp}^{2}}{\mu_{Imp}^{2} + \epsilon} + \frac{\sigma_{Cnt}^{2}}{\mu_{Cnt}^{2} + \epsilon}, \qquad (6)$

where $\epsilon$ is an extremely small positive number used to avoid the denominator being zero.
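The balancing terms in Eqs. (4)-(6) can be computed from the demand map as in the sketch below; this follows the standard coefficient-of-variation-squared formulation of [57], and the tensor layout matches the routing sketch above rather than the paper's exact implementation.

```python
import torch

def cv_squared_loss(demand, top_k=2, eps=1e-10):
    """Load-balancing loss sketch: demand is (B, H, W, M) from the routing step."""
    flat = demand.reshape(-1, demand.shape[-1])              # one row per pixel
    importance = flat.sum(dim=0)                             # Eq. (4): summed demand per expert
    top_idx = flat.topk(top_k, dim=-1).indices
    selected = torch.zeros_like(flat).scatter_(1, top_idx, 1.0)
    load = selected.sum(dim=0)                               # Eq. (5): how often each expert is selected
    def cv_sq(x):                                            # Eq. (6): sigma^2 / (mu^2 + eps) per sequence
        return x.var(unbiased=False) / (x.mean() ** 2 + eps)
    # The count term is piecewise constant, so gradients flow mainly through the importance term.
    return cv_sq(importance) + cv_sq(load)
```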
III-C Feature Decomposition and Multi-Expert Ensemble
In reality, each type of image degradation affects the content of the image in a specific way. As shown in Fig. 4, noise and haze mainly affect the high-frequency components of the image, low-light conditions primarily affect the low-frequency components, and rain streaks and blurring affect both the high-frequency and low-frequency components. Therefore, when restoring the quality of degraded images, processing the low-frequency and high-frequency components separately can reduce mutual interference during the restoration process and make the processing more targeted. To achieve effective separation of high-frequency and low-frequency components, this paper introduces dynamically learnable filters [58]. These filters can dynamically learn information from each spatial position and channel, demonstrating excellent feature separation capabilities. Assuming that $X$ is the output of the Transformer layer in STP-G-MESE, in the FD of FD-MEE (as shown in Fig. 5), the result obtained by sequentially passing $X$ through global average pooling, convolutional layers, batch normalization, and Softmax is used as a low-pass filter:

$F_l = \mathrm{Softmax}(\mathrm{BN}(\mathrm{Conv}_{1 \times 1}(\mathrm{GAP}(X)))), \qquad (7)$

where $\mathrm{GAP}(\cdot)$ and $\mathrm{Conv}_{1 \times 1}(\cdot)$ represent the global average pooling layer and the $1 \times 1$ convolution operation, respectively; BN represents batch normalization processing. Softmax is used to ensure that the generated filter $F_l$ is a low-pass filter. After obtaining $F_l$, we use the operations in formula (8) to separate the high-frequency and low-frequency features:

$X_{low} = F_l \circledast X, \qquad X_{high} = X - X_{low}, \qquad (8)$

where $\circledast$ represents the convolution operation.
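A possible realization of the dynamic low-pass filtering in Eqs. (7)-(8) is sketched below, where the predicted kernel is applied as a depthwise convolution; the kernel size and the per-channel form of the filter are assumptions on top of the description above and of [58].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyDecomposition(nn.Module):
    """Sketch of FD: predict a normalized (low-pass) kernel per channel, blur, take the residual as high freq."""
    def __init__(self, channels=96, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv2d(channels, channels * kernel_size * kernel_size, kernel_size=1)
        self.bn = nn.BatchNorm2d(channels * kernel_size * kernel_size)

    def forward(self, x):                                   # x: (B, C, H, W), output of the Transformer layer
        b, c, h, w = x.shape
        # Eq. (7): GAP -> 1x1 conv -> BN; Softmax normalizes each kernel so it acts as a low-pass filter.
        kernel = self.bn(self.conv(self.gap(x)))            # (B, C*k*k, 1, 1)
        kernel = kernel.view(b * c, self.k * self.k).softmax(dim=-1).view(b * c, 1, self.k, self.k)
        # Eq. (8): depthwise convolution with the predicted kernel, then the residual.
        x_low = F.conv2d(x.reshape(1, b * c, h, w), kernel, padding=self.k // 2, groups=b * c)
        x_low = x_low.reshape(b, c, h, w)
        return x_low, x - x_low                             # (X_low, X_high)
```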

After obtaining $X_{low}$ and $X_{high}$, we feed them into the multi-expert ensemble module to select experts at the global level of the low-frequency and high-frequency components of the input image. This compensates for the shortcoming of performing image restoration only at the single-pixel level, as discussed in Section III-B. As shown in Fig. 2, the multi-expert selection and ensemble at the global level consists of two branches: upper and lower. The upper branch is used to generate the identity information of experts, while the lower branch is used to achieve the multi-expert ensemble. In the upper branch, $X_f$ ($f \in \{low, high\}$) sequentially passes through LN, DConv, GAP, and Linear layers. The obtained result is processed by Softmax to predict the expert information and store it in the degree vector $d_f$. The $m$-th element of $d_f$ represents the importance of the $m$-th expert to $X_f$. We select the top-$K$ most important experts for $X_f$ based on $d_f$, and denote the index set of the selected experts as $\Omega_f$.
As DConv extracts features within a single feature map while PConv extracts features along the channel dimension, DConv captures the spatial relationships between pixels, whereas PConv captures the correlations between features in different channels. This creates a distinction between the features extracted by DConv and PConv. Therefore, the interaction between the outputs of DConv and PConv can be utilized to highlight the role of key information in image restoration. Assuming that in the lower branch, $X_f$ passes through LN, DConv, and PConv in sequence, the result obtained is $X_f^{p}$, and that in the upper branch the output of DConv is $X_f^{d}$, we use formula (9) to implement the interaction between $X_f^{p}$ and $X_f^{d}$:

$\hat{X}_f = X_f^{p} \odot X_f^{d}, \qquad (9)$

where $\odot$ denotes element-wise multiplication.
We feed $\hat{X}_f$ into the selected experts $E_k$, where $k \in \Omega_f$, and the result is expressed as $E_k(\hat{X}_f)$. To reflect the different roles played by various experts in image restoration tasks, we adopt the following approach to achieve an effective ensemble of multiple experts:

$Y_f = \sum_{k \in \Omega_f} d_f[k] \, E_k(\hat{X}_f), \qquad (10)$

where $d_f[k]$ is the $k$-th element of the vector $d_f$, which represents the importance of the $k$-th expert to $\hat{X}_f$.
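The global-level selection and ensemble of Eqs. (9)-(10) could be organized as in the following sketch; in particular, the element-wise product used for the DConv/PConv interaction and the 1x1-convolution experts are our assumptions.

```python
import torch
import torch.nn as nn

class GlobalExpertEnsemble(nn.Module):
    """Sketch of MEE for one frequency component: upper branch predicts the degree vector d_f,
    lower branch builds the interacted feature, and the top-K experts are combined (Eq. 10)."""
    def __init__(self, channels=96, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.norm = nn.LayerNorm(channels)
        self.dconv_up = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)   # depthwise
        self.dconv_low = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pconv = nn.Conv2d(channels, channels, kernel_size=1)                      # pointwise
        self.to_degree = nn.Linear(channels, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 1), nn.GELU(), nn.Conv2d(channels, channels, 1))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                         # x: X_low or X_high, (B, C, H, W)
        z = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # LN
        z_up = self.dconv_up(z)                                   # upper branch: DConv
        degree = self.to_degree(z_up.mean(dim=(2, 3))).softmax(-1)   # GAP -> Linear -> Softmax: d_f
        z_low = self.pconv(self.dconv_low(z))                     # lower branch: DConv -> PConv
        inter = z_low * z_up                                      # Eq. (9): interaction (assumed element-wise)
        top_val, top_idx = degree.topk(self.top_k, dim=-1)
        out = torch.zeros_like(inter)
        for i in range(x.shape[0]):                               # Eq. (10): importance-weighted expert sum
            for v, k in zip(top_val[i], top_idx[i]):
                out[i] = out[i] + v * self.experts[int(k)](inter[i:i + 1])[0]
        return out
```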
To achieve effective restoration of image information, we concatenate $Y_{low}$ and $Y_{high}$ and feed them into a decoder consisting of cascaded convolutional layers, Transformer layers, and convolutional layers to reconstruct the restoration result $R$. We then add $R$ to the original input image $I_d$ to obtain the final restored image $\hat{I}$. To ensure the quality of the recovery results, we use the $L_1$ loss to optimize the network parameters:

$\mathcal{L}_{1} = \left\| \hat{I} - I_{gt} \right\|_{1}, \qquad (11)$

where $I_{gt}$ is the ground truth corresponding to $\hat{I}$. The total loss used for model training is:

$\mathcal{L}_{total} = \mathcal{L}_{1} + \lambda \, \mathcal{L}_{cv}, \qquad (12)$

where $\lambda$ is a hyperparameter used to adjust the role played by $\mathcal{L}_{cv}$.
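Putting Eqs. (11)-(12) together, the training objective can be written as below, reusing the cv_squared_loss sketch from Section III-B; the function signature is illustrative.

```python
def total_loss(restored, ground_truth, demand, lam=1e-4):
    """L1 reconstruction loss (Eq. 11) plus the lambda-weighted balancing loss (Eq. 12)."""
    l1 = (restored - ground_truth).abs().mean()
    return l1 + lam * cv_squared_loss(demand)
```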

Methods | Dehazing (SOTS) PSNR | SSIM | Deraining (Rain100L) PSNR | SSIM | Denoising (BSD68 σ=15) PSNR | SSIM | Denoising (BSD68 σ=25) PSNR | SSIM | Denoising (BSD68 σ=50) PSNR | SSIM | Average (All Tasks) PSNR | SSIM
---|---|---|---|---|---|---|---|---|---|---|---|---
BRDNet [59] | 23.23 | 0.895 | 27.42 | 0.895 | 32.26 | 0.898 | 29.76 | 0.836 | 26.34 | 0.693 | 27.80 | 0.843 |
LPNet [60] | 20.84 | 0.828 | 24.88 | 0.784 | 26.47 | 0.778 | 24.77 | 0.748 | 21.26 | 0.552 | 23.64 | 0.738 |
FDGAN [61] | 24.71 | 0.929 | 29.89 | 0.933 | 30.25 | 0.910 | 28.81 | 0.868 | 26.43 | 0.776 | 28.02 | 0.883 |
DL [62] | 26.92 | 0.931 | 32.62 | 0.931 | 33.05 | 0.914 | 30.41 | 0.861 | 26.90 | 0.740 | 29.98 | 0.876 |
MPRNet [63] | 25.28 | 0.955 | 33.57 | 0.954 | 33.54 | 0.927 | 30.89 | 0.880 | 27.56 | 0.779 | 30.17 | 0.899 |
AirNet [56] | 27.94 | 0.962 | 34.90 | 0.967 | 33.92 | 0.933 | 31.26 | 0.888 | 28.00 | 0.797 | 31.20 | 0.910 |
PromptIR [37] | 30.58 | 0.974 | 36.37 | 0.972 | 33.98 | 0.933 | 31.31 | 0.888 | 28.06 | 0.799 | 32.06 | 0.913 |
Ours | 31.61 | 0.981 | 39.00 | 0.985 | 34.12 | 0.935 | 31.46 | 0.892 | 28.19 | 0.803 | 32.85 | 0.919 |
IV Experiments
IV-A Datasets
Following the approach of previous research, we used corresponding datasets for different restoration tasks to validate the performance of the model. Specifically: For image dehazing, we selected the SOTS subset from the RESIDE (outdoor) dataset [64], which comprised 72,135 training images and 500 test images; For image deraining, we used the Rain100L dataset [65], which included 200 clean-rain image pairs for model training and an additional 100 pairs for testing; For image denoising, we jointly used the BSD400 [66] and WED [67] datasets for model training. The training set contained 5,144 clear images, from which we generated noisy images by adding Gaussian noise with standard deviations of 15, 25, and 50. The trained model was tested on the BSD68 dataset [68], which contains 68 clear images with noisy images generated by adding Gaussian noise with standard deviations of 15, 25, and 50; For deblurring, we used the GoPro dataset [69] for motion deblurring, which included 2,103 training images and 1,111 test images; For low-light image enhancement, we used the LOL-v1 dataset [70], which contained 485 training images and 15 test images.
IV-B Implementation Details and Evaluation Metrics
All experiments in this paper were conducted on a single NVIDIA GeForce RTX 4090 GPU, and the model was implemented using the PyTorch 1.12.0 framework. During the training phase, we used the Adam optimizer to optimize the network, setting the initial learning rate to and adjusting it using the cosine annealing strategy. Additionally, we randomly cropped the images to a size of pixels for training. In each small batch, data augmentation was performed by flipping the images horizontally or vertically to expand the training sample size. Under the All-in-One setting, we merged these datasets and trained a single model under three and five degradation settings, respectively. The training process lasted for 150 epochs, and the model was directly tested across multiple restoration tasks. Under the single-task setting, we trained individual models for each specific restoration task, with each model trained for 150 epochs and tested on its respective test set. In experiments, the number of experts and the hyperparameter were heuristically set to 2 and 0.0001, respectively. To objectively evaluate the quality of the restoration results, we adopted commonly used image quality assessment metrics, namely Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), to assess the quality of the reconstructed results.
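For reference, a minimal training-loop sketch matching the setup described above (Adam, cosine annealing, 150 epochs, λ = 0.0001) is given below; the learning-rate value, model interface, and data-loader contents are placeholders rather than the exact settings used in the paper.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

def train(model, train_loader, epochs=150, lr=2e-4, lam=1e-4, device="cuda"):
    """Training sketch: Adam with cosine-annealed learning rate, L1 + lambda * CV loss."""
    model = model.to(device)
    optimizer = Adam(model.parameters(), lr=lr)               # lr is a placeholder value
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)
    for _ in range(epochs):
        for degraded, gt in train_loader:                     # randomly cropped and flipped pairs
            degraded, gt = degraded.to(device), gt.to(device)
            restored, demand = model(degraded)                # assumed interface: output image + demand map
            loss = total_loss(restored, gt, demand, lam=lam)  # total_loss from the sketch in Section III-C
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
```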
Methods | Dehazing PSNR | SSIM | Deraining PSNR | SSIM | Denoising PSNR | SSIM | Deblurring PSNR | SSIM | Low-Light PSNR | SSIM | Average PSNR | SSIM
---|---|---|---|---|---|---|---|---|---|---|---|---
NAFNet [71] | 25.23 | 0.939 | 35.56 | 0.967 | 31.02 | 0.883 | 26.53 | 0.808 | 20.49 | 0.809 | 27.76 | 0.881 |
HINet [72] | 24.74 | 0.937 | 35.67 | 0.969 | 31.00 | 0.881 | 26.12 | 0.788 | 19.47 | 0.800 | 27.40 | 0.875 |
DGUNet [73] | 24.78 | 0.940 | 36.62 | 0.971 | 31.10 | 0.883 | 27.25 | 0.837 | 21.87 | 0.823 | 28.32 | 0.891 |
MIRNetV2 [74] | 24.03 | 0.927 | 33.89 | 0.954 | 30.97 | 0.881 | 26.30 | 0.799 | 21.52 | 0.815 | 27.34 | 0.875 |
SwinIR [53] | 21.50 | 0.891 | 30.78 | 0.923 | 30.59 | 0.868 | 24.52 | 0.773 | 17.81 | 0.723 | 25.04 | 0.835 |
Restormer [23] | 24.09 | 0.927 | 34.81 | 0.962 | 31.49 | 0.884 | 27.22 | 0.829 | 20.41 | 0.806 | 27.60 | 0.881 |
DL [62] | 20.54 | 0.826 | 21.96 | 0.762 | 23.09 | 0.745 | 19.86 | 0.672 | 19.83 | 0.712 | 21.05 | 0.743 |
Transweather [75] | 21.32 | 0.885 | 29.43 | 0.905 | 29.00 | 0.841 | 25.12 | 0.757 | 21.21 | 0.792 | 25.22 | 0.836 |
TAPE [76] | 22.16 | 0.861 | 29.67 | 0.904 | 30.18 | 0.855 | 24.47 | 0.763 | 18.97 | 0.621 | 25.09 | 0.801 |
AirNet [56] | 21.04 | 0.884 | 32.98 | 0.956 | 31.20 | 0.897 | 24.35 | 0.781 | 18.18 | 0.735 | 25.49 | 0.846 |
IDR [54] | 25.24 | 0.943 | 35.63 | 0.965 | 31.60 | 0.887 | 27.87 | 0.846 | 21.34 | 0.826 | 28.34 | 0.893 |
Ours | 31.05 | 0.980 | 38.32 | 0.982 | 31.40 | 0.888 | 29.41 | 0.890 | 23.00 | 0.845 | 30.64 | 0.917 |

IV-C Comparison with State-of-the-art Methods
IV-C1 Comparison of Restoration Across Three Degradations
We conducted a comprehensive performance evaluation of our method on three typical image restoration tasks: image dehazing, deraining, and denoising. To further assess its restoration effectiveness, we compared it with various advanced image restoration methods, including general methods such as BRDNet [59], LPNet [60], FDGAN [61], and MPRNet [63], as well as specialized All-in-One image restoration methods like DL [62], AirNet [56], and PromptIR [37]. To demonstrate the differences in visual effects of the restored images, we have shown the visual results of some images after dehazing, deraining, and denoising in Fig.6. It is worth noting that the general image restoration methods, including BRDNet, LPNet, FDGAN, and MPRNet, do not provide trained parameters and therefore cannot display the visual effects of their restoration results. Hence, following the PromptIR processing mode, we only compared our proposed method with these methods based on objective evaluation results. From the results shown in Fig.6, it can be seen that the proposed method can more effectively restore the color information of the source image during the dehazing process. During the rain removal process, it not only effectively removes rain streaks but also restores lost image information. In the denoising process, it can more effectively preserve the edge texture details in the source image, making the restoration result closer to the ground truth (GT).
Metrics | MSCNN [77] | AODNet [78] | EPDN [79] | FDGAN [61] | Restormer [23] | AirNet [56] | PromptIR [37] | Ours |
---|---|---|---|---|---|---|---|---|
PSNR | 22.06 | 20.29 | 22.57 | 23.15 | 30.87 | 23.18 | 31.31 | 31.83 |
SSIM | 0.908 | 0.877 | 0.863 | 0.921 | 0.969 | 0.900 | 0.973 | 0.981 |
Metrics | UMR [80] | SIRR [81] | MSPFN [82] | LPNet[83] | Restormer [23] | AirNet [56] | PromptIR[37] | Ours |
---|---|---|---|---|---|---|---|---|
PSNR | 32.39 | 32.37 | 33.50 | 33.61 | 36.74 | 34.90 | 37.04 | 39.06 |
SSIM | 0.921 | 0.926 | 0.948 | 0.958 | 0.978 | 0.977 | 0.979 | 0.985 |
To objectively evaluate the restoration performance of different methods, we used PSNR and SSIM to measure the results generated by our method. The results of other comparative methods were obtained from the data provided in the PromptIR experiment. As shown in Table I, our method exhibits superior performance in improving restoration quality compared to other methods. Specifically, our method improved the average PSNR by 0.79 dB compared to the best-performing comparison method, PromptIR, and by 1.65 dB compared to the second-best AirNet. In detail, compared with PromptIR, our method achieved a performance improvement of 2.63 dB in the deraining task and 1.03 dB in the dehazing task. In the denoising task, our method improved performance by 0.14 dB, 0.15 dB, and 0.13 dB for noise levels of 15, 25, and 50, respectively. In terms of SSIM, our method also shows advantages compared to PromptIR, with improvements of 0.013 in the deraining task and 0.007 in the dehazing task. For denoising tasks with noise levels of 15, 25, and 50, our method improved SSIM by 0.002, 0.004, and 0.004, respectively. These results demonstrate the effectiveness and superiority of the method proposed in this paper.
IV-C2 Comparison of Restoration Across Five Degradations
To comprehensively verify the performance of the method proposed in this paper, we followed the design principles of existing methods and applied our method to five different image restoration tasks: image dehazing, image deraining, image denoising, image deblurring, and low-light image restoration. However, since existing methods do not provide testable parameters under this setting, we are unable to directly display a visual comparison of the recovery results of different methods. Similar to the PromptIR method, our comparison is primarily based on objective evaluation results. As shown in Table II, our method demonstrates significant advantages in handling these various image restoration tasks, further proving the rationality, progressiveness, and applicability of our method.

IV-D Comparison on Single-Task Restoration Settings
The design of this method comprehensively considers the specific requirements of single tasks on the network. This ensures that the performance of single-task image restoration is not sacrificed within the All-in-One framework to achieve coordination of multiple tasks. To verify this, we trained our model on three challenging datasets: SOTS-Outdoor, Rain100L, and BSD68, respectively. We then comprehensively compared its performance with several other representative methods on different testing tasks. The restoration results shown in Fig.7 clearly indicate that our method exhibits stronger restoration ability compared to other methods, achieving the best visual effect. Meanwhile, these restoration results did not introduce any significant artifacts or false information, further demonstrating the effectiveness of our method. Additionally, the objective evaluation data in Tables III-V fully confirm that the performance of our method is still significantly better than that of the relevant comparative methods in single-task mode.
To comprehensively evaluate the performance of our method in the All-in-One setting and compare it with the single-task setting, we present detailed performance changes of different All-in-One methods under both settings in Fig.8. Through comparative analysis, it is evident that our method maintains a high degree of stability, showing no significant fluctuations whether in the All-in-One or single-task recovery setting. This excellent performance is primarily due to our method’s ability to effectively coordinate the requirements of different tasks for the network framework. Additionally, it significantly mitigates potential performance trade-offs when handling multiple tasks. This demonstrates that the proposed method has stronger generalization ability and practical application value compared to the comparison methods, maintaining excellent performance in various settings.
Settings | Metrics | DnCNN [9] | IRCNN [84] | FFDNet [48] | BRDNet [59] | AirNet [56] | Ours |
---|---|---|---|---|---|---|---|
BSD68σ=15 | PSNR | 33.89 | 33.87 | 33.87 | 34.10 | 34.14 | 34.36 |
 | SSIM | 0.930 | 0.929 | 0.929 | 0.929 | 0.936 | 0.938 |
BSD68σ=25 | PSNR | 31.23 | 31.18 | 31.21 | 31.43 | 31.48 | 31.73 |
 | SSIM | 0.883 | 0.882 | 0.882 | 0.885 | 0.893 | 0.898 |
BSD68σ=50 | PSNR | 27.92 | 27.88 | 27.96 | 28.16 | 28.23 | 28.50 |
 | SSIM | 0.789 | 0.790 | 0.789 | 0.794 | 0.806 | 0.814 |
IV-E Ablation Study
The proposed method mainly consists of four core components: TSPG, MESE, FD, and MEE. To evaluate the effectiveness of each component, we performed ablation studies on each module of the method. All experiments were conducted under the All-in-One setting, covering three degradation scenarios: image dehazing, image deraining, and image denoising. The quantitative evaluation results (mean values) of these ablation experiments are presented in Table VI.
Effectiveness of TSPG: TSPG is primarily used to generate task-specific prompts, which help in more accurately selecting experts suitable for the current task. To verify the effectiveness of TSPG, we conducted experiments without the MESE and FD-MEE components. Specifically, we tested a method where the output features from TSPG were directly concatenated with the output features from the convolutional layer in MESE and then processed through the Transformer layer. The data in Table VI clearly show that adding TSPG significantly improved model performance. This enhancement is mainly because TSPG provides detailed descriptions of image degradation at a global level, thereby enriching the feature representation and substantially improving image restoration quality.
STP-G-MESE: TSPG | STP-G-MESE: MESE | FD-MEE: FD | FD-MEE: MEE | Average PSNR | Average SSIM
---|---|---|---|---|---
 | | | | 29.97 | 0.865
 | | | | 31.15 | 0.883
 | | | | 32.21 | 0.904
 | | | | 32.64 | 0.910
 | | | | 32.57 | 0.907
 | | | | 32.85 | 0.919
Effectiveness of MESE: MESE is mainly used to select and ensemble experts based on the results output by TSPG. According to the data in Table VI, we can observe a significant improvement in model performance when both MESE and TSPG are utilized. This observation demonstrates the positive role of MESE and TSPG in the expert selection process. Furthermore, it confirms the crucial role of the expert selection and ensemble mechanisms within MESE in enhancing overall performance.
Effectiveness of FD: FD is primarily utilized to separate features, enabling the independent processing of high-frequency and low-frequency components. The data in Table VI shows that the model’s performance improves when these components are processed separately. This result highlights the importance of decomposing the features into their low-frequency and high-frequency components.
Effectiveness of MEE: MEE is used to integrate the outputs of multiple experts at a global level. The results in Table VI show that incorporating MEE leads to further improvement in model performance. This indicates that MEE has a positive impact within the overall network framework.

IV-F Limitation and Future Work
As depicted in Fig. 9, the proposed method demonstrates strong restoration performance and low model complexity. However, its parameter size remains relatively large compared to certain CNN-based methods, such as TAPE, DL, and AirNet, due to the inclusion of the Transformer architecture. Nevertheless, our method’s parameter size is notably lower compared to other Transformer-based methods like IDR, Restormer, and DGUNet. In future work, we plan to explore strategies to enhance the restoration performance while maintaining the model’s lightweight characteristics.
V Conclusion
In this paper, we propose a multi-expert adaptive selection mechanism designed to address the diverse requirements of various tasks within the field of image restoration. We develop a feature representation method that captures image information at both the single-pixel channel level and the global level, including low-frequency and high-frequency components. This method establishes a solid foundation for expert selection and ensemble. Our multi-expert selection and ensemble scheme adapts to the content characteristics of the input image and specific task prompts, ensuring the selection of the most suitable experts from our expert library. This approach not only meets the unique demands of different tasks but also maintains an optimal balance, allowing tasks to share expert resources without interference. Furthermore, by selecting experts based on image content, our mechanism promotes knowledge sharing and learning across tasks, ultimately enhancing overall restoration performance. Experimental results confirm that our proposed method outperforms existing approaches, demonstrating superior restoration performance and generalization ability in multi-task image restoration scenarios.
References
- [1] H. Li, X. He, Z. Yu, and J. Luo, “Noise-robust image fusion with low-rank sparse decomposition guided by external patch prior,” Information Sciences, vol. 532, pp. 14–37, 2020.
- [2] J. Chen, L. Yang, W. Liu, X. Tian, and J. Ma, “Lenfusion: A joint low-light enhancement and fusion network for nighttime infrared and visible image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 73, pp. 1–15, 2024.
- [3] H. Li, J. Liu, Y. Zhang, and Y. Liu, “A deep learning framework for infrared and visible image fusion without strict registration,” International Journal of Computer Vision, vol. 132, p. 1625–1644, 2024.
- [4] J. Pang, D. Zhang, H. Li, W. Liu, and Z. Yu, “Hazy re-id: An interference suppression model for domain adaptation person re-identification under inclement weather condition,” in 2021 IEEE International Conference on Multimedia and Expo (ICME), 2021.
- [5] H. Li, Q. Hu, and Z. Hu, “Catalyst for clustering-based unsupervised object re-identification: Feature calibration,” in The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI), vol. 38, no. 4, 2024, pp. 3091–3099.
- [6] H. Gupta, O. Kotlyar, H. Andreasson, and A. J. Lilienthal, “Robust object detection in challenging weather conditions,” in 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 7508–7517.
- [7] H. Zhang, L. Xiao, X. Cao, and H. Foroosh, “Multiple adverse weather conditions adaptation for object detection via causal intervention,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 3, pp. 1742–1756, 2024.
- [8] H. Li, X. He, D. Tao, Y. Tang, and R. Wang, “Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning,” Pattern Recognition, vol. 79, pp. 130–146, 2018.
- [9] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017.
- [10] S. Ding, Q. Wang, L. Guo, X. Li, L. Ding, and X. Wu, “Wavelet and adaptive coordinate attention guided fine-grained residual network for image denoising,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 7, pp. 6156–6166, 2024.
- [11] S.-J. Cho, S.-W. Ji, J.-P. Hong, S.-W. Jung, and S.-J. Ko, “Rethinking coarse-to-fine approach in single image deblurring,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2021, pp. 4641–4650.
- [12] Y. Cui, Y. Tao, W. Ren, and A. Knoll, “Dual-domain attention for image deblurring,” Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), vol. 37, no. 1, pp. 479–487, 2023.
- [13] Z. Liu, J. Wu, G. Shi, W. Yang, W. Dong, and Q. Zhao, “Motion-oriented hybrid spiking neural networks for event-based motion deblurring,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 5, pp. 3742–3754, 2024.
- [14] K. Jiang, Z. Wang, P. Yi, C. Chen, B. Huang, Y. Luo, J. Ma, and J. Jiang, “Multi-scale progressive fusion network for single image deraining,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020, pp. 8346–8355.
- [15] D. Ren, W. Zuo, Q. Hu, P. Zhu, and D. Meng, “Progressive image deraining networks: A better and simpler baseline,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019, pp. 3937–3946.
- [16] W. Wu, Y. Liu, and Z. Li, “Subband differentiated learning network for rain streak removal,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 9, pp. 4675–4688, 2023.
- [17] L. Cai, Y. Fu, W. Huo, Y. Xiang, T. Zhu, Y. Zhang, H. Zeng, and D. Zeng, “Multiscale attentive image de-raining networks via neural architecture search,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 2, pp. 618–633, 2023.
- [18] Z. Zhu, D. Zhang, Z. Wang, S. Feng, and P. Duan, “Spectral dual-channel encoding for image dehazing,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 11, pp. 6236–6248, 2023.
- [19] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
- [20] Y. Zhang, S. Zhou, and H. Li, “Depth information assisted collaborative mutual promotion network for single image dehazing,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2024, pp. 2846–2855.
- [21] Y. Feng, X. Meng, F. Zhou, W. Lin, and Z. Su, “Real-world non-homogeneous haze removal by sliding self-attention wavelet network,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 10, pp. 5470–5485, 2023.
- [22] Z. Wang, X. Cun, J. Bao, W. Zhou, J. Liu, and H. Li, “Uformer: A general u-shaped transformer for image restoration,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 17 662–17 672.
- [23] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, and M. Yang, “Restormer: Efficient transformer for high-resolution image restoration,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 5718–5729.
- [24] W. Cai, J. Jiang, F. Wang, J. Tang, S. Kim, and J. Huang, “A survey on mixture of experts,” arXiv preprint arXiv:2407.06204, 2024.
- [25] W. Chen, Z.-K. Huang, C.-C. Tsai, H.-H. Yang, J.-J. Ding, and S.-Y. Kuo, “Learning multiple adverse weather removal via two-stage knowledge learning and multi-contrastive regularization: Toward a unified model,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 17 632–17 641.
- [26] Y. Wan, M. Shao, Y. Cheng, Y. Liu, and Z. Bao, “Restoring images captured in arbitrary hybrid adverse weather conditions in one go,” arXiv preprint arXiv:2305.09996, 2023.
- [27] Y. Jiang, Z. Zhang, T. Xue, and J. Gu, “Autodir: Automatic all-in-one image restoration with latent diffusion,” arXiv preprint arXiv:2310.10123, 2023.
- [28] J. Lin, Z. Zhang, Y. Wei, D. Ren, D. Jiang, Q. Tian, and W. Zuo, “Improving image restoration through removing degradations in textual representations,” in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 2866–2878.
- [29] Z. Tan, Y. Wu, Q. Liu, Q. Chu, L. Lu, J. Ye, and N. Yu, “Exploring the application of large-scale pre-trained models on adverse weather removal,” IEEE Transactions on Image Processing, vol. 33, pp. 1683–1698, 2024.
- [30] Z. Luo, F. K. Gustafsson, Z. Zhao, J. Sjölund, and T. B. Schön, “Controlling vision-language models for universal image restoration,” arXiv preprint arXiv:2310.01018, 2023.
- [31] H. Chen, Y. Wang, T. Guo, C. Xu, Y. Deng, Z. Liu, S. Ma, C. Xu, C. Xu, and W. Gao, “Pre-trained image processing transformer,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 12 294–12 305.
- [32] D. Park, B. H. Lee, and S. Y. Chun, “All-in-one image restoration for unknown degradations using adaptive discriminative filters for specific degradations,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 5815–5824.
- [33] Y. Zhu, T. Wang, X. Fu, X. Yang, X. Guo, J. Dai, Y. Qiao, and X. Hu, “Learning weather-general and weather-specific features for image restoration under multiple adverse weather conditions,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 21 747–21 758.
- [34] Y.-W. Chen and S.-C. Pei, “Always clear days: Degradation type and severity aware all-in-one adverse weather removal,” arXiv preprint arXiv:2310.18293, 2023.
- [35] Y. Cui, S. W. Zamir, S. Khan, A. Knoll, M. Shah, and F. S. Khan, “Adair: Adaptive all-in-one image restoration via frequency mining and modulation,” arXiv preprint arXiv:2403.14614, 2024.
- [36] H.-W. Chen, Y.-S. Xu, K. C. Chan, H.-K. Kuo, C.-Y. Lee, and M.-H. Yang, “Adair: Exploiting underlying similarities of image restoration tasks with adapters,” arXiv preprint arXiv:2404.11475, 2024.
- [37] V. Potlapalli, S. W. Zamir, S. H. Khan, and F. Shahbaz Khan, “Promptir: Prompting for all-in-one image restoration,” in Advances in Neural Information Processing Systems (NeurIPS), vol. 36, 2023, pp. 71 275–71 293.
- [38] J. Ma, T. Cheng, G. Wang, Q. Zhang, X. Wang, and L. Zhang, “Prores: Exploring degradation-aware visual prompt for universal image restoration,” arXiv preprint arXiv:2306.13653, 2023.
- [39] Z. Li, Y. Lei, C. Ma, J. Zhang, and H. Shan, “Prompt-in-prompt learning for universal image restoration,” arXiv preprint arXiv:2312.05038, 2023.
- [40] X. Kong, C. Dong, and L. Zhang, “Towards effective multiple-in-one image restoration: A sequential and prompt learning strategy,” arXiv preprint arXiv:2401.03379, 2024.
- [41] M. V. Conde, G. Geigle, and R. Timofte, “High-quality image restoration following human instructions,” arXiv preprint arXiv:2401.16468, 2024.
- [42] D. Berman, T. Treibitz, and S. Avidan, “Non-local image dehazing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 1674–1682.
- [43] H. Li, Y. Wang, Z. Yang, R. Wang, X. Li, and D. Tao, “Discriminative dictionary learning-based multiple component decomposition for detail-preserving noisy image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 4, pp. 1082–1102, 2020.
- [44] H. Li, Z. Yu, and C. Mao, “Fractional differential and variational method for image fusion and super-resolution,” Neurocomputing, vol. 171, pp. 138–148, 2016.
- [45] H. Li, M. Yuan, J. Li, Y. Liu, G. Lu, Y. Xu, Z. Yu, and D. Zhang, “Focus affinity perception and super-resolution embedding for multifocus image fusion,” IEEE Transactions on Neural Networks and Learning Systems, 2024, DOI: 10.1109/TNNLS.2024.3367782.
- [46] H. Li, Y. Cen, Y. Liu, X. Chen, and Z. Yu, “Different input resolutions and arbitrary output resolution: A meta learning-based deep framework for infrared and visible image fusion,” IEEE Transactions on Image Processing, vol. 30, pp. 4070–4083, 2021.
- [47] X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, “Ffa-net: Feature fusion attention network for single image dehazing,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, pp. 11 908–11 915, 2020.
- [48] K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn-based image denoising,” IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4608–4622, 2018.
- [49] H. Li, J. Gao, Y. Zhang, M. Xie, and Z. Yu, “Haze transfer and feature aggregation network for real-world single image dehazing,” Knowledge-Based Systems, vol. 251, p. 109309, 2022.
- [50] D.-W. Jaw, S.-C. Huang, and S.-Y. Kuo, “Desnowgan: An efficient single image snow removal framework using cross-resolution lateral connection and gans,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 4, pp. 1342–1350, 2021.
- [51] W. Chen, H. Fang, C. Hsieh, C. Tsai, I.-H. Chen, J. Ding, and S. Kuo, “All snow removed: Single image desnowing algorithm using hierarchical dual-tree complex wavelet representation and contradict channel loss,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 4196–4205.
- [52] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems (NeurIPS), 2017.
- [53] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, “Swinir: Image restoration using swin transformer,” in 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021, pp. 1833–1844.
- [54] J. Zhang, J. Huang, M. Yao, Z. Yang, H. Yu, M. Zhou, and F. Zhao, “Ingredient-oriented multi-degradation learning for image restoration,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 5825–5835.
- [55] H. Yang, L. Pan, Y. Yang, and W. Liang, “Language-driven all-in-one adverse weather removal,” in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 24 902–24 912.
- [56] B. Li, X. Liu, P. Hu, Z. Wu, J. Lv, and X. Peng, “All-in-one image restoration for unknown corruption,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), vol. 36, 2022, pp. 17 431–17 441.
- [57] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean, “Outrageously large neural networks: The sparsely-gated mixture-of-experts layer,” arXiv preprint arXiv:1701.06538, 2017.
- [58] Y. Cui, Y. Tao, Z. Bing, W. Ren, X. Gao, X. Cao, K. Huang, and A. Knoll, “Selective frequency network for image restoration,” in The Eleventh International Conference on Learning Representations, 2023.
- [59] C. Tian, Y. Xu, and W. Zuo, “Image denoising using deep cnn with batch renormalization,” Neural Networks, vol. 121, pp. 461–473, 2020.
- [60] H. Gao, X. Tao, X. Shen, and J. Jia, “Dynamic scene deblurring with parameter selective sharing and nested skip connections,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3843–3851.
- [61] Y. Dong, Y. Liu, H. Zhang, S. Chen, and Y. Qiao, “Fd-gan: Generative adversarial networks with fusion-discriminator for single image dehazing,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), vol. 34, no. 07, 2020, pp. 10 729–10 736.
- [62] Q. Fan, D. Chen, L. Yuan, G. Hua, N. Yu, and B. Chen, “A general decoupled learning framework for parameterized image operators,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 33–47, 2021.
- [63] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, “Multi-stage progressive image restoration,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 14 816–14 826.
- [64] B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, “Benchmarking single-image dehazing and beyond,” IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492–505, 2019.
- [65] W. Yang, R. T. Tan, J. Feng, Z. Guo, S. Yan, and J. Liu, “Joint rain detection and removal from a single image with contextualized deep networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 6, pp. 1377–1393, 2020.
- [66] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898–916, 2011.
- [67] K. Ma, Z. Duanmu, Q. Wu, Z. Wang, H. Yong, H. Li, and L. Zhang, “Waterloo exploration database: New challenges for image quality assessment models,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 1004–1016, 2017.
- [68] D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings Eighth IEEE International Conference on Computer Vision (ICCV), vol. 2, 2001, pp. 416–423.
- [69] S. Nah, T. H. Kim, and K. M. Lee, “Deep multi-scale convolutional neural network for dynamic scene deblurring,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 257–265.
- [70] C. Wei, W. Wang, W. Yang, and J. Liu, “Deep retinex decomposition for low-light enhancement,” arXiv preprint arXiv:1808.04560, 2018.
- [71] L. Chen, X. Chu, X. Zhang, and J. Sun, “Simple baselines for image restoration,” in European Conference on Computer Vision (ECCV), 2022, pp. 17–33.
- [72] L. Chen, X. Lu, J. Zhang, X. Chu, and C. Chen, “Hinet: Half instance normalization network for image restoration,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021, pp. 182–192.
- [73] C. Mou, Q. Wang, and J. Zhang, “Deep generalized unfolding networks for image restoration,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 17 378–17 389.
- [74] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, “Learning enriched features for fast image restoration and enhancement,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 2, pp. 1934–1948, 2023.
- [75] J. M. Jose Valanarasu, R. Yasarla, and V. M. Patel, “Transweather: Transformer-based restoration of images degraded by adverse weather conditions,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 2343–2353.
- [76] L. Liu, L. Xie, X. Zhang, S. Yuan, X. Chen, W. Zhou, H. Li, and Q. Tian, “Tape: Task-agnostic prior embedding for image restoration,” in European Conference on Computer Vision (ECCV), 2022, pp. 447–464.
- [77] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision (ECCV), 2016, pp. 154–169.
- [78] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Aod-net: All-in-one dehazing network,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4780–4788.
- [79] Y. Qu, Y. Chen, J. Huang, and Y. Xie, “Enhanced pix2pix dehazing network,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 8152–8160.
- [80] R. Yasarla and V. M. Patel, “Uncertainty guided multi-scale residual learning-using a cycle spinning cnn for single image de-raining,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 8397–8406.
- [81] W. Wei, D. Meng, Q. Zhao, Z. Xu, and Y. Wu, “Semi-supervised transfer learning for image rain removal,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3872–3881.
- [82] K. Jiang, Z. Wang, P. Yi, C. Chen, B. Huang, Y. Luo, J. Ma, and J. Jiang, “Multi-scale progressive fusion network for single image deraining,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 8343–8352.
- [83] H. Gao, X. Tao, X. Shen, and J. Jia, “Dynamic scene deblurring with parameter selective sharing and nested skip connections,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3843–3851.
- [84] K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep cnn denoiser prior for image restoration,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2808–2817.
Xiaoyan Yu is currently a Ph.D. candidate in the School of Computer Science and Technology, Beijing Institute of Technology. Her research interests include social event mining, natural language processing, and image processing.
Shen Zhou received his B.E. degree in Communication Engineering from Suqian University in 2021. He is currently pursuing his M.E. degree in Communication Engineering at the School of Information Engineering and Automation, Kunming University of Science and Technology. His research interests include image processing and computer vision.
Huafeng Li received the M.S. degree in applied mathematics from Chongqing University in 2009 and the Ph.D. degree in control theory and control engineering from Chongqing University in 2012. He is currently a professor at the School of Information Engineering and Automation, Kunming University of Science and Technology, China. His research interests include image processing, computer vision, and information fusion. He has authored or coauthored more than 50 scientific articles in CVPR, IJCV, AAAI, ACMMM, ICME, IEEE TIP, IEEE TIFS, IEEE TNNLS, IEEE TMM, IEEE TCSVT, IEEE TGRS, IEEE TCI, IEEE TII, IEEE TETCI, IEEE TITS, IEEE TIM, IEEE/CAA JAS, PR, INFFUS, NeuNet, INS, ESWA, KBS, etc.
Liehuang Zhu (Senior Member, IEEE) received the Ph.D. degree in computer science from the Beijing Institute of Technology, Beijing, China, in 2004. He is currently a Professor with the School of Cyberspace Science and Technology, Beijing Institute of Technology. He was selected for the Program for New Century Excellent Talents in University by the Ministry of Education, China. He has published over 100 SCI-indexed research papers in these areas, including more than ten IEEE/ACM Transactions papers. His research interests include blockchain, Internet of Things, data security, and artificial intelligence security. Prof. Zhu has received several IEEE Best Paper Awards, including at IWQoS'17 and TrustCom'18.