JPEG INFORMATION REGULARIZED DEEP IMAGE PRIOR FOR DENOISING
Abstract
Image denoising is a representative image restoration task in computer vision. Recently, image denoising from only noisy images has attracted much attention. Deep image prior (DIP) demonstrated successful image denoising from only a single noisy image through the inductive bias of convolutional neural network architectures, without any pre-training. The major challenge of DIP-based image denoising is that DIP completely recovers the original noisy image unless early stopping is applied. To enable early stopping without a ground-truth clean image, we propose to monitor the JPEG file size of the recovered image during optimization as a proxy metric of the noise level in the recovered image. Our experiments show that the compressed image file size serves as an effective metric for early stopping.

††Copyright 2023 IEEE. Published in 2023 IEEE International Conference on Image Processing (ICIP), scheduled for 8-11 October 2023 in Kuala Lumpur, Malaysia. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE Service Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966.
Index Terms— Image Denoising, Deep Image Prior, JPEG Compression, Early Stopping
1 Introduction
Image denoising is one of the fundamental tasks in computer vision. Recent attractive image denoising methods tackle the setting in which only noisy images are given. In this setting, we cannot apply supervised learning, which typically requires many pairs of clean and noisy images for training.
Deep image prior (DIP) [1, 2] is an image denoising method applicable to this setting. The key idea of DIP is to exploit the implicit regularization brought by convolutional neural network architectures. DIP can estimate a clean image from a single noisy image without training the network beforehand: the network parameters are simply randomly initialized and then optimized to reconstruct the given noisy image. To prevent the network from reconstructing the original noisy image, several follow-up works have proposed systematic early stopping (ES) [3, 4]. However, existing ES methods struggle to compute a stable metric because there is no absolute criterion for denoising quality. Moreover, how much denoising is feasible during optimization depends on the noise level and the image content; at a high noise level, for example, the best attainable result may still leave some noise in the DIP-denoised image.
In this study, we propose a simple but effective heuristic for determining the ES point. We propose to use the compressed image file size (CIFS), specifically the JPEG-compressed file size of the reconstructed image, to determine when to terminate. Since image compression mainly targets clean images, the JPEG file size tends to increase as the noise level increases. We preliminarily examined the relationship between additive Gaussian noise levels and JPEG file sizes, as shown in Fig. 1: the JPEG file size increases monotonically with the noise level. Our ES utilizes this relationship as a proxy indicator of the noise remaining in the denoised image. Under our ES criterion, the optimization should be stopped once the CIFS starts to increase, even though the image content would continue to be reconstructed.
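This trend is easy to reproduce. The following sketch is a toy check, not the paper's experimental setup: it assumes Pillow for JPEG encoding and uses a synthetic gradient image of our own choosing. It encodes the same image at several Gaussian noise levels and reports the JPEG byte counts:

```python
import io

import numpy as np
from PIL import Image

def jpeg_size(img_u8, quality=95):
    """Encode a uint8 image array as JPEG in memory and return its byte count."""
    buf = io.BytesIO()
    Image.fromarray(img_u8).save(buf, format="JPEG", quality=quality)
    return len(buf.getvalue())

rng = np.random.default_rng(0)
h, w = 128, 128
# A smooth synthetic "clean" image: a horizontal gradient, replicated to RGB.
gradient = np.linspace(0, 255, w).astype(np.uint8)
clean = np.stack([np.tile(gradient, (h, 1))] * 3, axis=-1)

sizes = []
for sigma in (0, 15, 25, 50):
    noise = rng.normal(0.0, sigma, clean.shape)
    noisy = np.clip(clean.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    sizes.append(jpeg_size(noisy))
print(sizes)  # byte counts grow with the noise level
```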
2 Related Work
Since image denoising is a long-standing problem in computer vision, there are many related works. For relevance to this paper, we review only deep image prior (DIP) and its subsequent works that employ ES to prevent overfitting.
Deep image prior (DIP) [1, 2] demonstrates that a randomly initialized neural network can serve as an image prior for standard inverse problems, including denoising. Although DIP can optimize the network parameters given only a noisy image, the denoising performance eventually degrades because of overfitting to the noisy image. A typical approach to avoid overfitting is ES, in which some criterion serves as a proxy for the difference between the network output and the unknown clean image.
Several subsequent ES works for DIP have been proposed. DIP-SURE [5] introduces Stein's unbiased risk estimator (SURE) to compute an approximately unbiased estimate of the reconstruction error under Gaussian noise. DIP-denoising [6] extends DIP-SURE to Poisson noise. Self-validation [3] and ES-WMV [4] take a different direction: they assume neither the noise type nor the noise level of the noisy image. Self-validation trains an autoencoder on a window of consecutive reconstructed images, and the autoencoder evaluates the quality of the next reconstructed image as the ES criterion. ES-WMV computes a windowed moving variance (WMV) as the ES criterion.
Our proposed ES method likewise assumes neither the noise type nor the noise level. Our ES criterion is based on the JPEG compressed image file size (CIFS).
3 Proposed Method
3.1 Preliminaries: Deep Image Prior (DIP)
Before describing our proposed ES method, we review deep image prior (DIP). DIP was proposed for image restoration problems including denoising. Image restoration algorithms aim to recover an unknown clean image x given a corrupted image x₀. DIP argues that a neural network architecture itself behaves as a general image prior, so the clean image can be recovered from the corrupted image without additional regularization:

θ* = arg min_θ E(f_θ(z), x₀),   x* = f_{θ*}(z),   (1)

where z is a randomly initialized and fixed noise input, f_θ is the neural network parameterized by θ, and E is a loss function such as the mean squared error.
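As a concrete illustration, the optimization in Eq. (1) is just a reconstruction loop over the network parameters. The sketch below is a minimal stand-in, not the paper's setup: it uses a tiny convolutional network rather than DIP's hourglass architecture, and the image size, learning rate, and iteration count are arbitrary illustrative choices:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in for the DIP network (the real method uses a much larger net).
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)

x0 = torch.rand(1, 3, 32, 32)  # observed (noisy) image
z = torch.rand(1, 3, 32, 32)   # random input z, fixed during optimization
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
mse = nn.MSELoss()

losses = []
for t in range(100):
    opt.zero_grad()
    loss = mse(net(z), x0)     # E(f_θ(z), x₀) in Eq. (1)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Left to run long enough, this loop reconstructs x₀ exactly, which is precisely why ES is needed.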
3.2 Compressed Image File Size based ES
As described in Sec. 1, the compressed image file size (CIFS) can be used as a proxy metric for the degree of noise. As the network parameters fit the corrupted image, the loss function decreases almost monotonically. At the same time, the network gradually outputs a noisier image, which has a larger compressed file size. Based on this insight, our ES seeks a trade-off between the loss function and a CIFS-based regularizer, formulated as:

C(t) = E(f_{θ_t}(z), x₀) + λ R(f_{θ_t}(z)),   (2)

where J is a function that takes an image as input and outputs its CIFS, t is the epoch number during optimization, and λ is a balancing weight between the two terms. In our experiments, we adopt the mean squared error as the loss function E, as in DIP, the JPEG file size as J, and the squared JPEG file size averaged over the image size as R. Specifically, R is

R(x) = (s / (HW))²,   s = J(x),   (3)

where s is the CIFS and H and W are the image height and width, respectively. Since the CIFS depends on the image size, s is averaged over the image size HW. Moreover, we square the averaged file size so that the regularization acts more strongly as the file size increases.
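Under our reading of Eqs. (2)-(3), the criterion can be evaluated at each epoch directly from the network output. A minimal sketch, assuming Pillow for the JPEG encoding and float images in [0, 1]; the function names are ours:

```python
import io

import numpy as np
from PIL import Image

def jpeg_bytes(img_u8, quality=95):
    """J(x): JPEG-encode an image array in memory and return the byte count."""
    buf = io.BytesIO()
    Image.fromarray(img_u8).save(buf, format="JPEG", quality=quality)
    return len(buf.getvalue())

def cifs_regularizer(img_u8, quality=95):
    """R(x) = (J(x) / (H * W))**2: squared JPEG bytes per pixel, Eq. (3)."""
    h, w = img_u8.shape[:2]
    return (jpeg_bytes(img_u8, quality) / (h * w)) ** 2

def es_criterion(output01, noisy01, lam, quality=95):
    """C(t) = E(f_θ(z), x₀) + λ R(f_θ(z)), Eq. (2), with E = mean squared error."""
    mse = float(np.mean((output01 - noisy01) ** 2))
    img_u8 = np.clip(output01 * 255.0, 0, 255).astype(np.uint8)
    return mse + lam * cifs_regularizer(img_u8, quality)
```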
The optimal λ is affected by the degree of noise. We show the λ estimated for additive Gaussian noise with different standard deviations σ in Fig. 2. The estimation is based on a preliminary experiment on CBSD500 [7]. We assume that the 400 clean images of CBSD500 are observable. Under this assumption, we search for λ by the following steps: (1) adding Gaussian noise to a clean image to synthesize a noisy image, (2) denoising the noisy image with DIP while monitoring the values in Eq. (2), and (3) finding the λ that maximizes the average PSNR between the clean image and the denoised image at the epoch minimizing the ES criterion, over the 400 observed images. Specifically, the final step is formulated as:

λ* = arg max_λ E_{x∼D, n∼N} [ PSNR(x, f_{θ_{t*(λ)}}(z)) ],

where t*(λ) is the epoch minimizing the ES criterion with weight λ, N is the pixel-wise noise distribution, and D is the uniform distribution over the 400 observed images of CBSD500.
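In code, the search over λ amounts to a grid search. In this sketch, `denoise_with_es` and `psnr` are hypothetical callables standing in for a full DIP run with ES and for the PSNR computation; neither name comes from the paper:

```python
def search_lambda(lambdas, clean_images, denoise_with_es, psnr):
    """Return the lambda maximizing average PSNR at the ES-detected epoch.

    denoise_with_es(x, lam): hypothetical stand-in that synthesizes a noisy
    version of clean image x, runs DIP with weight lam, and returns the
    denoised image at the ES-detected epoch.
    psnr(x, y): hypothetical stand-in for the PSNR between x and y.
    """
    best_lam, best_score = None, float("-inf")
    for lam in lambdas:
        scores = [psnr(x, denoise_with_es(x, lam)) for x in clean_images]
        avg = sum(scores) / len(scores)
        if avg > best_score:
            best_lam, best_score = lam, avg
    return best_lam
```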

4 Experiments

Table 1: PSNR (dB, mean ± std) of each ES method on CBSD68 and Kodak24.

| Dataset | σ | BRISQUE [10] | NIQE [11] | ES-WMV [4] | CIFS (Ours) | No ES | Peak |
|---|---|---|---|---|---|---|---|
| CBSD68 [8] | 15 | 27.750±2.108 | 27.484±4.172 | 28.640±3.877 | 29.223±2.618 | 26.464±0.535 | 30.463±2.066 |
| | 25 | 24.754±2.740 | 26.642±2.759 | 26.375±3.577 | 27.338±2.634 | 21.740±0.410 | 27.767±2.206 |
| | 50 | 21.078±2.183 | 23.782±2.369 | 23.289±2.826 | 23.803±2.321 | 15.843±0.416 | 24.388±2.241 |
| Kodak24 [9] | 15 | 29.685±2.211 | 29.247±3.768 | 29.779±3.905 | 31.263±1.771 | 28.567±0.533 | 31.585±1.730 |
| | 25 | 26.341±4.906 | 27.498±3.490 | 27.686±3.272 | 28.804±1.947 | 24.188±1.651 | 29.040±1.853 |
| | 50 | 22.759±3.426 | 25.224±1.807 | 24.640±2.322 | 25.197±1.759 | 17.385±0.328 | 25.668±1.844 |
4.1 Datasets
4.2 Evaluation process
Non-reference image quality assessment (NR-IQA) aims to score image quality without a reference image. Since NR-IQA methods are expected to provide good image quality criteria, they can be employed for ES. Following the recent ES method for DIP, ES-WMV [4], we compared against BRISQUE [10] and NIQE [11] as NR-IQA methods and against ES-WMV [4] itself as an ES method, to validate whether our proposed method provides a good ES criterion. We used the models and code provided by OpenCV (https://github.com/opencv/opencv_contrib) [12] for BRISQUE and by LIVE (https://github.com/utlive/live_python_qa) for NIQE.
Let T denote the number of training epochs. First, we optimize the neural network parameters with respect to Eq. (1) for T epochs. Second, we obtain the ES-detected epoch t* at which each ES criterion is satisfied. Following the ES detection of ES-WMV, our method and the other ES criteria find a set of candidate epochs T_cand and adopt the minimum epoch in T_cand as the ES-detected epoch t*. A candidate epoch is one at which the ES criterion outputs a smaller value than at each of the next W consecutive epochs. Specifically,

T_cand = { t | C(t) < C(t′) for all t′ ∈ {t+1, …, t+W} },   (4)

where C is the specific criterion function of each ES method. If an ES method finds no candidate epoch (i.e., T_cand = ∅), T is adopted in place of the ES-detected epoch. Finally, the PSNR between the clean image and the denoised image at t* is computed to evaluate each ES criterion.
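The candidate rule in Eq. (4) and the empty-set fallback can be sketched as follows (our reading of the rule; `window` is the number of consecutive future epochs that must exceed the current value):

```python
def es_epoch(criterion, window):
    """Eq. (4): epoch t is a candidate if its criterion value is smaller than
    at each of the next `window` epochs; return the earliest candidate, or
    the final epoch if no candidate exists."""
    candidates = [
        t for t in range(len(criterion) - window)
        if all(criterion[t] < criterion[t + k] for k in range(1, window + 1))
    ]
    return min(candidates) if candidates else len(criterion) - 1
```

For example, `es_epoch([5, 4, 3, 6, 7, 8], window=2)` returns epoch 2, the first point whose value undercuts both of the following two epochs.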
4.3 Implementation details
CIFS uses JPEG as its image compression method. JPEG accepts a quality value (higher means better image quality), which affects the saved file size. We set the quality to 95, the default value in OpenCV.
The following implementation details are shared by CIFS and all comparative methods. Pixel-wise Gaussian noise is independently sampled from N(0, σ²), with σ specified for the pixel value range [0, 255]. The network architecture is the same as in DIP, and Adam [13] is used as the optimizer. The input noise z to the network is randomly initialized and fixed during optimization. We adopt the perturbation technique from DIP [1, 2], in which z is perturbed at each iteration. The number of training epochs is set to 20k, and the number of consecutive epochs used for ES detection is set to 1k, following the ES-WMV experimental setting.
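For concreteness, the noise synthesis can be sketched as below; representing images in [0, 1] while specifying σ on the [0, 255] scale is our assumption about the convention:

```python
import numpy as np

def add_gaussian_noise(img01, sigma_255, rng):
    """Add pixel-wise i.i.d. Gaussian noise; sigma is specified on the
    [0, 255] scale, so it is rescaled for images in [0, 1]."""
    noise = rng.normal(0.0, sigma_255 / 255.0, size=img01.shape)
    return np.clip(img01 + noise, 0.0, 1.0)

rng = np.random.default_rng(0)
clean = np.full((256, 256), 0.5)
noisy = add_gaussian_noise(clean, 25, rng)
```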
4.4 Analysis
Table 1 shows the PSNR comparison of our method and the other ES methods for σ ∈ {15, 25, 50} on CBSD68 and Kodak24. Fig. 3 depicts the PSNR gap from the peak on CBSD68 over a broader range of noise levels. DIP without ES ("No ES") is significantly lower than the maximum achievable PSNR ("Peak") regardless of the Gaussian noise level. CIFS provides a good ES criterion and outperforms the other comparative methods in almost all cases on both datasets. Moreover, the standard deviations of its PSNR across noise levels are relatively small compared to the other ES methods. The ES criteria and a qualitative comparison are depicted in Fig. 4. Note that the "Peak" PSNR is the same for all ES methods since all ES metrics are computed within the same optimization process. BRISQUE and NIQE, the NR-IQA methods, suffer from large variance in their metrics. ES-WMV is stable; however, it must keep a long sequence of past values since its metric is a windowed moving variance. CIFS computes its metric independently at each iteration and still provides a stable metric.
5 Conclusion
We proposed a novel ES method for DIP designed for image denoising. Our ES employs the JPEG compressed image file size as a proxy metric for the degree of noise in the denoised image. Our method provides a good ES criterion and outperforms most of the other comparative methods. Moreover, our ES method computes its metric independently at each iteration and still provides a stable metric.
Acknowledgement
We thank Sol Cummings for helpful feedback. This research work was financially supported by the Ministry of Internal Affairs and Communications of Japan under the scheme "Research and development of advanced technologies for a user-adaptive remote sensing data platform" (JPMI00316).
References
- [1] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky, “Deep image prior,” in CVPR, 2018.
- [2] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky, “Deep Image Prior,” IJCV, 2020.
- [3] Taihui Li, Zhong Zhuang, Hengyue Liang, Le Peng, Hengkang Wang, and Ju Sun, “Self-validation: Early stopping for single-instance deep generative priors,” in 32nd British Machine Vision Conference 2021, BMVC 2021, Online, November 22-25, 2021. 2021, p. 108, BMVA Press.
- [4] Hengkang Wang, Taihui Li, Zhong Zhuang, Tiancong Chen, Hengyue Liang, and Ju Sun, “Early stopping for deep image prior,” 2023.
- [5] Christopher A. Metzler, Ali Mousavi, Reinhard Heckel, and Richard G. Baraniuk, “Unsupervised learning with stein’s unbiased risk estimator with applications to denoising and compressed sensing,” in International Biomedical and Astronomical Signal Processing Frontiers Workshop (BASP), 2019.
- [6] Yeonsik Jo, Se Young Chun, and Jonghyun Choi, “Rethinking deep image prior for denoising,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2021, pp. 5087–5096.
- [7] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik, “Contour detection and hierarchical image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 5, pp. 898–916, May 2011.
- [8] S. Roth and M.J. Black, “Fields of experts: a framework for learning image priors,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005, vol. 2, pp. 860–867 vol. 2.
- [9] Rich Franzen, “Kodak lossless true color image suite,” 1999.
- [10] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
- [11] Anish Mittal, Rajiv Soundararajan, and Alan C. Bovik, “Making a “completely blind” image quality analyzer,” IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2013.
- [12] G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools, 2000.
- [13] Diederik P. Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Yoshua Bengio and Yann LeCun, Eds., 2015.