
1 Technical University of Munich, Munich, Germany
2 Georg-August-University Göttingen, Göttingen, Germany
3 GEOMAR Helmholtz Centre for Ocean Research Kiel, Kiel, Germany
4 Helmholtz AI, Helmholtz Munich - German Research Center for Environmental Health, Neuherberg, Germany
Email: [email protected]

LUCYD: A Feature-Driven Richardson-Lucy Deconvolution Network

Tomáš Chobola^{1,4} (ORCID 0009-0000-3272-9996), Gesine Müller^{2}, Veit Dausmann^{3} (ORCID 0000-0003-3281-9208), Anton Theileis^{3}, Jan Taucher^{3} (ORCID 0000-0001-9944-0775), Jan Huisken^{2} (ORCID 0000-0001-7250-3756), Tingying Peng^{4} (ORCID 0000-0002-7881-1749)
Abstract

The acquisition of microscopic images in the life sciences often results in image degradation and corruption, characterised by noise and blur, which poses significant challenges for accurately analysing and interpreting the data. This paper proposes LUCYD, a novel method for the restoration of volumetric microscopy images that combines the Richardson-Lucy deconvolution formula with the fusion of deep features obtained by a fully convolutional network. By integrating the image formation process into a feature-driven restoration model, the proposed approach aims to enhance the quality of the restored images whilst reducing computational costs and maintaining a high degree of interpretability. Our results demonstrate that LUCYD outperforms state-of-the-art methods on both synthetic and real microscopy images, achieving superior image quality and generalisability. We show that the model can handle various microscopy modalities and imaging conditions by evaluating it on two microscopy datasets, covering volumetric widefield and light-sheet microscopy. Our experiments indicate that LUCYD can significantly improve the resolution, contrast, and overall quality of microscopy images. It can therefore serve as a valuable tool for microscopy image restoration and facilitate further research in a range of microscopy applications. The source code is available at https://github.com/ctom2/lucyd-deconvolution/.

Keywords: Deconvolution · Deblurring · Denoising · Microscopy

1 Introduction

Microscopy is one of the most widely used imaging techniques in the life sciences, allowing researchers to analyse cells, tissues and subcellular structures with a high level of detail. However, microscopy images often suffer from degradation such as blur, noise and other artefacts, which can lead to inaccurate quantification and hinder downstream analysis. Deconvolution techniques are therefore needed to restore the images and improve their quality, thus increasing the accuracy of downstream tasks [16, 12, 7]. Image deconvolution is a well-studied task in computer vision and the imaging sciences that aims to recover a sharp and clear object from a degraded input. The mathematical representation of image corruption can be expressed as:

y = x * K + n,   (1)

where $*$ represents convolution, $y$ denotes the resulting image of an object $x$ that has been blurred with a point spread function (PSF) $K$ and degraded by noise $n$.
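
For illustration, the degradation in Equation 1 can be simulated directly. The sketch below assumes, purely for illustration, an isotropic Gaussian kernel as the PSF and a fixed additive-noise level; it produces a corrupted volume $y$ from a clean volume $x$:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, sigma_blur=1.2, sigma_noise=15.0, seed=0):
    """Simulate y = x * K + n with an isotropic Gaussian stand-in for the PSF K
    and additive Gaussian noise n (illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    y = gaussian_filter(x, sigma=sigma_blur)             # blur: convolution with K
    y = y + rng.normal(0.0, sigma_noise, size=x.shape)   # corruption: additive noise n
    return y

# toy example: a 3D volume containing two bright point sources
x = np.zeros((32, 64, 64), dtype=np.float32)
x[16, 32, 32] = x[10, 20, 40] = 255.0
y = degrade(x)
```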

Two classic image deconvolution methods widely used in microscopy and medical imaging are the Wiener filter [18] and the Richardson-Lucy (RL) algorithm [11, 9]. The Wiener filter is a linear filter applied to the frequency-domain representation of the blurred image; it assumes Gaussian noise and minimises the mean squared error between the restored image and the original. The RL method, on the other hand, is an iterative algorithm that operates in the spatial domain and usually yields better reconstructions than the Wiener filter. It assumes a Poisson noise distribution and estimates the corresponding sharp image $x$ over a fixed number of iterations or until a convergence criterion is met [5]. While simple and effective, both methods are limited by their susceptibility to noise amplification [13, 3] and by the assumption that an accurate PSF is known. In practice, however, the PSF is challenging to obtain and is often unknown or varies across the image, which leads to inaccurate reconstructions of the sharp image. Moreover, as an iterative method, RL is computationally costly for three-dimensional (3D) data [4].
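
For reference, a minimal implementation of the classic RL iteration is sketched below; it assumes a known, shift-invariant PSF and Poisson statistics, with the flipped kernel playing the role of $K^{\top}$. This is a textbook baseline, not part of LUCYD:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, psf, n_iter=30, eps=1e-12):
    """Classic RL iteration: z_k = z_{k-1} * ((y / (z_{k-1} conv K)) conv K^T)."""
    psf_T = psf[::-1, ::-1, ::-1]           # flipped PSF acts as the adjoint K^T
    z = np.full_like(y, y.mean())           # flat, non-negative initial estimate
    for _ in range(n_iter):
        denom = fftconvolve(z, psf, mode="same") + eps
        z = z * fftconvolve(y / denom, psf_T, mode="same")
    return z
```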

In the computer vision field, numerous deep learning models have been trained on large datasets with the objective of learning a direct mapping between input and output domains [6, 1, 10, 14, 15]. Some of these models have also been adapted for use in microscopy, such as the U-Net-based content-aware image restoration networks (CARE) [17]. These methods have exhibited exceptional performance in tasks such as super-resolution and denoising. However, their interpretability is limited, and given their data-driven nature, the quantity and quality of training data can be a restricting factor, particularly in biomedical applications where data pairs are often scarce or unavailable.

Inspired by the RL algorithm, the Richardson-Lucy Network (RLN) [8] was recently designed to overcome this limitation of data-driven models by embedding the RL formula for iterative image restoration into a neural network and substituting convolutions with the measured PSF kernel by learnable convolutional layers. Although more compact than a U-Net, RLN's low capacity makes it insufficiently robust to different blur intensities and noise levels, so the network must be re-trained whenever the input image domain shifts, which reduces the efficacy of the method.

To address the limitations of existing methods, we propose a novel lightweight model called LUCYD, which integrates the RL deconvolution formula and a U-shaped network. The main contributions of this paper can be summarised as:

  1. LUCYD is a lightweight deconvolution method that embeds the RL deconvolution formula into a deep convolutional network and leverages features extracted by a U-shaped module, while maintaining low computational costs for processing 3D microscopy images and a high level of interpretability.

  2. The proposed method outperforms existing deconvolution methods on both real and synthetic datasets, based on qualitative assessment and quantitative evaluation metrics, respectively.

  3. We show that LUCYD has strong resistance to noise and generalises well to new and unseen data. This makes it a valuable tool for practical applications in microscopy imaging, where image quality is critical for downstream tasks yet training data are often scarce or unavailable.

2 Method

The overall architecture of the proposed model is illustrated in Figure 1. It comprises three main components: a correction module, an update module, and a bottleneck that is shared between the two modules. The data flow in the model is based on the following iterative RL formula:

z_{(k)} = \underbrace{z_{(k-1)}}_{x\text{ estimate}} \cdot \underbrace{\left(\frac{y}{z_{(k-1)} * K} * K^{\top}\right)}_{\text{update term}},   (2)

which aims to recover $x$ in $k$ steps. We bypass the need for $k-1$ preceding iterations with the correction module, which generates a mask $M$ that forms an intermediate sharp-image estimate in a single forward pass, allowing 3D data to be processed rapidly:

\tilde{z} = y + M.   (3)

Next, inspired by Li et al. [8], the update module decomposes the RL update term from Equation 2 into three steps:

\text{(a)}\; \text{FP} = y * f, \quad \text{(b)}\; \text{DV} = y / \text{FP}, \quad \text{(c)}\; u = \text{DV} * b.   (4)

Specifically, we replace the convolutions with a known PSF in steps (a) and (c) with a forward projector $f$ and a backward projector $b$, which consist of sets of learnable convolutional layers. The produced update term $u$ allows us to recondition the estimate $\tilde{z}$ from the correction module into a sharp image through multiplication, i.e. the last step of image formation in the RL formula: $x^{\prime} = \tilde{z} \cdot u$. The whole network can then be expressed as follows,

x^{\prime} = \tilde{z} \cdot \left(\frac{y}{y * f} * b\right).   (5)

By adhering to the image formation steps prescribed by the RL formula, we maintain a high degree of interpretability, which is critical in real-world scenarios where the accuracy and reliability of the generated results are of utmost importance.
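
To make the data flow of Equations 3-5 concrete, the following PyTorch sketch strings the three steps together. The single-convolution stand-ins for the correction module, forward projector $f$ and backward projector $b$ are illustrative assumptions only; the actual blocks described in Sections 2.1 and 2.2 are considerably deeper.

```python
import torch
import torch.nn as nn

class LUCYDSketch(nn.Module):
    """Minimal sketch of the LUCYD data flow: x' = (y + M) * ((y / (y*f)) * b)."""
    def __init__(self, ch=4):
        super().__init__()
        # illustrative stand-ins for the real correction module and projectors
        self.correction = nn.Conv3d(1, 1, kernel_size=3, padding=1)      # produces the mask M
        self.forward_proj = nn.Conv3d(1, ch, kernel_size=3, padding=1)   # learnable f
        self.backward_proj = nn.Conv3d(ch, 1, kernel_size=3, padding=1)  # learnable b

    def forward(self, y, eps=1e-6):
        m = self.correction(y)                         # Eq. 3: mask M
        z_tilde = y + m                                # intermediate estimate
        fp = self.forward_proj(y)                      # Eq. 4a: FP = y * f
        dv = y / (fp.mean(dim=1, keepdim=True) + eps)  # Eq. 4b: division by channel-wise mean
        u = self.backward_proj(dv.expand_as(fp))       # Eq. 4c: update term u = DV * b
        return z_tilde * u                             # Eq. 5: final multiplication

y = torch.rand(1, 1, 32, 64, 64)  # (batch, channel, depth, height, width)
x_hat = LUCYDSketch()(y)
```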

Figure 1: The architecture of LUCYD consists of a correction module, an update module and a bottleneck that is shared between the two modules.
Figure 2: The architecture of submodules: (a) Feature Fusion Block (FFBlock), (b) Richardson-Lucy Division Block (RLDiv).

2.1 Correction Module & Bottleneck

The proposed correction module and bottleneck architectures consist of encoder blocks (EBs), decoder blocks (DBs), and multi-scale feature fusion blocks to facilitate efficient information exchange across different scales within the model.

2.1.1 Feature Encoding

The features of the volumetric input image $y \in \mathbb{R}^{C\times D\times H\times W}$ are obtained through the first encoder block $\text{EB}_1$ in the correction module and then encoded by a convolutional layer with stride 2. Subsequently, the downsampled features are concatenated with the encoded features of the forward projection $f$ from the update module and fed to the bottleneck encoder $\text{EB}_2$, which integrates the information from both modules.

2.1.2 Feature Fusion Block

Similarly to Cho et al. [2], we enhance the connections between encoders and decoders and allow information flow from different scales within the network through Feature Fusion Blocks (FFBlocks). The features from $\text{EB}_1$ and $\text{EB}_2$ are refined as follows,

\text{FFBlock}_{1}^{\text{out}} = \text{FFBlock}_{1}\left(\text{EB}_{1}^{\text{out}},\,(\text{EB}_{2}^{\text{out}})^{\uparrow}\right),   (6)
\text{FFBlock}_{2}^{\text{out}} = \text{FFBlock}_{2}\left((\text{EB}_{1}^{\text{out}})^{\downarrow},\,\text{EB}_{2}^{\text{out}}\right),   (7)

where up-sampling ($\uparrow$) and down-sampling ($\downarrow$) are applied to allow for feature concatenation. The multi-scale features are then combined and processed by $1\times 1$ and $3\times 3$ convolutional layers, respectively, to allow the decoder blocks $\text{DB}_1$ and $\text{DB}_2$ to utilise information obtained at different scales. The structure of the blocks is shown in Figure 2a.
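
A possible PyTorch rendering of the fusion in Equations 6 and 7 is sketched below; the channel sizes, the use of trilinear interpolation for $\uparrow$/$\downarrow$, and the exact layer ordering are illustrative assumptions, since the text only fixes the concatenation and the $1\times 1$ and $3\times 3$ convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFBlock(nn.Module):
    """Fuses features from two scales with 1x1 and 3x3 convolutions (cf. Eqs. 6-7)."""
    def __init__(self, ch_a, ch_b, ch_out):
        super().__init__()
        self.conv1 = nn.Conv3d(ch_a + ch_b, ch_out, kernel_size=1)
        self.conv3 = nn.Conv3d(ch_out, ch_out, kernel_size=3, padding=1)

    def forward(self, feat_a, feat_b):
        # resample feat_b to the spatial size of feat_a before concatenation (up or down)
        feat_b = F.interpolate(feat_b, size=feat_a.shape[2:], mode="trilinear", align_corners=False)
        fused = torch.cat([feat_a, feat_b], dim=1)
        return self.conv3(self.conv1(fused))

# FFBlock_1: full-resolution EB_1 features fused with up-sampled EB_2 features
eb1 = torch.rand(1, 8, 32, 64, 64)
eb2 = torch.rand(1, 16, 16, 32, 32)
out1 = FFBlock(8, 16, 8)(eb1, eb2)
```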

2.1.3 Feature Decoding

Initially, the refined features are decoded in the bottleneck by a convolutional layer and a residual block within $\text{DB}_2$. Next, these features are expanded with a convolutional layer to match the dimensions in both the correction and update modules. The resulting features are then concatenated with the output of $\text{FFBlock}_1$ and fed into the decoder $\text{DB}_1$ within the correction module. Finally, the features are mapped to the image dimensions, resulting in the mask $M$, which is summed with $y$ to form $\tilde{z}$.

2.2 Update Module

Inspired by the forward and backward projector functions of [8], we substitute the PSF convolution operations of the Richardson-Lucy algorithm with learnable convolutional layers and residual blocks.

During forward projection (FP), shallow features are first extracted by a single convolutional layer and then refined by a residual block. The output of $f$ is passed to the Richardson-Lucy Division Block (RLDiv), which embeds the division of the raw image $y$ by the channel-wise mean of the refined FP features. Next, we project the division result to a feature map to extract more information about the image. The process is visualised in Figure 2b. These features are then concatenated with the features extracted by the bottleneck and combined by a convolutional layer, which initiates the backward projection with $b$. The output is summed with the output of RLDiv, forming a skip connection, and passed through a residual block. The features are then refined by a convolutional layer and their channel-wise mean is taken as the "update term" $u$, which is used to obtain the final model output $x^{\prime}$ through multiplication with $\tilde{z}$ (denoted as RLMul).
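
A hedged sketch of the RLDiv step is given below: the raw image is divided by the channel-wise mean of the forward-projected features, and the result is projected back to a feature map. The channel count, the stabilising epsilon and the choice of projection layer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RLDiv(nn.Module):
    """Richardson-Lucy division block (sketch): divide y by the channel-wise mean of
    the FP features, then project the quotient back to a feature map."""
    def __init__(self, ch, eps=1e-6):
        super().__init__()
        self.project = nn.Conv3d(1, ch, kernel_size=3, padding=1)
        self.eps = eps

    def forward(self, y, fp_features):
        dv = y / (fp_features.mean(dim=1, keepdim=True) + self.eps)  # embed the RL division
        return self.project(dv)                                      # expand to features for b

y = torch.rand(1, 1, 32, 64, 64)
fp = torch.rand(1, 8, 32, 64, 64)
features = RLDiv(ch=8)(y, fp)
```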

2.3 Loss Function

The entire model is trained end-to-end with a single loss function that combines the Mean Squared Error (MSE) and the Structural Similarity Index Measure (SSIM) as follows:

\mathcal{L}(x^{\prime},x) = \text{MSE}(x^{\prime},x) - \ln\left(\frac{1+\text{SSIM}(x^{\prime},x)}{2}\right),   (8)

where $x$ is the ground-truth sharp image and $x^{\prime}$ is the model's estimate of $x$.
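
A sketch of Equation 8 in PyTorch is given below. For brevity it uses a simplified global (un-windowed) SSIM as a stand-in; in practice the standard windowed SSIM would be used, and the constants assume intensities normalised to [0, 1].

```python
import torch

def ssim_global(x_hat, x, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM over the whole volume (illustrative stand-in for windowed SSIM)."""
    mu_x, mu_y = x_hat.mean(), x.mean()
    var_x, var_y = x_hat.var(), x.var()
    cov = ((x_hat - mu_x) * (x - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def lucyd_loss(x_hat, x):
    """L = MSE(x', x) - ln((1 + SSIM(x', x)) / 2)   (Eq. 8)"""
    mse = torch.mean((x_hat - x) ** 2)
    ssim = ssim_global(x_hat, x)
    return mse - torch.log((1.0 + ssim) / 2.0)
```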

3 Experiments

Figure 3: Overview of the deconvolution process given an input (a) $y$. The outputs of the correction module, (b) $\tilde{z}$, and of the update module, (c) $u$, are shown alongside the final output (d) $x^{\prime}$ obtained through their multiplication, and the ground truth (e) $x$.

3.1 Setup

Table 1: Number of learnable parameters, comparing CARE, RLN and LUCYD.
CARE [17] | RLN [8] | LUCYD (ours)
1 M | 15,900 | 24,964

3.1.1 Datasets

We assess the performance of LUCYD on both simulated phantom objects and real microscopy images. To this end, we use five sets of 3D grayscale volumes generated by Li et al. [8], consisting of dots, solid spheres, and ellipsoidal surfaces, provided together with their sharp ground-truth volumes of dimensions $128\times 128\times 128$ (one exemplary image is shown in Figure 3e). To test the generalisation capabilities of our method, we also include two blurry and noisy versions of the dataset, $\mathcal{D}_{\text{nuc}}$ and $\mathcal{D}_{\text{act}}$, which use different image degradation processes for embryonic nuclei and membrane data. Additionally, we generate a mixed dataset by applying permutations of three Gaussian blur intensities ($\sigma_b = [1.0, 1.2, 1.5]$) and three levels of additive Gaussian noise ($\sigma_n = [0, 15, 30]$) to the ground-truth volumes, and then test the ability of the model to generalise to volumes blurred with Gaussian kernels ($\sigma_b = [0.5, 2.0]$) and corrupted with additive Gaussian noise levels ($\sigma_n = [20, 50, 70, 100]$) outside of the training dataset. The model is trained on patches of dimensions $32\times 64\times 64$ randomly sampled from the training datasets. Moreover, to explore generalisation to real data, we evaluate the model trained on synthetic phantom shapes on a real 3D light-sheet image of a starfish (private data) and a widefield microscopy image of a U2OS cell (from the dataset of [8]).
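
As an illustration of the mixed-degradation training setup, the sketch below synthesises one input/target pair from a ground-truth volume (Gaussian blur, additive Gaussian noise, then a random $32\times 64\times 64$ crop); the degradation pipelines of $\mathcal{D}_{\text{nuc}}$ and $\mathcal{D}_{\text{act}}$ from [8] differ from this simplified version.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_training_pair(gt, rng, patch=(32, 64, 64),
                       blur_levels=(1.0, 1.2, 1.5), noise_levels=(0, 15, 30)):
    """Blur and corrupt a ground-truth volume, then crop a random patch (illustrative sketch)."""
    sigma_b = rng.choice(blur_levels)
    sigma_n = rng.choice(noise_levels)
    degraded = gaussian_filter(gt, sigma=sigma_b) + rng.normal(0.0, sigma_n, size=gt.shape)
    # random patch location within the volume
    z0, y0, x0 = (rng.integers(0, s - p + 1) for s, p in zip(gt.shape, patch))
    sl = np.s_[z0:z0 + patch[0], y0:y0 + patch[1], x0:x0 + patch[2]]
    return degraded[sl], gt[sl]

rng = np.random.default_rng(0)
gt_volume = np.random.rand(128, 128, 128).astype(np.float32) * 255
inp, target = make_training_pair(gt_volume, rng)
```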

3.1.2 Baseline & Metrics

We employ CARE [17], a classic U-Net-based fluorescence image restoration model, and RLN [8], an RL-based convolutional model, as baselines. We quantitatively evaluate deconvolution performance on simulated data using two metrics: the Structural Similarity Index Measure (SSIM) and the Peak Signal-to-Noise Ratio (PSNR).
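
For reference, PSNR is computed as follows; the peak value used as the data range is an assumption of this sketch and depends on the image normalisation.

```python
import numpy as np

def psnr(x_hat, x, data_range=255.0):
    """Peak signal-to-noise ratio in decibels."""
    diff = np.asarray(x_hat, dtype=np.float64) - np.asarray(x, dtype=np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
```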

Table 2: Performance on synthetic datasets (SSIM/PSNR (dB)) degraded with blur and noise levels not present in the training dataset. The models are trained on phantom objects blurred with $\sigma_b = [1.0, 1.2, 1.5]$ and corrupted with Gaussian noise intensities $\sigma_n = [0, 15, 30]$.
Blur intensity $\sigma_b$ | Noise level $\sigma_n$ | CARE [17] | RLN [8] | LUCYD (ours)
0.5 | 20  | 0.9166/21.62 | 0.9571/25.60 | 0.9725/26.85
0.5 | 50  | 0.7589/15.96 | 0.8519/21.67 | 0.9463/24.35
0.5 | 70  | 0.6828/14.32 | 0.7235/18.52 | 0.9040/21.78
0.5 | 100 | 0.5856/12.56 | 0.5644/15.91 | 0.7233/17.47
2.0 | 20  | 0.8582/20.36 | 0.9040/22.34 | 0.9271/23.49
2.0 | 50  | 0.7057/16.35 | 0.7443/18.85 | 0.8575/21.00
2.0 | 70  | 0.6259/15.06 | 0.6051/16.69 | 0.7995/19.38
2.0 | 100 | 0.5154/13.08 | 0.4495/14.86 | 0.6311/16.42
Table 3: Performance on synthetic datasets (SSIM/PSNR (dB)) given varying training data.
Setting | Train dataset | Test dataset | CARE [17] | RLN [8] | LUCYD (ours)
In-domain | $\mathcal{D}_{\text{nuc}}$ | $\mathcal{D}_{\text{nuc}}$ | 0.7895/18.00 | 0.9247/26.43 | 0.9525/28.57
In-domain | $\mathcal{D}_{\text{act}}$ | $\mathcal{D}_{\text{act}}$ | 0.7666/17.44 | 0.8966/26.10 | 0.9450/27.83
Cross-domain | $\mathcal{D}_{\text{nuc}}$ | $\mathcal{D}_{\text{act}}$ | 0.7623/17.68 | 0.8841/24.33 | 0.9024/24.82
Cross-domain | $\mathcal{D}_{\text{act}}$ | $\mathcal{D}_{\text{nuc}}$ | 0.7584/17.00 | 0.9081/27.23 | 0.9336/27.63
Figure 4: Qualitative comparison of RLN and LUCYD on lateral and axial maximum-intensity projections of a starfish acquired by 3D light-sheet microscopy is shown in (a). Additional analysis of the deconvolution results of CARE, RLN and LUCYD trained on synthetic phantom objects in (b) shows patches of four-colour lateral maximum-intensity projections of a fixed U2OS cell acquired by widefield microscopy (from the dataset of [8]). LUCYD exhibits superior performance in recovering fine details and structures as compared to CARE and RLN, while simultaneously maintaining low levels of noise and haze surrounding the objects.

3.2 Results

In Table 2, we present the quantitative results of all three methods on simulated phantom objects degraded with blur and noise levels that were not present in the training dataset. LUCYD achieves the best performance even when the additive noise exceeds the maximum level seen during training, whereas the performance of CARE and RLN degrades considerably at these unseen corruption levels. We further examine LUCYD's performance on datasets simulating widefield microscopy imaging of embryonic nuclei and membrane data. As shown in Table 3, LUCYD outperforms CARE and RLN in both in-domain and cross-domain assessments, further supporting the model's suitability for cross-domain applications.

Finally, we apply LUCYD to two real microscopy test samples, as illustrated in Figure 4. On the 3D light-sheet image of a starfish, LUCYD recovers more details and structures than RLN while maintaining low levels of noise and haze around the object in both lateral and axial projections. On the second test sample, a fixed U2OS cell acquired by widefield microscopy, LUCYD suppresses noise and haze to a greater degree than RLN and CARE and retrieves finer, sharper details.

4 Conclusion

In this paper, we introduce LUCYD, a technique for deconvolving volumetric microscopy images that combines a classic image deconvolution formula with a U-shaped network. LUCYD takes advantage of both approaches, resulting in a lightweight method capable of processing 3D data effectively. We have demonstrated through experiments on both synthetic and real microscopy datasets that LUCYD exhibits strong generalisation capabilities as well as robustness to noise. These qualities make it well suited to cross-domain applications in fields such as biology and medical imaging. Additionally, the lightweight nature of LUCYD makes it computationally feasible for real-time applications, which can be crucial in various settings.

4.0.1 Acknowledgements

Tomáš Chobola is supported by the Helmholtz Association under the joint research school "Munich School for Data Science - MUDS".

References

  • [1] Chen, J., Sasaki, H., Lai, H., Su, Y., Liu, J., Wu, Y., Zhovmer, A., Combs, C.A., Rey-Suarez, I., Chang, H.Y., Huang, C.C., Li, X., Guo, M., Nizambad, S., Upadhyaya, A., Lee, S.J.J., Lucas, L.A.G., Shroff, H.: Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nature Methods 18(6), 678–687 (May 2021). https://doi.org/10.1038/s41592-021-01155-x
  • [2] Cho, S.J., Ji, S.W., Hong, J.P., Jung, S.W., Ko, S.J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 4641–4650 (2021)
  • [3] Dell’Acqua, F., Scifo, P., Rizzo, G., Catani, M., Simmons, A., Scotti, G., Fazio, F.: A modified damped Richardson–Lucy algorithm to reduce isotropic background effects in spherical deconvolution. NeuroImage 49(2), 1446–1458 (2010). https://doi.org/10.1016/j.neuroimage.2009.09.033
  • [4] Dey, N., Blanc-Feraud, L., Zimmer, C., Roux, P., Kam, Z., Olivo-Marin, J.C., Zerubia, J.: Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution. Microscopy Research and Technique 69(4), 260–266 (2006). https://doi.org/10.1002/jemt.20294
  • [5] Eichstädt, S., Schmähling, F., Wübbeler, G., Anhalt, K., Bünger, L., Krüger, U., Elster, C.: Comparison of the Richardson–Lucy method and a classical approach for spectrometer bandpass correction. Metrologia 50(2), 107 (2013)
  • [6] Guo, M., Li, Y., Su, Y., Lambert, T., Nogare, D.D., Moyle, M.W., Duncan, L.H., Ikegami, R., Santella, A., Rey-Suarez, I., Green, D., Beiriger, A., Chen, J., Vishwasrao, H., Ganesan, S., Prince, V., Waters, J.C., Annunziata, C.M., Hafner, M., Mohler, W.A., Chitnis, A.B., Upadhyaya, A., Usdin, T.B., Bao, Z., Colón-Ramos, D., Riviere, P.L., Liu, H., Wu, Y., Shroff, H.: Rapid image deconvolution and multiview fusion for optical microscopy. Nature Biotechnology 38(11), 1337–1346 (Jun 2020). https://doi.org/10.1038/s41587-020-0560-x
  • [7] Kaderuppan, S.S., Wong, E.W.L., Sharma, A., Woo, W.L.: Smart nanoscopy: A review of computational approaches to achieve super-resolved optical microscopy. IEEE Access 8, 214801–214831 (2020). https://doi.org/10.1109/ACCESS.2020.3040319
  • [8] Li, Y., Su, Y., Guo, M., Han, X., Liu, J., Vishwasrao, H.D., Li, X., Christensen, R., Sengupta, T., Moyle, M.W., Rey-Suarez, I., Chen, J., Upadhyaya, A., Usdin, T.B., Colón-Ramos, D.A., Liu, H., Wu, Y., Shroff, H.: Incorporating the image formation process into deep learning improves network performance. Nature Methods 19(11), 1427–1437 (Oct 2022). https://doi.org/10.1038/s41592-022-01652-7
  • [9] Lucy, L.B.: An iterative technique for the rectification of observed distributions. The Astronomical Journal 79, 745 (Jun 1974). https://doi.org/10.1086/111605
  • [10] Qiao, C., Li, D., Guo, Y., Liu, C., Jiang, T., Dai, Q., Li, D.: Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nature Methods 18(2), 194–202 (Jan 2021). https://doi.org/10.1038/s41592-020-01048-5
  • [11] Richardson, W.H.: Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62(1), 55–59 (Jan 1972). https://doi.org/10.1364/JOSA.62.000055
  • [12] Sage, D., Donati, L., Soulez, F., Fortun, D., Schmit, G., Seitz, A., Guiet, R., Vonesch, C., Unser, M.: DeconvolutionLab2: An open-source software for deconvolution microscopy. Methods 115, 28–41 (Feb 2017). https://doi.org/10.1016/j.ymeth.2016.12.015
  • [13] Tan, K., Li, W., Zhang, Q., Huang, Y., Wu, J., Yang, J.: Penalized maximum likelihood angular super-resolution method for scanning radar forward-looking imaging. Sensors 18(3),  912 (Mar 2018). https://doi.org/10.3390/s18030912
  • [14] Vizcaíno, J.P., Saltarin, F., Belyaev, Y., Lyck, R., Lasser, T., Favaro, P.: Learning to reconstruct confocal microscopy stacks from single light field images. IEEE Transactions on Computational Imaging 7, 775–788 (2021). https://doi.org/10.1109/TCI.2021.3097611
  • [15] Wagner, N., Beuttenmueller, F., Norlin, N., Gierten, J., Boffi, J.C., Wittbrodt, J., Weigert, M., Hufnagel, L., Prevedel, R., Kreshuk, A.: Deep learning-enhanced light-field imaging with continuous validation. Nature Methods 18(5), 557–563 (May 2021). https://doi.org/10.1038/s41592-021-01136-0
  • [16] Wallace, W., Schaefer, L.H., Swedlow, J.R.: A workingperson’s guide to deconvolution in light microscopy. BioTechniques 31(5), 1076–1097 (Nov 2001). https://doi.org/10.2144/01315bi01
  • [17] Weigert, M., Schmidt, U., Boothe, T., Müller, A., Dibrov, A., Jain, A., Wilhelm, B., Schmidt, D., Broaddus, C., Culley, S., Rocha-Martins, M., Segovia-Miranda, F., Norden, C., Henriques, R., Zerial, M., Solimena, M., Rink, J., Tomancak, P., Royer, L., Jug, F., Myers, E.W.: Content-aware image restoration: pushing the limits of fluorescence microscopy. Nature Methods 15(12), 1090–1097 (Nov 2018). https://doi.org/10.1038/s41592-018-0216-7
  • [18] Wiener, N.: Extrapolation, Interpolation, and Smoothing of Stationary Time Series. The MIT Press (1949). https://doi.org/10.7551/mitpress/2946.001.0001