Sub-Nyquist computational ghost imaging with orthonormalized colored noise speckle patterns

Xiaoyu Nie Texas A&M University, College Station, Texas, 77843, USA Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China    Xingchen Zhao Texas A&M University, College Station, Texas, 77843, USA    Tao Peng [email protected] Texas A&M University, College Station, Texas, 77843, USA    Marlan O. Scully Texas A&M University, College Station, Texas, 77843, USA Baylor University, Waco, 76706, USA Princeton University, Princeton, NJ 08544, USA
Abstract

Computational ghost imaging generally requires a large number of patterns to obtain a high-quality image. Both pre-modulated orthogonal patterns and post-processing orthonormalization methods have been demonstrated to reduce the number of patterns and increase the imaging quality. In this work, we propose and experimentally demonstrate an orthonormalization approach based on colored noise speckle patterns to achieve sub-Nyquist computational ghost imaging. We examine the reconstructed images with quality indicators such as the contrast-to-noise ratio, the mean square error, the peak signal-to-noise ratio, and the correlation coefficient. The results suggest that our method can provide a high-quality image at a sampling ratio an order of magnitude lower than that of conventional methods. The results also suggest that there exists an optimal sampling rate in a noisy environment.

preprint: APS/123-QED

Computational ghost imaging (CGI) Shapiro (2008); Bromberg et al. (2009), an improved scheme over traditional ghost imaging (GI) Pittman et al. (1995); Bennink et al. (2004); Zhang et al. (2005), can reconstruct the object with a single-pixel detector. CGI also grants advantages in an expanding range of non-conventional applications such as wide-spectrum imaging Edgar et al. (2015); Radwell et al. (2014) and depth mapping Howland et al. (2013); Sun et al. (2016). It also finds application in various fields, such as temporal imaging Devaux et al. (2016), X-ray imaging Klein et al. (2019), and remote sensing Erkmen (2012). However, its sampling number usually has to be comparable to the total number of pixels in the speckle pattern to ensure good imaging quality, which is time-consuming and resource-intensive. This also imposes limitations, e.g., making CGI suitable only for static object reconstruction.

Various methods have been proposed to overcome this problem. Compressive sensing is a well-known technique that reduces the required sampling rate to below 30% by exploiting sparsity Katz et al. (2009); Katkovnik and Astola (2012). Still, it is strictly limited by the sparsity of the image. Deep learning has also shown its ability to achieve sub-Nyquist imaging Lyu et al. (2017); He et al. (2018); Wu et al. (2020). The limitation is that most of the networks are trained on experimental CGI results, so numerous measurements have to be done in advance. In addition, for the system to be effective, the environment for image reconstruction has to be nearly identical to the training environment, and the objects have to be similar to those used in training. Another approach is to use orthonormalized patterns to reduce the sampling rate Sun et al. (2017); Luo et al. (2018). In particular, Luo et al. introduced a data post-processing algorithm to improve the reconstruction process in a GI system with pseudo-thermal light Luo et al. (2018). The required sampling number is reduced by applying the Gram-Schmidt process to the noise patterns and the intensity sequence collected by the bucket detector. However, such a method is sensitive to noise, and the image quality is not comparable to standard CGI when the sampling rate is high. Traditionally, Gaussian white noise speckle patterns are used for GI. We recently developed a method to generate colored noise speckle patterns for CGI by customizing the patterns' power spectrum distribution Li et al. (2021); Nie et al. (2020). Unlike white noise, colored noise generally has non-zero cross-correlation between neighboring pixels. Sub-Rayleigh imaging was demonstrated with blue noise patterns, which have negative cross-correlation between adjacent pixels Li et al. (2021). Pink noise patterns allowed us to image in a variety of noisy environments Nie et al. (2020).

In this letter, we present an orthonormalization method combined with the colored noise technique, thereby significantly reducing the sampling in the CGI experiment. We compare orthonormalized colored noise GI (OCGI) with orthonormalized white noise GI (OWGI), traditional white noise GI (WGI), and pink noise GI (PGI). The results are evaluated with quality indicators such as the contrast-to-noise ratio (CNR), the peak signal-to-noise ratio (PSNR), the correlation coefficient (CC), and the mean square error (MSE). We show that OCGI always has the best performance. It can reduce the sampling rate by an order of magnitude while still obtaining the same image quality as standard CGI. In addition, it suggests an optimal sampling rate ($\beta<1$) in the presence of noise.

The experimental setup is shown in Fig. 1. It is a typical CGI setup: a CW laser illuminates a digital micromirror device (DMD), on which the speckle patterns with designed distributions are loaded. The pattern generated by the DMD is then projected onto the object, the letters 'OH' etched on an opaque plate. A bucket detector is placed right after the object to record the transmitted light intensity. The DMD consists of tiny pixel-mirrors, each measuring $16\,\mu\mathrm{m}\times 16\,\mu\mathrm{m}$. In the experiment, each noise pattern has $54\times 98$ independent pixels, and each independent pixel consists of $10\times 10$ mirrors.

Figure 1: Schematic of the setup. The digital micromirror device (DMD) is illuminated by a CW laser. Orthonormalized patterns are loaded on the DMD then imaged onto the object plane. Correlation measurement is made between the patterns and the intensities recorded by the bucket detector.

First, the Gaussian white and pink noise patterns are generated by applying an inverse Fourier transformation to spectra in which the spatial frequency distributions are defined as $\omega^{0}$ and $\omega^{-1}$, respectively Nie et al. (2020). A random phase matrix is also assigned to each pattern. The Gram-Schmidt process is then performed to orthonormalize the patterns. After the orthonormalization, the pink noise pattern's spatial frequency gradually changes to a blue noise distribution, as shown in Fig. 2. In other words, the spatial frequencies of the orthonormalized patterns cover a broad spectral range from pink to white and blue. The initial colored patterns are matrices $P_{1},P_{2},P_{3},\cdots,P_{\mathrm{N}}$, and the orthonormalized patterns are matrices $\widetilde{P}_{1},\widetilde{P}_{2},\widetilde{P}_{3},\cdots,\widetilde{P}_{\mathrm{N}}$, all of which contain $54\times 98$ elements. We define the projection coefficient $c_{\mathrm{mn}}$ as

$c_{\mathrm{mn}}=\frac{P_{\mathrm{m}}\cdot\widetilde{P}_{\mathrm{n}}}{\widetilde{P}_{\mathrm{n}}\cdot\widetilde{P}_{\mathrm{n}}}.$ (1)

The orthonormalized patterns can be generated by

$\widetilde{P}_{1}=P_{1},$ (2)
$\widetilde{P}_{\mathrm{m}}=P_{\mathrm{m}}-\sum\limits_{n=1}^{m-1}c_{\mathrm{mn}}\widetilde{P}_{\mathrm{n}}.$ (3)
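
For illustration, the following is a minimal NumPy sketch (ours, not the authors' code) of the two steps above: a colored noise pattern is synthesized by assigning a power-law amplitude and a random phase in the spatial frequency domain and inverse Fourier transforming, and the set is then orthogonalized with the Gram-Schmidt recursion of Eqs. (1)-(3). Function names and parameters are illustrative; a production implementation would use the numerically stabler modified Gram-Schmidt.

import numpy as np

def colored_noise_pattern(shape, alpha, rng):
    """One speckle pattern whose spatial power spectrum scales as
    omega**alpha (alpha = 0: white noise, alpha = -1: pink noise)."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                                 # avoid dividing by zero at DC
    amplitude = f ** (alpha / 2.0)                # |FT| = sqrt(power spectrum)
    phase = rng.uniform(0.0, 2.0 * np.pi, shape)  # random phase matrix
    return np.fft.ifft2(amplitude * np.exp(1j * phase)).real

def gram_schmidt(patterns):
    """Orthogonalize the patterns following Eqs. (1)-(3); each pattern is
    treated as a flattened vector. Naive O(N^2) loop, for clarity only."""
    shape = patterns[0].shape
    ortho = []
    for P in patterns:
        v = P.astype(float).ravel()
        for q in ortho:
            c = (v @ q) / (q @ q)                 # projection coefficient, Eq. (1)
            v = v - c * q                         # subtraction step, Eq. (3)
        ortho.append(v)
    return [q.reshape(shape) for q in ortho]

rng = np.random.default_rng(0)
patterns = [colored_noise_pattern((54, 98), -1.0, rng) for _ in range(200)]
ortho_patterns = gram_schmidt(patterns)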

Then, we re-normalize the histogram of $\widetilde{P}_{1},\widetilde{P}_{2},\widetilde{P}_{3},\cdots,\widetilde{P}_{\mathrm{N}}$ to [0, 255], which we define as $\widetilde{P}^{\prime}_{1},\widetilde{P}^{\prime}_{2},\widetilde{P}^{\prime}_{3},\cdots,\widetilde{P}^{\prime}_{\mathrm{N}}$. According to the dimension of the orthogonal vector space, we generate 5292 patterns of each kind, equal to the total number of pixels in a single pattern. We note here that, unlike the post-processing method of Luo et al. (2018), we directly generate these orthonormalized patterns and apply them to the DMD. Therefore, the orthonormalization coefficients and patterns are computed only once. Moreover, there is no intensity loss during the orthonormalization process. In our scheme, the intensity is measured as

$I_{\mathrm{i}}=T\cdot\widetilde{P}^{\prime}_{\mathrm{i}},$ (4)

where $T$ is the object's transmission coefficient and $\widetilde{P}^{\prime}_{\mathrm{i}}$ is the $i$-th orthonormalized pattern. As shown in Fig. 1, the image is retrieved by calculating the correlation between the patterns and the collected light intensity sequence as

$\Gamma^{(2)}=\frac{1}{N}\sum\limits_{i=1}^{N}\widetilde{P}^{\prime}_{\mathrm{i}}I_{\mathrm{i}}-\frac{1}{N^{2}}\sum\limits_{i=1}^{N}\widetilde{P}^{\prime}_{\mathrm{i}}\times\sum\limits_{i=1}^{N}I_{\mathrm{i}},$ (5)

where $N$ is the sampling number. We define the sampling rate $\beta$ as the ratio between the sampling number $N$ and the number of speckles in each pattern, $N_{\mathrm{pixel}}$:

$\beta=\frac{N}{N_{\mathrm{pixel}}}.$ (6)
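
For concreteness, a minimal sketch of the retrieval described by Eqs. (4)-(6), assuming `patterns` holds the re-normalized patterns $\widetilde{P}^{\prime}_{\mathrm{i}}$ and `T` is the object's binary transmission matrix (both NumPy arrays of shape 54 by 98; the names are ours):

import numpy as np

def reconstruct(patterns, intensities):
    """Second-order correlation of Eq. (5):
    <P'_i I_i> - <P'_i><I_i>, averaged over the N samplings."""
    P = np.asarray(patterns, dtype=float)      # shape (N, 54, 98)
    I = np.asarray(intensities, dtype=float)   # shape (N,)
    return (P * I[:, None, None]).mean(axis=0) - P.mean(axis=0) * I.mean()

# Bucket values, Eq. (4): overlap of the object's transmission with each pattern.
I = [np.sum(T * P_i) for P_i in patterns]
image = reconstruct(patterns, I)
beta = len(patterns) / (54 * 98)               # sampling rate, Eq. (6)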
Figure 2: The orthonormalized colored noise patterns: (a) the 1st pattern, (c) the 1000th pattern, (e) the last (5292nd) pattern; (b), (d), and (f) are the normalized spatial frequency distributions of the 1st, the 1000th, and the 5292nd pattern, respectively.

We explore the properties of the orthonormalized patterns by analyzing their spatial frequency, auto-correlation, and cross-correlation. As shown in Fig. 2, as the pattern number increases, the frequency peak moves to the higher end. This indicates that the patterns gradually change from a pink noise distribution to a blue noise distribution under the orthonormalization process, which is easy to understand since the orthonormalization protocol naturally requires the spatial frequency domain to be orthonormalized as well. Therefore, OCGI maintains the pink noise's advantage when the sampling number is small, and it continuously enhances the resolution as the sampling number increases. Indeed, OCGI acquires the features of OWGI as $\beta$ approaches 1, as shown in Fig. 3.

Figure 3: Cross-auto correlation ratio $R_{\mathrm{ca}}$ as a function of the sampling rate $\beta$. Insets: (I), (II), (III), and (IV) are 2D plots of the auto- and cross-correlation for total pattern numbers 100, 1000, 3000, and 5292, respectively.

A random pixel $p(x,y)$ is chosen, and its auto-correlation and cross-correlation with all other pixels are calculated. The cross-auto correlation ratio $R_{\mathrm{ca}}$ is defined as

$R_{\mathrm{ca}}=\frac{\Gamma^{(2)}_{p(x-1,y)}+\Gamma^{(2)}_{p(x+1,y)}+\Gamma^{(2)}_{p(x,y-1)}+\Gamma^{(2)}_{p(x,y+1)}}{4\Gamma^{(2)}_{p(x,y)}}.$ (7)
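
A sketch of how $R_{\mathrm{ca}}$ can be estimated from a set of patterns follows; the indexing convention is ours, and $\Gamma^{(2)}$ between two pixels is taken as the covariance of their values over the pattern ensemble:

import numpy as np

def cross_auto_ratio(patterns, x, y):
    """R_ca of Eq. (7) for the chosen pixel p(x, y): the mean correlation
    with its four nearest neighbours over the pixel's auto-correlation."""
    P = np.asarray(patterns, dtype=float)                   # shape (N, rows, cols)
    dP = P - P.mean(axis=0)                                 # per-pixel fluctuations
    gamma = (dP * dP[:, x, y][:, None, None]).mean(axis=0)  # Gamma^(2) map vs. p(x, y)
    neighbours = gamma[x-1, y] + gamma[x+1, y] + gamma[x, y-1] + gamma[x, y+1]
    return neighbours / (4.0 * gamma[x, y])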

The pink line in Fig. 3 shows that the ratio gradually dwindles: the cross-correlation starts from nearly 1 when $\beta$ is small, then gradually decreases to 0 when $\beta=1$, the same as for the white noise pattern. In fact, from the spatial frequency distribution of an arbitrary pattern, we can precisely predict how the result evolves during image retrieval with the OCGI method. It is also expected that the OCGI and OWGI measurements converge to the same result when $\beta$ approaches 1, as shown in the following.

Figure 4: Simulation with no noise. Image quality as a function of sampling number for CGI under ideal conditions: (a) CNR, (b) MSE, (c) PSNR, and (d) CC.

To test the feasibility of the OCGI method, we first run a simulation in the ideal, noise-free condition. To judge the performance of the various methods, i.e., OCGI, OWGI, WGI, and PGI, we utilize four evaluating indicators of image quality, i.e., CNR, MSE, PSNR, and CC Zerom et al. (2012); Xu et al. (2015); Li et al. (2017); Luo et al. (2018):

$\mathrm{CNR}=\frac{\langle G_{(\mathrm{o})}\rangle-\langle G_{(\mathrm{b})}\rangle}{\sqrt{\mathrm{Var}[G_{(\mathrm{o})}]+\mathrm{Var}[G_{(\mathrm{b})}]}}$ (8)
$\mathrm{MSE}=\frac{1}{N_{\mathrm{pixel}}}\sum\limits_{i=1}^{N_{\mathrm{pixel}}}\left[\frac{G_{\mathrm{i}}-X_{\mathrm{i}}}{\langle G_{(\mathrm{o})}\rangle}\right]^{2}$ (9)
$\mathrm{PSNR}=10\times\log_{10}\left[\frac{(2^{k}-1)^{2}}{\mathrm{MSE}}\right]$ (10)
$\mathrm{CC}=\frac{\mathrm{Cov}(G,X)}{\sqrt{\mathrm{Var}(G)\,\mathrm{Var}(X)}}$ (11)

Here, $X$ is the reference matrix calculated by

$X_{\mathrm{i}}=\begin{cases}\langle G_{(\mathrm{o})}\rangle,&\text{Transmission}=1\\ \langle G_{(\mathrm{b})}\rangle,&\text{Transmission}=0\end{cases}$ (12)

$G_{(\mathrm{o})}$ denotes the pixels of the correlation result where the light ought to be transmitted, i.e., the object area, while $G_{(\mathrm{b})}$ denotes the pixels where the light ought to be blocked, i.e., the background area. $k$ is the gray level of the image; in our experiment $k=8$.
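
The four indicators are straightforward to evaluate once the object and background areas are known. A minimal sketch, assuming `G` is the reconstructed correlation image and `mask` is a boolean matrix (our name) that is True on the object area (transmission 1):

import numpy as np

def quality_metrics(G, mask, k=8):
    """CNR, MSE, PSNR, and CC of Eqs. (8)-(11) for a reconstruction G.
    mask is True on the object area and False on the background."""
    Go, Gb = G[mask], G[~mask]
    cnr = (Go.mean() - Gb.mean()) / np.sqrt(Go.var() + Gb.var())
    X = np.where(mask, Go.mean(), Gb.mean())      # reference matrix, Eq. (12)
    mse = np.mean(((G - X) / Go.mean()) ** 2)     # Eq. (9)
    psnr = 10.0 * np.log10((2 ** k - 1) ** 2 / mse)
    cc = np.corrcoef(G.ravel(), X.ravel())[0, 1]  # Eq. (11)
    return cnr, mse, psnr, cc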

As shown in Fig. 4, OCGI, similar to PGI, gives a stronger signal in the low sampling rate domain, as demonstrated in our previous study Nie et al. (2020). Moreover, OCGI always has the best image quality. OWGI and OCGI behave almost identically once the image quality saturates as the pattern number reaches the maximum, and both remain much better than WGI and PGI. Here the orthonormalization process completely removes the weakness of PGI, whose image quality stays almost flat from beginning to end, while preserving PGI's advantage at low sampling rates.

The advantage of OCGI is further demonstrated when we introduce background noise into the system. We run another simulation with a noise level at 2% of the signal. The evaluating indicators as functions of $\beta$ are presented in Fig. 5, from which we can see that the MSE is about the same as in the noise-free case. OCGI remains the best among these imaging methods. Compared to the noise-free case, the CNR is dramatically decreased for all methods when $\beta$ is large. It should be noted that clear peaks appear in Fig. 5(c) and Fig. 5(d) for OCGI: the PSNR and CC of OCGI reach their highest values at $\beta\sim 0.1$, then slowly decrease and finally reach the same values as those of OWGI. This suggests that there exists an optimal sampling rate for the noise-resistant feature arising from the orthonormalization of the colored noise patterns. Again, the orthonormalized results are always better than those of the conventional speckle patterns.

Figure 5: Simulation with added noise. Image quality as a function of sampling number for CGI with noise at 2% of the signal level: (a) CNR, (b) MSE, (c) PSNR, and (d) CC.
Figure 6: Experimental results. Left of the red dashed line: CGI results via different types of noise patterns at various $\beta$. Right of the red dashed line: PGI with different object sizes at $\beta=1$; the sizes of the letters, from top to bottom, are 4, 3, 2, and 1 times that used on the left side.

We then test our scheme experimentally. In the experiment, we perform the measurement on the object 'OH'. The noise level is $\sim 2\%$, about the same as in the simulation, and $N_{\mathrm{pixel}}=54\times 98$ is used. The main results are shown in Fig. 6. From Fig. 6, we see that at $\beta$ of only 0.05, OCGI already gives an image while all the other methods fail to do so. OCGI, OWGI, and WGI all give clear images at $\beta\sim 0.5$, but the image obtained with OCGI is clearer than that of OWGI, and both are better than that of WGI. On the other hand, PGI fails to give a clear image even at $\beta=1$. This is due to the relatively small object size compared with the pixel size. To verify this, we gradually enlarge the object size 2, 3, and 4 times for PGI at $\beta=1$, as shown on the right-hand side of Fig. 6. When the object size is large enough, PGI gives a clear image. We conclude that the image quality of OCGI is better than that of all the other methods. The PGI method, on the other hand, is limited by the object size and cannot be used for resolution-limited imaging.

Figure 7: Image quality as a function of sampling number in the experiment: (a) CNR, (b) MSE, (c) PSNR, and (d) CC.

To further compare the results, we again utilize the four evaluating indicators of image quality. The results are shown in Fig. 7. The experimental results match the simulation results almost exactly. We also note that, as shown in Fig. 6 and Fig. 7, some of the indicators suggest that the best performance occurs at $\beta=0.1$. On the other hand, the result at $\beta=1$ appears to give a clearer image with sharper edges, and it also has the lowest MSE. The reason is that at $\beta=1$ the cross-correlation disappears and thus contributes nothing to the areas where the object is opaque. These parameters also give some indication of the optimal sampling rate to choose, depending on the experimental goal.

In conclusion, we have developed a method based on orthonormalized colored noise patterns in a CGI system that yields high-quality image reconstruction when the sampling number is small, with continuous improvement upon further sampling. The major advantage of this scheme is the continuous change of the cross-correlation of the orthonormalized colored noise speckle patterns, which overcomes the difficulties faced by conventional speckle patterns. The method is easy to implement owing to its simple setup and rapid image reconstruction. It can reduce the sampling rate by an order of magnitude compared to previous orthonormalization methods, and it is robust against noise.

Funding. Air Force Office of Scientific Research (Award No. FA9550-20-1-0366 DEF), Office of Naval Research (Award No. N00014-20-1-2184), Robert A. Welch Foundation (Grant No. A-1261), National Science Foundation (Grant No. PHY-2013771).

Disclosures. The authors declare no conflicts of interest.

Data Availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

  • Shapiro (2008) J. H. Shapiro, Phys. Rev. A 78, 061802 (2008).
  • Bromberg et al. (2009) Y. Bromberg, O. Katz, and Y. Silberberg, Phys. Rev. A 79, 053840 (2009).
  • Pittman et al. (1995) T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, Phys. Rev. A 52, R3429 (1995).
  • Bennink et al. (2004) R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, Phys. Rev. Lett. 92, 033601 (2004).
  • Zhang et al. (2005) D. Zhang, Y.-H. Zhai, L.-A. Wu, and X.-H. Chen, Opt. Lett. 30, 2354 (2005).
  • Edgar et al. (2015) M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, Sci. Rep 5, 10669 (2015).
  • Radwell et al. (2014) N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, Optica 1, 285 (2014).
  • Howland et al. (2013) G. A. Howland, D. J. Lum, M. R. Ware, and J. C. Howell, Opt. Express 21, 23822 (2013).
  • Sun et al. (2016) M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, Nat. Commun. 7, 1 (2016).
  • Devaux et al. (2016) F. Devaux, P.-A. Moreau, S. Denis, and E. Lantz, Optica 3, 698 (2016).
  • Klein et al. (2019) Y. Klein, A. Schori, I. Dolbnya, K. Sawhney, and S. Shwartz, Opt. Express 27, 3284 (2019).
  • Erkmen (2012) B. I. Erkmen, JOSA A 29, 782 (2012).
  • Katz et al. (2009) O. Katz, Y. Bromberg, and Y. Silberberg, Applied Physics Letters 95, 131110 (2009).
  • Katkovnik and Astola (2012) V. Katkovnik and J. Astola, JOSA A 29, 1556 (2012).
  • Lyu et al. (2017) M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, Sci. Rep 7, 1 (2017).
  • He et al. (2018) Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, Sci. Rep 8, 1 (2018).
  • Wu et al. (2020) H. Wu, R. Wang, G. Zhao, H. Xiao, D. Wang, J. Liang, X. Tian, L. Cheng, and X. Zhang, Opt. Express 28, 3846 (2020).
  • Sun et al. (2017) M.-J. Sun, L.-T. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, Sci. Rep 7, 1 (2017).
  • Luo et al. (2018) B. Luo, P. Yin, L. Yin, G. Wu, and H. Guo, Opt. Express 26, 23093 (2018).
  • Li et al. (2021) Z. Li, X. Nie, F. Yang, X. Liu, D. Liu, X. Dong, X. Zhao, T. Peng, M. S. Zubairy, and M. O. Scully, Opt. Express 29, 19621 (2021).
  • Nie et al. (2020) X. Nie, F. Yang, X. Liu, X. Zhao, R. Nessler, T. Peng, M. S. Zubairy, and M. O. Scully, arXiv preprint arXiv:2009.14390 (2020).
  • Zerom et al. (2012) P. Zerom, Z. Shi, M. N. O’Sullivan, K. W. C. Chan, M. Krogstad, J. H. Shapiro, and R. W. Boyd, Phys. Rev. A 86, 063817 (2012).
  • Xu et al. (2015) X. Xu, E. Li, X. Shen, and S. Han, Chin. Opt. Lett. 13, 071101 (2015).
  • Li et al. (2017) J. Li, D. Yang, B. Luo, G. Wu, L. Yin, and H. Guo, Opt. Lett. 42, 1640 (2017).