Sub-Nyquist computational ghost imaging with orthonormalized colored noise speckle patterns
Abstract
Computational ghost imaging generally requires a large number of patterns to obtain a high-quality image. Both pre-modulated orthogonal patterns and post-processing orthonormalization methods have been demonstrated to reduce the pattern number and increase the imaging quality. In this work, we propose and experimentally demonstrate an orthonormalization approach based on colored noise speckle patterns to achieve sub-Nyquist computational ghost imaging. We examine the reconstructed images with quality indicators such as the contrast-to-noise ratio, the mean square error, the peak signal-to-noise ratio, and the correlation coefficient. The results suggest that our method can provide a high-quality image using a sampling ratio an order of magnitude lower than conventional methods. The results also suggest that there exists an optimal sampling rate in a noisy environment.
Computational ghost imaging (CGI) Shapiro (2008); Bromberg et al. (2009), an improved scheme over traditional ghost imaging (GI) Pittman et al. (1995); Bennink et al. (2004); Zhang et al. (2005), has the ability to reconstruct an object via a single-pixel detector. CGI also grants advantages in an expanding range of non-conventional applications such as wide-spectrum imaging Edgar et al. (2015); Radwell et al. (2014) and depth mapping Howland et al. (2013); Sun et al. (2016). It also finds applications in various fields, such as temporal imaging Devaux et al. (2016), X-ray imaging Klein et al. (2019), and remote sensing Erkmen (2012). However, its sampling number usually has to be comparable to the total number of pixels in the speckle pattern to ensure good imaging quality, which is time-consuming and resource-intensive. It also imposes limitations, such as being suitable only for static object reconstruction.
Various methods have been proposed to overcome this problem. Compressive sensing is a well-known technique that reduces the required sampling rate to below 30% by exploiting sparsity Katz et al. (2009); Katkovnik and Astola (2012). Still, it is strictly limited by the sparsity of the image. Deep learning has also shown its ability to achieve sub-Nyquist imaging Lyu et al. (2017); He et al. (2018); Wu et al. (2020). The limitation is that most of the networks are trained on experimental CGI results, so numerous measurements have to be performed in advance. In addition, the environment and the inputs for image reconstruction have to be almost identical to the training environment and similar to the training objects for the system to be effective. Another approach is to use orthonormalized patterns to reduce the sampling rate Sun et al. (2017); Luo et al. (2018). In particular, Luo et al. introduced a data post-processing algorithm to improve the reconstruction process in a GI system with pseudo-thermal light Luo et al. (2018). The required sampling number is reduced by applying the Gram-Schmidt process to the noise patterns and the intensity sequence collected by the bucket detector. However, such a method is sensitive to noise, and the image quality is not comparable with standard CGI when the sampling rate is high. Traditionally, Gaussian white noise speckle patterns are used for GI. We recently developed a method to generate colored noise speckle patterns for CGI by customizing the speckle patterns' power spectrum distribution Li et al. (2021); Nie et al. (2020). Unlike white noise, colored noise generally has non-zero cross-correlation between neighboring pixels. Sub-Rayleigh imaging was demonstrated with the blue noise pattern, which has negative cross-correlation between two adjacent pixels Li et al. (2021). The pink noise pattern allowed us to image in a variety of noisy environments Nie et al. (2020).
In this letter, we present an orthonormalization method that incorporates the colored noise technique, thereby significantly reducing the sampling in the CGI experiment. We compare orthonormalized colored noise GI (OCGI) with orthonormalized white noise GI (OWGI), traditional white noise GI (WGI), and pink noise GI (PGI). The results are evaluated using quality indicators such as the Contrast-to-Noise Ratio (CNR), the Peak Signal-to-Noise Ratio (PSNR), the Correlation Coefficient (CC), and the Mean Square Error (MSE). We show that OCGI always has the best performance. It can reduce the sampling rate by an order of magnitude while still obtaining the same image quality as standard CGI. In addition, the results suggest an optimal sampling rate in the presence of noise.
The experimental setup is shown in Fig. 1. This is a typical CGI setup: a CW laser illuminates a digital micromirror device (DMD), onto which the speckle patterns with designed distributions are loaded. The pattern generated by the DMD is then projected onto the object, the letters 'OH' etched on an opaque plate. A bucket detector is placed right after the object to record the transmitted light intensity. The DMD consists of an array of tiny pixel-mirrors. Each noise pattern in the experiment has 5292 independent pixels, and each independent pixel consists of a block of micromirrors.

Firstly, the Gaussian white and pink noise patterns are generated by applying an inverse Fourier transformation to spectra in which the power spectral densities scale as $f^0$ (white) and $1/f$ (pink) with spatial frequency $f$ Nie et al. (2020). Random phase matrices are also assigned to each pattern. The Gram-Schmidt process is then performed to orthonormalize the patterns. After the orthonormalization, the pink noise pattern's spatial frequency gradually changes to a blue noise distribution, as shown in Fig. 2. In other words, the spatial frequency of the orthonormalized patterns covers a broad spatial spectrum ranging from pink through white to blue. The initial colored patterns are matrices $P_i$, and the orthonormalized patterns are matrices $P'_i$, each containing $N$ elements. We define the projection coefficient as
$c_{ij} = \langle P_i, P'_j \rangle = \sum_{x,y} P_i(x,y) P'_j(x,y)$.  (1)
The orthonormalized patterns can be generated by
$u_i = P_i - \sum_{j=1}^{i-1} c_{ij} P'_j$,  (2)
$P'_i = u_i / \| u_i \|$.  (3)
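As a concrete illustration of Eqs. (1)-(3), the sketch below generates small colored noise patterns by inverse Fourier transforming a power-law spectrum with random phases and then orthonormalizes them with a (modified) Gram-Schmidt loop. The pattern size and count here are illustrative only, not the 5292-pixel patterns of the experiment.

```python
import numpy as np

def colored_noise_pattern(n, alpha, rng):
    """One n x n noise pattern whose power spectrum scales as f^(-alpha).

    alpha = 0 gives white noise, alpha = 1 gives pink noise.
    """
    fx = np.fft.fftfreq(n)
    f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
    f[0, 0] = 1.0                        # avoid division by zero at DC
    amplitude = f ** (-alpha / 2.0)      # amplitude = sqrt(power spectrum)
    phase = rng.uniform(0, 2 * np.pi, (n, n))   # random phase matrix
    field = amplitude * np.exp(1j * phase)
    return np.fft.ifft2(field).real

def gram_schmidt(patterns):
    """Orthonormalize flattened patterns following Eqs. (1)-(3)."""
    basis = []
    for p in patterns:
        v = p.flatten().astype(float)
        for q in basis:
            v -= np.dot(v, q) * q        # subtract projection c_ij * P'_j
        v /= np.linalg.norm(v)           # normalize, Eq. (3)
        basis.append(v)
    return np.array(basis)

rng = np.random.default_rng(0)
n = 8                                    # illustrative pattern size
raw = [colored_noise_pattern(n, alpha=1.0, rng=rng) for _ in range(n * n)]
ortho = gram_schmidt(raw)
# Rows of `ortho` now form an orthonormal basis of the n*n pixel space.
```

The inner loop subtracts each projection from the updated vector (modified Gram-Schmidt), which is numerically more stable than the classical form.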
Then, we re-normalize the histogram of $P'_i$ to [0, 255], which we define as $P''_i$. According to the dimension of the orthogonal vector space, we generate 5292 patterns of each kind, equal to the total number of pixels in a single pattern. We note here that, unlike the post-processing method shown in Luo et al. (2018), we directly generate these orthonormalized patterns and apply them to the DMD. Therefore, the orthonormalization coefficients and patterns are produced at once. Besides, there is no intensity loss during the orthonormalization process. In our scheme, the intensity is measured as
$S_i = \sum_{x,y} T(x,y) P''_i(x,y)$,  (4)
where $T$ is the object's transmission coefficient and $P''_i$ is the $i$-th orthonormalized pattern. As shown in Fig. 1, the image is retrieved by calculating the correlation between the patterns and the collected light intensity sequence as
$G(x,y) = \frac{1}{M} \sum_{i=1}^{M} S_i P''_i(x,y) - \left( \frac{1}{M} \sum_{i=1}^{M} S_i \right) \left( \frac{1}{M} \sum_{i=1}^{M} P''_i(x,y) \right)$,  (5)
where $M$ is the sampling number. We define the sampling rate $\beta$ as the ratio between the sampling number $M$ and the number of speckles $N$ in each pattern:
$\beta = M / N$.  (6)
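A minimal numerical sketch of the measurement and retrieval steps of Eqs. (4)-(6): bucket signals are formed from a toy binary object, and the image is recovered by the correlation of Eq. (5). A random orthogonal basis stands in for the orthonormalized colored noise patterns, and the 4x4 object is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
T = np.zeros((n, n))
T[1:3, 1:3] = 1.0                      # toy transmissive object

# Stand-in for the orthonormalized patterns: rows of a random orthogonal
# matrix, each reshaped into an n x n pattern.
Q, _ = np.linalg.qr(rng.normal(size=(n * n, n * n)))
patterns = Q.reshape(n * n, n, n)

M = n * n                                         # full sampling, beta = M/N = 1
S = np.array([np.sum(T * P) for P in patterns])   # Eq. (4): bucket signals

# Eq. (5): G = <S_i P_i> - <S_i><P_i>
G = (S[:, None, None] * patterns).mean(axis=0) - S.mean() * patterns.mean(axis=0)

# At beta = 1 an orthonormal basis reconstructs the object exactly,
# up to the mean-subtraction term: sum_i S_i P_i == T.
T_exact = (S[:, None, None] * patterns).sum(axis=0)
```

This also makes the role of orthonormality explicit: with a complete orthonormal basis, the projection sum recovers $T$ exactly at full sampling.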

We explore the properties of the orthonormalized patterns by analyzing their spatial frequency, auto-correlation, and cross-correlation. As shown in Fig. 2, when the pattern number increases, the frequency peak moves toward the higher end. This suggests that the patterns gradually change from a pink noise distribution to a blue noise distribution under the orthonormalization process. This is easy to understand, since the orthonormalization protocol naturally requires that the spatial frequency domain also be orthonormalized. Therefore, OCGI maintains the pink noise's advantage when the sampling number is small, and it can continuously enhance the resolution as the sampling number increases. Indeed, OCGI acquires OWGI's features when $\beta$ approaches 1, as shown in Fig. 3.

A random pixel at $(x_0, y_0)$ is chosen, and its auto-correlation and its cross-correlation with the other pixels are calculated. The cross-to-auto correlation ratio is defined as
$R_{ca} = \frac{\langle \Delta P(x_0, y_0)\, \Delta P(x_1, y_1) \rangle}{\langle [\Delta P(x_0, y_0)]^2 \rangle}$,  (7)
where $\Delta P = P - \langle P \rangle$ denotes the fluctuation over the pattern ensemble and $(x_1, y_1)$ is another pixel.
From the pink line in Fig. 3, we can see that the ratio gradually dwindles: the cross-correlation starts from nearly 1 when $\beta$ is small and then gradually decreases to 0 when $\beta = 1$, which is the same as for the white noise pattern. As a matter of fact, from the spatial frequency distribution of an arbitrary pattern, we can precisely predict the change of the result during the image retrieval process with the OCGI method. It is also expected that the OCGI and OWGI measurements converge to the same results when $\beta$ approaches 1, as shown in the following.
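The ratio of Eq. (7) can be estimated numerically over an ensemble of patterns. The sketch below compares an uncorrelated (white-noise-like) ensemble with one given an artificially shared component between pixels; the pattern sizes, pixel choices, and correlation strength are illustrative assumptions, not the experimental values.

```python
import numpy as np

def cross_auto_ratio(patterns, p0, p1):
    """Estimate Eq. (7) for pixels p0 and p1 over a pattern ensemble."""
    a = patterns[:, p0[0], p0[1]].astype(float)
    b = patterns[:, p1[0], p1[1]].astype(float)
    da, db = a - a.mean(), b - b.mean()
    return np.mean(da * db) / np.mean(da * da)

rng = np.random.default_rng(2)
white = rng.normal(size=(5000, 8, 8))          # uncorrelated pixels
shared = 2.0 * rng.normal(size=(5000, 1, 1))   # component shared by all pixels
colored = white + shared                       # positively cross-correlated

r_white = cross_auto_ratio(white, (3, 3), (3, 4))      # close to 0
r_colored = cross_auto_ratio(colored, (3, 3), (3, 4))  # close to 4/5
```

For the correlated ensemble the shared variance (4) over the total variance (1 + 4) gives an expected ratio of 0.8, while the white-noise ratio vanishes, mirroring the two limits of the pink line in Fig. 3.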

To test the feasibility of the OCGI method, we first run a simulation under ideal, noise-free conditions. To judge the performance of the various methods, i.e., OCGI, OWGI, WGI, and PGI, we utilize four evaluating indicators of image quality, i.e., CNR, MSE, PSNR, and CC Zerom et al. (2012); Xu et al. (2015); Li et al. (2017); Luo et al. (2018):
$\mathrm{CNR} = \frac{\langle G_{\mathrm{in}} \rangle - \langle G_{\mathrm{out}} \rangle}{\sqrt{\Delta^2 G_{\mathrm{in}} + \Delta^2 G_{\mathrm{out}}}}$,  (8)
$\mathrm{MSE} = \frac{1}{N} \sum_{x,y} [G(x,y) - R(x,y)]^2$,  (9)
$\mathrm{PSNR} = 10 \log_{10} \left( \frac{g^2}{\mathrm{MSE}} \right)$,  (10)
$\mathrm{CC} = \frac{\sum_{x,y} [G(x,y) - \langle G \rangle][R(x,y) - \langle R \rangle]}{\sqrt{\sum_{x,y} [G(x,y) - \langle G \rangle]^2 \sum_{x,y} [R(x,y) - \langle R \rangle]^2}}$.  (11)
Here, $R$ is the reference matrix calculated by
$R(x,y) = g\, T(x,y)$.  (12)
$G_{\mathrm{in}}$ represents the pixels in the correlation results where the light ought to be transmitted, i.e., the object area, while $G_{\mathrm{out}}$ represents the pixels where the light ought to be blocked, i.e., the background area; $\Delta^2 G$ denotes the corresponding variance. $g$ is the gray level of the image, and in our experiment $g = 255$.
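For reference, the four indicators of Eqs. (8)-(11) can be computed as below for a binary reference matrix; the toy reference image and noise level are illustrative assumptions.

```python
import numpy as np

def indicators(G, R, g=255):
    """CNR, MSE, PSNR, and CC of Eqs. (8)-(11) for a reconstruction G
    against a binary reference matrix R (values 0 or g)."""
    g_in, g_out = G[R > 0], G[R == 0]    # object vs. background pixels
    cnr = (g_in.mean() - g_out.mean()) / np.sqrt(g_in.var() + g_out.var())
    mse = np.mean((G - R) ** 2)
    psnr = 10.0 * np.log10(g ** 2 / mse)
    cc = np.corrcoef(G.ravel(), R.ravel())[0, 1]
    return cnr, mse, psnr, cc

R = np.zeros((8, 8))
R[2:6, 2:6] = 255                        # toy binary reference image
rng = np.random.default_rng(3)
G = R + rng.normal(0, 10, R.shape)       # noisy stand-in reconstruction
cnr, mse, psnr, cc = indicators(G, R)
```

With Gaussian noise of standard deviation 10, the MSE is near 100 and the PSNR near $10\log_{10}(255^2/100) \approx 28$ dB, which gives a quick sanity check of the definitions.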
As shown in Fig. 4, OCGI, similar to PGI, gives a stronger signal in the low-sampling-rate domain, as demonstrated in our previous study Nie et al. (2020). Moreover, OCGI always has the best image quality. OWGI and OCGI behave almost identically once the image quality saturates as the pattern number reaches its maximum, and both remain much better than WGI and PGI. Here the orthonormalization process completely removes the weakness of PGI, whose image quality is almost flat from beginning to end, while strengthening PGI's advantage at low sampling rates.
The advantage of OCGI is further demonstrated when we introduce background noise into the system. We run another simulation with background noise added to the signal. The evaluating indicators as functions of $\beta$ are presented in Fig. 5, from which we can see that the MSE is about the same as in the noise-free case. OCGI still performs best among these imaging methods. The CNR is dramatically decreased for all methods when $\beta$ is large, compared to the noise-free case. It should be noted that there are clear peaks in Fig. 5(c) and Fig. 5(d) for OCGI. The PSNR and CC of OCGI reach their highest values at an intermediate sampling rate, then slowly decrease and finally reach the same values as those of OWGI. This suggests that there exists an optimum sampling rate for the noise-robust feature arising from the orthonormalization of the colored noise pattern. Again, the orthonormalized results are always better than those of the conventional speckle patterns.


We then test our scheme experimentally. In the experiment, we perform measurements on the object 'OH'. The noise level is about the same as in the simulation. The main results are shown in Fig. 6. From Fig. 6, we see that when $\beta$ is only 0.05, OCGI already gives an image while all the other methods fail to do so. OCGI, OWGI, and WGI all give clear images at higher sampling rates, but the image obtained with OCGI is clearer than that of OWGI, and both are better than WGI. On the other hand, PGI fails to give a clear image even at the highest sampling rate. This is due to the relatively small object size compared with the pixel size. To verify this, we gradually enlarge the object size by 2, 3, and 4 times for PGI, as shown on the right-hand side of Fig. 6. We see that when the object size is large enough, PGI gives a clear image. We conclude that the image quality of OCGI is better than that of all the other methods. The PGI method, on the other hand, is limited by the object size and cannot be used for resolution-limited imaging.

To further compare the results, we again utilize the four evaluating indicators of image quality. The results are shown in Fig. 7. We can see that the experimental results and the simulation results match almost exactly. We also note that, as shown in Fig. 6 and Fig. 7, some of the indicators suggest that the best performance occurs at an intermediate sampling rate. On the other hand, the results at $\beta = 1$ seem to give a clearer image with sharper edges, and also have the lowest MSE. The reason is that when $\beta = 1$, the cross-correlation disappears and thus contributes nothing to the area where the object is opaque. These indicators also give some guidance on the optimal sampling rate to choose, depending on the experimental goal.
In conclusion, we have developed a method based on orthonormalized colored noise patterns in a CGI system that yields high-quality image reconstruction when the sampling number is small, with continuous improvement upon further sampling. The major advantage of this scheme is the continuous change of the cross-correlation of the orthonormalized colored noise speckle patterns, which overcomes the difficulties faced by conventional speckle patterns. The method is easy to implement owing to its simple setup and rapid image reconstruction. It can reduce the sampling rate by an order of magnitude compared to previous orthonormalization methods, and it is robust against noise.
Funding. Air Force Office of Scientific Research (Award No. FA9550-20-1-0366 DEF), Office of Naval Research (Award No. N00014-20-1-2184), Robert A. Welch Foundation (Grant No. A-1261), National Science Foundation (Grant No. PHY-2013771).
Disclosures. The authors declare no conflicts of interest.
Data Availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
References
- Shapiro (2008) J. H. Shapiro, Phys. Rev. A 78, 061802 (2008).
- Bromberg et al. (2009) Y. Bromberg, O. Katz, and Y. Silberberg, Phys. Rev. A 79, 053840 (2009).
- Pittman et al. (1995) T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, Phys. Rev. A 52, R3429 (1995).
- Bennink et al. (2004) R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, Phys. Rev. Lett. 92, 033601 (2004).
- Zhang et al. (2005) D. Zhang, Y.-H. Zhai, L.-A. Wu, and X.-H. Chen, Opt. Lett. 30, 2354 (2005).
- Edgar et al. (2015) M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, Sci. Rep. 5, 10669 (2015).
- Radwell et al. (2014) N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, Optica 1, 285 (2014).
- Howland et al. (2013) G. A. Howland, D. J. Lum, M. R. Ware, and J. C. Howell, Opt. Express 21, 23822 (2013).
- Sun et al. (2016) M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, Nat. Commun. 7, 1 (2016).
- Devaux et al. (2016) F. Devaux, P.-A. Moreau, S. Denis, and E. Lantz, Optica 3, 698 (2016).
- Klein et al. (2019) Y. Klein, A. Schori, I. Dolbnya, K. Sawhney, and S. Shwartz, Opt. Express 27, 3284 (2019).
- Erkmen (2012) B. I. Erkmen, JOSA A 29, 782 (2012).
- Katz et al. (2009) O. Katz, Y. Bromberg, and Y. Silberberg, Applied Physics Letters 95, 131110 (2009).
- Katkovnik and Astola (2012) V. Katkovnik and J. Astola, JOSA A 29, 1556 (2012).
- Lyu et al. (2017) M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, Sci. Rep. 7, 1 (2017).
- He et al. (2018) Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, Sci. Rep. 8, 1 (2018).
- Wu et al. (2020) H. Wu, R. Wang, G. Zhao, H. Xiao, D. Wang, J. Liang, X. Tian, L. Cheng, and X. Zhang, Opt. Express 28, 3846 (2020).
- Sun et al. (2017) M.-J. Sun, L.-T. Meng, M. P. Edgar, M. J. Padgett, and N. Radwell, Sci. Rep. 7, 1 (2017).
- Luo et al. (2018) B. Luo, P. Yin, L. Yin, G. Wu, and H. Guo, Opt. Express 26, 23093 (2018).
- Li et al. (2021) Z. Li, X. Nie, F. Yang, X. Liu, D. Liu, X. Dong, X. Zhao, T. Peng, M. S. Zubairy, and M. O. Scully, Opt. Express 29, 19621 (2021).
- Nie et al. (2020) X. Nie, F. Yang, X. Liu, X. Zhao, R. Nessler, T. Peng, M. S. Zubairy, and M. O. Scully, arXiv preprint arXiv:2009.14390 (2020).
- Zerom et al. (2012) P. Zerom, Z. Shi, M. N. O’Sullivan, K. W. C. Chan, M. Krogstad, J. H. Shapiro, and R. W. Boyd, Phys. Rev. A 86, 063817 (2012).
- Xu et al. (2015) X. Xu, E. Li, X. Shen, and S. Han, Chin. Opt. Lett. 13, 071101 (2015).
- Li et al. (2017) J. Li, D. Yang, B. Luo, G. Wu, L. Yin, and H. Guo, Opt. Lett. 42, 1640 (2017).