
Far-field diffraction computational imaging based on parameter-robust illumination and direct phase optimization

An-dong Xiong\authormark{1,2}, Xiao-peng Jin\authormark{1}, Xv-ri Yao\authormark{1}, and Qing Zhao\authormark{1}

\authormark{1}Center for Quantum Technology Research, School of Physics, Beijing Institute of Technology, Beijing 100081, China

\authormark{}[email protected]
\authormark{*}[email protected]
Abstract

Coherent diffraction imaging (CDI) is a promising imaging technique that reveals most of the information contained in diffraction measurements. An ideal CDI should reconstruct a complex-valued object from a single-shot far-field diffraction pattern without any prior information about the target. To realize this ideal CDI, we propose a class of parameter-robust illumination patterns. A direct phase optimization algorithm is also proposed to improve the performance of phase retrieval under strong noise. Experimental results demonstrate the efficiency of our scheme for a complex-valued target under practical noisy measurement.

Journal: Optica; Article type: Research Article

1 Introduction

Fraunhofer diffraction, or far-field diffraction, can be easily achieved in most scenarios, including visible light, X-ray and electron imaging [1]. CDI utilizing far-field diffraction is widely applied in observing the structure of microorganisms [2, 3, 4, 5], molecules [6, 7] and materials [8, 9]. The mathematical form of the diffraction field is concisely the Fourier transform of the light field at the object plane. However, because the phase part of the light field is hardly directly measurable [1], it needs to be retrieved for the Fourier inversion. In practice, as a multi-solution problem, phase retrieval is usually conducted in two ways. One is multiple measurements: several masks cast on the target [10, 2, 11], or overlap scanning such as ptychography [12, 13, 9, 14], to provide more information. The other way is to use certain general properties of the object field in specific application scenarios, such as the real-valued condition [15, 6, 16, 17], sparsity in a certain transform [18, 19, 20, 21], or deep-learning methods [22, 23]. Multiple measurements require that the object does not change between measurements, which limits dynamic imaging and high-energy imaging [24]. Utilization of prior information cannot faithfully reconstruct an object that violates the assumed characteristics, for example, a random complex-valued field. If the far-field diffraction measurement and computational reconstruction are taken as a black box, it is hoped to be as simple as a mirror or lens: it should simply reproduce any original field without multiple measurements or prior information. Efforts have been made towards this ideal black box and have achieved good simulation results [25]. But when applied in practice, extra restrictions seem indispensable, and intensity objects are still more welcome in experiments [8, 15, 26].

The phase retrieval of single-shot far-field diffraction without prior information is inherently a problem with multiple equivalent solutions[27]. Some equivalent solutions cannot be excluded by simple oversampling, including:

  • overall phase shift;

  • translation paired with phase modulation that matches the moving distance;

  • inversion combined with complex conjugation (the twin image);

  • combinations of the operations above.

In the presence of higher noise, the final phase retrieval solution tends to degrade into a superimposed mixture of these equivalent solutions.
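These ambiguities are easy to confirm numerically. The following NumPy sketch (a toy 16x16 complex field; sizes and shifts are arbitrary assumptions) checks that each listed operation leaves the far-field magnitudes untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))  # complex-valued object
mag = np.abs(np.fft.fft2(x))

# Overall phase shift: x -> x * exp(i*c) leaves |F(x)| unchanged.
assert np.allclose(np.abs(np.fft.fft2(x * np.exp(1j * 0.7))), mag)

# Circular translation: only the Fourier phase picks up a linear ramp.
assert np.allclose(np.abs(np.fft.fft2(np.roll(x, (3, 5), axis=(0, 1)))), mag)

# Twin image: complex conjugate of the point-inverted field, x -> conj(x(-j)).
x_twin = np.conj(np.roll(x[::-1, ::-1], (1, 1), axis=(0, 1)))
assert np.allclose(np.abs(np.fft.fft2(x_twin)), mag)
```

Since every combination of these operations also preserves the magnitudes, single-shot magnitudes alone cannot distinguish between them.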

To suppress this degradation, there have been some good simulation works that use special illumination patterns to destroy certain symmetries [28, 29]. These methods can also be classified as adding general information about the object field. Because the special illumination is usually built into the set-up, the information remains unchanged, providing more feasibility for different samples. But it appears difficult to achieve satisfying results in practical applications, and special field restrictions are still more viable for intensity samples [15, 30]. The perfect pixel mapping and size parameters of the pattern and object can be easily obtained in simulation. However, due to optical path distortion, aberration, fluctuation of the refractive index, photosensitive array spacing and other practical problems in experiment, the parameters and pixel mapping used in the reconstruction algorithm are hardly as precise as those in simulation. These errors can greatly degrade the retrieval result under high noise, as shown in Fig. 2(b). Here, we propose a class of parameter-robust apertures and a direct phase optimization algorithm for the corresponding special-shaped apertures, together with a measurement correction algorithm.

2 Illumination pattern

For phase retrieval without prior information, all we can rely on is that the intensity of the light field outside the passable region is zero. In the optimization procedure, if the assumed aperture is smaller than the true one, the actual light intensity in the assumed zero zone will not be zero, which misleads the algorithm into solving an impossible problem. Therefore, it is necessary to ensure that the theoretical aperture covers the actual aperture; due to parameter errors, the ideal aperture needs to be slightly enlarged. We choose the linear magnification operation for the discussion hereinafter. The reasons are: 1. operations such as dilation and convolution lead to the emergence of translation-equivalent solutions; 2. errors such as boundary errors on the Fourier plane lead to linear zoom on the image plane; 3. the linear magnification operation retains most of the symmetry embodied in the measurement.

The linear magnification operation is defined as follows: for an enlargement center $O$ and any point $A$ in the space, the corresponding new point $A'$ satisfies $\vec{OA'}=a\,\vec{OA}\ (a>1)$. We require that for any $a>1$ the enlarged space covers the original one, i.e., $\{A\}\subset\{A'\}$. Let $\{L_i\}$ be the boundaries of the aperture space. These $\{L_i\}$ are Jordan curves, and the Jordan curve theorem will be used repeatedly below: every continuous path connecting an 'outside' point and an 'inside' point intersects the curve somewhere.

Suppose $O$ is outside a certain $L_i$. If $O$ were inside $L_i'$, then because $O=O'$ and the magnification maps the inside of $L_i$ to the inside of $L_i'$, $O$ would have to be inside $L_i$; but $O$ is outside $L_i$, so a contradiction occurs. Therefore, if $O$ is outside $L_i$, $O$ must also be outside $L_i'$.

If $O$ is outside a certain $L_i$ and $Q$ is a point inside $L_i$, let $A$ be the intersection of segment $OQ$ with $L_i$ nearest to $O$ (Fig. 1(b)). If $A$ is on or inside $L_i'$, then because $O$ must be outside $L_i'$, segment $OA$ intersects $L_i'$ at some point $G'$. Then $OA>OG'>OG$ ($OG$ and $OG'$ must be in the same direction, otherwise $O$ would be inside $L_i'$ (Fig. 1(c))), and $G$ is on $L_i$, so $A$ is not the nearest intersection, which is a contradiction. So $A$ is outside $L_i'$, which means $L_i$ cannot be completely surrounded by $L_i'$. For a multiply connected region, there may be an $a$ large enough that $L_j'$ surrounds $L_i$, but we need the covering to hold for every $a>1$, and for a multiply connected region and any $O$ there is always some $L_i$ with $O$ outside it. Therefore, only a simply connected region can meet our needs.

For this simply connected region, we further require that there is at least one point $O$ inside $L$ such that any ray from $O$ intersects $L$ at only one point. We call $O$ a simple encircled center. For such a region, suppose $C$ is in $\{A\}$ but not in $\{A'\}$. Let $B_1'$ and $B_2'$ be the intersections of line $OC$ with $L'$, where $OB_1'$ and $OC$ are in the same direction, and let $B_1$ be the point on $L$ whose image is $B_1'$. If $B_1$ did not belong to $\{A'\}$, segment $B_1O$ would intersect $L'$ at some $G'$ with $OG<OG'<OB_1$ and $G\neq B_1$, contradicting simple encircling. Hence $B_1\in\{A'\}$. That means there is at least one intersection $D'$ of segment $B_1C$ with $L'$. The point $D$ corresponding to $D'$ is then another intersection of line $OC$ with $L$, contradicting simple encircling (Fig. 1(a)). Therefore, for a simply connected region meeting the above conditions, its linear magnification about $O$ by any $a>1$ completely covers the original region. Incidentally, not every simply connected region has a simple encircled center, for example, a slim $\Omega$.

Figure 1: Schematic for the proof of the features of linear magnification. The red curve is the original curve and the blue curve is the magnified one. The dotted curve is a fake linearly magnified curve for reductio ad absurdum, which is actually magnified about some different $O$.

To satisfy the requirements above, we can generate several sectors centered on point $O$ to form the desired aperture shape. The radius, angle and number of these sectors are not limited, which means shapes like rectangles and triangles can also be generated.
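The covering property can be illustrated numerically. In the sketch below (toy sizes; the sector widths and radii are our own assumptions), a pixel $p$ belongs to the magnified shape exactly when its preimage $p/a$ belongs to the original, so a simply connected sector aperture is covered for any $a>1$, while a multiply connected annulus is not:

```python
import numpy as np

def in_sectors(x, y, n=3, radius=40.0, width=np.pi / 4):
    # Continuous membership test for n equal-radius sectors centred on O.
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    ok = np.zeros_like(r, dtype=bool)
    for i in range(n):
        d = np.angle(np.exp(1j * (phi - 2 * np.pi * i / n)))  # wrapped angular distance
        ok |= np.abs(d) <= width / 2
    return ok & (r <= radius)

def in_ring(x, y, r1=20.0, r2=40.0):
    # A multiply connected counterexample: an annulus.
    r = np.hypot(x, y)
    return (r1 <= r) & (r <= r2)

size = 129
y, x = np.mgrid[:size, :size] - size // 2
a = 17 / 16  # pixel p is in the magnified shape iff p/a is in the original

# The simply connected sector aperture is covered by its magnification...
assert not np.any(in_sectors(x, y) & ~in_sectors(x / a, y / a))
# ...while the inner rim of the multiply connected ring escapes the cover.
assert np.any(in_ring(x, y) & ~in_ring(x / a, y / a))
```

Scaling by $1/a$ shrinks the radius but leaves the angle unchanged, which is exactly why the sector shape passes and the annulus fails.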

In addition to the requirement that the linear magnification fully cover the original shape, the aperture shape should have the following properties:

  • It has a small self-convolution maximum value (corresponding to the maximum overlap with the inversion after translation);

  • When enlarged, it limits the possible translation of the original graphics in it;

  • Its continuous area is as large as possible so that the final reconstructed image is more identifiable.

Based on the conditions described above, we choose the equal-radius three-sector aperture as an example below. The maximum value of self-convolution for three sectors is 0.484, while that for five sectors only decreases to 0.477. The maximum self-convolution value for larger odd numbers of sectors even increases.
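The self-convolution maximum can be evaluated with an FFT. The sketch below uses our own grid size, radius and sector width, so the exact numbers differ from the paper's; it only illustrates the parity effect, i.e. that an even (centrosymmetric) sector count overlaps its inversion completely while an odd count cannot:

```python
import numpy as np

def sector_mask(n, size=512, radius=100, width=np.pi / 3):
    # n equal-radius sectors of angular width `width`, centred on the array centre.
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    m = np.zeros((size, size), dtype=bool)
    for i in range(n):
        d = np.angle(np.exp(1j * (phi - 2 * np.pi * i / n)))
        m |= np.abs(d) <= width / 2
    return (m & (r <= radius)).astype(float)

def self_conv_max(ap):
    # FFT-based self-convolution; its peak, normalized by the aperture area, is
    # the maximum overlap of the aperture with its inversion after translation.
    F = np.fft.fft2(ap)
    conv = np.fft.ifft2(F * F).real
    return conv.max() / ap.sum()

# Four sectors are centrosymmetric: full overlap with the inversion.
assert abs(self_conv_max(sector_mask(4)) - 1.0) < 1e-6
# Three sectors cannot fully overlap their inversion under any translation.
assert self_conv_max(sector_mask(3)) < 0.95
```

The 512-pixel grid keeps the convolution support (twice the aperture diameter) free of circular wraparound.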

Fig. 2 shows the noiseless performance of a multiply connected non-centrosymmetric aperture and the simply connected three-sector aperture. When the assumed aperture is exactly the actual one, both work well in recovering the original image. However, when the assumed aperture is slightly enlarged, the result is dramatically degraded in the multiply connected case, because the enlarged pattern cannot cover the original one. In contrast, the enlarged three-sector aperture contains the original shape, leading to robustness against size errors.

Figure 2: Simulation reconstruction by OPM from noise-free measurement of the 'cameraman'. (a),(b) take a randomly distributed multiply connected aperture as the illumination pattern, while (d),(e) use three sectors. (a),(d) are the reconstructions when the precise-sized pattern is given. (b),(e) are the reconstructions when $a=17/16$. (d) and (e) are actually the same; the red part is the enlarged area of the given pattern. (c),(f) show the overlap after magnification. The black pixels in (c) are those not covered by the magnified pattern.

3 Algorithm optimizing phase and measurement

In this paper, no prior information about the target object is given. The only information is that the intensity of the light field outside the aperture is zero or as low as possible. Without the assistance of other information, traditional phase retrieval algorithms such as alternating projection and gradient algorithms can hardly converge stably to a good solution for a complex-valued object under high noise [27, 31]. Let the light field at wave vector $k$ in the Fourier plane be $m_k e^{i\theta_k}$. Many algorithmic works take $e^{i\theta_k}$ as a whole optimization variable so that many linear optimization methods can be utilized [32, 33, 34]. But in the end, the quadratic constraint that the modulus of $e^{i\theta_k}$ is 1 has to be imposed, which is the principal trouble. In order to maximize the use of the aperture information, we take $\theta_k$ itself as the optimization variable here.

The definitions that will be used later are collected here for reference.

$s=\left[e^{i\theta_{k_1}},\ldots,e^{i\theta_{k_N}}\right]^{T}$ (1)
$M=\mathrm{diag}\left(m_1,\ldots,m_N\right)$ (2)
$F_{\vec{k}\vec{j}}=e^{-i\vec{j}\cdot\vec{k}}$ (3)
$f_{\vec{j}\vec{k}}=e^{i\vec{j}\cdot\vec{k}}$ (4)
$\widehat{F}[x]_{\vec{k}}=\sum_{\vec{j}}F_{\vec{k}\vec{j}}x_{\vec{j}}$ (5)
$\widehat{f}[X]_{\vec{j}}=\sum_{\vec{k}}f_{\vec{j}\vec{k}}X_{\vec{k}}$ (6)
$g_{jk}=m_k e^{ijk}$ (7)
$v_j=\sum_k g_{jk}e^{i\theta_k}$ (8)
$\gamma_k=\sum_{j\in Z}\left(g_{jk}\sum_{k'\neq k}g_{jk'}^{*}e^{-i\theta_{k'}}\right)$ (9)
$D_j^{(k)}=\sum_{k'\neq k}g_{jk'}e^{i\theta_{k'}}$ (10)
$E_1^{k}=\mathrm{diag}(\underbrace{1,\ldots,1}_{k-1},0,\underbrace{1,\ldots,1}_{N-k})$ (11)
$\xi_2(j')=\begin{cases}1,&j'\in Z\\0,&j'\notin Z\end{cases}$ (12)
$E_2=\mathrm{diag}(\xi_2)$ (13)

Let $Z$ denote the set of unilluminated points in the object plane. The loss value for the aperture information is expressed as

$L=\sum_{j\in Z}v_j v_j^{*}$ (14)

and its partial derivative with respect to $\theta_k$ is:

$\frac{\partial L}{\partial\theta_k}=\sum_{j\in Z}\left(\frac{\partial v_j}{\partial\theta_k}v_j^{*}+\frac{\partial v_j^{*}}{\partial\theta_k}v_j\right)$ (15)
$=\sum_{j\in Z}\left(i g_{jk}e^{i\theta_k}v_j^{*}-i g_{jk}^{*}e^{-i\theta_k}v_j\right)$
$=\sum_{j\in Z}2\,\mathrm{Re}\left(i e^{i\theta_k}g_{jk}v_j^{*}\right)$
$=\sum_{j\in Z}2\,\mathrm{Re}\left(i e^{i\theta_k}g_{jk}\sum_{k'}g_{jk'}^{*}e^{-i\theta_{k'}}\right)$
$=\sum_{j\in Z}2\,\mathrm{Re}\left(i e^{i\theta_k}\left(g_{jk}\sum_{k'\neq k}g_{jk'}^{*}e^{-i\theta_{k'}}+g_{jk}g_{jk}^{*}e^{-i\theta_k}\right)\right)$
$=\sum_{j\in Z}2\,\mathrm{Re}\left(i e^{i\theta_k}\left(g_{jk}\sum_{k'\neq k}g_{jk'}^{*}e^{-i\theta_{k'}}\right)+i\left|m_k\right|^{2}\right)$
$=\sum_{j\in Z}2\,\mathrm{Re}\left(i e^{i\theta_k}g_{jk}\sum_{k'\neq k}g_{jk'}^{*}e^{-i\theta_{k'}}\right)$
$=2\,\mathrm{Re}\left(i e^{i\theta_k}\gamma_k\right)$

Since $\gamma_k$ does not contain $\theta_k$ (Eq. (9)),

$\frac{\partial\gamma_k}{\partial\theta_k}=0$ (16)

The value of $\theta_k$ at the extreme point can be directly solved from the condition that the partial derivative is 0. Let

$\gamma_k=\rho e^{i\alpha}$ (17)

At the extreme point

$\frac{\partial L}{\partial\theta_k}=ie^{i\theta_k}\rho e^{i\alpha}-ie^{-i\theta_k}\rho e^{-i\alpha}=-2\rho\sin\left(\theta_k+\alpha\right)=0$ (18)

Because

$\frac{\partial^{2}L}{\partial\theta_k^{2}}=-2\rho\cos\left(\theta_k+\alpha\right)$ (19)

At the local minimum, we have

$\theta_k=-\alpha+\pi$ (20)
$e^{i\theta_k}=-\left(e^{i\alpha}\right)^{*}$ (21)

An equation for $\theta_k$ at the local minimum has thus been obtained. However, if $\gamma_k$ is calculated separately for each wave-vector pixel in the Fourier measurement, the overall phase retrieval may require millions of Fourier transforms, so we need to go further to reduce the amount of calculation.
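Before reducing the cost, the closed-form single-pixel update of Eqs. (20)-(21) can be sanity-checked numerically. In this toy sketch, random complex coefficients stand in for $g_{jk}=m_k e^{ijk}$ and the sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
J, K, k0 = 12, 8, 3  # |Z| forbidden pixels, K Fourier pixels, index being updated
g = rng.normal(size=(J, K)) + 1j * rng.normal(size=(J, K))  # stand-in for g_{jk}
theta = rng.uniform(0, 2 * np.pi, K)

def loss(th_k):
    # L = sum_{j in Z} |v_j|^2 with only theta_{k0} varied.
    th = theta.copy()
    th[k0] = th_k
    v = g @ np.exp(1j * th)
    return np.sum(np.abs(v) ** 2)

# gamma_k = sum_j g_{jk} * conj(D_j^{(k)}), with D_j^{(k)} excluding k' = k0.
D = g @ np.exp(1j * theta) - g[:, k0] * np.exp(1j * theta[k0])
gamma = np.sum(g[:, k0] * np.conj(D))
th_opt = np.pi - np.angle(gamma)  # Eq. (20): theta_k = -alpha + pi

# The closed-form angle beats a dense scan of candidate angles.
grid = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
assert loss(th_opt) <= min(loss(t) for t in grid) + 1e-9
```

With the other phases fixed, $L$ is a sinusoid in $\theta_{k_0}$, so the scan can only confirm the analytic minimizer.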

$\gamma_k=\sum_{j\in Z}\left(g_{jk}D_j^{(k)*}\right)$ (22)
$=\sum_{j}m_k e^{ijk}\xi_2(j)D_j^{(k)*}$
$=m_k\left(\sum_{j}e^{-ijk}\xi_2(j)D_j^{(k)}\right)^{*}$

Let

$\Gamma=\left[\gamma_1,\ldots,\gamma_N\right]^{T}$ (23)
$\Gamma=\left(M\widehat{F}E_2\widehat{f}ME_1 s\right)^{*}$ (24)
$=M\left(\widehat{F}E_2\widehat{f}Ms-\sum_{j\in Z}Ms\right)^{*}$
$=M\left(N\widehat{F}'E_2\widehat{f}'Ms-\sum_{j\in Z}Ms\right)^{*}$

Here $\widehat{F}'$ and $\widehat{f}'$ are the two-dimensional FFT (fast Fourier transform) and iFFT (inverse fast Fourier transform).

Since this is a multivariate problem, the solution obtained after each batch is the extreme point with respect to the previous batch of phases, so several iterations are still needed to converge to an extreme point. If only one $\theta_k$ is changed at a time, the new overall loss value after each update is guaranteed to be less than or equal to that of the previous step, so we can have confidence in the convergence of this problem. Of course, considering the actual computational efficiency, it is not necessary to be so cautious, and the phase values can be updated in larger batches.

Under high noise, an extreme point is not necessarily the global minimum. Since we are confident in the convergence, we can escape local minima by resetting the phase values at a large percentage of random locations to random phases when the solution process stalls.

Algorithm 1 Optimizing phase for measurement (OPM)
  Input: $M$, $E_2$, $s^{0}$
  $s^{best}\leftarrow s^{0}$
  for $t=1$ to $T$ do
     $\Gamma^{t}\leftarrow\left(N\widehat{F}'E_2\widehat{f}'Ms^{t-1}-\sum_{j\in Z}Ms^{t-1}\right)^{*}$
     $s^{t}\leftarrow-\left(\Gamma^{t}/(|\Gamma^{t}|+\mu)\right)^{*}$ at 90% of locations; $s^{t}\leftarrow s^{t-1}$ at the remaining 10%
     {The $\mu$ value not only prevents division by zero, but also acts as a noise filter.}
     if $L(M,s^{t},E_2)<L(M,s^{best},E_2)$ then
        $s^{best}\leftarrow s^{t}$
     end if
     if $L(M,s^{t-1},E_2)-L(M,s^{t},E_2)<threshold$ then
        $counter\leftarrow counter+1$
     end if
     if $counter>counterthreshold$ then
        $s^{temp}\leftarrow s^{t}$ with 70% chance; $s^{temp}\leftarrow s^{best}$ with 30% chance
        $s^{t}\leftarrow randomize(s^{temp},90\%)$
     end if
  end for
  Output: $s^{best}$
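Algorithm 1 can be sketched in a few lines of NumPy. This is an illustrative re-implementation under assumed conventions (NumPy's `fft2`/`ifft2` as $\widehat{F}'$/$\widehat{f}'$, a toy aperture, and the stall-detection and restart logic omitted for brevity):

```python
import numpy as np

def opm_loss(m, s, zmask):
    # Loss (14): object-plane energy on the unilluminated set Z.
    field = np.fft.ifft2(m * s)
    return float(np.sum(np.abs(field[zmask]) ** 2))

def opm(m, zmask, iters=200, mu=1e-6, seed=0):
    """Sketch of OPM. m: Fourier magnitudes; zmask: True on the set Z
    (outside the aperture). Restart-on-stall logic is omitted."""
    rng = np.random.default_rng(seed)
    N = m.size
    s = np.exp(1j * rng.uniform(0, 2 * np.pi, m.shape))  # random initial phases
    s_best, l_best = s.copy(), opm_loss(m, s, zmask)
    for _ in range(iters):
        Ms = m * s
        # Gamma, Eq. (24): (N F' E2 f' M s - |Z| M s)^*. The leading M is
        # dropped: a positive diagonal does not change the phase of gamma_k.
        gamma = np.conj(N * np.fft.fft2(zmask * np.fft.ifft2(Ms)) - zmask.sum() * Ms)
        s_new = -np.conj(gamma / (np.abs(gamma) + mu))  # Eq. (21); mu filters noise
        upd = rng.random(m.shape) < 0.9                 # update 90% of locations
        s = np.where(upd, s_new, s)
        l = opm_loss(m, s, zmask)
        if l < l_best:
            s_best, l_best = s.copy(), l
    return s_best, l_best
```

In oversampled toy settings this drives the loss well below its random-phase starting value, which is the behavior the per-pixel optimality argument above predicts.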

On the other hand, we can also optimize the measurement to some extent in a similar way, by taking the partial derivative of the loss with respect to $m_k$:

$\frac{\partial L}{\partial m_k}=e^{i\theta_k}\sum_{j\in Z}e^{ijk}v_j^{*}+e^{-i\theta_k}\sum_{j\in Z}e^{-ijk}v_j$ (25)
$=2\,\mathrm{Re}\left(e^{-i\theta_k}\sum_{j\in Z}e^{-ijk}\sum_{k'}e^{ijk'}m_{k'}e^{i\theta_{k'}}\right)$
$=2\,\mathrm{Re}\left(e^{-i\theta_k}\widehat{F}E_2\widehat{f}Ms\right)$
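As a sanity check, the analytic gradient of Eq. (25) can be compared with finite differences on a small random instance (toy sizes and NumPy FFT conventions are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
ap = np.zeros((n, n), dtype=bool)
ap[:6, :6] = True          # illuminated region; Z is its complement
zmask = ~ap
m = rng.uniform(0.5, 2.0, (n, n))
s = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))

def L(m):
    # v_j = sum_k m_k e^{i theta_k} e^{ijk}; loss restricted to Z.
    v = n * n * np.fft.ifft2(m * s)
    return np.sum(np.abs(v[zmask]) ** 2)

# Analytic gradient, Eq. (25): 2 Re( e^{-i theta_k} [F E2 f M s]_k )
grad = 2 * np.real(np.conj(s) * np.fft.fft2(zmask * (n * n * np.fft.ifft2(m * s))))

# Forward finite differences at a few pixels agree with the analytic value.
eps = 1e-5
for (i, j) in [(0, 0), (3, 7), (10, 2)]:
    mp = m.copy()
    mp[i, j] += eps
    fd = (L(mp) - L(m)) / eps
    assert abs(fd - grad[i, j]) < 1e-2 * max(1.0, abs(grad[i, j]))
```

The unit-modulus phases make $e^{-i\theta_k}$ simply the conjugate of $s$, which is how the gradient line reads in code.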

The measurement values at the extreme point under the current phase could be obtained directly from the linear expression in $M$. However, direct calculation leads to a wrong measurement when a wrong phase distribution is given: errors in phase cause errors in measurement, errors in measurement cause errors in phase, and the errors become fixed in subsequent iterations. So only limited optimization can be implemented based on the existing gradient. At the same time, because the loss corresponding to an all-zero measurement is 0, there is always a tendency to reduce the overall measurement intensity. Hence, the change in the sum of the measurement values must be restricted. We therefore adopt the following update strategy, which distributes the change according to the gradient:

$\Delta m_k=-q\left(\frac{\partial L/\partial m_k}{\sum_k \partial L/\partial m_k}-\frac{1}{N}\right)$ (26)

where $q$ is the variable controlling the overall step size. In addition, we limit the maximum change during the iteration process to keep the measurement as faithful as possible to the original.

Algorithm 2 Optimizing measurement for phase (OMP)
  Input: $M^{0}$, $E_2$, $s$, $amc$, $rmc$
  $M_{\min}\leftarrow\max(\min((1-rmc)M^{0},\,M^{0}-amc),\,0)$
  $M_{\max}\leftarrow\max((1+rmc)M^{0},\,M^{0}+amc)$
  for $t=1$ to $T$ do
     $\frac{\partial L}{\partial m_k}\leftarrow 2\,\mathrm{Re}\left(e^{-i\theta_k}\widehat{F}E_2\widehat{f}Ms\right)$
     $\Delta m_k\leftarrow-q\left(\frac{\partial L/\partial m_k}{\sum_k\partial L/\partial m_k}-\frac{1}{N}\right)$
     $M^{t'}\leftarrow M^{t-1}+\Delta M$
     $M^{t}\leftarrow\min(M_{\max},\max(M^{t'},M_{\min}))$
  end for
  Output: $M^{T}$
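A single OMP step can be sketched as follows. The bounds follow Algorithm 2, where we read $amc$ and $rmc$ as absolute and relative maximum-change limits (an assumption on our part); the key property is that the normalized step of Eq. (26) sums to zero, preserving the total measurement intensity before clipping:

```python
import numpy as np

def omp_step(m, s, zmask, q, m_min, m_max):
    # Gradient of Eq. (25), then the sum-preserving step of Eq. (26),
    # clipped to the [m_min, m_max] trust region.
    N = m.size
    grad = 2 * np.real(np.conj(s) * np.fft.fft2(zmask * (N * np.fft.ifft2(m * s))))
    delta = -q * (grad / grad.sum() - 1.0 / N)  # assumes grad.sum() is not near zero
    return np.clip(m + delta, m_min, m_max), delta

# Toy usage with bounds in the style of Algorithm 2 (sizes are assumptions).
rng = np.random.default_rng(4)
n = 16
ap = np.zeros((n, n), dtype=bool)
ap[:6, :6] = True
m0 = rng.uniform(0.5, 2.0, (n, n))
s = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
rmc, amc = 0.2, 0.1
m_min = np.maximum(np.minimum((1 - rmc) * m0, m0 - amc), 0)
m_max = np.maximum((1 + rmc) * m0, m0 + amc)
m1, delta = omp_step(m0, s, ~ap, 0.5, m_min, m_max)
assert abs(delta.sum()) < 1e-9               # Eq. (26) preserves total intensity
assert np.all(m1 >= m_min) and np.all(m1 <= m_max)
```

Clipping to the trust region is what keeps the corrected measurement faithful to the recorded one.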

In Fourier measurements, there is a significant difference between the intensity in the low-frequency area and in the high-frequency area, so the traditional signal-to-noise ratio (SNR) can hardly reflect the impact of noise on the object plane. We therefore define the inverse Fourier signal-to-noise ratio (IFSNR):

$IFSNR=10\log_{10}\left(\frac{\sum_{\vec{j}}^{N}\left|\sum_{\vec{k}}^{N}e^{i\vec{j}\vec{k}}m_{\vec{k}}^{o}e^{i\theta_{\vec{k}}^{s}}\right|^{2}}{\sum_{\vec{j}}^{N}\left|\sum_{\vec{k}}^{N}e^{i\vec{j}\vec{k}}m_{\vec{k}}^{r}e^{i\theta_{\vec{k}}^{s}}-\sum_{\vec{k}}^{N}e^{i\vec{j}\vec{k}}m_{\vec{k}}^{o}e^{i\theta_{\vec{k}}^{s}}\right|^{2}}\right)$ (27)

Here $m^{o}$ is the noiseless measurement, $m^{r}$ is the noisy measurement, and $\theta^{s}$ is the supplementary phase. The supplementary phase can be chosen as the original Fourier phase, a random phase, or the phase of the retrieval. In the simulation, the supplementary phase is the phase of the noise-free original Fourier transform.
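A direct transcription of Eq. (27) is straightforward (NumPy's `ifft2` is assumed as the inverse transform; its constant factor cancels in the ratio):

```python
import numpy as np

def ifsnr(m_o, m_r, theta_s):
    # Eq. (27): signal and error are compared after an inverse Fourier transform
    # with supplementary phase theta_s, so noise is weighted as it appears on
    # the object plane rather than in the Fourier plane.
    sig = np.fft.ifft2(m_o * np.exp(1j * theta_s))
    err = np.fft.ifft2((m_r - m_o) * np.exp(1j * theta_s))
    return 10 * np.log10(np.sum(np.abs(sig) ** 2) / np.sum(np.abs(err) ** 2))

# Toy usage: the error enters linearly through m_r - m_o, so doubling the
# magnitude error lowers the IFSNR by exactly 20*log10(2) dB.
rng = np.random.default_rng(5)
m_o = rng.uniform(1.0, 2.0, (32, 32))
noise = rng.uniform(0.0, 0.1, (32, 32))
theta_s = rng.uniform(0, 2 * np.pi, (32, 32))
drop = ifsnr(m_o, m_o + noise, theta_s) - ifsnr(m_o, m_o + 2 * noise, theta_s)
assert np.isclose(drop, 20 * np.log10(2))
```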

Fig. 3 shows the performance of OPM and OMP under high-noise measurement. The procedure in this simulation is OPM $\rightarrow$ OMP $\rightarrow$ OPM $\rightarrow$ OMP. As shown in Fig. 3(a),(c), even though the high-frequency part has been severely damaged, OPM can still obtain a relatively good result, while OMP also reconstructs many of the destroyed characteristics in the high-frequency area.

Figure 3: Simulation reconstruction from a noisy measurement of a random complex-valued matrix. (a) Reconstructed and original imaginary part at the object plane. (b) The measurement before and after optimization and the original one for comparison, FFT-shifted and logarithmically shown. (c) Enlarged red part in (b).

4 Experiment

Figure 4: Far-field diffraction set-up.

The simplified optical path of the set-up is shown in Fig. 4.

The light source is a polarized He-Ne laser. The aperture is realized by a pattern loaded on a digital micromirror device (DMD) and is imaged onto the spatial light modulator (SLM) through a convex lens. The complex-valued object is a pattern on the SLM. The far-field diffraction pattern is measured with a 14-bit CMOS camera at the focal plane of the lens after SLM reflection. The exposure time is 20 ms.

The measurement on the CMOS contains some structured noise; hence we first conduct OMP with strong restrictions to decrease the noise that is inconsistent with the Fourier transform of the aperture. The supplementary phase here is the mean of the Fourier phases of several random patterns with different granularity.

Distortions of the light field actually projected on the target cannot be perfectly described by linear magnification. Therefore, we first take an assumed aperture with larger angle and radius for reconstruction, and obtain a new aperture based on the reconstruction. Several steps of erosion and dilation [35] are conducted to eliminate scattered pixels and meet the requirement of simple connectivity.

The coordinate of the Fourier transform center is hardly an exact integer on the CMOS, resulting in a ripple phase distribution correlated with the offset. Meanwhile, due to surface unevenness, aberration and the oblique light path, there is a base phase pattern on the target. These problems are solved by subtracting the reconstructed phase of an all-ones target, in other words, a mirror.

The experimental result is demonstrated in Fig. 5. OPM faithfully reconstructs the phase morphology of the complex-valued object, showing more details than hybrid input-output (HIO). The pillars are also better distinguishable, as shown in Fig. 5(e).

Figure 5: Reconstruction from an experimental measurement of a complex-valued 'cameraman' cast on the SLM. (a) Reconstruction by OPM; (b) reconstruction by HIO; (c) 'original' object for comparison (due to the unevenness of the actual magnification of the aperture and object, the location relationship between them is roughly matched using the reconstruction). (a),(b) are phase-unwrapped [36] and monolithically shifted. (d) Measurement achieved at the CMOS, logarithmically shown. (e) The phase values at the bars with different colors in (a),(b),(c).

5 Conclusion

In summary, we demonstrate that for parameter robustness of the special-illumination method in CDI, the aperture pattern should be simply connected and have a simple encircled center, and the three-sector scheme is proposed as an appropriate choice. Meanwhile, we propose an algorithm with the Fourier phase as the direct optimization variable, together with a coupled measurement correction algorithm. These methods make it possible to achieve phase retrieval with good convergence for complex-valued objects in high-noise measurements. Finally, the experimental results demonstrate the feasibility and efficacy of our scheme and algorithms.

Our aperture scheme only requires blocking and transmission, hence it can be easily realized at various wavelengths. Because no information except for the aperture is required by our algorithm, the target does not need to satisfy particular properties such as being real-valued or sparse, so the algorithm has good compatibility with various application scenarios. Single-shot imaging without multi-frame modulation enables it to be applied to high-energy destructive imaging and imaging of moving objects. The better imaging performance for complex-valued objects is of great use in 3D imaging and non-staining biological imaging.

In the algorithm, we use a random resetting method to avoid local optima. This is effectively simulated annealing without cooling down, which consumes a lot of calculation time, so a better optimization strategy is anticipated here. Subsequent work might adopt deep learning and other methods to fill in overexposed regions, and combination with prior knowledge should bring better image quality in specific practical scenarios.

Funding. funding

Acknowledgments. We thank Rong-ping Deng, Wen-kai Yu, Zu-xie Hu and Biao-xv Peng for their help with writing and theory.

Disclosures. The authors declare no conflicts of interest.

Data availability. The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.

References

  • [1] Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” \JournalTitleIEEE signal processing magazine 32, 87–109 (2015).
  • [2] L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for fourier ptychography with an led array microscope,” \JournalTitleBiomedical optics express 5, 2376–2389 (2014).
  • [3] M. Gallagher-Jones, Y. Bessho, S. Kim, J. Park, S. Kim, D. Nam, C. Kim, Y. Kim, D. Y. Noh, O. Miyashita et al., “Macromolecular structures probed by combining single-shot free-electron laser diffraction with synchrotron coherent x-ray imaging,” \JournalTitleNature communications 5, 1–9 (2014).
  • [4] C. Song, H. Jiang, A. Mancuso, B. Amirbekian, L. Peng, R. Sun, S. S. Shah, Z. H. Zhou, T. Ishikawa, and J. Miao, “Quantitative imaging of single, unstained viruses with coherent x rays,” \JournalTitlePhysical review letters 101, 158101 (2008).
  • [5] J. A. Rodriguez, R. Xu, C.-C. Chen, Y. Zou, and J. Miao, “Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities,” \JournalTitleJournal of applied crystallography 46, 312–318 (2013).
  • [6] N. Loh, C. Y. Hampton, A. V. Martin, D. Starodub, R. G. Sierra, A. Barty, A. Aquila, J. Schulz, L. Lomb, J. Steinbrener et al., “Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight,” \JournalTitleNature 486, 513–517 (2012).
  • [7] H. N. Chapman, P. Fromme, A. Barty, T. A. White, R. A. Kirian, A. Aquila, M. S. Hunter, J. Schulz, D. P. DePonte, U. Weierstall et al., “Femtosecond x-ray protein nanocrystallography,” \JournalTitleNature 470, 73–77 (2011).
  • [8] J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” \JournalTitleNature 400, 342–344 (1999).
  • [9] F. Pfeiffer, “X-ray ptychography,” \JournalTitleNature Photonics 12, 9–17 (2018).
  • [10] E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval via wirtinger flow: Theory and algorithms,” \JournalTitleIEEE Transactions on Information Theory 61, 1985–2007 (2015).
  • [11] W.-K. Yu, A.-D. Xiong, X.-R. Yao, G.-J. Zhai, and Q. Zhao, “Efficient phase retrieval based on dark fringe extraction and phase pattern construction with a good anti-noise capability,” \JournalTitleOptics Communications 402, 413–421 (2017).
  • [12] J. M. Rodenburg, “Ptychography and related diffractive imaging methods,” \JournalTitleAdvances in imaging and electron physics 150, 87–184 (2008).
  • [13] M. Holler, A. Diaz, M. Guizar-Sicairos, P. Karvinen, E. Färm, E. Härkönen, M. Ritala, A. Menzel, J. Raabe, and O. Bunk, “X-ray ptychographic computed tomography at 16 nm isotropic 3d resolution,” \JournalTitleScientific reports 4, 1–5 (2014).
  • [14] D. F. Gardner, M. Tanksalvala, E. R. Shanblatt, X. Zhang, B. R. Galloway, C. L. Porter, R. Karl Jr, C. Bevis, D. E. Adams, H. C. Kapteyn et al., “Subwavelength coherent imaging of periodic samples using a 13.5 nm tabletop high-harmonic light source,” \JournalTitleNature Photonics 11, 259–263 (2017).
  • [15] D. Gauthier, M. Guizar-Sicairos, X. Ge, W. Boutu, B. Carré, J. Fienup, and H. Merdji, “Single-shot femtosecond x-ray holography using extended references,” \JournalTitlePhysical review letters 105, 093901 (2010).
  • [16] J. Duarte, R. Cassin, J. Huijts, B. Iwan, F. Fortuna, L. Delbecq, H. Chapman, M. Fajardo, M. Kovacev, W. Boutu et al., “Computed stereo lensless x-ray imaging,” \JournalTitleNature Photonics 13, 449–453 (2019).
  • [17] M. Butola, S. Rajora, and K. Khare, “Phase retrieval with complexity guidance,” \JournalTitleJOSA A 36, 202–211 (2019).
  • [18] D. L. Donoho, “Compressed sensing,” \JournalTitleIEEE Transactions on Information Theory 52, 1289–1306 (2006).
  • [19] H. Ohlsson, A. Yang, R. Dong, and S. Sastry, “Cprl–an extension of compressive sensing to the phase retrieval problem,” \JournalTitleAdvances in Neural Information Processing Systems 25 (2012).
  • [20] Y. Shechtman, A. Beck, and Y. C. Eldar, “GESPAR: Efficient phase retrieval of sparse signals,” \JournalTitleIEEE Transactions on Signal Processing 62, 928–938 (2014).
  • [21] P. Sidorenko, O. Kfir, Y. Shechtman, A. Fleischer, Y. C. Eldar, M. Segev, and O. Cohen, “Sparsity-based super-resolved coherent diffraction imaging of one-dimensional objects,” \JournalTitleNature Communications 6, 1–8 (2015).
  • [22] G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” \JournalTitleOptica 6, 921–943 (2019).
  • [23] Z. Chen, S. Zheng, Z. Tong, and X. Yuan, “Physics-driven deep learning enables temporal compressive coherent diffraction imaging,” \JournalTitleOptica 9, 677–680 (2022).
  • [24] D. Sayre and H. Chapman, “X-ray microscopy,” \JournalTitleActa Crystallographica Section A: Foundations of Crystallography 51, 237–252 (1995).
  • [25] J. Miao, D. Sayre, and H. Chapman, “Phase retrieval from the magnitude of the Fourier transforms of nonperiodic objects,” \JournalTitleJOSA A 15, 1662–1669 (1998).
  • [26] A. Barty, S. Boutet, M. J. Bogan, S. Hau-Riege, S. Marchesini, K. Sokolowski-Tinten, N. Stojanovic, R. Tobey, H. Ehrke, A. Cavalleri et al., “Ultrafast single-shot diffraction imaging of nanoscale dynamics,” \JournalTitleNature Photonics 2, 415–419 (2008).
  • [27] M. Guizar-Sicairos and J. R. Fienup, “Understanding the twin-image problem in phase retrieval,” \JournalTitleJOSA A 29, 2367–2375 (2012).
  • [28] J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” \JournalTitleJOSA A 4, 118–123 (1987).
  • [29] J. R. Fienup, “Lensless coherent imaging by phase retrieval with an illumination pattern constraint,” \JournalTitleOptics Express 14, 498–508 (2006).
  • [30] F. Zhang, B. Chen, G. R. Morrison, J. Vila-Comamala, M. Guizar-Sicairos, and I. K. Robinson, “Phase retrieval by coherent modulation imaging,” \JournalTitleNature Communications 7, 1–8 (2016).
  • [31] T. Latychevskaia, “Iterative phase retrieval in coherent diffractive imaging: practical issues,” \JournalTitleApplied Optics 57, 7187–7197 (2018).
  • [32] T. Goldstein and C. Studer, “Convex phase retrieval without lifting via PhaseMax,” in International Conference on Machine Learning, (PMLR, 2017), pp. 1273–1281.
  • [33] E. J. Candes, T. Strohmer, and V. Voroninski, “PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming,” \JournalTitleCommunications on Pure and Applied Mathematics 66, 1241–1274 (2013).
  • [34] E. J. Candes, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval via matrix completion,” \JournalTitleSIAM review 57, 225–251 (2015).
  • [35] S. Marchesini, H. He, H. N. Chapman, S. P. Hau-Riege, A. Noy, M. R. Howells, U. Weierstall, and J. C. Spence, “X-ray image reconstruction from a diffraction pattern alone,” \JournalTitlePhysical Review B 68, 140101 (2003).
  • [36] F. Maier, D. Fuentes, J. S. Weinberg, J. D. Hazle, and R. J. Stafford, “Robust phase unwrapping for MR temperature imaging using a magnitude-sorted list, multi-clustering algorithm,” \JournalTitleMagnetic Resonance in Medicine 73, 1662–1668 (2015).
  • [37] J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” \JournalTitleOptics Letters 3, 27–29 (1978).
  • [38] J. R. Fienup, “Phase retrieval algorithms: a comparison,” \JournalTitleApplied Optics 21, 2758–2769 (1982).
  • [39] T. Goldstein and C. Studer, “PhaseMax: Convex phase retrieval via basis pursuit,” \JournalTitleIEEE Transactions on Information Theory 64, 2675–2689 (2018).
  • [40] A. S. Bandeira, J. Cahill, D. G. Mixon, and A. A. Nelson, “Saving phase: Injectivity and stability for phase retrieval,” \JournalTitleApplied and Computational Harmonic Analysis 37, 106–125 (2014).
  • [41] I. K. Robinson, I. A. Vartanyants, G. Williams, M. Pfeifer, and J. Pitney, “Reconstruction of the shapes of gold nanocrystals using coherent x-ray diffraction,” \JournalTitlePhysical Review Letters 87, 195505 (2001).