
Fresnel Integral Computation Techniques

Alexandru Ionuț, James C. Hateley [email protected], [email protected]
Abstract.

This work is an extension of previous work by Alazah et al. [M. Alazah, S. N. Chandler-Wilde, and S. La Porte, Numerische Mathematik, 128(4):635–661, 2014]. We split the computation of the Fresnel integrals into three cases: a truncated Taylor series, a modified trapezoid rule, and an asymptotic expansion for small, medium, and large arguments, respectively. These special functions can be computed accurately and efficiently up to arbitrary precision. Error estimates are provided, and we give a systematic method for choosing the various parameters for a desired precision. We illustrate this method and verify it numerically using double precision.

Key words and phrases:
Fresnel integrals, asymptotic expansion, numerical analysis, numerical quadrature

1. Introduction

The Fresnel integrals and their joint parametric plot, the clothoid, have numerous applications including, but not limited to, optics and electromagnetic theory [8, 15, 16, 17, 21], robotics [2, 7, 12, 13, 14], civil engineering [4, 9, 20] and biology [18]. There have been numerous works over the past 70 years on computing and numerically approximating Fresnel integrals. Boersma established approximations using the Lanczos tau-method [3] and Cody computed rational Chebyshev approximations using the Remes algorithm [5]. Another approach is a spreadsheet computation by Mielenz [10], which is based on successive improvements of known rational approximations. Mielenz later gave an improvement of this work [11], where the error is less than $10^{-9}$. More recently, Alazah, Chandler-Wilde and La Porte propose a method to compute these integrals via a modified trapezoid rule [1].

Alazah et al. remark, after some experimentation, that a truncation of the Taylor series is more efficient and accurate than their new method for small arguments [1]. We build on this philosophy and add to it by also endorsing an asymptotic expansion for large arguments (see [10] for an earlier example). We propose a technique combining a truncated Taylor series, the modified trapezoid rule and an asymptotic expansion, and test its validity for practical computation of the Fresnel integrals over the whole real line. Cut-off points are found analytically, yielding a technique valid for arbitrary precision.

The paper is organized as follows: Section 2 gives a brief introduction to Fresnel integrals. In Section 3 the approximations with appropriate bounds are introduced. Section 4 contains a systematic method for choosing the parameters to construct an approximation using results from Section 3. Section 5 contains some benchmarking and comparative results. Lastly, conclusions are provided in Section 6.

2. Fresnel Integrals

We define the Fresnel cosine and sine integrals, $\mathrm{C}(x)$ and $\mathrm{S}(x)$ respectively, as follows:

\mathrm{C}(x)=\int_{0}^{x}\cos\Big(\frac{\pi t^{2}}{2}\Big)\,\mathrm{d}t,\qquad\mathrm{S}(x)=\int_{0}^{x}\sin\Big(\frac{\pi t^{2}}{2}\Big)\,\mathrm{d}t. (1)

This is consistent with [1]. With this pair of integrals, $\mathrm{C}(x)$ and $\mathrm{S}(x)$ can be combined into a single complex exponential integral:

\mathrm{G}(x)=\mathrm{C}(x)+\mathrm{i}\,\mathrm{S}(x)=\int_{0}^{x}\exp\Big(\frac{\mathrm{i}\pi t^{2}}{2}\Big)\,\mathrm{d}t. (2)

The Argand diagram of $\mathrm{G}(x)$ for real $x$ plots the clothoid. Also, both $\mathrm{C}(x)$ and $\mathrm{S}(x)$ are odd functions of $x$ and can be extended to analytic functions over the complex plane. $\mathrm{G}(x)$ can also be expressed as

\mathrm{G}(x)=\frac{1+\mathrm{i}}{2}\,\mathrm{erf}\Big(\frac{\sqrt{\pi}}{2}(1-\mathrm{i})x\Big) (3)

with $\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}\exp(-t^{2})\,\mathrm{d}t$ being the error function.

3. Approximations and Error bounds

In this section we list the approximations to be used for $\mathrm{G}(x)$ along with their corresponding error bounds. We start with the truncated Taylor series $\mathrm{T}_{N}(x)$:

\mathrm{T}_{N}(x)=\sum_{k=0}^{N}\frac{(\mathrm{i}\pi)^{k}x^{2k+1}}{2^{k}(2k+1)\,k!} (4)

$\mathrm{T}_{N}(x)$ is an alternating series in both its real and imaginary parts; bounding each by the next term, we have the estimate

|\mathrm{G}(x)-\mathrm{T}_{N}(x)|\leq E_{\mathrm{T}_{N}}(x)=\frac{\pi^{N+1}x^{2N+3}}{2^{N+1}(2N+3)(N+1)!}+\frac{\pi^{N+2}x^{2N+5}}{2^{N+2}(2N+5)(N+2)!} (5)
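For concreteness, the following is a minimal Python sketch of eq. (4) and the bound (5); the function names are ours, chosen for illustration only.

    from math import factorial, pi

    def fresnel_taylor(x, N=14):
        """Truncated Taylor series T_N(x) of eq. (4) for G(x) = C(x) + i*S(x)."""
        total = 0.0 + 0.0j
        for k in range(N + 1):
            total += (1j * pi) ** k * x ** (2 * k + 1) / (2.0 ** k * (2 * k + 1) * factorial(k))
        return total

    def taylor_error_bound(x, N=14):
        """Error bound E_{T_N}(x) of eq. (5)."""
        t1 = pi ** (N + 1) * x ** (2 * N + 3) / (2.0 ** (N + 1) * (2 * N + 3) * factorial(N + 1))
        t2 = pi ** (N + 2) * x ** (2 * N + 5) / (2.0 ** (N + 2) * (2 * N + 5) * factorial(N + 2))
        return t1 + t2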

The next approximation is the modified trapezoid sum from [1], which is

\mathrm{G}_{N}(x)=\frac{1+\mathrm{i}}{2}-\frac{1+\mathrm{i}}{\exp\left((1-\mathrm{i})\pi A_{N}x\right)+1}-\frac{2\mathrm{i}x\exp\left(\mathrm{i}\pi x^{2}/2\right)}{\pi A_{N}}\sum_{k=1}^{N}\frac{\exp\left(-\pi(k-1/2)^{2}A_{N}^{-2}\right)}{x^{2}+2\mathrm{i}(k-1/2)^{2}A_{N}^{-2}}, (6)

where $A_{N}$ is defined in (9). A global bound for eq. (6), see eq. (67) from [1], is

|\mathrm{G}(x)-\mathrm{G}_{N}(x)|\leq E_{\mathrm{G}_{N}}=\frac{2\sqrt{2}\,c_{N}\exp(-\pi N)}{2N+1}. (8)

where $c_{N}$ is given by eq. (47) in [1], namely

c_{N}=\frac{20\sqrt{2}\exp(-\pi/2)}{9\pi\left(1-\exp(-2\pi A_{N}^{2})\right)}\left(1+2\sqrt{\pi}\exp(-\beta\pi A_{N}^{2})\right)+\frac{(2\pi+1)\exp(-\pi/2)}{2\sqrt{2}\,\pi^{3/2}}

and $\beta$, $A_{N}$ are the constants

\beta=1-\frac{1}{\sqrt{2}}-\frac{2\sqrt{2}+1}{16},\qquad A_{N}=\sqrt{N+1/2}. (9)
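A direct transcription of eqs. (6) and (9) into Python might look as follows; this is a sketch for illustration, not the reference implementation of [1].

    import numpy as np

    def fresnel_mod_trapezoid(x, N=12):
        """Modified trapezoid rule G_N(x) of eq. (6), with A_N from eq. (9)."""
        A_N = np.sqrt(N + 0.5)
        k = np.arange(1, N + 1)
        a = (k - 0.5) ** 2 / A_N ** 2            # (k - 1/2)^2 * A_N^{-2}
        s = np.sum(np.exp(-np.pi * a) / (x ** 2 + 2j * a))
        return ((1 + 1j) / 2
                - (1 + 1j) / (np.exp((1 - 1j) * np.pi * A_N * x) + 1)
                - 2j * x * np.exp(1j * np.pi * x ** 2 / 2) / (np.pi * A_N) * s)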

The asymptotic expansion can be derived directly as follows. First write

\mathrm{G}(x)=\frac{1+\mathrm{i}}{2}-\int_{x}^{\infty}\exp\Big(\frac{\mathrm{i}\pi t^{2}}{2}\Big)\,\mathrm{d}t.

Repeated integration by parts gives

\mathrm{G}(x)=\mathrm{Q}_{N}(x)+\int_{x}^{\infty}\frac{(2N+1)!!\,(-\mathrm{i})^{N+1}}{\pi^{N+1}t^{2N+2}}\exp\Big(\frac{\mathrm{i}\pi t^{2}}{2}\Big)\,\mathrm{d}t (10)

with

\mathrm{Q}_{N}(x)=\frac{1+\mathrm{i}}{2}+\exp\Big(\frac{\mathrm{i}\pi x^{2}}{2}\Big)\sum_{k=0}^{N}\frac{(2k-1)!!\,(-\mathrm{i})^{k+1}}{\pi^{k+1}x^{2k+1}} (11)

For the error bound, estimating the integral in (10) gives,

|\mathrm{G}(x)-\mathrm{Q}_{N}(x)|\leq E_{\mathrm{Q}_{N}}(x)=\frac{(2N-1)!!}{\pi^{N+1}x^{2N+1}}. (12)

The expansion in eq. (11) is connected to the generalized hypergeometric series ${}_{2}F_{0}(1/2,1,-2\mathrm{i}/(\pi x^{2}))$. The bound in eq. (12) is much sharper than directly estimating the remainder term of the hypergeometric series; see 13.7.5 of NIST [6].
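A short Python sketch of eqs. (11) and (12), again with illustrative function names:

    from math import pi
    from cmath import exp

    def fresnel_asymptotic(x, N=12):
        """Asymptotic expansion Q_N(x) of eq. (11); accurate for large x."""
        total = 0.0 + 0.0j
        double_fact = 1.0                          # (2k-1)!!, with (-1)!! = 1
        for k in range(N + 1):
            if k > 0:
                double_fact *= 2 * k - 1
            total += double_fact * (-1j) ** (k + 1) / (pi ** (k + 1) * x ** (2 * k + 1))
        return (1 + 1j) / 2 + exp(1j * pi * x ** 2 / 2) * total

    def asymptotic_error_bound(x, N=12):
        """Error bound E_{Q_N}(x) of eq. (12)."""
        double_fact = 1.0
        for k in range(1, N + 1):                  # (2N-1)!!
            double_fact *= 2 * k - 1
        return double_fact / (pi ** (N + 1) * x ** (2 * N + 1))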

4. Technique

We start this section by making some observations. The Taylor expansion $\mathrm{T}_{N}$ is accurate near zero; as $|x|$ becomes large, more terms are needed to maintain accuracy. On the other hand, the asymptotic expansion $\mathrm{Q}_{N}$ is accurate for $|x|$ large and needs more terms to maintain accuracy as $|x|$ becomes small. The modified trapezoid rule has a global bound (see [1]), but in general is more prone to round-off errors for $|x|$ too small or too large. Given the parity of the Fresnel integrals, we consider positive $x$ and split the non-negative reals into three subintervals with cut-off points $x_{1}$ and $x_{2}$. Orders of approximation $N_{1},N_{2},N_{3}$ are also selected for the Taylor polynomial, $\mathrm{T}_{N}$, the modified trapezoid rule, $\mathrm{G}_{N}$, and the asymptotic expansion, $\mathrm{Q}_{N}$, respectively. We give a methodical approach to choosing the parameters $N_{1},N_{2},N_{3}$, $x_{1}$, $x_{2}$, and we select these parameters for double precision. In doing so we balance the floating-point operations among the three approximations while maintaining the accuracy of the approximation. This provides an accurate approximation, near the desired machine precision, with no increase in computational time for any $x>0$. We remark that if another precision is desired, the parameters can be refined in the same manner as done below.

We start with equation (8), which is a global bound for the modified trapezoid rule. Choosing the parameter $N_{2}=12$ gives $E_{\mathrm{G}_{12}}\approx 1.0733\times 10^{-17}$, which is below double precision. Choosing the value $N_{2}=11$ gives $E_{\mathrm{G}_{11}}\approx 2.301\times 10^{-16}$, which is slightly above double precision, so for this example we choose $N_{2}=12$. As per the above discussion, for a given $N$, $\mathrm{T}_{N}$ proves to be a better approximation for small $x$, while $\mathrm{Q}_{N}$ is a better approximation for large $x$. We thus choose $N_{1}$ and $N_{3}$ based on the computational time; doing so gives $N_{1}=14$ and $N_{3}=12$. Having decided the orders $N_{1},N_{2},N_{3}$, we can now choose $x_{1}$ and $x_{2}$ based on the precision we want. Evaluating eq. (5) at $x_{1}=0.688$ gives $E_{\mathrm{T}_{14}}(0.688)\approx 2.078\times 10^{-16}$, and evaluating eq. (12) at $x_{2}=6.725$ gives $E_{\mathrm{Q}_{12}}(6.725)\approx 2.212\times 10^{-16}$.
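The cut-offs quoted above can also be recovered numerically from the bounds (5) and (12). The following sketch, our own and assuming a target of machine epsilon $2^{-52}$, solves $E_{\mathrm{T}_{14}}(x_{1})=\varepsilon$ and $E_{\mathrm{Q}_{12}}(x_{2})=\varepsilon$ by bisection:

    from math import factorial, pi

    EPS = 2.0 ** -52                 # double-precision target

    def E_T(x, N):
        """Taylor bound E_{T_N}(x), eq. (5)."""
        return (pi ** (N + 1) * x ** (2 * N + 3) / (2.0 ** (N + 1) * (2 * N + 3) * factorial(N + 1))
                + pi ** (N + 2) * x ** (2 * N + 5) / (2.0 ** (N + 2) * (2 * N + 5) * factorial(N + 2)))

    def E_Q(x, N):
        """Asymptotic bound E_{Q_N}(x), eq. (12)."""
        df = 1.0
        for k in range(1, N + 1):
            df *= 2 * k - 1          # (2N-1)!!
        return df / (pi ** (N + 1) * x ** (2 * N + 1))

    def bisect(f, lo, hi, tol=1e-10):
        """Bisection for a sign change of f on [lo, hi]."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # Largest x for which T_14 still meets EPS, and smallest x for which Q_12 does.
    x1 = bisect(lambda x: E_T(x, 14) - EPS, 0.1, 2.0)    # close to 0.688
    x2 = bisect(lambda x: E_Q(x, 12) - EPS, 2.0, 20.0)   # close to 6.725
    print(x1, x2)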

In summary, we have the approximation

\mathrm{G}(x)\approx\tilde{\mathrm{G}}(x)=\begin{cases}\mathrm{T}_{14}(x),&x\in[0,0.688]\\ \mathrm{G}_{12}(x),&x\in(0.688,6.725)\\ \mathrm{Q}_{12}(x),&x\in[6.725,\infty).\end{cases} (13)

Using the parity of $\mathrm{G}(x)$, the estimate $|\mathrm{G}(x)-\tilde{\mathrm{G}}(x)|<2^{-52}$ holds for all $x\in\mathbb{R}$. So we achieve a double-precision approximation that has similar computational time for all $x\in\mathbb{R}$ while avoiding machine rounding errors.
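Assembled in code, the piecewise rule (13) is a simple dispatcher. The sketch below reuses the illustrative routines fresnel_taylor, fresnel_mod_trapezoid and fresnel_asymptotic from Section 3 and handles negative arguments by odd symmetry:

    X1, X2 = 0.688, 6.725            # cut-offs chosen above

    def fresnel_G(x):
        """Piecewise approximation of G(x) = C(x) + i*S(x), eq. (13), for real x."""
        if x < 0:
            return -fresnel_G(-x)    # C and S are odd functions
        if x <= X1:
            return fresnel_taylor(x, N=14)
        if x < X2:
            return fresnel_mod_trapezoid(x, N=12)
        return fresnel_asymptotic(x, N=12)

    # C(x) = fresnel_G(x).real and S(x) = fresnel_G(x).imag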

5. Numerical Experiments

Alazah et al. [1] test the validity of their method for arguments between 0 and 1000 and endorse the truncated Taylor series for small arguments, a well-known approach. Many other methods in the literature use polynomial or rational approximations, splitting the domain into several regions. As far as we know, the most accurate is that of Cody [5], with relative errors $\leq 10^{-15.58}$; however, the form for the intermediate region and the cut-off points are found experimentally. Mielenz [10] uses an asymptotic expansion for large arguments equivalent to a particular case of the form we use, fixing $N=6$. We confirm the precision of our technique here and test its efficiency.

Numerical experiments were run in MatLab R2020a on a 5th-generation Intel i7-5500U CPU @ 2.4GHz with 16GB of PC3L-12800 DDR3L RAM. We compare our approximation against two sources. The first is the native MatLab Fresnel cosine and sine functions, which we denote by $\mathrm{G}_{m}(x)=\texttt{fresnelc}(x)+\mathrm{i}\,\texttt{fresnels}(x)$. The second, denoted $\mathrm{G}_{s}(x)$, is an approximation based on Mielenz [10] that uses a spline interpolant; it can be found in the MATLAB Central File Exchange (John D'Errico (2020), FresnelS and FresnelC, https://www.mathworks.com/matlabcentral/fileexchange/28765-fresnels-and-fresnelc).

Figure 1. Error plots for $x\in[0,15]$; 1000 random test points are taken in each subinterval partitioned by $x_{1}=0.688$ and $x_{2}=6.725$. (a) Error $|\mathrm{G}_{m}-\tilde{\mathrm{G}}|$: the native MatLab function $\mathrm{G}_{m}(x)=\texttt{fresnelc}(x)+\mathrm{i}\,\texttt{fresnels}(x)$ vs $\tilde{\mathrm{G}}(x)$. (b) Error $|\mathrm{G}_{s}-\tilde{\mathrm{G}}|$: $\mathrm{G}_{s}(x)$ vs $\tilde{\mathrm{G}}(x)$.

Figure 1 shows the errors between the approximation $\tilde{\mathrm{G}}(x)$ and each of $\mathrm{G}_{m}(x)$ and $\mathrm{G}_{s}(x)$. For each subinterval, partitioned by $x_{1}=0.688$ and $x_{2}=6.725$, each approximation is evaluated at 1000 random points. The average errors are given in Table 2. Although $\mathrm{G}_{m}(x)$ and $\mathrm{G}_{s}(x)$ are approximations themselves, their behavior is consistent, and the error reflects the accuracy of the approximation on the given subinterval.

Average computation time per approximation (seconds)
Interval          $\tilde{\mathrm{G}}$    $\mathrm{G}_{s}$      $\mathrm{G}_{m}$
$[0,x_{1}]$       3.2951313293e-05        1.2483252225e-04      2.776233718e-02
$(x_{1},x_{2})$   2.9528400045e-05        8.6929393952e-05      2.874295949e-02
$[x_{2},15]$      3.2216610024e-05        4.2383601564e-05      2.809127640e-02
Table 1. Average time comparison for Figure 1. 1000 random points are taken in each subinterval partitioned by $x_{1}=0.688$ and $x_{2}=6.725$; the time for computing each approximation is recorded with the commands tic/toc.
Average point-wise error
Interval          $|\mathrm{G}_{m}-\tilde{\mathrm{G}}|$    $|\mathrm{G}_{s}-\tilde{\mathrm{G}}|$
$[0,x_{1}]$       4.3744537030e-17                         6.1597220741e-16
$(x_{1},x_{2})$   1.4010574651e-16                         2.1708872956e-15
$[x_{2},15]$      4.0216009902e-16                         6.8351638744e-16
Table 2. Average point-wise error comparison for Figure 1. 1000 random points are taken in each subinterval partitioned by $x_{1}=0.688$ and $x_{2}=6.725$.
Figure 2. Error plots for $x\in[10,10^{9}]$ with 2000 random test points. (a) $|\mathrm{G}_{m}-\mathrm{Q}_{N}|$: the native MatLab functions $\mathrm{G}_{m}(x)$ vs the asymptotic expansion $\mathrm{Q}_{12}(x)$. (b) $|\mathrm{G}_{s}-\mathrm{G}_{m}|$: $\mathrm{G}_{m}(x)$ vs $\mathrm{G}_{s}(x)$.

Figure 2 shows a comparison for large $x$, using 2000 random test points in the interval $[10,10^{9}]$. We compare the errors between the native MatLab functions $\mathrm{G}_{m}(x)$ and the asymptotic expansion $\mathrm{Q}_{12}(x)$ with the errors between $\mathrm{G}_{m}(x)$ and $\mathrm{G}_{s}(x)$. This illustrates the limit of precision of the approximation $\mathrm{G}_{m}(x)$. We note that $\mathrm{G}_{s}(x)$ and $\tilde{\mathrm{G}}(x)$ use the same asymptotic expansion for large $x$, but their implementations are different. It should also be noted that both approximations $\mathrm{G}_{s}(x)$ and $\tilde{\mathrm{G}}(x)$ are fast to compute; the times can be seen in Table 1. If a quicker implementation is needed, one could precompute the coefficients for given parameters $N_{1},N_{2},N_{3}$.
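Readers working in Python rather than MatLab can reproduce a comparison of this kind against scipy.special.fresnel, which returns the pair (S, C) in the same normalization as eq. (1). The sketch below, reusing fresnel_asymptotic from Section 3, is illustrative only; at very large arguments the differences also reflect the limits of the reference itself, mirroring the discussion of $\mathrm{G}_{m}(x)$ above.

    import numpy as np
    from scipy.special import fresnel

    # Spot check of Q_12 against SciPy's Fresnel integrals on large arguments,
    # in the spirit of Figure 2 (SciPy here plays the role of the reference).
    rng = np.random.default_rng(0)
    x = 10.0 ** rng.uniform(1.0, 9.0, size=2000)   # log-uniform points in [10, 1e9]

    S, C = fresnel(x)                              # SciPy returns (S, C)
    G_ref = C + 1j * S
    G_asym = np.array([fresnel_asymptotic(t, N=12) for t in x])
    print(np.max(np.abs(G_ref - G_asym)))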

6. Conclusion

In this work we have presented a systematic way to compute Fresnel integrals up to an arbitrary machine precision. Our numerical experiments verify this for double precision. If more accuracy is required, one can select the parameters in a similar fashion to what is done in Section 4. In Section 5 we are also able to justify the results from previous works where justification was heuristic. With a proper implementation, the method presented can improve the performance for computing Fresnel integrals in standard packages such as SciPy [19]. In computer aided geometric design, this technique can be used to plot clothoid segments for transition curves and clothoidal splines.

The underlying philosophy of combining Taylor series, some clever form of quadrature and asymptotic expansion can be investigated for the computation of other special functions and more exotic curves arising in CAGD.

References

  • [1] M. Alazah, S. N. Chandler-Wilde, and S. La Porte. Computing fresnel integrals via modified trapezium rules. Numerische Mathematik, 128(4):635–661, 2014.
  • [2] P. Bevilacqua, M. Frego, E. Bertolazzi, D. Fontanelli, L. Palopoli, and F. Biral. Path planning maximising human comfort for assistive robots. In 2016 IEEE Conference on Control Applications (CCA), pages 1421–1427. IEEE, 2016.
  • [3] J. Boersma. Computation of fresnel integrals. Mathematics of Computation, 14:380–380, 1960.
  • [4] T. Butz, M. Ehmann, and T.-M. Wolter. A realistic road model for real-time vehicle dynamics simulation. Technical report, SAE Technical Paper, 2004.
  • [5] W. Cody. Chebyshev approximations for the fresnel integrals. Mathematics of Computation, 22(102):450–453, 1968.
  • [6] NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, Release 1.0.28 of 2020-09-15. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
  • [7] S. Fleury, P. Soueres, J.-P. Laumond, and R. Chatila. Primitives for smoothing mobile robot trajectories. IEEE transactions on robotics and automation, 11(3):441–448, 1995.
  • [8] A. A. Goloborodko. Computer simulation of talbot phenomenon using the fresnel integrals approach. Optik, 127(10):4478–4482, 2016.
  • [9] S.-D. Kim and H. J. Lee. The comparison of analytical and numerical solutions for wave diffraction due to insular breakwater. International Journal of Physical Sciences, 5(3):226–237, 2010.
  • [10] K. D. Mielenz. Computation of fresnel integrals. Journal of research of the National Institute of Standards and Technology, 102(3):363, 1997.
  • [11] K. D. Mielenz. Computation of fresnel integrals. ii. Journal of research of the National Institute of Standards and Technology, 105(4):589–590, 2000.
  • [12] N. Montés, A. Herraez, L. Armesto, and J. Tornero. Real-time clothoid approximation by rational bezier curves. In 2008 IEEE International Conference on Robotics and Automation, pages 2246–2251. IEEE, 2008.
  • [13] B. Nagy and A. Kelly. Trajectory generation for car-like robots using cubic curvature polynomials. Field and Service Robots, 11, 2001.
  • [14] J. Reuter. Mobile robots trajectories with continuously differentiable curvature: an optimal control approach. In Proceedings. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications (Cat. No. 98CH36190), volume 1, pages 38–43. IEEE, 1998.
  • [15] M. Sandoval-Hernandez, H. Vazquez-Leal, L. Hernandez-Martinez, U. Filobello-Nino, V. Jimenez-Fernandez, A. Herrera-May, R. Castaneda-Sheissa, R. Ambrosio-Lazaro, and G. Diaz-Arango. Approximation of fresnel integrals with applications to diffraction problems. Mathematical Problems in Engineering, 2018, 2018.
  • [16] L. Sangeorzan, M. Parpalea, A. Nedelcu, and C. Aldea. Some aspects in the modelling of physics phenomena using computer graphics. In Proceedings of the 10th WSEAS international conference on Mathematical and computational methods in science and engineering, pages 518–523. World Scientific and Engineering Academy and Society (WSEAS), 2008.
  • [17] T. Shimobaba, T. Ito, N. Masuda, Y. Abe, Y. Ichihashi, H. Nakayama, N. Takada, A. Shiraki, and T. Sugie. Numerical calculation library for diffraction integrals using the graphic processing unit: the gpu-based wave optics library. Journal of Optics A: Pure and Applied Optics, 10(7):075308, 2008.
  • [18] E. L. Starostin, R. A. Grant, G. Dougill, G. H. M. van der Heijden, and V. G. A. Goss. The euler spiral of rat whiskers. Science advances, 6(3):eaax5145, January 2020.
  • [19] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261–272, 2020.
  • [20] L. Wang, K. T. Miura, E. Nakamae, T. Yamamoto, and T. J. Wang. An approximation approach of the clothoid curve defined in the interval [0, π\pi/2] and its offset by free-form curves. Computer-Aided Design, 33(14):1049–1058, 2001.
  • [21] H. Zhang, Y. Zhao, L. Cao, and G. Jin. Layered holographic stereogram based on inverse fresnel diffraction. Applied optics, 55(3):A154–A159, 2016.