
A gamma variate generator with shape parameter less than unity

Seiji Zenitani
Abstract

Algorithms for generating random numbers that follow a gamma distribution with shape parameter less than unity are proposed. Acceptance-rejection algorithms are developed, based on the generalized exponential distribution. The squeeze technique is applied to our method, and then piecewise envelope functions are further considered. The proposed methods are excellent in acceptance efficiency and promising in speed.

keywords:
Random number generation, gamma distribution, rejection sampling, Monte Carlo method
journal: arXiv
Affiliation: Space Research Institute, Austrian Academy of Sciences, Schmiedlstraße 6, 8042 Graz, Austria

1 Introduction

The gamma distribution is one of the most important probability distributions in statistics. It has applications in a broad range of fields in natural sciences, engineering, and social sciences. The probability density is defined by

f_{\rm GA}(x;\alpha,\lambda)=\frac{1}{\lambda^{\alpha}\Gamma(\alpha)}\,x^{\alpha-1}e^{-x/\lambda}\quad(x\geq 0)    (1)

where α (> 0) is the shape parameter, λ (> 0) the scale parameter, and Γ(x) the gamma function.

Generating a gamma variate, a random number drawn from a gamma distribution, is an important and fundamental problem in statistical computing. Owing to its importance, various numerical algorithms have been developed since the early days of the computer age [1, 3, 4, 8, 7, 9, 10, 6].

The nature of the gamma distribution is controlled by the shape parameter α. Accordingly, we need to use different gamma variate generators. When α > 1, the distribution function increases from zero at x = 0, has a maximum at x = (α − 1)λ, and then decays as x further increases. To generate a gamma variate with α > 1, the algorithm of Marsaglia & Tsang [8] is probably one of the best, as detailed in the literature [8, 10, 6]. When α = 1, the distribution reduces to the exponential distribution, and it is easy to generate a variate.

When 0 < α < 1, which is the range of our interest, the gamma distribution monotonically decreases from positive infinity at x = 0 to zero as x → ∞. To generate a gamma variate for 0 < α < 1, various acceptance-rejection methods have been developed [1, 3, 4, 7]. Combining power-law and exponential distributions, Ahrens & Dieter [1] proposed a piecewise rejection method. Best [3] extended the Ahrens & Dieter [1] method by adjusting a switching point and introducing a squeeze technique. Devroye [4] developed a different rejection method, based on the exponential power distribution. As of 2024, it is employed by a Python package, NumPy [2]. Using a generalized exponential distribution [5], Kundu & Gupta [7] developed a rejection method and its piecewise extensions. Finally, Tanizaki [9] developed a ratio-of-uniforms method, which works both for 0 < α < 1 and for α > 1.

In this note, we propose efficient gamma generators for 0 < α < 1, advancing an earlier study [7]. In Section 2, we develop an acceptance-rejection algorithm, using the generalized exponential distribution. In Section 3, we construct fractional functions for the squeeze technique. In Section 4, we discuss piecewise envelope functions. In Section 5, we evaluate the performance of the proposed and previous methods. Section 6 contains the discussion and summary.

2 Base algorithm

We construct a base algorithm to generate a random variate drawn from the gamma distribution with shape 0 < α < 1. We focus on the λ = 1 case, because the results for λ ≠ 1 can be obtained by multiplying the outputs by λ.

f_{\rm GA}(x;\alpha,1)=\frac{1}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-x}    (2)
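
For λ ≠ 1, the rescaling can be wrapped in one line of C. Here gamma_alpha_lt1() is a placeholder name for any of the unit-scale generators developed below (e.g., Algorithm 2); it is an assumption of this sketch, not a routine defined in this note.

/* Placeholder: any lambda = 1 generator for 0 < alpha < 1 (e.g., Algorithm 2 below). */
extern double gamma_alpha_lt1(double alpha);

/* Gamma(alpha, lambda) variate obtained by rescaling a Gamma(alpha, 1) variate. */
double gamma_general(double alpha, double lambda)
{
    return lambda * gamma_alpha_lt1(alpha);
}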

In line with Kundu & Gupta [7], we employ the generalized exponential (GE) distribution [5],

f_{\rm GE}(x;\alpha,\lambda)=\frac{\alpha}{\lambda}\,(1-e^{-x/\lambda})^{\alpha-1}e^{-x/\lambda}\quad(x\geq 0)    (3)

where α (> 0) and λ (> 0) are the shape and scale parameters. Note that Refs. [5] and [7] use different conventions for the scale parameter; we employ Gupta & Kundu [5]'s. The cumulative distribution function (CDF) of the GE distribution is

F_{\rm GE}(x;\alpha,\lambda)=(1-e^{-x/\lambda})^{\alpha}    (4)

Hereafter we consider the GE distribution with λ = 1. We generate a uniform random variate U1 in the (0, 1) range:

U_{1}\sim U(0,1)    (5)

Setting U1 equal to the CDF (4) with λ = 1 and inverting, we find that a random variate x that follows the generalized exponential distribution fGE(x; α, 1) is drawn by

x\leftarrow-\log(1-U_{1}^{1/\alpha})    (6)

To aid our discussion, we use a parameter β, a function g(x), and a rejection function R1(x):

\beta\equiv 1-\alpha,\qquad g(x)\equiv\frac{1-e^{-x}}{x}    (7)
R_{1}(x)\equiv\left(\frac{x}{1-e^{-x}}\right)^{\alpha-1}=\left(g(x)\right)^{\beta}    (8)

The function g(x) is continuous at x = 0. For x ≥ 0, since g(0) = 1 and g′(x) = (1 + x − e^x)/(x² e^x) ≤ 0, we find 0 < g(x) ≤ 1. Since 0 < β < 1, we find

0<R_{1}(x)\leq 1.    (9)

All the equalities hold at x = 0.

Comparing the GE distribution (3) with the gamma distribution (2), we rewrite (2) as follows.

f_{\rm GA}(x;\alpha,1)=\frac{1}{\Gamma(\alpha+1)}\,R_{1}(x)\,f_{\rm GE}(x;\alpha,1)    (10)

Here, the relation αΓ(α) = Γ(α+1) is used. Eq. (10) suggests that we can apply a rejection method to the GE-distributed variate x, to obtain a gamma-distributed variate. Using another uniform variate U2 ~ U(0,1), we evaluate the acceptance condition:

U_{2}\leq R_{1}(x)    (11)

or an equivalent inequality:

(U_{2})^{1/(1-\alpha)}\,x\leq U_{1}^{1/\alpha}    (12)

If either (11) or (12) is satisfied, then we accept this number x. If not, we discard the number and go back to the beginning. The final output follows a gamma distribution with shape α. These procedures are summarized in Algorithm 1 in Table 1. On average, 1/Γ(α+1) candidates (the total density of the envelope) are needed to generate one gamma variate.

Table 1: The base algorithm of our generators for 0 < α < 1.
Algorithm 1
repeat
      generate U1, U2 ~ U(0,1)
      b ← U1^(1/α),  x ← −log(1 − b)
      if  U2^(1/(1−α)) x ≤ b   return x
end repeat
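
As an illustration, a minimal C sketch of Algorithm 1 follows. The function uniform01() is a placeholder for a uniform (0,1) generator (for instance, a wrapper around gsl_rng_uniform); it is an assumption of this sketch, not part of the algorithm itself.

#include <math.h>

/* Placeholder: returns a uniform variate in (0,1). */
extern double uniform01(void);

/* Algorithm 1 (Table 1): gamma variate with shape 0 < alpha < 1 and scale 1. */
double gamma_alg1(double alpha)
{
    for (;;) {
        double u1 = uniform01();
        double u2 = uniform01();
        double b  = pow(u1, 1.0 / alpha);    /* b = U1^(1/alpha) */
        double x  = -log(1.0 - b);           /* GE(alpha, 1) variate, Eq. (6) */
        if (pow(u2, 1.0 / (1.0 - alpha)) * x <= b)   /* acceptance test (12) */
            return x;
    }
}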

3 Squeeze technique

We improve the speed of the algorithm by the so-called squeeze technique. We avoid the power and exponential operations in the acceptance condition (11) by using the following inequalities for x ≥ 0.

\frac{4-\beta x}{4+\beta x}\leq R_{1}(x)\leq\frac{4+(1-\beta)x}{4+(1+\beta)x}    (13)

To show this, we start from several inequalities:

e^{-x}\geq\frac{2-x}{2+x}\quad{\rm if}~x\geq 0    (14)
x^{\beta}\leq\frac{(1-\beta)+(1+\beta)x}{(1+\beta)+(1-\beta)x}\quad{\rm if}~0<x\leq 1~{\rm and}~0\leq\beta\leq 1    (15)
x^{\beta}\geq\frac{(1-\beta)+(1+\beta)x}{(1+\beta)+(1-\beta)x}\quad{\rm if}~x\geq 1~{\rm and}~0\leq\beta\leq 1    (16)
x^{\beta}\leq 1+\beta(x-1)\quad{\rm if}~x>0~{\rm and}~0\leq\beta\leq 1    (17)

We can prove them in the following way.

p(x)\equiv\frac{2-x}{2+x}\,e^{x},\quad p(0)=1,\quad p^{\prime}(x)=\frac{-x^{2}e^{x}}{(2+x)^{2}}<0    (18)
q(x)\equiv\frac{(1-\beta)+(1+\beta)x}{(1+\beta)+(1-\beta)x}\,x^{-\beta},\quad q(1)=1,\quad q^{\prime}(x)=\frac{\beta(\beta^{2}-1)(1-x)^{2}x^{-\beta-1}}{[(1+\beta)+(1-\beta)x]^{2}}\leq 0    (19)
r(x)\equiv-x^{\beta}+1+\beta(x-1),\quad r(1)=0,\quad r^{\prime}(x)=\beta(1-x^{\beta-1})<0~{\rm for}~x<1,\quad r^{\prime}(x)>0~{\rm for}~x>1    (20)

For the left inequality of (13), we differentiate the rejection function

R^{\prime}_{1}(x)=\beta\left(\frac{x-e^{x}+1}{x(e^{x}-1)}\right)R_{1}(x)    (21)

With help from the Taylor series e^x = Σ_{n=0}^{∞} x^n/n!, we find

\frac{R^{\prime}_{1}(x)}{R_{1}(x)}=-\beta\,\frac{\sum_{n=2}^{\infty}x^{n}/n!}{x\sum_{n=1}^{\infty}x^{n}/n!}=-\beta\,\frac{\sum_{n=2}^{\infty}\frac{1}{n}\frac{x^{n}}{(n-1)!}}{\sum_{n=2}^{\infty}\frac{x^{n}}{(n-1)!}}    (22)

On the right-hand side, all the terms in the numerator and in the denominator are positive, but the numerator terms contain additional factors of 1/2, 1/3, ⋯. This indicates that the ratio of the numerator to the denominator is at most 1/2. Together with the x = 0 case, we find

\frac{R^{\prime}_{1}(x)}{R_{1}(x)}\geq-\frac{\beta}{2}    (23)

Integrating (23) from 0 to x with R1(0) = 1, and then applying (14) with x replaced by βx/2, we obtain

R_{1}(x)\geq\exp\left(-\frac{\beta}{2}x\right)\geq\frac{4-\beta x}{4+\beta x}.    (24)

For the right inequality of (13), with help from (8), (14) and (15), we obtain

R_{1}(x)\leq\left(\frac{2}{2+x}\right)^{\beta}\leq\frac{4+(1-\beta)x}{4+(1+\beta)x}    (25)

Thus (24) and (25) prove (13).
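
As a quick numerical sanity check of (13) (not part of any algorithm), the two bounds can be tabulated against R1(x) on a grid; a minimal C sketch, with an arbitrary test value of α:

#include <stdio.h>
#include <math.h>

/* Spot-check the squeeze bounds (13) on a grid of x, for one test alpha. */
int main(void)
{
    double alpha = 0.3;                 /* arbitrary test value */
    double beta  = 1.0 - alpha;
    for (double x = 0.1; x <= 10.0; x += 0.1) {
        double r1    = pow(x / (1.0 - exp(-x)), alpha - 1.0);              /* Eq. (8)  */
        double lower = (4.0 - beta * x) / (4.0 + beta * x);                /* Eq. (13) */
        double upper = (4.0 + (1.0 - beta) * x) / (4.0 + (1.0 + beta) * x);
        if (r1 < lower || r1 > upper)
            printf("violation at x = %.2f\n", x);   /* should not trigger */
    }
    return 0;
}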

In practice, we utilize the inequalities (13), or equivalent logical expressions, before evaluating R1(x). Substituting β = 1 − α back, we show the final algorithm in Algorithm 2 (Table 2).

Table 2: A gamma generator with the squeeze technique.
Algorithm 2
repeat
      generate U1, U2 ~ U(0,1)
      b ← U1^(1/α),  x ← −log(1 − b)
      if  U2 (4 + (1−α)x) ≤ (4 + (α−1)x)   return x
      if  U2 (4 + (2−α)x) ≤ (4 + αx) then
          if  U2^(1/(1−α)) x ≤ b   return x
end repeat
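
Likewise, a minimal C sketch of Algorithm 2, with the same placeholder uniform01() generator as before:

#include <math.h>

/* Placeholder: returns a uniform variate in (0,1). */
extern double uniform01(void);

/* Algorithm 2 (Table 2): Algorithm 1 plus the squeeze bounds (13). */
double gamma_alg2(double alpha)
{
    for (;;) {
        double u1 = uniform01();
        double u2 = uniform01();
        double b  = pow(u1, 1.0 / alpha);
        double x  = -log(1.0 - b);
        /* quick accept: lower bound of (13) */
        if (u2 * (4.0 + (1.0 - alpha) * x) <= 4.0 + (alpha - 1.0) * x)
            return x;
        /* quick reject unless below the upper bound of (13) */
        if (u2 * (4.0 + (2.0 - alpha) * x) <= 4.0 + alpha * x) {
            if (pow(u2, 1.0 / (1.0 - alpha)) * x <= b)   /* full test (12) */
                return x;
        }
    }
}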

4 Piecewise envelope functions

Next, we split the envelope function for the rejection method into two parts. Across the switching point at x = s (> 0), we consider the GE distribution on the left and an exponential tail on the right. With help from the CDF of the GE distribution (Eq. (4)), we rewrite the gamma distribution (Eq. (2)) as follows.

f_{\rm GA}(x;\alpha,1)=\begin{cases}S_{L}\,R_{1}(x)\,\dfrac{f_{\rm GE}(x;\alpha,1)}{F_{\rm GE}(s;\alpha,1)} & ({\rm for~}0\leq x\leq s)\\[8pt] S_{R}\,R_{2}(x)\,e^{-(x-s)} & ({\rm for~}x>s)\end{cases}    (28)

Here, SL and SR are the densities of the left and right parts,

S_{L}\equiv\frac{(1-e^{-s})^{\alpha}}{\Gamma(\alpha+1)},\qquad S_{R}\equiv\frac{\alpha e^{-s}s^{\alpha-1}}{\Gamma(\alpha+1)},    (29)

and R2(x) is the second rejection function

R_{2}(x)\equiv\left(\frac{x}{s}\right)^{\alpha-1}    (30)

which satisfies 0 < R2(x) < 1 for x > s. Equation (28) tells us that we can apply the rejection method to the GE distribution with R1(x) in the left part, and the rejection method to the exponential density with R2(x) in the right part.

Using a uniform variate U1, when U1 ≤ SL/(SL + SR), we proceed to the GE part. We generate the GE-distributed number in the range [0, s]. Using a uniform variate U2, the GE variate x can be drawn by

x\leftarrow-\log\left(1-[F_{\rm GE}(s;\alpha,1)\,U_{2}]^{1/\alpha}\right)    (31)

Then we evaluate the acceptance condition, as discussed in Section 3.

In the right part, we obtain an exponential variate from U2,

x\leftarrow s-\log U_{2}    (32)

and then we apply the rejection method. We can similarly apply the squeeze technique to (30). For x ≥ 1, from (16) and (17), we obtain a squeeze relation

\frac{1}{1+\beta(x-1)}\leq x^{-\beta}\leq\frac{(1+\beta)+(1-\beta)x}{(1-\beta)+(1+\beta)x}    (33)

Finally, we choose s. First, we look for the s that minimizes the total density

S(\alpha,s)=S_{L}+S_{R}=\frac{(1-e^{-s})^{\alpha}+\alpha s^{\alpha-1}e^{-s}}{\Gamma(\alpha+1)}.    (34)

The solution can be found by solving (1 − e^{−s})^{α−1} + (α − 1)s^{α−2} − s^{α−1} = 0 with a root finder. In practice, we find an approximation by trial and error,

s^{*}=1.28+0.23\,\alpha    (35)

This gives a near-minimum S(α, s*) within an error of < 1.5 × 10^{-6}. Another choice is s = 1, in analogy with Ahrens & Dieter [1]. This simplifies several parameters in the algorithm.
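
To illustrate the choice of s, one can scan S(α, s) of (34) on a grid and compare the minimizer with the approximation (35). A rough C sketch; the scan range and step are arbitrary choices of this example, not prescribed by the method.

#include <stdio.h>
#include <math.h>

/* Total density S(alpha, s) of Eq. (34). */
static double total_density(double alpha, double s)
{
    return (pow(1.0 - exp(-s), alpha) + alpha * pow(s, alpha - 1.0) * exp(-s))
           / tgamma(alpha + 1.0);
}

int main(void)
{
    for (int i = 1; i <= 9; i++) {
        double alpha  = 0.1 * i;
        double s_best = 0.5, v_best = total_density(alpha, 0.5);
        for (double s = 0.5; s <= 3.0; s += 1.0e-4) {   /* brute-force scan */
            double v = total_density(alpha, s);
            if (v < v_best) { v_best = v; s_best = s; }
        }
        double s_star = 1.28 + 0.23 * alpha;            /* approximation (35) */
        printf("alpha = %.1f  s_min = %.4f  s* = %.4f  S(s*) - S_min = %.2e\n",
               alpha, s_best, s_star, total_density(alpha, s_star) - v_best);
    }
    return 0;
}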

These procedures are shown in Algorithm 3 (Table 3). The first three lines initialize coefficients, and then the lines inside the loop generate the variate. We reuse (SL + SR)/SL · U1 or (SL + SR)/SR · (U1 − SL/(SL + SR)) as a uniform variate, in order to reduce the total number of random variates. When s = 1, we can further simplify the code, which is a trivial task.

Table 3: A method with the piecewise envelope functions.
Algorithm 3
s ← 1.28 + 0.23α,  t ← exp(−s)    // or s = 1, t = 1/e
SL ← (1 − t)^α,  SR ← α t s^(α−1),  S ← SL + SR
p1 ← SL/S,  d2 ← S/SR
repeat
      generate U1, U2 ~ U(0,1)
      if  U1 ≤ p1 then
          b ← (S U1)^(1/α),  x ← −log(1 − b)
          if  U2 (4 + (1−α)x) ≤ (4 + (α−1)x)   return x
          if  U2 (4 + (2−α)x) ≤ (4 + αx) then
              if  U2^(1/(1−α)) x ≤ b   return x
      else
          x ← s − log(d2 (U1 − p1)),  y ← x/s
          if  U2 (α + (1−α)y) ≤ 1   return x
          if  U2 (α + (2−α)y) ≤ (2 − α + αy) then
              if  U2 ≤ y^(α−1)   return x
      endif
end repeat
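
Finally, a minimal C sketch of Algorithm 3 with a general s, again using the placeholder uniform01(). The case s = 1 allows the simplifications mentioned above (t = 1/e and s^(α−1) = 1) and is omitted here for brevity.

#include <math.h>

/* Placeholder: returns a uniform variate in (0,1). */
extern double uniform01(void);

/* Algorithm 3 (Table 3): piecewise envelope, GE body plus exponential tail. */
double gamma_alg3(double alpha)
{
    /* initialization; done once per alpha when drawing many variates */
    double s  = 1.28 + 0.23 * alpha;               /* or s = 1 */
    double t  = exp(-s);
    double sl = pow(1.0 - t, alpha);               /* S_L up to the 1/Gamma(alpha+1) factor */
    double sr = alpha * t * pow(s, alpha - 1.0);   /* S_R up to the same factor */
    double S  = sl + sr;
    double p1 = sl / S;
    double d2 = S / sr;

    for (;;) {
        double u1 = uniform01();
        double u2 = uniform01();
        if (u1 <= p1) {                            /* GE part: 0 <= x <= s */
            double b = pow(S * u1, 1.0 / alpha);
            double x = -log(1.0 - b);
            if (u2 * (4.0 + (1.0 - alpha) * x) <= 4.0 + (alpha - 1.0) * x)
                return x;
            if (u2 * (4.0 + (2.0 - alpha) * x) <= 4.0 + alpha * x)
                if (pow(u2, 1.0 / (1.0 - alpha)) * x <= b)
                    return x;
        } else {                                   /* exponential tail: x > s */
            double x = s - log(d2 * (u1 - p1));
            double y = x / s;
            if (u2 * (alpha + (1.0 - alpha) * y) <= 1.0)
                return x;
            if (u2 * (alpha + (2.0 - alpha) * y) <= 2.0 - alpha + alpha * y)
                if (u2 <= pow(y, alpha - 1.0))
                    return x;
        }
    }
}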

5 Numerical tests

We have carried out several benchmark tests. We wrote C codes for the eight methods: 1) the Ahrens & Dieter [1] method, 2) the Best [3] method, 3) the Devroye [4] method, 4) the Kundu & Gupta [7] method (Algorithm 3 in their paper), 5) Algorithm 1 (Table 1), 6) Algorithm 2 (Table 2), 7) Algorithm 3 (Table 3) with s = s*, and 8) Algorithm 3 with s = 1. In the last case, we have simplified the algorithm, as mentioned in Sec. 4. We use a uniform random generator (gsl_rng_uniform) in the GNU Scientific Library. We use the Intel oneAPI compiler (icx) v2024.1 with the -lgsl -O0 -lm options and the clang compiler (clang) v14.0 with the -lgsl -lm options, on an AMD Ryzen 5955 processor on Ubuntu Linux 24.04.

Figure 1: Normalized number densities to generate a gamma distribution. Theoretical predictions (curves) and numerical results (circles) are shown, as a function of α.

Figure 1 shows the normalized number of random variates to generate a gamma distribution, as a function of α. The curves indicate theoretical predictions. Algorithms 1 & 2 require 1/Γ(α+1), the same number as the most efficient method to date, the Devroye [4] method (not shown). Algorithm 3's number is given by (34). The circles indicate our Monte Carlo results for eleven shape parameters, α = 0.01, 0.1, 0.2, 0.3, ⋯, 0.9, 0.99. In each case, 10^8 random numbers are generated. They are in excellent agreement with the theories. The proposed methods, Algorithm 3 in particular, need fewer random numbers than most of the other algorithms. The s = s* case is better than the s = 1 case by ≲ 0.8%.

Figure 2: (a) Elapsed time to generate 10^8 gamma variates, including the parameter initialization. (b) Elapsed time to generate 10^8 gamma variates all at once. See the main text for details.

Next we compare the speed of the algorithms. Considering practical applications, two settings are considered [9]. In the first setting, we repeatedly draw one random variate 10^8 times for each of the 11 parameters. We measure the elapsed time in seconds three times per method, and then take the average. Figure 2(a) shows the results. The Devroye [4] method and Algorithm 2 are the fastest. They outperform the well-established methods [1, 3]. Although the two compilers give different results, comparing Algorithms 1 and 2 shows that the squeeze technique is effective. Note that the Kundu & Gupta [7] method does not use the squeeze technique.

In the second setting, we draw 10^8 random variates all at once, and then repeat this procedure for the 11 parameters. Some algorithms initialize internal parameters before generating many random variates. For example, in Algorithm 3, the first three lines in Table 3 are used only once per parameter, before generating many variates at once; in the first setting, these lines are repeated each time we draw a variate. Figure 2(b) shows the elapsed times. For reference, results with high-level compiler optimizations (the -O2 option) are also displayed. Now the piecewise algorithms, such as the Best [3] method and Algorithm 3, become faster than in Fig. 2(a), suggesting that they are fast once the parameters are initialized. Together with the Best [3] method and the Devroye [4] method, Algorithms 2–3 are close and promising. In Algorithm 3, the s = 1 cases are always faster than the s = s* cases.

6 Discussion and Summary

We have developed rejection algorithms to generate a gamma variate with shape parameter 0 < α < 1. In line with Kundu & Gupta [7], we have employed the GE distribution [5]. The GE distribution is useful, because it resembles the gamma distribution and because its CDF has a simple form. Kundu & Gupta [7] employed the GE distribution with scale parameter 2, that is, fGE(x; α, 2), while we employ scale parameter 1, fGE(x; α, 1), to obtain better acceptance efficiency. Therefore, the parameters and the rejection function in their study are different from ours. Despite the different strategy, our acceptance efficiency is as good as the best one [4].

To improve the performance, we have constructed the squeeze functions in (13). As demonstrated, this makes the code faster. One may notice that the left-hand side (the lower bound) can be negative for x > 4/β, but logically there is no problem: the quick-accept test simply never succeeds there, and the algorithm falls through to the next test. In addition, since FGE(4/β; α, 1) ≥ FGE(4; α, 1) = (1 − e^{−4})^α ≳ 0.982, more than 98% of the random numbers fall in x < 4/β, where the lower-bound condition is useful.

We have considered piecewise extensions of our algorithms, with an exponential tail [1, 3, 7]. This makes the acceptance efficiency even better. The squeeze functions are presented in (33). The left inequality (the lower bound) is identical to the one in the Best [3] method, while the right one (the upper bound) is a new addition. This addition actually makes the Best [3] method a few percent faster (not shown). As for Algorithm 3, despite the near-optimum density S(α, s*) < S(α, 1), the s = 1 case is always faster than the s = s* case, because s = 1 simplifies the algorithm. Therefore s = 1 is our choice. In the second setting (Fig. 2(b)), Algorithm 3 with s = 1 is always faster than Algorithm 2, owing to its good acceptance rate. It is promising when drawing many variates at once. In the first setting (Fig. 2(a)), in agreement with Tanizaki [9], several methods become slower due to the parameter initialization. This is also the case for Algorithm 3. In contrast, Algorithm 2 remains fast in both settings, because of its simplicity.

Considering several aspects, we conclude that Algorithm 2 is one of the best two generators, together with the Devroye [4] method. These two often outperform the others, including the well-established methods [1, 3]. Algorithm 2 is the fastest with the Intel compiler, and the Devroye [4] method is the fastest with clang. The proposed methods are simple enough and easy to implement, as shown in Tables 1–3. In addition to the compiler dependence, there can be a CPU dependence (and possibly a GPU dependence). Considering various runtime environments, we want to have as many good algorithms as possible, and ours are certainly among them.

In summary, we have developed gamma variate generators for 0 < α < 1, using the GE distribution, the squeeze technique, and piecewise envelope functions. The proposed methods are excellent in acceptance efficiency. The numerical tests suggest that Algorithm 2 is one of the best two generators for 0 < α < 1. Algorithm 3 with s = 1 can be a good option when drawing many variates at once. We hope that the proposed methods are useful in practical applications.

Data Availability

Program codes in C language are available from the author upon request.

References

  • Ahrens & Dieter [1974] Ahrens, J. H. and Dieter, U., “Computer Methods for Sampling from Gamma, Beta, Poisson and Binomial Distributions,” Computing 12, 223–246 (1974).
  • Baumgarten & Patel [2022] Baumgarten, C. and Patel, T., “Automatic random variate generation in Python,” Proc. of the 21st Python in Science Conf. (SciPy 2022), 46–51 (2022).
  • Best [1983] Best, D. J., “A Note on Gamma Variate Generators with Shape Parameter less than Unity,” Computing 30, 185–188 (1983).
  • Devroye [1986] Devroye, L., Non-Uniform Random Variate Generation, Springer-Verlag (1986), Chap. VII, Sec. 2.6, p. 304.
  • Gupta & Kundu [1999] Gupta, R. D. and Kundu, D., “Generalized exponential distributions,” Australian and New Zealand Journal of Statistics 41, 173–188 (1999).
  • Kroese et al. [2011] Kroese, D. P., Taimre, T., and Botev, Z. I., Handbook of Monte Carlo methods, John Wiley & Sons. (2011).
  • Kundu & Gupta [2007] Kundu, D. and Gupta, R. D., “A convenient way of generating gamma random variables using generalized exponential distribution,” Computational Statistics & Data Analysis 51, 2796–2802 (2007).
  • Marsaglia & Tsang [2000] Marsaglia, G. and Tsang, W. W., “A Simple Method for Generating Gamma Variables,” ACM Transactions on Mathematical Software 26, 363–372 (2000).
  • Tanizaki [2008] Tanizaki, H., “A Simple Gamma Random Number Generator for Arbitrary Shape Parameters,” Economics Bulletin 3, 1–10 (2008).
  • Yotsuji [2010] Yotsuji, T., Random number generation of probability distributions for computer simulations, Pleiades Publishing, Nagano (2010).