Simple rejection Monte Carlo algorithm and its application to multivariate statistical inference
Abstract
The Monte Carlo algorithm is increasingly widely used; its central step is computer-based random sampling from stochastic models. Although both Markov Chain Monte Carlo (MCMC) and Rejection Monte Carlo serve as sampling methods, the latter sees far fewer applications than the former. This paper therefore first gives a concise introduction to the theory of the Rejection Monte Carlo algorithm and its implementation techniques, aiming to aid conceptual understanding and program implementation. A simplified rejection Monte Carlo algorithm is then formulated. Finally, taking multivariate distribution sampling and multivariate integration as examples, the study explores the specific application of the algorithm in statistical inference.
Keywords: Monte Carlo; Random simulation; Sampling algorithm; Statistical inference
1 Introduction
Statistical inference plays a crucial role in all scientific fields, serving to provide theoretical underpinnings for quantitative models of uncertainty and validate data-based models. Modern statistical inference methods based on sampling enable us to comprehend complex phenomena (Warne et al., 2018). Sampling can be categorized into two types: population sampling and data set sampling. The former is often referred to as Monte Carlo or random simulation, which can be further divided into distribution function-based sampling (Meng et al., 2022) and density function-based sampling (Chen and Yang, 2006; Andrieu and Freitas, 2003). The latter is commonly known as subsampling (Ai et al., 2021; Li et al., 2022), which also falls under the category of Monte Carlo when we consider a data set as a special representation of the population.
Monte Carlo sampling is commonly employed to generate random numbers of random variables or random vectors, which are then used to estimate unknown mathematical expectations and their confidence intervals. This well-known Monte Carlo algorithm can also provide an estimate of the complete distribution (L'Ecuyer et al., 2022). The fundamental idea behind the Monte Carlo algorithm is to use sampled data to perform various calculations and feature analyses. Generating random numbers that follow a probability distribution model is therefore a crucial technology for these algorithms, usually carried out by computer. Numerous sampling methods exist; among them, Markov Chain Monte Carlo (MCMC) (Robert and Casella, 2010) is widely adopted owing to its ease of implementation. The MCMC technique serves as the gold standard for Bayesian inference (Nemeth and Fearnhead, 2021) and is frequently used to draw samples from challenging posterior distributions (Vyner et al., 2023). However, correlations exist between data generated by MCMC, posing challenges for convergence assessment. In recent years, the Rejection Monte Carlo (RMC) algorithm has appeared sporadically in the literature under alternative names such as screening sampling or accept-reject sampling (Warne et al., 2018; Martino and Míguez, 2011; Ma et al., 2022).
Compared to MCMC, there is relatively little literature on RMC. RMC is commonly employed as a building block of more complex Monte Carlo algorithms, but it is itself a standard Monte Carlo technique that filters samples generated by a proposal distribution through the density function to obtain samples from the target distribution (Chen and Yang, 2006; Martino and Míguez, 2011). In comparison with MCMC, RMC possesses unique advantages (Warne et al., 2018): (1) RMC is user-friendly; (2) the generated samples are independent and identically distributed; (3) computational efficiency is unaffected by tuning parameters, and in particular no heuristic definition of a transition probability is required. However, the drawbacks of RMC are also evident: it can be challenging to determine an appropriate proposal distribution, and when the proposal deviates significantly from the target distribution, computational efficiency becomes low. Therefore, previous studies have primarily focused on sampling efficiency and on determining efficient proposal distributions (Chen and Yang, 2006; Martino and Míguez, 2011). Nevertheless, with continuous advances in computer technology, efficiency is no longer a key constraint in many cases. The flexibility of RMC with respect to the proposal distribution has thereby been further enhanced, making its application more widespread. Consequently, this paper provides insights into convenient learning perspectives and program implementation aspects of RMC techniques to enable sampling from arbitrary models, especially multivariate models, and explores their versatility and practical value.
2 Theoretical Foundation
The RMC algorithms are a versatile approach for generating independent samples from a target density (Martino and Míguez, 2011). Let f(x) be the probability density function of a target random variable X, and let g(x) be a proposal density function that can be easily sampled, with the same support supp(f) as the function f. Additionally, let M be a constant satisfying the following condition:
f(x) ≤ M g(x),  for all x ∈ supp(f).   (1)
The RMC algorithms are typically delineated in the literature as follows.
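As a concrete sketch of this standard procedure, the following Python fragment implements rejection sampling for an illustrative target chosen here purely for demonstration (the Beta(2,2) density f(x) = 6x(1 − x) on [0, 1] with a Uniform(0, 1) proposal; neither is an example from the paper):

```python
import random

def rejection_sample(f, g_sample, g_pdf, M, n):
    """Rejection Monte Carlo: repeatedly draw Y ~ g and U ~ Uniform(0, 1);
    accept Y as a sample from f when U <= f(Y) / (M * g(Y))."""
    samples = []
    while len(samples) < n:
        y = g_sample()
        if random.random() <= f(y) / (M * g_pdf(y)):
            samples.append(y)
    return samples

# Illustrative target: f(x) = 6x(1 - x) on [0, 1] (a Beta(2,2) density);
# proposal g = Uniform(0, 1), so M = sup f/g = f(1/2) = 1.5 satisfies (1).
f = lambda x: 6 * x * (1 - x)
xs = rejection_sample(f, random.random, lambda x: 1.0, 1.5, 20000)
```

With this choice M = 1.5, so the acceptance probability is 1/M ≈ 0.67.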
A merit function, such as M g(x) in Inequality (1), plays a crucial role in the efficiency of the sampling. When the deviation between the proposal g(x) and f(x) is significant and M is chosen excessively large, the difference between the resulting merit function M g(x) and the objective function f(x) becomes substantial. This leads to a low acceptance probability f(x)/(M g(x)), causing most points to be rejected during random selection and ultimately reducing sampling efficiency.
The proposal g(x) is typically designed to enhance sampling efficiency by closely approximating f(x), while the constant is determined as M = sup_x f(x)/g(x).
Theorem 1. Let the value of the random variable X be a random number generated by Algorithm 1; then X has density f.
Proof. Denote by Y a random number generated from the proposal g and by U a uniform random number on (0, 1), as in Algorithm 1. As the processes generating Y and U are mutually independent, the joint probability density function of Y and U can be expressed as follows:
f_{Y,U}(y, u) = g(y),  y ∈ supp(f), 0 < u < 1.   (2)
The cumulative probability function of the accepted random number X is

F_X(t) = P(Y ≤ t | U ≤ f(Y)/(M g(Y)))
       = ∫_{−∞}^{t} [f(y)/(M g(y))] g(y) dy / ∫_{−∞}^{∞} [f(y)/(M g(y))] g(y) dy
       = (1/M) ∫_{−∞}^{t} f(y) dy / (1/M)
       = ∫_{−∞}^{t} f(y) dy,

which is exactly the cumulative distribution function of the target density f; hence X has density f. ∎
The identification of the optimal merit function is a central topic in the existing RMC literature (Chen and Yang, 2006; Martino and Míguez, 2011). However, its practical implementation poses significant challenges, making it a bottleneck that hinders the progress of the RMC algorithm. According to Algorithm 1, the construction of the proposal distribution is unrestricted and may deviate from the target distribution. This discrepancy inevitably leads to inefficient sampling; nevertheless, advances in computer technology have compensated for this limitation. Consequently, the present approach favors uniform or piecewise-uniform distribution models as candidates for the proposal distribution. By employing uniformly or piecewise-uniformly distributed merit functions, it is possible to significantly enhance both the versatility and the comprehensibility of RMC techniques while easily facilitating computer-based sampling from arbitrary distribution models.
An intuitive understanding of sampling from the target density is to allocate more samples in regions with higher density values and fewer samples in regions with lower density values, in proportion to the value of the density function. This can easily be achieved by employing a uniform distribution. For instance, consider sampling from a given density function

f(x),  a ≤ x ≤ b,  with c = max_x f(x).   (3)
The uniform distribution can be employed to distribute points evenly across the rectangular area S = [a, b] × [0, c]. The points located between the density function and the x-axis then reflect the characteristics of the density function and are considered valid. As depicted in Figure 1, the horizontal coordinates of these accepted points constitute the sample.
Figure 1: Uniform points over the rectangle S; points under the density curve are accepted and their horizontal coordinates form the sample.
The preceding analysis suggests a simple technique, called Simple Rejection Monte Carlo (SRMC), for sampling from a model f(x); it is described as Algorithm 2.
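A minimal sketch of this uniform-rectangle scheme, again using an illustrative density f(x) = 6x(1 − x) on [0, 1] (an assumption for demonstration, not an example from the paper):

```python
import random

def srmc_1d(f, a, b, c, n_draws):
    """Simple Rejection Monte Carlo: scatter points uniformly over the
    rectangle [a, b] x [0, c], with c >= max f; keep the horizontal
    coordinate of every point falling under the density curve."""
    accepted = []
    for _ in range(n_draws):
        x = random.uniform(a, b)
        if random.uniform(0.0, c) <= f(x):
            accepted.append(x)
    return accepted

# Illustrative density f(x) = 6x(1 - x) on [0, 1], with maximum value 1.5.
xs = srmc_1d(lambda x: 6 * x * (1 - x), 0.0, 1.0, 1.5, 30000)
```

The expected acceptance rate is the area under the curve divided by the rectangle's area, here 1/(1.5 × 1) ≈ 0.67.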
Given the ease of drawing samples uniformly from the support set of the function f, Algorithm 2 can be straightforwardly extended to an SRMC technique for sampling from a multivariate model f(x1, …, xn), as described in Algorithm 3.
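The multivariate extension can be sketched as follows, with a hypothetical bivariate target f(x, y) = 4xy on [0, 1]² chosen purely for illustration:

```python
import random

def srmc_nd(f, bounds, c, n_draws):
    """Multivariate SRMC: draw a point uniformly from the hyper-rectangle
    given by `bounds`, then accept it when a Uniform(0, c) draw falls
    below the density value, where c >= max f."""
    accepted = []
    for _ in range(n_draws):
        p = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        if random.uniform(0.0, c) <= f(*p):
            accepted.append(p)
    return accepted

# Illustrative bivariate target: f(x, y) = 4xy on [0, 1]^2 (maximum 4).
pts = srmc_nd(lambda x, y: 4 * x * y, [(0.0, 1.0), (0.0, 1.0)], 4.0, 40000)
```

Under this density the marginal of X has mean 2/3, which the accepted sample should reproduce.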
3 Sampling implementation of multivariate distribution models
As computing technology continues to advance, computational efficiency is often no longer the primary constraint, and the implementability of algorithms assumes greater significance. Algorithms 2 and 3 can be easily implemented programmatically, thereby facilitating intelligent computation. The strengths and weaknesses of an algorithm can be measured in terms of accuracy and program runtime. The following examples demonstrate that RMC sampling for multivariate distributions can be readily implemented, with bias and program runtime serving as indicators of accuracy and feasibility. All programs are implemented in the R language (Meng, 2022).
Example 1. Let the density function of a two-dimensional Gaussian random vector (X1, X2) be

f(x1, x2) = (1/(2π σ1 σ2 √(1 − ρ²))) exp{ −[((x1−μ1)/σ1)² − 2ρ((x1−μ1)/σ1)((x2−μ2)/σ2) + ((x2−μ2)/σ2)²] / (2(1 − ρ²)) },   (4)

where μ1, μ2 are the means, σ1, σ2 the standard deviations, and ρ = 0.2 the correlation coefficient. Generate a sample of the random vector (X1, X2) by RMC technology.
According to Algorithm 3, a square region covering essentially all of the probability mass is selected, together with a constant c ≥ max f. The scatter plot of the sampling is presented in Figure 2. The sample sizes, from left to right, are 1000, 10000 and 100000; the corresponding program running times are 0.59, 5.19 and 51.65 seconds, respectively. The correlation coefficients calculated from the extracted samples are 0.237, 0.189 and 0.202, respectively, which deviate only slightly from the theoretical correlation coefficient of 0.2; moreover, as the number of samples increases, the absolute value of this deviation decreases. The scatter plot (Figure 2) visually demonstrates the structural relationship between the variables: most values fall within the selected square region, while occurrences outside this range are negligible.
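The Example 1 computation can be sketched as below. The means and variances are taken as 0 and 1 and the square as [−4, 4]², all assumptions for illustration since the excerpt states only ρ = 0.2:

```python
import math
import random

RHO = 0.2  # correlation coefficient stated in the text

def bvn_density(x1, x2, rho=RHO):
    # Bivariate normal density; means 0 and variances 1 are assumptions.
    q = (x1 * x1 - 2 * rho * x1 * x2 + x2 * x2) / (1 - rho * rho)
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(1 - rho * rho))

def srmc_bvn(n_draws, half_width=4.0):
    c = bvn_density(0.0, 0.0)  # the density's maximum, attained at the mean
    pts = []
    for _ in range(n_draws):
        x1 = random.uniform(-half_width, half_width)
        x2 = random.uniform(-half_width, half_width)
        if random.uniform(0.0, c) <= bvn_density(x1, x2):
            pts.append((x1, x2))
    return pts

pts = srmc_bvn(200000)
```

The sample correlation computed from `pts` should lie close to 0.2, mirroring the values reported above.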
Figure 2: Scatter plots of SRMC samples from the bivariate Gaussian model for sample sizes 1000, 10000 and 100000.
Probability calculations play a crucial role in statistical inference, often necessitating the use of integral computations. However, solving these integrals analytically can be challenging, making sampling methods an invaluable tool for their resolution. This is exemplified below with an ordinary double integral.
Example 2. Calculate the double integral I = ∬_D f(x, y) dx dy, where the region D is bounded by given curves and the x-axis, and f(x, y) is a given integrand.
Construct a rectangular area S containing D. The integrand f is then easily integrated over S; write C = ∬_S f(x, y) dx dy. The function

g(x, y) = f(x, y) / C

can be viewed as a density function of a random vector (X, Y). The required double integral can then be transformed into

I = ∬_D f(x, y) dx dy = C ∬_S I_D(x, y) g(x, y) dx dy = C · P((X, Y) ∈ D),

where I_D is the indicator function of the set D, i.e., I_D(x, y) = 1 when (x, y) ∈ D and I_D(x, y) = 0 otherwise. Since g is a density function on S, it follows that P((X, Y) ∈ D) ≈ n(S2 ∩ D)/n(S2), where the symbol n(·) denotes the number of elements in the corresponding set and S2 is a sample drawn from g. Such a sample can be obtained by drawing points uniformly over the region S and screening them with the density. Therefore, the target integral can be approximated as follows.
I ≈ C · n(S2 ∩ D) / n(S2).   (5)
From Equation (5) and Algorithm 3, to compute the objective integral, uniform sampling is first performed on the region S, and the resulting uniform samples are then screened by the density function g. Denote the set of uniform random numbers drawn first by S1 and the set of random numbers obtained by the subsequent screening by S2. The objective integral can then be computed by counting the relevant elements of S1 and S2. The results, computed with an R program, are given below; each sample size was run 10 times and the results averaged.
| Samples n(S2) | 100 | 1000 | 10000 | 100000 | 1000000 |
|---|---|---|---|---|---|
| Estimated value − real value | 0.173 | 0.032 | 0.059 | 0.009 | 0.00046 |
| Run time (seconds) | 0.0062 | 0.055 | 0.56 | 5.49 | 50.1 |
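The Equation (5) estimator can be sketched as below. The integrand, region and rectangle here are hypothetical stand-ins, not the paper's own example: f(x, y) = x + y on S = [0, 1]² (so C = 1) and D the quarter disk x² + y² ≤ 1, for which the exact value is I = 2/3:

```python
import random

def estimate_double_integral(f, bounds, c, C, in_D, n_draws):
    """Estimate I = integral of f over D via Equation (5): screen uniform
    points on S with an envelope c >= max f so that the accepted points
    follow g = f / C, then multiply C by the fraction of accepted points
    that fall in D."""
    accepted = 0
    in_region = 0
    for _ in range(n_draws):
        x = random.uniform(*bounds[0])
        y = random.uniform(*bounds[1])
        if random.uniform(0.0, c) <= f(x, y):
            accepted += 1
            if in_D(x, y):
                in_region += 1
    return C * in_region / accepted

# Hypothetical instance: f(x, y) = x + y on S = [0, 1]^2, C = 1,
# D = {x^2 + y^2 <= 1}; the exact integral over D is 2/3.
est = estimate_double_integral(lambda x, y: x + y, [(0.0, 1.0), (0.0, 1.0)],
                               2.0, 1.0,
                               lambda x, y: x * x + y * y <= 1.0, 200000)
```

As in the table above, the absolute error shrinks as the number of screened samples grows.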
4 Summary
Previous literature primarily focuses on rejection sampling algorithms for one-dimensional random variables, which require constructing a proposal distribution and merit function that are both easy to sample from and closely approximate the target model. Despite its high sampling efficiency, such a construction lacks operability, limiting its intelligent implementation in statistical inference. This paper highlights the versatility and ease of operation of the RMC technique and introduces the SRMC technique as a solution. The proposed technique is applicable to both one-dimensional and multivariate random variables, offering generalizability and comprehensibility. Specific examples demonstrate that the algorithm can be easily programmed and applied in multivariate statistical analysis with high accuracy. It thus facilitates wider adoption of the SRMC technique while providing an alternative route to intelligent implementation of multivariate statistical inference.
References
- [1] Ai, M., Yu, J., Zhang, H. & Wang, H. (2021). Optimal Subsampling Algorithms for Big Data Regressions[J]. Statistica Sinica, 31, 749-772.
- [2] Andrieu, C. & Freitas, N. (2003). An Introduction to MCMC for Machine Learning[J]. Machine Learning, 50, 5-43.
- [3] Chen, W. & Yang, Z. (2006). A geometric interpretation of the rejection sampling algorithm and a random number generation algorithm from a curved edge trapezoidal probability density[J]. Mathematical Statistics and Management, 25(24), 494-504.
- [4] L’Ecuyer, P., Puchhammer, F. & Abdellah, B. (2022). Monte Carlo and quasi-Monte Carlo density estimation via conditioning[J]. INFORMS Journal on Computing, 34(3), 1729-1748.
- [5] Li, L., Du, M. & Zhang, X. (2022). A distributed two-step subsampling algorithm for big data based on logistic regression modeling[J]. Mathematical Statistics and Management, 41, 858-866.
- [6] Ma, M., Li, H., Lan, C. & Liu, C. (2022). Reliability update of structural systems based on rejection sampling algorithms[J]. Engineering Mechanics, 3, 11-22.
- [7] Martino, L. & Míguez, J. (2011). A generalization of the adaptive rejection sampling algorithm[J]. Stat Comput, 21, 633-647.
- [8] Meng, X. (2022). R Data Science [M]. Nanjing: River Sea University Press Ltd..
- [9] Meng, X., Wang, J., & Liu, P. (2022). Inverse conversion Monte Carlo algorithm and its applications[J]. Science Technology and Innovation, 3, 55-58.
- [10] Nemeth, C. & Fearnhead, P. (2021). Stochastic gradient Markov chain Monte Carlo[J]. Journal of the American Statistical Association, 116(533), 433-450.
- [11] Robert, C. & Casella, G. (2010). Introducing Monte Carlo Methods with R [M]. Springer.
- [12] Vyner, C., Nemeth, C. & Sherlock, C. (2023). SwISS: A scalable Markov chain Monte Carlo divide-and-conquer strategy[J]. Stat, 12(1),e523.
- [13] Warne, D., Baker, R. & Simpson, M. (2018). Multilevel rejection sampling for approximate Bayesian computation[J]. Computational Statistics and Data Analysis, 124, 71-86.