An Introduction to Hamiltonian Monte Carlo Method for Sampling
Abstract
The goal of this article is to introduce the Hamiltonian Monte Carlo (HMC) method – a Hamiltonian dynamics-inspired algorithm for sampling from a Gibbs density $\pi(x) \propto e^{-f(x)}$. We focus on the “idealized” case, where one can compute continuous trajectories exactly. We show that idealized HMC preserves $\pi$ and we establish its convergence when $f$ is strongly convex and smooth.
1 Introduction
Many fundamental tasks in disciplines such as machine learning (ML), optimization, statistics, theoretical computer science, and molecular dynamics rely on the ability to sample from a probability distribution. Sampling is used both as a tool to select training data and to generate samples and infer statistics from models such as highly multi-class classifiers, Bayesian networks, Boltzmann machines, and GANs; see [GBW14, SM08, KF09, GBC16]. Sampling is also useful for making the optimization methods used to train deep networks more robust, by allowing them to escape local minima/saddle points and preventing them from overfitting [WT11, DPG+14, SM08]. Sampling methods are often the key to solving integration, counting, and volume computation problems that are rampant in various applications at the intersection of the sciences and ML [DFK91]. In chemistry and molecular biology, sampling is used to estimate reaction rates and simulate molecular dynamics, which, in turn, are used for discovering new materials and drugs [CCHK89, RMZT02, ZRC13]. In finance, sampling is used to optimize the expected return of portfolios [DGR03]. Sampling algorithms are also used to solve partial differential equations in applications such as geophysics [CGST11].
Mathematically, several of the above applications reduce to the following problem: given a function $f : \mathbb{R}^d \to \mathbb{R}$, generate independent samples from the distribution $\pi(x) \propto e^{-f(x)}$, referred to as the Gibbs or Boltzmann distribution. Markov chain Monte Carlo (MCMC) is one of the most influential frameworks to design algorithms to sample from Gibbs distributions [GBC16]. Examples of MCMC algorithms include Random Walk Metropolis (RWM), the ball walk, Gibbs samplers, grid walks [DFK91], Langevin Monte Carlo [RR98], and a physics-inspired meta-algorithm – Hamiltonian Monte Carlo (HMC) [DKPR87] – that subsumes several of the aforementioned methods as special cases. HMC was first discovered by physicists [DKPR87], was adopted with much success in ML [Nea92, War97, CFG14, BG15], and is currently the main algorithm used in the popular software package Stan [CGH+16].
HMC algorithms are inspired from a physics viewpoint and reduce the problem of sampling from the Gibbs distribution of $f$ to simulating the trajectory of a particle in $\mathbb{R}^d$ according to the laws of Hamiltonian dynamics, where $f$ plays the role of a “potential energy” function. We show here that the physical laws that govern HMC also endow it with the ability to take long steps, at least in the setting where one can simulate Hamiltonian dynamics exactly. Moreover, we present discretizations of the continuous trajectories of Hamiltonian dynamics that can lead to sampling algorithms for the Gibbs distributions of “regular-enough” potentials $f$.
2 Related Algorithms for Sampling
We give a brief overview of a number of different algorithms available for sampling from continuous distributions. In a typical MCMC algorithm, one sets up a Markov chain whose state space includes the support of $\pi$ and whose transition function determines the motion of a particle in the target distribution’s domain.
Traditional algorithms: Ball Walk, Random Walk Metropolis.
In the ball walk and the related Random Walk Metropolis (RWM) algorithm, each step of the Markov chain is computed by proposing a step $x \to x + \eta u$, where $u$ is uniformly distributed on the unit ball for the ball walk and a standard multivariate Gaussian for RWM, and $\eta > 0$ is the “step size”. The proposal is then passed through a Metropolis filter, which accepts the proposal with probability $\min\{1, e^{-(f(x + \eta u) - f(x))}\}$, ensuring that the target distribution is a stationary distribution of the Markov chain.
Some instances and applications of these algorithms include sampling from and computing the volume of a polytope [DFK91, AK91, LS90, LS92, LS93, LV03] as well as sampling from a non-uniform distribution and the related problem of integrating functions [GGR97, LV06]. One advantage of these algorithms is that they only require a membership oracle for the domain and a zeroth-order oracle for $f$ (i.e., the ability to evaluate $f$). On the other hand, the running times of these algorithms are sub-optimal in situations where one does have access to additional information such as the gradient of $f$, since they are unable to make use of this higher-order information. To ensure a high acceptance probability, the step size $\eta$ in the RWM or ball walk algorithms must be chosen to be sufficiently small so that $f$ does not change too much at any given step. For instance, if $\pi$ is a standard Gaussian, then the optimal choice of step size is roughly $\eta = \Theta(d^{-1/2})$, implying that $\|\eta u\| = \Theta(1)$ with high probability [GGR97]. Therefore, since most of the probability of a standard spherical Gaussian lies in a ball of radius $\Theta(\sqrt{d})$, one would expect all of these algorithms to take roughly $d$ steps to explore a sufficiently regular target distribution.
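For concreteness, here is a minimal Python sketch of RWM for a target density proportional to $e^{-f}$; the target, step size, and chain length below are illustrative choices, not prescriptions from the works cited above.

```python
import numpy as np

def rwm(f, x0, eta, n_steps, rng=np.random.default_rng(0)):
    """Random Walk Metropolis for the density pi(x) proportional to exp(-f(x)).

    Proposes x + eta * u with u ~ N(0, I_d), then applies the Metropolis
    filter: accept with probability min(1, exp(f(x) - f(x + eta * u))).
    """
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        proposal = x + eta * rng.standard_normal(x.shape)
        if rng.random() < np.exp(min(0.0, f(x) - f(proposal))):
            x = proposal
        samples.append(x.copy())
    return np.array(samples)

# Illustration: standard Gaussian target f(x) = ||x||^2 / 2, with the
# step size eta ~ d^{-1/2} suggested by the scaling discussion above.
d = 10
chain = rwm(lambda x: 0.5 * x @ x, x0=np.zeros(d), eta=d ** -0.5, n_steps=10_000)
```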
Langevin algorithms.
The Langevin algorithm makes use of the gradient of $f$ to improve on the provable running times of the RWM and ball walk algorithms. Each update is computed as $X_{i+1} = X_i - \eta \nabla f(X_i) + \sqrt{2\eta}\, \xi_i$, where $\xi_i \sim N(0, I_d)$ and $\eta > 0$ is the step size. Recently, provable running time bounds have been shown for different versions of the Langevin algorithm for $m$-strongly log-concave distributions with $L$-Lipschitz gradient [Dal17, DM17, CCBJ18], including an $O(\kappa d \log(1/\epsilon))$ bound for getting $\epsilon$-close to the target distribution in the “total variation” (TV) metric, where $\kappa := L/m$ [DCWY19]. This bound has been improved to roughly $\kappa \sqrt{d}$ in [LST21].
Running time bounds for Langevin in the non-convex setting have also recently been obtained in terms of isoperimetric constants for $\pi$ [RRT17]. For sufficiently regular target distributions, the optimal step size of the Langevin algorithm can be shown to be roughly $d^{-1/4}$ without Metropolis adjustment and $d^{-1/6}$ with Metropolis adjustment [RR98, Nea11]. Since the random “momentum” terms $\sqrt{2\eta}\, \xi_i$ are independent, the number of steps Langevin requires to travel a given distance is still roughly proportional to the square of the inverse step size, meaning that the running time is at least $d^{1/2}$ without Metropolis adjustment and $d^{1/3}$ with the Metropolis adjustment. The fact that the “momentum” is discarded after each numerical step is therefore a barrier to further improving the running time of Langevin algorithms.
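For comparison, a minimal sketch of the unadjusted Langevin update $X_{i+1} = X_i - \eta \nabla f(X_i) + \sqrt{2\eta}\, \xi_i$ described above (again with illustrative parameters):

```python
import numpy as np

def ula(grad_f, x0, eta, n_steps, rng=np.random.default_rng(0)):
    """Unadjusted Langevin algorithm: X <- X - eta * grad_f(X) + sqrt(2*eta) * xi."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Illustration: standard Gaussian target, grad f(x) = x.
chain = ula(lambda x: x, x0=np.zeros(10), eta=1e-2, n_steps=10_000)
```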
3 Hamiltonian Dynamics
We present an overview of the Hamiltonian dynamics and its properties that play a crucial role in the description and analysis of HMC. Imagine a particle of unit mass in a potential well . If the particle has position and velocity , its total energy is given by the function defined as
This function is called the Hamiltonian of the particle. The Hamiltonian dynamics for this particle are:
$$\frac{dx}{dt}(t) = \frac{\partial H}{\partial v}\big(x(t), v(t)\big), \qquad \frac{dv}{dt}(t) = -\frac{\partial H}{\partial x}\big(x(t), v(t)\big).$$
In the case where the Hamiltonian has a simple form such as the one mentioned above, these equations reduce to:
$$\frac{dx}{dt}(t) = v(t), \qquad \frac{dv}{dt}(t) = -\nabla f\big(x(t)\big).$$
For a starting configuration $(x, v)$, we denote the solutions to these equations by $(x(t), v(t))$ for $t \geq 0$. Let the “Hamiltonian flow” $\varphi_t(x, v) := (x(t), v(t))$ be the position and velocity after time $t$ starting from $(x, v)$.
Interestingly, these solutions satisfy a number of conservation properties.
1. Hamiltonian dynamics conserves the Hamiltonian: $H(x(t), v(t)) = H(x(0), v(0))$ for all $t \geq 0$. To see this note that, since $H$ does not depend on $t$ explicitly (only through $x(t)$ and $v(t)$),
$$\frac{d}{dt} H\big(x(t), v(t)\big) = \Big\langle \frac{\partial H}{\partial x}, \frac{dx}{dt} \Big\rangle + \Big\langle \frac{\partial H}{\partial v}, \frac{dv}{dt} \Big\rangle = \big\langle \nabla f(x(t)), v(t) \big\rangle + \big\langle v(t), -\nabla f(x(t)) \big\rangle = 0.$$

2. Hamiltonian dynamics conserves the volume in the “phase space.” Formally, let $\Phi(x, v) := \big(\frac{\partial H}{\partial v}, -\frac{\partial H}{\partial x}\big) = \big(v, -\nabla f(x)\big)$ be the vector field associated to the Hamiltonian in the phase space $\mathbb{R}^d \times \mathbb{R}^d$. First note that the divergence of $\Phi$ is zero:
$$\nabla \cdot \Phi = \sum_{i=1}^{d} \frac{\partial}{\partial x_i} v_i + \sum_{i=1}^{d} \frac{\partial}{\partial v_i} \big(-\nabla f(x)\big)_i = 0.$$
Since divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point, it being zero everywhere implies volume preservation. Another way to see this is that the divergence is the trace of the Jacobian of the vector field $\Phi$, and, by Jacobi's formula, this trace governs the time derivative of the determinant of the Jacobian of the flow $\varphi_t$. Hence, the trace being $0$ implies that the determinant of the Jacobian of $\varphi_t$ does not change; since it equals $1$ at $t = 0$, it remains $1$ for all $t$.

3. Hamiltonian dynamics of the form mentioned above are time reversible: if $\varphi_t(x(0), v(0)) = (x(t), v(t))$ for some $t \geq 0$, then
$$\varphi_t\big(x(t), -v(t)\big) = \big(x(0), -v(0)\big).$$
The underlying geometry and conservation laws have been generalized significantly in physics and have led to the area of symplectic geometry in mathematics [DSDS08].
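These conservation laws are easy to verify numerically. The sketch below uses the closed-form flow for the quadratic potential $f(x) = \frac{1}{2}\|x\|^2$, namely $x(t) = x\cos t + v\sin t$ and $v(t) = v\cos t - x\sin t$, and checks conservation of $H$ (property 1) and time reversibility (property 3); volume preservation (property 2) is immediate here since the flow is a rotation of phase space.

```python
import numpy as np

def flow(x, v, t):
    """Exact Hamiltonian flow for f(x) = ||x||^2 / 2 (a rotation in phase space)."""
    return x * np.cos(t) + v * np.sin(t), v * np.cos(t) - x * np.sin(t)

def H(x, v):
    return 0.5 * x @ x + 0.5 * v @ v

rng = np.random.default_rng(0)
x, v = rng.standard_normal(5), rng.standard_normal(5)
xt, vt = flow(x, v, t=1.7)

# Property 1: the Hamiltonian is conserved along the trajectory.
assert np.isclose(H(x, v), H(xt, vt))

# Property 3: flowing from (x(t), -v(t)) for time t returns to (x(0), -v(0)).
xb, vb = flow(xt, -vt, t=1.7)
assert np.allclose(xb, x) and np.allclose(vb, -v)
```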
4 Hamiltonian Monte Carlo
To improve on the running time of the Langevin algorithms, one must find a way to take longer steps while still conserving the target distribution. Hamiltonian Monte Carlo (HMC) algorithms accomplish this by taking advantage of conservation properties of Hamiltonian dynamics. These conservation properties allow HMC to choose the momentum at the beginning of each step and simulate the trajectory of the particle for a long time. HMC is a large class of algorithms, and includes the Langevin algorithms and RWM as special cases.
Each step of the HMC Markov chain $X_0, X_1, \ldots$ is determined by first sampling a new independent momentum $\xi \sim N(0, I_d)$, and then running Hamilton’s equations for a fixed time $T$; that is, $X_{k+1}$ is the position coordinate of $\varphi_T(X_k, \xi)$. This is called the idealized HMC, since its trajectories are the continuous solutions to Hamilton’s equations rather than a numerical approximation.
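In general $\varphi_T$ is not available in closed form and must be approximated numerically (Section 7). For the standard Gaussian target $f(x) = \frac{1}{2}\|x\|^2$, however, the flow is the explicit rotation used above, so idealized HMC can be implemented exactly; the following sketch (with illustrative $T$ and chain length) does so.

```python
import numpy as np

def idealized_hmc_gaussian(x0, T, n_steps, rng=np.random.default_rng(0)):
    """Idealized HMC for pi(x) proportional to exp(-||x||^2 / 2), where the
    Hamiltonian flow is known exactly: x(t) = x cos(t) + v sin(t)."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        v = rng.standard_normal(x.shape)     # fresh independent momentum
        x = x * np.cos(T) + v * np.sin(T)    # exact flow for time T
        samples.append(x.copy())
    return np.array(samples)

# With T = pi/2, x cos(T) + v sin(T) = v, so a single step already
# produces an exact N(0, I_d) sample, no matter where the chain starts.
samples = idealized_hmc_gaussian(np.full(10, 100.0), T=np.pi / 2, n_steps=1000)
```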
Despite its simplicity, popularity, and the widespread belief that HMC is faster than its competitor algorithms in a wide range of high-dimensional sampling problems [Cre88, BPR+13, Nea11, BBG14], its theoretical properties are less well understood than those of its older competitor MCMC algorithms, such as the Random Walk Metropolis [MPS+12] or Langevin [DM19, DMss, Dal17] algorithms. Thus, for instance, it is more difficult to tune the parameters of HMC. Several papers have made progress in bridging this gap, showing that HMC is geometrically ergodic for a large class of problems [LBBG19, DMS17] and proving quantitative bounds on the convergence rate of idealized HMC in special cases [SRSH14, MS18].
These analyses benefit from the observation that there are various invariants that are preserved along Hamiltonian trajectories, which in principle obviate the need for a Metropolis step, and raise the possibility of taking very long steps (large $T$). Using these invariance properties, we first show that the idealized HMC “preserves” the target density $\pi$ (Theorem 5.1) and subsequently give a dimension-independent bound on its convergence rate when $f$ is strongly convex and smooth (Theorem 6.1). While not the focus of this article, in Section 7, we discuss different numerical integrators (approximate algorithms to simulate Hamiltonian dynamics in the non-idealized setting) and bounds associated to them.
5 Stationarity: HMC Preserves the Target Density
Recall that for $(x, v) \in \mathbb{R}^d \times \mathbb{R}^d$ and $t \geq 0$, $\varphi_t(x, v)$ denotes the position in the phase space of the particle moving according to the Hamiltonian dynamics with respect to the Hamiltonian $H(x, v) = f(x) + \frac{1}{2}\|v\|^2$. Let $\mu$ be the Lebesgue measure on $\mathbb{R}^d \times \mathbb{R}^d$ with respect to which all densities are defined.
Theorem 5.1
Let $f : \mathbb{R}^d \to \mathbb{R}$ be a differentiable function. Let $T > 0$ be the step size of the HMC. Suppose $(X, V)$ is a sample from the density
$$\rho(x, v) \propto e^{-H(x, v)} = e^{-f(x)} \cdot e^{-\frac{1}{2}\|v\|^2}.$$
Then the density of $\varphi_t(X, V)$ is $\rho$ for any $t \geq 0$. Moreover, the density of $\varphi_t(X, \xi)$, where $\xi \sim N(0, I_d)$, is also $\rho$. Thus, the idealized HMC algorithm preserves $\pi \propto e^{-f}$.
The proof of this theorem relies heavily on the properties of Hamiltonian dynamics. For $(x, v)$, let $(x', v') := \varphi_T(x, v)$. Then, time reversibility of Hamiltonian dynamics implies that
$$\varphi_T(x', -v') = (x, -v).$$
And, it follows from the preservation of the Hamiltonian along trajectories that
$$H(x', v') = H(x, v).$$
Thus,
$$e^{-H(x', v')} = e^{-H(x, v)}.$$
Thus, $\rho(x', v')$, the value of the density associated to $(x', v')$, is the same as $\rho(x, v)$. Let $\varphi_T \# \rho$ be the pushforward of $\rho$ under the map $\varphi_T$. The property that Hamiltonian dynamics preserves volume in phase space implies that $\mu(\varphi_T(A)) = \mu(A)$ for every measurable set $A$, as the determinant of the Jacobian of the map $\varphi_T$ is $1$. Thus, the density remains invariant under $\varphi_T$:
$$\varphi_T \# \rho = \rho.$$
To see the second part, note that the marginal density of $X$ when $(X, V)$ is drawn from $\rho$ is $\pi \propto e^{-f}$, and hence, after the momentum is refreshed with an independent $\xi \sim N(0, I_d)$, the pair $(X, \xi)$ is again distributed according to $\rho$. Thus, $\pi$ is an invariant density of the idealized HMC algorithm.
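In the Gaussian case this invariance can also be checked empirically: pushing a large sample from $\rho$ through the exact flow leaves its statistics unchanged. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, t = 200_000, 2, 0.9
X = rng.standard_normal((n, d))  # positions ~ exp(-||x||^2 / 2)
V = rng.standard_normal((n, d))  # momenta ~ N(0, I_d), so (X, V) ~ rho

# Exact flow for the quadratic Hamiltonian, applied to every sample.
Xt = X * np.cos(t) + V * np.sin(t)
Vt = V * np.cos(t) - X * np.sin(t)

# The pushforward has the same density rho: empirical covariances stay ~ I.
print(np.cov(Xt.T).round(2))
print(np.cov(Vt.T).round(2))
```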
6 Convergence: Running Time Bound for Strongly Convex and Smooth Potentials
For densities $\nu$ and $\pi$ with the same base measure, the Wasserstein distance $W_2(\nu, \pi)$ is defined by
$$W_2^2(\nu, \pi) := \inf \mathbb{E}\big[\|X - Y\|^2\big],$$
where the infimum is over all joint distributions of the random variables $X$ and $Y$ with marginals $\nu$ and $\pi$, respectively.
Theorem 6.1
Let $f : \mathbb{R}^d \to \mathbb{R}$ be a twice-differentiable function which satisfies $m I_d \preceq \nabla^2 f(x) \preceq L I_d$ for all $x \in \mathbb{R}^d$, and set $\kappa := L/m$. Let $\nu_k$ be the distribution of $X_k$ at step $k$ from Algorithm 1. Suppose that both $\nu_0$ and $\pi$ have mean and variance bounded by $B < \infty$. Then given any $\epsilon > 0$, for $T = \frac{\sqrt{m}}{L}$ and $k = \Omega\big(\kappa^2 \log \frac{B}{\epsilon}\big)$, we have that $W_2(\nu_k, \pi) \leq \epsilon$.
The bound in this theorem can be improved to $O(\kappa \log \frac{1}{\epsilon})$ steps; see [CV19]. To prove Theorem 6.1 we use the coupling method. In addition to the idealized HMC Markov chain $X_0, X_1, \ldots$, which is initialized at some arbitrary point $X_0$, we consider another “copy” $Y_0, Y_1, \ldots$ of the idealized HMC Markov chain defined in Algorithm 1. To initialize $Y_0$ we imagine that we sample a point from the density $\pi$. Since we have already shown that $\pi$ is a stationary density of the idealized HMC Markov chain (Theorem 5.1), $Y_k$ will have distribution $\pi$ at every step $k$.
To show that the density of $X_k$ converges to $\pi$ in the Wasserstein distance, we design a coupling of the two Markov chains such that the distance between $X_k$ and $Y_k$ contracts at each step $k$. If $\pi$ has mean and variance bounded by $B$, and we initialize, e.g., $X_0 = 0$, then we have that $\mathbb{E}\big[\|X_0 - Y_0\|^2\big] = O(B)$ as well, since $Y_0 \sim \pi$. Hence,
$$W_2^2(\nu_k, \pi) \leq \mathbb{E}\big[\|X_k - Y_k\|^2\big] \quad \text{for every } k \geq 0.$$
Thus, if we can find a coupling such that $\|X_{k+1} - Y_{k+1}\| \leq \gamma\, \|X_k - Y_k\|$ for each $k \geq 0$ and some $\gamma < 1$, we would have that
$$W_2^2(\nu_k, \pi) \leq \mathbb{E}\big[\|X_k - Y_k\|^2\big] \leq \gamma^{2k}\, \mathbb{E}\big[\|X_0 - Y_0\|^2\big] = O(B)\, \gamma^{2k},$$
which is at most $\epsilon^2$ once $k = \Omega\big(\frac{1}{1 - \gamma} \log \frac{B}{\epsilon}\big)$.
We define a coupling of $X_k$ and $Y_k$ as follows: at each step $k$ of the Markov chain $X$, we sample an initial momentum $\xi_k \sim N(0, I_d)$ and set $X_{k+1}$ to be the position coordinate of $\varphi_T(X_k, \xi_k)$. At each step of the Markov chain $Y$ we would similarly sample a momentum; to couple the two Markov chains, we give the same initial momentum $\xi_k$ to the Markov chain $Y$, setting $Y_{k+1}$ to be the position coordinate of $\varphi_T(Y_k, \xi_k)$; see Figure 1. This coupling preserves the marginal density of each chain since, in this coupling, the momenta of both chains have marginal density $N(0, I_d)$. Thus, we still have that the marginal density of $Y_k$ is $\pi$ at each step $k$.

Spherical harmonic oscillator.
As a simple example, we consider the spherical harmonic oscillator with potential function $f(x) = \frac{1}{2}\|x\|^2$. In this case the gradient is $\nabla f(x) = x$ at every point $x$. Define $(x(t), v(t))$ to be the position and momentum of the Hamiltonian flow which determines the update of the Markov chain $X$ at some step $k$, and $(y(t), u(t))$ to be the position and momentum of the Hamiltonian flow which determines the update of the Markov chain $Y$. Then we have
$$\frac{d^2}{dt^2}\big(x(t) - y(t)\big) = -\nabla f(x(t)) + \nabla f(y(t)) = -(x(t) - y(t)).$$
Thus, the difference between the force on the particle at $x(t)$ and the force on the particle at $y(t)$ points in the direction of $y(t) - x(t)$. This means that, in the case of the spherical harmonic oscillator, after a sufficient amount of time the particle at $x(t)$ will reach the particle at $y(t)$ and we will have $x(t) = y(t)$.
We can solve for the trajectory of $x(t) - y(t)$ exactly. We have

$$\frac{d^2}{dt^2}\big(x(t) - y(t)\big) = -(x(t) - y(t)). \tag{1}$$

Since the two Markov chains are coupled such that the particles $x$ and $y$ have the same initial momentum, $v(0) = u(0) = \xi_k$, the initial conditions for the ODE in Equation (1) are $x(0) - y(0) = X_k - Y_k$ and $\frac{d}{dt}(x(t) - y(t))\big|_{t=0} = 0$. Therefore, the solution to this ODE is

$$x(t) - y(t) = \cos(t) \cdot \big(x(0) - y(0)\big). \tag{2}$$

Hence, after time $T = \frac{\pi}{2}$ we have $x(T) - y(T) = 0$.
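The following sketch checks Equation (2) numerically, using the closed-form flow for $f(x) = \frac{1}{2}\|x\|^2$ and a shared momentum, and confirms that the coupled particles coincide at $T = \pi/2$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, T = 5, np.pi / 2
x, y = rng.standard_normal(d), rng.standard_normal(d)
v = rng.standard_normal(d)  # shared initial momentum: this is the coupling

def flow(q, p, t):
    return q * np.cos(t) + p * np.sin(t), p * np.cos(t) - q * np.sin(t)

xT, _ = flow(x, v, T)
yT, _ = flow(y, v, T)

# Equation (2): x(t) - y(t) = cos(t) (x(0) - y(0)); at T = pi/2 they coincide.
assert np.allclose(xT - yT, np.cos(T) * (x - y), atol=1e-12)
```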
General harmonic oscillator.
More generally, we can consider a harmonic oscillator $f(x) = \frac{1}{2} \sum_{i=1}^{d} \alpha_i x_i^2$, where $m \leq \alpha_i \leq L$ for all $i$. In this case the gradient is $\nabla f(x) = Dx$, where $D$ is the diagonal matrix with $i$-th diagonal entry $\alpha_i$.
In this case, we have
$$\frac{d^2}{dt^2}\big(x(t) - y(t)\big) = -D\,\big(x(t) - y(t)\big).$$
Thus, since $\alpha_i \geq m$ for all $i$, the difference between the force on the particle at $x(t)$ and the force on the particle at $y(t)$ has a component in the direction $y(t) - x(t)$ of magnitude at least $m \|x(t) - y(t)\|$. This means that, for small values of $t$, the two particles move towards each other at a rate of roughly
$$\|x(t) - y(t)\| \approx \Big(1 - \frac{1}{2} m t^2\Big)\, \|x(0) - y(0)\|,$$
since the initial velocities $v(0)$ and $u(0)$ of the two particles are equal. However, unless $D$ is a multiple of the identity matrix, the difference between the forces also has a component orthogonal to $x(t) - y(t)$. Thus, unlike in the case of the spherical harmonic oscillator, the distance between the two particles will not in general contract to zero for any value of $t$.
As in the case of the spherical harmonic oscillator, we can compute the distance between the two particles at any time $t$ by solving for the trajectory exactly. Here we have

$$\frac{d^2}{dt^2}\big(x(t) - y(t)\big) = -D\,\big(x(t) - y(t)\big), \tag{3}$$

with initial conditions $x(0) - y(0) = X_k - Y_k$ and $\frac{d}{dt}(x(t) - y(t))\big|_{t=0} = 0$. Since $D$ is diagonal, the ODE is separable along the coordinate directions, and we have, for all $i \in \{1, \ldots, d\}$,

$$\frac{d^2}{dt^2}\big(x_i(t) - y_i(t)\big) = -\alpha_i\,\big(x_i(t) - y_i(t)\big), \tag{4}$$
with initial conditions $x_i(0) - y_i(0) = (X_k)_i - (Y_k)_i$ and $\frac{d}{dt}(x_i(t) - y_i(t))\big|_{t=0} = 0$. Therefore, $x_i(t) - y_i(t) = \cos(\sqrt{\alpha_i}\, t) \cdot (x_i(0) - y_i(0))$. Hence, since $\alpha_i \leq L$, the distance $|x_i(t) - y_i(t)|$ will contract up to at least time $t = \frac{\pi}{2\sqrt{L}}$. In particular, for any $i$ such that $\sqrt{\alpha_i}\, t = \frac{\pi}{2}$, after time $t$ we have that $x_i(t) - y_i(t) = 0$. However, since the $\alpha_i$ are in general distinct values in $[m, L]$, there is in general no single value of $t$ such that $x_i(t) - y_i(t) = 0$ for all $i$. Nevertheless, we can show that for $t = \frac{1}{2\sqrt{L}}$,

$$|x_i(t) - y_i(t)| = \big|\cos(\sqrt{\alpha_i}\, t)\big| \cdot |x_i(0) - y_i(0)| \tag{5}$$
$$\leq \cos(\sqrt{m}\, t) \cdot |x_i(0) - y_i(0)| \tag{6}$$
$$\leq \Big(1 - \frac{m t^2}{4}\Big) \cdot |x_i(0) - y_i(0)| \tag{7}$$
$$= \Big(1 - \frac{m}{16 L}\Big) \cdot |x_i(0) - y_i(0)|, \tag{8}$$

where the first inequality holds because the fact that $f$ is $m$-strongly convex implies that $\alpha_i \geq m$, $\cos(s)$ is monotone decreasing on the interval $[0, \frac{\pi}{2}]$, and $\sqrt{\alpha_i}\, t \leq \frac{1}{2} < \frac{\pi}{2}$ since $\alpha_i \leq L$. The second inequality holds because $\cos(s) \leq 1 - \frac{s^2}{4}$ for all $s \in [0, 1]$. Hence, we have that

$$\|x(t) - y(t)\| \leq \Big(1 - \frac{m}{16 L}\Big) \cdot \|x(0) - y(0)\|. \tag{9}$$
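The per-coordinate bound (8) is easy to check numerically for a random spectrum in $[m, L]$; the values of $m$ and $L$ below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m, L = 0.5, 10.0
alpha = rng.uniform(m, L, size=1000)   # spectrum of the diagonal matrix D
t = 1.0 / (2.0 * np.sqrt(L))

# Per-coordinate contraction factors |cos(sqrt(alpha_i) * t)| from (5)-(8).
factors = np.abs(np.cos(np.sqrt(alpha) * t))
print(factors.max(), 1.0 - m / (16.0 * L))
assert factors.max() <= 1.0 - m / (16.0 * L)
```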
General strongly convex and smooth (sketch).
Recall that in the case of the spherical harmonic oscillator, $f(x) = \frac{1}{2}\|x\|^2$, the difference between the forces on the two particles is

$$-\nabla f(x(t)) + \nabla f(y(t)) = -(x(t) - y(t)), \tag{10}$$

and thus, the difference between the force acting on $x(t)$ and the force acting on $y(t)$ is a vector which points exactly in the direction of the vector from $x(t)$ to $y(t)$.
In more general settings where $f$ is $m$-strongly convex and $L$-smooth, but not necessarily a quadratic/harmonic oscillator potential, strong convexity implies that the component of the vector $-\nabla f(x(t)) + \nabla f(y(t))$ in the direction $y(t) - x(t)$ still points from $x(t)$ towards $y(t)$ and still has magnitude at least $m \|x(t) - y(t)\|$. However, unless $\nabla^2 f$ is a multiple of the identity, the difference in the forces, $-\nabla f(x(t)) + \nabla f(y(t))$, may also have a component orthogonal to the vector $y(t) - x(t)$. Since $f$ is $L$-smooth, this orthogonal component has magnitude no larger than $L \|x(t) - y(t)\|$.
Thus, since the two Markov chains are coupled such that the two particles have the same initial velocity, that is, $v(0) = u(0)$, we have, in the worst case,

$$x(T) - y(T) = (X_k - Y_k) - \frac{T^2}{2}\big(\nabla f(X_k) - \nabla f(Y_k)\big) + \text{higher-order terms} \tag{11}$$
$$= \Big(1 - \frac{m T^2}{2}\Big)(X_k - Y_k) + \frac{L T^2}{2}\, \|X_k - Y_k\|\, u + \text{higher-order terms}, \tag{12}$$

where $u$ is a unit vector orthogonal to $X_k - Y_k$. We would like to determine the value of $T$ which minimizes $\|x(T) - y(T)\|$, and the extent to which the distance contracts at this value of $T$. In this proof sketch we will ignore the higher-order terms; these can be bounded using comparison theorems for ordinary differential equations; see Section 4 of [MSss].
Ignoring the higher-order terms, we have (since $u$ is a unit vector orthogonal to $X_k - Y_k$),

$$\|x(T) - y(T)\|^2 \leq \Big(1 - \frac{m T^2}{2}\Big)^2 \|X_k - Y_k\|^2 + \Big(\frac{L T^2}{2}\Big)^2 \|X_k - Y_k\|^2 \tag{13}$$
$$= \Big[\Big(1 - \frac{m T^2}{2}\Big)^2 + \frac{L^2 T^4}{4}\Big] \|X_k - Y_k\|^2 \tag{14}$$
$$= \Big[1 - m T^2 + \frac{(m^2 + L^2)\, T^4}{4}\Big] \|X_k - Y_k\|^2 \tag{15}$$
$$\leq \Big[1 - m T^2 + \frac{L^2 T^4}{2}\Big] \|X_k - Y_k\|^2, \tag{16}$$

where the last inequality uses $m \leq L$.
The right-hand side of (16) is minimized at $T = \frac{\sqrt{m}}{L}$. Thus, for $T = \frac{\sqrt{m}}{L}$ we have:

$$\|x(T) - y(T)\|^2 \leq \Big(1 - \frac{m^2}{2 L^2}\Big) \|X_k - Y_k\|^2 \tag{17}$$
$$\|x(T) - y(T)\| \leq \Big(1 - \frac{m^2}{4 L^2}\Big) \|X_k - Y_k\|, \tag{18}$$

where (18) uses $\sqrt{1 - s} \leq 1 - \frac{s}{2}$ for $s \in [0, 1]$. Thus, we have $\|X_{k+1} - Y_{k+1}\| \leq \big(1 - \frac{1}{4 \kappa^2}\big) \|X_k - Y_k\|$ for $T = \frac{\sqrt{m}}{L}$, which is the contraction required by the coupling argument above, with $\gamma = 1 - \frac{1}{4 \kappa^2}$.
7 Discretizing HMC
To prove (non-asymptotic) running time bounds for HMC, we must approximate the idealized trajectories $(x(t), v(t))$ with some numerical method, such as the Euler [GH10] or leapfrog [HLW03] integrators. The earliest theoretical analyses of HMC were the asymptotic “optimal scaling” results of [KP91], for the special case when the target distribution is a multivariate Gaussian. Specifically, they showed that the Metropolis-adjusted implementation of HMC with the leapfrog integrator requires a numerical step size of $O(d^{-1/4})$ to maintain an $\Omega(1)$ Metropolis acceptance probability in the limit as the dimension $d \rightarrow \infty$. They then showed that for this choice of numerical step size the number of numerical steps HMC requires to obtain samples from Gaussian targets with a small autocorrelation is $O(d^{1/4})$ in the large-$d$ limit. [PST12] extended this asymptotic analysis of the acceptance probability to more general classes of separable distributions.
The earliest non-asymptotic analysis of an HMC Markov chain was provided in [SRSH14] for an idealized version of HMC based on continuous Hamiltonian dynamics, in the special case of Gaussian target distributions. As mentioned earlier, [MS17] show that idealized HMC can sample from general $m$-strongly logconcave target distributions with $L$-Lipschitz gradient in $O(\kappa^2 \log(1/\epsilon))$ steps, where $\kappa := L/m$ (see also [BRSS17, BREZ20] for more work on idealized HMC). They also show that an unadjusted implementation of HMC with a first-order discretization can sample with Wasserstein error $\epsilon$ in roughly $d^{1/2} \epsilon^{-1}$ gradient evaluations. In addition, they show that a second-order discretization of HMC can sample from separable target distributions in roughly $C(\kappa)\, d^{1/4} \epsilon^{-1/2}$ gradient evaluations, where $C(\kappa)$ is an unknown (non-polynomial) function of $\kappa$, if the operator norms of the first four Fréchet derivatives of the restriction of $f$ to the coordinate directions are bounded by $L$. [LV18] use the conductance method to show that an idealized version of the Riemannian variant of HMC (RHMC) mixes to within total variation (TV) error $\epsilon$, for any $\epsilon > 0$, in a number of steps governed by a regularity parameter for $f$ and an isoperimetric constant for $e^{-f}$. Metropolized variants of HMC have also been studied recently; see [CDWY20] and the references therein.
A second-order discretization.
Here we discuss the approach of [MV18]. They approximate a Hamiltonian trajectory with a second-order Euler integrator that iteratively computes second-order Taylor expansions of Hamilton's equations:
$$x(t + \eta) \approx x(t) + \eta\, v(t) - \frac{\eta^2}{2} \nabla f(x(t)), \qquad v(t + \eta) \approx v(t) - \eta\, \nabla f(x(t)) - \frac{\eta^2}{2} \nabla^2 f(x(t))\, v(t).$$
Here, $\eta > 0$ is the parameter corresponding to the “step size”. If we approximate the Hessian-vector product by a finite difference of gradients,
$$\nabla^2 f(x(t))\, v(t) \approx \frac{\nabla f(x(t) + \eta\, v(t)) - \nabla f(x(t))}{\eta},$$
we obtain the following numerical integrator:
$$x_{i+1} = x_i + \eta\, v_i - \frac{\eta^2}{2} \nabla f(x_i), \qquad v_{i+1} = v_i - \frac{\eta}{2} \nabla f(x_i) - \frac{\eta}{2} \nabla f(x_i + \eta\, v_i).$$
This leads to the following algorithm.
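A sketch of unadjusted HMC built on this integrator appears below. The function names, step size, and trajectory length are illustrative assumptions, not the exact parameter settings of [MV18]; see that paper for the precise algorithm and its analysis.

```python
import numpy as np

def integrator_step(grad_f, x, v, eta):
    """One step of the second-order integrator described above."""
    g = grad_f(x)
    x_new = x + eta * v - 0.5 * eta ** 2 * g
    # Hessian-vector product replaced by a difference of gradients.
    v_new = v - 0.5 * eta * g - 0.5 * eta * grad_f(x + eta * v)
    return x_new, v_new

def unadjusted_hmc(grad_f, x0, eta, n_leap, n_steps, rng=np.random.default_rng(0)):
    """Each HMC step: refresh the momentum, then simulate the trajectory
    for time T = eta * n_leap with the second-order integrator."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        v = rng.standard_normal(x.shape)
        for _ in range(n_leap):
            x, v = integrator_step(grad_f, x, v, eta)
        samples.append(x.copy())
    return np.array(samples)

# Illustration: standard Gaussian target, grad f(x) = x.
samples = unadjusted_hmc(lambda x: x, x0=np.zeros(10), eta=0.05, n_leap=20, n_steps=5000)
```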
It can be shown that, under a mild “regularity” condition, this unadjusted HMC requires at most (roughly) $d^{1/4}$ gradient evaluations per sample; see [MV18] for details.
Higher-order integrators. More generally, one can replace the second-order Taylor expansion with a $k$th-order Taylor expansion for any $k \geq 2$, to obtain a $k$th-order numerical integrator for Hamiltonian trajectories. Unfortunately, the number of terms in the Taylor expansion grows exponentially with $k$ because of the product rule, so it is difficult to compute the expansion exactly. However, it is possible to compute this expansion approximately using polynomial interpolation methods such as the “collocation method” of [LV17], which was used for the special case when $f$ is constant on a polytope [LV18]. While higher-order integrators can give better theoretical bounds, they are generally unstable in practice as they are not symplectic.
Developing practical higher-order methods and identifying other interesting regularity conditions on the target density that lead to fast algorithms remain interesting future directions.
References
- [AK91] David Applegate and Ravi Kannan. Sampling and integration of near log-concave functions. In Proceedings of the twenty-third annual ACM symposium on Theory of computing, pages 156–163. ACM, 1991.
- [BBG14] MJ Betancourt, Simon Byrne, and Mark Girolami. Optimizing the integrator step size for Hamiltonian Monte Carlo. arXiv preprint arXiv:1411.6669, 2014.
- [BG15] Michael Betancourt and Mark Girolami. Hamiltonian Monte Carlo for hierarchical models. Current trends in Bayesian methodology with applications, 79:30, 2015.
- [BPR+13] Alexandros Beskos, Natesh Pillai, Gareth Roberts, Jesus-Maria Sanz-Serna, and Andrew Stuart. Optimal tuning of the hybrid Monte Carlo algorithm. Bernoulli, 19(5A):1501–1534, 2013.
- [BREZ20] Nawaf Bou-Rabee, Andreas Eberle, and Raphael Zimmer. Coupling and convergence for Hamiltonian Monte Carlo. The Annals of Applied Probability, 30(3):1209–1250, 2020.
- [BRSS17] Nawaf Bou-Rabee and Jesús María Sanz-Serna. Randomized hamiltonian monte carlo. The Annals of Applied Probability, 27(4):2159–2194, 2017.
- [CCBJ18] Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, and Michael I. Jordan. Underdamped langevin MCMC: A non-asymptotic analysis. In COLT, volume 75 of Proceedings of Machine Learning Research, pages 300–323. PMLR, 2018.
- [CCHK89] EA Carter, Giovanni Ciccotti, James T Hynes, and Raymond Kapral. Constrained reaction coordinate dynamics for the simulation of rare events. Chemical Physics Letters, 156(5):472–477, 1989.
- [CDWY20] Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, and Bin Yu. Fast mixing of metropolized hamiltonian monte carlo: Benefits of multi-step gradients. J. Mach. Learn. Res., 21:92:1–92:72, 2020.
- [CFG14] Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In International Conference on Machine Learning, pages 1683–1691, 2014.
- [CGH+16] Bob Carpenter, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Michael A Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. Stan: A probabilistic programming language. Journal of Statistical Software, 20:1–37, 2016.
- [CGST11] K Andrew Cliffe, Mike B Giles, Robert Scheichl, and Aretha L Teckentrup. Multilevel Monte Carlo methods and applications to elliptic pdes with random coefficients. Computing and Visualization in Science, 14(1):3, 2011.
- [Cre88] M. Creutz. Global Monte Carlo algorithms for many-fermion systems. Phy. Rev. D, 38(4):1228, 1988.
- [CV19] Zongchen Chen and Santosh S. Vempala. Optimal convergence rate of hamiltonian monte carlo for strongly logconcave distributions. In Dimitris Achlioptas and László A. Végh, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019, September 20-22, 2019, Massachusetts Institute of Technology, Cambridge, MA, USA, volume 145 of LIPIcs, pages 64:1–64:12. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
- [Dal17] Arnak S Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(3):651–676, 2017.
- [DCWY19] Raaz Dwivedi, Yuansi Chen, Martin J. Wainwright, and Bin Yu. Log-concave sampling: Metropolis-hastings algorithms are fast. J. Mach. Learn. Res., 20:183:1–183:42, 2019.
- [DFK91] Martin Dyer, Alan Frieze, and Ravi Kannan. A random polynomial-time algorithm for approximating the volume of convex bodies. Journal of the ACM (JACM), 38(1):1–17, 1991.
- [DGR03] Jerome B Detemple, Ren Garcia, and Marcel Rindisbacher. A Monte Carlo method for optimal portfolios. The journal of Finance, 58(1):401–446, 2003.
- [DKPR87] Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics letters B, 195(2):216–222, 1987.
- [DM17] Alain Durmus and Eric Moulines. Nonasymptotic convergence analysis for the unadjusted langevin algorithm. The Annals of Applied Probability, 27(3):1551–1587, 2017.
- [DM19] Alain Durmus and Eric Moulines. High-dimensional bayesian inference via the unadjusted langevin algorithm. Bernoulli, 25(4A):2854–2882, 2019.
- [DMss] Alain Durmus and Eric Moulines. Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm. The Annals of Applied Probability, in press.
- [DMS17] Alain Durmus, Eric Moulines, and Eero Saksman. On the convergence of Hamiltonian Monte Carlo. arXiv preprint arXiv:1705.00166, 2017.
- [DPG+14] Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in neural information processing systems, pages 2933–2941, 2014.
- [DSDS08] Ana Cannas da Silva. Lectures on Symplectic Geometry. Springer, 2008.
- [GBC16] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
- [GBW14] Maya R. Gupta, Samy Bengio, and Jason Weston. Training highly multiclass classifiers. J. Mach. Learn. Res., 15(1):1461–1492, January 2014.
- [GGR97] Andrew Gelman, Walter R Gilks, and Gareth O Roberts. Weak convergence and optimal scaling of random walk Metropolis algorithms. The Annals of Applied Probability, 7(1):110–120, 1997.
- [GH10] David F Griffiths and Desmond J Higham. Numerical methods for ordinary differential equations: initial value problems. Springer Science & Business Media, 2010.
- [HLW03] Ernst Hairer, Christian Lubich, and Gerhard Wanner. Geometric numerical integration illustrated by the Störmer–Verlet method. Acta numerica, 12:399–450, 2003.
- [KF09] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning. The MIT Press, 2009.
- [KP91] AD Kennedy and Brian Pendleton. Acceptances and autocorrelations in hybrid Monte Carlo. Nuclear Physics B-Proceedings Supplements, 20:118–121, 1991.
- [LBBG19] Samuel Livingstone, Michael Betancourt, Simon Byrne, and Mark Girolami. On the geometric ergodicity of hamiltonian monte carlo. Bernoulli, 25(4A):3109–3138, 2019.
- [LS90] László Lovász and Miklós Simonovits. The mixing rate of Markov chains, an isoperimetric inequality, and computing the volume. In Foundations of Computer Science, 1990. Proceedings., 31st Annual Symposium on, pages 346–354. IEEE, 1990.
- [LS92] László Lovász and Miklós Simonovits. On the randomized complexity of volume and diameter. In Foundations of Computer Science, 1992. Proceedings., 33rd Annual Symposium on, pages 482–492. IEEE, 1992.
- [LS93] László Lovász and Miklós Simonovits. Random walks in a convex body and an improved volume algorithm. Random structures & algorithms, 4(4):359–412, 1993.
- [LST21] Yin Tat Lee, Ruoqi Shen, and Kevin Tian. Structured logconcave sampling with a restricted gaussian oracle. In Mikhail Belkin and Samory Kpotufe, editors, Conference on Learning Theory, COLT 2021, 15-19 August 2021, Boulder, Colorado, USA, volume 134 of Proceedings of Machine Learning Research, pages 2993–3050. PMLR, 2021.
- [LV03] László Lovász and Santosh Vempala. Hit-and-run is fast and fun. preprint, Microsoft Research, 2003.
- [LV06] László Lovász and Santosh Vempala. Fast algorithms for logconcave functions: Sampling, rounding, integration and optimization. In Foundations of Computer Science, 2006. FOCS’06. 47th Annual IEEE Symposium on, pages 57–68. IEEE, 2006.
- [LV17] Yin Tat Lee and Santosh S Vempala. Geodesic walks in polytopes. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 927–940. ACM, 2017.
- [LV18] Yin Tat Lee and Santosh S Vempala. Convergence rate of Riemannian Hamiltonian Monte Carlo and faster polytope volume computation. In STOC, 2018.
- [MPS+12] Jonathan C Mattingly, Natesh S Pillai, Andrew M Stuart, et al. Diffusion limits of the random walk Metropolis algorithm in high dimensions. The Annals of Applied Probability, 22(3):881–930, 2012.
- [MS17] Oren Mangoubi and Aaron Smith. Rapid mixing of Hamiltonian Monte Carlo on strongly log-concave distributions. arXiv preprint arXiv:1708.07114, 2017.
- [MS18] Oren Mangoubi and Aaron Smith. Rapid mixing of geodesic walks on manifolds with positive curvature. The Annals of Applied Probability, 2018.
- [MSss] Oren Mangoubi and Aaron Smith. Mixing of Hamiltonian Monte Carlo on strongly log-concave distributions: Continuous dynamics. Annals of Applied Probability, in press.
- [MV18] Oren Mangoubi and Nisheeth K Vishnoi. Dimensionally tight running time bounds for second-order Hamiltonian Monte Carlo. In NeurIPS. Available at arXiv:1802.08898, 2018.
- [Nea92] Radford M Neal. Bayesian training of backpropagation networks by the hybrid Monte Carlo method. Technical report, CRG-TR-92-1, Dept. of Computer Science, University of Toronto, 1992.
- [Nea11] Radford M Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2:113–162, 2011.
- [PST12] Natesh S Pillai, Andrew M Stuart, and Alexandre H Thiéry. Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions. The Annals of Applied Probability, 22(6):2320–2356, 2012.
- [RMZT02] Lula Rosso, Peter Mináry, Zhongwei Zhu, and Mark E Tuckerman. On the use of the adiabatic molecular dynamics technique in the calculation of free energy profiles. The Journal of chemical physics, 116(11):4389–4402, 2002.
- [RR98] Gareth O Roberts and Jeffrey S Rosenthal. Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1):255–268, 1998.
- [RRT17] Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis. In Conference on Learning Theory, pages 1674–1703, 2017.
- [SM08] Ruslan Salakhutdinov and Andriy Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th ICML, pages 880–887. ACM, 2008.
- [SRSH14] Christof Seiler, Simon Rubinstein-Salzedo, and Susan Holmes. Positive curvature and Hamiltonian Monte Carlo. In Advances in Neural Information Processing Systems, pages 586–594, 2014.
- [War97] Bradley A Warner. Bayesian learning for neural networks (lecture notes in statistics vol. 118), Radford M. Neal. Journal of American Statistical Association, 92:791–791, 1997.
- [WT11] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.
- [ZRC13] Wenwei Zheng, Mary A Rohrdanz, and Cecilia Clementi. Rapid exploration of configuration space with diffusion-map-directed molecular dynamics. The journal of physical chemistry B, 117(42):12769–12776, 2013.