A First Look at First-Passage Processes
Abstract
These notes are based on the lectures that I gave (virtually) at the Bruneck Summer School in 2021 on first-passage processes and some applications of the basic theory. I begin by defining a first-passage process and presenting the connection between the first-passage probability and the familiar occupation probability. Some basic features of first passage on the semi-infinite line and a finite interval are then discussed, such as splitting probabilities and first-passage times. I also treat the fundamental connection between first passage and electrostatics. A number of applications of first-passage processes are then presented, including the hitting probability for a sphere in greater than two dimensions, reaction rate theory and its extension to receptors on a cell surface, first-passage inside an infinite absorbing wedge in two dimensions, stochastic hunting processes in one dimension, the survival of a diffusing particle in an expanding interval, and finally the dynamics of the classic birth-death process.
1 What is a First-Passage Process?
The first-passage probability is defined as the probability that a diffusing particle or a random walk first reaches a given site (or set of sites) at a specified time. Typical examples of first-passage processes include: fluorescence quenching, in which light emission by a fluorescent molecule stops when it reacts with a quencher; integrate-and-fire neurons, in which a neuron fires when a fluctuating voltage level first reaches a specified level; and the execution of buy/sell orders when a stock price first reaches a threshold. To appreciate why first-passage phenomena might be relevant practically, consider the following example. You are an investor who buys stock in a company at a price of $100. Suppose that this price fluctuates up or down by a small amount each day. You will sell if the stock price reaches $200, while if the stock price reaches $0, the company has gone bankrupt and you've lost all your investment. What is the probability of doubling your investment or losing your entire investment? How long will it take before one of these two events occurs? These are the types of questions that are the purview of first-passage phenomena. Much of the material covered here is discussed in more detail in the monograph [1], and in other general reviews and texts on probability theory and stochastic processes [2, 3, 4, 5].
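As a concrete warm-up, here is a minimal Monte Carlo sketch of the investment example in Python (the daily move size, trial count, and function name are illustrative choices, not taken from the text). It anticipates the splitting-probability and mean exit-time results derived in Sec. 4.

```python
import random

def double_or_bust(start=100, upper=200, lower=0, step=1, trials=1_000):
    """Monte Carlo estimate of the probability of doubling an investment
    (hitting `upper`) before going bankrupt (hitting `lower`), for a price
    that takes unbiased +/- `step` moves.  Also returns the mean number of
    moves until either outcome."""
    doubled = 0
    total_time = 0
    for _ in range(trials):
        price, t = start, 0
        while lower < price < upper:
            price += step if random.random() < 0.5 else -step
            t += 1
        doubled += price >= upper
        total_time += t
    return doubled / trials, total_time / trials

if __name__ == "__main__":
    p_double, mean_time = double_or_bust()
    # For an unbiased walk, the splitting probability is start/upper = 1/2
    # and the mean exit time is start*(upper - start) = 10,000 moves.
    print(f"P(double) ~ {p_double:.3f}   mean exit time ~ {mean_time:.0f} moves")
```

With unbiased moves, roughly half of the runs double the investment, and the mean waiting time is of the order of 10,000 daily moves for these illustrative parameters.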
2 First-Passage and Occupation Probabilities
Let's start by deriving the formal relation between the first-passage probability and the familiar occupation probability. For concreteness, consider a random walk in discrete space and discrete time. We define P(\mathbf{r},t) as the occupation probability; this is the probability that a random walk is at site \mathbf{r} at time t when it starts at the origin. Similarly, let F(\mathbf{r},t) be the first-passage probability, namely, the probability that the random walk first visits \mathbf{r} at time t with the same initial condition. Clearly F(\mathbf{r},t) decays more rapidly in time than P(\mathbf{r},t) because once a random walk reaches \mathbf{r}, there can be no further contribution to F(\mathbf{r},t), although the same walk may still contribute to P(\mathbf{r},t).

It is convenient to write P(\mathbf{r},t) in terms of F(\mathbf{r},t) and then invert this relation to find the first-passage probability. For a random walk to be located at \mathbf{r} at time t, the walk must first reach \mathbf{r} at some earlier time step t' and then return to \mathbf{r} after t-t' additional steps (Fig. 1). This connection between F(\mathbf{r},t) and P(\mathbf{r},t) may be expressed as the convolution

P(\mathbf{r},t) = \delta_{\mathbf{r},0}\,\delta_{t,0} + \sum_{t'\le t} F(\mathbf{r},t')\,P(\mathbf{0},t-t')\,, \qquad (2.1)

where \delta denotes the Kronecker delta function. This equation expresses the fact that if a random walk is at \mathbf{r} at time t, it must have first reached \mathbf{r} at some earlier time t' (which could even be t' = t). If the walk reached \mathbf{r} at a time t' earlier than t, then it must return to \mathbf{r} (and any number of such returns could occur) in the remaining time t - t'. The probability for this set of events is expressed by F(\mathbf{r},t')\,P(\mathbf{0},t-t'). The delta function term accounts for the initial condition that the walk starts at the origin.

The above convolution is most conveniently solved by introducing the generating functions

P(\mathbf{r},z) = \sum_{t\ge 0} P(\mathbf{r},t)\,z^t\,, \qquad F(\mathbf{r},z) = \sum_{t\ge 0} F(\mathbf{r},t)\,z^t\,.

For a random walk in continuous time, we would merely replace the sum over discrete time in Eq. (2.1) by an integral and then use the Laplace transform. However, the asymptotic results would be identical. To solve for the first-passage probability, we multiply Eq. (2.1) by z^t and sum over all t. We thereby find that the generating functions for P and F are related by

P(\mathbf{r},z) = \delta_{\mathbf{r},0} + F(\mathbf{r},z)\,P(\mathbf{0},z)\,. \qquad (2.2)

Thus we obtain the fundamental connection between the generating functions

F(\mathbf{r},z) = \frac{P(\mathbf{r},z) - \delta_{\mathbf{r},0}}{P(\mathbf{0},z)}\,. \qquad (2.3)
The important point is that the first-passage probability can be determined solely from the occupation probability. Many profound results about random walks in infinite space can be obtained from the fundamental relation (2.3) (see, e.g., [6, 7, 8, 9]). Our focus here will be on random walks or diffusion in confined geometries that reflect important physical constraints.
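As a quick illustration of how Eq. (2.1) is used in practice, the following sketch (hypothetical helper names; it assumes a symmetric nearest-neighbor walk in one dimension) computes the occupation probability of the origin exactly and then inverts the convolution numerically to obtain the first-return probabilities.

```python
import math

def occupation_prob_origin(t):
    """P(0,t) for a symmetric nearest-neighbor walk in 1d: the probability of
    being back at the starting site after t steps (zero for odd t)."""
    if t % 2:
        return 0.0
    return math.comb(t, t // 2) / 2 ** t

def first_passage_from_occupation(t_max):
    """Invert the convolution P(0,t) = delta_{t,0} + sum_{t'<=t} F(0,t') P(0,t-t')
    of Eq. (2.1) recursively to obtain the first-return probabilities F(0,t)."""
    P = [occupation_prob_origin(t) for t in range(t_max + 1)]
    F = [0.0] * (t_max + 1)
    for t in range(1, t_max + 1):
        F[t] = P[t] - sum(F[s] * P[t - s] for s in range(1, t))
    return F

if __name__ == "__main__":
    F = first_passage_from_occupation(2000)
    print("eventual return probability ~", sum(F))    # creeps toward 1
    print("F(0,2) =", F[2], " (exact 1/2);  F(0,4) =", F[4], " (exact 1/8)")
```

The partial sums of F(0,t) creep toward 1, in accord with the recurrence of the one-dimensional walk, while the individual terms reproduce the exact first-return probabilities 1/2, 1/8, and so on.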
3 The Half Line
Suppose that a diffusing particle starts at x_0 > 0 on the infinite half line and is absorbed when it reaches the origin. Does the particle ever reach the origin? If so, when does this particle first reach the origin? To answer these questions, we have to solve the diffusion equation for the concentration c(x,t), subject to the initial condition c(x,t=0) = \delta(x-x_0), and the boundary condition c(x=0,t) = 0; the latter enforces the absorption of the particle when it reaches the origin.
A standard approach to solve this problem is to first take the Laplace transform of the diffusion equation and then solve for the Green’s function of this transformed equation. Then one inverts the Laplace transform of the Green’s function to obtain the concentration in the time domain. The diffusive flux to the origin gives the probability that the particle reaches the origin at time . Because of the absorbing boundary condition, when the particle does reach the origin, it is removed from the system. Thus the diffusive flux corresponds to the probability for the particle to reach the origin for the first time—namely, the first-passage probability to the origin.

A more fun way to solve this problem is by invoking the familiar image method from electrostatics. In this method, a diffusing particle that starts at x = x_0 and is subject to an absorbing boundary condition at x = 0 is equivalent to removing the boundary altogether and introducing an image "antiparticle" that is initially at x = -x_0. Because the superposed concentration is clearly equal to zero at the origin by symmetry, this image antiparticle effectively imposes the absorbing boundary condition c(0,t) = 0. The initial particle and the image antiparticle both diffuse freely on (-\infty,\infty) and their superposition gives a resultant concentration for x > 0 that solves the original problem.

Hence the concentration for a diffusing particle on the positive half-line is the sum of a Gaussian centered at x_0 and an anti-Gaussian centered at -x_0:

c(x,t) = \frac{1}{\sqrt{4\pi Dt}}\Big[e^{-(x-x_0)^2/4Dt} - e^{-(x+x_0)^2/4Dt}\Big]\,. \qquad (3.1)

This concentration profile has a linear dependence on x near the origin and a Gaussian tail for x \gg \sqrt{Dt}, as illustrated in Fig. 2. Because the initial condition is normalized, the first-passage probability to the origin at time t is just the diffusive flux to this point. From the above expression for c(x,t), we find

F(0,t) = D\,\frac{\partial c}{\partial x}\Big|_{x=0} = \frac{x_0}{\sqrt{4\pi D t^3}}\;e^{-x_0^2/4Dt}\,. \qquad (3.2)
This fundamental and simple formula has a number of striking implications:
1. The particle is sure to reach the origin because \int_0^\infty F(0,t)\,dt = 1. That is, eventual absorption is certain.

2. The average time for the particle to reach the origin is infinite! This fact arises because the first-passage probability has the long-time algebraic tail F(0,t) \sim t^{-3/2} for t \gg x_0^2/D, as illustrated by the short simulation sketched after this list. This dichotomy between hitting the origin with certainty but taking an infinite average time to do so underlies many of the intriguing features of one-dimensional diffusion.

3. The typical time to reach the origin is finite. We can define the term typical time in a precise way as follows: As a preliminary, define the typical position of the particle, x^*(t), by

\int_{x^*}^{\infty} c(x,t)\,dx = \frac{1}{2}\int_0^{\infty} c(x,t)\,dx\,. \qquad (3.3)

That is, one half of the total surviving probability lies in the range beyond x^* and one half lies in the range (0,x^*). Substituting the concentration profile (3.1) into the above integral and performing the integral leads to a transcendental equation for x^*(t). This equation can only be solved numerically, and the salient feature is that x^*(t) monotonically decreases with time and reaches zero at a time that is of the order of x_0^2/D; this defines the typical hitting time.

4. Even though the average time to reach the origin is infinite, the number of times the origin is reached in a time t is proportional to \sqrt{t}. This result follows directly from the probability distribution of a freely diffusing particle. At large times, the bulk of the Gaussian distribution for a particle that starts at x_0 will spread past the origin. Each site that is within the Gaussian envelope will have been visited of the order of \sqrt{t} times. This last fact will be relevant for our discussion of the reaction rate of the sphere in Sec. 6.
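The heavy t^{-3/2} tail and the finite typical time can both be seen in a direct simulation. The following sketch (all parameter values are illustrative) records first-passage times of an unbiased lattice walk started at x_0 = 10; the median comes out of the order of x_0^2 steps, while the sample mean is dominated by the imposed time cap and keeps growing if the cap is raised.

```python
import random

def first_passage_time(x0, t_max):
    """Time for an unbiased +/-1 walk started at x0 > 0 to first reach 0,
    or t_max if it has not arrived by then (the mean is infinite, so a cap
    is unavoidable in a simulation)."""
    x, t = x0, 0
    while x > 0 and t < t_max:
        x += 1 if random.random() < 0.5 else -1
        t += 1
    return t

if __name__ == "__main__":
    x0, t_max, trials = 10, 10**5, 2000
    times = sorted(first_passage_time(x0, t_max) for _ in range(trials))
    absorbed = sum(t < t_max for t in times)
    print("fraction absorbed by t_max:", absorbed / trials)      # -> 1 slowly
    print("median first-passage time:", times[trials // 2])      # order x0^2
    print("sample mean (cap-dominated):", sum(times) / trials)   # grows with t_max
```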
4 The Finite Interval
We now turn to the first-passage properties in a finite interval. The reason for focusing on the finite interval is that the basic first-passage questions in this geometry have many profound implications. Moreover, the interval geometry is sufficiently simple that many results can be readily derived. Let us begin by outlining the basic questions that we will address. We consider a diffusing particle that starts at some point x_0 within the interval [0,L], with absorbing boundary conditions at both ends of the interval. Eventually the particle is absorbed, and our goal is to characterize the time dependence of this absorption. Basic first-passage questions include:
1. What is the time dependence of the survival probability S(t)? This is the probability that a diffusing particle does not touch either absorbing boundary before time t.

2. What is the time dependence of the first-passage, or exit, probabilities to either 0 or to L as a function of time? Integrating these probabilities over all time gives the eventual hitting, or splitting, probability to a specified boundary. What is the dependence of the splitting probability to 0 or to L on the starting position x_0?

3. What is the average exit time, that is, the average time until the particle hits either of the absorbing boundaries as a function of starting position? What are the conditional exit times, that is, the average time to hit a specified boundary (without ever touching the other boundary) as a function of the starting position?
To answer the first question, we need to solve the diffusion equation in the interval, subject to the initial condition that a particle starts at x_0, and with absorbing boundary conditions at both ends. This is a standard exercise and the result for the concentration is

c(x,t) = \frac{2}{L}\sum_{n\ge 1}\sin\frac{n\pi x_0}{L}\,\sin\frac{n\pi x}{L}\;e^{-(n\pi/L)^2 Dt}\,. \qquad (4.1)

Since the large-n eigenmodes decay more rapidly in time, the most slowly decaying (n = 1) eigenmode dominates in the long-time limit. As a result, the survival probability, which is the spatial integral of the concentration over the interval, asymptotically decays as

S(t) \simeq \frac{4}{\pi}\,\sin\frac{\pi x_0}{L}\;e^{-\pi^2 Dt/L^2}\,. \qquad (4.2)
To answer the second question, it is mathematically simpler to work in the Laplace transform domain. Applying the Laplace transform to the diffusion equation recasts it as

D\,c''(x,s) - s\,c(x,s) = -\delta(x-x_0)\,, \qquad (4.3)

where the prime denotes differentiation with respect to x and the argument s indicates that c(x,s) is the Laplace transform of the concentration. Within the standard Green's function approach, the homogeneous equation in each subdomain (0,x_0) and (x_0,L) has elementary solutions of the form e^{\pm\sqrt{s/D}\,x}, with the constants in this linear combination determined by the boundary conditions. Because the absorbing boundary condition at x = 0 mandates an antisymmetric combination of exponentials, and because the form of the Green's function as x \to L must mirror its form as x \to 0, we can immediately write

c_<(x,s) = A\,\sinh\!\big(\sqrt{s/D}\;x\big)\,, \qquad c_>(x,s) = B\,\sinh\!\big(\sqrt{s/D}\,(L-x)\big) \qquad (4.4)

for the subdomain Green's functions c_< (for 0 \le x \le x_0) and c_> (for x_0 \le x \le L), where A and B are constants.

We now impose the continuity condition c_<(x_0,s) = c_>(x_0,s) and the jump condition that is obtained by integrating Eq. (4.3) over an infinitesimal interval that includes x_0,

D\big[c_>'(x_0,s) - c_<'(x_0,s)\big] = -1\,,

to finally obtain

c(x,s) = \frac{\sinh\!\big(\sqrt{s/D}\;x_<\big)\,\sinh\!\big(\sqrt{s/D}\,(L-x_>)\big)}{\sqrt{sD}\,\sinh\!\big(\sqrt{s/D}\;L\big)}\,, \qquad (4.5)

where x_< = \min(x,x_0) and x_> = \max(x,x_0).
From this Green's function, the Laplace transforms of the fluxes to x = 0 and to x = L are

j_0(x_0,s) = D\,\frac{\partial c}{\partial x}\Big|_{x=0} = \frac{\sinh\!\big(\sqrt{s/D}\,(L-x_0)\big)}{\sinh\!\big(\sqrt{s/D}\;L\big)}\,, \qquad (4.6a)

j_L(x_0,s) = -D\,\frac{\partial c}{\partial x}\Big|_{x=L} = \frac{\sinh\!\big(\sqrt{s/D}\;x_0\big)}{\sinh\!\big(\sqrt{s/D}\;L\big)}\,. \qquad (4.6b)

The subsidiary argument x_0 in these fluxes emphasizes that the flux depends on the initial particle position. Since the initial condition is normalized, the magnitude of the flux to each boundary is identical to the respective first-passage probability.

For s \to 0, these Laplace transforms are just the time-integrated first-passage probabilities to 0 and to L. These quantities therefore coincide with the respective splitting probabilities, E_0(x_0) and E_L(x_0), namely, the probabilities to eventually hit the left and the right ends of the interval as a function of the initial position x_0:

E_0(x_0) = \frac{L-x_0}{L}\,, \qquad E_L(x_0) = \frac{x_0}{L}\,. \qquad (4.7)
Thus the splitting probabilities are given by an amazingly simple formula—the probability of reaching one endpoint is just the fractional distance to the other endpoint!
It is instructive to also derive these splitting probabilities by the backward Kolmogorov approach [1, 2]. The word backward reflects the feature that the initial condition becomes the dependent variable, rather than the current position of the particle. As we shall see, this method provides a powerful tool for determining first-passage properties. Physically, we obtain the eventual hitting probability E(x) to the right boundary by summing the probabilities for all paths that start at x and reach L without touching 0. Thus

E(x) = \sum_{\rm paths} \mathcal{P}_{\rm path}(x\to L)\,, \qquad (4.8)

where \mathcal{P}_{\rm path}(x\to L) denotes the probability of a path from x to L that avoids 0. As illustrated in Fig. 3, the sum over all such paths can be decomposed into the outcome after one step and the sum over all path remainders from the intermediate point to L. This gives

E(x) = \frac{1}{2}\Big[\sum_{\rm paths'} \mathcal{P}_{\rm path'}(x+\delta x\to L) + \sum_{\rm paths''} \mathcal{P}_{\rm path''}(x-\delta x\to L)\Big] = \frac{1}{2}\big[E(x+\delta x) + E(x-\delta x)\big]\,. \qquad (4.9)

Here \delta x is the length of a single random-walk step, and paths′ and paths″ indicate, respectively, all paths that start at x+\delta x and at x-\delta x and reach L without touching 0.

Equation (4.9) reduces to \Delta^{(2)} E(x) = 0, where \Delta^{(2)} is the discrete second-difference operator, \Delta^{(2)}E(x) \equiv E(x+\delta x) - 2E(x) + E(x-\delta x). This difference equation is subject to the boundary conditions E(0) = 0 and E(L) = 1. The solution is simply

E(x) = \frac{x}{L}\,, \qquad (4.10)

and correspondingly E_0(x) = 1 - x/L.
It is worth mentioning that there is an even simpler way to determine the exit probabilities by the martingale method [10]. This solution relies on the fact that the motion of the random walk in the interval is a "fair game" at any time. Thus the average position of the walk is time independent. Formally, a martingale is a process in which the average value of a random variable at time t + \delta t equals the average value of this variable at time t.

For the present example, at t = 0, the average position of the particle is just its starting point x. At infinite time, the walk is either at the left end or the right end of the interval, with respective probabilities E_0(x) and E_L(x). Thus the average position at infinite time is 0\cdot E_0(x) + L\cdot E_L(x). Since the initial average position equals the final average position, E_L(x) = x/L, and we immediately recover (4.10).
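A quick numerical check of the splitting probability is to solve the backward difference equation (4.9) directly. The sketch below (interval length and sweep count are arbitrary choices) relaxes E(x) = [E(x+1)+E(x-1)]/2 with E(0) = 0 and E(L) = 1 and recovers the linear profile of Eq. (4.10).

```python
import numpy as np

def splitting_probability(L=50, sweeps=200_000):
    """Solve the backward-Kolmogorov difference equation
    E(x) = [E(x+1) + E(x-1)]/2 with E(0)=0, E(L)=1 by simple relaxation."""
    E = np.zeros(L + 1)
    E[L] = 1.0
    for _ in range(sweeps):
        E[1:L] = 0.5 * (E[2:] + E[:L - 1])
    return E

if __name__ == "__main__":
    L = 50
    E = splitting_probability(L)
    x = np.arange(L + 1)
    print("max deviation from x/L:", np.abs(E - x / L).max())
```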
For a biased random walk with probability p of hopping to the right and probability q = 1 - p of hopping to the left, the analog of Eq. (4.9) for the splitting probability is

E(x) = p\,E(x+\delta x) + q\,E(x-\delta x)\,, \qquad (4.11)

with solution

E(x) = \frac{1 - e^{-vx/D}}{1 - e^{-vL/D}}\,, \qquad (4.12)

where v = (p-q)\,\delta x/\delta t is the bias velocity, D = \delta x^2/2\delta t is the diffusion coefficient, and Pe = vL/2D is the Péclet number, which is a dimensionless measure of the influence of the bias relative to diffusive fluctuations. When the Péclet number is large, the exit probability clearly reflects the strong influence of the bias (Fig. 4).

We now extend the backward Kolmogorov approach to determine the average exit time from the finite interval. We distinguish between the unconditional average exit time, namely, the average time for a particle to reach either end of the interval, and the conditional average exit time, namely, the average time for a particle to reach, say, the right end of the interval without ever touching the other end.
In close analogy with (4.8), the unconditional exit time satisfies

t(x) = \sum_{\rm paths} t_{\rm path}\,\mathcal{P}_{\rm path}(x\to 0\ {\rm or}\ L)\,. \qquad (4.13)

That is, to compute the average unconditional exit time, we take the time for a path to go from x to either boundary times the probability of this path and sum over all possible paths. Using the same decomposition that led to (4.9), the unconditional exit time satisfies

t(x) = \frac{1}{2}\Big[\sum_{\rm paths'}\big(t_{\rm path'}+\delta t\big)\,\mathcal{P}_{\rm path'} + \sum_{\rm paths''}\big(t_{\rm path''}+\delta t\big)\,\mathcal{P}_{\rm path''}\Big]\,. \qquad (4.14)

Notice that the term

\frac{1}{2}\sum_{\rm paths'} t_{\rm path'}\,\mathcal{P}_{\rm path'}

has exactly the same form as (4.13), so that this term is merely \tfrac{1}{2}\,t(x+\delta x). A similar identification holds for the analogous term

\frac{1}{2}\sum_{\rm paths''} t_{\rm path''}\,\mathcal{P}_{\rm path''} = \frac{1}{2}\,t(x-\delta x)\,.

Finally, the terms multiplying the factor \delta t in (4.14) just give the probability of all possible paths from x to either end of the interval; this probability is clearly equal to 1. Thus we have

t(x) = \frac{1}{2}\big[t(x+\delta x) + t(x-\delta x)\big] + \delta t\,. \qquad (4.15)

In the continuum limit, we expand (4.15) in a Taylor series to second order in \delta x to give

t(x) = t(x) + \frac{(\delta x)^2}{2}\,t''(x) + \delta t\,.

Now identifying \delta x^2/2\delta t as the diffusion coefficient D, the equation for the unconditional exit time reduces to D\,t''(x) = -1, subject to the boundary conditions t(0) = t(L) = 0. The solution is (see Fig. 5)

t(x) = \frac{x(L-x)}{2D}\,. \qquad (4.16)

Notice that this exit time is of the order of L for a particle that starts a distance of the order of one step length from an absorbing boundary, and of the order of L^2 for a particle that starts near the middle of the interval.

Now let's calculate the conditional exit time t_+(x) to the right boundary for a particle that starts at x. By definition this conditional time is

t_+(x) = \frac{\sum_{\rm paths_+} t_{\rm path}\,\mathcal{P}_{\rm path}(x\to L)}{\sum_{\rm paths_+} \mathcal{P}_{\rm path}(x\to L)}\,. \qquad (4.17)

That is, the conditional exit time is the time for a path to start at x and reach L without touching 0, multiplied by the probability of this path and summed over all such allowable paths. Here the subscript + on the word paths indicates that only paths that go from x to L without touching 0 are included. Since the total probability of all these restricted paths is less than 1, we need to divide by this total probability, which is just E(x), to obtain the properly normalized conditional exit time. Thus, for the quantity u(x) \equiv E(x)\,t_+(x), we have

u(x) = \frac{1}{2}\big[u(x+\delta x) + u(x-\delta x)\big] + E(x)\,\delta t\,. \qquad (4.18)

In the continuum limit, the above equation reduces to D\,u''(x) = -E(x) = -x/L, with solution

t_+(x) = \frac{u(x)}{E(x)} = \frac{L^2 - x^2}{6D}\,. \qquad (4.19)

From this result, and by the symmetry x \to L-x, we also immediately find the conditional exit time to the left boundary, t_-(x) = x(2L-x)/6D. The dependences of all the exit times on the starting position are illustrated in Fig. 5.
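The exit-time formulas (4.16) and (4.19) are easy to verify by simulation. The following sketch (starting point, interval length, and trial count are illustrative) measures the unconditional and conditional exit times of an unbiased unit-step walk, for which the step length and duration are 1 and hence D = 1/2.

```python
import random

def exit_times(x0=10, L=50, trials=20_000):
    """Monte Carlo estimate of the unconditional mean exit time and of the
    conditional exit times to the right and left boundaries, for an unbiased
    +/-1 walk started at x0 (dx = dt = 1, so D = 1/2)."""
    right, left = [], []
    for _ in range(trials):
        x, t = x0, 0
        while 0 < x < L:
            x += 1 if random.random() < 0.5 else -1
            t += 1
        (right if x == L else left).append(t)
    t_all = (sum(right) + sum(left)) / trials
    return t_all, sum(right) / len(right), sum(left) / len(left)

if __name__ == "__main__":
    x0, L, D = 10, 50, 0.5
    t_all, t_plus, t_minus = exit_times(x0, L)
    print("unconditional:", t_all, " vs ", x0 * (L - x0) / (2 * D))        # Eq. (4.16)
    print("conditional +:", t_plus, " vs ", (L**2 - x0**2) / (6 * D))      # Eq. (4.19)
    print("conditional -:", t_minus, " vs ", x0 * (2 * L - x0) / (6 * D))
```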
5 Connection with Electrostatics
One of the alluring features of first-passage processes is their intimate connection to electrostatics. By this connection, one can recast an electrostatic problem in a given geometry as a first-passage problem in the same geometry, and vice versa. With this perspective, it is possible to solve seemingly difficult first-passage problems in a simple way.
To illustrate the basic principle, consider the following general problem. Suppose that a diffusing particle starts at some point \mathbf{r}_0 inside an arbitrary bounded domain. At the boundary of this domain, the particle is absorbed. Eventually, all of the initial probability is absorbed on the boundary and we ask: what is the exit probability at some arbitrary point \mathbf{B} on the domain boundary? Formally, we have to solve the diffusion equation in this domain, subject to the appropriate initial and boundary conditions:

\frac{\partial c(\mathbf{r},t)}{\partial t} = D\,\nabla^2 c(\mathbf{r},t)\,, \qquad c(\mathbf{r},t=0) = \delta(\mathbf{r}-\mathbf{r}_0)\,, \qquad c = 0 \ \ \text{on the domain boundary}. \qquad (5.1)

Then the exit probability at the boundary point \mathbf{B} is given by the time-integrated flux to this point,

\mathcal{E}(\mathbf{B}) = -D\int_0^\infty \frac{\partial c}{\partial n}\Big|_{\mathbf{B}}\,dt\,, \qquad (5.2)

where the normal derivative is taken along the outward normal to the surface of the domain at \mathbf{B}.
Let's look critically at this calculation. We are attempting to solve a partial differential equation in some domain (which may well be difficult), then take the result of this calculation and integrate over all time. That is, we really don't need the exit probability at all times, but merely the time integral of the exit probability. This observation suggests that it will be useful to take the original problem (5.1) and integrate it over all time. To simplify what emerges, we also define the time-integrated concentration, C(\mathbf{r}) = \int_0^\infty c(\mathbf{r},t)\,dt. Performing this time integration on Eq. (5.1) leads to

-\delta(\mathbf{r}-\mathbf{r}_0) = D\,\nabla^2 C(\mathbf{r})\,.

The delta function on the left-hand side is what remains when we integrate the time derivative in (5.1) over all time: at t = \infty the concentration is zero, while at t = 0, we merely have the initial condition. But notice that in terms of the time-integrated concentration, the exit probability may be written as

\mathcal{E}(\mathbf{B}) = -D\,\frac{\partial C}{\partial n}\Big|_{\mathbf{B}}\,. \qquad (5.3)

Thus we arrive at the fundamental result: the time-integrated concentration C(\mathbf{r}) is just the electrostatic potential of a point charge of strength q = 1/(D\,\Omega_d) at \mathbf{r}_0, with the domain boundary held at zero potential, and the exit probability at a boundary point is the electric field, that is, the induced surface charge density, at that point. Here \Omega_d is the surface area of a d-dimensional unit sphere; this factor is needed to convert the prefactor in the diffusion equation to the correct prefactor in the Laplace equation. Thus a given first-passage problem can be expressed as an equivalent electrostatic problem in the same geometry.
6 Hitting a Sphere and Reaction Rate Theory
What is the probability that a diffusing particle eventually hits a sphere of radius a, when the particle starts at a radial distance r > a from the center of the sphere? One can determine this hitting probability in the standard way by solving the diffusion equation exterior to the sphere, computing the flux to the sphere, and then integrating over all time. However, it is much simpler to use the connection between first passage and electrostatics. Indeed, by a direct extension of Eq. (4.9) to three dimensions, the hitting probability satisfies (here for a discrete random walk in Cartesian coordinates for simplicity)

E(x,y,z) = \frac{1}{6}\big[E(x{+}\delta,y,z) + E(x{-}\delta,y,z) + E(x,y{+}\delta,z) + E(x,y{-}\delta,z) + E(x,y,z{+}\delta) + E(x,y,z{-}\delta)\big]\,. \qquad (6.1)

Let us now take the continuum limit so that we can work in spherical coordinates. Then the above discrete difference equation becomes the Laplace equation \nabla^2 E = 0, subject to the boundary conditions E(r=a) = 1 and E(r\to\infty) = 0. The solution is

E(r) = \frac{a}{r}\,. \qquad (6.2)
Amazingly simple!
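Eq. (6.2) can be checked with a "walk on spheres" Monte Carlo, a standard acceleration trick that is not part of the text: because a diffusing particle started at the center of a ball exits uniformly over the ball's surface, the trajectory can be advanced in large jumps. The outer absorbing sphere of radius R below is a stand-in for infinity (it biases the answer only at order a/R), and all parameter values are illustrative.

```python
import math
import random

def hits_inner_sphere(r_start, a, R, eps=1e-4):
    """One walk-on-spheres trajectory in the shell between an absorbing sphere
    of radius a and a distant absorbing sphere of radius R.  Each move jumps to
    a uniformly random point on the largest ball centered at the current
    position that fits inside the domain."""
    x, y, z = r_start, 0.0, 0.0
    while True:
        r = math.sqrt(x * x + y * y + z * z)
        if r - a < eps:
            return True           # absorbed on the sphere of radius a
        if R - r < eps:
            return False          # wandered away ("escaped to infinity")
        rho = min(r - a, R - r)   # radius of the largest inscribed ball
        u, v, w = (random.gauss(0.0, 1.0) for _ in range(3))
        norm = math.sqrt(u * u + v * v + w * w)
        x, y, z = x + rho * u / norm, y + rho * v / norm, z + rho * w / norm

def hitting_probability(a=1.0, r0=2.0, R=1000.0, trials=20_000):
    return sum(hits_inner_sphere(r0, a, R) for _ in range(trials)) / trials

if __name__ == "__main__":
    print("simulated:", hitting_probability(), "   prediction a/r0 = 0.5")
```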
Now let's treat a related problem that is fundamental in chemical kinetics. Suppose that there is an initially uniform concentration of particles exterior to an absorbing sphere of radius a. A fundamental kinetic characteristic of the absorbing sphere is its reaction rate k, namely, the efficiency at which this sphere captures particles. Formally, k is defined as the number of particles absorbed per unit time divided by the initial concentration. This normalization ensures that the reaction rate is a quantity that is intrinsic to the system. By dimensional analysis, the reaction rate as defined above has units of (length)^d/time in d spatial dimensions. Moreover, the reaction rate can only be a function of the intrinsic parameters of the system, namely, the diffusion coefficient D and the sphere radius a. Since the units of D are length^2/time, we infer that k must be proportional to D a^{d-2}. Thus the reaction rate k \simeq A\,D\,a^{d-2}, where A is a constant of the order of 1. This example shows the power of dimensional analysis in obtaining a non-trivial physical property of a multi-particle system.

This simple result has some surprising implications. First, in three dimensions, the reaction rate is linear in the sphere radius a; it is not proportional to the cross-sectional area of the sphere. Second, for d < 2, the reaction rate increases when the radius of the absorbing sphere is decreased! This nonsensical result indicates that there is a basic problem with classic chemical kinetics for d \le 2. This pathology arises because the concentration field exterior to the absorber never reaches a steady state for d \le 2. Instead, the absorption rate of the sphere is time dependent. In contrast, for d > 2, a steady-state concentration field does arise. Once this steady state is reached, it is a trivial exercise to compute the steady-state density by solving the Laplace equation rather than the diffusion equation, and thereby obtain the steady-state flux to the sphere. From these steps, one finds the exact reaction rate in three dimensions

k = 4\pi D a\,. \qquad (6.3)
Armed with the basic result for the reaction rate, we now turn to a much more profound problem that is fundamental to living systems, namely, how many receptors should there be on the surface of a cell? One might expect that much of the cell surface should be covered by receptors so that its detection efficiency is high. On the other hand, one can imagine that receptors are complex and evolutionarily expensive machines. Based on cost considerations only, it would be advantageous for a cell to minimize the number of receptors. What is the appropriate balance between these two competing attributes? As a first step to address this question, we want to compute the reaction rate of a cell that is sensitive to its environment only at the locations of the receptors. We model the cell as a sphere of radius a in which most of the surface is reflecting. However, on the sphere surface there are also N small circular absorbing domains, each of radius s (Fig. 6). We view these absorbing circles as the receptors on the cell surface. What is the reaction rate of this toy model of a cell? If most of the sphere surface is reflecting, one might anticipate that the reaction efficiency of the cell will be poor. Surprisingly, the reaction efficiency of the sphere with absorbing receptors is almost as good as that of a perfectly absorbing sphere, even when the area fraction covered by the receptors is vanishingly small! This realization is the brilliant insight of the article by Berg and Purcell that was far ahead of its time [11]. Here I outline their argument.

The first step in their argument relies on the feature that if a diffusing particle hits the surface of a sphere, it will hit it again many times before diffusing away; this point was discussed above in Sec. 3. There are two types of subsequent hitting events: (i) the particle rises a distance less than s above the surface before hitting it again, and (ii) the particle rises a distance greater than s above the surface before hitting it again (Fig. 6). In the former case, if the particle initially misses a receptor, it will likely miss upon the second encounter. In the latter case, if the particle misses a receptor initially, we have no information about whether the second encounter will hit or miss a receptor. Thus the rise distance s demarcates the regime of dependent subsequent hits from that of independent subsequent hits. It is the latter events that are relevant for estimating the reaction rate. We thus use the height s as the criterion for determining the number of times that a particle independently hits the cell surface before diffusing away.

When a particle is a distance s above the cell surface, the probability that it eventually hits the cell again is, using Eq. (6.2),

p = \frac{a}{a+s} \approx 1 - \frac{s}{a}\,. \qquad (6.4a)

Thus the probability that the diffusing particle independently hits the cell exactly n times before diffusing away is p^{\,n-1}(1-p). Correspondingly, the average number of independent hits to the surface is

\langle n\rangle = \sum_{n\ge 1} n\,p^{\,n-1}(1-p) = \frac{1}{1-p} \approx \frac{a}{s}\,. \qquad (6.4b)

The probability for a diffusing particle to not land on a receptor in a single independent hitting event is

1 - \frac{N\pi s^2}{4\pi a^2} = 1 - \frac{N s^2}{4 a^2}\,, \qquad (6.4c)

namely, the area fraction of the surface that is not covered by receptors. Thus the probability that a diffusing particle that reaches the cell never lands on a receptor is the probability that the particle misses a receptor in each of its independent hitting attempts. This is

\Big(1 - \frac{N s^2}{4 a^2}\Big)^{\langle n\rangle} \approx e^{-Ns/4a}\,. \qquad (6.4d)

The probability that a diffusing particle ultimately hits a receptor thus is

1 - e^{-Ns/4a}\,. \qquad (6.4e)

The final result for the reaction rate of a cell of radius a that is covered by N receptors, each of radius s, is

k \approx 4\pi D a\,\big(1 - e^{-Ns/4a}\big)\,. \qquad (6.5)
To understand the implication of this result, let's use some numbers that typify a cell: a cell radius a of the order of a micron, receptors whose size s is of the order of nanometers, and a plausible number N of receptors. The area fraction covered by the receptors is then minuscule, but the absorption efficiency of the cell is roughly 2/3! Evidently, Mother Nature is very smart to not waste resources on endowing a cell with too many receptors.
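To see how the efficiency in the estimate (6.5) depends on the number of receptors, the short sketch below evaluates the coverage fraction and the capture efficiency for an assumed micron-sized cell with nanometer-sized receptors; the specific values of a, s, and N are illustrative and not taken from the text.

```python
import math

def capture_efficiency(N, s, a):
    """Fraction of arriving particles ultimately absorbed by one of N receptors
    of radius s on a cell of radius a, in the estimate of Eq. (6.5)."""
    return 1.0 - math.exp(-N * s / (4.0 * a))

def coverage_fraction(N, s, a):
    """Fraction of the cell surface actually covered by the receptors."""
    return N * s * s / (4.0 * a * a)

if __name__ == "__main__":
    a, s = 1.0e-6, 1.0e-9   # illustrative: 1-micron cell, 1-nm receptors
    for N in (1_000, 5_000, 20_000):
        print(f"N = {N:6d}   coverage = {coverage_fraction(N, s, a):.1e}"
              f"   efficiency = {capture_efficiency(N, s, a):.2f}")
```

Already a few thousand receptors, covering only about a tenth of a percent of the surface, give a capture efficiency comparable to that of a perfectly absorbing sphere.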
7 Wedge Domains
We now turn to the first-passage properties of diffusion in a two-dimensional wedge domain with absorption when the particle hits the wedge boundary. One of our motivations for studying this system is that first passage in the wedge can be mapped onto a simple diffusive capture problem in one dimension that we’ll treat in the next section. We will obtain first-passage properties in the wedge geometry both by direct solution of the diffusion equation and also, for two dimensions, in a more aesthetically pleasing fashion by conformal transformation techniques, in conjunction with the electrostatic formulation. We also present a heuristic extension of the electrostatic approach that allows us to infer, with little additional computational effort, time-dependent first-passage properties in the wedge from corresponding time-integrated properties.
Solution to the Diffusion Equation
We first solve the diffusion equation in the wedge to determine the survival probability of a diffusing particle. While the exact Green’s function for this system is well known [12, 13], we adopt the strategy of choosing an initial condition that allows us to eliminate angular variables and deal with an effective radial problem. This simplification is appropriate if we are interested only in asymptotic first-passage properties.
The diffusion equation for the two-dimensional wedge geometry in plane polar co-ordinates is

\frac{\partial c}{\partial t} = D\Big(\frac{\partial^2 c}{\partial r^2} + \frac{1}{r}\,\frac{\partial c}{\partial r} + \frac{1}{r^2}\,\frac{\partial^2 c}{\partial\theta^2}\Big)\,, \qquad (7.1)

where c = c(r,\theta,t) is the particle concentration at (r,\theta) at time t, D is the diffusion coefficient, and the boundary conditions are c = 0 at \theta = 0 and at \theta = \Theta, where \Theta is the wedge opening angle. To reduce this two-dimensional problem to an effective one-dimensional radial problem, note that the exact Green's function can be written as an eigenfunction expansion in which the angular dependence is a sum of sine waves of the form \sin(m\pi\theta/\Theta), such that an integral number of half-wavelengths fit within the wedge to satisfy the absorbing boundary conditions [13]. In this series, each sine wave is multiplied by a conjugate decaying function of time, in which the decay rate increases with m. In the long-time limit, only the lowest term in this expansion dominates the survival probability. Consequently, we obtain the long-time behavior by choosing an initial condition whose angular dependence is a half sine-wave in the wedge. This ensures that the time-dependent problem will contain only this single term in the Fourier series.
We therefore define . With this initial distribution function and after the Laplace transform is applied, the diffusion equation (7.1) becomes
where . Substituting in the ansatz , the angular dependence may now be separated and reduces the system to an effective one-dimensional radial problem. By introducing the dimensionless co-ordinate , we find the modified Bessel equation for the remaining radial co-ordinate,
(7.2) |
where and the prime now denotes differentiation with respect to .
The general solution for is a superposition of modified Bessel functions of order . Since the domain is unbounded, the interior Green’s function () involves only , since diverges as , while the exterior Green’s function () involves only , since diverges as . By imposing continuity at , we find that the Green’s function has the symmetric form , with the constant determined by the joining condition that arises from integrating Eq. (7.2) over an infinitesimal radial range that includes . This gives
from which . Therefore the radial Green’s function in the wedge is
(7.3) |
and its Laplace inverse has the relatively simple closed form [12, 13]
(7.4) |
With this radial Green’s function, the asymptotic survival probability is
(7.5) |
We can estimate this integral by noting that the radial distance over which the concentration is appreciable extends to the order of . This provides a cutoff in the radial integral in Eq. (7.5), within which the Gaussian factors in can be replaced by one. Using the small-argument expansion of the Bessel function, we then obtain
(7.6) |
The basic result is that the survival probability of a diffusing particle in an absorbing wedge of opening angle \Theta decays with time as

S(t) \sim t^{-\pi/2\Theta}\,. \qquad (7.7)

The striking feature of this formula is that the exponent depends on the wedge opening angle in a non-trivial way, with \pi/2\Theta \to \infty as \Theta \to 0 and \pi/2\Theta = 1/4 for \Theta = 2\pi.
Conformal Transformations and Electrostatic Methods
Let's now solve the same wedge problem by exploiting conformal transformations, together with the connection between first passage and electrostatics. To set the stage for the wedge geometry, consider the first-passage probability for a diffusing particle in two dimensions to an absorbing infinite line. This problem may also be solved elegantly by the electrostatic formulation. In this approach, the time-integrated concentration obeys the Laplace (Poisson) equation

\nabla^2 C(z) = -\frac{1}{D}\,\delta^{(2)}(z - z_0)\,,

where z = x + iy is the complex co-ordinate, z_0 = x_0 + iy_0 is the starting point of the particle, and the factor 1/D ensures the correct normalization. Using the image method for two-dimensional electrostatics, we find that the complex potential is

\Phi(z) = -\frac{1}{2\pi D}\,\ln\!\Big(\frac{z - z_0}{z - z_0^*}\Big)\,, \qquad (7.8)

where the asterisk denotes complex conjugation. Finally, the time-integrated flux that is absorbed at a point x on the line coincides with the electric field at this point. This is

\mathcal{E}(x) = \frac{1}{\pi}\,\frac{y_0}{(x-x_0)^2 + y_0^2}\,. \qquad (7.9)
We now use a conformal transformation to extend the result for the hitting probability to the infinite line to the hitting probability in the wedge. Consider the transformation w = z^{\pi/\Theta}, which maps the interior of the wedge of opening angle \Theta onto the upper half plane. In complex co-ordinates, the electrostatic potential in the wedge, for a particle that begins at a unit distance from the apex along the wedge bisector (so that the image of the starting point is w_0 = i), is

\Phi_{\rm wedge}(z) = -\frac{1}{2\pi D}\,\ln\!\Big(\frac{z^{\pi/\Theta} - i}{z^{\pi/\Theta} + i}\Big)\,. \qquad (7.10)

From this expression and using the analogy between electrostatics and first passage, we can extract time-integrated first-passage properties in the wedge. For example, the probability of being absorbed at a distance r from the wedge apex along one edge of the wedge, again for a particle that begins at a unit distance from the apex along the wedge bisector, is just the electric field at this point,

\mathcal{E}(r) = \frac{1}{\Theta}\,\frac{r^{\pi/\Theta - 1}}{1 + r^{2\pi/\Theta}}\,. \qquad (7.11)
Although the electrostatic formulation ostensibly gives only time-integrated first-passage properties, we can adapt it to also give time-dependent features. This adaptation is based on the following re-interpretation of the equivalence between electrostatics and diffusion: an electrostatic system with a point charge and specified boundary conditions is identical to a diffusive system in the same geometry and boundary conditions, in which a continuous source of particles is fed in at the location of the charge starting at time . Suppose now that the particle source is “turned on” at . Then, in a near zone that extends out to a distance of the order of from the source, the concentration has sufficient time to reach its steady-state value. Within this zone, the diffusive solution converges to the Laplacian solution. Outside this zone, however, the concentration is close to zero. This almost-Laplacian solution provides the time integral of the survival probability up to time . We can then deduce the survival probability by differentiating the concentration that is due to this finite-duration source.
Thus suppose that a constant source of diffusing particles at inside the absorbing wedge is turned on at . Within the region where the concentration has had time to reach the steady state, , the density profile is approximately equal to the Laplacian solution, . We can neglect the angular dependence of in this zone, as this dependence is immaterial for the survival probability. Conversely, for , the particle concentration is vanishingly small because a particle is unlikely to diffuse such a large distance. From the analogy between electrostatics and first passage, the near-zone density profile is just the same as the time integral of the diffusive concentration. Thus, by using the equivalence between the spatial integral of this near-zone concentration in the wedge and the time integral of the survival probability, we have
(7.12) |
Since the total density injected into the system equals , the survival probability in the wedge is roughly , which gives .
8 Stochastic Hunting in One Dimension
What is the time dependence of the survival probability of a diffusing lamb that is hunted by N diffusing lions? We define this survival probability as S_N(t). This toy problem is most interesting in a one-dimensional geometry where all the lions are located to one side of the lamb. It is known [14] that this survival probability asymptotically decays algebraically with time,

S_N(t) \sim t^{-\beta_N}\,, \qquad (8.1)

and the goal is to compute the decay exponent \beta_N. As we shall discuss, the decay exponent is known exactly for N = 1 and N = 2 only: \beta_1 = 1/2 and \beta_2 = 3/4. For larger N, \beta_N grows slowly with N and its values are known only from numerical simulations. The focus of this section is to derive \beta_2 by a simple geometric approach and to develop some analytical understanding of the dependence of \beta_N on N for N \gg 1.
Let us begin by treating a lamb that starts at x_0 > 0 and a single lion that starts at the origin. For simplicity, the diffusivities of the lamb and the lion are assumed to both equal D. The separation between the lamb and the lion thus diffuses with diffusion coefficient 2D. When this separation reaches zero, the lamb has been eaten. This problem is just the classic first-passage problem on the positive infinite line, except that the diffusion coefficient is 2D. The probability that the lamb survives until time t is the same as the probability that the first-passage time of the separation to the origin is greater than t. This probability therefore is

S_1(t) = \int_t^\infty \frac{x_0}{\sqrt{8\pi D t'^3}}\;e^{-x_0^2/8Dt'}\,dt' = {\rm erf}\!\Big(\frac{x_0}{\sqrt{8Dt}}\Big) \simeq \frac{x_0}{\sqrt{2\pi D t}}\,. \qquad (8.2)

Thus the survival probability of the lamb asymptotically decays as t^{-1/2}. While the lamb is sure to die, its average lifetime is infinite. Thus a single diffusing lion is not a particularly good hunter and it might starve before eating the lamb.
What happens when there are N = 2 lions? We again assume that the lions start from the origin while the lamb starts at x_0 > 0, and that the diffusivities of all particles are the same. Let us label the positions of the lions as y_1 and y_2, and the position of the lamb as x. The lamb survives up to time t if the conditions x > y_1 and x > y_2 always hold. We can give an insightful geometric interpretation of this problem by viewing the motion of the three particles on the line as equivalent to the motion of a single effective particle in three dimensions with coordinates (y_1, y_2, x). The constraints x > y_1 and x > y_2 mean that the effective particle in three-space remains on one side of the plane x = y_1 and on one side of the plane x = y_2 (Fig. 7(a)). This allowed region is a wedge whose opening angle is defined by the intersection of these two planes. If the particle hits one of the planes, then one of the lions has eaten the lamb.


This mapping therefore provides the lamb survival probability, since the survival probability of a diffusing particle within an absorbing wedge of opening angle \Theta asymptotically decays as t^{-\pi/2\Theta}. What is the opening angle of this wedge? We can determine this angle in a simple way by also including the plane y_1 = y_2 and then viewing the system along the (1,1,1) axis (Fig. 7(b)). It is then clear that the wedge angle is \Theta = 2\pi/3. Substituting this result in Eq. (7.7), we find that \beta_2 = \pi/2\Theta = 3/4. Notice that \beta_2 < 2\beta_1. This inequality reflects the fact that the incremental threat to the lamb from the second lion is less than that from the first.
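The exponents \beta_1 = 1/2 and \beta_2 = 3/4 can be checked directly by simulating the chase. The sketch below (diffusivity, initial gap, observation times, and trial counts are all illustrative) estimates an effective exponent from the ratio of survival probabilities at two times; the estimates are crude but land near the exact values.

```python
import math
import random

def survival_probability(N, t_max, trials=2000, x0=5.0, D=0.25, dt=1.0):
    """Fraction of realizations in which a diffusing lamb that starts at x0
    stays ahead of N diffusing lions that start at the origin, up to time
    t_max.  All diffusivities are equal; positions take independent Gaussian
    increments of variance 2*D*dt at each time step."""
    sigma = math.sqrt(2.0 * D * dt)
    steps = int(t_max / dt)
    alive = 0
    for _ in range(trials):
        lamb = x0
        lions = [0.0] * N
        for _ in range(steps):
            lamb += random.gauss(0.0, sigma)
            lions = [y + random.gauss(0.0, sigma) for y in lions]
            if max(lions) >= lamb:
                break
        else:
            alive += 1
    return alive / trials

if __name__ == "__main__":
    # crude effective exponent from S(t) ~ t^(-beta):  beta ~ log2[S(t)/S(2t)]
    for N, exact in ((1, 0.5), (2, 0.75)):
        s1, s2 = survival_probability(N, 200), survival_probability(N, 400)
        print(f"N={N}:  beta ~ {math.log2(s1 / s2):.2f}   (exact {exact})")
```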
In general, we can map the motion of the lamb and N lions in one dimension to a single effective particle in N+1 dimensions, with absorption when the effective particle hits any of the N constraint planes x = y_i. However, the calculation of the survival probability of the effective particle within the domain where the effective particle is confined (known as a Weyl chamber) appears to be intractable.

While the problem for any fixed number of lions N \ge 3 is difficult, the problem becomes much simpler for large N. To determine the lamb survival probability for large N, we only need to focus on the lion closest to the lamb, because this last lion ultimately kills the lamb. As shown in Fig. 8, the individual identity of this last lion can change with time due to the crossing of different lion trajectories. For large N, there is a systematic bias of the motion of the position of the last lion, x_{\rm last}(t). This bias becomes stronger for increasing N, so that x_{\rm last}(t) becomes smoother as N increases (Fig. 9). This gradual approach of the last lion to the lamb is the mechanism which leads to the survival probability of the lamb decaying as t^{-\beta_N}, with \beta_N a slowly increasing function of N.

To estimate the location of this last lion when N lions are initially at the origin, we use the extreme statistics condition [15]

N\int_{x_{\rm last}}^{\infty} \frac{1}{\sqrt{4\pi Dt}}\;e^{-x^2/4Dt}\,dx = 1\,. \qquad (8.3)
Equation (8.3) states that one lion out of an initial group of N is in the range [x_{\rm last},\infty). Although the integral in Eq. (8.3) can be expressed in terms of the complementary error function, it is instructive to evaluate it approximately in a self-consistent way by writing x = x_{\rm last} + \epsilon and re-expressing the integrand in terms of \epsilon. We thus find
Now the second term in the integrand,
is non-negligible for . Over this range of , the third exponential factor is of the order of
If we use the result for in Eq. (8.5), the above exponential factor becomes
which is very close to 1 for large N. If we thus ignore this term, the integral above reduces to simple exponential decay, with the result
(8.4) |
We now define and , so that the above condition can be simplified to , whose asymptotic solution is
To lowest order, this gives

x_{\rm last} \simeq \sqrt{4Dt\,\ln N} \qquad (8.5)

for N finite. For N \to \infty, x_{\rm last} would always equal t if an infinite number of discrete random-walking lions were initially at the origin. A more suitable initial condition therefore is a concentration of lions that are uniformly distributed from -\infty to 0. In this case, only the of order \sqrt{Dt} lions that are within a diffusion distance of the edge of the pack are "dangerous," that is, potential candidates for eating the lamb. Consequently, for N \to \infty, the leading behavior of x_{\rm last} becomes

x_{\rm last} \simeq \sqrt{2Dt\,\ln t}\,. \qquad (8.6)
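The extreme-statistics estimate (8.5) is easy to test by sampling the rightmost of N independent Gaussian positions. The sketch below (sample sizes are illustrative) does this for several values of N; note that the leading-order estimate \sqrt{4Dt\ln N} overshoots the sampled average by a correction that vanishes only logarithmically in N, so the agreement improves very slowly.

```python
import math
import random

def rightmost_lion(N, t, D=1.0, samples=200):
    """Average position of the rightmost of N independent diffusing lions,
    each Gaussian-distributed with variance 2Dt, compared with the
    extreme-statistics estimate sqrt(4 D t ln N) of Eq. (8.5)."""
    sigma = math.sqrt(2.0 * D * t)
    avg = sum(max(random.gauss(0.0, sigma) for _ in range(N))
              for _ in range(samples)) / samples
    return avg, math.sqrt(4.0 * D * t * math.log(N))

if __name__ == "__main__":
    for N in (10, 100, 1000, 10000):
        avg, est = rightmost_lion(N, t=1.0)
        # the leading estimate exceeds the sampled mean by a slowly
        # vanishing correction of order ln(ln N)/sqrt(ln N)
        print(f"N={N:6d}   <x_last> = {avg:5.2f}   sqrt(4Dt ln N) = {est:5.2f}")
```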
An important feature of the time dependence of x_{\rm last} is that its fluctuations decrease for large N (Fig. 9). Therefore the lamb and the N diffusing lions can be recast as a two-body system of a lamb and an absorbing boundary that deterministically advances toward the lamb according to x_{\rm last}(t) \simeq \sqrt{4Dt\ln N}. This determinism is what makes the problem for large N tractable.
To solve this effective two-body problem, it is convenient to change coordinates from x to x' = x - x_{\rm last}(t) to fix the absorbing boundary at the origin. By this construction, the diffusion equation for the lamb probability distribution is transformed to the convection-diffusion equation

\frac{\partial c(x',t)}{\partial t} = D\,\frac{\partial^2 c(x',t)}{\partial x'^2} + \sqrt{\frac{D\ln N}{t}}\;\frac{\partial c(x',t)}{\partial x'}\,, \qquad (8.7)

with the absorbing boundary condition c(x'=0,t) = 0. In this reference frame that is fixed on the average position of the last lion, the second term in Eq. (8.7) accounts for the bias of the lamb towards the absorber with an effective speed \dot{x}_{\rm last} = \sqrt{D\ln N/t}. Because x_{\rm last}(t) and the diffusion length \sqrt{Dt} have the same time dependence, the lamb survival probability acquires a nontrivial dependence on the dimensionless parameter \ln N. This behavior arises whenever there is a coincidence of fundamental length scales in the system (see, e.g., [16] for other such examples).
Equation (8.7) can be transformed into the parabolic cylinder equation by first introducing the dimensionless length and making the following scaling ansatz for the lamb probability density,
(8.8) |
The power law prefactor in Eq. (8.8) ensures that the integral of over all space, namely the survival probability, decays as , and expresses the spatial dependence of the lamb probability distribution in scaled length units. This ansatz codifies the fact that the probability density is not a function of and separately, but is a function only of the dimensionless ratio .
Substituting Eq. (8.8) into Eq. (8.7), we obtain
(8.9) |
By introducing and in Eq. (8.9), we are led to the parabolic cylinder equation of order [17]
(8.10) |
subject to the boundary condition, for both and . Equation (8.10) has the form of a Schrödinger equation for a quantum particle of energy in a harmonic oscillator potential for , but with an infinite barrier at [18]. For the long-time behavior, we need to find the ground state energy in this potential. For , we may approximate this energy as the potential at the classical turning point, that is, . We therefore obtain . Using the value of given in Eqs. (8.5) and (8.6) the decay exponent is
\beta_N \simeq \frac{1}{4}\,\ln N\,. \qquad (8.11)
The latter dependence of x_{\rm last} implies that for N \to \infty, the survival probability has the log-normal form

S(t) \sim \exp\big[-{\rm const}\times(\ln t)^2\big]\,. \qquad (8.12)
The important feature of the exponent \beta_N is its very slow increase with N. That is, each successive lion that is added to the hunt has a diminishing influence on the survival of the lamb. Indeed, for large N, only a small subset of the lions actually have a chance to catch the lamb.
9 The Expanding Interval
We have seen that the survival probability of a diffusing particle in an absorbing interval of fixed length L asymptotically decays as e^{-\pi^2 Dt/L^2}. What happens if the interval length grows with time, L = L(t)? This simple question illustrates the relative effects of diffusion and of the motion of the boundary on first-passage properties. This interplay is a classic problem in the first-passage literature, especially when the boundary motion matches that of diffusion. Solutions to this problem have been obtained by a variety of methods (see, e.g., [19, 20, 21, 22]). Here we give a physics-based approach that follows [23].

It is easy to infer the behavior of the survival probability for a slowly expanding interval. Here, slowly expanding means that the interval length grows more slowly than the diffusion length \sqrt{Dt}. Consequently, the probability distribution of the particle spreads faster than the interval grows, so that the survival probability should decay rapidly with time. Using an adiabatic approximation of the type that one typically encounters in basic quantum mechanics, we will show that the survival probability decays as a stretched exponential in time for an interval that grows as t^\alpha with \alpha < 1/2. Conversely, for a rapidly expanding interval, one whose length grows faster than \sqrt{Dt}, the particle is unlikely to reach either end of the interval and the probability distribution is close to that of free diffusion. This is the basis of the free approximation that leads to a non-zero limiting value for S(t) as t \to \infty.

In the marginal case where the interval expands at the same rate as diffusion, L(t) = \sqrt{At}, a new dimensionless parameter arises: the ratio of the diffusion length \sqrt{Dt} to the interval length L(t). As we shall show, this leads to S(t) decaying as a non-universal power law in time, S(t) \sim t^{-\beta}, with \beta diverging for A \to 0 and approaching zero for A \to \infty.
Slowly Expanding Interval
For , we invoke the adiabatic approximation [18], in which the spatial dependence of the concentration for an interval of length is assumed to be identical to that of the static diffusion equation at the instantaneous value of . This assumption is based on the expectation that the concentration in a slowly expanding interval is always close to that of a fixed-size interval. Thus we write
(9.1) |
with to be determined. The corresponding survival probability is
(9.2) |
For convenience, we now define the interval boundaries as . To obtain , we substitute approximation (9.1) into the diffusion equation, as in separation of variables, to give
(9.3) |
Notice that variable separation does not strictly hold, since the equation for also involves . However, when increases as with , the second term on the right-hand side is negligible. Thus we drop this second term and solve the simplified form of (9.3). We thereby find that the controlling factor of is given by
(9.4) |
Notice that reduces to a pure exponential decay for a fixed-length interval, while for , Eq. (9.4) suggests a more slowly decaying functional form for .
Rapidly Expanding Interval
For a rapidly expanding interval, the escape rate from the system is small and the absorbing boundaries should eventually become irrelevant. We therefore expect that the concentration profile should approach the Gaussian distribution of free diffusion at long times [23]. We may then account for the slow decay of the survival probability by augmenting the Gaussian with an overall decaying amplitude. This free approximation is a nice example in which the existence of widely separated time scales, and , suggests the nature of the approximation itself.
According to the free approximation, we write
Although this concentration does not satisfy the absorbing boundary condition, the inconsistency is negligible at large times, since the density is exponentially small at the interval boundaries. We may now find the time dependence of the survival probability by equating the probability flux to the interval boundaries, , to the loss of probability within the interval. For , this flux is
(9.5) |
which rapidly goes to zero for . Since this flux equals , it follows that the survival probability approaches a non-zero limiting value for , and that this limiting value goes to zero as . Explicitly,
(9.6) |
where and . We now introduce and change the integration variable from to . After some straightforward steps we have
(9.7) |
where is the Euler gamma function. Thus a diffusing particle has a non-zero probability to survive forever when the interval grows fast enough. This ultimate survival probability rapidly goes to zero as from above.
Marginally Expanding Interval
For the marginal case of L(t) = \sqrt{At}, the adiabatic and the free approximations are ostensibly no longer appropriate, since \sqrt{Dt} and L(t) have a fixed ratio. However, for A \ll D and for A \gg D, we might hope that these methods could still be useful. Thus we continue to apply these heuristic approximations in their respective domains of validity and check their accuracy a posteriori. We will see that the survival probability exponents predicted by these two approximations are each quite close to the exact result except for A \approx D.

When the adiabatic approximation is applied, the second term in Eq. (9.3) is, in principle, non-negligible for L(t) = \sqrt{At}. However, for A \ll D, the interval still expands more slowly (in amplitude) than free diffusion and the error made by neglecting the second term in Eq. (9.3) may still be small. The solution to this crudely truncated equation, when substituted into the approximation (9.2), leads to a power-law decay of the survival probability, S(t) \sim t^{-\beta}, with
(9.8) |
The trailing factor of should not be taken very seriously, because the neglected term in Eq. (9.3) leads to additional corrections to that are also of the order of 1.

Similarly for , the free approximation gives
(9.9) |
This again leads to the non-universal power law for the survival probability, , with
(9.10) |
As shown in Fig. 10, these approximations are surprisingly accurate over much of the range of .
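The non-universal power law in the marginal case can be seen directly in a simulation. The sketch below (growth rates, initial offset, and trial counts are illustrative choices) measures an effective decay exponent for two values of the expansion rate A and shows that the exponent decreases as the interval grows faster, in line with Fig. 10.

```python
import math
import random

def survival(A, t_max, trials=3000, D=0.5, L0=10.0):
    """Survival probability at t_max for a particle diffusing between the
    absorbing boundaries +/- sqrt(L0^2 + A*t).  The offset L0 just avoids a
    zero-size interval at t = 0 and is irrelevant at long times."""
    sigma = math.sqrt(2.0 * D)            # one unit-time Gaussian step
    alive = 0
    for _ in range(trials):
        x = 0.0
        for t in range(1, t_max + 1):
            x += random.gauss(0.0, sigma)
            if abs(x) >= math.sqrt(L0 * L0 + A * t):
                break
        else:
            alive += 1
    return alive / trials

if __name__ == "__main__":
    # effective exponent beta ~ log2[S(t)/S(2t)]; it decreases as the
    # expansion rate A increases, so the power law is non-universal
    for A in (2.0, 8.0):
        s1, s2 = survival(A, 400), survival(A, 800)
        print("A =", A, "  beta ~", math.log(s1 / s2, 2))
```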
To complete our discussion, we outline a first-principles analysis for the survival probability of a diffusing particle in a marginally expanding interval [23]. When , a natural scaling hypothesis is to write the density in terms of the two dimensionless variables
We now seek solutions for the concentration in the form,
(9.11) |
where is a two-variable scaling function that encodes the spatial dependence. The power law prefactor ensures that the survival probability, namely, the spatial integral of , decays as , as defined at the outset of this section.
After substituting Eq. (9.11) into the diffusion equation, the scaling function satisfies the ordinary differential equation
Then by introducing and , we transform this into the parabolic cylinder equation [25]
(9.12) |
When the range of is unbounded, this equation has solutions for quantized values of the energy eigenvalue , , [18].
For our interval problem, the range of is restricted to . In the equivalent quantum mechanical system, this corresponds to a particle in a harmonic-oscillator potential for and an infinite potential for . For this geometry, a spatially symmetric solution to Eq. (9.12), appropriate for the long-time limit for an arbitrary starting point, is
where is the parabolic cylinder function of order . Finally, the relation between the decay exponent and is determined implicitly by the absorbing boundary condition, namely,
(9.13) |
This condition for simplifies in the limiting cases and . In the former, the exponent is large and the second two terms in the brackets in Eq. (9.12) can be neglected. Equivalently, the physical range of is small, so that the potential plays a negligible role. The solution to this limiting free-particle equation is just the cosine function, and the boundary condition immediately gives the limiting expression of Eq. (9.8), but without the subdominant term of . In the latter case of , and Eq. (9.12) approaches the Schrödinger equation for the ground state of the harmonic oscillator. In this case, a detailed analysis of the differential equation reproduces the limiting exponent of Eq. (9.10) (see [23] for details). These provide rigorous justification for the limiting values of the decay exponent which we obtained by heuristic means.
The Khintchine Iterated Logarithm Law
In the marginal situation of L(t) = \sqrt{At}, we have seen that the survival probability decays as a power law in time, with a decay exponent that approaches zero as A \to \infty. This decay becomes progressively slower as A increases. On the other hand, when L(t) grows as t^\alpha with \alpha strictly greater than 1/2, the survival probability at infinite time is greater than zero. This leads to the following natural question: what is the nature of the transition between certain death, defined as S(t\to\infty) = 0, and a non-zero survival probability, S(t\to\infty) > 0?

The answer to this question is surprisingly rich. There is an infinite sequence of transitions, in which L(t) acquires additional iterated logarithmic time dependences, and which define regimes where S(t) assumes progressively slower functional forms. The first term in this series is known as the Khintchine iterated logarithm law [24, 3]. While the Khintchine law has been obtained by rigorous methods, we can also obtain this intriguing result, as well as the infinite sequence of transitions, with relatively little computation by the free approximation.
Because we anticipate that the transition between life and death occurs when grows slightly faster than , we make the hypothesis that , with growing slower than a power law in . Now that increases more rapidly than the diffusion length , the free approximation should be asymptotically exact, since it already works extremely well when with large. Within this approximation, we rewrite Eq. (9.9) as
(9.14) |
Here we neglect the lower limit, since the free approximation is valid only as , where the short-time behavior is irrelevant. In this form, it is clear that for , decreases by an infinite amount for because of the divergence of the integral. Thus . To make the integral converge, the other factors in the integral must somehow cancel the logarithmic divergence that arises from the factor . Accordingly, let us substitute into the approximation (9.14). This gives
To simplify this integral, it is helpful to define so that
(9.15) |
To lowest order, it is clear that if we choose with , the integral converges as . Thus the asymptotic survival probability is positive. Conversely, for , the integral diverges and the particle surely dies. In this latter case, evaluation of the integral to lowest order gives
(9.16) |
This decay is slower than any power law, but faster than any power of logarithm, that is, for and .
What happens in the marginal case of ? Here we can refine the criterion between life and death still further by incorporating into a correction that effectively cancels the subdominant factor in Eq. (9.15). We therefore define such that . Then in terms of , Eq. (9.15) becomes
(9.17) |
This integral now converges for and diverges for . In the latter case, the survival probability now lies between the bounds for and . At this level of approximation, we conclude that when the cage length grows faster than
(9.18a) | |||
a diffusing particle has a non-zero asymptotic survival probability, while for an interval that expands as in Eq. (9.18a), there is an extremely slow decay of the survival probability.
By incorporating successively finer corrections into L(t) and following the same logic that led to Eq. (9.17), an infinite series of correction terms can be generated in the expression for L(t). By this approach, the ultimate life-death transition corresponds to an ultra-slow decay of the survival probability. It is remarkable that the physically motivated and relatively naive free approximation can generate such an intricate solution. As a final note, P. Erdös sharpened the result of Eq. (9.18a) considerably [28] and found that L(t) has the infinite series representation
(9.18b) |
in which only the coefficient of the term multiplying is different than 1.
10 Birth-Death Dynamics
As our last topic, we determine the kinetics of the birth-death process. We imagine a collection of identical particles, each of which gives birth to an identical offspring with rate \lambda, and each of which independently dies with rate \mu. The goal is to determine the time dependence of the population size. As one can easily imagine, this is a classic model for a variety of biological processes and there is a vast literature on this general topic (see, e.g., [26, 27]).

The most interesting case physically is the symmetric situation of equal birth and death rates for each particle, \lambda = \mu, so that the average population is static. For \lambda < \mu, the average population size decreases as e^{-(\mu-\lambda)t}, which quickly goes to zero. In the opposite case, the population grows exponentially with time and an additional mechanism is needed to cut off this growth. For \lambda = \mu, the average population is fixed, but the time dependence of the distribution of the number of particles exhibits non-trivial kinetics. We can alternatively view the birth-death process as a continuous-time random walk for the population size n on the positive infinite line, but with birth and death rates for the entire population that are linear functions of n. That is, the overall process is symmetric but moves faster for a larger population.

Let n denote the number of particles in the population. The time dependence of the average number of particles obeys the rate equation \langle\dot n\rangle = (\lambda-\mu)\langle n\rangle, where the overdot denotes the time derivative. Thus the average number of particles is conserved, as is clear from the condition \lambda = \mu. That is, the birth-death process for \lambda = \mu is a martingale. More meaningful information is obtained from the full population distribution. For simplicity in the ensuing formulas, we now set \lambda = \mu = 1 without loss of generality. Let P_n(t) denote the probability that the population consists of n particles at time t. This probability distribution changes in time according to
\dot{P}_n = (n-1)\,P_{n-1} + (n+1)\,P_{n+1} - 2n\,P_n\,, \qquad (10.1)

where we define P_{-1} \equiv 0, so that this equation is valid for all n \ge 0. For the standard continuous-time random walk, the corresponding master equation is \dot{P}_n = P_{n-1} + P_{n+1} - 2P_n. We know that this random walk eventually hits the origin, but that the average time to do so is infinite. We want to find the behavior of these two first-passage properties for the birth-death process.
A convenient and powerful way to solve the master equation (10.1) is by the generating function method. We first define the generating function

P(z,t) = \sum_{n\ge 0} P_n(t)\,z^n\,,

then take each of the equations for \dot{P}_n, multiply it by z^n, and then sum over all n. In doing so, we will encounter terms from the right-hand side of (10.1), for example, the second term on the right, that looks like

\sum_{n\ge 0} (n+1)\,P_{n+1}\,z^n\,,

which we can recast as

\frac{\partial}{\partial z}\sum_{n\ge 0} P_{n+1}\,z^{n+1} = \frac{\partial P(z,t)}{\partial z}\,.

By this device of converting multiplication by n to differentiation for all three terms on the right-hand side of (10.1), we recast (10.1) as

P_t = (1-z)^2\,P_z\,, \qquad (10.2)

where the subscripts now denote partial differentiation and the arguments of P are not written for compactness.

This first-order partial differential equation can be simplified further by defining the variable u via u = 1/(1-z), which implies that du = dz/(1-z)^2, or equivalently, \partial_u = (1-z)^2\,\partial_z. In terms of the variable u, (10.2) is converted to the classic wave equation P_t = P_u. This equation has the general solution P = f(u+t), where the function f is, in principle, arbitrary, and whose explicit form is fixed by the initial condition. Let us specialize to the simple case of the single-particle initial condition, namely, P_n(t=0) = \delta_{n,1}. This immediately leads to P(z,t=0) = z. Then at t = 0 the function is simply given by f(u) = z. However, we must express the right-hand side in terms of the true dependent variable u, which means that f(u) = 1 - 1/u. Thus for any t the generating function is

P(z,t) = 1 - \frac{1}{u+t} = 1 - \frac{1-z}{1+t(1-z)}\,. \qquad (10.3)
To extract the individual terms in the power-series representation of the generating function, we now need to re-express this function in terms of z:

P(z,t) = 1 - \frac{1-z}{1+t(1-z)} = T + (1-T)^2\sum_{n\ge 1} T^{\,n-1} z^n\,, \qquad (10.4)

where, for notational simplicity, we introduce T \equiv t/(1+t). From the last line of the above, we can immediately extract all the P_n(t) and obtain the well-known formulas:

P_0(t) = \frac{t}{1+t}\,, \qquad P_n(t) = \frac{t^{\,n-1}}{(1+t)^{\,n+1}} \quad (n\ge 1)\,. \qquad (10.5)
With these results, we now obtain the first-passage properties of the birth-death process. The quantity P_0(t) = t/(1+t) may be interpreted as the probability that the population has gone extinct by time t, while S(t) = 1 - P_0(t) = 1/(1+t) is the probability that the population survives up to time t. Thus extinction is sure to occur, but the average extinction time is infinite, just as for isotropic diffusion. The main distinction with isotropic diffusion is that S(t) \sim t^{-1/2} for diffusion, while S(t) \sim t^{-1} for the birth-death process. Thus survival is less likely when the hopping rate is a linearly increasing function of n.
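The extinction law P_0(t) = t/(1+t) is simple to confirm with a Gillespie-style simulation of the critical birth-death process (all parameter values below are illustrative).

```python
import random

def extinct_by(t_obs, runs=20_000):
    """Simulate the critical birth-death process (birth rate = death rate = 1
    per particle) started from a single particle, and return the fraction of
    runs that have gone extinct by time t_obs."""
    extinct = 0
    for _ in range(runs):
        n, t = 1, 0.0
        while n > 0:
            # total event rate is 2n; the next event comes after an exponential wait
            t += random.expovariate(2.0 * n)
            if t > t_obs:
                break                       # still alive at t_obs
            n += 1 if random.random() < 0.5 else -1
        else:
            extinct += 1                    # population reached zero before t_obs
    return extinct / runs

if __name__ == "__main__":
    for t_obs in (1.0, 4.0, 16.0):
        print(t_obs, extinct_by(t_obs), " vs  t/(1+t) =", t_obs / (1.0 + t_obs))
```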
Concluding Comments
These lecture notes have given a whirlwind tour through some basic and some not-so-basic aspects of first-passage processes. At the level of fundamentals, I presented some classic results about first passage in the simplest geometries of the infinite half line and the finite interval, including first-passage probabilities, first-passage times, and splitting probabilities. I also discussed the intriguing connection between first passage and electrostatics. I then presented a number of applications. Some, like the reaction rate of a cell and the birth-death process, are classic and have many immediate applications. Others, like the survival of a diffusing lamb that is hunted by diffusing lions and the survival of a diffusing particle in a growing interval, may seem somewhat idiosyncratic. However, the solution methods are quite generic and may prove useful in many other settings. I hope that the uninitiated reader will enjoy learning about some of these applications of first-passage processes and will be inspired to delve further into this fascinating topic.
Much of the material in Secs. 8 and 9 stems from joint work with Paul Krapivsky. I thank him for pleasant collaborations on these projects, as well as pointing out Ref. [28] to me. I also thank the National Science Foundation for financial support over many years that helped advance some of the topics discussed in these notes, most recently through NSF grant DMR-1910736.
References
- [1] S. Redner, A Guide to First-Passage Processes (Cambridge University Press, Cambridge, UK 2001).
- [2] A. J. Bray, S. N. Majumdar, and G. Schehr, Adv. Phys. 62 225 (2015).
- [3] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. I, 3rd ed. (John Wiley, New York, 1968).
- [4] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, 2nd ed., (North-Holland, Amsterdam, 1997).
- [5] S. Karlin and H. M. Taylor, A First Course in Stochastic Processes, 2nd ed. (Academic Press, New York, 2014).
- [6] E. Montroll, Proceedings of the Symposium on Applied Mathematics (American Mathematical Society, Providence, RI, 1965), 16, 193 (1965).
- [7] E. Montroll and G. H. Weiss, J. Math. Phys. 6, 167 (1965).
- [8] G. H. Weiss, Aspects and Applications of the Random Walk, (North-Holland, Amsterdam, 1964).
- [9] B. D. Hughes, Random Walks and Random Environments, (Oxford University Press, New York, 1995).
- [10] P. G. Doyle and J. L. Snell, Random Walks and Electric Networks Carus Mathematical Monographs #22 (Mathematical Association of America, Oberlin, OH, 1984).
- [11] H. C. Berg and E. M. Purcell, Biophys. J. 20, 193 (1977).
- [12] A. Sommerfeld, Math. Annalen (Leipzig) 45, 263 (1894).
- [13] H. S. Carslaw and J. C. Jaeger, Conduction of Heat in Solids, 2nd ed. (Oxford University Press, Oxford, 1959).
- [14] M. Bramson and D. Griffeath, in Random Walks, Brownian Motion, and Interacting Particle Systems: A Festschrift in Honor of Frank Spitzer, 153 eds. R. Durrett and H. Kesten (Birkhäuser, Boston, MA, 1991).
- [15] J. Galambos, The Asymptotic Theory of Extreme Order Statistics. (Krieger, Malabar, FL, 1987).
- [16] G. I. Barenblatt, Scaling, Self-similarity, and Intermediate Asymptotics, (Cambridge University Press, Cambridge, UK, 1996).
- [17] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers (McGraw-Hill, New York, 1978).
- [18] See, for example, L. I. Schiff, Quantum Mechanics, (McGraw-Hill, New York, 1968).
- [19] L. Breiman, Proc. Fifth Berkeley Symp. Math. Statist. and Probab. 2, 9 (1966).
- [20] H. E. Daniels, J. Appl. Prob. 6, 399 (1969).
- [21] K. Uchiyama, Z. Wahrsch. verw. Gebiete 54, 75 (1980).
- [22] P. Salminen, Adv. Appl. Prob. 20, 411 (1988).
- [23] P. L. Krapivsky and S. Redner, Am. J. Phys. 64, 546 (1996).
- [24] A. Khintchine, Fundamenta Mathematicae 6, 9 (1924).
- [25] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions (Dover, New York, 1972).
- [26] D. G. Kendall, J. Roy. Statist. Soc. Ser. B 11, 230 (1949).
- [27] P. L. Krapivsky, S. Redner and E. Ben-Naim, A Kinetic View of Statistical Physics (Cambridge University Press, Cambridge, UK, 2010).
- [28] P. Erdös, Ann. Math. 42, 119 (1942).