A Comparative Tutorial of Bayesian Sequential Design and Reinforcement Learning
Abstract
Reinforcement Learning (RL) is a computational approach to reward-driven learning in sequential decision problems. It discovers optimal actions by letting an agent learn from interaction with an environment rather than from supervised data. We compare and contrast RL with traditional sequential design, focusing on simulation-based Bayesian sequential design (BSD). Recently, there has been increasing interest in RL techniques for healthcare applications. We introduce two related applications as motivating examples. In both applications, the sequential nature of the decisions is restricted to sequential stopping. Rather than a comprehensive survey, the focus of the discussion is on solutions using standard tools for these two relatively simple sequential stopping problems. Both problems are inspired by adaptive clinical trial design. We use the examples to explain the terminology and mathematical background that underlie each framework and map one to the other. The implementations and results illustrate the many similarities between RL and BSD. The results motivate a discussion of the potential strengths and limitations of each approach.
1 Introduction
Sequential design problems (SDPs) involve a sequence of decisions with data observed at every time step (DeGroot, 2004; Berger, 2013). The goal is to find a decision rule that maximizes the expected value of a utility function. The utility function encodes an agent's preferences as a function of hypothetical future data and an assumed truth. The decision rule is prescribed before observing future data – as the term “design” emphasizes – assuming a probabilistic model, with unknown parameters $\theta$, that generates the future data. For example, $\theta$ can be the true effect of a drug. Figure 1a summarizes the setup of a general SDP. As motivating examples in the upcoming discussion we will use two examples of clinical trial design, which naturally give rise to SDPs (Rossell et al., 2007; Berry and Ho, 1988; Christen and Nakamura, 2003). Both examples are about sequential stopping, that is, the sequential decision is to determine when and how to end the study. Figure 1b shows the setup of sequential stopping problems.
Using these examples, we compare two families of simulation-based methods for solving SDPs with applications in sequential stopping. The first is simulation-based Bayesian Sequential Design (BSD) (Müller et al., 2007), which is based on Bayesian decision theory (Berger, 2013). The other approach is Reinforcement Learning (RL), a paradigm based on the interaction between an agent and an environment that results in potential rewards (or costs) for each decision. RL has recently been proposed as a method to implement SDPs, focusing on recent advances in deep learning (Shen and Huan, 2021), outside the context of clinical studies. Earlier applications of RL related to clinical study design can be found in the dynamic treatment regimes literature (Zhao et al., 2009; Murphy, 2003; Murphy et al., 2007). There is a longstanding literature on such problems in statistics; see the review by Parmigiani and Inoue (2009, chapter 15) and the references therein. Implementing RL and BSD in these two motivating example problems (using standard algorithms) will illustrate many similarities between the two frameworks, while highlighting the potential strengths and limitations of each paradigm. (The implementation code is freely available at https://github.com/mauriciogtec/bsd-and-rl.)
A note about the scope of the upcoming discussion. It is meant to highlight the similarities and differences of algorithms in BSD and RL, and we provide a (partial) mapping of notations. There is no intent to provide (yet another) review of BSD or RL and its many variations. This article focuses on a few variations that best contrast the two traditions, and we try to highlight the relative advantages of each method. In short, BSD is better equipped to deal with additional structure, that is, to include more details in the inference model. For example, dealing with delayed responses in a clinical study, one might want to include a model that uses early responses to predict such delayed outcomes. Or one might want to borrow strength across multiple related problems. Doing so would also be possible in RL, but it requires leaving the framework of Markov decision processes underlying many RL algorithms (Gaon and Brafman, 2020), as discussed below. On the other hand, RL allows computation-efficient implementations and is routinely used for much larger scale problems than BSD. The availability of computation-efficient implementations is critical in applications where sequential designs need to be evaluated for summaries under (hypothetical) repeated experimentation. This is the case, for example, in clinical trial design when frequentist error rates and power need to be reported; the evaluation requires Monte Carlo experiments with massive repeat simulation.

[Figure 1 about here: (a) setup of a general sequential design problem; (b) setup of sequential stopping problems.]
BSD:
BSD is a model-based paradigm for SDPs, based on a sampling model $p(y_t \mid \theta)$ for the observed data and a prior $p(\theta)$ reflecting the agent's uncertainty about the unknown parameters.
To compare alternative decisions $d_t$, the agent uses an optimality criterion that is formalized as a utility function, which quantifies the agent's preferences under hypothetical data, decisions, and truth. It will be convenient to write the utility as $u(d_t, H_t, \theta)$, where $H_t$ denotes the history at decision time $t$. Rational decision makers should act as if they were to maximize such utility in expectation, conditioning on already observed data and marginalizing with respect to any (still) unknown quantities like future data or parameters (DeGroot, 2004).
To develop a solution strategy, we start at time $T$ (the final horizon or stopping time). Denote by $U(d_T, H_T) = E\big[\,u(d_T, H_T, \theta) \mid H_T\,\big]$ the expected utility at the stopping time $T$. Then, the rational agent would select $d^\star_T = \arg\max_{d_T} U(d_T, H_T)$ at the stopping time. For earlier time steps, rational decisions derive from expectations over future data, with later optimal decisions plugged in,
$$U(d_t, H_t) \;=\; E\big[\, u\big(d^\star_T,\, H_T(d_t),\, \theta\big) \;\big|\; d_t, H_t \,\big], \qquad t < T, \qquad (1)$$
which determines the optimal decision (“Bayes rule”) as
$$d^\star_t \;=\; d^\star_t(H_t) \;=\; \arg\max_{d_t} U(d_t, H_t). \qquad (2)$$
Here, we use a potential outcomes notation $H_T(d_t)$ to emphasize that the future history depends on the action $d_t$ (Robins, 1997), and similarly for other quantities. In the notation for expected utility $U(d_t, H_t)$, the lack of arguments ($\theta$, future data, future actions) indicates marginalization (with respect to future data and parameters) and optimization (with respect to future actions), respectively. For example, in (1) expected utility conditions on $(d_t, H_t)$, but marginalizes w.r.t. future data and $\theta$, and substitutes optimal future decisions $d^\star_{t+1}, \ldots, d^\star_T$.
Some readers, especially those familiar with RL, might wonder why a policy $\pi$ does not appear in the conditional expectation (as it does in RL's state-action value functions). This is because in the BSD framework actions are deterministic: there is no good reason why a rational decision maker would randomize (Berger, 2013). Note, however, that in clinical studies randomization is usually included and desired, but for other reasons – not to achieve an optimal decision, but to facilitate attribution of differences in outcomes to the treatment selection.
RL
RL addresses a wider variety of sequential problems than BSD, provided one can formulate them as an agent interacting with an environment yielding rewards at every time step. Environments can be based on simulations. For example, popular successful RL applications with simulation-based environments include Atari video-games (Mnih et al., 2013), complex board games like chess and Go (Silver et al., 2018), robotic tasks (Tunyasuvunakool et al., 2020) and autonomous driving (Wurman et al., 2022; Sallab et al., 2017).
Just as in BSD, the interactive setup for RL (transitions and rewards) can be defined by a sampling model $p(y_t \mid \theta)$ and a prior $p(\theta)$. The interaction replicates the decision process shown in Figure 1. Each draw $\theta \sim p(\theta)$ constitutes a new instance or episode of the environment. The RL agent seeks to maximize the expected sum of rewards $\sum_t r_t$, known as the return, over an episode. The return is the analogue of the utility function.
Decision rules are called policies in RL. A (stochastic) policy $\pi(a_t \mid H_t)$ maps an observed history $H_t$ to a distribution over actions $a_t$. The optimal policy $\pi^\star$ maximizes the expected return, $\pi^\star = \arg\max_\pi E_\pi\big[\sum_t r_t\big]$. As mentioned before, the notion of stochastic policies is not natural in BSD, with its focus on decisions that a rational agent would take. In fact, under some regularity conditions it can be shown that the optimal RL policy is also deterministic (Puterman, 2014). So why stochastic policies? Stochastic policies in RL serve to implement exploration. In BSD it is assumed that if exploration were called for, it would be recognized by the optimal decision rule. While in theory this is the case, in practice additional reinforcement of exploration is reasonable. Also, as we shall see later, the use of stochastic policies facilitates the search for optimal policies by allowing the use of the policy gradient theorem.
Another close similarity between BSD and RL occurs in the definition of expected utility and the state-action value function in RL. The state-action value function is $Q^\pi(s, a) = E_\pi\big[\sum_{t' \ge t} r_{t'} \mid S_t = s, a_t = a\big]$. The value function of the optimal policy, $Q^\star = Q^{\pi^\star}$, plays the same role as the expected utility $U(d_t, H_t)$ when optimal decisions are substituted for future decisions. An important difference is the stochastic nature of future actions in the state-action value function, versus the deterministic decisions in BSD.
From this brief introduction one can already notice many correspondences between the objects in RL and BSD. Table 1 shows a partial mapping between their respective terminologies. Not all are perfect equivalences. Sometimes common use in BSD and RL involves different levels of marginalization and/or substituting optimal values. Some of the analogies in the table will be developed in the remainder of the paper.
Table 1: Partial mapping between BSD and RL terminology and notation.

| | BSD | RL |
| --- | --- | --- |
| data observed at time $t$ | $y_t$ | $y_t$ |
| history (information set) at decision time $t$ | $H_t$ | $H_t$ |
| action (decision) at time $t$ | $d_t$ | $a_t$ |
| summary (sufficient) statistic | $S_t = S(H_t)$ | state $S_t$ |
| (deterministic) action at time $t$ | $d_t(H_t)$ | n/a (a) |
| action indexed by parameter $\phi$ (policy) | $d_{\phi,t}$ | |
| (random) policy | n/a (no randomization) | $\pi(a_t \mid S_t)$ |
| policy indexed by $\phi$ | n/a | $\pi_\phi$ |
| optimal decision/policy | Bayes rule $d^\star_t$, eq. (2) | optimal policy $\pi^\star$ |
| unknown parameter | $\theta$ | $\theta$ |
| | (usually) required | optional |
| (immediate) reward at time $t$ | n/a (b) | $r_t$ |
| optimality criterion | utility $u(d_t, H_t, \theta)$ | total return $\sum_t r_t$ (or a discounted version) |
| remaining return | | $R_t = \sum_{t' \ge t} r_{t'}$ |
| state-action value | n/a (deterministic $d$) | $Q^\pi(s, a)$ |
| state value | n/a (deterministic $d$) | $V^\pi(s)$ |
| value under optimal future actions | $U(d_t, H_t)$ | $Q^\star(s, a)$ |
| optimal value | $U(H_t)$ | $V^\star(s)$ |
| expectation under policy/decision indexed by $\phi$ | $\hat{U}(\phi)$ | $J(\phi)$ |
| Bellman equation / backward induction | eq. (6) | eq. (13) |

(a) Deterministic policies are not discussed in this review, but see Silver et al. (2014).
(b) An additive decomposition of the utility as a sum of per-period rewards is possible, but not usually made explicit.
2 Two examples of optimal stopping in clinical trials
The two stylized examples introduced here mimic sequential stopping in a clinical trial. The agent is an investigator who is planning and overseeing the trial. The data are clinical outcomes recorded for each patient. In both cases $d_t$ refers to a stopping decision after $t$ (cohorts of) patients. Under continuation ($d_t = 0$), the agent incurs an additional cost for recruiting the next cohort of patients. Under stopping ($d_t \neq 0$), on the other hand, the agent incurs a cost if a wrong (precise meaning to be specified) recommendation is made. At each time the agent has to choose between continuing the study – to learn more – versus stopping and realizing a reward for a good final recommendation (about the treatment). Throughout we use the notions of cost (or loss) and utility interchangeably, treating loss as negative utility.
Example 1: A binary hypothesis
Consider the decision problem of choosing between two hypotheses $H_0{:}\ \theta = \theta_0$ and $H_1{:}\ \theta = \theta_1$. For instance, $\theta$ could represent the probability of a clinical response for an experimental therapy. Assume a binary outcome $y_t$ with a Bernoulli sampling model $y_t \sim \text{Bern}(\theta)$ and a discrete two-point prior on $\{\theta_0, \theta_1\}$.
The possible decisions at any time are $d_t \in \{0, 1, 2\}$. Here $d_t = 0$ indicates continuation, $d_t = 1$ indicates terminating the trial and reporting $H_0$, and $d_t = 2$ means terminating and reporting $H_1$. The utility function includes a (fixed) sampling cost $c$ for each cohort and a final cost $K$ for reporting the wrong hypothesis. The utility function, realized at the stopping time $T$, is
$$u(d_T, H_T, \theta) \;=\; -\,c\,T \;-\; K\,\big\{\mathbb{1}(d_T = 1,\ \theta = \theta_1) + \mathbb{1}(d_T = 2,\ \theta = \theta_0)\big\}. \qquad (3)$$
The relevant history can be represented using the summary statistic $S_t = (t, \bar{y}_t)$, with running mean $\bar{y}_t = \tfrac{1}{t}\sum_{i \le t} y_i$, since this statistic is sufficient for the posterior on $\theta$. The implementations of simulation-based BSD and RL use this summary statistic. The problem parameters $\theta_0$, $\theta_1$, $c$, and $K$ are fixed at specific numerical values. Example trajectories of $\bar{y}_t$, assuming no stopping, are shown in Figure 2(a).

[Figure 2 about here: (a) example trajectories of the running mean $\bar{y}_t$ in Example 1; (b) trajectories of the posterior summaries in Example 2; (c) dose-response curves implied by different prior draws in Example 2.]
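To fix ideas, the following minimal Python sketch simulates forward trajectories for Example 1 and evaluates the posterior from the summary statistic $(t, \bar{y}_t)$. All numerical constants ($\theta_0$, $\theta_1$, prior weight, horizon) are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder problem constants (NOT the values used in the paper).
THETA0, THETA1 = 0.3, 0.5   # response rates under H0 and H1
PRIOR_H1 = 0.5              # prior probability of H1
T_MAX = 50                  # maximum number of cohorts

def simulate_trajectory():
    """Draw theta from the two-point prior and simulate y_1, ..., y_Tmax (no stopping)."""
    theta = THETA1 if rng.random() < PRIOR_H1 else THETA0
    y = rng.binomial(1, theta, size=T_MAX)
    ybar = np.cumsum(y) / np.arange(1, T_MAX + 1)   # running mean, part of S_t = (t, ybar_t)
    return theta, y, ybar

def posterior_prob_h1(t, ybar_t):
    """Posterior P(H1 | S_t) computed from the sufficient statistic (t, ybar_t)."""
    s = t * ybar_t                                   # number of successes
    lik0 = THETA0**s * (1 - THETA0)**(t - s)
    lik1 = THETA1**s * (1 - THETA1)**(t - s)
    return PRIOR_H1 * lik1 / (PRIOR_H1 * lik1 + (1 - PRIOR_H1) * lik0)

theta, y, ybar = simulate_trajectory()
print(theta, posterior_prob_h1(10, ybar[9]))
```

The same forward simulations are reused by the BSD algorithms of Section 3.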



Example 2: A dose-finding study
This example is a stylized version of the ASTIN trial (Grieve and Krams, 2005). The trial aims to find the optimal dose for a drug from a set of candidate doses $z \in \{z_0, z_1, \ldots, z_J\}$, where $z_0 = 0$ stands for placebo. At each time $t$, a dose $z_t$ is assigned to the next patient (cohort), and an efficacy outcome $y_t$ is observed. The aim is to learn about the dose-response curve $f(z) = E(y \mid z)$, and more specifically, to find the dose $z^\star$ (the ED95) that achieves 95% of the maximum possible improvement over placebo. Let $\delta(z) = f(z) - f(0)$ be the advantage over the placebo at dose $z$. We set up a nonlinear regression $y_t = f(z_t) + \epsilon_t$, $\epsilon_t \sim N(0, \sigma^2)$, using a dose-response function
$$f(z) \;=\; E_0 + E_{\max}\,\frac{z}{\mathrm{ED}_{50} + z}, \qquad (4)$$
with parameters $(E_0, E_{\max}, \mathrm{ED}_{50})$ and a prior $p(\theta)$ on the unknown parameters. In the PK/PD (pharmacokinetics/pharmacodynamics) literature, model (4) is known as the $E_{\max}$ model (Meibohm and Derendorf, 1997).
Similar to Example 1, the decisions are $d_t \in \{0, 1, 2\}$, with $d_t = 0$ indicating continuation, $d_t = 1$ indicating stopping the trial and recommending no further investigation of the experimental therapy, and $d_t = 2$ indicating stopping and recommending a follow-up trial, set up as a pivotal trial to test the null hypothesis of no effect at the ED95. If continuing the trial, the next assigned dose is $z_{t+1} = \min(\hat{z}^\star_t,\ z_t + \Delta)$, where $\hat{z}^\star_t$ is the latest estimate of the ED95 and $\Delta$ is a maximum allowable dose escalation between cohorts. If requiring a pivotal trial, $n_2$ patients per arm are assigned to placebo and to the dose $\hat{z}^\star_T$, with $n_2$ computed from the observed data to achieve a desired power at a certain alternative for a test of size $\alpha$. Details are given in Appendix A.
The utility function includes a patient recruitment cost of $c$ per cohort and a prize $C$ if the null hypothesis is rejected in the pivotal trial (meaning the agent found evidence of an effective drug). Denote by $p_T$ the posterior predictive probability of such a rejection, evaluated at the stopping time $T$. At the stopping time $T$, utility is calculated as
$$u(d_T, H_T) \;=\; -\,c\,T \;+\; C\,\mathbb{1}(d_T = 2)\,p_T. \qquad (5)$$
In our implementation we fix some of the parameters in (4) and the cost parameters $c$ and $C$ in (5), leaving as unknown parameters $\theta$ the maximum effect and the location of the ED95. The prior on $\theta$ is normal, with fixed means and variances, and we add a constraint on the parameter range.
The summary statistic is $S_t = (\hat{\delta}_t, s_t)$, where $\hat{\delta}_t$ and $s_t$ are the posterior mean and standard deviation of the ED95 effect $\delta(z^\star)$. Figure 2(b) shows examples of trajectories of these summaries until some maximum time horizon $T_{\max}$ (i.e., assuming no stopping). The trajectories are created by sampling $\theta$ from the prior, assigning doses as described, and sampling responses using (4). Figure 2(c) shows examples of the implied dose-response curve for different prior draws. Notice that the chosen summaries do not capture the full posterior: $S_t$ is not a sufficient statistic. However, as shown in Appendix A, the pivotal-trial sample size and rejection probability, and therefore the utility, depend on the data only through $S_t$.
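The following sketch illustrates the forward simulation of such trajectories under the $E_{\max}$ model. The prior, dose grid, noise level, and escalation cap are illustrative placeholders, and the dose assignment targets the true (rather than estimated) ED95 purely to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder constants (NOT the values used in the paper).
DOSES = np.linspace(0.0, 1.0, 11)   # candidate doses; dose 0 = placebo
SIGMA = 1.0                         # residual standard deviation
MAX_ESCALATION = 0.2                # maximum dose escalation between cohorts
T_MAX = 30

def emax(z, e0, emax_, ed50):
    """Emax dose-response curve, the assumed form of model (4)."""
    return e0 + emax_ * z / (ed50 + z)

def ed95(ed50):
    """Dose achieving 95% of the maximum improvement over placebo: z = 19 * ED50."""
    return 19.0 * ed50

def simulate_trajectory():
    # Prior draw (placeholder prior, with a positivity constraint on ED50).
    e0, emax_, ed50 = 0.0, rng.normal(1.0, 0.5), abs(rng.normal(0.05, 0.03))
    z, zs, ys = DOSES[1], [], []
    for t in range(T_MAX):
        y = emax(z, e0, emax_, ed50) + rng.normal(0, SIGMA)
        zs.append(z); ys.append(y)
        # Next dose: move toward the ED95, with a capped escalation step.
        target = min(ed95(ed50), DOSES[-1])   # oracle target, for illustration only
        z = min(target, z + MAX_ESCALATION)
    return np.array(zs), np.array(ys)

zs, ys = simulate_trajectory()
print(zs[:5], ys[:5])
```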
3 Simulation-based Bayesian sequential design
From the expected utility definition in (1), one immediately deduces that
$$U(d_t, H_t) \;=\; E\big[\, U(d^\star_{t+1}, H_{t+1}) \;\big|\; d_t, H_t \,\big], \qquad (6)$$
with the expectation being with respect to the next observation and $\theta$, and substituting the optimal choice for the next decision, $d^\star_{t+1}$. In words, for a rational agent taking optimal actions, the expected utility given the current history must be the same as the expected utility in the next step. Thus, one can (theoretically) deduce $d^\star_t$ from knowing the best actions for all possible future histories by implementing backward induction, starting at $t = T$. Equation (6) is known as the Bellman equation (Bellman, 1966).
Enumerating all histories is computationally intractable in most realistic scenarios, rendering backward induction usually infeasible, except in some special setups (Berry and Ho, 1988; Christen and Nakamura, 2003). Simulation-based BSD comes to help: instead of enumerating all possible histories, we compute approximations using simulated trajectories. The version presented here follows Müller et al. (2007). Similar schemes are developed in Brockwell and Kadane (2003), Kadane and Vlachos (2002) and Carlin et al. (1998). We use two strategies: first, we represent the history through a (low-dimensional) summary statistic $S_t = S(H_t)$, as already hinted in Section 2 when we proposed the posterior moments of the ED95 response for Example 2. The second – and closely related – strategy is to restrict the decision rules $d_t$ to depend on $H_t$ only indirectly through $S_t$. Two instances of this approach are discussed below and used to solve Examples 1 and 2.
3.1 Constrained backward induction
Constrained backward induction is an algorithm consisting of three simple steps. The first step is forward simulation. Our implementation here uses the assumption that the sequential nature is limited to sequential stopping, so trajectories can be generated assuming no stopping, independently from decisions. Throughout we use $d_t = 0$ to denote continuation. Other actions, $d_t \neq 0$, indicate stopping the study and the choice of a terminal decision. The second step is constrained backward induction, which implements (6) using decisions restricted to depend on the history only indirectly through $S_t$. The third step simply keeps track of the best action and iterates until convergence. We first briefly explain these steps and then provide an illustration of their application in Example 1 and additional implementation considerations.
- Step 1. Forward simulation: Simulate many trajectories, $m = 1, \ldots, M$, until some maximum number of steps $T_{\max}$ (e.g. cohorts in a trial). To do this, each trajectory $m$ corresponds to a different prior draw $\theta^{(m)} \sim p(\theta)$ and samples $y^{(m)}_t \sim p(y_t \mid \theta^{(m)})$, $t = 1, \ldots, T_{\max}$. For each $m$ and $t$, we evaluate and record the summary statistic $S^{(m)}_t$, discretized over a grid.
- Step 2. Backward induction: For each possible decision $d$ and each grid value $S$, the algorithm approximates the expected utility $U(d, S)$ using the forward simulations and the Bellman equation, as follows. Denote with $A(S) = \{(m, t):\, S^{(m)}_t \in \text{grid cell } S\}$ the set of forward simulations that fall within grid cell $S$. For stopping decisions, $d \neq 0$, the approximation $\hat{U}(d, S)$ is the average of the realized utilities over $A(S)$. For continuation,
$$\hat{U}(0, S) \;=\; \frac{1}{|A(S)|} \sum_{(m,t) \in A(S)} \hat{U}\big(d^\star(S^{(m)}_{t+1}),\, S^{(m)}_{t+1}\big). \qquad (7)$$
The evaluation under $d = 0$ requires the optimal actions $d^\star(\cdot)$ in the next period. We use an initial guess (see below), which is then iteratively updated (see next).
- Step 3. Iteration: Update the table of optimal actions $d^\star(S) = \arg\max_d \hat{U}(d, S)$ after Step 2. Repeat Steps 2 and 3 until updating in Step 3 requires no (or fewer than a minimum number of) changes of the tabulated $d^\star(S)$.
Figure 3(a) shows the estimated expected utility $\hat{U}$ in Example 1, using $M$ forward simulations and 100 grid values for the running average $\bar{y}_t$. Optimal actions are shown in Figure 3(b). The numerical uncertainty due to the Monte Carlo evaluation of the expectations is visible. If desired, one could reduce it by appropriate smoothing (MacDonald et al., 2015). One can verify, however, that the estimates are a close approximation to the analytic solution, which is available in this case (Müller et al., 2007).
We explain Step 2 by example. Consider Figure 3(b) and assume, for example, that we need the posterior expected utility at a particular grid cell $S = (t, \bar{y})$. In this stylized representation, only three simulations pass through this grid cell. In this case all three trajectories share the same $t$, since $t$ is part of the summary statistic. We evaluate $\hat{U}(d, S)$ for the two stopping decisions as averages of the corresponding realized utilities over the three trajectories. For continuation, $d = 0$, we first determine the grid cells in the next period for each of the three trajectories. We then look up the optimal decisions currently tabulated for those cells and average the corresponding values $\hat{U}\big(d^\star(S_{t+1}), S_{t+1}\big)$, as in (7).
Constrained backward induction requires iterative updates of the optimal actions $d^\star(S)$ and their values. The procedure starts with arbitrary initial values for $d^\star(S)$, recorded on a grid over $S$. For example, a possible initialization is $d^\star(S) = \arg\max_{d \neq 0} \hat{U}(d, S)$, maximizing over all actions that do not involve continuation. With such initial values, $\hat{U}$ can be evaluated over the entire grid. Then, for updating the optimal actions one should best start from grid values that are associated with the time horizon $T_{\max}$, or at least high $t$. This is particularly easy when $t$ is an explicit part of $S_t$, as in Example 1 with $S_t = (t, \bar{y}_t)$. Another typical example arises in Example 2 with $S_t = (\hat{\delta}_t, s_t)$, the mean and standard deviation of some quantity of interest. For large $t$ we expect small $s_t$, making it advisable to start updating in each iteration with the grid cells corresponding to the smallest $s_t$. We iterate until no more (or few) changes happen.
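A compact sketch of this iteration for Example 1 is shown below. It assumes the forward simulations have already been binned into grid cells; the data layout and sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Placeholder sizes (illustrative, not the paper's implementation).
T_MAX, N_GRID = 50, 100    # time horizon and number of grid cells for ybar
N_ACTIONS = 3              # 0 = continue, 1 = stop & report H0, 2 = stop & report H1

def constrained_backward_induction(cells, max_sweeps=100):
    """Iterate Steps 2 and 3 on a (t, grid cell) table.

    `cells[(t, j)]` is assumed to hold a list of tuples
    (u_stop_h0, u_stop_h1, (t_next, j_next)) precomputed from the forward
    simulations: realized utility when stopping now with either report
    (sampling costs included), plus the next-period grid cell of the trajectory.
    """
    U = np.full((T_MAX + 1, N_GRID, N_ACTIONS), -np.inf)
    d_star = np.ones((T_MAX + 1, N_GRID), dtype=int)     # arbitrary initial guess: stop

    for _ in range(max_sweeps):
        changed = 0
        for t in reversed(range(T_MAX + 1)):             # start from the horizon
            for j in range(N_GRID):
                sims = cells.get((t, j), [])
                if not sims:
                    continue
                U[t, j, 1] = np.mean([u0 for u0, _, _ in sims])
                U[t, j, 2] = np.mean([u1 for _, u1, _ in sims])
                if t < T_MAX:                            # continuation value, eq. (7)
                    U[t, j, 0] = np.mean(
                        [U[tn, jn, d_star[tn, jn]] for _, _, (tn, jn) in sims]
                    )
                new_d = int(np.argmax(U[t, j]))
                changed += int(new_d != d_star[t, j])
                d_star[t, j] = new_d
        if changed == 0:                                 # Step 3: stop when the table is stable
            break
    return U, d_star
```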
The algorithm can be understood as an implementation of (6).
Consider an arbitrary function $f(d, S)$ over pairs $(d, S)$ and the operator $\mathcal{T}$ defined as $(\mathcal{T}f)(d, S) = E\big[\max_{d'} f(d', S') \mid d, S\big]$, where $S'$ is the summary statistic resulting from sampling one more data point from the (unknown) $\theta$ and recomputing the statistic from $S$ and the new observation. Then the Bellman equation (6) can be written as $\mathcal{T}U = U$. In other words, expected utility under the optimal decision is a fixed point of the operator $\mathcal{T}$. Constrained backward induction attempts to find an approximate solution to the fixed-point equation. The same principle motivates the Q-learning algorithm in RL (Watkins and Dayan, 1992) (see Section 4). Backward induction is also closely related to the value iteration algorithm for Markov decision processes (Sutton and Barto, 2018), which relies on exact knowledge of the state transition function.

[Figure 3 about here: (a) estimated expected utility under constrained backward induction; (b) implied optimal actions; (c) estimated state-action values under DQN; (d) implied optimal actions under DQN; (e) parametric decision boundaries, with the backward induction solution overlaid; (f) estimated expected utility attained by each method.]
3.2 Sequential design with decision boundaries
Inspection of Figure 3(b) suggests an attractive alternative algorithm. Notice the decision boundaries on $\bar{y}_t$ that trace a funnel, with an upper boundary separating continuation from stopping to report $H_1$, and a lower boundary separating continuation versus stopping to report $H_0$.
Recognizing such boundaries suggests an alternative approach based on searching for optimal boundaries within a suitable parametric family indexed by $\phi$.
This approach turns the sequential decision problem of finding optimal decisions $d^\star_t$ into the non-sequential problem of finding an optimal $\phi$. This method is used, for example, in Rossell et al. (2007).
In Example 1, we could use a lower boundary $b_1(t)$ and an upper boundary $b_2(t)$ for the running mean $\bar{y}_t$, indexed by a single tuning parameter $\phi$. Both functions are linear in $t$ and mimic the funnel shape seen in Figure 3(b). The decision rule implied by these boundaries is
$$d_{\phi,t}(S_t) \;=\; \begin{cases} 2\ (\text{stop, report } H_1) & \text{if } \bar{y}_t \ge b_2(t), \\ 0\ (\text{continue}) & \text{if } b_1(t) < \bar{y}_t < b_2(t), \\ 1\ (\text{stop, report } H_0) & \text{if } \bar{y}_t \le b_1(t). \end{cases}$$
Here, the additional subscript $\phi$ in $d_{\phi,t}$ indicates that the decision follows the rule implied by the decision boundaries $(b_1, b_2)$. At $t = 1$ the boundaries lie outside the range of $\bar{y}_1$, ensuring continuation at the first step.
For a given $\phi$, the forward simulations are used to evaluate expected utilities under the policy $d_\phi$. Let $T_\phi$ denote the stopping time under $d_\phi$. Then the expected utility under policy $d_\phi$ is
$$U(\phi) \;=\; E\big[\, u\big(d_{\phi, T_\phi}, H_{T_\phi}, \theta\big) \,\big], \qquad (8)$$
where the expectation is with respect to the data and $\theta$. It is approximated as an average over all Monte Carlo simulations, stopping each simulation at $T^{(m)}_\phi$, as determined by the parametric decision boundaries,
$$\hat{U}(\phi) \;=\; \frac{1}{M} \sum_{m=1}^{M} u\big(d_{\phi, T^{(m)}_\phi},\, H^{(m)}_{T^{(m)}_\phi},\, \theta^{(m)}\big). \qquad (9)$$
Optimizing w.r.t. $\phi$ we find the optimal decision boundaries $\phi^\star = \arg\max_\phi \hat{U}(\phi)$. As long as the nature of the sequential decision is restricted to sequential stopping, the same set of Monte Carlo simulations can be used to evaluate all $\hat{U}(\phi)$, using different truncation to evaluate (9). In general, a separate set of forward simulations for each $\phi$, or other simplifying assumptions, might be required.
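A sketch of this evaluation for Example 1 is given below, reusing one set of forward simulations for every candidate $\phi$. The boundary family and all constants are illustrative assumptions, not the paper's choices.

```python
import numpy as np

# Placeholder constants (illustrative, not the paper's values).
C, K = 1.0, 100.0          # sampling cost per cohort, cost of a wrong report
THETA0, THETA1 = 0.3, 0.5
T_MAX = 50

def utility_of_phi(phi, thetas, ybars):
    """Monte Carlo estimate of U(phi) as in (9), reusing forward simulations.

    thetas: array of prior draws, one per trajectory
    ybars:  array (M, T_MAX) of running means simulated assuming no stopping
    Assumed boundary family: stop once |ybar_t - midpoint| >= phi / t.
    """
    mid = 0.5 * (THETA0 + THETA1)
    total = 0.0
    for theta, ybar in zip(thetas, ybars):
        for t in range(1, T_MAX + 1):
            dist = ybar[t - 1] - mid
            if abs(dist) >= phi / t or t == T_MAX:      # truncate trajectory at T_phi
                report_h1 = dist >= 0
                wrong = (report_h1 and theta == THETA0) or (not report_h1 and theta == THETA1)
                total += -C * t - K * wrong             # realized utility as in (3)
                break
    return total / len(thetas)

# Grid search over the single tuning parameter phi (thetas, ybars from forward simulation):
# phis = np.linspace(0.1, 3.0, 30)
# phi_star = max(phis, key=lambda p: utility_of_phi(p, thetas, ybars))
```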
Figure 3(e) shows the decision boundaries for the best parameter $\phi^\star$ in Example 1. The estimated expected utilities are in Figure 3(f). In Figure 3(e), the boundaries using constrained backward induction are overlaid in the image for comparison. The decision boundaries trace the optimal decisions under backward induction well. The differences in expected utility close to the decision boundary are likely very small, leaving minor variations in the decision boundary negligible.
The same approach is applied to the (slightly more complex) Example 2.
Recall the form of the summary statistic $S_t = (\hat{\delta}_t, s_t)$, the posterior mean and standard deviation of the ED95 effect. We use a lower and an upper boundary for $\hat{\delta}_t$, both linear in $s_t$, parameterized by $\phi = (\phi_1, \phi_2, \phi_3)$. The implied decision rule is to stop and abandon the therapy ($d_t = 1$) when $\hat{\delta}_t$ falls below the lower boundary, to stop and recommend a pivotal trial ($d_t = 2$) when $\hat{\delta}_t$ exceeds the upper boundary, and to continue ($d_t = 0$) otherwise.
The results are in Figure 4(a). Again, the sequential decision problem is reduced to the optimization problem of finding the optimal $\phi$ in (9). Since $\phi$ is now three-dimensional, the evaluation of $\hat{U}(\phi)$ requires a 3-dimensional grid. To borrow strength from Monte Carlo evaluations of (9) across neighboring grid points for $\phi$ we proceed as follows. We evaluate $\hat{U}(\phi)$ on a coarse grid, and then fit a quadratic response surface (as a function of $\phi$) to these Monte Carlo estimates. The optimal $\phi^\star$ is the maximizer of the quadratic fit, which defines the boundaries shown in Figure 4(a). Instead of evaluating $\hat{U}(\phi)$ on a regular grid over $\phi$, one could alternatively select a random set of design points (in $\phi$).
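For illustration, a minimal sketch of the response-surface step follows, assuming Monte Carlo estimates $\hat{U}(\phi)$ have been evaluated on a coarse grid; it uses ordinary least squares and, as a simplification, returns the grid point with the highest fitted value rather than the analytic maximizer of the quadratic.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design(Phi):
    """Design matrix with intercept, linear, and quadratic terms in a 3-dim phi."""
    cols = [np.ones(len(Phi))]
    cols += [Phi[:, j] for j in range(Phi.shape[1])]
    cols += [Phi[:, i] * Phi[:, j]
             for i, j in combinations_with_replacement(range(Phi.shape[1]), 2)]
    return np.column_stack(cols)

def fit_and_maximize(Phi, U_hat):
    """Fit a quadratic response surface to Monte Carlo estimates U_hat over Phi."""
    X = quadratic_design(Phi)
    beta, *_ = np.linalg.lstsq(X, U_hat, rcond=None)
    fitted = X @ beta
    return Phi[np.argmax(fitted)], beta

# Example usage with a placeholder coarse grid and evaluations U_hat:
# grid = np.array(np.meshgrid(*[np.linspace(0, 1, 5)] * 3)).reshape(3, -1).T
# phi_star, beta = fit_and_maximize(grid, U_hat)
```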
The use of parametric boundaries is closely related to the notion of function approximation and the method of policy gradients in RL, which will be described next.

[Figure 4 about here: (a) estimated decision boundaries for Example 2 under the parametric-boundary approach; (b) the policy learned by PPO for Example 2.]
4 Reinforcement learning
The basic setup in RL is usually framed in terms of a Markov decision process (MDP) (Puterman, 2014). The Markov property ensures that optimal decisions depend only on the most recently observed state, enabling practicable algorithms. In this section we first describe MDPs and how a sequential design problem can be adapted to fit this framework. Next, we discuss two algorithms, Q-learning (Watkins and Dayan, 1992) and policy gradients (Grondman et al., 2012), implemented in Examples 1 and 2, respectively. Both methods are implemented using neural networks. Throughout this section, the summary statistics are referred to as states, in keeping with the common terminology in the RL literature.

[Figure 5 about here: graphical representation of a hidden-parameter MDP and the resulting belief-state MDP.]
4.1 Markov decision processes and partial observability
The Markov property for a decision process is defined by the conditions
$$p(S_{t+1} \mid S_t, a_t, H_t) = p(S_{t+1} \mid S_t, a_t) \quad \text{and} \quad p(r_t \mid S_t, a_t, H_t) = p(r_t \mid S_t, a_t).$$
That is, the next state and the reward depend on the history only indirectly, through the current state and action. When the condition holds, the decision process is called an MDP. For MDPs, the optimal policy is only a function of the latest state and not of the entire history (Puterman, 2014). Many RL algorithms assume the Markov property. However, many sequential decision problems are more naturally characterized as partially observable MDPs (POMDPs) that satisfy the Markov property only conditional on a parameter $\theta$ that is generated at the beginning of each episode. Such problems have
been studied in the RL literature under the name of Hidden-Parameter MDP (HiMDP) (Doshi-Velez and Konidaris, 2016).
There is a standard – Bayesian-motivated – way to cast any POMDP as an MDP using so-called belief states (Cassandra et al., 1997). Belief states are obtained by including the posterior distribution of unobserved parameters as a part of the state, $B_t = \big(S_t,\, p(\theta \mid H_t)\big)$. While the belief state is, in general, a function, it can often be represented as a vector when the posterior admits a finite (sufficient) summary statistic. The reward distribution can also be written in terms of such belief states as
$$p(r_t \mid B_t, a_t) \;=\; \int p(r_t \mid S_t, a_t, \theta)\, p(\theta \mid H_t)\, d\theta. \qquad (10)$$
See Figure 5 for a graphical representation of an HiMDP and the resulting belief MDP.
We implement this approach for Example 1. The reward is chosen to match the definition of utility. It suffices to define it in terms of $\theta$ (and let the posterior take care of the rest, using (10)). We use
$$r_t \;=\; \begin{cases} -\,c & \text{if } a_t = 0\ (\text{continue}), \\ -\,K\,\big\{\mathbb{1}(a_t = 1,\ \theta = \theta_1) + \mathbb{1}(a_t = 2,\ \theta = \theta_0)\big\} & \text{if } a_t \neq 0\ (\text{stop}), \end{cases} \qquad (11)$$
as in (3). Next we introduce the belief states. Recall the notation from Example 1. We have $p(\theta = \theta_1 \mid H_t) = p(\theta = \theta_1 \mid t, \bar{y}_t)$, that is, the posterior depends on the data only through $(t, \bar{y}_t)$. The summary statistic $S_t = (t, \bar{y}_t)$ is therefore a two-dimensional representation of the belief state.
Considering Example 2, we note that the utility function (5) depends on the state only, and does not involve $\theta$. We define
$$r_t \;=\; \begin{cases} -\,c & \text{if } a_t = 0\ (\text{continue}), \\ 0 & \text{if } a_t = 1\ (\text{stop, abandon}), \\ C\,p_t & \text{if } a_t = 2\ (\text{stop, pivotal trial}), \end{cases} \qquad (12)$$
as in (5). While the reward is clearly Markovian, the transition probability is not necessarily Markovian. This is the case because in this example the posterior moments $(\hat{\delta}_t, s_t)$ are not a sufficient statistic. In practice, however, a minor violation of the Markov assumption for the transition distribution does not seem to affect the ability to obtain good policies with standard RL techniques.
4.2 Q-learning
Q-learning (Watkins and Dayan, 1992; Clifton and Laber, 2020; Murphy, 2003) is an RL algorithm that is similar in spirit to the constrained backward induction described in Section 3.1. The starting point is the Bellman optimality equation for MDPs (Bellman, 1966). Equation (6), for the optimal value function and written for MDPs, becomes
$$Q^\star(s, a) \;=\; E\big[\, r_t + \max_{a'} Q^\star(S_{t+1}, a') \;\big|\; S_t = s,\ a_t = a \,\big], \qquad (13)$$
where the expectation is with respect to the reward $r_t$ and the next state $S_{t+1}$. The optimal policy $\pi^\star(s) = \arg\max_a Q^\star(s, a)$ is implicitly defined by the solution $Q^\star$ of (13).
Q-learning proceeds iteratively, following the fixed-point iteration principle. Let $\hat{Q}$ be some approximation of $Q^\star$. We assume that a set of simulated transitions $(s, a, r, s')$ is available. This collection is used like the forward simulations in the earlier discussion. In RL it is known as the “experience replay buffer”, and can be generated using any stochastic policy. Suppose, for the moment, that the state and action spaces are finite and discrete, allowing $\hat{Q}$ to be recorded in a table. Q-learning is defined by updating $\hat{Q}$, for a sampled transition $(s, a, r, s')$ and a learning rate $\alpha \in (0, 1)$, as
$$\hat{Q}(s, a) \;\leftarrow\; (1 - \alpha)\,\hat{Q}(s, a) + \alpha\,\big[\, r + \max_{a'} \hat{Q}(s', a') \,\big]. \qquad (14)$$
Note the moving-average nature of the update. The procedure iterates until convergence from a stream of transitions.
Deep Q-networks (DQN) (Mnih et al., 2013) are an extension of Q-learning to continuous states. A neural network $Q_\eta(s, a)$ is used to represent $\hat{Q}$. Let $\eta_k$ denote the parameters of the neural network at iteration $k$. Using simulated transitions $(s, a, r, s')$ from the buffer, DQN performs the updates
$$\eta_{k+1} \;=\; \arg\min_{\eta}\; E\Big[\big(\, r + \max_{a'} Q_{\eta_k}(s', a') - Q_\eta(s, a) \,\big)^2\Big], \qquad (15)$$
where the expectation is an average over transitions in the replay buffer.
In practice, exact minimization is replaced by a gradient step from mini-batches, together with numerous implementation tricks (Mnih et al., 2013).
We implemented DQN in Example 1 using the Python package Stable-Baselines3 (Raffin et al., 2021). The experience replay buffer is continuously updated. The algorithm uses a random policy to produce an initial buffer and then adds experience from an $\epsilon$-greedy policy, where the current best guess for the optimal policy is chosen with probability $1 - \epsilon$, and fully random actions are chosen otherwise, with probability $\epsilon$. Figure 3(c) shows $Q_\eta$, the estimate of $Q^\star$, for each state and action. Figure 3(d) shows the corresponding optimal actions. Overall, the results are similar to the results with constrained backward induction, but much smoother. Also, notice that the solution under DQN is usually better in terms of expected utility, as shown in Figure 3(f), even in (out-of-sample) evaluation episodes. This is likely due to the flexibility and high-dimensional nature of the neural network approximation.
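For reference, a training call of this kind with Stable-Baselines3 might look as follows, using the illustrative environment sketched in Section 4.1; the hyperparameters shown are placeholders rather than the settings used for the reported results.

```python
from stable_baselines3 import DQN

# Assumes the SequentialStoppingEnv sketch from Section 4.1.
env = SequentialStoppingEnv()
model = DQN("MlpPolicy", env, learning_rate=1e-3, exploration_fraction=0.2, verbose=1)
model.learn(total_timesteps=200_000)

# Greedy (deterministic) actions from the learned Q-network.
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```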
The better performance of RL comes at a price. The first issue is sample efficiency (the number of simulations required by the algorithm to yield a good policy). The best policy is obtained only after 2 million sampled transitions. Data efficiency is a known problem in DQN, and in RL in general (Yu, 2018). In many real applications investigators cannot afford such a high number of simulation steps. Another limitation is training instability. In particular, Figure 3(f) illustrates a phenomenon known as catastrophic forgetting, which happens when additional training decreases the ability of the agent to perform a previously learned task (Atkinson et al., 2021). This can happen because of the instability that arises from a typical strategy of evaluating performance periodically and keeping track of the best performing policy only. Several improvements over basic DQN have been proposed, with improved performance and efficiency (Hessel et al., 2018).
4.3 Policy gradients
The approach is similar to the use of parametric boundaries discussed before. Policy gradient (PG) approaches start from a parameterization of the policy, $\pi_\phi(a \mid s)$. Again, consider a neural network (NN), now with weights $\phi$. The goal of a PG method is to maximize the objective
$$J(\phi) \;=\; E_{\pi_\phi}\Big[\sum_t r_t\Big]. \qquad (16)$$
This objective is the analogue of maximizing $\hat{U}(\phi)$ in (9), except that here the stochastic policy $\pi_\phi$ is a probability distribution over decisions. The main characteristic of PG methods is the use of gradient ascent to solve (16).
The evaluation of gradients is based on the PG theorem (Sutton et al., 1999),
$$\nabla_\phi J(\phi) \;=\; E_{\pi_\phi}\Big[\, R(\tau)\, \sum_t \nabla_\phi \log \pi_\phi(a_t \mid s_t) \,\Big], \qquad (17)$$
where the total return $R(\tau) = \sum_t r_t$ is a function of the entire trajectory $\tau = (s_0, a_0, r_0, s_1, \ldots)$. Using gradients, PG methods can optimize over high-dimensional parameter spaces like those of neural networks. In practice, estimates of the gradient are known to have huge variance, affecting the optimization. But several implementation tricks exist that improve the stability and reduce the variance. Proximal policy optimization (PPO) (Schulman et al., 2017) incorporates many of these tricks, and is widely used as a default for PG-based methods.
The PG theorem is essentially Leibniz' rule for the gradient of $J(\phi)$.
With a slight abuse of notation, write $p_\phi(\tau)$ for the distribution of the trajectory $\tau$ induced by using $\pi_\phi$ for the sequential decisions. Then Leibniz' rule for the gradient of the integral gives
$$\nabla_\phi J(\phi) \;=\; \nabla_\phi \int R(\tau)\, p_\phi(\tau)\, d\tau \;=\; \int R(\tau)\, \nabla_\phi \log p_\phi(\tau)\; p_\phi(\tau)\, d\tau \;=\; E_{\pi_\phi}\big[\, R(\tau)\, \nabla_\phi \log p_\phi(\tau) \,\big], \qquad (18)$$
where the expectations are with respect to the (stochastic) policy over actions (the transition probabilities do not depend on $\phi$). The log probability in the last expression can be written as a sum of log probabilities, yielding (17).
PPO is implemented in Example 2 using Stable-Baselines3 (Raffin et al., 2021). The results are shown in Figure 4(b). Not surprisingly, the results are similar to those obtained earlier using parametric decision boundaries. Interestingly, the figure shows that neural networks do not necessarily extrapolate well to regions with little data. This behavior is noticeable in the lower left corner of the figure, where there could be data, but where none is observed in practice because of the early stopping implied by the boundaries.
5 Discussion
We have introduced some of the main features of RL and BSD in the context of two optimal stopping problems. In the context of these examples the two approaches are quite similar, including an almost one-to-one mapping of terminology and notation, as we attempted in Table 1. In general, however, the applicability, and especially the practical use, of RL is much wider. The restriction of the sequential problems to optimal stopping was only needed for easy application of the BSD solution. In contrast, RL methods are routinely used for a variety of other problems, such as robotics (Tunyasuvunakool et al., 2020), autonomous driving (Sallab et al., 2017; Wurman et al., 2022), and smart building energy management (Yu et al., 2021). The main attraction of BSD is the principled nature of the solution. One can argue from first principles that a rational agent should act as if he or she were optimizing expected utility as in (1). There is a well-defined and coherent propagation of uncertainties. This might be particularly important when the SDP and the underlying model are only part of a bigger problem. Overall, we note that the perspective of one method, and its corresponding algorithms, can be useful for improvements in the other method. For example, policy gradients could readily be used to solve BSD if randomized decision rules were used; the latter is usually not considered. On the other hand, hierarchical Bayesian inference models could be used to combine multiple sources of evidence in making sequential decisions under RL, or multiple related problems could be linked in a well-defined manner in a larger encompassing model. For example, clinical trials are never carried out in isolation. Often the same department or group might run multiple trials on the same patient population for the same disease, with obvious opportunities to borrow strength.
Acknowledgements
The authors gratefully thank Peter Stone and the Learning Agents Research Group (LARG) for helpful discussions and feedback.
Funding
Yunshan Duan and Peter Müller are partially funded by the NSF under grant NSF/DMS 1952679.
Conflicts of Interest
The authors report there are no competing interests to declare.
References
- Atkinson, C., B. McCane, L. Szymanski, and A. Robins (2021). Pseudo-rehearsal: Achieving deep reinforcement learning without catastrophic forgetting. Neurocomputing 428, 291–307.
- Bellman, R. (1966). Dynamic programming. Science 153(3731), 34–37.
- Berger, J. O. (2013). Statistical Decision Theory and Bayesian Analysis. Springer Science & Business Media.
- Berry, D. A. and C.-H. Ho (1988). One-sided sequential stopping boundaries for clinical trials: A decision-theoretic approach. Biometrics 44(1), 219–227.
- Brockwell, A. E. and J. B. Kadane (2003). A gridding method for Bayesian sequential decision problems. Journal of Computational and Graphical Statistics 12(3), 566–584.
- Carlin, B. P., J. B. Kadane, and A. E. Gelfand (1998). Approaches for optimal sequential decision analysis in clinical trials. Biometrics 54, 964–975.
- Cassandra, A., M. L. Littman, and N. L. Zhang (1997). Incremental pruning: A simple, fast, exact method for partially observable Markov decision processes. In Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence, pp. 54–61.
- Christen, J. A. and M. Nakamura (2003). Sequential stopping rules for species accumulation. Journal of Agricultural, Biological, and Environmental Statistics 8(2), 184–195.
- Clifton, J. and E. Laber (2020). Q-learning: Theory and applications. Annual Review of Statistics and Its Application 7(1), 279–301.
- DeGroot, M. (2004). Optimal Statistical Decisions. New York: Wiley-Interscience.
- Doshi-Velez, F. and G. Konidaris (2016). Hidden parameter Markov decision processes: A semiparametric regression approach for discovering latent task parametrizations. In IJCAI: Proceedings of the Conference, Volume 2016, pp. 1432.
- Gaon, M. and R. Brafman (2020). Reinforcement learning with non-Markovian rewards. In Thirty-Fourth AAAI Conference on Artificial Intelligence.
- Grieve, A. P. and M. Krams (2005). ASTIN: A Bayesian adaptive dose–response trial in acute stroke. Clinical Trials 2(4), 340–351.
- Grondman, I., L. Busoniu, G. A. Lopes, and R. Babuska (2012). A survey of actor-critic reinforcement learning: Standard and natural policy gradients. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42(6), 1291–1307.
- Hessel, M., J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver (2018). Rainbow: Combining improvements in deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence.
- Kadane, J. B. and P. K. Vlachos (2002). Hybrid methods for calculating optimal few-stage sequential strategies: Data monitoring for a clinical trial. Statistics and Computing 12, 147–152.
- MacDonald, B., P. Ranjan, and H. Chipman (2015). GPfit: An R package for fitting a Gaussian process model to deterministic simulator outputs. Journal of Statistical Software 64, 1–23.
- Meibohm, B. and H. Derendorf (1997). Basic concepts of pharmacokinetic/pharmacodynamic (PK/PD) modelling. International Journal of Clinical Pharmacology and Therapeutics 35(10), 401–413.
- Mnih, V., K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller (2013). Playing Atari with deep reinforcement learning. arXiv preprint 1312.5602.
- Müller, P., D. A. Berry, A. P. Grieve, M. Smith, and M. Krams (2007). Simulation-based sequential Bayesian design. Journal of Statistical Planning and Inference 137(10), 3140–3150.
- Murphy, S. A. (2003). Optimal dynamic treatment regimes. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 65(2), 331–355.
- Murphy, S. A., D. W. Oslin, A. J. Rush, and J. Zhu (2007). Methodological challenges in constructing effective treatment sequences for chronic psychiatric disorders. Neuropsychopharmacology 32(2), 257–262.
- Parmigiani, G. and L. Inoue (2009). Decision Theory: Principles and Approaches. Wiley.
- Puterman, M. L. (2014). Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons.
- Raffin, A., A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann (2021). Stable-Baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research 22(268), 1–8.
- Robins, J. M. (1997). Causal inference from complex longitudinal data. In Latent Variable Modeling and Applications to Causality, pp. 69–117. Springer.
- Rossell, D., P. Müller, and G. Rosner (2007). Screening designs for drug development. Biostatistics 8, 595–608.
- Sallab, A. E., M. Abdou, E. Perot, and S. Yogamani (2017). Deep reinforcement learning framework for autonomous driving. Electronic Imaging 2017(19), 70–76.
- Schulman, J., F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017). Proximal policy optimization algorithms. arXiv preprint 1707.06347.
- Shen, W. and X. Huan (2021). Bayesian sequential optimal experimental design for nonlinear models using policy gradient reinforcement learning. arXiv preprint 2110.15335.
- Silver, D., T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, et al. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362(6419), 1140–1144.
- Silver, D., G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller (2014). Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, pp. 387–395.
- Sutton, R. S. and A. G. Barto (2018). Reinforcement Learning: An Introduction. MIT Press.
- Sutton, R. S., D. McAllester, S. Singh, and Y. Mansour (1999). Policy gradient methods for reinforcement learning with function approximation. In S. Solla, T. Leen, and K. Müller (Eds.), Advances in Neural Information Processing Systems, Volume 12.
- Tunyasuvunakool, S., A. Muldal, Y. Doron, S. Liu, S. Bohez, J. Merel, T. Erez, T. Lillicrap, N. Heess, and Y. Tassa (2020). dm_control: Software and tasks for continuous control. Software Impacts 6, 100022.
- Watkins, C. J. and P. Dayan (1992). Q-learning. Machine Learning 8(3–4), 279–292.
- Wurman, P. R., S. Barrett, K. Kawamoto, J. MacGlashan, K. Subramanian, T. J. Walsh, R. Capobianco, A. Devlic, F. Eckert, F. Fuchs, et al. (2022). Outracing champion Gran Turismo drivers with deep reinforcement learning. Nature 602(7896), 223–228.
- Yu, L., S. Qin, M. Zhang, C. Shen, T. Jiang, and X. Guan (2021). A review of deep reinforcement learning for smart building energy management. IEEE Internet of Things Journal 8, 12046–12063.
- Yu, Y. (2018). Towards sample efficient reinforcement learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence.
- Zhao, Y., M. R. Kosorok, and D. Zeng (2009). Reinforcement learning design for cancer clinical trials. Statistics in Medicine 28(26), 3294–3315.
Appendix
Appendix A Details in Example 2
We assume a nonlinear regression sampling model
$$y_t \sim N\big(f(z_t),\, \sigma^2\big),$$
and the dose-response curve
$$f(z) \;=\; E_0 + E_{\max}\,\frac{z}{\mathrm{ED}_{50} + z},$$
as in (4). We fix $E_0$ and $\sigma$, and put a normal prior on the unknown parameters $\theta = (E_{\max}, \mathrm{ED}_{50})$.
Sample size calculation
If at time $T$ the decision $d_T = 2$ indicates stopping, a pivotal trial is conducted to test $H_0{:}\ \delta^\star \le 0$ vs. $H_1{:}\ \delta^\star > 0$, where $\delta^\star$ denotes the effect at the ED95. We need to determine the sample size for the pivotal trial that can achieve the desired significance level $\alpha$ and power $1-\beta$, and calculate the posterior predictive probability of a significant outcome, $p_T$.
Let $\hat{\delta}$ and $s$ denote the posterior mean and std. dev. of $\delta^\star$. We calculate power at the alternative $\delta^\star = \hat{\delta}$, based on the posterior at the stopping time.
Now consider a test enrolling $2 n_2$ patients, randomizing $n_2$ at $z = 0$ (placebo) and $n_2$ at the estimated ED95. Assuming a known sampling variance $\sigma^2$, we need
$$n_2 \;=\; \frac{2\,\sigma^2\,\big(z_{1-\alpha} + z_{1-\beta}\big)^2}{\hat{\delta}^{\,2}},$$
where $z_q$ is the right-tail cutoff ($q$ quantile) of the standard normal distribution, and $\alpha$ and $1-\beta$ are the desired significance level and power.
A significant outcome at the end of the second trial means data in the rejection region. Let $\bar{y}_1$ and $\bar{y}_2$ denote the sample averages of the $n_2$ patients each to be enrolled in the two arms of the second trial. Then the rejection region is
$$R \;=\; \Big\{ (\bar{y}_1, \bar{y}_2):\ \bar{y}_2 - \bar{y}_1 \,\ge\, z_{1-\alpha}\,\sqrt{2\sigma^2/n_2} \Big\}.$$
Let $\Phi$ denote the standard normal c.d.f. Then the posterior predictive probability of a significant outcome is
$$p_T \;=\; P\big\{(\bar{y}_1, \bar{y}_2) \in R \mid H_T\big\} \;=\; \Phi\!\left(\frac{\hat{\delta} - z_{1-\alpha}\sqrt{2\sigma^2/n_2}}{\sqrt{s^2 + 2\sigma^2/n_2}}\right).$$
Both $n_2$ and $p_T$ depend on the data only through $(\hat{\delta}, s)$.
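The two quantities can be computed directly, as in the small numeric sketch below, which follows the normal approximations above; all inputs shown are placeholders.

```python
from math import sqrt, ceil
from scipy.stats import norm

def pivotal_trial_quantities(delta_hat, s, sigma=1.0, alpha=0.025, power=0.9):
    """Sample size per arm and posterior predictive probability of rejection.

    delta_hat, s: posterior mean and std. dev. of the ED95 effect
    sigma:        (known) residual standard deviation
    """
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    n2 = ceil(2 * sigma**2 * (z_a + z_b)**2 / delta_hat**2)   # patients per arm
    se = sqrt(2 * sigma**2 / n2)                              # s.e. of ybar2 - ybar1
    # Posterior predictive: ybar2 - ybar1 ~ N(delta_hat, s^2 + se^2).
    p_reject = 1 - norm.cdf((z_a * se - delta_hat) / sqrt(s**2 + se**2))
    return n2, p_reject

print(pivotal_trial_quantities(delta_hat=0.4, s=0.15))
```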
Posterior simulation
We can implement independent posterior simulation:
(i) Generate $\mathrm{ED}_{50} \sim p(\mathrm{ED}_{50} \mid H_t)$ from its marginal posterior, evaluated over a grid.
(ii) Then generate $E_{\max}$ from the posterior conditional distribution $p(E_{\max} \mid \mathrm{ED}_{50}, H_t)$. Based on normal linear regression, the conditional posterior is a univariate normal distribution.