From Predictions to Decisions:
The Importance of Joint Predictive Distributions
Abstract
A fundamental challenge for any intelligent system is prediction: given some inputs, can you predict corresponding outcomes? Most work on supervised learning has focused on producing accurate marginal predictions for each input. However, we show that for a broad class of decision problems, accurate joint predictions are required to deliver good performance. In particular, we establish several results pertaining to combinatorial decision problems, sequential predictions, and multi-armed bandits to elucidate the essential role of joint predictive distributions. Our treatment of multi-armed bandits introduces an approximate Thompson sampling algorithm and analytic techniques that lead to a new kind of regret bound.
1 Introduction
We are motivated by the problem of an intelligent agent interacting with an unknown environment. Although the agent is initially uncertain of the true dynamics of the environment, it might learn to predict aspects of its evolution through observation and learning. Predictions can then guide decisions aimed to optimize future outcomes.
Most work on supervised learning has focused on producing accurate marginal predictions for each input. For instance, in the image classification problem, the goal is to predict the class label of each image correctly. Though accurate marginal predictions are sufficient for some applications, in this paper, we show that for a broad class of downstream tasks, accurate joint predictions are essential. In particular, we show that even in simple combinatorial and sequential decision problems, accurate marginal predictions are insufficient for effective decisions. Rather, accurate joint predictive distributions are required to achieve good performance. We also show that, in the multi-armed bandit, a classical sequential decision problem, accurate joint predictive distributions enable near-optimal performance.
The cross-entropy loss is perhaps the most widely used performance metric in machine learning and plays a central role in training deep learning systems (LeCun et al., 2015). It is typically used to evaluate the quality of marginal predictions but readily extends to address joint predictive distributions. Baselining the cross-entropy loss with an agent-independent constant gives rise to expected KL-divergence. Hence, the cross-entropy loss is equivalent to the expected KL-divergence as a metric for ranking agents. We will focus in this paper on the use of the expected KL-divergence because the baselining allows for more elegant analysis and framing of results. In particular, the KL-divergence is closely related to fundamental notions in information theory (Cover, 1999), such as entropy and mutual information. It is worth clarifying that the cross-entropy loss and expected KL-divergence are just particular choices of evaluation metrics. The importance of evaluating joint predictions, not just marginals, should similarly be apparent under other evaluation metrics.
To elucidate the essential role of joint predictive distributions in driving downstream decision tasks, we establish several results pertaining to the universality of our metric across tasks, combinatorial decision problems, sequential predictions, and multi-armed bandits. Among these, our study of multi-armed bandits is particularly innovative, entailing the development of a novel variant of the Thompson sampling (TS) algorithm (Thompson, 1933; Russo et al., 2018) and original analytic techniques that lead to a new kind of regret bound (Theorem 5.1). Our proposed approximate TS algorithm generalizes the standard (exact) TS algorithm and is general in the sense that it does not depend on the agent’s uncertainty representation. Instead, it only requires that the agent can simulate hypothetical observations, sampled from a joint predictive distribution. The new analytic techniques required to establish our regret bound build on recent progress in information-theoretic analysis of bandit and reinforcement learning algorithms (Russo & Van Roy, 2016; Lu et al., 2021). We believe that the key ideas can also offer new insight into approximate TS algorithms beyond the scope of this paper, including those designed for more general sequential decision problems, such as episodic reinforcement learning.
Also novel is our analysis of sequential prediction. In this case, we establish a result that highlights the importance of joint predictive distributions even when an agent aims only to produce accurate marginal predictions sequentially, as it streams data. In particular, for the agent to be able to incrementally update its knowledge representation as it streams, it must also be capable of generating accurate joint, not just marginal, predictive distributions.
The remainder of the paper is organized as follows: in Section 2, we introduce the notation for this paper via a standard supervised learning setting and review the evaluation metric for joint predictive distributions. In Section 3, we illustrate through a simple recommendation system example that accurate marginal predictions are insufficient to guide effective decisions in combinatorial decision problems, and that accurate joint predictive distributions are essential. In Section 4, we formally prove that in a sequential prediction problem, an agent that updates its parameters incrementally must retain sufficient information about joint predictive distributions if it is to achieve good performance, even if the performance metric depends only on marginal distributions. In Section 5, we consider the classical multi-armed bandit and show that accurate marginal predictions are insufficient to guide good decisions. We also establish a regret bound for our proposed TS algorithm: if the agent produces accurate joint predictive distributions, then the TS algorithm can attain near-optimal regret. Finally, we discuss related work in Section 6 and conclude in Section 7.
2 Evaluating joint predictive distributions
In this section, we introduce the notation for this paper via a standard supervised learning framework. We also motivate and discuss the evaluation metrics used in this paper for joint predictive distributions.
2.1 Supervised learning
Consider a sequence of data pairs $((X_t, Y_{t+1}) : t = 0, 1, 2, \ldots)$, where each $X_t$ is a feature vector and each $Y_{t+1}$ is its target label. Feature vectors are i.i.d. according to an input distribution $P(X_t \in \cdot)$. Each target label $Y_{t+1}$ is independent of all other data, conditioned on $X_t$ and the environment $\mathcal{E}$, and distributed according to $\mathcal{E}(\cdot \mid X_t)$. The conditional distribution $\mathcal{E}$ is referred to as the environment. The environment is random, and this randomness reflects the agent’s uncertainty about how labels are generated given features.
In supervised learning, we consider an agent that learns about the environment from a training dataset $\mathcal{D}_T \equiv ((X_t, Y_{t+1}) : t = 0, \ldots, T-1)$ and aims to predict the target labels $Y_{T+1:T+\tau} \equiv (Y_{T+1}, \ldots, Y_{T+\tau})$ at feature vectors $X_{T:T+\tau-1} \equiv (X_T, \ldots, X_{T+\tau-1})$.
Conditioned on the environment $\mathcal{E}$, a predictive distribution over the target labels $Y_{T+1:T+\tau}$ is given by
$$P^{*}_{T+1:T+\tau} \equiv P\big(Y_{T+1:T+\tau} \in \cdot \,\big|\, \mathcal{E},\, X_{T:T+\tau-1}\big).$$
Conditioned instead on the training data, the predictive distribution becomes
$$\hat{P}_{T+1:T+\tau} \equiv P\big(Y_{T+1:T+\tau} \in \cdot \,\big|\, \mathcal{D}_T,\, X_{T:T+\tau-1}\big).$$
In a sense, $\hat{P}_{T+1:T+\tau}$ represents the result of perfect inference. Since the associated computations are intractable for environments of interest, we instead consider agents that approximate this predictive distribution. Specifically, we consider agents that represent the approximation in terms of a generative model. The agent’s predictions are parameterized by a vector $\theta_T$ that the agent learns from the training data $\mathcal{D}_T$. The vector $\theta_T$ is conditionally independent of the environment and of future data, conditioned on $\mathcal{D}_T$. For any inputs $X_{T:T+\tau-1}$, $\theta_T$ determines a predictive distribution, which could be used to sample imagined outcomes $\hat{Y}_{T+1:T+\tau}$. Hence, the agent’s $\tau$-order predictive distribution is given by
$$\hat{P}^{\theta_T}_{T+1:T+\tau} \equiv P\big(\hat{Y}_{T+1:T+\tau} \in \cdot \,\big|\, \theta_T,\, X_{T:T+\tau-1}\big).$$
When $\tau = 1$, we alternatively use $P^{*}_{T+1}$, $\hat{P}_{T+1}$, and $\hat{P}^{\theta_T}_{T+1}$ to denote $P^{*}_{T+1:T+1}$, $\hat{P}_{T+1:T+1}$, and $\hat{P}^{\theta_T}_{T+1:T+1}$, respectively.
Note that if $\tau = 1$, $\hat{P}^{\theta_T}_{T+1}$ represents a marginal prediction: it predicts the label for a single input $X_T$. On the other hand, for $\tau > 1$, $\hat{P}^{\theta_T}_{T+1:T+\tau}$ represents a joint prediction over labels at $\tau$ input features.
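To make the generative-model view concrete, the following minimal Python sketch (our illustration, not code from the paper) shows one way an agent can expose joint predictions simply by sampling imagined outcomes. The class name, interface, and ensemble-of-logistic-models parameterization are illustrative assumptions.

```python
import numpy as np

class EnsembleAgent:
    """Toy agent: theta is an ensemble of logistic-regression weight vectors.

    Joint predictions are produced generatively: sample one ensemble member
    (an imagined environment), then sample imagined labels under it.
    """

    def __init__(self, weight_ensemble):
        self.weights = np.asarray(weight_ensemble)  # shape (M, d)

    def sample_joint(self, X, rng):
        """Sample imagined labels Y_hat for a batch of tau inputs X of shape (tau, d)."""
        w = self.weights[rng.integers(len(self.weights))]   # imagined environment
        p = 1.0 / (1.0 + np.exp(-X @ w))                    # P(label = 1 | w, x)
        return (rng.random(len(X)) < p).astype(int)         # imagined labels, shape (tau,)

rng = np.random.default_rng(0)
agent = EnsembleAgent(weight_ensemble=rng.normal(size=(10, 3)))
X_future = rng.normal(size=(5, 3))        # tau = 5 future inputs
print(agent.sample_joint(X_future, rng))  # one joint sample of 5 imagined labels
```

Because the ensemble member is shared across all five inputs within a call, repeated calls to `sample_joint` expose the coupling between future labels that a purely marginal interface would hide.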
2.2 Marginal vs. joint predictive distributions
Before proceeding, we use a simple coin-flipping example to illustrate that evaluating marginal and joint predictive distributions can lead to very different conclusions. Suppose that binary outcomes are generated by repeated tosses of a possibly biased coin with unknown probability of heads, with $1$ and $0$ indicating heads and tails, respectively. Consider two agents with different beliefs: Agent 1 assumes the bias is known to equal some value $q \in (0, 1)$ and models the outcome of each coin toss as independent conditioned on this bias. Agent 2 assumes that the bias equals $1$ with probability $q$ and $0$ with probability $1 - q$; that is, the coin either produces only heads or only tails. Despite their differing assumptions, the two agents generate identical marginal predictive distributions: each assigns probability $q$ to heads on any single toss.
On the other hand, the joint predictions of these two agents differ for $\tau \geq 2$. For example, Agent 1 assigns probability $q^2$ to observing two heads in a row, whereas Agent 2 assigns probability $q$.
Evaluating marginal predictions cannot distinguish between the two agents, though for a specific prior distribution over the coin’s bias, one agent could be right and the other wrong. One must evaluate joint predictions to make this distinction.
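The distinction is easy to see numerically. The short script below (our illustration; the value q = 0.5 is an assumption, not taken from the paper) samples imagined outcome pairs from both agents and compares empirical marginal and joint frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 0.5, 200_000  # q: assumed probability of heads; n: Monte Carlo samples

# Agent 1: bias known to be q, tosses independent given the bias.
agent1 = rng.random((n, 2)) < q

# Agent 2: coin is all-heads with probability q, all-tails otherwise.
all_heads = rng.random(n) < q
agent2 = np.repeat(all_heads[:, None], 2, axis=1)

for name, y in [("agent 1", agent1), ("agent 2", agent2)]:
    print(name,
          "P(heads) ~", y[:, 0].mean().round(3),           # marginals match (~q)
          "P(HH) ~", (y[:, 0] & y[:, 1]).mean().round(3))  # joints differ: q^2 vs q
```

The marginal frequencies agree for both agents, while the joint frequency of two heads reveals the difference between them.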
2.3 Cross-entropy loss
Cross-entropy loss is perhaps the most widely used metric in machine learning, though it is typically only used to evaluate marginal predictive distributions. For our supervised learning formulation, an agent’s marginal cross-entropy loss takes the form
$$\mathbb{L}^{1} \equiv \mathbb{E}\big[-\ln \hat{P}^{\theta_T}_{T+1}(Y_{T+1})\big], \tag{1}$$
where the expectation is over both $(X_T, Y_{T+1})$ and $\theta_T$, and the superscript “1” indicates that this evaluates marginal ($\tau = 1$) predictions. Note that the marginal distribution $\hat{P}^{\theta_T}_{T+1}$ is random because it depends on $\theta_T$ and $X_T$.
It is straightforward to extend the cross-entropy loss to assess joint predictive distributions. Generalizing Equation (1), for any $\tau$, we define the $\tau$-order cross-entropy loss
$$\mathbb{L}^{\tau} \equiv \mathbb{E}\big[-\ln \hat{P}^{\theta_T}_{T+1:T+\tau}(Y_{T+1:T+\tau})\big], \tag{2}$$
where the expectation is over $(X_{T:T+\tau-1}, Y_{T+1:T+\tau})$ and $\theta_T$. Note that the $\tau$-order joint distribution $\hat{P}^{\theta_T}_{T+1:T+\tau}$ is also random, since it depends on $\theta_T$ and $X_{T:T+\tau-1}$.
2.4 Kullback–Leibler divergence
One can use the joint cross-entropy loss defined in Equation (2) to assess joint predictive distributions. However, it can be helpful to offset the metric by a baseline to convert it into a Kullback–Leibler (KL) divergence. We will take $\hat{P}_{T+1:T+\tau}$, the perfect posterior predictive, to be our baseline. As we will see, since $\hat{P}_{T+1:T+\tau}$ does not depend on the agent, our measure of KL-divergence and the cross-entropy loss are effectively equivalent, in the sense that they differ only by a constant that does not depend on the agent. However, KL-divergence exhibits properties that allow for a more elegant mathematical analysis.
The $\tau$-order expected KL-divergence with respect to this baseline is defined by
$$\mathbf{d}^{\tau}_{\mathrm{KL}} \equiv \mathbb{E}\Big[ d_{\mathrm{KL}}\big( \hat{P}_{T+1:T+\tau} \,\big\|\, \hat{P}^{\theta_T}_{T+1:T+\tau} \big)\Big], \tag{3}$$
where the expectation is over the distributions $\hat{P}_{T+1:T+\tau}$ and $\hat{P}^{\theta_T}_{T+1:T+\tau}$, which depend in turn on the data $\mathcal{D}_T$, the agent parameters $\theta_T$, and the inputs $X_{T:T+\tau-1}$.
Note that $\mathbf{d}^{\tau}_{\mathrm{KL}}$ is minimized when $\hat{P}^{\theta_T}_{T+1:T+\tau} = \hat{P}_{T+1:T+\tau}$, with the minimum being zero. Further, our two metrics are related according to
$$\mathbb{L}^{\tau} \;=\; \mathbf{d}^{\tau}_{\mathrm{KL}} \;+\; \mathbb{E}\big[-\ln \hat{P}_{T+1:T+\tau}(Y_{T+1:T+\tau})\big].$$
Since the second term does not depend on the agent, our two metrics rank agents identically.
An unbiased estimate of the cross-entropy loss can be computed from a test data sample: given held-out inputs $X_{T:T+\tau-1}$ and labels $Y_{T+1:T+\tau}$, the quantity $-\ln \hat{P}^{\theta_T}_{T+1:T+\tau}(Y_{T+1:T+\tau})$ is an unbiased estimate of $\mathbb{L}^{\tau}$, and averaging over test batches reduces its variance.
The same is not true for $\mathbf{d}^{\tau}_{\mathrm{KL}}$, which can only be estimated if one is also given an estimate of $\hat{P}_{T+1:T+\tau}$. Hence, $\mathbf{d}^{\tau}_{\mathrm{KL}}$ serves only as a conceptual tool in our analysis, not as an evaluation metric that can be applied to empirical data. While it ranks agents identically to $\mathbb{L}^{\tau}$, $\mathbf{d}^{\tau}_{\mathrm{KL}}$ is more natural as a metric, since its minimum is zero and it accommodates more elegant analysis.
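As a concrete illustration (ours, not from the paper), the script below estimates the $\tau$-order cross-entropy loss of the two coin-flip agents from Section 2.2 on simulated test batches, under the assumption that Agent 2’s beliefs describe the true environment; all numeric choices (q = 0.5, τ = 5) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
q, tau, n_batches = 0.5, 5, 20_000

# Hypothetical true environment: the coin is all-heads w.p. q, all-tails otherwise
# (i.e. Agent 2's beliefs happen to be correct; this choice is ours, for illustration).
coin_is_heads = rng.random(n_batches) < q
Y = np.repeat(coin_is_heads[:, None], tau, axis=1).astype(int)  # test batches of tau labels

def joint_loglik_agent1(y):   # tosses i.i.d. Bernoulli(q), bias assumed known
    return np.sum(y * np.log(q) + (1 - y) * np.log(1 - q), axis=1)

def joint_loglik_agent2(y):   # mixture: all-heads w.p. q, all-tails w.p. 1-q
    all_ones, all_zeros = y.all(axis=1), (1 - y).all(axis=1)
    return np.log(q * all_ones + (1 - q) * all_zeros + 1e-300)

# Monte Carlo estimates of the tau-order cross-entropy loss
print("agent 1:", -joint_loglik_agent1(Y).mean())  # ~ tau * ln 2  (about 3.47 for tau = 5)
print("agent 2:", -joint_loglik_agent2(Y).mean())  # ~ ln 2        (about 0.69)
```

As expected, the agent whose joint beliefs match the environment attains a much lower $\tau$-order loss, even though both agents have identical marginals.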
2.5 Error in predictions versus environment
Our metric assesses the error incurred by the predictive distribution $\hat{P}^{\theta_T}_{T+1:T+\tau}$. A common approach to generating such a predictive distribution is to estimate a posterior distribution over environments and use that posterior to generate the predictive distribution. In such a context, $\theta_T$ parameterizes the estimated posterior distribution. Let $\hat{\mathcal{E}}$ be an imaginary environment sampled from this estimated posterior. To offer some intuition for $\mathbf{d}^{\tau}_{\mathrm{KL}}$, we consider in this section its relation to the KL-divergence between the distributions of the true and imaginary environments.
Let $\hat{Y}_{T+1:T+\tau}$ denote a sequence of imaginary outcomes, with each $\hat{Y}_{T+t}$ sampled independently from $\hat{\mathcal{E}}(\cdot \mid X_{T+t-1})$. If the support of the input distribution is exhaustive, the support of the imaginary environment distribution contains that of the true environment distribution, and the environment distributions satisfy suitable regularity conditions, then
$$\lim_{\tau \to \infty} \mathbf{d}^{\tau}_{\mathrm{KL}} \;=\; \mathbb{E}\Big[ d_{\mathrm{KL}}\big( P(\mathcal{E} \in \cdot \mid \mathcal{D}_T) \,\big\|\, P(\hat{\mathcal{E}} \in \cdot \mid \theta_T) \big)\Big].$$
In other words, under certain technical conditions, as the number $\tau$ of test data pairs grows, $\mathbf{d}^{\tau}_{\mathrm{KL}}$ converges to the error in the estimated posterior distribution over environments, measured in terms of KL-divergence.
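A small numerical check (ours; the two-point posteriors are hypothetical) illustrates this convergence in the coin setting: both the true and estimated posteriors put mass on biases {0.3, 0.7}, but with different weights, and the $\tau$-order KL between the induced joint predictives approaches the KL between the posteriors as $\tau$ grows.

```python
import numpy as np
from math import comb, log

biases = np.array([0.3, 0.7])      # support of both posteriors (hypothetical)
w_true = np.array([0.5, 0.5])      # true posterior over the coin's bias
w_est  = np.array([0.8, 0.2])      # agent's estimated posterior

def seq_prob(weights, h, tau):
    """Probability of one particular length-tau sequence with h heads under the mixture."""
    return float(np.sum(weights * biases**h * (1 - biases)**(tau - h)))

def d_kl_tau(tau):
    """KL between the two joint predictive distributions over length-tau sequences."""
    return sum(comb(tau, h) * seq_prob(w_true, h, tau)
               * log(seq_prob(w_true, h, tau) / seq_prob(w_est, h, tau))
               for h in range(tau + 1))

kl_posteriors = float(np.sum(w_true * np.log(w_true / w_est)))
for tau in (1, 2, 5, 10, 20, 50):
    print(f"tau={tau:3d}  KL between joint predictives = {d_kl_tau(tau):.4f}")
print("KL between posteriors over environments:", round(kl_posteriors, 4))
```

With these hypothetical weights, the joint-predictive KL climbs from roughly 0.03 at $\tau = 1$ toward the posterior-level KL of roughly 0.22 as $\tau$ increases.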
One might wonder why we should use $\mathbf{d}^{\tau}_{\mathrm{KL}}$ rather than the KL-divergence between the true and imaginary environments,
$$\mathbb{E}\Big[ d_{\mathrm{KL}}\big( P(\mathcal{E} \in \cdot \mid \mathcal{D}_T) \,\big\|\, P(\hat{\mathcal{E}} \in \cdot \mid \theta_T) \big)\Big], \tag{4}$$
to evaluate agents. There are several reasons. First, practical agent designs often do not satisfy the requisite regularity conditions, and hence (4) becomes infinite. For example, it is common to approximate the posterior distribution using an ensemble of environment models (see, e.g., Lu & Van Roy (2017)). Such an ensemble represents a distribution with finite support, though the posterior may have infinite support. On the other hand, for any finite $\tau$, $\mathbf{d}^{\tau}_{\mathrm{KL}}$ is finite. Second, $\mathbb{L}^{\tau}$, which is equivalent to $\mathbf{d}^{\tau}_{\mathrm{KL}}$ up to a constant, can be computed from data, whereas computing (4) requires access to the posterior distribution of $\mathcal{E}$. Finally, as we will establish later, small $\mathbf{d}^{\tau}_{\mathrm{KL}}$ with finite $\tau$ is sufficient to support effective decisions in downstream tasks such as multi-armed bandits.
2.6 Universality of $\mathbf{d}^{\tau}_{\mathrm{KL}}$
We now show that, for any $\tau$, accuracy in terms of $\mathbf{d}^{\tau}_{\mathrm{KL}}$ is sufficient to guarantee an effective decision if the decision is judged in relation only to the labels $Y_{T+1:T+\tau}$. In particular, suppose an action $a$ selected from a set $\mathcal{A}$ results in an expected reward
$$\overline{r}(a) \equiv \mathbb{E}\big[ r(a, Y_{T+1:T+\tau}) \,\big|\, \mathcal{D}_T,\, X_{T:T+\tau-1} \big],$$
where $r$ is a bounded reward function; write $\Delta$ for the width of its range. The following result bounds the loss in expected reward of a decision that is based on the estimate $\hat{P}^{\theta_T}_{T+1:T+\tau}$ instead of the posterior predictive $\hat{P}_{T+1:T+\tau}$.
Proposition 2.1.
If an action $\hat{a} \in \mathcal{A}$ maximizes
$$\hat{r}(a) \equiv \mathbb{E}\big[ r(a, \hat{Y}_{T+1:T+\tau}) \,\big|\, \theta_T,\, X_{T:T+\tau-1} \big],$$
then
$$\mathbb{E}\Big[ \max_{a \in \mathcal{A}} \overline{r}(a) \,-\, \overline{r}(\hat{a}) \Big] \;\le\; \Delta \sqrt{2\, \mathbf{d}^{\tau}_{\mathrm{KL}}}.$$
This result follows from Pinsker’s inequality and Jensen’s inequality. A proof is provided in Appendix A.
Proposition 2.1 ensures that if a decision maximizes an approximation to expected reward, with the expectation approximated using the predictive distribution $\hat{P}^{\theta_T}_{T+1:T+\tau}$, then the action will be within $\Delta\sqrt{2\,\mathbf{d}^{\tau}_{\mathrm{KL}}}$ of what is achievable given the posterior predictive distribution $\hat{P}_{T+1:T+\tau}$. In this sense, $\mathbf{d}^{\tau}_{\mathrm{KL}}$ is a universal evaluation metric: its value ensures a level of performance in any decision problem.
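For intuition, the chain of inequalities behind a bound of this type can be sketched as follows. This is our schematic rendering, consistent with the reconstruction above (reward range of width $\Delta$); conditioning on $\mathcal{D}_T$, $\theta_T$, and $X_{T:T+\tau-1}$ is left implicit.

```latex
\begin{align*}
\big|\overline{r}(a) - \hat{r}(a)\big|
  &\le \Delta \,\big\| \hat{P}_{T+1:T+\tau} - \hat{P}^{\theta_T}_{T+1:T+\tau} \big\|_{\mathrm{TV}}
   \le \Delta \sqrt{\tfrac{1}{2}\, d_{\mathrm{KL}}\big(\hat{P}_{T+1:T+\tau} \,\|\, \hat{P}^{\theta_T}_{T+1:T+\tau}\big)}
   && \text{(Pinsker)} \\[2pt]
\max_{a} \overline{r}(a) - \overline{r}(\hat{a})
  &\le 2\Delta \sqrt{\tfrac{1}{2}\, d_{\mathrm{KL}}\big(\hat{P}_{T+1:T+\tau} \,\|\, \hat{P}^{\theta_T}_{T+1:T+\tau}\big)}
   && \text{(apply the bound at } a^{*} \text{ and } \hat{a}\text{)} \\[2pt]
\mathbb{E}\Big[\max_{a} \overline{r}(a) - \overline{r}(\hat{a})\Big]
  &\le 2\Delta\, \mathbb{E}\sqrt{\tfrac{1}{2}\, d_{\mathrm{KL}}\big(\hat{P}_{T+1:T+\tau} \,\|\, \hat{P}^{\theta_T}_{T+1:T+\tau}\big)}
   \le \Delta \sqrt{2\, \mathbf{d}^{\tau}_{\mathrm{KL}}}
   && \text{(Jensen).}
\end{align*}
```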
3 Combinatorial decision problems
In this section, we will present a simple example that highlights the importance of the joint predictive distributions in combinatorial decision problems. Importantly, we will show that there can be benefit to considering a joint predictive distribution even when the data are i.i.d. conditioned on the environment.
Consider the problem of a customer interacting with a recommendation system that proposes a selection of movies from an inventory of movies with feature vectors $\phi_1, \ldots, \phi_N \in \mathbb{R}^d$, where $d$ is the feature dimension. We model the probability that a user with preference vector $\theta^* \in \mathbb{R}^d$ will enjoy movie $n$ by a logistic model $\sigma(\phi_n^\top \theta^*)$, where $\sigma(x) = 1 / (1 + e^{-x})$ is the standard logistic function. The vector $\theta^*$ describes the preferences of the user; it is not fully known to the recommendation system and can be viewed as a random variable. We will show that, in order to maximize the probability that the user enjoys at least one of the recommended movies, it is insufficient to examine the marginal predictive distributions.
To make this example concrete, we consider the case where the user is drawn from two equally likely user types, $\theta_1$ and $\theta_2$, and the recommendation system should propose two movies from an inventory of three. Table 1 presents the numerical values of the movie feature vectors and the associated enjoyment probabilities implied by the logistic model, correct to two decimal places. These values are chosen to set up a tension between optimization over marginal predictions (each movie individually) and joint predictions (pairs of movies). An agent that optimizes the expected probability of enjoyment for each movie individually will end up recommending a pair for which, whether the user is of type $\theta_1$ or $\theta_2$, there is a greater than 10% chance that they do not like either movie. By contrast, an agent that considers the joint predictive distribution over pairs of movies can see that a different pair gives much greater confidence that the user will enjoy at least one of them.
Table 1: Movie feature vectors and the enjoyment probabilities they imply under the two user types (to two decimal places).

| Movie | Feature 1 | Feature 2 | P(enjoy) under θ1 | P(enjoy) under θ2 |
|---|---|---|---|---|
| φ1 | 1 | 0 | 0.73 | 0.5 |
| φ2 | 0 | 1 | 0.5 | 0.73 |
| φ3 | 0.5 | 0.5 | 0.62 | 0.62 |
This didactic example is designed to be maximally simplistic, and in some sense it labours an obvious point: in combinatorial decision problems where the outcome depends on the joint predictive distribution, optimization based on marginal predictive distributions is insufficient to guarantee good decisions. By contrast, as shown in Section 2.6, if your decision depends only on the $\tau$-order predictive distribution, then attaining small $\mathbf{d}^{\tau}_{\mathrm{KL}}$ is sufficient for good performance. What is perhaps interesting about this example, however, is that this coupling can occur even when, conditioned on the true environment, the data-generating process is i.i.d. The key point is that, when the true underlying environment is unknown, a coupling between future observations is introduced. As such, examining the joint predictive distribution can be essential for good performance. In the rest of this paper, we will show that these distinctions become even more important as we move towards sequential decision problems.
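The following short script (our illustration; the feature vectors and user types are hypothetical, chosen for effect, and not the values of Table 1) contrasts the two selection rules: ranking movies by marginal enjoyment probability versus maximizing the joint probability that at least one recommended movie is enjoyed.

```python
import itertools
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: two equally likely user types and three movies.
user_types = np.array([[6.0, 0.0], [0.0, 6.0]])           # theta_1, theta_2
movies = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [0.35, 0.35]}
p_enjoy = {m: sigmoid(user_types @ np.array(phi)) for m, phi in movies.items()}

# Marginal view: expected enjoyment probability of each movie on its own.
marginal = {m: p.mean() for m, p in p_enjoy.items()}
greedy_pair = sorted(marginal, key=marginal.get, reverse=True)[:2]

# Joint view: probability the user enjoys at least one movie in the pair,
# averaged over the two user types.
def p_at_least_one(pair):
    dislike_both = np.prod([1.0 - p_enjoy[m] for m in pair], axis=0)
    return float(np.mean(1.0 - dislike_both))

best_pair = max(itertools.combinations(movies, 2), key=p_at_least_one)
print("marginal-greedy pair:", greedy_pair, "P(enjoy >= 1) =", round(p_at_least_one(greedy_pair), 3))
print("joint-optimal pair:  ", list(best_pair), "P(enjoy >= 1) =", round(p_at_least_one(best_pair), 3))
```

With these hypothetical values, the marginal-greedy pair leaves one user type with roughly a 5% chance of disliking both recommendations, while the joint-optimal pair hedges across the two types and reduces that chance to about 0.1%.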
4 Sequential predictions
In this and the next section, we establish theoretical results that justify the importance of joint predictive distributions in sequential decision problems. We first consider sequential prediction problems, a subclass of sequential decision problems in which the agent’s predictions (actions) do not influence its future observations. In particular, we show that in a standard sequential prediction problem, agents with incremental updates must retain a significant portion of the information about joint predictive distributions in order to achieve good performance. This is true even if the performance metric depends only on marginal distributions.
The sequential prediction problem we consider is formulated as follows: data pairs $(X_t, Y_{t+1})$ arrive sequentially, one at a time. At each time $t$, the agent needs to compute parameters $\theta_t$ based on the previously observed data pairs $\mathcal{D}_t \equiv ((X_s, Y_{s+1}) : s < t)$. Then, a new data pair $(X_t, Y_{t+1})$ arrives. We assume that the feature vectors $X_t$ are unconditionally independent, but not necessarily identically distributed. Each target label $Y_{t+1}$ is conditionally independently sampled from $\mathcal{E}(\cdot \mid X_t)$, where $\mathcal{E}$ is the environment. The agent’s objective is to minimize the expected cumulative KL-divergence over the first $T$ time steps:
$$\sum_{t=0}^{T-1} \mathbb{E}\Big[ d_{\mathrm{KL}}\big( \hat{P}_{t+1} \,\big\|\, \hat{P}^{\theta_t}_{t+1} \big)\Big], \tag{5}$$
where
$$\hat{P}_{t+1} \equiv P\big(Y_{t+1} \in \cdot \mid \mathcal{D}_t, X_t\big)
\qquad \text{and} \qquad
\hat{P}^{\theta_t}_{t+1} \equiv P\big(\hat{Y}_{t+1} \in \cdot \mid \theta_t, X_t\big)$$
for all times $t$. Note that this cumulative KL-divergence (5) depends only on the marginal distributions $\hat{P}_{t+1}$ and $\hat{P}^{\theta_t}_{t+1}$. Also note that this performance metric is zero if the agent predicts the exact posterior $\hat{P}_{t+1}$ at each time $t$.
We consider a setting in which the agent must incrementally update its parameters as data arrive. Specifically, at time $t = 0$, the agent chooses its parameters $\theta_0$ based on its prior knowledge; then, at each time $t \geq 1$, the agent updates its parameters incrementally by sampling $\theta_t$ from a distribution that depends only on $\theta_{t-1}$, $X_{t-1}$, and $Y_t$:
$$\theta_t \sim \psi\big(\cdot \,\big|\, \theta_{t-1}, X_{t-1}, Y_t\big) \tag{6}$$
for some fixed update rule $\psi$. In other words, conditioned on $(\theta_{t-1}, X_{t-1}, Y_t)$, $\theta_t$ is independent of the dataset $\mathcal{D}_t$ and the environment $\mathcal{E}$. Note that the incremental update rule in equation (6) is general: in particular, $\mathcal{D}_t$ could itself be recorded in $\theta_t$, which would allow $\theta_t$ to depend on $\mathcal{D}_t$ in an arbitrary manner. However, such an approach can be impractical when there is a high volume of data; in particular, one may want to avoid sifting through a growing dataset at each time step. In many practical applications, it is desirable for the agent to update with fixed memory and fixed per-step computation, as the standard SGD (Goodfellow et al., 2016) and Adam (Kingma & Ba, 2015) algorithms do.
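As an illustration of update rule (6) (ours, not from the paper), the sketch below maintains parameters that are updated from only the previous parameters and the newest data pair, here via one step of SGD on the logistic log-loss; any such agent discards whatever information about past data is not encoded in its parameters.

```python
import numpy as np

def incremental_update(theta, x, y, lr=0.1):
    """One incremental step: the new theta depends only on (previous theta, x, y).

    Here theta parameterizes a single logistic model, updated by one SGD step
    on the log-likelihood of the newest pair; past data is never revisited.
    """
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    return theta + lr * (y - p) * x     # gradient of the Bernoulli log-likelihood

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])          # hypothetical environment parameters
theta = np.zeros(2)                     # theta_0 chosen from prior knowledge
for t in range(1000):
    x = rng.normal(size=2)
    y = int(rng.random() < 1.0 / (1.0 + np.exp(-x @ true_w)))
    theta = incremental_update(theta, x, y)
print("learned parameters:", theta.round(2), "true:", true_w)
```

Note that a point estimate like this retains no uncertainty at all; Theorem 4.1 quantifies how much information about joint predictive distributions an incremental agent must retain in order to perform well.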
We have the following result for the considered sequential prediction problem:
Theorem 4.1.
Please refer to Appendix B for the proof of Theorem 4.1. Notice that the cumulative KL-divergence (5) measures the performance loss of the agent; the first mutual-information term is the conditional information in the data $\mathcal{D}_t$ about the joint distribution of the future labels; and, similarly, the second is the conditional information about those labels retained in the agent parameters $\theta_t$, which never exceeds the former by the data processing inequality. In other words, Theorem 4.1 states that to be $\epsilon$-near-optimal, an agent with incremental updates must retain in $\theta_t$ all of the information in $\mathcal{D}_t$ about the joint distribution of future labels, except for at most $\epsilon$ nats (throughout, we use natural logarithms in both KL-divergence and mutual information; one nat equals $1/\ln 2 \approx 1.44$ bits).
5 Multi-armed bandits
In this section we turn our attention from passive observation, where an agent’s predictions do not influence the stream of data, to active decision-making, where an agent’s actions influence the future data it observes. In this context, the ability to predict outcomes over multiple time steps can significantly impact agent performance. In particular, we will show that the quality of predictions beyond marginals is essential to effectively balance exploration and exploitation (Thompson, 1933). We begin with a problem formulation around the famous “multi-armed bandit”, a simple model of sequential decision-making that highlights the key issues at play. Next, we show that, in this context, even agents that make perfect marginal predictions for the outcome of each action may not make good decisions for the problem as a whole. Finally, we show that agents that make good joint predictions over $\tau$ future time steps suffice to drive efficient exploration in a Bernoulli bandit with $K$ actions.
5.1 Problem formulation
Consider a sequential decision problem with $K$ actions. The environment $\mathcal{E}$ prescribes observation probabilities. During each time step $t$, the agent selects an action $A_t$ and observes an outcome $Y_{t+1}$ produced by the environment. Conditioned on the environment $\mathcal{E}$ and action $A_t$, the next observation $Y_{t+1}$ is sampled from $\mathcal{E}(\cdot \mid A_t)$, independently across time. There is a real-valued reward function $r$ that encodes the agent’s preferences over outcomes. The objective of the agent is to maximize the expected return over some long horizon $T$,
$$\mathbb{E}\Big[ \textstyle\sum_{t=0}^{T-1} r(Y_{t+1}) \Big],$$
where the expectation is taken over random outcomes, algorithmic randomness, and the prior over $\mathcal{E}$.
5.2 Marginal predictions are insufficient
We now show that, even if an agent makes perfect marginal predictions for the outcome of each action, these per-action predictions can be insufficient to drive efficient exploration. The root cause is that, at any time step, the rewards and observations at future time steps are coupled through the unknown environment. Clearly, we can imagine problem instances in which one action reveals the full structure of the environment but does not itself produce a reward; an agent that merely makes good marginal predictions of each action’s reward cannot take advantage of such an informative action. Perhaps more interestingly, this coupling can arise even when rewards evolve independently per action given the environment.
To make this point clear, we consider an extension of the coin-tossing example from Section 2.2. Consider a Bernoulli bandit with $K$ independent actions, where the reward is simply the observed binary outcome. For the first $K - 1$ actions, the agent knows that the outcome is distributed Bernoulli(0.5): a pure 50% chance. However, the final action produces a deterministic outcome of either 0 or 1, and it is equally likely to be of either kind. Clearly, the optimal policy to maximize long-term reward is to select the final action first, to see whether it is optimal, and then, based on that outcome, to choose the arm that maximizes expected reward given full knowledge of the environment for all future steps. However, any agent that only matches marginal predictions cannot distinguish the final arm from any of the others. As such, it will be impossible for the agent to do better than random chance over the arms; on average, it will require on the order of $K$ time steps to identify the optimal policy.
Just as in the coin-flip example, the difference between the informative and uninformative actions is fully revealed when we look at $\tau$-order predictive distributions for any $\tau \geq 2$. In general, it is only by examining the evolution of future outcomes, beyond myopic marginals, that an agent can optimize long-term returns. In the next subsection, we expand on this intuitive argument and relate the ability to predict future outcomes to agent performance. In fact, we establish a bound on performance which guarantees that being able to make accurate joint predictions $\tau$ steps into the future is sufficient to drive efficient exploration.
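A quick simulation (ours; the values K = 10 and T = 200 are arbitrary) makes the gap concrete: a policy that probes the informative arm first far outperforms the uniformly random behaviour that a purely marginal-matching agent is reduced to.

```python
import numpy as np

def run(policy, K=10, T=200, n_runs=2000, seed=0):
    """Average return on the hypothetical bandit of this subsection:
    arms 0..K-2 pay Bernoulli(0.5); arm K-1 pays a fixed 0 or 1 (equally likely)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        last_arm_value = int(rng.random() < 0.5)
        chosen = policy(K, T, last_arm_value, rng)
        rewards = np.where(chosen == K - 1, last_arm_value,
                           (rng.random(T) < 0.5).astype(int))
        total += rewards.sum()
    return total / n_runs

def probe_first(K, T, last_arm_value, rng):
    """Pull the informative arm once; keep it if it pays 1, else settle for a 50% arm."""
    rest = (K - 1) if last_arm_value == 1 else 0
    return np.array([K - 1] + [rest] * (T - 1))

def uniform_random(K, T, last_arm_value, rng):
    """What a marginal-matching agent is reduced to: all arms look identical."""
    return rng.integers(K, size=T)

print("probe informative arm first:", run(probe_first))
print("uniformly at random:        ", run(uniform_random))
```

The probing policy earns roughly 0.75 per step on average, while the marginal-matching behaviour earns only 0.5.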
5.3 Relating joint predictions to regret
As explained in the previous subsection, in a sequential decision problem it is generally desirable to consider uncertainties jointly across actions in order to make effective decisions. To simplify exposition, we therefore consider predictions for a vector of outcomes, with each entry corresponding to the outcome of one action. In this subsection, we relate the quality of predictions of such future outcome vectors to agent performance on a Bernoulli bandit with correlated arms.
Recall that in a $K$-armed Bernoulli bandit, the environment is identified by a parameter vector $\theta = (\theta_1, \ldots, \theta_K) \in [0, 1]^K$, where $\theta_a$ is the expected reward of action $a$. We make no assumptions about the prior distribution of $\theta$; in particular, there could be dependencies, whereby an observed outcome from trying one action informs our beliefs about other actions. We define the history by time $t$ as $\mathcal{H}_t \equiv (A_0, Y_1, \ldots, A_{t-1}, Y_t)$.
We use $Y_{t,1:\tau} \equiv (Y_{t,1}, \ldots, Y_{t,\tau})$ to denote a sequence of $\tau$ outcome vectors sampled from the environment. Specifically, these vectors are conditionally independent given $\theta$; each vector has dimension $K$, and the $a$-th component of each vector is conditionally independently sampled from $\mathrm{Bernoulli}(\theta_a)$. On the other hand, consider an agent that can also generate a sequence of $K$-dimensional binary vectors at each time $t$. Let $\mu_t$ denote its parameters and $\hat{Y}_{t,1:\tau} \equiv (\hat{Y}_{t,1}, \ldots, \hat{Y}_{t,\tau})$ a sequence of $\tau$ binary vectors sampled from its joint predictive distribution. In this subsection, we establish that if the agent can produce good $\tau$-order predictive distributions, in the sense that
$$\mathbb{E}\Big[ d_{\mathrm{KL}}\Big( P\big(Y_{t,1:\tau} \in \cdot \,\big|\, \mathcal{H}_t\big) \,\Big\|\, P\big(\hat{Y}_{t,1:\tau} \in \cdot \,\big|\, \mu_t\big) \Big)\Big] \;\le\; \epsilon$$
for some small $\epsilon$, then one can design bandit algorithms based on this agent that achieve near-optimal performance.
Specifically, we consider an approximate version of the Thompson sampling algorithm, described in Algorithm 1. The algorithm proceeds as follows: at each time $t$, it first samples $\tau$ binary outcome vectors $\hat{Y}_{t,1:\tau}$ from the agent’s joint predictive distribution; then, it samples a vector $\hat{\theta}_t$ from the posterior distribution of $\theta$ conditioned on these imagined outcomes, treating them as if they had been observed; finally, it chooses an action greedy with respect to $\hat{\theta}_t$ and updates the agent’s parameters based on the new observation. Note that Algorithm 1 is general in the sense that it does not depend on the agent’s uncertainty representation; it only requires that the agent can simulate hypothetical observations sampled from a joint predictive distribution. Also note that Algorithm 1 reduces to the standard (exact) Thompson sampling algorithm when the agent’s joint predictions are exact and $\tau \to \infty$. We use this algorithm to establish that an agent that performs well under a particular loss function retains enough information to enable efficient exploration. (The greedy step is well defined: $\arg\max_a \hat{\theta}_{t,a}$ is a nonempty set, and ties may be broken arbitrarily.)
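The following sketch (ours, not the paper’s reference implementation) instantiates the scheme for a correlated Bernoulli bandit. The agent here happens to maintain an exact posterior over two hypothetical candidate environments, but the decision rule only touches it through `sample_joint`; the resampling step uses the cheaper sample-mean variant discussed at the end of this section rather than exact conditioning on the imagined outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical correlated prior: the environment is one of two mean-reward vectors.
candidates = np.array([[0.8, 0.2, 0.5],
                       [0.2, 0.8, 0.5]])
K, tau, T = candidates.shape[1], 16, 500

class TwoHypothesisAgent:
    """Toy agent whose 'parameters' are posterior weights over the two candidates.

    The bandit loop interacts with it only through sample_joint and update,
    i.e. through its ability to simulate joint hypothetical observations.
    """
    def __init__(self):
        self.weights = np.array([0.5, 0.5])

    def sample_joint(self, n):
        env = candidates[rng.choice(2, p=self.weights)]       # imagined environment
        return (rng.random((n, K)) < env).astype(float)        # n imagined outcome vectors

    def update(self, action, reward):
        lik = np.where(reward == 1, candidates[:, action], 1 - candidates[:, action])
        self.weights = self.weights * lik
        self.weights /= self.weights.sum()

theta_true = candidates[rng.integers(2)]   # draw the true environment from the prior
agent, total_reward = TwoHypothesisAgent(), 0.0
for t in range(T):
    y_hat = agent.sample_joint(tau)        # step 1: tau joint hypothetical outcome vectors
    theta_hat = y_hat.mean(axis=0)         # step 2: sample-mean stand-in for resampling theta
    action = int(np.argmax(theta_hat))     # step 3: act greedily w.r.t. theta_hat
    reward = int(rng.random() < theta_true[action])
    agent.update(action, reward)           # step 4: update the agent on the new observation
    total_reward += reward
print("average reward:", total_reward / T, " best arm mean:", theta_true.max())
```

Swapping in a richer agent only requires re-implementing `sample_joint` and `update`; the decision rule itself never inspects the agent’s internal uncertainty representation.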
As is standard in the bandit literature, we measure the performance of Algorithm 1 using (Bayes) cumulative regret, defined as
$$\mathrm{Regret}(T) \;\equiv\; \mathbb{E}\Big[ \textstyle\sum_{t=0}^{T-1} \big( \theta_{A^*} - \theta_{A_t} \big) \Big], \tag{7}$$
where $A^* \in \arg\max_a \theta_a$ is an optimal action. As before, the expectation is over random outcomes, algorithmic randomness, and the prior over $\theta$. We can establish the following regret bound for Algorithm 1:
Theorem 5.1.
For any integer $\tau \geq 1$ and any $\epsilon \geq 0$, if at each time $t$ the agent, with parameters $\mu_t$, can generate samples $\hat{Y}_{t,1:\tau}$ such that
$$\mathbb{E}\Big[ d_{\mathrm{KL}}\Big( P\big(Y_{t,1:\tau} \in \cdot \,\big|\, \mathcal{H}_t\big) \,\Big\|\, P\big(\hat{Y}_{t,1:\tau} \in \cdot \,\big|\, \mu_t\big) \Big)\Big] \;\le\; \epsilon,$$
then under Algorithm 1 the cumulative regret satisfies a bound of the form
$$\mathrm{Regret}(T) \;\le\; c_1\, T \sqrt{\epsilon} \;+\; c_2\, \frac{T}{\sqrt{\tau}} \;+\; \sqrt{\tfrac{1}{2}\, K\, T \log K} \tag{8}$$
for absolute constants $c_1$ and $c_2$.
Please refer to Section 5.4 for a proof sketch of Theorem 5.1; the detailed proof is provided in Appendix C. The proof uses information-ratio analysis similar to previous papers (Russo & Van Roy, 2016; Lu et al., 2021), together with the notions of environment proxy and learning target developed in Lu et al. (2021) (see Section 3.3 of that paper). Specifically, the learning target is an action $\tilde{A} \in \arg\max_a \tilde{\theta}_a$, where the environment proxy $\tilde{\theta}$ is formed from $\tau$ outcome vectors drawn independently from the environment. Note that, conditioned on the environment $\theta$, $\tilde{\theta}$ and $\tilde{A}$ are independent of the history $\mathcal{H}_t$.
We now briefly discuss the regret bound in Theorem 5.1. First, note that if an agent can make accurate predictions sufficiently many steps into the future (small $\epsilon$ and large $\tau$), then the bound is dominated by its sublinear term, which is sufficient to ensure efficient exploration. Second, notice that the bound consists of three terms. The linear regret term $c_1 T\sqrt{\epsilon}$ is due to the expected KL-divergence loss of the agent; if the agent makes perfect predictions, in the sense that the expected KL-divergence is zero at every time $t$, then this term vanishes. The other linear regret term, $c_2 T/\sqrt{\tau}$, is due to the fact that we choose $\tilde{A}$ as the learning target, which can be a sub-optimal action; as $\tau \to \infty$, $\tilde{A}$ converges to $A^*$ and this term also vanishes. Finally, the sublinear term is exactly the regret bound for the exact Thompson sampling algorithm (Russo & Van Roy, 2016). This is not surprising, since when the agent’s joint predictions are exact (i.e., $\epsilon = 0$) and $\tau \to \infty$, Algorithm 1 reduces to exact Thompson sampling.
Finally, it is worth mentioning that we conjecture that results similar to Theorem 5.1 can be developed for more practical algorithms and more general sequential decision problems. Note that in Algorithm 1, sampling $\hat{\theta}_t$ from the conditional distribution of $\theta$ given the imagined outcomes can be computationally expensive. A computationally more efficient alternative is to choose $\hat{\theta}_t$ as the sample mean of the imagined outcome vectors, i.e., $\hat{\theta}_t = \frac{1}{\tau}\sum_{j=1}^{\tau} \hat{Y}_{t,j}$, where $\hat{Y}_{t,j}$ is the $j$-th imagined vector. We conjecture that one can derive a similar regret bound with this modification, but leave the analysis to future work. Moreover, we believe that similar approximate Thompson sampling algorithms and regret bounds can be developed for more general sequential decision problems, such as multi-armed bandits with unbounded noise and episodic reinforcement learning; we also leave these to future work.
5.4 Proof Sketch for Theorem 5.1
We provide a proof sketch for Theorem 5.1 in this subsection. First, notice that the expected per-step regret at time $t$ is $\mathbb{E}\big[\theta_{A^*} - \theta_{A_t}\big]$, which can be decomposed as
$$\mathbb{E}\big[\theta_{A^*} - \theta_{A_t}\big] \;=\; \mathbb{E}\big[\theta_{A^*} - \theta_{\tilde{A}}\big] \;+\; \mathbb{E}\big[\theta_{\tilde{A}} - \theta_{A_t}\big]. \tag{9}$$
Recall that action $\tilde{A}$ is the learning target. We bound the two terms on the right-hand side of equation (9) separately. First, based on the fact that the outcome vectors underlying the proxy are conditionally i.i.d. given the environment, we can show that the first term, $\mathbb{E}\big[\theta_{A^*} - \theta_{\tilde{A}}\big]$, is at most of order $1/\sqrt{\tau}$.
To bound the second term, $\mathbb{E}\big[\theta_{\tilde{A}} - \theta_{A_t}\big]$, we consider its conditional version $\mathbb{E}_t\big[\theta_{\tilde{A}} - \theta_{A_t}\big]$, where the subscript $t$ denotes conditioning on the history $\mathcal{H}_t$. Using information-ratio analysis, we can prove that this conditional regret is bounded by a term reflecting the mismatch between the agent’s joint predictive distribution and the exact posterior predictive, plus a term proportional to $\sqrt{K\,\mathbb{I}_t\big(\tilde{A};\,(A_t, Y_{t+1})\big)}$, where $\mathbb{I}_t$ denotes mutual information conditioned on $\mathcal{H}_t$.
Using Pinsker’s inequality, the data processing inequality, and the assumption on the expected KL-divergence in Theorem 5.1, we can bound that
On the other hand, based on the Cauchy–Schwarz inequality and the chain rule for mutual information, we have
$$\sum_{t=0}^{T-1} \mathbb{E}\Big[\sqrt{\mathbb{I}_t\big(\tilde{A};\,(A_t, Y_{t+1})\big)}\Big] \;\le\; \sqrt{T\, \mathbb{I}\big(\tilde{A};\, \mathcal{H}_T\big)}.$$
Finally, note that $\mathbb{I}\big(\tilde{A};\, \mathcal{H}_T\big) \le \mathbb{H}(\tilde{A}) \le \log K$. Combining the above inequalities proves Theorem 5.1.
6 Related work
There is extensive literature focusing on evaluating marginal distributions (e.g. Bishop (2006), Friedman et al. (2001), LeCun et al. (2015)), paired with evaluation on downstream tasks such as bandits (Riquelme et al., 2018). There is also a rich literature on developing agents for uncertainty estimation (Blundell et al., 2015; Gal & Ghahramani, 2016; Osband & Van Roy, 2015; Welling & Teh, 2011), and in most cases the developed agents are evaluated only based on marginal predictive distributions.
To the best of our knowledge, there is little literature that highlights the importance of joint predictive distributions. One exception is Wang et al. (2021). However, Wang et al. (2021) states that “joint log-likelihood scores [are] determined almost entirely by the marginal log-likelihood scores…in practice, they provide little new information beyond marginal likelihoods,” and proposes an alternative metric for joint distributions based on posterior predictive correlations (PPCs). Our paper, however, establishes that the KL-divergence for joint predictive distributions, which is equivalent to joint log-likelihood scores, ensures effective decisions, while marginals do not. In particular, our results in Section 5 show that agents that attain small marginal KL-divergence might not retain enough information to enable efficient exploration, while agents that attain small joint KL-divergence do. Moreover, the PPC metric proposed by Wang et al. (2021) is designed for regression problems, while cross-entropy loss can be applied more broadly, for example, to classification as well as regression problems. There is also recent empirical work that compares agents based on their joint predictive distributions (Osband et al., 2021).
There is also extensive literature on combinatorial decision-making (Papadimitriou & Steiglitz, 1998), sequential prediction (Shalev-Shwartz et al., 2011), multi-armed bandits (Auer et al., 2002; Lattimore & Szepesvári, 2020), and Thompson sampling (Thompson, 1933; Chapelle & Li, 2011; Russo et al., 2018). There is also some recent work on approximate versions of Thompson sampling algorithms (Lu & Van Roy, 2017; Phan et al., 2019; Zhang et al., 2019; Yu et al., 2020). In this paper, we propose and analyze a novel variant of the Thompson sampling algorithm for multi-armed bandits. Our analysis uses an information-theoretic approach, similar to prior papers in this line of research (Russo & Van Roy, 2016, 2018; Lu et al., 2021), and makes use of the notions of environment proxy and learning target developed in Lu et al. (2021).
7 Conclusion
This paper aims to elucidate the importance of joint predictive distributions for a broad class of decision problems, including combinatorial and sequential decision problems. Specifically, we have shown that in a simple combinatorial decision problem (Section 3), a sequential prediction problem (Section 4), and a multi-armed bandit problem (Section 5), accurate marginal predictions are insufficient to drive effective decisions; rather, accurate joint predictive distributions are necessary for good performance. We have also shown that, in the multi-armed bandit problem, accurate joint predictive distributions are sufficient to enable near-optimal performance of a novel variant of Thompson sampling, by establishing a new kind of regret bound via information-theoretic analysis (Theorem 5.1). While we evaluate the accuracy of marginal and joint predictive distributions based on KL-divergence (Section 2), or equivalently cross-entropy loss, our insights on marginal versus joint predictions should extend to other evaluation metrics.
References
- Auer et al. (2002) Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2):235–256, 2002.
- Bishop (2006) Bishop, C. M. Pattern Recognition and Machine Learning. Springer, 2006.
- Blundell et al. (2015) Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. Weight uncertainty in neural network. In International Conference on Machine Learning, pp. 1613–1622. PMLR, 2015.
- Chapelle & Li (2011) Chapelle, O. and Li, L. An empirical evaluation of Thompson sampling. Advances in Neural Information Processing Systems, 24:2249–2257, 2011.
- Cover (1999) Cover, T. M. Elements of information theory. John Wiley & Sons, 1999.
- Friedman et al. (2001) Friedman, J., Hastie, T., Tibshirani, R., et al. The elements of statistical learning, volume 1. Springer series in statistics New York, 2001.
- Gal & Ghahramani (2016) Gal, Y. and Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, 2016.
- Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., and Courville, A. Deep learning. MIT press, 2016.
- Kingma & Ba (2015) Kingma, D. and Ba, J. Adam: A Method for Stochastic Optimization. Proceedings of the International Conference on Learning Representations, 2015.
- Lattimore & Szepesvári (2020) Lattimore, T. and Szepesvári, C. Bandit algorithms. Cambridge University Press, 2020.
- LeCun et al. (2015) LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436, 2015.
- Lu & Van Roy (2017) Lu, X. and Van Roy, B. Ensemble sampling. In Advances in Neural Information Processing Systems, pp. 3260–3268, 2017.
- Lu et al. (2021) Lu, X., Van Roy, B., Dwaracherla, V., Ibrahimi, M., Osband, I., and Wen, Z. Reinforcement learning, bit by bit. arXiv preprint arXiv:2103.04047, 2021.
- Mattner (2003) Mattner, L. Mean absolute deviations of sample means and minimally concentrated binomials. Annals of probability, pp. 914–925, 2003.
- Osband & Van Roy (2015) Osband, I. and Van Roy, B. Bootstrapped Thompson sampling and deep exploration. arXiv preprint arXiv:1507.00300, 2015.
- Osband et al. (2021) Osband, I., Wen, Z., Asghari, S. M., Dwaracherla, V., Hao, B., Ibrahimi, M., Lawson, D., Lu, X., O’Donoghue, B., and Van Roy, B. Evaluating predictive distributions: Does Bayesian deep learning work? arXiv preprint arXiv:2110.04629, 2021.
- Papadimitriou & Steiglitz (1998) Papadimitriou, C. H. and Steiglitz, K. Combinatorial optimization: algorithms and complexity. Courier Corporation, 1998.
- Phan et al. (2019) Phan, M., Abbasi-Yadkori, Y., and Domke, J. Thompson sampling with approximate inference. arXiv preprint arXiv:1908.04970, 2019.
- Riquelme et al. (2018) Riquelme, C., Tucker, G., and Snoek, J. Deep Bayesian bandits showdown: An empirical comparison of Bayesian deep networks for Thompson sampling. arXiv preprint arXiv:1802.09127, 2018.
- Russo & Van Roy (2016) Russo, D. and Van Roy, B. An information-theoretic analysis of Thompson sampling. The Journal of Machine Learning Research, 17(1):2442–2471, 2016.
- Russo & Van Roy (2018) Russo, D. and Van Roy, B. Learning to optimize via information-directed sampling. Operations Research, 66(1):230–252, 2018.
- Russo et al. (2018) Russo, D. J., Van Roy, B., Kazerouni, A., Osband, I., and Wen, Z. A tutorial on Thompson sampling. Found. Trends Mach. Learn., 11(1):1–96, July 2018. ISSN 1935-8237.
- Shalev-Shwartz et al. (2011) Shalev-Shwartz, S. et al. Online learning and online convex optimization. Foundations and trends in Machine Learning, 4(2):107–194, 2011.
- Thompson (1933) Thompson, W. R. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
- Wang et al. (2021) Wang, C., Sun, S., and Grosse, R. Beyond marginal uncertainty: How accurately can Bayesian regression models estimate posterior predictive correlations? In International Conference on Artificial Intelligence and Statistics, pp. 2476–2484. PMLR, 2021.
- Welling & Teh (2011) Welling, M. and Teh, Y. W. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11), pp. 681–688. Citeseer, 2011.
- Yu et al. (2020) Yu, T., Kveton, B., Wen, Z., Zhang, R., and Mengshoel, O. J. Graphical models meet bandits: A variational Thompson sampling approach. In International Conference on Machine Learning, pp. 10902–10912. PMLR, 2020.
- Zhang et al. (2019) Zhang, R., Wen, Z., Chen, C., and Carin, L. Scalable Thompson sampling via optimal transport. arXiv preprint arXiv:1902.07239, 2019.
Appendices
Appendix A Proof for Proposition 2.1
Proof.
From Pinsker’s inequality, we have
where is the total variation distance. Without loss of generality, we assume that the supports of and are countable. Then we have . Consequently, we have
Recall that maximizes
over all . Similarly, we assume maximizes
over all . Then we have
where (a) follows from . Consequently, we have
where the second inequality follows from the fact that is optimal under . Taking the expectations, we have
From Jensen’s inequality, we have
and
Consequently, we have
This concludes the proof. ∎
Appendix B Proof for Theorem 4.1
B.1 Proof for Theorem 4.1
Proof.
Notice that
where (a) follows from the chain rule of KL divergence, (b) and (c) follow from the fact that the conditional predictive distribution minimizes the KL divergence to the posterior (see Lemma B.1), and (d) follows from the definition. On the other hand, by definition, we have
Moreover, from the chain rule of mutual information, we have
where since and are conditionally independent given and . Hence we have
This concludes the proof. ∎
B.2 Lemma B.1
Lemma B.1.
Let the agent parameter be fixed. Then, the conditional predictive distribution minimizes the expected KL divergence
over all possible predictive distributions.
Proof.
We have
∎
Similarly, we can prove that
(11) |
To see this, notice that the conditional predictive distribution is a particular choice of predictive distribution. Consequently, using an analysis similar to Lemma B.1, we can prove the above inequality.
Appendix C Proof for Theorem 5.1
Proof.
We will now analyze the approximate TS algorithm. For this purpose, we continue to use a proxy and target . Further, we define , , and .
Notice that
(12) |
and we first show that . Notice that
Please refer to Mattner (2003) for a proof of inequality (a).
Now we are going to bound the second term on the RHS of Equation (12) through bounding its conditional version in terms of the information gain , where the subscript in and denotes that the expectation and mutual information condition on the history . In other words, and . We first note that
where the first inequality uses the chain rule of mutual information; the second inequality follows from that and are conditionally independent; the fourth inequality uses that is conditionally jointly independent of and ; the fifth inequality follows from the KL divergence form of mutual information.
Now we are ready to bound in terms of .
where one step uses conditional independence of the relevant quantities, another follows from the Cauchy–Schwarz inequality, and another uses Pinsker’s inequality. Hence,
where the second inequality follows from Jensen’s inequality and Pinsker’s inequality, and the third inequality uses the data processing inequality, and thus
Using an analysis similar to Lu et al. (2021),
where the second inequality uses the Cauchy–Schwarz inequality; the third inequality follows from the chain rule for mutual information; and the last inequality follows from the non-negativity of entropy. This completes the proof. ∎