
Off-Policy Evaluation Using Information Borrowing and Context-Based Switching

Sutanoy Dasgupta
Senior Data Scientist
Intuit, Bengaluru, India
Equal contribution. This work was done primarily when all authors were at Texas A&M University. This version includes comparisons with more methods (Nadaraya-Watson regression) and updated related work discussions in the Introduction. Email: SD: sutanoy26071991@gmail.com, YN: yniu4@uh.edu, KP: kpb@caltech.edu, DK: dileep.kalathil@tamu.edu, DP: debdeep@stat.tamu.edu, BM: bmallick@stat.tamu.edu
   Yabo Niu
Department of Mathematics
University of Houston
   Kishan Panaganti
Computing + Mathematical Sciences Department
California Institute of Technology
   Dileep Kalathil
Department of Electrical and Computer Engineering
Texas A&M University
   Debdeep Pati
Department of Statistics
Texas A&M University
   Bani Mallick
Department of Statistics
Texas A&M University
Abstract

We consider the off-policy evaluation (OPE) problem in contextual bandits, where the goal is to estimate the value of a target policy using the data collected by a logging policy. Most popular approaches to OPE are variants of the doubly robust (DR) estimator obtained by combining a direct method (DM) estimator and a correction term involving the inverse propensity score (IPS). Existing algorithms primarily focus on strategies to reduce the variance of the DR estimator arising from large IPS. We propose a new approach called the Doubly Robust with Information borrowing and Context-based switching (DR-IC) estimator that focuses on reducing both bias and variance. The DR-IC estimator replaces the standard DM estimator with a parametric reward model that borrows information from the ‘closer’ contexts through a correlation structure that depends on the IPS. The DR-IC estimator also adaptively interpolates between this modified DM estimator and a modified DR estimator based on a context-specific switching rule. We give provable guarantees on the performance of the DR-IC estimator. We also demonstrate the superior performance of the DR-IC estimator compared to the state-of-the-art OPE algorithms on a number of benchmark problems.

1 Introduction

The contextual bandits framework finds a wide range of applications such as recommendation systems (Li et al., 2011), personalized healthcare (Zhou et al., 2017), advertising (Bottou et al., 2013), and education (Mandel et al., 2014). In contextual bandits, a decision maker (often an algorithm) observes a context, takes an action according to a policy, and observes a reward. For a recommendation algorithm, for example, the context is information about the user and their past online clicks/purchases, and a reward is obtained when the user clicks on or buys the recommended item. Importantly, the algorithm can only observe the reward for the selected action, and not for the others. The goal is to maximize the expected reward by choosing the best policy out of a set of candidate policies. A natural approach towards that end is to evaluate the candidate policies by calculating their expected reward, called policy evaluation, and then choose the policy with the highest expected reward. We consider the fundamental problem of off-policy evaluation (OPE) in contextual bandits, where one uses the data gathered by a past policy, known as the logging policy, to estimate the expected reward of a new policy, known as the target policy. OPE eliminates the cost of performing policy evaluation via online experiments, and avoids the risks of exposing subjects to untested policies.

There are three main approaches to the OPE problem: (i) The Direct Method (DM) first estimates the reward model and uses it to estimate the value of the target policy. DM often has low variance. However, its bias can be significant, especially when the true reward model does not belong to the function class used for the reward function estimation. (ii) The Inverse Propensity Scoring (IPS) estimator (Horvitz and Thompson, 1952) uses importance weighting to correct the mismatch between the proportions of actions under the logging and target policies. While the IPS estimator is unbiased, its variance can be very large due to large importance weights. (iii) The Doubly Robust (DR) estimator (Robins and Rotnitzky, 1995; Bang and Robins, 2005; Dudík et al., 2011, 2014) combines DM and IPS to achieve the low variance of DM and the unbiasedness of IPS. While DR is known to be asymptotically optimal under some assumptions (Su et al., 2020), its finite-sample variance can still be quite high when the importance weights are large.

Many recent works focus on reducing the variance of the DR estimator. Wang et al., (2017) introduced a switching approach that alternates between DM and DR depending on the importance weights. Su et al., (2020) take a shrinkage approach, replacing the importance weights with smaller weights that optimize the mean squared error. The variance of such estimators is usually caused by mismatches between the supports of the logging policy and the target policy. Mou et al., (2023) assume that the reward function lies within a reproducing kernel Hilbert space to address such mismatches. Khan et al., (2024) instead use partial identification methods to address these mismatches. There are, however, few works addressing the high bias of the DM. Su et al., (2020) have highlighted the importance of choosing an appropriate reward model. Through an extensive study, they have shown that different reward models result in the best estimates in different scenarios. Lichtenberg et al., (2023) use reward clipping to achieve practical, less biased results.

In this work, we propose an innovative approach for the OPE problem, which we call the Doubly Robust with Information borrowing and Context-based switching (DR-IC) estimator. The approach is based on two key proposals for improving the performance of the standard DR estimator.

First, we propose a modified DM that uses an intuitive and widely applicable parametric reward model that can significantly reduce the bias. The fundamental idea of the proposed reward model is to borrow information from ‘similar’ and ‘important’ contexts through a correlation structure. In particular, this reward model assigns more weight to data points that are ‘similar’ to the test data point through the choice of a kernel. The bandwidth of the kernel is chosen adaptively, depending on the importance weight associated with the test data point. We call this new DM the DM with Information Borrowing (DM-IB). We theoretically show that the bias of DM-IB goes to zero as the number of samples increases. We also empirically demonstrate the advantages of DM-IB.

Second, we propose a context-based switching scheme for adaptively choosing between DM-IB and the DR estimator (which also uses a reward model with information borrowing). Unlike the switching scheme proposed in Wang et al., (2017), which switches based on the importance weights, our approach uses the Kullback-Leibler (KL) divergence between the logging policy and the target policy, given the context, to do the switching. We call this the DR-IC estimator. Through extensive experiments, we demonstrate the superior performance of our DR-IC estimator compared to the state-of-the-art methods (Wang et al., 2017; Farajtabar et al., 2018; Su et al., 2020) on a number of benchmark problems.

2 Preliminaries

In the contextual bandits problem, the algorithm observes a context $x\in\mathcal{X}$, takes an action $a\in\mathcal{A}$, and gets a scalar reward $r(x,a)$. We assume that the context space $\mathcal{X}\subset\mathbb{R}^{d}$ is continuous and the action space $\mathcal{A}$ is finite. The contexts are sampled i.i.d. according to a distribution $\nu_{C}(\cdot)$. The reward $r(x,a)$ has a distribution conditioned on $(x,a)$, denoted by $\nu_{R}(\cdot|x,a)$. We assume that $\nu_{R}(\cdot|x,a)$ is a Lipschitz function of $x$ for any given $a$, formalizing the intuition that similar contexts should give similar rewards for the same action. The algorithm selects actions according to a decision rule called a policy, which maps the observed context to a distribution over the action space. A policy $\pi$ is denoted as a conditional distribution $\pi(a|x)$, which specifies the probability of selecting action $a$ when the context $x$ is observed. The value of a policy $\pi$, denoted $V^{\pi}$, is defined as

$$V^{\pi}=\mathbb{E}_{x\sim\nu_{C}}\,\mathbb{E}_{a\sim\pi(\cdot|x)}\,\mathbb{E}_{r\sim\nu_{R}(\cdot|x,a)}[r]. \qquad (1)$$

In the following, we will also write $\pi(x,a,r)$ to denote the joint distribution over context-action-reward tuples when actions are selected by the policy $\pi$, i.e., $\pi(x,a,r)=\nu_{C}(x)\,\pi(a|x)\,\nu_{R}(r|x,a)$.

In the off-policy evaluation problem, we are given a dataset $\mathcal{D}=\{(x_{i},a_{i},r_{i})\}^{n}_{i=1}$ that consists of $n$ i.i.d. samples of context-action-reward tuples generated by a logging policy $\mu$, i.e., $(x_{i},a_{i},r_{i})\sim\mu$. The goal is to estimate the value $V^{\pi}$ of a target policy $\pi$ using this offline dataset $\mathcal{D}$.

There are three standard and well known approaches for off-policy evaluation.

Direct Method: In DM, we train a reward model $\widehat{r}$ as

$$\widehat{r}(x,a)=\operatorname*{arg\,min}_{f\in\mathcal{F}}\sum^{n}_{i=1}\big(f(x_{i},a_{i})-r_{i}\big)^{2}, \qquad (2)$$

where $\mathcal{F}$ is a suitable function class. The DM estimator is then given as

$$\widehat{V}^{\pi}_{\mathrm{DM}}=\frac{1}{n}\sum^{n}_{i=1}\sum_{a\in\mathcal{A}}\pi(a|x_{i})\,\widehat{r}(x_{i},a). \qquad (3)$$

Though the DM estimator typically has low variance, it often suffers from large bias (Dudík et al.,, 2011).

Inverse Propensity Scoring Estimator: The IPS estimator makes use of the importance weights, defined as $w(x,a):=\frac{\pi(a|x)}{\mu(a|x)}$, to get an unbiased estimate (Horvitz and Thompson, 1952). The IPS estimator is given as

$$\widehat{V}^{\pi}_{\mathrm{IPS}}=\frac{1}{n}\sum^{n}_{i=1}w(x_{i},a_{i})\,r_{i}. \qquad (4)$$

We make the standard assumption that $\mu(a|x)>0$ whenever $\pi(a|x)>0$, ensuring that $w(x,a)<\infty$. Though the IPS estimator is unbiased, when there is a substantial mismatch between the policies $\mu$ and $\pi$, the importance weights will be large and hence the variance will also be large.

Doubly Robust Estimator: The DR approach combines the DM and IPS estimators as

$$\widehat{V}^{\pi}_{\mathrm{DR}}=\widehat{V}^{\pi}_{\mathrm{DM}}+\frac{1}{n}\sum^{n}_{i=1}w(x_{i},a_{i})\big(r_{i}-\widehat{r}(x_{i},a_{i})\big), \qquad (5)$$

where $\widehat{r}$ is as given in (2). The DR estimator preserves the unbiasedness of the IPS estimator and hence is robust to a poor reward estimator $\widehat{r}$. At the same time, because the IPS part of the DR estimator uses a shifted reward, the variance of the DR estimator is smaller than that of the IPS estimator. It is known that the DR estimator is asymptotically optimal as long as the reward estimator is consistent (Su et al., 2020). However, its finite-sample variance can be quite high due to large importance weights when the logging policy and the evaluation policy differ substantially.
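To make the three baselines concrete, the following is a minimal sketch (not the implementation used in our experiments) of how the DM, IPS, and DR estimates in (3)-(5) can be computed from logged data. It assumes the target-policy probabilities, the logging-policy probabilities of the taken actions, and a fitted reward model are available as NumPy arrays and a callable, respectively; all variable names are illustrative.

```python
import numpy as np

def ope_estimates(x, a, r, pi_probs, mu_prob_taken, r_hat):
    """Minimal sketch of the DM, IPS, and DR estimators in Eqs. (3)-(5).

    x: (n, d) logged contexts; a: (n,) logged actions; r: (n,) logged rewards
    pi_probs: (n, |A|) target-policy probabilities pi(a | x_i) for every action
    mu_prob_taken: (n,) logging-policy probabilities mu(a_i | x_i) of the taken actions
    r_hat: callable r_hat(x, a) -> (n,) reward-model predictions (e.g. ridge regression)
    """
    n, n_actions = pi_probs.shape

    # DM, Eq. (3): average the reward-model predictions over the target policy.
    r_hat_all = np.stack([r_hat(x, np.full(n, act)) for act in range(n_actions)], axis=1)
    v_dm = np.mean(np.sum(pi_probs * r_hat_all, axis=1))

    # IPS, Eq. (4): importance-weight the observed rewards.
    w = pi_probs[np.arange(n), a] / mu_prob_taken
    v_ips = np.mean(w * r)

    # DR, Eq. (5): DM plus an importance-weighted correction on the residuals.
    v_dr = v_dm + np.mean(w * (r - r_hat(x, a)))
    return v_dm, v_ips, v_dr
```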

Recently, different approaches to further reduce the variance of the DR estimator have been studied. The closest to our work is Wang et al., (2017), which proposed a switching approach based on the importance weights. The proposed switch estimator uses the IPS approach when the importance weight is below a threshold and switches to the DM approach when the importance weight exceeds that threshold. Formally, the switch estimator is given as

$$\widehat{V}^{\pi}_{\mathrm{S}} = \frac{1}{n}\sum^{n}_{i=1}w(x_{i},a_{i})\,r_{i}\,\mathds{1}\{w(x_{i},a_{i})\leq\tau\}+\frac{1}{n}\sum^{n}_{i=1}\sum_{a\in\mathcal{A}}\pi(a|x_{i})\,\widehat{r}(x_{i},a)\,\mathds{1}\{w(x_{i},a_{i})>\tau\}, \qquad (6)$$

where $\tau$ is the threshold. The IPS part in the above estimator can also be replaced by a DR estimator.
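For later comparison with our context-based rule, here is a short sketch of the weight-based switch estimator (6), reusing the quantities from the previous snippet; again, the variable names are illustrative assumptions, not part of any existing library.

```python
import numpy as np

def switch_estimator(a, r, pi_probs, mu_prob_taken, r_hat_all, tau):
    """Sketch of the switch estimator in Eq. (6): IPS when w <= tau, DM otherwise.

    r_hat_all: (n, |A|) precomputed reward-model predictions for every action.
    """
    n = r.shape[0]
    w = pi_probs[np.arange(n), a] / mu_prob_taken
    ips_part = w * r * (w <= tau)                               # IPS term for small weights
    dm_part = np.sum(pi_probs * r_hat_all, axis=1) * (w > tau)  # DM term for large weights
    return np.mean(ips_part + dm_part)
```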

3 Our Approach

In this section, we introduce our new approach, which we call the Doubly Robust with Information borrowing and Context-based switching (DR-IC) estimator. The approach is based on two key proposals for improving the performance of the standard DR estimator. First, we propose a modified DM that uses an innovative reward model which borrows information from similar and important contexts in order to reduce the bias. We call this the DM with Information Borrowing (DM-IB). Second, we propose a context-based switching scheme for adaptively choosing between DM-IB and the DR estimator (which also uses the information borrowing reward model) in order to reduce the variance. We call this combined approach the DR-IC estimator.

3.1 Direct Method with Information Borrowing Reward Model

A major drawback of traditional DM estimators is that they do not exploit information about the target policy while training a reward model $\widehat{r}$ as in (2): each data sample $(x_{i},a_{i},r_{i})$ is used uniformly in (2). However, depending on the importance weight, some of the samples are more relevant than others with respect to the off-policy evaluation objective. For example, if the importance weight $w(x_{i},a_{i})$ is large for sample $i$, this sample is more important in estimating $V^{\pi}$ than another sample with a smaller importance weight. While performing a standard weighted regression with importance weights may address this problem partially, it does not exploit another important fact: “similar” contexts should produce “similar” rewards under the same action. So, a good reward model should borrow more information from contexts that are “similar” than from contexts that are different, in order to reduce the bias in estimation.

A standard reward model used in the literature is least squares regression with $\ell_{2}$ regularization and Gaussian errors (Dudík et al., 2011; Wang et al., 2017; Su et al., 2020). Let $z_{i}=(x_{i}^{\top},a_{i})^{\top}$, $i=1,2,\ldots,n$, be the context-action pairs, $Z=[z_{1},\ldots,z_{n}]^{\top}$ be the matrix of context-action pairs, and $\mathbf{r}=[r_{1},\ldots,r_{n}]^{\top}$ be the vector of rewards obtained from the data $\mathcal{D}$. The reward estimate for any context-action pair $z=(x^{\top},a)^{\top}$ according to least squares regression with $\ell_{2}$ regularization is given by

$$\widehat{r}_{\mathrm{ls}}(z)=z^{\top}\widehat{\theta}_{\mathrm{ls}},\quad\text{where}\quad\widehat{\theta}_{\mathrm{ls}}=(Z^{\top}Z+\lambda I)^{-1}Z^{\top}\mathbf{r}. \qquad (7)$$

We now consider a derived dataset $\widetilde{\mathcal{D}}=\{(\widetilde{x},\widetilde{a}):\widetilde{x}\in\mathcal{D},\,\widetilde{a}\in\mathcal{A}\}$. Note that the cardinality of $\mathcal{D}$ is $n$ and that of $\widetilde{\mathcal{D}}$ is $n|\mathcal{A}|$. Let $\widetilde{z}=(\widetilde{x}^{\top},\widetilde{a})^{\top}$ be the context-action pair corresponding to the sample $(\widetilde{x},\widetilde{a})$ from the set $\widetilde{\mathcal{D}}$, and let $\widetilde{Z}=[\widetilde{z}_{1},\ldots,\widetilde{z}_{n|\mathcal{A}|}]^{\top}$. Our goal is to get a reward model that gives a reward estimate for each sample $\widetilde{z}$ using the correlation between $\widetilde{\mathbf{r}}$ and $\mathbf{r}$, where $\widetilde{\mathbf{r}}$ denotes the predicted rewards for the context-action pairs in $\widetilde{\mathcal{D}}$. Note that the derived dataset comprises the original data points in $\mathcal{D}$ paired with all possible actions in $\mathcal{A}$; we emphasize that we do not need additional data to implement our algorithm.

Since we do not have an explicit correlation structure available a priori, we model the covariance matrix between $\widetilde{\mathbf{r}}$ and $\mathbf{r}$, $\Sigma(\widetilde{\mathbf{r}},\mathbf{r})$, as

$$[\Sigma(\widetilde{\mathbf{r}},\mathbf{r})]_{ji}=\frac{1}{h_{n}\sqrt{\widetilde{w}_{j}w_{i}}}\,K\!\left(\frac{\|\widetilde{z}_{j}-z_{i}\|_{2}}{h_{n}\sqrt{\widetilde{w}_{j}w_{i}}}\right), \qquad (8)$$

where $w_{i}=w(x_{i},a_{i})$ and $\widetilde{w}_{j}=w(\widetilde{x}_{j},\widetilde{a}_{j})$, for $j=1,\ldots,n|\mathcal{A}|$ and $i=1,\ldots,n$, $h_{n}$ is the bandwidth, and $K(\cdot)$ is an appropriate kernel.

The main motivation for defining such a covariance matrix is to exploit the fact that similar contexts should give similar rewards under the same action. Thus, the proposed covariance structure borrows more information from similar contexts through the term $\|\widetilde{x}_{j}-x_{i}\|_{2}$. Also, if $w_{i}$ or $\widetilde{w}_{j}$ is large, the corresponding samples are more important for the off-policy evaluation, so our covariance structure borrows more information from such samples. In contrast, a standard least squares regression approach as in (7) does not consider the similarity of contexts and borrows information uniformly from all samples, resulting in a higher bias. For our experiments, we take a specific covariance matrix of the form (8), namely a Gaussian kernel truncated at the same actions, given by

$$\frac{1}{\sqrt{2\pi}\,h_{n}\sqrt{\widetilde{w}_{j}w_{i}}}\exp\!\left(-\frac{\|\widetilde{x}_{j}-x_{i}\|^{2}_{2}}{2h_{n}^{2}\widetilde{w}_{j}w_{i}}\right)\mathds{1}(\widetilde{a}_{j}=a_{i}). \qquad (9)$$
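As an illustration, the covariance in (9) can be assembled as in the following sketch, which assumes the importance weights for the logged pairs and for the derived pairs have already been computed (all variable names are illustrative).

```python
import numpy as np

def ib_covariance(x, a, x_tilde, a_tilde, w, w_tilde, h_n):
    """Sketch of Sigma(r_tilde, r) in Eq. (9): a Gaussian kernel on contexts,
    scaled by the importance weights and truncated to matching actions."""
    # Pairwise squared context distances, shape (n|A|, n).
    d2 = np.sum((x_tilde[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    ww = w_tilde[:, None] * w[None, :]                  # products w_tilde_j * w_i
    scale = 1.0 / (np.sqrt(2.0 * np.pi) * h_n * np.sqrt(ww))
    kernel = scale * np.exp(-d2 / (2.0 * h_n ** 2 * ww))
    same_action = (a_tilde[:, None] == a[None, :]).astype(float)
    return kernel * same_action
```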

Let $\Sigma_{\mathbf{r}}$ be the covariance matrix for the original dataset. Assume that the true underlying covariance structure between $\widetilde{\mathbf{r}}$ and $\mathbf{r}$ is given by (8). Then, the conditional mean of $\widetilde{\mathbf{r}}$, denoted by $\widehat{\mathbf{r}}_{\mathrm{IB}}$, given the original rewards $\mathbf{r}$, the context-action pairs $Z$ and $\widetilde{Z}$, and the regression parameter $\widehat{\theta}_{\mathrm{ls}}$, is our reward model (estimate) for the proposed approach:

$$\widehat{\mathbf{r}}_{\mathrm{IB}}(\widetilde{Z})=\widetilde{Z}\widehat{\theta}_{\mathrm{ls}}+\Sigma(\widetilde{\mathbf{r}},\mathbf{r})\,\Sigma_{\mathbf{r}}^{-1}\left[\mathrm{diag}\{\Sigma(\widetilde{\mathbf{r}},\mathbf{r})\,\Sigma_{\mathbf{r}}^{-1}\bm{1}_{n}\}\right]^{-1}(\mathbf{r}-Z\widehat{\theta}_{\mathrm{ls}}), \qquad (10)$$

where $\bm{1}_{n}$ is an $n$-dimensional vector with all entries equal to 1. The first term in the equation is simply the linear regression estimate one would obtain if the correlation structure were assumed to be non-existent, as in usual linear regression. The second term is the information borrowing term, which captures the borrowing of information based on the similarity among the contexts in the original dataset and the ‘derived’ dataset. The conditional expectation of $\widetilde{\mathbf{r}}$ given $\mathbf{r}$, as a projection of $\widetilde{\mathbf{r}}$, is a natural estimator for $\widetilde{\mathbf{r}}$ and is the best estimator among all functions $f(\mathbf{r})$ of $\mathbf{r}$ in the sense that it minimizes the mean squared error (MSE), $\mathbb{E}[\|\widetilde{\mathbf{r}}-f(\mathbf{r})\|^{2}]$. We refer the reader to Brunk, (1961, 1963) and Šidák, (1957) for more details. In practice, we do not know $\Sigma_{\mathbf{r}}$ and hence we replace it with an estimate. The scaling term $\left[\mathrm{diag}\{\Sigma(\widetilde{\mathbf{r}},\mathbf{r})\Sigma_{\mathbf{r}}^{-1}\bm{1}_{n}\}\right]^{-1}$ ensures that the underlying joint covariance structure of $\widetilde{\mathbf{r}}$ and $\mathbf{r}$ is positive definite.
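Putting the pieces together, a minimal sketch of the information borrowing estimate (10) might look as follows. Replacing $\Sigma_{\mathbf{r}}$ by a diagonal plug-in $\sigma^{2}I$ and applying the diagonal scaling row-wise are implementation assumptions of this sketch, not part of the definition.

```python
import numpy as np

def ib_reward_estimate(Z, Z_tilde, r, Sigma_tilde, theta_ls, sigma2=1.0):
    """Sketch of Eq. (10): ridge fit plus an information-borrowing correction.

    Z: (n, p) logged context-action features; Z_tilde: (n|A|, p) derived features
    Sigma_tilde: (n|A|, n) covariance Sigma(r_tilde, r), e.g. from ib_covariance above
    theta_ls: (p,) least squares / ridge coefficients; sigma2: plug-in for Sigma_r = sigma2 * I
    """
    n = Z.shape[0]
    A = Sigma_tilde / sigma2                    # Sigma(r_tilde, r) Sigma_r^{-1}
    scaling = 1.0 / (A @ np.ones(n))            # diag{A 1_n}^{-1}, applied row-wise
    residual = r - Z @ theta_ls                 # least squares residuals on the logged data
    return Z_tilde @ theta_ls + scaling * (A @ residual)
```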

Using $\widehat{\mathbf{r}}_{\mathrm{IB}}$ as the reward estimator, we now propose a new direct method for off-policy evaluation, called DM-IB, given by

$$\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}=\frac{1}{n}\sum_{i=1}^{n}\sum_{a\in\mathcal{A}}\pi(a\mid x_{i})\,\widehat{r}_{\mathrm{IB}}(x_{i},a). \qquad (11)$$

In Section 4, we will show the superior properties of the proposed estimator $\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}$ compared to the standard direct method that uses the reward estimator given in (7). In particular, we will show that the bias of $\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}$ converges to zero in probability as $n$ increases.

3.2 Doubly Robust Method with Context-Based Switching

While borrowing information through a correlation structure to get a reward estimator as proposed in (10) can reduce the bias compared to the standard DM, the variance can still be significant compared to the standard doubly robust approach. On the other hand, simply using the proposed reward estimator within a doubly robust structure alone will not reduce the high variance of the DR estimator arising from the difference between the logging policy and the evaluation policy (reflected in large importance weights). To overcome this challenge, we propose a switching approach that adaptively selects between the DM-IB and DR estimators. In particular, we propose to perform the switching based on the context-specific KL divergence between the logging policy and the evaluation policy.

The KL divergence $D_{\mathrm{KL}}$ between two probability mass functions $p_{1}$ and $p_{2}$ is given by $D_{\mathrm{KL}}(p_{1}\,\|\,p_{2})=\sum_{x}p_{1}(x)\log\frac{p_{1}(x)}{p_{2}(x)}$. We define the context-specific KL divergence $D_{\mathrm{KL}}(x)$ for a given context $x$ as $D_{\mathrm{KL}}(x)=D_{\mathrm{KL}}(\pi(\cdot\mid x)\,\|\,\mu(\cdot\mid x))$. Our DR-IC estimator $\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}$ is now given as

$$\begin{aligned}
\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}} &= \frac{1}{n}\sum_{i=1}^{n}\big(r_{i}-\widehat{r}_{\mathrm{IB}}(x_{i},a_{i})\big)\,\frac{\pi(a_{i}\mid x_{i})}{\mu(a_{i}\mid x_{i})}\,\mathds{1}(D_{\mathrm{KL}}(x_{i})<\tau) \\
&\quad+\frac{1}{n}\sum_{i=1}^{n}\sum_{a\in\mathcal{A}}\widehat{r}_{\mathrm{IB}}(x_{i},a)\,\pi(a\mid x_{i})\,\mathds{1}(D_{\mathrm{KL}}(x_{i})<\tau) \\
&\quad+\frac{1}{n}\sum_{i=1}^{n}\sum_{a\in\mathcal{A}}\widehat{r}_{\mathrm{IB}}(x_{i},a)\,\pi(a\mid x_{i})\,\mathds{1}(D_{\mathrm{KL}}(x_{i})\geq\tau), \qquad (12)
\end{aligned}$$

where $\tau$ is the threshold parameter that determines the switching. The third term in (12) is exactly our new direct method estimate $\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}$ given in (11), applied to the switched contexts. The first two terms in (12) constitute a DR estimator which uses the information borrowing reward estimate $\widehat{r}_{\mathrm{IB}}$ proposed in (10). The DR estimator is used only for contexts with KL divergence less than the threshold $\tau$, and we switch to the DM method when the KL divergence exceeds the threshold.
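A minimal sketch of the DR-IC estimate in (12), assuming the information borrowing predictions for every action (r_ib_all) and for the logged actions (r_ib_taken) are available, e.g. from the snippets above (the variable names are illustrative):

```python
import numpy as np

def dr_ic_estimate(a, r, pi_probs, mu_probs, r_ib_all, r_ib_taken, tau):
    """Sketch of the DR-IC estimator in Eq. (12) with context-based switching.

    pi_probs, mu_probs: (n, |A|) target- and logging-policy probabilities
    r_ib_all: (n, |A|) information-borrowing reward predictions for every action
    r_ib_taken: (n,) predictions for the logged (x_i, a_i) pairs
    """
    n = r.shape[0]
    # Context-specific KL divergence D_KL(x_i) = KL(pi(.|x_i) || mu(.|x_i)).
    eps = 1e-12
    kl = np.sum(pi_probs * np.log((pi_probs + eps) / (mu_probs + eps)), axis=1)
    use_dr = kl < tau                                   # DR-IB on "close" contexts

    w = pi_probs[np.arange(n), a] / mu_probs[np.arange(n), a]
    dm_ib = np.sum(pi_probs * r_ib_all, axis=1)         # DM-IB term per context
    correction = w * (r - r_ib_taken)                   # importance-weighted correction

    # DR-IB where D_KL(x_i) < tau, DM-IB where D_KL(x_i) >= tau.
    return np.mean(np.where(use_dr, dm_ib + correction, dm_ib))
```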

We note that our approach to switching is different from that of Wang et al., (2017), which makes the switch for a specific context-action pair if the corresponding importance weight is too large. The proposed KL divergence based approach has an averaging effect on the individual weights, since it is a weighted sum of the importance weights on a logarithmic scale, and the thresholding is context-specific rather than context-action pair specific. So, for a given context, the chosen model is either the DM model or the DR model for all possible actions. From a practical standpoint, the thresholding for the KL divergence based approach can be performed on a more compact grid than for importance weight based switching, since the KL divergence uses the importance weights on a logarithmic scale, enabling easier optimization. We elaborate on the qualitative differences between the two techniques using a toy example in Section 4.2.

3.3 Optimizing the Threshold Parameter

One important part of implementing the DR-IC estimator is to find the optimal threshold parameter. We follow an approach similar to the one used in Wang et al., (2017) to select a τ\tau that minimizes the MSE of the resulting estimator.

Let $Y_{i}(\tau)$ denote the estimated reward for context $x_{i}$ under a switching threshold $\tau$. That is,

$$\begin{aligned}
Y_{i}(\tau) &= \big(r_{i}-\widehat{r}_{\mathrm{IB}}(x_{i},a_{i})\big)\,\frac{\pi(a_{i}\mid x_{i})}{\mu(a_{i}\mid x_{i})}\,\mathds{1}(D_{\mathrm{KL}}(x_{i})<\tau) \\
&\quad+\sum_{a\in\mathcal{A}}\widehat{r}_{\mathrm{IB}}(x_{i},a)\,\pi(a\mid x_{i})\,\mathds{1}(D_{\mathrm{KL}}(x_{i})<\tau) \\
&\quad+\sum_{a\in\mathcal{A}}\widehat{r}_{\mathrm{IB}}(x_{i},a)\,\pi(a\mid x_{i})\,\mathds{1}(D_{\mathrm{KL}}(x_{i})\geq\tau).
\end{aligned}$$

Since $\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)=\frac{1}{n}\sum_{i=1}^{n}Y_{i}(\tau)=\overline{Y}(\tau)$, we have the estimate of the variance, $\widehat{\mathrm{Var}}(\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau))$, as

$$\widehat{\mathrm{Var}}(\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau))=\mathrm{Var}\big(\overline{Y}(\tau)\big)=\frac{1}{n^{2}}\sum_{i=1}^{n}\big(Y_{i}(\tau)-\overline{Y}(\tau)\big)^{2}.$$

Next, we utilize the fact that $\widehat{V}^{\pi}_{\mathrm{IPS}}$, as defined in (4), is an unbiased estimator of $V^{\pi}$ to obtain an estimate of the squared bias as a function of the threshold $\tau$, given by $\widetilde{\mathrm{Bias}}^{2}(\tau)=(\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)-\widehat{V}^{\pi}_{\mathrm{IPS}})^{2}$. To see this, note that

$$\begin{aligned}
\mathrm{Bias}^{2} &= \big(\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)]-V^{\pi}\big)^{2} \\
&= \big(\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)]-\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{IPS}}]+\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{IPS}}]-V^{\pi}\big)^{2} \\
&= \big(\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)]-\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{IPS}}]\big)^{2}+\big(\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{IPS}}]-V^{\pi}\big)^{2} \\
&\quad+2\big(\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)]-\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{IPS}}]\big)\big(\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{IPS}}]-V^{\pi}\big) \\
&\stackrel{(a)}{\approx} \big(\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)]-\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{IPS}}]\big)^{2} \\
&\stackrel{(b)}{\approx} \left(\frac{1}{n}\sum_{i=1}^{n}Y_{i}(\tau)-\frac{1}{n}\sum_{i=1}^{n}\frac{\pi(a_{i}\mid x_{i})}{\mu(a_{i}\mid x_{i})}\,r_{i}\right)^{2} \\
&= \big(\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)-\widehat{V}^{\pi}_{\mathrm{IPS}}\big)^{2}=\widetilde{\mathrm{Bias}}^{2}(\tau).
\end{aligned}$$

Here, to get $(a)$, we have used the fact that $\mathbb{E}[\widehat{V}^{\pi}_{\mathrm{IPS}}]-V^{\pi}=0$, since $\widehat{V}^{\pi}_{\mathrm{IPS}}$ is an unbiased estimator of $V^{\pi}$. To get $(b)$, we replace the expectations by their sample averages.

However, due to possibly large importance weights, $\widehat{V}^{\pi}_{\mathrm{IPS}}$ can be a poor estimate of the true expected reward $V^{\pi}$. We overcome this issue as follows. Let $R_{\max}=\max_{x,a}r(x,a)$. We then propose the following upper bound on the bias, $\mathrm{BiasUB}(\tau)$:

$$\mathrm{BiasUB}(\tau)=\frac{1}{n}\sum_{i=1}^{n}\sum_{a\in\mathcal{A}}R_{\max}\,\pi(a\mid x_{i})\,\mathds{1}(D_{\mathrm{KL}}(x_{i})\geq\tau).$$

Finally, we will use the following as our estimate for the squared bias:

$$\widehat{\mathrm{Bias}}^{2}(\tau)=\min\big(\widetilde{\mathrm{Bias}}^{2}(\tau),\,\mathrm{BiasUB}^{2}(\tau)\big).$$

With these estimates, we optimize the threshold parameter $\tau$ as

$$\widehat{\tau}=\operatorname*{arg\,min}_{\tau}\left[\widehat{\mathrm{Bias}}^{2}(\tau)+\widehat{\mathrm{Var}}(\tau)\right]. \qquad (13)$$
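The tuning step (13) can be sketched as a simple grid search, as below. The helper per_context_Y, which is assumed to return the $n$ values $Y_{i}(\tau)$ for a given threshold, and the bound r_max on the rewards are illustrative assumptions of this sketch.

```python
import numpy as np

def select_tau(tau_grid, per_context_Y, kl, pi_probs, w, r, r_max=1.0):
    """Sketch of the threshold selection in Eq. (13).

    per_context_Y: callable tau -> (n,) array of Y_i(tau)
    kl: (n,) context-specific KL divergences; w: (n,) importance weights
    """
    v_ips = np.mean(w * r)                      # unbiased IPS reference estimate
    best_tau, best_obj = None, np.inf
    for tau in tau_grid:
        Y = per_context_Y(tau)
        var_hat = np.sum((Y - Y.mean()) ** 2) / len(Y) ** 2
        bias_tilde_sq = (Y.mean() - v_ips) ** 2
        # Upper bound on the bias: switched contexts contribute at most R_max each.
        bias_ub = np.mean(np.sum(r_max * pi_probs, axis=1) * (kl >= tau))
        bias_hat_sq = min(bias_tilde_sq, bias_ub ** 2)
        if bias_hat_sq + var_hat < best_obj:
            best_tau, best_obj = tau, bias_hat_sq + var_hat
    return best_tau
```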

4 Analysis

4.1 Main Results

In this section, we investigate the theoretical properties of the information borrowing estimator $\widehat{r}_{\mathrm{IB}}(z)$ for the reward function and show that it is asymptotically unbiased. More precisely, in Theorem 1, we show that the bias of the modified DM estimator $\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}$ defined in (11) goes to zero as $n$ increases to infinity. In addition to this bias analysis, we also provide a bound on the MSE of the DR-IC estimator $\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}(\tau)$ in Theorem 2.

In this section, we assume that the action $a$ is continuous for the purpose of deriving general results. We also make the following assumptions about the problem.

Assumption 1.

(i) The reward function $r(x,a)$ is continuous in $x$ and $a$.
(ii) The importance weight $w(x,a)=\pi(a\mid x)/\mu(a\mid x)$ is continuous in $x$ and $a$. Also, there exists a positive constant $w_{\max}$ such that $w(x,a)\in(0,w_{\max}]$ for all $(x,a)$.
(iii) The context distribution $\nu_{C}$ and the reward distribution $\nu_{R}(\cdot|x,a)$ are continuous.

We make the following assumption about the kernel $K(\cdot)$ used in (8).

Assumption 2.

Assume that the kernel function $K(\cdot)$ is isotropic and satisfies $\int K(u)\,du=1$, $\int uK(u)\,du=0$, and $\int u^{2}K(u)\,du<\infty$.

We also make the following assumption about the least squares regression based estimate used in (7) and (10).

Assumption 3.

Let $\widehat{\theta}_{\mathrm{ls}}=(\widehat{\theta}_{x}^{\top},\widehat{\theta}_{a})^{\top}=(Z^{\top}Z)^{-1}Z^{\top}\mathbf{r}$. Then $\|\widehat{\theta}_{x}\|_{2}$ and $\|\widehat{\theta}_{a}\|_{2}$ are bounded from above by some positive constant.

The following assumption is common in regression estimation problems; see Nadaraya, (1964) for more details.

Assumption 4.

$(i)$ The value of $\Sigma(z,\mathbf{r})\bm{1}_{n}$ is $n(1+o(1))$ for every $z=(x^{\top},a)^{\top}$; $(ii)$ $\Sigma_{\mathbf{r}}$ is a diagonal matrix and its diagonal entries are positive constants.

We now present our theorem about the bias of $\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}$.

Theorem 1 (Asymptotic unbiasedness of $\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}$).

Let Assumptions 1–4 hold. Assume the bandwidth $h_{n}\rightarrow 0$ as $n\rightarrow\infty$. Then, the bias of the modified direct method estimator $\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}$ defined in (11) converges to zero in probability as $n$ goes to infinity.

The DM-IB estimator inherits its asymptotic unbiasedness from $\widehat{r}_{\mathrm{IB}}(z)$. In the following, we provide brief intuition for the proof technique. The bias of $\widehat{r}_{\mathrm{IB}}(z)$ can be written as three terms involving convolutions with the non-stationary kernel (8). As the bandwidth $h_{n}$ converges to zero with the sample size, the non-stationary kernel becomes degenerate at each sample in the derived dataset $\widetilde{\mathcal{D}}$, so that each convolution converges to the corresponding sample in $\widetilde{\mathcal{D}}$. Hence the convolution approximates the target, and the information borrowing term in (10) eliminates the bias caused by the least squares estimate.

Remark 1.

Since the underlying reward function is non-linear, it is well known that the bias of its least squares estimate $z^{\top}\widehat{\theta}_{\mathrm{ls}}$ is lower bounded by a constant. Hence, the bias of the traditional direct method estimator $\widehat{V}_{\mathrm{DM}}$ is lower bounded by a constant as well.

We now give an upper bound on the MSE for our DR-IC estimator given in (12).

Theorem 2 (Bound on the MSE of $\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}$).

Let Assumption 1(ii) hold. Let $\epsilon_{\mathrm{IB}}(x,a)=\widehat{r}_{\mathrm{IB}}(x,a)-\mathbb{E}[r|x,a]$ and assume that $\widehat{r}_{\mathrm{IB}}(x,a)\in[0,R_{\max}]$. Then, the MSE of $\widehat{V}^{\pi}_{\mathrm{DR\mbox{-}IC}}$ as given in (12) with threshold parameter $\tau$ is at most

$$\frac{2}{n}\,\mathbb{E}_{\mu}\Big[\mathbb{E}\big[(r-\widehat{r}_{\mathrm{IB}}(x,a))^{2}\,\big|\,x,a\big]+w^{2}_{\max}\,\epsilon^{2}_{\mathrm{IB}}(x,a)\Big]+\frac{2}{n}\,|\mathcal{A}|\,\mathbb{E}_{\pi}\big[R^{2}_{\max}\big]+\mathbb{E}_{\pi}\Big[\epsilon_{\mathrm{IB}}(x,a)\,\mathds{1}\{D_{\mathrm{KL}}(x)<\tau\}\Big]^{2}.$$

Theorem 2 shows that the upper bound on the MSE of our DR-IC estimator is $O(1/n)$. A similar result for the switch-DR estimator can be found in Wang et al., (2017), where the upper bound is $O(1/n)$ as well. So, in terms of the MSE, our proposed DR-IC estimator is at least as good as the switching method proposed in Wang et al., (2017). At the same time, our approach is better in terms of the bias, as shown in Theorem 1.

4.2 Comparing Different Switch Estimators

The idea of combining two different estimators for off-policy evaluation has been explored in Wang et al., (2017), where the authors proposed a switch estimator (switch-DR) that switches between DM and DR (or IPS) depending on the magnitude of the importance weights. The DR-IC estimator introduced in this paper is different from the switch-DR estimator, though it is also designed to switch between DM and DR. The switching criterion of the DR-IC estimator is the KL divergence between the logging policy and the target policy given a context, which measures the overall closeness between the two policies, whereas the switch-DR estimator uses the importance weight of each individual sample.

Figure 1: A toy example comparing the switching criteria of switch-DR and DR-IC.

To better demonstrate the differences between these two switching criteria, we consider a toy example designed as follows. Consider a setting with only two actions, denoted $0$ and $1$. Assume that the logging policy has the form $\mu(1\mid x)=e^{5x}/(1+e^{5x})$ and $\mu(0\mid x)=1-\mu(1\mid x)$. Also, consider the evaluation policy of the form $\pi(1\mid x)=e^{-5x}/(1+e^{-5x})$ and $\pi(0\mid x)=1-\pi(1\mid x)$. The contexts are uniformly distributed between $-1$ and $1$. We generate 50 training samples and 50 test samples. The threshold on the importance weights for switch-DR is 12.65 and the threshold on the KL divergences for DR-IC is 2.20. We plot the KL divergence as a function of the context in Fig. 1, and all 50 samples are marked with red circles or blue triangles based on which method (DM or DR) is employed by the switch-DR estimator. The plot is split into two parts by the KL divergence threshold of the DR-IC estimator. It is evident that the two switching criteria do not always agree: some contexts with smaller importance weights, for which DR is employed by the switch-DR estimator, have larger KL divergences, so that DM is employed by the DR-IC estimator.
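For reference, the two switching criteria in this toy example can be computed along the following lines (a sketch; the thresholds quoted above come from the tuning procedures, not from this snippet).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)              # 50 contexts in (-1, 1)

mu1 = np.exp(5 * x) / (1 + np.exp(5 * x))        # logging policy mu(1 | x)
pi1 = np.exp(-5 * x) / (1 + np.exp(-5 * x))      # evaluation policy pi(1 | x)
a = (rng.uniform(size=50) < mu1).astype(int)     # actions sampled from mu

# Importance weight of each logged sample (switch-DR criterion).
w = np.where(a == 1, pi1 / mu1, (1 - pi1) / (1 - mu1))

# Context-specific KL divergence KL(pi(.|x) || mu(.|x)) (DR-IC criterion).
kl = pi1 * np.log(pi1 / mu1) + (1 - pi1) * np.log((1 - pi1) / (1 - mu1))

print(np.corrcoef(w, kl))  # related, but the induced switching decisions can differ
```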

5 Experiments

Figure 2: Off-policy evaluation simulation results. MSE versus sample size $n$ for two different reward distributions, on the ecoli, glass, yeast, and optdigits UCI datasets.

In this section, we outline our experimental results, built on the Open Bandit Pipeline (Saito et al., 2020) library. For the choice of datasets, we follow (Dudík et al., 2011, 2014; Wang et al., 2017; Farajtabar et al., 2018; Su et al., 2020) and transform UC Irvine (UCI) multi-class classification datasets (Dua and Graff, 2017) into off-policy evaluation datasets. The classes of a multi-class dataset become the action space of a contextual bandit problem, so the predicted label becomes the action taken. The classification error becomes the observed loss $l$; note that this is equivalent to observing a reward $r$ under the transformation $r=1-l$. Specifically, we consider the indicator loss. That is, for a multi-class example $(x,a^{*})$: when the rewards are deterministic, taking an action $a$ yields the reward $r=\mathds{1}\{a=a^{*}\}$; when the rewards are stochastic, taking an action $a$ yields the reward $r=\mathds{1}\{a=a^{*}\}$ with probability $0.7$ and $r=1-\mathds{1}\{a=a^{*}\}$ otherwise.

We split every dataset into two parts: a training set (70%) and a test set (30%). We generate a context-dependent logging policy based on the training set, and use this logging policy to generate the actions. Towards this end, we generate actions uniformly at random and train a logistic regression model on the training set with these randomized actions. We use this model on the test set to get the predicted action distribution, which is used as the logging policy. This logging policy $\mu$ is then used to generate a bandit dataset of size $n$ by sampling a context $x$ from the entire dataset, sampling an action $a\sim\mu(\cdot\mid x)$, and then observing a deterministic or stochastic reward $r$. The value of $n$ varies across datasets.

We also use this training set to generate an evaluation policy by repeating the procedure used for the logging policy, with the only difference being that the logistic regression model is trained with the true actions present in the training set. We use this evaluation policy $\pi$ to predict the actions in the test set and calculate the expected reward, which forms the ground-truth value for the given UCI dataset.
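The supervised-to-bandit conversion and policy construction described above can be sketched as follows, with scikit-learn's LogisticRegression standing in for the policy models; the function name and the simplified handling of the train/test split are assumptions of this sketch rather than the exact pipeline used for the reported results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def make_policies(X, y, seed=0):
    """Sketch of the logging/evaluation policy construction described above.

    X: (N, d) features of a UCI multi-class dataset
    y: (N,) class labels, assumed encoded as integers 0, ..., n_actions - 1
    Returns action probabilities of both policies on the held-out test contexts.
    """
    rng = np.random.default_rng(seed)
    X_tr, X_te, y_tr, _ = train_test_split(X, y, test_size=0.3, random_state=seed)
    n_actions = len(np.unique(y))

    # Logging policy: logistic regression fit on uniformly random actions.
    random_actions = rng.integers(n_actions, size=len(y_tr))
    mu_model = LogisticRegression(max_iter=1000).fit(X_tr, random_actions)

    # Evaluation policy: logistic regression fit on the true labels.
    pi_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    mu_probs = mu_model.predict_proba(X_te)      # logging probabilities on test contexts
    pi_probs = pi_model.predict_proba(X_te)      # target probabilities on test contexts
    return mu_probs, pi_probs, X_te
```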

We measure the performance of the various estimators with the clipped mean squared error (MSE), i.e., $\mathbb{E}[(\widehat{V}^{\pi}-V^{\pi})^{2}\wedge 1]$, where $\widehat{V}^{\pi}$ is the value of the estimator and $V^{\pi}$ is the ground truth. We use 300–500 replicates of bandit data generation to estimate the MSE and repeat this process for 10–40 different seeds to generate the bands ($\pm 0.5$ standard deviation) around the estimated MSE.

We compare the MSE of the following estimators: 1. DR-IC: our proposed estimator given in (12), with the tuning parameter optimized according to (13); 2. DR-IC (oracle): our proposed estimator with the tuning parameter optimized using the actual MSE with respect to the ground truth; 3. DR-IC ($\tau=0$): our proposed $\widehat{V}^{\pi}_{\mathrm{DM\mbox{-}IB}}$ as given in (11); 4. DM: as given in (3), using the traditional ridge regression reward estimate (7); 5. DR: the standard DR estimator (5) using the reward estimate (7); 6. MR-DR: the more robust doubly robust estimator of Farajtabar et al., (2018); 7. DR-OS (oracle): the oracle-tuned version of the shrinkage-based estimator proposed in Su et al., (2020); 8. DR-OS: the optimized version of the shrinkage estimator proposed in Su et al., (2020); 9. switch-DR (oracle): the oracle version of the switch estimator proposed in Wang et al., (2017); and 10. switch-DR: the switch estimator proposed in Wang et al., (2017) with tuning parameters optimized using an estimated MSE. In Appendix D, we include a separate simulation study comparing DR-IC ($\tau=0$) with a standard Nadaraya-Watson estimator, to highlight the benefits of the specific kernel introduced in our model in effectively borrowing information.

In Fig. 2, we show several attractive properties of the proposed method relative to state-of-the-art algorithms. First, in seven out of the eight cases considered, borrowing information yields a significant practical improvement, as shown by the performance of DR-IC ($\tau=0$) (DM-IB) over the standard DM across all sample sizes. In fact, in five out of the eight cases, DR-IC ($\tau=0$) either outperforms or is as good as the traditional DR approach across the different sample sizes.

Second, the proposed context-based switching algorithm improves upon the performance of DR-IC ($\tau=0$) and clearly dominates the state-of-the-art methods in seven out of the eight cases considered. The oracle version of the proposed method, DR-IC (oracle), significantly outperforms the oracle versions of the other methods, while the tuned version, DR-IC, is either as good as or better than the tuned versions of the competitors. This also highlights the advantages of the KL based switching approach over the importance weight based switching approach of Wang et al., (2017). In five out of the eight cases, DR-IC shows significant improvement over switch-DR, and it is as good as switch-DR in the remaining three cases.

Third, the results indicate that for the deterministic reward model, DR-IC and DR-IC (oracle) are at least as good as, or significantly better than, all their counterparts. For the stochastic reward model, the variability increases for all estimators. Nonetheless, DR-IC is still at least as good as, or better than, all the other estimators in three out of four cases.

Additional experiments: We have included more experimental results and details of the implementation in the appendix. In Appendix D, we provide a comparison of the proposed approach with a kernel based non-parametric reward estimator and illustrate the advantages of the proposed formulation. We have also included the code in this GitHub page.

6 Conclusions

In this work, we have introduced the idea of information borrowing, which utilizes the similarity structure among context-action pairs and the importance weights associated with the logging and evaluation policies to significantly improve the practical properties of a traditional direct method for OPE. Under the assumptions discussed in Section 4, the proposed reward estimator is asymptotically unbiased, which is usually not the case for traditional DM reward models. We further introduce a KL divergence based switching algorithm that employs the strengths of information borrowing, resulting in a context-based switch estimator that dominates the state-of-the-art algorithms in most cases. These advantages are relevant not just for the fundamental problem of off-policy evaluation as considered in this article, but for more general problems in off-policy optimization and reinforcement learning as well. The notion of effective information borrowing becomes particularly crucial when assimilating information from multiple context distributions or logging policies with varying similarities to the evaluation policy. These problems indicate exciting possibilities for future research using the idea of effective policy dependent information borrowing introduced in this paper. Another exciting direction of research would be to identify better optimization schemes for the tuning parameter $\tau$ and get the tuned estimate closer to the oracle version.

References

  • Bang and Robins, (2005) Bang, H. and Robins, J. M. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973.
  • Bottou et al., (2013) Bottou, L., Peters, J., Quiñonero-Candela, J., Charles, D. X., Chickering, D. M., Portugaly, E., Ray, D., Simard, P., and Snelson, E. (2013). Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14(11).
  • Brunk, (1961) Brunk, H. (1961). Best fit to a random variable by a random variable measurable with respect to a σ\sigma-lattice. Pacific Journal of Mathematics, 11(3):785–802.
  • Brunk, (1963) Brunk, H. (1963). On an extension of the concept conditional expectation. Proceedings of the American Mathematical Society, 14(2):298–304.
  • Dua and Graff, (2017) Dua, D. and Graff, C. (2017). UCI machine learning repository.
  • Dudík et al., (2014) Dudík, M., Erhan, D., Langford, J., and Li, L. (2014). Doubly robust policy evaluation and optimization. Statistical Science, 29(4):485–511.
  • Dudík et al., (2011) Dudík, M., Langford, J., and Li, L. (2011). Doubly robust policy evaluation and learning. arXiv preprint arXiv:1103.4601.
  • Farajtabar et al., (2018) Farajtabar, M., Chow, Y., and Ghavamzadeh, M. (2018). More robust doubly robust off-policy evaluation. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 1446–1455.
  • Horvitz and Thompson, (1952) Horvitz, D. G. and Thompson, D. J. (1952). A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47(260):663–685.
  • Khan et al., (2024) Khan, S., Saveski, M., and Ugander, J. (2024). Off-policy evaluation beyond overlap: Sharp partial identification under smoothness. In Forty-first International Conference on Machine Learning.
  • Li et al., (2011) Li, L., Chu, W., Langford, J., and Wang, X. (2011). Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 297–306.
  • Lichtenberg et al., (2023) Lichtenberg, J. M., Buchholz, A., Di Benedetto, G., Ruffini, M., and London, B. (2023). Double clipping: Less-biased variance reduction in off-policy evaluation. arXiv preprint arXiv:2309.01120.
  • Mandel et al., (2014) Mandel, T., Liu, Y.-E., Levine, S., Brunskill, E., and Popovic, Z. (2014). Offline policy evaluation across representations with applications to educational games. In AAMAS, volume 1077.
  • Mou et al., (2023) Mou, W., Ding, P., Wainwright, M. J., and Bartlett, P. L. (2023). Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency. arXiv preprint arXiv:2301.06240.
  • Nadaraya, (1964) Nadaraya, E. A. (1964). On estimating regression. Theory of Probability & Its Applications, 9(1):141–142.
  • Robins and Rotnitzky, (1995) Robins, J. M. and Rotnitzky, A. (1995). Semiparametric efficiency in multivariate regression models with missing data. Journal of the American Statistical Association, 90(429):122–129.
  • Saito et al., (2020) Saito, Y., Aihara, S., Matsutani, M., and Narita, Y. (2020). Open bandit dataset and pipeline: Towards realistic and reproducible off-policy evaluation. arXiv preprint arXiv:2008.07146.
  • Šidák, (1957) Šidák, Z. (1957). On relations between strict-sense and wide-sense conditional expectations. Theory of Probability & Its Applications, 2(2):267–272.
  • Su et al., (2020) Su, Y., Dimakopoulou, M., Krishnamurthy, A., and Dudík, M. (2020). Doubly robust off-policy evaluation with shrinkage. In International Conference on Machine Learning, pages 9167–9176.
  • Wang et al., (2017) Wang, Y.-X., Agarwal, A., and Dudık, M. (2017). Optimal and adaptive off-policy evaluation in contextual bandits. In International Conference on Machine Learning, pages 3589–3597.
  • Zhou et al., (2017) Zhou, X., Mayer-Hamblett, N., Khan, U., and Kosorok, M. R. (2017). Residual weighted learning for estimating individualized treatment rules. Journal of the American Statistical Association, 112(517):169–187.

Appendix A Proof of Theorem 1

For simplicity, we prove Theorem 1 when $x$ and $a$ are scalars, and we assume $\nu_{C}(x)=U(0,1)$ and $\mu(a\mid x)=U(0,1)$, where $U(0,1)$ is the uniform distribution between 0 and 1 with density function $\bm{1}_{(0,1)}(\cdot)$. In addition, we assume there is no intercept term in the least squares estimate $\widehat{\theta}_{\mathrm{ls}}$. For more general scenarios, some modifications of the assumptions may be necessary. The kernel function is

$$[\Sigma(\widetilde{\mathbf{r}},\mathbf{r})]_{ji}=\frac{1}{h_{n}\sqrt{\widetilde{w}_{j}w_{i}}}K\Big(\frac{\widetilde{x}_{j}-x_{i}}{h_{n}\sqrt{\widetilde{w}_{j}w_{i}}}\Big)\,\frac{1}{h_{n}\sqrt{\widetilde{w}_{j}w_{i}}}K\Big(\frac{\widetilde{a}_{j}-a_{i}}{h_{n}\sqrt{\widetilde{w}_{j}w_{i}}}\Big).$$

Main idea of the proof: We first show that the information borrowing estimator $\widehat{r}_{\mathrm{IB}}(z)$ evaluated at any point $z=(x,a)^{\top}$ is asymptotically unbiased. This essentially leads to the asymptotic unbiasedness of the direct method estimator that uses $\widehat{r}_{\mathrm{IB}}(z)$, namely $\widehat{V}_{\mathrm{DM\mbox{-}IB}}$. The proof follows three main steps:

(i) We decompose $\widehat{r}_{\mathrm{IB}}(z)$ into two parts: a weighted sum involving the least squares estimate $\widehat{\theta}_{\mathrm{ls}}$, i.e., the first two terms in (14), and a weighted sum of $\mathbf{r}$, i.e., the third term in (14). Hence, its expectation with respect to $\nu_{R}(r|x,a)$ follows the same construction, with $\widehat{\theta}_{\mathrm{ls}}$ and $\mathbf{r}$ replaced by their expectations; see (15).

(ii) We show $A_{n}\rightarrow x$, $B_{n}\rightarrow a$, and $C_{n}\rightarrow r(x,a)$ for any $(x,a)$. These quantities can be seen as three convolutions with the non-stationary kernel (8). Due to the asymptotic degeneracy of the non-stationary kernel as the bandwidth $h_{n}$ goes to zero with the sample size, the three convolutions converge to their respective degenerate points $x$, $a$, and $r(x,a)$.

(iii) As $\widehat{\theta}_{\mathrm{ls}}$ is norm-bounded, the first two terms in (15) converge to zero and the last term converges to the reward function value $r(x,a)$. These observations lead to $\mathbb{E}_{r\mid x,a}[\widehat{r}_{\mathrm{IB}}(z)\mid\bm{x},\bm{a}]\stackrel{p}{\rightarrow}r(x,a)$, and subsequently to the asymptotic unbiasedness of $\widehat{V}_{\mathrm{DM\mbox{-}IB}}$.

The information borrowing estimator $\widehat{r}_{\mathrm{IB}}(z)$ for the reward function $r(x,a)$ can be decomposed into three terms as follows.

$$\begin{aligned}
\widehat{r}_{\mathrm{IB}}(z) &= z^{\top}\widehat{\theta}_{\mathrm{ls}}+\Sigma(z,\mathbf{r})\,\Sigma^{-1}_{\mathbf{r}}\big[\mathrm{diag}\{\Sigma(z,\mathbf{r})\Sigma^{-1}_{\mathbf{r}}\bm{1}_{n}\}\big]^{-1}(\mathbf{r}-Z\widehat{\theta}_{\mathrm{ls}}) \\
&= \Big(z-\frac{1}{n(1+o(1))}\Sigma(z,\mathbf{r})Z\Big)\widehat{\theta}_{\mathrm{ls}}+\frac{1}{n(1+o(1))}\Sigma(z,\mathbf{r})\,\mathbf{r}, \qquad \text{by Assumption 4} \\
&= \Big(x-\frac{1}{n(1+o(1))}\sum_{i=1}^{n}\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{x-x_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{a-a_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\,x_{i}\Big)\widehat{\theta}_{x} \\
&\quad+\Big(a-\frac{1}{n(1+o(1))}\sum_{i=1}^{n}\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{x-x_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{a-a_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\,a_{i}\Big)\widehat{\theta}_{a} \\
&\quad+\frac{1}{n(1+o(1))}\sum_{i=1}^{n}\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{x-x_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{a-a_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\,r_{i}, \qquad (14)
\end{aligned}$$

where $\bm{x}=(x_{1},\ldots,x_{n})^{\top}$, $\bm{a}=(a_{1},\ldots,a_{n})^{\top}$, $w_{z}=w(x,a)=\frac{\pi(a\mid x)}{\mu(a\mid x)}$, and $w_{i}=w(x_{i},a_{i})=\frac{\pi(a_{i}\mid x_{i})}{\mu(a_{i}\mid x_{i})}$. Now,

$$\begin{aligned}
\mathbb{E}_{r\mid x,a}[\widehat{r}_{\mathrm{IB}}(z)\mid\bm{x},\bm{a}] &= \Big\{x-\frac{1}{n(1+o(1))}\sum_{i=1}^{n}\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{x-x_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{a-a_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\,x_{i}\Big\}\cdot\mathbb{E}_{r}[\widehat{\theta}_{x}\mid\bm{x},\bm{a}] \\
&\quad+\Big\{a-\frac{1}{n(1+o(1))}\sum_{i=1}^{n}\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{x-x_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{a-a_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\,a_{i}\Big\}\cdot\mathbb{E}_{r}[\widehat{\theta}_{a}\mid\bm{x},\bm{a}] \\
&\quad+\frac{1}{n(1+o(1))}\sum_{i=1}^{n}\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{x-x_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{a-a_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\cdot r(x_{i},a_{i}) \\
&:= (x-A_{n})\cdot\mathbb{E}_{r}[\widehat{\theta}_{x}\mid\bm{x},\bm{a}]+(a-B_{n})\cdot\mathbb{E}_{r}[\widehat{\theta}_{a}\mid\bm{x},\bm{a}]+C_{n}. \qquad (15)
\end{aligned}$$

Next, we show $A_{n}\stackrel{p}{\rightarrow}x$ and $B_{n}\stackrel{p}{\rightarrow}a$, where $\stackrel{p}{\rightarrow}$ denotes convergence in probability.

$$\begin{aligned}
\mathbb{E}_{x,a}[A_{n}] &= \frac{1}{n(1+o(1))}\sum_{i=1}^{n}\int_{a_{i}}\int_{x_{i}}\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{x-x_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{a-a_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\,x_{i}\,\bm{1}_{(0,1)}(x_{i})\,\bm{1}_{(0,1)}(a_{i})\,dx_{i}\,da_{i} \\
&= \int_{-\infty}^{+\infty}\Bigg\{\int_{-\infty}^{+\infty}\frac{1}{1+o(1)}\frac{1}{h_{n}\sqrt{w_{z}w_{z^{\prime}}}}K\Big(\frac{x-x^{\prime}}{h_{n}\sqrt{w_{z}w_{z^{\prime}}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{z^{\prime}}}}K\Big(\frac{a-a^{\prime}}{h_{n}\sqrt{w_{z}w_{z^{\prime}}}}\Big)\,x^{\prime}\,\bm{1}_{(0,1)}(x^{\prime})\,dx^{\prime}\Bigg\}\bm{1}_{(0,1)}(a^{\prime})\,da^{\prime} \\
&= \Bigg(\int_{a-c_{n}h_{n}\sqrt{w_{z}}}^{a+c_{n}h_{n}\sqrt{w_{z}}}+\int_{-\infty}^{a-c_{n}h_{n}\sqrt{w_{z}}}+\int_{a+c_{n}h_{n}\sqrt{w_{z}}}^{+\infty}\Bigg)\Bigg(\int_{x-c_{n}h_{n}\sqrt{w_{z}}}^{x+c_{n}h_{n}\sqrt{w_{z}}}+\int_{-\infty}^{x-c_{n}h_{n}\sqrt{w_{z}}}+\int_{x+c_{n}h_{n}\sqrt{w_{z}}}^{+\infty}\Bigg) \\
&\qquad\frac{1}{1+o(1)}\frac{1}{h_{n}\sqrt{w_{z}w_{z^{\prime}}}}K\Big(\frac{x-x^{\prime}}{h_{n}\sqrt{w_{z}w_{z^{\prime}}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{z^{\prime}}}}K\Big(\frac{a-a^{\prime}}{h_{n}\sqrt{w_{z}w_{z^{\prime}}}}\Big)\,x^{\prime}\,\bm{1}_{(0,1)}(x^{\prime})\,\bm{1}_{(0,1)}(a^{\prime})\,dx^{\prime}\,da^{\prime} \\
&:= A_{1n}+A_{2n}+A_{3n}+A_{4n}+A_{5n}+A_{6n}+A_{7n}+A_{8n}+A_{9n},
\end{aligned}$$

where $c_{n}\rightarrow+\infty$, $h_{n}\rightarrow 0$, and $c_{n}h_{n}\rightarrow 0$ as $n\rightarrow\infty$, and $z'=(x',a')$, $w_{z'}=w(x',a')=\frac{\pi(a'\mid x')}{\mu(a'\mid x')}$.

A_{1n} =\int_{a-c_{n}h_{n}\sqrt{w_{z}}}^{a+c_{n}h_{n}\sqrt{w_{z}}}\int_{x-c_{n}h_{n}\sqrt{w_{z}}}^{x+c_{n}h_{n}\sqrt{w_{z}}}\frac{1}{1+o(1)}\frac{1}{h_{n}\sqrt{w_{z}w_{z'}}}K\Big(\frac{x-x'}{h_{n}\sqrt{w_{z}w_{z'}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{z'}}}K\Big(\frac{a-a'}{h_{n}\sqrt{w_{z}w_{z'}}}\Big)\cdot x'\,\bm{1}_{(0,1)}(x')\,\bm{1}_{(0,1)}(a')\,dx'\,da'.
Letting $t=\frac{x'-x}{h_{n}\sqrt{w_{z}}}$ and $s=\frac{a'-a}{h_{n}\sqrt{w_{z}}}$, we get
A_{1n} =\int_{-c_{n}}^{c_{n}}\int_{-c_{n}}^{c_{n}}\frac{1}{1+o(1)}\frac{1}{\sqrt{w_{z'}}}K\Big(\frac{t}{\sqrt{w_{z'}}}\Big)\frac{1}{\sqrt{w_{z'}}}K\Big(\frac{s}{\sqrt{w_{z'}}}\Big)(x+th_{n}\sqrt{w_{z}})\,\bm{1}_{(-\frac{x}{h_{n}\sqrt{w_{z}}},\frac{1-x}{h_{n}\sqrt{w_{z}}})}(t)\,\bm{1}_{(-\frac{a}{h_{n}\sqrt{w_{z}}},\frac{1-a}{h_{n}\sqrt{w_{z}}})}(s)\,dt\,ds.
Since $w_{z'}=w(x',a')=w(x+th_{n}\sqrt{w_{z}},\,a+sh_{n}\sqrt{w_{z}})\rightarrow w(x,a)=w_{z}$ as $n\rightarrow\infty$,
A_{1n}\rightarrow x\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\frac{1}{\sqrt{w_{z}}}K\Big(\frac{t}{\sqrt{w_{z}}}\Big)\frac{1}{\sqrt{w_{z}}}K\Big(\frac{s}{\sqrt{w_{z}}}\Big)\,dt\,ds=x.
A_{2n} =\int_{a-c_{n}h_{n}\sqrt{w_{z}}}^{a+c_{n}h_{n}\sqrt{w_{z}}}\int_{-\infty}^{x-c_{n}h_{n}\sqrt{w_{z}}}\frac{1}{1+o(1)}\frac{1}{h_{n}\sqrt{w_{z}w_{z'}}}K\Big(\frac{x-x'}{h_{n}\sqrt{w_{z}w_{z'}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{z'}}}K\Big(\frac{a-a'}{h_{n}\sqrt{w_{z}w_{z'}}}\Big)\cdot x'\,\bm{1}_{(0,1)}(x')\,\bm{1}_{(0,1)}(a')\,dx'\,da'.
With the same substitution $t=\frac{x'-x}{h_{n}\sqrt{w_{z}}}$ and $s=\frac{a'-a}{h_{n}\sqrt{w_{z}}}$,
A_{2n} =\int_{-c_{n}}^{c_{n}}\int_{-\infty}^{-c_{n}}\frac{1}{1+o(1)}\frac{1}{\sqrt{w_{z'}}}K\Big(\frac{t}{\sqrt{w_{z'}}}\Big)\frac{1}{\sqrt{w_{z'}}}K\Big(\frac{s}{\sqrt{w_{z'}}}\Big)(x+th_{n}\sqrt{w_{z}})\,\bm{1}_{(-\frac{x}{h_{n}\sqrt{w_{z}}},\frac{1-x}{h_{n}\sqrt{w_{z}}})}(t)\,\bm{1}_{(-\frac{a}{h_{n}\sqrt{w_{z}}},\frac{1-a}{h_{n}\sqrt{w_{z}}})}(s)\,dt\,ds,
and since $w_{z'}\rightarrow w_{z}$ and $\int_{-\infty}^{-c_{n}}\frac{1}{\sqrt{w_{z}}}K\big(\frac{t}{\sqrt{w_{z}}}\big)\,dt\rightarrow 0$ when $c_{n}\rightarrow+\infty$, we get $A_{2n}\rightarrow 0$ as $n\rightarrow\infty$.

Similarly, we can show $A_{jn}\rightarrow 0$ as $n\rightarrow\infty$ for $j=3,\ldots,9$. Thus,

\mathbb{E}_{x,a}[A_{n}]\rightarrow x,\quad\text{as }n\rightarrow\infty.

Therefore, $A_{n}\stackrel{p}{\rightarrow}x$ as $n\rightarrow\infty$. Similarly, we can show $B_{n}\stackrel{p}{\rightarrow}a$ as $n\rightarrow\infty$. Finally, we show $C_{n}\stackrel{p}{\rightarrow}r(x,a)$.

\mathbb{E}_{x,a}[C_{n}] =\frac{1}{n(1+o(1))}\sum_{i=1}^{n}\int_{a_{i}}\int_{x_{i}}\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{x-x_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{i}}}K\Big(\frac{a-a_{i}}{h_{n}\sqrt{w_{z}w_{i}}}\Big)\,\bm{1}_{(0,1)}(x_{i})\,\bm{1}_{(0,1)}(a_{i})\,r(x_{i},a_{i})\,dx_{i}\,da_{i}
=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\frac{1}{1+o(1)}\frac{1}{h_{n}\sqrt{w_{z}w_{z'}}}K\Big(\frac{x-x'}{h_{n}\sqrt{w_{z}w_{z'}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{z'}}}K\Big(\frac{a-a'}{h_{n}\sqrt{w_{z}w_{z'}}}\Big)\,r(x',a')\,\bm{1}_{(0,1)}(x')\,\bm{1}_{(0,1)}(a')\,dx'\,da'
:=C_{1n}+C_{2n}+\cdots+C_{9n},
where $C_{1n},\ldots,C_{9n}$ are defined by splitting the integration domain into the same nine regions used in the expansion of $\mathbb{E}_{x,a}[A_{n}]$ above.
C_{1n} =\int_{a-c_{n}h_{n}\sqrt{w_{z}}}^{a+c_{n}h_{n}\sqrt{w_{z}}}\int_{x-c_{n}h_{n}\sqrt{w_{z}}}^{x+c_{n}h_{n}\sqrt{w_{z}}}\frac{r(x',a')}{1+o(1)}\frac{1}{h_{n}\sqrt{w_{z}w_{z'}}}K\Big(\frac{x-x'}{h_{n}\sqrt{w_{z}w_{z'}}}\Big)\frac{1}{h_{n}\sqrt{w_{z}w_{z'}}}K\Big(\frac{a-a'}{h_{n}\sqrt{w_{z}w_{z'}}}\Big)\,\bm{1}_{(0,1)}(x')\,\bm{1}_{(0,1)}(a')\,dx'\,da'.
Letting $t=\frac{x'-x}{h_{n}\sqrt{w_{z}}}$ and $s=\frac{a'-a}{h_{n}\sqrt{w_{z}}}$, we get
C_{1n} =\int_{-c_{n}}^{c_{n}}\int_{-c_{n}}^{c_{n}}\frac{r(x+th_{n}\sqrt{w_{z}},\,a+sh_{n}\sqrt{w_{z}})}{1+o(1)}\frac{1}{\sqrt{w_{z'}}}K\Big(\frac{t}{\sqrt{w_{z'}}}\Big)\frac{1}{\sqrt{w_{z'}}}K\Big(\frac{s}{\sqrt{w_{z'}}}\Big)\,\bm{1}_{(-\frac{x}{h_{n}\sqrt{w_{z}}},\frac{1-x}{h_{n}\sqrt{w_{z}}})}(t)\,\bm{1}_{(-\frac{a}{h_{n}\sqrt{w_{z}}},\frac{1-a}{h_{n}\sqrt{w_{z}}})}(s)\,dt\,ds.
Since $w_{z'}=w(x+th_{n}\sqrt{w_{z}},\,a+sh_{n}\sqrt{w_{z}})\rightarrow w(x,a)=w_{z}$ and $r(x+th_{n}\sqrt{w_{z}},\,a+sh_{n}\sqrt{w_{z}})\rightarrow r(x,a)$ as $n\rightarrow\infty$,
C_{1n}\rightarrow r(x,a)\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\frac{1}{\sqrt{w_{z}}}K\Big(\frac{t}{\sqrt{w_{z}}}\Big)\frac{1}{\sqrt{w_{z}}}K\Big(\frac{s}{\sqrt{w_{z}}}\Big)\,dt\,ds=r(x,a).

Similarly, we can show $C_{jn}\rightarrow 0$ as $n\rightarrow\infty$ for $j=2,\ldots,9$. Thus,

\mathbb{E}_{x,a}[C_{n}]\rightarrow r(x,a),\quad\text{as }n\rightarrow\infty.

Therefore, $C_{n}\stackrel{p}{\rightarrow}r(x,a)$ as $n\rightarrow\infty$. By Assumption 3, $\mathbb{E}_{r\mid x,a}[\widehat{r}_{\mathrm{IB}}(z)\mid\bm{x},\bm{a}]\stackrel{p}{\rightarrow}r(x,a)$ as $n\rightarrow\infty$.

\mathbb{E}[\widehat{V}_{\mathrm{DM-IB}}] =\mathbb{E}\Big[\frac{1}{n}\sum_{i=1}^{n}\sum_{a\in\mathcal{A}}\pi(a\mid x_{i})\,\widehat{r}_{\mathrm{IB}}(x_{i},a)\Big]
=\mathbb{E}\Big[\sum_{a\in\mathcal{A}}\pi(a\mid x)\,\widehat{r}_{\mathrm{IB}}(x,a)\Big]
=\mathbb{E}_{\pi}[\widehat{r}_{\mathrm{IB}}(x,a)]
=\mathbb{E}_{x}\mathbb{E}_{a\mid x\sim\pi}\mathbb{E}_{r\mid x,a}[\widehat{r}_{\mathrm{IB}}(x,a)]\stackrel{p}{\rightarrow}\mathbb{E}_{x}\mathbb{E}_{a\mid x\sim\pi}\mathbb{E}_{r\mid x,a}[r(x,a)]=\mathbb{E}_{\pi}[r]=V^{\pi},\quad\text{as }n\rightarrow\infty.

We conclude that the bias of the direct method estimator built on the information-borrowing estimator $\widehat{r}_{\mathrm{IB}}(z)$ goes to zero in probability as $n\rightarrow\infty$.
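To illustrate the convergence argument numerically, the following toy sketch evaluates the local-averaging term $C_{n}$ at a fixed $(x,a)$ for increasing $n$. All modelling choices here (uniform design on $(0,1)^{2}$, Gaussian kernel, logging policy $\mu(a\mid x)=1$, target policy $\pi(a\mid x)=2a$ so that $w(x,a)=2a$, and the toy reward function) are illustrative assumptions, not part of the experiments reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K = lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)   # Gaussian kernel
r_fun = lambda x, a: np.sin(2 * np.pi * x) * a + x       # smooth toy reward r(x, a)
x0, a0 = 0.6, 0.4                                        # target point z = (x, a)
w_z = 2 * a0                                             # w(x, a) = pi(a|x) / mu(a|x) = 2a under the toy policies

for n in [1_000, 10_000, 100_000]:
    h = n ** (-1 / 6)                                    # h_n -> 0 while n * h_n^2 -> infinity
    x_i, a_i = rng.uniform(size=n), rng.uniform(size=n)  # logged contexts and actions, uniform design
    w_i = 2 * a_i                                        # importance weights of the logged points
    b = h * np.sqrt(w_z * w_i)                           # weight-scaled bandwidths h_n * sqrt(w_z * w_i)
    C_n = np.mean(K((x0 - x_i) / b) / b * K((a0 - a_i) / b) / b * r_fun(x_i, a_i))
    print(n, round(float(C_n), 3), round(r_fun(x0, a0), 3))  # C_n approaches r(x0, a0) as n grows
```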

Appendix B Proof of Theorem 2

Our proof follows the same steps as the proof of (Wang et al.,, 2017, Theorem 2). Without loss of generality, we assume the context $x$ is a scalar. Let $\mathbb{X}_{\tau}:=\{x\in\mathcal{X}:D_{\mathrm{KL}}(x)<\tau\}$. The mean squared error decomposes into squared bias and variance,

\mathrm{MSE}(\widehat{V}_{\mathrm{DR-IC}})=\big(\mathbb{E}[\widehat{V}_{\mathrm{DR-IC}}]-V^{\pi}\big)^{2}+\mathrm{Var}[\widehat{V}_{\mathrm{DR-IC}}]. \qquad (16)

For the bias term, note that we only need to consider $x\in\mathbb{X}_{\tau}^{c}$, so

\mathbb{E}[\widehat{V}_{\mathrm{DR-IC}}]-V^{\pi} =\mathbb{E}\Bigg[\sum_{a\in\mathcal{A}}\widehat{r}_{\mathrm{IB}}(x,a)\,\pi(a|x)\,\mathds{1}(x\in\mathbb{X}_{\tau}^{c})\Bigg]-\mathbb{E}\Bigg[\sum_{a\in\mathcal{A}}\mathbb{E}[r|x,a]\,\pi(a|x)\,\mathds{1}(x\in\mathbb{X}_{\tau}^{c})\Bigg]
=\mathbb{E}_{\pi}\Big[\big(\widehat{r}_{\mathrm{IB}}(x,a)-\mathbb{E}[r|x,a]\big)\,\mathds{1}(x\in\mathbb{X}_{\tau}^{c})\Big]
=\mathbb{E}_{\pi}\Big[\epsilon_{\mathrm{IB}}(x,a)\,\mathds{1}(x\in\mathbb{X}_{\tau}^{c})\Big]. \qquad (17)

Next, we derive an upper bound on the variance term in the MSE. For any random variables $X$ and $Y$, since $2\,\mathrm{Cov}(X,Y)\leq\mathrm{Var}(X)+\mathrm{Var}(Y)$,

\mathrm{Var}(X+Y)\leq 2\,\mathrm{Var}(X)+2\,\mathrm{Var}(Y).

Thus,

\mathrm{Var}[\widehat{V}_{\mathrm{DR-IC}}] =\mathrm{Var}\Bigg[\frac{1}{n}\sum_{i=1}^{n}\sum_{a\in\mathcal{A}}\widehat{r}_{\mathrm{IB}}(x_{i},a)\,\pi(a|x_{i})+\frac{1}{n}\sum_{i=1}^{n}\big(r_{i}-\widehat{r}_{\mathrm{IB}}(x_{i},a_{i})\big)\,w(x_{i},a_{i})\,\mathds{1}(x_{i}\in\mathbb{X}_{\tau})\Bigg]
\leq\frac{2}{n}\mathrm{Var}\Bigg[\sum_{a\in\mathcal{A}}\widehat{r}_{\mathrm{IB}}(x,a)\,\pi(a|x)\Bigg]+\frac{2}{n}\mathrm{Var}_{\mu}\Big[\big(r-\widehat{r}_{\mathrm{IB}}(x,a)\big)\,w(x,a)\,\mathds{1}(x\in\mathbb{X}_{\tau})\Big]
\leq\frac{2}{n}\mathbb{E}\Bigg[\sum_{a\in\mathcal{A}}\widehat{r}_{\mathrm{IB}}(x,a)\,\pi(a|x)\Bigg]^{2}+\frac{2}{n}\mathbb{E}_{\mu}\mathrm{Var}\Big[\big(r-\widehat{r}_{\mathrm{IB}}(x,a)\big)\,w(x,a)\,\mathds{1}(x\in\mathbb{X}_{\tau})\,\Big|\,x,a\Big]
\qquad+\frac{2}{n}\mathrm{Var}_{\mu}\mathbb{E}\Big[\big(r-\widehat{r}_{\mathrm{IB}}(x,a)\big)\,w(x,a)\,\mathds{1}(x\in\mathbb{X}_{\tau})\,\Big|\,x,a\Big]
\leq\frac{2}{n}\mathbb{E}\Bigg[|\mathcal{A}|\sum_{a\in\mathcal{A}}\widehat{r}^{2}_{\mathrm{IB}}(x,a)\,\pi^{2}(a|x)\Bigg]+\frac{2}{n}\mathbb{E}_{\mu}\Big[\mathbb{E}\big(r-\widehat{r}_{\mathrm{IB}}(x,a)\big)^{2}w^{2}(x,a)\Big]
\qquad+\frac{2}{n}\mathbb{E}_{\mu}\mathbb{E}\Big[\big(r-\widehat{r}_{\mathrm{IB}}(x,a)\big)\,w(x,a)\,\mathds{1}(x\in\mathbb{X}_{\tau})\,\Big|\,x,a\Big]^{2}
\leq\frac{2}{n}|\mathcal{A}|\,\mathbb{E}\Bigg[\sum_{a\in\mathcal{A}}\widehat{r}^{2}_{\mathrm{IB}}(x,a)\,\pi(a|x)\Bigg]+\frac{2}{n}\mathbb{E}_{\mu}\Big[w_{\max}^{2}\big(\widehat{r}_{\mathrm{IB}}(x,a)-\mathbb{E}[r|x,a]\big)^{2}\Big]
\qquad+\frac{2}{n}\mathbb{E}_{\mu}\mathbb{E}\Big[\big(r-\widehat{r}_{\mathrm{IB}}(x,a)\big)^{2}w^{2}(x,a)\,\Big|\,x,a\Big]
\leq\frac{2}{n}|\mathcal{A}|\,\mathbb{E}_{\pi}[R^{2}_{\max}]+\frac{2}{n}\mathbb{E}_{\mu}\Big\{w^{2}_{\max}\epsilon^{2}_{\mathrm{IB}}(x,a)+\mathbb{E}\big[(r-\widehat{r}_{\mathrm{IB}}(x,a))^{2}\mid x,a\big]\Big\}. \qquad (18)

Using (17) and (18) in (16), we get the desired bound.

Appendix C Additional Experiments and Implementation Details

We use 10 multi-class classification datasets from the UCI Machine Learning Repository (available at https://archive.ics.uci.edu/ml/index.php) (Dua and Graff,, 2017). We use these datasets in the comma-separated values format provided by the Datahub Machine Learning Repository (available at https://datahub.io/machine-learning). Information on the datasets is given in Table 1.

Dataset: Glass / Ecoli / Wdbc / Vehicle / Yeast / Page-Blocks / OptDigits / SatImage / PenDigits / Letter
Context dimension: 9 / 7 / 30 / 18 / 8 / 10 / 64 / 36 / 16 / 16
No. of actions: 6 / 8 / 2 / 4 / 10 / 5 / 10 / 6 / 10 / 26
Max sample size: 214 / 336 / 569 / 846 / 1484 / 5473 / 5620 / 6435 / 10992 / 20000
Table 1: Dataset information.

We now describe the hyperparameter grids used in our experiments, adapted from (Dudík et al.,, 2014; Wang et al.,, 2017; Su et al.,, 2020); a code sketch of the grid construction follows this list.
(1) For our DR-IC estimator, we have two hyperparameters, namely the bandwidth $h_{n}$ and the switching threshold $\tau$. With $\tau=\infty$, we first find the best $h_{n}$ from a grid of 30 geometrically spaced values between 0.01 and 15.0. With these best bandwidths for the different sample sizes $n$, we choose the switching parameter from a grid of 30 geometrically spaced values between the 0.01 quantile and the 1.0 quantile of the KL divergences of the contexts observed in the data. We also include the $0^{\mathrm{th}}$ quantile in this grid for the switching parameter.
(2) For the DR-OS estimator, we choose the shrinkage coefficient from a grid of 30 geometrically spaced values between $0.01\times(w_{0.05})^{2}$ and $100\times(w_{0.95})^{2}$, where $w_{0.05}$ and $w_{0.95}$ are the 0.05 and 0.95 quantiles of the importance weights ($\pi/\mu$) observed in the data.
(3) For the Switch-DR estimator, we choose the switching parameter from a grid of 25 exponentially spaced values between the 0.05 quantile and the 0.95 quantile of the importance weights observed in the data.
We note that we use 15 grid values for the datasets with maximum sample size larger than 5000 due to hardware limitations.
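The grids above can be generated as in the following minimal sketch. The variable names are ours, and the observed KL divergences and importance weights are replaced by synthetic stand-ins for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)
kl_divergences = rng.exponential(size=1000)      # stand-in for D_KL(x_i) of the observed contexts
importance_weights = rng.lognormal(size=1000)    # stand-in for the observed weights pi / mu

# (1) DR-IC: bandwidth grid, then switching grid from KL-divergence quantiles (0th quantile added).
bandwidth_grid = np.geomspace(0.01, 15.0, num=30)
tau_grid = np.geomspace(np.quantile(kl_divergences, 0.01),
                        np.quantile(kl_divergences, 1.0), num=30)
tau_grid = np.concatenate(([np.quantile(kl_divergences, 0.0)], tau_grid))

# (2) DR-OS: shrinkage grid between 0.01 * (w_{0.05})^2 and 100 * (w_{0.95})^2.
w05, w95 = np.quantile(importance_weights, [0.05, 0.95])
shrinkage_grid = np.geomspace(0.01 * w05 ** 2, 100.0 * w95 ** 2, num=30)

# (3) Switch-DR: 25 exponentially spaced values between the 0.05 and 0.95 weight quantiles.
switch_grid = np.geomspace(w05, w95, num=25)
```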

We use the Open Bandit Pipeline library (available at https://github.com/st-tech/zr-obp) to implement our DR-IC estimator and to benchmark it against the other estimators. We worked with this commit version (https://github.com/st-tech/zr-obp/commit/a4b61e9e14d1954aa2953d2b21b4e710a8a725d8) of the Open Bandit Pipeline library and provide it in our supplementary material. We now describe the practical implementation details of our DR-IC estimator, focusing on the discrete-action setting.

Recall from (10) that

\widehat{\mathbf{r}}_{\mathrm{IB}}(\widetilde{Z})=\widetilde{Z}\widehat{\theta}_{\mathrm{ls}}+\Sigma(\widetilde{\mathbf{r}},\mathbf{r})\Sigma_{\mathbf{r}}^{-1}\left[\mathrm{diag}\{\Sigma(\widetilde{\mathbf{r}},\mathbf{r})\Sigma_{\mathbf{r}}^{-1}\bm{1}_{n}\}\right]^{-1}(\mathbf{r}-Z\widehat{\theta}_{\mathrm{ls}}).

We use the Gaussian kernel for $\Sigma(\widetilde{\mathbf{r}},\mathbf{r})$ as in (9). We use standard logistic regression to obtain the estimate $\widehat{\theta}_{\mathrm{ls}}$, with three-fold cross-fitting. Finally, since $\Sigma_{\mathbf{r}}$ is unknown in practice, we set $\hat{\Sigma}_{\mathbf{r}}=\mathrm{Var}(\mathbf{r}-Z\widehat{\theta}_{\mathrm{ls}})\,I$, where $\mathrm{Var}(x)$ denotes the variance of the entries of a vector $x$ and $I$ is an identity matrix of appropriate size; this corresponds to an independent homoscedastic assumption on the residuals.
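The following is a minimal sketch (our own, not the Open Bandit Pipeline code) of how the expression above can be evaluated. The kernel matrix `Sigma_cross` $=\Sigma(\widetilde{\mathbf{r}},\mathbf{r})$ is assumed to be precomputed from (9), and the normalization is read as a row normalization of the kernel weights.

```python
import numpy as np

def r_ib(Z_tilde, Z, r, theta_ls, Sigma_cross, resid_var=None):
    """Sketch of Eq. (10): parametric prediction plus a row-normalized,
    kernel-weighted correction built from the residuals r - Z theta_ls."""
    resid = r - Z @ theta_ls                               # residuals of the parametric fit
    sigma2 = np.var(resid) if resid_var is None else resid_var
    A = Sigma_cross / sigma2                               # Sigma(r~, r) Sigma_r^{-1} with Sigma_r = sigma2 * I
    norm = A @ np.ones(len(r))                             # diag{A 1_n}: total kernel weight per target point
    return Z_tilde @ theta_ls + (A / norm[:, None]) @ resid
```

Note that with $\hat{\Sigma}_{\mathbf{r}}=\sigma^{2}I$ the factor $\sigma^{2}$ cancels in the row normalization, so only the relative kernel weights matter in the correction term.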

We now summarize our results, with the plots for all UCI datasets presented in Fig. 3. First, we present the performance statistics of our estimator against seven estimators from the off-policy evaluation literature in Tables 2 and 3. The uniformly superior performance of the information-borrowed DM (DR-IC with $\tau=0$) over the traditional DM corroborates the recent findings of Su et al., (2020) that the choice of the DM is indeed important. We also observe the uniformly dominating performance of the proposed DR-IC method over its counterparts across all datasets, especially for the deterministic reward model. Even for the stochastic reward model, DR-IC beats its counterparts in most cases. Note that, in the current experiments, the stochasticity in the reward model causes the rewards to have a discrete jump (between 0 and 1) due to the model choice. We expect an even stronger performance from the proposed approach when the reward model is continuous in the context and action spaces, because the DR-IC approach can then borrow information effectively from similar contexts. This suggests that the proposed approach is appropriate for a wide range of datasets and reward models. Furthermore, the KL-based switching dominates the importance-weight-based switching in 12 of the 20 cases considered and is equivalent in 7 cases. This clearly suggests that the KL-based switching is a preferable and robust choice across a variety of datasets.

Alg. DM DR MRDR DR-OS (oracle) DR-OS Switch-DR (oracle) Switch-DR
DR-IC (oracle) 10/0/0 10/0/0 10/0/0 10/0/0 10/0/0 10/0/0 10/0/0
DR-IC 10/0/0 8/2/0 7/0/3 5/1/4 8/1/1 3/1/6 8/2/0
DR-IC ($\tau=0$) 10/0/0 - - - - - -
Table 2: Performance statistics: number of wins / draws / losses of our estimators against the other estimators during off-policy evaluation, in the deterministic reward setting. The comparison uses the MSE metric, with datasets as units.
Alg. DM DR MRDR DR-OS (oracle) DR-OS Switch-DR (oracle) Switch-DR
DR-IC (oracle) 10/0/0 10/0/0 10/0/0 7/1/2 10/0/0 6/1/3 10/0/0
DR-IC 10/0/0 3/6/1 3/6/1 2/5/3 5/3/2 4/2/4 4/5/1
DR-IC ($\tau=0$) 9/1/0 - - - - - -
Table 3: Performance statistics: number of wins / draws / losses of our estimators against the other estimators during off-policy evaluation, in the stochastic reward setting. The comparison uses the MSE metric, with datasets as units.
Figure 3: Off-policy evaluation simulation results. MSE versus sample size $n$ for two different reward distributions, for all 10 UCI datasets described in Table 1.

Appendix D Experiments with Nadaraya-Watson regression (non-parametric method)

Our proposed estimator is a semiparametric estimator that uses a kernel function to model the correlation structure. In this section, we compare the performance of our proposed estimators with a standard Nadaraya-Watson (NW) estimator, a well-known nonparametric kernel regression technique. Given the rewards $r_{i}$ as responses, the context-action pairs $z_{i}$ as predictors, and a kernel function $K$, the NW estimate $\hat{r}_{\mathrm{NW}}$ at a point $z$ is given by

\hat{r}_{\mathrm{NW}}(z)=\frac{\sum_{i=1}^{n}r_{i}K\left((z-z_{i})/h\right)}{\sum_{j=1}^{n}K\left((z-z_{j})/h\right)}.

Here, $h$ is a bandwidth hyperparameter, which is typically optimized given the data (using cross-validation, for example). In this section we compare our estimators with DM and DR estimators that use NW kernel regression models for estimating the rewards. We use the Sklearn implementation of NW kernel regression (https://scikit-learn.org/stable/related_projects.html) for our experiments, and this is made available in our code. In these experiments, we use a radial basis function kernel, that is, $K((z-z_{i})/h)=\exp(-h\|z-z_{i}\|_{2})$. For the hyperparameter $h$, we choose the best value from a grid of 20 logarithmically spaced values between 0.01 and 100 via leave-one-out cross-validation.
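For concreteness, a minimal standalone sketch of the NW estimate with this RBF kernel and the leave-one-out bandwidth selection is given below; it is our own illustration, not the implementation used in the experiments. Note that with this parameterization a larger $h$ corresponds to a narrower kernel.

```python
import numpy as np

def nw_predict(z_query, Z, r, h):
    """Nadaraya-Watson estimate with the RBF kernel K((z - z_i)/h) = exp(-h ||z - z_i||_2)."""
    d = np.linalg.norm(Z - z_query, axis=1)      # ||z - z_i||_2 for all logged points
    k = np.exp(-h * d)                           # kernel weights
    return np.sum(k * r) / np.sum(k)             # weighted average of logged rewards

def loo_select_h(Z, r, h_grid):
    """Leave-one-out cross-validation over the bandwidth grid, e.g. np.logspace(-2, 2, 20)."""
    best_h, best_err = None, np.inf
    for h in h_grid:
        errs = [(r[i] - nw_predict(Z[i], np.delete(Z, i, 0), np.delete(r, i), h)) ** 2
                for i in range(len(r))]
        if np.mean(errs) < best_err:
            best_h, best_err = h, np.mean(errs)
    return best_h
```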

The experiments, summarized in Fig. 4, show the superior performance of our approach in most cases. Interestingly, both our approach and the NW approach have a single bandwidth parameter. However, our DM-IB (DR-IC with $\tau=0$) method includes the importance-weight terms in the kernel (Eq. 8), which implicitly allows an adaptive bandwidth for different actions. This allows more flexible borrowing, which is not possible in a traditional implementation of the NW estimator. This is crucial because the information shared across $(x_{1},a_{1})$ and $(x_{2},a_{1})$ should, in general, be different from the information shared across $(x_{1},a_{2})$ and $(x_{2},a_{2})$ when $a_{1}\neq a_{2}$.
Our formulation allows borrowing of information across $x_{1}$ and $x_{2}$ only when the corresponding actions are the same, through an indicator function (a schematic sketch of this action-restricted kernel weight is given below). A traditional nonparametric estimator will borrow information even when the actions are different, leading to higher bias, especially when the actions are nominal. Traditional nonparametric methods also suffer because their non-asymptotic error terms become very large, especially in higher-dimensional problems, as pointed out in Wang et al., (2017) and the references therein.
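The following schematic kernel weight illustrates the point; it is for illustration only, since the actual kernel in Eqs. (8)-(9) additionally scales the bandwidth by the importance weights.

```python
import numpy as np

def borrowing_weight(x_i, a_i, x_j, a_j, h):
    """Schematic kernel weight: contexts share information only when the actions match."""
    if a_i != a_j:                      # indicator 1{a_i = a_j}: no borrowing across different actions
        return 0.0
    return float(np.exp(-np.sum((np.asarray(x_i) - np.asarray(x_j)) ** 2) / h ** 2))
```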

Figure 4: NW-based DM and DR comparisons.

For a fairer comparison, we also emulate the indicator function for the action covariates by choosing an appropriate fixed bandwidth. These experiments, whose results are given in Fig. 5, still show the superior performance of our approach in most cases. The reason is that the role of the kernel function is fundamentally different in the two approaches. In the proposed estimator, the parametric estimate obtained through ridge regression performs the overall borrowing of information, while the bandwidth parameter controls the differential borrowing of information across context-action pairs with the same action (in an adaptive manner depending on the importance weights), which leads to a significant improvement in practical performance. For example, if the bandwidth parameter is very small, resulting in no borrowing of information, the estimate for that context-action pair reduces to the ridge regression estimate. In contrast, the NW estimator has a bandwidth parameter that completely controls the borrowing of information across the observations; if that bandwidth is very small, the final estimate reduces to Dirac delta spikes, with no borrowing among observations whatsoever.

Figure 5: NW-based DM and DR comparisons with an indicator approximation for the action covariates.

We also repeat the experiments in Fig. 5 with an adaptive version of the NW estimator (labelled dm-anw and dr-anw). That is, we use the adaptive radial basis function kernel $K((z_{i}-z_{j})/h)=\exp(-h\|z_{i}-z_{j}\|_{2}/(w_{i}w_{j}))$, where the weights are as in Eq. 8. The experiments still show the superior performance of our approach in most cases, as illustrated in Fig. 6. However, we also note the improvement in the NW estimates over the previous experiment: including the importance weights in the kernel, which makes the bandwidth adaptive, improves the overall performance. The adaptive kernel enables the algorithm to control the borrowing of information relative to the importance weights, which accounts for the improvement. Thus, irrespective of the kind of estimator used, appropriately borrowing information is the key to a better reward model, which can bring a substantial improvement to the practical performance of off-policy evaluation.
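A minimal sketch of this adaptive NW prediction is given below; the function and variable names are ours, with `w` holding the importance weights of the logged points and `w_query` the weight at the query point.

```python
import numpy as np

def adaptive_nw_predict(z_query, w_query, Z, w, r, h):
    """Sketch of the adaptive NW estimate with kernel exp(-h * ||z_i - z_j||_2 / (w_i * w_j))."""
    d = np.linalg.norm(Z - z_query, axis=1)        # distances to the logged context-action pairs
    k = np.exp(-h * d / (w * w_query))             # importance-weight-adaptive kernel weights
    return np.sum(k * r) / np.sum(k)               # Nadaraya-Watson weighted average
```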

Figure 6: Adaptive NW-based DM and DR comparisons with an indicator approximation for the action covariates.