
Universal sieve-based strategies for efficient estimation
using machine learning tools

Hongxiang Qiu (Department of Biostatistics, University of Washington), Alex Luedtke, and Marco Carone (Department of Biostatistics, University of Washington)
Abstract

Suppose that we wish to estimate a finite-dimensional summary of one or more function-valued features of an underlying data-generating mechanism under a nonparametric model. One approach to estimation is by plugging in flexible estimates of these features. Unfortunately, in general, such estimators may not be asymptotically efficient, which often makes these estimators difficult to use as a basis for inference. Though there are several existing methods to construct asymptotically efficient plug-in estimators, each such method either can only be derived using knowledge of efficiency theory or is only valid under stringent smoothness assumptions. Among existing methods, sieve estimators stand out as particularly convenient because efficiency theory is not required in their construction, their tuning parameters can be selected data adaptively, and they are universal in the sense that the same fits lead to efficient plug-in estimators for a rich class of estimands. Inspired by these desirable properties, we propose two novel universal approaches for estimating function-valued features that can be analyzed using sieve estimation theory. Compared to traditional sieve estimators, these approaches are valid under more general conditions on the smoothness of the function-valued features by utilizing flexible estimates that can be obtained, for example, using machine learning.

1 Introduction

1.1 Motivation

A common statistical problem consists of using available data in order to learn about a summary of the underlying data-generating mechanism. In many cases, this summary involves function-valued features of the distribution that cannot be estimated at a parametric rate under a nonparametric model — for example, a regression function or the density function of the distribution. Examples of useful summaries involving such features include average treatment effects [26], average derivatives [13], moments of the conditional mean function [28], variable importance measures [35] and treatment effect heterogeneity measures [14]. For ease of implementation and interpretation, in traditional approaches to estimation, these features have typically been restricted to have simple forms encoded by parametric or restrictive semiparametric models. However, when these models are misspecified, both the interpretation and validity of subsequent inferences can be compromised. To circumvent this difficulty, investigators have increasingly relied on machine learning (ML) methods to flexibly estimate these function-valued features.

Once estimates of the function-valued features are obtained, it is natural to consider plug-in estimators of the summary of interest. However, in general, such estimators are not root-n-consistent and asymptotically normal (CAN), and hence not asymptotically efficient (referred to as efficient henceforth). The lack of this property is problematic since asymptotic efficiency often forms the basis for constructing valid confidence intervals and hypothesis tests [3, 23]. When the function-valued features are estimated by ML methods, in order for the plug-in estimator to be CAN, the ML methods must not only estimate the involved function-valued features well, but must also satisfy a small-bias property with respect to the summary of interest [23, 33]. Unfortunately, because ML methods generally seek to optimize out-of-sample performance, they seldom satisfy the latter property.

1.2 Existing methodological frameworks

The targeted minimum loss-based estimation (TMLE) framework provides a means of constructing efficient plug-in estimators [30, 33]. Given an (almost arbitrary) initial ML fit that provides a good estimate of the function-valued features involved, TMLE produces an adjusted fit such that the resulting plug-in estimator has reduced bias and is efficient. This adjustment process is referred to as targeting since a generic estimate of the function-valued features is modified to better suit the goal of estimating the summary of interest. Though TMLE provides a general template for constructing efficient estimators, its implementation requires specialized expertise, namely knowledge of the analytic expression for an influence function of the summary of interest. Influence functions arise in semiparametric efficiency theory and are key to establishing efficiency, but can be difficult to derive. Furthermore, even when an influence function is known analytically, additional expertise is needed to construct a TMLE for a given problem.

Alternative approaches for constructing efficient plug-in estimators have been proposed in the literature, including the use of undersmoothing [21], twicing kernels [23], and sieves [4, 22, 28]. These methods require neither knowledge of an influence function nor any targeting of the function-valued feature estimates. Hence, the same fits can be used to simultaneously estimate different summaries of the data-generating distribution, even if these summaries were not pre-specified when obtaining the fit, and the difficulty of deriving an influence function is avoided altogether. However, these methods all rely on smoothness conditions on derivatives of the functional features that may be overly stringent. In addition, undersmoothing provides limited guidance on the choice of the tuning parameter, and the twicing kernel method requires the use of a higher-order kernel, which may lead to poor performance in small to moderate samples [17].

In contrast, under some conditions, sieve estimation can produce a flexible fit with the optimal out-of-sample performance while also yielding an efficient — and therefore root-n-consistent and asymptotically normal — plug-in estimator [28]. In this paper, we focus on extensions of this approach. In sieve estimation, we first assume that the unknown function falls in a rich function space, and then construct a sequence of approximating subspaces, indexed by sample size, that increase in complexity as sample size grows. We require that, in the limit, functions in these subspaces can approximate any function in the rich function space arbitrarily well. These approximating subspaces are referred to as sieves. By using an ordinary fitting procedure that optimizes the estimation of the function-valued feature within the sieve, the bias of the plug-in estimator can decrease sufficiently fast as the sieve grows for that estimator to be efficient. Thus, sieve estimation requires no explicit targeting toward the summary of interest.

The series estimator is one of the best known and most widely used sieve techniques. These sieves are taken as the span of the first finitely many terms in a basis that is chosen by the user to approximate the true function well. Common choices of the basis include polynomials, splines, trigonometric series and wavelets, among others. However, series estimators usually require strong smoothness assumptions on derivatives of the unknown function in order for the flexible fit to converge at a rate sufficient to ensure that the resulting plug-in estimator is efficient. As the dimension of the problem increases, the smoothness requirement may become prohibitive. Moreover, even if the smoothness assumption is satisfied, a prohibitively large sample size may be needed for some series estimators to produce a good fit. For example, if the unknown function is smooth but constant over a region, estimation based on a polynomial series can perform poorly in small to moderate samples.

Series estimators may also require the user to choose the number of terms in the series in such a way that a sufficient convergence rate results. The rates at which the number of terms should grow with sample size have been thoroughly studied (e.g., [4, 22, 28]). However, these results provide only minimal guidance for applications because there is no indication of how to select the actual number of terms for a given sample size. In practice, the number of terms in the series is often chosen by cross-validation (CV). Upper bounds on the convergence rate of the series estimator as a function of sample size and the number of terms have been derived, and it has been shown that the optimal number of terms that minimizes the bound can also lead to an efficient plug-in estimator [28]. However, CV tends to select the number of terms that optimizes the actual convergence rate [31], which may differ from the number of terms minimizing the derived bound on the convergence rate. Even though CV-tuned sieve estimators have achieved good numerical performance, to the best of our knowledge, there is no theoretical guarantee that they lead to an efficient plug-in estimator.

Two variants of traditional series estimators were proposed in [3]. These methods can use two bases to approximate the unknown function-valued features and the corresponding gradient separately, whereas in traditional series estimators, only one basis is used for both approximations. Consequently, these variants may be applied to more general cases than traditional series estimators. However, like traditional series estimators, they also suffer from the inflexibility of the pre-specified bases.

1.3 Contributions and organization of this article

In this paper we present two approaches that can partially overcome these shortcomings.

  1.

    Estimating the unknown function with Highly Adaptive Lasso (HAL) [1, 29].
    If we are willing to assume the unknown functions have a finite variation norm, then they may be estimated via HAL. If the tuning parameter is chosen carefully, then we may obtain an efficient plug-in estimator. This method can help overcome the stringent smoothness assumptions on derivatives that are required by existing series estimators, as we discussed earlier.

  2.

    Using data-adaptive series based on an initial ML fit.
    As long as the initial ML algorithm converges to the unknown function at a sufficient rate, we show that, for certain types of summaries, it is possible to obtain an efficient plug-in estimator with a particular data-adaptive series. The smoothness assumption on the unknown function can be greatly relaxed due to the introduction of the ML algorithm into the procedure. Moreover, for summaries that are highly smooth, we show that the number of terms in the series can be selected by CV.

Although the first approach is not an example of sieve estimation, both approaches are motivated by the sieve literature and can be shown to lead to asymptotically efficient plug-in estimators using the sieve estimation theory derived in [28]. The flexible fits of the functional features from both approaches can be plugged in for a rich class of estimands.

We remark that, although we do not have to restrict ourselves to the plug-in approach in order to construct an asymptotically efficient estimator, other estimators do not overcome the shortcomings described in Section 1.2 and can have other undesirable properties. For example, the popular one-step correction approach (also called debiasing in the recent literature on high-dimensional statistics) [25] constructs efficient estimators by adding a bias reduction term to the plug-in estimator. Thus, it is not a plug-in estimator itself, and as a consequence, one-step estimators may not respect known constraints on the estimand — for example, bounds on a scalar-valued estimand (e.g., the estimand is a probability and must lie in [0,1]) or shape constraints on a vector-valued estimand (e.g., monotonicity constraints). This drawback is also typical for other non-plug-in estimators, such as those derived via estimating equations [32] and double machine learning [5, 6]. Additionally, as with the other procedures described above, the one-step correction approach requires the analytic expression of an influence function.

Our paper is organized as follows. We introduce the problem setup and notation in Section 2. We consider plug-in estimators based on HAL in Section 3, data-adaptive series in Section 4, and its generalized version that is applicable to more general summaries in Section 5. Section 6 concludes with a discussion. Technical proofs of lemmas and theorems (Appendix D), simulation details (Appendix E) and other additional details are provided in the Appendix.

2 Problem setup and traditional sieve estimation review

Suppose we have independent and identically distributed observations V1,,VnV_{1},\ldots,V_{n} drawn from P0P_{0}. Let Θ\Theta be a class of functions, and denote by θ0Θ\theta_{0}\in\Theta a (possibly vector-valued) functional feature of P0P_{0} — for example, θ0\theta_{0} may be a regression function. Throughout this paper we assume that the generic data unit is V=(X,Z)P0V=(X,Z)\sim P_{0}, where XX is a (possibly vector-valued) random variable corresponding to the argument of θ0\theta_{0}, and ZZ may also be a vector-valued random variable. In some cases V=XV=X and ZZ is trivial. We use 𝒳\mathcal{X} to denote the support of XX. The estimand of interest is a finite-dimensional summary Ψ(θ0)\Psi(\theta_{0}) of θ0\theta_{0}. We consider a plug-in estimator Ψ(θ^n)\Psi(\hat{\theta}_{n}), where θ^n\hat{\theta}_{n} is an estimator of θ0\theta_{0}, and aim for this plug-in estimator to be asymptotically linear, in the sense that Ψ(θ^n)=Ψ(θ0)+n1i=1nIF(Vi)+op(n1/2)\Psi(\hat{\theta}_{n})=\Psi(\theta_{0})+n^{-1}\sum_{i=1}^{n}\text{IF}(V_{i})+o_{p}(n^{-1/2}) with IF an influence function satisfying 𝔼P0[IF(V)]=0\mathbb{E}_{P_{0}}[\text{IF}(V)]=0 and 𝔼P0[IF(V)2]<\mathbb{E}_{P_{0}}[\text{IF}(V)^{2}]<\infty. This estimator is efficient under a nonparametric model if the estimator is also regular. By the central limit theorem and Slutsky’s theorem, it follows that Ψ(θ^n)\Psi(\hat{\theta}_{n}) is a CAN estimator of Ψ(θ0)\Psi(\theta_{0}), and therefore, n[Ψ(θ^n)Ψ(θ0)]𝑑N(0,𝔼P0[IF(V)2])\sqrt{n}[\Psi(\hat{\theta}_{n})-\Psi(\theta_{0})]\overset{d}{\rightarrow}N(0,\mathbb{E}_{P_{0}}[\text{IF}(V)^{2}]). This provides a basis for constructing valid confidence intervals for Ψ(θ0)\Psi(\theta_{0}).
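
For concreteness, the confidence interval construction alluded to above can be sketched as follows, where \widehat{\mathrm{IF}} denotes (purely for illustration) an estimate of IF obtained by plugging estimates of its unknown components into its analytic expression, and z_{1-\alpha/2} is the standard normal quantile:

\hat{\sigma}_{n}^{2}:=\frac{1}{n}\sum_{i=1}^{n}\widehat{\mathrm{IF}}(V_{i})^{2},\qquad\Psi(\hat{\theta}_{n})\pm z_{1-\alpha/2}\,\hat{\sigma}_{n}/\sqrt{n}.

An interval of this form, based on the influence function, is what we refer to as a 95% Wald CI in the simulation study of Section 3.3.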

We now list some examples of such problems.

Example 1.

Moments of the conditional mean function [28]: Let θ0:x𝔼P0[Z|X=x]\theta_{0}:x\mapsto\mathbb{E}_{P_{0}}[Z|X=x] be the conditional mean function. The κ\kappa-th moment of θ0(X)\theta_{0}(X), XP0X\sim P_{0}, namely Ψκ(θ0)=𝔼P0[θ0κ(X)]\Psi_{\kappa}(\theta_{0})=\mathbb{E}_{P_{0}}[\theta_{0}^{\kappa}(X)], can be a summary of interest. The values of Ψ1(θ0)\Psi_{1}(\theta_{0}) and Ψ2(θ0)\Psi_{2}(\theta_{0}) are useful for defining the proportion of VarP0(Z)\textnormal{Var}_{P_{0}}(Z) that is explained by XX, which may be written as VarP0(θ0(X))/VarP0(Z)\textnormal{Var}_{P_{0}}(\theta_{0}(X))/\textnormal{Var}_{P_{0}}(Z). This proportion is a measure of variable importance [35]. Generally, we may consider Ψ(θ0)=𝔼P0[f(θ0(X))]\Psi(\theta_{0})=\mathbb{E}_{P_{0}}[f(\theta_{0}(X))] for a fixed function ff.

Example 2.

Average derivative [13]: Let XX follow a continuous distribution on d\mathbb{R}^{d} and θ0:x𝔼P0[Z|X=x]\theta_{0}:x\mapsto\mathbb{E}_{P_{0}}[Z|X=x] be the conditional mean function. Let θ0\theta_{0}^{\prime} denote the vector of partial derivatives of θ0\theta_{0}. Then Ψ(θ0)=𝔼P0[θ0(X)]\Psi(\theta_{0})=\mathbb{E}_{P_{0}}[\theta_{0}^{\prime}(X)] summarizes the overall (adjusted) effect of each component of XX on ZZ. Under certain conditions, we can rewrite Ψ(θ0)=𝔼P0[θ0(X)p0(X)/p0(X)]\Psi(\theta_{0})=\mathbb{E}_{P_{0}}[\theta_{0}(X)p_{0}^{\prime}(X)/p_{0}(X)], where p0p_{0} is the Lebesgue density of XX and p0p_{0}^{\prime} is the vector of partial derivatives of p0p_{0}. This expression clearly shows the important role of the Lebesgue density of XX in this summary.

Example 3.

Mean counterfactual outcome [26]: Suppose that Z=(A,Y)Z=(A,Y) where AA is a binary treatment indicator and YY is the outcome of interest. Let θ0:x𝔼P0[Y|A=1,X=x]\theta_{0}:x\mapsto\mathbb{E}_{P_{0}}[Y|A=1,X=x] be the outcome regression function under treatment value 1. Under causal assumptions, the mean counterfactual outcome corresponding to the intervention that assigns treatment 1 to the entire population can be nonparametrically identified by the G-computation formula Ψ(θ0)=𝔼P0[θ0(X)]\Psi(\theta_{0})=\mathbb{E}_{P_{0}}[\theta_{0}(X)].

Example 4.

Treatment effect heterogeneity measures [14]: Similarly to Example 3, suppose that AA is a binary treatment indicator and ZZ is the outcome of interest. Let θ0=(μ00,μ01)\theta_{0}=(\mu_{00},\mu_{01})^{\top}, where μ0a:x𝔼P0[Z|A=a,X=x]\mu_{0a}:x\mapsto\mathbb{E}_{P_{0}}[Z|A=a,X=x] is the outcome regression function for treatment arm a{0,1}a\in\{0,1\}. Then, Ψ(θ0)=VarP0(μ01(X)μ00(X))\Psi(\theta_{0})=\textnormal{Var}_{P_{0}}(\mu_{01}(X)-\mu_{00}(X)) is an overall summary of treatment effect heterogeneity.

To obtain an asymptotically linear plug-in estimator, θ^n\hat{\theta}_{n} must converge to θ0\theta_{0} at a sufficiently fast rate and approximately solve an estimating equation to achieve the small bias property with respect to the summary of interest [23, 29, 33]. For simplicity, we assume the estimand to be scalar-valued — when the estimand is vector-valued, we can treat each entry as a separate estimand, and the plug-in estimators of all entries are jointly asymptotically linear if each estimator is asymptotically linear. Therefore, this leads to no loss in generality if the same fits are used for all entries in the summary of interest.

Sieve estimation allows us to obtain an estimator Ψ(θ^n)\Psi(\hat{\theta}_{n}) with the small bias property with respect to Ψ(θ0)\Psi(\theta_{0}) while maintaining the optimal convergence rate of θ^n\hat{\theta}_{n} [4, 28]. The construction of sieve estimators is based on a sequence of approximating spaces Θn\Theta_{n} to Θ\Theta. These approximating spaces are referred to as sieves. Usually Θn\Theta_{n} is much simpler than Θ\Theta to avoid over-fitting but complex enough to avoid under-fitting. For example, Θn\Theta_{n} can be the space of all polynomials with degree KK or splines with KK knots with K=K(n)K=K(n)\rightarrow\infty as nn\rightarrow\infty. In this paper, with a loss function \ell such that θ0argminθΘ𝔼P0[(θ)(V)]\theta_{0}\in\operatorname*{argmin}_{\theta\in\Theta}\mathbb{E}_{P_{0}}[\ell(\theta)(V)], we consider estimating θ0\theta_{0} by minimizing an empirical risk based on \ell, i.e., θ^nargminθΘnn1i=1n(θ)(Vi)\hat{\theta}_{n}\in\operatorname*{argmin}_{\theta\in\Theta_{n}}n^{-1}\sum_{i=1}^{n}\ell(\theta)(V_{i}). Under some conditions, the growth rate of Θn\Theta_{n} can be carefully chosen so that Ψ(θ^n)\Psi(\hat{\theta}_{n}) is an asymptotically linear estimator of Ψ(θ0)\Psi(\theta_{0}) while θ^n\hat{\theta}_{n} converges to θ0\theta_{0} at the optimal rate.
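
As a concrete illustration of this recipe, the following sketch (in Python, with all choices — namely the squared-error loss, a polynomial sieve, and a growth rate for K(n) of order n^{1/4} — made purely for illustration and not prescribed by our theory) fits the empirical risk minimizer over a polynomial sieve and plugs it into the summary of Example 1 with f(t)=t^2.

```python
# Illustrative sketch of traditional sieve (series) estimation under the
# squared-error loss.  The sieve Theta_n is the span of polynomial basis
# functions of X up to a degree K(n) that grows with n; the empirical risk
# minimizer is an ordinary least-squares fit in that span.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def sieve_plug_in(X, z, f=lambda t: t ** 2):
    """Plug-in estimate of P_0 f(theta_0) from a polynomial-sieve fit of theta_0."""
    n = X.shape[0]
    degree = max(1, int(n ** 0.25))  # K = K(n); the rate is illustrative only
    basis = PolynomialFeatures(degree=degree).fit_transform(X)
    theta_hat = LinearRegression(fit_intercept=False).fit(basis, z).predict(basis)
    return np.mean(f(theta_hat))     # plug the sieve fit into the summary

# toy usage
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 2))
z = np.exp(X[:, 0]) + X[:, 1] + rng.normal(scale=0.5, size=1000)
print(sieve_plug_in(X, z))
```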

Throughout this paper, for a probability distribution PP and an integrable function ff with respect to PP, we define Pf:=f(v)𝑑P(v)=𝔼P[f(V)]Pf:=\int f(v)dP(v)=\mathbb{E}_{P}[f(V)]. We use PnP_{n} to denote the empirical distribution. We take ,\langle\cdot,\cdot\rangle to be the L2(P0)L^{2}(P_{0})-inner product, i.e., θ1,θ2=P0(θ1θ2)\langle\theta_{1},\theta_{2}\rangle=P_{0}(\theta_{1}\theta_{2}), where L2(P0)L^{2}(P_{0}) is the set of real-valued P0P_{0}-squared-integrable functions defined on the support of P0P_{0}. When the functions are vector-valued, we take θ1,θ2=P0(θ1θ2)\langle\theta_{1},\theta_{2}\rangle=P_{0}(\theta_{1}^{\top}\theta_{2}). We use \|\cdot\| to denote the induced norm of ,\langle\cdot,\cdot\rangle. We assume that ΘL2(P0)\Theta\subseteq L^{2}(P_{0}). We remark that we have committed to a specific choice of inner product and norm to fix ideas; other inner products can also be adopted, and our results will remain valid upon adaptation of our upcoming conditions. We discuss this explicitly via a case study in Appendix A.

For the methods we propose in this article, we assume that Θ\Theta is convex. Throughout this paper, we will further require a set of conditions similar to those in [28]. For any θΘ\theta\in\Theta, let 0[θθ0](v):=limδ0[(θ0+δ(θθ0))(v)(θ0)(v)]/δ\ell_{0}^{\prime}[\theta-\theta_{0}](v):=\lim_{\delta\rightarrow 0}[\ell(\theta_{0}+\delta(\theta-\theta_{0}))(v)-\ell(\theta_{0})(v)]/\delta be the Gâteaux derivative of \ell at θ0\theta_{0} in the direction θθ0\theta-\theta_{0} and r[θθ0](v):=(θ)(v)(θ0)(v)0[θθ0](v)r[\theta-\theta_{0}](v):=\ell(\theta)(v)-\ell(\theta_{0})(v)-\ell_{0}^{\prime}[\theta-\theta_{0}](v) be the corresponding remainder.

Condition A1 (Linearity and boundedness of Gâteaux derivative operator of loss function).

For all θΘ\theta\in\Theta, 0[θθ0]\ell_{0}^{\prime}[\theta-\theta_{0}] exists and 0[θθ0](v)P00[θθ0]\ell_{0}^{\prime}[\theta-\theta_{0}](v)-P_{0}\ell_{0}^{\prime}[\theta-\theta_{0}] is linear and bounded in θθ0\theta-\theta_{0}.

Condition A2 (Local quadratic behavior of loss function).

There exists a constant α0,(0,)\alpha_{0,\ell}\in(0,\infty) such that, for all θΘ\theta\in\Theta such that P0{(θ)(θ0)}P_{0}\{\ell(\theta)-\ell(\theta_{0})\} or θθ0\|\theta-\theta_{0}\| is sufficiently small, it holds that P0{(θ)(θ0)}=α0,θθ02/2+o(θθ02)P_{0}\{\ell(\theta)-\ell(\theta_{0})\}=\alpha_{0,\ell}\|\theta-\theta_{0}\|^{2}/2+o(\|\theta-\theta_{0}\|^{2}).

Remark 1.

We now present an equivalent form of A2 that may be easier to verify in practice. For all θΘ\{θ0}\theta\in\Theta\backslash\{\theta_{0}\}, define hθ:=(θθ0)/θθ0h_{\theta}:=(\theta-\theta_{0})/\|\theta-\theta_{0}\| and aθ:=d2dδ2P0(θ0+δhθ)|δ=0a_{\theta}:=\frac{d^{2}}{d\delta^{2}}P_{0}\ell(\theta_{0}+\delta h_{\theta})|_{\delta=0}. Requiring Condition A2 is equivalent to requiring that aθ1=aθ2a_{\theta_{1}}=a_{\theta_{2}} for all θ1,θ2Θ\{θ0}\theta_{1},\theta_{2}\in\Theta\backslash\{\theta_{0}\} and that

\sup_{\theta\in\Theta}\left|P_{0}\ell(\theta_{0}+\delta h_{\theta})-P_{0}\ell(\theta_{0})-\frac{a_{\theta}}{2}\delta^{2}\right|=o(\delta^{2}).

Moreover, if A2 holds, then, for any θΘ\{θ0}\theta\in\Theta\backslash\{\theta_{0}\}, it is true that α0,=aθ\alpha_{0,\ell}=a_{\theta}.

A large class of loss functions satisfy Conditions A1 and A2. For example, in the regression setting where ZZ is the outcome, the squared-error loss (θ):v[zθ(x)]2\ell(\theta):v\mapsto[z-\theta(x)]^{2} and the logistic loss (θ):vzθ(x)+log{1+exp(θ(x))}\ell(\theta):v\mapsto-z\theta(x)+\log\{1+\exp(\theta(x))\} both satisfy these conditions; a negative working log-likelihood usually also satisfies these conditions. In Examples 1–4, the unknown functions are all conditional mean functions, which can be estimated with the above loss functions. Thus, Conditions A1 and A2 hold. Examples 3 and 4 require a slight modification discussed in more detail in Appendix A. We also note that Condition A2 is sufficient for Condition B in [28].
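
To illustrate how Conditions A1 and A2 can be verified, consider the squared-error loss in the regression setting, where 𝔼_{P_0}[Z|X]=θ_0(X). A direct calculation gives

\ell_{0}^{\prime}[\theta-\theta_{0}](v)=-2[z-\theta_{0}(x)][\theta(x)-\theta_{0}(x)]\qquad\text{and}\qquad P_{0}\{\ell(\theta)-\ell(\theta_{0})\}=P_{0}[(\theta-\theta_{0})^{2}]=\|\theta-\theta_{0}\|^{2},

so the Gâteaux derivative is linear in θ−θ_0 as required, and Condition A2 holds exactly (with no remainder term) with α_{0,ℓ}=2.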

Condition A3 (Differentiability of summary of interest).

Ψθ0[θθ0]:=limδ0[Ψ(θ0+δ(θθ0))Ψ(θ0)]/δ\Psi_{\theta_{0}}^{\prime}[\theta-\theta_{0}]:=\lim_{\delta\rightarrow 0}[\Psi(\theta_{0}+\delta(\theta-\theta_{0}))-\Psi(\theta_{0})]/\delta exists for all θΘ\theta\in\Theta and is a linear bounded operator.

If Condition A3 holds, then, by the Riesz representation theorem, Ψθ0[θθ0]=θθ0,Ψ˙\Psi_{\theta_{0}}^{\prime}[\theta-\theta_{0}]=\langle\theta-\theta_{0},\dot{\Psi}\rangle for a gradient function Ψ˙=Ψ˙θ0\dot{\Psi}=\dot{\Psi}_{\theta_{0}} in the completion of the space spanned by Θθ0:={xθ(x)θ0(x):θΘ}\Theta-\theta_{0}:=\{x\mapsto\theta(x)-\theta_{0}(x):\theta\in\Theta\}.

Condition A4 (Locally quadratic remainder).

There exists a constant C>0C>0 so that, for all θ\theta with sufficiently small θθ0\|\theta-\theta_{0}\|, it holds that

|Ψ(θ)Ψ(θ0)Ψθ0[θθ0]|Cθθ02.|\Psi(\theta)-\Psi(\theta_{0})-\Psi_{\theta_{0}}^{\prime}[\theta-\theta_{0}]|\leq C\|\theta-\theta_{0}\|^{2}.

The above condition states that the remainder of the linear approximation to Ψ\Psi is locally bounded by a quadratic function.

Conditions A3 and A4 hold for Examples 1–4. For the generalized moment of the conditional mean function in Example 1, it holds that Ψ˙=fθ0\dot{\Psi}=f^{\prime}\circ\theta_{0}. For the average derivative of the conditional mean function in Example 2, it holds that Ψ˙=p0/p0\dot{\Psi}=p_{0}^{\prime}/p_{0}. For the average treatment effect and the treatment effect heterogeneity measure in Examples 3 and 4, as we show in Appendix A, Ψ˙\dot{\Psi} also exists and depends on the propensity score function xP0(A=1X=x)x\mapsto P_{0}(A=1\mid X=x).

3 Estimation with Highly Adaptive Lasso

3.1 Brief review of Highly Adaptive Lasso

Recently, the Highly Adaptive Lasso (HAL) was proposed as a flexible ML algorithm that only requires a mild smoothness condition on the unknown function and has a well-described implementation [1, 29]. In this subsection, we briefly review HAL. We first heuristically introduce its definition and desirable properties, and then introduce the definition and implementation more formally. For ease of presentation, for the moment, we assume that θ0\theta_{0} is real-valued.

In HAL, θ0\theta_{0} is assumed to fall in the class of càdlàg functions (right-continuous with left limits) defined on 𝒳d\mathcal{X}\subseteq\mathbb{R}^{d} with variation norm bounded by a finite constant MM. In this section, we denote this function class by Θv,M\Theta_{{\mathrm{v}},M}. The variation norm of a càdlàg function θ\theta, denoted by θv\|\theta\|_{{\mathrm{v}}}, characterizes the total variability of θ\theta as its argument ranges over the domain, so v\|\cdot\|_{{\mathrm{v}}} is a global smoothness measure and Θv,M\Theta_{{\mathrm{v}},M} is a large function class that even contains functions with discontinuities. Fig. 1 presents some examples of univariate càdlàg functions with finite variation norms for illustration. Because Θv,M\Theta_{{\mathrm{v}},M} is a rich class, it can be plausible that θ0Θv,M\theta_{0}\in\Theta_{{\mathrm{v}},M} for some M<M<\infty. The HAL estimator of θ0\theta_{0} is then θ^n=θ^n,MargminθΘv,Mn1i=1n(θ)(Vi)\hat{\theta}_{n}=\hat{\theta}_{n,M}\in\operatorname*{argmin}_{\theta\in\Theta_{{\mathrm{v}},M}}n^{-1}\sum_{i=1}^{n}\ell(\theta)(V_{i}). Under this assumption, it has been shown that θ^nθ0=op(n1/4)\|\hat{\theta}_{n}-\theta_{0}\|=o_{p}(n^{-1/4}) regardless of the dimension of XX under additional mild conditions [29]. Thus, estimation with HAL replaces the usual smoothness requirement on derivatives of traditional series estimators by a requirement on global smoothness, namely θ0Θv,M\theta_{0}\in\Theta_{{\mathrm{v}},M} for some MM.

Figure 1: Examples of univariate càdlàg functions with finite variation norms. The top-left, top-right, bottom-left and bottom-right plots present the standard normal density function, a minimax concave penalty function [37], a step function and the real part of a Morlet wavelet [16] respectively.

We next formally present the definition of variation norm of a càdlàg function θ:[x(),x(u)]d\theta:[x^{(\ell)},x^{(u)}]\subseteq\mathbb{R}^{d}\rightarrow\mathbb{R}. Here, x()x^{(\ell)} and x(u)x^{(u)} are vectors in d\mathbb{R}^{d}; with \leq being entrywise, [x(),x(u)]:={xd:x()xx(u)}[x^{(\ell)},x^{(u)}]:=\{x\in\mathbb{R}^{d}:x^{(\ell)}\leq x\leq x^{(u)}\}.

For any nonempty index set s{1,2,,d}s\subseteq\{1,2,\ldots,d\} and any x=(x1,x2,,xd)[x(),x(u)]x=(x_{1},x_{2},\ldots,x_{d})\in[x^{(\ell)},x^{(u)}], we define xs:={xj:js}x_{s}:=\{x_{j}:j\in s\} and xs:={xj:j{1,2,,d}s}x_{-s}:=\{x_{j}:j\in\{1,2,\ldots,d\}\setminus s\} to be entries of xx with indices in and not in ss respectively. We define the ss-section of θ\theta as θs:=θ(x1𝟙(1s),x2𝟙(2s),,xd𝟙(ds))\theta_{s}:=\theta(x_{1}\mathbbm{1}(1\in s),x_{2}\mathbbm{1}(2\in s),\ldots,x_{d}\mathbbm{1}(d\in s)). We can subsequently obtain the following representation of θ\theta at any x[x(),x(u)]x\in[x^{(\ell)},x^{(u)}] in terms of sums and integrals of the variation of ss-sections of θ\theta [10]:

θ(x)=θ(x())+s{1,,d},s(x(),x]θs(dx~).\theta(x)=\theta(x^{(\ell)})+\sum_{s\subseteq\{1,\ldots,d\},s\neq\emptyset}\int_{(x^{(\ell)},x]}\theta_{s}(d\tilde{x}).

The variation norm is then subsequently defined as

θv:=|θ(x())|+s{1,,d},s(x(),x(u)]|θs(dx~)|.\|\theta\|_{\mathrm{v}}:=|\theta(x^{(\ell)})|+\sum_{s\subseteq\{1,\ldots,d\},s\neq\emptyset}\int_{(x^{(\ell)},x^{(u)}]}|\theta_{s}(d\tilde{x})|.

We refer to [1] and [29] for more details on variation norm. Notably, this notion of variation norm coincides with that of Hardy and Krause [24].
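
As a simple illustration of this definition, when d=1 the only nonempty index set is s={1}, the s-section is θ itself, and the variation norm reduces to

\|\theta\|_{\mathrm{v}}=|\theta(x^{(\ell)})|+\int_{(x^{(\ell)},x^{(u)}]}|\theta(d\tilde{x})|,

namely |θ(x^{(ℓ)})| plus the total variation of θ over (x^{(ℓ)},x^{(u)}]; for a nondecreasing θ, this equals |θ(x^{(ℓ)})|+θ(x^{(u)})−θ(x^{(ℓ)}).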

We finally briefly introduce the algorithm to compute a HAL estimator. It can be shown that an empirical risk minimizer in Θv,M\Theta_{{\mathrm{v}},M} is a step function that only jumps at sample points, namely

xβ0+s{1,,d},sj=1n𝟙(Xj,sxs)βs,j.x\mapsto\beta_{0}+\sum_{s\subseteq\{1,\ldots,d\},s\neq\emptyset}\sum_{j=1}^{n}\mathbbm{1}(X_{j,s}\leq x_{s})\beta_{s,j}.

Here, β0\beta_{0} and all βs,j\beta_{s,j} are real numbers. To find an empirical risk minimizer in Θv,M\Theta_{{\mathrm{v}},M} in the above form, we may solve the following optimization problem:

minθi=1n(θ)(Vi)\displaystyle\min_{\theta}\sum_{i=1}^{n}\ell(\theta)(V_{i})
subject to θ:xβ0+s{1,,d},sj=1n𝟙(Xj,sxs)βs,j\displaystyle\theta:x\mapsto\beta_{0}+\sum_{s\subseteq\{1,\ldots,d\},s\neq\emptyset}\sum_{j=1}^{n}\mathbbm{1}(X_{j,s}\leq x_{s})\beta_{s,j}
|β0|+s{1,,d},sj=1n|βs,j|M.\displaystyle|\beta_{0}|+\sum_{s\subseteq\{1,\ldots,d\},s\neq\emptyset}\sum_{j=1}^{n}|\beta_{s,j}|\leq M.

The constraint imposes an upper bound on the 1\ell_{1} norm of the coefficient vector. Therefore, for common loss functions, we may use software for LASSO regression (Tibshirani, 1996). For example, if the loss function is the squared-error loss, then we may run a LASSO linear regression to obtain a HAL estimate.
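
The following Python sketch illustrates this construction for the squared-error loss. The helper names are ours, the subset enumeration assumes a small dimension d, and the variation-norm-constrained problem is solved in its Lagrangian (penalized) form, which corresponds to some bound M for a suitable penalty level, using an off-the-shelf LASSO solver.

```python
# Hedged sketch (not the authors' implementation) of HAL under the
# squared-error loss.  The indicator basis matches the display above; the
# bound M is replaced by an L1 penalty `lam`.
import itertools
import numpy as np
from sklearn.linear_model import Lasso

def hal_design(X_train, X_eval):
    """Columns 1(X_{j,s} <= x_s) for every nonempty subset s and sample point j."""
    n, d = X_train.shape
    blocks = []
    for r in range(1, d + 1):
        for s in itertools.combinations(range(d), r):
            idx = list(s)
            # entry (i, j) is 1 if X_eval[i, k] >= X_train[j, k] for all k in s
            blocks.append(np.all(X_eval[:, None, idx] >= X_train[None, :, idx], axis=2))
    return np.concatenate(blocks, axis=1).astype(float)

def fit_hal(X, z, lam):
    H = hal_design(X, X)
    fit = Lasso(alpha=lam, max_iter=100000).fit(H, z)
    return lambda X_new: fit.predict(hal_design(X, X_new))

# toy usage: plug-in estimate of P_0 theta_0^2 (Example 1 with f(t) = t^2)
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
z = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.2, size=200)
theta_hat = fit_hal(X, z, lam=0.01)
print(np.mean(theta_hat(X) ** 2))
```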

3.2 Estimation with an oracle tuning parameter

In this section, we consider plug-in estimators based on HAL. For ease of illustration, for the rest of this section, we consider scalar-valued Ψ\Psi, and will discuss vector-valued Ψ\Psi only at the end of this subsection. We further introduce the following conditions needed to establish that the HAL-based plug-in estimator is efficient.

Condition B1 (Càdlàg functions).

θ0\theta_{0} and Ψ˙\dot{\Psi} are càdlàg.

Condition B2 (Bound on variation norm).

For some M<M<\infty, θ0v+Ψ˙vM\|\theta_{0}\|_{{\mathrm{v}}}+\|\dot{\Psi}\|_{{\mathrm{v}}}\leq M.

Condition B2 ensures that certain perturbations of θ0\theta_{0} still lie in Θv,M\Theta_{{\mathrm{v}},M}, a crucial requirement for proving the asymptotic linearity of our proposed plug-in estimator. In addition, since Ψ˙\dot{\Psi} may depend on components of P0P_{0} other than θ0\theta_{0} as in Examples 2–4, Conditions B1–B2 may also impose conditions on these components.

In this section, we fix an MM that satisfies Condition B2. Additional technical conditions can be found in Appendix B.1. Let θ^n=θ^n,MargminθΘv,Mn1i=1n(θ)(Vi)\hat{\theta}_{n}=\hat{\theta}_{n,M}\in\operatorname*{argmin}_{\theta\in\Theta_{{\mathrm{v}},M}}n^{-1}\sum_{i=1}^{n}\ell(\theta)(V_{i}) denote the HAL fit obtained using the bound MM in Condition B2.

We note that θ^n\hat{\theta}_{n} is not a typical sieve estimator because MM is fixed and there is no explicit sequence of growing approximating spaces Θn\Theta_{n}. Nevertheless, we may view this method as a special case of sieve estimation with degenerate sieves Θn=Θv,M\Theta_{n}=\Theta_{{\mathrm{v}},M} for all nn. This allows us to utilize existing results [28] to show the asymptotic linearity and efficiency of the plug-in estimator based on θ^n\hat{\theta}_{n}. We next formally present this result.

Theorem 1 (Efficiency of plug-in estimator).

Under Conditions A1–A4 and B1–B4, Ψ(θ^n)\Psi(\hat{\theta}_{n}) is an asymptotically linear estimator of Ψ(θ0)\Psi(\theta_{0}) with the influence function being vα0,1{0[Ψ˙](v)+𝔼P0[0[Ψ˙](V)]}v\mapsto\alpha_{0,\ell}^{-1}\{-\ell_{0}^{\prime}[\dot{\Psi}](v)+\mathbb{E}_{P_{0}}[\ell_{0}^{\prime}[\dot{\Psi}](V)]\}, that is,

Ψ(θ^n)=Ψ(θ0)+1ni=1nα0,1{0[Ψ˙](Vi)+𝔼P0[0[Ψ˙](V)]}+op(n1/2).\Psi(\hat{\theta}_{n})=\Psi(\theta_{0})+\frac{1}{n}\sum_{i=1}^{n}\alpha_{0,\ell}^{-1}\left\{-\ell_{0}^{\prime}[\dot{\Psi}](V_{i})+\mathbb{E}_{P_{0}}\left[\ell_{0}^{\prime}[\dot{\Psi}](V)\right]\right\}+o_{p}(n^{-1/2}).

As a consequence, n[Ψ(θ^n)Ψ(θ0)]𝑑N(0,ξ2)\sqrt{n}[\Psi(\hat{\theta}_{n})-\Psi(\theta_{0})]\overset{d}{\rightarrow}\text{N}(0,\xi^{2}) with ξ2:=VarP0(0[Ψ˙](V))/α0,2\xi^{2}:=\textnormal{Var}_{P_{0}}(\ell_{0}^{\prime}[\dot{\Psi}](V))/\alpha_{0,\ell}^{2}. In addition, under Conditions E1 and E2 in Appendix B.4, Ψ(θ^n)\Psi(\hat{\theta}_{n}) is efficient under a nonparametric model.

We note that, for HAL to achieve the optimal convergence rate, we only need that Mθ0vM\geq\|\theta_{0}\|_{{\mathrm{v}}} [1, 29]. The requirement of a larger MM imposed by Condition B2 resembles undersmoothing [21], as using a larger MM would result in a fit that is less smooth than that based on the CV-selected bound. The L2(P0)L^{2}(P_{0})-convergence rate of the flexible fit using the larger bound remains the same, but the leading constant may be larger. This is in contrast to traditional undersmoothing, which leads to a fit with a suboptimal rate of convergence.

Under some conditions, the following lemma provides a loose bound on Ψ˙v\|\dot{\Psi}\|_{{\mathrm{v}}} in the case that Ψ˙\dot{\Psi} has a particular structure. Such a bound can be used to select an appropriate bound on variation norm that satisfies Condition B2.

Lemma 1.

Suppose that Ψ˙=ψ˙θ0\dot{\Psi}=\dot{\psi}\circ\theta_{0}, where ψ˙:\dot{\psi}:\mathbb{R}\rightarrow\mathbb{R} is differentiable. Let x()=sup{x:P0(Xx)=1}x^{(\ell)}=\sup\{x:P_{0}(X\geq x)=1\} where sup\sup and \geq are entrywise. Assume that θ0\theta_{0} is differentiable. If each of θ0v\|\theta_{0}\|_{{\mathrm{v}}}, |Ψ˙(x())||\dot{\Psi}(x^{(\ell)})| and B:=supz:|z|θ0v|ψ˙(z)|B:=\sup_{z:|z|\leq\|\theta_{0}\|_{{\mathrm{v}}}}|\dot{\psi}^{\prime}(z)| is finite, then Ψ˙vBθ0v+|Ψ˙(x())|\|\dot{\Psi}\|_{{\mathrm{v}}}\leq B\|\theta_{0}\|_{{\mathrm{v}}}+|\dot{\Psi}(x^{(\ell)})|. Hence, θ0v+Ψ˙v(B+1)θ0v+|Ψ˙(x())|<\|\theta_{0}\|_{{\mathrm{v}}}+\|\dot{\Psi}\|_{{\mathrm{v}}}\leq(B+1)\|\theta_{0}\|_{{\mathrm{v}}}+|\dot{\Psi}(x^{(\ell)})|<\infty.

As we discussed at the end of Section 2, such structures as Ψ˙=ψ˙θ0\dot{\Psi}=\dot{\psi}\circ\theta_{0} are common, especially if we augment θ0\theta_{0} to include other implicitly relevant components of P0P_{0}. For example, in Example 2, we may augment θ0\theta_{0} with p0p_{0} and p0p_{0}^{\prime}; in Examples 3 and 4, we may augment θ0\theta_{0} with the propensity score function.

When θ0\theta_{0} is q\mathbb{R}^{q}-valued, θ0\theta_{0} can often be viewed as a collection of qq real-valued variation-independent functions η10,,ηq0\eta_{10},\ldots,\eta_{q0}. In this case, we can define Θv,M={(η1,,ηq):ηj is càdlàg,ηjvMj,j=1,,q}\Theta_{{\mathrm{v}},M}=\{(\eta_{1},\ldots,\eta_{q}):\eta_{j}\text{ is c\`{a}dl\`{a}g},\|\eta_{j}\|_{{\mathrm{v}}}\leq M_{j},j=1,\ldots,q\} for a positive vector M=(M1,,Mq)M=(M_{1},\ldots,M_{q}). The subsequent arguments follow analogously, where now each ηj\eta_{j} is treated as a separate function.

We remark that an undersmoothing condition such as B2 appears to be necessary for a HAL-based plug-in estimator to be efficient. We illustrate this numerically in Section 3.3. The choice of a sufficiently large bound MM required by Theorem 1 is by no means trivial, since this choice requires knowledge that the user may not have. Nevertheless, this result forms the basis of the data-driven method that we propose in Section 3.3 for choosing MM. We also remark that, if we wish to plug in the same θ^n\hat{\theta}_{n} based on HAL for a rich class of estimands, the chosen bound MM needs to be sufficiently large for all estimands of interest.

Another method to construct efficient plug-in estimators based on HAL has been independently developed [van der Laan, 2019]. Unlike our approach based on sieve theory, in this work, the authors directly analyzed the first-order bias of the plug-in estimator using influence functions. In terms of ease of implementation, their method requires specifying a constant involved in a threshold of the empirical mean of the basis functions, which may be difficult to specify in applications. Our approach in Section 3.3 may also require specifying an unknown constant to obtain a valid upper bound on Ψ˙v\|\dot{\Psi}\|_{{\mathrm{v}}}, but in some cases the constant may be set to zero, and our simulation suggests that the performance is not sensitive to the choice of the constant.

3.3 Data-adaptive selection of the tuning parameter

Since it is hard to prespecify a bound MM on the variation norm that is sufficiently large to satisfy Condition B2 but also sufficiently small to avoid overfitting for a given data set, it is desirable to select MM in a data-adaptive manner. A seemingly natural approach makes use of kk-fold CV. In particular, for each candidate bound MM, partition the data into kk folds of approximately equal size (kk is fixed and does not depend on nn), in each fold evaluate the performance of the HAL estimator fitted on all other folds based on this candidate MM, and use the candidate bound MnM_{n} with the best average performance across all folds to obtain the final fit. It has been shown that θ^n,Mn\hat{\theta}_{n,M_{n}} can achieve the optimal convergence rate under mild conditions [31], but MnM_{n} appears not to satisfy Condition B2 in general. In particular, the derived bound on θ^nθ0\|\hat{\theta}_{n}-\theta_{0}\| relies on an empirical process term, namely supθΘv,M|(PnP0){(θ)(θ0)}|\sup_{\theta\in\Theta_{{\mathrm{v}},M}}|(P_{n}-P_{0})\{\ell(\theta)-\ell(\theta_{0})\}|, and a larger MM implies a larger space Θv,M\Theta_{{\mathrm{v}},M}. Therefore, the bound on θ^nθ0\|\hat{\theta}_{n}-\theta_{0}\| grows with MM. Because kk-fold CV seeks to optimize out-of-sample performance, MnM_{n} generally appears to be close to θ0v\|\theta_{0}\|_{{\mathrm{v}}} and not sufficiently large to obtain an efficient plug-in estimator.

To avoid this issue with the CV-selected bound, we propose a method that takes inspiration from kk-fold CV, but modifies the bound so that it is guaranteed to yield an efficient plug-in estimator for Ψ(θ0)\Psi(\theta_{0}). This method may require the analytic expression for Ψ˙\dot{\Psi}. In Sections 4 and 5, we present methods that do not require this knowledge.

  1.

    Derive an upper bound on Ψ˙v\|\dot{\Psi}\|_{{\mathrm{v}}}. This bound is a non-decreasing function of the variation norms of functions that can be learned from data (e.g., using Lemma 1). In other words, find a non-decreasing function FF such that Ψ˙vF(η10v,,ηq0v)\|\dot{\Psi}\|_{{\mathrm{v}}}\leq F(\|\eta_{10}\|_{{\mathrm{v}}},\ldots,\|\eta_{q0}\|_{{\mathrm{v}}}) for unknown functions η10,,ηq0\eta_{10},\ldots,\eta_{q0} that can be assumed to be càdlàg with finite variation norm and can be estimated with HAL.

  2.

    Estimate θ0,η10,,ηq0\theta_{0},\eta_{10},\ldots,\eta_{q0} by HAL with kk-fold CV, and denote the CV-selected bounds for these functions by Mn,M1n,,MqnM_{n},M_{1n},\ldots,M_{qn}.

  3.

    For a small ϵ>0\epsilon>0, use the bound Mn+ϵ+F(M1n+ϵ,,Mqn+ϵ)M_{n}+\epsilon+F(M_{1n}+\epsilon,\ldots,M_{qn}+\epsilon) to estimate θ0\theta_{0} with HAL and plug in the fit. We refer to this step of slightly increasing the bounds as ϵ\epsilon-relaxation.

It follows from Lemma 2 in the Appendix that this method would yield a sufficiently large bound with probability tending to one. In practice, it is desirable for the bound derived on Ψ˙v\|\dot{\Psi}\|_{{\mathrm{v}}} to be relatively tight to avoid choosing an overly large bound that leads to overfitting in small to moderate samples. We remark that multiplying by 1+ϵ1+\epsilon rather than adding ϵ\epsilon to each argument also leads to a valid choice for the bound; that is, the bound Mn(1+ϵ)+F(M1n(1+ϵ),,Mqn(1+ϵ))M_{n}(1+\epsilon)+F(M_{1n}(1+\epsilon),\ldots,M_{qn}(1+\epsilon)) is also sufficiently large with probability tending to one. In practice, the user may increase each CV-selected bound by, for example, 5% or 10%. Although it is more natural and convenient to directly use Mn+F(M1n,,Mqn)M_{n}+F(M_{1n},\ldots,M_{qn}) as the bound, we have only been able to prove the result with a small ϵ\epsilon-relaxation. However, if the bound is loose and FF is continuous, we can show that ϵ\epsilon-relaxation is unnecessary. The formal argument can be found after Lemma 2 in the Appendix.
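
A possible implementation of these three steps is sketched below for Example 1 with Ψ(θ_0)=P_0θ_0^2, for which Lemma 1 and the discussion in Section 3.3 suggest a target bound of roughly 3×M.cv (up to the ε-relaxation). The sketch works in the penalized LASSO parameterization, takes the ℓ1 norm of the fitted coefficients as the variation norm of a fit, and uses the hypothetical hal_design helper from the sketch in Section 3.1; all of these choices are illustrative rather than prescriptive.

```python
# Hedged sketch of the data-adaptive bound selection of Section 3.3 for
# Psi(theta) = P_0 theta^2, where the derived bound is F(M) = 2M and hence
# the target is (1 + eps) * 3 * M.cv.  `hal_design` is the hypothetical
# basis constructor from the earlier HAL sketch.
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

def variation_norm(fit):
    # |beta_0| + sum over s, j of |beta_{s,j}|, matching Section 3.1
    return abs(fit.intercept_) + np.abs(fit.coef_).sum()

def fit_with_bound(H, z, target_M, lam_grid):
    """Least-penalized LASSO fit whose fitted variation norm stays below target_M."""
    best = None
    for lam in sorted(lam_grid, reverse=True):  # strong to weak penalty
        fit = Lasso(alpha=lam, max_iter=100000).fit(H, z)
        if variation_norm(fit) <= target_M:
            best = fit
        else:
            break
    return best

def hal_plug_in_second_moment(X, z, eps=0.05):
    H = hal_design(X, X)
    # Step 2: 10-fold CV-selected HAL fit and its variation norm M.cv
    cv_fit = LassoCV(cv=10, max_iter=100000).fit(H, z)
    M_cv = variation_norm(cv_fit)
    # Steps 1 and 3: enlarge to (1 + eps) * (M.cv + F(M.cv)) with F(M) = 2M
    target_M = 3 * M_cv * (1 + eps)
    final_fit = fit_with_bound(H, z, target_M, cv_fit.alphas_)
    return np.mean(final_fit.predict(H) ** 2)  # plug-in estimate of P_0 theta_0^2
```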

As with methods based on knowledge of an influence function, deriving Ψ˙\dot{\Psi} and a bound for its variation norm requires some expertise, but in some cases this task can be straightforward. The derivation of an influence function is typically based on a fluctuation in the space of distributions, but in many cases, the relation between such fluctuations and the summary of interest is implicit and difficult to handle. In contrast, the derivation of Ψ˙\dot{\Psi} is based on a fluctuation of θ0\theta_{0}, and the summary of interest explicitly depends on θ0\theta_{0}. As a consequence, it can be simpler to derive Ψ˙\dot{\Psi} than to derive an influence function. For example, for the summary Ψκ(θ0)=P0θ0κ\Psi_{\kappa}(\theta_{0})=P_{0}\theta_{0}^{\kappa} in Example 1, we find that Ψ˙κ=κθ0κ1\dot{\Psi}_{\kappa}=\kappa\theta_{0}^{\kappa-1} by straightforward calculation, whereas the influence function given in Theorem 1 is more difficult to derive analytically.
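
For completeness, the calculation behind this last claim is short: for any θ∈Θ,

(\Psi_{\kappa})_{\theta_{0}}^{\prime}[\theta-\theta_{0}]=\lim_{\delta\rightarrow 0}\frac{P_{0}\{\theta_{0}+\delta(\theta-\theta_{0})\}^{\kappa}-P_{0}\theta_{0}^{\kappa}}{\delta}=P_{0}\{\kappa\theta_{0}^{\kappa-1}(\theta-\theta_{0})\}=\langle\theta-\theta_{0},\kappa\theta_{0}^{\kappa-1}\rangle,

so that, by the Riesz representation in Condition A3, \dot{\Psi}_{\kappa}=\kappa\theta_{0}^{\kappa-1} (assuming, for instance, that θ_0 is bounded so that the limit can be passed inside the integral).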

We illustrate the fact that MnM_{n} may not be sufficiently large and show that our proposed method resolves this issue via a simulation study in which θ0:x𝔼P0[Y|X=x]\theta_{0}:x\mapsto\mathbb{E}_{P_{0}}[Y|X=x] and Ψ:θ0P0θ02\Psi:\theta_{0}\mapsto P_{0}\theta_{0}^{2}. We compare the performance of the plug-in estimators based on the 10-fold CV-selected bound on variation norm (M.cv), the bound derived from the analytic expression of Ψ˙\dot{\Psi} with and without ϵ\epsilon-relaxation (M.gcv+ and M.gcv respectively), and a sufficiently large oracle choice satisfying Condition B2 (M.oracle). According to Lemma 1, M.oracle is 3θ0v3\|\theta_{0}\|_{{\mathrm{v}}} and M.gcv is 3×\timesM.cv. We also investigate the performance of 95% Wald confidence intervals (CIs) based on the influence function. For each resulting plug-in estimator, we investigate the following quantities: nMSEn\cdot\text{MSE}, n|bias|\sqrt{n}\cdot|\text{bias}| and CI coverage. More details of this simulation are provided in Appendix E. In theory, for an efficient estimator, we should find that nMSEn\cdot\text{MSE} tends to a constant (the variance of the influence function ξ2:=P0IF2\xi^{2}:=P_{0}\text{IF}^{2}), n|bias|\sqrt{n}\cdot|\text{bias}| tends to 0, and 95% Wald CIs have approximately 95% coverage.

We report performance summaries in Fig 2 and Table 1. Judged against these criteria, it appears that the plug-in estimators with M.oracle and M.gcv+ achieve efficiency, while the plug-in estimator based on M.cv does not. The desirable performance of M.oracle and M.gcv+ agrees with the available theory, whereas the poor performance of M.cv suggests that cross-validation may not yield a valid choice of the bound on variation norm in general. Interestingly, M.gcv performs similarly to M.oracle and M.gcv+. We conjecture that the ϵ\epsilon-relaxation is unnecessary in this setting. In Fig 3, we can also see that M.cv tends to θ0v\|\theta_{0}\|_{{\mathrm{v}}} and has a high probability of being less than M.oracle. Therefore, this simulation suggests that using a sufficiently large bound — in particular, a bound larger than the CV-selected bound — may be necessary and sufficient for the plug-in estimator to achieve efficiency.

Figure 2: The relative MSE, nMSE/ξ2n\cdot\text{MSE}/\xi^{2}, and the relative absolute bias, n|bias/Ψ(θ0)|\sqrt{n}\cdot|\text{bias}/\Psi(\theta_{0})|, of the plug-in estimator of Ψ(θ0)=P0θ02\Psi(\theta_{0})=P_{0}\theta_{0}^{2} based on HAL for an oracle choice of the bound on variation norm (M.oracle), the 10-fold CV-selected bound (M.cv), a bound based on M.cv and analytic expression of Ψ˙\dot{\Psi} without and with ϵ\epsilon-relaxation (M.gcv and M.gcv+ respectively). ξ2:=P0IF2\xi^{2}:=P_{0}\text{IF}^{2} is the asymptotic variance that the nMSEn\cdot\text{MSE} of an asymptotically linear estimator should converge to. Note that the nMSEn\cdot\text{MSE} for M.oracle, M.gcv and M.gcv+ tends to ξ2\xi^{2} but that for M.cv does not.
Table 1: Coverage probability of 95% Wald CI of the plug-in estimator of Ψ(θ0)=P0θ02\Psi(\theta_{0})=P_{0}\theta_{0}^{2} based on HAL for an oracle choice of the bound on variation norm (M.oracle), the 10-fold CV-selected bound (M.cv), a bound based on M.cv and analytic expression of Ψ˙\dot{\Psi} without and with ϵ\epsilon-relaxation (M.gcv and M.gcv+ respectively). The CI is constructed based on the influence function. The coverage for M.oracle, M.gcv and M.gcv+ is approximately 95%, but that for M.cv is not.
n M.cv M.gcv M.gcv+ M.oracle
500 0.87 0.96 0.96 0.97
1000 0.87 0.97 0.97 0.97
2000 0.90 0.95 0.95 0.96
5000 0.93 0.95 0.95 0.95
10000 0.89 0.95 0.95 0.95
Figure 3: A boxplot of the ratio of bounds based on 10-fold CV and M.oracle. The horizontal thick gray dashed lines are 1 and 1/3. The y-axis is on a logarithmic scale for readability. There is a high probability that M.cv is much smaller than M.oracle; M.cv tends to the variation norm of the function being estimated, θ0v\|\theta_{0}\|_{{\mathrm{v}}}, corresponding to 1/3 of M.oracle. Enlarging M.cv according to the analytic expression of Ψ˙\dot{\Psi} with ϵ\epsilon-relaxation results in sufficiently large bounds. The enlargement without ϵ\epsilon-relaxation appears to have similar performance.

4 Data-adaptive series

4.1 Proposed method

For ease of illustration, we consider the case that Ψ\Psi is scalar-valued in this section. As we will describe next, our proposed estimation procedure for function-valued features does not rely on Ψ\Psi and hence can be used for a class of summaries.

Suppose that Θ\Theta is a vector space of q\mathbb{R}^{q}-valued functions equipped with the L2(P0)L^{2}(P_{0})-inner product. Further, suppose that Ψ˙=ψ˙θ0\dot{\Psi}=\dot{\psi}\circ\theta_{0} for some function ψ˙:qq\dot{\psi}:\mathbb{R}^{q}\rightarrow\mathbb{R}^{q}. This holds, for example, when Ψ:θP0(fθ)\Psi:\theta\mapsto P_{0}(f\circ\theta) for a fixed differentiable function ff in Example 1. In this case, Ψ˙=fθ0\dot{\Psi}=f^{\prime}\circ\theta_{0} and hence ψ˙=f\dot{\psi}=f^{\prime}. Particularly useful examples include Examples 1 and 4. For now we assume that the marginal distribution of XX is known so that we only need to estimate θ0\theta_{0} for this summary. We will address the more difficult case in which the marginal distribution of XX is unknown in Section 4.3.

Let θn0\theta_{n}^{0} be a given initial flexible ML fit of θ0\theta_{0} and consider the data-adaptive sieve-like subspaces based on θn0\theta_{n}^{0}, Θn:=Θn,θn0:=Span{ϕ1,ϕ2,,ϕK}θn0\Theta_{n}:=\Theta_{n,\theta_{n}^{0}}:=\text{Span}\{\phi_{1},\phi_{2},\ldots,\phi_{K}\}\circ\theta_{n}^{0}, where ϕ1,ϕ2,\phi_{1},\phi_{2},\ldots are q\mathbb{R}^{q}-valued basis functions in a series defined on q\mathbb{R}^{q} and K=K(n)K=K(n) is a deterministic number of terms in the series — we will consider selecting KK via CV in Section 4.4. Let θn=θn(θn0)argminθΘnn1i=1n(θ)(Vi)\theta_{n}^{*}=\theta_{n}^{*}(\theta_{n}^{0})\in\operatorname*{argmin}_{\theta\in\Theta_{n}}n^{-1}\sum_{i=1}^{n}\ell(\theta)(V_{i}) denote the series estimator within this data-adaptive sieve-like subspace that minimizes the empirical risk. We propose to use Ψ(θn)\Psi(\theta_{n}^{*}) to estimate Ψ(θ0)\Psi(\theta_{0}).
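
To make the proposal concrete, the following Python sketch implements this estimator for Example 1 with f(t)=t^2, using a gradient boosting initial fit, a cosine series applied to the (min-max rescaled) values of θ_n^0, and a fixed K. All of these choices, as well as the helper names, are illustrative only, and using the empirical measure for the outer mean anticipates Section 4.3.

```python
# Hedged sketch of the data-adaptive series estimator under the
# squared-error loss, for the summary P_0 f(theta_0) with f(t) = t^2.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

def cosine_basis(t, K):
    """phi_1, ..., phi_K: cosine series on [0, 1] (phi_1 is the constant function)."""
    return np.column_stack([np.cos(np.pi * j * t) for j in range(K)])

def data_adaptive_series_plug_in(X, z, K=8, f=lambda t: t ** 2):
    theta0 = GradientBoostingRegressor(random_state=0).fit(X, z)  # initial ML fit theta_n^0
    t = theta0.predict(X)
    t01 = (t - t.min()) / (t.max() - t.min())        # rescale theta_n^0(X) to [0, 1]
    B = cosine_basis(t01, K)                         # basis of Theta_n = Span{phi_j} o theta_n^0
    theta_star = LinearRegression(fit_intercept=False).fit(B, z).predict(B)
    return np.mean(f(theta_star))                    # plug-in, with P_n in place of P_0

# toy usage
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))
z = np.sin(3 * X[:, 0]) + X[:, 1] + rng.normal(scale=0.3, size=500)
print(data_adaptive_series_plug_in(X, z))
```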

4.2 Results for a deterministic number of terms

Following [4, 28], our proofs of the validity of our data-adaptive series approach make heavy use of projection operators. We use πn:=πn,θn0\pi_{n}:=\pi_{n,\theta_{n}^{0}} to denote the projection operator for functions in Θ\Theta onto Θn=Θn,θn0\Theta_{n}=\Theta_{n,\theta_{n}^{0}} with respect to ,\langle\cdot,\cdot\rangle. For any function θΘ\theta\in\Theta, let Πn,θ\Pi_{n,\theta} denote the operator that takes as input a function g:qqg:\mathbb{R}^{q}\rightarrow\mathbb{R}^{q} for which gθL2(P0)g\circ\theta\in L^{2}(P_{0}) and outputs a function Πn,θ(g):qq\Pi_{n,\theta}(g):\mathbb{R}^{q}\rightarrow\mathbb{R}^{q} such that Πn,θ(g)θ=πn,θ(gθ)\Pi_{n,\theta}(g)\circ\theta=\pi_{n,\theta}(g\circ\theta). In other words, letting βj\beta_{j} be the quantity that depends on gg and θ\theta such that πn,θ(gθ)=(j=1Kβjϕj)θ\pi_{n,\theta}(g\circ\theta)=(\sum_{j=1}^{K}\beta_{j}\phi_{j})\circ\theta, we define Πn,θ(g):=j=1Kβjϕj\Pi_{n,\theta}(g):=\sum_{j=1}^{K}\beta_{j}\phi_{j}. The operator Πn,θ\Pi_{n,\theta} may also be interpreted as follows: letting PθP_{\theta} be the distribution of θ(X)\theta(X) with V=(X,Z)P0V=(X,Z)\sim P_{0}, then Πn,θ\Pi_{n,\theta} is the projection operator of functions qq\mathbb{R}^{q}\rightarrow\mathbb{R}^{q} with respect to the L2(Pθ)L^{2}(P_{\theta})-inner product. We use \mathcal{I} to denote the identity function in q\mathbb{R}^{q}.

We now present additional conditions we will require to ensure that Ψ(θn)\Psi(\theta_{n}^{*}) is an efficient estimator of Ψ(θ0)\Psi(\theta_{0}).

Condition C1 (Sufficient convergence rate of initial ML fit).

θn0θ0=op(n1/4)\|\theta_{n}^{0}-\theta_{0}\|=o_{p}(n^{-1/4}).

Condition C2 (Sufficiently small estimation error).

θnπn(θ0)=op(n1/4)\|\theta_{n}^{*}-\pi_{n}(\theta_{0})\|=o_{p}(n^{-1/4}).

Condition C3 (Sufficiently small approximation error to \mathcal{I} for Θn,θ0\Theta_{n,\theta_{0}}).

θ0Πn,θ0()θ0=o(n1/4)\|\theta_{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{0}\|=o(n^{-1/4}).

Condition C4 (Sufficiently small approximation error to ψ˙\dot{\psi} for Θn,θ0\Theta_{n,\theta_{0}} and convergence rate of θn\theta_{n}^{*}).

[ψ˙Πn,θ0(ψ˙)]θ0θnθ0=op(n1/2)\|[\dot{\psi}-\Pi_{n,\theta_{0}}(\dot{\psi})]\circ\theta_{0}\|\cdot\|\theta_{n}^{*}-\theta_{0}\|=o_{p}(n^{-1/2}).

Appendix B.2 contains further technical conditions and Appendix C discusses their plausibility. As discussed in Appendix C, Conditions C2–C4 typically imply restrictions on the growth rate of KK: if KK grows too fast with nn, then Condition C2 may be violated; if KK instead grows too slowly, then Conditions C3 and C4 may be violated. For the generalized moment Ψ:θP0(fθ)\Psi:\theta\mapsto P_{0}(f\circ\theta) with a fixed known function ff in Example 1, Condition C4 typically also imposes a smoothness condition on ff so that ff^{\prime} can be approximated well by the series. Our conditions are closely related to the conditions in Theorem 1 of [28]. Conditions C1–C3 and C6 serve as sufficient conditions for the condition on the smoothness of Ψ\Psi and the convergence rate of θn\theta^{*}_{n} in Theorem 1 of [28]. Together with Conditions C4 and C7, we can derive Lemma 4, which is similar to the first part of Condition C of [28]. The empirical process condition C8 is sufficient for Conditions A, D and the second part of C in Theorem 1 of [28].

We now present a theorem ensuring the asymptotic linearity and efficiency of the plug-in estimator based on θn\theta_{n}^{*}.

Theorem 2 (Efficiency of plug-in estimator).

Under Conditions A1–A4 and C1–C9, Ψ(θn)\Psi(\theta_{n}^{*}) is an asymptotically linear estimator of Ψ(θ0)\Psi(\theta_{0}) with the influence function being vα0,1{0[ψ˙θ0](v)+𝔼P0[0[ψ˙θ0](V)]}v\mapsto\alpha_{0,\ell}^{-1}\{-\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](v)+\mathbb{E}_{P_{0}}[\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V)]\}, that is,

Ψ(θn)=Ψ(θ0)+1ni=1nα0,1{0[ψ˙θ0](Vi)+𝔼P0[0[ψ˙θ0](V)]}+op(n1/2).\Psi(\theta_{n}^{*})=\Psi(\theta_{0})+\frac{1}{n}\sum_{i=1}^{n}\alpha_{0,\ell}^{-1}\left\{-\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V_{i})+\mathbb{E}_{P_{0}}\left[\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V)\right]\right\}+o_{p}(n^{-1/2}).

As a consequence, n[Ψ(θn)Ψ(θ0)]𝑑N(0,ξ2)\sqrt{n}[\Psi(\theta_{n}^{*})-\Psi(\theta_{0})]\overset{d}{\rightarrow}N(0,\xi^{2}) with ξ2:=VarP0(0[ψ˙θ0](V))/α0,2\xi^{2}:=\textnormal{Var}_{P_{0}}(\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V))/\alpha_{0,\ell}^{2}. In addition, under Conditions E1 and E2 in Appendix B.4, Ψ(θn)\Psi(\theta_{n}^{*}) is efficient under a nonparametric model.

Remark 2.

Consider the general case in which it may not be true that Ψ˙\dot{\Psi} can be represented as ψ˙θ0\dot{\psi}\circ\theta_{0} for some ψ˙:qq\dot{\psi}:\mathbb{R}^{q}\rightarrow\mathbb{R}^{q}. If the analytic expression of Ψ˙\dot{\Psi} can be derived and Ψ˙\dot{\Psi} can be estimated by Ψ˙n\dot{\Psi}_{n} such that Ψ˙nΨ˙θn0θ0=op(n1/2)\|\dot{\Psi}_{n}-\dot{\Psi}\|\cdot\|\theta_{n}^{0}-\theta_{0}\|=o_{p}(n^{-1/2}), then our data-adaptive series can take a special form that is targeted towards Ψ\Psi. Specifically, letting ϑ0:=(θ0,Ψ˙)\vartheta_{0}:=(\theta_{0},\dot{\Psi})^{\top} and Ψ(ϑ0):=Ψ(θ0)\varPsi(\vartheta_{0}):=\Psi(\theta_{0}), it is straightforward to show that the gradient of Ψ\varPsi is Ψ˙=(Ψ˙,0)=(e2,𝟎)ϑ0\dot{\varPsi}=(\dot{\Psi},0)^{\top}=(e_{2},\bm{0})^{\top}\vartheta_{0} with 𝟎=(0,0)\bm{0}=(0,0)^{\top} and e2=(0,1)e_{2}=(0,1)^{\top}, which is a function composed with ϑ0\vartheta_{0}. We can set ϑn0=(θn0,Ψ˙n)\vartheta_{n}^{0}=(\theta_{n}^{0},\dot{\Psi}_{n})^{\top} and Θn=Span{θn0,Ψ˙n}\Theta_{n}=\text{Span}\{\theta_{n}^{0},\dot{\Psi}_{n}\} in our data-adaptive series. This approach does not have a growing number of terms in Θn\Theta_{n} and is not similar to sieve estimation, but can be treated as a special case of data-adaptive series. It can be shown that Conditions C1C4 are still satisfied for ϑ\vartheta and Ψ\varPsi with this choice of Θn\Theta_{n}, and hence our data-adaptive series estimator leads to an efficient plug-in estimator. We remark that the introduction of ϑ\vartheta and Ψ\varPsi is a purely theoretical device, and this targeted approach to estimation is quite similar to that used in the context of TMLE [30, 33].
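
Under the squared-error loss, this targeted special case reduces to an ordinary least-squares regression of the outcome on the two columns θ_n^0(X_i) and Ψ̇_n(X_i); the sketch below, with hypothetical user-supplied inputs, makes this explicit.

```python
# Hedged sketch of the targeted data-adaptive series of Remark 2 under the
# squared-error loss: Theta_n = Span{theta_n^0, Psi_dot_n}, so the empirical
# risk minimizer is an OLS fit on two columns.  theta0_vals and psi_dot_vals
# are the (user-supplied) values of the initial fit and of the estimated
# gradient at the observed X_i.
import numpy as np
from sklearn.linear_model import LinearRegression

def targeted_series_fit(theta0_vals, psi_dot_vals, z):
    design = np.column_stack([theta0_vals, psi_dot_vals])
    fit = LinearRegression(fit_intercept=False).fit(design, z)
    return fit.predict(design)  # theta_n^*(X_i), to be plugged into Psi
```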

4.3 Summaries involving the marginal distribution of XX

We now generalize the setting considered thus far by allowing the parameter to depend both on θ0\theta_{0} and on P0P_{0}, i.e., estimating Ψ(θ0,P0)\Psi(\theta_{0},P_{0}). The example given at the beginning of Section 4.1, namely that of estimating Ψ(θ0)=P0(fθ0)\Psi(\theta_{0})=P_{0}(f\circ\theta_{0}), is a special case of this more general setting. In what follows, we will make use of the following conditions:

Condition D1 (Conditions with P0P_{0} fixed).

When we regard Ψ(θ0,P0)\Psi(\theta_{0},P_{0}) as the mapping θΨ(θ,P0)\theta\mapsto\Psi(\theta,P_{0}) evaluated at θ0\theta_{0}, Conditions A1–A4, C1–C4 and C6–C9 are satisfied for estimating Ψ(θ0,P0)\Psi(\theta_{0},P_{0}).

Condition D2 (Hadamard differentiability with θ0\theta_{0} fixed).

The mapping PΨ(θ0,P)P\mapsto\Psi(\theta_{0},P) is Hadamard differentiable at P0P_{0}.

By the functional delta method, it follows that Ψ(θ0,Pn)=Ψ(θ0,P0)+PnIF0+op(n1/2)\Psi(\theta_{0},P_{n})=\Psi(\theta_{0},P_{0})+P_{n}\text{IF}_{0}+o_{p}(n^{-1/2}) for a function IF0\text{IF}_{0} satisfying P0IF0=0P_{0}\text{IF}_{0}=0 and P0IF02<P_{0}\text{IF}_{0}^{2}<\infty.

Condition D3 (Negligible second-order difference).
[Ψ(θn,Pn)Ψ(θ0,Pn)][Ψ(θn,P0)Ψ(θ0,P0)]=op(n1/2).[\Psi(\theta_{n}^{*},P_{n})-\Psi(\theta_{0},P_{n})]-[\Psi(\theta_{n}^{*},P_{0})-\Psi(\theta_{0},P_{0})]=o_{p}(n^{-1/2}).

This condition usually holds, for example, when Ψ(θ0,P0)=P0(fθ0)\Psi(\theta_{0},P_{0})=P_{0}(f\circ\theta_{0}), as in this case the left-hand side is equal to (PnP0)(fθnfθ0)(P_{n}-P_{0})(f\circ\theta_{n}^{*}-f\circ\theta_{0}), which is op(n1/2)o_{p}(n^{-1/2}) under empirical process conditions.

Theorem 3 (Asymptotic linearity of plug-in estimator).

Under Conditions D1–D3, Ψ(θn,Pn)\Psi(\theta_{n}^{*},P_{n}) is an asymptotically linear estimator of Ψ(θ0,P0)\Psi(\theta_{0},P_{0}) with influence function

vα0,1{0[ψ˙θ0](v)+𝔼P0[0[ψ˙θ0](V)]}+IF0(v),v\mapsto\alpha_{0,\ell}^{-1}\left\{-\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](v)+\mathbb{E}_{P_{0}}[\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V)]\right\}+\mathrm{IF}_{0}(v),

that is,

Ψ(θn,Pn)\displaystyle\Psi(\theta_{n}^{*},P_{n}) =Ψ(θ0,P0)+1ni=1n{α0,10[ψ˙θ0](Vi)+α0,1𝔼P0[0[ψ˙θ0](V)]+IF0(Vi)}\displaystyle=\Psi(\theta_{0},P_{0})+\frac{1}{n}\sum_{i=1}^{n}\left\{-\alpha_{0,\ell}^{-1}\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V_{i})+\alpha_{0,\ell}^{-1}\mathbb{E}_{P_{0}}\left[\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V)\right]+\mathrm{IF}_{0}(V_{i})\right\}
+op(n1/2).\displaystyle\quad+o_{p}(n^{-1/2}).

As a consequence, n[Ψ(θn,Pn)Ψ(θ0,P0)]𝑑N(0,ξ2)\sqrt{n}[\Psi(\theta_{n}^{*},P_{n})-\Psi(\theta_{0},P_{0})]\overset{d}{\rightarrow}N(0,\xi^{2}) with ξ2:=VarP0(α0,10[ψ˙θ0](V)+IF0(V))\xi^{2}:=\textnormal{Var}_{P_{0}}(\alpha_{0,\ell}^{-1}\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V)+\mathrm{IF}_{0}(V)).

This result is easy to verify by decomposing $\Psi(\theta_{n}^{*},P_{n})-\Psi(\theta_{0},P_{0})$ as

[\Psi(\theta_{n}^{*},P_{0})-\Psi(\theta_{0},P_{0})]+[\Psi(\theta_{0},P_{n})-\Psi(\theta_{0},P_{0})]+\{[\Psi(\theta_{n}^{*},P_{n})-\Psi(\theta_{0},P_{n})]-[\Psi(\theta_{n}^{*},P_{0})-\Psi(\theta_{0},P_{0})]\}.

The first term is asymptotically linear by Condition D1, the second term equals $P_{n}\mathrm{IF}_{0}+o_{p}(n^{-1/2})$ by Condition D2 and the functional delta method, and the third term is $o_{p}(n^{-1/2})$ by Condition D3.

Moreover, under conditions similar to the conditions E1 and E2 given in Appendix B.4, we can show that Ψ(θn,Pn)\Psi(\theta_{n}^{*},P_{n}) is efficient under a nonparametric model.

Remark 3.

Conditions D2 and D3 can be relaxed. Specifically, if P^n\hat{P}_{n} is an estimator of P0P_{0} that satisfies that Ψ(θ0,P^n)=Ψ(θ0,P0)+PnIF0+op(n1/2)\Psi(\theta_{0},\hat{P}_{n})=\Psi(\theta_{0},P_{0})+P_{n}\text{IF}_{0}+o_{p}(n^{-1/2}) for an influence function IF0\text{IF}_{0} and Condition D3 holds with PnP_{n} replaced by P^n\hat{P}_{n}, then Ψ(θn,P^n)\Psi(\theta_{n}^{*},\hat{P}_{n}) is an asymptotically linear estimator of Ψ(θ0,P0)\Psi(\theta_{0},P_{0}).

4.4 CV selection of the number of terms in data-adaptive series

In the preceding subsections, we established the efficiency of the plug-in estimator based on suitable rates of growth for $K$ relative to the sample size $n$. In this subsection, we show that, under some conditions, such a $K$ can be selected by $k$-fold CV: after obtaining $\theta_{n}^{0}$, for each $K$ in a range of candidates, we can calculate the cross-validated risk from $k$ folds and choose the value of $K$ with the smallest CV risk. We denote the number of terms in the series that CV selects by $K^{*}$. In this section, we use $K$ in the subscripts for notation related to data-adaptive sieve-like spaces and projections; this represents a slight abuse of notation because, in Sections 4.1 and 4.2, these subscripts were instead used for the sample size $n$. That is, we use $\Theta_{K,\theta}$ to denote $\text{Span}\{\phi_{1},\phi_{2},\ldots,\phi_{K}\}\circ\theta$, $\pi_{K,\theta}$ to denote the projection onto $\Theta_{K,\theta}$, $\Pi_{K,\theta}$ to denote the operator such that $\Pi_{K,\theta}(g)\circ\theta=\pi_{K,\theta}(g\circ\theta)$ for all $g:\mathbb{R}^{q}\rightarrow\mathbb{R}^{q}$ with $g\circ\theta\in L^{2}(P_{0})$, and $\theta_{n}^{\sharp}:=\theta_{K^{*}}^{*}(\theta_{n}^{0})$ to denote the data-adaptive series estimator based on $\theta_{n}^{0}$ and $K^{*}$.
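
As an illustration of this selection scheme, the following sketch (in Python, with hypothetical helper names; squared-error loss and a univariate $\theta_{n}^{0}$ rescaled to $[0,1]$ are assumed) computes the $k$-fold CV risk of the trigonometric data-adaptive series for each candidate $K$ and returns $K^{*}$.

import numpy as np

def trig_basis(t, K):
    # Constant term plus the first K cosine/sine pairs, evaluated at t in [0, 1].
    cols = [np.ones_like(t)]
    for j in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * j * t))
        cols.append(np.sin(2 * np.pi * j * t))
    return np.column_stack(cols)

def cv_select_K(theta0_vals, Z, candidate_Ks, n_folds=10, seed=0):
    # theta0_vals: initial ML fit theta_n^0 evaluated at the observed X's; Z: outcomes.
    # Returns the candidate K with the smallest k-fold cross-validated risk.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Z))
    folds = np.array_split(idx, n_folds)
    cv_risk = []
    for K in candidate_Ks:
        B = trig_basis(theta0_vals, K)
        risk = 0.0
        for test in folds:
            train = np.setdiff1d(idx, test)
            beta, *_ = np.linalg.lstsq(B[train], Z[train], rcond=None)
            risk += np.mean((Z[test] - B[test] @ beta) ** 2)
        cv_risk.append(risk / n_folds)
    return candidate_Ks[int(np.argmin(cv_risk))]

# K_star = cv_select_K(theta0_hat(X), Z, candidate_Ks=range(1, 11))
# theta_sharp is then the series fit with K_star refit on the full data (not shown).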

Condition C5 (Bounded approximation error of ψ˙\dot{\psi} relative to \mathcal{I}).

There exists a constant C>0C>0 such that, with probability tending to one, ψ˙θn0ΠK,θn0(ψ˙)θn0Cθn0ΠK,θn0()θn0\|\dot{\psi}\circ\theta_{n}^{0}-\Pi_{K,\theta_{n}^{0}}(\dot{\psi})\circ\theta_{n}^{0}\|\leq C\|\theta_{n}^{0}-\Pi_{K,\theta_{n}^{0}}(\mathcal{I})\circ\theta_{n}^{0}\| for all KK.

This condition is equivalent to

ψ˙ΠK,θn0(ψ˙)L2(Pθn0)CΠK,θn0()L2(Pθn0)\|\dot{\psi}-\Pi_{K,\theta_{n}^{0}}(\dot{\psi})\|_{L^{2}(P_{\theta_{n}^{0}})}\leq C\|\mathcal{I}-\Pi_{K,\theta_{n}^{0}}(\mathcal{I})\|_{L^{2}(P_{\theta_{n}^{0}})}

for all $K$ with probability tending to one, which may be interpreted in terms of two simultaneous requirements. The first requirement is that the identity function $\mathcal{I}$ is not exactly contained in the span of $\phi_{1},\ldots,\phi_{K}$ for any $K$, since otherwise the right-hand side would be zero for all sufficiently large $K$. Therefore, common series such as polynomial and spline series are not permitted for general summaries. In contrast, other series such as trigonometric series and wavelets satisfy this requirement. The second requirement is that the approximation error of the chosen series for $\dot{\psi}$ is not much larger than that for the identity function $\mathcal{I}$. If a trigonometric or wavelet series is used, then this condition imposes a strong smoothness condition on derivatives of $\dot{\psi}$. Nonetheless, this may not be stringent in some interesting examples. For example, if $\Psi(\theta)=P_{0}(f\circ\theta)$ for a fixed function $f$ in Example 1, then $\dot{\psi}$ equals $f^{\prime}$ and hence can be expected to satisfy this strong smoothness condition provided that $f$ is infinitely differentiable with bounded derivatives. The estimands encountered in many applications involve $f$ satisfying this smoothness condition.

The following theorem justifies the use of kk-fold CV to select KK under appropriate conditions.

Theorem 4 (Efficiency of CV-based plug-in estimator).

Assume that Conditions A1–A4, C1–C3, C5, C8 and C9 hold for a deterministic $K=K(n)$. Suppose also that part (a) of Condition C7 holds. Then, with $\theta_{n}^{\sharp}:=\theta_{K^{*}}^{*}(\theta_{n}^{0})$, $\Psi(\theta_{n}^{\sharp})$ is an asymptotically linear estimator of $\Psi(\theta_{0})$ with influence function $v\mapsto\alpha_{0,\ell}^{-1}\{-\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](v)+\mathbb{E}_{P_{0}}[\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V)]\}$, that is,

Ψ(θn)=Ψ(θ0)+1ni=1nα0,1{0[ψ˙θ0](Vi)+𝔼P0[0[ψ˙θ0](V)]}+op(n1/2).\Psi(\theta_{n}^{\sharp})=\Psi(\theta_{0})+\frac{1}{n}\sum_{i=1}^{n}\alpha_{0,\ell}^{-1}\left\{-\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V_{i})+\mathbb{E}_{P_{0}}\left[\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V)\right]\right\}+o_{p}(n^{-1/2}).

As a consequence, $\sqrt{n}[\Psi(\theta_{n}^{\sharp})-\Psi(\theta_{0})]\overset{d}{\rightarrow}N(0,\xi^{2})$ with $\xi^{2}:=\textnormal{Var}_{P_{0}}(\ell_{0}^{\prime}[\dot{\psi}\circ\theta_{0}](V))/\alpha_{0,\ell}^{2}$. In addition, under Conditions E1 and E2 in Appendix B.4, $\Psi(\theta_{n}^{\sharp})$ is efficient under a nonparametric model.

4.5 Simulation

4.5.1 Demonstration of Theorem 4

We illustrate our method in a simulation in which we take θ0(x)=𝔼P0[Z|X=x]\theta_{0}(x)=\mathbb{E}_{P_{0}}[Z|X=x] and Ψ(θ0)=P0θ02\Psi(\theta_{0})=P_{0}\theta_{0}^{2}. This is a special case of Example 1. The true function θ0\theta_{0} is chosen to be discontinuous, which violates the smoothness assumptions commonly required in traditional series estimation. In this case, ψ˙=2\dot{\psi}=2\mathcal{I} and so the constant in Condition C5 is 2. We compare the performance of plug-in estimators based on three different nonparametric regressions: (i) polynomial regression with degree selected by 10-fold CV (poly), which results in a traditional sieve estimator, (ii) gradient boosting (xgb) [8, 9, 18, 19], and (iii) data-adaptive trigonometric series estimation with gradient boosting as the initial ML fit and 10-fold CV to select the number of terms in the series (xgb.trig). We also compare these plug-in estimators with the one-step correction estimator [25] based on gradient boosting (xgb.1step). Further details of this simulation can be found in Appendix E.
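
For reference, with the squared-error loss used here (and assuming $\alpha_{0,\ell}$ and $\ell_{0}^{\prime}$ take their usual forms for this loss, namely $\alpha_{0,\ell}=2$ and $\ell_{0}^{\prime}[h]:v\mapsto-2h(x)[z-\theta_{0}(x)]$), combining the two contributions in Theorem 3 with $\dot{\psi}=2\mathcal{I}$ and $\mathrm{IF}_{0}:v\mapsto\theta_{0}(x)^{2}-P_{0}\theta_{0}^{2}$ yields the familiar nonparametric efficient influence function on which xgb.1step and the Wald CIs are based:

v\mapsto 2\theta_{0}(x)[z-\theta_{0}(x)]+\theta_{0}(x)^{2}-P_{0}\theta_{0}^{2}.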

Fig 4 presents nMSEn\cdot\text{MSE} and n|bias|\sqrt{n}\cdot|\text{bias}| for each estimator, whereas Table 2 presents the coverage probability of 95% Wald CIs based on these estimators. We find that xgb.trig and xgb.1step estimators perform well, while poly and xgb plug-in estimators do not appear to be efficient. Since polynomial series estimators only work well when estimating smooth functions, in this simulation, we would not expect the fit from the polynomial series estimator to converge sufficiently fast, and consequently, we would not expect the resulting plug-in estimator to be efficient. In contrast, gradient boosting is a flexible ML method that can learn discontinuous functions, so we can expect an efficient plug-in estimator based on this ML method. However, gradient boosting is not designed to approximately solve the estimating equation that achieves the small-bias property for this particular summary, so we would not expect its naïve plug-in estimator to be efficient. Based on gradient boosting, our estimator and the one-step corrected estimator both appear to be efficient, but our method has the advantage of being a plug-in estimator. Moreover, the construction of our estimator does not require knowledge of the analytic expression of an influence function.

Figure 4: The relative MSE, nMSE/ξ2n\cdot\text{MSE}/\xi^{2}, and the relative absolute bias, n|bias/Ψ(θ0)|\sqrt{n}\cdot|\text{bias}/\Psi(\theta_{0})|, of estimators of Ψ(θ0)=P0θ02\Psi(\theta_{0})=P_{0}\theta_{0}^{2}. ξ2:=P0IF2\xi^{2}:=P_{0}\text{IF}^{2} is the asymptotic variance that the nMSEn\cdot\text{MSE} of an AL estimator should converge to. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The y-axis for relative MSE is scaled based on logarithm for readability. Note that the nMSEn\cdot\text{MSE} for xgb.trig and xgb.1step tend to ξ2\xi^{2}, but those for poly and xgb do not.
Table 2: Coverage probability of 95% Wald CI based on estimators of Ψ(θ0)=P0θ02\Psi(\theta_{0})=P_{0}\theta_{0}^{2}. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The CI is constructed based on the influence function. The coverage probabilities for xgb.trig and xgb.1step are approximately 95%, but those for poly and xgb are not.
n poly xgb xgb.1step xgb.trig
500 0.90 0.90 0.95 0.95
1000 0.86 0.89 0.95 0.95
2000 0.74 0.88 0.96 0.96
5000 0.47 0.88 0.94 0.94
10000 0.16 0.87 0.95 0.96
20000 0.02 0.86 0.96 0.96

We also investigate the effect of the choice of $K$ on the performance of our method. Fig 5 presents $n\cdot\text{MSE}$ for the data-adaptive series estimator with different choices of $K$. We can see that our method is insensitive to the choice of $K$ in this simulation setting. Although a relatively small $K$ performs better, choosing a much larger $K$ does not appear to substantially harm the behavior of the estimator. This insensitivity to the tuning parameter suggests that, in some applications, an almost arbitrary but sufficiently large choice of $K$ might perform well even without using CV.

Figure 5: nMSEn\cdot\text{MSE} of estimators of Ψ(θ0)=P0θ02\Psi(\theta_{0})=P_{0}\theta_{0}^{2} based on data-adaptive series with different choices of KK. The horizontal gray thick dashed line is the asymptotic variance that the nMSEn\cdot\text{MSE} of an AL estimator should converge to, ξ2:=P0IF2\xi^{2}:=P_{0}\text{IF}^{2}. Note that nMSEn\cdot\text{MSE} is not sensitive to the choice of KK over a wide range of KK.

4.5.2 Violation of Condition C5

For the kk-fold CV selection of KK in our method to yield an efficient plug-in estimator, Ψ\Psi must be highly smooth in the sense that ψ˙\dot{\psi} can be approximated by the series about as well as can the identity function (see Condition C5). Although we have argued that this condition is reasonable, in this section, we explore via simulation the behavior of our method based on CV when ψ˙\dot{\psi} is rough. We again take θ0:x𝔼P0[Z|X=x]\theta_{0}:x\mapsto\mathbb{E}_{P_{0}}[Z|X=x] and an artificial summary Ψ(θ0)=P0(fθ0)\Psi(\theta_{0})=P_{0}(f\circ\theta_{0}), where ff is an element of C1[1,1]C^{1}[-1,1] but not of C2[1,1]C^{2}[-1,1]. In this case, ψ˙=f\dot{\psi}=f^{\prime} is very rough, so we do not expect it to be approximated by a trigonometric series as well as the identity function. However, it is sufficiently smooth to allow for the existence of a deterministic KK that achieves efficiency. Further simulation details are provided in Appendix E.
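
For intuition, one function with exactly this degree of smoothness (not necessarily the choice made in Appendix E) is $f(t)=t|t|$: its derivative $f^{\prime}(t)=2|t|$ is continuous on $[-1,1]$ but not differentiable at zero, so $f\in C^{1}[-1,1]\setminus C^{2}[-1,1]$ and $\dot{\psi}=f^{\prime}$ has a kink.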

Table 3 presents the performance of our estimator based on 10-fold CV. We note that it performs reasonably well in terms of the nMSEn\cdot\text{MSE} criterion. However, it is unclear whether its scaled bias converges to zero for large nn, so our method may be too biased. The coverage of 95% Wald CIs is close to the nominal level, suggesting that the bias is fairly small relative to the standard error of the estimator at the sample sizes considered. One possible explanation for the good performance observed is that the L2(P0)L^{2}(P_{0})-convergence rate of θn\theta_{n}^{*} is much faster than n1/4n^{-1/4}, which allows for a slower convergence rate of the approximation error ψ˙θ0Πn,θ0(ψ˙)θ0\|\dot{\psi}\circ\theta_{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}\| (see Appendix C). This simulation shows that our proposed method may still perform well even if Condition C5 is violated, especially when the initial ML fit is close to the unknown function.

Table 3: Performance of the plug-in estimator of Ψ(θ0)=P0(fθ0)\Psi(\theta_{0})=P_{0}(f\circ\theta_{0}) based on data-adaptive series. Here ff is not infinitely differentiable. The relative MSE is nMSE/ξ2n\cdot\text{MSE}/\xi^{2} where ξ2:=P0IF2\xi^{2}:=P_{0}\text{IF}^{2} is the asymptotic variance that the nMSEn\cdot\text{MSE} of an AL estimator should converge to; the root-nn abs relative bias is n|bias/Ψ(θ0)|\sqrt{n}|\text{bias}/\Psi(\theta_{0})|. The performance appears to be acceptable in view of the small MSE and reasonable CI coverage.
n relative MSE root-nn absolute relative bias 95% Wald CI coverage
500 0.88 3.95 0.97
1000 0.89 3.73 0.96
2000 0.79 3.15 0.97
5000 0.78 2.02 0.97
10000 0.88 2.57 0.97
20000 0.88 1.75 0.96

5 Generalized data-adaptive series

5.1 Proposed method

As in Section 4, we consider the case in which $\Psi$ is scalar-valued. The assumption that $\dot{\Psi}=\dot{\psi}\circ\theta_{0}$ may be too restrictive for general summaries such as those in Examples 2–4, especially if $\dot{\Psi}$ is not derived analytically (see Remark 2). In this section, we generalize the method in Section 4 to deal with such summaries. Letting $\mathcal{I}_{x}$ be the identity function defined on $\mathcal{X}$, we can readily generalize the above method to the case where $\dot{\Psi}$ can be represented as $\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})$ for a function $\dot{\psi}:\mathbb{R}^{q}\times\mathcal{X}\rightarrow\mathbb{R}^{q}$; that is, $\dot{\Psi}(x)=\dot{\psi}(\theta_{0}(x),x)$. This form holds trivially if we set $\dot{\psi}(t,x)=\dot{\Psi}(x)$, i.e., if $\dot{\psi}$ is independent of its first argument, but we can utilize flexible ML methods if $\dot{\psi}$ depends nontrivially on its first argument. Again, we assume $\Theta$ is a vector space of $\mathbb{R}^{q}$-valued functions equipped with the $L^{2}(P_{0})$-inner product. We assume $\dot{\psi}$ can be approximated well by a basis $\phi_{1},\phi_{2},\ldots:\mathbb{R}^{q}\times\mathcal{X}\rightarrow\mathbb{R}^{q}$, and consider the data-adaptive sieve-like subspace $\Theta_{n}:=\Theta_{n,\theta_{n}^{0}}:=\text{Span}\{\phi_{1},\ldots,\phi_{K}\}\circ(\theta_{n}^{0},\mathcal{I}_{x})$. We propose to use $\Psi(\theta_{n}^{*})$ to estimate $\Psi(\theta_{0})$, where $\theta_{n}^{*}=\theta_{n}^{*}(\theta_{n}^{0})\in\operatorname*{argmin}_{\theta\in\Theta_{n}}n^{-1}\sum_{i=1}^{n}\ell(\theta)(V_{i})$ denotes the series estimator within $\Theta_{n}$ minimizing the empirical risk.
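
To illustrate the construction of $\Theta_{n}$, the sketch below (in Python, with hypothetical helper names; a scalar covariate, squared-error loss, and both $\theta_{n}^{0}(x)$ and $x$ rescaled to $[0,1]$ are assumed) builds a tensor-product cosine basis in $(\theta_{n}^{0}(x),x)$ and computes the empirical risk minimizer over its span.

import numpy as np

def generalized_cos_basis(t, x, K):
    # First K tensor-product cosine terms phi_{j,k}(t, x) = cos(pi*j*t) * cos(pi*k*x),
    # ordered by total frequency j + k; both arguments assumed rescaled to [0, 1].
    pairs = sorted(((j, k) for j in range(K + 1) for k in range(K + 1)),
                   key=lambda jk: (jk[0] + jk[1], jk))[:K]
    cols = [np.cos(np.pi * j * t) * np.cos(np.pi * k * x) for j, k in pairs]
    return np.column_stack(cols)

def fit_generalized_series(theta0_vals, X, Z, K):
    # Least squares of Z on the basis evaluated at (theta_n^0(X), X), i.e. the empirical
    # risk minimizer over Theta_n = Span{phi_1, ..., phi_K} composed with (theta_n^0, I_x).
    B = generalized_cos_basis(theta0_vals, X, K)
    beta, *_ = np.linalg.lstsq(B, Z, rcond=None)
    return lambda t_new, x_new: generalized_cos_basis(t_new, x_new, K) @ beta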

5.2 Results for proposed method

With a slight abuse of notation, in this section we use \mathcal{I} to denote the function (t,x)t(t,x)\mapsto t where tqt\in\mathbb{R}^{q} and x𝒳x\in\mathcal{X}. Again, we use πn:=πn,θn0\pi_{n}:=\pi_{n,\theta_{n}^{0}} to denote the projection operator onto Θn,θn0\Theta_{n,\theta_{n}^{0}}. Let Πn,θ\Pi_{n,\theta} be defined such that, for any function g:q×𝒳qg:\mathbb{R}^{q}\times\mathcal{X}\rightarrow\mathbb{R}^{q} with g(θ,x)L2(P0)g\circ(\theta,\mathcal{I}_{x})\in L^{2}(P_{0}), it holds that Πn,θ(g)(θ,x)=πn,θ(g(θ,x))\Pi_{n,\theta}(g)\circ(\theta,\mathcal{I}_{x})=\pi_{n,\theta}(g\circ(\theta,\mathcal{I}_{x})); that is, letting βj\beta_{j} be the quantity that depends on gg and θ\theta such that πn,θ(g(θ,x))=(j=1Kβjϕj)(θ,x)\pi_{n,\theta}(g\circ(\theta,\mathcal{I}_{x}))=(\sum_{j=1}^{K}\beta_{j}\phi_{j})\circ(\theta,\mathcal{I}_{x}), we define Πn,θ(g):=j=1Kβjϕj\Pi_{n,\theta}(g):=\sum_{j=1}^{K}\beta_{j}\phi_{j}.

We introduce conditions and derive theoretical results that are parallel to those in Section 4.

Condition C3′ (Sufficiently small approximation error to $\mathcal{I}$ for $\Theta_{n,\theta_{0}}$).

θ0Πn,θ0()(θ0,x)=o(n1/4)\|\theta_{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ(\theta_{0},\mathcal{I}_{x})\|=o(n^{-1/4}).

Condition C4′ (Sufficiently small approximation error to $\dot{\psi}$ for $\Theta_{n,\theta_{0}}$ and convergence rate of $\theta_{n}^{*}$).

[ψ˙Πn,θ0(ψ˙)](θ0,x)θnθ0=op(n1/2)\|[\dot{\psi}-\Pi_{n,\theta_{0}}(\dot{\psi})]\circ(\theta_{0},\mathcal{I}_{x})\|\cdot\|\theta_{n}^{*}-\theta_{0}\|=o_{p}(n^{-1/2}).

Additional regularity conditions can be found in Appendix B.3. Since $\dot{\Psi}$ may depend on components of $P_{0}$ other than $\theta_{0}$, Condition C4′ may impose smoothness conditions on these components so that $\dot{\psi}$ can be well approximated by the chosen series. For example, in Example 2, Condition C4′ requires that $p_{0}^{\prime}/p_{0}$ and the propensity score be well approximated by the series; in Examples 3 and 4, Condition C4′ imposes the same requirement on the propensity score. We now present a theorem that establishes the efficiency of the plug-in estimator based on $\theta_{n}^{*}$.

Theorem 5 (Efficiency of plug-in estimator).

Under Conditions A1–A4, C1, C2, C3′, C4′, C6′, C7′, C8 and C9, $\Psi(\theta_{n}^{*})$ is an asymptotically linear estimator of $\Psi(\theta_{0})$ with influence function $v\mapsto\alpha_{0,\ell}^{-1}\{-\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](v)+\mathbb{E}_{P_{0}}[\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](V)]\}$, that is,

Ψ(θn)=Ψ(θ0)+1ni=1nα0,1{0[ψ˙(θ0,x)](Vi)+𝔼P0[0[ψ˙(θ0,x)](V)]}+op(n1/2).\Psi(\theta_{n}^{*})=\Psi(\theta_{0})+\frac{1}{n}\sum_{i=1}^{n}\alpha_{0,\ell}^{-1}\left\{-\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](V_{i})+\mathbb{E}_{P_{0}}\left[\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](V)\right]\right\}+o_{p}(n^{-1/2}).

As a consequence, $\sqrt{n}[\Psi(\theta_{n}^{*})-\Psi(\theta_{0})]\overset{d}{\rightarrow}N(0,\xi^{2})$ with $\xi^{2}:=\textnormal{Var}_{P_{0}}(\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](V))/\alpha_{0,\ell}^{2}$. In addition, under Conditions E1 and E2 in Appendix B.4, $\Psi(\theta_{n}^{*})$ is efficient under a nonparametric model.

Remark 4.

When Ψ\Psi depends on both θ0\theta_{0} and P0P_{0}, we can readily adapt this method as in Section 4.3.

We now present a condition for selecting KK via kk-fold CV, in parallel with Condition C5 from Section 4.4.

Condition C5′ (Bounded approximation error of $\dot{\psi}$ relative to $\mathcal{I}$).

There exists a constant C>0C>0 such that, with probability tending to one, ψ˙(θn0,x)ΠK,θn0(ψ˙)(θn0,x)Cθn0ΠK,θn0()(θn0,x)\|\dot{\psi}\circ(\theta_{n}^{0},\mathcal{I}_{x})-\Pi_{K,\theta_{n}^{0}}(\dot{\psi})\circ(\theta_{n}^{0},\mathcal{I}_{x})\|\leq C\|\theta_{n}^{0}-\Pi_{K,\theta_{n}^{0}}(\mathcal{I})\circ(\theta_{n}^{0},\mathcal{I}_{x})\| for all KK.

Remark 5.

Similarly to Condition C5, Condition C5′ requires that the identity function $\mathcal{I}$ is not contained in the span of finitely many terms of the chosen series and that $\dot{\psi}$ is sufficiently smooth so that it can be approximated well by the chosen series. However, Condition C5′ may be far more stringent than Condition C5. In fact, it may be overly stringent in practice. Since $\dot{\Psi}$ may depend on components of $P_{0}$ other than $\theta_{0}$, Condition C5′ may require these components to be sufficiently smooth. When a common candidate series such as the trigonometric series is used, a sufficient condition for Condition C5′ is that $\dot{\psi}$ is infinitely differentiable with bounded derivatives, which further imposes assumptions on the smoothness of other components of $P_{0}$. For example, in Example 2, a sufficient condition for Condition C5′ is that $p_{0}^{\prime}/p_{0}$ is infinitely differentiable with bounded derivatives; in Examples 3 and 4, a sufficient condition is that the propensity score function satisfies the same requirement. Due to the stringency of Condition C5′, we conduct a simulation in Section 5.3.2 to understand the performance of our proposed method when this condition is violated. The simulation appears to indicate that our method may be robust against violation of Condition C5′.

The following theorem shows that kk-fold CV can be used to select KK under certain conditions.

Theorem 6 (Efficiency of CV-based plug-in estimator).

Assume that Conditions A1–A4, C1, C2, C3′, C6′, C7′, C8 and C9 hold for a deterministic $K=K(n)$. Suppose that part (a) of Condition C7′ holds. With $\theta_{n}^{\sharp}:=\theta_{K^{*}}^{*}(\theta_{n}^{0})$, $\Psi(\theta_{n}^{\sharp})$ is an asymptotically linear estimator of $\Psi(\theta_{0})$ with influence function $v\mapsto\alpha_{0,\ell}^{-1}\{-\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](v)+\mathbb{E}_{P_{0}}[\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](V)]\}$, that is,

Ψ(θn)=Ψ(θ0)+1ni=1nα0,1{0[ψ˙(θ0,x)](Vi)+𝔼P0[0[ψ˙(θ0,x)](V)]}+op(n1/2).\Psi(\theta_{n}^{\sharp})=\Psi(\theta_{0})+\frac{1}{n}\sum_{i=1}^{n}\alpha_{0,\ell}^{-1}\left\{-\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](V_{i})+\mathbb{E}_{P_{0}}\left[\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](V)\right]\right\}+o_{p}(n^{-1/2}).

Therefore, $\sqrt{n}[\Psi(\theta_{n}^{\sharp})-\Psi(\theta_{0})]\overset{d}{\rightarrow}N(0,\xi^{2})$ with $\xi^{2}:=\textnormal{Var}_{P_{0}}(\ell_{0}^{\prime}[\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})](V))/\alpha_{0,\ell}^{2}$. In addition, under Conditions E1 and E2 in Appendix B.4, $\Psi(\theta_{n}^{\sharp})$ is efficient under a nonparametric model.

5.3 Simulation

In the following simulations, we consider the problem in Example 4. As we show in Appendix A, letting $g_{0}:x\mapsto P_{0}(A=1|X=x)$ be the propensity score and setting $\theta=(\mu_{0},\mu_{1})$, with $\ell(\theta):v\mapsto a[z-\mu_{1}(x)]^{2}+(1-a)[z-\mu_{0}(x)]^{2}$, the generalized data-adaptive series methodology may be used to obtain an efficient estimator. As in Section 4.5, we conduct two simulation studies, the first demonstrating Theorem 6 and the second exploring the robustness of CV against violation of Condition C5′.
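
Concretely, once fitted outcome regressions $\hat{\mu}_{0}$ and $\hat{\mu}_{1}$ for the two arms have been obtained (for instance via generalized data-adaptive series refits of an initial ML estimate), the plug-in estimator used below is simply the empirical variance of the fitted contrast; a minimal sketch (in Python, with hypothetical names) is:

import numpy as np

def plug_in_cate_variance(mu0_hat, mu1_hat, X):
    # Plug-in estimate of Var_{P_0}(mu_{0,1}(X) - mu_{0,0}(X)), with the marginal
    # distribution of X estimated by its empirical counterpart.
    contrast = mu1_hat(X) - mu0_hat(X)
    return np.mean((contrast - np.mean(contrast)) ** 2)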

5.3.1 Demonstration of Theorem 6

We choose $\theta_{0}$ to be a discontinuous function while $g_{0}$ is highly smooth. We compare the performance of plug-in estimators based on three different nonparametric regressions: (i) polynomial regression with the degree selected by 5-fold CV (poly), which results in a traditional sieve estimator, (ii) gradient boosting (xgb) [8, 9, 18, 19], and (iii) generalized data-adaptive trigonometric series estimation with gradient boosting as the initial ML fit and 5-fold CV to select the number of terms in the series (xgb.trig). We also compare these plug-in estimators with the one-step correction estimator [25] based on gradient boosting (xgb.1step). Further details of the simulation setting are provided in Appendix E.

Fig 6 presents nMSEn\cdot\text{MSE} and n|bias|\sqrt{n}\cdot|\text{bias}| for each estimator, whereas Table 4 presents the coverage probability of 95% Wald CIs based on these estimators. There are a few runs in the simulation with noticeably poor behavior, so we trimmed the most extreme values when computing MSE and bias in Fig 6 (1% of all Monte Carlo runs). The outliers may be caused by the performance of gradient boosting and the instability of 5-fold CV. In practice, the user may ensemble more ML methods and use 10-fold CV to mitigate such behavior. We note that xgb.trig and xgb.1step estimators perform well, while poly and xgb plug-in estimators do not appear to be efficient. Based on gradient boosting, our estimator and the one-step corrected estimator both appear to be efficient, but the construction of our estimator has the advantage of not requiring the analytic expression of an influence function.

Figure 6: The relative MSE, nMSE/ξ2n\cdot\text{MSE}/\xi^{2}, and the relative absolute bias, n|bias/Ψ(θ0)|\sqrt{n}\cdot|\text{bias}/\Psi(\theta_{0})|, of estimators of Ψ(θ0)=VarP0(μ0,1(X)μ0,0(X))\Psi(\theta_{0})=\textnormal{Var}_{P_{0}}(\mu_{0,1}(X)-\mu_{0,0}(X)) where μ0,a:x𝔼P0[Y|A=a,X=x]\mu_{0,a}:x\mapsto\mathbb{E}_{P_{0}}[Y|A=a,X=x]. ξ2:=P0IF2\xi^{2}:=P_{0}\text{IF}^{2} is the asymptotic variance that the nMSEn\cdot\text{MSE} of an AL estimator should converge to. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The y-axis for relative MSE is scaled based on logarithm for readability. Note that the nMSEn\cdot\text{MSE} for xgb.trig and xgb.1step tend to ξ2\xi^{2}, but those for poly and xgb do not.
Table 4: Coverage probability of 95% Wald CI based on estimators of Ψ(θ0)=VarP0(μ0,1(X)μ0,0(X))\Psi(\theta_{0})=\textnormal{Var}_{P_{0}}(\mu_{0,1}(X)-\mu_{0,0}(X)) where μ0,a:x𝔼P0[Y|A=a,X=x]\mu_{0,a}:x\mapsto\mathbb{E}_{P_{0}}[Y|A=a,X=x]. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The CI is constructed based on the influence function. The coverage probabilities for xgb.trig and xgb.1step are relatively close to 95%, but those for poly and xgb are not.
n poly xgb xgb.1step xgb.trig
500 0.85 0.76 0.89 0.90
1000 0.68 0.78 0.93 0.93
2000 0.44 0.81 0.93 0.92
5000 0.11 0.80 0.89 0.87
10000 0.00 0.79 0.92 0.90
20000 0.00 0.67 0.91 0.88

5.3.2 Violation of Condition C5′

We also study via simulation the behavior of our estimator when Condition C5′ is violated. We note that whether Condition C5′ holds depends on the smoothness of $g_{0}$. We choose $g_{0}$ to be rougher than $\mathcal{I}$: specifically, $g_{0}$ is an element of $C^{2}[-1,1]$ but not of $C^{3}[-1,1]$. Consequently, $\dot{\Psi}$ cannot be approximated by our generalized data-adaptive series as well as $\mathcal{I}$ can, but its smoothness is sufficient for the existence of a deterministic $K$ that achieves efficiency. Appendix E describes further details of this simulation setting.

Table 5 presents the performance of our estimator based on 5-fold CV. We observe that its scaled MSE appears to converge to one, but it is unclear whether its scaled bias converges to zero for large $n$, and so our method may be overly biased. The coverage of 95% Wald CIs is close to the nominal level, suggesting that the bias may be fairly small relative to the standard error of the estimator at the sample sizes considered. Therefore, according to this simulation, our generalized data-adaptive series methodology appears to be robust against violation of Condition C5′.

Table 5: Performance of the plug-in estimator of Ψ(θ0)=VarP0(μ0,1(X)μ0,0(X))\Psi(\theta_{0})=\textnormal{Var}_{P_{0}}(\mu_{0,1}(X)-\mu_{0,0}(X)) where μ0,a:x𝔼P0[Z|A=a,X=x]\mu_{0,a}:x\mapsto\mathbb{E}_{P_{0}}[Z|A=a,X=x] based on data-adaptive series. Here the propensity score g0:x𝔼P0[A|X=x]g_{0}:x\mapsto\mathbb{E}_{P_{0}}[A|X=x] is rough. The relative MSE is nMSE/ξ2n\cdot\text{MSE}/\xi^{2} where ξ2:=P0IF2\xi^{2}:=P_{0}\text{IF}^{2} is the asymptotic variance that the nMSEn\cdot\text{MSE} of an AL estimator should converge to; the root-nn abs relative bias is n|bias/Ψ(θ0)|\sqrt{n}|\text{bias}/\Psi(\theta_{0})|. The performance appears to be acceptable in view of small MSE and reasonable CI coverage.
n relative MSE root-nn absolute relative bias 95% Wald CI coverage
500 1.02 0.28 0.92
1000 1.13 0.26 0.91
2000 1.10 0.19 0.94
5000 1.03 0.02 0.93
10000 0.96 0.23 0.95
20000 0.99 0.24 0.94

6 Discussion

Numerous methods have been proposed to construct efficient estimators for statistical parameters under a nonparametric model, but each of them has one or more of the following undesirable limitations: (i) their construction may require specialized expertise that is not accessible to most statisticians; (ii) for any given data set, there may be little guidance, if any, on how to select a key tuning parameter; and (iii) they may require stringent smoothness conditions, especially on derivatives. In this paper, we propose two sieve-like methods that can partially overcome these difficulties.

Our first approach, namely that based on HAL, can be further generalized to the case in which the flexible fit is an empirical risk minimizer over a function class assumed to contain the unknown function. The key condition B2 may be modified in that case as long as it ensures that certain perturbations of the unknown function still lie in that function class. We note that our methods may also be applied under semiparametric models.

A major direction for future work is to construct valid CIs without knowledge of the influence function of the resulting plug-in estimator. The nonparametric bootstrap is in general invalid when the overall summary is not Hadamard differentiable and especially when the method relies on CV [2, 11], but a model-based bootstrap is a possible solution (Chapter 28 of [33]). However, in many cases only certain components of the true data-generating distribution must be estimated to obtain a plug-in estimator, while its variance may depend on other components that are not explicitly estimated. Therefore, generating valid model-based bootstrap samples is generally difficult.

Our proposed sieve-like methods may be used to construct efficient plug-in estimators for new applications in which the relevant theoretical results are difficult to derive. They may also inspire new methods to construct such estimators under weaker conditions.

Appendix A Modification of chosen norm for evaluating the conditions: case study of mean counterfactual outcome

In this appendix, we consider a parameter that requires a modification of the chosen norm for evaluating the conditions. In particular, we discuss estimating the counterfactual mean outcome in Example 3.

Let g0:xP0(A=1|X=x)g_{0}:x\mapsto P_{0}(A=1|X=x) be the propensity score function. A natural choice of the loss function is (θ):va[yθ(x)]2\ell(\theta):v\mapsto a[y-\theta(x)]^{2}. Indeed, learning a function with this loss function is equivalent to fitting a function within the stratum of observations that received treatment 1. Unfortunately, this loss function does not satisfy Condition A2 with L2(P0)L^{2}(P_{0})-norm, because P0{(θ)(θ0)}=P0{g0(θθ0)2}P_{0}\{\ell(\theta)-\ell(\theta_{0})\}=P_{0}\{g_{0}\cdot(\theta-\theta_{0})^{2}\} cannot be well approximated by α0,P0{(θθ0)2}/2\alpha_{0,\ell}P_{0}\{(\theta-\theta_{0})^{2}\}/2 for any constant α0,>0\alpha_{0,\ell}>0 unless g0g_{0} is a constant. One way to overcome this challenge is to choose the alternative inner product θ1,θ2g0:=P0{g0θ1θ2}\langle\theta_{1},\theta_{2}\rangle_{g_{0}}:=P_{0}\{g_{0}\theta_{1}\theta_{2}\} and its induced norm g0\|\cdot\|_{g_{0}}. In this case, Condition A2 is satisfied once \|\cdot\| is replaced by g0\|\cdot\|_{g_{0}} in the condition statement. Under this choice, Ψθ0=P0(θθ0)=1/g0,θθ0g0\Psi^{\prime}_{\theta_{0}}=P_{0}(\theta-\theta_{0})=\langle 1/g_{0},\theta-\theta_{0}\rangle_{g_{0}}. We may redefine the corresponding Ψ˙\dot{\Psi} similarly as the function that satisfies

Ψθ0=Ψ˙,θθ0g0,\Psi^{\prime}_{\theta_{0}}=\langle\dot{\Psi},\theta-\theta_{0}\rangle_{g_{0}},

and it immediately follows that $\dot{\Psi}=1/g_{0}$. Moreover, under a strong positivity condition, namely that $g_{0}(X)\geq\delta_{g}>0$ almost surely for some $\delta_{g}$, which is a typical condition in the causal inference literature [33, 36], it is straightforward to show that $\delta_{g}\|\cdot\|\leq\|\cdot\|_{g_{0}}\leq\|\cdot\|$; that is, $\|\cdot\|_{g_{0}}$ is equivalent to the $L^{2}(P_{0})$-norm. Using this fact, it can be shown that all other conditions with respect to the $L^{2}(P_{0})$-inner product are equivalent to the corresponding conditions with respect to $\langle\cdot,\cdot\rangle_{g_{0}}$.
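
To verify the norm equivalence directly, note that the pointwise bounds $\delta_{g}\leq g_{0}(X)\leq 1$ give

\delta_{g}\|\theta\|^{2}\;\leq\;P_{0}\{g_{0}\theta^{2}\}\;=\;\|\theta\|_{g_{0}}^{2}\;\leq\;\|\theta\|^{2}\qquad\text{for all }\theta\in L^{2}(P_{0}),

so that $\sqrt{\delta_{g}}\,\|\cdot\|\leq\|\cdot\|_{g_{0}}\leq\|\cdot\|$; the inequality stated above follows since $\delta_{g}\leq 1$.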

Therefore, the data-adaptive series can be applied to estimation of the counterfactual mean outcome under our conditions for L2(P0)L^{2}(P_{0})-inner product. If we use the targeted form in Remark 2, then we need a flexible estimator of g0g_{0} and the procedure is almost identical to a TMLE [33]. If we use the generalized data-adaptive series, we would require sufficient amount of smoothness for g0()g_{0}(\cdot). In the latter case, the change in norm when evaluating the conditions is a purely technical device and the estimation procedure is the same as would have been used if we had used the L2(P0)L^{2}(P_{0})-norm. We also note that the same argument may be used to show that in Example 4, with (θ):va[zμ1(x)]2+(1a)[zμ0(x)]2\ell(\theta):v\mapsto a[z-\mu_{1}(x)]^{2}+(1-a)[z-\mu_{0}(x)]^{2} being the usual squared-error loss, we may choose the alternative inner product θ1,θ2g0:=P0{θ1diag(1g0,g0)θ2}\langle\theta_{1},\theta_{2}\rangle_{g_{0}}:=P_{0}\{\theta_{1}^{\top}\cdot\mathrm{diag}(1-g_{0},g_{0})\cdot\theta_{2}\} and find that Ψ˙=(2/(1g0)[(μ01μ00)P0(μ01μ00)],2/g0[(μ01μ00)P0(μ01μ00)])\dot{\Psi}=(-2/(1-g_{0})\cdot[(\mu_{01}-\mu_{00})-P_{0}(\mu_{01}-\mu_{00})],2/g_{0}\cdot[(\mu_{01}-\mu_{00})-P_{0}(\mu_{01}-\mu_{00})])^{\top}, as we did in Section 5.3.

Appendix B Additional conditions

Throughout the rest of this appendix, we use CC to denote a general absolute positive constant that can vary line by line.

B.1 HAL

Condition B3 (Empirical processes conditions).

For any fixed ϑΘv,M\vartheta\in\Theta_{{\mathrm{v}},M} and some Δ>0\Delta>0, it holds that (θ)\ell(\theta), 0[θθ0]\ell_{0}^{\prime}[\theta-\theta_{0}] and {r[θθ0]r[θ+δ(ϑθ)θ0]}/δ\{r[\theta-\theta_{0}]-r[\theta+\delta(\vartheta-\theta)-\theta_{0}]\}/\delta are càdlàg for all θΘv,M\theta\in\Theta_{{\mathrm{v}},M} and all δ[0,Δ]\delta\in[0,\Delta]. Moreover, the following terms are all finite:

supθΘv,M(θ)v,supθΘv,M0[θθ0]v,supθΘv,M,δ[0,Δ]r[θθ0]r[θθ0+δ(ϑθ)]δv.\sup_{\theta\in\Theta_{{\mathrm{v}},M}}\|\ell(\theta)\|_{{\mathrm{v}}},\,\sup_{\theta\in\Theta_{{\mathrm{v}},M}}\|\ell_{0}^{\prime}[\theta-\theta_{0}]\|_{{\mathrm{v}}},\,\sup_{\theta\in\Theta_{{\mathrm{v}},M},\delta\in[0,\Delta]}\left\|\frac{r[\theta-\theta_{0}]-r[\theta-\theta_{0}+\delta(\vartheta-\theta)]}{\delta}\right\|_{{\mathrm{v}}}.

In addition, 0[θ^nθ0]\|\ell_{0}^{\prime}[\hat{\theta}_{n}-\theta_{0}]\| and supδ[0,Δ]{r[θ^nθ0]r[θ^nθ0+δ(ϑθ^n)]}/δ\sup_{\delta\in[0,\Delta]}\left\|\{r[\hat{\theta}_{n}-\theta_{0}]-r[\hat{\theta}_{n}-\theta_{0}+\delta(\vartheta-\hat{\theta}_{n})]\}/\delta\right\| converge to 0 in probability.

Condition B4 (Finite variance of influence function).

ξ2:=VarP0(0[Ψ˙](V))/α0,2<\xi^{2}:=\textnormal{Var}_{P_{0}}(\ell_{0}^{\prime}[\dot{\Psi}](V))/\alpha_{0,\ell}^{2}<\infty.

B.2 Data-adaptive series

Condition C6 (Local Lipschitz continuity of Πn,θ0()\Pi_{n,\theta_{0}}(\mathcal{I})).

For sufficiently large nn,

Πn,θ0()θΠn,θ0()θ0Cθθ0\|\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{0}\|\leq C\|\theta-\theta_{0}\|

for all θΘ\theta\in\Theta with θθ0n1/4\|\theta-\theta_{0}\|\leq n^{-1/4}.

Condition C7 (Local Lipschitz continuity of ψ˙\dot{\psi} and Πn,θ0(ψ˙)\Pi_{n,\theta_{0}}(\dot{\psi})).

For sufficiently large nn, for all θΘ\theta\in\Theta with θθ0n1/4\|\theta-\theta_{0}\|\leq n^{-1/4},

  1. (a)

    ψ˙θψ˙θ0Cθθ0\|\dot{\psi}\circ\theta-\dot{\psi}\circ\theta_{0}\|\leq C\|\theta-\theta_{0}\|;

  2. (b)

    Πn,θ0(ψ˙)θΠn,θ0(ψ˙)θ0Cθθ0\|\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}\|\leq C\|\theta-\theta_{0}\|.

Condition C8 (Empirical process conditions).

There exists some constant Δ>0\Delta>0 such that

supδ[0,Δ]|(PnP0){r[θnθ0]r[πn((1δ)θn+δ(±Ψ˙+θ0))θ0]δ}|=op(n1/2),\displaystyle\sup_{\delta\in[0,\Delta]}\left|(P_{n}-P_{0})\left\{\frac{r[\theta_{n}^{*}-\theta_{0}]-r[\pi_{n}((1-\delta)\theta_{n}^{*}+\delta(\pm\dot{\Psi}+\theta_{0}))-\theta_{0}]}{\delta}\right\}\right|=o_{p}(n^{-1/2}),
(PnP0)0[(±Ψ˙+θ0)πn(±Ψ˙+θ0)]=op(n1/2),\displaystyle(P_{n}-P_{0})\ell_{0}^{\prime}[(\pm\dot{\Psi}+\theta_{0})-\pi_{n}(\pm\dot{\Psi}+\theta_{0})]=o_{p}(n^{-1/2}),
(PnP0)0[θnθ0]=op(n1/2).\displaystyle(P_{n}-P_{0})\ell_{0}^{\prime}[\theta_{n}^{*}-\theta_{0}]=o_{p}(n^{-1/2}).
Condition C9 (Finite variance of influence function).

ξ2:=VarP0(0[Ψ˙](V))/α0,2<\xi^{2}:=\textnormal{Var}_{P_{0}}(\ell_{0}^{\prime}[\dot{\Psi}](V))/\alpha_{0,\ell}^{2}<\infty.

B.3 Generalized data-adaptive series

Condition C6′ (Local Lipschitz continuity of projected $\mathcal{I}$ for $\Theta_{n,\theta_{0}}$).

For sufficiently large nn, Πn,θ0()(θ,x)Πn,θ0()(θ0,x)Cθθ0\|\Pi_{n,\theta_{0}}(\mathcal{I})\circ(\theta,\mathcal{I}_{x})-\Pi_{n,\theta_{0}}(\mathcal{I})\circ(\theta_{0},\mathcal{I}_{x})\|\leq C\|\theta-\theta_{0}\| for all θθ0n1/4\|\theta-\theta_{0}\|\leq n^{-1/4}.

Condition C7′ (Local Lipschitz continuity of $\dot{\psi}$ and its projection for $\Theta_{n,\theta_{0}}$).

For sufficiently large nn, for all θθ0n1/4\|\theta-\theta_{0}\|\leq n^{-1/4},

  1. (a)

    ψ˙(θ,x)ψ˙(θ0,x)Cθθ0\|\dot{\psi}\circ(\theta,\mathcal{I}_{x})-\dot{\psi}\circ(\theta_{0},\mathcal{I}_{x})\|\leq C\|\theta-\theta_{0}\|;

  2. (b)

    Πn,θ0(ψ˙)(θ,x)Πn,θ0(ψ˙)(θ0,x)Cθθ0\|\Pi_{n,\theta_{0}}(\dot{\psi})\circ(\theta,\mathcal{I}_{x})-\Pi_{n,\theta_{0}}(\dot{\psi})\circ(\theta_{0},\mathcal{I}_{x})\|\leq C\|\theta-\theta_{0}\|.

B.4 Conditions for efficiency of the plug-in estimator

Define a collection of submodels

{{PH,δ:δBH}:H}\left\{\{P_{H,\delta}:\delta\in B_{H}\subseteq\mathbb{R}\}:H\in\mathscr{H}\right\}

for which: (i) \mathscr{H} is a subset of L02(P0)L_{0}^{2}(P_{0}) and the L02(P0)L_{0}^{2}(P_{0})-closure of its linear span is L02(P0)L_{0}^{2}(P_{0}); and (ii) each {PH,δ:δBH}\{P_{H,\delta}:\delta\in B_{H}\subseteq\mathbb{R}\} is a regular univariate parametric submodel that passes through P0P_{0} and has score HH for δ\delta at δ=0\delta=0. For each HH\in\mathscr{H} and δBH\delta\in B_{H}, we define θH,δargminθΘPH,δ(θ)\theta_{H,\delta}\in\operatorname*{argmin}_{\theta\in\Theta}P_{H,\delta}\ell(\theta). In this appendix, for all small oo and big OO notations, we let δ0\delta\rightarrow 0 with HH fixed.

Condition E1 (Sufficiently close risk minimizer).

For any given HH\in\mathscr{H}, θH,δθ0=o(δ1/2)\|\theta_{H,\delta}-\theta_{0}\|=o(\delta^{1/2}).

Condition E2 (Quadratic behavior of loss function remainder near 0).

For any given HH\in\mathscr{H} and ϑ\vartheta, there exists positive δ=o(δ)\delta^{\prime}=o(\delta) such that (PH,δP0){r[(1δ)(θH,δθ0)+δΨ˙]r[θH,δθ0]}/δ=o(δ)(P_{H,\delta}-P_{0})\{r[(1-\delta^{\prime})(\theta_{H,\delta}-\theta_{0})+\delta^{\prime}\dot{\Psi}]-r[\theta_{H,\delta}-\theta_{0}]\}/\delta^{\prime}=o(\delta).

Appendix C Discussion of technical conditions for data-adaptive series and its generalization

C.1 Theorem 2

Condition C2 usually imposes an upper bound on the growth rate of KK. To see this, we show that Condition C2 is equivalent to a term being op(n1/4)o_{p}(n^{-1/4}), and an upper bound of this term is controlled by KK. Let θnargminθΘnP0(θ)\theta_{n}^{\dagger}\in\operatorname*{argmin}_{\theta\in\Theta_{n}}P_{0}\ell(\theta) be the true-risk minimizer in Θn\Theta_{n}. Under Conditions A2, C1, C3 and C6, by Lemma 5, it follows that Condition C2 is equivalent to requiring that θnθn=op(n1/4)\|\theta_{n}^{*}-\theta_{n}^{\dagger}\|=o_{p}(n^{-1/4}). Note that θn\theta_{n}^{*} minimizes the empirical risk in Θn\Theta_{n}, and M-estimation theory [34] can show that θnθn\|\theta_{n}^{*}-\theta_{n}^{\dagger}\| can be upper bounded by an empirical process term, whose upper bound is related to the complexity of Θn\Theta_{n}, namely how fast KK grows with sample size. To ensure this bound is op(n1/4)o_{p}(n^{-1/4}), KK must not grow too quickly.

Condition C3 assumes that the identity function can be well approximated by the series ϕk\phi_{k} with the specified number of terms KK in the L2(Pθ0)L^{2}(P_{\theta_{0}}) sense. If Span{ϕ1,,ϕK}\text{Span}\{\phi_{1},\ldots,\phi_{K}\} does not contain \mathcal{I} for any KK, then sufficiently many terms must be included to satisfy this condition; that is, this condition imposes a lower bound on the rate at which KK should grow with nn. Even if Span{ϕ1,,ϕK}\text{Span}\{\phi_{1},\ldots,\phi_{K}\} does contain \mathcal{I} for some finite KK, this condition still requires that KK is not too small.

Condition C4 is implied by the following condition in view of Lemma 3:

Condition C4s.

[ψ˙Πn,θ0(ψ˙)]θ0=o(n1/4)\|[\dot{\psi}-\Pi_{n,\theta_{0}}(\dot{\psi})]\circ\theta_{0}\|=o(n^{-1/4}).

This condition is similar to Condition C3. However, in general, we do not expect ψ˙\dot{\psi} to be contained in Span{ϕ1,,ϕK}\text{Span}\{\phi_{1},\ldots,\phi_{K}\} for any KK, and hence this condition generally imposes a lower bound on the rate of KK. Note that Condition C4s is stronger than Condition C4, and there are interesting examples where C4 holds but C4s fails to hold. Indeed, if θn\theta_{n}^{*} converges to θ0\theta_{0} at a rate much faster than n1/4n^{-1/4}, then C4 can be satisfied even if [ψ˙Πn,θ0(ψ˙)]θ0\|[\dot{\psi}-\Pi_{n,\theta_{0}}(\dot{\psi})]\circ\theta_{0}\| decays to zero in probability relatively slowly — that is, the convergence rate of θn\theta_{n}^{*} can compensate for the approximation error of ψ˙\dot{\psi}. This is one way in which we can benefit from using flexible ML algorithms to estimate θ0\theta_{0}: if θn0\theta_{n}^{0} converges to θ0\theta_{0} at a fast rate, then we can expect θn\theta_{n}^{*} to also have a fast convergence rate.

Conditions C2, C3 and C4 are not stringent provided sufficient smoothness on derivatives of ψ˙\dot{\psi} and a reasonable series. For example, as noted in [4], when ψ˙\dot{\psi} has a bounded pp-th order derivative and the polynomial, trigonometric series or spline with degree at least p+1p+1 is used, then if K2/n0K^{2}/n\rightarrow 0 (K3/n0K^{3}/n\rightarrow 0 for polynomial series), the term in Condition C2 is Op(K/n)O_{p}(\sqrt{K/n}); the terms in Condition C3 and the sufficient Condition C4s are O(Kp/q)O(K^{-p/q}). Therefore, we can select KK to grow at a rate faster than nq/(4p)n^{q/(4p)} and slower than n1/2n^{1/2} (n1/3n^{1/3} for polynomial series). If pp is large, then this allows for a wide range of rates for KK. Typically Ψ˙\dot{\Psi} (and hence ψ˙\dot{\psi}) is only related to the summary of interest Ψ\Psi but not the true function θ0\theta_{0}. For example, for the summary Ψ(θ)=P0(fθ)\Psi(\theta)=P_{0}(f\circ\theta) at the beginning of Section 4.1, ψ˙=f\dot{\psi}=f^{\prime} is variation independent of θ0\theta_{0}. It is often the case that Ψ\Psi is smooth and so is ψ˙\dot{\psi}, so pp is often sufficiently large for this window to be wide.
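
For instance, under the rates quoted above with $q=1$ and $p=4$ (so that $\dot{\psi}$ has a bounded fourth derivative), a trigonometric series permits any $K\propto n^{r}$ with $r\in(1/16,1/2)$, since $n^{q/(4p)}=n^{1/16}$.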

Condition C6 is usually easy to satisfy. Since $\Pi_{n,\theta_{0}}(\mathcal{I})$ is a linear combination of $\{\phi_{k}:k\in\{1,\ldots,K\}\}$ and is an approximation of the highly smooth function $\mathcal{I}$, if the series $\phi_{k}$ is smooth, then we can expect $\Pi_{n,\theta_{0}}(\mathcal{I})$ to be Lipschitz uniformly over $n$, that is, Condition C6 to hold. For example, this condition holds when polynomial series, cubic splines or trigonometric series are used.

Condition C7 imposes Lipschitz continuity conditions on ψ˙\dot{\psi} and Πn,θ0(ψ˙)\Pi_{n,\theta_{0}}(\dot{\psi}) uniformly over nn. The Lipschitz continuity of ψ˙\dot{\psi} has been discussed above. As for Πn,θ0(ψ˙)\Pi_{n,\theta_{0}}(\dot{\psi}), similarly to Condition C6, as long as the series ϕk\phi_{k} being used is smooth, Πn,θ0(ψ˙)\Pi_{n,\theta_{0}}(\dot{\psi}) would be Lipschitz continuous uniformly over nn.

C.2 Theorem 5

The conditions are similar to those in Theorem 2. However, Condition C4′ can be more stringent than Condition C4. For generalized data-adaptive series, the dimension of the argument of the series is larger. Hence, as noted in [4], C4′ may require more smoothness of $\dot{\psi}$ in order that $\dot{\psi}$ can be well approximated by $\Pi_{n,\theta_{0}}(\dot{\psi})$. Moreover, in the generalized setting, the smoothness of $\dot{\psi}$ generally depends not only on $\Psi$ but also on components of $P_{0}$, so the amount of smoothness of $\dot{\psi}$ may be more limited in practice.

It is also worth noting that, similarly to Theorem 2, a sufficient condition for Condition C4′ is the following:

Condition C4s′.

[ψ˙Πn,θ0(ψ˙)](θ0,x)=o(n1/4)\|[\dot{\psi}-\Pi_{n,\theta_{0}}(\dot{\psi})]\circ(\theta_{0},\mathcal{I}_{x})\|=o(n^{-1/4}).

Appendix D Lemmas and technical proofs

D.1 Highly Adaptive Lasso (HAL)

Proof of Theorem 1.

Under Conditions A2 and B1B3, Lemma 1 and its corollary in [29] show that θ^nθ0=op(n1/4)\|\hat{\theta}_{n}-\theta_{0}\|=o_{p}(n^{-1/4}).

We show that the small perturbations of θ^n\hat{\theta}_{n} in certain directions are contained in Θv,M\Theta_{{\mathrm{v}},M}. Let ϑδ=θ^n+δ(Ψ˙+θ0θ^n)\vartheta_{\delta}=\hat{\theta}_{n}+\delta(\dot{\Psi}+\theta_{0}-\hat{\theta}_{n}) be a path indexed by δ\delta (0δ<1)(0\leq\delta<1) that is a perturbation of θ^n\hat{\theta}_{n}. Note that for all δ\delta, ϑδ\vartheta_{\delta} is càdlàg by Condition B1 and we have that

ϑδv=(1δ)θ^n+δ(Ψ˙+θ0)v(1δ)θ^nv+δ(Ψ˙v+θ0v)(1δ)M+δM=M\|\vartheta_{\delta}\|_{{\mathrm{v}}}=\|(1-\delta)\hat{\theta}_{n}+\delta(\dot{\Psi}+\theta_{0})\|_{{\mathrm{v}}}\leq(1-\delta)\|\hat{\theta}_{n}\|_{{\mathrm{v}}}+\delta(\|\dot{\Psi}\|_{{\mathrm{v}}}+\|\theta_{0}\|_{{\mathrm{v}}})\leq(1-\delta)M+\delta M=M

by Condition B2. Hence ϑδΘv,M\vartheta_{\delta}\in\Theta_{{\mathrm{v}},M}. The same result holds for the path ϑ~δ:=θ^n+δ(Ψ˙+θ0θ^n)\tilde{\vartheta}_{\delta}:=\hat{\theta}_{n}+\delta(-\dot{\Psi}+\theta_{0}-\hat{\theta}_{n}).

Combining this observation with the $P_{0}$-Donsker property of $\Theta_{{\mathrm{v}},M^{\prime}}$ for any fixed $M^{\prime}>0$ [10] and Conditions A1–A2 and B4, we have that all of the conditions of Theorem 1 in [28] are satisfied with all sieves being $\Theta_{{\mathrm{v}},M}$. The desired asymptotic linearity result follows. The efficiency result is shown in Appendix D.3. ∎

Proof of Lemma 1.

Recall that 𝒳d\mathcal{X}\subseteq\mathbb{R}^{d}. Similar to x(l)x^{(l)}, let x(u)=inf{x:P0(Xx)=1}x^{(u)}=\inf\{x:P_{0}(X\leq x)=1\} where inf\inf and \leq are entrywise. To avoid clumsy notations, in this proof we drop the subscript in θ0\theta_{0} and use θ\theta instead. This should not introduce confusion because other functions (e.g., an estimator of θ0\theta_{0}) are not involved in the statement or proof. Using the results reviewed in Section 3.1,

Ψ˙v\displaystyle\|\dot{\Psi}\|_{{\mathrm{v}}} =|Ψ˙(x())|+s{1,2,,d},sxs()xs(u)|Ψ˙s(du)|\displaystyle=|\dot{\Psi}(x^{(\ell)})|+\sum_{s\subseteq\{1,2,\ldots,d\},s\neq\emptyset}\int_{x^{(\ell)}_{s}}^{x^{(u)}_{s}}|\dot{\Psi}_{s}(du)|
=|Ψ˙(x())|+s{1,2,,d},sxs()xs(u)|ψ˙(z)||z=θs(u)|θs(du)|.\displaystyle=|\dot{\Psi}(x^{(\ell)})|+\sum_{s\subseteq\{1,2,\ldots,d\},s\neq\emptyset}\int_{x^{(\ell)}_{s}}^{x^{(u)}_{s}}|\dot{\psi}^{\prime}(z)|\Big{|}_{z=\theta_{s}(u)}|\theta_{s}(du)|.

Since

|θ(x)|\displaystyle|\theta(x)| =|θ(x())+s{1,2,,d},sxs()xsθs(du)|\displaystyle=\left|\theta(x^{(\ell)})+\sum_{s\subseteq\{1,2,\ldots,d\},s\neq\emptyset}\int_{x^{(\ell)}_{s}}^{x_{s}}\theta_{s}(du)\right|
|θ(x())|+s{1,2,,d},sxs()xs|θs(du)|\displaystyle\leq|\theta(x^{(\ell)})|+\sum_{s\subseteq\{1,2,\ldots,d\},s\neq\emptyset}\int_{x^{(\ell)}_{s}}^{x_{s}}|\theta_{s}(du)|
|θ(x())|+s{1,2,,d},sxs()xs(u)|θs(du)|=θv,\displaystyle\leq|\theta(x^{(\ell)})|+\sum_{s\subseteq\{1,2,\ldots,d\},s\neq\emptyset}\int_{x^{(\ell)}_{s}}^{x^{(u)}_{s}}|\theta_{s}(du)|=\|\theta\|_{{\mathrm{v}}},

we have |ψ˙(z)||z=θs(u)supz:|z|θ0v|ψ˙(z)|=B|\dot{\psi}^{\prime}(z)|\Big{|}_{z=\theta_{s}(u)}\leq\sup_{z^{\prime}:|z^{\prime}|\leq\|\theta_{0}\|_{{\mathrm{v}}}}|\dot{\psi}^{\prime}(z^{\prime})|=B for all x()ux(u)x^{(\ell)}\leq u\leq x^{(u)}, so

Ψ˙v|Ψ˙(x())|+s{1,2,,d},sxs()xs(u)B|θs(du)||Ψ˙(x())|+Bθ0v.\|\dot{\Psi}\|_{{\mathrm{v}}}\leq|\dot{\Psi}(x^{(\ell)})|+\sum_{s\subseteq\{1,2,\ldots,d\},s\neq\emptyset}\int_{x^{(\ell)}_{s}}^{x^{(u)}_{s}}B|\theta_{s}(du)|\leq|\dot{\Psi}(x^{(\ell)})|+B\|\theta_{0}\|_{{\mathrm{v}}}.

Lemma 2 (CV-selected bound not much smaller than the bound of the true function’s variation norm).

Suppose that Condition B1 holds, θ0\theta_{0} is càdlàg, θ0v<\|\theta_{0}\|_{{\mathrm{v}}}<\infty and for any MM, supθΘv,M(θ)<\sup_{\theta\in\Theta_{{\mathrm{v}},M}}\|\ell(\theta)\|<\infty. Let MnM_{n} be a (possibly random) sequence such that P0{(θ^n,Mn)(θ0)}=op(1)P_{0}\{\ell(\hat{\theta}_{n,M_{n}})-\ell(\theta_{0})\}=o_{p}(1). Then for any ϵ>0\epsilon>0, with probability tending to one, Mnθ0vϵM_{n}\geq\|\theta_{0}\|_{{\mathrm{v}}}-\epsilon. Therefore, for any fixed ϵ>0\epsilon>0, with probability tending to one, Mn+ϵ(θ0vϵ)+ϵ=θ0vM_{n}+\epsilon\geq(\|\theta_{0}\|_{{\mathrm{v}}}-\epsilon)+\epsilon=\|\theta_{0}\|_{{\mathrm{v}}}.

Proof of Lemma 2.

We prove by contradiction. Suppose the claim is not true, i.e. there exists ϵ,δ>0\epsilon,\delta>0 such that P(Mn<θ0vϵ)δP(M_{n}<\|\theta_{0}\|_{{\mathrm{v}}}-\epsilon)\geq\delta for all n𝒩n\in\mathcal{N}, where 𝒩\mathcal{N} is an infinite set. Let θ0,MargminθΘv,MP0(θ)\theta_{0,M}\in\operatorname*{argmin}_{\theta\in\Theta_{{\mathrm{v}},M}}P_{0}\ell(\theta). Then for all n𝒩n\in\mathcal{N}, with probability at least δ\delta,

P0{(θ^n,Mn)(θ0)}\displaystyle P_{0}\{\ell(\hat{\theta}_{n,M_{n}})-\ell(\theta_{0})\} =P0{(θ^n,Mn)(θ0,Mn)}+P0{(θ0,Mn)(θ0)}\displaystyle=P_{0}\{\ell(\hat{\theta}_{n,M_{n}})-\ell(\theta_{0,M_{n}})\}+P_{0}\{\ell(\theta_{0,M_{n}})-\ell(\theta_{0})\}
P0{(θ0,Mn)(θ0)}\displaystyle\geq P_{0}\{\ell(\theta_{0,M_{n}})-\ell(\theta_{0})\}
P0{(θ0,θ0vϵ)(θ0)},\displaystyle\geq P_{0}\{\ell(\theta_{0,\|\theta_{0}\|_{{\mathrm{v}}}-\epsilon})-\ell(\theta_{0})\},

which is a positive constant since the function class $\Theta_{{\mathrm{v}},\|\theta_{0}\|_{{\mathrm{v}}}-\epsilon}$ does not contain $\theta_{0}$, so that this term represents a non-negligible bias. This contradicts the assumption that $P_{0}\{\ell(\hat{\theta}_{n,M_{n}})-\ell(\theta_{0})\}=o_{p}(1)$, and hence the desired result follows. ∎

Therefore, if Ψ˙vF(θ0v)\|\dot{\Psi}\|_{{\mathrm{v}}}\leq F(\|\theta_{0}\|_{{\mathrm{v}}}) for a known increasing function FF, then with probability tending to one, Mn+ϵ+F(Mn+ϵ)M_{n}+\epsilon+F(M_{n}+\epsilon) is a valid bound on θ^nv\|\hat{\theta}_{n}\|_{\mathrm{v}} that can be used to obtain an efficient plug-in estimator. Moreover, if the bound is loose, i.e. Ψ˙v<F(θ0v)\|\dot{\Psi}\|_{{\mathrm{v}}}<F(\|\theta_{0}\|_{{\mathrm{v}}}), and FF is continuous, then there exists some ϵ>0\epsilon>0 such that Ψ˙vF(θ0vϵ)ϵ\|\dot{\Psi}\|_{{\mathrm{v}}}\leq F(\|\theta_{0}\|_{{\mathrm{v}}}-\epsilon)-\epsilon and hence θ0v+Ψ˙vMn+F(Mn)\|\theta_{0}\|_{{\mathrm{v}}}+\|\dot{\Psi}\|_{{\mathrm{v}}}\leq M_{n}+F(M_{n}) with probability tending to one.

Note that this lemma only concerns learning a function-valued feature but not estimating Ψ(θ0)\Psi(\theta_{0}). There are examples where Ψ˙\dot{\Psi} depends on components of P0P_{0}, say η0\eta_{0}, other than θ0\theta_{0}. However, if η0\eta_{0} can be learned via HAL, then Lemma 2 can be applied. Therefore, if it is known that Ψ˙vF(θ0v,η0v)\|\dot{\Psi}\|_{{\mathrm{v}}}\leq F(\|\theta_{0}\|_{{\mathrm{v}}},\|\eta_{0}\|_{{\mathrm{v}}}) for a known increasing function FF, then we can use a bound on θ^nv\|\hat{\theta}_{n}\|_{\mathrm{v}} obtained in a similar fashion as above from the sequence MnM_{n} to construct an efficient plug-in estimator Ψ(θ^n)\Psi(\hat{\theta}_{n}).
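
As a simple illustration with a single argument, if $\Psi(\theta)=P_{0}\theta^{2}$ as in Section 4.5.1, then $\dot{\Psi}=2\theta_{0}$ and hence $\|\dot{\Psi}\|_{{\mathrm{v}}}=2\|\theta_{0}\|_{{\mathrm{v}}}$, so $F(m)=2m$ is a valid choice and, with probability tending to one, $M_{n}+\epsilon+F(M_{n}+\epsilon)=3(M_{n}+\epsilon)$ bounds $\|\theta_{0}\|_{{\mathrm{v}}}+\|\dot{\Psi}\|_{{\mathrm{v}}}$.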

Now consider obtaining MnM_{n} by kk-fold CV from a set of candidate bounds. Then, under Conditions B1B3, by (i) Lemma 1 and its corollary of [29], and (ii) the oracle inequality for kk-fold CV in [31], P0{(θ^n,Mn)(θ0)}=op(n1/4)P_{0}\{\ell(\hat{\theta}_{n,M_{n}})-\ell(\theta_{0})\}=o_{p}(n^{-1/4}) if (i) one candidate bound is no smaller than θ0v\|\theta_{0}\|_{\mathrm{v}}, and (ii) the number of candidate bounds is fixed. Therefore, the above results apply to this case.

D.2 Data-adaptive series estimation

We first present and prove two lemmas that lead to Theorems 2 and 5.

Lemma 3 (Convergence rate of the sieve estimator).

Under Conditions C1, C3 and C6, πn(θ0)θ0=op(n1/4)\|\pi_{n}(\theta_{0})-\theta_{0}\|=o_{p}(n^{-1/4}). Under an additional condition C2, θnθ0=op(n1/4)\|\theta_{n}^{*}-\theta_{0}\|=o_{p}(n^{-1/4}).

Proof of Lemma 3.

By triangle inequality, πn(θ0)θ0θ0θn0+θn0πn(θn0)+πn(θn0)πn(θ0)\|\pi_{n}(\theta_{0})-\theta_{0}\|\leq\|\theta_{0}-\theta_{n}^{0}\|+\|\theta_{n}^{0}-\pi_{n}(\theta_{n}^{0})\|+\|\pi_{n}(\theta_{n}^{0})-\pi_{n}(\theta_{0})\|. We bound these three terms separately.

Term 1: By Condition C1, θ0θn0=op(n1/4)\|\theta_{0}-\theta_{n}^{0}\|=o_{p}(n^{-1/4}).

Term 2: By the definition of projection operator,

θn0πn(θn0)=θn0Πn,θn0()θn0θn0Πn,θ0()θn0.\|\theta_{n}^{0}-\pi_{n}(\theta_{n}^{0})\|=\|\theta_{n}^{0}-\Pi_{n,\theta_{n}^{0}}(\mathcal{I})\circ\theta_{n}^{0}\|\leq\|\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{n}^{0}\|.

We bound the right-hand side by showing this term is close to θ0Πn,θ0()θ0\|\theta_{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{0}\| up to an op(n1/4)o_{p}(n^{-1/4}) term. By the reverse triangle inequality and the triangle inequality,

|θn0Πn,θ0()θn0θ0Πn,θ0()θ0|\displaystyle\left|\|\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{n}^{0}\|-\|\theta_{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{0}\|\right|
[θn0Πn,θ0()θn0][θ0Πn,θ0()θ0]\displaystyle\quad\leq\|[\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{n}^{0}]-[\theta_{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{0}]\|
=[θn0θ0][Πn,θ0()θn0Πn,θ0()θ0]\displaystyle\quad=\|[\theta_{n}^{0}-\theta_{0}]-[\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{0}]\|
θn0θ0+Πn,θ0()θn0Πn,θ0()θ0\displaystyle\quad\leq\|\theta_{n}^{0}-\theta_{0}\|+\|\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{0}\|
θn0θ0+Cθn0θ0,\displaystyle\quad\leq\|\theta_{n}^{0}-\theta_{0}\|+C\|\theta_{n}^{0}-\theta_{0}\|, (Condition C6)

which is op(n1/4)o_{p}(n^{-1/4}) by Condition C1. Therefore, by Condition C3,

θn0πn(θn0)θn0Πn,θ0()θn0θ0Πn,θ0()θ0+op(n1/4)=op(n1/4).\|\theta_{n}^{0}-\pi_{n}(\theta_{n}^{0})\|\leq\|\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{n}^{0}\|\leq\|\theta_{0}-\Pi_{n,\theta_{0}}(\mathcal{I})\circ\theta_{0}\|+o_{p}(n^{-1/4})=o_{p}(n^{-1/4}).

Term 3: By the definition of projection and Condition C1, πn(θn0)πn(θ0)θn0θ0=op(n1/4)\|\pi_{n}(\theta_{n}^{0})-\pi_{n}(\theta_{0})\|\leq\|\theta_{n}^{0}-\theta_{0}\|=o_{p}(n^{-1/4}).

Conclusion from the three bounds: πn(θ0)θ0=op(n1/4)\|\pi_{n}(\theta_{0})-\theta_{0}\|=o_{p}(n^{-1/4}).

If, in addition, Condition C2 also holds, then θnθ0πn(θ0)θ0+θnπn(θ0)=op(n1/4)\|\theta_{n}^{*}-\theta_{0}\|\leq\|\pi_{n}(\theta_{0})-\theta_{0}\|+\|\theta_{n}^{*}-\pi_{n}(\theta_{0})\|=o_{p}(n^{-1/4}). ∎

The same result holds for the generalized data-adaptive series under Conditions C1, C6′, C3′ and C2 (if relevant). The proof is almost identical and is therefore omitted.

Lemma 4 (Approximation error to ψ˙\dot{\psi}).

Under Condition C7, ψ˙θ0πn(ψ˙θ0)Cθn0θ0+ψ˙θ0Πn,θ0(ψ˙)θ0\|\dot{\psi}\circ\theta_{0}-\pi_{n}(\dot{\psi}\circ\theta_{0})\|\leq C\|\theta_{n}^{0}-\theta_{0}\|+\|\dot{\psi}\circ\theta_{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}\|. Therefore, under Conditions C1C4, ψ˙θ0πn(ψ˙θ0)θnθ0=op(n1/2)\|\dot{\psi}\circ\theta_{0}-\pi_{n}(\dot{\psi}\circ\theta_{0})\|\cdot\|\theta_{n}^{*}-\theta_{0}\|=o_{p}(n^{-1/2}).

Proof of Lemma 4.

By the definition of the projection operator and triangle inequality,

ψ˙θ0πn(ψ˙θ0)ψ˙θ0πn(ψ˙θn0)ψ˙θ0ψ˙θn0+ψ˙θn0πn(ψ˙θn0).\|\dot{\psi}\circ\theta_{0}-\pi_{n}(\dot{\psi}\circ\theta_{0})\|\leq\|\dot{\psi}\circ\theta_{0}-\pi_{n}(\dot{\psi}\circ\theta_{n}^{0})\|\leq\|\dot{\psi}\circ\theta_{0}-\dot{\psi}\circ\theta_{n}^{0}\|+\|\dot{\psi}\circ\theta_{n}^{0}-\pi_{n}(\dot{\psi}\circ\theta_{n}^{0})\|.

We bound the two terms on the right-hand side separately.

Term 1: By Condition C7, ψ˙θ0ψ˙θn0Cθ0θn0\|\dot{\psi}\circ\theta_{0}-\dot{\psi}\circ\theta_{n}^{0}\|\leq C\|\theta_{0}-\theta_{n}^{0}\|.

Term 2: This term can be bounded as in the proof of Lemma 3. By the reverse triangle inequality and the triangle inequality,

\displaystyle\left|\|\dot{\psi}\circ\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{n}^{0}\|-\|\dot{\psi}\circ\theta_{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}\|\right|
[ψ˙θn0Πn,θ0(ψ˙)θn0][ψ˙θ0Πn,θ0(ψ˙)θ0]\displaystyle\quad\leq\|[\dot{\psi}\circ\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{n}^{0}]-[\dot{\psi}\circ\theta_{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}]\|
=[ψ˙θn0ψ˙θ0][Πn,θ0(ψ˙)θn0Πn,θ0(ψ˙)θ0]\displaystyle\quad=\|[\dot{\psi}\circ\theta_{n}^{0}-\dot{\psi}\circ\theta_{0}]-[\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}]\|
ψ˙θn0ψ˙θ0+Πn,θ0(ψ˙)θn0Πn,θ0(ψ˙)θ0\displaystyle\quad\leq\|\dot{\psi}\circ\theta_{n}^{0}-\dot{\psi}\circ\theta_{0}\|+\|\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}\|
Cθn0θ0+Cθn0θ0\displaystyle\quad\leq C\|\theta_{n}^{0}-\theta_{0}\|+C\|\theta_{n}^{0}-\theta_{0}\| (Condition C7)
=Cθn0θ0.\displaystyle\quad=C\|\theta_{n}^{0}-\theta_{0}\|.

Therefore, by the definition of the projection operator and Condition C7,

ψ˙θn0πn(ψ˙θn0)\displaystyle\|\dot{\psi}\circ\theta_{n}^{0}-\pi_{n}(\dot{\psi}\circ\theta_{n}^{0})\| ψ˙θn0Πn,θ0(ψ˙)θn0\displaystyle\leq\|\dot{\psi}\circ\theta_{n}^{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{n}^{0}\|
ψ˙θ0Πn,θ0(ψ˙)θ0+Cθn0θ0.\displaystyle\leq\|\dot{\psi}\circ\theta_{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}\|+C\|\theta_{n}^{0}-\theta_{0}\|.

Conclusion from the two bounds: ψ˙θ0πn(ψ˙θ0)Cθn0θ0+ψ˙θ0Πn,θ0(ψ˙)θ0\|\dot{\psi}\circ\theta_{0}-\pi_{n}(\dot{\psi}\circ\theta_{0})\|\leq C\|\theta_{n}^{0}-\theta_{0}\|+\|\dot{\psi}\circ\theta_{0}-\Pi_{n,\theta_{0}}(\dot{\psi})\circ\theta_{0}\|.

Under Conditions C1C4, using Lemma 3, it follows that ψ˙θ0πn(ψ˙θ0)θnθ0=op(n1/2)\|\dot{\psi}\circ\theta_{0}-\pi_{n}(\dot{\psi}\circ\theta_{0})\|\cdot\|\theta_{n}^{*}-\theta_{0}\|=o_{p}(n^{-1/2}). ∎

Note that \pi_{n} is a linear operator. Lemmas 3 and 4, along with the other conditions, essentially verify the assumptions of Corollary 2 in [28]. We prove the asymptotic linearity result of Theorem 2 by a similar argument, as follows.

Proof of Theorem 2.

We note that

Pn(θn)\displaystyle P_{n}\ell(\theta_{n}^{*}) =Pn(θ0)+P0[(θn)(θ0)]+(PnP0)[(θn)(θ0)]\displaystyle=P_{n}\ell(\theta_{0})+P_{0}[\ell(\theta_{n}^{*})-\ell(\theta_{0})]+(P_{n}-P_{0})[\ell(\theta_{n}^{*})-\ell(\theta_{0})]
=Pn(θ0)+P0[(θn)(θ0)]+(PnP0)0[θnθ0]\displaystyle=P_{n}\ell(\theta_{0})+P_{0}[\ell(\theta_{n}^{*})-\ell(\theta_{0})]+(P_{n}-P_{0})\ell_{0}^{\prime}[\theta_{n}^{*}-\theta_{0}]
+(PnP0)r[θnθ0].\displaystyle\quad+(P_{n}-P_{0})r[\theta_{n}^{*}-\theta_{0}].

Let ϵn\epsilon_{n} be an arbitrary sequence of positive real numbers that is o(n1/2)o(n^{-1/2}). We may replace θn\theta_{n}^{*} with πn((1ϵn)θn+ϵn(θ0±Ψ˙))\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}\pm\dot{\Psi})) in the above equation. We first consider πn((1ϵn)θn+ϵn(θ0+Ψ˙))\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi})):

Pn(πn((1ϵn)θn+ϵn(θ0+Ψ˙)))=Pn(θ0)+P0[(πn((1ϵn)θn+ϵn(θ0+Ψ˙)))(θ0)]+(PnP0)0[πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ0]+(PnP0)r[πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ0].\displaystyle\begin{split}&P_{n}\ell\left(\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))\right)\\ &=P_{n}\ell(\theta_{0})+P_{0}[\ell(\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi})))-\ell(\theta_{0})]\\ &\quad+(P_{n}-P_{0})\ell_{0}^{\prime}[\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}]\\ &\quad+(P_{n}-P_{0})r[\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}].\end{split} (1)

Take the difference between the above two equations. By the linearity of 0\ell_{0}^{\prime} and πn\pi_{n}, we have that

Pn(πn((1ϵn)θn+ϵn(θ0+Ψ˙)))Pn(θn)\displaystyle P_{n}\ell\left(\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))\right)-P_{n}\ell(\theta_{n}^{*})
=P0[(πn((1ϵn)θn+ϵn(θ0+Ψ˙)))(θ0)]P0[(θn)(θ0)]\displaystyle=P_{0}[\ell(\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi})))-\ell(\theta_{0})]-P_{0}[\ell(\theta_{n}^{*})-\ell(\theta_{0})]
+(PnP0)0[πn((1ϵn)θn+ϵn(θ0+Ψ˙))θn]\displaystyle\quad+(P_{n}-P_{0})\ell_{0}^{\prime}[\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{n}^{*}]
+(PnP0){r[πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ0]r[θnθ0]}.\displaystyle\quad+(P_{n}-P_{0})\{r[\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}]-r[\theta_{n}^{*}-\theta_{0}]\}.

We next analyze the three lines on the right-hand side of the above equation separately.

Line 1: Under Condition A2,

P0[(πn((1ϵn)θn+ϵn(θ0+Ψ˙)))(θ0)]P0[(θn)(θ0)]\displaystyle P_{0}[\ell(\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi})))-\ell(\theta_{0})]-P_{0}[\ell(\theta_{n}^{*})-\ell(\theta_{0})]
=α0,2πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ02α0,2θnθ02\displaystyle=\frac{\alpha_{0,\ell}}{2}\|\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}\|^{2}-\frac{\alpha_{0,\ell}}{2}\|\theta_{n}^{*}-\theta_{0}\|^{2}
+op(πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ02+θnθ02)\displaystyle\quad+o_{p}\left(\|\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}\|^{2}+\|\theta_{n}^{*}-\theta_{0}\|^{2}\right)
We subtract and add (1ϵn)θn+ϵn(θ0+Ψ˙)(1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}) in the first term. By the fact that πn\pi_{n} is linear and πn(θn)=θn\pi_{n}(\theta_{n}^{*})=\theta_{n}^{*}, the display continues as
=α0,2{πn((1ϵn)θn+ϵn(θ0+Ψ˙))((1ϵn)θn+ϵn(θ0+Ψ˙))}\displaystyle=\frac{\alpha_{0,\ell}}{2}\|\{\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))\}
+{(1ϵn)θn+ϵn(θ0+Ψ˙)θ0}2\displaystyle\quad+\{(1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi})-\theta_{0}\}\|^{2}
α0,2θnθ02+op(πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ02+θnθ02)\displaystyle\quad-\frac{\alpha_{0,\ell}}{2}\|\theta_{n}^{*}-\theta_{0}\|^{2}+o_{p}\left(\|\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}\|^{2}+\|\theta_{n}^{*}-\theta_{0}\|^{2}\right)
=α0,2ϵn{πn(θ0+Ψ˙)(θ0+Ψ˙)}+(θnθ0)+ϵn(Ψ˙+θ0θn)2α0,2θnθ02\displaystyle=\frac{\alpha_{0,\ell}}{2}\|\epsilon_{n}\{\pi_{n}(\theta_{0}+\dot{\Psi})-(\theta_{0}+\dot{\Psi})\}+(\theta_{n}^{*}-\theta_{0})+\epsilon_{n}(\dot{\Psi}+\theta_{0}-\theta_{n}^{*})\|^{2}-\frac{\alpha_{0,\ell}}{2}\|\theta_{n}^{*}-\theta_{0}\|^{2}
+op(πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ02+θnθ02)\displaystyle\quad+o_{p}\left(\|\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}\|^{2}+\|\theta_{n}^{*}-\theta_{0}\|^{2}\right)
=ϵnα0,θnθ0,Ψ˙+ϵn2α0,2πn(θ0+Ψ˙)θn2\displaystyle=\epsilon_{n}\alpha_{0,\ell}\langle\theta_{n}^{*}-\theta_{0},\dot{\Psi}\rangle+\epsilon_{n}^{2}\frac{\alpha_{0,\ell}}{2}\|\pi_{n}(\theta_{0}+\dot{\Psi})-\theta_{n}^{*}\|^{2}
+ϵnα0,πn(θ0)θ0,θnθ0+ϵnα0,πn(Ψ˙)Ψ˙,θnθ0ϵnα0,θnθ02\displaystyle\quad+\epsilon_{n}\alpha_{0,\ell}\langle\pi_{n}(\theta_{0})-\theta_{0},\theta_{n}^{*}-\theta_{0}\rangle+\epsilon_{n}\alpha_{0,\ell}\langle\pi_{n}(\dot{\Psi})-\dot{\Psi},\theta_{n}^{*}-\theta_{0}\rangle-\epsilon_{n}\alpha_{0,\ell}\|\theta_{n}^{*}-\theta_{0}\|^{2}
+op(πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ02+θnθ02)\displaystyle\quad+o_{p}\left(\|\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}\|^{2}+\|\theta_{n}^{*}-\theta_{0}\|^{2}\right)
By the Cauchy–Schwarz inequality, the display continues as
ϵnα0,θnθ0,Ψ˙+ϵn2α0,2πn(θ0+Ψ˙)θn2\displaystyle\leq\epsilon_{n}\alpha_{0,\ell}\langle\theta_{n}^{*}-\theta_{0},\dot{\Psi}\rangle+\epsilon_{n}^{2}\frac{\alpha_{0,\ell}}{2}\|\pi_{n}(\theta_{0}+\dot{\Psi})-\theta_{n}^{*}\|^{2}
+ϵnα0,πn(θ0)θ0θnθ0+ϵnα0,πn(Ψ˙)Ψ˙θnθ0ϵnα0,θnθ02\displaystyle\quad+\epsilon_{n}\alpha_{0,\ell}\|\pi_{n}(\theta_{0})-\theta_{0}\|\|\theta_{n}^{*}-\theta_{0}\|+\epsilon_{n}\alpha_{0,\ell}\|\pi_{n}(\dot{\Psi})-\dot{\Psi}\|\|\theta_{n}^{*}-\theta_{0}\|-\epsilon_{n}\alpha_{0,\ell}\|\theta_{n}^{*}-\theta_{0}\|^{2}
+op(πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ02+θnθ02)\displaystyle\quad+o_{p}\left(\|\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}\|^{2}+\|\theta_{n}^{*}-\theta_{0}\|^{2}\right)
By Lemmas 34 and the assumption that ϵn=o(n1/2)\epsilon_{n}=o(n^{-1/2}), the display continues as
=ϵnα0,θnθ0,Ψ˙+ϵnop(n1/2)+op(πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ02+θnθ02).\displaystyle=\epsilon_{n}\alpha_{0,\ell}\langle\theta_{n}^{*}-\theta_{0},\dot{\Psi}\rangle+\epsilon_{n}o_{p}(n^{-1/2})+o_{p}\left(\|\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}\|^{2}+\|\theta_{n}^{*}-\theta_{0}\|^{2}\right).

Line 2: We subtract and add (1ϵn)θn+ϵn(θ0+Ψ˙)(1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}). By linearity of 0\ell_{0}^{\prime}, Condition C8, and the fact that πn(θn)=θn\pi_{n}(\theta_{n}^{*})=\theta_{n}^{*}, we have that

(PnP0)0[πn((1ϵn)θn+ϵn(θ0+Ψ˙))θn]\displaystyle(P_{n}-P_{0})\ell_{0}^{\prime}[\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{n}^{*}]
=(PnP0)0[(1ϵn)θn+ϵn(θ0+Ψ˙)θn]\displaystyle=(P_{n}-P_{0})\ell_{0}^{\prime}[(1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi})-\theta_{n}^{*}]
+ϵn(PnP0)0[πn(θ0+Ψ˙)(θ0+Ψ˙)]\displaystyle\quad+\epsilon_{n}(P_{n}-P_{0})\ell_{0}^{\prime}[\pi_{n}(\theta_{0}+\dot{\Psi})-(\theta_{0}+\dot{\Psi})]
=ϵn(PnP0)0[Ψ˙]ϵn(PnP0)0[θnθ0]+ϵnop(n1/2)\displaystyle=\epsilon_{n}(P_{n}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]-\epsilon_{n}(P_{n}-P_{0})\ell_{0}^{\prime}[\theta_{n}^{*}-\theta_{0}]+\epsilon_{n}o_{p}(n^{-1/2})
=ϵn(PnP0)0[Ψ˙]+ϵnop(n1/2).\displaystyle=\epsilon_{n}(P_{n}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]+\epsilon_{n}o_{p}(n^{-1/2}).

Line 3: By Condition C8, this term is ϵnop(n1/2)\epsilon_{n}o_{p}(n^{-1/2}).

Conclusion of the three lines: It holds that

Pn(πn((1ϵn)θn+ϵn(θ0+Ψ˙)))Pn(θn)\displaystyle P_{n}\ell\left(\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))\right)-P_{n}\ell(\theta_{n}^{*})
ϵnα0,θnθ0,Ψ˙+ϵn(PnP0)0[Ψ˙]\displaystyle\leq\epsilon_{n}\alpha_{0,\ell}\langle\theta_{n}^{*}-\theta_{0},\dot{\Psi}\rangle+\epsilon_{n}(P_{n}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]
+ϵnop(n1/2)+op(πn((1ϵn)θn+ϵn(θ0+Ψ˙))θ02+θnθ02).\displaystyle\quad+\epsilon_{n}o_{p}(n^{-1/2})+o_{p}\left(\|\pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi}))-\theta_{0}\|^{2}+\|\theta_{n}^{*}-\theta_{0}\|^{2}\right).

Since θn\theta_{n}^{*} is an empirical risk minimizer, the left-hand side is non-negative. Thus,

0θnθ0,Ψ˙+(PnP0)α0,10[Ψ˙]+op(n1/2).0\leq\langle\theta_{n}^{*}-\theta_{0},\dot{\Psi}\rangle+(P_{n}-P_{0})\alpha_{0,\ell}^{-1}\ell_{0}^{\prime}[\dot{\Psi}]+o_{p}(n^{-1/2}).

Similarly, by replacing \pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}+\dot{\Psi})) with \pi_{n}((1-\epsilon_{n})\theta_{n}^{*}+\epsilon_{n}(\theta_{0}-\dot{\Psi})) in (1), we derive that

0θnθ0,Ψ˙(PnP0)α0,10[Ψ˙]+op(n1/2).0\leq-\langle\theta_{n}^{*}-\theta_{0},\dot{\Psi}\rangle-(P_{n}-P_{0})\alpha_{0,\ell}^{-1}\ell_{0}^{\prime}[\dot{\Psi}]+o_{p}(n^{-1/2}).

Therefore, |θnθ0,Ψ˙+(PnP0)α0,10[Ψ˙]|=op(n1/2)|\langle\theta_{n}^{*}-\theta_{0},\dot{\Psi}\rangle+(P_{n}-P_{0})\alpha_{0,\ell}^{-1}\ell_{0}^{\prime}[\dot{\Psi}]|=o_{p}(n^{-1/2}). By Conditions A3A4 and Lemma 3,

Ψ(θn)Ψ(θ0)\displaystyle\Psi(\theta_{n}^{*})-\Psi(\theta_{0}) =θnθ0,Ψ˙+op(n1/2)\displaystyle=\langle\theta_{n}^{*}-\theta_{0},\dot{\Psi}\rangle+o_{p}(n^{-1/2})
=(PnP0)α0,10[Ψ˙]+op(n1/2).\displaystyle=-(P_{n}-P_{0})\alpha_{0,\ell}^{-1}\ell_{0}^{\prime}[\dot{\Psi}]+o_{p}(n^{-1/2}).

The asymptotic linearity of Ψ(θn)\Psi(\theta_{n}^{*}) follows. We prove the efficiency in Appendix D.3. ∎

The proof of Theorem 5 is almost identical.

Next, we present and prove a lemma that allows us to interpret Condition C2 as an upper bound on the growth rate of K.

Lemma 5.

Under Conditions A2, C1, C3 and C6 (or their counterparts for the generalized data-adaptive series), \|\pi_{n}(\theta_{0})-\theta_{n}^{\dagger}\|=o_{p}(n^{-1/4}).

Proof of Lemma 5.

By the definition of \theta_{n}^{\dagger} and Condition A2, we have

θnθ02CP0{(θn)(θ0)}CP0{(πn(θ0))(θ0)}Cπn(θ0)θ02,\|\theta_{n}^{\dagger}-\theta_{0}\|^{2}\leq CP_{0}\{\ell(\theta_{n}^{\dagger})-\ell(\theta_{0})\}\leq CP_{0}\{\ell(\pi_{n}(\theta_{0}))-\ell(\theta_{0})\}\leq C\|\pi_{n}(\theta_{0})-\theta_{0}\|^{2},

the right-hand side of which is op(n1/2)o_{p}(n^{-1/2}) by Lemma 3 (or its corresponding version under Conditions C6 and C3). Therefore, θnθ0=op(n1/4)\|\theta_{n}^{\dagger}-\theta_{0}\|=o_{p}(n^{-1/4}) and hence πn(θ0)θnπn(θ0)θ0+θnθ0=op(n1/4)\|\pi_{n}(\theta_{0})-\theta_{n}^{\dagger}\|\leq\|\pi_{n}(\theta_{0})-\theta_{0}\|+\|\theta_{n}^{\dagger}-\theta_{0}\|=o_{p}(n^{-1/4}). ∎

We finally prove the efficiency of the data-adaptive series estimator with K selected by CV.

Proof of Theorem 4.

By Lemma 3 and Condition A2, for the deterministic K whose existence is assumed, P_{0}\{\ell(\theta_{K}^{*}(\theta_{n}^{0}))-\ell(\theta_{0})\}\leq C\|\theta_{K}^{*}(\theta_{n}^{0})-\theta_{0}\|^{2}=o_{p}(n^{-1/2}). By the oracle inequality for CV in [31], P_{0}\{\ell(\theta_{n}^{\sharp})-\ell(\theta_{0})\}=o_{p}(n^{-1/2}). By Condition A2, \|\theta_{n}^{\sharp}-\theta_{0}\|^{2}\leq CP_{0}\{\ell(\theta_{n}^{\sharp})-\ell(\theta_{0})\}=o_{p}(n^{-1/2}) and hence \|\theta_{n}^{\sharp}-\theta_{0}\|=o_{p}(n^{-1/4}). So, with probability tending to one,

ψ˙θn0πK,θn0(ψ˙θn0)\displaystyle\|\dot{\psi}\circ\theta_{n}^{0}-\pi_{K^{*},\theta_{n}^{0}}(\dot{\psi}\circ\theta_{n}^{0})\| =ψ˙θn0ΠK,θn0(ψ˙)θn0\displaystyle=\|\dot{\psi}\circ\theta_{n}^{0}-\Pi_{K^{*},\theta_{n}^{0}}(\dot{\psi})\circ\theta_{n}^{0}\|
Cθn0ΠK,θn0()θn0\displaystyle\leq C\|\theta_{n}^{0}-\Pi_{K^{*},\theta_{n}^{0}}(\mathcal{I})\circ\theta_{n}^{0}\| (Condition C5)
Cθn0θn\displaystyle\leq C\|\theta_{n}^{0}-\theta_{n}^{\sharp}\| (definition of the projection operator)
C(θn0θ0+θnθ0),\displaystyle\leq C(\|\theta_{n}^{0}-\theta_{0}\|+\|\theta_{n}^{\sharp}-\theta_{0}\|), (triangle inequality)

which is op(n1/4)o_{p}(n^{-1/4}) by Condition C1. Hence,

ψ˙θ0πK,θn0(ψ˙θ0)\displaystyle\|\dot{\psi}\circ\theta_{0}-\pi_{K^{*},\theta_{n}^{0}}(\dot{\psi}\circ\theta_{0})\| ψ˙θ0πK,θn0(ψ˙θn0)\displaystyle\leq\|\dot{\psi}\circ\theta_{0}-\pi_{K^{*},\theta_{n}^{0}}(\dot{\psi}\circ\theta_{n}^{0})\|
ψ˙θ0ψ˙θn0+ψ˙θn0πK,θn0(ψ˙θn0)\displaystyle\leq\|\dot{\psi}\circ\theta_{0}-\dot{\psi}\circ\theta_{n}^{0}\|+\|\dot{\psi}\circ\theta_{n}^{0}-\pi_{K^{*},\theta_{n}^{0}}(\dot{\psi}\circ\theta_{n}^{0})\|
Cθn0θ0+op(n1/4),\displaystyle\leq C\|\theta_{n}^{0}-\theta_{0}\|+o_{p}(n^{-1/4}), (Condition C7)

which is op(n1/4)o_{p}(n^{-1/4}) by Condition C1.

This bounds the approximation error \|\dot{\psi}\circ\theta_{0}-\pi_{K^{*},\theta_{n}^{0}}(\dot{\psi}\circ\theta_{0})\| for \dot{\psi}, a result analogous to Lemma 4 combined with Conditions C1 and C4. As in the proof of Theorem 2, together with the other conditions, the assumptions of Corollary 2 in [28] are essentially satisfied, and an almost identical argument shows that \Psi(\theta_{n}^{\sharp}) is an asymptotically linear estimator of \Psi(\theta_{0}). We prove the efficiency in Appendix D.3. ∎

D.3 Efficiency

Proof of efficiency of the proposed estimators.

It is sufficient to show that the influence function of our proposed estimators is the canonical gradient under a nonparametric model. Let H\in\mathscr{H} be fixed. In the rest of this proof, all small-o and big-O notation is understood as \delta\rightarrow 0. The proof is similar to the proof of asymptotic linearity in [28], except that the estimator of \theta_{0} and the empirical distribution P_{n} are replaced by \theta_{H,\delta} and P_{H,\delta}, respectively.

Let δ\delta^{\prime} satisfy Condition E2. We note that

PH,δ(θH,δ)\displaystyle P_{H,\delta}\ell(\theta_{H,\delta}) =PH,δ(θ0)+P0[(θH,δ)(θ0)]+(PH,δP0)[(θH,δ)(θ0)]\displaystyle=P_{H,\delta}\ell(\theta_{0})+P_{0}[\ell(\theta_{H,\delta})-\ell(\theta_{0})]+(P_{H,\delta}-P_{0})[\ell(\theta_{H,\delta})-\ell(\theta_{0})]
=PH,δ(θ0)+P0[(θH,δ)(θ0)]+(PH,δP0)0[θH,δθ0]\displaystyle=P_{H,\delta}\ell(\theta_{0})+P_{0}[\ell(\theta_{H,\delta})-\ell(\theta_{0})]+(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\theta_{H,\delta}-\theta_{0}]
+(PH,δP0)r[θH,δθ0].\displaystyle\quad+(P_{H,\delta}-P_{0})r[\theta_{H,\delta}-\theta_{0}].

We also note that (1δ)θH,δ+δ(θ0±Ψ˙)Θ(1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}\pm\dot{\Psi})\in\Theta if |δ||\delta| is sufficiently small. Then, similarly, by replacing θH,δ\theta_{H,\delta} with (1δ)θH,δ+δ(θ0+Ψ˙)(1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}+\dot{\Psi}) in the above equation, we have that

PH,δ((1δ)θH,δ+δ(θ0+Ψ˙))=PH,δ(θ0)+P0[((1δ)θH,δ+δ(θ0+Ψ˙))(θ0)]+(PH,δP0)0[(1δ)θH,δ+δ(θ0+Ψ˙)θ0]+(PH,δP0)r[(1δ)(θH,δθ0)+δΨ˙].\displaystyle\begin{split}&P_{H,\delta}\ell((1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}+\dot{\Psi}))\\ &=P_{H,\delta}\ell(\theta_{0})+P_{0}[\ell((1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}+\dot{\Psi}))-\ell(\theta_{0})]\\ &\quad+(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[(1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}+\dot{\Psi})-\theta_{0}]\\ &\quad+(P_{H,\delta}-P_{0})r[(1-\delta^{\prime})(\theta_{H,\delta}-\theta_{0})+\delta^{\prime}\dot{\Psi}].\end{split} (2)

Take the difference between the above two equations. By the linearity of 0\ell_{0}^{\prime}, we have that

PH,δ((1δ)θH,δ+δ(θ0+Ψ˙))PH,δ(θH,δ)\displaystyle P_{H,\delta}\ell((1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}+\dot{\Psi}))-P_{H,\delta}\ell(\theta_{H,\delta})
=P0[((1δ)θH,δ+δ(θ0+Ψ˙))(θ0)]P0[(θH,δ)(θ0)]\displaystyle=P_{0}[\ell((1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}+\dot{\Psi}))-\ell(\theta_{0})]-P_{0}[\ell(\theta_{H,\delta})-\ell(\theta_{0})]
+δ(PH,δP0)0[Ψ˙θH,δ+θ0]\displaystyle\quad+\delta^{\prime}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}-\theta_{H,\delta}+\theta_{0}]
+(PH,δP0){r[(1δ)(θH,δθ0)+δΨ˙]r[θH,δθ0]}\displaystyle\quad+(P_{H,\delta}-P_{0})\{r[(1-\delta^{\prime})(\theta_{H,\delta}-\theta_{0})+\delta^{\prime}\dot{\Psi}]-r[\theta_{H,\delta}-\theta_{0}]\}
=α0,2(1δ)(θH,δθ0)+δΨ˙2α0,2θH,δθ02\displaystyle=\frac{\alpha_{0,\ell}}{2}\|(1-\delta^{\prime})(\theta_{H,\delta}-\theta_{0})+\delta^{\prime}\dot{\Psi}\|^{2}-\frac{\alpha_{0,\ell}}{2}\|\theta_{H,\delta}-\theta_{0}\|^{2}
+o(θH,δθ02+(1δ)(θH,δθ0)+δΨ˙2)\displaystyle\quad+o\left(\|\theta_{H,\delta}-\theta_{0}\|^{2}+\|(1-\delta^{\prime})(\theta_{H,\delta}-\theta_{0})+\delta^{\prime}\dot{\Psi}\|^{2}\right) (Condition A2)
+δ(PH,δP0)0[Ψ˙]δ(PH,δP0)0[θH,δθ0]+δo(δ)\displaystyle\quad+\delta^{\prime}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]-\delta^{\prime}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\theta_{H,\delta}-\theta_{0}]+\delta^{\prime}o(\delta) (Condition E2)
\displaystyle=\delta^{\prime}\alpha_{0,\ell}\langle\theta_{H,\delta}-\theta_{0},\dot{\Psi}\rangle-\delta^{\prime}\alpha_{0,\ell}\|\theta_{H,\delta}-\theta_{0}\|^{2}+\delta^{\prime 2}\frac{\alpha_{0,\ell}}{2}\|\dot{\Psi}-\theta_{H,\delta}+\theta_{0}\|^{2}
+o(θH,δθ02+(1δ)(θH,δθ0)+δΨ˙2)\displaystyle\quad+o\left(\|\theta_{H,\delta}-\theta_{0}\|^{2}+\|(1-\delta^{\prime})(\theta_{H,\delta}-\theta_{0})+\delta^{\prime}\dot{\Psi}\|^{2}\right)
+δ(PH,δP0)0[Ψ˙]+δo(δ)\displaystyle\quad+\delta^{\prime}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]+\delta^{\prime}o(\delta)
\displaystyle\leq\delta^{\prime}\alpha_{0,\ell}\langle\theta_{H,\delta}-\theta_{0},\dot{\Psi}\rangle+\delta^{\prime 2}\frac{\alpha_{0,\ell}}{2}\|\dot{\Psi}-\theta_{H,\delta}+\theta_{0}\|^{2}
+o(θH,δθ02+(1δ)(θH,δθ0)+δΨ˙2)\displaystyle\quad+o\left(\|\theta_{H,\delta}-\theta_{0}\|^{2}+\|(1-\delta^{\prime})(\theta_{H,\delta}-\theta_{0})+\delta^{\prime}\dot{\Psi}\|^{2}\right)
+δ(PH,δP0)0[Ψ˙]+δo(δ).\displaystyle\quad+\delta^{\prime}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]+\delta^{\prime}o(\delta).

Since the left-hand side of the above display is nonnegative, by Condition E1, we have that

0\displaystyle 0 θH,δθ0,Ψ˙+α0,1(PH,δP0)0[Ψ˙]+O(δ)+o(δ)\displaystyle\leq\langle\theta_{H,\delta}-\theta_{0},\dot{\Psi}\rangle+\alpha_{0,\ell}^{-1}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]+O(\delta^{\prime})+o(\delta)
=θH,δθ0,Ψ˙+α0,1(PH,δP0)0[Ψ˙]+o(δ).\displaystyle=\langle\theta_{H,\delta}-\theta_{0},\dot{\Psi}\rangle+\alpha_{0,\ell}^{-1}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]+o(\delta).

Similarly, by replacing (1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}+\dot{\Psi}) with (1-\delta^{\prime})\theta_{H,\delta}+\delta^{\prime}(\theta_{0}-\dot{\Psi}) in (2), we show that 0\leq-\langle\theta_{H,\delta}-\theta_{0},\dot{\Psi}\rangle-\alpha_{0,\ell}^{-1}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]+o(\delta). Therefore, |\langle\theta_{H,\delta}-\theta_{0},\dot{\Psi}\rangle+\alpha_{0,\ell}^{-1}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]|=o(\delta) and

Ψ(θH,δ)Ψ(θ0)\displaystyle\Psi(\theta_{H,\delta})-\Psi(\theta_{0}) =θH,δθ0,Ψ˙+O(θH,δθ02)\displaystyle=\langle\theta_{H,\delta}-\theta_{0},\dot{\Psi}\rangle+O(\|\theta_{H,\delta}-\theta_{0}\|^{2})
=α0,1(PH,δP0)0[Ψ˙]+o(δ)+O(θH,δθ02)\displaystyle=-\alpha_{0,\ell}^{-1}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]+o(\delta)+O(\|\theta_{H,\delta}-\theta_{0}\|^{2})
=α0,1(PH,δP0)0[Ψ˙]+o(δ).\displaystyle=-\alpha_{0,\ell}^{-1}(P_{H,\delta}-P_{0})\ell_{0}^{\prime}[\dot{\Psi}]+o(\delta). (Condition E1)

Consequently, limδ0[Ψ(θH,δ)Ψ(θ0)]/δ=P0{α0,10[Ψ˙]H}\lim_{\delta\rightarrow 0}[\Psi(\theta_{H,\delta})-\Psi(\theta_{0})]/\delta=P_{0}\{-\alpha_{0,\ell}^{-1}\ell_{0}^{\prime}[\dot{\Psi}]\cdot H\} and hence the canonical gradient of Ψ\Psi under a nonparametric model is α0,1{0[Ψ˙]+P00[Ψ˙]}\alpha_{0,\ell}^{-1}\{-\ell_{0}^{\prime}[\dot{\Psi}]+P_{0}\ell_{0}^{\prime}[\dot{\Psi}]\}. Since the influence functions of our asymptotically linear estimators are equal to this canonical gradient, our proposed estimators are efficient under a nonparametric model. ∎

Appendix E Simulation setting details

In all simulations, since \theta_{0}(x)=\mathbb{E}_{P_{0}}[Z|X=x] is the conditional mean function, the loss function is taken to be the square loss \ell(\theta):(x,z)\mapsto(z-\theta(x))^{2}.

E.1 HAL

In the simulation, we generate data from the distribution defined by

X\sim\text{N}(0,1),\quad\theta_{0}(x)=\exp\{-(-1+2x+2x^{2})/2\},\quad Z|X=x\sim\text{Exponential}(\text{rate}=1/\theta_{0}(x)).

We consider sample sizes 500, 1000, 2000, 5000 and 10000, with 1000 replicates for each scenario. We set M.gcv+ to be 3.1 times M.cv.
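For concreteness, the data-generating mechanism above can be simulated as in the following minimal sketch (Python/NumPy); the HAL fit, the selection of M.cv and M.gcv+, and the plug-in step are omitted, and the helper name generate_hal_data is ours.

```python
import numpy as np

def generate_hal_data(n, rng):
    """Draw (X, Z): X ~ N(0,1), theta0(x) = exp{-(-1 + 2x + 2x^2)/2},
    and Z | X = x ~ Exponential with mean theta0(x) (i.e., rate 1/theta0(x))."""
    x = rng.normal(0.0, 1.0, size=n)
    theta0 = np.exp(-(-1.0 + 2.0 * x + 2.0 * x**2) / 2.0)
    z = rng.exponential(scale=theta0)   # NumPy parameterizes by the mean (scale)
    return x, z

rng = np.random.default_rng(1)
for n in (500, 1000, 2000, 5000, 10000):
    x, z = generate_hal_data(n, rng)
    # ... fit HAL to (x, z), form the plug-in estimator, and repeat 1000 times ...
```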

E.2 Data-adaptive series

E.2.1 Demonstration of Theorem 4

In the simulation, we generate data from the distribution defined by X\sim\text{Unif}(-1,1),\ Z|X=x\sim\text{N}(\theta_{0}(x),0.25^{2}), where

\theta_{0}:x\mapsto I(-1\leq x<-3/4)+\pi I(-3/4\leq x<-1/2)+10x^{2}I(-1/4\leq x<1/4)
\quad+\sqrt{2}I(1/4\leq x<1/2)+\exp(-1)I(1/2\leq x<3/4)+\sqrt[3]{3}I(3/4\leq x\leq 1).

When using the trigonometric series, we first shift and scale the range of the initial function estimate to [-1/2,1/2] and then use the basis for the interval [-1,1] (i.e., \sin(j\pi z),\cos(j\pi z)) in sieve estimation, in order to avoid the poor behavior of trigonometric series near the boundary. We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000. For each sample size, we run 1000 simulations.
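A minimal sketch of this data-generating mechanism and of the rescaled trigonometric basis described above is given below (Python/NumPy). The helper names theta0, generate_data and rescaled_trig_basis are ours, and the initial fit and the cross-validated choice of the number of basis terms are omitted.

```python
import numpy as np

def theta0(x):
    """Piecewise true regression function from Section E.2.1 (zero on [-1/2, -1/4))."""
    return (1.0 * ((-1 <= x) & (x < -0.75))
            + np.pi * ((-0.75 <= x) & (x < -0.5))
            + 10 * x**2 * ((-0.25 <= x) & (x < 0.25))
            + np.sqrt(2) * ((0.25 <= x) & (x < 0.5))
            + np.exp(-1) * ((0.5 <= x) & (x < 0.75))
            + 3 ** (1 / 3) * ((0.75 <= x) & (x <= 1)))

def generate_data(n, rng):
    """X ~ Unif(-1, 1) and Z | X = x ~ N(theta0(x), 0.25^2)."""
    x = rng.uniform(-1.0, 1.0, size=n)
    z = rng.normal(theta0(x), 0.25)
    return x, z

def rescaled_trig_basis(theta_hat_x, K):
    """Shift/scale fitted values to [-1/2, 1/2], then evaluate sin(j*pi*z), cos(j*pi*z)."""
    lo, hi = theta_hat_x.min(), theta_hat_x.max()
    z = (theta_hat_x - lo) / (hi - lo) - 0.5
    cols = [np.ones_like(z)]
    for j in range(1, K + 1):
        cols.append(np.sin(j * np.pi * z))
        cols.append(np.cos(j * np.pi * z))
    return np.column_stack(cols)
```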

E.2.2 Violation of Condition C5

In the simulation, we generate data from the distribution defined by X\sim\text{Unif}(-1,1),\ Z|X=x\sim\text{N}(\theta_{0}(x),1), where \theta_{0}:x\mapsto\cos(10x). The estimand is \Psi(\theta_{0})=P_{0}(f\circ\theta_{0}), where

f:z\mapsto\left[\frac{3}{10\pi}\cos(5\pi z)-\frac{3}{8}\right]I\left(z<-\frac{1}{2}\right)-\frac{3}{2}z^{2}I\left(-\frac{1}{2}\leq z<0\right)
\quad+3z^{2}I\left(0\leq z<\frac{1}{2}\right)+\left[-\frac{3}{2}\exp(2-4z)-3z+\frac{15}{4}\right]I\left(z\geq\frac{1}{2}\right).

We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000; for each sample size, we run 1000 simulations. Our goal is to explore the behavior of the plug-in estimator when f, instead of \theta_{0}, is rough, so we use kernel regression [20] to estimate \theta_{0} for convenience.
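The following sketch (Python/NumPy) encodes f and approximates the estimand \Psi(\theta_{0})=P_{0}(f\circ\theta_{0}) by Monte Carlo; it is only intended to make the setting concrete, and the kernel-regression fit and plug-in step are omitted.

```python
import numpy as np

def f(z):
    """The non-smooth map f from Section E.2.2 (vectorized over z)."""
    z = np.asarray(z, dtype=float)
    out = np.where(z < -0.5, 3 / (10 * np.pi) * np.cos(5 * np.pi * z) - 3 / 8, 0.0)
    out = np.where((-0.5 <= z) & (z < 0), -1.5 * z**2, out)
    out = np.where((0 <= z) & (z < 0.5), 3 * z**2, out)
    out = np.where(z >= 0.5, -1.5 * np.exp(2 - 4 * z) - 3 * z + 15 / 4, out)
    return out

theta0 = lambda x: np.cos(10 * x)

# Monte Carlo approximation of Psi(theta0) = P0(f o theta0) with X ~ Unif(-1, 1)
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, size=10**6)
psi0 = f(theta0(x)).mean()
```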

E.3 Generalized data-adaptive series

E.3.1 Demonstration of Theorem 6

In the simulation, we generate data from the distribution defined by X\sim\text{Unif}(-1,1),\ A|X=x\sim\text{Bern}(\text{expit}(-x)),\ Y|A=a,X=x\sim\text{N}(\mu_{0,a}(x),0.25^{2}), where

\mu_{0,0}:x\mapsto I(-1\leq x<-3/4)+\pi I(-3/4\leq x<-1/2)+10x^{2}I(-1/4\leq x<1/4)
\quad+\sqrt{2}I(1/4\leq x<1/2)+\exp(-1)I(1/2\leq x<3/4)+\sqrt[3]{3}I(3/4\leq x\leq 1),
\mu_{0,1}:x\mapsto x^{2}I(x<-1/3)+\exp(x)I(-1/3\leq x<1/3)+I(x>1/3).

The series is the tensor product [4] of the univariate trigonometric series used in Section E.2.1. The sample sizes are the same as in Section E.2.1.
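A minimal sketch of this data-generating mechanism, together with a generic column-wise tensor product of two univariate basis matrices, is given below (Python/NumPy); the helper names are ours, and the construction of the generalized data-adaptive series itself is omitted.

```python
import numpy as np

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

def mu00(x):
    """Same piecewise function as theta0 in Section E.2.1."""
    return (1.0 * ((-1 <= x) & (x < -0.75))
            + np.pi * ((-0.75 <= x) & (x < -0.5))
            + 10 * x**2 * ((-0.25 <= x) & (x < 0.25))
            + np.sqrt(2) * ((0.25 <= x) & (x < 0.5))
            + np.exp(-1) * ((0.5 <= x) & (x < 0.75))
            + 3 ** (1 / 3) * ((0.75 <= x) & (x <= 1)))

def mu01(x):
    return (x**2 * (x < -1/3)
            + np.exp(x) * ((-1/3 <= x) & (x < 1/3))
            + 1.0 * (x > 1/3))

def generate_data(n, rng):
    """X ~ Unif(-1,1), A | X = x ~ Bern(expit(-x)), Y | A = a, X = x ~ N(mu_{0,a}(x), 0.25^2)."""
    x = rng.uniform(-1.0, 1.0, size=n)
    a = rng.binomial(1, expit(-x))
    mu = np.where(a == 1, mu01(x), mu00(x))
    y = rng.normal(mu, 0.25)
    return x, a, y

def tensor_product_basis(b1, b2):
    """Column-wise tensor product of two univariate basis matrices of shapes (n, K1) and (n, K2)."""
    return np.einsum("ni,nj->nij", b1, b2).reshape(b1.shape[0], -1)
```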

E.3.2 Violation of Condition C5

In the simulation, we generate data from the distribution defined by X\sim\text{Unif}(-1,1),\ A|X=x\sim\text{Bern}(g_{0}(x)),\ Y|A=a,X=x\sim\text{N}(\mu_{0,a}(x),0.25^{2}), where \mu_{0,a}:x\mapsto\exp(-x^{2}+0.8ax+0.5a) (a\in\{0,1\}) and

g_{0}:x\mapsto\text{expit}\Bigg\{\left(-\frac{5}{3}x^{3}-\frac{15}{4}x^{2}-\frac{5}{3}x-\frac{25}{96}\right)I\left(x\leq-\frac{1}{2}\right)+\left(\frac{5}{6}x^{4}+\frac{5}{3}x^{3}\right)I\left(-\frac{1}{2}<x\leq 0\right)
\quad+\frac{5}{3}x^{3}I\left(0<x\leq\frac{1}{2}\right)+\left(5x^{2}-\frac{15}{4}x+\frac{5}{6}\right)I\left(x>\frac{1}{2}\right)\Bigg\}.

We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000; for each sample size, we run 1000 simulations. Our goal is to explore the behavior of the plug-in estimator when \dot{\Psi}, instead of \theta_{0}, is rough, so we use kernel regression [20] to estimate \theta_{0} for convenience.
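As before, the following minimal sketch (Python/NumPy; helper names are ours) encodes this data-generating mechanism.

```python
import numpy as np

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

def g0(x):
    """Propensity score from Section E.3.2."""
    inner = ((-5/3 * x**3 - 15/4 * x**2 - 5/3 * x - 25/96) * (x <= -0.5)
             + (5/6 * x**4 + 5/3 * x**3) * ((-0.5 < x) & (x <= 0))
             + (5/3 * x**3) * ((0 < x) & (x <= 0.5))
             + (5 * x**2 - 15/4 * x + 5/6) * (x > 0.5))
    return expit(inner)

def mu0(a, x):
    """Outcome regression mu_{0,a}(x) = exp(-x^2 + 0.8*a*x + 0.5*a)."""
    return np.exp(-x**2 + 0.8 * a * x + 0.5 * a)

def generate_data(n, rng):
    x = rng.uniform(-1.0, 1.0, size=n)
    a = rng.binomial(1, g0(x))
    y = rng.normal(mu0(a, x), 0.25)
    return x, a, y
```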

Acknowledgements

This work was partially supported by the National Institutes of Health under award numbers DP2-LM013340 and R01HL137808. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

  • Benkeser and van der Laan [2016] Benkeser, D. and M. van der Laan (2016). The Highly Adaptive Lasso Estimator. In Data Science and Advanced Analytics (DSAA), 2016 IEEE International Conference on, pp. 689–696. IEEE.
  • Bickel et al. [1997] Bickel, P., F. Götze, and W. van Zwet (1997). Resampling fewer than n observations: Gains, losses, and remedies for losses. Statistica Sinica 7(1).
  • Bickel and Ritov [2003] Bickel, P. J. and Y. Ritov (2003). Nonparametric estimators which can be “plugged-in”. Annals of Statistics 31(4), 1033–1053.
  • Chen [2007] Chen, X. (2007). Chapter 76 Large Sample Sieve Estimation of Semi-Nonparametric Models. Handbook of Econometrics 6(SUPPL. PART B), 5549–5632.
  • Chernozhukov et al. [2017] Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, and W. Newey (2017). Double/debiased/Neyman machine learning of treatment effects. American Economic Review 107(5), 261–265.
  • Chernozhukov et al. [2018] Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal 21(1), C1–C68.
  • Fan et al. [1998] Fan, J., W. Härdle, and E. Mammen (1998). Direct estimation of low-dimensional components in additive models. The Annals of Statistics 26(3), 943–971.
  • Friedman [2001] Friedman, J. H. (2001). Greedy Function Approximation: A Gradient Boosting Machine. Technical Report 5.
  • Friedman [2002] Friedman, J. H. (2002). Stochastic gradient boosting. Computational Statistics and Data Analysis 38(4), 367–378.
  • Gill et al. [1993] Gill, R. D., M. J. van der Laan, and J. A. Wellner (1993). Inefficient estimators of the bivariate survival function for three models. Rijksuniversiteit Utrecht. Mathematisch Instituut.
  • Hall [2013] Hall, P. (2013). The bootstrap and Edgeworth expansion. Springer Science & Business Media.
  • Hall et al. [2004] Hall, P., J. Racine, and Q. Li (2004). Cross-validation and the estimation of conditional probability densities. Journal of the American Statistical Association 99(468), 1015–1026.
  • Härdle and Stoker [1989] Härdle, W. and T. M. Stoker (1989). Investigating smooth multiple regression by the method of average derivatives. Journal of the American Statistical Association 84(408), 986–995.
  • Levy et al. [2018] Levy, J., M. van der Laan, A. Hubbard, and R. Pirracchio (2018). A Fundamental Measure of Treatment Effect Heterogeneity.
  • Li and Racine [2004] Li, Q. and J. Racine (2004). Cross-Validated local Linear Nonparametric Regression. Statistica Sinica 14, 485–512.
  • Mallat [2009] Mallat, S. (2009). A Wavelet Tour of Signal Processing. Elsevier.
  • Marron [1994] Marron, J. S. (1994). Visual understanding of higher-order kernels. Journal of Computational and Graphical Statistics 3(4), 447–458.
  • Mason et al. [1999] Mason, L., J. Baxter, P. Bartlett, and M. Frean (1999). Boosting Algorithms as Gradient Descent in Function Space. Technical report.
  • Mason et al. [2000] Mason, L., J. Baxter, P. L. Bartlett, and M. Frean (2000). Boosting Algorithms as Gradient Descent. Technical report.
  • Nadaraya [1964] Nadaraya, E. A. (1964). On estimating regression. Theory of Probability & Its Applications 9(1), 141–142.
  • Newey et al. [1998] Newey, W., F. Hsieh, and J. Robins (1998). Undersmoothing and Bias Corrected Functional Estimation. Working papers.
  • Newey [1997] Newey, W. K. (1997). Convergence rates and asymptotic normality for series estimators. Journal of Econometrics 79(1), 147–168.
  • Newey et al. [2004] Newey, W. K., F. Hsieh, and J. M. Robins (2004). Twicing Kernels and a Small Bias Property of Semiparametric Estimators. Econometrica 72(3), 947–962.
  • Owen [2005] Owen, A. B. (2005). Multidimensional variation for quasi-monte carlo. In Contemporary Multivariate Analysis And Design Of Experiments: In Celebration of Professor Kai-Tai Fang’s 65th Birthday, pp.  49–74. World Scientific.
  • Pfanzagl [1982] Pfanzagl, J. (1982). Contributions to a General Asymptotic Statistical Theory, Volume 13 of Lecture Notes in Statistics. New York, NY: Springer New York.
  • Rubin [1974] Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66(5), 688–701.
  • Sheather and Jones [1991] Sheather, S. J. and M. C. Jones (1991). A Reliable Data-Based Bandwidth Selection Method for Kernel Density Estimation. Journal of the Royal Statistical Society: Series B (Methodological) 53(3), 683–690.
  • Shen [1997] Shen, X. (1997). On methods of sieves and penalization. Annals of Statistics 25(6), 2555–2591.
  • van der Laan [2017] van der Laan, M. (2017). A Generally Efficient Targeted Minimum Loss Based Estimator based on the Highly Adaptive Lasso. International Journal of Biostatistics 13(2).
  • van der Laan and Rubin [2006] van der Laan, M. and D. Rubin (2006). Targeted Maximum Likelihood Learning. U.C. Berkeley Division of Biostatistics Working Paper Series.
  • van der Laan and Dudoit [2003] van der Laan, M. J. and S. Dudoit (2003). Unified cross-validation methodology for selection among estimators and a general cross-validated adaptive epsilon-net estimator: Finite sample oracle inequalities and examples. U.C. Berkeley Division of Biostatistics Working Paper Series.
  • van der Laan and Robins [2003] van der Laan, M. J. and J. M. Robins (2003). Unified Methods for Censored Longitudinal Data and Causality. Springer Series in Statistics. New York, NY: Springer New York.
  • van der Laan and Rose [2018] van der Laan, M. J. and S. Rose (2018). Targeted Learning in Data Science.
  • van der Vaart and Wellner [2000] van der Vaart, A. and J. Wellner (2000). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer.
  • Williamson et al. [2017] Williamson, B., P. Gilbert, N. Simon, and M. Carone (2017). Nonparametric variable importance assessment using machine learning techniques. UW Biostatistics Working Paper Series.
  • Yang and Ding [2018] Yang, S. and P. Ding (2018). Asymptotic inference of causal effects with observational studies trimmed by the estimated propensity scores. Biometrika 105(2), 487–493.
  • Zhang [2010] Zhang, C. H. (2010). Nearly unbiased variable selection under minimax concave penalty. Annals of Statistics 38(2), 894–942.