Flexible sensitivity analysis for causal inference in observational studies subject to unmeasured confounding
Abstract
Causal inference with observational studies often suffers from unmeasured confounding, yielding biased estimators based on the unconfoundedness assumption. Sensitivity analysis assesses how the causal conclusions change with respect to different degrees of unmeasured confounding. Most existing sensitivity analysis methods work well for specific types of statistical estimation or testing strategies. We propose a flexible sensitivity analysis framework that can deal with commonly-used inverse probability weighting, outcome regression, and doubly robust estimators simultaneously. It is based on the well-known parametrization of the selection bias as comparisons of the observed and counterfactual outcomes conditional on observed covariates. It is attractive for practical use because it only requires simple modifications of the standard estimators. Moreover, it naturally extends to many other causal inference settings, including the causal risk ratio or odds ratio, the average causal effect on the treated units, and studies with survival outcomes. We also develop an R package saci to implement our sensitivity analysis estimators.
Keywords: Double robustness; Efficient influence function; Ignorability; Inverse probability weighting; Potential outcome; Selection bias
1 Introduction
Learning causal effects is of great importance in empirical research, yet it presents a persistent challenge. When randomized controlled trials are not feasible, we must rely on observational data to conduct causal inference. The standard method for identifying causal parameters, such as the average causal effect, hinges upon the unconfoundedness assumption. Unconfoundedness postulates that the potential outcomes are conditionally independent of the treatment assignment given the observed covariates. However, unconfoundedness is untestable and necessitates the measurement of all confounding factors associated with the treatment and outcome. The presence of unmeasured confounders yields bias in estimating the true causal effect, even with a substantial sample size.
Motivated by the concern of unmeasured confounding, sensitivity analysis becomes an attractive alternative to assess the impact of the unobserved confounders on the causal estimates. Rosenbaum and Rubin (1983a), Lin et al. (1998) and Imbens (2003) built parametric models to assess the impact of the unobserved confounder on the estimation of the average causal effect. Rosenbaum (1987) proposed a sensitivity analysis framework to test the sharp null hypothesis of no unit-level causal effects in matched observational studies, with a recent application by Zubizarreta et al. (2013). Cornfield et al. (1959) and Ding and VanderWeele (2016) derived inequalities to quantify the strength of the unmeasured confounding in order to explain away observed causal estimates based on risk ratios, which motivated the concept of E-value for observational research for causal inference (VanderWeele and Ding 2017). Zhao et al. (2019) and Dorn and Guo (2022) developed sensitivity analysis methods for the inverse propensity score weighting estimator.
The methods mentioned above for sensitivity analysis are useful for specific statistical estimation or testing strategies. However, they may be limited in their ability to simultaneously deal with various types of estimators, such as outcome regression-based estimators, inverse propensity score weighting estimators, and doubly robust estimators. To develop a comprehensive approach, we propose a unified framework for sensitivity analysis adaptable to different estimators. Our framework utilizes the sensitivity parameters defined as the ratios between the conditional means of the potential outcomes in the two treatment groups given the observed covariates (Robins 1999). We provide nonparametric identification formulas for the average causal effect given the specified sensitivity parameters. Our framework facilitates sensitivity analysis for various standard estimators by providing closed-form modifications. Furthermore, we furnish practical guidance on calibrating and specifying sensitivity parameters to interpret the sensitivity analysis results.
1.1 A running example
Consider the causal impact of smoking on individuals’ health as a motivating example. Randomly assigning individuals to smoke is neither practical nor ethical, thus we need to rely on observational data to learn the causal effect of interest. Specifically, the observational study in Bazzano et al. (2003) explored whether cigarette smoking positively affects homocysteine levels, a known risk factor for cardiovascular disease. Drawing upon observations from the U.S. National Health and Nutrition Examination Survey (NHANES) 2005–2006, the study compares homocysteine levels between smokers and non-smokers while adjusting for observed covariates like age, gender, race, and body mass index (BMI) under the unconfoundedness assumption. However, in the presence of suspected underlying factors associated with both increased probability of smoking and elevated homocysteine levels, such as certain gene expressions, estimates adjusted solely for observed covariates are prone to bias. In such instances, conducting sensitivity analyses becomes essential to discern how the estimated causal effect changes in response to the presence of unmeasured confounders. Such analyses help determine the degree to which we need to deviate from the unconfoundedness assumption to attribute the estimated positive effect to unobserved confounders. We will revisit this example in Section 5.1 to illustrate our proposed methods.
1.2 Organization of the paper
The remainder of the paper is organized as follows. In Section 2, we review causal inference in observational studies under the unconfoundedness assumption. In Section 3, we introduce our sensitivity analysis framework and provide identification and estimation procedures. In Section 4, we provide practical suggestions for calibration and specification of the sensitivity parameters. In Section 5, we revisit the motivating example, apply the proposed method to this real-world application, and conduct a simulation study to evaluate the finite sample performance of our proposed methods. In Section 6, we discuss alternative sensitivity parameters as well as the extensions to other causal inference settings. In Section 7, we conclude. In the supplementary material, we present additional technical details.
2 Review of causal inference in unconfounded observational studies
Causal inference in observational studies is a central task in many disciplines. The potential outcomes framework is a leading approach to causal inference in statistics. Let $Y(1)$ and $Y(0)$ denote the potential outcomes if a unit receives the treatment and control, respectively. Let $Z$ denote the binary treatment, with $Z = 1$ if the unit receives the treatment and $Z = 0$ if the unit receives the control, respectively. Let $Y = ZY(1) + (1 - Z)Y(0)$ denote the observed outcome, and $X$ denote the observed covariates. We focus on the standard setting with independent and identically distributed draws of $(Z, X, Y)$, and will drop the unit index henceforth for the description of population quantities. We start with the average causal effect $\tau = E\{Y(1)\} - E\{Y(0)\}$. By the law of total probability, it decomposes into

$\tau = \big[ E\{Y(1) \mid Z = 1\}P(Z = 1) + E\{Y(1) \mid Z = 0\}P(Z = 0) \big] - \big[ E\{Y(0) \mid Z = 1\}P(Z = 1) + E\{Y(0) \mid Z = 0\}P(Z = 0) \big]. \qquad (1)$
In (1), the fundamental difficulty is to estimate the counterfactual means $E\{Y(1) \mid Z = 0\}$ and $E\{Y(0) \mid Z = 1\}$, which are not identifiable without further assumptions. Rubin (1978) and Rosenbaum and Rubin (1983b) proposed the following unconfoundedness assumption as a sufficient condition to identify $\tau$.
Assumption 1 (unconfoundedness).
$\{Y(1), Y(0)\} \perp\!\!\!\perp Z \mid X$.
Under Assumption 1, we can prove two identification formulas for $\tau$ which write our causal parameter of interest in terms of the observed data distribution:

$\tau = E\{\mu_1(X) - \mu_0(X)\} = E\left\{ \frac{ZY}{e(X)} - \frac{(1 - Z)Y}{1 - e(X)} \right\}, \qquad (2)$
where $\mu_1(X) = E(Y \mid Z = 1, X)$ and $\mu_0(X) = E(Y \mid Z = 0, X)$ are the conditional means of the outcome given covariates under treatment and control, respectively, and $e(X) = P(Z = 1 \mid X)$ is the conditional mean of the treatment given covariates, also called the propensity score (Rosenbaum and Rubin 1983b). The formulas in (2) rely on the overlap assumption: $0 < e(X) < 1$. We implicitly assume it, since the focus of this paper is on the unconfoundedness assumption.
The two plug-in estimators corresponding to the two identification formulas for $\tau$ in (2) are the outcome regression estimator and the Horvitz–Thompson-type inverse propensity score weighting estimator, respectively:

$\hat\tau^{\mathrm{reg}} = n^{-1}\sum_{i=1}^n \{\hat\mu_1(X_i) - \hat\mu_0(X_i)\}, \qquad \hat\tau^{\mathrm{ht}} = n^{-1}\sum_{i=1}^n \left\{ \frac{Z_i Y_i}{\hat e(X_i)} - \frac{(1 - Z_i)Y_i}{1 - \hat e(X_i)} \right\},$

with $\hat e(X)$ and $\{\hat\mu_1(X), \hat\mu_0(X)\}$ signifying the fitted propensity score and outcome models. Moreover, we can construct the doubly robust estimator by combining both models (Bang and Robins 2005):

$\hat\tau^{\mathrm{dr}} = \hat\tau^{\mathrm{reg}} + n^{-1}\sum_{i=1}^n \left[ \frac{Z_i\{Y_i - \hat\mu_1(X_i)\}}{\hat e(X_i)} - \frac{(1 - Z_i)\{Y_i - \hat\mu_0(X_i)\}}{1 - \hat e(X_i)} \right] = \hat\tau^{\mathrm{ht}} - n^{-1}\sum_{i=1}^n \left[ \frac{\{Z_i - \hat e(X_i)\}\hat\mu_1(X_i)}{\hat e(X_i)} - \frac{\{Z_i - \hat e(X_i)\}\hat\mu_0(X_i)}{1 - \hat e(X_i)} \right], \qquad (3)$

where $Y_i - \hat\mu_z(X_i)$ and $Z_i - \hat e(X_i)$ are the residuals from the outcome model and propensity score model, respectively. The two equivalent forms in (3) highlight that we can view $\hat\tau^{\mathrm{dr}}$ as a modification of either $\hat\tau^{\mathrm{reg}}$ or $\hat\tau^{\mathrm{ht}}$: it modifies $\hat\tau^{\mathrm{reg}}$ by inverse propensity score weighted residuals, and augments $\hat\tau^{\mathrm{ht}}$ by the imputed outcomes.
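To make the construction concrete, the following R sketch computes the three standard estimators on simulated toy data. The data-generating process and all variable names are illustrative assumptions, not from the paper.

```r
# Toy data and the three standard estimators under unconfoundedness.
set.seed(1)
n <- 2000
x <- rnorm(n)
z <- rbinom(n, 1, plogis(0.5 * x))                    # treatment
y <- 1 + x + z + rnorm(n)                             # outcome
dat <- data.frame(x, z, y)
e_hat   <- fitted(glm(z ~ x, family = binomial, data = dat))  # propensity score
mu1_hat <- predict(lm(y ~ x, data = dat, subset = z == 1), newdata = dat)
mu0_hat <- predict(lm(y ~ x, data = dat, subset = z == 0), newdata = dat)
tau_reg <- mean(mu1_hat - mu0_hat)                    # outcome regression
tau_ht  <- mean(z * y / e_hat - (1 - z) * y / (1 - e_hat))  # Horvitz-Thompson
tau_dr  <- tau_reg +                                  # doubly robust, as in (3)
  mean(z * (y - mu1_hat) / e_hat - (1 - z) * (y - mu0_hat) / (1 - e_hat))
```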
3 Sensitivity analysis with unmeasured confounding
3.1 Motivation for sensitivity analysis and parametrization of unmeasured confounding
Assumption 1 is strong and untestable. Observational studies are often plagued with unmeasured confounding, that is, the existence of some hidden variables that affect both the treatment and outcome simultaneously. Under these circumstances, we need to conduct sensitivity analyses to assess the impact of unmeasured confounding on the causal estimates. Existing methods only serve purposes in specific statistical estimation or testing strategies, as reviewed in Section 1. To provide a unified framework that can deal with the standard estimators $\hat\tau^{\mathrm{reg}}$, $\hat\tau^{\mathrm{ht}}$, and $\hat\tau^{\mathrm{dr}}$ reviewed in Section 2 simultaneously, we adopt the following parametrization of confounding.
Definition 1.
Define

$\varepsilon_1(X) = \frac{E\{Y(1) \mid Z = 1, X\}}{E\{Y(1) \mid Z = 0, X\}}, \qquad \varepsilon_0(X) = \frac{E\{Y(0) \mid Z = 1, X\}}{E\{Y(0) \mid Z = 0, X\}}.$
In Definition 1, $\varepsilon_1(X)$ and $\varepsilon_0(X)$ are two sensitivity parameters. In sensitivity analysis, we can first fix them to obtain the corresponding estimators and then vary them within a range to obtain a sequence of estimators. Definition 1, as well as the theory below, allows $\varepsilon_1(X)$ and $\varepsilon_0(X)$ to depend on the covariates. For simplicity of implementation, we can further assume that they are not functions of the covariates.
The parametrization in Definition 1 compares the observable conditional means of the potential outcomes, $E\{Y(1) \mid Z = 1, X\}$ and $E\{Y(0) \mid Z = 0, X\}$, with the corresponding counterfactual conditional means of the potential outcomes, $E\{Y(1) \mid Z = 0, X\}$ and $E\{Y(0) \mid Z = 1, X\}$, respectively. It quantifies the violation of unconfoundedness in Assumption 1. Technically, the form of unconfoundedness in Assumption 1 is stronger than we need if the parameter of interest is $\tau$. When $\varepsilon_1(X) = \varepsilon_0(X) = 1$ in Definition 1, we can recover all identification and estimation results for $\tau$ in Section 2. We revisit the running example in Section 1.1. If suspected factors associated with both an increased probability of smoking and elevated homocysteine levels exist, then under Definition 1, $\varepsilon_1(X) > 1$ and $\varepsilon_0(X) > 1$, leading to an upward bias of estimators of $\tau$ constructed under the unconfoundedness assumption. The sensitivity parameters in Definition 1 are also related to the parametrization based on the treatment selection model for $P\{Z = 1 \mid X, Y(1), Y(0)\}$. We provide more discussion in Section S1.8 in the supplementary material.
Robins (1999) and Scharfstein et al. (1999) discussed the potential use of the parametrization in Definition 1. Similar parametrizations also appeared in other settings for causal inference (Tchetgen and Shpitser 2012; VanderWeele et al. 2014; Ding and Lu 2017; Yang and Lok 2018; Jiang et al. 2022). The confounding function in Kasza et al. (2017) also adopts ratios between the marginal means in the special case of binary outcomes, without specifying it as a flexible function of the observed covariates. Franks et al. (2020) used a similar parametrization in Bayesian inference and pointed out that the parametrization in Definition 1 does not impose any testable restrictions on the observed data distributions. Nevertheless, despite the attractiveness of this parametrization, the statistical theory for sensitivity analysis is still missing for the standard estimators in the canonical setting of observational studies.
3.2 Identification and estimation under sensitivity analysis
This subsection will present the key identification and estimation results for $\tau$, in parallel with the results under the unconfoundedness assumption. In particular, by setting $\varepsilon_1(X) = \varepsilon_0(X) = 1$ in the formulas below, we can recover the corresponding results under the unconfoundedness assumption. We will provide theory and methods for the outcome regression, the inverse propensity score weighting, and the doubly robust estimators for $\tau$ under Definition 1.
Theorem 1 (outcome regression).
Under Definition 1, the counterfactual conditional means are identified by

$E\{Y(1) \mid Z = 0, X\} = \mu_1(X)/\varepsilon_1(X), \qquad (4)$

$E\{Y(0) \mid Z = 1, X\} = \varepsilon_0(X)\mu_0(X), \qquad (5)$

which lead to two identification formulas for $\tau$:

$\tau = E\left\{ ZY + (1 - Z)\frac{\mu_1(X)}{\varepsilon_1(X)} \right\} - E\left\{ Z\varepsilon_0(X)\mu_0(X) + (1 - Z)Y \right\} \qquad (6)$

$= E\left\{ Z\mu_1(X) + (1 - Z)\frac{\mu_1(X)}{\varepsilon_1(X)} \right\} - E\left\{ Z\varepsilon_0(X)\mu_0(X) + (1 - Z)\mu_0(X) \right\}. \qquad (7)$
In Theorem 1, we essentially get the identification for $E\{Y(1)\}$ and $E\{Y(0)\}$ from the identification formulas for the two counterfactual conditional means in (4) and (5), which also lead to the two identification formulas for $\tau$ in (6) and (7). Under the unconfoundedness assumption, $\varepsilon_1(X) = \varepsilon_0(X) = 1$, and (7) reduces to the first identification formula in (2). For general $\varepsilon_1(X)$ and $\varepsilon_0(X)$, we need to reweight the conditional means in the identification of the counterfactual means.
With the fitted outcome model, we can construct two plug-in estimators corresponding to (6) and (7), called the predictive and projective estimators for $\tau$, respectively:

$\hat\tau^{\mathrm{pred}} = n^{-1}\sum_{i=1}^n \left\{ Z_i Y_i + (1 - Z_i)\frac{\hat\mu_1(X_i)}{\varepsilon_1(X_i)} - Z_i\varepsilon_0(X_i)\hat\mu_0(X_i) - (1 - Z_i)Y_i \right\}$

and

$\hat\tau^{\mathrm{proj}} = n^{-1}\sum_{i=1}^n \left\{ Z_i\hat\mu_1(X_i) + (1 - Z_i)\frac{\hat\mu_1(X_i)}{\varepsilon_1(X_i)} - Z_i\varepsilon_0(X_i)\hat\mu_0(X_i) - (1 - Z_i)\hat\mu_0(X_i) \right\}.$
The terminologies “predictive” and “projective” are from the survey sampling literature (Firth and Bennett 1998; Ding and Li 2018). The estimators $\hat\tau^{\mathrm{pred}}$ and $\hat\tau^{\mathrm{proj}}$ differ slightly: the former uses the observed outcomes when available; in contrast, the latter replaces the observed outcomes with the fitted values.
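As a minimal sketch, continuing with the toy data and fitted models from the earlier snippet, the two estimators with constant sensitivity parameters are one-liners; the constants below are illustrative.

```r
# Predictive and projective estimators for fixed constants (eps1, eps0);
# reuses z, y, mu1_hat, mu0_hat from the earlier sketch.
eps1 <- 1.1
eps0 <- 1.1
tau_pred <- mean(z * y + (1 - z) * mu1_hat / eps1) -
  mean(z * eps0 * mu0_hat + (1 - z) * y)
tau_proj <- mean(z * mu1_hat + (1 - z) * mu1_hat / eps1) -
  mean(z * eps0 * mu0_hat + (1 - z) * mu0_hat)
```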
More interestingly, we can also identify $\tau$ by an inverse propensity score weighting formula, although Definition 1 only involves the conditional means of the potential outcomes.
Theorem 2 (inverse propensity score weighting).
Under Definition 1, we have

$\tau = E\left\{ w_1(X)\frac{ZY}{e(X)} - w_0(X)\frac{(1 - Z)Y}{1 - e(X)} \right\}, \qquad (8)$

where $w_1(X) = e(X) + \{1 - e(X)\}/\varepsilon_1(X)$ and $w_0(X) = 1 - e(X) + e(X)\varepsilon_0(X)$.
Theorem 2 modifies the classic inverse probability weighting formulas with two extra factors $w_1(X)$ and $w_0(X)$ depending on both the propensity score and the sensitivity parameters. With the fitted propensity score model, (8) motivates the following estimator for $\tau$:

$\hat\tau^{\mathrm{ht}} = n^{-1}\sum_{i=1}^n \left\{ \hat w_1(X_i)\frac{Z_i Y_i}{\hat e(X_i)} - \hat w_0(X_i)\frac{(1 - Z_i)Y_i}{1 - \hat e(X_i)} \right\},$

where $\hat w_1(X) = \hat e(X) + \{1 - \hat e(X)\}/\varepsilon_1(X)$ and $\hat w_0(X) = 1 - \hat e(X) + \hat e(X)\varepsilon_0(X)$ are the estimated weights.
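Continuing the same sketch, the weighted estimator only requires multiplying the classic inverse propensity score weights by the two extra factors:

```r
# Sensitivity-weighted Horvitz-Thompson estimator, as in (8);
# reuses z, y, e_hat, eps1, eps0 from the earlier sketches.
w1_hat <- e_hat + (1 - e_hat) / eps1        # weight for the treated term
w0_hat <- 1 - e_hat + e_hat * eps0          # weight for the control term
tau_ht_sa <- mean(w1_hat * z * y / e_hat) -
  mean(w0_hat * (1 - z) * y / (1 - e_hat))
```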
The identification formulas based on outcome regression and inverse propensity score weighting motivate us to develop a combined strategy that leads to doubly robust estimation. Following Bang and Robins (2005), we calculate the efficient influence function for $\tau$. The details of the efficient influence function are in Bickel et al. (1993), and the proof of Theorem 3 is in Section S3.3 in the supplementary material. Readers focusing only on applying the method do not need to understand the efficient influence function to understand our proposed estimation procedure.
Theorem 3 (efficient influence functions).
Under Definition 1, the efficient influence function for $\mu^1 = E\{Y(1)\}$ is

$\phi_1 = w_1(X)\frac{Z\{Y - \mu_1(X)\}}{e(X)} + \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{Z - e(X)\}\mu_1(X) + w_1(X)\mu_1(X) - \mu^1;$

the efficient influence function for $\mu^0 = E\{Y(0)\}$ is

$\phi_0 = w_0(X)\frac{(1 - Z)\{Y - \mu_0(X)\}}{1 - e(X)} + \{\varepsilon_0(X) - 1\}\{Z - e(X)\}\mu_0(X) + w_0(X)\mu_0(X) - \mu^0;$

so the efficient influence function for $\tau = \mu^1 - \mu^0$ is $\phi = \phi_1 - \phi_0$.
The efficient influence function for $\tau$ has mean zero by definition, so $\tau$ has the following representation:

$\tau = E\left[ w_1(X)\frac{Z\{Y - \mu_1(X)\}}{e(X)} + \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{Z - e(X)\}\mu_1(X) + w_1(X)\mu_1(X) \right] - E\left[ w_0(X)\frac{(1 - Z)\{Y - \mu_0(X)\}}{1 - e(X)} + \{\varepsilon_0(X) - 1\}\{Z - e(X)\}\mu_0(X) + w_0(X)\mu_0(X) \right],$

which motivates the following estimator based on the fitted propensity score and outcome models:

$\hat\tau^{\mathrm{dr}} = n^{-1}\sum_{i=1}^n \left[ \hat w_1(X_i)\frac{Z_i\{Y_i - \hat\mu_1(X_i)\}}{\hat e(X_i)} + \left\{1 - \frac{1}{\varepsilon_1(X_i)}\right\}\{Z_i - \hat e(X_i)\}\hat\mu_1(X_i) + \hat w_1(X_i)\hat\mu_1(X_i) \right] - n^{-1}\sum_{i=1}^n \left[ \hat w_0(X_i)\frac{(1 - Z_i)\{Y_i - \hat\mu_0(X_i)\}}{1 - \hat e(X_i)} + \{\varepsilon_0(X_i) - 1\}\{Z_i - \hat e(X_i)\}\hat\mu_0(X_i) + \hat w_0(X_i)\hat\mu_0(X_i) \right].$
It is numerically identical to the following forms:

$\hat\tau^{\mathrm{dr}} = \hat\tau^{\mathrm{proj}} + n^{-1}\sum_{i=1}^n \left[ \hat w_1(X_i)\frac{Z_i\{Y_i - \hat\mu_1(X_i)\}}{\hat e(X_i)} - \hat w_0(X_i)\frac{(1 - Z_i)\{Y_i - \hat\mu_0(X_i)\}}{1 - \hat e(X_i)} \right] = \hat\tau^{\mathrm{ht}} + n^{-1}\sum_{i=1}^n \hat\mu_1(X_i)\left\{ Z_i + \frac{1 - Z_i}{\varepsilon_1(X_i)} - \hat w_1(X_i)\frac{Z_i}{\hat e(X_i)} \right\} - n^{-1}\sum_{i=1}^n \hat\mu_0(X_i)\left\{ Z_i\varepsilon_0(X_i) + 1 - Z_i - \hat w_0(X_i)\frac{1 - Z_i}{1 - \hat e(X_i)} \right\},$

which, similar to (3), augments $\hat\tau^{\mathrm{ht}}$ by imputed outcomes and modifies $\hat\tau^{\mathrm{pred}}$ and $\hat\tau^{\mathrm{proj}}$ by inverse propensity score weighted residuals. Importantly, Theorem 4 below shows that $\hat\tau^{\mathrm{dr}}$ is doubly robust.
Theorem 4 (double robustness).
Under Definition 1, the estimator $\hat\tau^{\mathrm{dr}}$ is consistent for $\tau$ if either $\hat e(X)$ is consistent for $e(X)$ or $\hat\mu_z(X)$ is consistent for $\mu_z(X)$, for $z = 0, 1$.
Under standard regularity conditions, the doubly robust estimator $\hat\tau^{\mathrm{dr}}$ is asymptotically normal with mean $\tau$ and variance achieving the variance of the efficient influence function. Therefore, we can conduct statistical inference based on the closed-form plug-in variance estimator or the nonparametric bootstrap variance estimator of $\hat\tau^{\mathrm{dr}}$. We can also construct double machine learning estimators and apply the cross-fitting technique to achieve the same asymptotic distribution under a relaxed condition on the complexity of the spaces of the nuisance parameters (Chernozhukov et al. 2018).
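The doubly robust estimator is again a simple modification of the standard estimators. A minimal sketch, reusing the objects from the earlier snippets, with the bootstrap indicated in a comment:

```r
# Doubly robust sensitivity analysis estimator: the projective estimator
# augmented by inverse propensity score weighted residuals.
tau_dr_sa <- tau_proj +
  mean(w1_hat * z * (y - mu1_hat) / e_hat) -
  mean(w0_hat * (1 - z) * (y - mu0_hat) / (1 - e_hat))
# For inference, resample rows of dat, refit e_hat/mu1_hat/mu0_hat, and
# recompute tau_dr_sa to form a nonparametric bootstrap variance estimate.
```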
4 Calibration and specification of the sensitivity parameters
4.1 Calibration of the sensitivity parameters
Specifying the ranges of the sensitivity parameters is fundamentally challenging in causal inference because the observed data do not provide direct information on the sensitivity parameters. We provide a method to calibrate the sensitivity parameters using the observed covariates. The identification and estimation procedures rely on the unidentifiable sensitivity parameters $\{\varepsilon_1(X), \varepsilon_0(X)\}$. Calibration based on the observed covariates provides guidance on their sizes, helping us to understand their relative magnitudes in the analytical framework. Zhang and Small (2020) proposed a calibration method based on the observed covariates as a benchmark to understand how sensitive the estimates are with respect to an unobserved confounder in matched observational studies. In our sensitivity analysis framework, we do not directly model the unobserved confounders or their association with the treatment or potential outcomes. Also, our proposed sensitivity parameters depend on the set of observed covariates we have controlled for; thus, it is challenging to directly characterize the corresponding calibration of the observed covariates. Below we provide a leave-one-covariate-out approach to calibrate the sensitivity parameters. Similar approaches appeared in other sensitivity analysis methods (Chapter 17 in Ding 2024).
For each observed covariate $X_j$ in the $p$-dimensional $X$, $j = 1, \ldots, p$, we first drop it as if it were an unobserved confounder, and then estimate the value of the sensitivity parameters with $X_j$ unobserved. Under the unconfoundedness Assumption 1, we have

$\varepsilon_z^{(j)}(X_{-j}) = \frac{E\{Y(z) \mid Z = 1, X_{-j}\}}{E\{Y(z) \mid Z = 0, X_{-j}\}} = \frac{E\{\mu_z(X) \mid Z = 1, X_{-j}\}}{E\{\mu_z(X) \mid Z = 0, X_{-j}\}}$

for $z = 0, 1$, where $X_{-j}$ denotes the $(p - 1)$-dimensional observed covariates after dropping covariate $X_j$. The $\varepsilon_z^{(j)}(X_{-j})$ measures how much the observed data deviate from the unconfoundedness assumption when we delete an observed confounder $X_j$, and can be estimated using the observed data. Thus, we can summarize the distribution of $\varepsilon_z^{(j)}(X_{-j})$ to characterize how strong a contribution each covariate has as a confounder. To summarize from a function of $X_{-j}$ to a scalar, we can further marginalize over the distribution of $X_{-j}$ to compute the mean of the sensitivity parameter ignoring $X_j$, or use other summary statistics such as the maximum, minimum, upper or lower quantiles of the estimated distribution to guide us on the interpretation of the magnitude of the sensitivity parameters. We will illustrate this calibration method in Section 5.1. In addition, we can also calibrate using a subset of covariates and provide a leave-multiple-covariates-out approach. The definition, identification, and estimation of the sensitivity parameters $\varepsilon_z^{(\mathcal{J})}(X_{-\mathcal{J}})$ similarly follow from the unconfoundedness assumption, where $\mathcal{J}$ denotes the indices of the subset of covariates we use for calibration.
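The following R sketch implements the leave-one-covariate-out calibration of $\varepsilon_1$ with linear outcome models, summarizing each $\varepsilon_1^{(j)}(X_{-j})$ by its sample mean; the function and the summary choice are illustrative, and the calibration of $\varepsilon_0$ is analogous with the roles of the two treatment arms swapped.

```r
# Leave-one-covariate-out calibration of eps1, assuming a data frame dat
# with outcome y, treatment z, and covariate names in covs.
calibrate_eps1 <- function(dat, covs) {
  fit1 <- lm(reformulate(covs, "y"), data = dat, subset = z == 1)
  dat$mu1 <- predict(fit1, newdata = dat)        # mu_1(X) under Assumption 1
  sapply(covs, function(j) {
    rest <- setdiff(covs, j)                     # drop covariate j
    num <- lm(reformulate(rest, "mu1"), data = dat, subset = z == 1)
    den <- lm(reformulate(rest, "mu1"), data = dat, subset = z == 0)
    # eps1^(j)(X_{-j}) evaluated at each unit, summarized by the mean
    mean(predict(num, newdata = dat) / predict(den, newdata = dat))
  })
}
```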
4.2 Practical suggestions for specifying the sensitivity parameters
In this subsection, we provide practical suggestions on specifying the sensitivity parameters and reporting the results. When the sensitivity parameters are constants not depending on $X$, i.e., $\varepsilon_1(X) = \varepsilon_1$ and $\varepsilon_0(X) = \varepsilon_0$, the proposed estimators of $\tau$ monotonically decrease in both $\varepsilon_1$ and $\varepsilon_0$. Therefore, it often suffices to present one side of the parameters for sensitivity analysis if we are interested in how strong the unobserved confounding should be to explain away the estimated causal effect under the unconfoundedness assumption. For instance, if the estimated causal effect under unconfoundedness is positive, researchers can report a table of estimated average causal effects for different $(\varepsilon_1, \varepsilon_0)$ with $\varepsilon_1 \geq 1$ and $\varepsilon_0 \geq 1$. Such a table shows how robust the positive result is when we increase the sensitivity parameters, and presents how large the sensitivity parameters should be to overturn the estimated significant positive effect, while the other direction with small sensitivity parameters is usually of less interest since it will only increase the estimated positive effect. For a similar reason, when the estimated average causal effect is negative under the unconfoundedness assumption, we suggest reporting results with $\varepsilon_1 \leq 1$ and $\varepsilon_0 \leq 1$. However, in the case when we would like to learn how strong the unobserved confounding is needed to match some previously estimated causal effects with larger magnitude, the other direction may be of interest as well.
In our proposed framework, we allow for the dependence of the sensitivity parameters on the observed covariates. When the sensitivity parameters depend on $X$, two alternative approaches can be pursued: modeling the sensitivity parameters explicitly or adopting a worst-case interpretation. In the first approach, we can assume a parametric form $\varepsilon_z(X) = \varepsilon_z(X; \eta_z)$ for $z = 0, 1$, and specify the values of $\eta_z$ based on domain knowledge about the parameters. In the second approach, under the special scenario when the outcome is non-negative (e.g., binary), treating the estimated results with constant values of the sensitivity parameters serves as a worst-case bound for the true average causal effect. Here, the constant is taken as either the lower bound or the upper bound of $\varepsilon_z(X)$, depending on the direction of the average causal effect. We give a formal result below.
Proposition 1.
Let $\underline\varepsilon_z$ and $\overline\varepsilon_z$ denote the lower and upper bounds of $\varepsilon_z(X)$, i.e., $\underline\varepsilon_z \leq \varepsilon_z(X) \leq \overline\varepsilon_z$ for $z = 0, 1$. In the special case when the potential outcomes are non-negative (e.g., binary), we have $\underline\tau \leq \tau \leq \overline\tau$, where $\underline\tau$ is computed using the same formulas as $\tau$ in Theorems 1–3, with the replacement of $\varepsilon_1(X)$ and $\varepsilon_0(X)$ by the constants $\overline\varepsilon_1$ and $\overline\varepsilon_0$, respectively, and $\overline\tau$ is computed with the replacement of $\varepsilon_1(X)$ and $\varepsilon_0(X)$ by $\underline\varepsilon_1$ and $\underline\varepsilon_0$, respectively.
Proposition 1 gives bounds based on the three identification formulas in Theorems 1–3. Define $\hat{\underline\tau}$ and $\hat{\overline\tau}$ as estimators using the same formulas as $\hat\tau$ by replacing $\varepsilon_z(X)$ with $\overline\varepsilon_z$ and $\underline\varepsilon_z$, respectively, for $z = 0, 1$. Following Proposition 1, we can treat $\hat{\underline\tau}$ and $\hat{\overline\tau}$ as estimators of the lower and upper bounds of $\tau$, respectively. When the potential outcomes are non-positive, we have analogous results to Proposition 1.
Suppose our estimated average causal effect under the unconfoundedness assumption is positive; we then focus on constant sensitivity parameters with $\varepsilon_1 \geq 1$ and $\varepsilon_0 \geq 1$ to answer the question of how large the sensitivity parameters have to be to explain away the estimated positive causal effect. Looping over the pre-specified lists of constant $\varepsilon_1$ and $\varepsilon_0$, we are finding how large the lower bounds of $\varepsilon_1(X)$ and $\varepsilon_0(X)$ must be to explain away the positive effect.
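In practice, this amounts to recomputing the estimator over a grid of constants. A minimal sketch, wrapping the doubly robust estimator from the earlier snippets into a function of $(\varepsilon_1, \varepsilon_0)$:

```r
# Grid of constant sensitivity parameters for reporting;
# reuses z, y, e_hat, mu1_hat, mu0_hat from the earlier sketches.
tau_dr_fun <- function(eps1, eps0) {
  w1 <- e_hat + (1 - e_hat) / eps1
  w0 <- 1 - e_hat + e_hat * eps0
  mean(z * mu1_hat + (1 - z) * mu1_hat / eps1 -
         z * eps0 * mu0_hat - (1 - z) * mu0_hat) +
    mean(w1 * z * (y - mu1_hat) / e_hat) -
    mean(w0 * (1 - z) * (y - mu0_hat) / (1 - e_hat))
}
grid <- expand.grid(eps1 = seq(1, 1.25, by = 0.05),
                    eps0 = seq(1, 1.25, by = 0.05))
grid$est <- mapply(tau_dr_fun, grid$eps1, grid$eps0)
```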
5 Application and simulation study
5.1 Application
We apply the proposed sensitivity analysis method to a real-world application. We re-analyze the observational study in Bazzano et al. (2003) to study whether cigarette smoking has a causal effect on homocysteine levels, the elevation of which is considered a risk factor for cardiovascular disease. We compare the homocysteine levels in daily smokers and non-smokers based on the NHANES 2005–2006 data. The NHANES is a series of surveys whose target population is the civilian, noninstitutionalized U.S. population. Since 1999, the NHANES has been conducted annually by interviewing individuals in their homes and asking the respondents to complete the health examination component of the survey. We use the cross-sectional data collected in the NHANES 2005–2006, which includes observed covariates such as gender, age, education level, BMI, and a measure of poverty. Consider the daily smokers to be in the treated group and the never smokers to be in the control group. The baseline homocysteine level in the control group is 7.86 μmol/L. Under the unconfoundedness assumption with $\varepsilon_1(X) = \varepsilon_0(X) = 1$, the estimated $\tau$ using the doubly robust estimator is 1.48 with a 95% confidence interval (0.78, 2.18). We use the logit model for the propensity score and the linear model for the outcome, respectively, and use the nonparametric bootstrap method to estimate the variance. Despite the large collection of observed covariates in the survey, unmeasured factors such as the lifestyle of their spouses or parents, work intensity, stress level, and certain gene expressions possibly contribute to both a higher probability of smoking and a higher homocysteine level. Given the concern about the presence of unmeasured confounders, we conduct sensitivity analysis using the doubly robust estimator $\hat\tau^{\mathrm{dr}}$. In this example, it is challenging to specify a correct model of the sensitivity parameters as functions of $X$, and the outcome of interest is non-negative. Thus, we adopt the conservative worst-case interpretation in Proposition 1, assume the sensitivity parameters $\varepsilon_1$ and $\varepsilon_0$ do not depend on $X$, and vary both of them from 0.75 to 1.25. Table 1 reports the results of the doubly robust estimator and its 95% confidence interval for different $(\varepsilon_1, \varepsilon_0)$, where we again use the nonparametric bootstrap to estimate the variance of $\hat\tau^{\mathrm{dr}}$. We omit the detailed results for small $\varepsilon_1$ and $\varepsilon_0$ since they all give significantly positive results at the 0.05 level. As $\varepsilon_1$ and $\varepsilon_0$ increase, the significance level decreases, and the direction of the point estimator changes when both are very large.
However, the length of the confidence interval is not a monotone function of $\varepsilon_1$ and $\varepsilon_0$, since the asymptotic variance of $\hat\tau^{\mathrm{dr}}$ under our sensitivity analysis framework is a complicated function of the sensitivity parameters. In Section S1.7 in the supplementary material, we give the specific form of the semiparametric efficiency bound for $\tau$ based on its efficient influence function in Theorem 3.
Figure 1 shows the contour plot of the estimated average causal effect using the doubly robust estimator with various values of $\varepsilon_1$ and $\varepsilon_0$, where the contour lines are generated through grid search. We also implement the leave-one-covariate-out calibration method proposed in Section 4.1 in the example. For each observed covariate $X_j$, we label the maximum estimated values of $\varepsilon_1^{(j)}$ and $\varepsilon_0^{(j)}$ in the contour plot. This suggests that to explain away the estimated positive causal effect, the unobserved confounder has to be stronger than all observed covariates. Figure 1 also shows that under Assumption 1, covariates such as BMI and poverty are weak confounders, with calibrated values of $\varepsilon_1$ and $\varepsilon_0$ close to one.
Table 1: Sensitivity analysis for the average causal effect in the smoking example: doubly robust estimates and 95% confidence intervals, with rows indexed by $\varepsilon_0$ and columns by $\varepsilon_1$.

| $\varepsilon_0 \backslash \varepsilon_1$ | 0.90 | 1.00 | 1.10 | 1.20 | 1.25 |
|---|---|---|---|---|---|
| 0.90 | 2.48 (1.70, 3.27) | 1.65 (0.97, 2.33) | 0.98 (0.32, 1.63) | 0.41 (-0.21, 1.03) | 0.16 (-0.45, 0.77) |
| 1.00 | 2.31 (1.58, 3.04) | 1.48 (0.78, 2.18) | 0.80 (0.12, 1.49) | 0.24 (-0.39, 0.87) | -0.01 (-0.62, 0.60) |
| 1.10 | 2.14 (1.38, 2.89) | 1.31 (0.62, 2.00) | 0.63 (0.01, 1.25) | 0.07 (-0.56, 0.69) | -0.18 (-0.76, 0.40) |
| 1.20 | 1.96 (1.13, 2.79) | 1.14 (0.42, 1.85) | 0.46 (-0.16, 1.07) | -0.11 (-0.73, 0.52) | -0.36 (-0.97, 0.26) |
| 1.25 | 1.88 (1.10, 2.65) | 1.05 (0.35, 1.75) | 0.37 (-0.30, 1.04) | -0.19 (-0.80, 0.41) | -0.44 (-1.06, 0.17) |
Figure 1: Contour plot of the estimated average causal effect based on the doubly robust estimator over a grid of $(\varepsilon_1, \varepsilon_0)$, with the maximum calibrated values of the observed covariates labeled.
5.2 Simulation study
In this subsection, we conduct a simulation study to evaluate the finite sample performance of our proposed sensitivity analysis framework with the doubly robust estimator.
We generate the data using the following data-generating process with sample size $n$. We first generate observed covariates $X$ with independent components and a binary unobserved confounder $U$. Next, we generate the treatment $Z$ from a Bernoulli distribution given $(X, U)$. Then, we generate the potential outcomes $Y(1)$ and $Y(0)$ with log-normal conditional distributions given $(X, U)$, where the error terms are independent standard normals. Under this data-generating process, the population average causal effect equals 0. Since $U$ is an unobserved confounder of $Z$ and $Y$ conditional on $X$, the unconfoundedness assumption is violated; thus, estimators constructed using merely the observed variables will be asymptotically biased, leading to invalid inference. Under the log-normal model of the potential outcomes and the Bernoulli treatment assignment, the true sensitivity parameters are constants not depending on $X$. We evaluate the performance of $\hat\tau^{\mathrm{dr}}$ with various values of the sensitivity parameters. For this data-generating process, the true sensitivity parameters are at least 1, and assuming the unconfoundedness assumption will over-estimate $\tau$. Thus, we also take a pre-specified list of values: 1.00, 1.10, 1.16, 1.28, 1.60, and 1.93. We use a nonparametric bootstrap with 200 bootstrap samples to estimate the asymptotic variance. For each true value of the sensitivity parameters and each specified value, we compute the coverage probability of the 95% confidence interval.
When we observe a positive estimate under the unconfoundedness assumption, at each level of the sensitivity parameters, we reject the null hypothesis of no positive average causal effect if the constructed lower bound of the 95% confidence interval is positive. With a positive true effect, $\tau$ will be overestimated if the sensitivity parameter takes a value smaller than the truth; thus, it is likely to make a false rejection. For sensitivity parameters larger than the truth, we will make a conservative inference, and thus a false rejection is less likely. To evaluate this, we also compute the frequency of making a false rejection of a significantly positive average causal effect. The number of Monte Carlo samples is 500.
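As a hedged sketch of a simulation of this flavor — the paper's exact data-generating process and parameter values are not reproduced here, so everything below is an illustrative assumption — one Monte Carlo replicate could look like:

```r
# One illustrative replicate: X observed, U unobserved, log-normal outcomes.
one_rep <- function(n = 500, gamma = 0.5) {
  x <- rnorm(n)
  u <- rbinom(n, 1, 0.5)                          # unobserved confounder
  z <- rbinom(n, 1, plogis(x + u))                # Bernoulli treatment
  y1 <- exp(0.2 * x + gamma * u + 0.5 * rnorm(n)) # log-normal Y(1)
  y0 <- exp(0.2 * x + gamma * u + 0.5 * rnorm(n)) # log-normal Y(0); tau = 0
  y <- ifelse(z == 1, y1, y0)
  data.frame(x, z, y)
}
# Repeating one_rep(), estimating tau over a list of specified eps values,
# and bootstrapping each estimate yields coverage and false rejection rates.
```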
Table 2 reports the simulation results, including the coverage and false rejection rates of our proposed estimation procedure under various data-generating processes. First, when the sensitivity parameters are set equal to the true values, the coverage rate is correct. Second, the false rejection rates are valid if the value of the sensitivity parameter is specified to be larger than the truth. The proposed sensitivity analysis procedure has valid finite sample performance.
Table 2: Simulation results. The left six numeric columns report the coverage rates of the 95% confidence intervals and the right six report the false rejection rates, with columns indexed by the specified sensitivity parameter value and rows by the true values.

| true $\tau$ | true $\varepsilon$ | cov. 1.00 | 1.10 | 1.16 | 1.28 | 1.60 | 1.93 | f.r. 1.00 | 1.10 | 1.16 | 1.28 | 1.60 | 1.93 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.00 | 1.00 | 0.95 | 0.53 | 0.21 | 0.01 | 0.00 | 0.00 | 0.03 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 0.20 | 1.10 | 0.55 | 0.96 | 0.88 | 0.32 | 0.00 | 0.00 | 0.45 | 0.02 | 0.00 | 0.00 | 0.00 | 0.00 |
| 0.30 | 1.16 | 0.25 | 0.85 | 0.97 | 0.70 | 0.00 | 0.00 | 0.75 | 0.15 | 0.02 | 0.00 | 0.00 | 0.00 |
| 0.50 | 1.28 | 0.02 | 0.42 | 0.71 | 0.96 | 0.19 | 0.00 | 0.98 | 0.58 | 0.29 | 0.02 | 0.00 | 0.00 |
| 1.00 | 1.60 | 0.00 | 0.03 | 0.09 | 0.40 | 0.96 | 0.68 | 1.00 | 0.97 | 0.91 | 0.60 | 0.02 | 0.00 |
| 1.50 | 1.93 | 0.00 | 0.01 | 0.02 | 0.11 | 0.74 | 0.95 | 1.00 | 0.99 | 0.98 | 0.89 | 0.26 | 0.02 |
6 Extensions
6.1 Sensitivity analysis on the difference scale based on counterfactual means
We expressed the sensitivity parameters as ratios between the observed outcome means and the corresponding counterfactual outcome means. Alternatively, we can also express the sensitivity parameters on the difference scale as Definition 2 below (Robins 1999).
Definition 2 (sensitivity parameters on the difference scale).
Define

$\delta_1(X) = E\{Y(1) \mid Z = 1, X\} - E\{Y(1) \mid Z = 0, X\}, \qquad \delta_0(X) = E\{Y(0) \mid Z = 1, X\} - E\{Y(0) \mid Z = 0, X\}.$
In Definition 2, $\delta_1(X)$ and $\delta_0(X)$ are the two sensitivity parameters.
Under Definition 2, we can identify the two counterfactual means by $E\{Y(1) \mid Z = 0, X\} = \mu_1(X) - \delta_1(X)$ and $E\{Y(0) \mid Z = 1, X\} = \mu_0(X) + \delta_0(X)$. Therefore, we have the following identification formulas for $E\{Y(1)\}$ and $E\{Y(0)\}$ based on outcome regression, inverse probability weighting, and doubly robust estimation:

$E\{Y(1)\} = E\{\mu_1(X)\} - E[\{1 - e(X)\}\delta_1(X)] = E\left\{\frac{ZY}{e(X)}\right\} - E[\{1 - e(X)\}\delta_1(X)] = E\left[\mu_1(X) + \frac{Z\{Y - \mu_1(X)\}}{e(X)}\right] - E[\{1 - e(X)\}\delta_1(X)]$

and

$E\{Y(0)\} = E\{\mu_0(X)\} + E\{e(X)\delta_0(X)\} = E\left\{\frac{(1 - Z)Y}{1 - e(X)}\right\} + E\{e(X)\delta_0(X)\} = E\left[\mu_0(X) + \frac{(1 - Z)\{Y - \mu_0(X)\}}{1 - e(X)}\right] + E\{e(X)\delta_0(X)\},$

where the two doubly robust formulas come from the efficient influence functions for $E\{Y(1)\}$ and $E\{Y(0)\}$, respectively. Taking the difference between these two expectations, we have the identification formulas for the average causal effect $\tau$, which can be written as the standard identification formulas under the unconfoundedness assumption minus the correction term $E[\{1 - e(X)\}\delta_1(X) + e(X)\delta_0(X)]$. When the sensitivity parameters $\delta_1$ and $\delta_0$ do not depend on $X$, the correction simplifies to $\{1 - P(Z = 1)\}\delta_1 + P(Z = 1)\delta_0$, a constant that depends on the probability of the treatment and $(\delta_1, \delta_0)$.
Similarly, for the average causal effect on the treated, the correction is $E\{\delta_0(X) \mid Z = 1\}$. When the sensitivity parameter does not depend on $X$, the correction simplifies to $\delta_0$, a constant that equals the sensitivity parameter itself.
From the above discussion, sensitivity analysis based on the difference scale is much simpler than that based on the ratio scale. Its simplicity is an advantage. However, it also has disadvantages. First, the sensitivity analysis estimators reduce to the corresponding classic estimators minus prespecified constants. It makes sensitivity analysis a tautology of specifying how different the estimators and the estimands are. Second, the sensitivity parameters in Definition 2 depend on the baseline expectations of the outcome. They may be harder to specify than those in Definition 1. This is a classic reason to focus on the ratio scale in sensitivity analysis (Cornfield et al. 1959; Poole 2010; Ding and VanderWeele 2014).
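A minimal sketch of this simplicity, with illustrative constants $\delta_1$ and $\delta_0$ and reusing objects from the earlier snippets:

```r
# Difference-scale correction: the standard estimate minus a constant.
delta1 <- 0.2
delta0 <- 0.2
corr_ate <- mean((1 - e_hat) * delta1 + e_hat * delta0)  # correction for tau
tau_diff <- tau_dr - corr_ate
corr_att <- delta0                        # correction for tau_T is delta0 itself
```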
Sjölander et al. (2022) generalized the sensitivity parameters on the difference scale to allow for the transformation of the conditional means by a smooth link function and proposed an outcome regression approach to conduct sensitivity analyses with generalized linear models.
6.2 Extension to nonlinear causal parameters
We can extend the current sensitivity analysis framework to analyze a more general class of nonlinear causal parameters $g(\mu^1, \mu^0)$, a general function of the marginal means of the potential outcomes $\mu^1 = E\{Y(1)\}$ and $\mu^0 = E\{Y(0)\}$. Previous results focus on the special case $g(\mu^1, \mu^0) = \mu^1 - \mu^0$. More generally, many other functional forms are of interest. Specifically, when the potential outcomes are binary, the causal risk ratio and the causal odds ratio, $\mathrm{rr} = \mu^1/\mu^0$ and $\mathrm{or} = \{\mu^1/(1 - \mu^1)\}/\{\mu^0/(1 - \mu^0)\}$, are two canonical causal parameters. We can utilize the nonparametric identification and proposed estimators for $\mu^1$ and $\mu^0$ and construct estimators for a general causal parameter by simply plugging the estimators of $\mu^1$ and $\mu^0$ into the function $g$. The plug-in outcome regression and inverse probability weighting estimators are straightforward. The plug-in doubly robust estimator remains doubly robust and achieves the semiparametric efficiency bound. We also implement estimators for rr, or, and their logarithms in the R package saci.
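A minimal sketch of the plug-in step, assuming sensitivity-adjusted estimates of the two marginal means built from the earlier snippets (with binary outcomes, the outcome models would be refit by logistic regression):

```r
# Plug-in causal risk ratio and odds ratio from estimated marginal means.
mu1_sa <- mean(z * mu1_hat + (1 - z) * mu1_hat / eps1) +
  mean(w1_hat * z * (y - mu1_hat) / e_hat)
mu0_sa <- mean(z * eps0 * mu0_hat + (1 - z) * mu0_hat) +
  mean(w0_hat * (1 - z) * (y - mu0_hat) / (1 - e_hat))
rr_hat <- mu1_sa / mu0_sa
or_hat <- (mu1_sa / (1 - mu1_sa)) / (mu0_sa / (1 - mu0_sa))
```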
6.3 Other extensions in the supplementary material
For the inverse propensity score weighting and doubly robust estimators, we presented only the Horvitz–Thompson-type estimators. It is straightforward to obtain the corresponding Hajek-type estimators (Hájek 1971). We present them in Section S1.5 in the supplementary material.
Another canonical estimation strategy is matching (Rubin 2006). Lin et al. (2023) pointed out that Abadie and Imbens (2011)’s bias-corrected matching estimator has an equivalent form as the doubly robust estimator if we view matching as a nonparametric method to estimate the propensity score. Therefore, similar sensitivity analysis formulas also hold for matching. We present them in Section S1.6 in the supplementary material.
Another parameter of interest is the average causal effect on the treated units. Under our sensitivity analysis framework, we provide identification results, estimation procedures, and real-world examples to estimate the average causal effect on the treated units. We also implement estimators in our R package. We present details in Section S1.1 in the supplementary material.
We also extend our framework of sensitivity analysis to survival outcomes with the parameter of interest defined as the difference between two survival functions and to observational studies with a multi-valued treatment. Moreover, estimating the controlled direct effect reduces to estimating causal effects with a multi-valued treatment if we view the treatment and the intermediate variable as two treatment factors (VanderWeele 2015). Again, we relegate the details to Sections S1.2 and S1.3 in the supplementary material.
7 Discussion
We focus on the cross-sectional setting, while the framework can be extended to deal with longitudinal data (Robins 1999). With a time-varying treatment, the sensitivity parameters are also time-varying and depend on the past treatment trajectory. We can derive similar nonparametric identification formulas by modifying the g-formula or the inverse probability weighting estimators under sequential ignorability. We leave the derivation of the semiparametric efficient influence function and the implementation of the motivated estimator to future research.
Acknowledgements
Peng Ding was partially supported by the U.S. National Science Foundation # 1945136. A reviewer and the Associate Editor made constructive comments.
References
- Abadie and Imbens (2006) Abadie, A. and Imbens, G. W. (2006). Large sample properties of matching estimators for average treatment effects. Econometrica 74, 235–267.
- Abadie and Imbens (2011) Abadie, A. and Imbens, G. W. (2011). Bias-corrected matching estimators for average treatment effects. Journal of Business and Economic Statistics 29, 1–11.
- Bang and Robins (2005) Bang, H. and Robins, J. M. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics 61, 962–973.
- Bazzano et al. (2003) Bazzano, L. A., He, J., Muntner, P., Vupputuri, S., and Whelton, P. K. (2003). Relationship between cigarette smoking and novel risk factors for cardiovascular disease in the United States. Annals of Internal Medicine 138, 891–897.
- Bickel et al. (1993) Bickel, P. J., Klaassen, C. A., Ritov, Y., and Wellner, J. A. (1993). Efficient and Adaptive Estimation for Semiparametric Models. Baltimore: Johns Hopkins University Press.
- Chernozhukov et al. (2018) Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal 21, C1–C68.
- Cornfield et al. (1959) Cornfield, J., Haenszel, W., Hammond, E. C., Lilienfeld, A. M., Shimkin, M. B., and Wynder, E. L. (1959). Smoking and lung cancer: recent evidence and a discussion of some questions. Journal of the National Cancer Institute 22, 173–203.
- Cox (1972) Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society: Series B (Methodological) 34, 187–202.
- Ding (2024) Ding, P. (2024). A First Course in Causal Inference. New York: Chapman & Hall.
- Ding and Li (2018) Ding, P. and Li, F. (2018). Causal inference: a missing data perspective. Statistical Science 33, 214–237.
- Ding and Lu (2017) Ding, P. and Lu, J. (2017). Principal stratification analysis using principal scores. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 79, 757–777.
- Ding and VanderWeele (2014) Ding, P. and VanderWeele, T. J. (2014). Generalized Cornfield conditions for the risk difference. Biometrika 101, 971–977.
- Ding and VanderWeele (2016) Ding, P. and VanderWeele, T. J. (2016). Sensitivity analysis without assumptions. Epidemiology 27, 368–377.
- Dorn and Guo (2022) Dorn, J. and Guo, K. (2022). Sharp sensitivity analysis for inverse propensity weighting via quantile balancing. Journal of the American Statistical Association 00, 1–13.
- Firth and Bennett (1998) Firth, D. and Bennett, K. E. (1998). Robust models in probability sampling (with discussion). Journal of the Royal Statistical Society: Series B (Statistical Methodology) 60, 3–21.
- Franks et al. (2020) Franks, A., D’Amour, A., and Feller, A. (2020). Flexible sensitivity analysis for observational studies without observable implications. Journal of the American Statistical Association 115, 1730–1746.
- Hahn (1998) Hahn, J. (1998). On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica 66, 315–331.
- Hájek (1971) Hájek, J. (1971). Comment: An essay on the logical foundations of survey sampling, part one. In The Foundations of Survey Sampling, page 236.
- Hernán (2010) Hernán, M. A. (2010). The hazards of hazard ratios. Epidemiology 21, 13–15.
- Imai and Van Dyk (2004) Imai, K. and Van Dyk, D. A. (2004). Causal inference with general treatment regimes: Generalizing the propensity score. Journal of the American Statistical Association 99, 854–866.
- Imbens (2000) Imbens, G. W. (2000). The role of the propensity score in estimating dose-response functions. Biometrika 87, 706–710.
- Imbens (2003) Imbens, G. W. (2003). Sensitivity to exogeneity assumptions in program evaluation. American Economic Review 93, 126–132.
- Jiang et al. (2022) Jiang, Z., Yang, S., and Ding, P. (2022). Multiply robust estimation of causal effects under principal ignorability. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 84, 1423–1445.
- Kasza et al. (2017) Kasza, J., Wolfe, R., and Schuster, T. (2017). Assessing the impact of unmeasured confounding for binary outcomes using confounding functions. International Journal of Epidemiology 46, 1303–1311.
- Lin et al. (1998) Lin, D. Y., Psaty, B. M., and Kronmal, R. A. (1998). Assessing the sensitivity of regression results to unmeasured confounders in observational studies. Biometrics 54, 948–963.
- Lin et al. (2023) Lin, Z., Ding, P., and Han, F. (2023). Estimation based on nearest neighbor matching: from density ratio to average treatment effect. Econometrica 91, 2187–2217.
- Poole (2010) Poole, C. (2010). On the origin of risk relativism. Epidemiology 21, 3–9.
- Robins (1999) Robins, J. M. (1999). Association, causation, and marginal structural models. Synthese 121, 151–179.
- Rosenbaum (1987) Rosenbaum, P. R. (1987). Sensitivity analysis for certain permutation inferences in matched observational studies. Biometrika 74, 13–26.
- Rosenbaum and Rubin (1983a) Rosenbaum, P. R. and Rubin, D. B. (1983a). Assessing sensitivity to an unobserved binary covariate in an observational study with binary outcome. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 45, 212–218.
- Rosenbaum and Rubin (1983b) Rosenbaum, P. R. and Rubin, D. B. (1983b). The central role of the propensity score in observational studies for causal effects. Biometrika 70, 41–55.
- Rubin (1978) Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics 6, 34–58.
- Rubin (2006) Rubin, D. B. (2006). Matched Sampling for Causal Effects. Cambridge: Cambridge University Press.
- Scharfstein et al. (1999) Scharfstein, D. O., Rotnitzky, A., and Robins, J. M. (1999). Adjusting for nonignorable drop-out using semiparametric nonresponse models. Journal of the American Statistical Association 94, 1096–1120.
- Sjölander et al. (2022) Sjölander, A., Gabriel, E. E., and Ciocănea-Teodorescu, I. (2022). Sensitivity analysis for causal effects with generalized linear models. Journal of Causal Inference 10, 441–479.
- Tchetgen and Shpitser (2012) Tchetgen, E. J. T. and Shpitser, I. (2012). Semiparametric theory for causal mediation analysis: efficiency bounds, multiple robustness, and sensitivity analysis. Annals of Statistics 40, 1816–1845.
- VanderWeele (2015) VanderWeele, T. J. (2015). Explanation in Causal Inference: Methods for Mediation and Interaction. Oxford: Oxford University Press.
- VanderWeele and Ding (2017) VanderWeele, T. J. and Ding, P. (2017). Sensitivity analysis in observational research: introducing the e-value. Annals of Internal Medicine 167, 268–274.
- VanderWeele et al. (2014) VanderWeele, T. J., Tchetgen, E. J. T., and Halloran, M. E. (2014). Interference and sensitivity analysis. Statistical Science 29, 687.
- Xie and Liu (2005) Xie, J. and Liu, C. (2005). Adjusted Kaplan–Meier estimator and log-rank test with inverse probability of treatment weighting for survival data. Statistics in Medicine 24, 3089–3110.
- Yang and Lok (2018) Yang, S. and Lok, J. J. (2018). Sensitivity analysis for unmeasured confounding in coarse structural nested mean models. Statistica Sinica 28, 1703–1723.
- Zhang and Small (2020) Zhang, B. and Small, D. S. (2020). A calibrated sensitivity analysis for matched observational studies with application to the effect of second-hand smoke exposure on blood lead levels in children. Journal of the Royal Statistical Society Series C: Applied Statistics 69, 1285–1305.
- Zhao et al. (2019) Zhao, Q., Small, D. S., and Bhattacharya, B. B. (2019). Sensitivity analysis for inverse probability weighting estimators via the percentile bootstrap. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 81, 735–761.
- Zubizarreta et al. (2013) Zubizarreta, J. R., Cerda, M., and Rosenbaum, P. R. (2013). Effect of the 2010 Chilean earthquake on posttraumatic stress: reducing sensitivity to unmeasured bias through study design. Epidemiology 24, 79–87.
Supplementary Material
The supplementary materials contain the following sections.
Section S1 presents additional results on several extensions, formulas for the Hajek-type estimators and for the bias-corrected matching estimator, semiparametric efficiency bounds for $\tau$ and $\tau_T$, and the connection of the sensitivity parameters to the treatment selection model.
Section S2 provides guidance to our R package and presents additional data analysis results.
Section S3 presents all the proofs.
Appendix S1 Additional results
S1.1 Extension to the average treatment effect on the treated units
Another parameter of interest is the average treatment effect on the treated units,

$\tau_T = E\{Y(1) - Y(0) \mid Z = 1\} = E(Y \mid Z = 1) - E\{Y(0) \mid Z = 1\}.$

Let $\mu_T^1 = E(Y \mid Z = 1)$ and $\mu_T^0 = E\{Y(0) \mid Z = 1\}$ denote these two terms. The first term has a simple moment estimator $\hat\mu_T^1 = n_1^{-1}\sum_{i=1}^n Z_i Y_i$, where $n_1 = \sum_{i=1}^n Z_i$ is the total number of treated units. The key is to identify and estimate the second term $\mu_T^0$. In parallel with Theorems 1 and 2, $\mu_T^0$ has two identification forms based on the outcome and the propensity score models. Define $o(X) = e(X)/\{1 - e(X)\}$, the conditional odds of the treatment given covariates.
Theorem S1 (outcome regression and inverse propensity score weighting for $\mu_T^0$).
Under Definition 1, we have

$\mu_T^0 = \frac{E\{Z\varepsilon_0(X)\mu_0(X)\}}{P(Z = 1)} = \frac{E\{\varepsilon_0(X)o(X)(1 - Z)Y\}}{P(Z = 1)}.$
Theorem S1 motivates using $\hat\tau_T = \hat\mu_T^1 - \hat\mu_T^0$ to estimate $\tau_T$, where

$\hat\mu_T^{0,\mathrm{reg}} = n_1^{-1}\sum_{i=1}^n Z_i\varepsilon_0(X_i)\hat\mu_0(X_i), \qquad \hat\mu_T^{0,\mathrm{ht}} = n_1^{-1}\sum_{i=1}^n \varepsilon_0(X_i)\hat o(X_i)(1 - Z_i)Y_i,$

with the estimated conditional odds of the treatment given covariates, $\hat o(X) = \hat e(X)/\{1 - \hat e(X)\}$. To motivate the doubly robust estimator for $\tau_T$, we also derive the efficient influence function for $\tau_T$.
Theorem S2 (efficient influence function for $\tau_T$).
Under Definition 1, the efficient influence function for $\tau_T$ equals

$\phi_T = \frac{1}{P(Z = 1)}\left[ Z(Y - \tau_T) - \varepsilon_0(X)o(X)(1 - Z)\{Y - \mu_0(X)\} - Z\varepsilon_0(X)\mu_0(X) \right].$
The efficient influence function for $\tau_T$ has mean zero, so $\tau_T$ has the following representation:

$\tau_T = \frac{1}{P(Z = 1)}E\left[ ZY - \varepsilon_0(X)o(X)(1 - Z)\{Y - \mu_0(X)\} - Z\varepsilon_0(X)\mu_0(X) \right].$

This representation motivates the following estimator for $\tau_T$:

$\hat\tau_T^{\mathrm{dr}} = \hat\mu_T^1 - n_1^{-1}\sum_{i=1}^n \left[ \varepsilon_0(X_i)\hat o(X_i)(1 - Z_i)\{Y_i - \hat\mu_0(X_i)\} + Z_i\varepsilon_0(X_i)\hat\mu_0(X_i) \right] = n_1^{-1}\sum_{i=1}^n Z_i Y_i - n_1^{-1}\sum_{i=1}^n \varepsilon_0(X_i)\left[ \hat o(X_i)(1 - Z_i)\{Y_i - \hat\mu_0(X_i)\} + Z_i\hat\mu_0(X_i) \right],$

which are two numerically identical forms based on the fitted propensity score and outcome models.
Importantly, Theorem S3 below shows that $\hat\tau_T^{\mathrm{dr}}$ is doubly robust.
Theorem S3 (double robustness for estimating ).
Under Definition 1, the estimator $\hat\tau_T^{\mathrm{dr}}$ is doubly robust in the sense that it is consistent if either the propensity score model or the outcome model is correctly specified.
We also provide a worst-case interpretation for the average treatment effect on the treated with a constant value of $\varepsilon_0$ in the special scenario when the outcome is non-negative (e.g., binary). Again, the constant is taken as either the lower bound or the upper bound of $\varepsilon_0(X)$, depending on the direction of the average treatment effect on the treated. We give a formal result in the following proposition.
Proposition S1.
Let $\underline\varepsilon_0$ and $\overline\varepsilon_0$ denote the lower and upper bounds of $\varepsilon_0(X)$, i.e., $\underline\varepsilon_0 \leq \varepsilon_0(X) \leq \overline\varepsilon_0$. In the special case when the potential outcomes are non-negative (e.g., binary), we have $\underline\tau_T \leq \tau_T \leq \overline\tau_T$, where $\underline\tau_T$ is computed using the same formula as $\tau_T$, with the replacement of $\varepsilon_0(X)$ by $\overline\varepsilon_0$, and $\overline\tau_T$ is computed with the replacement of $\varepsilon_0(X)$ by $\underline\varepsilon_0$.
Proposition S1 gives bounds based on the three identification formulas based on the outcome model, the propensity score model, and the efficient influence function for $\tau_T$. Define $\hat{\underline\tau}_T$ and $\hat{\overline\tau}_T$ by replacing $\varepsilon_0(X)$ with $\overline\varepsilon_0$ and $\underline\varepsilon_0$, respectively, in the corresponding estimators. Following Proposition S1, we can treat $\hat{\underline\tau}_T$ and $\hat{\overline\tau}_T$ as estimators of the lower and upper bounds of $\tau_T$, respectively. Symmetrically, when the potential outcomes are non-positive, we have analogous results to Proposition S1.
Example S1.
We continue the application study in Section 5.1 by applying the above results to estimate $\tau_T$ with the same observational data, focusing on the other causal parameter of interest, the average treatment effect on the treated. The estimated $\tau_T$ under the unconfoundedness assumption is 1.36, with a 95% confidence interval (0.66, 2.05), using the same unit of the homocysteine level, μmol/L. We conduct sensitivity analysis using the doubly robust estimator $\hat\tau_T^{\mathrm{dr}}$. The estimator of $\tau_T$ only depends on the value of $\varepsilon_0(X)$; we again simplify it to a constant not depending on $X$, vary it from 0.75 to 1.25, and report the results for larger values of $\varepsilon_0$, since for values smaller than 1, the estimates of $\tau_T$ are all significantly positive at the 0.05 level. Table 3 summarizes the results. For $\varepsilon_0 \geq 1.10$, the inference changes, and for $\varepsilon_0 \geq 1.20$, the estimated $\tau_T$ becomes negative.
Table 3: Sensitivity analysis for the average treatment effect on the treated in the smoking example: doubly robust estimates and 95% confidence intervals for different $\varepsilon_0$.

| $\varepsilon_0$ | estimate (95% CI) |
|---|---|
| 0.90 | 2.18 (1.41, 2.96) |
| 0.95 | 1.77 (1.07, 2.46) |
| 1.00 | 1.36 (0.66, 2.05) |
| 1.05 | 0.94 (0.19, 1.70) |
| 1.10 | 0.53 (-0.22, 1.29) |
| 1.15 | 0.12 (-0.64, 0.88) |
| 1.20 | -0.29 (-1.05, 0.47) |
| 1.25 | -0.70 (-1.52, 0.11) |
S1.2 Extension to survival outcomes
We can also extend the results to survival outcomes. To avoid the problem that the hazard ratio has a built-in selection bias for causal inference even under randomization (Hernán 2010), we define the parameter of interest as the difference between the two survival functions,

$\Delta(t) = S_1(t) - S_0(t),$

where $S_z(t) = P\{T(z) > t\}$ denotes the potential survival function of the survival time $T(z)$ for $z = 0, 1$. For a fixed $t$, $\Delta(t)$ is the average causal effect on the indicator $1\{T(z) > t\}$. Define the sensitivity parameters as

$\varepsilon_1(t, X) = \frac{P\{T(1) > t \mid Z = 1, X\}}{P\{T(1) > t \mid Z = 0, X\}}, \qquad \varepsilon_0(t, X) = \frac{P\{T(0) > t \mid Z = 1, X\}}{P\{T(0) > t \mid Z = 0, X\}}.$
Define $S_z(t \mid X) = P(T > t \mid Z = z, X)$ as the conditional survival probability for $z = 0, 1$. We can rewrite the survival functions of the potential outcomes as

$S_1(t) = E\left[ Z\,1(T > t) + (1 - Z)\frac{S_1(t \mid X)}{\varepsilon_1(t, X)} \right] \qquad (S1)$

$= E\left\{ w_1(t, X)\frac{Z\,1(T > t)}{e(X)} \right\} \qquad (S2)$

$= E\left[ w_1(t, X)\frac{Z\{1(T > t) - S_1(t \mid X)\}}{e(X)} + \left\{1 - \frac{1}{\varepsilon_1(t, X)}\right\}\{Z - e(X)\}S_1(t \mid X) + w_1(t, X)S_1(t \mid X) \right] \qquad (S3)$

and

$S_0(t) = E\left[ Z\varepsilon_0(t, X)S_0(t \mid X) + (1 - Z)\,1(T > t) \right] \qquad (S4)$

$= E\left\{ w_0(t, X)\frac{(1 - Z)\,1(T > t)}{1 - e(X)} \right\} \qquad (S5)$

$= E\left[ w_0(t, X)\frac{(1 - Z)\{1(T > t) - S_0(t \mid X)\}}{1 - e(X)} + \{\varepsilon_0(t, X) - 1\}\{Z - e(X)\}S_0(t \mid X) + w_0(t, X)S_0(t \mid X) \right], \qquad (S6)$

where $w_1(t, X) = e(X) + \{1 - e(X)\}/\varepsilon_1(t, X)$ and $w_0(t, X) = 1 - e(X) + e(X)\varepsilon_0(t, X)$ are the weights.
Censoring is a major challenge for analyzing survival outcomes. To construct the outcome regression estimator, we must use models and estimators tailored to survival outcomes and censoring (e.g., Cox 1972). With estimated $\hat S_z(t \mid X)$, (S1) and (S4) motivate the outcome regression estimator

$\hat\Delta^{\mathrm{reg}}(t) = n^{-1}\sum_{i=1}^n \left\{ Z_i + \frac{1 - Z_i}{\varepsilon_1(t, X_i)} \right\}\hat S_1(t \mid X_i) - n^{-1}\sum_{i=1}^n \left\{ Z_i\varepsilon_0(t, X_i) + 1 - Z_i \right\}\hat S_0(t \mid X_i).$
To construct the inverse propensity score weighting estimator, we must use an inverse probability weighting adjusted Kaplan–Meier estimator (Xie and Liu 2005). Let $\tilde T = \min(T, C)$ denote the minimum of the outcome $T$ and the censoring time $C$, and $\delta = 1(T \leq C)$ denote the censoring indicator. Let $t_k$'s denote the times when at least one event happens. For each group with treatment $z$, define the inverse probability weighted number of events that happened at time $t_k$ and the inverse probability weighted number of individuals at risk at time $t_k$ as

$d_{zk} = \sum_{i: Z_i = z} \hat W_z(t_k, X_i)\,1(\tilde T_i = t_k, \delta_i = 1), \qquad n_{zk} = \sum_{i: Z_i = z} \hat W_z(t_k, X_i)\,1(\tilde T_i \geq t_k),$

where $\hat W_1(t, X) = \hat w_1(t, X)/\hat e(X)$ and $\hat W_0(t, X) = \hat w_0(t, X)/\{1 - \hat e(X)\}$. Then (S2) and (S5) motivate the weighted Kaplan–Meier estimator

$\hat S_z(t) = \prod_{k: t_k \leq t}\left( 1 - \frac{d_{zk}}{n_{zk}} \right), \qquad \hat\Delta^{\mathrm{ht}}(t) = \hat S_1(t) - \hat S_0(t).$
With the inverse probability weighting estimator, (S3) and (S6) motivate the following estimator that combines the estimated propensity score and survival probability models: plug the fitted $\hat e(X)$ and $\hat S_z(t \mid X)$ into the sample analogues of (S3) and (S6), and take the difference.
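A hedged sketch of the weighted Kaplan–Meier curve for the treated arm with a constant $\varepsilon_1$ (so the weight does not vary with $t_k$); all names below are illustrative, and the control arm is analogous with weight $\hat w_0(t, X)/\{1 - \hat e(X)\}$:

```r
# IPW-adjusted Kaplan-Meier estimate of S_1(t) at the event times;
# t_obs = min(T, C), delta = event indicator, e_hat = fitted propensity score.
weighted_km_treated <- function(t_obs, delta, z, e_hat, eps1) {
  w <- (e_hat + (1 - e_hat) / eps1) / e_hat   # w_1(X) / e(X)
  tk <- sort(unique(t_obs[z == 1 & delta == 1]))
  surv <- numeric(length(tk))
  s <- 1
  for (k in seq_along(tk)) {
    d_k <- sum(w[z == 1] * (t_obs[z == 1] == tk[k]) * delta[z == 1])
    n_k <- sum(w[z == 1] * (t_obs[z == 1] >= tk[k]))
    s <- s * (1 - d_k / n_k)
    surv[k] <- s
  }
  data.frame(time = tk, surv = surv)
}
```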
S1.3 Extensions to observational studies with a multi-valued treatment
The results also extend to observational studies with a multi-valued treatment, $Z \in \{1, \ldots, K\}$. Each unit has $K$ potential outcomes $\{Y(1), \ldots, Y(K)\}$ corresponding to the $K$ treatment levels. We can define the causal parameters of interest as the comparisons of potential outcomes

$\tau_C = \sum_{k=1}^K c_k E\{Y(k)\},$

where $(c_1, \ldots, c_K)$ are pre-specified contrast coefficients.
Define the sensitivity parameters as below.
Definition S1.
For any two treatment levels $k$ and $k'$, define the sensitivity parameters as

$\varepsilon_{k,k'}(X) = \frac{E\{Y(k) \mid Z = k, X\}}{E\{Y(k) \mid Z = k', X\}}.$
In Definition S1, we must have $\varepsilon_{k,k}(X) = 1$ for all $k$'s. Under Definition S1, we have the following identification formulas for the means of the potential outcomes:

$E\{Y(k)\} = E\left\{ \sum_{k'=1}^K 1(Z = k')\frac{\mu_k(X)}{\varepsilon_{k,k'}(X)} \right\} = E\left\{ w_k(X)\frac{1(Z = k)Y}{e_k(X)} \right\}, \qquad w_k(X) = \sum_{k'=1}^K \frac{e_{k'}(X)}{\varepsilon_{k,k'}(X)},$

where $e_k(X) = P(Z = k \mid X)$ is the generalized propensity score (Imbens 2000; Imai and Van Dyk 2004) and $\mu_k(X) = E(Y \mid Z = k, X)$ is the conditional outcome mean. Motivated by these identification formulas, we can construct the outcome regression, inverse propensity score weighting, and doubly robust estimators $\hat E\{Y(k)\}$ for each $E\{Y(k)\}$, in parallel with Section 3.2, and the following estimator for the causal parameter of interest:

$\hat\tau_C = \sum_{k=1}^K c_k \hat E\{Y(k)\}.$
S1.4 Extensions to the controlled direct effect
With an intermediate variable $M$ between the treatment $Z$ and the outcome $Y$, we use the notation $Y(z, m)$ to denote the potential outcome under treatment assignment $z$ and intermediate value $m$. When $Z$ is binary, for a fixed level $m$, we define the controlled direct effect as

$\mathrm{CDE}(m) = E\{Y(1, m) - Y(0, m)\},$

which measures the average treatment effect of $Z$ when the level of the intermediate variable is fixed at $m$, and thus measures the direct effect of the treatment on the outcome controlling the mediator value at $m$. As an extension, if $M$ is discrete, we can view the problem of identifying and estimating the controlled direct effect as an observational study with multiple treatment levels, treating the combination $(z, m)$ as a multi-valued treatment (VanderWeele 2015; Chapter 28 in Ding 2024). Therefore, we can apply the above framework in Section S1.3 to conduct sensitivity analysis for the controlled direct effect. We omit the details.
S1.5 Formulas for the Hajek-type estimators
Under our sensitivity analysis framework, Theorem 2 motivates the following Hajek estimator of the average causal effect:

$\hat\tau^{\mathrm{hajek}} = \frac{\sum_{i=1}^n \hat w_1(X_i)Z_i Y_i/\hat e(X_i)}{\sum_{i=1}^n Z_i/\hat e(X_i)} - \frac{\sum_{i=1}^n \hat w_0(X_i)(1 - Z_i)Y_i/\{1 - \hat e(X_i)\}}{\sum_{i=1}^n (1 - Z_i)/\{1 - \hat e(X_i)\}}.$

The denominators in the Hajek estimator take the same form as the canonical ones, which do not depend on $\varepsilon_1$ or $\varepsilon_0$. We also have the following alternative form of the doubly robust estimator based on the Hajek estimator:

$\hat\tau^{\mathrm{dr\text{-}hajek}} = \hat\tau^{\mathrm{proj}} + \frac{\sum_{i=1}^n \hat w_1(X_i)Z_i\{Y_i - \hat\mu_1(X_i)\}/\hat e(X_i)}{\sum_{i=1}^n Z_i/\hat e(X_i)} - \frac{\sum_{i=1}^n \hat w_0(X_i)(1 - Z_i)\{Y_i - \hat\mu_0(X_i)\}/\{1 - \hat e(X_i)\}}{\sum_{i=1}^n (1 - Z_i)/\{1 - \hat e(X_i)\}}.$

In the special case with the unconfoundedness assumption, $\varepsilon_1(X) = \varepsilon_0(X) = 1$, they reduce to the standard Hajek estimator and doubly robust estimator for $\tau$.
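A minimal sketch of the Hajek version, reusing objects from the earlier snippets; note that the denominators do not involve the sensitivity parameters:

```r
# Hajek-type sensitivity analysis estimator of tau.
tau_hajek_sa <-
  sum(w1_hat * z * y / e_hat) / sum(z / e_hat) -
  sum(w0_hat * (1 - z) * y / (1 - e_hat)) / sum((1 - z) / (1 - e_hat))
```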
Similarly, Theorem S1 motivates the following Hajek estimator of the average treatment effect on the treated units, $\hat\tau_T^{\mathrm{hajek}} = \hat\mu_T^1 - \hat\mu_T^{0,\mathrm{hajek}}$, where

$\hat\mu_T^{0,\mathrm{hajek}} = \frac{\sum_{i=1}^n \varepsilon_0(X_i)\hat o(X_i)(1 - Z_i)Y_i}{\sum_{i=1}^n \hat o(X_i)(1 - Z_i)}.$

The doubly robust estimator is $\hat\tau_T^{\mathrm{dr\text{-}hajek}} = \hat\mu_T^1 - \hat\mu_T^{0,\mathrm{dr\text{-}hajek}}$ with

$\hat\mu_T^{0,\mathrm{dr\text{-}hajek}} = n_1^{-1}\sum_{i=1}^n Z_i\varepsilon_0(X_i)\hat\mu_0(X_i) + \frac{\sum_{i=1}^n \varepsilon_0(X_i)\hat o(X_i)(1 - Z_i)\{Y_i - \hat\mu_0(X_i)\}}{\sum_{i=1}^n \hat o(X_i)(1 - Z_i)}.$

Again, in the special case when $\varepsilon_0(X) = 1$, they reduce to the standard Hajek estimator and doubly robust estimator for $\tau_T$.
S1.6 Formulas for the bias-corrected matching estimator
We also provide the formulas for the bias-corrected matching estimator under our sensitivity analysis framework. Matching estimators impute the missing counterfactual for an individual $i$ in treatment group $z$ by finding $M$ nearest neighbors of $i$ in treatment group $1 - z$ and use the average observed outcome of the nearest neighbors as the imputed value of the counterfactual for $i$. The matching-based estimator of $\tau$ is then constructed by taking the average of the imputed individual treatment effects (Abadie and Imbens 2006). This estimator is generally not consistent when the nearest neighbor matching is based on multi-dimensional observed covariates. Abadie and Imbens (2011) proposed a bias-corrected version of the matching estimator by estimating the conditional outcome models and combining them with the original matching estimator. As shown in Lemma 5.1 in Lin et al. (2023), under Assumption 1, viewing matching as a nonparametric method of estimating the propensity score, we can rewrite the bias-corrected matching estimator in Abadie and Imbens (2011) as

$\hat\tau^{\mathrm{bc}} = n^{-1}\sum_{i=1}^n \{\hat\mu_1(X_i) - \hat\mu_0(X_i)\} + n^{-1}\sum_{i=1}^n (2Z_i - 1)\left(1 + \frac{K_i}{M}\right)\{Y_i - \hat\mu_{Z_i}(X_i)\},$

where $M$ is the fixed number of matches for each observation and $K_i$ is the number of times unit $i$ in treatment group $z$ is matched, $z = 0, 1$.
Under our sensitivity analysis framework, we can rewrite the bias-corrected matching estimator as

$\hat\tau^{\mathrm{bc}} = \hat\tau^{\mathrm{proj}} + n^{-1}\sum_{i=1}^n Z_i\left\{ 1 + \frac{K_i}{M\varepsilon_1(X_i)} \right\}\{Y_i - \hat\mu_1(X_i)\} - n^{-1}\sum_{i=1}^n (1 - Z_i)\left\{ 1 + \frac{\varepsilon_0(X_i)K_i}{M} \right\}\{Y_i - \hat\mu_0(X_i)\}.$

Lin et al. (2023) show that under some regularity conditions, $(n_z/n_{1-z})(K_i/M)$ consistently estimates the density ratio $f(X_i \mid Z = 1 - z)/f(X_i \mid Z = z)$ for a unit $i$ in treatment group $z$, where $n_z$ is the total number of observations in treatment group $z$. Moreover, $n_{1-z}/n_z$ is a consistent estimator of the ratio of the marginal probabilities $P(Z = 1 - z)/P(Z = z)$; thus, we essentially use $1 + K_i/M$ for the treated units and $1 + K_i/M$ for the control units to estimate $1/e(X_i)$ and $1/\{1 - e(X_i)\}$, respectively.
S1.7 Semiparametric efficiency bounds for and
In this subsection, we provide the semiparametric efficiency bounds for $\tau$ and $\tau_T$ under our sensitivity analysis framework in the following proposition.
Proposition S2.
Under Definition 1, the semiparametric efficiency bounds for estimating $\tau$ and $\tau_T$ equal the variances of their efficient influence functions in Theorems 3 and S2, respectively; their explicit forms are derived in Section S3.10.
S1.8 Connection to the treatment selection model
In this subsection, we show that our proposed sensitivity parameters are also related to the treatment selection model, although their definitions are based on the outcome models.
Take the treatment selection model $P\{Z = 1 \mid X, Y(1)\}$ for example. Under Definition 1, when $\varepsilon_1(X) \neq 1$, $P\{Z = 1 \mid X, Y(1)\}$ is generally not equal to $e(X)$ and does depend on $Y(1)$. We next derive the relationship between $\varepsilon_1(X)$ and $P\{Z = 1 \mid X, Y(1)\}$ to provide the connection between our sensitivity parameters and the selection model.
We start with the special case when $Y(1)$ is binary. In this case, we have

$\varepsilon_1(X) = \frac{e_1(X, 1)/\{1 - e_1(X, 1)\}}{e(X)/\{1 - e(X)\}}, \qquad (S7)$

where $e_1(X, y) = P\{Z = 1 \mid Y(1) = y, X\}$ is the propensity score under selection to treatment for $Y(1) = y$. The term on the right-hand side depends on $e_1(X, 1)$; thus, (S7) shows the connection between the sensitivity parameter and the selection model. Previous sensitivity analysis frameworks bound the ratio

$\frac{e_1(X, y)/\{1 - e_1(X, y)\}}{e(X)/\{1 - e(X)\}}$

to estimate $\tau$, which works better for the IPW estimators (Zhao et al. 2019; Dorn and Guo 2022).
For a general $Y(1)$, we first introduce a set of ratios $r_1(y, X)$ to denote the ratio between the observed and counterfactual conditional densities,

$r_1(y, X) = \frac{f_1(y \mid Z = 1, X)}{f_1(y \mid Z = 0, X)},$

where $f_1(y \mid Z = z, X)$ denotes the conditional density of $Y(1)$ in treatment group $z$ for $z = 0, 1$, evaluated at $Y(1) = y$. We have the following equalities by Bayes' rule:

$r_1(y, X) = \frac{e_1(X, y)/\{1 - e_1(X, y)\}}{e(X)/\{1 - e(X)\}}, \qquad \varepsilon_1(X) = \frac{E\{Y(1)r_1(Y(1), X) \mid Z = 0, X\}}{E\{Y(1) \mid Z = 0, X\}}.$

This provides the relation of the treatment selection model to our sensitivity parameter.
Appendix S2 R package and implementation
We provide an R package saci to conduct the proposed sensitivity analysis. The package can be installed from Github using the command: devtools::install_github("sizhu-lu/saci"). We provide the following functions in this package:
sa_ate is used to conduct sensitivity analysis of the average treatment effect. The inputs include the observed data, the user-specified lists of $\varepsilon_1$ and $\varepsilon_0$, the specification of the “family” option in the generalized linear models of the propensity score model and outcome model, the total number of bootstrap samples to compute the estimated standard errors, and the truncation cutoffs of the estimated propensity score. This function outputs the point estimator, estimated standard error, p-value, and the 95% confidence interval for each pair of sensitivity parameters in the user-specified lists and for each method (proj, ht, hajek, and dr). We use the nonparametric bootstrap for standard error estimation.
sa_att is used to conduct sensitivity analysis of the average treatment effect on the treated. The inputs and outputs are similar to sa_ate, except that the estimation procedure only depends on $\varepsilon_0$.
For binary potential outcomes, the function sa_rr is used to conduct sensitivity analysis of the causal risk ratio or log causal risk ratio. The inputs include the observed data, the user-specified lists of $\varepsilon_1$ and $\varepsilon_0$, the specification of the “family” option in the generalized linear models of the propensity score model and outcome model, the total number of bootstrap samples to compute the estimated standard errors, the truncation cutoffs of the estimated propensity score, and the option to take the logarithm of the causal risk ratio or not. This function outputs the point estimator, estimated standard error, p-value, and the 95% confidence interval for each pair of sensitivity parameters in the user-specified lists and for each method (proj, ht, hajek, and dr). Again, we use the nonparametric bootstrap for standard error estimation.
For binary potential outcomes, the function sa_or is used to conduct sensitivity analysis of the causal odds ratio or log causal odds ratio. The inputs and outputs are similar to sa_rr.
We also provide a function plot_contour to generate a contour plot of the estimated average treatment effects with different values of the sensitivity parameters. The usage is plot_contour(ate_result),
where ate_result is the output of the sa_ate function and the default value to plot is the doubly robust point estimator. Figure 1 shows a sample plot for the observational study in Section 5.1. Users are also free to change the estimation method to other estimators and the plotted values to lower or upper confidence bounds by specifying, e.g., estimator="ht" and value="ci_lb".
Appendix S3 Proofs
S3.1 Proof of Theorem 1

By Definition 1, $E\{Y(1) \mid Z = 0, X\} = E\{Y(1) \mid Z = 1, X\}/\varepsilon_1(X) = \mu_1(X)/\varepsilon_1(X)$ and $E\{Y(0) \mid Z = 1, X\} = \varepsilon_0(X)E\{Y(0) \mid Z = 0, X\} = \varepsilon_0(X)\mu_0(X)$, which prove (4) and (5). Plugging (4) and (5) into the decomposition (1) and applying the law of iterated expectations yield (6); replacing $ZY$ and $(1 - Z)Y$ by $Z\mu_1(X)$ and $(1 - Z)\mu_0(X)$, which have the same expectations, yields (7).
S3.2 Proof of Theorem 2
We only prove the result under treatment because the proof for the result under control is similar. On the one hand, we can use (6) to obtain

$E\{Y(1)\} = E\left\{ ZY + (1 - Z)\frac{\mu_1(X)}{\varepsilon_1(X)} \right\} = E\left[ e(X)\mu_1(X) + \{1 - e(X)\}\frac{\mu_1(X)}{\varepsilon_1(X)} \right] = E\{w_1(X)\mu_1(X)\}.$

On the other hand, we can condition on $X$ to obtain

$E\left\{ w_1(X)\frac{ZY}{e(X)} \right\} = E\left\{ w_1(X)\frac{e(X)\mu_1(X)}{e(X)} \right\} = E\{w_1(X)\mu_1(X)\}.$

The conclusion for $E\{Y(1)\}$ follows.
S3.3 Proof of Theorem 3
We first present a short but informal proof. Based on the form $E\{Y(1)\} = E\{w_1(X)\mu_1(X)\}$ as a byproduct in the proof of Theorem 2, we can derive the efficient influence function for $E\{Y(1)\}$ by extending the result of Hahn (1998). The key difference is that the weight $w_1(X)$ depends on the propensity score, so we must include an additional term due to the derivative of $w_1(X)$ with respect to $e(X)$, which equals $1 - 1/\varepsilon_1(X)$. This additional term comes from the path-wise derivative of the parameter under a one-dimensional submodel indexed by $\theta$ with score function $s(O)$. Including this additional term yields the following efficient influence function for $E\{Y(1)\}$:

$w_1(X)\frac{Z\{Y - \mu_1(X)\}}{e(X)} + \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{Z - e(X)\}\mu_1(X) + w_1(X)\mu_1(X) - E\{Y(1)\},$

which reduces to $\phi_1$ in Theorem 3 by simple algebra. The efficient influence function for $E\{Y(0)\}$ follows from a similar calculation.
We then provide a longer but formal proof. We follow the semiparametric theory in Bickel et al. (1993) and Hahn (1998) to derive the efficient influence function for $\mu^1 = E\{Y(1)\}$. We denote the vector of all observed variables by $O = (Z, X, Y)$ and factorize the likelihood as

$f(O) = f(X)\,f(Z \mid X)\,f(Y \mid Z, X).$

To derive the efficient influence function for $\mu^1$, we consider a one-dimensional parametric submodel $f_\theta(O)$ which contains the true model at $\theta = 0$. We use $\theta$ in the subscript to denote the quantity with respect to the submodel, e.g., $\mu^1_\theta$ is the value of $\mu^1$ with respect to the submodel. We use a dot to denote the partial derivative with respect to $\theta$ at $\theta = 0$, e.g., $\dot\mu^1 = \partial\mu^1_\theta/\partial\theta \mid_{\theta = 0}$, and use $s_\theta(O)$ to denote the score function with respect to the submodel. Decompose the score function under the submodel as

$s_\theta(O) = s_\theta(X) + s_\theta(Z \mid X) + s_\theta(Y \mid Z, X),$

where $s_\theta(X)$, $s_\theta(Z \mid X)$, and $s_\theta(Y \mid Z, X)$ are the score functions corresponding to the three components of the likelihood function. We write $s(O)$ as the score function evaluated at the true model under the one-dimensional submodel. Let $\phi_1(O)$ denote the efficient influence function for $\mu^1$. It must satisfy the equation

$\dot\mu^1 = E\{\phi_1(O)s(O)\}.$

To find such $\phi_1(O)$, we need to calculate the derivative of

$\mu^1_\theta = E_\theta\{w_{1,\theta}(X)\mu_{1,\theta}(X)\}, \qquad w_{1,\theta}(X) = e_\theta(X) + \frac{1 - e_\theta(X)}{\varepsilon_1(X)}.$

Therefore, the Gateaux derivative of our parameter of interest under the parametric submodel with respect to $\theta$ is

$\dot\mu^1 = E\{w_1(X)\mu_1(X)s(X)\} + E\left[\left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\mu_1(X)\dot e(X)\right] + E\{w_1(X)\dot\mu_1(X)\}.$

The calculations of the two building blocks $\dot e(X)$ and $\dot\mu_1(X)$ are standard (Hahn 1998):

$\dot e(X) = E[\{Z - e(X)\}s(Z \mid X) \mid X], \qquad \dot\mu_1(X) = E[\{Y - \mu_1(X)\}s(Y \mid Z, X) \mid Z = 1, X].$

Plugging them back into the previous equation, we have

$\dot\mu^1 = E\left(\left[ w_1(X)\frac{Z\{Y - \mu_1(X)\}}{e(X)} + \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{Z - e(X)\}\mu_1(X) + w_1(X)\mu_1(X) - \mu^1 \right]s(O)\right),$

where the equality holds by the properties of score functions and by rearranging terms. Thus,

$\phi_1 = w_1(X)\frac{Z\{Y - \mu_1(X)\}}{e(X)} + \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{Z - e(X)\}\mu_1(X) + w_1(X)\mu_1(X) - \mu^1.$

We can prove the formula for $\phi_0$ using a similar calculation.
S3.4 Proof of Theorem 4
If the fitted propensity score model converges to $e^*(X)$ and the fitted outcome models converge to $\mu_1^*(X)$ and $\mu_0^*(X)$, the doubly robust estimator $\hat\tau^{\mathrm{dr}}$ has limit $\mu^{1*} - \mu^{0*}$, where

$\mu^{1*} = E\left[ w_1^*(X)\frac{Z\{Y - \mu_1^*(X)\}}{e^*(X)} + \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{Z - e^*(X)\}\mu_1^*(X) + w_1^*(X)\mu_1^*(X) \right], \qquad w_1^*(X) = e^*(X) + \frac{1 - e^*(X)}{\varepsilon_1(X)},$

and the formula for $\mu^{0*}$ is omitted due to similarity. We will only show that $\mu^{1*} = E\{Y(1)\}$ if either the propensity score or the outcome model is correctly specified. The proof for the control potential outcome is analogous.

If $e^*(X) = e(X)$, the conclusion is obvious since the inverse probability weighting estimator is consistent and the additional terms have mean zero. If $\mu_1^*(X) = \mu_1(X)$, then by definition and the law of iterated expectations, $\mu^{1*}$ equals the expectation of

$g_1(X)\,\mu_1(X),$

where

$g_1(X) = \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{e(X) - e^*(X)\} + w_1^*(X) = e(X) + \frac{1 - e(X)}{\varepsilon_1(X)} = w_1(X).$

Therefore, if $\mu_1^*(X) = \mu_1(X)$, then $\mu^{1*} = E\{w_1(X)\mu_1(X)\} = E\{Y(1)\}$.
S3.5 Proof of Proposition 1
Recall that $\underline\varepsilon_z$ and $\overline\varepsilon_z$ are the lower and upper bounds of $\varepsilon_z(X)$, i.e., $\underline\varepsilon_z \leq \varepsilon_z(X) \leq \overline\varepsilon_z$ for $z = 0, 1$. First, when the potential outcomes are non-negative, we have $\mu_1(X)/\varepsilon_1(X) \geq \mu_1(X)/\overline\varepsilon_1$ and $\varepsilon_0(X)\mu_0(X) \leq \overline\varepsilon_0\mu_0(X)$. Therefore, by the identification formula in Theorem 1, we have

$\tau \geq E\left\{ ZY + (1 - Z)\frac{\mu_1(X)}{\overline\varepsilon_1} \right\} - E\left\{ Z\overline\varepsilon_0\mu_0(X) + (1 - Z)Y \right\}.$

Second, when $\underline\varepsilon_z \leq \varepsilon_z(X) \leq \overline\varepsilon_z$, we have $w_1(X) \geq e(X) + \{1 - e(X)\}/\overline\varepsilon_1$ and $w_0(X) \leq 1 - e(X) + e(X)\overline\varepsilon_0$. Therefore, by the identification formula in Theorem 2, we have

$\tau \geq E\left[ \left\{ e(X) + \frac{1 - e(X)}{\overline\varepsilon_1} \right\}\frac{ZY}{e(X)} - \left\{ 1 - e(X) + e(X)\overline\varepsilon_0 \right\}\frac{(1 - Z)Y}{1 - e(X)} \right].$

We prove the third formula by showing that the third lower bound is equal to the previous two bounds if either the propensity score or the outcome model is correctly specified. If the propensity score model is correct, the weighted residual terms in the efficient-influence-function-based formula have mean zero by conditioning on $X$, so the third lower bound reduces to the inverse propensity score weighting lower bound. If the outcome model is correct, the weighted residual terms again have mean zero, where the last equality follows from the facts that $E\{Y - \mu_1(X) \mid Z = 1, X\} = 0$ and, similarly, $E\{Y - \mu_0(X) \mid Z = 0, X\} = 0$; the third lower bound then reduces to the outcome regression lower bound. Therefore, the third formula of the lower bound is equal to the first two formulas if either the propensity score or the outcome model is correct.

The upper bound for $\tau$ can be derived similarly.
S3.6 Proof of Theorem S1
On the one hand, we have

$P(Z = 1)\mu_T^0 = E\{ZY(0)\} = E\big[e(X)E\{Y(0) \mid Z = 1, X\}\big] = E\{e(X)\varepsilon_0(X)\mu_0(X)\}.$

Conditioning on $X$, we can simplify it to $E\{Z\varepsilon_0(X)\mu_0(X)\}$. On the other hand, we condition on $X$ to obtain

$E\{\varepsilon_0(X)o(X)(1 - Z)Y\} = E\big[\varepsilon_0(X)o(X)\{1 - e(X)\}\mu_0(X)\big] = E\{e(X)\varepsilon_0(X)\mu_0(X)\}.$

Then the conclusion follows.
S3.7 Proof of Theorem S2
We first present a short but informal proof. As a byproduct of the proof of Theorem S1, we have $\mu_T^0 = E\{e(X)\varepsilon_0(X)\mu_0(X)\}/P(Z = 1)$, which is a weighted average of $\varepsilon_0(X)\mu_0(X)$. The result follows from Hahn (1998) with an additional term due to the derivative of the weight $e(X)$ with respect to the propensity score:

$\frac{1}{P(Z = 1)}\left[ \varepsilon_0(X)o(X)(1 - Z)\{Y - \mu_0(X)\} + \{Z - e(X)\}\varepsilon_0(X)\mu_0(X) + e(X)\varepsilon_0(X)\mu_0(X) - Z\mu_T^0 \right],$

which reduces to the formula of the efficient influence function in Theorem S2 by simple algebra.

We then present a long but formal proof. We follow the same framework and notation as in the proof of Theorem 3. Again, we have $P(Z = 1)\mu_T^0 = E\{e(X)\varepsilon_0(X)\mu_0(X)\}$ as a byproduct of the proof of Theorem S1. For a parametric submodel with parameter $\theta$, we have

$\frac{\partial}{\partial\theta}\big\{ P_\theta(Z = 1)\mu_{T,\theta}^0 \big\}\Big|_{\theta = 0} = E\big(\big[ \varepsilon_0(X)o(X)(1 - Z)\{Y - \mu_0(X)\} + \{Z - e(X)\}\varepsilon_0(X)\mu_0(X) + e(X)\varepsilon_0(X)\mu_0(X) \big]s(O)\big)$

by the facts that

$\dot e(X) = E[\{Z - e(X)\}s(Z \mid X) \mid X], \qquad \dot\mu_0(X) = E[\{Y - \mu_0(X)\}s(Y \mid Z, X) \mid Z = 0, X],$

and the properties of the score functions.

We also have that $\partial P_\theta(Z = 1)/\partial\theta \mid_{\theta = 0} = E[\{Z - P(Z = 1)\}s(O)]$. Therefore, the efficient influence function for $\mu_T^0$ is

$\frac{1}{P(Z = 1)}\left[ \varepsilon_0(X)o(X)(1 - Z)\{Y - \mu_0(X)\} + Z\varepsilon_0(X)\mu_0(X) - Z\mu_T^0 \right].$

This concludes the proof.
S3.8 Proof of Theorem S3
If the fitted propensity score model converges to $e^*(X)$ and the fitted outcome model under control converges to $\mu_0^*(X)$, the doubly robust estimator $\hat\tau_T^{\mathrm{dr}}$ has limit $\mu_T^1 - \mu_T^{0*}$, where

$\mu_T^{0*} = \frac{1}{P(Z = 1)}E\left[ \varepsilon_0(X)o^*(X)(1 - Z)\{Y - \mu_0^*(X)\} + Z\varepsilon_0(X)\mu_0^*(X) \right], \qquad o^*(X) = \frac{e^*(X)}{1 - e^*(X)}.$

If $e^*(X) = e(X)$, the conclusion is obvious since the inverse probability weighting estimator is consistent and the additional term has mean zero. If $\mu_0^*(X) = \mu_0(X)$, then by definition, $P(Z = 1)\mu_T^{0*}$ equals the expectation of

$\varepsilon_0(X)o^*(X)\{1 - e(X)\}\{\mu_0(X) - \mu_0(X)\} + e(X)\varepsilon_0(X)\mu_0(X) = e(X)\varepsilon_0(X)\mu_0(X),$

where the first term vanishes by conditioning on $(Z, X)$. Therefore, if $\mu_0^*(X) = \mu_0(X)$, then $\mu_T^{0*}$ equals $\mu_T^0$.
S3.9 Proof of Proposition S1
Recall that $\underline\varepsilon_0$ and $\overline\varepsilon_0$ are the lower and upper bounds of $\varepsilon_0(X)$, i.e., $\underline\varepsilon_0 \leq \varepsilon_0(X) \leq \overline\varepsilon_0$.

First, when the potential outcomes are non-negative, we have $\varepsilon_0(X)\mu_0(X) \leq \overline\varepsilon_0\mu_0(X)$. Therefore, $\mu_T^0 \leq \overline\varepsilon_0 E\{Z\mu_0(X)\}/P(Z = 1)$. By the first identification formula in Theorem S1, we have

$\tau_T \geq \mu_T^1 - \frac{\overline\varepsilon_0 E\{Z\mu_0(X)\}}{P(Z = 1)}.$

Second, when $\varepsilon_0(X) \leq \overline\varepsilon_0$ and the outcomes are non-negative, we have $\varepsilon_0(X)o(X)(1 - Z)Y \leq \overline\varepsilon_0 o(X)(1 - Z)Y$. Therefore, by the second identification formula for $\mu_T^0$ in Theorem S1, we have

$\tau_T \geq \mu_T^1 - \frac{\overline\varepsilon_0 E\{o(X)(1 - Z)Y\}}{P(Z = 1)}.$

Again, we prove the third formula by showing that the third lower bound is equal to the previous two bounds if either the propensity score or the outcome model is correctly specified. If the propensity score model is correct, the residual term in the efficient-influence-function-based formula has mean zero by conditioning on $X$, and the bound reduces to the inverse propensity score weighting bound. If the outcome model is correct, conditioning on $(Z = 0, X)$ shows that the residual term has mean zero, and the bound reduces to the outcome regression bound. Therefore, the third formula of the lower bound is equal to the first two formulas if either the propensity score or the outcome model is correct.

The upper bound for $\tau_T$ can be derived similarly.
S3.10 Proof of Proposition S2
We compute the semiparametric efficiency bounds for $\tau$ and $\tau_T$ in the following two parts, respectively.
S3.10.1 The semiparametric efficiency bound for $\tau$
With the decomposition

$\mathrm{var}(\phi) = \mathrm{var}(\phi_1) + \mathrm{var}(\phi_0) - 2\,\mathrm{cov}(\phi_1, \phi_0), \qquad (S8)$

we compute the three terms on the right-hand side separately. Let $\sigma_z^2(X)$ denote $\mathrm{var}(Y \mid Z = z, X)$ for $z = 0, 1$. We can rewrite the first term in (S8) as

$\mathrm{var}(\phi_1) = E\{\mathrm{var}(\phi_1 \mid X)\} + \mathrm{var}\{E(\phi_1 \mid X)\}, \qquad (S9)$

where $E(\phi_1 \mid X) = w_1(X)\mu_1(X) - \mu^1$ and, by conditioning on $(Z, X)$,

$\mathrm{var}(\phi_1 \mid X) = \frac{w_1^2(X)\sigma_1^2(X)}{e(X)} + \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}^2 e(X)\{1 - e(X)\}\mu_1^2(X). \qquad (S10)$

We have

$\mathrm{var}(\phi_1) = E\left\{\frac{w_1^2(X)\sigma_1^2(X)}{e(X)}\right\} + E\left[\left\{1 - \frac{1}{\varepsilon_1(X)}\right\}^2 e(X)\{1 - e(X)\}\mu_1^2(X)\right] + \mathrm{var}\{w_1(X)\mu_1(X)\}. \qquad (S11)$
Similarly, the second term in (S8) equals

$\mathrm{var}(\phi_0) = E\left\{\frac{w_0^2(X)\sigma_0^2(X)}{1 - e(X)}\right\} + E\left[\{\varepsilon_0(X) - 1\}^2 e(X)\{1 - e(X)\}\mu_0^2(X)\right] + \mathrm{var}\{w_0(X)\mu_0(X)\}.$
The third term in (S8) is $\mathrm{cov}(\phi_1, \phi_0) = E\{\mathrm{cov}(\phi_1, \phi_0 \mid X)\} + \mathrm{cov}\{E(\phi_1 \mid X), E(\phi_0 \mid X)\}$, where similarly $E(\phi_0 \mid X) = w_0(X)\mu_0(X) - \mu^0$ and $\sigma_0^2(X) = \mathrm{var}(Y \mid Z = 0, X)$. We have, by conditioning on $(Z, X)$ and noting that the residual terms of $\phi_1$ and $\phi_0$ are supported on disjoint treatment arms,

$\mathrm{cov}(\phi_1, \phi_0 \mid X) = \left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{\varepsilon_0(X) - 1\}e(X)\{1 - e(X)\}\mu_1(X)\mu_0(X). \qquad \text{(S12)–(S14)}$

Combining (S12)–(S14), we have

$\mathrm{cov}(\phi_1, \phi_0) = E\left[\left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\{\varepsilon_0(X) - 1\}e(X)\{1 - e(X)\}\mu_1(X)\mu_0(X)\right] + \mathrm{cov}\{w_1(X)\mu_1(X), w_0(X)\mu_0(X)\}. \qquad (S15)$

Finally, plugging the three terms into the right-hand side of (S8), we have

$\mathrm{var}(\phi) = E\left\{\frac{w_1^2(X)\sigma_1^2(X)}{e(X)}\right\} + E\left\{\frac{w_0^2(X)\sigma_0^2(X)}{1 - e(X)}\right\} + E\left( e(X)\{1 - e(X)\}\left[\left\{1 - \frac{1}{\varepsilon_1(X)}\right\}\mu_1(X) - \{\varepsilon_0(X) - 1\}\mu_0(X)\right]^2 \right) + \mathrm{var}\{w_1(X)\mu_1(X) - w_0(X)\mu_0(X)\},$

which is the semiparametric efficiency bound for $\tau$.