Min Zhang
Outcome Adaptive Propensity Score Methods for Handling Censoring and High-Dimensionality: Application to Insurance Claims
Abstract
Propensity scores are commonly used to reduce the confounding bias in non-randomized observational studies for estimating the average treatment effect. An important assumption underlying this approach is that all confounders that are associated with both the treatment and the outcome of interest are measured and included in the propensity score model. In the absence of strong prior knowledge about potential confounders, researchers may agnostically want to adjust for a high-dimensional set of pre-treatment variables. As such, a variable selection procedure is needed for propensity score estimation. In addition, recent studies show that including variables related to treatment only in the propensity score model may inflate the variance of the treatment effect estimates, while including variables that are predictive of only the outcome can improve efficiency. In this paper, we propose a flexible approach to incorporating the outcome-covariate relationship in the propensity score model by including the predicted binary outcome probability (OP) as a covariate. Our approach can be easily adapted to an ensemble of variable selection methods, including regularization methods and modern machine learning tools based on classification and regression trees. We evaluate our method for estimating treatment effects on a binary outcome, which is possibly censored, among multiple treatment groups. Simulation studies indicate that incorporating OP for estimating the propensity scores can improve statistical efficiency and protect against model misspecification. The proposed methods are applied to a cohort of advanced stage prostate cancer patients identified from a private insurance claims database for comparing the adverse effects of four commonly used drugs for treating castration-resistant prostate cancer.
1 Introduction
To obtain an unbiased estimate for the causal average treatment effect (ATE) using data from observational studies, it is important to control for confounding by adjusting for the differences in pre-treatment baseline covariates between treatment groups. A common tool for confounder adjustment is the propensity score, defined as the conditional probability of receiving a specific treatment given a set of covariates [1], which is usually unknown in observational studies and needs to be estimated from the data. The estimated propensity scores can be used in matching [2, 3], weighting [4], regression adjustment [5], among other methods, for estimating the causal treatment effect. Propensity score-based causal estimators rely on robust and accurate estimation for the propensity scores.
There has been a large body of work focusing on the estimation and use of propensity scores in a low-dimensional setting, where the sample size is far greater than the number of candidate covariates. In this setting, empirical researchers often estimate the propensity scores by fitting a logistic regression model for binary treatment, or a multinomial logistic regression model for more than two treatment groups. In the era of `big data', large healthcare databases are increasingly being used to conduct comparative effectiveness research. For example, insurance claims data were used to compare the safety of four drugs on the market prescribed for patients with metastatic castration-resistant prostate cancer [6]. A massive collection of candidate covariates, such as demographics, socioeconomic status, clinical measurements, and diagnosis codes, can be ascertained from patients' health care or insurance claims data, and the number of covariates may exceed the sample size in each treatment group. Standard parametric regression models may become problematic when the dimensionality increases, and a key challenge is to identify variables to be included in the propensity score model from a high-dimensional set of measured covariates. There has been considerable interest in developing methods that perform variable selection for estimating propensity/balancing scores in the high-dimensional setting. For instance, Schneeweiss et al. [7] proposed the high-dimensional propensity score (hd-PS) algorithm that selects covariates to facilitate high-dimensional propensity score adjustment using health care claims data. Specifically, they rank each covariate based on its potential for controlling confounding by assessing the covariate's mean and univariate association with the treatment and outcome, and then pick the top covariates for inclusion in the propensity score modeling.
Parametric models require variable selection and specification of functional forms. A practical difficulty is that misspecification of the model can result in substantial bias of the estimated treatment effect [8]. Machine learning methods are well-known for their powerful predictive performance and ability to handle complex and nonlinear relationship between outcomes and covariates. There has been a growing interest in using machine learning techniques for propensity score modeling [9, 10, 11, 12]. Setoguchi et al. [9] compared several data mining techniques that optimize the prediction of treatment status, including classification and regression trees (CART), pruned CART, and neural networks, in the context of propensity score matching with a continuous outcome. Lee et al. [10] extended the work of Setoguchi et al. [9] to the setting with a binary outcome, and evaluated the performance of CART, pruned CART, bootstrap aggregated (bagged) CART, random forests, and boosted CART with regard to propensity score weighting. Both works focused on the low-dimensional setting, and demonstrated that machine learning methods, such as CART and neural networks, are promising alternatives to parametric modeling for the estimation of propensity scores in the presence of nonadditivity and/or nonlinearity in the true treatment assignment mechanism. However, numerical evaluations of these machine learning techniques for high-dimensional covariates and multiple treatment groups with a binary endpoint remain limited.
These aforementioned approaches only consider the treatment-covariate relationship when modeling the propensity/balancing scores, and fail to incorporate the outcome models into the treatment modeling process. Studies illustrate that using covariates that are associated with the treatment but not the outcome will inflate the variance of the estimators of the ATE without reducing the bias [13, 14]. On the other hand, adding covariates explaining the outcome but not the treatment to the propensity score model can improve the precision of the treatment effect estimates [15, 14]. These findings suggest that a highly predictive model for treatment assignment will not necessarily lead to the most efficient estimators of treatment effects. Therefore, standard variable selection methods designed for prediction, which rely only on the relationship between treatment and covariates, may yield sub-optimal results in the context of causal inference. There is an expanding literature on variable selection methods for causal inference that account for the information about the outcome-covariate relationship. Shortreed and Ertefaie [14] proposed the Outcome-Adaptive LASSO (OAL) method, which adopts the adaptive LASSO framework [16] and places smaller adaptive weights on the outcome predictors in the propensity score model to favor the inclusion of outcome-predictive covariates. As a result, heavier penalties are imposed on variables relevant to treatment only than on variables predictive of outcome only. Other works that allow the outcome information to contribute to variable selection for propensity score modeling include the Outcome Highly Adaptive LASSO proposed by Ju et al. [17] and Bayesian Adjustment for Confounding proposed by Zigler and Dominici [18]. However, how the information about the outcome-covariate relationship can be incorporated into tree-based machine learning methods, such as CART, and to what extent outcome models can contribute to efficiency gains in high-dimensional data settings with multiple treatment groups, have been less studied.
In this paper, we propose a flexible approach to incorporating outcome-covariate relationship in the propensity score model by including the predicted outcome probability (OP) as a covariate. Our approach can be easily adapted to an ensemble of variable selection methods, including regularization methods and tree-based machine learning algorithms. We evaluate our approach for estimating the causal average treatment effect with multiple treatment groups and high-dimensional covariates. We use the inverse probability weighting (IPW) estimator, although the estimated propensity scores considered in this study are applicable to any propensity score-based methodology, such as propensity score matching. We also aim to account for censoring at the same time, where the bias due to censoring is controlled for by applying the inverse probability of remaining uncensored as weights to the outcome. We focus on estimating the treatment effects on a binary outcome (that is possibly censored) among multiple treatment groups. To the best of our knowledge, the literature lying within the intersection of high-dimensionality, complexity of the associations between treatment and covariates, multiple treatments, censored outcomes, and outcome-adaptive propensity score modeling is largely absent.
In Section 2, we introduce the notation and basic setup of the problem. Section 3 describes the algorithm for selecting variables to be included in the treatment model. Section 4 outlines the methods considered for the final treatment model that is used to estimate the propensity scores. Section 5 presents simulation studies that compare the methods in settings with high-dimensional covariates and potentially complex underlying treatment models (i.e., models with nonlinearity and/or nonadditivity). We also consider a setting where censoring exists. The simulation results illustrate the efficiency gain provided by leveraging the information about the outcome-covariate relationship when estimating the propensity scores. Section 6 presents an example that compares the risks of possibly right-censored hospitalization and emergency room (ER) visits for four prostate cancer treatments using data from an insurance claims database. The binary outcomes (i.e., whether the patient was hospitalized/admitted to the ER within 180 or 360 days of treatment initiation) in this data example were subject to censoring, as the follow-up period tended to terminate early due to drop-out or treatment switch. We close with a brief discussion.
2 Definition of the Problem and Notation
2.1 Notation and Assumptions
We consider independent individuals, indexed by , with being a -dimensional vector of covariates measured prior to receiving the treatment , where . We let , , , and be the indices of confounders (i.e., covariates associated with both treatment and outcome), covariates predictive of treatment only, covariates predictive of outcome only, and covariates unrelated to both treatment and outcome (i.e., spurious covariates), respectively. Suppressing the index by , , , , and are mutually exclusive and . We further let , , , and denote the cardinality of the corresponding set. Re-introducing subject index , we let be the underlying time to the first event of interest for each individual, and be the censoring time. The outcome of interest is whether the event of interest occurs before a prespecified time point , defined by , which results in a possibly censored binary outcome. The information on may not be completely available due to dropout, study termination, or treatment switch. In the absence of censoring, (and therefore ) would be observed for all individuals, and the set of complete data is then . When the outcome variable is subject to right-censoring, is observed only if the individual has not been censored before , and we let be the indicator of observing .
Under the potential outcome framework [19], each individual is associated with a set of potential outcomes , where denotes the potential outcome had the individual received treatment . The causal parameter of interest is the marginal ATE on the outcome between and , denoted by . The following assumptions are required for statistical identification of causal effects using the observable data [20]:
I. (Random sampling) The individuals in the study are randomly sampled from the population;
II. (Stable Unit Treatment Value Assumption, or SUTVA) There is no interference between individuals and no hidden versions of treatment: for any individual, if the treatment actually received is a given level, then the observed outcome equals the potential outcome under that level;
III. (Unconfoundedness) The treatment assignment is independent of the potential outcomes conditional on the measured covariates;
IV. (Overlap) Every individual has a positive probability of receiving each treatment level given the covariates;
V. (Censoring at random) The censoring time is independent of the event time conditional on the treatment and the measured covariates.
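In standard potential-outcome notation (we write $Z_i$ for the treatment taking values $k \in \{1, \ldots, K\}$, $\mathbf{X}_i$ for the covariates, $T_i$ for the event time, $C_i$ for the censoring time, and $Y_i(k)$ for the potential outcomes; these symbols are illustrative shorthand rather than a definitive notation), assumptions III-V can be stated as
\begin{align*}
\text{(III) Unconfoundedness:}\quad & \{Y_i(1), \ldots, Y_i(K)\} \perp Z_i \mid \mathbf{X}_i,\\
\text{(IV) Overlap:}\quad & P(Z_i = k \mid \mathbf{X}_i = \mathbf{x}) > 0 \ \text{ for all } k \text{ and all } \mathbf{x} \text{ in the support of } \mathbf{X}_i,\\
\text{(V) Censoring at random:}\quad & C_i \perp T_i \mid (Z_i, \mathbf{X}_i).
\end{align*}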
2.2 Underlying Models for Outcome, Treatment, and Censoring, and Estimators for Average Treatment Effects
We assume that the true outcome model for is a logistic regression model,
where is a -dimensional function of . The treatment assignment mechanism is governed by a multinomial logistic regression
where is the reference level and is a -dimensional function of . Both and may contain nonlinear terms and interactions. We assume that and , where , , and are the numbers of variables in , , and , respectively. Note that in practice, , , and are generally unknown, and a common practice is to include all candidate covariates in the model. In the case where the ratio of the number of candidate covariates to the sample size is relatively large, traditional regression models based on maximum likelihood estimation may not converge, and some variable selection procedure is required for model fitting.
With respect to censoring, we assume a proportional hazards model for each treatment group,
where is the treatment-specific baseline hazard function and is a -dimensional function of . The censoring model is assumed to have a moderate number of predictors () that are known a priori.
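As a generic sketch of the three working models just described (the functional forms and symbols below, such as $f$, $g$, $h$, and the coefficient vectors, are illustrative rather than the exact specifications), one may write
\begin{align*}
\operatorname{logit}\{P(Y_i = 1 \mid \mathbf{X}_i)\} &= f(\mathbf{X}_i)^{\top}\boldsymbol{\alpha}, \\
\log \frac{P(Z_i = k \mid \mathbf{X}_i)}{P(Z_i = 1 \mid \mathbf{X}_i)} &= g(\mathbf{X}_i)^{\top}\boldsymbol{\gamma}_k, \qquad k = 2, \ldots, K, \\
\lambda_C(t \mid \mathbf{X}_i, Z_i = k) &= \lambda_{0k}(t)\, \exp\{h(\mathbf{X}_i)^{\top}\boldsymbol{\eta}_k\},
\end{align*}
where treatment 1 serves as the reference level, $f$, $g$, and $h$ may contain nonlinear and interaction terms, and $\lambda_{0k}(\cdot)$ is the treatment-specific baseline hazard of censoring.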
We estimate the ATE using the IPW estimator,
where and the propensity scores , , are required to be estimated from the data in an observational study. In the presence of censoring, the outcome is further weighted by the inverse probability of remaining uncensored at ,
The weights can be computed from the fitted censoring model [20], using the treatment-specific cumulative hazard function of the censoring time evaluated at the relevant time point.
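In the same illustrative notation, one common form of the resulting IPW estimator of the ATE between treatments $k$ and $k'$, with inverse probability of censoring weighting, is
\[
\widehat{\Delta}_{k, k'} \;=\; \frac{1}{n}\sum_{i=1}^{n} \frac{I(Z_i = k)\, \delta_i\, w_i\, Y_i}{\widehat{\pi}_k(\mathbf{X}_i)} \;-\; \frac{1}{n}\sum_{i=1}^{n} \frac{I(Z_i = k')\, \delta_i\, w_i\, Y_i}{\widehat{\pi}_{k'}(\mathbf{X}_i)},
\]
where $\widehat{\pi}_k(\mathbf{X}_i)$ is the estimated propensity score for treatment $k$, $\delta_i$ indicates that the binary outcome is observed, and $w_i$ is the estimated inverse probability of remaining uncensored obtained from the fitted censoring model; in the absence of censoring, $\delta_i w_i = 1$ for all subjects. This is a sketch of the estimator's structure rather than its exact expression.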
3 Variable Selection for Dimensionality Reduction for the Propensity Score Model
When the dimension of the covariate vector is high, it tends to be infeasible to fit an unrestricted parametric model, such as a multinomial logistic regression model, for the treatment using all the available covariates. A practical problem for empirical researchers is to identify a subset of covariates to be conditioned on to control for confounding. Typically, in medical research, a list of important covariates will be suggested based on the evidence in the literature and/or expert opinion. However, as the number of available covariates increases, it becomes extremely difficult for human experts to check manually which variables are potential confounders. Alternatively, one can turn to data-driven variable selection approaches, such as LASSO [21], which automatically select variables important for predicting the treatment from all the available covariates. Figure 1 displays a flowchart of several possible routes that can be followed to identify the set of covariates to be included in the final treatment model for estimating the propensity scores. A commonly chosen route is to apply shrinkage methods directly to the original reservoir of covariates (route 1 in Figure 1), and we label this set of covariates All. In this case, variable selection and propensity score estimation are conducted simultaneously in a single step. However, as noted later in our simulations, following route 1 can result in substantially biased effect estimates. For sparse high-dimensional data, a large value of the tuning parameter is necessary to select a parsimonious model; large penalties, however, also increase the shrinkage of the non-zero components, leading to biased estimation [22]. Therefore, reducing the number of spurious covariates entering the final treatment model through variable selection may help improve the performance of the propensity score estimation methods. In addition, such traditional variable selection approaches, which target the treatment only, are sub-optimal. It has been shown that using covariates predictive of treatment only for propensity score modeling may inflate the variance of the estimated ATE [15, 14]. On the other hand, including covariates predictive of the outcome in the propensity score model can improve the precision of the ATE estimates [15, 14]. Therefore, an optimal propensity score model for estimating the ATE should include the confounders and the outcome-only predictors simultaneously while excluding the covariates predictive of treatment only.

3.1 Using Outcome Model for Variable Selection
To identify the confounders, the treatment-only predictors, and the outcome-only predictors, we apply a pre-selection procedure to the original set of covariates (All), in which a regularized model (in this case we choose LASSO) is fitted separately for the outcome and for the treatment. Specifically, the model for the binary outcome, Model (1), is a logistic regression of the outcome on all candidate covariates, the coefficients of which are estimated under the LASSO penalty, with the tuning parameter chosen by cross-validation. Here we exclude the treatment variable from the outcome model, and only consider the association of the covariates with the outcome across all treatment groups. For the treatment assignment mechanism, we assume a multinomial logistic regression model, Model (2), which is fitted by minimizing the negative penalized log-likelihood with a group LASSO penalty,
in which the coefficients of each covariate across all treatment groups form one group [23]. This group LASSO penalty ensures that each variable is selected or excluded for all treatment levels simultaneously, as opposed to each level having its own set of selected variables. We use Ysel to label the set of variables selected by Model (1), and YZsel to label the intersection of the two sets of variables selected by Model (1) and Model (2). In the ideal case, YZsel is identical to the set of confounders and Ysel\YZsel is identical to the set of covariates predictive of the outcome only. Theoretically, adjusting for the confounders alone in the treatment model is sufficient to remove the confounding bias. Route 3 in Figure 1 corresponds to the case where YZsel is used as the set of predictors for propensity score estimation.
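As an illustrative sketch (the symbols below are ours and assume linear predictors on the original covariates), Models (1) and (2) and their penalized fitting criteria take the form
\begin{align*}
\text{Model (1):}\quad & \operatorname{logit}\{P(Y_i = 1 \mid \mathbf{X}_i)\} = \beta_0 + \mathbf{X}_i^{\top}\boldsymbol{\beta},
\qquad \widehat{\boldsymbol{\beta}} = \arg\min_{\beta_0, \boldsymbol{\beta}} \Big\{ -\ell_Y(\beta_0, \boldsymbol{\beta}) + \lambda_Y \textstyle\sum_{j=1}^{p} |\beta_j| \Big\},\\
\text{Model (2):}\quad & \log \frac{P(Z_i = k \mid \mathbf{X}_i)}{P(Z_i = 1 \mid \mathbf{X}_i)} = \gamma_{0k} + \mathbf{X}_i^{\top}\boldsymbol{\gamma}_k,
\qquad \widehat{\boldsymbol{\gamma}} = \arg\min_{\boldsymbol{\gamma}} \Big\{ -\ell_Z(\boldsymbol{\gamma}) + \lambda_Z \textstyle\sum_{j=1}^{p} \lVert \boldsymbol{\gamma}_{(j)} \rVert_2 \Big\},
\end{align*}
where $\ell_Y$ and $\ell_Z$ are the binomial and multinomial log-likelihoods, and $\boldsymbol{\gamma}_{(j)}$ stacks the coefficients of covariate $j$ across treatment levels, so the group penalty keeps or drops covariate $j$ for all levels jointly.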
As we mentioned previously, the ideal propensity score model adjusts for the confounders and the outcome-only predictors while excluding the covariates predictive of treatment only. The OAL method proposed by Shortreed and Ertefaie [14] intends to achieve this by fitting an adaptive LASSO for the treatment [16], where the adaptive weights are computed based on the coefficients from the outcome model, with smaller coefficients corresponding to larger weights. OAL discourages the treatment-only predictors from being selected and encourages the inclusion of the outcome predictors by imposing heavier penalties on the former than on the latter. There are two limitations of the work of Shortreed and Ertefaie [14]. The first is that the coefficients of the outcome model were estimated using unpenalized linear regression, which may be fitted for a continuous outcome when the ratio of the sample size to the number of covariates is relatively large. However, for a binary outcome as considered in our study, it tends to be more difficult for a standard logistic regression to converge. We extend their method by using a LASSO-fitted outcome model to compute the adaptive weights, in which case the coefficients of covariates not predictive of the outcome can be zero, and the covariates with zero coefficients are therefore excluded from the treatment model. Another limitation is that the idea of OAL cannot be conveniently extended to machine learning methods that do not rely on regularization.
OAL incorporates the outcome-covariate associations as adaptive weights when selecting variables for propensity score estimation. An alternative is to directly include the covariates predictive of the outcome in the treatment model, namely, Ysel. The treatment can then be modeled as a function of Ysel (route 2 in Figure 1).
One possible problem of using Ysel as input for the final treatment model is that the dimensionality of Ysel can still remain relatively high relative to the sample size. To improve propensity score modeling by utilizing the variables predictive of the outcome only while maintaining a reasonable dimensionality, we propose to incorporate the information contained in the outcome predictors into the propensity score estimation process in the form of the predicted probability of the outcome, Outcome Probability (OP) for short, as opposed to directly including those variables in their original form. We estimate the OP for each subject as the predicted outcome probability from the LASSO-fitted outcome model, and also consider the OP on the logit scale. The OP summarizes the information about the outcome-covariate relationship and reduces the dimensionality of the outcome predictors to a one-dimensional summary. After obtaining YZsel, which targets the confounders, we add the OP back to the set of input variables to capture the information in the covariates predictive of the outcome only (route 6 in Figure 1; we denote the combination OP+YZsel). In addition to YZsel, as noted later in the simulation studies, the OP can also be combined with all the available covariates or with Ysel as input for the final treatment model to improve the effect estimates. We denote these combinations OP+All and OP+Ysel, which correspond to route 4 and route 5 in Figure 1, respectively. We discuss how the OP can be used as a predictor when estimating the propensity scores for different treatment modeling techniques in Section 4, where we outline several possible choices for the final treatment model.
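A minimal R sketch of the pre-selection step and the construction of the OP using glmnet is given below; the object names (X, Y, Z) and the exact specification are illustrative assumptions rather than the implementation used in the paper.

```r
library(glmnet)

# X: n x p covariate matrix; Y: binary outcome; Z: treatment factor with K levels
set.seed(1)

## Model (1): LASSO logistic regression for the outcome (treatment excluded)
cv_y   <- cv.glmnet(X, Y, family = "binomial")
beta_y <- as.matrix(coef(cv_y, s = "lambda.1se"))[-1, 1]   # drop intercept
Ysel   <- which(beta_y != 0)                                # outcome-predictive covariates

## OP: predicted outcome probability from the LASSO-fitted outcome model
OP       <- as.vector(predict(cv_y, newx = X, s = "lambda.1se", type = "response"))
logit_OP <- qlogis(OP)

## Model (2): multinomial logistic regression with a group LASSO penalty
cv_z    <- cv.glmnet(X, Z, family = "multinomial", type.multinomial = "grouped")
coefs_z <- coef(cv_z, s = "lambda.1se")                     # list: one vector per treatment level
Zsel    <- which(Reduce(`+`, lapply(coefs_z, function(b) as.matrix(b)[-1, 1] != 0)) > 0)

## Candidate covariate sets for the final treatment model
YZsel <- intersect(Ysel, Zsel)                              # targets the confounders
# Route 6 (OP+YZsel): use cbind(X[, YZsel, drop = FALSE], logit_OP) as the input set
```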
In summary, both Ysel and YZsel use the outcome model for selecting covariates to account for confounding. Variable selection and propensity score estimation are conducted separately in two steps for methods based on Ysel, YZsel, OP+Ysel, and OP+YZsel.
4 Possible Choices for the Final Treatment Model for Propensity Score Estimation
We consider five different methods (Table 1) for estimating the propensity score given a set of candidate predictors. We leave out the OP for now and only consider the alternatives for the final treatment model for routes 1-3 in Figure 1. The first is a multinomial logistic regression model (LOGIS), specified as
where is the set of (possibly selected) variables entering the final treatment model. When the dimension of is sufficiently small, the coefficients can be estimated using the maximum likelihood estimator. In the case where dimension reduction is needed, the estimates can be obtained by minimizing the negative log-likelihood with the group LASSO penalty
which indicates that each covariate in is associated with all or none of the levels.
The second is the classification and regression tree (CART) method, which performs recursive binary splitting on the feature space in a top-down fashion. At each split, CART agnostically searches for a variable and a cutpoint such that the response values in each of the resulting nodes lead to the greatest homogeneity [24]. In that sense, CART intrinsically conducts variable selection while growing the tree, as variables that are not predictive of the treatment are less likely to be chosen at each split. We used the Gini index, a measure of the total variance across the classes, as the metric for node splitting. A small value of the Gini index indicates that the observations in a node are dominated by a single class. CART tends to overfit the data. To address the overfitting issue, the common strategy is to first grow a large tree and then prune it back in order to retain only part of the tree, as simpler trees tend to be less sensitive to noise in the data. This method is referred to as pruned CART.
The single-tree implementations of both CART and pruned CART, sometimes known as weak learners, may give poor predictions on their own. Ensemble methods, which combine multiple weak learners into one predictive model, have been developed to enhance the predictions. One example is the bootstrap aggregation of the CART algorithm (bagged CART). The bootstrap step of bagged CART involves randomly drawing observations with replacement from the original sample (a bootstrap sample of the same size as the original) and fitting a CART separately for each bootstrap replicate. The bagging estimates of the probability of subjects being assigned to each class are obtained by averaging the predicted class probabilities from each of the single trees. Another popular ensemble method is random forests. Similar to bagged CART, random forests build trees based on bootstrap samples of the original observations. What is different from bagged CART is that random forests consider only a random subset of the predictors at each split; for classification problems, the size of this subset is typically taken to be about the square root of the total number of predictors [25]. In our simulation studies and data example, we choose to grow a relatively large number of trees in order to stabilize the out-of-bag error rate.
4.1 Implementation of Incorporating the Predicted Outcome Probabilities
The methods listed in Table 1 can be applied to All, Ysel, and YZsel for estimating the propensity scores, which are then used to compute the ATE. The OP is employed for propensity score estimation in different ways for different final treatment models. For the LOGIS method, the propensity scores are estimated by regressing the treatment variable on the union of the input covariates (All, Ysel, or YZsel) and the OP. When the regression model is regularized, no penalty is imposed on the OP term. In this way the outcome information is guaranteed to be utilized in the propensity score model.
For the tree-based machine learning methods such as CART, there is no straightforward way to force the OP into the tree-growing process, where variable selection is intrinsically conducted. We instead conduct a two-step estimation procedure. We first obtain the set of propensity scores using the tree-based methods. Next, we fit a logistic regression model, Model (3), for the treatment as a function of the OP and the propensity scores obtained in route 1, 2, or 3. The coefficients of Model (3) can be estimated using maximum likelihood. The final propensity scores that take the OP into account are then obtained by calculating the predicted probabilities from Model (3).
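As an illustration of this two-step procedure, the R sketch below refits a multinomial logistic regression of the treatment on the OP and the tree-based propensity scores, both on the logit scale; this is one plausible specification of Model (3), with the object names (Z and logit_OP from the earlier sketch, ps_tree as defined in the comments) assumed rather than taken from the paper's code.

```r
library(nnet)   # multinom() for a multinomial logistic regression

# ps_tree: n x K matrix of propensity scores from CART, bagged CART, or random forests
eps     <- 1e-6
ps_tree <- pmin(pmax(ps_tree, eps), 1 - eps)   # avoid infinite logits at 0 or 1

# Step 2: regress treatment on logit(OP) and the logits of the tree-based scores
# (use K - 1 of the score logits, since each row of ps_tree sums to one)
step2_dat <- data.frame(Z = Z,
                        logit_OP = logit_OP,
                        qlogis(ps_tree[, -1, drop = FALSE]))
fit3 <- multinom(Z ~ ., data = step2_dat, trace = FALSE)

# Final propensity scores that incorporate the OP
ps_final <- predict(fit3, newdata = step2_dat, type = "probs")
```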
For OP+Ysel (route 5) and OP+YZsel (route 6), the associations between the covariates and the outcome are used twice in the entire estimation process: once for variable selection using the outcome model, and once for propensity score estimation using the OP.
Table 1: Methods considered for the final treatment model for propensity score estimation.

Method | Intrinsic variable selection | R package
---|---|---
LOGIS (multinomial logistic regression, penalized with a group LASSO when needed) | Only through penalization | glmnet*, gcdnet
CART | Yes | rpart*
Pruned CART | Yes | rpart*
Bagged CART | Yes | ipred*
Random forests | Yes | randomForest*, ranger
* R packages used to implement the methods in this paper.
5 Simulation Studies
5.1 Implementation of Methods under Comparison
The methods under comparison were implemented in R with default parameters unless otherwise specified. The LASSO algorithm was implemented using the R package glmnet. The tuning parameter was determined using 10-fold cross-validation with the lambda.1se criterion for selecting variables in the pre-selection step, as the goal was to reduce the number of covariates entering the final treatment model. For LOGIS based on All, which does not involve the pre-selection step, the lambda.min criterion was used. CART was implemented using the rpart package with the complexity parameter (cp) set to 0.001, which encourages a large and complex tree structure. For pruned CART, the cp value that corresponded to the smallest 10-fold cross-validated error was used to determine the best pruned tree. Bagged CART was implemented using the ipred package with 200 bootstrap replicates. Random forests were implemented using the randomForest package with 1000 bootstrap replicates. The minimum size of terminal nodes (the nodesize parameter) was set to 7 to make it consistent with the parameters used for CART. For bagged CART and random forests, propensity scores were estimated based on the out-of-bag predictions.
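A minimal R sketch of the tree-based treatment models with the tuning parameters described above follows; the data frame dat (containing the treatment factor Z and the selected covariates) is a hypothetical placeholder.

```r
library(rpart)
library(ipred)
library(randomForest)

set.seed(1)

## CART grown large via a small complexity parameter
cart_fit <- rpart(Z ~ ., data = dat, method = "class",
                  control = rpart.control(cp = 0.001))
ps_cart  <- predict(cart_fit, type = "prob")                 # n x K class probabilities

## Pruned CART: prune back to the cp with the smallest cross-validated error
best_cp   <- cart_fit$cptable[which.min(cart_fit$cptable[, "xerror"]), "CP"]
ps_pruned <- predict(prune(cart_fit, cp = best_cp), type = "prob")

## Bagged CART with 200 bootstrap replicates
bag_fit <- bagging(Z ~ ., data = dat, nbagg = 200, coob = TRUE)
ps_bag  <- predict(bag_fit, newdata = dat, type = "prob")    # the paper uses out-of-bag predictions

## Random forests with 1000 trees and minimum terminal node size 7
rf_fit <- randomForest(Z ~ ., data = dat, ntree = 1000, nodesize = 7)
ps_rf  <- rf_fit$votes / rowSums(rf_fit$votes)               # out-of-bag vote proportions
```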
We also extended the OAL approach to the three-treatment comparison. Following Shortreed and Ertefaie [14], we considered a set of possible values for the tuning parameter, which was selected by minimizing a weighted absolute mean difference between treatment groups, a quantity that combines the weighted differences in covariates and the absolute values of the coefficients of the corresponding covariates in the outcome model. Since the adaptive weights for the covariates excluded by the outcome model become infinite, these covariates were not used to fit the adaptive LASSO for variable selection and propensity score estimation. In that sense, OAL in the high-dimensional setting is equivalent to a LASSO based on Ysel with additional weights imposed on the covariates for variable selection.
5.2 Simulation Setup
For each simulated dataset, three treatment groups were compared and a high-dimensional set of candidate covariates was considered, consisting of confounders, covariates related to treatment only, covariates related to outcome only, and spurious predictors. Covariates were generated as follows unless otherwise specified. Half of the covariates (rounded down) in each of the four subsets were generated from a binomial distribution with a probability of 0.3, and the other half were generated from a multivariate normal distribution with an identity covariance matrix.
We considered three different simulation settings. Our first setting assumes additivity and linearity for the treatment generating model,
(4) |
where is the probability of receiving treatment , with . We assumed heterogeneous treatment effects on the outcome, and sampled the potential outcome from a binomial distribution with probability
where , , and . We considered three levels of sparsity for the treatment and outcome models, with , , and for the scenarios with sparse, moderately sparse, and dense models, respectively. As a result, the dimension of differed across the scenarios. The parameter was scaled such that . Similarly, the signal strength of the outcome model was scaled such that , , and . The true values for , , and were 0.61, 0.71, and 0.68 for the `sparse' scenario, 0.69, 0.77, and 0.76 for the `moderately sparse' scenario, and 0.76, 0.83, and 0.83 for the `dense' scenario. To examine the performance as the sample size increases, we let the sample size be 500, 1000, and 2000 for the `sparse' scenario.
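To illustrate the first simulation setting, the R sketch below generates treatment from a multinomial logit model and potential binary outcomes with treatment-specific (heterogeneous) effects; the dimensions and coefficient values are hypothetical placeholders and not the values used in the paper.

```r
set.seed(2024)
n <- 500; p <- 200
X <- cbind(matrix(rbinom(n * (p / 2), 1, 0.3), n),   # binary covariates
           matrix(rnorm(n * (p / 2)), n))            # continuous covariates

# Hypothetical sparse coefficient vectors
alpha1 <- alpha2 <- rep(0, p); alpha1[1:4] <- 0.4; alpha2[1:4] <- -0.4   # treatment model
beta <- rep(0, p); beta[c(1:4, 9:12)] <- 0.5                             # outcome model

# Treatment assignment: multinomial logit with group 3 as the reference
lin1 <- X %*% alpha1; lin2 <- X %*% alpha2
probs <- cbind(exp(lin1), exp(lin2), 1) / as.vector(1 + exp(lin1) + exp(lin2))
Z <- apply(probs, 1, function(pr) sample(1:3, 1, prob = pr))

# Potential outcomes with heterogeneous treatment effects, then the observed outcome
trt_shift <- c(0, 0.5, 1); trt_scale <- c(1, 1.2, 0.8)
Y_pot <- sapply(1:3, function(k)
  rbinom(n, 1, plogis(-1 + trt_shift[k] + trt_scale[k] * (X %*% beta))))
Y <- Y_pot[cbind(1:n, Z)]
```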
Our second setting assumed that models were `sparse' () and considered a set of association equations for the treatment assignment that varied in degrees of nonlinearity and nonadditivity. In this case, in (4) may contain some transformation of covariates in and/or interaction effects. The structure of are shown in Table A.1. Specifically, we considered scenarios with nonlinear main effects and no interactions (NL), linear main and interaction effects (L-L), nonlinear main effects and linear interactions (NL-L), and nonlinear main and interaction effects (NL-NL). All confounders were continuous in this setting, and the outcomes were generated using the same model as was used in the first setting except that .
Our third setting also assumed that the models were sparse and let censoring come into play. Instead of sampling binary outcomes directly from a binomial distribution, we first generated time to event from a logistic distribution with mean function
and scale parameter , where , and . Then we obtained the outcome such that . We generated the censoring time using inverse transform sampling [26] as a function of . Specifically, we assumed a Cox proportional hazards model with the baseline hazard following a Weibull distribution,
with scale parameter and shape parameter , where was sampled from a distribution. Note that the censoring model was assumed to be known in our simulation settings, which resembles our data example, where censoring is believed to depend only on a low-dimensional set of covariates that can be identified by human experts. The proportion of subjects for whom the outcome was not observed was around 22%.
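Censoring times from a Cox model with a Weibull baseline hazard can be drawn by inverse transform sampling; the sketch below continues the notation of the previous sketch, with T_event denoting previously generated event times, and uses hypothetical shape, scale, and coefficient values.

```r
# Inverse transform sampling for a Cox model with Weibull baseline hazard:
# S(t | x) = exp{-(t / b)^a * exp(lp)}  implies  C = b * (-log(U) * exp(-lp))^(1 / a)
a <- 1.5; b <- 8              # hypothetical Weibull shape and scale
eta <- 0.3                    # hypothetical censoring-model coefficient
lp <- eta * X[, 1]            # linear predictor of the censoring model
U <- runif(n)
C <- b * (-log(U) * exp(-lp))^(1 / a)

tau   <- 5                                      # landmark time defining the binary outcome
delta <- as.numeric(pmin(T_event, tau) <= C)    # indicator that the binary outcome is observed
```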
We considered the six routes displayed in Figure 1 for propensity score estimation. We also present the results for three sets of covariates that are usually unknown in practice: the confounders only, the treatment predictors, and the outcome predictors. These results were used to illustrate the impact of different groups of covariates on the treatment effect estimates. The ATEs were estimated using the IPW method.
5.3 Construction of Confidence Intervals
The confidence intervals (CI) were constructed using bootstrap standard errors based on 200 bootstrap replicates. For the settings without censoring, we considered two possible bootstrap procedures. The first applies variable selection to each bootstrap replicate and refits the models for the treatment and outcome using the variables selected in each replicate; we refer to it as the usual bootstrap. The second ignores the variability due to covariate selection. Instead, for LOGIS based on All and OP+All and for all tree-based methods, the OP and propensity scores are directly bootstrapped from the OP and propensity scores obtained in the original sample, without refitting the models. For LOGIS based on Ysel, YZsel, OP+Ysel, and OP+YZsel, propensity scores are obtained by refitting the final treatment model using the variables selected in the original data set. As a result, the usual bootstrap is much more computationally intensive than the modified bootstrap. A trial simulation study under the scenario of linear sparse models for sample sizes of 500 (Table 2) and 1000 (Table A.2) showed that the usual bootstrap tended to overestimate the standard errors and produce overly conservative CIs for the tree-based methods. For example, most of the coverage rates for the methods that involved OP were above 98%. Over-coverage was also observed for (unpenalized) LOGIS under the usual bootstrap. On the other hand, LOGIS based on OP+All had close to nominal coverage using the usual bootstrap, at the expense of a high computational burden, and slight over-coverage using the modified bootstrap. For the tree-based methods, standard errors based on the modified bootstrap were close to their corresponding Monte Carlo standard deviations. In this case, the modified bootstrap remedied the overestimation of the usual bootstrap by dropping the variability due to variable selection, and estimated the true variability well across all methods. Therefore, we proceed with the modified bootstrap technique for our simulation studies, which directly samples the estimated propensity scores and OP from the original simulated data set for each bootstrap replicate. For the third setting, where censoring existed, we again used the modified bootstrap, but with the censoring model refitted for each bootstrap sample. Metrics used to compare the various propensity score estimation methods included bias, Monte Carlo standard deviation, standard error, root mean squared error (RMSE), and coverage rate of 95% CIs. True values were determined using a large number of independently simulated replicates.
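The modified bootstrap can be sketched as resampling subjects together with their already-estimated propensity scores (and OP, when applicable) and recomputing the IPW estimate in each replicate; in the R sketch below, ps_hat, estimate_12, and the helper ipw_ate() are hypothetical names, and censoring weights are omitted as in the settings without censoring.

```r
set.seed(1)
B <- 200

# Hypothetical helper: IPW risk difference between treatments k1 and k2
ipw_ate <- function(Y, Z, ps, k1, k2) {
  mean((Z == k1) * Y / ps[, k1]) - mean((Z == k2) * Y / ps[, k2])
}

boot_est <- replicate(B, {
  idx <- sample(seq_len(n), n, replace = TRUE)               # resample subjects
  ipw_ate(Y[idx], Z[idx], ps_hat[idx, , drop = FALSE], 1, 2) # reuse the original ps_hat
})
se_modified <- sd(boot_est)
ci_modified <- estimate_12 + c(-1, 1) * qnorm(0.975) * se_modified  # estimate_12: original point estimate
```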
Table 2: Bias, Monte Carlo standard deviation (SD), bootstrap standard error (SE), and coverage (%) of 95% CIs under the usual and the modified bootstrap procedures, for the trial simulation with linear sparse models and sample size 500.

 | Bias | | | SD | | | SE (usual bootstrap) | | | Coverage (usual bootstrap) | | | SE (modified bootstrap) | | | Coverage (modified bootstrap) | |
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | ||||||||||||
OAL | 101 | 77 | -25 | 54 | 48 | 51 | 57 | 51 | 53 | 96.1 | 96.0 | 95.7 | 55 | 50 | 52 | 94.1 | 95.6 | 94.2 | ||||||||||||
LOGIS | ||||||||||||||||||||||||||||||
All | 96 | 75 | -21 | 52 | 48 | 49 | 56 | 54 | 53 | 59.4 | 73.2 | 93.4 | 55 | 55 | 52 | 59.8 | 75.9 | 93.3 | ||||||||||||
Ysel | -6 | -2 | 5 | 56 | 49 | 55 | 94 | 82 | 85 | 100.0 | 99.7 | 99.9 | 58 | 51 | 55 | 95.4 | 96.0 | 94.8 | ||||||||||||
YZsel | -1 | 3 | 4 | 57 | 50 | 55 | 80 | 70 | 74 | 99.7 | 99.1 | 99.4 | 58 | 53 | 55 | 95.0 | 95.5 | 95.0 | ||||||||||||
OP+All | 15 | 15 | 0 | 50 | 45 | 49 | 57 | 52 | 54 | 95.8 | 96.9 | 95.5 | 58 | 56 | 56 | 96.3 | 97.8 | 96.9 | ||||||||||||
OP+Ysel | -6 | -2 | 5 | 56 | 49 | 55 | 94 | 82 | 85 | 100.0 | 99.7 | 99.9 | 58 | 51 | 55 | 95.4 | 96.0 | 94.8 | ||||||||||||
OP+YZsel | -5 | -1 | 5 | 54 | 47 | 54 | 80 | 69 | 73 | 99.8 | 99.4 | 99.3 | 55 | 49 | 53 | 95.2 | 96.0 | 94.3 | ||||||||||||
CART | ||||||||||||||||||||||||||||||
All | 125 | 95 | -30 | 79 | 78 | 74 | 95 | 96 | 89 | 78.1 | 87.4 | 97.0 | 78 | 80 | 74 | 63.2 | 75.9 | 92.0 | ||||||||||||
Ysel | 74 | 57 | -17 | 68 | 68 | 66 | 87 | 86 | 82 | 90.9 | 94.4 | 97.5 | 72 | 72 | 69 | 81.6 | 88.1 | 94.0 | ||||||||||||
YZsel | 59 | 45 | -14 | 66 | 62 | 62 | 83 | 83 | 79 | 93.1 | 96.4 | 99.0 | 70 | 68 | 67 | 86.2 | 90.8 | 94.8 | ||||||||||||
OP+All | 10 | 10 | 0 | 77 | 73 | 78 | 101 | 93 | 102 | 98.8 | 98.6 | 98.9 | 79 | 74 | 79 | 93.4 | 93.8 | 94.2 | ||||||||||||
OP+Ysel | 14 | 12 | -3 | 63 | 61 | 64 | 87 | 81 | 88 | 99.4 | 99.0 | 98.7 | 66 | 63 | 66 | 94.2 | 94.4 | 94.8 | ||||||||||||
OP+YZsel | 14 | 10 | -4 | 60 | 56 | 61 | 82 | 77 | 84 | 99.3 | 99.5 | 99.4 | 63 | 59 | 62 | 94.3 | 94.7 | 94.8 | ||||||||||||
Pruned CART | ||||||||||||||||||||||||||||||
All | 134 | 102 | -32 | 63 | 61 | 55 | 92 | 92 | 86 | 75.3 | 90.4 | 99.1 | 61 | 61 | 56 | 40.1 | 60.4 | 90.4 | ||||||||||||
Ysel | 97 | 74 | -23 | 61 | 59 | 54 | 85 | 84 | 79 | 87.0 | 93.2 | 99.0 | 60 | 60 | 56 | 61.0 | 75.5 | 92.4 | ||||||||||||
YZsel | 81 | 63 | -19 | 64 | 60 | 54 | 82 | 81 | 76 | 87.2 | 93.5 | 99.3 | 61 | 60 | 57 | 68.6 | 80.7 | 93.0 | ||||||||||||
OP+All | -3 | 1 | 4 | 56 | 51 | 55 | 95 | 87 | 96 | 99.8 | 99.5 | 99.9 | 58 | 52 | 57 | 94.3 | 94.5 | 94.6 | ||||||||||||
OP+Ysel | 1 | 4 | 2 | 52 | 49 | 54 | 83 | 77 | 84 | 100.0 | 99.5 | 99.7 | 55 | 50 | 54 | 94.2 | 94.8 | 94.6 | ||||||||||||
OP+YZsel | 2 | 3 | 1 | 53 | 48 | 53 | 78 | 73 | 80 | 99.8 | 99.6 | 99.6 | 55 | 50 | 54 | 95.7 | 94.9 | 94.7 | ||||||||||||
Bagged CART | ||||||||||||||||||||||||||||||
All | 120 | 91 | -29 | 54 | 51 | 49 | 45 | 45 | 42 | 28.9 | 46.9 | 85.3 | 56 | 57 | 53 | 44.1 | 65.5 | 91.2 | ||||||||||||
Ysel | 17 | 17 | 0 | 67 | 61 | 64 | 54 | 51 | 49 | 87.3 | 88.7 | 86.2 | 71 | 66 | 67 | 93.4 | 95.6 | 95.4 | ||||||||||||
YZsel | -35 | -26 | 8 | 90 | 80 | 85 | 63 | 59 | 57 | 80.0 | 82.1 | 80.9 | 86 | 79 | 81 | 91.6 | 92.5 | 94.8 | ||||||||||||
OP+All | -5 | 1 | 6 | 51 | 46 | 50 | 86 | 84 | 90 | 100.0 | 99.9 | 99.9 | 52 | 47 | 51 | 94.2 | 95.4 | 94.4 | ||||||||||||
OP+Ysel | -7 | 0 | 7 | 47 | 44 | 48 | 80 | 80 | 85 | 100.0 | 99.9 | 100.0 | 50 | 45 | 50 | 94.6 | 95.2 | 95.0 | ||||||||||||
OP+YZsel | -7 | -1 | 6 | 47 | 44 | 49 | 77 | 77 | 82 | 100.0 | 99.8 | 99.9 | 50 | 46 | 50 | 94.6 | 95.0 | 94.6 | ||||||||||||
Random Forests | ||||||||||||||||||||||||||||||
All | 135 | 103 | -32 | 50 | 48 | 47 | 40 | 40 | 38 | 13.4 | 30.9 | 80.4 | 52 | 53 | 50 | 27.1 | 51.2 | 90.0 | ||||||||||||
Ysel | 53 | 43 | -10 | 56 | 50 | 51 | 41 | 41 | 38 | 68.4 | 76.2 | 84.7 | 59 | 57 | 57 | 84.9 | 91.2 | 95.9 | ||||||||||||
YZsel | -36 | -22 | 14 | 91 | 72 | 78 | 46 | 44 | 41 | 66.8 | 75.7 | 72.0 | 79 | 70 | 75 | 91.0 | 91.2 | 94.5 | ||||||||||||
OP+All | -5 | 1 | 6 | 53 | 46 | 51 | 61 | 56 | 59 | 97.5 | 98.4 | 97.2 | 53 | 48 | 52 | 94.8 | 94.9 | 94.0 | ||||||||||||
OP+Ysel | -9 | -1 | 8 | 48 | 44 | 49 | 70 | 66 | 71 | 99.9 | 99.5 | 99.4 | 51 | 46 | 50 | 94.5 | 94.8 | 94.4 | ||||||||||||
OP+YZsel | -8 | -1 | 7 | 48 | 45 | 49 | 79 | 76 | 82 | 99.9 | 99.7 | 99.9 | 51 | 46 | 50 | 94.8 | 95.0 | 94.7 |
5.4 Simulation Results
We present the box plots of bias for the IPW estimates across the simulation settings in Figures 2-4. The numerical results for all evaluation metrics are reported in the Supplementary Materials.
5.4.1 Bias
The numerical results for the setting of linear treatment models are presented in Tables A.3-A.9 in the Supplementary Materials. LOGIS that adjusted for the confounders, the treatment predictors, or the outcome predictors had close to zero empirical bias, as expected (e.g., Table A.3). For the CART family methods, using confounders only generally yielded smaller bias than using treatment predictors, outcome predictors, or All. Methods based on pre-selected predictors (Ysel and YZsel) tended to yield smaller empirical bias than methods using all the available predictors as input (Figure 2), which indicates that excluding noise variables before fitting the final treatment model can help remove the bias, though the absolute bias was still greater than zero for the tree-based methods. One exception was that random forests based on YZsel resulted in substantial bias in the `sparse' scenario, possibly because there were too few true predictors at each split for random forests to choose from. The OAL method had performance similar in terms of bias to LOGIS based on OP+Ysel and OP+YZsel in the `sparse' scenario (Tables A.3-A.5), while the latter outperformed the former in the `moderately sparse' (Table A.6) and `dense' scenarios (Tables A.8 and A.9).
When the outcome information was not taken into account, LOGIS in general produced less biased estimates than the tree-based methods for each of the routes 1-3 in Figure 1. The biases for All, Ysel, and YZsel grew closer to one another as the model became `denser' (Figure 2). As the sample size increased from 500 to 2000 for the `sparse' scenario (Figure 3), the nonzero empirical bias persisted across the methods considered, which illustrates the slow convergence rate of the LASSO. The inclusion of OP greatly reduced the bias of the estimates in the cases where methods based on All, Ysel, or YZsel resulted in larger-than-zero absolute bias across the simulation settings, which highlights the robustness of the treatment effect estimator provided by incorporating OP into the estimation process.
With nonlinearity and nonadditivity in the treatment model, the performance of LOGIS was not inferior to the tree-based methods in terms of bias (Tables A.10-A.17), possibly because in this case, LOGIS approximated nonlinear functions reasonably well. The good approximation of linear methods to nonlinear functions was also observed in Tu [12], where multivariable linear regression yielded smaller bias than bagged CART and random forests in some cases with nonlinear and nonadditive associations in the treatment models.
When censoring existed, LOGIS based on Ysel, YZsel, OP+Ysel, and OP+YZsel produced much lower bias than their CART family counterparts. One exception was YZsel-based bagged CART, which had close to zero bias (Table A.18).
5.4.2 Statistical Efficiency and RMSE
In general, estimates produced by routes 4, 5, and 6 had smaller variability than those resulting from routes 1, 2, and 3, respectively, for each method across all the settings, which illustrates the advantage of leveraging the information about the outcome model in terms of statistical efficiency.
Methods based on OP+All, OP+Ysel, and OP+YZsel had smaller variability and RMSE of the effect estimates across the simulation settings compared to methods based on All, Ysel, and YZsel, respectively (Figures B.1-B.3). For example, for the scenario with moderately sparse models, the percentage reduction in RMSE ranged from 1.6% to 20.6% for LOGIS based on OP+YZsel compared to YZsel, and from 26.3% to 41.6% for random forests based on OP+YZsel compared to YZsel (Table A.6). The variabilities for OP+All, OP+Ysel, and OP+YZsel were close to one another, and none was uniformly smaller than the other two. In general, regardless of whether OP was incorporated, CART had the largest variability and RMSE among the methods for all scenarios considered.
In the presence of censoring, OAL and LOGIS based on OP+YZsel had the smallest RMSE among the methods under comparison (Table A.18). The inclusion of OP reduced the variability of the effect estimates for all methods considered. The bias for LOGIS, CART, and pruned CART decreased when the OP was taken into account, with OP+YZsel resulting in the smallest bias for each method. However, we still observed residual bias, especially for CART and pruned CART.
5.4.3 Coverage of 95% CI
For the tree-based methods and unpenalized LOGIS, standard errors obtained using the modified bootstrap were close to the corresponding Monte Carlo standard deviations in the scenarios with `sparse' models (regardless of whether the treatment models were linear and/or additive). The modified bootstrap tended to slightly overestimate the variability for LOGIS with the LASSO penalty. For example, the ratio of standard errors to Monte Carlo standard deviations ranged from 1.12 to 1.21 for OP+All in the scenario with linear associations and a `sparse' representation of the models for a sample size of 500. The tree-based methods achieved close to the nominal coverage of 95% for OP+All, OP+Ysel, and OP+YZsel for most of the scenarios considered. In the cases where the coverage fell below the nominal level, for example the `dense' scenario (Table A.8), the under-coverage was mainly caused by the empirical bias rather than by underestimation of the standard errors.
6 Data Analysis
We applied the algorithms considered in Figure 1 and the propensity score estimation methods listed in Table 1 to a dataset of patients with metastatic castration-resistant prostate cancer (mCRPC) from the Optum Clinformatics Data Mart [27]. Patients who used at least one of the six drugs (docetaxel, abiraterone, enzalutamide, sipuleucel-T, cabazitaxel, and radium-223) approved to treat mCRPC from January 1, 2014, to December 31, 2019 were included in the analytic cohort. We excluded the patients who received cabazitaxel or radium-223 as their first-line therapy from our analysis, since these two groups contained far fewer patients than the other four. The adverse effects of the four remaining drugs for mCRPC were compared, with the outcomes being the occurrence of at least one emergency room (ER) visit and of at least one hospitalization, respectively, within 180 or 360 days of treatment initiation. In the previous analysis, the working model for the outcome adjusted for a set of sociodemographic factors and other relevant covariates, together with five pre-existing comorbid conditions (diabetes, hypertension, arrhythmia, congestive heart failure, and osteoporosis). Specifically, the sociodemographic factors and other relevant covariates included age, race, education level, household income, geographic region, insurance product type, whether the insurance plan is administrative services only (ASO), metastatic status of cancer, year of first prescription, and provider type [28]. In this analysis, we increased the granularity of the comorbid conditions and considered a list of phenotype codes (phecodes), which are aggregations of International Classification of Diseases (ICD) codes that represent clinically meaningful phenotypes [29]. We matched the ICD codes in the claims data to the list of phecodes and identified 1042 phecodes as the original reservoir of predictors. These 1042 phecodes represent 16 broad categories of diseases (circulatory system, congenital anomalies, dermatological diseases, endocrine/metabolic diseases, genitourinary diseases, hematopoietic diseases, infectious diseases, injuries and poisonings, mental disorders, musculoskeletal diseases, neoplasms, neurological diseases, respiratory diseases, sense organs, and symptoms). The working model for the outcome then adjusted for the sociodemographic covariates together with the list of 1042 phecodes. In the pre-selection step, a LASSO was fitted to select the phecodes that are predictive of the outcome, with the coefficients of the sociodemographic covariates left unpenalized. Covariates predictive of the treatment were selected in a similar manner using a multinomial logistic regression with a LASSO-type penalty. In other words, the sociodemographic covariates (such as age, race, and household income) are always adjusted for in the treatment and outcome models. The six routes in Figure 1 were followed to obtain six different sets of estimated propensity scores. Note that standard multinomial logistic regression models adjusting for Ysel and YZsel yielded substantial standard errors for the estimates of the ATE (results not shown). Instead, we fitted the models using the LASSO-type penalty (with the sociodemographic covariates not being penalized) to reduce the variability of the estimates. For censoring, we fitted a Cox model adjusting for a low-dimensional set of covariates. All covariates were coded as binary, and the covariates with more than two levels were represented by dummy variables. The standard errors and CIs were constructed using the modified bootstrap procedure.
6.1 Data Analysis Results
The sample sizes for the ER visit cohort and the hospitalization cohort were 7678 and 7709, respectively. The descriptive statistics of the data and the proportions of patients censored in each treatment group are summarized elsewhere [20]. Specifically, the overall proportions of patients censored within 180 days and 360 days were 20.8% and 32.6%, respectively, for ER visits, and 24.6% and 40.6%, respectively, for hospitalization. The sipuleucel-T group had a larger percentage of censored patients than the other three groups.
Propensity scores estimated by the different routes in Figure 1 were highly correlated for each treatment level (results not shown). In general, we observed larger correlations among propensity scores estimated by LOGIS than among those estimated by the tree-based methods. For example, the correlations between LOGIS based on YZsel and OP+YZsel ranged from 0.97 to 1, while the correlations between random forests based on YZsel and OP+YZsel ranged from 0.93 to 0.99.
The number of phecodes in each disease group selected by the outcome and/or the treatment model in the pre-selection step for each of the two endpoints are reported in Tables A.19-A.22 in the Supplementary Materials. For example, 54 phecodes were selected by the outcome model and 69 were selected by the treatment model, with 12 lying in the intersection for ER visits within 180 days. Among the 12 phecodes predictive of both treatment and outcome, 4 were associated with neoplasm (cancer of prostate, secondary malignancy of respiratory organs, secondary malignant neoplasm, secondary malignant neoplasm of liver), 2 were associated with circulatory system (congestive heart failure NOS and congestive heart failure nonhypertensive), 2 were associated with mental disorders (delirium dementia and amnestic and other cognitive disorders, tobacco use disorder), 1 was associated with endocrine/metabolic system (type 2 diabetes), 1 was associated with hematopoietic system (lymphadenitis), 1 was associated with respiratory system (abnormal findings examination of lungs), and 1 was associated with symptoms (nausea and vomiting). For hospitalization within 180 days, 63 and 79 phecodes were selected by the outcome and treatment model, respectively, and the intersection contained 10 phecodes. Among the 10 phecodes, 7 were overlapped with those identified for ER visits within 180 days (congestive heart failure NOS, congestive heart failure nonhypertensive, lymphadenitis, cancer of prostate, secondary malignant neoplasm, secondary malignant neoplasm of liver, and nausea and vomiting), with 3 additional phecodes associated with neoplasm (cancer of stomach, malignant neoplasm of head, face, and neck, and secondary malignancy of respiratory organs). We also note that a number of phecodes for genitourinary system were identified to be related either to the treatment (e.g., acute cystitis, chronic renal failure (CKD), nephritis, nephrosis, renal sclerosis, other disorders of the kidney and ureters, renal failure, retention of urine, and urinary tract infection) or to the outcome (e.g., chronic kidney disease stage IV, functional disorders of bladder, hyperplasia of prostate, lump or mass in breast, other disorders of prostate, and prostatitis), while the intersection was empty, possibly due to the fine granularity of the phecodes.
Figure 5 shows results for (penalized) LOGIS, CART, and bagged CART for the 180-day risk differences in ER visits among the four treatment groups. Docetaxel users exhibited significantly higher 180-day risks of at least one ER visit than the users of the two oral drugs (abiraterone and enzalutamide), a finding that is consistent with previous studies [6, 30]. For example, compared to docetaxel users, the 180-day risk of ER visits was 11.2% lower for abiraterone users (estimated ATE: -0.112; 95% CI: -0.147, -0.078) and 13.5% lower for enzalutamide users (estimated ATE: -0.135; 95% CI: -0.177, -0.093), based on the penalized LOGIS method using OP+YZsel. Findings from CART and bagged CART are consistent with the penalized LOGIS method. The 180-day risk differences between abiraterone and enzalutamide users were generally not significant, except that some of the results yielded by random forests indicated a significantly lower risk for the enzalutamide group (Figures 5 and B.4). For the 360-day time window, enzalutamide users showed significantly lower risk of ER visits in most cases (Figures B.5 and B.6).
Similar directions of the comparisons among the four treatment groups were observed for 180-day and 360-day risks in hospitalization. In particular, patients who received enzalutamide as their first-line therapy had significantly lower risk of hospitalization than those who received abiraterone, which is consistent with the findings of a study based on a French insurance system database [31].
As was observed in the simulation studies, bagged CART and random forests using YZsel as input yielded estimates with large standard errors. In general, incorporating the OP into the treatment model reduced the variability of the estimates and led to narrower 95% CIs. Greater efficiency gains were noted for the CART family methods than for LOGIS. For example, for the 360-day risk of ER visits, the ratios of the CI widths of OP+All over those of All ranged from 0.98 to 1 for LOGIS and from 0.61 to 0.66 for bagged CART. When the OP was not included in the treatment model, the estimates yielded by LOGIS tended to have lower variance than those produced by the CART family methods. For example, for the 360-day risk of ER visits, the ratios of the CI widths of bagged CART over the CI widths of LOGIS using All as input ranged from 1.06 to 2.01. Consistent with what was observed in the simulation studies, the point estimates and CI widths of OP+Ysel and OP+YZsel were very close across the propensity score estimation methods.
7 Discussion
In this paper, we proposed to incorporate the outcome-covariate relationship by including the predicted outcome probability (OP) in the propensity score model. We examined the traditional multinomial logistic regression method and a set of machine learning techniques combined with OP for estimating the ATE. Although the idea of using outcome models to improve efficiency has been explored for regularization methods, how to incorporate the outcome-covariate relationship into tree-based machine learning methods has been less studied. Simulation studies show that simultaneous variable selection and propensity score estimation (i.e., methods based on All) in a high-dimensional setting led to substantial bias for the LOGIS method, possibly because of the slow convergence rate resulting from the large number of noise variables. A similar pattern was observed for the tree-based methods. We showed that the inclusion of OP can improve the robustness and statistical efficiency of the treatment effect estimators. If the variable selection step has satisfactory performance in terms of identifying the set of important confounders and controlling the bias, then the benefit of including OP in terms of bias reduction may be minimal. On the other hand, if methods based on All, Ysel, and YZsel produce biased estimates of the ATE, then further adjusting for OP in the treatment model can help reduce the bias. OP alleviates the bias by adding back information about confounders that are potentially missed by the variable selection procedure.
The LOGIS method (both standard and penalized depending on the dimension of the covariates) outperformed the tree-based machine learning methods in the majority of scenarios, especially when OP is used. The bias for the LOGIS method could be smaller than the tree-based methods even under conditions of nonlinearity and/or nonadditivity. On the other hand, the nonparametric machine learning methods such as bagged CART and random forests could produce less biased estimates than multinomial logistic regression when only the treatment-covariate relationship was considered. The performance of tree-based methods may be improved by optimizing the tuning parameters, such as minimum size of terminal nodes, maximum number of terminal nodes, and number of trees to grow. In addition, standard cross-validation procedure, which focuses on out-of-sample performance, is often used to optimize tuning parameters for accurate predictive performance in practice and may not have desired characteristics for selecting tuning parameters for causal inference (i.e., unbiased treatment effect estimates) [32]. The selection criteria for the tuning parameters in the context of treatment effect estimation could be a topic of future work.
Acknowledgment
This research is supported by NIH (1UG3CA267907), NSF (DMS 1712933) and NIH (R01HG008773).
Conflict of interest
The authors state no conflict of interest.
Data availability statement
The data that support the findings of this study are available from OptumInsight, but restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available. The data are, however, available from the authors upon reasonable request and with the permission of OptumInsight (https://ihpi.umich.edu/member-resources/data-and-methods/available-datasets).
References
- Rosenbaum and Rubin [1983] P. R. Rosenbaum and D. B. Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, April 1983. ISSN 0006-3444. 10.1093/biomet/70.1.41.
- Stuart [2010] E. A. Stuart. Matching methods for causal inference: A review and a look forward. Statistical Science, 25(1):1–21, February 2010. ISSN 0883-4237. 10.1214/09-STS313.
- Yang et al. [2016] S. Yang, G. W. Imbens, Z. Cui, D. E. Faries, and Z. Kadziola. Propensity score matching and subclassification in observational studies with multi-level treatments. Biometrics, 72(4):1055–1065, 2016. ISSN 1541-0420. 10.1111/biom.12505.
- Lunceford and Davidian [2004] J. K. Lunceford and M. Davidian. Stratification and weighting via the propensity score in estimation of causal treatment effects: A comparative study. Statistics in Medicine, 23(19):2937–2960, October 2004. ISSN 1097-0258. 10.1002/sim.1903.
- Zhou et al. [2019] T. Zhou, M. R. Elliott, and R. J. A. Little. Penalized Spline of Propensity Methods for Treatment Comparison. Journal of the American Statistical Association, 114(525):1–19, January 2019. ISSN 0162-1459. 10.1080/01621459.2018.1518234.
- Yu et al. [2021] Y. Yu, M. Zhang, X. Shi, M. E. V. Caram, R. J. A. Little, and B. Mukherjee. A comparison of parametric propensity score-based methods for causal inference with multiple treatments and a binary outcome. Statistics in Medicine, January 2021. ISSN 1097-0258. 10.1002/sim.8862.
- Schneeweiss et al. [2009] S. Schneeweiss, J. A. Rassen, R. J. Glynn, J. Avorn, H. Mogun, and M. A. Brookhart. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology (Cambridge, Mass.), 20(4):512–522, July 2009. ISSN 1531-5487. 10.1097/EDE.0b013e3181a663cc.
- Kang and Schafer [2007] J. D. Y. Kang and J. L. Schafer. Demystifying Double Robustness: A Comparison of Alternative Strategies for Estimating a Population Mean from Incomplete Data. Statistical Science, 22(4):523–539, November 2007. ISSN 0883-4237, 2168-8745. 10.1214/07-STS227.
- Setoguchi et al. [2008] S. Setoguchi, S. Schneeweiss, M. A. Brookhart, R. J. Glynn, and E. F. Cook. Evaluating uses of data mining techniques in propensity score estimation: A simulation study. Pharmacoepidemiology and Drug Safety, 17(6):546–555, June 2008. ISSN 1099-1557. 10.1002/pds.1555.
- Lee et al. [2010] B. K. Lee, J. Lessler, and E. A. Stuart. Improving propensity score weighting using machine learning. Statistics in Medicine, 29(3):337–346, February 2010. ISSN 0277-6715. 10.1002/sim.3782.
- Ju et al. [2019] C. Ju, M. Combs, S. D. Lendle, J. M. Franklin, R. Wyss, S. Schneeweiss, and M. J. van der Laan. Propensity score prediction for electronic healthcare databases using super learner and high-dimensional propensity score methods. Journal of Applied Statistics, 46(12):2216–2236, September 2019. ISSN 0266-4763. 10.1080/02664763.2019.1582614.
- Tu [2019] C. Tu. Comparison of various machine learning algorithms for estimating generalized propensity score. Journal of Statistical Computation and Simulation, 89(4):708–719, March 2019. ISSN 0094-9655. 10.1080/00949655.2019.1571059.
- De Luna et al. [2011] X. De Luna, I. Waernbaum, and T. S. Richardson. Covariate selection for the nonparametric estimation of an average treatment effect. Biometrika, 98(4):861–875, December 2011. ISSN 0006-3444. 10.1093/biomet/asr041.
- Shortreed and Ertefaie [2017] S. M. Shortreed and A. Ertefaie. Outcome-adaptive lasso: Variable selection for causal inference. Biometrics, 73(4):1111–1122, 2017. ISSN 1541-0420. 10.1111/biom.12679.
- Brookhart et al. [2006] M. A. Brookhart, S. Schneeweiss, K. J. Rothman, R. J. Glynn, J. Avorn, and T. Stürmer. Variable selection for propensity score models. American Journal of Epidemiology, 163(12):1149–1156, June 2006. ISSN 0002-9262. 10.1093/aje/kwj149.
- Zou [2006] H. Zou. The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association, 101(476):1418–1429, December 2006. ISSN 0162-1459. 10.1198/016214506000000735.
- Ju et al. [2020] C. Ju, D. Benkeser, and M. J. van der Laan. Robust inference on the average treatment effect using the outcome highly adaptive lasso. Biometrics, 76(1):109–118, 2020. ISSN 1541-0420. 10.1111/biom.13121.
- Zigler and Dominici [2014] C. M. Zigler and F. Dominici. Uncertainty in Propensity Score Estimation: Bayesian Methods for Variable Selection and Model Averaged Causal Effects. Journal of the American Statistical Association, 109(505):95–107, January 2014. ISSN 0162-1459. 10.1080/01621459.2013.869498.
- Rubin [1974] D. B. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688–701, 1974. ISSN 1939-2176(Electronic),0022-0663(Print). 10.1037/h0037350.
- Yu et al. [2022] Y. Yu, M. Zhang, and B. Mukherjee. An inverse probability weighted regression method that accounts for right-censoring for causal inference with multiple treatments and a binary outcome. arXiv preprint arXiv:2205.08052, 2022.
- Tibshirani [1996] R. Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1):267–288, 1996. ISSN 0035-9246.
- Meinshausen [2007] N. Meinshausen. Relaxed Lasso. Computational Statistics & Data Analysis, 52(1):374–393, September 2007. ISSN 0167-9473. 10.1016/j.csda.2006.12.019.
- Friedman et al. [2010] J. Friedman, T. Hastie, and R. Tibshirani. Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, 33(1):1–22, 2010. ISSN 1548-7660.
- Breiman et al. [1984] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and Regression Trees. CRC Press, January 1984. ISBN 978-0-412-04841-8.
- James et al. [2021] G. James, D. Witten, T. Hastie, and R. Tibshirani. Tree-Based Methods. In G. James, D. Witten, T. Hastie, and R. Tibshirani, editors, An Introduction to Statistical Learning: With Applications in R, Springer Texts in Statistics, pages 327–365. Springer US, New York, NY, 2021. ISBN 978-1-07-161418-1. 10.1007/978-1-0716-1418-1_8.
- Bender et al. [2005] R. Bender, T. Augustin, and M. Blettner. Generating survival times to simulate Cox proportional hazards models. Statistics in Medicine, 24(11):1713–1723, 2005. ISSN 1097-0258. 10.1002/sim.2059.
- Ross et al. [2021] R. D. Ross, X. Shi, M. E. Caram, P. A. Tsao, P. Lin, A. Bohnert, M. Zhang, and B. Mukherjee. Veridical causal inference using propensity score methods for comparative effectiveness research with medical claims. Health Services and Outcomes Research Methodology, 21(2):206–228, 2021.
- Caram et al. [2019] M. E. V. Caram, R. Ross, P. Lin, and B. Mukherjee. Factors Associated With Use of Sipuleucel-T to Treat Patients With Advanced Prostate Cancer. JAMA Network Open, 2(4):e192589, April 2019. ISSN 2574-3805. 10.1001/jamanetworkopen.2019.2589.
- Denny et al. [2010] J. C. Denny, M. D. Ritchie, M. A. Basford, J. M. Pulley, L. Bastarache, K. Brown-Gentry, D. Wang, D. R. Masys, D. M. Roden, and D. C. Crawford. PheWAS: Demonstrating the feasibility of a phenome-wide scan to discover gene–disease associations. Bioinformatics, 26(9):1205–1210, May 2010. ISSN 1367-4803. 10.1093/bioinformatics/btq126.
- Tonyali et al. [2017] S. Tonyali, H. B. Haberal, and E. Sogutdelen. Toxicity, Adverse Events, and Quality of Life Associated with the Treatment of Metastatic Castration-Resistant Prostate Cancer. Current Urology, 10(4):169–173, November 2017. ISSN 1661-7649. 10.1159/000447176.
- Scailteux et al. [2021] L.-M. Scailteux, F. Despas, F. Balusson, B. Campillo-Gimenez, R. Mathieu, S. Vincendeau, A. Happe, E. Nowak, S. Kerbrat, and E. Oger. Hospitalization for adverse events under abiraterone or enzalutamide exposure in real-world setting: A French population-based study on prostate cancer patients. British Journal of Clinical Pharmacology, July 2021. ISSN 1365-2125. 10.1111/bcp.14972.
- Häggström and de Luna [2014] J. Häggström and X. de Luna. Targeted smoothing parameter selection for estimating average causal effects. Computational Statistics, 29(6):1727–1748, December 2014. ISSN 1613-9658. 10.1007/s00180-014-0515-0.
Supplementary Materials
Supplementary Tables
Predictors for the treatment-generating model under each simulation scenario, including linear main effects only (L) and nonlinear main effects only (NL).
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | ||||||||||||
OAL | 95 | 70 | -25 | 37 | 34 | 36 | 39 | 35 | 37 | 96.0 | 94.9 | 94.5 | 38 | 34 | 36 | 94.9 | 94.9 | 94.9 | ||||||||||||
LOGIS | ||||||||||||||||||||||||||||||
All | 72 | 56 | -16 | 37 | 37 | 35 | 40 | 37 | 37 | 55.5 | 65.9 | 92.7 | 41 | 40 | 38 | 59.8 | 74.2 | 94.6 | ||||||||||||
Ysel | -4 | -2 | 2 | 38 | 35 | 36 | 48 | 41 | 44 | 98.2 | 97.3 | 97.3 | 39 | 35 | 37 | 95.4 | 95.0 | 94.9 | ||||||||||||
YZsel | -3 | -1 | 2 | 39 | 37 | 38 | 46 | 40 | 43 | 97.4 | 96.0 | 95.9 | 40 | 36 | 38 | 95.0 | 95.3 | 95.4 | ||||||||||||
OP+All | 14 | 13 | -2 | 37 | 35 | 35 | 40 | 36 | 38 | 95.3 | 94.1 | 95.8 | 43 | 40 | 41 | 96.6 | 97.6 | 97.2 | ||||||||||||
OP+Ysel | -4 | -2 | 2 | 38 | 35 | 36 | 48 | 41 | 44 | 98.2 | 97.3 | 97.3 | 39 | 35 | 37 | 95.4 | 95.0 | 94.9 | ||||||||||||
OP+YZsel | -3 | -1 | 2 | 37 | 35 | 36 | 45 | 39 | 42 | 97.7 | 96.8 | 97.0 | 38 | 34 | 37 | 94.8 | 95.0 | 94.8 | ||||||||||||
CART | ||||||||||||||||||||||||||||||
All | 112 | 82 | -30 | 58 | 58 | 55 | 69 | 69 | 65 | 65.8 | 81.6 | 95.7 | 57 | 58 | 55 | 51.0 | 70.8 | 91.9 | ||||||||||||
Ysel | 61 | 43 | -18 | 50 | 50 | 47 | 63 | 62 | 59 | 87.6 | 93.5 | 98.4 | 52 | 51 | 50 | 78.1 | 86.8 | 94.9 | ||||||||||||
YZsel | 51 | 38 | -13 | 48 | 47 | 46 | 60 | 59 | 56 | 91.9 | 95.3 | 97.4 | 50 | 49 | 48 | 84.3 | 89.4 | 95.1 | ||||||||||||
OP+All | 7 | 4 | -3 | 55 | 52 | 56 | 70 | 65 | 71 | 98.2 | 98.5 | 98.3 | 56 | 52 | 56 | 94.0 | 94.4 | 94.4 | ||||||||||||
OP+Ysel | 13 | 7 | -6 | 45 | 44 | 45 | 61 | 57 | 61 | 98.8 | 98.6 | 99.2 | 46 | 44 | 46 | 94.0 | 95.1 | 95.3 | ||||||||||||
OP+YZsel | 17 | 11 | -6 | 43 | 41 | 43 | 57 | 54 | 58 | 98.4 | 98.4 | 99.4 | 44 | 41 | 44 | 93.5 | 95.0 | 94.7 | ||||||||||||
Pruned CART | ||||||||||||||||||||||||||||||
All | 125 | 94 | -31 | 45 | 43 | 38 | 68 | 68 | 63 | 53.0 | 80.4 | 99.1 | 42 | 42 | 39 | 19.4 | 40.2 | 88.6 | ||||||||||||
Ysel | 86 | 64 | -22 | 44 | 43 | 38 | 62 | 61 | 57 | 78.6 | 88.6 | 98.5 | 42 | 42 | 39 | 48.4 | 66.0 | 92.8 | ||||||||||||
YZsel | 75 | 56 | -19 | 46 | 45 | 39 | 60 | 58 | 55 | 80.6 | 90.7 | 97.8 | 43 | 42 | 40 | 60.0 | 73.4 | 93.9 | ||||||||||||
OP+All | -4 | -1 | 3 | 39 | 36 | 38 | 68 | 62 | 68 | 99.9 | 100.0 | 99.9 | 40 | 36 | 39 | 94.0 | 95.0 | 95.9 | ||||||||||||
OP+Ysel | -1 | -1 | 1 | 36 | 35 | 37 | 58 | 54 | 59 | 99.9 | 99.3 | 99.5 | 38 | 34 | 37 | 94.6 | 94.9 | 95.6 | ||||||||||||
OP+YZsel | 1 | 0 | 0 | 37 | 35 | 37 | 55 | 52 | 56 | 99.1 | 99.5 | 99.5 | 38 | 35 | 37 | 94.8 | 95.4 | 95.2 | ||||||||||||
Bagged CART | ||||||||||||||||||||||||||||||
All | 109 | 82 | -27 | 38 | 38 | 36 | 31 | 31 | 29 | 10.1 | 28.2 | 79.1 | 40 | 40 | 38 | 24.4 | 46.9 | 91.1 | ||||||||||||
Ysel | -1 | 0 | 1 | 52 | 46 | 48 | 39 | 36 | 34 | 84.8 | 87.0 | 85.0 | 52 | 48 | 49 | 94.4 | 95.8 | 96.0 | ||||||||||||
YZsel | -51 | -40 | 11 | 66 | 58 | 59 | 46 | 41 | 39 | 69.8 | 74.8 | 81.0 | 63 | 57 | 60 | 85.8 | 86.9 | 95.2 | ||||||||||||
OP+All | -4 | 1 | 4 | 37 | 34 | 36 | 62 | 62 | 68 | 99.9 | 99.9 | 100.0 | 38 | 33 | 37 | 94.7 | 95.0 | 94.7 | ||||||||||||
OP+Ysel | -7 | -2 | 5 | 34 | 32 | 34 | 57 | 60 | 64 | 99.9 | 99.9 | 100.0 | 36 | 32 | 35 | 94.3 | 95.6 | 94.7 | ||||||||||||
OP+YZsel | -7 | -2 | 5 | 34 | 32 | 34 | 55 | 57 | 60 | 99.3 | 100.0 | 99.9 | 36 | 32 | 35 | 94.5 | 95.2 | 94.6 | ||||||||||||
Random Forests | ||||||||||||||||||||||||||||||
All | 126 | 95 | -30 | 34 | 35 | 34 | 28 | 28 | 26 | 2.5 | 13.2 | 73.1 | 37 | 38 | 35 | 7.0 | 27.8 | 88.2 | ||||||||||||
Ysel | 32 | 25 | -7 | 44 | 39 | 37 | 30 | 29 | 27 | 69.3 | 77.2 | 84.5 | 43 | 41 | 41 | 86.2 | 91.4 | 96.7 | ||||||||||||
YZsel | -71 | -49 | 22 | 86 | 65 | 63 | 34 | 32 | 29 | 48.3 | 57.3 | 64.3 | 62 | 52 | 58 | 73.2 | 78.6 | 93.5 | ||||||||||||
OP+All | -4 | 1 | 4 | 38 | 35 | 37 | 43 | 39 | 42 | 96.6 | 96.9 | 96.4 | 39 | 34 | 38 | 95.3 | 95.2 | 94.0 | ||||||||||||
OP+Ysel | -8 | -2 | 6 | 35 | 32 | 35 | 50 | 47 | 52 | 98.6 | 99.5 | 98.7 | 36 | 32 | 36 | 94.7 | 95.2 | 94.7 | ||||||||||||
OP+YZsel | -8 | -2 | 6 | 35 | 32 | 35 | 56 | 54 | 59 | 99.5 | 99.8 | 99.8 | 36 | 32 | 36 | 94.8 | 95.6 | 94.7 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 165 | 124 | -41 | 50 | 52 | 49 | 172 | 135 | 64 | 50 | 52 | 48 | 9.7 | 34.1 | 85.5
OAL | 5 | 5 | 0 | 56 | 50 | 52 | 57 | 50 | 52 | 55 | 50 | 52 | 94.1 | 95.6 | 94.2 |
LOGIS | |||||||||||||||
Confounder | -1 | 1 | 2 | 58 | 53 | 55 | 58 | 53 | 55 | 57 | 52 | 54 | 94.3 | 94.3 | 94.3 |
Treatment | 2 | 3 | 1 | 73 | 63 | 65 | 73 | 63 | 65 | 69 | 61 | 62 | 94.1 | 94.2 | 93.4 |
Outcome | -1 | 0 | 2 | 57 | 49 | 53 | 57 | 49 | 53 | 56 | 50 | 53 | 94.5 | 96.1 | 94.1 |
All | 95 | 73 | -22 | 52 | 51 | 50 | 108 | 89 | 55 | 55 | 55 | 52 | 59.8 | 75.9 | 93.9 |
Ysel | -6 | -3 | 3 | 57 | 49 | 54 | 57 | 49 | 54 | 58 | 51 | 55 | 95.4 | 96.0 | 94.8 |
YZsel | -6 | -3 | 3 | 58 | 51 | 54 | 58 | 52 | 54 | 58 | 53 | 55 | 95.0 | 95.5 | 95.0 |
OP + All | 16 | 13 | -2 | 51 | 46 | 50 | 54 | 48 | 50 | 58 | 56 | 56 | 96.3 | 97.8 | 96.9 |
OP + Ysel | -6 | -3 | 3 | 57 | 46 | 54 | 57 | 59 | 54 | 58 | 51 | 55 | 95.4 | 96.0 | 94.8 |
OP + YZsel | -5 | -2 | 2 | 55 | 48 | 53 | 56 | 48 | 53 | 55 | 49 | 53 | 95.2 | 96.0 | 94.3 |
CART | |||||||||||||||
Confounder | 55 | 41 | -14 | 70 | 66 | 65 | 89 | 78 | 66 | 69 | 68 | 66 | 86.7 | 90.6 | 94.6 |
Treatment | 94 | 71 | -24 | 72 | 72 | 66 | 119 | 101 | 70 | 73 | 72 | 68 | 73.8 | 82.7 | 93.6 |
Outcome | 68 | 51 | -17 | 71 | 67 | 65 | 99 | 84 | 67 | 71 | 70 | 68 | 81.0 | 89.1 | 94.8 |
All | 123 | 93 | -30 | 79 | 81 | 76 | 146 | 123 | 81 | 78 | 80 | 74 | 63.2 | 75.9 | 92.0 |
Ysel | 72 | 54 | -17 | 72 | 71 | 66 | 102 | 89 | 70 | 72 | 72 | 69 | 81.6 | 88.1 | 94.0 |
YZsel | 60 | 44 | -15 | 69 | 66 | 63 | 91 | 79 | 66 | 70 | 68 | 67 | 86.2 | 90.8 | 94.8 |
OP + All | 10 | 8 | -1 | 80 | 74 | 79 | 80 | 76 | 79 | 79 | 74 | 79 | 93.4 | 93.8 | 94.2 |
OP + Ysel | 12 | 9 | -3 | 67 | 61 | 65 | 68 | 62 | 65 | 66 | 63 | 66 | 94.2 | 94.4 | 94.8 |
OP + YZsel | 15 | 9 | -6 | 62 | 58 | 62 | 64 | 59 | 62 | 63 | 59 | 62 | 94.3 | 94.7 | 94.8 |
Pruned CART | |||||||||||||||
Confounder | 81 | 61 | -20 | 64 | 61 | 55 | 103 | 86 | 59 | 61 | 60 | 57 | 69.4 | 81.4 | 93.3 |
Treatment | 110 | 83 | -27 | 66 | 66 | 58 | 128 | 106 | 64 | 65 | 64 | 59 | 58.1 | 72.4 | 92.3 |
Outcome | 92 | 69 | -22 | 63 | 59 | 54 | 111 | 91 | 59 | 60 | 60 | 56 | 63.1 | 77.0 | 93.0 |
All | 133 | 100 | -32 | 62 | 63 | 56 | 147 | 118 | 66 | 61 | 61 | 56 | 40.1 | 60.4 | 90.4 |
Ysel | 96 | 72 | -24 | 63 | 60 | 55 | 115 | 94 | 60 | 60 | 60 | 56 | 61.0 | 75.5 | 92.4 |
YZsel | 84 | 63 | -20 | 64 | 61 | 55 | 105 | 87 | 59 | 61 | 60 | 57 | 68.6 | 80.7 | 93.0 |
OP + All | -2 | 1 | 2 | 58 | 53 | 56 | 58 | 52 | 56 | 58 | 52 | 57 | 94.3 | 94.5 | 94.6 |
OP + Ysel | 1 | 2 | 1 | 56 | 49 | 54 | 56 | 49 | 54 | 55 | 50 | 54 | 94.2 | 94.8 | 94.6 |
OP + YZsel | 2 | 2 | 0 | 54 | 50 | 53 | 54 | 50 | 53 | 55 | 50 | 54 | 95.7 | 94.9 | 94.7 |
Bagged CART | |||||||||||||||
Confounder | -50 | -37 | 12 | 93 | 83 | 86 | 105 | 91 | 87 | 88 | 80 | 84 | 90.1 | 90.0 | 93.8 |
Treatment | 43 | 33 | -10 | 81 | 76 | 71 | 92 | 83 | 72 | 79 | 74 | 71 | 89.0 | 92.0 | 93.9 |
Outcome | -2 | 0 | 2 | 76 | 65 | 70 | 76 | 65 | 70 | 75 | 70 | 71 | 94.2 | 96.4 | 95.0 |
All | 119 | 90 | -30 | 54 | 53 | 51 | 131 | 104 | 59 | 56 | 57 | 53 | 44.1 | 65.5 | 91.2 |
Ysel | 17 | 14 | -3 | 71 | 63 | 63 | 73 | 65 | 63 | 71 | 66 | 67 | 93.4 | 95.6 | 95.2 |
YZsel | -34 | -27 | 7 | 91 | 82 | 81 | 97 | 85 | 82 | 86 | 79 | 81 | 91.6 | 92.5 | 94.8 |
OP + All | -4 | -1 | 4 | 53 | 46 | 52 | 53 | 46 | 52 | 52 | 47 | 51 | 94.2 | 95.4 | 94.4 |
OP + Ysel | -7 | -2 | 5 | 51 | 45 | 49 | 51 | 45 | 50 | 50 | 45 | 50 | 94.6 | 95.2 | 95.0 |
OP + YZsel | -7 | -2 | 5 | 51 | 45 | 49 | 51 | 45 | 49 | 50 | 46 | 50 | 94.6 | 95.0 | 94.6 |
Random Forests | |||||||||||||||
Confounder | -22 | -13 | 9 | 77 | 67 | 71 | 80 | 68 | 71 | 75 | 68 | 72 | 94.7 | 95.1 | 95.0 |
Treatment | 62 | 48 | -14 | 62 | 58 | 57 | 87 | 75 | 58 | 63 | 61 | 59 | 82.6 | 89.4 | 94.8 |
Outcome | 35 | 28 | -7 | 58 | 52 | 54 | 67 | 59 | 54 | 62 | 59 | 59 | 92.3 | 95.7 | 96.2 |
All | 135 | 102 | -33 | 49 | 51 | 48 | 144 | 114 | 58 | 52 | 53 | 50 | 27.1 | 51.2 | 90.0 |
Ysel | 53 | 41 | -11 | 57 | 52 | 51 | 77 | 66 | 53 | 59 | 57 | 57 | 84.9 | 91.2 | 94.5 |
YZsel | -36 | -24 | 13 | 94 | 78 | 79 | 101 | 81 | 82 | 79 | 70 | 75 | 91.0 | 91.2 | 94.0 |
OP + All | -5 | -1 | 4 | 55 | 47 | 53 | 55 | 47 | 52 | 53 | 48 | 52 | 94.8 | 94.9 | 94.0 |
OP + Ysel | -9 | -2 | 6 | 51 | 45 | 50 | 52 | 45 | 50 | 51 | 46 | 50 | 94.5 | 94.8 | 94.4 |
OP + YZsel | -8 | -2 | 5 | 50 | 46 | 50 | 51 | 45 | 50 | 51 | 46 | 50 | 94.8 | 95.0 | 94.7 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 165 | 125 | -41 | 36 | 37 | 33 | 169 | 130 | 53 | 36 | 37 | 34 | 0.5 | 8.3 | 78.4
OAL | -2 | 0 | 2 | 38 | 33 | 36 | 38 | 33 | 36 | 38 | 34 | 36 | 94.9 | 94.9 | 94.9 |
LOGIS | |||||||||||||||
Confounder | -3 | 0 | 2 | 40 | 36 | 37 | 40 | 36 | 37 | 40 | 36 | 38 | 94.5 | 95.0 | 95.2 |
Treatment | -2 | 0 | 1 | 50 | 43 | 43 | 50 | 43 | 43 | 49 | 43 | 43 | 94.2 | 94.7 | 94.4 |
Outcome | -3 | 0 | 2 | 38 | 33 | 36 | 38 | 33 | 36 | 38 | 34 | 37 | 94.8 | 95.1 | 94.9 |
All | 71 | 55 | -16 | 38 | 36 | 35 | 81 | 66 | 38 | 41 | 40 | 38 | 59.8 | 74.2 | 94.6 |
Ysel | -6 | -3 | 3 | 38 | 34 | 36 | 39 | 34 | 36 | 39 | 35 | 37 | 95.4 | 95.0 | 94.9 |
YZsel | -5 | -2 | 3 | 40 | 36 | 37 | 40 | 36 | 37 | 40 | 36 | 38 | 95.0 | 95.3 | 95.4 |
OP + All | 12 | 11 | -1 | 37 | 33 | 35 | 39 | 35 | 35 | 43 | 40 | 41 | 96.6 | 97.6 | 97.2 |
OP + Ysel | -6 | -3 | 3 | 38 | 34 | 36 | 39 | 34 | 36 | 39 | 35 | 37 | 95.4 | 95.0 | 94.9 |
OP + YZsel | -5 | -3 | 3 | 38 | 34 | 36 | 39 | 34 | 36 | 38 | 34 | 37 | 94.8 | 95.0 | 94.8 |
CART | |||||||||||||||
Confounder | 44 | 32 | -12 | 49 | 45 | 43 | 66 | 55 | 45 | 50 | 48 | 48 | 85.5 | 90.8 | 95.3 |
Treatment | 83 | 63 | -21 | 54 | 52 | 48 | 99 | 81 | 52 | 53 | 52 | 50 | 64.4 | 77.0 | 93.2 |
Outcome | 56 | 42 | -15 | 50 | 48 | 45 | 76 | 63 | 47 | 51 | 50 | 49 | 79.7 | 86.8 | 94.8 |
All | 111 | 82 | -29 | 58 | 58 | 54 | 125 | 100 | 62 | 57 | 58 | 55 | 51.0 | 70.8 | 91.9 |
Ysel | 60 | 43 | -17 | 51 | 49 | 46 | 79 | 66 | 49 | 52 | 51 | 50 | 78.1 | 86.8 | 94.9 |
YZsel | 47 | 35 | -13 | 49 | 47 | 44 | 69 | 58 | 46 | 50 | 49 | 48 | 84.3 | 89.4 | 95.1 |
OP + All | 5 | 4 | -1 | 56 | 51 | 56 | 56 | 51 | 56 | 56 | 52 | 56 | 94.0 | 94.4 | 94.4 |
OP + Ysel | 12 | 7 | -5 | 46 | 43 | 45 | 48 | 44 | 45 | 46 | 44 | 46 | 94.0 | 95.1 | 95.3 |
OP + YZsel | 14 | 8 | -6 | 44 | 40 | 42 | 46 | 41 | 43 | 44 | 41 | 44 | 93.5 | 95.0 | 94.7 |
Pruned CART | |||||||||||||||
Confounder | 70 | 52 | -18 | 49 | 45 | 37 | 85 | 69 | 41 | 43 | 42 | 40 | 59.0 | 73.6 | 93.8 |
Treatment | 100 | 76 | -24 | 50 | 46 | 40 | 112 | 89 | 47 | 46 | 45 | 42 | 42.2 | 58.4 | 91.2 |
Outcome | 81 | 61 | -20 | 47 | 43 | 37 | 94 | 74 | 42 | 42 | 41 | 39 | 51.1 | 67.7 | 91.9 |
All | 125 | 94 | -30 | 46 | 43 | 38 | 133 | 104 | 48 | 42 | 42 | 39 | 19.4 | 40.2 | 88.6 |
Ysel | 84 | 63 | -21 | 47 | 43 | 37 | 96 | 77 | 42 | 42 | 42 | 39 | 48.4 | 66.0 | 92.8 |
YZsel | 71 | 53 | -17 | 48 | 45 | 38 | 86 | 70 | 42 | 43 | 42 | 40 | 60.0 | 73.4 | 93.9 |
OP + All | -6 | -2 | 3 | 40 | 35 | 38 | 40 | 35 | 38 | 40 | 36 | 39 | 94.0 | 95.0 | 95.9 |
OP + Ysel | -3 | -2 | 1 | 38 | 34 | 36 | 38 | 34 | 36 | 38 | 34 | 37 | 94.6 | 94.9 | 95.6 |
OP + YZsel | -2 | -1 | 0 | 38 | 33 | 36 | 38 | 33 | 36 | 38 | 35 | 37 | 94.8 | 95.4 | 95.2 |
Bagged CART | |||||||||||||||
Confounder | -64 | -51 | 12 | 66 | 59 | 62 | 92 | 78 | 63 | 64 | 58 | 62 | 82.3 | 84.9 | 94.0 |
Treatment | 27 | 19 | -8 | 59 | 54 | 52 | 65 | 57 | 52 | 58 | 53 | 52 | 90.2 | 94.0 | 93.8 |
Outcome | -21 | -14 | 7 | 54 | 47 | 50 | 58 | 49 | 50 | 55 | 50 | 52 | 94.0 | 95.7 | 95.8 |
All | 107 | 80 | -26 | 39 | 37 | 35 | 114 | 89 | 44 | 40 | 40 | 38 | 24.4 | 46.9 | 91.1 |
Ysel | -4 | -3 | 2 | 53 | 46 | 46 | 53 | 46 | 46 | 52 | 48 | 49 | 94.4 | 95.8 | 96.0 |
YZsel | -53 | -42 | 12 | 66 | 59 | 59 | 85 | 73 | 60 | 63 | 57 | 60 | 85.8 | 86.9 | 95.2 |
OP + All | -6 | -1 | 5 | 38 | 33 | 37 | 38 | 33 | 37 | 38 | 33 | 37 | 94.7 | 95.0 | 94.7 |
OP + Ysel | -9 | -3 | 6 | 35 | 31 | 35 | 36 | 31 | 35 | 36 | 32 | 35 | 94.3 | 95.6 | 94.7 |
OP + YZsel | -8 | -3 | 6 | 35 | 31 | 35 | 36 | 31 | 35 | 36 | 32 | 35 | 94.5 | 95.2 | 94.6 |
Random Forests | |||||||||||||||
Confounder | -43 | -31 | 12 | 59 | 49 | 53 | 72 | 58 | 54 | 56 | 49 | 53 | 88.3 | 90.2 | 95.3 |
Treatment | 50 | 40 | -10 | 45 | 41 | 40 | 67 | 57 | 41 | 46 | 43 | 42 | 79.0 | 86.9 | 95.2 |
Outcome | 19 | 16 | -3 | 41 | 37 | 37 | 45 | 40 | 37 | 45 | 42 | 43 | 94.1 | 96.2 | 96.8 |
All | 125 | 94 | -30 | 35 | 35 | 33 | 130 | 101 | 45 | 37 | 38 | 35 | 7.0 | 27.8 | 88.2 |
Ysel | 31 | 25 | -6 | 45 | 39 | 36 | 55 | 46 | 37 | 43 | 41 | 41 | 86.2 | 91.4 | 96.7 |
YZsel | -74 | -52 | 22 | 87 | 66 | 64 | 114 | 83 | 68 | 62 | 52 | 58 | 73.2 | 78.6 | 93.5 |
OP + All | -6 | -1 | 5 | 39 | 33 | 38 | 39 | 33 | 38 | 39 | 34 | 38 | 95.3 | 95.2 | 94.0 |
OP + Ysel | -9 | -3 | 6 | 35 | 31 | 35 | 37 | 31 | 35 | 36 | 32 | 36 | 94.7 | 95.2 | 94.7 |
OP + YZsel | -9 | -3 | 6 | 35 | 31 | 35 | 37 | 31 | 35 | 36 | 32 | 36 | 94.8 | 95.6 | 94.7 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 167 | 125 | -41 | 26 | 27 | 24 | 169 | 128 | 48 | 25 | 26 | 24 | 0 | 0.4 | 59.9
OAL | -1 | 0 | 1 | 27 | 24 | 26 | 27 | 24 | 26 | 27 | 24 | 26 | 95.0 | 94.7 | 95.2 |
LOGIS | |||||||||||||||
Confounder | 0 | 1 | 1 | 28 | 26 | 27 | 28 | 26 | 27 | 28 | 26 | 27 | 95.1 | 94.6 | 95.2 |
Treatment | 0 | 0 | 0 | 35 | 31 | 31 | 35 | 31 | 31 | 34 | 30 | 30 | 93.8 | 94.3 | 94.8 |
Outcome | 0 | 0 | 1 | 27 | 24 | 26 | 27 | 24 | 26 | 27 | 24 | 26 | 94.8 | 94.7 | 95.0 |
All | 55 | 42 | -12 | 28 | 26 | 26 | 61 | 50 | 29 | 30 | 29 | 28 | 55.7 | 70.4 | 94.8 |
Ysel | -3 | -1 | 1 | 27 | 24 | 26 | 27 | 24 | 26 | 27 | 24 | 26 | 95.1 | 94.8 | 94.9 |
YZsel | -2 | -1 | 1 | 28 | 26 | 27 | 28 | 26 | 27 | 28 | 26 | 27 | 95.6 | 95.1 | 95.4 |
OP + All | 13 | 11 | -2 | 28 | 25 | 26 | 30 | 27 | 26 | 31 | 29 | 29 | 95.1 | 96.5 | 97.4 |
OP + Ysel | -3 | -1 | 1 | 27 | 24 | 26 | 27 | 24 | 26 | 27 | 24 | 26 | 95.1 | 94.9 | 94.9 |
OP + YZsel | -2 | -1 | 1 | 27 | 24 | 26 | 27 | 24 | 26 | 27 | 24 | 26 | 95.2 | 94.8 | 95.0 |
CART | |||||||||||||||
Confounder | 34 | 26 | -9 | 33 | 32 | 31 | 48 | 41 | 32 | 35 | 34 | 33 | 85.2 | 88.7 | 95.5 |
Treatment | 74 | 56 | -19 | 38 | 37 | 34 | 84 | 67 | 39 | 38 | 36 | 35 | 49.3 | 67.3 | 91.7 |
Outcome | 44 | 32 | -12 | 35 | 33 | 32 | 56 | 46 | 34 | 36 | 35 | 34 | 77.0 | 86.5 | 94.8 |
All | 99 | 74 | -25 | 42 | 40 | 38 | 108 | 84 | 46 | 41 | 40 | 38 | 32.0 | 55.2 | 89.6 |
Ysel | 47 | 34 | -13 | 36 | 33 | 32 | 59 | 47 | 35 | 37 | 36 | 35 | 74.4 | 86.0 | 94.5 |
YZsel | 37 | 28 | -9 | 34 | 33 | 31 | 51 | 43 | 32 | 35 | 34 | 33 | 82.7 | 87.4 | 95.2 |
OP + All | 7 | 4 | -3 | 37 | 34 | 37 | 38 | 35 | 37 | 38 | 35 | 37 | 94.4 | 95.2 | 95.2 |
OP + Ysel | 13 | 7 | -6 | 31 | 29 | 30 | 34 | 30 | 31 | 32 | 30 | 31 | 94.0 | 94.8 | 94.8 |
OP + YZsel | 14 | 8 | -5 | 30 | 28 | 29 | 33 | 29 | 30 | 30 | 28 | 29 | 93.0 | 93.8 | 94.8 |
Pruned CART | |||||||||||||||
Confounder | 62 | 47 | -15 | 33 | 31 | 27 | 71 | 56 | 31 | 30 | 29 | 28 | 45.6 | 61.3 | 91.9 |
Treatment | 94 | 71 | -23 | 36 | 33 | 28 | 100 | 78 | 36 | 33 | 32 | 29 | 21.1 | 40.3 | 87.6 |
Outcome | 72 | 55 | -18 | 35 | 32 | 26 | 80 | 63 | 32 | 29 | 29 | 27 | 34.8 | 53.1 | 90.3 |
All | 116 | 88 | -28 | 34 | 32 | 27 | 121 | 93 | 39 | 30 | 30 | 27 | 5.7 | 18.8 | 80.1 |
Ysel | 75 | 57 | -18 | 35 | 32 | 27 | 83 | 65 | 33 | 29 | 29 | 27 | 31.6 | 50.2 | 89.4 |
YZsel | 65 | 49 | -16 | 35 | 31 | 27 | 74 | 58 | 31 | 30 | 29 | 28 | 43.9 | 59.0 | 90.4 |
OP + All | -3 | -2 | 2 | 26 | 25 | 27 | 27 | 25 | 27 | 27 | 25 | 26 | 95.3 | 95.0 | 95.0 |
OP + Ysel | -2 | -1 | 1 | 26 | 24 | 25 | 26 | 24 | 25 | 26 | 24 | 25 | 95.6 | 94.6 | 95.0 |
OP + YZsel | -1 | -1 | 0 | 26 | 24 | 26 | 26 | 24 | 26 | 26 | 24 | 26 | 95.2 | 95.3 | 94.8 |
Bagged CART | |||||||||||||||
Confounder | -89 | -73 | 16 | 49 | 43 | 47 | 102 | 84 | 50 | 49 | 43 | 47 | 54.9 | 59.8 | 93.6 |
Treatment | 11 | 6 | -5 | 45 | 40 | 40 | 46 | 40 | 40 | 45 | 40 | 39 | 93.3 | 94.7 | 93.9 |
Outcome | -48 | -37 | 11 | 41 | 34 | 38 | 63 | 50 | 39 | 42 | 37 | 39 | 81.5 | 84.2 | 95.6 |
All | 92 | 69 | -23 | 29 | 27 | 26 | 96 | 74 | 35 | 30 | 29 | 28 | 12.9 | 33.9 | 87.9 |
Ysel | -34 | -26 | 8 | 42 | 35 | 36 | 54 | 43 | 37 | 40 | 36 | 38 | 86.4 | 89.8 | 95.3 |
YZsel | -80 | -64 | 16 | 51 | 45 | 45 | 95 | 78 | 48 | 48 | 42 | 46 | 59.4 | 65.1 | 93.8 |
OP + All | -3 | 2 | 5 | 27 | 24 | 27 | 27 | 24 | 28 | 28 | 24 | 27 | 95.1 | 94.4 | 94.4 |
OP + Ysel | -6 | -1 | 5 | 25 | 23 | 25 | 25 | 23 | 25 | 25 | 23 | 25 | 95.2 | 94.6 | 94.9 |
OP + YZsel | -6 | 0 | 5 | 24 | 23 | 25 | 25 | 23 | 25 | 25 | 23 | 25 | 95.2 | 94.8 | 94.9 |
Random Forests | |||||||||||||||
Confounder | -70 | -51 | 18 | 46 | 36 | 41 | 83 | 63 | 45 | 44 | 36 | 41 | 64.4 | 70.8 | 94.4 |
Treatment | 41 | 32 | -9 | 33 | 29 | 30 | 53 | 44 | 31 | 33 | 31 | 31 | 75.4 | 84.5 | 94.9 |
Outcome | 3 | 3 | 0 | 30 | 26 | 27 | 30 | 26 | 27 | 33 | 30 | 31 | 96.4 | 96.8 | 97.3 |
All | 116 | 88 | -28 | 25 | 25 | 24 | 119 | 92 | 37 | 26 | 27 | 25 | 0.9 | 8.1 | 80.6 |
Ysel | 0 | 1 | 1 | 42 | 35 | 29 | 42 | 35 | 29 | 33 | 30 | 31 | 87.1 | 91.0 | 96.3 |
YZsel | -123 | -91 | 31 | 88 | 68 | 56 | 151 | 114 | 64 | 53 | 42 | 49 | 40.2 | 42.9 | 91.0 |
OP + All | -3 | 1 | 5 | 29 | 25 | 28 | 29 | 25 | 28 | 29 | 25 | 27 | 94.0 | 94.8 | 94.7 |
OP + Ysel | -6 | -1 | 5 | 25 | 23 | 25 | 25 | 23 | 25 | 26 | 23 | 25 | 95.0 | 94.8 | 94.9 |
OP + YZsel | -6 | -1 | 5 | 25 | 23 | 25 | 25 | 23 | 25 | 25 | 23 | 25 | 95.2 | 95.2 | 94.8 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 141 | 108 | -32 | 49 | 48 | 44 | 149 | 119 | 54 | 48 | 50 | 43 | 17.7 | 40.6 | 88.4
OAL | 24 | 19 | -5 | 53 | 48 | 48 | 58 | 52 | 48 | 50 | 47 | 47 | 90.2 | 92.6 | 94.1 |
LOGIS | |||||||||||||||
Confounder | 0 | 2 | 1 | 54 | 48 | 52 | 54 | 48 | 52 | 53 | 49 | 51 | 94.4 | 95.3 | 95.0 |
Treatment | 1 | 2 | 2 | 64 | 54 | 59 | 64 | 54 | 59 | 64 | 56 | 58 | 95.0 | 95.3 | 94.3 |
Outcome | 0 | 2 | 1 | 54 | 46 | 52 | 54 | 46 | 52 | 54 | 48 | 52 | 94.8 | 95.9 | 94.4 |
All | 94 | 73 | -20 | 50 | 47 | 46 | 106 | 87 | 50 | 51 | 51 | 47 | 54.6 | 70.8 | 93.2 |
Ysel | 3 | 3 | 0 | 53 | 47 | 51 | 53 | 48 | 51 | 55 | 50 | 53 | 95.2 | 95.9 | 95.4 |
YZsel | 31 | 24 | -7 | 58 | 51 | 50 | 65 | 56 | 50 | 53 | 50 | 50 | 88.4 | 92.4 | 94.4 |
OP + All | 21 | 18 | -4 | 48 | 44 | 47 | 53 | 47 | 47 | 52 | 51 | 51 | 93.6 | 96.7 | 96.4 |
OP + Ysel | 3 | 3 | 0 | 53 | 47 | 51 | 53 | 48 | 51 | 55 | 50 | 53 | 95.2 | 95.9 | 95.3 |
OP + YZsel | 5 | 5 | -1 | 52 | 46 | 49 | 52 | 46 | 49 | 50 | 45 | 49 | 94.0 | 94.8 | 94.4 |
CART | |||||||||||||||
Confounder | 72 | 54 | -18 | 65 | 63 | 60 | 97 | 83 | 63 | 65 | 65 | 60 | 79.3 | 86.5 | 94.0 |
Treatment | 98 | 76 | -22 | 71 | 68 | 63 | 121 | 102 | 67 | 68 | 68 | 61 | 68.2 | 79.7 | 92.2 |
Outcome | 84 | 65 | -19 | 67 | 65 | 62 | 108 | 92 | 65 | 66 | 66 | 61 | 73.4 | 84.4 | 92.8 |
All | 116 | 89 | -26 | 75 | 73 | 66 | 138 | 116 | 71 | 72 | 74 | 65 | 62.4 | 76.6 | 92.4 |
Ysel | 86 | 65 | -21 | 67 | 64 | 62 | 109 | 91 | 66 | 67 | 67 | 62 | 73.1 | 84.2 | 92.6 |
YZsel | 80 | 61 | -19 | 65 | 62 | 60 | 103 | 87 | 63 | 64 | 64 | 59 | 73.8 | 84.7 | 92.7 |
OP + All | 19 | 16 | -3 | 72 | 67 | 73 | 75 | 69 | 73 | 72 | 68 | 73 | 92.6 | 94.6 | 93.9 |
OP + Ysel | 21 | 15 | -6 | 64 | 59 | 63 | 67 | 61 | 64 | 63 | 60 | 64 | 92.7 | 94.2 | 94.0 |
OP + YZsel | 21 | 16 | -6 | 60 | 56 | 60 | 64 | 58 | 60 | 59 | 56 | 59 | 92.0 | 93.1 | 93.8 |
Pruned CART | |||||||||||||||
Confounder | 91 | 70 | -21 | 60 | 57 | 52 | 109 | 90 | 56 | 56 | 56 | 51 | 60.1 | 74.2 | 91.8 |
Treatment | 111 | 86 | -25 | 62 | 58 | 53 | 127 | 104 | 59 | 58 | 58 | 52 | 50.9 | 67.1 | 90.9 |
Outcome | 101 | 78 | -22 | 59 | 55 | 52 | 117 | 96 | 57 | 56 | 56 | 50 | 54.3 | 70.2 | 91.3 |
All | 124 | 95 | -29 | 60 | 58 | 51 | 137 | 111 | 59 | 57 | 58 | 50 | 40.6 | 62.0 | 90.6 |
Ysel | 103 | 79 | -24 | 58 | 55 | 51 | 118 | 96 | 56 | 56 | 56 | 50 | 53.1 | 69.1 | 91.0 |
YZsel | 95 | 73 | -22 | 59 | 54 | 51 | 112 | 91 | 56 | 56 | 56 | 51 | 58.7 | 74.7 | 91.2 |
OP + All | 13 | 10 | -3 | 56 | 51 | 55 | 57 | 52 | 55 | 53 | 49 | 53 | 93.0 | 94.3 | 94.0 |
OP + Ysel | 13 | 11 | -3 | 53 | 48 | 52 | 54 | 49 | 52 | 51 | 47 | 51 | 92.8 | 94.5 | 94.3 |
OP + YZsel | 15 | 12 | -3 | 53 | 48 | 51 | 55 | 49 | 51 | 51 | 47 | 50 | 93.1 | 94.0 | 94.3 |
Bagged CART | |||||||||||||||
Confounder | 19 | 15 | -4 | 69 | 61 | 64 | 71 | 63 | 64 | 65 | 61 | 62 | 92.6 | 94.7 | 93.7 |
Treatment | 73 | 57 | -17 | 62 | 56 | 55 | 96 | 79 | 57 | 60 | 58 | 54 | 74.1 | 85.2 | 92.2 |
Outcome | 54 | 41 | -12 | 57 | 52 | 53 | 78 | 67 | 55 | 58 | 56 | 54 | 83.0 | 90.6 | 93.7 |
All | 114 | 88 | -26 | 51 | 49 | 46 | 125 | 100 | 53 | 51 | 52 | 47 | 40.3 | 61.6 | 91.4 |
Ysel | 60 | 45 | -15 | 56 | 51 | 52 | 82 | 68 | 54 | 57 | 55 | 53 | 80.1 | 89.8 | 94.5 |
YZsel | 30 | 23 | -7 | 74 | 68 | 69 | 79 | 71 | 69 | 70 | 66 | 65 | 90.5 | 93.7 | 94.2 |
OP + All | 12 | 10 | -2 | 49 | 44 | 47 | 51 | 46 | 47 | 47 | 43 | 46 | 92.2 | 94.4 | 94.5 |
OP + Ysel | 10 | 9 | -1 | 48 | 44 | 46 | 49 | 45 | 46 | 46 | 42 | 46 | 93.1 | 94.2 | 94.8 |
OP + YZsel | 10 | 9 | -1 | 48 | 44 | 47 | 49 | 45 | 47 | 46 | 42 | 46 | 93.0 | 93.9 | 94.7 |
Random Forests | |||||||||||||||
Confounder | 50 | 39 | -11 | 55 | 49 | 50 | 74 | 63 | 52 | 55 | 53 | 52 | 84.2 | 90.6 | 94.7 |
Treatment | 84 | 66 | -19 | 52 | 48 | 47 | 99 | 82 | 51 | 52 | 52 | 48 | 63.0 | 77.3 | 93.4 |
Outcome | 73 | 57 | -16 | 50 | 46 | 46 | 88 | 73 | 49 | 52 | 51 | 48 | 70.3 | 83.7 | 93.7 |
All | 119 | 92 | -27 | 49 | 47 | 44 | 129 | 103 | 52 | 49 | 50 | 45 | 32.9 | 55.4 | 91.3 |
Ysel | 77 | 60 | -18 | 50 | 46 | 46 | 92 | 75 | 49 | 51 | 51 | 48 | 66.9 | 81.5 | 94.0 |
YZsel | 35 | 29 | -6 | 78 | 67 | 64 | 85 | 73 | 64 | 63 | 59 | 59 | 84.4 | 90.3 | 94.7 |
OP + All | 10 | 9 | -1 | 50 | 44 | 48 | 51 | 45 | 48 | 47 | 43 | 47 | 92.4 | 94.4 | 94.4 |
OP + Ysel | 8 | 8 | 0 | 48 | 44 | 47 | 49 | 44 | 47 | 46 | 42 | 46 | 93.1 | 93.9 | 94.9 |
OP + YZsel | 8 | 8 | 0 | 49 | 44 | 47 | 50 | 45 | 47 | 46 | 43 | 46 | 92.9 | 94.2 | 94.8 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3
Naive | 142 | 109 | -33 | 34 | 35 | 30 | 146 | 114 | 45 | 34 | 35 | 31 | 1.5 | 12.2 | 81.0
OAL | 3 | 2 | -1 | 35 | 32 | 34 | 35 | 32 | 34 | 36 | 32 | 34 | 95.9 | 95.3 | 95.6 |
LOGIS | |||||||||||||||
Confounder | 1 | 0 | -1 | 36 | 33 | 35 | 36 | 33 | 35 | 37 | 33 | 36 | 95.3 | 94.8 | 95.3 |
Treatment | 1 | 0 | -1 | 43 | 37 | 39 | 43 | 37 | 39 | 43 | 37 | 39 | 94.8 | 94.3 | 95.2 |
Outcome | 0 | 0 | 0 | 35 | 32 | 34 | 35 | 32 | 34 | 36 | 32 | 35 | 96.4 | 95.4 | 95.8 |
All | 70 | 54 | -16 | 34 | 33 | 32 | 78 | 64 | 36 | 37 | 36 | 34 | 52.8 | 69.6 | 93.7 |
Ysel | -3 | -3 | 0 | 35 | 32 | 35 | 36 | 32 | 35 | 37 | 33 | 36 | 96.6 | 95.0 | 95.9 |
YZsel | 0 | 0 | 0 | 36 | 33 | 35 | 36 | 33 | 35 | 37 | 34 | 36 | 95.6 | 95.2 | 96.0 |
OP + All | 12 | 9 | -3 | 33 | 31 | 33 | 35 | 32 | 33 | 38 | 36 | 37 | 96.2 | 96.9 | 96.9 |
OP + Ysel | -3 | -3 | 0 | 35 | 32 | 35 | 36 | 32 | 35 | 37 | 33 | 36 | 96.6 | 95.0 | 95.9 |
OP + YZsel | -2 | -2 | 0 | 35 | 32 | 34 | 35 | 32 | 34 | 36 | 32 | 35 | 95.9 | 95.2 | 95.6 |
CART | |||||||||||||||
Confounder | 66 | 49 | -17 | 47 | 46 | 42 | 81 | 67 | 45 | 46 | 46 | 43 | 68.8 | 81.2 | 93.7 |
Treatment | 94 | 71 | -23 | 49 | 49 | 44 | 106 | 86 | 50 | 49 | 49 | 44 | 51.0 | 68.0 | 91.1 |
Outcome | 76 | 58 | -18 | 47 | 46 | 43 | 89 | 74 | 47 | 48 | 48 | 45 | 63.8 | 78.6 | 93.0 |
All | 111 | 85 | -27 | 52 | 54 | 48 | 123 | 101 | 55 | 52 | 53 | 48 | 43.4 | 63.8 | 90.4 |
Ysel | 81 | 61 | -20 | 50 | 48 | 44 | 95 | 77 | 48 | 49 | 49 | 45 | 60.8 | 76.8 | 92.6 |
YZsel | 71 | 54 | -17 | 47 | 45 | 42 | 86 | 71 | 45 | 47 | 47 | 44 | 65.7 | 79.1 | 94.5 |
OP + All | 6 | 4 | -2 | 52 | 48 | 53 | 53 | 48 | 53 | 52 | 48 | 53 | 94.2 | 94.4 | 94.8 |
OP + Ysel | 10 | 5 | -4 | 45 | 42 | 45 | 46 | 42 | 46 | 45 | 42 | 46 | 94.5 | 95.2 | 94.8 |
OP + YZsel | 12 | 7 | -5 | 43 | 40 | 43 | 45 | 41 | 43 | 43 | 40 | 43 | 93.7 | 94.0 | 95.1 |
Pruned CART | |||||||||||||||
Confounder | 87 | 66 | -21 | 43 | 42 | 35 | 97 | 78 | 41 | 39 | 39 | 36 | 39.1 | 59.9 | 90.6 |
Treatment | 106 | 81 | -25 | 43 | 42 | 37 | 114 | 92 | 44 | 41 | 41 | 36 | 28.4 | 48.2 | 88.4 |
Outcome | 96 | 73 | -23 | 41 | 40 | 35 | 104 | 83 | 42 | 39 | 39 | 35 | 31.6 | 52.1 | 90.0 |
All | 121 | 93 | -28 | 40 | 40 | 34 | 128 | 101 | 45 | 39 | 40 | 35 | 14.1 | 35.0 | 87.0 |
Ysel | 99 | 75 | -24 | 41 | 40 | 34 | 107 | 85 | 42 | 39 | 39 | 35 | 28.5 | 49.9 | 89.1 |
YZsel | 91 | 69 | -22 | 42 | 41 | 35 | 101 | 80 | 41 | 40 | 40 | 36 | 37.4 | 57.6 | 90.6 |
OP + All | 0 | 0 | 0 | 36 | 33 | 36 | 36 | 33 | 36 | 37 | 33 | 37 | 95.5 | 95.4 | 95.2 |
OP + Ysel | 1 | 0 | -1 | 35 | 33 | 35 | 35 | 33 | 35 | 36 | 33 | 36 | 95.1 | 94.3 | 95.6 |
OP + YZsel | 3 | 1 | -1 | 36 | 33 | 35 | 36 | 33 | 35 | 36 | 33 | 36 | 95.4 | 94.7 | 95.8 |
Bagged CART | |||||||||||||||
Confounder | 11 | 7 | -3 | 47 | 43 | 44 | 48 | 43 | 44 | 47 | 43 | 44 | 94.0 | 95.1 | 95.1 |
Treatment | 68 | 52 | -16 | 42 | 40 | 37 | 80 | 65 | 40 | 42 | 41 | 38 | 63.5 | 75.1 | 93.3 |
Outcome | 46 | 34 | -11 | 38 | 36 | 36 | 59 | 50 | 38 | 41 | 39 | 38 | 80.2 | 87.4 | 95.5 |
All | 108 | 83 | -26 | 35 | 35 | 31 | 114 | 90 | 40 | 36 | 37 | 33 | 14.2 | 38.2 | 88.6 |
Ysel | 57 | 43 | -14 | 38 | 36 | 34 | 68 | 56 | 37 | 40 | 38 | 37 | 68.2 | 80.1 | 94.6 |
YZsel | 25 | 19 | -6 | 46 | 42 | 41 | 53 | 46 | 41 | 45 | 42 | 42 | 88.6 | 91.9 | 95.7 |
OP + All | -1 | 0 | 1 | 33 | 31 | 33 | 33 | 31 | 33 | 34 | 30 | 34 | 95.7 | 94.7 | 95.3 |
OP + Ysel | -3 | -1 | 1 | 32 | 30 | 32 | 32 | 30 | 32 | 33 | 30 | 33 | 96.1 | 94.9 | 95.2 |
OP + YZsel | -2 | -1 | 1 | 32 | 30 | 32 | 32 | 30 | 32 | 33 | 30 | 33 | 95.9 | 94.9 | 95.7 |
Random Forests | |||||||||||||||
Confounder | 44 | 34 | -10 | 36 | 35 | 34 | 57 | 49 | 36 | 39 | 37 | 37 | 79.4 | 86.7 | 95.3 |
Treatment | 79 | 61 | -19 | 35 | 35 | 32 | 87 | 70 | 37 | 37 | 37 | 34 | 43.1 | 62.6 | 92.7 |
Outcome | 66 | 50 | -16 | 33 | 33 | 31 | 74 | 60 | 35 | 37 | 36 | 34 | 56.5 | 73.0 | 94.2 |
All | 115 | 88 | -27 | 33 | 34 | 30 | 120 | 94 | 40 | 35 | 35 | 32 | 7.6 | 29.5 | 87.2 |
Ysel | 76 | 58 | -18 | 34 | 33 | 31 | 83 | 67 | 36 | 36 | 36 | 34 | 44.4 | 64.7 | 94.0 |
YZsel | 47 | 36 | -11 | 40 | 37 | 34 | 62 | 52 | 35 | 38 | 37 | 36 | 74.4 | 82.8 | 95.8 |
OP + All | -1 | 0 | 1 | 34 | 31 | 34 | 34 | 31 | 34 | 35 | 31 | 34 | 95.8 | 95.0 | 95.0 |
OP + Ysel | -3 | -1 | 2 | 32 | 30 | 33 | 33 | 30 | 33 | 33 | 30 | 33 | 96.1 | 94.6 | 95.2 |
OP + YZsel | -3 | -1 | 2 | 33 | 30 | 33 | 33 | 30 | 33 | 33 | 30 | 33 | 96.0 | 95.0 | 95.8 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 113 | 87 | -26 | 44 | 47 | 38 | 121 | 99 | 46 | 44 | 46 | 38 | 27.8 | 52.5 | 90.2
OAL | 51 | 37 | -14 | 48 | 47 | 41 | 70 | 60 | 43 | 44 | 43 | 41 | 76.4 | 84.1 | 93.4 |
LOGIS | |||||||||||||||
Confounder | 2 | 1 | -1 | 48 | 45 | 47 | 48 | 45 | 47 | 51 | 46 | 50 | 95.3 | 95.0 | 95.0 |
Treatment | 2 | 1 | -2 | 57 | 51 | 54 | 57 | 51 | 54 | 65 | 57 | 61 | 97.1 | 96.9 | 96.9 |
Outcome | 1 | 0 | -1 | 48 | 45 | 48 | 48 | 45 | 48 | 56 | 50 | 54 | 97.5 | 96.5 | 97.2 |
All | 87 | 66 | -20 | 44 | 45 | 39 | 97 | 80 | 44 | 46 | 46 | 41 | 50.5 | 70.3 | 92.5 |
Ysel | 26 | 18 | -8 | 48 | 46 | 43 | 55 | 49 | 44 | 48 | 45 | 46 | 90.1 | 91.4 | 95.5 |
YZsel | 62 | 46 | -17 | 48 | 47 | 41 | 78 | 66 | 44 | 46 | 45 | 41 | 71.8 | 82.8 | 93.5 |
OP + All | 42 | 30 | -12 | 45 | 45 | 40 | 62 | 54 | 42 | 46 | 45 | 43 | 84.8 | 89.9 | 95.5 |
OP + Ysel | 26 | 18 | -8 | 48 | 46 | 43 | 55 | 49 | 44 | 48 | 45 | 46 | 90.1 | 91.4 | 95.5 |
OP + YZsel | 30 | 20 | -10 | 45 | 44 | 41 | 54 | 49 | 42 | 43 | 41 | 41 | 88.3 | 90.9 | 94.2 |
CART | |||||||||||||||
Confounder | 73 | 55 | -17 | 61 | 61 | 56 | 95 | 82 | 59 | 60 | 60 | 54 | 77.2 | 84.0 | 92.4 |
Treatment | 89 | 67 | -22 | 65 | 66 | 56 | 111 | 94 | 60 | 63 | 64 | 55 | 68.8 | 80.2 | 92.2 |
Outcome | 81 | 62 | -19 | 64 | 63 | 58 | 103 | 88 | 61 | 62 | 62 | 55 | 71.1 | 82.2 | 91.5 |
All | 96 | 74 | -23 | 67 | 69 | 59 | 118 | 101 | 63 | 65 | 67 | 57 | 67.2 | 78.6 | 92.2 |
Ysel | 81 | 60 | -21 | 60 | 61 | 54 | 101 | 86 | 58 | 60 | 61 | 54 | 72.3 | 82.2 | 92.8 |
YZsel | 85 | 63 | -22 | 55 | 57 | 50 | 101 | 85 | 55 | 56 | 57 | 51 | 68.2 | 80.4 | 92.8 |
OP + All | 44 | 32 | -12 | 69 | 67 | 65 | 82 | 75 | 66 | 64 | 63 | 62 | 87.2 | 89.8 | 92.9 |
OP + Ysel | 40 | 28 | -12 | 59 | 58 | 55 | 72 | 64 | 57 | 57 | 56 | 56 | 86.8 | 90.5 | 94.1 |
OP + YZsel | 38 | 25 | -14 | 53 | 53 | 52 | 66 | 59 | 53 | 52 | 51 | 52 | 87.1 | 91.1 | 92.7 |
Pruned CART | |||||||||||||||
Confounder | 88 | 67 | -21 | 53 | 54 | 45 | 103 | 86 | 50 | 50 | 51 | 45 | 56.6 | 71.8 | 92.2 |
Treatment | 99 | 75 | -24 | 53 | 53 | 45 | 112 | 92 | 51 | 52 | 53 | 45 | 50.0 | 68.3 | 91.6 |
Outcome | 93 | 71 | -22 | 52 | 54 | 44 | 107 | 89 | 49 | 51 | 51 | 45 | 52.2 | 69.8 | 91.8 |
All | 104 | 80 | -24 | 52 | 54 | 45 | 116 | 96 | 51 | 51 | 52 | 45 | 45.8 | 65.6 | 91.6 |
Ysel | 94 | 71 | -23 | 50 | 51 | 43 | 107 | 87 | 49 | 49 | 50 | 44 | 51.1 | 68.8 | 91.8 |
YZsel | 92 | 69 | -23 | 50 | 51 | 43 | 104 | 86 | 49 | 50 | 51 | 44 | 53.8 | 71.4 | 91.7 |
OP + All | 43 | 31 | -12 | 54 | 52 | 47 | 70 | 61 | 48 | 49 | 47 | 47 | 82.7 | 87.6 | 93.9 |
OP + Ysel | 38 | 25 | -12 | 48 | 48 | 45 | 61 | 54 | 47 | 46 | 44 | 45 | 86.1 | 90.0 | 93.5 |
OP + YZsel | 36 | 24 | -12 | 47 | 47 | 44 | 59 | 53 | 46 | 45 | 44 | 45 | 87.3 | 90.0 | 93.8 |
Bagged CART | |||||||||||||||
Confounder | 51 | 38 | -14 | 50 | 49 | 45 | 72 | 62 | 47 | 50 | 50 | 46 | 82.7 | 87.9 | 93.8 |
Treatment | 80 | 60 | -20 | 48 | 49 | 42 | 93 | 78 | 47 | 48 | 49 | 43 | 62.1 | 75.8 | 92.4 |
Outcome | 71 | 53 | -17 | 46 | 46 | 41 | 84 | 71 | 45 | 48 | 48 | 43 | 69.7 | 80.8 | 93.6 |
All | 94 | 72 | -22 | 45 | 47 | 40 | 104 | 86 | 46 | 46 | 47 | 41 | 46.8 | 67.3 | 91.8 |
Ysel | 64 | 46 | -18 | 49 | 49 | 45 | 81 | 67 | 48 | 50 | 50 | 46 | 75.0 | 85.7 | 93.8 |
YZsel | 57 | 42 | -15 | 68 | 69 | 63 | 89 | 81 | 65 | 65 | 63 | 58 | 82.1 | 89.1 | 93.5 |
OP + All | 41 | 29 | -12 | 47 | 46 | 40 | 63 | 55 | 42 | 42 | 41 | 41 | 80.8 | 86.5 | 94.3 |
OP + Ysel | 36 | 25 | -11 | 45 | 44 | 40 | 57 | 50 | 42 | 41 | 40 | 40 | 84.9 | 88.7 | 93.3 |
OP + YZsel | 34 | 23 | -11 | 44 | 43 | 40 | 56 | 49 | 42 | 41 | 40 | 41 | 86.0 | 90.6 | 94.3 |
Random Forests | |||||||||||||||
Confounder | 68 | 52 | -17 | 45 | 46 | 40 | 82 | 69 | 44 | 46 | 46 | 42 | 69.2 | 80.0 | 93.4 |
Treatment | 86 | 65 | -21 | 44 | 46 | 39 | 96 | 80 | 44 | 45 | 46 | 41 | 53.0 | 70.2 | 92.5 |
Outcome | 81 | 61 | -19 | 43 | 45 | 39 | 91 | 76 | 43 | 45 | 46 | 41 | 56.7 | 73.6 | 93.2 |
All | 97 | 75 | -23 | 44 | 46 | 38 | 107 | 88 | 45 | 45 | 46 | 40 | 41.0 | 63.2 | 91.6 |
Ysel | 72 | 54 | -19 | 49 | 49 | 41 | 87 | 73 | 45 | 47 | 47 | 43 | 65.4 | 78.5 | 93.7 |
YZsel | 49 | 36 | -13 | 81 | 81 | 65 | 95 | 88 | 66 | 61 | 59 | 54 | 78.9 | 85.1 | 93.8 |
OP + All | 37 | 26 | -11 | 46 | 46 | 41 | 59 | 52 | 42 | 43 | 41 | 41 | 85.2 | 88.4 | 93.7 |
OP + Ysel | 35 | 24 | -11 | 45 | 44 | 41 | 57 | 51 | 42 | 41 | 40 | 41 | 85.2 | 88.7 | 93.9 |
OP + YZsel | 33 | 22 | -11 | 46 | 44 | 43 | 57 | 50 | 44 | 42 | 40 | 41 | 86.7 | 90.6 | 94.8 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 111 | 87 | -24 | 32 | 32 | 27 | 116 | 93 | 36 | 31 | 32 | 27 | 5.8 | 21.6 | 86.3
OAL | 12 | 10 | -2 | 32 | 30 | 31 | 34 | 31 | 31 | 32 | 30 | 31 | 93.2 | 94.1 | 94.8 |
LOGIS | |||||||||||||||
Confounder | -1 | 0 | 1 | 33 | 30 | 33 | 33 | 30 | 33 | 34 | 31 | 33 | 94.5 | 95.1 | 94.5 |
Treatment | -1 | 0 | 0 | 38 | 33 | 36 | 38 | 33 | 36 | 39 | 34 | 36 | 95.0 | 95.1 | 94.3 |
Outcome | -1 | 0 | 1 | 32 | 29 | 33 | 32 | 29 | 33 | 34 | 30 | 33 | 95.4 | 95.5 | 94.6 |
All | 61 | 48 | -13 | 31 | 30 | 29 | 68 | 57 | 31 | 33 | 33 | 30 | 55.4 | 70.0 | 94.1 |
Ysel | 1 | 1 | 0 | 32 | 29 | 33 | 32 | 29 | 33 | 34 | 31 | 34 | 96.2 | 95.8 | 94.7 |
YZsel | 13 | 10 | -2 | 33 | 30 | 32 | 36 | 32 | 32 | 34 | 31 | 32 | 93.0 | 94.1 | 94.5 |
OP + All | 13 | 10 | -3 | 30 | 28 | 30 | 33 | 30 | 30 | 34 | 32 | 32 | 95.0 | 95.8 | 96.0 |
OP + Ysel | 1 | 1 | 0 | 32 | 29 | 33 | 32 | 29 | 33 | 34 | 31 | 34 | 96.2 | 95.8 | 94.7 |
OP + YZsel | 2 | 2 | 0 | 31 | 29 | 32 | 31 | 29 | 32 | 33 | 29 | 32 | 95.3 | 94.9 | 94.7 |
CART | |||||||||||||||
Confounder | 68 | 53 | -15 | 44 | 43 | 39 | 81 | 68 | 42 | 43 | 43 | 39 | 65.0 | 76.8 | 92.3 |
Treatment | 85 | 67 | -18 | 46 | 45 | 41 | 97 | 81 | 44 | 46 | 46 | 40 | 52.4 | 68.7 | 92.0 |
Outcome | 76 | 60 | -17 | 45 | 46 | 41 | 89 | 75 | 45 | 45 | 45 | 40 | 59.4 | 72.7 | 92.0 |
All | 93 | 73 | -19 | 49 | 48 | 41 | 105 | 88 | 46 | 48 | 48 | 42 | 51.1 | 67.0 | 92.6 |
Ysel | 77 | 60 | -17 | 45 | 45 | 41 | 90 | 75 | 45 | 45 | 46 | 40 | 59.0 | 75.1 | 92.2 |
YZsel | 72 | 57 | -16 | 43 | 43 | 38 | 84 | 71 | 41 | 44 | 44 | 39 | 61.8 | 74.5 | 93.2 |
OP + All | 11 | 10 | -2 | 47 | 43 | 46 | 48 | 44 | 46 | 46 | 43 | 47 | 92.5 | 94.0 | 94.6 |
OP + Ysel | 13 | 9 | -4 | 42 | 40 | 44 | 44 | 41 | 44 | 42 | 40 | 43 | 94.2 | 93.6 | 93.8 |
OP + YZsel | 13 | 10 | -3 | 39 | 38 | 41 | 42 | 39 | 41 | 40 | 38 | 41 | 93.8 | 94.6 | 94.2 |
Pruned CART | |||||||||||||||
Confounder | 82 | 65 | -17 | 39 | 37 | 31 | 91 | 75 | 36 | 36 | 36 | 31 | 37.9 | 54.8 | 90.6 |
Treatment | 94 | 74 | -20 | 38 | 38 | 32 | 102 | 83 | 38 | 37 | 37 | 32 | 29.1 | 47.2 | 90.0 |
Outcome | 88 | 69 | -19 | 38 | 37 | 32 | 96 | 78 | 37 | 36 | 36 | 31 | 32.0 | 48.6 | 89.3 |
All | 100 | 78 | -22 | 38 | 37 | 31 | 107 | 87 | 38 | 36 | 37 | 31 | 22.4 | 42.0 | 88.8 |
Ysel | 90 | 70 | -20 | 37 | 36 | 32 | 97 | 79 | 37 | 36 | 36 | 31 | 30.0 | 48.0 | 89.6 |
YZsel | 85 | 67 | -18 | 38 | 36 | 32 | 93 | 76 | 37 | 36 | 36 | 32 | 34.7 | 53.2 | 90.3 |
OP + All | 8 | 7 | -1 | 34 | 31 | 33 | 34 | 31 | 33 | 33 | 31 | 33 | 93.2 | 94.7 | 95.0 |
OP + Ysel | 7 | 6 | -1 | 32 | 30 | 33 | 32 | 31 | 33 | 33 | 30 | 33 | 94.4 | 94.1 | 94.8 |
OP + YZsel | 8 | 7 | -2 | 33 | 31 | 33 | 34 | 31 | 33 | 33 | 30 | 33 | 93.5 | 94.6 | 94.7 |
Bagged CART | |||||||||||||||
Confounder | 45 | 35 | -9 | 34 | 33 | 32 | 56 | 48 | 33 | 35 | 34 | 33 | 75.3 | 82.8 | 94.7 |
Treatment | 75 | 58 | -16 | 34 | 33 | 30 | 82 | 67 | 34 | 34 | 34 | 30 | 42.2 | 60.6 | 91.6 |
Outcome | 64 | 50 | -14 | 32 | 31 | 30 | 72 | 59 | 33 | 34 | 34 | 31 | 52.1 | 69.2 | 92.5 |
All | 89 | 70 | -19 | 32 | 32 | 28 | 95 | 77 | 34 | 33 | 33 | 29 | 22.4 | 44.6 | 91.0 |
Ysel | 67 | 52 | -15 | 32 | 31 | 29 | 74 | 61 | 32 | 34 | 33 | 30 | 49.1 | 67.3 | 93.4 |
YZsel | 53 | 41 | -12 | 35 | 33 | 32 | 63 | 53 | 34 | 36 | 35 | 32 | 68.1 | 79.1 | 94.2 |
OP + All | 7 | 5 | -1 | 29 | 28 | 30 | 30 | 28 | 30 | 30 | 27 | 30 | 93.7 | 94.1 | 94.8 |
OP + Ysel | 6 | 5 | -1 | 29 | 27 | 29 | 29 | 28 | 30 | 29 | 27 | 29 | 94.4 | 94.5 | 94.4 |
OP + YZsel | 5 | 5 | -1 | 29 | 27 | 30 | 29 | 27 | 30 | 29 | 27 | 30 | 94.5 | 94.6 | 94.4 |
Random Forests | |||||||||||||||
Confounder | 62 | 49 | -13 | 32 | 31 | 29 | 70 | 58 | 32 | 33 | 32 | 30 | 52.8 | 67.6 | 93.1 |
Treatment | 81 | 63 | -18 | 31 | 31 | 28 | 87 | 70 | 33 | 32 | 32 | 29 | 29.8 | 51.9 | 90.8 |
Outcome | 75 | 59 | -16 | 30 | 30 | 28 | 81 | 66 | 32 | 32 | 32 | 29 | 35.3 | 56.5 | 92.0 |
All | 93 | 73 | -20 | 31 | 31 | 27 | 98 | 79 | 34 | 32 | 32 | 28 | 16.9 | 38.1 | 89.3 |
Ysel | 77 | 60 | -17 | 31 | 30 | 28 | 83 | 67 | 32 | 32 | 32 | 29 | 33.2 | 54.7 | 92.0 |
YZsel | 65 | 51 | -14 | 32 | 32 | 29 | 73 | 60 | 32 | 33 | 33 | 30 | 48.4 | 66.3 | 93.6 |
OP + All | 6 | 4 | -1 | 30 | 28 | 30 | 30 | 28 | 30 | 30 | 28 | 30 | 94.2 | 94.2 | 94.1 |
OP + Ysel | 5 | 4 | 0 | 29 | 27 | 30 | 29 | 28 | 30 | 30 | 27 | 30 | 94.5 | 94.6 | 94.6 |
OP + YZsel | 5 | 4 | 0 | 29 | 27 | 30 | 29 | 28 | 30 | 30 | 27 | 30 | 94.5 | 94.6 | 94.9 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 38 | 26 | -13 | 53 | 52 | 53 | 65 | 58 | 55 | 54 | 54 | 54 | 90.3 | 92.4 | 94.8
OAL | 10 | 3 | -7 | 47 | 46 | 47 | 48 | 47 | 48 | 55 | 55 | 55 | 97.4 | 97.2 | 97.2 |
LOGIS | |||||||||||||||
Confounder | 10 | 7 | -3 | 50 | 49 | 49 | 51 | 49 | 49 | 51 | 49 | 50 | 95.2 | 94.7 | 95.7 |
Treatment | -1 | 0 | 1 | 51 | 49 | 50 | 51 | 49 | 50 | 52 | 50 | 52 | 95.7 | 95.1 | 95.3 |
Outcome | -1 | 0 | 1 | 48 | 46 | 47 | 48 | 46 | 47 | 49 | 47 | 48 | 95.3 | 95.8 | 96.0 |
All | 35 | 24 | -11 | 52 | 52 | 53 | 63 | 57 | 54 | 55 | 54 | 54 | 91.1 | 92.7 | 95.0 |
Ysel | 8 | 0 | -8 | 47 | 46 | 48 | 48 | 46 | 49 | 56 | 56 | 56 | 98.1 | 97.8 | 97.5 |
YZsel | 28 | 18 | -9 | 53 | 54 | 51 | 60 | 57 | 52 | 54 | 54 | 54 | 92.0 | 93.8 | 94.6 |
OP + All | 12 | 2 | -11 | 45 | 44 | 45 | 47 | 44 | 47 | 55 | 54 | 55 | 98.1 | 98.2 | 97.7 |
OP + Ysel | 8 | 0 | -8 | 47 | 46 | 48 | 48 | 46 | 49 | 48 | 46 | 47 | 94.8 | 94.6 | 94.0 |
OP + YZsel | 12 | 4 | -8 | 46 | 47 | 45 | 48 | 47 | 45 | 46 | 44 | 46 | 94.6 | 95.4 | 94.6 |
CART | |||||||||||||||
Confounder | 18 | 14 | -5 | 68 | 64 | 67 | 70 | 65 | 67 | 70 | 69 | 70 | 94.6 | 95.6 | 95.6 |
Treatment | 19 | 14 | -5 | 72 | 69 | 71 | 75 | 71 | 71 | 73 | 72 | 72 | 94.4 | 95.0 | 95.3 |
Outcome | 16 | 12 | -4 | 69 | 67 | 69 | 71 | 68 | 69 | 72 | 71 | 72 | 94.2 | 95.5 | 95.0 |
All | 28 | 19 | -9 | 81 | 80 | 79 | 85 | 82 | 79 | 80 | 79 | 81 | 92.8 | 93.1 | 95.0 |
Ysel | 20 | 10 | -10 | 70 | 70 | 69 | 72 | 70 | 70 | 73 | 72 | 73 | 95.3 | 94.9 | 95.4 |
YZsel | 31 | 19 | -13 | 63 | 62 | 62 | 70 | 65 | 63 | 65 | 64 | 65 | 91.6 | 92.7 | 94.9 |
OP + All | 12 | 5 | -7 | 73 | 73 | 71 | 74 | 73 | 72 | 75 | 73 | 75 | 94.7 | 94.3 | 95.2 |
OP + Ysel | 14 | 3 | -11 | 64 | 64 | 63 | 65 | 64 | 64 | 66 | 64 | 66 | 94.8 | 94.2 | 95.1 |
OP + YZsel | 15 | 4 | -11 | 57 | 55 | 57 | 59 | 55 | 58 | 56 | 55 | 56 | 92.1 | 94.9 | 94.4 |
Pruned CART | |||||||||||||||
Confounder | 22 | 14 | -7 | 58 | 56 | 57 | 62 | 58 | 57 | 59 | 59 | 59 | 94.2 | 94.3 | 95.2 |
Treatment | 26 | 18 | -8 | 61 | 58 | 59 | 66 | 61 | 59 | 62 | 60 | 60 | 93.1 | 94.3 | 95.3 |
Outcome | 24 | 16 | -7 | 59 | 56 | 58 | 64 | 58 | 59 | 60 | 59 | 60 | 93.0 | 94.6 | 94.9 |
All | 32 | 22 | -11 | 62 | 62 | 61 | 70 | 65 | 62 | 62 | 61 | 62 | 91.9 | 92.9 | 94.8 |
Ysel | 27 | 17 | -10 | 58 | 57 | 57 | 64 | 59 | 58 | 60 | 59 | 60 | 93.2 | 94.8 | 95.3 |
YZsel | 31 | 19 | -12 | 57 | 56 | 53 | 65 | 59 | 54 | 58 | 58 | 59 | 92.1 | 89.9 | 93.8 |
OP + All | 11 | 2 | -10 | 53 | 53 | 53 | 55 | 53 | 53 | 54 | 52 | 53 | 94.4 | 94.9 | 94.2 |
OP + Ysel | 12 | 2 | -10 | 50 | 50 | 49 | 52 | 50 | 51 | 51 | 50 | 51 | 94.8 | 95.0 | 94.6 |
OP + YZsel | 15 | 4 | -11 | 48 | 48 | 47 | 50 | 48 | 49 | 49 | 48 | 50 | 93.8 | 94.4 | 94.9 |
Bagged CART | |||||||||||||||
Confounder | 1 | 0 | 0 | 64 | 64 | 65 | 64 | 64 | 65 | 67 | 66 | 66 | 95.6 | 95.4 | 94.9 |
Treatment | 7 | 5 | -2 | 62 | 60 | 61 | 63 | 60 | 61 | 65 | 63 | 64 | 96.1 | 95.7 | 95.9 |
Outcome | 3 | 2 | -1 | 55 | 54 | 55 | 56 | 54 | 55 | 62 | 61 | 62 | 97.2 | 97.0 | 97.0 |
All | 29 | 19 | -10 | 53 | 52 | 53 | 60 | 55 | 53 | 57 | 56 | 57 | 93.8 | 95.3 | 96.3 |
Ysel | 13 | 4 | -9 | 53 | 51 | 53 | 54 | 52 | 54 | 60 | 60 | 61 | 97.6 | 97.4 | 97.2 |
YZsel | 11 | 9 | -2 | 106 | 103 | 98 | 106 | 104 | 98 | 89 | 90 | 89 | 91.0 | 92.1 | 91.0 |
OP + All | 12 | 1 | -11 | 45 | 44 | 46 | 47 | 44 | 47 | 46 | 45 | 46 | 94.8 | 95.0 | 93.8 |
OP + Ysel | 12 | 1 | -11 | 45 | 44 | 46 | 47 | 44 | 47 | 46 | 45 | 46 | 95.2 | 95.0 | 94.4 |
OP + YZsel | 15 | 5 | -10 | 47 | 47 | 47 | 49 | 48 | 48 | 47 | 46 | 48 | 93.3 | 94.4 | 94.9 |
Random Forests | |||||||||||||||
Confounder | 11 | 8 | -3 | 54 | 54 | 55 | 56 | 54 | 55 | 60 | 59 | 60 | 96.6 | 96.9 | 96.2 |
Treatment | 15 | 11 | -4 | 54 | 52 | 53 | 56 | 53 | 53 | 58 | 57 | 58 | 96.2 | 96.0 | 96.6 |
Outcome | 13 | 9 | -4 | 49 | 48 | 49 | 51 | 49 | 49 | 57 | 57 | 57 | 97.4 | 97.0 | 97.7 |
All | 32 | 21 | -10 | 51 | 51 | 52 | 60 | 55 | 53 | 55 | 54 | 55 | 92.9 | 94.2 | 95.7 |
Ysel | 20 | 10 | -10 | 48 | 48 | 49 | 52 | 49 | 50 | 56 | 56 | 57 | 96.9 | 97.0 | 97.1 |
YZsel | 9 | 8 | -1 | 148 | 144 | 137 | 148 | 144 | 137 | 89 | 88 | 85 | 84.3 | 82.6 | 86.5 |
OP + All | 12 | 1 | -11 | 46 | 45 | 46 | 47 | 45 | 47 | 46 | 45 | 46 | 94.6 | 94.8 | 93.7 |
OP + Ysel | 12 | 1 | -11 | 45 | 44 | 46 | 47 | 44 | 47 | 46 | 45 | 46 | 94.8 | 95.3 | 94.3 |
OP + YZsel | 13 | 4 | -9 | 59 | 59 | 49 | 60 | 59 | 50 | 50 | 48 | 49 | 94.9 | 94.9 | 94.9 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 41 | 27 | -15 | 38 | 37 | 38 | 56 | 45 | 41 | 38 | 38 | 38 | 81.0 | 89.6 | 93.8
OAL | 6 | 2 | -4 | 34 | 32 | 33 | 34 | 32 | 33 | 39 | 38 | 39 | 97.4 | 98.2 | 98.0 |
LOGIS | |||||||||||||||
Confounder | 13 | 8 | -5 | 36 | 34 | 35 | 38 | 35 | 35 | 36 | 34 | 35 | 93.7 | 93.8 | 94.6 |
Treatment | 2 | 0 | -2 | 36 | 34 | 35 | 36 | 34 | 35 | 36 | 35 | 35 | 95.0 | 94.8 | 95.0 |
Outcome | 1 | 0 | -2 | 34 | 32 | 33 | 34 | 32 | 33 | 34 | 32 | 33 | 95.2 | 95.3 | 94.9 |
All | 35 | 22 | -12 | 37 | 36 | 37 | 51 | 42 | 39 | 39 | 38 | 38 | 85.9 | 92.1 | 94.5 |
Ysel | 5 | 0 | -6 | 34 | 32 | 33 | 34 | 32 | 33 | 39 | 38 | 39 | 97.8 | 98.0 | 97.8 |
YZsel | 21 | 13 | -7 | 37 | 35 | 36 | 42 | 37 | 37 | 38 | 38 | 38 | 91.5 | 94.7 | 95.3 |
OP + All | 9 | 3 | -7 | 33 | 31 | 32 | 34 | 31 | 33 | 38 | 38 | 38 | 97.4 | 98.0 | 98.0 |
OP + Ysel | 5 | 0 | -6 | 34 | 32 | 33 | 34 | 32 | 33 | 33 | 32 | 33 | 94.7 | 95.0 | 94.6 |
OP + YZsel | 8 | 2 | -6 | 32 | 30 | 32 | 33 | 30 | 32 | 33 | 31 | 32 | 95.4 | 95.7 | 95.6 |
CART | |||||||||||||||
Confounder | 18 | 13 | -6 | 48 | 47 | 48 | 51 | 49 | 49 | 50 | 49 | 50 | 94.1 | 94.4 | 96.2 |
Treatment | 23 | 13 | -9 | 50 | 48 | 50 | 55 | 50 | 51 | 52 | 51 | 52 | 93.9 | 95.4 | 95.4 |
Outcome | 16 | 11 | -5 | 48 | 46 | 48 | 50 | 48 | 49 | 51 | 51 | 51 | 94.8 | 95.8 | 95.6 |
All | 32 | 20 | -12 | 59 | 57 | 59 | 67 | 60 | 60 | 58 | 57 | 58 | 91.0 | 92.8 | 93.6 |
Ysel | 19 | 12 | -8 | 49 | 48 | 51 | 52 | 49 | 51 | 52 | 51 | 52 | 94.3 | 95.4 | 95.0 |
YZsel | 24 | 16 | -8 | 43 | 42 | 43 | 49 | 45 | 44 | 46 | 46 | 46 | 94.9 | 93.8 | 95.3 |
OP + All | 10 | 3 | -7 | 54 | 52 | 53 | 55 | 52 | 53 | 52 | 51 | 52 | 93.4 | 94.2 | 94.3 |
OP + Ysel | 11 | 4 | -7 | 45 | 44 | 46 | 46 | 44 | 47 | 46 | 45 | 46 | 94.4 | 94.2 | 94.8 |
OP + YZsel | 10 | 4 | -7 | 38 | 36 | 38 | 40 | 36 | 39 | 40 | 39 | 39 | 95.1 | 94.5 | 94.5 |
Pruned CART | |||||||||||||||
Confounder | 23 | 15 | -8 | 41 | 39 | 40 | 47 | 42 | 41 | 41 | 41 | 41 | 90.4 | 94.2 | 95.1 |
Treatment | 30 | 20 | -11 | 43 | 40 | 41 | 52 | 45 | 43 | 43 | 42 | 42 | 88.8 | 93.1 | 94.7 |
Outcome | 23 | 14 | -9 | 40 | 39 | 40 | 46 | 41 | 41 | 42 | 41 | 41 | 91.8 | 94.7 | 95.3 |
All | 36 | 23 | -13 | 42 | 41 | 43 | 56 | 47 | 44 | 43 | 42 | 42 | 85.8 | 91.2 | 93.4 |
Ysel | 26 | 16 | -10 | 40 | 39 | 40 | 48 | 42 | 42 | 42 | 41 | 41 | 90.9 | 93.6 | 94.8 |
YZsel | 27 | 17 | -10 | 38 | 36 | 38 | 47 | 40 | 39 | 40 | 40 | 40 | 89.3 | 93.6 | 94.7 |
OP + All | 10 | 3 | -8 | 38 | 36 | 37 | 39 | 36 | 37 | 37 | 35 | 36 | 93.5 | 94.4 | 93.7 |
OP + Ysel | 10 | 3 | -7 | 35 | 34 | 35 | 37 | 34 | 36 | 36 | 34 | 35 | 94.8 | 94.8 | 94.6 |
OP + YZsel | 8 | 2 | -6 | 33 | 31 | 33 | 34 | 32 | 34 | 35 | 33 | 34 | 95.3 | 95.7 | 95.1 |
Bagged CART | |||||||||||||||
Confounder | 1 | 0 | -1 | 44 | 44 | 43 | 44 | 44 | 43 | 46 | 45 | 46 | 96.4 | 95.6 | 96.2 |
Treatment | 9 | 4 | -5 | 44 | 42 | 42 | 45 | 42 | 42 | 46 | 44 | 44 | 95.4 | 95.9 | 95.8 |
Outcome | 4 | 1 | -2 | 38 | 36 | 38 | 38 | 36 | 38 | 43 | 42 | 42 | 96.9 | 97.0 | 97.2 |
All | 30 | 19 | -11 | 37 | 36 | 37 | 48 | 40 | 39 | 40 | 39 | 40 | 90.3 | 94.4 | 95.5 |
Ysel | 10 | 4 | -6 | 37 | 35 | 36 | 38 | 35 | 37 | 42 | 41 | 42 | 96.7 | 97.6 | 97.7 |
YZsel | -4 | -1 | 3 | 75 | 73 | 71 | 75 | 73 | 71 | 65 | 63 | 64 | 91.6 | 90.4 | 92.6 |
OP + All | 10 | 2 | -7 | 33 | 31 | 32 | 34 | 31 | 33 | 33 | 32 | 33 | 93.8 | 94.8 | 94.4 |
OP + Ysel | 9 | 2 | -7 | 33 | 31 | 32 | 34 | 31 | 33 | 33 | 31 | 32 | 94.0 | 95.0 | 94.4 |
OP + YZsel | 10 | 3 | -6 | 32 | 30 | 32 | 34 | 30 | 33 | 33 | 32 | 33 | 94.5 | 95.9 | 94.7 |
Random Forests | |||||||||||||||
Confounder | 13 | 8 | -4 | 38 | 37 | 38 | 41 | 38 | 38 | 42 | 41 | 41 | 95.3 | 96.1 | 97.3 |
Treatment | 17 | 10 | -7 | 38 | 36 | 37 | 41 | 38 | 38 | 41 | 40 | 40 | 94.7 | 95.6 | 96.2 |
Outcome | 14 | 9 | -5 | 35 | 33 | 34 | 37 | 34 | 35 | 40 | 39 | 40 | 95.7 | 97.8 | 97.9 |
All | 34 | 22 | -12 | 37 | 35 | 36 | 50 | 41 | 38 | 39 | 38 | 39 | 87.0 | 92.7 | 95.0 |
Ysel | 18 | 10 | -8 | 34 | 32 | 34 | 39 | 34 | 35 | 40 | 39 | 40 | 95.4 | 97.6 | 97.6 |
YZsel | -28 | -23 | 5 | 106 | 103 | 92 | 110 | 106 | 92 | 71 | 67 | 63 | 79.1 | 85.2 | 89.5 |
OP + All | 9 | 2 | -7 | 33 | 31 | 33 | 34 | 31 | 33 | 33 | 32 | 33 | 94.1 | 95.2 | 94.6 |
OP + Ysel | 9 | 2 | -7 | 33 | 31 | 32 | 34 | 31 | 33 | 33 | 32 | 32 | 94.0 | 94.9 | 94.6 |
OP + YZsel | 7 | 2 | -5 | 35 | 33 | 36 | 36 | 33 | 36 | 34 | 33 | 34 | 95.5 | 96.1 | 94.9 |
Bias | MCSD | RMSE | SE | Coverage (%; modified) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Estimators | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 | 1 vs 2 | 1 vs 3 | 2 vs 3 |
Naive | 128 | 62 | -66 | 54 | 54 | 53 | 139 | 82 | 84 | 54 | 54 | 54 | 35.1 | 78.6 | 78.0
OAL | 33 | 31 | -2 | 51 | 48 | 48 | 61 | 57 | 48 | 56 | 55 | 55 | 92.4 | 93.3 | 96.8 |
LOGIS | |||||||||||||||
Confounder | 8 | 31 | 23 | 52 | 50 | 50 | 52 | 59 | 55 | 53 | 51 | 51 | 95.6 | 91.1 | 92.2 |
Treatment | -6 | 23 | 29 | 56 | 52 | 53 | 56 | 57 | 60 | 58 | 54 | 54 | 95.4 | 94.0 | 91.1 |
Outcome | -3 | 24 | 27 | 49 | 46 | 47 | 49 | 52 | 55 | 51 | 48 | 49 | 95.6 | 92.8 | 91.3 |
All | 95 | 53 | -42 | 51 | 52 | 50 | 108 | 75 | 66 | 56 | 55 | 55 | 60.5 | 84.4 | 90.8 |
Ysel | 7 | 24 | 17 | 49 | 47 | 48 | 50 | 52 | 51 | 56 | 56 | 56 | 97.0 | 95.5 | 96.4 |
YZsel | 48 | 39 | -10 | 58 | 52 | 54 | 75 | 65 | 55 | 55 | 53 | 53 | 83.6 | 89.9 | 94.4 |
OP + All | 25 | 20 | -5 | 46 | 45 | 45 | 52 | 49 | 45 | 56 | 55 | 55 | 96.1 | 96.6 | 98.0 |
OP + Ysel | 7 | 24 | 17 | 49 | 47 | 48 | 50 | 52 | 51 | 48 | 47 | 47 | 94.3 | 91.8 | 92.2 |
OP + YZsel | 13 | 22 | 9 | 48 | 46 | 47 | 50 | 51 | 47 | 48 | 46 | 47 | 94.8 | 92.1 | 94.5 |
CART | |||||||||||||||
Confounder | 47 | 34 | -14 | 72 | 68 | 65 | 86 | 76 | 67 | 73 | 72 | 70 | 90.0 | 92.8 | 95.7 |
Treatment | 61 | 39 | -22 | 73 | 73 | 70 | 95 | 82 | 74 | 75 | 73 | 73 | 86.9 | 91.3 | 94.2 |
Outcome | 50 | 35 | -15 | 73 | 70 | 69 | 89 | 78 | 71 | 74 | 73 | 72 | 90.1 | 92.0 | 94.8 |
All | 89 | 52 | -38 | 83 | 81 | 82 | 122 | 96 | 90 | 82 | 81 | 80 | 79.7 | 89.2 | 91.8 |
Ysel | 56 | 36 | -20 | 73 | 71 | 70 | 92 | 80 | 73 | 75 | 74 | 73 | 89.1 | 92.4 | 94.4 |
YZsel | 70 | 42 | -27 | 68 | 65 | 64 | 97 | 77 | 70 | 70 | 69 | 68 | 85.4 | 91.1 | 94.6 |
OP + All | 29 | 19 | -10 | 75 | 73 | 74 | 81 | 76 | 74 | 75 | 74 | 74 | 93.0 | 94.0 | 94.6 |
OP + Ysel | 32 | 21 | -11 | 66 | 64 | 64 | 73 | 68 | 65 | 67 | 66 | 65 | 92.7 | 93.8 | 94.6 |
OP + YZsel | 29 | 22 | -8 | 60 | 57 | 57 | 67 | 61 | 58 | 62 | 60 | 59 | 93.1 | 94.0 | 94.6 |
Pruned CART | |||||||||||||||
Confounder | 57 | 37 | -20 | 66 | 60 | 59 | 87 | 71 | 63 | 64 | 63 | 62 | 82.8 | 91.3 | 94.1 |
Treatment | 76 | 44 | -32 | 67 | 64 | 62 | 101 | 78 | 70 | 65 | 63 | 63 | 76.2 | 87.8 | 93.0 |
Outcome | 65 | 40 | -25 | 64 | 61 | 59 | 91 | 73 | 64 | 64 | 62 | 62 | 81.3 | 88.9 | 94.0 |
All | 100 | 54 | -46 | 65 | 61 | 63 | 119 | 82 | 78 | 64 | 62 | 62 | 63.0 | 85.5 | 87.8 |
Ysel | 71 | 42 | -29 | 64 | 62 | 60 | 96 | 74 | 67 | 64 | 62 | 61 | 78.4 | 88.8 | 92.6 |
YZsel | 77 | 45 | -33 | 62 | 58 | 57 | 99 | 73 | 66 | 62 | 61 | 60 | 76.0 | 89.0 | 91.7 |
OP + All | 23 | 19 | -4 | 54 | 52 | 52 | 59 | 55 | 52 | 55 | 53 | 53 | 93.8 | 93.8 | 95.0 |
OP + Ysel | 24 | 20 | -4 | 54 | 53 | 51 | 59 | 57 | 51 | 54 | 53 | 52 | 93.2 | 93.2 | 95.0 |
OP + YZsel | 23 | 20 | -3 | 51 | 49 | 49 | 56 | 53 | 50 | 53 | 51 | 50 | 94.0 | 92.9 | 95.3 |
Bagged CART | |||||||||||||||
Confounder | -4 | 14 | 18 | 74 | 70 | 69 | 74 | 72 | 71 | 73 | 71 | 70 | 94.3 | 94.0 | 93.4 |
Treatment | 22 | 30 | 8 | 66 | 62 | 63 | 70 | 69 | 63 | 68 | 65 | 65 | 94.2 | 92.8 | 95.1 |
Outcome | 12 | 23 | 12 | 61 | 56 | 58 | 62 | 61 | 59 | 65 | 64 | 64 | 95.7 | 94.9 | 96.0 |
All | 85 | 51 | -34 | 52 | 53 | 51 | 99 | 73 | 61 | 58 | 57 | 57 | 70.3 | 86.6 | 93.7 |
Ysel | 28 | 28 | 0 | 57 | 53 | 54 | 64 | 60 | 54 | 63 | 62 | 62 | 94.6 | 95.9 | 97.0 |
YZsel | 15 | 22 | 7 | 93 | 88 | 90 | 94 | 91 | 90 | 84 | 81 | 79 | 93.2 | 93.8 | 92.2 |
OP + All | 20 | 19 | -2 | 47 | 45 | 45 | 51 | 49 | 45 | 48 | 46 | 46 | 94.1 | 93.1 | 95.1 |
OP + Ysel | 21 | 21 | 0 | 47 | 45 | 45 | 51 | 50 | 45 | 48 | 46 | 46 | 94.1 | 93.4 | 95.4 |
OP + YZsel | 22 | 19 | -3 | 47 | 45 | 45 | 52 | 49 | 45 | 48 | 46 | 46 | 93.6 | 93.2 | 95.3 |
Random Forests | |||||||||||||||
Confounder | 29 | 27 | -2 | 59 | 56 | 56 | 66 | 62 | 56 | 63 | 61 | 61 | 94.0 | 94.4 | 96.7 |
Treatment | 53 | 39 | -14 | 55 | 54 | 53 | 76 | 67 | 54 | 59 | 58 | 58 | 87.8 | 90.4 | 96.7 |
Outcome | 46 | 36 | -10 | 52 | 49 | 49 | 69 | 61 | 50 | 58 | 57 | 58 | 89.8 | 93.8 | 97.8 |
All | 105 | 56 | -49 | 52 | 52 | 50 | 117 | 76 | 70 | 55 | 55 | 55 | 52.3 | 82.9 | 88.3 |
Ysel | 61 | 40 | -21 | 50 | 49 | 48 | 79 | 63 | 53 | 57 | 56 | 57 | 84.9 | 92.6 | 96.9 |
YZsel | 33 | 30 | -3 | 102 | 97 | 92 | 108 | 101 | 92 | 74 | 71 | 70 | 90.4 | 92.0 | 94.0 |
OP + All | 19 | 18 | -1 | 47 | 45 | 45 | 51 | 49 | 45 | 48 | 46 | 46 | 94.0 | 93.0 | 95.0 |
OP + Ysel | 20 | 21 | 1 | 47 | 45 | 45 | 51 | 49 | 45 | 48 | 46 | 46 | 94.1 | 93.0 | 95.0 |
OP + YZsel | 21 | 19 | -2 | 47 | 45 | 45 | 51 | 49 | 45 | 48 | 46 | 46 | 93.8 | 92.9 | 95.3 |
Estimators | Bias: 1 vs 2 | 1 vs 3 | 2 vs 3 | MCSD: 1 vs 2 | 1 vs 3 | 2 vs 3 | RMSE: 1 vs 2 | 1 vs 3 | 2 vs 3 | SE: 1 vs 2 | 1 vs 3 | 2 vs 3 | Coverage (%; modified): 1 vs 2 | 1 vs 3 | 2 vs 3
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Naive | 128 | 62 | -66 | 38 | 38 | 38 | 133 | 73 | 76 | 38 | 38 | 38 | 8.7 | 63.2 | 59.2
OAL | 23 | 31 | 8 | 39 | 34 | 36 | 46 | 46 | 36 | 39 | 39 | 39 | 90.6 | 90.1 | 96.6 |
LOGIS | |||||||||||||||
Confounder | 8 | 33 | 25 | 37 | 36 | 35 | 38 | 48 | 43 | 37 | 36 | 36 | 94.4 | 85.0 | 88.3 |
Treatment | -7 | 24 | 32 | 39 | 37 | 37 | 40 | 44 | 48 | 40 | 37 | 37 | 94.8 | 89.2 | 85.7 |
Outcome | -4 | 25 | 29 | 35 | 33 | 33 | 35 | 42 | 44 | 35 | 33 | 34 | 94.7 | 88.1 | 85.6 |
All | 71 | 47 | -24 | 36 | 36 | 35 | 80 | 60 | 42 | 40 | 39 | 39 | 57.5 | 78.4 | 93.4 |
Ysel | 0 | 25 | 25 | 35 | 33 | 33 | 35 | 42 | 41 | 39 | 39 | 39 | 96.9 | 92.7 | 93.1 |
YZsel | 16 | 32 | 16 | 40 | 36 | 37 | 43 | 48 | 40 | 37 | 36 | 36 | 91.6 | 86.9 | 91.0 |
OP + All | 15 | 22 | 7 | 33 | 33 | 32 | 37 | 40 | 33 | 40 | 39 | 39 | 96.4 | 94.2 | 97.2 |
OP + Ysel | 0 | 25 | 25 | 35 | 33 | 33 | 35 | 42 | 41 | 33 | 33 | 33 | 94.0 | 87.9 | 87.4 |
OP + YZsel | 3 | 25 | 22 | 35 | 33 | 33 | 35 | 42 | 40 | 35 | 33 | 33 | 94.5 | 88.2 | 89.0 |
CART | |||||||||||||||
Confounder | 46 | 34 | -12 | 52 | 50 | 49 | 69 | 60 | 50 | 53 | 51 | 50 | 86.6 | 91.0 | 94.1 |
Treatment | 56 | 38 | -18 | 54 | 53 | 49 | 78 | 65 | 52 | 55 | 53 | 52 | 82.5 | 88.2 | 94.3 |
Outcome | 47 | 34 | -14 | 52 | 50 | 48 | 70 | 61 | 50 | 54 | 53 | 52 | 86.4 | 91.0 | 95.8 |
All | 77 | 47 | -30 | 59 | 58 | 58 | 97 | 75 | 65 | 60 | 58 | 58 | 74.1 | 87.1 | 92.2 |
Ysel | 52 | 36 | -16 | 53 | 52 | 50 | 74 | 63 | 52 | 55 | 53 | 52 | 85.1 | 89.7 | 94.8 |
YZsel | 52 | 36 | -16 | 51 | 48 | 47 | 73 | 60 | 50 | 52 | 51 | 50 | 83.9 | 89.3 | 94.7 |
OP + All | 21 | 18 | -3 | 52 | 52 | 51 | 57 | 55 | 51 | 53 | 52 | 52 | 93.6 | 93.4 | 94.6 |
OP + Ysel | 29 | 23 | -6 | 47 | 46 | 45 | 55 | 52 | 46 | 48 | 46 | 46 | 90.6 | 90.8 | 94.7 |
OP + YZsel | 28 | 22 | -6 | 45 | 43 | 43 | 53 | 48 | 44 | 45 | 43 | 43 | 90.4 | 91.4 | 94.2 |
Pruned CART | |||||||||||||||
Confounder | 53 | 35 | -18 | 46 | 43 | 42 | 70 | 55 | 46 | 45 | 44 | 43 | 76.2 | 86.7 | 93.2 |
Treatment | 70 | 43 | -28 | 49 | 45 | 43 | 86 | 62 | 51 | 46 | 44 | 44 | 63.5 | 82.6 | 89.8 |
Outcome | 58 | 35 | -22 | 46 | 42 | 41 | 74 | 55 | 47 | 44 | 43 | 43 | 71.6 | 86.6 | 91.8 |
All | 92 | 50 | -41 | 46 | 43 | 43 | 103 | 66 | 59 | 44 | 42 | 42 | 44.0 | 76.9 | 82.2 |
Ysel | 63 | 38 | -25 | 46 | 43 | 43 | 78 | 57 | 49 | 44 | 43 | 43 | 68.4 | 84.9 | 90.7 |
YZsel | 60 | 37 | -22 | 46 | 42 | 41 | 75 | 56 | 47 | 44 | 43 | 42 | 71.0 | 86.1 | 92.2 |
OP + All | 15 | 19 | 4 | 37 | 36 | 35 | 40 | 41 | 36 | 37 | 36 | 36 | 93.6 | 91.3 | 94.7 |
OP + Ysel | 18 | 20 | 2 | 38 | 36 | 36 | 42 | 42 | 36 | 37 | 36 | 36 | 92.1 | 90.6 | 94.8 |
OP + YZsel | 18 | 20 | 2 | 38 | 36 | 36 | 42 | 41 | 36 | 37 | 36 | 35 | 91.7 | 90.8 | 94.0 |
Bagged CART | |||||||||||||||
Confounder | -5 | 12 | 16 | 53 | 51 | 48 | 53 | 52 | 50 | 53 | 51 | 49 | 93.9 | 93.8 | 94.0 |
Treatment | 19 | 28 | 8 | 46 | 44 | 42 | 50 | 52 | 43 | 48 | 46 | 46 | 93.8 | 91.2 | 95.5 |
Outcome | 9 | 20 | 11 | 43 | 42 | 39 | 44 | 46 | 41 | 47 | 45 | 45 | 95.8 | 94.0 | 96.2 |
All | 74 | 48 | -26 | 38 | 37 | 36 | 83 | 61 | 45 | 41 | 40 | 40 | 55.4 | 79.1 | 92.2 |
Ysel | 21 | 25 | 4 | 42 | 41 | 39 | 47 | 48 | 39 | 45 | 44 | 43 | 94.2 | 92.8 | 96.7 |
YZsel | -1 | 14 | 15 | 55 | 52 | 50 | 55 | 54 | 52 | 55 | 52 | 51 | 94.7 | 94.3 | 93.7 |
OP + All | 13 | 22 | 9 | 34 | 33 | 33 | 36 | 40 | 34 | 34 | 33 | 33 | 93.7 | 89.7 | 93.6 |
OP + Ysel | 16 | 22 | 6 | 34 | 33 | 33 | 38 | 40 | 33 | 35 | 33 | 33 | 93.3 | 89.8 | 94.3 |
OP + YZsel | 16 | 21 | 5 | 34 | 33 | 33 | 38 | 39 | 33 | 35 | 33 | 33 | 92.8 | 90.5 | 94.9 |
Random Forests | |||||||||||||||
Confounder | 29 | 26 | -3 | 42 | 41 | 39 | 51 | 48 | 39 | 44 | 43 | 43 | 91.2 | 91.1 | 96.1 |
Treatment | 51 | 38 | -13 | 39 | 38 | 37 | 64 | 54 | 39 | 42 | 41 | 41 | 78.6 | 86.2 | 96.0 |
Outcome | 43 | 34 | -9 | 36 | 35 | 34 | 56 | 49 | 35 | 41 | 40 | 41 | 85.2 | 90.3 | 97.4 |
All | 99 | 55 | -45 | 37 | 37 | 36 | 106 | 66 | 57 | 39 | 39 | 39 | 26.1 | 71.3 | 81.0 |
Ysel | 54 | 38 | -16 | 36 | 35 | 34 | 65 | 52 | 38 | 40 | 40 | 40 | 75.2 | 87.3 | 96.5 |
YZsel | 30 | 27 | -4 | 47 | 44 | 43 | 56 | 51 | 43 | 46 | 44 | 44 | 90.4 | 91.7 | 96.5 |
OP + All | 11 | 21 | 10 | 34 | 34 | 33 | 36 | 40 | 34 | 34 | 33 | 33 | 94.2 | 89.4 | 92.7 |
OP + Ysel | 14 | 22 | 7 | 34 | 34 | 33 | 37 | 40 | 33 | 35 | 33 | 33 | 93.3 | 90.0 | 93.7 |
OP + YZsel | 15 | 20 | 5 | 34 | 33 | 33 | 37 | 39 | 33 | 35 | 33 | 33 | 93.2 | 90.5 | 94.6 |
Estimators | Bias: 1 vs 2 | 1 vs 3 | 2 vs 3 | MCSD: 1 vs 2 | 1 vs 3 | 2 vs 3 | RMSE: 1 vs 2 | 1 vs 3 | 2 vs 3 | SE: 1 vs 2 | 1 vs 3 | 2 vs 3 | Coverage (%; modified): 1 vs 2 | 1 vs 3 | 2 vs 3
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Naive | 33 | 16 | -18 | 54 | 53 | 53 | 64 | 56 | 55 | 55 | 54 | 53 | 90.5 | 94.6 | 93.8
OAL | 10 | 4 | -6 | 48 | 47 | 45 | 49 | 47 | 45 | 56 | 55 | 54 | 97.6 | 98.2 | 97.8 |
LOGIS | |||||||||||||||
Confounder | 9 | 9 | -1 | 51 | 50 | 48 | 51 | 51 | 48 | 51 | 51 | 49 | 94.9 | 95.3 | 95.2 |
Treatment | -1 | 1 | 2 | 51 | 51 | 49 | 51 | 51 | 49 | 53 | 52 | 50 | 95.0 | 95.3 | 95.1 |
Outcome | -1 | 2 | 2 | 48 | 46 | 45 | 48 | 46 | 45 | 49 | 48 | 47 | 95.1 | 96.2 | 95.6 |
All | 32 | 15 | -17 | 54 | 53 | 52 | 62 | 55 | 55 | 55 | 55 | 53 | 91.1 | 95.1 | 94.2 |
Ysel | 8 | 1 | -6 | 48 | 47 | 45 | 49 | 47 | 46 | 56 | 56 | 55 | 97.7 | 98.7 | 98.2 |
YZsel | 25 | 13 | -12 | 53 | 55 | 56 | 59 | 56 | 57 | 55 | 54 | 53 | 92.8 | 96.2 | 94.5 |
OP + All | 12 | 1 | -11 | 46 | 45 | 43 | 47 | 45 | 44 | 55 | 55 | 54 | 97.7 | 98.6 | 98.2 |
OP + Ysel | 8 | 1 | -6 | 48 | 47 | 45 | 49 | 47 | 46 | 48 | 48 | 46 | 94.2 | 96.2 | 95.8 |
OP + YZsel | 12 | 4 | -8 | 47 | 46 | 47 | 48 | 47 | 48 | 46 | 46 | 44 | 94.5 | 96.2 | 92.8 |
CART | |||||||||||||||
Confounder | 14 | 9 | -5 | 68 | 68 | 64 | 70 | 69 | 64 | 71 | 71 | 69 | 94.9 | 95.2 | 96.0 |
Treatment | 17 | 9 | -8 | 74 | 71 | 69 | 76 | 72 | 69 | 73 | 73 | 71 | 94.0 | 94.0 | 95.5 |
Outcome | 16 | 8 | -8 | 71 | 69 | 67 | 73 | 69 | 68 | 73 | 72 | 71 | 94.5 | 95.4 | 95.3 |
All | 24 | 12 | -11 | 80 | 80 | 77 | 83 | 81 | 78 | 81 | 80 | 79 | 94.3 | 94.0 | 95.2 |
Ysel | 19 | 9 | -10 | 72 | 68 | 68 | 74 | 69 | 69 | 74 | 73 | 72 | 94.3 | 95.4 | 95.5 |
YZsel | 26 | 15 | -11 | 60 | 66 | 63 | 65 | 68 | 63 | 65 | 65 | 63 | 95.8 | 95.8 | 93.8 |
OP + All | 10 | 1 | -9 | 74 | 74 | 69 | 75 | 74 | 70 | 75 | 74 | 73 | 94.9 | 95.0 | 95.7 |
OP + Ysel | 12 | 1 | -10 | 66 | 64 | 62 | 67 | 64 | 63 | 66 | 65 | 64 | 94.6 | 95.1 | 94.9 |
OP + YZsel | 15 | 5 | -10 | 52 | 58 | 54 | 54 | 58 | 55 | 57 | 56 | 54 | 95.8 | 95.8 | 93.8 |
Pruned CART | |||||||||||||||
Confounder | 20 | 11 | -8 | 61 | 59 | 57 | 64 | 60 | 57 | 61 | 61 | 59 | 93.2 | 94.8 | 95.4 |
Treatment | 22 | 11 | -11 | 62 | 60 | 58 | 66 | 61 | 59 | 62 | 62 | 60 | 93.3 | 95.2 | 95.2 |
Outcome | 22 | 11 | -11 | 60 | 59 | 56 | 64 | 60 | 57 | 61 | 61 | 60 | 93.5 | 95.0 | 95.0 |
All | 28 | 13 | -15 | 62 | 61 | 60 | 69 | 62 | 62 | 62 | 62 | 60 | 92.4 | 95.0 | 94.3 |
Ysel | 26 | 13 | -13 | 60 | 57 | 56 | 65 | 58 | 58 | 61 | 61 | 59 | 93.1 | 95.9 | 94.4 |
YZsel | 28 | 17 | -12 | 55 | 57 | 59 | 62 | 60 | 60 | 59 | 59 | 57 | 92.7 | 94.8 | 95.8 |
OP + All | 10 | 1 | -10 | 54 | 52 | 50 | 55 | 52 | 51 | 54 | 53 | 52 | 94.8 | 96.4 | 94.7 |
OP + Ysel | 12 | 2 | -10 | 52 | 51 | 49 | 53 | 51 | 50 | 53 | 52 | 50 | 94.8 | 95.8 | 95.0 |
OP + YZsel | 12 | 5 | -7 | 48 | 49 | 49 | 50 | 49 | 49 | 51 | 49 | 47 | 94.8 | 97.9 | 93.8 |
Bagged CART | |||||||||||||||
Confounder | 4 | 3 | -1 | 66 | 64 | 64 | 66 | 64 | 64 | 67 | 67 | 65 | 95.2 | 95.4 | 95.4 |
Treatment | 6 | 4 | -2 | 62 | 61 | 57 | 62 | 61 | 57 | 64 | 64 | 61 | 95.2 | 95.4 | 95.6 |
Outcome | 5 | 4 | -1 | 55 | 53 | 52 | 55 | 54 | 52 | 61 | 62 | 60 | 97.1 | 97.1 | 97.7 |
All | 25 | 12 | -13 | 53 | 53 | 50 | 59 | 54 | 52 | 57 | 57 | 55 | 94.3 | 96.4 | 96.4 |
Ysel | 13 | 5 | -8 | 53 | 53 | 51 | 54 | 53 | 52 | 60 | 60 | 59 | 97.5 | 97.4 | 97.5 |
YZsel | 15 | 2 | -12 | 102 | 102 | 103 | 103 | 102 | 104 | 89 | 90 | 87 | 93.8 | 99.0 | 93.8 |
OP + All | 12 | 1 | -11 | 46 | 45 | 43 | 48 | 45 | 45 | 47 | 46 | 45 | 94.2 | 96.4 | 95.0 |
OP + Ysel | 12 | 1 | -11 | 46 | 45 | 44 | 48 | 45 | 45 | 47 | 46 | 45 | 94.4 | 96.3 | 94.8 |
OP + YZsel | 14 | 3 | -11 | 49 | 49 | 48 | 51 | 49 | 49 | 48 | 47 | 45 | 94.8 | 96.9 | 93.8 |
Random Forests | |||||||||||||||
Confounder | 11 | 7 | -4 | 57 | 55 | 54 | 58 | 55 | 54 | 60 | 60 | 59 | 95.5 | 96.2 | 96.4 |
Treatment | 14 | 8 | -6 | 54 | 53 | 51 | 56 | 54 | 51 | 58 | 58 | 56 | 95.5 | 96.3 | 96.4 |
Outcome | 13 | 8 | -5 | 50 | 49 | 47 | 52 | 50 | 47 | 57 | 57 | 56 | 97.5 | 98.0 | 98.2 |
All | 28 | 13 | -14 | 52 | 52 | 50 | 59 | 53 | 52 | 55 | 55 | 54 | 93.2 | 95.7 | 95.7 |
Ysel | 18 | 7 | -11 | 49 | 48 | 46 | 53 | 49 | 48 | 57 | 57 | 55 | 96.4 | 97.7 | 97.7 |
YZsel | 16 | 7 | -9 | 138 | 139 | 137 | 139 | 139 | 137 | 90 | 92 | 86 | 86.5 | 91.7 | 93.8 |
OP + All | 12 | 0 | -11 | 46 | 45 | 44 | 48 | 45 | 45 | 47 | 46 | 45 | 93.6 | 96.2 | 94.8 |
OP + Ysel | 11 | 1 | -10 | 46 | 45 | 44 | 47 | 45 | 45 | 47 | 46 | 45 | 94.6 | 96.5 | 94.6 |
OP + YZsel | 7 | 3 | -4 | 63 | 58 | 57 | 64 | 58 | 57 | 50 | 50 | 48 | 93.8 | 97.9 | 93.8 |
Estimators | Bias: 1 vs 2 | 1 vs 3 | 2 vs 3 | MCSD: 1 vs 2 | 1 vs 3 | 2 vs 3 | RMSE: 1 vs 2 | 1 vs 3 | 2 vs 3 | SE: 1 vs 2 | 1 vs 3 | 2 vs 3 | Coverage (%; modified): 1 vs 2 | 1 vs 3 | 2 vs 3
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Naive | 36 | 17 | -19 | 39 | 39 | 38 | 53 | 42 | 42 | 39 | 38 | 38 | 83.9 | 92.8 | 92.8
OAL | 6 | 5 | -1 | 34 | 34 | 33 | 34 | 34 | 33 | 39 | 39 | 38 | 97.3 | 97.5 | 97.2 |
LOGIS | |||||||||||||||
Confounder | 12 | 11 | -1 | 36 | 36 | 34 | 38 | 38 | 34 | 36 | 36 | 34 | 94.2 | 93.4 | 94.5 |
Treatment | 2 | 4 | 2 | 37 | 36 | 34 | 37 | 36 | 34 | 36 | 36 | 34 | 95.0 | 94.6 | 94.2 |
Outcome | 1 | 4 | 2 | 34 | 33 | 33 | 34 | 34 | 33 | 34 | 33 | 32 | 95.1 | 94.0 | 93.7 |
All | 32 | 15 | -17 | 38 | 38 | 37 | 50 | 41 | 40 | 39 | 39 | 38 | 87.2 | 92.8 | 93.8 |
Ysel | 5 | 3 | -2 | 34 | 33 | 33 | 34 | 34 | 33 | 39 | 39 | 38 | 97.4 | 97.4 | 97.0 |
YZsel | 23 | 12 | -10 | 39 | 38 | 37 | 45 | 40 | 38 | 38 | 38 | 38 | 90.0 | 92.3 | 94.3 |
OP + All | 9 | 4 | -5 | 33 | 33 | 32 | 34 | 33 | 33 | 39 | 39 | 38 | 97.2 | 97.6 | 97.0 |
OP + Ysel | 5 | 3 | -2 | 34 | 33 | 33 | 34 | 34 | 33 | 33 | 33 | 32 | 94.8 | 94.4 | 93.2 |
OP + YZsel | 8 | 7 | -2 | 33 | 32 | 31 | 34 | 33 | 31 | 33 | 33 | 31 | 95.0 | 93.8 | 94.3 |
CART | |||||||||||||||
Confounder | 17 | 12 | -5 | 49 | 49 | 47 | 52 | 50 | 48 | 51 | 50 | 49 | 94.0 | 94.1 | 96.0 |
Treatment | 18 | 11 | -7 | 52 | 52 | 50 | 55 | 53 | 50 | 53 | 52 | 51 | 93.8 | 93.7 | 95.3 |
Outcome | 15 | 8 | -6 | 51 | 49 | 47 | 53 | 50 | 48 | 52 | 52 | 50 | 94.6 | 95.4 | 95.5 |
All | 27 | 15 | -13 | 58 | 58 | 55 | 64 | 60 | 56 | 58 | 58 | 57 | 92.8 | 93.6 | 95.0 |
Ysel | 17 | 9 | -8 | 51 | 50 | 49 | 54 | 51 | 50 | 53 | 53 | 51 | 94.3 | 95.0 | 95.3 |
YZsel | 28 | 15 | -13 | 45 | 45 | 44 | 53 | 48 | 46 | 46 | 46 | 45 | 92.7 | 95.8 | 95.2 |
OP + All | 11 | 5 | -6 | 52 | 52 | 51 | 53 | 53 | 51 | 53 | 52 | 51 | 94.8 | 94.9 | 94.3 |
OP + Ysel | 10 | 4 | -6 | 48 | 46 | 46 | 49 | 46 | 46 | 47 | 46 | 45 | 94.0 | 95.6 | 93.4 |
OP + YZsel | 12 | 7 | -4 | 39 | 40 | 38 | 41 | 41 | 38 | 40 | 39 | 38 | 94.1 | 96.2 | 95.2 |
Pruned CART | |||||||||||||||
Confounder | 22 | 13 | -9 | 44 | 43 | 41 | 49 | 45 | 42 | 44 | 44 | 42 | 91.2 | 94.0 | 95.1 |
Treatment | 24 | 13 | -11 | 45 | 43 | 42 | 51 | 45 | 43 | 44 | 43 | 42 | 90.8 | 93.4 | 95.1 |
Outcome | 22 | 12 | -9 | 44 | 42 | 41 | 49 | 43 | 42 | 44 | 43 | 42 | 91.6 | 94.4 | 95.2 |
All | 32 | 16 | -16 | 44 | 44 | 42 | 54 | 47 | 44 | 43 | 43 | 42 | 87.8 | 92.8 | 93.9 |
Ysel | 23 | 12 | -11 | 43 | 42 | 41 | 49 | 44 | 43 | 43 | 43 | 42 | 91.1 | 93.9 | 94.8 |
YZsel | 29 | 15 | -14 | 41 | 40 | 40 | 50 | 43 | 42 | 41 | 41 | 40 | 91.0 | 92.7 | 95.5 |
OP + All | 9 | 4 | -5 | 37 | 37 | 36 | 38 | 37 | 36 | 37 | 37 | 35 | 94.6 | 94.8 | 94.0 |
OP + Ysel | 9 | 4 | -5 | 39 | 37 | 37 | 40 | 37 | 37 | 38 | 37 | 36 | 94.4 | 95.3 | 94.0 |
OP + YZsel | 9 | 6 | -4 | 34 | 35 | 33 | 35 | 35 | 34 | 35 | 34 | 33 | 95.8 | 94.5 | 93.1 |
Bagged CART | |||||||||||||||
Confounder | 5 | 6 | 1 | 46 | 44 | 43 | 46 | 45 | 43 | 47 | 47 | 45 | 95.4 | 95.6 | 95.2 |
Treatment | 9 | 7 | -2 | 43 | 41 | 39 | 44 | 42 | 39 | 45 | 44 | 42 | 95.4 | 95.9 | 96.1 |
Outcome | 6 | 6 | -1 | 38 | 38 | 37 | 39 | 38 | 37 | 43 | 43 | 41 | 97.0 | 97.2 | 97.5 |
All | 26 | 14 | -12 | 38 | 38 | 36 | 46 | 40 | 37 | 40 | 40 | 39 | 92.1 | 95.0 | 95.8 |
Ysel | 12 | 7 | -4 | 37 | 37 | 36 | 39 | 37 | 36 | 42 | 42 | 41 | 96.2 | 97.3 | 96.8 |
YZsel | 6 | 7 | 1 | 73 | 73 | 73 | 73 | 74 | 73 | 62 | 62 | 60 | 90.3 | 91.0 | 90.0 |
OP + All | 9 | 4 | -5 | 33 | 33 | 32 | 34 | 33 | 33 | 33 | 33 | 32 | 94.2 | 94.3 | 93.3 |
OP + Ysel | 9 | 4 | -4 | 33 | 33 | 33 | 35 | 33 | 33 | 34 | 33 | 32 | 94.2 | 94.0 | 93.3 |
OP + YZsel | 10 | 6 | -4 | 32 | 32 | 32 | 34 | 33 | 32 | 33 | 33 | 32 | 96.2 | 95.5 | 95.5 |
Random Forests | |||||||||||||||
Confounder | 13 | 10 | -4 | 39 | 38 | 38 | 41 | 39 | 38 | 42 | 42 | 41 | 95.6 | 96.4 | 96.3 |
Treatment | 15 | 9 | -6 | 38 | 38 | 36 | 41 | 39 | 36 | 41 | 40 | 39 | 95.1 | 96.0 | 96.0 |
Outcome | 14 | 9 | -5 | 35 | 35 | 34 | 38 | 36 | 35 | 40 | 40 | 39 | 96.4 | 97.0 | 97.2 |
All | 29 | 15 | -14 | 37 | 37 | 35 | 47 | 40 | 38 | 39 | 39 | 38 | 89.8 | 94.2 | 95.2 |
Ysel | 17 | 9 | -8 | 35 | 35 | 34 | 39 | 36 | 35 | 40 | 40 | 39 | 95.0 | 96.8 | 97.4 |
YZsel | -11 | -2 | 9 | 108 | 104 | 95 | 109 | 104 | 96 | 65 | 67 | 55 | 84.1 | 89.6 | 86.2 |
OP + All | 9 | 4 | -5 | 33 | 33 | 32 | 34 | 33 | 33 | 33 | 33 | 32 | 94.0 | 94.0 | 93.4 |
OP + Ysel | 8 | 4 | -4 | 33 | 33 | 33 | 34 | 33 | 33 | 34 | 33 | 32 | 94.8 | 94.4 | 93.6 |
OP + YZsel | 9 | 4 | -5 | 38 | 34 | 38 | 39 | 35 | 38 | 34 | 34 | 33 | 96.9 | 96.9 | 95.2 |
Estimators | Bias: 1 vs 2 | 1 vs 3 | 2 vs 3 | MCSD: 1 vs 2 | 1 vs 3 | 2 vs 3 | RMSE: 1 vs 2 | 1 vs 3 | 2 vs 3 | SE: 1 vs 2 | 1 vs 3 | 2 vs 3 | Coverage (%; modified): 1 vs 2 | 1 vs 3 | 2 vs 3
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Naive | 62 | 38 | -24 | 56 | 54 | 54 | 84 | 67 | 59 | 55 | 54 | 54 | 79.0 | 88.6 | 92.6
OAL | 13 | 6 | -7 | 50 | 47 | 48 | 52 | 48 | 48 | 56 | 55 | 55 | 96.1 | 97.2 | 97.3 |
LOGIS | |||||||||||||||
Confounder | 9 | 8 | -1 | 52 | 51 | 50 | 53 | 51 | 50 | 52 | 50 | 50 | 94.2 | 94.0 | 94.7 |
Treatment | 0 | 1 | 1 | 52 | 51 | 51 | 52 | 51 | 51 | 53 | 51 | 51 | 94.7 | 93.9 | 94.5 |
Outcome | 0 | 2 | 2 | 49 | 47 | 48 | 49 | 47 | 48 | 50 | 47 | 48 | 94.0 | 94.3 | 94.6 |
All | 48 | 30 | -18 | 54 | 53 | 53 | 73 | 61 | 56 | 55 | 54 | 54 | 86.2 | 91.2 | 93.8 |
Ysel | 8 | 1 | -7 | 49 | 47 | 48 | 50 | 47 | 49 | 57 | 56 | 56 | 96.9 | 97.8 | 97.4 |
YZsel | 21 | 13 | -7 | 56 | 52 | 53 | 59 | 54 | 53 | 55 | 54 | 53 | 93.2 | 94.1 | 94.8 |
OP + All | 14 | 4 | -10 | 46 | 45 | 45 | 48 | 45 | 47 | 55 | 54 | 54 | 97.0 | 98.0 | 97.8 |
OP + Ysel | 8 | 1 | -7 | 49 | 47 | 48 | 50 | 47 | 49 | 48 | 47 | 47 | 93.4 | 94.7 | 93.3 |
OP + YZsel | 11 | 2 | -9 | 49 | 46 | 46 | 50 | 46 | 47 | 47 | 45 | 45 | 93.6 | 94.0 | 95.6 |
CART | |||||||||||||||
Confounder | 23 | 16 | -7 | 71 | 68 | 69 | 75 | 70 | 70 | 71 | 70 | 69 | 93.7 | 94.6 | 94.7 |
Treatment | 22 | 15 | -6 | 72 | 69 | 70 | 75 | 71 | 70 | 74 | 72 | 72 | 94.2 | 94.3 | 94.8 |
Outcome | 23 | 15 | -8 | 70 | 68 | 68 | 74 | 69 | 68 | 73 | 72 | 72 | 94.0 | 95.4 | 95.5 |
All | 34 | 22 | -13 | 84 | 81 | 81 | 90 | 84 | 81 | 81 | 80 | 80 | 92.0 | 93.0 | 93.9 |
Ysel | 27 | 16 | -11 | 71 | 69 | 70 | 76 | 71 | 71 | 75 | 73 | 73 | 94.2 | 94.7 | 94.8 |
YZsel | 28 | 15 | -13 | 67 | 64 | 65 | 73 | 65 | 66 | 68 | 67 | 66 | 93.9 | 95.2 | 95.4 |
OP + All | 13 | 3 | -10 | 75 | 74 | 73 | 76 | 75 | 74 | 75 | 73 | 74 | 93.8 | 94.2 | 94.1 |
OP + Ysel | 14 | 5 | -9 | 65 | 64 | 65 | 67 | 64 | 65 | 67 | 65 | 65 | 94.8 | 95.0 | 94.4 |
OP + YZsel | 18 | 3 | -15 | 59 | 56 | 55 | 62 | 56 | 57 | 60 | 58 | 58 | 93.9 | 96.2 | 95.0 |
Pruned CART | |||||||||||||||
Confounder | 28 | 19 | -10 | 63 | 60 | 59 | 69 | 63 | 60 | 61 | 60 | 59 | 91.3 | 93.2 | 94.0 |
Treatment | 31 | 20 | -11 | 63 | 60 | 59 | 70 | 64 | 60 | 62 | 61 | 60 | 91.1 | 93.4 | 94.5 |
Outcome | 29 | 19 | -11 | 62 | 60 | 58 | 68 | 63 | 59 | 62 | 60 | 59 | 90.8 | 93.8 | 94.4 |
All | 40 | 26 | -15 | 65 | 63 | 61 | 76 | 68 | 63 | 62 | 61 | 60 | 88.6 | 92.0 | 93.3 |
Ysel | 31 | 20 | -12 | 62 | 59 | 60 | 70 | 63 | 61 | 62 | 61 | 60 | 91.1 | 93.2 | 93.8 |
YZsel | 32 | 20 | -13 | 60 | 57 | 56 | 68 | 60 | 58 | 60 | 59 | 57 | 91.8 | 93.7 | 93.9 |
OP + All | 12 | 3 | -9 | 53 | 52 | 52 | 55 | 52 | 52 | 54 | 52 | 52 | 95.0 | 94.2 | 94.4 |
OP + Ysel | 12 | 3 | -9 | 53 | 51 | 52 | 54 | 51 | 53 | 53 | 51 | 51 | 94.6 | 93.8 | 93.8 |
OP + YZsel | 14 | 3 | -11 | 52 | 49 | 48 | 54 | 49 | 49 | 51 | 49 | 49 | 94.5 | 96.6 | 95.2 |
Bagged CART | |||||||||||||||
Confounder | -11 | -6 | 5 | 76 | 70 | 70 | 76 | 71 | 70 | 74 | 71 | 69 | 94.0 | 95.4 | 94.0 |
Treatment | -1 | 0 | 0 | 66 | 63 | 61 | 66 | 63 | 61 | 67 | 65 | 63 | 94.8 | 95.6 | 95.2 |
Outcome | -3 | 0 | 2 | 63 | 60 | 58 | 63 | 60 | 58 | 67 | 65 | 63 | 95.6 | 95.9 | 96.6 |
All | 37 | 23 | -13 | 55 | 53 | 53 | 66 | 58 | 54 | 57 | 57 | 56 | 91.4 | 94.2 | 95.5 |
Ysel | 9 | 3 | -5 | 59 | 56 | 55 | 60 | 56 | 55 | 64 | 62 | 61 | 96.2 | 96.8 | 97.0 |
YZsel | -21 | -13 | 8 | 105 | 99 | 98 | 107 | 100 | 98 | 90 | 88 | 86 | 88.0 | 94.5 | 92.9 |
OP + All | 13 | 3 | -10 | 47 | 45 | 46 | 49 | 46 | 47 | 47 | 45 | 46 | 93.4 | 94.1 | 94.2 |
OP + Ysel | 12 | 3 | -10 | 47 | 46 | 46 | 48 | 46 | 47 | 47 | 45 | 46 | 93.8 | 93.8 | 93.8 |
OP + YZsel | 15 | 4 | -11 | 48 | 45 | 46 | 50 | 45 | 47 | 47 | 45 | 46 | 93.5 | 96.6 | 95.2 |
Random Forests | |||||||||||||||
Confounder | 11 | 8 | -3 | 60 | 57 | 57 | 61 | 58 | 57 | 63 | 61 | 60 | 95.0 | 95.4 | 95.6 |
Treatment | 20 | 13 | -7 | 55 | 53 | 54 | 59 | 55 | 54 | 59 | 58 | 57 | 94.5 | 96.0 | 95.5 |
Outcome | 18 | 13 | -6 | 53 | 50 | 51 | 56 | 52 | 51 | 59 | 58 | 57 | 95.8 | 96.4 | 97.2 |
All | 49 | 31 | -19 | 54 | 52 | 52 | 73 | 61 | 55 | 55 | 55 | 54 | 86.4 | 92.0 | 94.5 |
Ysel | 28 | 16 | -12 | 52 | 49 | 50 | 59 | 52 | 51 | 57 | 57 | 56 | 94.8 | 96.6 | 97.0 |
YZsel | -20 | -15 | 4 | 138 | 131 | 124 | 139 | 132 | 124 | 88 | 86 | 81 | 87.8 | 90.8 | 93.3 |
OP + All | 13 | 3 | -10 | 47 | 46 | 46 | 49 | 46 | 47 | 47 | 45 | 46 | 94.2 | 94.3 | 93.7 |
OP + Ysel | 12 | 2 | -9 | 47 | 46 | 46 | 48 | 46 | 47 | 47 | 45 | 46 | 94.0 | 94.3 | 94.3 |
OP + YZsel | 15 | 3 | -11 | 49 | 46 | 45 | 51 | 46 | 47 | 47 | 45 | 46 | 93.7 | 95.8 | 95.4 |
Estimators | Bias: 1 vs 2 | 1 vs 3 | 2 vs 3 | MCSD: 1 vs 2 | 1 vs 3 | 2 vs 3 | RMSE: 1 vs 2 | 1 vs 3 | 2 vs 3 | SE: 1 vs 2 | 1 vs 3 | 2 vs 3 | Coverage (%; modified): 1 vs 2 | 1 vs 3 | 2 vs 3
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Naive | 61 | 38 | -23 | 39 | 38 | 39 | 73 | 54 | 45 | 39 | 38 | 38 | 64.6 | 82.8 | 89.3
OAL | 3 | 2 | -1 | 35 | 33 | 33 | 35 | 33 | 33 | 39 | 39 | 38 | 97.2 | 97.4 | 97.4 |
LOGIS | |||||||||||||||
Confounder | 8 | 7 | -1 | 37 | 35 | 36 | 38 | 36 | 36 | 36 | 35 | 35 | 93.8 | 94.3 | 94.8 |
Treatment | 0 | 1 | 2 | 37 | 36 | 35 | 37 | 36 | 35 | 37 | 35 | 35 | 95.0 | 94.4 | 95.1 |
Outcome | -1 | 1 | 1 | 35 | 33 | 33 | 35 | 33 | 33 | 34 | 33 | 33 | 94.6 | 95.2 | 94.3 |
All | 36 | 24 | -13 | 38 | 37 | 38 | 52 | 44 | 40 | 39 | 38 | 38 | 85.9 | 91.3 | 93.4 |
Ysel | 3 | 0 | -3 | 35 | 33 | 33 | 35 | 33 | 34 | 39 | 39 | 38 | 97.2 | 97.9 | 97.2 |
YZsel | 10 | 8 | -2 | 39 | 37 | 38 | 40 | 38 | 38 | 38 | 37 | 37 | 93.6 | 94.6 | 94.6 |
OP + All | 8 | 3 | -5 | 33 | 32 | 32 | 34 | 32 | 33 | 39 | 38 | 38 | 97.3 | 98.0 | 97.9 |
OP + Ysel | 3 | 0 | -3 | 35 | 33 | 33 | 35 | 33 | 34 | 33 | 32 | 32 | 93.4 | 95.2 | 93.9 |
OP + YZsel | 5 | 2 | -3 | 34 | 32 | 33 | 35 | 32 | 33 | 34 | 32 | 32 | 94.1 | 95.3 | 94.0 |
CART | |||||||||||||||
Confounder | 20 | 13 | -7 | 49 | 48 | 48 | 53 | 50 | 49 | 51 | 50 | 49 | 94.7 | 95.1 | 95.2 |
Treatment | 21 | 15 | -6 | 51 | 49 | 50 | 55 | 52 | 50 | 53 | 52 | 51 | 93.9 | 94.9 | 95.6 |
Outcome | 20 | 14 | -6 | 50 | 49 | 49 | 54 | 51 | 50 | 53 | 52 | 51 | 94.5 | 94.6 | 95.3 |
All | 26 | 18 | -9 | 57 | 56 | 58 | 62 | 59 | 58 | 58 | 58 | 57 | 93.3 | 94.5 | 94.8 |
Ysel | 22 | 13 | -9 | 51 | 50 | 50 | 55 | 51 | 51 | 53 | 52 | 52 | 94.4 | 95.9 | 95.1 |
YZsel | 17 | 11 | -6 | 46 | 46 | 46 | 49 | 47 | 46 | 48 | 48 | 47 | 94.6 | 95.0 | 94.5 |
OP + All | 8 | 3 | -4 | 52 | 50 | 52 | 53 | 50 | 52 | 53 | 51 | 51 | 94.8 | 94.4 | 94.2 |
OP + Ysel | 11 | 4 | -7 | 47 | 45 | 46 | 48 | 45 | 47 | 47 | 45 | 46 | 94.6 | 95.0 | 94.1 |
OP + YZsel | 11 | 4 | -6 | 42 | 40 | 40 | 43 | 40 | 41 | 42 | 40 | 40 | 93.6 | 95.3 | 93.9 |
Pruned CART | |||||||||||||||
Confounder | 24 | 16 | -8 | 42 | 41 | 41 | 48 | 44 | 42 | 43 | 42 | 41 | 90.6 | 93.2 | 94.2 |
Treatment | 25 | 16 | -9 | 43 | 41 | 42 | 50 | 44 | 42 | 43 | 42 | 41 | 91.0 | 93.0 | 94.0 |
Outcome | 25 | 16 | -9 | 42 | 41 | 41 | 49 | 44 | 42 | 42 | 42 | 41 | 91.0 | 93.0 | 94.2 |
All | 32 | 20 | -12 | 44 | 42 | 42 | 54 | 47 | 44 | 43 | 42 | 41 | 88.1 | 91.8 | 93.9 |
Ysel | 26 | 17 | -9 | 42 | 41 | 40 | 49 | 44 | 42 | 42 | 41 | 41 | 90.5 | 93.2 | 94.7 |
YZsel | 22 | 14 | -8 | 41 | 40 | 40 | 47 | 43 | 41 | 41 | 41 | 40 | 92.0 | 93.8 | 94.8 |
OP + All | 7 | 1 | -5 | 37 | 35 | 36 | 38 | 35 | 36 | 36 | 35 | 35 | 94.2 | 94.8 | 93.7 |
OP + Ysel | 7 | 2 | -5 | 37 | 35 | 35 | 37 | 35 | 35 | 36 | 35 | 35 | 94.4 | 95.2 | 94.2 |
OP + YZsel | 6 | 1 | -4 | 36 | 34 | 34 | 36 | 34 | 34 | 35 | 34 | 34 | 94.3 | 95.7 | 94.0 |
Bagged CART | |||||||||||||||
Confounder | -14 | -7 | 7 | 53 | 50 | 47 | 54 | 50 | 48 | 53 | 51 | 48 | 94.2 | 95.8 | 94.8 |
Treatment | -4 | -1 | 3 | 46 | 44 | 42 | 46 | 44 | 42 | 48 | 46 | 44 | 95.3 | 95.3 | 95.5 |
Outcome | -6 | -3 | 3 | 44 | 41 | 40 | 44 | 41 | 40 | 47 | 46 | 44 | 96.2 | 96.9 | 96.7 |
All | 26 | 17 | -9 | 38 | 37 | 37 | 46 | 40 | 38 | 41 | 40 | 39 | 92.2 | 94.4 | 95.7 |
Ysel | 2 | 0 | -2 | 42 | 40 | 38 | 42 | 40 | 38 | 46 | 44 | 43 | 96.9 | 97.0 | 97.6 |
YZsel | -35 | -21 | 14 | 70 | 65 | 66 | 78 | 69 | 68 | 64 | 62 | 60 | 87.9 | 91.9 | 91.5 |
OP + All | 7 | 2 | -5 | 34 | 32 | 33 | 34 | 32 | 33 | 34 | 32 | 32 | 94.0 | 94.8 | 94.0 |
OP + Ysel | 6 | 2 | -4 | 34 | 32 | 33 | 35 | 32 | 33 | 34 | 32 | 32 | 94.3 | 94.9 | 94.3 |
OP + YZsel | 8 | 3 | -4 | 34 | 32 | 33 | 34 | 32 | 33 | 33 | 32 | 32 | 93.4 | 95.3 | 93.8 |
Random Forests | |||||||||||||||
Confounder | 9 | 8 | -1 | 41 | 39 | 40 | 42 | 40 | 40 | 44 | 43 | 42 | 95.9 | 96.2 | 96.2 |
Treatment | 17 | 12 | -5 | 38 | 37 | 37 | 42 | 39 | 38 | 41 | 41 | 40 | 95.5 | 95.2 | 96.2 |
Outcome | 16 | 11 | -5 | 36 | 35 | 35 | 40 | 36 | 36 | 41 | 41 | 40 | 96.0 | 96.8 | 97.3 |
All | 45 | 29 | -16 | 37 | 36 | 37 | 58 | 46 | 40 | 39 | 39 | 38 | 80.8 | 89.6 | 93.9 |
Ysel | 23 | 14 | -9 | 36 | 34 | 35 | 42 | 37 | 36 | 40 | 40 | 39 | 94.2 | 96.2 | 97.2 |
YZsel | -34 | -25 | 8 | 87 | 79 | 77 | 94 | 83 | 77 | 63 | 60 | 55 | 86.6 | 89.7 | 92.7 |
OP + All | 6 | 2 | -4 | 34 | 32 | 33 | 34 | 32 | 33 | 34 | 32 | 32 | 94.3 | 95.3 | 94.3 |
OP + Ysel | 6 | 2 | -4 | 34 | 32 | 33 | 35 | 32 | 33 | 34 | 32 | 33 | 94.4 | 95.0 | 94.4 |
OP + YZsel | 7 | 3 | -4 | 34 | 32 | 32 | 34 | 32 | 33 | 33 | 32 | 32 | 93.8 | 95.4 | 94.3 |
Estimators | Bias: 1 vs 2 | 1 vs 3 | 2 vs 3 | MCSD: 1 vs 2 | 1 vs 3 | 2 vs 3 | RMSE: 1 vs 2 | 1 vs 3 | 2 vs 3 | SE: 1 vs 2 | 1 vs 3 | 2 vs 3 | Coverage (%; modified): 1 vs 2 | 1 vs 3 | 2 vs 3
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Naive | -217 | -38 | 179 | 39 | 46 | 45 | 220 | 60 | 185 | 37 | 38 | 36 | 0 | 78.6 | 0.5
OAL | -24 | -14 | 10 | 40 | 47 | 40 | 47 | 49 | 41 | 38 | 46 | 41 | 89 | 93.6 | 95.2 |
LOGIS | |||||||||||||||
Confounder | -3 | 3 | 6 | 36 | 46 | 42 | 36 | 46 | 42 | 36 | 46 | 42 | 94.7 | 95.1 | 94.8 |
Treatment | -3 | 4 | 7 | 46 | 53 | 44 | 46 | 53 | 45 | 43 | 52 | 44 | 93.3 | 94.6 | 94.7 |
Outcome | -3 | 2 | 5 | 34 | 43 | 40 | 34 | 43 | 40 | 34 | 43 | 40 | 94.5 | 95.0 | 95.2 |
All | -84 | -61 | 23 | 38 | 46 | 42 | 92 | 76 | 48 | 40 | 50 | 46 | 45.2 | 78.6 | 94.0 |
Ysel | -16 | -7 | 10 | 40 | 47 | 41 | 44 | 48 | 42 | 39 | 48 | 42 | 93.0 | 94.5 | 95.3 |
YZsel | -13 | -4 | 8 | 41 | 49 | 42 | 43 | 49 | 43 | 40 | 49 | 43 | 93.3 | 94.6 | 95.3 |
OP + All | -62 | -43 | 19 | 33 | 43 | 40 | 70 | 61 | 44 | 40 | 50 | 46 | 69.6 | 89.4 | 95.9 |
OP + Ysel | -16 | -7 | 10 | 40 | 47 | 41 | 44 | 48 | 42 | 39 | 48 | 42 | 93.0 | 94.5 | 95.3 |
OP + YZsel | -15 | -5 | 9 | 40 | 47 | 41 | 42 | 47 | 42 | 38 | 47 | 42 | 93.2 | 94.7 | 95.0 |
CART | |||||||||||||||
Confounder | -54 | -35 | 19 | 46 | 61 | 56 | 71 | 70 | 59 | 48 | 61 | 56 | 82.6 | 91.4 | 93.6 |
Treatment | -98 | -69 | 29 | 55 | 66 | 61 | 112 | 95 | 67 | 54 | 65 | 61 | 57.0 | 82.1 | 92.8 |
Outcome | -66 | -45 | 21 | 48 | 62 | 56 | 82 | 76 | 60 | 50 | 63 | 59 | 76.3 | 89.6 | 94.6 |
All | -131 | -89 | 42 | 61 | 76 | 71 | 145 | 117 | 82 | 60 | 73 | 69 | 40.8 | 76.0 | 89.8 |
Ysel | -100 | -69 | 31 | 52 | 66 | 63 | 112 | 95 | 70 | 54 | 66 | 62 | 55.1 | 82.3 | 91.6 |
YZsel | -85 | -59 | 26 | 51 | 63 | 59 | 99 | 87 | 64 | 52 | 64 | 59 | 64.1 | 85.2 | 93.2 |
OP + All | -71 | -45 | 26 | 50 | 68 | 64 | 86 | 82 | 69 | 51 | 67 | 63 | 74.8 | 90.2 | 92.1 |
OP + Ysel | -61 | -40 | 22 | 45 | 60 | 57 | 76 | 72 | 61 | 46 | 59 | 56 | 74.9 | 89.6 | 92.2 |
OP + YZsel | -57 | -37 | 20 | 43 | 57 | 54 | 71 | 68 | 58 | 44 | 57 | 53 | 77.7 | 91.1 | 93.2 |
Pruned CART | |||||||||||||||
Confounder | -80 | -57 | 24 | 50 | 54 | 48 | 94 | 78 | 54 | 43 | 52 | 48 | 54.4 | 79.1 | 92.2 |
Treatment | -117 | -85 | 33 | 53 | 59 | 52 | 129 | 103 | 62 | 48 | 57 | 52 | 32.4 | 65.2 | 90.8 |
Outcome | -93 | -65 | 28 | 49 | 53 | 47 | 105 | 84 | 55 | 43 | 52 | 48 | 42.5 | 74.8 | 91.2 |
All | -147 | -106 | 41 | 49 | 55 | 51 | 155 | 119 | 66 | 45 | 53 | 50 | 12.0 | 47.5 | 86.0 |
Ysel | -122 | -87 | 35 | 50 | 54 | 50 | 132 | 102 | 61 | 45 | 54 | 50 | 26.0 | 62.5 | 88.5 |
YZsel | -109 | -78 | 32 | 51 | 56 | 50 | 121 | 96 | 59 | 46 | 54 | 50 | 35.7 | 69.0 | 90.6 |
OP + All | -76 | -52 | 24 | 37 | 48 | 44 | 84 | 71 | 50 | 36 | 47 | 44 | 44.5 | 78.4 | 90.8 |
OP + Ysel | -67 | -46 | 21 | 37 | 47 | 44 | 76 | 66 | 49 | 37 | 47 | 44 | 55.2 | 82.3 | 91.7 |
OP + YZsel | -63 | -42 | 21 | 38 | 48 | 45 | 73 | 64 | 49 | 37 | 47 | 44 | 61.0 | 84.8 | 92.3 |
Bagged CART | |||||||||||||||
Confounder | 67 | 63 | -4 | 56 | 69 | 60 | 87 | 93 | 60 | 54 | 68 | 59 | 70.5 | 81.9 | 95.3 |
Treatment | -33 | -18 | 15 | 55 | 63 | 55 | 65 | 66 | 57 | 54 | 63 | 55 | 90.9 | 94.4 | 95.3 |
Outcome | 17 | 20 | 3 | 46 | 55 | 49 | 49 | 59 | 49 | 48 | 59 | 52 | 93.6 | 95.5 | 97.0 |
All | -127 | -90 | 37 | 39 | 48 | 45 | 133 | 102 | 58 | 42 | 50 | 47 | 11.8 | 56.6 | 89.3 |
Ysel | -50 | -31 | 19 | 51 | 56 | 48 | 72 | 64 | 52 | 48 | 57 | 51 | 80.6 | 91.6 | 94.7 |
YZsel | -7 | 4 | 10 | 64 | 68 | 56 | 64 | 68 | 57 | 55 | 65 | 56 | 90.8 | 93.3 | 95.4 |
OP + All | -60 | -40 | 20 | 33 | 43 | 40 | 68 | 59 | 44 | 34 | 43 | 40 | 60.0 | 85.0 | 92.2 |
OP + Ysel | -54 | -36 | 18 | 33 | 42 | 39 | 63 | 56 | 43 | 34 | 43 | 40 | 66.8 | 86.2 | 93.2 |
OP + YZsel | -52 | -34 | 17 | 33 | 43 | 39 | 61 | 55 | 43 | 34 | 43 | 40 | 68.2 | 86.9 | 93.2 |
Random Forests | |||||||||||||||
Confounder | 42 | 39 | -3 | 48 | 57 | 50 | 64 | 69 | 50 | 47 | 59 | 52 | 83.4 | 89.7 | 95.4 |
Treatment | -61 | -43 | 18 | 42 | 50 | 46 | 74 | 66 | 49 | 44 | 53 | 48 | 72.8 | 89.3 | 94.8 |
Outcome | -24 | -14 | 10 | 36 | 46 | 42 | 43 | 48 | 43 | 41 | 52 | 47 | 94.6 | 96.7 | 96.8 |
All | -149 | -107 | 42 | 36 | 45 | 44 | 153 | 116 | 60 | 39 | 48 | 46 | 1.8 | 37.5 | 86.4 |
Ysel | -83 | -59 | 25 | 40 | 46 | 42 | 93 | 75 | 49 | 41 | 50 | 47 | 45.9 | 80.6 | 93.7 |
YZsel | -25 | -14 | 11 | 66 | 65 | 48 | 70 | 66 | 49 | 45 | 55 | 49 | 79.1 | 90.6 | 95.3 |
OP + All | -42 | -27 | 15 | 34 | 44 | 40 | 54 | 52 | 42 | 35 | 44 | 40 | 79.2 | 90.2 | 93.3 |
OP + Ysel | -44 | -30 | 14 | 34 | 43 | 39 | 56 | 52 | 42 | 34 | 43 | 40 | 76.6 | 89.0 | 94.0 |
OP + YZsel | -43 | -29 | 14 | 34 | 43 | 39 | 55 | 52 | 42 | 34 | 43 | 40 | 76.8 | 89.4 | 94.0 |
Category | Total | | | |
---|---|---|---|---|---
Circulatory system | 130 | 8 | 8 | 2 | Congestive heart failure (CHF) NOS, congestive heart failure; nonhypertensive.
Congenital anomalies | 13 | 0 | 0 | 0 |
Dermatologic | 51 | 1 | 2 | 0 |
Digestive | 96 | 2 | 2 | 0 |
Endocrine/metabolic | 95 | 4 | 6 | 1 | Type 2 diabetes.
Genitourinary | 79 | 7 | 6 | 0 |
Hematopoietic | 35 | 5 | 5 | 1 | Lymphadenitis.
Infectious diseases | 32 | 1 | 0 | 0 |
Injuries and poisonings | 72 | 2 | 1 | 0 |
Mental disorders | 42 | 3 | 5 | 2 |
Musculoskeletal | 82 | 0 | 7 | 0 |
Neoplasms | 89 | 8 | 15 | 4 |
Neurological | 46 | 2 | 1 | 0 |
Respiratory | 64 | 6 | 4 | 1 | Abnormal findings examination of lungs.
Sense organ | 83 | 0 | 2 | 0 |
Symptoms | 36 | 5 | 5 | 1 | Nausea and vomiting.
Category | Total | | | |
---|---|---|---|---|---
Circulatory system | 130 | 10 | 13 | 3 |
Congenital anomalies | 13 | 0 | 0 | 0 |
Dermatologic | 51 | 1 | 3 | 0 |
Digestive | 96 | 2 | 5 | 0 |
Endocrine/metabolic | 95 | 3 | 6 | 1 | Type 2 diabetes.
Genitourinary | 79 | 5 | 8 | 0 |
Hematopoietic | 35 | 3 | 5 | 1 | Lymphadenitis.
Infectious diseases | 32 | 1 | 1 | 0 |
Injuries and poisonings | 72 | 2 | 2 | 0 |
Mental disorders | 42 | 1 | 6 | 1 | Tobacco use disorder.
Musculoskeletal | 82 | 2 | 9 | 0 |
Neoplasms | 89 | 5 | 16 | 3 |
Neurological | 46 | 3 | 2 | 0 |
Respiratory | 64 | 6 | 6 | 1 | Abnormal findings examination of lungs.
Sense organ | 83 | 0 | 2 | 0 |
Symptoms | 36 | 4 | 6 | 1 | Malaise and fatigue.
Category | Total | | | |
---|---|---|---|---|---
Circulatory system | 130 | 7 | 10 | 2 | Congestive heart failure (CHF) NOS, congestive heart failure; nonhypertensive.
Congenital anomalies | 13 | 0 | 0 | 0 |
Dermatologic | 51 | 0 | 3 | 0 |
Digestive | 96 | 2 | 5 | 0 |
Endocrine/metabolic | 95 | 6 | 5 | 0 |
Genitourinary | 79 | 10 | 9 | 0 |
Hematopoietic | 35 | 6 | 5 | 1 | Lymphadenitis.
Infectious diseases | 32 | 2 | 0 | 0 |
Injuries and poisonings | 72 | 2 | 1 | 0 |
Mental disorders | 42 | 2 | 5 | 0 |
Musculoskeletal | 82 | 1 | 8 | 0 |
Neoplasms | 89 | 11 | 15 | 6 |
Neurological | 46 | 2 | 1 | 0 |
Respiratory | 64 | 6 | 5 | 0 |
Sense organ | 83 | 1 | 2 | 0 |
Symptoms | 36 | 5 | 5 | 1 | Nausea and vomiting.
Category | Total | | | |
---|---|---|---|---|---
Circulatory system | 130 | 10 | 14 | 2 | Atrial fibrillation and flutter, congestive heart failure (CHF) NOS.
Congenital anomalies | 13 | 0 | 0 | 0 |
Dermatologic | 51 | 4 | 3 | 1 | Chronic ulcer of skin.
Digestive | 96 | 4 | 7 | 0 |
Endocrine/metabolic | 95 | 6 | 6 | 1 | Type 2 diabetes.
Genitourinary | 79 | 12 | 10 | 0 |
Hematopoietic | 35 | 4 | 5 | 1 | Lymphadenitis.
Infectious diseases | 32 | 1 | 1 | 0 |
Injuries and poisonings | 72 | 4 | 2 | 0 |
Mental disorders | 42 | 1 | 6 | 0 |
Musculoskeletal | 82 | 2 | 9 | 0 |
Neoplasms | 89 | 12 | 15 | 6 |
Neurological | 46 | 5 | 2 | 0 |
Respiratory | 64 | 7 | 7 | 2 | Abnormal findings examination of lungs, pneumonia.
Sense organ | 83 | 2 | 2 | 0 |
Symptoms | 36 | 6 | 6 | 2 | Malaise and fatigue, nausea and vomiting.
Supplementary Figures








