Data-Adaptive Identification of Effect Modifiers through Stochastic Shift Interventions and Cross-Validated Targeted Learning
Abstract
In epidemiology, identifying subpopulations that are particularly vulnerable to exposures and those who may benefit differently from exposure-reducing interventions is essential. Factors such as age, gender-specific vulnerabilities, and physiological states such as pregnancy are critical for policymakers when setting regulatory guidelines. However, current semi-parametric methods for estimating heterogeneous treatment effects are often limited to binary exposures and can function as black boxes, lacking clear, interpretable rules for subpopulation-specific policy interventions. This study introduces a novel method that uses cross-validated targeted minimum loss-based estimation (TMLE) paired with a data-adaptive target parameter strategy to identify subpopulations with the most significant differential impact of simulated policy interventions that reduce exposure. Our approach is assumption-lean, allowing for the integration of machine learning while still yielding valid confidence intervals. We demonstrate the robustness of our methodology through simulations and application to data from the National Health and Nutrition Examination Survey. Our analysis of NHANES data on persistent organic pollutants (POPs) and leukocyte telomere length (LTL) identified age as a significant effect modifier. Specifically, we found that exposure to 3,3’,4,4’,5-pentachlorobiphenyl (PCNB) consistently had a differential impact on LTL, with a one-standard deviation reduction in exposure leading to a more pronounced increase in LTL among younger populations compared to older ones. We offer our method as an open-source software package, EffectXshift, enabling researchers to investigate the effect modification of continuous exposures. The EffectXshift package provides clear and interpretable results, informing targeted public health interventions and policy decisions.
1 Introduction
Environmental risk factors—including pollutants and social determinants—pose significant health risks, but their impact is not uniformly distributed across populations. Certain subpopulations—such as children, pregnant women, the elderly, and socioeconomically disadvantaged communities—are disproportionately exposed to harmful conditions, leading to pronounced health disparities Balbus and Malina (2009); Ruiz et al. (2018); Zorzetto et al. (2024a). For instance, studies have shown that African Americans, Latinos, and low-income individuals are more likely to experience higher exposures to environmental pollutants like air pollution and endocrine-disrupting compounds (EDCs) such as polychlorinated biphenyls (PCBs) and bisphenol A (BPA), as well as adverse social conditions, all of which are linked to metabolic disorders and diabetes Ruiz et al. (2018). Factors contributing to these disparities include dietary patterns, use of certain consumer products, residential proximity to pollution sources, and social determinants like socioeconomic status and access to healthcare Makri and Stilianakis (2008).
Identifying these vulnerable subpopulations is crucial for designing targeted public health interventions and policies aimed at reducing exposure and mitigating health risks. Effect modification—the phenomenon where the effect of an exposure on an outcome differs across levels of another variable—is a key concept in epidemiological research for uncovering such differential impacts Rothman et al. (2008). Traditional methods for assessing effect modification, such as stratified analyses, interaction terms in regression models, and generalized additive models (GAMs), have been widely used Samoli et al. (2011); Analitis et al. (2014). For example, Samoli et al. (2011) examined how air pollution affects pediatric asthma exacerbations by stratifying analyses by age, sex, and season. However, these conventional methods have limitations when dealing with complex environmental exposures, which are often continuous, multidimensional, and may act synergistically as mixtures Gibson et al. (2019).
While Bayesian methods such as Bayesian profile regression and confounder-dependent mixture modeling have shown promise in identifying subpopulations susceptible to multipollutant exposures Coker et al. (2018); Zorzetto et al. (2024b), they can be computationally intensive for very large datasets and may require specialized expertise to implement and interpret. Similarly, meta-learning methods for estimating heterogeneous treatment effects, like the T-learner, S-learner, and X-learner Künzel et al. (2019), while powerful, often focus on binary treatments and may lack the interpretability desired in environmental studies dealing with continuous exposures. Our approach aims to address these challenges by providing a computationally efficient, interpretable method that naturally handles continuous exposures and leverages machine learning techniques within a causal inference framework.
To address these challenges, we propose a novel, data-adaptive method that leverages causal inference techniques with stochastic shift interventions to identify subpopulations most vulnerable to harmful exposures. Stochastic shift interventions modify the distribution of exposures rather than setting them to fixed values, making them particularly relevant for environmental policies aiming to reduce pollutant levels across populations Muñoz and van der Laan (2012); Kennedy (2019). By integrating targeted maximum likelihood estimation (TMLE) with machine learning, our approach allows for flexible modeling of complex exposure-outcome relationships while providing valid statistical inference van der Laan and Rose (2011).
To illustrate the practical significance of our method, consider exposure to per- and polyfluoroalkyl substances (PFAS), a group of pollutants found in various consumer products and associated with adverse health outcomes such as thyroid cancer Jain (2015). Suppose that we have biomarker data on PFAS levels from blood samples, thyroid cancer outcomes, and baseline covariates such as age, body mass index (BMI), use of consumer products, and other relevant factors. Our study aims to inform analysts and policymakers about which pollutants in the PFAS mixture would have the largest differential impact if reduced in certain subpopulations. For example, we might find that reducing exposure to perfluorononanoic acid (PFNA) by 5 parts per billion (ppb) leads to a 12% decrease in thyroid-related diseases among pregnant women aged 18–29, compared to a 3% decrease in other individuals. Such insights enable the development of targeted interventions and policies that focus on the most vulnerable groups and the most impactful exposures.
Our method achieves two key objectives: (i) identifying the exposure-covariate relationships where an intervention would have the maximal differential impact, and (ii) estimating the effects of such interventions within the identified subpopulations using an assumption-lean estimator. This approach enhances interpretability by providing clear rules for subpopulation-specific interventions and addresses limitations of existing methods that function as black boxes.
This paper is structured as follows. Section 2 introduces stochastic shift interventions, providing the foundational concepts and notation. In Section 3, we discuss our oracle target parameter, which defines the maximal effect modifier relationship. Section 3.3 describes the recursive partitioning algorithm designed to estimate this oracle parameter, identifying subregions within the covariates that exhibit the maximal differential impact of a stochastic intervention. Section 4 presents simulations of a mixed exposure scenario, and Section 5 shows the identification and accurate estimation of the maximal effect modifier when applying our method to simulated data. In Section 6.1, we apply our approach to NHANES data to explore the effects of mixed POPs on telomere length, assessing possible effect modification. We close with a conclusion in Section 7. The code for our method, along with the applications and simulations, is available on GitHub in our open-source package called EffectXshift.
2 Defining Stochastic Shift Interventions
Consider an experiment where an exposure variable $A$, an outcome $Y$, and a set of covariates $W$ are measured for $n$ randomly sampled units, with $O = (W, A, Y)$ denoting a random variable drawn from a distribution $P_0$. Let $O_1, \ldots, O_n$ be i.i.d. observations of $O$.
We posit a Nonparametric Structural Equation Model (NPSEM) that encodes the data-generating process:
$$W = f_W(U_W), \qquad A = f_A(W, U_A), \qquad Y = f_Y(A, W, U_Y),$$
where $U = (U_W, U_A, U_Y)$ are exogenous random variables and $(f_W, f_A, f_Y)$ are deterministic functions. Under this model, the observed data distribution factors as:
$$p_0(o) = p_0(y \mid a, w)\, p_0(a \mid w)\, p_0(w).$$
Define a function
$$\bar{Q}(a, w) = \mathbb{E}[Y \mid A = a, W = w],$$
and use $g(a \mid w)$ as shorthand for the conditional density of $A$ given $W = w$. Let $Q = (\bar{Q}, q_W)$, and let $q_W$ denote the density of $W$.
We now consider a stochastic shift intervention that modifies the exposure mechanism. Instead of drawing $A$ from $g(\cdot \mid w)$ as originally defined, we define a shifted exposure distribution:
$$g_\delta(a \mid w) = g(a - \delta \mid w),$$
where $\delta$ characterizes the magnitude and direction of the shift. Under this modified NPSEM, each unit's exposure is effectively shifted by $\delta$ units, and the corresponding counterfactual outcome $Y_{A+\delta}$ is the outcome that would have been observed if the unit's exposure had been drawn from the shifted distribution $g_\delta$.
Our parameter of interest is:
$$\psi_0(\delta) = \mathbb{E}_{P_0}\big[Y_{A+\delta}\big],$$
the expected outcome under the stochastic shift intervention.
Identification of $\psi_0(\delta)$ from observed data relies on standard causal conditions:
Exchangeability (No Unmeasured Confounding)
Under exchangeability, $Y_a \perp A \mid W$ for all $a$. Given $W$, the observed assignment of $A$ can be viewed as if it were randomized. This assumption ensures that the conditional associations we observe can be interpreted as reflecting causal effects of the exposure on the outcome.
Positivity
The positivity condition requires that the shift $\delta$ is chosen so that the shifted exposure levels remain within the support of $A$ for each covariate stratum. Let $l(w)$ and $u(w)$ be the minimum and maximum values of $A$ attainable for units with $W = w$. To maintain positivity, we must ensure that for all $a$ in the support of $g(\cdot \mid w)$, the point $a + \delta$ also lies within the interval $[l(w), u(w)]$. If $a + \delta$ falls outside this support, then $g(a + \delta \mid w) = 0$ for those values, violating positivity and preventing identification of $\psi_0(\delta)$.
In practice, this means selecting $\delta$ so that the shifted exposure remains in the feasible range defined by $l(w)$ and $u(w)$. When this is satisfied, there exists an $\epsilon > 0$ such that
$$g(a + \delta \mid w) > \epsilon$$
for all relevant $(a, w)$, ensuring that the stochastic shift intervention is well-defined and that the resulting counterfactual distribution is identifiable from the observed data.
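To make this feasibility check concrete, the following minimal sketch (not part of EffectXshift; the function name, data, and strata are illustrative assumptions) computes, within each covariate stratum, the fraction of observations whose shifted exposure would remain inside the observed exposure range. Values near one suggest the chosen shift is close to feasible; low values indicate the shift should be attenuated.

```python
import numpy as np
import pandas as pd

def fraction_in_support(a, strata, delta):
    """For each covariate stratum, the fraction of shifted exposures a + delta
    that remain within the stratum-specific observed range [min(a), max(a)]."""
    df = pd.DataFrame({"a": np.asarray(a, dtype=float), "s": np.asarray(strata)})
    out = {}
    for key, grp in df.groupby("s"):
        lo, hi = grp["a"].min(), grp["a"].max()
        shifted = grp["a"] + delta
        out[key] = float(((shifted >= lo) & (shifted <= hi)).mean())
    return out

# Illustrative check: a downward shift of 0.5 within a binary stratum.
rng = np.random.default_rng(0)
stratum = rng.integers(0, 2, 1000)
exposure = rng.lognormal(mean=1.0 + 0.2 * stratum, sigma=0.5)
print(fraction_in_support(exposure, stratum, delta=-0.5))
```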
Interpretation
Intuitively, $\psi_0(\delta)$ represents the expected outcome if we replaced the original exposure distribution with one shifted by $\delta$. This parameter captures the causal effect of a hypothetical population-level intervention that systematically reduces exposure levels. By estimating $\psi_0(\delta)$, we can assess the global impact of such an intervention and compare effects across subpopulations defined by $W$, thereby guiding targeted interventions and informing policy decisions. Because the estimation relies only on observational data and these well-defined causal assumptions (exchangeability and positivity), it allows us to infer the potential effects of interventions that have not been experimentally evaluated in practice.
2.1 Efficient Estimation and Inference for Stochastic Interventions
To estimate the causal effect of stochastic interventions effectively, we aim to construct a semiparametric efficient estimator. Such an estimator achieves the smallest possible asymptotic variance among regular asymptotically linear (RAL) estimators, satisfying:
$$\psi_n - \psi_0(\delta) = \frac{1}{n}\sum_{i=1}^{n} D^*(P_0)(O_i) + o_P(n^{-1/2}), \qquad (1)$$
where $D^*(P_0)$ is the efficient influence function and $\frac{1}{n}\sum_{i=1}^{n}$ denotes the empirical mean. By the Central Limit Theorem, RAL estimators achieve the asymptotic variance bound $\mathrm{Var}\{D^*(P_0)(O)\}$, facilitating valid statistical inference.
In the context of our stochastic shift intervention, the efficient influence function of $\psi(\delta)$ at a distribution $P$ is given by:
$$D^*(P)(o) = \frac{g(a - \delta \mid w)}{g(a \mid w)}\big\{y - \bar{Q}(a, w)\big\} + \bar{Q}(a + \delta, w) - \psi(\delta). \qquad (2)$$
Several estimators can be employed for estimating our stochastic shift parameter, including inverse probability weighting (IPW), augmented inverse probability weighting (AIPW), and targeted maximum likelihood estimation (TMLE). However, IPW estimators can be inefficient and sensitive to model misspecification, and AIPW estimators may not respect the bounds of the outcome, potentially resulting in greater variability. Therefore, we choose TMLE to estimate our parameter due to its efficiency and robustness in finite samples Li et al. (2022).
Targeted learning begins with initial estimates of the outcome regression $\bar{Q}_n$ and the exposure density $g_n$, which can be obtained through flexible machine learning techniques. These initial estimates are then updated using a clever covariate based on the efficient influence function, $H_n(a, w) = g_n(a - \delta \mid w) / g_n(a \mid w)$. The refinement process involves fitting a parametric fluctuation model that targets the parameter of interest, yielding a final estimate $\bar{Q}_n^*$ that is efficient and less biased. The updated estimate is then used to compute the TMLE of $\psi(\delta)$, and the influence function is utilized to derive confidence intervals.
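The sketch below illustrates one such targeting step for a continuous outcome, assuming the initial regressions and density ratios have already been computed. It is a minimal illustration in Python, not the EffectXshift interface; the choice of evaluating the clever covariate at the shifted exposure when updating the shifted predictions follows common shift-TMLE implementations and is an assumption here.

```python
import numpy as np
import statsmodels.api as sm

def tmle_shift_update(y, q_obs, q_shift, h_obs, h_shift):
    """One targeting (fluctuation) step for the shift parameter E[Y_{A+delta}].

    y        observed continuous outcomes, rescaled to (0, 1) below
    q_obs    initial predictions Qbar_n(A_i, W_i)
    q_shift  initial predictions Qbar_n(A_i + delta, W_i)
    h_obs    clever covariate g_n(A_i - delta | W_i) / g_n(A_i | W_i)
    h_shift  clever covariate at the shifted exposure,
             g_n(A_i | W_i) / g_n(A_i + delta | W_i)
    """
    lo, hi = y.min(), y.max()
    scale = lambda v: np.clip((v - lo) / (hi - lo), 1e-4, 1 - 1e-4)
    logit = lambda p: np.log(p / (1 - p))
    expit = lambda x: 1 / (1 + np.exp(-x))
    y_s, qo_s, qs_s = scale(y), scale(q_obs), scale(q_shift)

    # Logistic fluctuation: regress scaled Y on the clever covariate with
    # offset logit(Qbar_n) and no intercept; eps is the fluctuation parameter.
    fit = sm.GLM(y_s, h_obs.reshape(-1, 1), family=sm.families.Binomial(),
                 offset=logit(qo_s)).fit()
    eps = fit.params[0]

    # Targeted predictions under the shifted exposure, back on Y's scale.
    q_shift_star = expit(logit(qs_s) + eps * h_shift) * (hi - lo) + lo
    return q_shift_star.mean(), q_shift_star
```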
3 Causal Subpopulation Intervention Effects
Our primary objective in this paper is not only to estimate the overall effects of environmental exposures but also to discern how these effects differ across various subpopulations. Within the framework of stochastic shift interventions, our focus is on identifying specific types of individuals from baseline characteristics that change the impact of policy interventions, such as reducing exposure to pollutants. Unlike the previously described stochastic shift parameter, which averages effects across the entire population, we aim to define our causal parameter for particular subgroups. This allows us to pinpoint the subpopulations where the intervention yields the maximum differential effect, thereby providing more targeted and effective public health strategies.
3.1 Oracle Parameter for Maximal Difference in Expected Outcomes in Subregions
The oracle parameter, denoted $\psi^{\text{oracle}}$, represents the maximal differential impact of an intervention in a specific subregion of the covariate space. It is defined as follows:
$$\psi^{\text{oracle}} = \max_{j,\, V}\Big\{\mathbb{E}\big[Y_{A_j + \delta_j} - Y \mid W \in V\big] - \mathbb{E}\big[Y_{A_j + \delta_j} - Y \mid W \in V^c\big]\Big\}, \qquad (3)$$
where:

- $A = (A_1, \ldots, A_M)$ denotes the vector of exposures in the mixture, with $A_j$ representing the $j$-th exposure and $A_{-j}$ denoting all exposures excluding $A_j$.
- $V$ and $V^c$ denote a subregion of the covariate space and its complement, respectively.
- $Y_{A_j + \delta_j}$ represents the potential outcome under a stochastic shift intervention where exposure $A_j$ is shifted by $\delta_j$, while other exposures remain unchanged.

$\psi^{\text{oracle}}$ identifies the combination of exposure and subregion that maximizes the difference in expected outcomes between $V$ and its complement $V^c$. This parameter captures effect modification, highlighting where the intervention has the most substantial and relevant impact.
In general, we are interested in the causal exposure-covariate region combination where the effect of the intervention, compared to the complementary covariate space, is maximally different, controlling for other exposures. From now on, we refer to these regions as $S$ and $S^c$.
This method allows us to identify and focus on subpopulations where the intervention has the most substantial and relevant impact, thereby informing targeted public health strategies and interventions.
3.2 Statistical Subpopulation Intervention Effects
This section reformulates the causal parameter of interest in terms of observable quantities, thereby defining a statistical parameter that can be estimated from the data. We consider a fixed subpopulation $S$, but emphasize that subpopulation-specific parameters are interpreted as features of the full population distribution. Let $p_S = P(W \in S)$ denote the probability that a randomly selected unit from the full population belongs to $S$.
Under standard identification assumptions, including no unmeasured confounding and positivity, the causal parameter is identified as:
$$\mathbb{E}\big[Y_{A_j + \delta} - Y \mid W \in S\big] = \frac{1}{p_S}\,\mathbb{E}\Big[\mathbb{1}(W \in S)\big\{\bar{Q}(A_j + \delta, A_{-j}, W) - \bar{Q}(A, W)\big\}\Big], \qquad (4)$$
where $\bar{Q}(a, w) = \mathbb{E}[Y \mid A = a, W = w]$.
Fixing $S$, we define the statistical parameter as:
$$\psi_S = \theta_S - \theta_{S^c}, \qquad (5)$$
where
$$\theta_S = \frac{1}{p_S}\,\mathbb{E}\Big[\mathbb{1}(W \in S)\big\{\bar{Q}(A_j + \delta, A_{-j}, W) - \bar{Q}(A, W)\big\}\Big], \qquad (6)$$
$$\theta_{S^c} = \frac{1}{1 - p_S}\,\mathbb{E}\Big[\mathbb{1}(W \in S^c)\big\{\bar{Q}(A_j + \delta, A_{-j}, W) - \bar{Q}(A, W)\big\}\Big]. \qquad (7)$$
This formulation ensures that $\theta_S$ and $\theta_{S^c}$ are understood as expectations taken over the full population distribution, normalized by $p_S$ and $1 - p_S$ respectively. The parameter $\psi_S$ thus represents a feature of the entire population, focusing attention on differences between a particular subpopulation and its complement.
Influence Functions Corresponding to the Parameters
To perform inference on $\theta_S$ and $\theta_{S^c}$, we rely on their efficient influence functions (EIFs), which are essential for constructing asymptotically efficient estimators and valid confidence intervals. The EIFs incorporate the normalization by $p_S$:
$$D_S(P)(o) = \frac{\mathbb{1}(w \in S)}{p_S}\left[\left(\frac{g(a_j - \delta \mid a_{-j}, w)}{g(a_j \mid a_{-j}, w)} - 1\right)\{y - \bar{Q}(a, w)\} + \bar{Q}(a_j + \delta, a_{-j}, w) - \bar{Q}(a, w) - \theta_S\right], \qquad (8)$$
$$D_{S^c}(P)(o) = \frac{\mathbb{1}(w \in S^c)}{1 - p_S}\left[\left(\frac{g(a_j - \delta \mid a_{-j}, w)}{g(a_j \mid a_{-j}, w)} - 1\right)\{y - \bar{Q}(a, w)\} + \bar{Q}(a_j + \delta, a_{-j}, w) - \bar{Q}(a, w) - \theta_{S^c}\right]. \qquad (9)$$
Estimation and Inference
In practice, the true functions $\bar{Q}$ and $g$, as well as $p_S$, are unknown and must be estimated from observed data. Let $p_{S,n} = \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}(W_i \in S)$. We define $g_n$ as a suitable estimator of $g$, the conditional density of $A_j$ given $(A_{-j}, W)$. The function $\bar{Q}$ is estimated by $\bar{Q}_n$, and we employ Targeted Maximum Likelihood Estimation (TMLE) to refine this initial estimate, resulting in $\bar{Q}_n^*$. The superscript $*$ indicates that the estimate has been updated in a manner that targets the parameter of interest, thereby reducing bias and improving efficiency.
Once $\bar{Q}_n^*$ and $p_{S,n}$ are obtained, we define the estimated individual-level intervention effect:
$$\hat{\tau}(O_i) = \bar{Q}_n^*(A_{j,i} + \delta, A_{-j,i}, W_i) - \bar{Q}_n^*(A_i, W_i). \qquad (10)$$
We then estimate $\theta_S$ and $\theta_{S^c}$ as:
$$\hat{\theta}_S = \frac{\sum_{i=1}^{n}\mathbb{1}(W_i \in S)\,\hat{\tau}(O_i)}{\sum_{i=1}^{n}\mathbb{1}(W_i \in S)}, \qquad \hat{\theta}_{S^c} = \frac{\sum_{i=1}^{n}\mathbb{1}(W_i \in S^c)\,\hat{\tau}(O_i)}{\sum_{i=1}^{n}\mathbb{1}(W_i \in S^c)}. \qquad (11)$$
The parameter of interest, representing the difference in subpopulation-specific effects, is estimated by $\hat{\psi}_S = \hat{\theta}_S - \hat{\theta}_{S^c}$.
Variance Estimation and Statistical Inference
To derive standard errors and confidence intervals, we use the estimated influence functions $\hat{D}_S$ and $\hat{D}_{S^c}$, obtained by plugging the estimated quantities $\bar{Q}_n^*$, $g_n$, and $p_{S,n}$ into the EIF formulas.
The variance of $\hat{\psi}_S$ is estimated as:
$$\widehat{\mathrm{Var}}(\hat{\psi}_S) = \frac{1}{n}\left[\frac{1}{n}\sum_{i=1}^{n}\hat{D}_S(O_i)^2 + \frac{1}{n}\sum_{i=1}^{n}\hat{D}_{S^c}(O_i)^2\right],$$
assuming negligible covariance due to disjoint subpopulations. The variance estimators require incorporating the appropriate scaling by $p_{S,n}$ and $1 - p_{S,n}$. After computing these variances from the EIFs, a normal approximation can be employed to construct confidence intervals and conduct hypothesis tests, such as testing whether $\psi_S = 0$.
In summary, defining the parameter as a population-level feature involving a particular subpopulation and using the normalized EIFs ensures valid statistical inference. By applying TMLE and variance estimation through the EIFs, this framework provides efficient, assumption-lean estimates for subpopulation-specific intervention effects and the differences between subpopulations. Such estimates facilitate informed decision-making and targeted interventions in settings where heterogeneous effects are of interest.
Incorporating Variability in Region Selection
Although the statistical parameter $\psi_S$ quantifies the difference in expected outcomes between the subpopulation $S$ and its complement $S^c$, it is essential to consider the variability of these estimates. Even a large observed difference may not be statistically significant once variability is taken into account. By leveraging the influence functions and their corresponding variance estimates, we can formally assess the statistical significance of the observed effect modification.
To test the null hypothesis $H_0: \psi_S = 0$, we construct a t-statistic:
$$T_n = \frac{\hat{\psi}_S}{\sqrt{\widehat{\mathrm{Var}}(\hat{\psi}_S)}}.$$
Under standard regularity conditions, $T_n$ converges in distribution to a standard normal random variable. Consequently, we can employ conventional inferential procedures (e.g., using standard normal quantiles) to determine whether the observed difference in intervention effects is statistically significant. If $|T_n|$ exceeds a chosen critical value (e.g., 1.96 for a two-sided test at the 0.05 level), we reject $H_0$ and conclude that there is evidence of a meaningful difference in the intervention effects between $S$ and $S^c$.
Remark
In applied settings, it is crucial to consider both the magnitude and statistical significance of the estimated effect difference. Focusing solely on the point estimate without accounting for variability risks identifying subpopulations based on random fluctuations rather than true differences. By incorporating a careful variance-based assessment, researchers can more confidently identify meaningful subpopulations for targeted interventions and informed policy decisions.
3.3 Identifying Subpopulations with Maximal Effect Difference
Building upon the statistical parameters and influence functions defined in the previous section, we now describe a procedure for identifying subpopulations with maximal effect modification for a given exposure. The estimation involves a two-stage approach:
First Stage: We obtain efficient estimates of the Individual Intervention Effects (IIEs) and the corresponding Influence Curve Estimates (ICEs) for each exposure $A_j$ in the mixture $A$.
Second Stage: Utilizing these estimates, we apply a greedy partitioning algorithm to find covariate regions where the effect modification is maximized and statistically significant.
First Stage: Estimating IIEs and ICEs
For each exposure $A_j$, we proceed as follows:

1. Initial Estimation of Outcome Regression: Fit an initial estimator $\bar{Q}_n$ for $\bar{Q}(a, w) = \mathbb{E}[Y \mid A = a, W = w]$ using flexible machine learning methods (e.g., Super Learner).
2. Estimating the Density Ratio: Estimate $g(a_j \mid a_{-j}, w)$ and $g(a_j - \delta \mid a_{-j}, w)$, or directly estimate the density ratio $H_n(a, w) = g_n(a_j - \delta \mid a_{-j}, w) / g_n(a_j \mid a_{-j}, w)$.
3. Debiasing with TMLE: Update the initial estimator $\bar{Q}_n$ to $\bar{Q}_n^*$ via Targeted Maximum Likelihood Estimation (TMLE), targeting the parameter for the chosen exposure. This involves:
   (a) Constructing the clever covariate $H_n(A_i, W_i)$.
   (b) Fitting a parametric model (e.g., logistic regression for binary outcomes) of $Y$ on $H_n$ with offset $\mathrm{logit}(\bar{Q}_n)$ for binary $Y$, or a suitable quasi-likelihood model for continuous $Y$.
   (c) Obtaining $\bar{Q}_n^*$, the updated estimator incorporating the TMLE fluctuation.
4. Calculating Individual-Level Estimates: For each individual $i$:
   (a) Individual Intervention Effect (IIE): Compute the estimated IIE for exposure $A_j$:
   $$\widehat{\mathrm{IIE}}_j(i) = \bar{Q}_n^*(A_{j,i} + \delta, A_{-j,i}, W_i) - \bar{Q}_n^*(A_i, W_i). \qquad (12)$$
   (b) Influence Curve Estimate (ICE): Compute the estimated influence function:
   $$\hat{D}_j(O_i) = \big(H_n(A_i, W_i) - 1\big)\big\{Y_i - \bar{Q}_n^*(A_i, W_i)\big\} + \widehat{\mathrm{IIE}}_j(i) - \hat{\psi}_j, \qquad (13)$$
   where $\hat{\psi}_j$ is the estimated population-level parameter (e.g., the average intervention effect for exposure $A_j$ in the sample).

By iterating through each exposure $A_j$, we construct matrices of IIEs and ICEs, where rows correspond to individuals and columns correspond to exposures $A_1, \ldots, A_M$; a compact sketch of this assembly step is given below.
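The following minimal sketch assumes the targeted outcome predictions under the observed and shifted exposures, and the estimated density ratios, have already been computed for every exposure; the function name and data structures are illustrative rather than the EffectXshift internals.

```python
import numpy as np

def iie_ice_matrices(y, q_obs, q_shift, h_obs):
    """Assemble n x M matrices of individual intervention effects (IIE) and
    influence-curve estimates (ICE), one column per exposure in the mixture.

    q_obs, q_shift, h_obs are dicts keyed by exposure name holding the targeted
    predictions Qbar*(A, W), Qbar*(A_j + delta_j, A_-j, W), and the estimated
    density ratios for that exposure."""
    names = list(q_obs)
    iie = np.empty((len(y), len(names)))
    ice = np.empty_like(iie)
    for j, name in enumerate(names):
        iie[:, j] = q_shift[name] - q_obs[name]                # eq. (12)
        psi_j = iie[:, j].mean()                               # sample-average effect
        ice[:, j] = ((h_obs[name] - 1) * (y - q_obs[name])     # eq. (13)
                     + iie[:, j] - psi_j)
    return names, iie, ice
```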
Second Stage: Identifying Covariate Regions with Maximal Effect Modification
In the second stage, we utilize the estimated IIEs and ICEs for a chosen exposure to identify covariate regions where the effect modification is maximized. We employ a greedy partitioning algorithm based on statistical significance, ensuring that the identified regions exhibit both substantial and statistically significant differences.
The algorithm proceeds as follows:

1. Initialization: Start with the full covariate space $\mathcal{W}$.
2. Greedy Search for Optimal Split: For each covariate $W_k$ and each candidate split point $v$:
   (a) Define $S = \{W_k \leq v\}$ and $S^c = \{W_k > v\}$.
   (b) Compute the estimated effect modification:
   $$\hat{\psi}_S = \hat{\theta}_S - \hat{\theta}_{S^c},$$
   where $\hat{\theta}_S$ and $\hat{\theta}_{S^c}$ are the means of the estimated IIEs within $S$ and $S^c$, respectively.
   (c) Estimate the variance using the ICEs:
   $$\hat{\sigma}^2 = \frac{\widehat{\mathrm{Var}}\{\hat{D}_j(O_i): W_i \in S\}}{n_S} + \frac{\widehat{\mathrm{Var}}\{\hat{D}_j(O_i): W_i \in S^c\}}{n_{S^c}}.$$
   (d) Compute the t-statistic:
   $$T = \frac{\hat{\psi}_S}{\hat{\sigma}}.$$
   (e) Record the split that maximizes $|T|$, subject to minimum sample size constraints to ensure valid asymptotic approximations.
3. Recursive Partitioning: Recursively apply the above procedure to the resulting subpopulations, increasing the depth of the partitioning tree until a predefined maximum depth is reached or no further statistically significant splits are found.
4. Selection of Optimal Region: Among all considered splits, select the covariate and split point that yield the maximal statistically significant effect modification.
Algorithm Implementation
The detailed pseudocode in Algorithm 1 outlines the recursive partitioning approach, leveraging the t-statistic to identify the covariate regions that maximize the difference in subpopulation intervention effects.
Remark. By integrating the estimated influence functions into the partitioning algorithm, we ensure that the selection of covariate regions is guided not only by the magnitude of the estimated effect differences but also by their statistical significance. This approach helps to mitigate the risk of overfitting and the identification of spurious subpopulations due to random variability.
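A minimal sketch of the single-depth greedy search is given below, assuming numeric covariates, candidate thresholds at observed values, and a simple influence-curve variance for each subgroup mean; the names (`best_split`, `min_leaf`) are illustrative, not the package interface. The recursion to deeper nodes and the maximum-depth stopping rule would wrap around this function.

```python
import numpy as np
import pandas as pd

def best_split(iie, ice, covariates: pd.DataFrame, min_leaf=50):
    """Greedy search for the covariate and threshold whose split S = {W_k <= v}
    maximizes |t| for the difference in mean intervention effects, with the
    variance of each subgroup mean taken from the influence-curve estimates."""
    iie, ice = np.asarray(iie, float), np.asarray(ice, float)
    best, n = None, len(iie)
    for col in covariates.columns:
        w = covariates[col].to_numpy()
        for v in np.unique(w)[:-1]:                      # candidate thresholds
            in_s = w <= v
            n_s = int(in_s.sum())
            n_c = n - n_s
            if n_s < min_leaf or n_c < min_leaf:
                continue
            psi = iie[in_s].mean() - iie[~in_s].mean()   # estimated effect modification
            var = ice[in_s].var(ddof=1) / n_s + ice[~in_s].var(ddof=1) / n_c
            t = psi / np.sqrt(var)
            if best is None or abs(t) > abs(best["t"]):
                best = {"covariate": col, "threshold": float(v), "psi": psi, "t": t}
    return best
```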
3.4 Cross-Validation
As discussed, sample splitting is required to avoid bias incurred by discovering subregions of the covariate space and estimating the effect of the subpopulation intervention using the same data. To achieve this, we employ $K$-fold cross-validation, where the sample data are partitioned into $K$ folds of approximately equal size. For example, consider $K = 10$. In each iteration $k = 1, \ldots, K$:

1. Training and Validation Split:
   - The training set $T_k$ consists of the indices from all folds except the $k$-th fold, comprising $(K-1)/K$ of the data (e.g., 90% for $K = 10$).
   - The validation set $V_k$ consists of the indices in the $k$-th fold, comprising $1/K$ of the data (e.g., 10% for $K = 10$).
2. Stage 1: Subpopulation Identification and Parameter Estimation on Training Set:
   - Use the training data $T_k$ to identify the maximal exposure-effect modifier sets within the mixture using the recursive partitioning algorithm described in Section 3.3.
   - Estimate the nuisance parameters $\bar{Q}_n$ and $g_n$ using appropriate machine learning methods.
   - Compute the individual intervention effects (IIE) and influence curve estimates (ICE) for each observation in $T_k$.
3. Stage 2: Parameter Estimation on Validation Set:
   - Apply the subpopulation rule identified from the training set to the validation data $V_k$, assigning observations to $S$ or $S^c$.
   - Estimate the subpopulation stochastic shift intervention parameter using the CV-TMLE approach with the nuisance parameters estimated from $T_k$.

After completing all folds, we aggregate the estimates across folds to obtain the final estimate of the effect modification parameter:
$$\hat{\psi} = \frac{1}{K}\sum_{k=1}^{K}\hat{\psi}_k, \qquad (14)$$
where $\hat{\psi}_k$ is the estimate obtained on the $k$-th validation fold.
If the identified subpopulation is consistent across folds, this pooled estimate leverages the full data for increased efficiency.
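A minimal sketch of this fold loop is shown below, assuming two user-supplied callables that stand in for Stage 1 (rule discovery and nuisance fitting) and Stage 2 (validation-fold estimation); the function names are illustrative and not the EffectXshift API.

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_effect_modification(data, discover_rule, estimate_psi, k=10, seed=1):
    """K-fold sketch of the two-stage procedure: discover the subgroup rule on
    the training folds, estimate the effect-modification parameter on the
    held-out fold, and average the fold estimates as in eq. (14).

    discover_rule(train) and estimate_psi(train, valid, rule) are supplied by
    the user and stand in for Stage 1 and Stage 2."""
    psi_folds = []
    for train_idx, valid_idx in KFold(k, shuffle=True, random_state=seed).split(data):
        train, valid = data.iloc[train_idx], data.iloc[valid_idx]
        rule = discover_rule(train)                         # Stage 1 on training data
        psi_folds.append(estimate_psi(train, valid, rule))  # Stage 2 on validation data
    return float(np.mean(psi_folds)), psi_folds
```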
3.5 Pooled TMLE
Upon completion of the cross-estimation procedure, a pooled TMLE update provides a summary measure of the oracle maximum effect modification parameter across the $K$ folds. In each fold $k$, we obtain estimates $\hat{\theta}_{S,k}$ and $\hat{\theta}_{S^c,k}$ for the respective regions identified as the maximum effect-modifying regions and their complements. These regions may vary across different folds.
For the pooled TMLE update, we perform the following steps:
1. Stacking Nuisance Parameters: Stack the estimates of the nuisance parameters $\bar{Q}_n$ and $g_n$ from each training sample $T_k$.
2. Pooled TMLE Update: Apply a pooled TMLE update on the cumulative initial estimates. This involves:
   - Combining the initial outcome regression estimates and exposure density (propensity) estimates across all folds.
   - Performing a TMLE update on this pooled data to refine the estimates, ensuring they are both efficient and unbiased.
3. Aggregating Estimates: Compute the pooled estimate of the effect modification parameter by averaging the estimates across all folds:
   $$\hat{\psi}_{\text{pooled}} = \frac{1}{K}\sum_{k=1}^{K}\hat{\Psi}\big(P_{n,k}^{0}\big)\big(P_{n,k}^{1}\big), \qquad (15)$$
   where $\hat{\Psi}(P_{n,k}^{0})$ identifies the optimal exposure-covariate modifier set and formulates a plug-in estimator using the training data $P_{n,k}^{0}$, and $P_{n,k}^{1}$ refers to the validation sample used for deriving the estimates.
4. Interpretation of Pooled Estimates: The pooled estimate represents an average across folds. If the identified subpopulations $S$ and $S^c$ are consistent across all folds (e.g., the same age threshold defining $S$ and $S^c$ in every fold), the pooled estimates are interpretable and provide increased statistical power. This leverages the full data, resulting in tighter confidence intervals and more reliable inference.
3.6 Pooled TMLE for Marginal Exposure Effects
When analyzing a mixture of multiple exposures, it is often important to assess the relative importance of each exposure component. Our approach facilitates this by estimating a stochastic shift for each exposure and deriving the corresponding influence curve estimates (ICE) to identify regions with maximal effect modification.
Since our target parameter requires stochastic shift interventions for each exposure as a preliminary step before recursive partitioning, we can efficiently leverage these estimates to obtain CV-TMLE for each exposure. The procedure is as follows:
Utilizing Existing Estimates: In the first stage of our methodology, we estimate stochastic shift interventions for each exposure in the mixture and derive the corresponding ICEs. These estimates are obtained through the training phase of our cross-validation process and are applied to the validation folds.
Pooled CV-TMLE Estimates: After estimating the stochastic shifts and ICEs for each exposure across all folds, we aggregate these results to compute CV-TMLE estimates for each exposure shift. This pooling process involves:
- Saving Validation Estimates: We store the TMLE-updated estimates of the outcome changes due to shifts in each exposure from the validation folds.
- Aggregating Estimates: By averaging these estimates across all folds, we obtain pooled CV-TMLE estimates for each exposure in the mixture. This aggregation enhances the reliability and statistical power of our estimates.
Variable Importance Measures: The pooled CV-TMLE estimates for each exposure shift allow us to assess the relative importance of each exposure within the mixture. Specifically, these estimates quantify the impact of shifting each exposure on the outcome, providing a clear measure of variable importance. This comprehensive estimation framework not only identifies regions with maximal effect modification but also offers insights into the marginal effects of each exposure, thereby supporting informed and targeted public health interventions.
In summary, because our target parameter inherently requires stochastic shift interventions for each exposure as part of the initial estimation process, we can directly utilize these results to obtain CV-TMLE estimates. This approach simplifies the workflow and enables the derivation of variable importance measures for the entire exposure mixture.
3.7 Exposure Density Estimation
Accurate estimation of the conditional exposure densities is crucial for evaluating the effects of stochastic interventions within a causal inference framework. We employ two approaches for density estimation: direct conditional density estimation and a reparameterization strategy that transforms density ratio estimation into a classification problem. Both methods are implemented using the Super Learner ensemble van der Laan et al. (2007), which optimally combines multiple algorithms to produce robust and flexible estimates.
3.7.1 Direct Conditional Density Estimation
The direct approach involves estimating the conditional density $g(a \mid w)$ directly using a Super Learner composed of a diverse library of density estimation algorithms. This method leverages the strengths of various learners to capture complex relationships between exposures and covariates. However, direct density estimation can be challenging in high-dimensional or intricate settings due to computational complexity and the relative scarcity of flexible conditional density estimators.
3.7.2 Reparameterization via Classification
To address the limitations of direct density estimation, we implement a reparameterization strategy inspired by the work of Díaz et al. (2023) on modified treatment policies (MTPs). This approach reframes the problem of estimating the density ratio as a binary classification task. The procedure involves the following steps:
1. Augmented Dataset Construction: Create an augmented dataset by duplicating the original data. In the duplicated subset, apply the stochastic shift intervention to the exposures, while keeping the exposures in the original subset unchanged. Introduce a binary indicator variable $\Delta$, where $\Delta = 1$ denotes shifted exposures and $\Delta = 0$ denotes unshifted exposures.
2. Classification Model Training: Fit a Super Learner classifier to predict the probability $\hat{p}(\Delta = 1 \mid a, w)$ using the augmented dataset. This model estimates the likelihood that an observation has been subjected to the shift intervention.
3. Density Ratio Estimation: Utilize Bayes' theorem to transform the predicted probabilities into the desired density ratios. Because the augmented dataset contains shifted and unshifted copies in equal proportion,
$$\frac{g_\delta(a \mid w)}{g(a \mid w)} = \frac{\hat{p}(\Delta = 1 \mid a, w)}{1 - \hat{p}(\Delta = 1 \mid a, w)}.$$

This reparameterization offers several key advantages:

- Computational Efficiency and Scalability: By converting density ratio estimation into a classification problem, we can leverage highly optimized classification algorithms within the Super Learner framework, enhancing computational efficiency and scalability, especially in high-dimensional settings.
- Numerical Stability: Estimating density ratios through classification probabilities reduces the risk of extreme ratio estimates, thereby enhancing the numerical stability of our estimators.
- Flexibility with Multiple Exposures: This approach naturally extends to scenarios involving multiple exposures, facilitating future work to model the effect modification of joint exposure shifts.

Given these benefits, our simulations and empirical applications utilize the reparameterization strategy for density ratio estimation. This choice is motivated by its computational efficiency and estimator stability, which are particularly important in our setting, where the density ratio must be computed for each exposure in the mixture.
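The idea can be sketched in a few lines; the snippet below is a minimal illustration, assuming a single exposure column, numeric covariates, and a gradient boosting classifier standing in for the Super Learner. The function name and arguments are ours for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def shift_density_ratio(a, w, delta, clf=None):
    """Classification-based estimate of the clever covariate
    H(a, w) = g(a - delta | w) / g(a | w).

    Observed rows are labeled 0 and rows with the exposure shifted by delta are
    labeled 1; because the two groups are the same size, Bayes' theorem gives
    the density ratio as p / (1 - p), evaluated at the observed (a, w)."""
    a = np.asarray(a, dtype=float)
    w = np.asarray(w, dtype=float)
    x_obs = np.column_stack([a, w])
    x_aug = np.vstack([x_obs, np.column_stack([a + delta, w])])
    labels = np.r_[np.zeros(len(a)), np.ones(len(a))]
    clf = clf or GradientBoostingClassifier()   # stand-in for a Super Learner
    clf.fit(x_aug, labels)
    p = np.clip(clf.predict_proba(x_obs)[:, 1], 1e-3, 1 - 1e-3)
    return p / (1 - p)
```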
4 Simulation Study
4.1 Data Generation with Binary Modifiers
To evaluate the performance of our proposed method, we conducted a simulation study using synthetic data that mirrors a realistic environmental epidemiology scenario. We generated a large population of 10,000 observations to establish the ground truth of the subpopulation intervention effect. Three binary confounders, $W_1$, $W_2$, and $W_3$, were independently generated from Bernoulli distributions with a success probability of 0.5.
The exposures $(A_1, A_2, A_3)$ were generated as functions of these covariates. The outcome $Y$ was then simulated from the exposures and covariates, incorporating effect modification, with additive random noise $\varepsilon$.
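The sketch below conveys the structure of such a data-generating process with a binary modifier; every coefficient in it is assumed for illustration only and does not reproduce the exact values used in our simulations.

```python
import numpy as np

def simulate_binary_modifier(n, seed=0):
    """Illustrative DGP with a binary effect modifier: three binary covariates,
    three exposures, and an outcome whose response to A1 depends on W1. All
    coefficients below are hypothetical stand-ins for the simulation settings."""
    rng = np.random.default_rng(seed)
    w1, w2, w3 = (rng.binomial(1, 0.5, n) for _ in range(3))
    a1 = 1.0 + 0.5 * w1 + rng.normal(0, 1, n)
    a2 = 0.5 + 0.3 * w2 + 0.3 * a1 + rng.normal(0, 1, n)
    a3 = 0.2 * w3 + rng.normal(0, 1, n)
    # The effect of A1 is larger when W1 = 1, creating the effect-modified region.
    y = (1.0 + 1.0 * a1 + 1.0 * a1 * w1 + 0.5 * a2 + 0.25 * a3 + 0.3 * w2
         + rng.normal(0, 0.5, n))
    return w1, w2, w3, a1, a2, a3, y
```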
4.2 Data Generation with Continuous Modifier
To make detection of the modifying region more difficult, and arguably more realistic than the binary case, we also create a DGP with a continuous covariate whose impact on the outcome jumps at a threshold. For this, we replace the binary modifier with a continuous covariate, create a binary indicator for when this covariate exceeds a threshold, and use the indicator in place of the binary modifier in the outcome-generating equation above. We then provide the EffectXshift algorithm with the continuous covariate to determine whether it detects the correct threshold that leads to modification.
4.3 Performance Metrics
To establish the ground truth for the differential impacts of exposures, we applied a shift intervention that reduced the effect-modified exposure by 0.5 units and recalculated the outcome to obtain its counterfactual value; the other exposures remained unchanged. The difference between the shifted and unshifted outcomes was aggregated to estimate the true average effect within each level of the modifying covariate, where the true effect modification was created. The true average at one level of this covariate defines the region $S$ and at the other level its complement $S^c$.
We assessed the bias, variance, mean squared error (MSE), and coverage of our method across different sample sizes ($n = 300, 500, 1000, 2000, 5000$). For each sample size, we ran the simulation 100 times to evaluate the stability and reliability of our estimator. Bias: the average difference between the estimated and true effect. Variance: the variability of the estimator across simulations. MSE: the sum of the squared bias and the variance, providing a comprehensive measure of estimator accuracy. Coverage: the proportion of times the true effect lies within the estimated 95% confidence interval.
These metrics are essential for assessing the $\sqrt{n}$-consistent behavior of our estimator, meaning that as the sample size increases, the estimator converges to the true parameter value at a rate proportional to the square root of the sample size. This property is crucial for an efficient estimator, as it guarantees precise and reliable estimates in large samples.
For our estimator to be asymptotically unbiased, we must also accurately detect the correct region with increasing sample size. As such, we report accuracy, precision, recall, and F1 statistics as well.
4.4 Simulation Process
The simulation process involved the following steps: 1. Data Generation: for each sample size, we generated datasets according to the data-generating processes above. 2. Estimator Application: we applied our method to each dataset to estimate the intervention effects within discovered subregions, using 10-fold CV and default machine learning algorithms in the Super Learners (xgboost, random forest, glm) for the outcome regression and density estimation. 3. Metric Calculation: for each sample size and iteration, we calculated the bias, variance, MSE, and coverage of the estimated effects. 4. Result Aggregation: the results were aggregated to evaluate the overall performance of our method across different sample sizes.
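The metric calculation in step 3 is straightforward; the short helper below is a sketch of how bias, variance, MSE, and coverage can be computed from the repetition-level estimates and standard errors (names are illustrative).

```python
import numpy as np

def summarize(estimates, std_errors, truth, z=1.96):
    """Bias, variance, MSE, and 95% CI coverage across simulation repetitions."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    bias = est.mean() - truth
    variance = est.var(ddof=1)
    covered = (est - z * se <= truth) & (truth <= est + z * se)
    return {"bias": bias, "variance": variance,
            "mse": bias ** 2 + variance, "coverage": covered.mean()}
```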
5 Simulation Results
5.1 Identification of Maximum Effect Modifying Subregion
For the binary effect modifier, across all iterations and sample sizes, we consistently identified the correct modifying variable and the correct region; therefore accuracy, precision, recall, and F1 were all 1.0.
The performance of our method in identifying the true subpopulation in continuous data improves as sample size increases. At a sample size of 300, the accuracy is 59.1%, precision is 18.9%, recall is 100%, and the F1 score is 29.9%. The high recall indicates perfect sensitivity, but the low precision suggests a high number of false positives. With a sample size of 500, accuracy improves to 94.2%, precision to 56.5%, recall remains at 100%, and the F1 score rises to 71.2%. The precision increase indicates fewer false positives. At 1000 samples, accuracy is 96.9%, precision is 81.8%, recall is 90.9%, and the F1 score is 82.9%. The balance between precision and recall reflects improved detection. For 2000 samples, accuracy is 99.7%, precision is 96.4%, recall is 99.3%, and the F1 score is 97.8%, showing minimal false positives and false negatives. At 5000 samples, the method nearly perfects performance with an accuracy of 99.9%, precision of 99.6%, recall of 99.1%, and an F1 score of 99.3%. Figure 1 shows these results.
5.2 Binary Effect Modifier Simulation Results
The results are summarized below based on bias, variance, mean squared error (MSE), and coverage probability.
- Bias and Variance: The bias and variance generally decreased as the sample size increased. For smaller samples (e.g., $n = 300$), the bias was more pronounced in one of the two regions, suggesting potential overfitting when sample sizes were limited. However, as the sample size increased, the bias was substantially reduced and was minimal at $n = 5000$.
- Mean Squared Error (MSE): The MSE consistently decreased as the sample size increased, reflecting improvements in estimator accuracy. For instance, the MSE in one region decreased from 0.017 at the smallest sample size to 0.0001 at the largest; in the complementary region, the MSE decreased from 0.010 to 0.0001 over the same range.
- Coverage Probability: The coverage probability stabilized at 0.95 for one region across sample sizes, while the other region maintained a coverage probability of 1, indicating overly conservative estimates, which may be due to the limited number of iterations or our DGP.
5.3 Continuous Effect Modifier Simulation Results
The results of the continuous effect modifier simulation are summarized below based on bias, variance, mean squared error (MSE), and coverage probability.
- Bias and Variance: As the sample size increased, both bias and variance generally decreased. In one region, the bias fell from 0.155 at the smallest sample size to 0.001 at the largest; in the complementary region, the bias decreased from 1.025 to 0.112, indicating a significant improvement in accuracy with larger sample sizes.
- Mean Squared Error (MSE): The MSE consistently decreased as the sample size increased, reflecting improvements in estimator accuracy. In one region, the MSE dropped from 0.026 to 0.0007; in the complementary region, the MSE decreased from 1.087 to 0.014, demonstrating a substantial reduction in error with larger samples.
- Coverage Probability: The coverage probability showed consistent improvement with increasing sample size. In one region, coverage increased from 0.80 to 0.95; in the complementary region, coverage improved from 0.72 to 0.95, suggesting that the true parameter value was increasingly captured within the calculated confidence intervals as the sample size grew. Variability in coverage may be due to the limited number of iterations in our simulation.
Overall, the results suggest that our estimator's mean squared error decreases at a rate proportional to $1/n$, aligning with theoretical expectations.
6 EffectXshift Applied to NHANES Dataset
The 2001-2002 National Health and Nutrition Examination Survey (NHANES) cycle provides a robust data set for our analysis, widely recognized for its credibility in the public health domain. This data set includes interviews with 11,039 individuals, of whom 4,260 provided blood samples and consented to DNA analysis. For our analysis, we focused on a subset of 1,007 participants with complete exposure data. This is comparable to the subset used by Mitro et al. (2016), who investigated the association between persistent organic pollutants (POPs) binding to the aryl hydrocarbon receptor (AhR) and leukocyte telomere length (LTL). The data set was also used by Gibson et al. (2019), who applied various mixture methods to it. For these reasons, this data set is well suited for comparing the marginal variable importance effects given our stochastic shift interventions, and it allows evaluation of effect modification, which has otherwise not been assessed with these methods.
Our study examined the effect of 18 congeners, including 8 non-dioxin-like PCBs, 2 non-ortho PCBs, 1 mono-ortho PCB, 4 dioxins, and 4 furans. All congeners were adjusted for lipid content in serum using an enzymatic summation method. Leukocyte telomere length was measured using quantitative polymerase chain reaction (qPCR) to determine the T/S ratio, which compares telomere length to a standardized reference DNA.
Our analysis considered several covariates: age, sex, race/ethnicity, education level, BMI, serum cotinine levels, and blood cell distribution and count. Categories for race/ethnicity, education, and BMI were consistent with previous studies Mitro et al. (2016); Gibson et al. (2019). Given evidence that telomere length can be influenced by factors such as smoking and age Valdes et al. (2005); Hoefnagel et al. (2016), our approach aimed to identify baseline covariate regions and specific exposures that exhibit the maximum differential impact due to a shift intervention. All covariates were included as potential effect modifiers.
We utilized our EffectXshift package to perform this analysis. Using 10-fold cross-validation (CV), we explored the counterfactual changes in telomere length for a reduction of each exposure by one standard deviation. The corresponding shifts (a one standard deviation reduction) applied to each exposure were: PCB74 Lipid Adj (ng/g): -13589.77; 1,2,3,6,7,8-hxcdd Lipid Adj (pg/g): -40.36301; 1,2,3,4,6,7,8-hpcdd Lipid Adj (pg/g): -55.36449; 2,3,4,7,8-pncdf Lipid Adj (pg/g): -5.758755; 1,2,3,4,6,7,8-hxcdf Lipid Adj (pg/g): -10.67117; and 3,3’,4,4’,5-pcnb Lipid Adj (pg/g): -52.7907.
6.1 NHANES Data Results
We evaluated the impact of 16 persistent organic pollutants (POPs) on leukocyte telomere length (LTL). Our primary objective was to identify the baseline covariate region and the exposure with the maximum differential impact due to a reduction in exposure.
6.2 Identification of Maximum Effect Modifiers
The shift interventions were applied to a subset of uncorrelated exposures: PCB74, 1,2,3,6,7,8-hxcdd, 1,2,3,4,6,7,8-hpcdd, 2,3,4,7,8-pncdf, 1,2,3,4,6,7,8-hxcdf, and 3,3’,4,4’,5-pcnb. Age was consistently identified as the maximum effect modifier across all analysis folds. The exposure 3,3’,4,4’,5-pcnb (LBXPCBLA) was found in 100% of the folds, indicating a robust association. Table 1 shows the K-fold results. The change in outcome due to a one standard deviation reduction in 3,3’,4,4’,5-pcnb is shown in the "Effect" column; the standard error (SE) and confidence interval columns provide inference. "Modifier" gives the identified region in the covariate space or its complement, and "Fold" is the fold number.
| Exposure | Effect | SE | Lower CI | Upper CI | Modifier | Fold |
|---|---|---|---|---|---|---|
| LBXPCBLA | 0.7047676 | 0.04008078 | 0.6262 | 0.7833 | age ≤ 14 | 1 |
| LBXPCBLA | 0.5625930 | 0.08915217 | 0.3879 | 0.7373 | age > 14 | 1 |
| LBXPCBLA | 0.8017565 | 0.03205413 | 0.7389 | 0.8646 | age ≤ 11 | 2 |
| LBXPCBLA | 0.7246825 | 0.08895869 | 0.5503 | 0.8990 | age > 11 | 2 |
| LBXPCBLA | 0.3222981 | 0.02627014 | 0.2708 | 0.3738 | age ≤ 11 | 3 |
| LBXPCBLA | 0.3271666 | 0.07242882 | 0.1852 | 0.4691 | age > 11 | 3 |
| LBXPCBLA | -0.6316206 | 0.02034907 | -0.6715 | -0.5917 | age ≤ 11 | 4 |
| LBXPCBLA | -0.3153549 | 0.03551484 | -0.3850 | -0.2457 | age > 11 | 4 |
| LBXPCBLA | 0.8372697 | 0.02145195 | 0.7952 | 0.8793 | age ≤ 14 | 5 |
| LBXPCBLA | 0.6842051 | 0.07540448 | 0.5364 | 0.8320 | age > 14 | 5 |
| LBXPCBLA | 0.7044444 | 0.02454143 | 0.6563 | 0.7525 | age ≤ 11 | 6 |
| LBXPCBLA | 0.6348496 | 0.06230732 | 0.5127 | 0.7570 | age > 11 | 6 |
| LBXPCBLA | 0.7364454 | 0.02360700 | 0.6902 | 0.7827 | age ≤ 17 | 7 |
| LBXPCBLA | 0.6057689 | 0.08581898 | 0.4376 | 0.7740 | age > 17 | 7 |
| LBXPCBLA | 0.6412914 | 0.02309030 | 0.5960 | 0.6865 | age ≤ 11 | 8 |
| LBXPCBLA | 0.5364844 | 0.06194912 | 0.4151 | 0.6579 | age > 11 | 8 |
| LBXPCBLA | 0.7782908 | 0.01962983 | 0.7398 | 0.8168 | age ≤ 11 | 9 |
| LBXPCBLA | 0.5685745 | 0.07544209 | 0.4207 | 0.7164 | age > 11 | 9 |
| LBXPCBLA | 0.2118344 | 0.01881629 | 0.1750 | 0.2487 | age ≤ 17 | 10 |
| LBXPCBLA | 0.3519853 | 0.08829586 | 0.1789 | 0.5250 | age > 17 | 10 |
6.3 Consistency of Results
Age consistently emerged as the maximum effect modifier across all folds, and the exposure 3,3’,4,4’,5-pcnb (LBXPCBLA) was identified in 100% of the folds. Our findings indicate that a reduction in exposure to 3,3’,4,4’,5-pcnb by one standard deviation (52.7907 pg/g) leads to an increase in LTL. This effect is more pronounced in younger populations, with ages of 11, 14, and 17 consistently identified as split points. All of these results were statistically significant based on the confidence intervals, underscoring the differential impact of this exposure across age.
6.4 Pooled Oracle Estimates
For each region and its complement, we provide the pooled oracle estimates. Here, $S$ denotes the region for the lower age group and $S^c$ the region for the higher age group. Table 2 presents these pooled results.
| Condition | Psi | Variance | SE | Lower CI | Upper CI | P-value |
|---|---|---|---|---|---|---|
| $S$ (lower age group) | 0.4879 | 0.0001027 | 0.0101 | 0.4680 | 0.5078 | < 0.001 |
| $S^c$ (higher age group) | 0.3156 | 0.0010592 | 0.0325 | 0.2518 | 0.3794 | 3.11e-22 |
These estimates indicate that shifting 3,3’,4,4’,5-pcnb by one standard deviation (approximately -52.8 pg/g) leads to an estimated increase in telomere length of about 0.488 in the lower age group ($S$) and about 0.316 in the higher age group ($S^c$).
To formally assess whether the difference between these two subgroups is statistically significant, we can compare the estimates and their variances directly. Letting $\hat{\Delta} = \hat{\psi}_S - \hat{\psi}_{S^c} = 0.4879 - 0.3156 = 0.1723$, and assuming negligible covariance between the estimates, the variance of the difference is $\widehat{\mathrm{Var}}(\hat{\Delta}) = 0.0001027 + 0.0010592 = 0.0011619$. Thus, $\mathrm{SE}(\hat{\Delta}) = \sqrt{0.0011619} \approx 0.0341$.
The corresponding z-statistic is:
$$z = \frac{\hat{\Delta}}{\mathrm{SE}(\hat{\Delta})} = \frac{0.1723}{0.0341} \approx 5.05,$$
which yields a p-value well below 0.001. This confirms that the difference in the effect estimates between the two subpopulations is highly statistically significant.
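The calculation can be reproduced directly from Table 2; the short snippet below simply repeats the arithmetic above.

```python
import numpy as np
from scipy import stats

psi_s, var_s = 0.4879, 0.0001027   # S: lower age group (Table 2)
psi_c, var_c = 0.3156, 0.0010592   # S^c: higher age group (Table 2)

diff = psi_s - psi_c               # 0.1723
se = np.sqrt(var_s + var_c)        # ~0.0341, assuming negligible covariance
z = diff / se                      # ~5.05
p_value = 2 * stats.norm.sf(abs(z))
print(round(z, 2), p_value)        # p-value well below 0.001
```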
7 Conclusion
Our study presents a novel methodology, implemented in the open-source package EffectXshift, available on GitHub, to identify subpopulations that are differentially affected by exposures and to estimate the effects of interventions within these subpopulations. Methodologically, our approach addresses several limitations of traditional causal inference methods by providing a data-driven, assumption-lean framework that leverages machine learning for both the estimation of exposure effects and the identification of effect modifiers. The use of stochastic shift interventions within our framework allows for more realistic and flexible modeling of exposure distributions, which is particularly relevant in environmental health studies, where exposures are often not binary or fixed.
By first establishing an oracle target parameter of interest as a definition of effect modification and then employing a data-adaptive target parameter strategy, we are able to deliver interpretable results grounded in theory. This approach is fundamentally different from other metalearning approaches in estimating heterogeneous treatment effects.
Applying this approach to the NHANES dataset, we demonstrated its ability to uncover significant effect modifiers, specifically highlighting age as a consistent modifier for 3,3’,4,4’,5-pcnb exposure and its association with leukocyte telomere length.
The findings of our analysis may have important implications for environmental health research and policy making. Identifying age as a significant effect modifier, our results suggest that younger individuals are more susceptible to the adverse effects of 3,3’,4,4’,5-pcnb exposure on telomere length. This information can inform targeted public health interventions aimed at reducing exposure to persistent organic pollutants (POPs) among vulnerable subpopulations, particularly younger individuals.
Despite the strengths of our approach, there are some limitations. The accuracy of our estimates depends on the correct estimation of the clever covariate (the ratio of densities). Estimating this ratio can be challenging, which is why we offer an alternative approach via classification. More research is needed to ensure this reparameterization of the density estimation is sound in practice; although we observed very little difference between the two approaches in our results, our DGP is relatively simple. Additionally, while our method can handle complex and high-dimensional data, the computational cost may be significant for very large datasets. Future research should focus on further optimizing the computational efficiency of the EffectXshift package and exploring its applicability to other types of environmental exposures and outcomes.
In conclusion, the EffectXshift package provides a powerful tool for environmental health researchers to identify and estimate the differential impacts of exposures between subpopulations. By facilitating the discovery of vulnerable groups and informing targeted interventions, this methodology contributes to the broader goal of reducing health disparities and improving public health outcomes.
References
- Balbus and Malina [2009] John M Balbus and Catherine Malina. Identifying vulnerable subpopulations for climate change health effects in the united states. Journal of Occupational and Environmental Medicine, 51(1):33–37, 2009. doi: 10.1097/JOM.0b013e318193e12e.
- Ruiz et al. [2018] Daniel Ruiz, Marlene Becerra, Jyotsna S Jagai, Kerry Ard, and Robert M Sargis. Disparities in environmental exposures to endocrine-disrupting chemicals and diabetes risk in vulnerable populations. Diabetes Care, 41(1):193–205, 2018. doi: 10.2337/dc16-2765.
- Zorzetto et al. [2024a] Renata Zorzetto, Eric S Coker, James F Pearson, and Ceylan Houghton. Confounder-dependent bayesian mixture modeling: Characterizing heterogeneity in causal effects of air pollution. Environmental Health Perspectives, 132(1):174–183, 2024a.
- Makri and Stilianakis [2008] Anna Makri and Nikolaos I. Stilianakis. Vulnerability to air pollution health effects. International Journal of Hygiene and Environmental Health, 211(3):326–336, 2008. ISSN 1438-4639. doi: https://doi.org/10.1016/j.ijheh.2007.06.005. URL https://www.sciencedirect.com/science/article/pii/S1438463907000971.
- Rothman et al. [2008] Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash. Modern Epidemiology. Lippincott Williams & Wilkins, Philadelphia, PA, 3 edition, 2008. ISBN 9780781755641.
- Samoli et al. [2011] E Samoli, PT Nastos, AG Paliatsos, K Katsouyanni, and KN Priftis. Acute effects of air pollution on pediatric asthma exacerbation: Evidence of association and effect modification. Environmental Research, 111(3):418–424, 2011.
- Analitis et al. [2014] Antonis Analitis, Paola Michelozzi, Daniela D’Ippoliti, Francesca de’Donato, Bettina Menne, Franziska Matthies, Richard W. Atkinson, Carmen Iñiguez, Xavier Basagaña, Alexandra Schneider, Agnès Lefranc, Anna Paldy, Luigi Bisanti, and Klea Katsouyanni. Effects of heat waves on mortality: Effect modification and confounding by air pollutants. Epidemiology, 25(1):15–22, 2014. doi: 10.1097/EDE.0b013e31828ac01b.
- Gibson et al. [2019] Elizabeth A. Gibson, Yanelli Nunez, Ahlam Abuawad, Ami R. Zota, Stefano Renzetti, Katrina L. Devick, Chris Gennings, Jeff Goldsmith, Brent A. Coull, and Marianthi-Anna Kioumourtzoglou. An overview of methods to address distinct research questions on environmental mixtures: an application to persistent organic pollutants and leukocyte telomere length. Environmental Health, 18(1):76, 2019.
- Coker et al. [2018] Eric Coker, Silvia Liverani, Jason G Su, and John Molitor. Multi-pollutant modeling through examination of susceptible subpopulations using profile regression. Current Environmental Health Reports, 5(1):59–69, 2018. doi: 10.1007/s40572-018-0181-2.
- Zorzetto et al. [2024b] Dafne Zorzetto, Falco J Bargagli-Stoffi, Antonio Canale, and Francesca Dominici. Confounder-dependent bayesian mixture model: Characterizing heterogeneity of causal effects in air pollution epidemiology. Biometrics, 80(2):ujae025, 2024b. doi: 10.1093/biomtc/ujae025.
- Künzel et al. [2019] Sören R Künzel, Jasjeet S Sekhon, Peter J Bickel, and Bin Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the National Academy of Sciences, 116(10):4156–4165, 2019.
- Muñoz and van der Laan [2012] Iván Díaz Muñoz and Mark van der Laan. Population intervention causal effects based on stochastic interventions. Biometrics, 68(2):541–549, June 2012. doi: 10.1111/j.1541-0420.2011.01685.x. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4117410/.
- Kennedy [2019] Edward H. Kennedy. Nonparametric causal effects based on incremental propensity score interventions. Journal of the American Statistical Association, 114(526):645–656, 2019. doi: 10.1080/01621459.2018.1425625. URL https://doi.org/10.1080/01621459.2018.1425625.
- van der Laan and Rose [2011] Mark J. van der Laan and Sherri Rose. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer, New York, NY, 2011. ISBN 9781441997821. doi: 10.1007/978-1-4419-9782-1. URL https://doi.org/10.1007/978-1-4419-9782-1.
- Jain [2015] Richa B. Jain. Contribution of firefighting to exposures of poly- and perfluoroalkyl substances (pfas) in women firefighters from san francisco, california. Environmental Research, 142:661–670, 2015. doi: 10.1016/j.envres.2015.07.002. URL https://doi.org/10.1016/j.envres.2015.07.002.
- Li et al. [2022] Haodong Li, Sonali Rosete, Jeremy Coyle, Rachael V. Phillips, Nima S. Hejazi, Ivana Malenica, Benjamin F. Arnold, Jade Benjamin-Chung, Andrew Mertens, and John M. Colford Jr. Evaluating the robustness of targeted maximum likelihood estimators via realistic simulations in nutrition intervention trials. Statistics in Medicine, 41(3):417–433, 2022. doi: 10.1002/sim.9348. URL https://doi.org/10.1002/sim.9348. Funding information: Bill and Melinda Gates Foundation, OPP1165144.
- van der Laan et al. [2007] Mark J van der Laan, Eric C Polley, and Andrew E Hubbard. Super learner. Statistical applications in genetics and molecular biology, 6(1):Article 11, 2007. doi: 10.22002/APGM.30714. URL https://doi.org/10.22002/APGM.30714.
- Díaz et al. [2023] Iván Díaz, Nicholas Williams, Katherine L Hoffman, and Edward J Schenck. Nonparametric causal effects based on longitudinal modified treatment policies. Journal of the American Statistical Association, 118(542):846–857, 2023. doi: 10.1080/01621459.2021.1955691. URL https://doi.org/10.1080/01621459.2021.1955691.
- Mitro et al. [2016] Susanna D. Mitro, Linda S. Birnbaum, Larry L. Needham, and Ami R. Zota. Cross-sectional associations between exposure to persistent organic pollutants and leukocyte telomere length among u.s. adults in nhanes, 2001-2002. Environmental Health Perspectives, 124(5):651–658, 2016.
- Valdes et al. [2005] Ana M. Valdes, Toby Andrew, John P. Gardner, Masayuki Kimura, Elizabeth Oelsner, Lynn F. Cherkas, Abraham Aviv, and Tim D. Spector. Obesity, cigarette smoking, and telomere length in women. The Lancet, 366(9486):662–664, 2005.
- Hoefnagel et al. [2016] Sander J. M. Hoefnagel, Jolanda A. M. J. L. Janssen, Ronald T. Jaspers, Albert de Haan, and Coen A. C. Ottenheijm. The influence of lifestyle factors on telomere length in the adult population. Journal of Gerontology, 71(12):1467–1474, 2016.