
Treatment Effect Estimation with Disentangled Latent Factors

Weijia Zhang (Corresponding Author), Lin Liu, Jiuyong Li
Abstract

Much research has been devoted to the problem of estimating treatment effects from observational data; however, most methods assume that the observed variables only contain confounders, i.e., variables that affect both the treatment and the outcome. Unfortunately, this assumption is frequently violated in real-world applications, since some variables only affect the treatment but not the outcome, and vice versa. Moreover, in many cases only the proxy variables of the underlying confounding factors can be observed. In this work, we first show the importance of differentiating confounding factors from instrumental and risk factors for both average and conditional average treatment effect estimation, and then we propose a variational inference approach to simultaneously infer latent factors from the observed variables, disentangle the factors into three disjoint sets corresponding to the instrumental, confounding, and risk factors, and use the disentangled factors for treatment effect estimation. Experimental results demonstrate the effectiveness of the proposed method on a wide range of synthetic, benchmark, and real-world datasets.

Introduction

Estimating the effect of a treatment on an outcome is a fundamental problem faced by many researchers and has a wide range of applications across diverse disciplines. In social economics, policy makers need to determine whether a job training program will improve the employment prospects of workers (Athey and Imbens 2016). In online advertising, companies need to predict whether an advertisement campaign will persuade a potential buyer to purchase the product (Rzepakowski and Jaroszewicz 2011).

To estimate treatment effects from observational data, the treatment assignment mechanism needs to be independent of the possible outcomes when conditioned on the observed variables, i.e., the unconfoundedness assumption (Rosenbaum and Rubin 1983) needs to be satisfied. Under this assumption, treatment effects can be estimated from observational data by adjusting for the confounding variables which affect both the treatment assignment and the outcome. The treatment effect estimate may be biased if not all confounders are accounted for in the estimation (Pearl 2009).

From a theoretical perspective, practitioners are tempted to include as many variables as possible to ensure that the unconfoundedness assumption is satisfied. This is because confounders can be difficult to measure in the real world, and practitioners need to include noisy proxy variables to ensure unconfoundedness. For example, the socio-economic status of patients confounds treatment and prognosis, but cannot be included in electronic medical records due to privacy concerns. It is often the case that such unmeasured confounders can be inferred from noisy proxy variables which are easier to measure. For instance, the zip codes and job types of patients can be used as proxies to infer their socio-economic status (Sauer et al. 2013).

From a practical perspective, an inflated number of variables included for confounding adjustment reduces the efficiency of treatment effect estimation. Moreover, it has been previously shown that including unnecessary covariates is suboptimal when the treatment effect is estimated non-parametrically (Hahn 1998; Abadie and Imbens 2006; Häggström 2017). In high-dimensional scenarios, many of the included variables will inevitably not be confounders and should be excluded from the set of adjustment variables.

Most existing treatment effect estimation algorithms treat the given variables “as is” and leave the task of choosing confounding variables to the user. The users are thus left with a dilemma: on the one hand, including more variables than necessary produces inefficient and inaccurate estimators; on the other hand, restricting the number of adjustment variables may exclude the confounders themselves, or proxy variables of the confounders, and thus increase the bias of the estimated treatment effects. With only a handful of variables, the problem can be avoided by consulting domain experts; however, in the big data era a data-driven approach is required to deal with the dilemma.

In this work, we propose a data-driven approach for simultaneously inferring latent factors from proxy variables and disentangling the latent factors into three disjoint sets, as illustrated in Figure 1: the instrumental factors $\mathbf{z}_t$ which only affect the treatment but not the outcome, the risk factors $\mathbf{z}_y$ which only affect the outcome but not the treatment, and the confounding factors $\mathbf{z}_c$ which affect both the treatment and the outcome. Since our method builds upon recent advances in variational autoencoders (Kingma and Welling 2014), we name it Treatment Effect by Disentangled Variational AutoEncoder (TEDVAE). Our main contributions are:

  • We address an important problem in treatment effect estimation from observational data, where the observed variables may contain confounders, proxies of confounders, and non-confounding variables.

  • We propose a data-driven algorithm, TEDVAE, to simultaneously infer latent factors from proxy variables and disentangle confounding factors from the others for a more efficient and accurate treatment effect estimation.

  • We validate the effectiveness of the proposed TEDVAE algorithm on a wide range of synthetic datasets, treatment effect estimation benchmarks and real-world datasets.

The rest of this paper is organized as follows. In Section 2, we discuss related work. The details of TEDVAE are presented in Section 3. In Section 4, we discuss the evaluation metrics, datasets and experimental results. Finally, we conclude the paper in Section 5.

Figure 1: Model diagram for the proposed Treatment Effect by Disentangled Variational AutoEncoder (TEDVAE). $t$ is the treatment and $y$ is the outcome. $\mathbf{x}$ are the “as-is” observed variables, which may contain non-confounders and noisy proxy variables. $\mathbf{z}_t$ are factors that affect only the treatment, $\mathbf{z}_y$ are factors that affect only the outcome, and $\mathbf{z}_c$ are confounding factors that affect both the treatment and the outcome.

Related Work

Treatment effect estimation has steadily drawn the attention of researchers from the statistics and machine learning communities. During the past decade, several tree-based methods (Su et al. 2009; Athey and Imbens 2016; Zhang et al. 2017, 2018) have been proposed to address the problem by designing treatment-effect-specific splitting criteria for recursive partitioning. Ensemble algorithms and meta-algorithms (Künzel et al. 2019; Wager and Athey 2018) have also been explored. For example, Causal Forest (Wager and Athey 2018) builds ensembles using the Causal Tree (Athey and Imbens 2016) as base learner. X-Learner (Künzel et al. 2019) is a meta-algorithm that can utilize off-the-shelf machine learning algorithms for treatment effect estimation.

Deep learning based heterogeneous treatment effect estimation methods have attracted increasing research interest in recent years (Shalit, Johansson, and Sontag 2017; Alaa and Schaar 2018; Louizos et al. 2017; Hassanpour and Greiner 2018; Yao et al. 2018; Yoon, Jordan, and van der Schaar 2018). Counterfactual Regression Net (Shalit, Johansson, and Sontag 2017) and several other methods (Yao et al. 2018; Hassanpour and Greiner 2018) reduce the discrepancy between the treated and untreated groups of samples by learning a representation such that the two groups are as close to each other as possible. However, their designs do not separate the covariates that only contribute to the treatment assignment from those that only contribute to the outcome. Furthermore, these methods are not able to infer latent covariates from proxies.

Variable decomposition (Kun et al. 2017; Häggström 2017) has been previously investigated for average treatment effect estimation. Our method has several major differences from these methods: (i) our method is capable of estimating individual-level heterogeneous treatment effects, whereas the existing ones only focus on the population-level average treatment effect; (ii) we are able to identify non-linear relationships between the latent factors and their proxies, whereas their approaches only model linear relationships. Recently, DR-CFR (Hassanpour and Greiner 2020), a deep representation learning based method, has also been proposed for treatment effect estimation.

Another work closely related to ours is the Causal Effect Variational Autoencoder (CEVAE) (Louizos et al. 2017), which also utilizes a variational autoencoder to learn confounders from observed variables. However, CEVAE does not consider the existence of non-confounders and is not able to learn separate sets of instrumental and risk factors. As demonstrated by our experiments, disentangling the factors significantly improves the performance.

Method

Preliminaries

Let $t_i \in \{0,1\}$ denote a binary treatment, where $t_i=0$ indicates that the $i$-th individual receives no treatment (control) and $t_i=1$ indicates that the individual receives the treatment (treated). We use $y_i(1)$ to denote the potential outcome of $i$ if it were treated, and $y_i(0)$ to denote the potential outcome if it were not treated. Note that only one of the potential outcomes can be realized, and the observed outcome is $y_i = (1-t_i)\,y_i(0) + t_i\,y_i(1)$. Additionally, let $\mathbf{x}_i \in \mathcal{R}^d$ denote the “as is” set of covariates for the $i$-th individual. When the context is clear, we omit the subscript $i$ in the notations.

Throughout the paper, we assume that the following three fundamental assumptions for treatment effect estimation (Rosenbaum and Rubin 1983) are satisfied:

Assumption 1.

(SUTVA) The Stable Unit Treatment Value Assumption requires that the potential outcomes for one unit (individual) are unaffected by the treatment assignment of other units.

Assumption 2.

(Unconfoundedness) The distribution of the treatment is independent of the potential outcomes when conditioning on the observed variables: $t \perp\!\!\!\perp (y(0), y(1)) \mid \mathbf{x}$.

Assumption 3.

(Overlap) Every unit has a non-zero probability of receiving either treatment or control given the observed variables, i.e., $0 < P(t=1|\mathbf{x}) < 1$.

The first goal of treatment effect estimation is to estimate the average treatment effect (ATE), defined as $\mathrm{ATE} = \mathbb{E}[y(1) - y(0)] = \mathbb{E}[y|do(t=1)] - \mathbb{E}[y|do(t=0)]$, where $do(t=1)$ denotes a manipulation on $t$ that removes all its incoming edges and sets $t=1$ (Pearl 2009).

The treatment effect for an individual $i$ is defined as $\tau_i = y_i(1) - y_i(0)$. Due to the counterfactual problem, we never observe $y_i(1)$ and $y_i(0)$ simultaneously, and thus $\tau_i$ is not observed for any individual. Instead, we estimate the conditional average treatment effect (CATE), defined as $\tau(\mathbf{x}) := \mathbb{E}[\tau|\mathbf{x}] = \mathbb{E}[y|\mathbf{x}, do(t=1)] - \mathbb{E}[y|\mathbf{x}, do(t=0)]$.
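To make the distinction between the observed outcome and the unobserved potential outcomes concrete, the short sketch below simulates a dataset in which both $y(0)$ and $y(1)$ are known, so the true ATE can be computed and compared against the naive difference of group means. The sketch is purely illustrative; the data generating process and all variable names are our own assumptions, not the paper's.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                                # a single observed confounder
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))         # treatment assignment depends on x
y0 = x + rng.normal(scale=0.1, size=n)                # potential outcome under control
y1 = x + 2.0 + rng.normal(scale=0.1, size=n)          # potential outcome under treatment
y = (1 - t) * y0 + t * y1                             # only this outcome is observed

true_ate = np.mean(y1 - y0)                           # close to 2.0 by construction
naive = y[t == 1].mean() - y[t == 0].mean()           # biased: ignores confounding by x
print(f"true ATE = {true_ate:.2f}, naive difference = {naive:.2f}")

Because $x$ confounds both $t$ and $y$, the naive difference overestimates the true effect, which is exactly the bias that confounding adjustment is meant to remove.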

Treatment Effect Estimation from Latent Factors

In this work, we propose the TEDVAE model (Figure 1) for estimating treatment effects, where the observed pre-treatment variables $\mathbf{x}$ can be viewed as generated from three disjoint sets of latent factors $\mathbf{z} = (\mathbf{z}_t, \mathbf{z}_c, \mathbf{z}_y)$. Here $\mathbf{z}_t$ are instrumental factors that only affect the treatment but not the outcome, $\mathbf{z}_y$ are risk factors that only affect the outcome but not the treatment, and $\mathbf{z}_c$ are confounding factors that affect both the treatment and the outcome.

On the one hand, the proposed TEDVAE model in Figure 1 provides two important benefits. The first is that, by explicitly modelling the instrumental and risk factors, it accounts for the fact that not all variables in the observed variable set $\mathbf{x}$ are confounders. The second is that it allows for the possibility of learning unobserved confounders from their proxy variables.

On the other hand, our model diagram does not impose any restriction other than the three standard assumptions discussed in the Preliminaries. To see this, consider the case where every variable in $\mathbf{x}$ is itself a confounder; then the generating mechanism in the TEDVAE model becomes $\mathbf{z}_c = \mathbf{x}$ with $\mathbf{z}_t = \mathbf{z}_y = \emptyset$, and the model in Figure 1 becomes identical to the widely used diagram for treatment effect estimation (Figure 2 in (Imbens 2019)).

With our model, the estimation of treatment effect is immediate using the back-door criterion (Pearl 2009):

Theorem 1.

The effect of $t$ on $y$ can be identified if we recover the confounding factors $\mathbf{z}_c$ from the data.

Proof.

From Figure 1 we know that $\mathbf{z}_t$ and $\mathbf{z}_c$ are the parents of the treatment $t$ and are marginally independent of each other. Following (3.13) in (Pearl 2009), adjusting for the parents of $t$ gives

$P(y|do(t)) = \sum_{\mathbf{z}_t}\sum_{\mathbf{z}_c} P(y|t,\mathbf{z}_t,\mathbf{z}_c)\,P(\mathbf{z}_t)\,P(\mathbf{z}_c).$  (1)

Utilizing the fact that $y \perp\!\!\!\perp \mathbf{z}_t \mid t, \mathbf{z}_c$, we have

$P(y|do(t)) = \sum_{\mathbf{z}_t} P(\mathbf{z}_t) \sum_{\mathbf{z}_c} P(y|t,\mathbf{z}_c)\,P(\mathbf{z}_c).$  (2)

Since $\sum_{\mathbf{z}_t} P(\mathbf{z}_t) = 1$, this reduces to

$P(y|do(t)) = \sum_{\mathbf{z}_c} P(y|t,\mathbf{z}_c)\,P(\mathbf{z}_c).$  (3) ∎
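As a sanity check on the adjustment formula in Theorem 1, the following toy numerical example (our own construction, not from the paper) builds a small discrete model in which $\mathbf{z}_c$ confounds $t$ and $y$ while $\mathbf{z}_t$ only influences $t$, and contrasts the back-door adjusted quantity $\sum_{\mathbf{z}_c} P(y|t,\mathbf{z}_c)P(\mathbf{z}_c)$ with the purely observational conditional $P(y|t)$.

import numpy as np

p_zc = np.array([0.3, 0.7])          # P(z_c = 0), P(z_c = 1)
p_zt = np.array([0.5, 0.5])          # P(z_t = 0), P(z_t = 1)
p_t1 = np.array([[0.2, 0.8],         # P(t = 1 | z_t = 0, z_c = 0 or 1)
                 [0.6, 0.9]])        # P(t = 1 | z_t = 1, z_c = 0 or 1)
p_y1 = np.array([[0.1, 0.5],         # P(y = 1 | t = 0, z_c = 0 or 1)
                 [0.4, 0.8]])        # P(y = 1 | t = 1, z_c = 0 or 1)

# Back-door adjustment over the confounding factor z_c (Theorem 1).
p_y1_do = {t: (p_y1[t] * p_zc).sum() for t in (0, 1)}
print("adjusted (interventional) contrast:", p_y1_do[1] - p_y1_do[0])   # 0.30

# Observational contrast P(y = 1 | t = 1) - P(y = 1 | t = 0), biased by z_c.
joint_t1 = p_t1 * p_zt[:, None] * p_zc[None, :]        # P(t = 1, z_t, z_c)
joint_t0 = (1 - p_t1) * p_zt[:, None] * p_zc[None, :]  # P(t = 0, z_t, z_c)
p_y1_given_t1 = (joint_t1 * p_y1[1][None, :]).sum() / joint_t1.sum()
p_y1_given_t0 = (joint_t0 * p_y1[0][None, :]).sum() / joint_t0.sum()
print("observational contrast:", p_y1_given_t1 - p_y1_given_t0)         # ~0.49

The adjusted contrast is 0.30, while the observational contrast is roughly 0.49, illustrating the confounding bias that adjustment over $\mathbf{z}_c$ removes.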

For the estimation of the conditional average treatment effect, our result follows from Theorem 3.4.1 in (Pearl 2009) as shown in the following theorem:

Theorem 2.

The conditional average treatment effect of $t$ on $y$ conditioned on $\mathbf{x}$ can be identified if we recover the confounding factors $\mathbf{z}_c$ and the risk factors $\mathbf{z}_y$.

Proof.

Let $G_{\overline{t}}$ denote the causal structure obtained by removing all incoming edges of $t$ in Figure 1, and $G_{\underline{t}}$ the structure obtained by deleting all outgoing edges of $t$.

Noting that $y \perp\!\!\!\perp \mathbf{z}_t \mid t, \mathbf{z}_y, \mathbf{z}_c$ in $G_{\overline{t}}$, Rule 1 of the do-calculus allows us to remove $\mathbf{z}_t$ from the conditioning set, giving $P(y|do(t),\mathbf{x}) = P(y|do(t),\mathbf{z}_t,\mathbf{z}_c,\mathbf{z}_y) = P(y|do(t),\mathbf{z}_y,\mathbf{z}_c)$. Furthermore, applying Rule 2 with $y \perp\!\!\!\perp t \mid \mathbf{z}_c, \mathbf{z}_y$ in $G_{\underline{t}}$ yields $P(y|do(t),\mathbf{x}) = P(y|do(t),\mathbf{z}_y,\mathbf{z}_c) = P(y|t,\mathbf{z}_y,\mathbf{z}_c)$. ∎

(a) Generative Model. (b) Inference Model.
Figure 2: Overall architecture of the generative network and the inference network of the Treatment Effect by Disentangled Variational AutoEncoder (TEDVAE). White nodes correspond to parametrized deterministic neural network transitions, gray nodes correspond to drawing samples from the respective distributions, and white circles correspond to switching paths according to the treatment $t$. Dashed arrows in the inference model represent the two auxiliary classifiers $q_{\omega_t}(t|\mathbf{z}_t,\mathbf{z}_c)$ and $q_{\omega_y}(y|\mathbf{z}_y,\mathbf{z}_c)$.

An implication of Theorems 1 and 2 is that they are not restricted to binary treatments. In other words, our proposed method can be used for estimating the effect of a continuous treatment variable, while most of the existing estimators are not able to do so. However, due to the lack of datasets with continuous treatment variables for evaluation, we focus on the case of a binary treatment variable and leave the continuous treatment case for future work.

Theorems 1 and 2 suggest that disentangling the confounding factors allows us to exclude unnecessary factors when estimating the ATE and CATE. However, keen readers may wonder: since we have already assumed unconfoundedness, does straightforwardly adjusting for $\mathbf{x}$ not suffice?

Theoretically, it has been shown that both the bias (Abadie and Imbens 2006) and the variance (Hahn 1998) of treatment effect estimation will increase if variables unrelated to the outcome are included during the estimation. Therefore, it is crucial to differentiate the instrumental, confounding and risk factors and only use the appropriate factors during treatment effect estimation. In the next section, we propose our data-driven approach to learn and disentangle the latent factors using a variational autoencoder.

Learning of the Disentangled Latent Factors

In the above discussion, we have seen that excluding unnecessary factors is crucial for efficient and accurate treatment effect estimation. So far, we have assumed that the mechanism which generates the observed variables $\mathbf{x}$ from the latent factors $\mathbf{z}$ and the decomposition of the latent factors $\mathbf{z} = (\mathbf{z}_t, \mathbf{z}_c, \mathbf{z}_y)$ are available. In practice, however, neither the mechanism nor the decomposition is known. Therefore, the practical approach is to utilize the complete set of available variables during modelling to ensure that the unconfoundedness assumption is satisfied, and to employ a data-driven approach to simultaneously learn and disentangle the latent factors into disjoint subsets.

To this end, our goal is to learn the posterior distribution $p(\mathbf{z}|\mathbf{x})$ for the set of latent factors $\mathbf{z} = (\mathbf{z}_t, \mathbf{z}_c, \mathbf{z}_y)$ as illustrated in Figure 1, where $\mathbf{z}_t$, $\mathbf{z}_c$, and $\mathbf{z}_y$ are independent of each other and correspond to the instrumental, confounding, and risk factors, respectively. Because exact inference is intractable, we employ neural network based variational inference. Specifically, we utilize three separate encoders $q_{\phi_t}(\mathbf{z}_t|\mathbf{x})$, $q_{\phi_c}(\mathbf{z}_c|\mathbf{x})$, and $q_{\phi_y}(\mathbf{z}_y|\mathbf{x})$ that serve as variational posteriors over the latent factors. These latent factors are then used by a single decoder $p_\theta(\mathbf{x}|\mathbf{z}_t,\mathbf{z}_c,\mathbf{z}_y)$ for the reconstruction of $\mathbf{x}$. Following standard VAE design, the prior distributions $p(\mathbf{z}_t)$, $p(\mathbf{z}_c)$, $p(\mathbf{z}_y)$ are chosen as Gaussian distributions (Kingma and Welling 2014).

Specifically, the priors over the factors and the generative models for $\mathbf{x}$ and $t$ are described as:

$p(\mathbf{z}_t) = \prod_{j=1}^{D_{z_t}} \mathcal{N}(z_{t_j}|0,1); \quad p(\mathbf{z}_c) = \prod_{j=1}^{D_{z_c}} \mathcal{N}(z_{c_j}|0,1); \quad p(\mathbf{z}_y) = \prod_{j=1}^{D_{z_y}} \mathcal{N}(z_{y_j}|0,1);$
$p(t|\mathbf{z}_t,\mathbf{z}_c) = \mathrm{Bern}(\sigma(f_1(\mathbf{z}_t,\mathbf{z}_c))); \quad p(\mathbf{x}|\mathbf{z}_t,\mathbf{z}_c,\mathbf{z}_y) = \prod_{j=1}^{d} p(x_j|\mathbf{z}_t,\mathbf{z}_c,\mathbf{z}_y),$  (4)

with $p(x_j|\mathbf{z}_t,\mathbf{z}_c,\mathbf{z}_y)$ being a suitable distribution for the $j$-th observed variable, $f_1$ a function parametrized by a neural network, $\sigma(\cdot)$ the logistic function, and $D_{z_t}$, $D_{z_c}$, and $D_{z_y}$ the parameters that determine the dimensions of the instrumental, confounding, and risk factors to be inferred from $\mathbf{x}$.
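As a concrete, hedged illustration of Eq. (4), the sketch below implements the two generative pieces in PyTorch: a single decoder that reconstructs $\mathbf{x}$ from the concatenated latent factors, and a Bernoulli treatment model playing the role of $f_1$. The class name, layer sizes, and the Gaussian likelihood for the covariates are our own choices; the paper only requires a suitable per-covariate distribution.

import torch
import torch.nn as nn

class GenerativeModel(nn.Module):
    def __init__(self, d_x, d_zt, d_zc, d_zy, hidden=100):
        super().__init__()
        d_z = d_zt + d_zc + d_zy
        # Single decoder reconstructing x from all three latent blocks (Gaussian likelihood).
        self.x_net = nn.Sequential(nn.Linear(d_z, hidden), nn.ELU())
        self.x_mu = nn.Linear(hidden, d_x)
        self.x_logvar = nn.Linear(hidden, d_x)
        # Treatment model p(t | z_t, z_c) = Bern(sigma(f_1(z_t, z_c))).
        self.t_net = nn.Sequential(nn.Linear(d_zt + d_zc, hidden), nn.ELU(),
                                   nn.Linear(hidden, 1))

    def forward(self, z_t, z_c, z_y):
        h = self.x_net(torch.cat([z_t, z_c, z_y], dim=-1))
        p_x = torch.distributions.Normal(self.x_mu(h), torch.exp(0.5 * self.x_logvar(h)))
        p_t = torch.distributions.Bernoulli(logits=self.t_net(torch.cat([z_t, z_c], dim=-1)))
        return p_x, p_t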

For a continuous outcome variable $y$, we parametrize $p(y|t,\mathbf{z}_c,\mathbf{z}_y)$ as a Gaussian distribution whose mean and variance are given by a pair of disjoint neural networks defining $p(y|t=1,\mathbf{z}_c,\mathbf{z}_y)$ and $p(y|t=0,\mathbf{z}_c,\mathbf{z}_y)$. This pair of disjoint networks allows the model to handle highly imbalanced treatment groups. Specifically, for continuous $y$ we parametrize it as:

$p(y|t,\mathbf{z}_c,\mathbf{z}_y) = \mathcal{N}(\mu=\hat{\mu}, \sigma^2=\hat{\sigma}^2),$
$\hat{\mu} = t\,f_2(\mathbf{z}_c,\mathbf{z}_y) + (1-t)\,f_3(\mathbf{z}_c,\mathbf{z}_y),$
$\hat{\sigma}^2 = t\,f_4(\mathbf{z}_c,\mathbf{z}_y) + (1-t)\,f_5(\mathbf{z}_c,\mathbf{z}_y),$  (5)

where $f_2$, $f_3$, $f_4$, and $f_5$ are neural networks with their own parameters. The binary outcome case can be parametrized analogously with a Bernoulli distribution.
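Below is a minimal PyTorch sketch of this outcome model with treatment-specific heads for the mean and variance, playing the roles of $f_2$–$f_5$ in Eq. (5). It is an illustration under our own assumptions rather than the official implementation: the layer sizes are arbitrary, and we parametrize the log-variance instead of the variance for numerical stability.

import torch
import torch.nn as nn

class OutcomeDecoder(nn.Module):
    """Gaussian outcome model p(y | t, z_c, z_y) with treatment-specific heads."""

    def __init__(self, d_zc, d_zy, hidden=100):
        super().__init__()
        def head():
            return nn.Sequential(nn.Linear(d_zc + d_zy, hidden), nn.ELU(),
                                 nn.Linear(hidden, 1))
        self.mu_t1, self.mu_t0 = head(), head()          # roles of f_2 and f_3
        self.logvar_t1, self.logvar_t0 = head(), head()  # roles of f_4 and f_5 (as log-variance)

    def forward(self, t, z_c, z_y):
        # t is a float tensor of shape (batch, 1), so the heads are switched per sample.
        h = torch.cat([z_c, z_y], dim=-1)
        mu = t * self.mu_t1(h) + (1 - t) * self.mu_t0(h)
        logvar = t * self.logvar_t1(h) + (1 - t) * self.logvar_t0(h)
        return torch.distributions.Normal(mu, torch.exp(0.5 * logvar))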

In the inference model, the variational approximations of the posteriors are defined as:

$q_{\phi_t}(\mathbf{z}_t|\mathbf{x}) = \prod_{j=1}^{D_{z_t}} \mathcal{N}(\mu=\hat{\mu}_t, \sigma^2=\hat{\sigma}_t^2);$
$q_{\phi_c}(\mathbf{z}_c|\mathbf{x}) = \prod_{j=1}^{D_{z_c}} \mathcal{N}(\mu=\hat{\mu}_c, \sigma^2=\hat{\sigma}_c^2);$
$q_{\phi_y}(\mathbf{z}_y|\mathbf{x}) = \prod_{j=1}^{D_{z_y}} \mathcal{N}(\mu=\hat{\mu}_y, \sigma^2=\hat{\sigma}_y^2),$  (6)

where $\hat{\mu}_t$, $\hat{\mu}_c$, $\hat{\mu}_y$ and $\hat{\sigma}_t^2$, $\hat{\sigma}_c^2$, $\hat{\sigma}_y^2$ are the means and variances of the Gaussian distributions, parametrized by neural networks similarly to $\hat{\mu}$ and $\hat{\sigma}^2$ in the generative model.
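A possible PyTorch sketch of these encoders is given below: each latent block gets its own Gaussian head over $\mathbf{x}$, matching Eq. (6). The hidden sizes and the example dimensionalities (58 covariates with $D_{z_t} = D_{z_c} = 15$ and $D_{z_y} = 5$, as used later for ACIC2016) are illustrative assumptions.

import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Diagonal-Gaussian variational posterior q(z | x) for one latent block."""

    def __init__(self, d_x, d_z, hidden=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_x, hidden), nn.ELU(),
                                 nn.Linear(hidden, hidden), nn.ELU())
        self.mu = nn.Linear(hidden, d_z)
        self.logvar = nn.Linear(hidden, d_z)

    def forward(self, x):
        h = self.net(x)
        return torch.distributions.Normal(self.mu(h), torch.exp(0.5 * self.logvar(h)))

# One encoder per latent block.
enc_zt = GaussianEncoder(d_x=58, d_z=15)
enc_zc = GaussianEncoder(d_x=58, d_z=15)
enc_zy = GaussianEncoder(d_x=58, d_z=5)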

Given the training samples, the parameters can be optimized by maximizing the evidence lower bound (ELBO):

$\mathcal{L}_{\text{ELBO}}(\mathbf{x},y,t) = \mathbb{E}_{q_{\phi_t} q_{\phi_c} q_{\phi_y}}[\log p_\theta(\mathbf{x}|\mathbf{z}_t,\mathbf{z}_c,\mathbf{z}_y)] - D_{KL}(q_{\phi_t}(\mathbf{z}_t|\mathbf{x}) \,\|\, p(\mathbf{z}_t)) - D_{KL}(q_{\phi_c}(\mathbf{z}_c|\mathbf{x}) \,\|\, p(\mathbf{z}_c)) - D_{KL}(q_{\phi_y}(\mathbf{z}_y|\mathbf{x}) \,\|\, p(\mathbf{z}_y)).$  (7)

To encourage the disentanglement of the latent factors and to ensure that the treatment $t$ can be predicted from $\mathbf{z}_t$ and $\mathbf{z}_c$, and the outcome $y$ can be predicted from $\mathbf{z}_y$ and $\mathbf{z}_c$, we add two auxiliary classifiers to the variational lower bound. Finally, the objective of TEDVAE can be expressed as

$\mathcal{L}_{\text{TEDVAE}} = \mathcal{L}_{\text{ELBO}}(\mathbf{x},y,t) + \alpha_t\,\mathbb{E}_{q_{\phi_t} q_{\phi_c}}[\log q_{\omega_t}(t|\mathbf{z}_t,\mathbf{z}_c)] + \alpha_y\,\mathbb{E}_{q_{\phi_y} q_{\phi_c}}[\log q_{\omega_y}(y|t,\mathbf{z}_c,\mathbf{z}_y)],$  (8)

where $\alpha_t$ and $\alpha_y$ are the weights of the auxiliary objectives.
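To show how the pieces fit together, the sketch below computes one training loss in the spirit of Eqs. (7) and (8), reusing module interfaces like the sketches above (enc_zt, enc_zc, enc_zy, gen, outcome, all hypothetical names). As a simplification on our part, the generative treatment and outcome heads double as the auxiliary predictors $q_{\omega_t}$ and $q_{\omega_y}$; the paper describes these as separate classifiers.

import torch
from torch.distributions import Normal, kl_divergence

def tedvae_loss(x, t, y, enc_zt, enc_zc, enc_zy, gen, outcome,
                alpha_t=100.0, alpha_y=100.0):
    # x: (batch, d_x); t and y: (batch, 1) float tensors.
    q_zt, q_zc, q_zy = enc_zt(x), enc_zc(x), enc_zy(x)
    z_t, z_c, z_y = q_zt.rsample(), q_zc.rsample(), q_zy.rsample()  # reparameterized samples

    p_x, p_t = gen(z_t, z_c, z_y)      # reconstruction of x and treatment model
    p_y = outcome(t, z_c, z_y)         # outcome model with treatment-specific heads

    def std_normal(q):                 # standard Gaussian prior matching q's shape
        return Normal(torch.zeros_like(q.loc), torch.ones_like(q.scale))

    # ELBO of Eq. (7): reconstruction term minus the three KL terms.
    elbo = (p_x.log_prob(x).sum(-1)
            - kl_divergence(q_zt, std_normal(q_zt)).sum(-1)
            - kl_divergence(q_zc, std_normal(q_zc)).sum(-1)
            - kl_divergence(q_zy, std_normal(q_zy)).sum(-1))

    # Auxiliary terms of Eq. (8): t predictable from (z_t, z_c), y from (t, z_c, z_y).
    aux = alpha_t * p_t.log_prob(t).sum(-1) + alpha_y * p_y.log_prob(y).sum(-1)
    return -(elbo + aux).mean()        # minimize the negative objective

In practice this loss would be minimized with a stochastic optimizer such as Adam over mini-batches of $(\mathbf{x}, t, y)$.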

For predicting the CATEs of new subjects given their observed covariates $\mathbf{x}$, we use the encoders $q_{\phi_c}(\mathbf{z}_c|\mathbf{x})$ and $q_{\phi_y}(\mathbf{z}_y|\mathbf{x})$ to sample from the posteriors of the confounding and risk factors $l$ times, and average the outcomes $y$ predicted by the auxiliary classifier $q_{\omega_y}(y|t,\mathbf{z}_c,\mathbf{z}_y)$.
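A hedged sketch of this prediction step, using the outcome head from the earlier sketches in place of $q_{\omega_y}$, might look as follows; the function name and the default number of posterior draws are our own choices.

import torch

@torch.no_grad()
def predict_cate(x, enc_zc, enc_zy, outcome, l=100):
    # Posteriors over the confounding and risk factors given new covariates x.
    q_zc, q_zy = enc_zc(x), enc_zy(x)
    taus = []
    for _ in range(l):
        z_c, z_y = q_zc.sample(), q_zy.sample()
        t1 = torch.ones(x.shape[0], 1)
        y1 = outcome(t1, z_c, z_y).mean                     # E[y | t = 1, z_c, z_y]
        y0 = outcome(torch.zeros_like(t1), z_c, z_y).mean   # E[y | t = 0, z_c, z_y]
        taus.append(y1 - y0)
    return torch.stack(taus).mean(dim=0)                    # average over the l draws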

An important difference between TEDVAE and CEVAE lies in their inference models. During inference, CEVAE depends on $t$, $\mathbf{x}$ and $y$ for inferring $\mathbf{z}$; in other words, CEVAE needs to estimate $p(t|\mathbf{x})$ and $p(y|t,\mathbf{x})$, infer $\mathbf{z}$ as $p(\mathbf{z}|t,y,\mathbf{x})$, and finally predict the CATE as $\hat{\tau}(\mathbf{x}) = \mathbb{E}[y|t=1,\mathbf{z}] - \mathbb{E}[y|t=0,\mathbf{z}]$. The estimation of $p(t|\mathbf{x})$ and $p(y|t,\mathbf{x})$ is unnecessary, since we assume that $t$ and $y$ are generated by the latent factors, and inferring the latents should depend only on $\mathbf{x}$, as in TEDVAE. As we later show in the experiments, this difference is crucial even when no instrumental or risk factors are present in the data.

Experiments

We empirically compare TEDVAE with traditional and neural network based treatment effect estimators. For traditional methods, we compare with purpose-built methods including the Squared t-statistic Tree (t-stats) (Su et al. 2009) and Causal Tree (CT) (Athey and Imbens 2016); ensemble methods including Causal Random Forest (CRF) (Wager and Athey 2018) and Bayesian Additive Regression Trees (BART) (Hill 2011); and the meta-algorithm X-Learner (Künzel et al. 2019) using Random Forest (Breiman et al. 1984) as base learner (X-RF). For deep learning based methods, we compare with representation learning based methods including Counterfactual Regression Net (CFR) (Shalit, Johansson, and Sontag 2017), Similarity Preserved Individual Treatment Effect (SITE) (Yao et al. 2018), and the deep learning variable decomposition method for Counterfactual Regression (DR-CFR) (Hassanpour and Greiner 2020). We also compare with generative methods including the Causal Effect Variational Autoencoder (CEVAE) (Louizos et al. 2017) and GANITE (Yoon, Jordan, and van der Schaar 2018). Parameters of the compared methods are tuned by cross-validated grid search over the value ranges recommended in their code repositories. Our code is available at https://github.com/WeijiaZhang24/TEDVAE.

Figure 3: Comparison of CEVAE and TEDVAE under different settings using the synthetic datasets. Rows: results when the data generating procedure satisfies the assumptions of the TEDVAE model and of the CEVAE model, respectively. Columns: (Left) varying the proportion of treated samples; (Middle) varying the scale of the outcome; (Right) varying the scale of the CATE. (Figures are best viewed in colour.)

Evaluation Criteria

For evaluating the performance of CATE estimation, we use the Precision in Estimation of Heterogeneous Effect (PEHE) (Hill 2011; Shalit, Johansson, and Sontag 2017; Louizos et al. 2017; Dorie et al. 2019), which measures the root mean squared distance between the estimated and the true CATE when ground truth is available: $\epsilon_{\text{PEHE}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{\tau}(\mathbf{x}_i) - \tau(\mathbf{x}_i))^2}$, where $\tau(\mathbf{x}_i)$ is the ground truth CATE for the subject with observed variables $\mathbf{x}_i$.

For evaluating the performance of average treatment effect (ATE) estimation, the ground truth ATE can be calculated by averaging the differences of the outcomes in the treated and control groups when randomized controlled trial data are available. The estimated ATE, obtained from a non-randomized sample of the dataset (observational, or created via biased sampling), is then compared against the ground truth using the absolute error in ATE (Hill 2011; Shalit, Johansson, and Sontag 2017; Louizos et al. 2017; Yao et al. 2018): $\epsilon_{\text{ATE}} = |\hat{\tau} - \frac{1}{N}\sum_{i=1}^{N}[t_i y_i - (1-t_i) y_i]|$, where $\hat{\tau}$ is the estimated ATE, and $t_i$ and $y_i$ are the treatments and outcomes from the randomized data. For both $\epsilon_{\text{ATE}}$ and $\epsilon_{\text{PEHE}}$, we use superscripts $tr$ and $te$ to denote their values on the training and test sets, respectively.
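For reference, a minimal sketch of the two metrics is given below, assuming a vector of estimated CATEs with known ground truth (for PEHE) and a randomized sample whose ground-truth ATE is taken as the difference of treated and control group means; the helper names are ours.

import numpy as np

def pehe(tau_hat, tau_true):
    """Root mean squared error between estimated and true CATEs."""
    return float(np.sqrt(np.mean((np.asarray(tau_hat) - np.asarray(tau_true)) ** 2)))

def ate_error(ate_hat, t_rct, y_rct):
    """Absolute ATE error against a randomized sample."""
    t_rct, y_rct = np.asarray(t_rct), np.asarray(y_rct)
    ate_true = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()  # ground truth from RCT data
    return float(abs(ate_hat - ate_true))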

Synthetic Datasets

We first conduct experiments using synthetic datasets to investigate TEDVAE's capability of inferring the latent factors and estimating the treatment effects. Due to the page limit, we only provide an outline of the synthetic datasets here and give the detailed settings in the supplementary materials.

The first setting of the synthetic datasets studies the benefit of disentangling the confounding factors from the instrumental and risk factors, and the data are generated using the structure depicted in Figure 1. The results are shown in the first row of Figure 3. When the instrumental and risk factors exist in the data, the benefit of disentanglement is significant, as demonstrated by the gap between the PEHE curves of TEDVAE and CEVAE. When the proportion of treated samples varies, the performance of CEVAE fluctuates severely and its error remains high even when the dataset is balanced; in contrast, the PEHEs of TEDVAE are stable even when the dataset is highly imbalanced and always stay significantly lower than those of CEVAE. When the scales of the outcome and the CATE change, TEDVAE also performs consistently and significantly better than CEVAE.

The second setting of the synthetic datasets is designed to study how TEDVAE performs when the instrumental and risk factors are absent, and it follows the same data generating procedure as in CEVAE (Louizos et al. 2017). The results are shown in the second row of Figure 3. Since the instrumental factors $\mathbf{z}_t$ and risk factors $\mathbf{z}_y$ do not exist in this setting, it would be reasonable to expect CEVAE to perform better than TEDVAE. However, from the second row of Figure 3 we can see that TEDVAE either performs better than, or as well as, CEVAE over a wide range of parameters. This is possibly due to the difference in how the two methods predict for previously unseen samples: CEVAE needs to follow the complicated procedure of first estimating $p(t|\mathbf{x})$ and $p(y|t,\mathbf{x})$ and then inferring the latents as $p(\mathbf{z}|t,y,\mathbf{x})$, whereas TEDVAE does not. These results suggest that TEDVAE is able to effectively learn the latent factors and estimate the CATE even when the instrumental and risk factors are absent, and that it is robust to the selection of the latent dimensionality parameters.

Next, we investigate whether TEDVAE is capable of recovering the latent factors $\mathbf{z}_t$, $\mathbf{z}_c$, and $\mathbf{z}_y$ that are used to generate the observed covariates $\mathbf{x}$. To do so, we compare the performance of TEDVAE with the $D_{z_t}$, $D_{z_c}$ and $D_{z_y}$ parameters set to 10 against the performance obtained when one of the latent dimensionality parameters is set to 0, i.e., setting $D_{z_t}=0$ and forcing TEDVAE to ignore the existence of $\mathbf{z}_t$. If TEDVAE is indeed capable of recovering the latent factors, then its performance with non-zero latent dimensionality parameters should be better than its performance when ignoring the existence of any of the latent factors. Figure 4 illustrates the capability of TEDVAE for identifying the latent factors using radar charts. Taking Figure 4(a) as an example, the $z_t$ and $\neg z_t$ polygons correspond to the performance of TEDVAE when setting the dimension parameter $D_{z_t}=5$ (identify $\mathbf{z}_t$) and $D_{z_t}=0$ (ignore $\mathbf{z}_t$). From the figures we can clearly see that the performance of TEDVAE is significantly better when the latent dimensionality parameters are set to non-zero values than when any of them is set to 0.

Figure 4: Radar charts for TEDVAE’s capability in identifying the latent factors. Each vertex on the polygons is identified with the latent factors’ dimension sequence of the associated synthetic dataset. For example, 5-5-5 indicates that the dataset is generated using 5 dimensions each for the instrumental, confounding, and risk factors.

Benchmarks and Real-world Datasets

In this section, we use two benchmark datasets for treatment effect estimation to compare TEDVAE with the baselines.

Benchmark I: 2016 Atlantic Causal Inference Challenge

Methods $\epsilon_{\text{PEHE}}^{tr}$ $\epsilon_{\text{PEHE}}^{te}$
CT 4.81±0.18 4.96±0.21
t-stats 5.18±0.18 5.44±0.20
CF 2.16±0.17 2.18±0.19
BART 2.13±0.18 2.17±0.15
X-RF 1.86±0.15 1.89±0.16
CFR 2.05±0.18 2.18±0.20
SITE 2.32±0.19 2.41±0.23
DR-CFR 2.44±0.20 2.56±0.21
GANITE 2.78±0.56 2.84±0.61
CEVAE 3.12±0.28 3.28±0.35
TEDVAE 1.75±0.14 1.77±0.17
Table 1: Means and standard deviations of the PEHE metric (smaller is better) for the training and test sets on the 77 benchmark datasets from ACIC2016. Bolded values indicate the best performers (Wilcoxon signed rank tests at $p=0.05$).

The 2016 Atlantic Causal Inference Challenge (ACIC2016) (Dorie et al. 2019) contains 77 different settings of benchmark datasets that are designed to test causal inference algorithms under a diverse range of real-world scenarios. The dataset contains 4802 observations and 58 variables. The outcome and treatment variables are generated using different data generating procedures for the 77 settings, providing benchmarks for a wide range of treatment effect estimation scenarios. This dataset can be accessed at https://github.com/vdorie/aciccomp/tree/master/2016.

We report the average PEHE metrics across the 77 settings, where each setting is repeated for 10 replications. For TEDVAE, the parameters are selected using the average over the first five settings, instead of being tuned separately for each of the 77 settings. This approach has two benefits: firstly and most importantly, if an algorithm performs well using the same parameters across all 77 settings, it indicates that the algorithm is not sensitive to the choice of parameters and thus would be easier for practitioners to use in real-world scenarios; secondly, it saves computation, as tuning parameters across a large number of datasets can be computationally overwhelming for practitioners. As a result, we set the latent dimensionality parameters to $D_{z_y}=5$, $D_{z_t}=15$, $D_{z_c}=15$ and the weights of the auxiliary losses to $\alpha_t=\alpha_y=100$. All parametrized neural networks use 5 hidden layers with 100 hidden neurons each and ELU activations, and we use a 60%/30%/10% train/validation/test split.

The results on the ACIC2016 datasets are reported in Table 1. We can see that TEDVAE performs significantly better than the compared methods. These results show that, without tuning parameters individually for each setting, TEDVAE achieves state-of-the-art performance across a diverse range of data generating procedures, which empirically demonstrates that TEDVAE is effective for treatment effect estimation across different settings.

Methods Setting A ($\epsilon_{\text{PEHE}}^{tr}$ $\epsilon_{\text{PEHE}}^{te}$) Setting B ($\epsilon_{\text{PEHE}}^{tr}$ $\epsilon_{\text{PEHE}}^{te}$)
CT 1.48±0.12 1.56±0.13 5.46±0.08 5.73±0.09
t-stats 1.78±0.09 1.91±0.12 5.40±0.08 5.71±0.09
CF 1.01±0.08 1.09±0.16 3.86±0.05 3.91±0.07
BART 0.87±0.07 0.88±0.07 2.78±0.03 2.91±0.04
X-RF 0.98±0.08 1.09±0.15 3.50±0.04 3.59±0.06
CFR 0.67±0.02 0.73±0.04 2.60±0.04 2.76±0.04
SITE 0.65±0.07 0.67±0.06 2.65±0.04 2.87±0.05
DR-CFR 0.62±0.15 0.65±0.18 2.73±0.04 2.93±0.05
GANITE 1.84±0.34 1.90±0.40 3.68±0.38 3.84±0.52
CEVAE 0.95±0.12 1.04±0.14 2.90±0.10 3.24±0.12
TEDVAE 0.59±0.11 0.60±0.14 2.10±0.09 2.22±0.08
Table 2: Means and standard deviations of the PEHE metric (smaller is better) on IHDP, for Setting A (first two columns) and Setting B (last two columns). Bolded values indicate the best performers (Wilcoxon signed rank tests at $p=0.05$).

Benchmark II: Infant Health Development Program

The Infant Health and Development Program (IHDP) is a randomized controlled study designed to evaluate the effect of home visits from specialist doctors on the cognitive test scores of premature infants. The dataset was first used for benchmarking treatment effect estimation algorithms in (Hill 2011), where selection bias is induced by removing a non-random subset of the treated subjects to create an observational dataset, and the outcomes are simulated using the original covariates and treatments. It contains 747 subjects and 25 variables describing the characteristics of the infants and of their mothers. We use the same procedure as described in (Hill 2011), which includes two settings of this benchmark, “Setting A” and “Setting B”, where the outcomes follow a linear relationship with the variables in Setting A and an exponential relationship in Setting B. The datasets can be accessed at https://github.com/vdorie/npci. The reported performances are averaged over 100 replications with training/validation/test split proportions of 60%/30%/10%.

Since evaluating treatment effect estimation is difficult in real-world scenarios (Alaa and van der Schaar 2019), a good treatment effect estimation algorithm should perform well across different datasets with minimal parameter tuning. Therefore, for TEDVAE we use the same parameters as on the ACIC dataset and do not perform parameter tuning on the IHDP dataset. For the compared traditional methods, we also use the same parameters as selected on the ACIC benchmark. For the compared deep learning methods, we conduct grid search using the recommended parameter ranges from the relevant papers.

From Table 2 we can see that TEDVAE achieves the lowest PEHE errors among the compared methods on both Setting A and Setting B of the IHDP benchmark. Wilcoxon signed rank tests (p=0.05p=0.05) indicate that TEDVAE is significantly better than the compared methods. Since TEDVAE uses the same parameters on the IHDP datasets as in the previous ACIC benchmarks, these results demonstrate that the TEDVAE model is suitable for diverse real-world scenarios and is robust to the choice of parameters.

Methods $\epsilon_{\text{ATE}}^{tr}$ $\epsilon_{\text{ATE}}^{te}$
CT 0.034±0.002 0.038±0.007
t-stats 0.032±0.003 0.033±0.005
CF 0.025±0.001 0.025±0.001
BART 0.050±0.002 0.051±0.002
X-RF 0.075±0.003 0.074±0.004
CFR 0.029±0.002 0.030±0.002
SITE 0.031±0.003 0.033±0.003
DR-CFR 0.032±0.002 0.034±0.003
GANITE 0.016±0.004 0.018±0.005
CEVAE 0.046±0.020 0.047±0.021
TEDVAE 0.006±0.002 0.006±0.002
Table 3: Means and standard deviations of $\epsilon_{\text{ATE}}$ (smaller is better) on the Twins dataset. Bolded values indicate the best performers (Wilcoxon signed rank tests at $p=0.05$).

Real-world Dataset: Twins

In this section, we use a real-world randomized dataset to compare the methods' capability of estimating the average treatment effect.

The Twins dataset has been previously used for evaluating causal inference methods in (Louizos et al. 2017; Yao et al. 2018). It consists of samples of twin births in the U.S. between 1989 and 1991, provided in (Almond, Chay, and Lee 2005). Each subject is described by 40 variables related to the parents, the pregnancy and the birth statistics of the twins. The treatment is $t=1$ if a sample is the heavier of the twins, and $t=0$ if the sample is the lighter one. The outcome is a binary variable indicating the child's mortality after a one-year follow-up period. Following the procedure in (Yao et al. 2018), we remove the subjects born with a weight heavier than 2,000g and those with missing values, and introduce selection bias by removing a non-random subset of the subjects. The final dataset contains 4,813 samples. The data splitting is the same as in the previous experiments, and the reported results are averaged over 100 replications. The ATE estimation performances are reported in Table 3. On this dataset, we can see that TEDVAE achieves the best performance with the smallest $\epsilon_{\text{ATE}}$ among all the compared algorithms.

Overall, the experimental results show that the performance of TEDVAE is significantly better than that of the compared methods on a wide range of synthetic, benchmark, and real-world datasets. In addition, the results also indicate that TEDVAE is less sensitive to the choice of parameters than the other deep learning based methods, which makes our method attractive for real-world application scenarios.

Conclusion

We propose the TEDVAE algorithm, a state-of-the-art treatment effect estimator which infers and disentangles three disjoint sets of instrumental, confounding and risk factors from the observed variables. Experiments on a wide range of synthetic, benchmark, and real-world datasets show that TEDVAE significantly outperforms the compared baselines. For future work, a path worth exploring is extending TEDVAE to treatment effects with non-binary treatment variables. While most existing methods are restricted to binary treatments, the generative model of TEDVAE makes it a promising candidate for extension to treatment effect estimation with continuous treatments.

References

  • Abadie and Imbens (2006) Abadie, A.; and Imbens, G. W. 2006. Large Sample Properties of Matching Estimators for Average Treatment Effects. Econometrica 74(1): 235–267.
  • Alaa and Schaar (2018) Alaa, A.; and Schaar, M. 2018. Limits of Estimating Heterogeneous Treatment Effects: Guidelines for Practical Algorithm Design. In Proceedings of the 35th International Conference on Machine Learning, 129–138.
  • Alaa and van der Schaar (2019) Alaa, A. M.; and van der Schaar, M. 2019. Validating Causal Inference Models via Influence Functions. In Proceedings of the 36th International Conference on Machine Learning, 191–201.
  • Almond, Chay, and Lee (2005) Almond, D.; Chay, K. Y.; and Lee, D. S. 2005. The costs of low birth weight. The Quarterly Journal of Economics 120(3): 1031–1083.
  • Athey and Imbens (2016) Athey, S.; and Imbens, G. 2016. Recursive Partitioning for Heterogeneous Causal Effects. Proceedings of the National Academy of Sciences of the United States of America 113(27): 7353–7360.
  • Breiman et al. (1984) Breiman, L.; Friedman, J.; Stone, C. J.; and Olshen, R. 1984. Classification and Regression Trees. Belmont: Wadsworth.
  • Dorie et al. (2019) Dorie, V.; Hill, J.; Shalit, U.; Scott, M.; and Cervone, D. 2019. Automated versus Do-It-Yourself Methods for Causal Inference: Lessons Learned from a Data Analysis Competition. Statistical Science 34(1): 43–68.
  • Hahn (1998) Hahn, J. 1998. On the Role of the Propensity Score in Efficient Semiparametric Estimation of Average Treatment Effects. Econometrica 66(2): 315–331.
  • Hassanpour and Greiner (2018) Hassanpour, N.; and Greiner, R. 2018. CounterFactual Regression with Importance Sampling Weights. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI’19), 5880–5887.
  • Hassanpour and Greiner (2020) Hassanpour, N.; and Greiner, R. 2020. Learning Disentangled Representations for CounterFactual Regression. In Proceedings of the 8th International Conference on Learning Representations.
  • Hill (2011) Hill, J. L. 2011. Bayesian Nonparametric Modeling for Causal Inference. Journal of Computational and Graphical Statistics 20(1): 217–240.
  • Häggström (2017) Häggström, J. 2017. Data-driven confounder selection via Markov and Bayesian networks. Biometrics 74(2): 389–398.
  • Imbens (2019) Imbens, G. W. 2019. Potential Outcome and Directed Acyclic Graph Approaches to Causality: Relevance for Empirical Practice in Economics. Journal of Economic Literature 58(4): 1129–1179.
  • Kingma and Welling (2014) Kingma, D. P.; and Welling, M. 2014. Auto-Encoding Variational Bayes. In Proceedings of the 5th International Conference on Learning Representations (ICLR’14).
  • Kun et al. (2017) Kun, K.; Cui, P.; Li, B.; Jiang, M.; Yang, S.; and Wang, F. 2017. Treatment effect estimation with data-driven variable decomposition. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI’17), 140–146.
  • Künzel et al. (2019) Künzel, S. R.; Sekhon, J. S.; Bickel, P. J.; and Yu, B. 2019. Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the National Academy of Sciences 116(10): 4156–4165.
  • Louizos et al. (2017) Louizos, C.; Shalit, U.; Mooij, J.; Sontag, D.; Zemel, R.; and Welling, M. 2017. Causal Effect Inference with Deep Latent-Variable Models. In Advances in Neural Information Processing Systems 30 (NeurIPS’17).
  • Pearl (2009) Pearl, J. 2009. Causality. Cambridge University Press.
  • Rosenbaum and Rubin (1983) Rosenbaum, P. R.; and Rubin, D. B. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika 70(1): 41–55.
  • Rzepakowski and Jaroszewicz (2011) Rzepakowski, P.; and Jaroszewicz, S. 2011. Decision trees for uplift modeling with single and multiple treatments. Knowledge and Information Systems 32(2): 303–327.
  • Sauer et al. (2013) Sauer, B. C.; Brookhart, M. A.; Roy, J.; and VanderWeele, T. 2013. A review of covariate selection for non-experimental comparative effectiveness research. Pharmacoepidemiology and Drug Safety 22(11): 1139–1145.
  • Shalit, Johansson, and Sontag (2017) Shalit, U.; Johansson, F. D.; and Sontag, D. 2017. Estimating individual treatment effect: generalization bounds and algorithms. In Proceedings of the 34th International Conference on Machine Learning (ICML’17), volume 70, 3076–3085.
  • Su et al. (2009) Su, X.; Tsai, C.-L.; Wang, H.; Nickerson, D. M.; and Li, B. 2009. Subgroup analysis via Recursive Partitioning. Journal of Machine Learning Research 10: 141–158.
  • Wager and Athey (2018) Wager, S.; and Athey, S. 2018. Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. Journal of the American Statistical Association 113(523): 1228–1242.
  • Yao et al. (2018) Yao, L.; Li, S.; Li, Y.; Huai, M.; Gao, J.; and Zhang, A. 2018. Representation Learning for Treatment Effect Estimation from Observational Data. In Advances in Neural Information Processing Systems 31 (NeurIPS’18), 2638–2648.
  • Yoon, Jordan, and van der Schaar (2018) Yoon, J.; Jordan, J.; and van der Schaar, M. 2018. GANITE: Estimation of Individualized Treatment Effects using Generative Adversarial Nets. In Proceedings of the 9th International Conference on Learning Representations (ICLR’18).
  • Zhang et al. (2018) Zhang, W.; Le, T. D.; Liu, L.; and Li, J. 2018. Estimating heterogeneous treatment effect by balancing heterogeneity and fitness. BMC Bioinformatics 19(S19): 61–72.
  • Zhang et al. (2017) Zhang, W.; Le, T. D.; Liu, L.; Zhou, Z.-H.; and Li, J. 2017. Mining heterogeneous causal effects for personalized cancer treatment. Bioinformatics 33(15): 2372–2378.