
Marginal treatment effects in the absence of instrumental variables

Zhewen Pan1,2   Zhengxin Wang1,2   Junsen Zhang3   Yahong Zhou4
1Center for Quantitative Economic Research, Zhejiang University of Finance & Economics
2School of Economics, Zhejiang University of Finance & Economics
3School of Economics, Zhejiang University
4School of Economics, Shanghai University of Finance & Economics
Corresponding address: School of Economics, Zhejiang University, 866 Yuhangtang Rd, Hangzhou 310058, P.R. China. Email: [email protected] (Junsen Zhang).
Abstract

We propose a method for defining, identifying, and estimating the marginal treatment effect (MTE) without imposing the instrumental variable (IV) assumptions of independence, exclusion, and separability (or monotonicity). Under a new definition of the MTE based on a reduced-form treatment error that is statistically independent of the covariates, we find that the relationship between the MTE and standard treatment parameters holds in the absence of IVs. We provide a set of sufficient conditions ensuring the identification of the defined MTE in an environment of essential heterogeneity. The key conditions include a linear restriction on the potential outcome regression functions, a nonlinear restriction on the propensity score, and a conditional mean independence restriction that leads to additive separability. We prove this identification result using the notion of semiparametric identification based on functional form. Finally, we provide an empirical application to the Head Start program to illustrate the usefulness of the proposed method in analyzing heterogeneous causal effects when IVs are elusive.

Keywords: causal inference; treatment effect heterogeneity; point identification; Head Start

JEL Codes: C14, C31, C51

1 Introduction

The marginal treatment effect (MTE), which was developed in a series of seminal papers by Heckman and Vytlacil (1999, 2001, 2005), has become one of the most popular tools in the social sciences for describing, interpreting, and analyzing the heterogeneity of the effect of a nonrandom treatment. The MTE is defined as the expected treatment effect conditional on the observed covariates and a normalized error term consisting of unobservable determinants of the treatment status. Although the definition of the MTE does not necessitate an instrumental variable (IV), nearly all existing identification strategies for the MTE rely heavily on valid instrumental variation that can induce otherwise similar individuals into different treatment choices. This is mainly because the MTE has been regarded as an extension of and supplement to the standard IV method in the causal inference literature. However, in practical applications, the validity of IVs is usually difficult to justify. In the most common case of just-identification, in which only one IV is available, the exogeneity or randomness of the IV is fundamentally untestable. The exclusion restriction, which is another condition underlying the validity of IVs, has also been challenged frequently (e.g., van den Berg, 2007; Jones, 2015).

Motivated by the potential invalidity of the available instruments, and inspired by the observation that instruments are unnecessary in the definition of the MTE, we model, identify, and estimate the MTE without imposing the standard IV assumptions of conditional independence, exclusion, and separability. Namely, we allow all the observed covariates to be statistically correlated with the error term and to have a direct impact on the outcome in addition to an indirect impact through the treatment variable. The cornerstone of our model is a normalized treatment equation, which determines treatment participation by the propensity score crossing a reduced-form error term that is statistically independent of (although functionally dependent on) all the covariates. This is comparable to the conventional IV model, in which normalization is performed with respect to only the non-IV covariates. The independence of the reduced-form treatment error from all the covariates can partly justify a conditional mean independence assumption on the potential outcome residuals, which ensures the additive separability of the MTE into observables and unobservables. This separability then facilitates the semiparametric identification of the MTE, given a linear restriction on the potential outcome regression functions and a nonlinear restriction on the propensity score function. We prove the identification by representing the MTE as a known function of conditional moments of observed variables. Intuitively, the identification power comes from the excluded nonlinear variation in the propensity score, which plays the role of the IV in exogenously perturbing the treatment status. Our identification strategy closely resembles the semiparametric counterpart of identification based on functional form (Dong, 2010; Escanciano et al., 2016; Lewbel, 2019; Pan et al., 2022). We build our result on the notion of identification based on functional form mainly because (i) it can lead to point identification and estimation, (ii) the required assumptions are regular, and (iii) the resulting estimation procedures are the most compatible with the standard procedures in conventional IV-MTE models. After identification, the MTE can be semiparametrically estimated by implementing the kernel-weighted pairwise difference regression for the sample selection model (Ahn and Powell, 1993) for each treatment status.

We contribute to the literature on program evaluation under essential heterogeneity by proposing a method for defining, identifying, and estimating the MTE in the absence of IVs. The main value of this IV-free framework of the MTE is threefold. First, it provides an approach for consistently estimating heterogeneous causal effects when a valid IV is hard to find. Second, it suggests an easy-to-implement means for testing the validity of a candidate IV, because it nests the models that assume exclusion restrictions. For instance, in studies on returns to education, the validity of most instruments for educational attainment is suspect, such as parental education, distance to the school, and local labor market conditions (Kédagni and Mourifié, 2020; Mourifié et al., 2020). Our framework enables a reliable evaluation of returns to education without imposing IV assumptions on the candidate instruments and implies a simple test for exclusion restrictions by t-testing the instruments' coefficients in the potential outcome equations. Third, even when the validity of the IV is verified, identification based on functional form can be invoked as a way to increase the efficiency of estimation or to check the robustness of the results to alternative identifying assumptions.

Early explorations of the identification of endogenous regression models when IVs are unavailable focused on linear systems of simultaneous equations, in which the lack of IVs is associated with hardly justifiable exclusion restrictions and insufficient moment conditions. The typical strategy for addressing this underidentification problem is to impose second- or higher-order moment restrictions to construct instruments from the available exogenous covariates (e.g., Fisher, 1966; Lewbel, 1997; Klein and Vella, 2010). In particular, imposing restrictions on the correlation between the covariates and the variance matrix of the vector of model errors leads to identification based on heteroscedasticity (Lewbel, 2012). D'Haultfœuille et al. (2021) considered a more general nonseparable, nonparametric triangular model without the exclusion restriction, and established identification for the control function approach based on a local irrelevance condition. However, these results are restricted to the case of a continuous endogenous variable or treatment. Lewbel (2018) showed that, in linear regression models, the moment conditions of identification based on heteroscedasticity can be satisfied when the endogenous treatment is binary, at the expense of strong restrictions on the model errors. Alternatively, Conley et al. (2012) and Van Kippersluis and Rietveld (2018) presented a sensitivity analysis approach for performing inference in linear IV models while relaxing the exclusion restriction to a certain extent. However, the linear models implicitly impose an undesirable homogeneity assumption on treatment effects.

The literature on sample selection models without the exclusion restriction is also closely related. A general solution to the lack of excluded variables is partial identification and set estimation (Honoré and Hu, 2020, 2022; Westphal et al., 2022). The limitation of this approach is that the identified or estimated set may be too wide to be informative. Another solution is identification at infinity, which uses the data only for large values of a special regressor (Chamberlain, 1986; Lewbel, 2007) or of the outcome (D'Haultfœuille and Maurel, 2013). Although identification at infinity can lead to point identification, it is typically characterized as irregular identification (Khan and Tamer, 2010), and the derived estimate converges at a rate slower than n^{-1/2}, where n is the sample size. Heckman (1979) exploited nonlinearity in the selectivity correction function to achieve point identification and root-n consistent estimation of the linear coefficients of a parametric sample selection model, which is the original version of identification based on functional form. However, Heckman's model imposes a restrictive bivariate normality assumption on the error terms and thus poses a risk of model misspecification. Escanciano et al. (2016) extended Heckman's approach to a general semiparametric model and established identification of the linear coefficients by exploiting nonlinearity elsewhere in the model. At the expense of the generality of their model, the identification result of Escanciano et al. (2016) holds only up to scale and requires a scale normalization assumption. This normalization would be innocuous if we knew the sign of the normalized coefficient and were interested in only the sign, but not the magnitude, of the other coefficients. However, the magnitude of the linear coefficients is indispensable to the evaluation of a program or a policy. In addition, the identifying assumption about nonlinearity developed by Escanciano et al. (2016) implicitly rules out the case of two continuous covariates. We adapt the result of Escanciano et al. (2016) to the MTE model by remedying these two defects. First, we take advantage of the specific model structure to relax the scale normalization and identify the magnitude of the linear coefficients. Moreover, the identifying assumption about nonlinearity is simplified owing to the specific model structure, and we give an intuitive interpretation of the new nonlinearity assumption. Second, we combine the nonlinearity assumption with a local irrelevance condition to allow for an arbitrary number of continuous covariates.

The rest of this paper is organized as follows. In Section 2, we introduce our model and the definition of the MTE without IVs. In Section 3, we propose a possible set of sufficient conditions for the identification of the MTE, in place of the standard IV assumptions. The key conditions include semiparametric functional form restrictions and the conditional mean independence assumption, of which the latter implies the additive separability of the MTE into observed and unobserved components. Section 4 suggests consistent estimation procedures, and Section 5 provides an empirical application to Head Start. Section 6 concludes. Online appendices comprise (A) a proof of the primary identification result, (B) a discussion of variants of the nonlinearity assumption, and (C) an identification result for limited valued outcomes, which entails a slight modification of the assumptions.

2 Model

In the following, we denote random variables or random vectors by capital letters, such as U, and their possible realizations by the corresponding lowercase letters, such as u. We denote F_{U}\left(\cdot\right) as the cumulative distribution function (CDF) of U and F_{U\left|X\right.}\left(\cdot\left|x\right.\right) as the conditional CDF of U given X=x. Our analysis builds on the potential outcomes framework developed by Roy (1951) in econometrics and Rubin (1973a, b) in statistics. Specifically, we consider a binary treatment, denoted by D, and let Y_{1} and Y_{0} denote the potential outcomes if the individuals are treated (D=1) or not treated (D=0), respectively. The observed outcome is

Y=DY_{1}+\left(1-D\right)Y_{0},

and the quantity of interest is the counterfactual treatment effect Y_{1}-Y_{0}. We suppose that the treatment status is determined by the following threshold crossing rule:

D=1\left\{\mu\left(X\right)\geq U\right\}, (1)

where X is a vector of pretreatment covariates, 1\left\{A\right\} is the indicator function of event A, \mu\left(\cdot\right) is an unknown function, and U is a structural error term containing unobserved characteristics that may affect participation in the treatment, such as the opportunity costs or intangible benefits of the treatment.

Compared with the conventional MTE model, we relax the independence and separability assumptions in the treatment equation (1) to account for the absence of IVs. First, we do not assume that X is stochastically independent of U; that is, no exogenous covariate is needed. Second, we allow the treatment decision rule to be intrinsically nonseparable in observed and unobserved characteristics, which is equivalent to relaxing the monotonicity assumption in the IV model (Vytlacil, 2002, 2006). Specifically, let U_{x} be the counterfactual error term denoting what U would have been if X had been externally set to x; then the nonseparability of (1) means that at least two vectors x and \tilde{x} exist in the support of X such that U_{x}\neq U_{\tilde{x}} with positive probability. In particular, U is allowed to depend functionally on X, as in the following example.

Example 1.

We consider a latent index rule for treatment participation:

D=1\left\{m\left(X,\varepsilon\right)\geq 0\right\}, (2)

where the observed X can be statistically correlated with the unobserved \varepsilon, and no restriction is imposed on the cross-partials of the index function m. Without independence and additive separability, model (2) is completely vacuous, imposing no restrictions on the observed or counterfactual outcomes (Heckman and Vytlacil, 2001). This general latent index rule fits into the treatment equation (1) by taking \mu\left(X\right)=E\left[m\left(X,\varepsilon\right)\left|X\right.\right] and U=\mu\left(X\right)-m\left(X,\varepsilon\right).

Example 1 illustrates that no generality is lost by modeling the treatment variable as in equation (1) without imposing the independence and separability assumptions. We define the propensity score function as the conditional probability of receiving the treatment given the covariates,

\pi\left(x\right)\equiv E\left[D\left|X=x\right.\right]=F_{U\left|X\right.}\left(\mu\left(x\right)\left|x\right.\right),

and define the propensity score variable as P\equiv\pi\left(X\right). Under the regularity condition that F_{U\left|X\right.}\left(\cdot\left|x\right.\right) is absolutely continuous with respect to the Lebesgue measure for all x, the structural treatment equation (1) can be innocuously normalized into a reduced form:

D=1\left\{P\geq V\right\}, (3)

where

V=F_{U\left|X\right.}\left(U\left|X\right.\right)

is a normalized error term, which by definition follows the standard uniform distribution conditional on X and thus is stochastically independent of X and P. This independence, which may seem counterintuitive given the functional dependence of V on X, is lost if we instead consider V_{x}=F_{U\left|X\right.}\left(U_{x}\left|x\right.\right), the counterfactual variable of V when X is set to x. In general, the conditional distribution of V_{x} given X=\tilde{x} for \tilde{x}\neq x depends on \tilde{x}, and the unconditional distribution of V_{x} is not uniform.

Example 2.

We suppose that X is a scalar, and

\left(\begin{array}{l}U\\ X\end{array}\right)\sim N\left(\left(\begin{array}{l}0\\ 0\end{array}\right),\left(\begin{array}{cc}1&\sigma_{UX}\\ \sigma_{UX}&\sigma_{X}^{2}\end{array}\right)\right).

Through a property of the bivariate normal distribution, we obtain U\left|\left(X=x\right)\right.\sim N\left(\mu_{U\left|X\right.}\left(x\right),\sigma_{U\left|X\right.}^{2}\right) and F_{U\left|X\right.}\left(u\left|x\right.\right)=\Phi\left(\left.\left[u-\mu_{U\left|X\right.}\left(x\right)\right]\right/\sigma_{U\left|X\right.}\right), where \mu_{U\left|X\right.}\left(x\right)=\left(\sigma_{UX}\left/\sigma_{X}^{2}\right.\right)x, \sigma_{U\left|X\right.}^{2}=1-\left(\sigma_{UX}^{2}\left/\sigma_{X}^{2}\right.\right), and \Phi\left(\cdot\right) denotes the standard normal CDF. Hence,

V=F_{U\left|X\right.}\left(U\left|X\right.\right)=\Phi\left(\frac{U-\mu_{U\left|X\right.}\left(X\right)}{\sigma_{U\left|X\right.}}\right)\text{, and }V_{x}=\Phi\left(\frac{U-\mu_{U\left|X\right.}\left(x\right)}{\sigma_{U\left|X\right.}}\right).

We observe that V\perp\!\!\!\!\perp X because F_{V\left|X\right.}\left(v\left|x\right.\right)=v, but V_{x}\not\perp\!\!\!\!\perp X because

F_{V_{x}\left|X\right.}\left(v\left|\tilde{x}\right.\right)=\Phi\left(\frac{\left(\sigma_{UX}\left/\sigma_{X}^{2}\right.\right)\left(x-\tilde{x}\right)}{\sigma_{U\left|X\right.}}+\Phi^{-1}\left(v\right)\right),

and V_{x} is not uniformly distributed because F_{V_{x}}\left(v\right)=\Phi\left(\mu_{U\left|X\right.}\left(x\right)+\sigma_{U\left|X\right.}\Phi^{-1}\left(v\right)\right).
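These formulas can be verified numerically. The following is a minimal simulation sketch of Example 2 (the parameter values, seed, and variable names are our own illustrative choices, not taken from the paper): it checks that V is standard uniform and independent of X, whereas the counterfactual V_{x} is neither.

```python
import numpy as np
from scipy.stats import norm

# Simulate (U, X) bivariate normal as in Example 2; sigma_ux and sigma_x are
# illustrative assumptions.
rng = np.random.default_rng(0)
n = 200_000
sigma_ux, sigma_x = 0.6, 1.0
cov = np.array([[1.0, sigma_ux], [sigma_ux, sigma_x**2]])
U, X = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

mu_u_x = (sigma_ux / sigma_x**2) * X              # E[U|X]
sd_u_x = np.sqrt(1.0 - sigma_ux**2 / sigma_x**2)  # sd of U given X
V = norm.cdf((U - mu_u_x) / sd_u_x)               # reduced-form error V
V_x = norm.cdf((U - (sigma_ux / sigma_x**2) * 1.0) / sd_u_x)  # V_x at x = 1

print(np.corrcoef(V, X)[0, 1])    # ~0: V is independent of X
print(np.corrcoef(V_x, X)[0, 1])  # clearly nonzero: V_x depends on X
print(V.mean(), V.std())          # ~0.5 and ~0.289, as for Uniform(0, 1)
```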

Given this independence, we may regard the reduced-form treatment error V as the unobservables orthogonalized with respect to the observables, that is, the unobservables projected onto the subspace orthogonal to the one spanned by the observables. Another interpretation of V is the ranking of the structural error U conditional on X. For instance, V=0.2 represents a typical individual whose U value ranks above 20% of individuals with identical covariates. V enters the normalized crossing rule on the right, making an individual less likely to receive treatment; thus, it is referred to as resistance to or distaste for the treatment in the MTE literature. If V is large, then the propensity score P must be large to induce the individual to participate in the treatment, whereas an individual with a V value close to zero will participate even when P is small.

In the above instrument-free model, we define the MTE as the expected treatment effect conditional on the observed and unobserved characteristics:

\Delta^{\text{MTE}}\left(x,v\right)\equiv E\left[\left.Y_{1}-Y_{0}\right|X=x,V=v\right].

\Delta^{\text{MTE}}\left(x,v\right) captures all the treatment effect heterogeneity that is consequential for selection bias by conditioning on the orthogonal coordinates of the observable and unobservable dimensions. Given X and V, the treatment status D is fixed and thus independent of the treatment effect Y_{1}-Y_{0}. Similar to the MTE in the IV model, \Delta^{\text{MTE}}\left(x,v\right) can be used as a building block for constructing the commonly used causal parameters, such as the average treatment effect (ATE), the treatment effect on the treated (TT), the treatment effect on the untreated (TUT), and the local average treatment effect (LATE), which can be expressed as weighted averages of \Delta^{\text{MTE}}\left(x,v\right), as follows:

\Delta^{\text{ATE}}\left(x\right) \equiv E\left[\left.Y_{1}-Y_{0}\right|X=x\right]=\int_{0}^{1}\Delta^{\text{MTE}}\left(x,v\right)dv,
\Delta^{\text{TT}}\left(x\right) \equiv E\left[\left.Y_{1}-Y_{0}\right|X=x,D=1\right]=\frac{1}{\pi\left(x\right)}\int_{0}^{\pi\left(x\right)}\Delta^{\text{MTE}}\left(x,v\right)dv,
\Delta^{\text{TUT}}\left(x\right) \equiv E\left[\left.Y_{1}-Y_{0}\right|X=x,D=0\right]=\frac{1}{1-\pi\left(x\right)}\int_{\pi\left(x\right)}^{1}\Delta^{\text{MTE}}\left(x,v\right)dv,
\Delta^{\text{LATE}}\left(x,v_{1},v_{2}\right) \equiv E\left[\left.Y_{1}-Y_{0}\right|X=x,v_{1}\leq V\leq v_{2}\right]=\frac{1}{v_{2}-v_{1}}\int_{v_{1}}^{v_{2}}\Delta^{\text{MTE}}\left(x,v\right)dv.

Somewhat surprisingly, compared with the weights on the conventional MTE (e.g., Heckman and Vytlacil, 2005, Table IB), which are generally difficult to estimate in practice, the weights on \Delta^{\text{MTE}}\left(x,v\right) are simpler, more intuitive, and easier to compute.
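To illustrate how these weights operate, the following sketch numerically integrates a hypothetical MTE curve (the curve, the value of \pi\left(x\right), and the LATE band are illustrative assumptions, not outputs of the model):

```python
import numpy as np

# Hypothetical MTE curve at a fixed x: declining in v, so individuals with low
# resistance V gain more from treatment.
mte = lambda v: 2.0 - 3.0 * v

v = np.linspace(0.0, 1.0, 10_001)
pi_x = 0.4                                    # assumed propensity score at x

ate = np.trapz(mte(v), v)                     # integrate MTE over [0, 1]
treated = v <= pi_x
tt = np.trapz(mte(v[treated]), v[treated]) / pi_x          # average on [0, pi(x)]
tut = np.trapz(mte(v[~treated]), v[~treated]) / (1 - pi_x)  # average on (pi(x), 1]
v1, v2 = 0.3, 0.7
band = (v >= v1) & (v <= v2)
late = np.trapz(mte(v[band]), v[band]) / (v2 - v1)

print(ate, tt, tut, late)  # approximately 0.5, 1.4, -0.1, 0.5
```

The printed values agree with the closed-form integrals of the assumed linear curve, confirming that the weights reduce to simple truncated averages of the MTE over V.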

Heckman and Vytlacil (2001, 2005) considered defining the MTE in a similar nonseparable setting by conditioning on the structural error, and showed how to integrate it to generate other causal parameters. However, such an MTE cannot be identified even in the presence of IVs. Our definition, based instead on the reduced-form error, can effectively exploit the statistical independence between the observed and unobserved variables to facilitate identification of the MTE. Zhou and Xie (2019) proposed redefining the MTE by conditioning on P rather than on the covariates, which is a more parsimonious specification of all the treatment effect heterogeneity relevant for selection bias. The extension of our identification and estimation procedures to this alternative definition is straightforward.

3 Identification

Our identification strategy relies on a linearity restriction on the potential outcome equations and a nonlinearity restriction on the propensity score function. The intuition is that the propensity score minus the linear outcome index will provide the excluded variation that perturbs treatment status, which plays the role of a continuous IV. Furthermore, to ensure the exogeneity of the excluded variation, it is necessary to impose a certain form of independence between the potential outcome residuals and the covariates. We denote h_{d}\left(x\right)=E\left[\left.Y_{d}\right|X=x\right] and U_{d}=Y_{d}-h_{d}\left(X\right), d=0,1, as the regression functions and residuals of the potential outcomes.

Assumption CMI (Conditional Mean Independence). Assume that E\left[U_{1}-U_{0}\left|V,X\right.\right]=E\left[U_{1}-U_{0}\left|V\right.\right] with probability one.

Assumption CMI is standard in the MTE literature and is commonly referred to as the separability or additive separability assumption (e.g., Brinch et al., 2017; Mogstad et al., 2018; Zhou and Xie, 2019), because it is a necessary and sufficient condition for the MTE to be additively separable into observables and unobservables (Zhou and Xie, 2019, p.3074):

\Delta^{\text{MTE}}\left(x,v\right) = h_{1}\left(x\right)-h_{0}\left(x\right)+E\left[U_{1}-U_{0}\left|X=x,V=v\right.\right]
= h_{1}\left(x\right)-h_{0}\left(x\right)+E\left[U_{1}-U_{0}\left|V=v\right.\right].

Namely, under Assumption CMI, the shape of the MTE curve with respect to v will not vary with the covariates. Assumption CMI is partly justified by E\left[U_{d}\left|X\right.\right]=0 and V\perp\!\!\!\!\perp X, which follow directly from the definitions. On this basis, it suffices for Assumption CMI that the conditional covariance of U_{d} and V is independent of X, which is a key assumption in the heteroscedasticity-based identification method as well (Lewbel, 2012). Assumption CMI is implied by, and much weaker than, the full independence \left(U_{d},U\right)\perp\!\!\!\!\perp X frequently imposed (often implicitly) in applied work, where U is the structural treatment error in (1). In particular, Assumption CMI does not rule out the marginal dependence of U_{d} or U on X, as illustrated by Example 3.

Example 3.

Suppose that X is a scalar and

\left(\begin{array}{c}U_{d}\\ U\\ X\end{array}\right)\sim N\left(\left(\begin{array}{c}0\\ 0\\ 0\end{array}\right),\left(\begin{array}{ccc}\sigma_{d}^{2}&\sigma_{dU}&0\\ \sigma_{dU}&1&\sigma_{UX}\\ 0&\sigma_{UX}&\sigma_{X}^{2}\end{array}\right)\right).

In this setting, U is correlated with X, with an unconstrained correlation coefficient. Through a property of the multivariate normal distribution, we obtain

\left.\left(\begin{array}{c}U_{d}\\ U\end{array}\right)\right|\left(X=x\right)\sim N\left(\left(\begin{array}{c}0\\ \mu_{U\left|X\right.}\left(x\right)\end{array}\right),\left(\begin{array}{cc}\sigma_{d}^{2}&\sigma_{dU}\\ \sigma_{dU}&\sigma_{U\left|X\right.}^{2}\end{array}\right)\right), (4)

where \mu_{U\left|X\right.}\left(x\right)=\left(\sigma_{UX}\left/\sigma_{X}^{2}\right.\right)x and \sigma_{U\left|X\right.}^{2}=1-\left(\sigma_{UX}^{2}\left/\sigma_{X}^{2}\right.\right). Hence,

E\left[\left.U_{d}\right|U=u,X=x\right]=\frac{\sigma_{dU}}{\sigma_{U\left|X\right.}^{2}}\left(u-\mu_{U\left|X\right.}\left(x\right)\right).

By Example 2, we have V=\Phi\left(\left.\left[U-\mu_{U\left|X\right.}\left(X\right)\right]\right/\sigma_{U\left|X\right.}\right), so that U=\sigma_{U\left|X\right.}\Phi^{-1}\left(V\right)+\mu_{U\left|X\right.}\left(X\right). Consequently,

E\left[\left.U_{d}\right|V=v,X=x\right]=E\left[\left.U_{d}\right|U=\sigma_{U\left|X\right.}\Phi^{-1}\left(v\right)+\mu_{U\left|X\right.}\left(x\right),X=x\right]=\frac{\sigma_{dU}}{\sigma_{U\left|X\right.}}\Phi^{-1}\left(v\right),

and Assumption CMI holds. More generally, to allow the dependence of U_{d} on X as well, we can set

\left.\left(\begin{array}{c}U_{d}\\ U\end{array}\right)\right|\left(X=x\right)\sim N\left(\left(\begin{array}{c}0\\ \mu_{U\left|X\right.}\left(x\right)\end{array}\right),\left(\begin{array}{cc}\sigma_{d}^{2}\left(x\right)&\sigma_{dU}\\ \sigma_{dU}&\sigma_{U\left|X\right.}^{2}\end{array}\right)\right)

in place of (4), where \sigma_{d}^{2}\left(x\right) is the conditional variance of U_{d} given X=x. Since E\left[\left.U_{d}\right|V=v,X=x\right] does not involve the variance of U_{d} according to the above analysis, Assumption CMI still holds in the presence of such heteroscedastic U_{d}.
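A quick simulation of Example 3 (with illustrative parameter values of our choosing) can confirm Assumption CMI numerically: the conditional mean of U_{d} given V is the same in the low-X and high-X subsamples and matches the closed form above.

```python
import numpy as np
from scipy.stats import norm

# Simulate (U_d, U, X) trivariate normal as in Example 3: U_d is correlated
# with U but not with X, while U is correlated with X.
rng = np.random.default_rng(1)
n = 500_000
s_du, s_ux, s_x = 0.5, 0.6, 1.0          # illustrative covariance parameters
cov = np.array([[1.0, s_du, 0.0],
                [s_du, 1.0, s_ux],
                [0.0, s_ux, s_x**2]])
Ud, U, X = rng.multivariate_normal(np.zeros(3), cov, size=n).T

sd_u_x = np.sqrt(1.0 - s_ux**2 / s_x**2)
V = norm.cdf((U - (s_ux / s_x**2) * X) / sd_u_x)

# E[U_d | V near 0.3] in low-X and high-X subsamples: both should match the
# theoretical value (s_du / sd_u_x) * Phi^{-1}(0.3), so CMI holds.
band = np.abs(V - 0.3) < 0.01
print(Ud[band & (X < 0)].mean(), Ud[band & (X >= 0)].mean())
print((s_du / sd_u_x) * norm.ppf(0.3))
```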

For the purpose of identifying the MTE, we first establish the relationship between \Delta^{\text{MTE}}\left(x,v\right) and the observed regression functions. Under Assumption CMI, the observed regression functions for treated and untreated individuals are additively separable in a similar way:

E\left[Y\left|X,D=d\right.\right] = E\left[Y_{d}\left|X,D=d\right.\right] (5)
= h_{d}\left(X\right)+E\left[U_{d}\left|X,D=d\right.\right]
= h_{d}\left(X\right)+g_{d}\left(P\right),\text{ }d=0,1,

where

g_{1}\left(p\right) = E\left[\left.U_{1}\right|V\leq p\right]=\frac{1}{p}\int_{0}^{p}E\left[\left.U_{1}\right|V=v\right]dv,
g_{0}\left(p\right) = E\left[\left.U_{0}\right|V>p\right]=\frac{1}{1-p}\int_{p}^{1}E\left[\left.U_{0}\right|V=v\right]dv.

Multiplying both sides of the above equations by p or 1-p to eliminate the denominators and then differentiating with respect to p, we obtain

g_{1}\left(p\right)+pg_{1}^{\left(1\right)}\left(p\right) = E\left[\left.U_{1}\right|V=p\right],
-g_{0}\left(p\right)+\left(1-p\right)g_{0}^{\left(1\right)}\left(p\right) = -E\left[\left.U_{0}\right|V=p\right],

where g_{d}^{\left(1\right)} denotes the derivative function of g_{d}. Therefore, the MTE can be represented as

\Delta^{\text{MTE}}\left(x,v\right)=h_{1}\left(x\right)-h_{0}\left(x\right)+\left[g_{1}\left(v\right)-g_{0}\left(v\right)\right]+vg_{1}^{\left(1\right)}\left(v\right)+\left(1-v\right)g_{0}^{\left(1\right)}\left(v\right), (6)

and its identification can be achieved by identifying h_{d} and g_{d} from the regression equation (5).

Recall that P=\pi\left(X\right), where \pi is a deterministic function, albeit unknown. Given that the argument of g_{d} in (5) exhibits no variation apart from that generated by X, it is impossible to distinguish between g_{d} and h_{d} in the absence of additional assumptions. For example, one may choose to let h_{d} absorb g_{d}\left(\pi\left(\cdot\right)\right) and set g_{d}=0, or vice versa. Functional form restrictions that differentiate h_{d} and g_{d}\left(\pi\left(\cdot\right)\right) from one another can address this issue and facilitate the identification of both h_{d} and g_{d}. To this end, we impose linearity on h_{d} and nonlinearity on \pi.

Assumption L (Linearity). Assume that E\left[\left.Y_{d}\right|X\right]=\alpha_{d}+X^{\prime}\beta_{d} with probability one for some fixed \alpha_{d} and \beta_{d}, d=0,1.

Assumption L imposes a linear restriction on the potential outcome regression functions, which is common practice in empirical studies. The linear restriction may seem too strong compared with those in the existing identification strategies, which allow highly flexible (especially nonparametric) specifications of the potential outcomes. However, for the estimation of the MTE, the linear specification has been nearly universally adopted for tractability and interpretability (e.g., Kirkeboen et al., 2016; Kline and Walters, 2016; Brinch et al., 2017; Heckman et al., 2018; Mogstad et al., 2021; Aryal et al., 2022; Mountjoy, 2022). Although Assumption L implicitly requires the potential outcomes to be continuously distributed and supported on the entire real line, our identification strategy can also be adapted to the case of limited valued outcomes by specifying a linear latent index; a detailed discussion is left to Appendix C. With a slight abuse of notation, we redefine U_{d} to absorb the intercept \alpha_{d}, so that

Y_{d}=X^{\prime}\beta_{d}+U_{d},

and thus the regression equation (5) and the MTE become

E\left[Y\left|X,D=d\right.\right] = X^{\prime}\beta_{d}+g_{d}\left(P\right),\text{ }d=0,1, (7)
\Delta^{\text{MTE}}\left(x,v\right) = x^{\prime}\left(\beta_{1}-\beta_{0}\right)+\left[g_{1}\left(v\right)-g_{0}\left(v\right)\right]+vg_{1}^{\left(1\right)}\left(v\right)+\left(1-v\right)g_{0}^{\left(1\right)}\left(v\right). (8)

To present the nonlinearity assumption, we let X be partitioned as \left(X^{C},X^{D}\right), where X^{C} and X^{D} consist of covariates that are continuously and discretely distributed, respectively. We denote X_{k}, X_{k}^{C}, and X_{k}^{D} as the k-th coordinates of X, X^{C}, and X^{D}, respectively, and denote x^{C} as a generic element in the support of X^{C}; likewise for x^{D}, x_{k}^{C}, and x_{k}^{D}. The nonlinearity assumption requires the propensity score function \pi to be nonlinear in X^{C} given a benchmark value of X^{D}. Without loss of generality, we suppose that the vector of zeros is in the support of X^{D} and is the benchmark value. We denote \pi_{0}\left(x^{C}\right)=\pi\left(x^{C},0\right) for notational convenience, where the discrete covariates are set equal to zero.

Assumption NL (Non-Linearity). Assume that \pi_{0}\left(x^{C}\right) and E\left[Y\left|X^{C}=x^{C},X^{D}=0,D=d\right.\right], d=0,1, are differentiable with respect to x^{C}, and that the derivative of \pi_{0} satisfies the following NL2 when \dim\left(X^{C}\right)\geq 2 or NL1 when \dim\left(X^{C}\right)=1.

– NL2 (\dim\left(X^{C}\right)\geq 2): there exist two vectors x^{C}, \tilde{x}^{C} in the support of X^{C} and two elements k, j in the set \left\{1,2,\cdots,\dim\left(X^{C}\right)\right\} such that (i) \partial_{k}\pi_{0}\left(x^{C}\right)\neq 0, (ii) \partial_{j}\pi_{0}\left(x^{C}\right)\neq 0, (iii) \partial_{k}\pi_{0}\left(\tilde{x}^{C}\right)\neq 0, (iv) \partial_{j}\pi_{0}\left(\tilde{x}^{C}\right)\neq 0, and (v) \left.\partial_{k}\pi_{0}\left(x^{C}\right)\right/\partial_{j}\pi_{0}\left(x^{C}\right)\neq\left.\partial_{k}\pi_{0}\left(\tilde{x}^{C}\right)\right/\partial_{j}\pi_{0}\left(\tilde{x}^{C}\right), where \partial_{k}\pi_{0}\left(x^{C}\right)=\left.\partial\pi_{0}\left(x^{C}\right)\right/\partial x_{k}^{C} is the partial derivative of \pi_{0} with respect to the k-th argument.

– NL1 (\dim\left(X^{C}\right)=1): there exists a constant \tilde{x}^{C} in the support of X^{C} such that \pi_{0}^{\left(1\right)}\left(\tilde{x}^{C}\right)=0, where \pi_{0}^{\left(1\right)}\left(x^{C}\right)=\left.d\pi_{0}\left(x^{C}\right)\right/dx^{C} is the univariate derivative of \pi_{0}.

Assumption NL requires the propensity score function to be nonlinear in the continuous covariates in a generalized sense. The combination of Assumptions L and NL enables a semiparametric version of identification based on functional form, which can realize the identification of the linear coefficients by exploiting nonlinearity elsewhere in the model. The nonlinearity assumption takes two different forms, depending on the number of continuous covariates. When two or more continuous covariates are available, Assumption NL2 requires some variation in \pi_{0} that distinguishes it from a linear-index function. Concretely, NL2 will not hold if \pi_{0}\left(x^{C}\right)=f\left(x^{C\prime}\gamma\right) for a smooth function f, because in this case both sides of the inequality in (v) are equal to \left.\gamma_{k}\right/\gamma_{j}. Otherwise, however, it is difficult to construct examples that violate (v). In practice, (v) can be fulfilled even when \pi_{0} is single-index specified, if interaction and/or quadratic terms are added, as shown in Example 4.

Example 4.

Consider the case of two continuous covariates. Suppose that for a smooth function f, \pi_{0}\left(x^{C}\right)=f\left(\gamma_{1}x_{1}^{C}+\gamma_{2}x_{2}^{C}+\gamma_{3}x_{1}^{C}x_{2}^{C}\right) or \pi_{0}\left(x^{C}\right)=f\left(\gamma_{1}x_{1}^{C}+\gamma_{2}x_{2}^{C}+\gamma_{3}\left(x_{1}^{C}\right)^{2}\right). Then, we obtain \left.\partial_{1}\pi_{0}\left(x^{C}\right)\right/\partial_{2}\pi_{0}\left(x^{C}\right)=\left.\left(\gamma_{1}+\gamma_{3}x_{2}^{C}\right)\right/\left(\gamma_{2}+\gamma_{3}x_{1}^{C}\right) for the interaction case, or \left.\partial_{1}\pi_{0}\left(x^{C}\right)\right/\partial_{2}\pi_{0}\left(x^{C}\right)=\left.\left(\gamma_{1}+2\gamma_{3}x_{1}^{C}\right)\right/\gamma_{2} for the quadratic case. In both cases, Assumption NL2.(v) will generally hold for x^{C} and \tilde{x}^{C} satisfying x_{1}^{C}\neq\tilde{x}_{1}^{C}.
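The condition can also be checked numerically. The following sketch (the link function f and the \gamma values are illustrative assumptions of ours) evaluates the derivative ratio of \pi_{0} at two points for the quadratic case of Example 4:

```python
import numpy as np
from scipy.stats import norm

# Quadratic single-index propensity score: pi_0(x) = Phi(g1*x1 + g2*x2 + g3*x1^2).
g1, g2, g3 = 1.0, 1.0, 0.5
pi0 = lambda x: norm.cdf(g1 * x[0] + g2 * x[1] + g3 * x[0] ** 2)

def grad(f, x, eps=1e-6):
    """Central finite-difference gradient of a scalar function."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = eps
        out[k] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return out

for point in ([0.0, 0.0], [1.0, 0.0]):  # two points with different x_1^C
    d = grad(pi0, point)
    print(point, d[0] / d[1])           # ratios 1.0 vs 2.0, so NL2.(v) holds
```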

Assumption NL2 requires the existence of at least two continuous covariates. However, in empirical studies based on survey data, most demographic characteristics are documented as discrete or categorical variables, such as age, gender, race, marital status, and educational attainment. Therefore, we also impose Assumption NL1, as a supplement to NL2, to account for the situation in which only one continuous covariate is available. NL1 requires \pi_{0} to have a stationary point, which implies that \pi_{0} is necessarily nonlinear. NL1 will hold if the probability of receiving treatment is unaffected by some local change in the continuous covariate. It is a differential version of the local irrelevance assumption imposed in nonseparable models for attaining point identification (e.g., Torgovitsky, 2015; D'Haultfœuille et al., 2021). NL1 can be extended to the case of no continuous covariate, as discussed in Appendix B. However, using only discrete covariates provides little identifying variation, which may lead to poor performance in the subsequent model estimation (Garlick and Hyman, 2022). Hence, we focus on the case of at least one continuous covariate.

We note that Assumption NL doesn’t exclude the widely specified linear-index treatment equation, as long as the structural error is not independent of covariates.

Example 5.

Suppose the treatment status is determined by a linear-index threshold crossing rule:

D=1\left\{X^{\prime}\gamma\geq U\right\},

where U is not independent of X. Namely, suppose \mu\left(X\right)=X^{\prime}\gamma in the structural treatment equation (1). Consider a multiplicatively heteroscedastic U such that U=\sigma\left(X\right)\tilde{U}, where \sigma\left(X\right) is a positive function and \tilde{U} is an idiosyncratic error independent of X. The normalization yields

D=1\left\{\frac{X^{\prime}\gamma}{\sigma\left(X\right)}\geq\tilde{U}\right\},

that is, the reduced-form error V in (3) is equal to \tilde{U}, and the propensity score function is equal to \pi\left(x\right)=\left.\left(x^{\prime}\gamma\right)\right/\sigma\left(x\right). In general, \pi\left(x\right) is a nonlinear function. Another specification we consider is a linear-index model with endogeneity in a certain component X^{e} of X^{C}, in which \pi\left(x\right)=F_{U\left|X^{e}\right.}\left(\left.x^{\prime}\gamma\right|x^{e}\right). Given that \pi\left(x\right) is generally highly nonlinear in x^{e}, Assumption NL1 holds straightforwardly, and NL2 holds with x_{k}^{C}=x^{e}.

Unlike in the binary response model, the ubiquitous heteroscedasticity and endogeneity benefit our results rather than causing trouble, because the identification of the MTE does not involve the structural coefficients in \gamma. Our MTE is defined by the reduced-form treatment error; thus, all we need from the treatment equation is the propensity score, which has a reduced-form nature.

Assumption NL ensures the identification of the coefficients of the continuous covariates. To identify the coefficients of the discrete covariates, we impose a mild support condition. For any x_{k}^{D}\neq 0, we denote x^{Dk} as the \dim\left(X^{D}\right)\times 1 vector with the k-th coordinate equal to x_{k}^{D} and all other coordinates equal to zero.

Assumption S (Support). For each k\in\left\{1,2,\cdots,\dim\left(X^{D}\right)\right\}, assume for some x_{k}^{D}\neq 0 in the support of X_{k}^{D} that there exists x^{C}\left(k\right) in the support of X^{C} such that \pi\left(x^{C}\left(k\right),x^{Dk}\right) is in the support of \pi_{0}\left(X^{C}\right).

A sufficient condition for Assumption S is that \pi_{0}\left(X^{C}\right) has full support on the unit interval or, more generally, that the support of \pi_{0}\left(X^{C}\right) overlaps with that of \pi\left(X^{C},x^{D}\right) for every x^{D}\neq 0. Otherwise, we have to find an x_{k}^{D}\neq 0 for each k such that the support of \pi\left(X^{C},x^{Dk}\right) overlaps with that of \pi_{0}\left(X^{C}\right). The following theorem establishes our main identification result.

Theorem 1.

If Assumptions CMI, L, NL, and S hold, then \beta_{d} and g_{d}\left(p\right) at all p in the support of the propensity score P are identified for d=0,1.

The proof of Theorem 1 is grounded in the observed regression functions (7), which summarize the information from the data. As g_{d} is unknown, we need to eliminate it through some subtraction to achieve the identification of \beta_{d}. When only one continuous covariate exists, the subtraction can be carried out locally around the point satisfying Assumption NL1. Otherwise, our strategy is to perturb two continuous covariates, such as x_{k}^{C} and x_{j}^{C}, in such a way that \pi\left(x\right) remains unchanged. Specifically, for each group d, we increase x_{k}^{C} by a small \epsilon and simultaneously change x_{j}^{C} by \epsilon multiplied by \left.-\partial_{k}\pi_{0}\left(x^{C}\right)\right/\partial_{j}\pi_{0}\left(x^{C}\right), resulting in a perturbed value of the regression function. Subtracting the perturbed regression function from the original (7) cancels out g_{d} owing to the equality of \pi\left(x\right), giving rise to an equation in \beta_{d,k}^{C} and \beta_{d,j}^{C}. Note that the multiplier \left.-\partial_{k}\pi_{0}\left(x^{C}\right)\right/\partial_{j}\pi_{0}\left(x^{C}\right) is the partial derivative of x_{j}^{C} with respect to x_{k}^{C} if we view x_{j}^{C} as an implicit function of the other continuous covariates defined by equating \pi_{0}\left(x^{C}\right) to a constant. Accordingly, Assumption NL2.(v) implies two linearly independent equations, ensuring an exact solution (i.e., identification) of \beta_{d,k}^{C} and \beta_{d,j}^{C}. The detailed proof of Theorem 1 is presented in Appendix A. It is worth mentioning that our strategy naturally features overidentification, in the sense that generally more than one pair of points in the support of X^{C} satisfies Assumption NL2, or more than one point satisfies NL1, because X^{C} is continuously distributed. Moreover, there may be more than one value of X^{D} that satisfies Assumptions NL and S. Consequently, the identification can also be represented as the average of solutions over all points of X^{C} satisfying Assumption NL and over all values of X^{D} satisfying Assumptions NL and S.
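To make the argument concrete, the subtraction can be written in differential form (a sketch only; the rigorous statement is in Appendix A). Differentiating (7) at \left(x^{C},0\right) along the direction that leaves \pi_{0} unchanged cancels the g_{d}^{\left(1\right)}\left(\pi_{0}\right)\partial\pi_{0} terms and leaves, for d=0,1,

\partial_{k}E\left[Y\left|X^{C}=x^{C},X^{D}=0,D=d\right.\right]-\frac{\partial_{k}\pi_{0}\left(x^{C}\right)}{\partial_{j}\pi_{0}\left(x^{C}\right)}\partial_{j}E\left[Y\left|X^{C}=x^{C},X^{D}=0,D=d\right.\right]=\beta_{d,k}^{C}-\frac{\partial_{k}\pi_{0}\left(x^{C}\right)}{\partial_{j}\pi_{0}\left(x^{C}\right)}\beta_{d,j}^{C}.

Evaluating this display at the two points x^{C} and \tilde{x}^{C} of Assumption NL2 yields two equations in \left(\beta_{d,k}^{C},\beta_{d,j}^{C}\right) that are linearly independent precisely because the derivative ratios differ by NL2.(v).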

According to (8), Theorem 1 implies the identification of the MTE in the absence of IVs. Specifically, this result allows practitioners to include all the relevant observed characteristics in both the treatment and outcome equations, without imposing any exclusion or full independence assumptions. Under Theorem 1, the conventional causal parameters can also be identified without instruments, provided that the support of P contains 0 and/or 1 (which implies the identifiability of g_{d}\left(0\right) and/or g_{d}\left(1\right)):

\Delta^{\text{ATE}}\left(x\right) = x^{\prime}\left(\beta_{1}-\beta_{0}\right)+\left[g_{1}\left(1\right)-g_{0}\left(0\right)\right], (9)
\Delta^{\text{TT}}\left(x\right) = x^{\prime}\left(\beta_{1}-\beta_{0}\right)+g_{1}\left(\pi\left(x\right)\right)+\frac{\left(1-\pi\left(x\right)\right)g_{0}\left(\pi\left(x\right)\right)-g_{0}\left(0\right)}{\pi\left(x\right)},
\Delta^{\text{TUT}}\left(x\right) = x^{\prime}\left(\beta_{1}-\beta_{0}\right)+\frac{g_{1}\left(1\right)-\pi\left(x\right)g_{1}\left(\pi\left(x\right)\right)}{1-\pi\left(x\right)}-g_{0}\left(\pi\left(x\right)\right),
\Delta^{\text{LATE}}\left(x,v_{1},v_{2}\right) = x^{\prime}\left(\beta_{1}-\beta_{0}\right)+\frac{v_{2}g_{1}\left(v_{2}\right)-v_{1}g_{1}\left(v_{1}\right)+\left(1-v_{2}\right)g_{0}\left(v_{2}\right)-\left(1-v_{1}\right)g_{0}\left(v_{1}\right)}{v_{2}-v_{1}}.

4 Estimation

Our identification strategy for the MTE implies a separate estimation procedure that works with the partially linear regression (7) for each treatment status. In particular, we recommend the kernel-weighted pairwise difference estimation method proposed by Ahn and Powell (1993) because of its computational simplicity, well-established asymptotic properties, and, most importantly, its capacity to effectively leverage the overidentifying information that underlies the data. Suppose that \left\{\left(Y_{i},D_{i},X_{i}\right):i=1,2,\cdots,n\right\} is a random sample of observations on \left(Y,D,X\right). In the first step, we estimate the nonparametrically specified propensity score using the kernel method, that is,

\hat{\pi}\left(x\right)=\frac{\sum_{i=1}^{n}D_{i}\left[\prod_{l=1}^{\dim\left(X^{C}\right)}k_{1}\left(\left.\left(X_{il}^{C}-x_{l}^{C}\right)\right/h_{1l}\right)\right]1\left\{X_{i}^{D}=x^{D}\right\}}{\sum_{i=1}^{n}\left[\prod_{l=1}^{\dim\left(X^{C}\right)}k_{1}\left(\left.\left(X_{il}^{C}-x_{l}^{C}\right)\right/h_{1l}\right)\right]1\left\{X_{i}^{D}=x^{D}\right\}} (10)

and

\hat{P}_{i}=\hat{\pi}\left(X_{i}\right), (11)

where h_{1l}, l=1,2,\cdots,\dim\left(X^{C}\right), are bandwidths and k_{1} is a univariate kernel function. If the dimension of X^{D} is large, then a smoothed kernel for discrete covariates (Racine and Li, 2004) can be applied as a substitute for the indicator function, to alleviate the potential problem of inadequate observations in the data cells defined by the support of X^{D}. If the number of continuous covariates is not small either, then the well-known curse of dimensionality appears, and a linear-index specification may thus be practically more relevant for modeling the propensity score. The index should include a series of interaction terms and quadratic or even higher-order terms of the continuous covariates to supply sufficient nonlinear variation. The linear-index propensity score can be estimated by parametric probit/logit or semiparametric methods (e.g., Powell et al., 1989; Ichimura, 1993; Klein and Spady, 1993; Lewbel, 2000), depending on the distributional assumption on the error term.
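For concreteness, the following is a minimal sketch of the first step (10)–(11), assuming a Gaussian kernel k_{1}, a common bandwidth for all continuous covariates, and exact matching on X^{D}; all function and variable names are our own illustrative choices.

```python
import numpy as np

def propensity_kernel(Xc, Xd, D, h1):
    """Kernel estimates P_hat_i = pi_hat(X_i) at every sample point, as in (10)-(11)."""
    # product Gaussian kernel over the continuous covariates, all pairs (n x n)
    diff = (Xc[:, None, :] - Xc[None, :, :]) / h1
    Kc = np.exp(-0.5 * diff**2).prod(axis=2)
    # exact matching on the discrete covariates: 1{X_j^D = x^D}
    Kd = (Xd[:, None, :] == Xd[None, :, :]).all(axis=2)
    W = Kc * Kd                       # own observation included, as in (10)
    return (W @ D) / W.sum(axis=1)

# toy usage with simulated data and a nonlinear true propensity score
rng = np.random.default_rng(0)
n = 500
Xc = rng.normal(size=(n, 2))
Xd = rng.integers(0, 2, size=(n, 1))
p_true = 1.0 / (1.0 + np.exp(-(Xc[:, 0] + Xc[:, 0] ** 2)))
D = (rng.uniform(size=n) < p_true).astype(float)
P_hat = propensity_kernel(Xc, Xd, D, h1=n ** (-1.0 / 6.0))
print(P_hat[:5])
```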

In the second step, we estimate \beta_{d} for each d through a weighted pairwise difference least squares regression:

\hat{\beta}_{d} = \arg\min_{\beta}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\hat{\omega}_{dij}\left[\left(Y_{i}-Y_{j}\right)-\left(X_{i}-X_{j}\right)^{\prime}\beta\right]^{2} (12)
= \left[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\hat{\omega}_{dij}\left(X_{i}-X_{j}\right)\left(X_{i}-X_{j}\right)^{\prime}\right]^{-1}\left[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\hat{\omega}_{dij}\left(X_{i}-X_{j}\right)\left(Y_{i}-Y_{j}\right)\right],

where the weights are given by

\hat{\omega}_{dij}=1\left\{D_{i}=D_{j}=d\right\}k_{2}\left(\frac{\hat{P}_{i}-\hat{P}_{j}}{h_{2}}\right),

with h_{2} and k_{2} being the bandwidth and kernel function, respectively, which can be different from those in the first step. Given \hat{\beta}_{d}, the nonlinear function g_{d}\left(p\right) and its derivative function g_{d}^{\left(1\right)}\left(p\right) at any p in the support of P can be estimated by the local linear method, namely,

\left(\begin{array}{c}\hat{g}_{d}\left(p\right)\\ \hat{g}_{d}^{\left(1\right)}\left(p\right)\end{array}\right)=\left[\sum_{i=1}^{n}\hat{w}_{di}\left(p\right)\left(\begin{array}{c}1\\ \hat{P}_{i}-p\end{array}\right)\left(\begin{array}{c}1\\ \hat{P}_{i}-p\end{array}\right)^{\prime}\right]^{-1}\left[\sum_{i=1}^{n}\hat{w}_{di}\left(p\right)\left(\begin{array}{c}1\\ \hat{P}_{i}-p\end{array}\right)\left(Y_{i}-X_{i}^{\prime}\hat{\beta}_{d}\right)\right],

where

\hat{w}_{di}\left(p\right)=1\left\{D_{i}=d\right\}k_{3}\left(\frac{\hat{P}_{i}-p}{h_{3}}\right).
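A compact sketch of this second step, assuming Gaussian kernels k_{2} and k_{3} (the function and variable names are our own), is as follows. beta_hat_d implements the closed form in (12); local_linear returns the pair \left(\hat{g}_{d}\left(p\right),\hat{g}_{d}^{\left(1\right)}\left(p\right)\right).

```python
import numpy as np

def beta_hat_d(Y, X, D, P_hat, d, h2):
    """Kernel-weighted pairwise difference estimator (12) within arm d."""
    idx = np.flatnonzero(D == d)
    Yd, Xd, Pd = Y[idx], X[idx], P_hat[idx]
    dX = Xd[:, None, :] - Xd[None, :, :]               # pairwise X_i - X_j
    dY = Yd[:, None] - Yd[None, :]                     # pairwise Y_i - Y_j
    w = np.exp(-0.5 * ((Pd[:, None] - Pd[None, :]) / h2) ** 2)
    np.fill_diagonal(w, 0.0)                           # drop i = j pairs
    A = np.einsum("ij,ijk,ijl->kl", w, dX, dX)         # sum of w (dX)(dX)'
    b = np.einsum("ij,ijk,ij->k", w, dX, dY)           # sum of w (dX)(dY)
    return np.linalg.solve(A, b)

def local_linear(Y, X, D, P_hat, beta, d, p, h3):
    """Local linear fit of the residual on P_hat around p within arm d."""
    idx = np.flatnonzero(D == d)
    resid = Y[idx] - X[idx] @ beta                     # Y_i - X_i' beta_hat_d
    u = P_hat[idx] - p
    w = np.exp(-0.5 * (u / h3) ** 2)
    Z = np.column_stack([np.ones_like(u), u])          # (1, P_hat_i - p)
    return np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * resid))
```

Running both functions for d=0,1 and combining the outputs with \hat{\pi} according to (8) yields the plug-in estimators displayed next.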

Finally, we plug \hat{\beta}_{d}, \hat{g}_{d}, \hat{g}_{d}^{\left(1\right)}, and \hat{\pi} into the identification equations (8) and (9) to estimate the MTE and the other causal parameters, as follows:

\hat{\Delta}^{\text{MTE}}\left(x,v\right) = x^{\prime}\left(\hat{\beta}_{1}-\hat{\beta}_{0}\right)+\left[\hat{g}_{1}\left(v\right)-\hat{g}_{0}\left(v\right)\right]+v\hat{g}_{1}^{\left(1\right)}\left(v\right)+\left(1-v\right)\hat{g}_{0}^{\left(1\right)}\left(v\right),
\hat{\Delta}^{\text{ATE}}\left(x\right) = x^{\prime}\left(\hat{\beta}_{1}-\hat{\beta}_{0}\right)+\left[\hat{g}_{1}\left(1\right)-\hat{g}_{0}\left(0\right)\right],
\hat{\Delta}^{\text{TT}}\left(x\right) = x^{\prime}\left(\hat{\beta}_{1}-\hat{\beta}_{0}\right)+\hat{g}_{1}\left(\hat{\pi}\left(x\right)\right)+\frac{\left(1-\hat{\pi}\left(x\right)\right)\hat{g}_{0}\left(\hat{\pi}\left(x\right)\right)-\hat{g}_{0}\left(0\right)}{\hat{\pi}\left(x\right)},
\hat{\Delta}^{\text{TUT}}\left(x\right) = x^{\prime}\left(\hat{\beta}_{1}-\hat{\beta}_{0}\right)+\frac{\hat{g}_{1}\left(1\right)-\hat{\pi}\left(x\right)\hat{g}_{1}\left(\hat{\pi}\left(x\right)\right)}{1-\hat{\pi}\left(x\right)}-\hat{g}_{0}\left(\hat{\pi}\left(x\right)\right),
\hat{\Delta}^{\text{LATE}}\left(x,v_{1},v_{2}\right) = x^{\prime}\left(\hat{\beta}_{1}-\hat{\beta}_{0}\right)+\frac{v_{2}\hat{g}_{1}\left(v_{2}\right)-v_{1}\hat{g}_{1}\left(v_{1}\right)+\left(1-v_{2}\right)\hat{g}_{0}\left(v_{2}\right)-\left(1-v_{1}\right)\hat{g}_{0}\left(v_{1}\right)}{v_{2}-v_{1}}.

The kernel-weighted pairwise difference estimator has the advantage of a closed-form expression, so we need not solve any formidable optimization problem. However, it faces the challenging problem of bandwidth selection, as do most semiparametric estimation methods. Alternatively, we can consider imposing a parametric specification on the unobservable heterogeneity of the MTE such that E\left[\left.U_{d}\right|V=v\right]=E\left[\left.U_{d}\right|V=v;\theta_{d}\right] for a finite dimensional \theta_{d}, e.g., the polynomial specification E\left[\left.U_{d}\right|V=v\right]=\sum_{j=0}^{J}\theta_{dj}v^{j} or the normal polynomial specification E\left[\left.U_{d}\right|V=v\right]=\sum_{j=0}^{J}\theta_{dj}\Phi^{-j}\left(v\right). In the latter, setting J=1 matches Heckman's normal sample selection model. Under the parametric restriction, the second step becomes a global regression E\left[Y\left|X,D=d\right.\right]=X^{\prime}\beta_{d}+g_{d}\left(P;\theta_{d}\right) with a parameterized selection bias correction term g_{d}\left(p;\theta_{d}\right) for each d; therefore, the tuning of h_{2} and h_{3} is circumvented. Another advantage of a parametrically specified second step is the lower computational burden compared with that of the pairwise difference estimator, which is defined by a double summation requiring a squared amount of computation (Pan and Xie, 2023).
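As a concrete illustration of the polynomial case, note that if E\left[\left.U_{d}\right|V=v\right]=\sum_{j=0}^{J}\theta_{dj}v^{j}, then g_{1}\left(p\right)=\sum_{j=0}^{J}\theta_{1j}p^{j}/\left(j+1\right) and g_{0}\left(p\right)=\sum_{j=0}^{J}\theta_{0j}\left(1+p+\cdots+p^{j}\right)/\left(j+1\right) are again polynomials of degree J, so the second step reduces to one least squares regression per treatment arm. A minimal sketch of this construction (our own, with illustrative names; X is assumed to contain no constant column, since the intercept is absorbed into the polynomial):

```python
import numpy as np

def fit_arm(Y, X, P, D, d, J=2):
    """Within arm d, regress Y on X and (1, P, ..., P^J); return (beta, g)."""
    idx = D == d
    Z = np.column_stack([X[idx]] + [P[idx] ** j for j in range(J + 1)])
    coef, *_ = np.linalg.lstsq(Z, Y[idx], rcond=None)
    beta = coef[: X.shape[1]]
    g = np.poly1d(coef[X.shape[1]:][::-1])   # poly1d wants highest degree first
    return beta, g

def mte_hat(x, v, fit1, fit0):
    """Plug the fitted polynomials into (8); derivatives are taken exactly."""
    (b1, g1), (b0, g0) = fit1, fit0
    return (x @ (b1 - b0) + g1(v) - g0(v)
            + v * np.polyder(g1)(v) + (1.0 - v) * np.polyder(g0)(v))
```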

Notably, the local IV (LIV) estimation procedure can also be adapted to our model, even though no IV is involved. Unlike the separate estimation procedure, the adapted LIV approach is based on a whole-sample regression:

E\left[Y\left|X\right.\right] = E\left[Y_{0}+D\left(Y_{1}-Y_{0}\right)\left|X\right.\right] (13)
= E\left[Y_{0}\left|X\right.\right]+E\left[\left.Y_{1}-Y_{0}\right|X,D=1\right]\Pr\left(D=1\left|X\right.\right)
= \alpha_{0}+X^{\prime}\beta_{0}+PX^{\prime}\left(\beta_{1}-\beta_{0}\right)+q\left(P\right),

where

q\left(p\right)=pE\left[U_{1}-U_{0}\left|V\leq p\right.\right]=\int_{0}^{p}E\left[U_{1}-U_{0}\left|V=v\right.\right]dv.

Note that since

q^{\left(1\right)}\left(p\right)\equiv\frac{dq\left(p\right)}{dp}=E\left[U_{1}-U_{0}\left|V=p\right.\right],

the MTE is equal to the derivative of the regression function (13) with respect to P. As a consequence, it suffices to estimate \beta_{d} for d=0,1 and the functions q and q^{\left(1\right)}. Given the estimated propensity score, we can likewise use the pairwise difference principle to obtain

\left(\begin{array}{c}\hat{\beta}_{0}\\ \hat{\delta}\end{array}\right) = \left[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}k_{2}\left(\frac{\hat{P}_{i}-\hat{P}_{j}}{h_{2}}\right)\left(\begin{array}{c}X_{i}-X_{j}\\ \hat{P}_{i}X_{i}-\hat{P}_{j}X_{j}\end{array}\right)\left(\begin{array}{c}X_{i}-X_{j}\\ \hat{P}_{i}X_{i}-\hat{P}_{j}X_{j}\end{array}\right)^{\prime}\right]^{-1}
\cdot\left[\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}k_{2}\left(\frac{\hat{P}_{i}-\hat{P}_{j}}{h_{2}}\right)\left(\begin{array}{c}X_{i}-X_{j}\\ \hat{P}_{i}X_{i}-\hat{P}_{j}X_{j}\end{array}\right)\left(Y_{i}-Y_{j}\right)\right],

where \hat{\delta} is an estimator for \beta_{1}-\beta_{0} according to (13). Given \hat{\beta}_{0} and \hat{\delta}, we apply the local linear method as well, yielding

\left(\begin{array}{c}\hat{r}\left(p\right)\\ \hat{q}^{\left(1\right)}\left(p\right)\end{array}\right) = \left[\sum_{i=1}^{n}k_{3}\left(\frac{\hat{P}_{i}-p}{h_{3}}\right)\left(\begin{array}{c}1\\ \hat{P}_{i}-p\end{array}\right)\left(\begin{array}{c}1\\ \hat{P}_{i}-p\end{array}\right)^{\prime}\right]^{-1}
\cdot\left[\sum_{i=1}^{n}k_{3}\left(\frac{\hat{P}_{i}-p}{h_{3}}\right)\left(\begin{array}{c}1\\ \hat{P}_{i}-p\end{array}\right)\left(Y_{i}-X_{i}^{\prime}\hat{\beta}_{0}-\hat{P}_{i}X_{i}^{\prime}\hat{\delta}\right)\right],

where \hat{r}\left(p\right) is an estimator for \alpha_{0}+q\left(p\right). Then, we construct the instrument-free LIV estimators for the MTE and other treatment parameters as

\hat{\Delta}^{\text{MTE}}\left(x,v\right) = x^{\prime}\hat{\delta}+\hat{q}^{\left(1\right)}\left(v\right),
\hat{\Delta}^{\text{ATE}}\left(x\right) = x^{\prime}\hat{\delta}+\hat{r}\left(1\right)-\hat{r}\left(0\right),
\hat{\Delta}^{\text{TT}}\left(x\right) = x^{\prime}\hat{\delta}+\frac{\hat{r}\left(\hat{\pi}\left(x\right)\right)-\hat{r}\left(0\right)}{\hat{\pi}\left(x\right)},
\hat{\Delta}^{\text{TUT}}\left(x\right) = x^{\prime}\hat{\delta}+\frac{\hat{r}\left(1\right)-\hat{r}\left(\hat{\pi}\left(x\right)\right)}{1-\hat{\pi}\left(x\right)},
\hat{\Delta}^{\text{LATE}}\left(x,v_{1},v_{2}\right) = x^{\prime}\hat{\delta}+\frac{\hat{r}\left(v_{2}\right)-\hat{r}\left(v_{1}\right)}{v_{2}-v_{1}}.

This adapted LIV approach is computationally more convenient than the separate estimation procedure. Furthermore, if we accept a parametric specification for E\left[U_{1}-U_{0}\left|V=v\right.\right], and thus for q\left(p\right), the adapted LIV approach can be implemented through a parametric least squares regression.
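For concreteness, the following minimal sketch (Python with NumPy; the Gaussian kernel, the bandwidths, and all variable names are illustrative assumptions, and the propensity scores P are taken as already estimated) implements the two displays above.

    import numpy as np

    def gauss(u):
        # Gaussian kernel; any symmetric kernels k_2 and k_3 could be used
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

    def pairwise_difference(Y, X, P, h2):
        # Kernel-weighted pairwise difference estimator of (beta_0, delta):
        # pairs with nearly equal propensity scores difference out q(P)
        n, k = X.shape
        A = np.zeros((2 * k, 2 * k))
        b = np.zeros(2 * k)
        for i in range(n - 1):
            for j in range(i + 1, n):
                w = gauss((P[i] - P[j]) / h2)
                z = np.concatenate([X[i] - X[j], P[i] * X[i] - P[j] * X[j]])
                A += w * np.outer(z, z)
                b += w * z * (Y[i] - Y[j])
        coef = np.linalg.solve(A, b)
        return coef[:k], coef[k:]                # beta0_hat, delta_hat

    def local_linear(Y, X, P, beta0, delta, p, h3):
        # Local linear fit of the partial residual on P at the point p,
        # returning estimates of (alpha_0 + q(p), q^(1)(p))
        e = Y - X @ beta0 - P * (X @ delta)
        w = gauss((P - p) / h3)
        Z = np.column_stack([np.ones_like(P), P - p])
        sol = np.linalg.solve((Z * w[:, None]).T @ Z, (Z * w[:, None]).T @ e)
        return sol[0], sol[1]                    # r_hat(p), q1_hat(p)

    # MTE at (x, v): x @ delta_hat + q1_hat(v); the ATE, TT, TUT, and LATE
    # then follow from r_hat exactly as in the displayed formulas.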

In summary, identification based on functional form can accommodate most of the frequently used estimation procedures of a typical IV model.

5 Empirical application

In this section, we revisit the heterogeneous long-term effects of the Head Start program on educational attainment and labor market outcomes by using the proposed MTE method. Head Start, which began in 1965, is one of the largest early child care programs in the United States. The program is targeted at children from low-income families and can provide such children with preschool, health, and nutritional services. Currently, Head Start serves more than a million children, at an annual cost of 10 billion dollars. As a federally funded large-scale program, Head Start has encountered concerns about its effectiveness and thus spawned numerous studies to evaluate its educational and economic effects on the participants. Early studies focused mainly on short-term benefits and found that participation in Head Start is associated with improved test scores and reduced grade repetition at the beginning of primary schooling. However, such benefits seem to fade out during the upper primary grades (Currie, 2001). Garces et al. (2002) provided the first empirical evidence for the longer-term effects of Head Start on high school completion, college attendance, earnings, and crime. Since then, the literature has shifted its focus to the medium- and long-term or intergenerational (e.g., Barr and Gibbs, 2022) gains of Head Start enrollment.

However, despite the enormous policy interest, evidence for the long-term effectiveness of Head Start is mixed, as summarized in Figure 1 in De Haan and Leuven (2020). The lack of consistency among these results may be due to differences in the population or to problems with the empirical approach (Elango et al., 2016). For example, the LATE obtained by the family fixed-effects approach (e.g., Garces et al., 2002; Deming, 2009; Bauer and Schanzenbach, 2016) relies on families that differ from other Head Start families in size and in other observable dimensions. Moreover, the sibling comparison design underlying that approach is limited by endogeneity concerns. To reconcile the divergent evidence, De Haan and Leuven (2020) evaluated the heterogeneous long-term effects of Head Start by using a distributional treatment effect approach that relies on two weak stochastic dominance assumptions, instead of restrictive IV assumptions. The authors found substantial heterogeneity in the returns to Head Start. Specifically, they found that the program has positive and statistically significant effects on education and wage income for the lower end of the distribution of participants.

To produce a complete picture, we assess the causal heterogeneity of Head Start from another perspective; namely, we examine the effects across different levels of unobserved resistance to participation in the program, rather than across the distribution of long-term outcomes. By relating the treatment effects to the participation decision, the MTE is informative about the nature of selection into treatment and allows the computation of various causal parameters, such as the ATE, TT, and TUT. Another feature of the MTE is that its description of effect heterogeneity does not depend on the specific outcome variable: the MTE curve depicts the treatment effects as a function of the unobserved determinants of treatment. This is an advantage when multiple outcomes are of interest, as in this application, because the interpretation of the heterogeneity is kept consistent across outcomes. Finally, in contrast to the distributional treatment effects partially identified by De Haan and Leuven (2020), our MTE method achieves point identification and estimation.

We use the data provided by De Haan and Leuven (2020), which are from Round 16 (1994 survey year) of the National Longitudinal Survey of Youth 1979 (NLSY79). The sample is restricted to the 1960–64 cohorts, because the first cohort eligible for Head Start was born in 1960. In addition, the sample excludes individuals who participated in any preschool programs other than Head Start, implying that we estimate the returns to Head Start relative to informal care. The treatment variable is whether the respondents attended the Head Start program as a child, and the outcome variables are the respondents' highest years of education and log yearly wage income in their early 30s (they were between 30 and 34 years old in 1994). The covariates are age, gender, race, parental education, and family income in 1978. We refer the reader to the original paper for additional details on the data, sample, and variables. For our analysis, since we apply nonparametric estimation in the first step, we recode parental education into two categories to reduce the number of data cells, or subsamples, split by different values of the discrete covariates. Specifically, we redefine parental education as a binary variable equal to one if at least one parent went to college and zero if both parents are high school graduates or lower.

We first verify the credibility of Assumptions S and NL for the Head Start data. Table 1 lists the support of the nonparametrically estimated propensity score for all data cells that are split by different values of the discrete covariates. We find that the 34th data cell has full support on the unit interval. Therefore, Assumption S must hold if we choose the corresponding values (32 years old, female, black race, parental education being college or higher) as the benchmark. We then verify Assumption NL for the 34th data cell. Note that only one continuous covariate exists in the data, namely family income in 1978. Hence, we invoke Assumption NL1, which requires the propensity score function to have at least one stationary point. The left panel of Figure 1 plots the estimated propensity score as a univariate function of the continuous covariate for the 34th data cell. We find that the propensity score is highly nonlinear and nonmonotonic in family income in 1978, possibly owing to parents' heterogeneous self-selection into the program. Assumption NL1 is therefore fulfilled.

Furthermore, the largest data cell (i.e., the 29th data cell with 175 observations) can be considered an alternative candidate for the benchmark (32 years old, male, white race, parental education being high school or lower). Nevertheless, closer examination of Assumption S is necessary for this data cell, as the support of the propensity score is now limited to [0, 0.194], which is totally separate from the supports for some other data cells. To this end, we need to find a distinct value of each discrete variable such that the propensity score has overlapping support with [0, 0.194]. For example, if the value of age is altered from 32 to 30 (or 31, 33, 34) while the other discrete variables remain unaltered, we move to the 5th (or 17th, 41st, 53rd) data cell with support [0, 0.177] (or [0, 0.242], [0, 0.154], [0, 0.204]), which overlaps with [0, 0.194] as required. Altering the values of gender, race, and parental education leads to the 35th, 25th (or 27th), and 30th data cells with supports [0, 0.526], [0, 0.771] (or [0, 0.623]), and [0, 0.061], respectively, all of which meet the overlapping condition as well. Therefore, Assumption S holds for the largest data cell. Assumption NL1 can likewise be validated for this data cell by the existence of stationary points of the estimated propensity score function, as illustrated in the right panel of Figure 1.

We next turn to the estimation. In the first step, we nonparametrically regress the treatment variable (Head Start) on all of the covariates to generate propensity scores for the sample, employing the kernel estimation method in (10) and (11) with the rule-of-thumb bandwidth. Figure 2 plots the frequency distribution of the estimated propensity score by treatment status. The figure shows that the propensity score in our sample follows a bimodal distribution, with the main peak at approximately 0.5 for the participants and approximately 0.1 for the nonparticipants. To reduce the potential impact of outliers, we trim the observations with the 1% smallest and 1% largest propensity scores in the following steps. This trimming leads to a common support ranging from 0 to 0.6, as indicated by the two dashed vertical lines. Given the estimated propensity score, we then estimate in sequence the linear coefficients, the MTE, and the summary treatment effect measures, including the ATE, TT, and TUT. We adopt the separate estimation procedure, since it generally exploits more of the identifying information in the data than the LIV procedure (Brinch et al., 2017).
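As a rough illustration of this first step (a minimal sketch, not the exact estimator: a Gaussian kernel within each discrete data cell stands in for the mixed-covariate kernel regression of (10)-(11), and all variable names are hypothetical):

    import numpy as np

    def propensity_in_cell(D, xc, grid, h):
        # Nadaraya-Watson estimate of Pr(D = 1 | X^C = x) within one data
        # cell, i.e., holding the discrete covariates fixed; xc is the
        # continuous covariate (family income in 1978) and h a bandwidth
        K = np.exp(-0.5 * ((grid[:, None] - xc[None, :]) / h) ** 2)
        return (K @ D) / K.sum(axis=1)

    def common_support_mask(P, lo=0.01, hi=0.99):
        # drop the 1% smallest and 1% largest estimated propensity scores
        a, b = np.quantile(P, [lo, hi])
        return (P >= a) & (P <= b)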

Figure 3 shows the estimated MTE under Heckman's normal specification, E\left[U_{d}\left|V=v\right.\right]=\rho_{d}\Phi^{-1}\left(v\right) for d=0,1, where \Phi\left(\cdot\right) is the standard normal CDF; this specification can be derived by assuming that \left(U_{d},\Phi^{-1}\left(V\right)\right) follows a bivariate normal distribution with covariance \rho_{d}. The MTE curves, evaluated at the mean values of X, relate the unobserved component U_{1}-U_{0} of the treatment effect to the unobserved component V of the treatment choice. A high value of V implies a low probability of treatment; thus, we interpret V as resistance to participation in Head Start. The MTE curve for wage income plotted in Panel B decreases with resistance, revealing a pattern of selection on gains, as expected. In other words, based on their unobserved characteristics, the children who were most likely to enroll in Head Start benefited the most from the program in terms of their labor market outcome. However, when the outcome is educational attainment in Panel A, the positive slope of the MTE curve points to a pattern of reverse selection on gains. Consequently, the TUT exceeds the ATE, which in turn exceeds the TT. The same pattern was observed by Cornelissen et al. (2018) when estimating the school-readiness return to a preschool program in Germany. This phenomenon may be attributed partly to the fact that parents have their own objectives in deciding childcare arrangements. Nevertheless, the disagreement between the selection patterns for the two outcomes raises concern about possible functional form misspecification of the normal MTE, which is restricted to be monotonic in the resistance to treatment. Therefore, we consider a nonparametric specification for E\left[U_{d}\left|V=v\right.\right] and implement a semiparametric separate estimation procedure.
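To see why the normal specification forces monotonicity, note that combining it with the linear outcome equations yields an MTE that is linear in \Phi^{-1}\left(v\right):

\Delta^{\text{MTE}}\left(x,v\right)=x^{\prime}\left(\beta_{1}-\beta_{0}\right)+\left(\rho_{1}-\rho_{0}\right)\Phi^{-1}\left(v\right),

so the curve is necessarily monotonic in v, with direction determined by the sign of \rho_{1}-\rho_{0}.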

Figure 4 plots the semiparametric MTE curves for education and labor market outcomes in Panels A and B, respectively. Under the flexible specification, the MTE curves are no longer monotonic, and the clear pattern of selection disappears. In the case of the education outcome, the curve is initially flat and then becomes an inverted U shape, with a statistically significant positive effect appearing in the region of the peak, corresponding to the children with resistance to treatment ranging from 0.32 to 0.42. The complex shape of this curve can hardly be captured by any parsimonious parametric function. Similar observations are seen in the case of wage income, in which the MTE curve is nonmonotonic, with a complex shape, and significantly greater than zero only for the less than 10% of children who were most likely to attend childcare early. A comparison of the summary treatment effect measures indicates weak selection on gains for both outcomes. Table 2 reports the semiparametric estimates for the effects of the covariates on the potential outcomes and their difference based on (12). Columns 3 and 6 show that girls gain significantly higher returns to Head Start attendance than boys. However, other than gender, no substantial observable heterogeneity exists in the treatment effects of the program, though parental education and family income in 1978 have significantly positive effects on the respondents' potential education and potential labor market outcomes in both the treated and untreated states.

Based on \hat{\beta}_{1} and \hat{\beta}_{0} reported in Table 2, and the separate estimates of E\left[U_{d}\left|V=v\right.\right] for d=0,1 under the semiparametric specification, we can estimate the marginal structural functions

E\left[Y_{d}\left|V=v\right.\right]=E\left[X\right]^{\prime}\beta_{d}+E\left[U_{d}\left|V=v\right.\right]

for the potential outcomes Y_{1} and Y_{0}, which we plot in Figure 5. Panel A sheds light on the significantly positive effect of Head Start on education for the respondents with medium-level resistance to treatment, revealing that their gains from the program are driven mainly by their remarkably low educational attainment when untreated. Panel B leads to similar results for wage income, where a significant gain emerges at low values of V. Moreover, the relatively flat curve of potential wage income in the treated state implies that early childcare attendance serves as an equalizer that diminishes intergroup differences in the labor market outcome.

In addition, we can semiparametrically estimate conditional expectations of the observed responses given the unobserved determinants of treatment choice, such as the marginal probability of participation

E\left[D\left|V=v\right.\right]=\Pr\left(P\geq v\left|V=v\right.\right)=\Pr\left(P\geq v\right),

which mirrors the distribution of the propensity score, and the marginal observed outcome

E\left[Y\left|V=v\right.\right]=E\left[Y_{0}\left|V=v\right.\right]+E\left[D\left(Y_{1}-Y_{0}\right)\left|V=v\right.\right]
=E\left[Y_{0}\left|V=v\right.\right]+E\left[DX\left|V=v\right.\right]^{\prime}\left(\beta_{1}-\beta_{0}\right)+E\left[D\left(U_{1}-U_{0}\right)\left|V=v\right.\right]
=E\left[Y_{0}\left|V=v\right.\right]+E\left[1\left\{P\geq v\right\}X\right]^{\prime}\left(\beta_{1}-\beta_{0}\right)+\Pr\left(P\geq v\right)\cdot E\left[U_{1}-U_{0}\left|V=v\right.\right],

where the last equality follows from the independence of X and V and from the conditional mean independence of X and U_{d} given V. Plugging proper estimates of each component into the above equations, we obtain the estimated marginal response functions and plot them in Figure 6. In particular, the marginal observed outcome curves relate the individuals with significant positive returns (with medium V in Panel A and small V in Panel B) to those with low education and low wage income, thereby bridging our results on the MTE and those of De Haan and Leuven (2020) on the distributional treatment effect.
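The curves in Figure 6 are simple plug-in objects. A minimal sketch of the computation (Python/NumPy; the inputs ey0_of_v and du_of_v, standing for estimates of E[Y0|V=v] and E[U1-U0|V=v] from the separate semiparametric step, are hypothetical names):

    import numpy as np

    def marginal_responses(v_grid, P_hat, X, delta_hat, ey0_of_v, du_of_v):
        # Plug-in estimates of E[D|V=v] and E[Y|V=v] on a grid of v values,
        # following the two displays above
        ED = np.array([(P_hat >= v).mean() for v in v_grid])      # Pr(P >= v)
        EX = np.array([(X * (P_hat >= v)[:, None]).mean(axis=0)   # E[1{P>=v}X]
                       for v in v_grid])
        EY = (np.array([ey0_of_v(v) for v in v_grid])
              + EX @ delta_hat
              + ED * np.array([du_of_v(v) for v in v_grid]))
        return ED, EY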

6 Conclusion

We propose a novel method for defining, identifying, and estimating the MTE in the absence of IVs. Our MTE model allows all the covariates to be correlated with the structural treatment error. In this model, we define the MTE based on a reduced-form treatment error that (i) is uniformly distributed on the unit interval, (ii) is statistically independent of the covariates, and (iii) admits meaningful economic interpretations. The independence property facilitates the identification of our defined MTE. We provide sufficient conditions under which the MTE can be point identified based on functional form. These conditions are standard in the following sense: the conditional mean independence assumption is equivalent to the separability assumption commonly imposed in the MTE literature, and is implied by and much weaker than the full independence assumption; the linearity and nonlinearity assumptions are the foundation of identification based on functional form and are plausible in most empirical settings. We prove the identification by a constructive method. Our identification strategy allows the adaptation of most of the existing estimation procedures for conventional IV-MTE models, such as separate estimation, LIV estimation, parametric estimation, and semiparametric estimation. In the empirical application, we evaluate the MTE of the Head Start program on long-term education and labor market outcomes, a setting in which an IV for Head Start participation is difficult to acquire. We find significant positive effects for individuals with medium-level or low resistance to treatment, together with substantial effect heterogeneity.

References

• Ahn, H. and J. L. Powell (1993). Semiparametric estimation of censored selection models with a nonparametric selection mechanism. Journal of Econometrics 58(1-2), 3–29.
• Aryal, G., M. Bhuller, and F. Lange (2022). Signaling and employer learning with instruments. American Economic Review 112(5), 1669–1702.
• Barr, A. and C. R. Gibbs (2022). Breaking the cycle? Intergenerational effects of an antipoverty program in early childhood. Journal of Political Economy 130(12), 3253–3285.
• Bauer, L. and D. W. Schanzenbach (2016). The Long-Term Impact of the Head Start Program. Washington DC: Hamilton Project.
• Brinch, C. N., M. Mogstad, and M. Wiswall (2017). Beyond LATE with a discrete instrument. Journal of Political Economy 125(4), 985–1039.
• Chamberlain, G. (1986). Asymptotic efficiency in semi-parametric models with censoring. Journal of Econometrics 32(2), 189–218.
• Conley, T. G., C. B. Hansen, and P. E. Rossi (2012). Plausibly exogenous. Review of Economics and Statistics 94(1), 260–272.
• Cornelissen, T., C. Dustmann, A. Raute, and U. Schönberg (2018). Who benefits from universal child care? Estimating marginal returns to early child care attendance. Journal of Political Economy 126(6), 2356–2409.
• Currie, J. (2001). Early childhood education programs. Journal of Economic Perspectives 15(2), 213–238.
• De Haan, M. and E. Leuven (2020). Head Start and the distribution of long-term education and labor market outcomes. Journal of Labor Economics 38(3), 727–765.
• Deming, D. (2009). Early childhood intervention and life-cycle skill development: Evidence from Head Start. American Economic Journal: Applied Economics 1(3), 111–134.
• D’Haultfœuille, X., S. Hoderlein, and Y. Sasaki (2021). Testing and relaxing the exclusion restriction in the control function approach. Journal of Econometrics (in press). Available at https://doi.org/10.1016/j.jeconom.2020.09.012.
• D’Haultfœuille, X. and A. Maurel (2013). Another look at the identification at infinity of sample selection models. Econometric Theory 29(1), 213–224.
• Dong, Y. (2010). Endogenous regressor binary choice models without instruments, with an application to migration. Economics Letters 107(1), 33–35.
• Elango, S., J. L. García, J. J. Heckman, and A. Hojman (2016). Early childhood education. In R. A. Moffitt (Ed.), Economics of Means-Tested Transfer Programs in the United States, Volume 2. Chicago: University of Chicago Press.
• Escanciano, J. C., D. Jacho-Chávez, and A. Lewbel (2016). Identification and estimation of semiparametric two-step models. Quantitative Economics 7(2), 561–589.
• Fisher, F. M. (1966). The Identification Problem in Econometrics. McGraw-Hill.
• Garces, E., D. Thomas, and J. Currie (2002). Longer-term effects of Head Start. American Economic Review 92(4), 999–1012.
• Garlick, R. and J. Hyman (2022). Quasi-experimental evaluation of alternative sample selection corrections. Journal of Business & Economic Statistics 40(3), 950–964.
• Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica 47(1), 153–161.
• Heckman, J. J., J. E. Humphries, and G. Veramendi (2018). Returns to education: The causal effects of education on earnings, health, and smoking. Journal of Political Economy 126(S1), S197–S246.
• Heckman, J. J. and E. Vytlacil (1999). Local instrumental variables and latent variable models for identifying and bounding treatment effects. Proceedings of the National Academy of Sciences 96(8), 4730–4734.
• Heckman, J. J. and E. Vytlacil (2001). Local instrumental variables. In C. Hsiao, K. Morimune, and J. L. Powell (Eds.), Nonlinear Statistical Modeling: Proceedings of the Thirteenth International Symposium in Economic Theory and Econometrics: Essays in Honor of Takeshi Amemiya, Chapter 1, pp. 1–46. Cambridge, U.K.: Cambridge University Press.
• Heckman, J. J. and E. Vytlacil (2005). Structural equations, treatment effects, and econometric policy evaluation. Econometrica 73(3), 669–738.
• Honoré, B. E. and L. Hu (2020). Selection without exclusion. Econometrica 88(3), 1007–1029.
• Honoré, B. E. and L. Hu (2022). Sample selection models without exclusion restrictions: Parameter heterogeneity and partial identification. Journal of Econometrics (in press). Available at https://doi.org/10.1016/j.jeconom.2021.07.017.
• Ichimura, H. (1993). Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. Journal of Econometrics 58, 71–120.
• Jones, D. (2015). The economics of exclusion restrictions in IV models. NBER Working Paper No. 21391. Available at https://doi.org/10.3386/w21391.
• Kédagni, D. and I. Mourifié (2020). Generalized instrumental inequalities: Testing the instrumental variable independence assumption. Biometrika 107(3), 661–675.
• Khan, S. and E. Tamer (2010). Irregular identification, support conditions, and inverse weight estimation. Econometrica 78(6), 2021–2042.
• Kirkeboen, L. J., E. Leuven, and M. Mogstad (2016). Field of study, earnings, and self-selection. The Quarterly Journal of Economics 131(3), 1057–1111.
• Klein, R. and F. Vella (2010). Estimating a class of triangular simultaneous equations models without exclusion restrictions. Journal of Econometrics 154(2), 154–164.
• Klein, R. W. and R. H. Spady (1993). An efficient semiparametric estimator for binary response models. Econometrica 61, 387–421.
• Kline, P. and C. R. Walters (2016). Evaluating public programs with close substitutes: The case of Head Start. The Quarterly Journal of Economics 131(4), 1795–1848.
• Lewbel, A. (1997). Constructing instruments for regressions with measurement error when no additional data are available, with an application to patents and R&D. Econometrica 65(5), 1201–1214.
• Lewbel, A. (2000). Semiparametric qualitative response model estimation with unknown heteroscedasticity or instrumental variables. Journal of Econometrics 97(1), 145–177.
• Lewbel, A. (2007). Endogenous selection or treatment model estimation. Journal of Econometrics 141(2), 777–806.
• Lewbel, A. (2012). Using heteroscedasticity to identify and estimate mismeasured and endogenous regressor models. Journal of Business & Economic Statistics 30(1), 67–80.
• Lewbel, A. (2018). Identification and estimation using heteroscedasticity without instruments: The binary endogenous regressor case. Economics Letters 165, 10–12.
• Lewbel, A. (2019). The identification zoo: Meanings of identification in econometrics. Journal of Economic Literature 57(4), 835–903.
• Mogstad, M., A. Santos, and A. Torgovitsky (2018). Using instrumental variables for inference about policy relevant treatment parameters. Econometrica 86(5), 1589–1619.
• Mogstad, M., A. Torgovitsky, and C. R. Walters (2021). The causal interpretation of two-stage least squares with multiple instrumental variables. American Economic Review 111(11), 3663–3698.
• Mountjoy, J. (2022). Community colleges and upward mobility. American Economic Review 112(8), 2580–2630.
• Mourifié, I., M. Henry, and R. Méango (2020). Sharp bounds and testability of a Roy model of STEM major choices. Journal of Political Economy 128(8), 3220–3283.
• Pan, Z. and J. Xie (2023). \ell_{1}-penalized pairwise difference estimation for a high-dimensional censored regression model. Journal of Business & Economic Statistics 41(2), 283–297.
• Pan, Z., X. Zhou, and Y. Zhou (2022). Semiparametric estimation of a censored regression model subject to nonparametric sample selection. Journal of Business & Economic Statistics 40(1), 141–151.
• Powell, J. L., J. H. Stock, and T. M. Stoker (1989). Semiparametric estimation of index coefficients. Econometrica 57(6), 1403–1430.
• Racine, J. and Q. Li (2004). Nonparametric estimation of regression functions with both categorical and continuous data. Journal of Econometrics 119(1), 99–130.
• Roy, A. D. (1951). Some thoughts on the distribution of earnings. Oxford Economic Papers 3(2), 135–146.
• Rubin, D. B. (1973a). Matching to remove bias in observational studies. Biometrics 29(1), 159–183.
• Rubin, D. B. (1973b). The use of matched sampling and regression adjustment to remove bias in observational studies. Biometrics 29(1), 184–203.
• Torgovitsky, A. (2015). Identification of nonseparable models using instruments with small support. Econometrica 83(3), 1185–1197.
• van den Berg, G. J. (2007). An economic analysis of exclusion restrictions for instrumental variable estimation. IZA Discussion Paper No. 2585. Available at https://ssrn.com/abstract=964965 or https://doi.org/10.2139/ssrn.964965.
• Van Kippersluis, H. and C. A. Rietveld (2018). Beyond plausibly exogenous. The Econometrics Journal 21(3), 316–331.
• Vytlacil, E. (2002). Independence, monotonicity, and latent index models: An equivalence result. Econometrica 70(1), 331–341.
• Vytlacil, E. (2006). A note on additive separability and latent index models of binary choice: Representation results. Oxford Bulletin of Economics and Statistics 68(4), 515–518.
• Westphal, M., D. A. Kamhöfer, and H. Schmitz (2022). Marginal college wage premiums under selection into employment. The Economic Journal 132(646), 2231–2272.
• Zhou, X. and Y. Xie (2019). Marginal treatment effects from a propensity score perspective. Journal of Political Economy 127(6), 3070–3084.


Figure 1: An Informal Test for Assumption NL1. The figures plot the nonparametrically estimated propensity score as a function of the continuous covariate, using, in the left panel, the data cell where the estimated propensity score has full support (i.e., the 34th data cell in Table 1) and, in the right panel, the largest data cell with 175 observations (i.e., the 29th data cell in Table 1). Both panels show nonmonotonicity of the propensity score, which verifies Assumption NL1.


Figure 2: Common Support. The figure plots the frequency distribution of the propensity score by treatment status. The propensity score is predicted via nonparametric kernel regression estimation. The dashed reference lines indicate the lower limit (0) and upper limit (0.6) of the propensity score with common support (based on 1% trimming on both sides).


Figure 3: MTE Curves under Normal Specification. The MTE is estimated by the separate procedure based on parametric normal specification and evaluated at mean values of the covariates. Panels A and B depict the estimated MTE curves for education outcome and labor market outcome, respectively. The 90% confidence interval is based on bootstrapping with 1,000 replications.


Figure 4: MTE Curves under Semiparametric Specification. The MTE is estimated by the separate procedure based on the semiparametric specification, with the rule-of-thumb bandwidth, and evaluated at the mean values of the covariates. Panels A and B depict the estimated MTE curves for education outcome and labor market outcome, respectively. The 90% confidence interval is based on bootstrapping with 1,000 replications.


Figure 5: Semiparametric Estimates of Marginal Structural Functions. The marginal structural function refers to the expected potential outcome conditional on the unobserved resistance to treatment. Panels A and B depict the estimated marginal structural function curves for education outcome and labor market outcome, respectively. The 90% confidence interval is based on bootstrapping with 1,000 replications.


Figure 6: Semiparametric Estimates of Marginal Response Functions. This figure plots the (conditional) expectation of the observed outcome (solid line on the left y-axis), and the conditional probability of participation (dashed line on the right y-axis). Panels A and B illustrate the estimated marginal response function curves for education outcome and labor market outcome, respectively. The 90% confidence interval is based on bootstrapping with 1,000 replications.
Table 1: Support of the nonparametrically estimated propensity score in each data cell
Data cell Age Gender Race Parental education Observations Support of \hat{\pi}_{0}\left(X^{C}\right)
1 30 Male Hispanic High school or lower 70 [0, 0.208]
2 30 Male Hispanic College or higher 10 [0, 0.288]
3 30 Male Black High school or lower 93 [0, 0.588]
4 30 Male Black College or higher 21 [0, 0.813]
5 30 Male White High school or lower 147 [0, 0.177]
6 30 Male White College or higher 78 [0, 0.208]
7 30 Female Hispanic High school or lower 65 [0, 0.434]
8 30 Female Hispanic College or higher 11 0
9 30 Female Black High school or lower 100 [0, 0.671]
10 30 Female Black College or higher 20 [0, 0.641]
11 30 Female White High school or lower 125 [0, 0.759]
12 30 Female White College or higher 73 [0, 0.179]
13 31 Male Hispanic High school or lower 87 [0, 0.374]
14 31 Male Hispanic College or higher 19 [0, 0.442]
15 31 Male Black High school or lower 112 [0.030, 0.548]
16 31 Male Black College or higher 24 [0, 0.531]
17 31 Male White High school or lower 148 [0, 0.242]
18 31 Male White College or higher 92 [0, 0.125]
19 31 Female Hispanic High school or lower 97 [0, 0.737]
20 31 Female Hispanic College or higher 14 [0, 0.788]
21 31 Female Black High school or lower 121 [0.249, 0.817]
22 31 Female Black College or higher 24 [0, 0.639]
23 31 Female White High school or lower 149 [0, 0.178]
24 31 Female White College or higher 75 [0, 0.221]
25 32 Male Hispanic High school or lower 65 [0, 0.771]
26 32 Male Hispanic College or higher 13 0
27 32 Male Black High school or lower 133 [0, 0.623]
28 32 Male Black College or higher 23 [0, 0.949]
29 32 Male White High school or lower 175 [0, 0.194]
30 32 Male White College or higher 90 [0, 0.061]
31 32 Female Hispanic High school or lower 94 [0, 0.642]
32 32 Female Hispanic College or higher 17 [0, 0.351]
33 32 Female Black High school or lower 130 [0.021, 0.999]
34 32 Female Black College or higher 34 [0, 1]
35 32 Female White High school or lower 155 [0, 0.526]
36 32 Female White College or higher 81 [0, 0.118]
37 33 Male Hispanic High school or lower 72 [0, 0.567]
38 33 Male Hispanic College or higher 15 [0, 0.743]
39 33 Male Black High school or lower 123 [0, 0.818]
40 33 Male Black College or higher 25 [0, 0.596]
41 33 Male White High school or lower 141 [0, 0.154]
42 33 Male White College or higher 95 [0, 0.133]
43 33 Female Hispanic High school or lower 83 [0, 0.481]
44 33 Female Hispanic College or higher 10 [0, 0.447]
45 33 Female Black High school or lower 125 [0.258, 0.628]
46 33 Female Black College or higher 27 [0, 0.925]
47 33 Female White High school or lower 152 [0, 0.140]
48 33 Female White College or higher 93 [0, 0.350]
49 34 Male Hispanic High school or lower 85 [0, 0.864]
50 34 Male Hispanic College or higher 16 [0, 0.673]
51 34 Male Black High school or lower 101 [0.176, 1]
52 34 Male Black College or higher 27 [0, 0.790]
53 34 Male White High school or lower 162 [0, 0.204]
54 34 Male White College or higher 85 [0, 0.057]
55 34 Female Hispanic High school or lower 73 [0, 0.443]
56 34 Female Hispanic College or higher 11 0
57 34 Female Black High school or lower 109 [0.271, 0.586]
58 34 Female Black College or higher 24 [0, 0.856]
59 34 Female White High school or lower 143 [0, 0.5]
60 34 Female White College or higher 76 [0, 0.055]
Table 2: Semiparametric estimates of outcome equation coefficients
Years of education Wage income
Treated Untreated Difference Treated Untreated Difference
(1) (2) (3) (4) (5) (6)
Age 0.000 0.000 0.000 0.005 0.007 -0.002
(0.056) (0.027) (0.062) (0.030) (0.014) (0.033)
Female 0.558∗∗∗ 0.254∗∗∗ 0.304 -0.321∗∗∗ -0.503∗∗∗ 0.182∗∗
(0.159) (0.078) (0.174) (0.083) (0.039) (0.091)
Black -0.378 -0.226 -0.152 -0.015 -0.404∗∗∗ 0.389
(0.518) (0.227) (0.559) (0.216) (0.145) (0.263)
Hispanic -0.642 -0.403∗∗∗ -0.239 -0.097 -0.000 -0.097
(0.408) (0.145) (0.430) (0.169) (0.068) (0.183)
Parental education 1.764∗∗∗ 1.796∗∗∗ -0.032 0.247∗∗ 0.197∗∗∗ 0.050
(0.254) (0.108) (0.271) (0.096) (0.050) (0.107)
Family income 1978 0.053∗∗∗ 0.047∗∗∗ 0.006 0.021∗∗∗ 0.014∗∗∗ 0.007
(0.011) (0.004) (0.012) (0.005) (0.002) (0.005)
ATE 0.421 0.457∗∗
(0.492) (0.228)
TT 0.597 0.543
(1.047) (0.484)
TUT 0.376 0.438
(0.579) (0.266)
Sample size 4,554 3,589

Notes: Columns 1 and 4 display the estimates of coefficients in the treated state (\beta_{1} in equation (8)), and columns 2 and 5 display the estimates of coefficients in the untreated state (\beta_{0}). Columns 3 and 6 display the difference in the estimates between the treated and untreated states (\beta_{1}-\beta_{0}), as well as the summary causal parameters (i.e., ATE, TT, and TUT). Bootstrapped standard errors from 1,000 replications are reported in parentheses.
∗ Significant at the 10% level
∗∗ Significant at the 5% level
∗∗∗ Significant at the 1% level

Online Appendices to “Marginal treatment effects in the absence of instrumental variables”

Appendix A Proof of Theorem 1

Denote m_{d}\left(x\right)=E\left[Y\left|X=x,D=d\right.\right] and m_{d0}\left(x^{C}\right)=m_{d}\left(x^{C},0\right), d=0,1. Note that \pi\left(x\right) and m_{d}\left(x\right), and thus \pi_{0}\left(x^{C}\right) and m_{d0}\left(x^{C}\right), are identified functions because they are conditional expectations of observed variables. We first consider identifying \beta_{d}^{C}, the coefficients of the continuous covariates, from \pi_{0}\left(x^{C}\right) and m_{d0}\left(x^{C}\right). By equation (7), we have

m_{d0}\left(x^{C}\right)=x^{C\prime}\beta_{d}^{C}+g_{d}\left(\pi_{0}\left(x^{C}\right)\right). (A.1)

When \dim\left(X^{C}\right)=1, by differentiating both sides of (A.1) at \tilde{x}^{C} and invoking Assumption NL1, we obtain

m_{d0}^{\left(1\right)}\left(\tilde{x}^{C}\right)=\beta_{d}^{C}+g_{d}^{\left(1\right)}\left(\pi_{0}\left(\tilde{x}^{C}\right)\right)\pi_{0}^{\left(1\right)}\left(\tilde{x}^{C}\right)=\beta_{d}^{C},

which identifies \beta_{d}^{C} for d=0,1. When \dim\left(X^{C}\right)\geq 2, for k,j\in\left\{1,2,\cdots,\dim\left(X^{C}\right)\right\} satisfying Assumption NL2, taking the partial derivatives of m_{d0}\left(x^{C}\right) with respect to x_{k}^{C} and x_{j}^{C} yields

\partial_{k}m_{d0}\left(x^{C}\right)=\beta_{d,k}^{C}+g_{d}^{\left(1\right)}\left(\pi_{0}\left(x^{C}\right)\right)\partial_{k}\pi_{0}\left(x^{C}\right),
\partial_{j}m_{d0}\left(x^{C}\right)=\beta_{d,j}^{C}+g_{d}^{\left(1\right)}\left(\pi_{0}\left(x^{C}\right)\right)\partial_{j}\pi_{0}\left(x^{C}\right).

It follows from Assumption NL2.(i)-(ii) that

\frac{\partial_{k}m_{d0}\left(x^{C}\right)-\beta_{d,k}^{C}}{\partial_{k}\pi_{0}\left(x^{C}\right)}=g_{d}^{\left(1\right)}\left(\pi_{0}\left(x^{C}\right)\right)=\frac{\partial_{j}m_{d0}\left(x^{C}\right)-\beta_{d,j}^{C}}{\partial_{j}\pi_{0}\left(x^{C}\right)},

so that

\partial_{k}m_{d0}\left(x^{C}\right)\partial_{j}\pi_{0}\left(x^{C}\right)-\partial_{j}m_{d0}\left(x^{C}\right)\partial_{k}\pi_{0}\left(x^{C}\right)=\partial_{j}\pi_{0}\left(x^{C}\right)\beta_{d,k}^{C}-\partial_{k}\pi_{0}\left(x^{C}\right)\beta_{d,j}^{C}, (A.2)

which is linear in \beta_{d,k}^{C} and \beta_{d,j}^{C}. The same equation is obtained if we evaluate the expression at another point \tilde{x}^{C} that satisfies Assumption NL2.(iii)-(iv), which gives

\left(\begin{array}{c}\partial_{k}m_{d0}\left(x^{C}\right)\partial_{j}\pi_{0}\left(x^{C}\right)-\partial_{j}m_{d0}\left(x^{C}\right)\partial_{k}\pi_{0}\left(x^{C}\right)\\ \partial_{k}m_{d0}\left(\tilde{x}^{C}\right)\partial_{j}\pi_{0}\left(\tilde{x}^{C}\right)-\partial_{j}m_{d0}\left(\tilde{x}^{C}\right)\partial_{k}\pi_{0}\left(\tilde{x}^{C}\right)\end{array}\right)=\Psi\left(\begin{array}{c}\beta_{d,k}^{C}\\ \beta_{d,j}^{C}\end{array}\right), (A.3)

where

\Psi=\left(\begin{array}{cc}\partial_{j}\pi_{0}\left(x^{C}\right)&-\partial_{k}\pi_{0}\left(x^{C}\right)\\ \partial_{j}\pi_{0}\left(\tilde{x}^{C}\right)&-\partial_{k}\pi_{0}\left(\tilde{x}^{C}\right)\end{array}\right).

Assumption NL2.(v) ensures that the determinant of \Psi is nonzero, so \Psi is nonsingular. Therefore, equation (A.3) can be solved for \beta_{d,k}^{C} and \beta_{d,j}^{C} by inverting \Psi, thereby identifying \beta_{d,k}^{C} and \beta_{d,j}^{C}. Given the identification of \beta_{d,k}^{C}, we can then identify every other coefficient \beta_{d,l}^{C} in \beta_{d}^{C} by solving (A.2) with the subscript j replaced by l, which gives

\beta_{d,l}^{C}=\frac{\partial_{l}m_{d0}\left(x^{C}\right)\partial_{k}\pi_{0}\left(x^{C}\right)-\partial_{k}m_{d0}\left(x^{C}\right)\partial_{l}\pi_{0}\left(x^{C}\right)+\partial_{l}\pi_{0}\left(x^{C}\right)\beta_{d,k}^{C}}{\partial_{k}\pi_{0}\left(x^{C}\right)}.

Given the identification of \beta_{d}^{C}, the function g_{d} is identified on the support of \pi_{0}\left(X^{C}\right) by

g_{d}\left(p\right)=E\left[\left.m_{d0}\left(X^{C}\right)-X^{C\prime}\beta_{d}^{C}\right|\pi_{0}\left(X^{C}\right)=p\right].

Next, we consider identifying \beta_{d}^{D}, the coefficients of the discrete covariates. For each k\in\left\{1,2,\cdots,\dim\left(X^{D}\right)\right\}, we have

m_{d}\left(x^{C},x^{Dk}\right)=x^{C\prime}\beta_{d}^{C}+x_{k}^{D}\beta_{d,k}^{D}+g_{d}\left(\pi\left(x^{C},x^{Dk}\right)\right)

for any x^{C} in the support of X^{C}. By Assumption S, there exists x^{C}\left(k\right) in the support of X^{C} such that \pi\left(x^{C}\left(k\right),x^{Dk}\right) is in the support of \pi_{0}\left(X^{C}\right). It follows from the above identification result that g_{d}\left(\pi\left(x^{C}\left(k\right),x^{Dk}\right)\right) is identified. Consequently, \beta_{d,k}^{D} is identified by

\beta_{d,k}^{D}=\frac{m_{d}\left(x^{C}\left(k\right),x^{Dk}\right)-x^{C}\left(k\right)^{\prime}\beta_{d}^{C}-g_{d}\left(\pi\left(x^{C}\left(k\right),x^{Dk}\right)\right)}{x_{k}^{D}}.

This argument holds for each k\in\left\{1,2,\cdots,\dim\left(X^{D}\right)\right\}, thereby identifying \beta_{d}^{D}. Finally, given the identification of \beta_{d}=\left(\beta_{d}^{C},\beta_{d}^{D}\right), it follows from (7) that the function g_{d} is identified on the support of P=\pi\left(X\right) by

g_{d}\left(p\right)=E\left[\left.m_{d}\left(X\right)-X^{\prime}\beta_{d}\right|\pi\left(X\right)=p\right],

for d=0,1.
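As an aside, the one-dimensional identification step above is easy to verify numerically; the following toy check (all functional forms are made-up illustrations, not part of the model) confirms that the derivative of m_{d0} at a stationary point of \pi_{0} recovers \beta_{d}^{C}:

    import numpy as np

    # Toy check of the dim(X^C) = 1 step: pick an arbitrary smooth g_d and
    # a propensity function pi0 with a stationary point, and verify that
    # m_d0'(x) = beta at that point, as the proof asserts.
    beta = 1.7                                # illustrative true coefficient
    g = lambda p: np.log(1.0 + p)             # illustrative g_d
    pi0 = lambda x: 0.5 + 0.3 * np.sin(x)     # stationary point at x = pi/2
    m_d0 = lambda x: x * beta + g(pi0(x))     # equation (A.1)

    x_tilde, eps = np.pi / 2, 1e-6
    deriv = (m_d0(x_tilde + eps) - m_d0(x_tilde - eps)) / (2.0 * eps)
    print(round(deriv, 4))                    # 1.7, since pi0'(x_tilde) = 0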

Appendix B Further discussion on the nonlinearity assumption

In light of the proof of Theorem 1, our identification strategy may work even when no continuous covariate is available in the data. In that case, we put a discretely distributed covariate in X^{C}.

Theorem B.1.

Theorem 1 holds if Assumption NL1 is replaced by the condition that there exist two distinct constants x^{C} and \tilde{x}^{C} in the support of X^{C} such that \pi_{0}\left(x^{C}\right)=\pi_{0}\left(\tilde{x}^{C}\right).

Proof.

When \dim\left(X^{C}\right)=1, it follows from (A.1) and the assumption of Theorem B.1 that

m_{d0}\left(x^{C}\right)-x^{C}\beta_{d}^{C}=g_{d}\left(\pi_{0}\left(x^{C}\right)\right)=g_{d}\left(\pi_{0}\left(\tilde{x}^{C}\right)\right)=m_{d0}\left(\tilde{x}^{C}\right)-\tilde{x}^{C}\beta_{d}^{C}.

Hence, \beta_{d}^{C} is identified as

\beta_{d}^{C}=\frac{m_{d0}\left(x^{C}\right)-m_{d0}\left(\tilde{x}^{C}\right)}{x^{C}-\tilde{x}^{C}}.

The remaining part of the proof is the same as the proof of Theorem 1.    

The assumption of Theorem B.1, in place of NL1, requires the univariate function \pi_{0} not to be one-to-one but imposes no other smoothness or continuity restriction on \pi_{0}. Therefore, it doesn't require X^{C} to be continuously valued. In the extreme case of X^{C} containing only one binary covariate, this assumption is equivalent to the full irrelevance of X^{C} to the treatment probability, which is a condition suggested by Chamberlain (1986) for identification of the sample selection model. The following two theorems show that Assumption NL1 and the assumption of Theorem B.1 can be extended to the case of multiple continuous covariates.

Theorem B.2.

Theorem 1 holds if Assumption NL is replaced by the condition that (i) there are two points x^{C} and \tilde{x}^{C} in the support of X^{C} and an index k such that x_{k}^{C}\neq\tilde{x}_{k}^{C}, x_{-k}^{C}=\tilde{x}_{-k}^{C}, and \pi_{0}\left(x^{C}\right)=\pi_{0}\left(\tilde{x}^{C}\right), and (ii) there exists \breve{x}^{C} in the support of X^{C} such that the functions \pi_{0} and m_{d0} are differentiable at \breve{x}^{C}, with \partial_{k}\pi_{0}\left(\breve{x}^{C}\right)\neq 0.

Proof.

It follows from equation (A.1) and \pi_{0}\left(x^{C}\right)=\pi_{0}\left(\tilde{x}^{C}\right) that

m_{d0}\left(x^{C}\right)-x_{k}^{C}\beta_{d,k}^{C}-x_{-k}^{C\prime}\beta_{d,-k}^{C}=m_{d0}\left(\tilde{x}^{C}\right)-\tilde{x}_{k}^{C}\beta_{d,k}^{C}-\tilde{x}_{-k}^{C\prime}\beta_{d,-k}^{C}.

Hence, \beta_{d,k}^{C} is identified by condition (i) as

\beta_{d,k}^{C}=\frac{m_{d0}\left(x^{C}\right)-m_{d0}\left(\tilde{x}^{C}\right)}{x_{k}^{C}-\tilde{x}_{k}^{C}}.

For any index j\neq k, taking the partial derivatives of m_{d0}\left(x^{C}\right) with respect to x_{k}^{C} and x_{j}^{C} at \breve{x}^{C} yields

\partial_{k}m_{d0}\left(\breve{x}^{C}\right)=\beta_{d,k}^{C}+g_{d}^{\left(1\right)}\left(\pi_{0}\left(\breve{x}^{C}\right)\right)\partial_{k}\pi_{0}\left(\breve{x}^{C}\right), (B.1)
\partial_{j}m_{d0}\left(\breve{x}^{C}\right)=\beta_{d,j}^{C}+g_{d}^{\left(1\right)}\left(\pi_{0}\left(\breve{x}^{C}\right)\right)\partial_{j}\pi_{0}\left(\breve{x}^{C}\right). (B.2)

Since \partial_{k}\pi_{0}\left(\breve{x}^{C}\right)\neq 0 by condition (ii), we can find g_{d}^{\left(1\right)}\left(\pi_{0}\left(\breve{x}^{C}\right)\right) from (B.1) to be

g_{d}^{\left(1\right)}\left(\pi_{0}\left(\breve{x}^{C}\right)\right)=\frac{\partial_{k}m_{d0}\left(\breve{x}^{C}\right)-\beta_{d,k}^{C}}{\partial_{k}\pi_{0}\left(\breve{x}^{C}\right)},

and substitute it into (B.2) to identify \beta_{d,j}^{C} as

\beta_{d,j}^{C}=\partial_{j}m_{d0}\left(\breve{x}^{C}\right)-\frac{\partial_{j}\pi_{0}\left(\breve{x}^{C}\right)}{\partial_{k}\pi_{0}\left(\breve{x}^{C}\right)}\left[\partial_{k}m_{d0}\left(\breve{x}^{C}\right)-\beta_{d,k}^{C}\right].

Therefore, \beta_{d}^{C} is identified. The remaining part of the proof is the same as the proof of Theorem 1.

Theorem B.3.

Theorem 1 holds if Assumption NL is replaced by the condition that there exist two points x^{C} and \tilde{x}^{C} in the support of X^{C} and an index k such that the functions \pi_{0} and m_{d0} are differentiable at x^{C} and \tilde{x}^{C}, with \partial_{k}\pi_{0}\left(x^{C}\right)=0 and \partial_{k}\pi_{0}\left(\tilde{x}^{C}\right)\neq 0.

Proof.

The identification of \beta_{d,k}^{C} is analogous to that in Theorem 1. Then, the identification of all the other coefficients \beta_{d,j}^{C} in \beta_{d}^{C} and of the remaining components of the model is the same as in Theorem B.2 and Theorem 1, respectively.

Alternatively, the identification of the coefficients of continuous covariates can be attained by exploiting the local irrelevance of the function g_{d}\left(p\right).

Theorem B.4.

Theorem 1 holds if Assumption NL is replaced by the existence of a vector x^{C} in the support of X^{C} such that g_{d}^{\left(1\right)}\left(\pi_{0}\left(x^{C}\right)\right)=0 for d=0,1, and such that the functions \pi_{0} and m_{d0} are differentiable at x^{C}.

Proof.

For each k\in\left\{1,2,\cdots,\dim\left(X^{C}\right)\right\}, taking the partial derivative of both sides of (A.1) with respect to x_{k}^{C} gives

\partial_{k}m_{d0}\left(x^{C}\right)=\beta_{d,k}^{C}+g_{d}^{\left(1\right)}\left(\pi_{0}\left(x^{C}\right)\right)\partial_{k}\pi_{0}\left(x^{C}\right)=\beta_{d,k}^{C},

which immediately identifies all \beta_{d,k}^{C} in \beta_{d}^{C}. The remaining part of the proof is the same as that of Theorem 1.

The above discussion further illustrates the overidentification feature of our strategy. In fact, the identifying power of our method is determined by the number of nonlinearity points in the data satisfying Assumption NL or Theorems B.1-B.4, given different benchmark values of X^{D}.

Appendix C Limited valued outcomes

To accommodate limited valued outcomes, we first generalize the linear additive model to a linear latent index model.

Assumption C.L (Linear index). Assume that Y_{d}=l\left(X^{\prime}\gamma_{d},V_{d}\right), d=0,1, holds for some known link function l and unknown coefficients \gamma_{d}, where V_{d} is the unobserved error term of Y_{d}.

The generalized linear model applies to a variety of frequently encountered limited dependent variables, such as l\left(t,v\right)=1\left\{t\geq v\right\} for binary Y\in\left\{0,1\right\}, l\left(t,v\right)=\max\left(t-v,0\right) for censored (at zero) Y\geq 0, l\left(t,v\right)=\exp\left(t+v\right)/\left[1+\exp\left(t+v\right)\right] for proportion-valued Y\in\left(0,1\right), and l\left(t,v\right)=\sum_{j=1}^{s}1\left\{t+v\geq c_{j}\right\} for ordered Y\in\left\{0,1,\cdots,s\right\} with known thresholds c_{1}<c_{2}<\cdots<c_{s}, among others. Although setting l\left(t,v\right)=t+v recovers the linear additive model, the following discussion doesn't include Theorem 1 as a special case because \gamma_{d} here can be identified only up to scale for a general link function l. Any change in the scaling of \gamma_{d} can be freely absorbed into l, so a scale normalization is needed for identification of \gamma_{d} and l.

Assumption C.N (Normalization). Decompose X^{\prime}\gamma_{d}=X^{C\prime}\gamma_{d}^{C}+X^{D\prime}\gamma_{d}^{D}, where X^{C} and X^{D} consist of covariates that are continuously and discretely distributed, respectively. Assume that \gamma_{d,1}^{C}, the first element of \gamma_{d}^{C}, equals one.

Assumption C.N imposes the convenient normalization that the first continuous covariate has a unit coefficient. This scaling of \gamma_{d} is arbitrary and innocuous because our focus is on the identification of the MTE, which is a difference in the conditional expectations of l\left(X^{\prime}\gamma_{d},V_{d}\right), rather than on the separate identification of \gamma_{d} and l. Since in the generalized linear model the potential outcomes are not necessarily additive in the error term, the mean independence assumption needs to be strengthened to a stricter distributional independence assumption.

Assumption C.CI (Conditional Independence). Assume that V_{d}\perp\!\!\!\!\perp X\left|V\right. for d=0,1; namely, V_{d} is independent of X conditional on V, where V is the reduced-form treatment error in equation (3).

Recalling that V\perp\!\!\!\!\perp X by definition, Assumption C.CI is equivalent to the full independence \left(V_{d},V\right)\perp\!\!\!\!\perp X for d=0,1, which implies that both the marginal distribution of V_{d} and the copula of V_{d} and V are independent of X. Under Assumptions C.L and C.CI, we have

\Delta^{\text{MTE}}\left(x,v\right)=E\left[\left.l\left(x^{\prime}\gamma_{1},V_{1}\right)-l\left(x^{\prime}\gamma_{0},V_{0}\right)\right|V=v\right].

The additive nonseparability of the observables and unobservables constitutes the primary difficulty in identifying the MTE in the limited outcome case. Meanwhile, Assumptions C.L and C.CI lead to a double index form of the observable outcome regression functions for each treatment status, that is,

E\left[\left.Y1\left\{D=d\right\}\right|X=x\right]=r_{d}\left(x^{\prime}\gamma_{d},\pi\left(x\right)\right) (C.1)

for d=0,1, where

r_{0}\left(t,p\right)=\int_{p}^{1}E\left[\left.l\left(t,V_{0}\right)\right|V=v\right]dv, (C.2)
r_{1}\left(t,p\right)=\int_{0}^{p}E\left[\left.l\left(t,V_{1}\right)\right|V=v\right]dv. (C.3)

Provided that \pi\left(x\right) is a nonlinear index, \gamma_{d} and thus r_{d} can be identified based on functional form, which ensures identification of the MTE since

\Delta^{\text{MTE}}\left(x,v\right)=\partial_{2}r_{1}\left(x^{\prime}\gamma_{1},v\right)+\partial_{2}r_{0}\left(x^{\prime}\gamma_{0},v\right), (C.4)

where \partial_{2}r_{d} is the partial derivative of r_{d} with respect to its second argument. Equation (C.4) holds because differentiating (C.2) and (C.3) with respect to p gives \partial_{2}r_{0}\left(t,p\right)=-E\left[\left.l\left(t,V_{0}\right)\right|V=p\right] and \partial_{2}r_{1}\left(t,p\right)=E\left[\left.l\left(t,V_{1}\right)\right|V=p\right], whose sum, evaluated at t=x^{\prime}\gamma_{1} and t=x^{\prime}\gamma_{0} respectively, equals the MTE. As in the case of unlimited outcomes, the key power of this identification strategy is supplied by the nonlinearity of \pi\left(x\right), as specified by the following assumption.

Assumption C.NL (Non-Linearity). Assume that the functions \pi_{0} and r_{d} are differentiable, and denote their partial derivatives with respect to the k-th argument by \partial_{k}\pi_{0} and \partial_{k}r_{d}, where \pi_{0}\left(x^{C}\right)=\pi\left(x^{C},0\right) and r_{d}, d=0,1, are defined in (C.2) and (C.3). Assume that there exist two vectors x^{C} and \tilde{x}^{C} in the support of X^{C} and two elements k and j of the set \left\{1,2,\cdots,\dim\left(X^{C}\right)\right\} such that (i) \partial_{1}r_{d}\left(x^{C\prime}\gamma_{d}^{C},\pi_{0}\left(x^{C}\right)\right)\neq 0, (ii) \partial_{1}r_{d}\left(\tilde{x}^{C\prime}\gamma_{d}^{C},\pi_{0}\left(\tilde{x}^{C}\right)\right)\neq 0, (iii) \partial_{k}\pi_{0}\left(x^{C}\right)\neq\partial_{1}\pi_{0}\left(x^{C}\right)\gamma_{d,k}^{C}, (iv) \partial_{j}\pi_{0}\left(x^{C}\right)\neq\partial_{1}\pi_{0}\left(x^{C}\right)\gamma_{d,j}^{C}, (v) \partial_{l}\pi_{0}\left(\tilde{x}^{C}\right)\neq\partial_{1}\pi_{0}\left(\tilde{x}^{C}\right)\gamma_{d,l}^{C} for l=2,\cdots,\dim\left(X^{C}\right), and (vi)

\partial_{k}\pi_{0}\left(x^{C}\right)\partial_{j}\pi_{0}\left(\tilde{x}^{C}\right)-\partial_{k}\pi_{0}\left(\tilde{x}^{C}\right)\partial_{j}\pi_{0}\left(x^{C}\right)\neq\left[\partial_{1}\pi_{0}\left(x^{C}\right)\partial_{j}\pi_{0}\left(\tilde{x}^{C}\right)-\partial_{1}\pi_{0}\left(\tilde{x}^{C}\right)\partial_{j}\pi_{0}\left(x^{C}\right)\right]\gamma_{d,k}^{C}-\left[\partial_{1}\pi_{0}\left(x^{C}\right)\partial_{k}\pi_{0}\left(\tilde{x}^{C}\right)-\partial_{1}\pi_{0}\left(\tilde{x}^{C}\right)\partial_{k}\pi_{0}\left(x^{C}\right)\right]\gamma_{d,j}^{C}.

Assumption C.NL.(i)-(ii) require r_{d} to depend on the linear index, and (iii)-(vi) essentially require some nonlinear variation in \pi_{0} under the scale normalization. As in Assumption NL, it is difficult to construct examples other than \pi_{0}\left(x^{C}\right)=f\left(x^{C\prime}\gamma\right) that violate Assumption C.NL. Finally, a support assumption and an invertibility assumption are imposed for technical reasons.

Assumption C.S (Support). For each k\in\left\{1,2,\cdots,\dim\left(X^{D}\right)\right\}, assume for some x_{k}^{D}\neq 0 in the support of X_{k}^{D} that there exists x^{C}\left(k\right) in the support of X^{C} such that \pi\left(x^{C}\left(k\right),x^{Dk}\right) is in the support of \pi_{0}\left(X^{C}\right) and that x^{C}\left(k\right)^{\prime}\gamma_{d}^{C}+x_{k}^{D}\gamma_{d,k}^{D} is in the support of X^{C\prime}\gamma_{d}^{C} for d=0,1.

Assumption C.I (Invertibility). Assume that the function r_{d} is invertible in its first argument for d=0,1.

For the case of a binary outcome, we have r_{0}\left(t,p\right)=\int_{p}^{1}F_{\left.V_{0}\right|V}\left(\left.t\right|v\right)dv and r_{1}\left(t,p\right)=\int_{0}^{p}F_{\left.V_{1}\right|V}\left(\left.t\right|v\right)dv; hence a sufficient condition for Assumption C.I to hold is that V_{d} is continuously distributed with support \mathbb{R}, conditional on V, for d=0,1. Interestingly, the same continuous distribution condition also suffices for Assumption C.I in the censored case with l\left(t,v\right)=\max\left(t-v,0\right), where \partial_{1}r_{0}\left(t,p\right)=\int_{p}^{1}F_{\left.V_{0}\right|V}\left(\left.t\right|v\right)dv and \partial_{1}r_{1}\left(t,p\right)=\int_{0}^{p}F_{\left.V_{1}\right|V}\left(\left.t\right|v\right)dv. Under the imposed assumptions, the following identification theorem for the generalized linear model follows immediately from Escanciano et al. (2016, Theorems 3.1-3.2).

Theorem C.1.

If Assumptions C.L, C.NL, C.CI, C.S, C.N, and C.I hold, then \gamma_{d} and r_{d}\left(t,p\right) at all points in the support of X^{\prime}\gamma_{d} and P are identified for d=0,1.

Having established the identification of \gamma_{d} and r_{d}, the MTE for limited valued outcomes can be identified by (C.4) and estimated by the semiparametric least squares method of Escanciano et al. (2016). Alternatively, the LIV estimation procedure may be adapted according to

E\left[\left.Y\right|X=x\right]=s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},\pi\left(x\right)\right)

and

\Delta^{\text{MTE}}\left(x,v\right)=\frac{\partial s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},v\right)}{\partial v},
\Delta^{\text{ATE}}\left(x\right)=s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},1\right)-s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},0\right),
\Delta^{\text{TT}}\left(x\right)=\frac{s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},\pi\left(x\right)\right)-s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},0\right)}{\pi\left(x\right)},
\Delta^{\text{TUT}}\left(x\right)=\frac{s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},1\right)-s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},\pi\left(x\right)\right)}{1-\pi\left(x\right)},
\Delta^{\text{LATE}}\left(x,v_{1},v_{2}\right)=\frac{s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},v_{2}\right)-s\left(x^{\prime}\gamma_{0},x^{\prime}\gamma_{1},v_{1}\right)}{v_{2}-v_{1}},

where

s\left(t_{0},t_{1},p\right)=r_{0}\left(t_{0},p\right)+r_{1}\left(t_{1},p\right).
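As a closing numerical sanity check of (C.4) in the binary-outcome case (a self-contained sketch under a made-up Gaussian-copula design; the values of rho_d, t_d, and z0, and the bisection-based inverse CDF, are all illustrative assumptions):

    import numpy as np
    from math import erf, sqrt

    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def Phi_inv(u, lo=-10.0, hi=10.0):
        # inverse normal CDF by bisection (sufficient for this check)
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if Phi(mid) < u else (lo, mid)
        return 0.5 * (lo + hi)

    # Design: V = Phi(Z) with Z standard normal and
    # V_d = rho_d Z + sqrt(1 - rho_d^2) eps_d, so that
    # F_{V_d|V=v}(t) = Phi((t - rho_d Phi^{-1}(v)) / sqrt(1 - rho_d^2))
    rho = {0: 0.3, 1: 0.6}
    F = lambda d, t, z: Phi((t - rho[d] * z) / sqrt(1.0 - rho[d] ** 2))

    def r(d, t, p, m=4001):
        # r_1(t,p) = int_0^p F dv and r_0(t,p) = int_p^1 F dv, by trapezoid
        vs = np.linspace(1e-8, p, m) if d == 1 else np.linspace(p, 1 - 1e-8, m)
        Fd = np.array([F(d, t, Phi_inv(u)) for u in vs])
        return float((0.5 * (Fd[1:] + Fd[:-1]) * np.diff(vs)).sum())

    t0, t1, z0 = 0.2, 0.5, 0.4    # indices x'gamma_0, x'gamma_1; v = Phi(z0)
    v, eps = Phi(z0), 1e-4
    mte_c4 = ((r(1, t1, v + eps) - r(1, t1, v - eps))
              + (r(0, t0, v + eps) - r(0, t0, v - eps))) / (2.0 * eps)
    mte_direct = F(1, t1, z0) - F(0, t0, z0)
    print(round(mte_c4, 3), round(mte_direct, 3))   # the two agree (about 0.094)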