Caliper Synthetic Matching: Generalized Radius Matching with Local Synthetic Controls
Abstract
Matching promises transparent causal inferences for observational data, making it an intuitive approach for many applications. In practice, however, standard matching methods often perform poorly compared to modern approaches such as response-surface modeling and optimizing balancing weights. We propose Caliper Synthetic Matching (CSM) to address these challenges while preserving simple and transparent matches and match diagnostics. CSM extends Coarsened Exact Matching (CEM; Iacus et al., 2012) by incorporating general distance metrics, adaptive calipers, and locally constructed synthetic controls. We show that CSM can be viewed as a monotonic imbalance bounding (MIB; Iacus et al., 2011) matching method, so that it inherits the usual bounds on imbalance and bias enjoyed by MIB methods. We further provide a bound on a measure of joint covariate imbalance. Using a simulation study, we illustrate how CSM can even outperform modern matching methods in certain settings, and finally illustrate its use in an empirical example. Overall, we find CSM allows for many of the benefits of matching while avoiding some of the costs.
1 Introduction
The “Jovem de Futuro” (Young of the Future) program aimed to improve education quality by providing strategies and, for schools that met certain targets, monetary support to schools in Rio de Janeiro and Sao Paulo, Brazil. Education reform efforts like this are common, and when they occur, various stakeholders are typically anxious to understand whether such a large commitment of resources actually improved circumstances on the ground.
In the case of Jovem de Futuro, researchers ensured the ability to answer this question by taking a step beyond what is typically seen in practice: they recruited schools and randomized them into treatment and control, thus enabling direct and effective estimation of impacts by comparing two groups that were otherwise similar, save for treatment (Barros et al., 2012). But getting agencies to implement a randomized controlled trial is often a hard sell: RCTs require support beyond that needed to install a program in the first place. When, as is common in practice, there is no randomization to ensure a collection of comparable schools, researchers will instead turn to observational study methods to, in effect, construct a comparison group that can serve as a counterfactual for the schools that took treatment.
Matching provides a simple approach for such an endeavor. In a basic setting, matching methods pair each treated unit with a similar control unit, producing a matched control sample that mirrors the treated sample in terms of observable covariates. Under standard assumptions, the samples may then be analyzed as if treatment were randomly assigned. The simplicity of this approach has led matching to be a popular method for observational causal inference (Imbens, 2004; Imbens and Rubin, 2015).
For conceptual clarity, the gold standard of matching methods is exact matching. For each treated unit, exact matching finds a control unit with the same observed covariates. We then end up with a control group that looks, in terms of the covariates, exactly like the treatment group, except for receipt of treatment. In other words, exact matching perfectly balances the joint covariate distribution between the treated and matched control units, eliminating any potential bias due to associations between observable covariates and potential outcomes (Imai et al., 2008; Rosenbaum and Rubin, 1985a). Exact matching also produces transparent matched datasets, since the difference in outcomes between a treated unit and its exactly matched control is an unbiased (albeit noisy) estimate of the treatment effect for that unit. This leads to the familiar statistical idea of averaging noisy observations to estimate a target estimand, e.g., the average treatment effect among all (or some subset) of the treated units.
In practice, however, exact matching is usually impossible. Researchers have therefore developed a variety of methods for conducting principled causal inference without exact joint covariate balance. For example, many matching approaches aim to balance the marginal distributions. Researchers construct a matched dataset, check the marginal means of the matched sets, and then iteratively tweak and refine their model if the means are too different. Direct balancing approaches (e.g., Hainmueller, 2012; Zubizarreta, 2015; Ben-Michael et al., 2021a) improve on this ad-hoc procedure by directly targeting approximate balance on specified features of the marginal, or sometimes even joint, distribution. Alternatively, one can use dimension reduction (e.g., Aikens et al., 2020) to make it easier to match “locally.” All of these forms of matching can be viewed as a preprocessing step (Ho et al., 2007) done before applying an outcome model, e.g., weighted linear regression, to the matched dataset; the final model can further adjust for remaining covariate imbalances. Ideally the matching reduces the impact of model selection on the final estimates.
Matching and outcome modeling can also be done concurrently. Semiparametric modeling approaches, such as doubly robust estimation (Robins et al., 1994; Rotnitzky et al., 1998), double machine learning (Chernozhukov et al., 2018), or targeted maximum likelihood estimation (Van Der Laan and Rubin, 2006), use model-assisted averages to target the estimand of interest, leading to provably efficient and unbiased estimates. One can also simply directly model the outcome (e.g., Hill, 2011) to similar effect.
The methods discussed above generally yield effective causal estimates by aiming for overarching objectives across all units, such as achieving marginal covariate balance or ensuring good model fit. Even if they attain marginal covariate balance, however, they can fail to achieve joint balance, and this can lead to a poor result, as highlighted in Iacus et al. (2011). Furthermore, such methods may not be particularly transparent, or can heavily depend on model specification, as noted in Iacus et al. (2012). Exact matching, by contrast, targets joint balance, reduces model dependency and, ideally, increases transparency. In this paper, we build on a body of literature that focuses on this original spirit of exact matching. We provide four opinionated maxims for methods in this literature:
1. Distances should be intuitive
2. Matches should be local
3. Match each unit as best you can in a way that you can monitor
4. Estimates should be transparent
Given our maxims, we then directly construct Caliper Synthetic Matching (CSM) to effectively and clearly address each of these ideas.
The CSM method is a member of the Monotonic Imbalance Bounding (MIB; Iacus et al., 2011) class, a category of methods ensuring joint balance. When introducing MIB, the authors noted that while distance matching with a single scalar caliper does not constitute an MIB method, employing a distinct caliper for each covariate does. This paper explores this idea further by formalizing the scaled distance approach and examining its properties. The scaled distance allows for specific control over each covariate, mitigating issues such as the sensitivity to outliers inherent in similar methods such as Mahalanobis distance matching.
We also connect CSM to Coarsened Exact Matching (CEM; Iacus et al., 2012). CEM, a method within the MIB class, coarsens continuous covariates (e.g., dividing an age covariate into ranges such as 0–25, 25–50, 50–75, 75+) before exactly matching on these coarsened covariates. We show how CSM keeps the MIB properties of CEM, but gives stricter bounds on worst-case bias. Within this framework we also provide a novel theoretical result on the control of joint balance, along with an alternative derivation of bias control to that in the original literature.
Another innovation of this paper is that, after matching, we apply a variant of the synthetic control method (SCM; Abadie et al., 2010) to each treated unit in turn, refining the weights given to the control units matched to that treated unit. The SCM method’s goal is to create a single synthetic control unit for each treated unit that mirrors the treated unit’s characteristics and outcomes. The synthetic control step further reduces the covariate imbalance achieved by the matching step, as measured by a scaled distance metric. We show, via a Taylor expansion in the space defined by the chosen distance metric, how this removes a linear bias term in the estimated impact, in a manner akin to linear interpolation. Our use of SCM diverges from standard SCM by allowing for a scaled distance metric, not necessarily using historical outcomes, and employing a predetermined scaling matrix based on covariate-wise calipers. We argue synthetic controls maintain the transparency of matching by enabling direct evaluation of counterfactual plausibility.
Our matching approach allows for a simple form of inference that accounts for the reuse of controls for different treated units. Uncertainty estimation in matching methods is difficult, as underscored by Abadie and Imbens (2006, 2008, 2011), who illustrated the limitations of the bootstrap and developed an asymptotically valid approach for the $k$-nearest-neighbor estimator. We follow Ben-Michael et al. (2022) (also see Keele et al. (2023) and Lu et al. (2023)), using the resultant weights of the final matched control units along with a plug-in variance formula. In particular, we use the residuals within matched clusters to estimate residual uncertainty, which then provides a variance estimator that takes into account the overall effective sample size imposed by the final control weights.
The rest of the paper proceeds as follows. In Section 2, we provide background and motivate CSM with a toy example. In Section 3, we construct CSM in a stepwise manner to address the four maxims proposed above. We then discuss the properties of CSM in Section 4, and provide our theoretical results on bias control. In Section 5, we conduct a variety of simulation studies to illustrate how CSM performs compared to other observational causal inference methods. Finally, in Section 6 we work through the applied example of the Jovem de Futuro program, analyzing it as a within study comparison to demonstrate how CSM may be used in practice, and to assess its performance on real data.
2 Background
2.1 Setup
Suppose we have $n$ independent and identically distributed observations, with $n_T$ treated units and $n_C$ control units. For each unit $i$, let $Z_i \in \{0, 1\}$ denote its binary treatment status, $Y_i \in \mathbb{R}$ denote its observed real-valued outcome, and $X_i \in \mathbb{R}^p$ denote its $p$-dimensional real-valued covariate vector. We use the potential outcomes framework and denote the observed outcome for unit $i$ as $Y_i = Z_i Y_i(1) + (1 - Z_i) Y_i(0)$, for potential outcomes $Y_i(1)$ and $Y_i(0)$ under the stable unit treatment value assumption (Imbens and Rubin, 2015). We make the standard conditional ignorability assumption:
$$\big( Y_i(0), Y_i(1) \big) \perp\!\!\!\perp Z_i \mid X_i,$$
so that conditioning on the observed covariates is sufficient to identify the causal effect of $Z$. Under a population sampling framework, we write $f_0(x) = \mathbb{E}[\,Y(0) \mid X = x\,]$ and $f_1(x) = \mathbb{E}[\,Y(1) \mid X = x\,]$, where $f_0$ and $f_1$ are the true conditional expectation functions of the potential outcomes under control and treatment, respectively. We write the set of all treated units’ indices as $\mathcal{T}$, the set of all control units’ indices as $\mathcal{C}$, and $t \in \mathcal{T}$, $c \in \mathcal{C}$ as individual treated and control units, respectively. We denote the set of the indices of the control units matched to a treated unit $t$ as $\mathcal{M}_t \subseteq \mathcal{C}$. Finally, we denote the size of a set $\mathcal{S}$ as $|\mathcal{S}|$.
In this paper, we will focus on estimating the sample average treatment effect on the treated (SATT):
$$\tau = \frac{1}{n_T} \sum_{t \in \mathcal{T}} \big( Y_t(1) - Y_t(0) \big),$$
where $n_T = |\mathcal{T}|$ denotes the number of treated units.
Under a population sampling framework, the SATT approaches the overall population average treatment effect on the treated (PATT) as the number of treated units increases. While matching methods can easily be extended to estimate sample and population average treatment effects (SATEs and PATEs), we focus on the SATT to clarify key ideas and simplify exposition.
2.2 Motivation: the spirit of exact matching
To motivate the importance of locality and joint balance, we provide a pair of toy examples. Figure 1 plots the two covariates of several control units and treated units for two scenarios. The colors and contours visualize the control potential outcome surface $f_0(x)$, which takes on greater values within the innermost contour. To conduct causal inference, we use the observed outcomes of the control units to impute the unobserved counterfactual outcomes of the treated units.


For Example 1 (Figure 1(a)), accurate causal inference is not possible due to a lack of overlap: we cannot accurately impute outcomes for the treated units because we do not observe any nearby evaluations of $f_0$. Many causal inference methods, however, could fail to detect this problem. For example, a matching or balancing approach that works with marginal distributions could exactly balance both covariates by assigning weights of 1 to all four control units. Similarly, an outcome model may fit a flat response surface to the observed control outcomes. Because the usual marginal balance checks and outcome models would appear good, the analyst would be left unaware that these analyses significantly underestimate counterfactual outcomes, i.e., overestimate the SATT. Approximate exact matching approaches would also fail here, as there are no control units close to the treated units. The failure to find local matches, however, would alert the analyst to the presence of a problem. It is therefore important to have a metric for how good a set of controls each treated unit can obtain; in our method, we use the nearest-neighbor distance as this quality metric.
Example 2 (Figure 1(b)) highlights the role of transparent local matches. Because $f_0$ is smooth, using only the outcomes of local control units (i.e., the controls within the dotted circles) leads to accurate counterfactual estimates. Local matches also produce transparent estimates; unlike with a black-box outcome model, it is immediately clear how each control unit contributes to each treated unit’s counterfactual estimate. Finally, local matches encourage joint balance. In Example 2, if we assigned weights of 1 to the two distant controls and 0 to the two nearby controls, we would have perfect marginal balance, but poor local match quality and a poor counterfactual. It is worth sacrificing some marginal balance to upweight the closer control units, since doing so greatly improves joint balance and the resulting treatment-effect estimates.
Having illustrated the importance of using distance matching to achieve local matches, we now turn to the selection of the distance metric. The Mahalanobis distance has been commonly used in the literature, but it has drawbacks, including sensitivity to outliers and the curse of dimensionality. Outliers can distort the covariance matrix upon which Mahalanobis distance relies, leading to poor match quality. Moreover, with an increase in the number of covariates, the effectiveness of Mahalanobis matching may diminish due to the curse of dimensionality, where units in high-dimensional spaces appear far apart, complicating the search for suitable matches.
To partially address these issues, we propose a more flexible metric, the scaled distance, which permits analysts to specify the desired tightness of matching for each covariate in turn. Regions of low overlap can then be handled via an adaptive caliper method as detailed in Section 3.3. The challenge of dimensionality is also reduced, given prior knowledge of the relative importance of the covariates, as covariates of interest can be scaled to achieve a higher level of control; see Section 3.1.
Lastly, while these toy examples demonstrate the benefits of adhering to the “spirit of exact matching” by finding local matches to enhance causal estimates and identify potential biases, it is important to acknowledge that in some, possibly even many, situations, focusing on joint balance in this manner may be unnecessary or even detrimental. For example, if the conditional expectation function is additive in its covariates, i.e., $f_0(x) = \sum_{j=1}^{p} g_j(x_j)$ for univariate functions $g_j$, marginal balance suffices (Zubizarreta, 2015). In such settings, attempting to balance the full joint covariate distribution is excessively conservative, as it protects estimates from bias that does not exist, namely, bias due to imbalances in covariate interactions.
2.3 Related work
Matching methods have a long history in observational causal inference (Stuart, 2010). Rather than attempt an exhaustive review, we briefly trace how these methods have operationalized the spirit of exact matching over time and provide further details in Section 3 as we develop the method introduced in this paper.
Early work in matching incorporated locality via nearest-neighbor (Rubin, 1973), caliper (Cochran and Rubin, 1973; Rosenbaum and Rubin, 1985b), and radius matching (Dehejia and Wahba, 2002) approaches. These methods were typically combined with dimension reduction, e.g., via propensity scores (Rosenbaum and Rubin, 1983), to circumvent the challenge of near-exact matching with multiple covariates, though some approaches, e.g., Mahalanobis distance matching (Rubin and Thomas, 2000) and genetic matching (Diamond and Sekhon, 2013), directly operated on a scaled version of the original covariate space. Also see Aikens et al. (2020), who use set-aside data to fit a prognostic score, and then match on that score. Regardless, to evaluate and improve the quality of a matched dataset, researchers would typically conduct iterative balance checks, revising their matching scheme if it led to poor marginal mean balance.
To circumvent the need for these iterative balance checks, Iacus et al. (2012) introduced Coarsened Exact Matching (CEM), which we discuss in further detail in Section 3.2. CEM fixes a user-specified level of joint balance, operationalized by a given degree of coarsening, and returns the resulting sample. By making local matches a primary rather than a secondary criterion, CEM enjoys the desirable transparency and joint balance properties of exact matching.
More recently, researchers have developed matching methods that flexibly learn notions of locality and similarity rather than using user-specified defaults. Genetic matching (Diamond and Sekhon, 2013) learns to more precisely match covariates that appear to be important for overall covariate balance, while “almost-exact” matching methods for discrete (Dieng et al., 2019; Wang et al., 2021) and continuous (Morucci et al., 2020; Parikh et al., 2022) data aim to more precisely (or exactly) match covariates that are more predictive of the potential outcomes. Matching After Learning to Stretch (Parikh et al., 2022) minimizes predictive error over a hold-out training set to identify a similarity metric, and synthetic controls (Abadie et al., 2010) minimize the resulting predictive error with respect to a set of held-out pre-treatment outcome variables to obtain optimal variable importance weights in a time-series setting.
Work outside of matching has also noted the importance of locality in observational causal inference. For example, Abadie and L’Hour (2021) augment the popular synthetic controls methodology with a penalty for using control units far from the treated unit. In a similar setting, Ben-Michael et al. (2021b) tune the extent to which synthetic controls may extrapolate away from the control units. More generally, Kellogg et al. (2021) explicitly trade off the bias from extrapolating beyond local matches with the bias from linearly interpolating between distant units. While these approaches do not attempt to directly emulate exact matching, they highlight the use of local data for principled causal inference.
In another direction, one can directly optimize weights with respect to an overall balance criterion, rather than engage in the iterative cycle of matching with a subsequent balance check on the result. For three examples of many, see stable balance weights (SBW; Zubizarreta, 2015), entropy balance weights (Hainmueller, 2012), or covariate balancing propensity scores (Imai and Ratkovic, 2014).
3 Caliper Synthetic Matching
Stuart (2010) decomposes matching analyses into two phases. In the design phase, researchers select a distance measure, use it to run a matching method, and diagnose the quality of the resulting matches. In the subsequent analysis phase, researchers use the matched units to estimate treatment effects.
In this section, we propose a matching method that satisfies our set of aesthetic maxims delineated above. We construct our method in a modular fashion; at each stage, we increase the complexity of the method to improve its use of the data. While we believe that the final proposed method simultaneously maximizes transparency and performance, in practice researchers may make different choices at each stage to limit complexity as necessary.
3.1 Principle 1: Distances should be intuitive
In the absence of exact matches, matching algorithms find control units as “close” as possible to their treated counterparts to improve joint covariate balance and reduce potential bias (Rosenbaum and Rubin, 1985a). One popular distance measure for “closeness” is propensity score distance:
$$d_{\text{ps}}(X_i, X_j) = \big| e(X_i) - e(X_j) \big|, \qquad (1)$$
where the propensity score $e(x) = \Pr(Z = 1 \mid X = x)$ represents the probability that a unit is treated, given its covariates (Rosenbaum and Rubin, 1983).
While matching units with similar propensity scores leads to principled causal estimates, it does not construct intuitive matched sets (King and Nielsen, 2019). Propensity score distance is not formally a distance metric on $\mathbb{R}^p$ (for example, $d_{\text{ps}}(X_i, X_j) = 0$ does not imply that $X_i = X_j$; for a simple introduction to distance metrics on $\mathbb{R}^p$, see Appendix B.1), so it can violate our natural understanding of “closeness.” For example, two units that are “close” in terms of propensity score distance, which simply means they have similar chances of receiving treatment, may have very different covariate values. If a researcher matches two such units, it can be unclear whether they should trust this match, fit a better propensity-score model, or find a closer propensity-score match.
Formal distance metrics provide a more natural approach for assessing similarity between units. Researchers are generally familiar with the covariates in their data, so directly attaching a distance metric to the space of covariates builds upon existing intuitions. Multidimensional distance metrics typically take the form of a scaled Euclidean (i.e., $L_2$) distance metric:
$$d_2(X_i, X_j; V) = \sqrt{(X_i - X_j)^\top V (X_i - X_j)}, \qquad (2)$$
or a scaled $L_\infty$ distance metric:
$$d_\infty(X_i, X_j; V) = \big\| V^{1/2} (X_i - X_j) \big\|_\infty, \qquad (3)$$
for a given symmetric positive definite matrix $V$. The matrix $V$ scales the raw differences in the covariates and their two-way interactions. For example, if $V_{11}$ were large, then the resulting distance metric would magnify, i.e., upweight, differences in the first covariate. We consider both scaled $L_2$ and scaled $L_\infty$ distance metrics in this paper.
A variety of scaled distance metrics have been proposed in the literature. One popular scaled $L_2$ distance metric is Mahalanobis distance, which uses $V = \widehat{\Sigma}^{-1}$, the inverse of the covariance matrix estimated from the control group (Rubin, 1980). Other approaches, such as genetic matching (Diamond and Sekhon, 2013), restrict the scale matrix $V$ to be diagonal and directly optimize it.
In this paper, we set $V$ to a diagonal matrix determined by a covariate-wise caliper specified by the researcher, as formalized in Proposition B.3. A covariate-wise caliper $c_j > 0$ represents the maximum allowable difference in the $j$-th covariate for matching purposes. When we set $V = \mathrm{diag}(1/c_1^2, \ldots, 1/c_p^2)$, a scaled $L_\infty$ distance of at most 1 ensures that the $j$-th covariate difference between the two units is at most $c_j$. This is equivalent to scaling each covariate by $1/c_j$ and then matching using the unscaled $L_\infty$ distance with threshold 1. Generally, we advocate setting the $c_j$ so that any pair of units with distance bounded by 1 is considered to be a “reasonably similar” comparison.
Using $d_\infty$ with a covariate-wise caliper is strongly connected to Coarsened Exact Matching (CEM; Iacus et al., 2012), as the distance metric provides researchers with the desired degree of covariate-wise control. In CEM, continuous variables are initially coarsened into discrete bins; for example, an age variable may be segmented into bins of 0–25, 25–50, 50–75, and 75–100 years. Observations are then exactly matched on these coarsened covariates. By instead setting a covariate-wise caliper of 12.5, we allow a treated unit to be matched with a control unit if their age difference is no more than 12.5 years. This approach generalizes the concept of coarsening: given the above bins, coarsening would not match a 26-year-old with a 24-year-old, but our method could. Further details are discussed in Section 4.5.
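As a concrete illustration, the caliper-scaled distances above reduce to a few lines of code (a minimal sketch under our notation; the function name and example calipers are ours, not from the paper):

```python
import numpy as np

def scaled_distances(x_t, X_c, calipers, norm="inf"):
    """Scaled distances from one treated unit to each control unit, where each
    covariate difference is divided by its caliper c_j (i.e., V = diag(1/c_j^2)).
    Under the "inf" norm, a distance <= 1 means every covariate differs by at
    most its caliper."""
    diffs = (X_c - x_t) / calipers              # covariate-wise scaled differences
    if norm == "inf":
        return np.abs(diffs).max(axis=1)        # scaled L_infinity distance
    return np.sqrt((diffs ** 2).sum(axis=1))    # scaled L_2 distance

# Age caliper of 12.5 years, income caliper of 5 (in some working units):
calipers = np.array([12.5, 5.0])
x_t = np.array([26.0, 50.0])                    # a treated unit
X_c = np.array([[24.0, 52.0],                   # close on both covariates
                [26.0, 58.0]])                  # too far on the second covariate
d = scaled_distances(x_t, X_c, calipers)
print(d)  # first control is within the unit caliper (d <= 1); second is not
```

Note that the 24-year-old control matches the 26-year-old treated unit here, even though the CEM bins above would have separated them.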
3.2 Principle 2: Matches should be local
Section 2.2 argued for the importance of local matching. Here we discuss ways of achieving local matching, mainly by revisiting the literature.
Given a chosen measure of “closeness,” there are many ways to select the closest matched units. One popular approach is nearest-neighbors matching, where each treated unit is matched with the control unit(s) closest to it. This can be done either greedily for each treated unit (Rubin, 1973) or optimally over all treated units (Rosenbaum, 1989). Nearest-neighbors matching can be conducted after some preprocessing steps, such as learning an optimal distance metric (Diamond and Sekhon, 2013; Parikh et al., 2022).
While nearest-neighbors approaches are intuitive, they can quietly fail to achieve covariate balance: although each treated unit is guaranteed to be matched with its closest control, the closest control may still be quite far off. Large distances between treated units and their matched controls, i.e., low match quality, can lead to poor joint covariate balance.
To combat this problem, many methods apply calipers coupled with nearest-neighbor matching. Calipers are a distance beyond which matches are forbidden. A caliper $c > 0$ modifies the distance metric as:
$$d_c(X_i, X_j) = \begin{cases} d(X_i, X_j) & \text{if } d(X_i, X_j) \le c, \\ +\infty & \text{otherwise.} \end{cases}$$
Using calipers, nearest-neighbor approaches can avoid problems associated with poor match quality, using the closest matches only if they are “close enough.” The simplest form of nearest-neighbor matching does not, however, take full advantage of data-rich areas of the sample: if there are many close controls, using all of them rather than only the strictly closest could improve precision.
Calipers are frequently applied to the propensity score, rather than to covariate distance. Unfortunately, units that are local with respect to assignment probability may not actually be local with respect to their covariates; see King and Nielsen (2019).
The distance metric and the caliper are closely connected; in particular, if we scale our distance metric, we can simply scale the caliper by the same amount. A caliper of $c = 1$ on the scaled distance metric, under the infinity norm, means no covariate differs by more than one “width” as defined by the covariate-wise calipers $c_j$. Thus, without loss of generality, we generally assume an initial caliper of $c = 1$ hereafter, unless otherwise noted.
In this paper, we directly use all control units within a given caliper of each treated unit, which maximizes the number of local matches used to produce causal estimates. We allow matching with replacement, meaning a control unit close to multiple treated units will match to all of them. This matching approach is known as radius matching with replacement (Dehejia and Wahba, 2002), though previous proposals mostly applied radius matching to propensity score distances. We relegate discussion of caliper selection to Appendix A.1.
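Radius matching with replacement under the scaled $L_\infty$ distance can be sketched as follows (our own minimal illustration; the function name is ours):

```python
import numpy as np

def radius_match(X_t, X_c, calipers, caliper=1.0):
    """Radius matching with replacement: for each treated unit, return the
    indices of ALL control units within `caliper` under the scaled L_infinity
    distance, rather than only the single nearest neighbor.  Control units
    may appear in multiple matched sets (matching with replacement)."""
    matched_sets = []
    for x in X_t:
        d = np.abs((X_c - x) / calipers).max(axis=1)  # scaled L_inf distances
        matched_sets.append(np.flatnonzero(d <= caliper))
    return matched_sets

X_t = np.array([[0.0, 0.0]])
X_c = np.array([[0.5, 0.2], [-0.9, 0.9], [2.0, 0.0]])
sets = radius_match(X_t, X_c, calipers=np.array([1.0, 1.0]))
print(sets[0])  # controls 0 and 1 fall within the unit caliper; control 2 does not
```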
3.3 Principle 3: Match each unit as best you can in a way that you can monitor
Matching with a fixed caliper could lead to a substantial fraction of the treated units being dropped, which can significantly change the target estimand. In particular, dropping difficult-to-match treated units, while potentially improving the quality of the resulting estimate, changes the estimand from the SATT to the feasible sample average treatment effect (FSATT):
$$\tau_{\text{FSATT}} = \frac{1}{|\mathcal{T}^*|} \sum_{t \in \mathcal{T}^*} \big( Y_t(1) - Y_t(0) \big),$$
where $\mathcal{T}^* \subseteq \mathcal{T}$ denotes the set of indices of treated units with at least one control unit within $c$ units, i.e., $\mathcal{T}^* = \{\, t \in \mathcal{T} : \min_{c' \in \mathcal{C}} d(X_t, X_{c'}) \le c \,\}$. The FSATT can differ from the SATT if the excluded units have systematically different treatment impacts than the kept units.
Instead of shifting the estimand based on a selected caliper, we propose assigning each treated unit $t$ an adaptive caliper $c_t$, with
$$c_t = \max\big( c, \; \alpha \cdot d_{\text{NN}}(t) \big), \qquad (4)$$
where $c$ is our global minimum caliper, which we default to 1 given our scaled distance metric; $d_{\text{NN}}(t) = \min_{c' \in \mathcal{C}} d(X_t, X_{c'})$ is the distance between unit $t$ and its nearest control-unit neighbor (Dehejia and Wahba, 2002); and $\alpha \ge 1$ is an inflation factor that allows for catching all control units that are similarly close to the closest unit.
The adaptive caliper guarantees that all treated units are matched to at least one control. The floor value $c$ allows for taking advantage of data-rich environments: for treated units with many controls, we do not shrink to an overly small caliper but instead keep all controls measured as “reasonably similar” as a comparison group.
In data-rich contexts, the adaptive caliper may also be selected so that the resulting matched sets work well with synthetic controls (introduced below); e.g., we can set $c_t$ to be the smallest caliper such that treated unit $t$ has at least $p + 1$ within-caliper controls, where $p$ is the dimension of the covariate space.
In principle, with adaptive calipers our matched dataset allows for direct estimation of the SATT, as all treated units are preserved. In practice, some treated units may have very poor matches, which would be indicated by large $c_t$ values. Because we have a direct measure of match quality via $c_t$, we can monitor the impact of these poor matches on our overall estimand, and possibly deliberately trim based on our diagnostics.
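The adaptive caliper of Equation (4) is straightforward to compute (a sketch under our notation, with a hypothetical `alpha` default; the paper's recommended value may differ):

```python
import numpy as np

def adaptive_calipers(X_t, X_c, calipers, floor=1.0, alpha=1.1):
    """Adaptive caliper c_t = max(floor, alpha * d_NN(t)), where d_NN(t) is the
    scaled L_infinity distance from treated unit t to its nearest control.
    Every treated unit keeps at least one match; large c_t values flag poorly
    matched units for diagnostics or deliberate trimming."""
    c_t = np.empty(len(X_t))
    for i, x in enumerate(X_t):
        d = np.abs((X_c - x) / calipers).max(axis=1)  # scaled L_inf distances
        c_t[i] = max(floor, alpha * d.min())
    return c_t

X_t = np.array([[0.0], [5.0]])            # the second treated unit is isolated
X_c = np.array([[0.3], [0.8], [1.2]])
c_t = adaptive_calipers(X_t, X_c, calipers=np.array([1.0]))
print(c_t)  # roughly [1.0, 4.18]: the isolated unit gets an inflated caliper
```

Sorting or plotting the resulting `c_t` values gives exactly the match-quality diagnostic described above.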
An important step in any matching procedure is to assess the resulting matches. The classic approach is to conduct balance checks by comparing marginal covariate distributions between the treated and control groups (usually via standardized mean differences). Such marginal balance checks may reveal significant departures from joint balance, but cannot confirm when joint balance is approximately achieved, as demonstrated in Toy Examples 1 and 2. Checking low-dimensional summaries of joint balance can also fail to assess overlap or to identify subsets of the treated units for which it may be easier or more difficult to estimate treatment effects.
Using a distance-metric caliper, on the other hand, directly ensures good covariate balance if the caliper is fixed (e.g., see Proposition 4.2). We can extend this idea by using the unit-specific caliper values $c_t$ across all treated units to assess the estimate-estimand tradeoff, creating balance-sample-size frontier plots (King et al., 2017) to show how dropping poorly matched treated units (i.e., those with large $c_t$) affects both potential bias and the SATT estimate for the remaining sample. Also see Aikens et al. (2020) and Aikens and Baiocchi (2023) for other approaches to making diagnostic plots. We can also summarize characteristics of units with different $c_t$ values to better understand regions with poorer overlap.
3.4 Principle 4: Estimates should be transparent
Given matched units from the design phase of a matching analysis, the final step is to produce an estimate. With high-quality matches, the SATT may, in principle, be estimated with a simple average:
$$\hat{\tau} = \frac{1}{n_T} \sum_{t \in \mathcal{T}} \bigg( Y_t - \frac{1}{|\mathcal{M}_t|} \sum_{c \in \mathcal{M}_t} Y_c \bigg).$$
This estimator is quite clear, but when matching is imperfect, it can perform much worse than other methods that do further adjustment.
In this paper, instead of using the average weight for each matched control, we calculate weights with the synthetic control method (SCM; Abadie et al.,, 2010) within the matched sets. For each treated unit , we find convex weights, i.e., weights that sum to 1, for its matched control units that minimize covariate imbalance as measured by a scaled distance metric or :
$$w^{(t)} = \arg\min_{w} \; d\Big( X_t, \sum_{c \in \mathcal{M}_t} w_c X_c \Big) \quad \text{s.t.} \quad \sum_{c \in \mathcal{M}_t} w_c = 1, \qquad w_c \ge 0 \; \text{ for all } c \in \mathcal{M}_t$$

(with $\mathcal{M}_t$ the matched set of treated unit $t$),
where we have written the weight for control unit $c$ associated with treated unit $t$ as $w_c^{(t)}$. The "synthetic control" unit for treated unit $t$ gets covariates $\sum_c w_c^{(t)} X_c$ and outcome $\sum_c w_c^{(t)} Y_c$, and the ATT estimate is taken as the simple difference in means between the outcomes of the treated units and their synthetic controls.
See Appendix B.6 for further detail.
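To make the within-set optimization concrete, here is a minimal numeric sketch (in Python rather than our implementation; the exponentiated-gradient solver, caliper values, and example data are illustrative stand-ins for a standard quadratic-programming solver):

```python
import numpy as np

def scm_weights(x_t, X_c, caliper, steps=20000, lr=0.05):
    """Convex synthetic-control weights for one treated unit: minimize the
    scaled L2 distance between x_t and the weighted average of its matched
    controls X_c (J x p), subject to w >= 0 and sum(w) = 1. A minimal
    exponentiated-gradient sketch; any QP solver would do the same job."""
    Xs = X_c / caliper                       # scale covariates by their calipers
    xs = x_t / caliper
    w = np.full(len(X_c), 1.0 / len(X_c))    # start from uniform weights
    for _ in range(steps):
        grad = 2.0 * Xs @ (Xs.T @ w - xs)    # gradient of the squared distance
        w = w * np.exp(-lr * grad)           # multiplicative update...
        w /= w.sum()                         # ...stays on the simplex
    return w

# Treated unit at (0.5, 0.5) with three matched controls (hypothetical data):
X_c = np.array([[0.4, 0.5], [0.6, 0.4], [0.5, 0.7]])
w = scm_weights(np.array([0.5, 0.5]), X_c, caliper=np.array([0.2, 0.2]))
x_synth = w @ X_c   # synthetic control's covariates, approximately (0.5, 0.5)
```

Here exact balance is attainable, so the weights reproduce the treated unit's covariates; with less favorable donor sets the optimization simply gets as close as the convex hull allows.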
Our approach deviates from the standard SCM setup in a few ways. First, while Abadie et al. (2010) introduce SCM as a quadratic programming problem using a scaled $L_2$ distance, we also allow SCM to be implemented as a linear programming problem using a scaled $L_1$ distance. Second, while synthetic controls are typically used in time-series settings with past outcomes as additional covariates, we apply them directly in our setting without a focus on past outcomes. Finally, the SCM introduced in Abadie et al. (2010) includes an outer optimization to learn an "optimal" scaling matrix, whereas we simply use the scaling implied by the given covariate-wise caliper. This follows other approaches to the SCM; see, e.g., Ben-Michael et al. (2021b).
Using synthetic controls in the analysis phase provides two primary benefits. First, synthetic controls are highly transparent. Because synthetic controls explicitly produce a counterfactual for each treated unit, researchers can directly check whether each counterfactual seems reasonable.
Second, as we prove, synthetic controls naturally reduce bias within calipers as they are mathematically equivalent to linear interpolation. While linear interpolation over long distances can lead to bias, interpolating over short distances, such as within a caliper, typically improves results due to the local linearity of smooth outcome functions.
3.5 The full method
To recap, we propose matching by first selecting a distance metric that can be used to assess similarity between units based on their covariate profiles. This metric should be interpretable and justifiable to those using it.
Once a metric is in place, identify sets of local controls for each treated unit, possibly adaptively adjusting the caliper (radius) defining locality unit by unit. Then build synthetic comparison units via a variant of the synthetic control procedure for each treated unit to obtain further reductions in bias; the final estimate is simply the average of the pairwise differences between treated units and their synthetic controls. Finally, run diagnostics, examining, for example, how the estimated impact shifts as hard-to-match units are trimmed.
We denote the use of synthetic controls within adaptive calipers as Caliper Synthetic Matching (CSM). We generally use a scaled distance for its simplicity and its connections to exact matching and to CEM. Adapting the caliper enables clear diagnostic plots of the estimate-estimand tradeoff, and the synthetic controls produce interpretable local bias corrections.
4 Theoretical Properties
4.1 Monotonic Imbalance Bounding
Iacus et al., (2011) introduces the Monotonic Imbalance Bounding (MIB) class of matching methods. MIB matching methods directly control covariate balance between the treated and matched control groups, independently for each covariate. As a result, MIB matching methods enjoy desirable properties such as bounded covariate imbalance and bounded estimation error, under reasonable assumptions.
Although matching using Mahalanobis distance alone is not MIB, matching using a distance-metric caliper is MIB as long as the caliper for each covariate may be tuned without affecting the caliper for any other covariate (Iacus et al., 2011). Any such covariate-wise caliper can be enforced by a scalar caliper on an appropriately scaled distance metric: to ensure $|x_{t,1} - x_{c,1}| \le c_1$ and $|x_{t,2} - x_{c,2}| \le c_2$, for example, we may divide each coordinate difference by its caliper and require the resulting scaled distance to be at most 1. By the triangle inequality, the bound on the scaled distance translates to bounds on the individual covariates. This idea is formalized by Proposition 4.1, which shows that matching using a distance-metric caliper is a member of the MIB class.
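As a concrete illustration of such a scaled metric (the max-type form and the numbers below are our own illustrative choices, not the paper's canonical metric):

```python
import numpy as np

def scaled_distance(x, y, calipers):
    """Caliper-scaled distance: divide each coordinate difference by that
    covariate's caliper, then take the largest. Then d(x, y) <= 1 holds
    exactly when every covariate-wise caliper is satisfied. Other norms of
    the scaled differences yield caliper-compatible metrics as well."""
    return float(np.max(np.abs(np.asarray(x) - np.asarray(y)) / np.asarray(calipers)))

# Two units: 2 apart on a covariate with caliper 5 (scaled: 0.4), and 0.05
# apart on a covariate with caliper 0.1 (scaled: 0.5). The latter binds.
d = scaled_distance([10.0, 0.30], [12.0, 0.25], calipers=[5.0, 0.1])
```

Because the metric is already expressed in caliper units, "within caliper" is simply `d <= 1`, regardless of how many covariates there are.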
Denote the weighted means of covariate $j$ for the treated and matched control units after matching as $\bar{X}_j^T$ and $\bar{X}_j^C$, respectively, where the weights are obtained from matching. More detailed definitions can be found in Appendix B.3. We then have:
Proposition 4.1.
Given a covariate-wise caliper $(c_1, \ldots, c_p)$ and the implied scaling matrix, we have:

(a) $|X_{t,j} - X_{c,j}| \le c_j$ for every matched treated-control pair $(t, c)$ and all covariates $j$;

(b) $|\bar{X}_j^T - \bar{X}_j^C| \le c_j$ for all covariates $j$.
Proof.
See Appendix B.3. ∎
Proposition 4.1 not only shows that changing the caliper for one variable does not affect the imbalance bound for the other variables, but also that covariate balance is controlled directly by the caliper values. Again, note that the choice to bound the scaled distance by 1 is arbitrary: bounding it by some other constant is equivalent to rescaling the covariate-wise calipers by that constant.
With adaptive calipers, CSM is MIB for the feasible treated units (those with matched controls within the initial caliper), but the bound does not apply to the remaining treated units. That said, the adaptive calipers still provide transparency about the extent to which CSM leaves the MIB class, in terms of both the number of such units and how far they deviate, as recorded by their caliper sizes. In the subsequent subsections, we focus on cases where all treated units have controls within a set caliper and thus are feasible, in order to highlight several properties of MIB matching methods.
4.2 Bounded joint covariate imbalance
Having shown bounded marginal balance, we now show how distance-metric caliper matching methods control joint covariate imbalance. Write the empirical joint covariate distributions of the treated and matched control units as weighted sums of point masses located at the units' covariate values, where $\delta_z(x)$ denotes the point mass (Dirac delta) at $z$, i.e., $\delta_z(x) = 1$ if $x = z$ and $0$ otherwise.
To demonstrate how distance-metric calipers control the difference between these two empirical distributions, we use the Wasserstein distance. Formally, the 1-Wasserstein distance between probability distributions $P$ and $Q$ with respect to a distance metric $d$ is

$$W_d(P, Q) = \inf_{\gamma \in \Gamma(P, Q)} \, \mathbb{E}_{(x, y) \sim \gamma}\big[ d(x, y) \big],$$

where the infimum is over all couplings $\gamma$, i.e., joint distributions whose marginals are $P$ and $Q$. More intuitively, the Wasserstein distance is also known as the "earth-mover's distance": viewing $P$ and $Q$ as piles of soil each with total mass 1, $W_d(P, Q)$ measures the minimum "cost" of moving soil to make $P$ distribute as $Q$ (or vice versa), where the cost of moving a unit of soil from $x$ to $y$ is $d(x, y)$. The Wasserstein distance therefore uses a distance metric between points to measure the distance between full probability distributions.
With this notation in hand, Proposition 4.2 shows that radius matching bounds the Wasserstein distance between the empirical covariate distributions of the treated and matched control units.
Proposition 4.2.
For a caliper $c > 0$ and a matching method whose matches all satisfy $d(X_t, X_c) \le c$, the 1-Wasserstein distance (with respect to $d$) between the empirical covariate distributions of the treated and matched control units is at most $c$.
Proof.
See Appendix B.4. ∎
We make a few remarks about Proposition 4.2. First, while the notation is technical, the intuition is straightforward. Distance-metric calipers control how far each treated unit's covariates can be from its matched controls' covariates. Since the two empirical distributions are weighted sums of the point masses associated with these covariates, the calipers must also control the distance between the distributions. For an exact caliper of $c = 0$, Proposition 4.2 simply states that exact matching guarantees that the matched-control covariate distribution coincides with the treated distribution. In practice, however, reducing the caliper size toward 0 drops all of the treated units, leaving the proposition vacuously true; the caliper must be chosen with the estimate-estimand tradeoff in mind. To the best of our knowledge, this is the first theoretical bound on joint balance for a matching method. Iacus et al. (2011) did show that CEM improves joint balance, but only through simulation.
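For intuition, here is a one-dimensional numeric illustration of the bound with uniform weights, equal group sizes, and hypothetical data (in one dimension with equal masses, the optimal transport simply pairs sorted values):

```python
import numpy as np

def w1_1d(a, b):
    """1-Wasserstein distance between two equal-size one-dimensional point
    clouds with uniform weights: the optimal coupling pairs sorted values."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

treated  = np.array([0.20, 0.50, 0.80])
controls = np.array([0.26, 0.44, 0.87])   # each within caliper c = 0.1 of a treated unit
imbalance = w1_1d(treated, controls)      # joint imbalance, bounded by c
```

Since no unit of mass needs to travel farther than the caliper, the resulting earth-mover's cost cannot exceed it, which is the heart of the proposition.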
The Wasserstein distance depends on the chosen distance metric: the bound in the proposition is stated with respect to $d$, so if $d$ changes, the Wasserstein distance changes with it. However, the core idea of the proof remains intact: the bounds on the covariates due to matching ensure joint balance between the covariates of the matched controls and treated units, thereby providing local control and helping to avoid the adverse situations discussed in Section 2.2.
Control over joint covariate imbalance naturally implies control over marginal imbalances. For example, Proposition 4.3 shows how distance-metric calipers also bound the distance between the ($p$-dimensional) empirical weighted covariate means of the treated and matched control units.
Proposition 4.3.
For a caliper $c > 0$ and a matching method whose matches all satisfy $d(X_t, X_c) \le c$, the distance between the weighted covariate mean vectors of the treated and matched control units is also at most $c$.
Proof.
See Appendix B.5. ∎
The above proposition is in effect a joint version of Proposition 4.1(b), which bounds the covariates one at a time.
Lastly, we note that other measures of the distance between the two empirical distributions could be chosen. For instance, Iacus et al. (2011) use an $L_1$ imbalance measure, demonstrating through an empirical example that Coarsened Exact Matching reduces joint imbalance between the control and treatment groups. We use the Wasserstein distance because its properties facilitate proof construction: it inherently utilizes a distance metric between points.
In summary, distance-metric calipers enable precise control of joint covariate imbalance. While these bounds are not necessarily small in practice, they guarantee that observed imbalance cannot be too great, even in the worst case. This leads to a variety of desirable properties which we illustrate in the following sections.
4.3 Bounded bias
Because MIB matching methods bound the distance between the covariates of matched units, they naturally bound the distance between smooth functions of those covariates as well. Write the expected control potential outcome for a unit with covariates $x$ as $f(x)$. Then assuming that $f$ is smooth (i.e., Lipschitz) immediately bounds the bias of any FSATT estimate produced by a method using a distance-metric caliper. (See Appendix B.2 for technical details about Lipschitz functions.)
Proposition 4.4.
Suppose $f$ is $\lambda$-Lipschitz with respect to the scaled distance metric $d$. Then for a matching procedure such that $d(X_t, X_c) \le c$ for every treated unit $t$ and each of its matched controls $c$, the bias of the resulting FSATT estimate is bounded by $\lambda c$.
Proof.
Immediate from the Lipschitz condition: each matched control lies within distance $c$ of its treated unit, so each weighted average of control expected outcomes lies within $\lambda c$ of the treated unit's expected counterfactual outcome, and averaging over treated units preserves the bound. ∎
Proposition 4.4 states that for distance-metric caliper matching methods, worst-case bias is proportional to the caliper size $c$. (Proposition 4.4 applies to a slightly different set of methods than Proposition 1 of Iacus et al. (2011), which proves a similar bias bound for MIB matching methods; see Appendix B.3.) As in Proposition 4.2, setting $c = 0$ shows that exact matching leads to unbiased estimates, but in practice we must use $c > 0$ to navigate the estimate-estimand tradeoff without dropping all of the treated units.
Of course, we rarely know the Lipschitz constant $\lambda$ in practice, so Proposition 4.4 does not yield empirical bias bounds. Nonetheless, it illustrates how distance-metric calipers control bias: if each treated unit is close to all of its matched controls in covariate space, then its expected counterfactual outcome must be close to their expected outcomes for any reasonable (i.e., smooth) outcome function. As a result, any weighted average of the control units' outcomes cannot differ too much in expectation from the treated unit's true counterfactual outcome, regardless of the specific form of the weights.
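This logic is easy to check numerically; the outcome function, caliper, and weights below are hypothetical illustrations:

```python
import numpy as np

# A 2-Lipschitz outcome function (|f'| <= 2) and a caliper of c = 0.1:
# any convex weighting of within-caliper controls must land within
# lambda * c = 0.2 of the treated unit's true (expected) counterfactual.
f = lambda x: 2.0 * np.sin(x)
lam, c = 2.0, 0.1

x_t = 1.0                            # treated unit's covariate value
x_c = np.array([0.95, 1.02, 1.08])   # matched controls, all within c of x_t
w   = np.array([0.2, 0.5, 0.3])      # arbitrary convex weights

bias = abs(float(np.dot(w, f(x_c)) - f(x_t)))   # well under lambda * c here
```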
In practice, we can ideally get estimates that beat this bound by assuming local linearity and adjusting further for residual imbalance. We discuss this in the next section.
4.4 Bias reduction from synthetic controls
While using distance-metric calipers bounds bias, using synthetic controls within these calipers actively reduces bias. Specifically, synthetic controls naturally remove linear bias by conducting local linear interpolation. This means that if $f$ is linear, synthetic controls completely eliminate bias whenever the weights achieve perfect balance on the covariates. For nonlinear $f$, bias is reduced to higher-order nonlinear terms, which are generally less influential within tight calipers for smooth functions. Proposition 4.5 shows more precisely how exact synthetic controls eliminate linear bias.
Proposition 4.5.
Suppose $f$ is differentiable and $\lambda$-Lipschitz with respect to the scaled distance metric $d$. Then for a matching procedure such that $d(X_t, X_c) \le c$ for every treated unit $t$ and each of its matched controls $c$, if the synthetic control weights achieve perfect local balance for each treated unit (i.e., each synthetic control's covariates exactly equal its treated unit's covariates), the bias of the resulting estimate is $o(c)$ as $c \to 0$.
Proof.
See Appendix B.8 for a proof based on a Taylor expansion with respect to the given distance metric. The resulting expansion differs from the usual multivariate Taylor expansion, since it uses directional derivatives to better utilize the distance-metric calipers. ∎
In Proposition 4.5, the $o(c)$ term collects higher-order terms that go to zero more quickly than the caliper itself as $c$ shrinks toward zero. Compare this to the bias bound proportional to $c$ without the synthetic control step in Proposition 4.4: we have lost the leading linear term, leaving something asymptotically smaller. In practice, shrinking $c$ to zero drops all treated units, but the notation highlights how implementing synthetic controls within calipers takes advantage of local linearity. In particular, while linear interpolation across large distances can lead to significant interpolation bias (Kellogg et al., 2021), restricting the donor-pool units to lie within a caliper distance of the treated unit reduces the impact of nonlinearities.
4.5 Comparing CSM to CEM
As discussed in Section 3.2, radius matching methods have many similarities with coarsened exact matching (CEM; Iacus et al., 2012), which we celebrate by matching two of the three letters. In particular, radius matching and CEM both possess the imbalance-bounding and bias-bounding guarantees discussed in the previous sections.
To clarify the benefits of radius matching, we consider an example with two covariates, as in Figure 2, with the coarsening of each variable made equal sized. Figure 2 visualizes the CEM grid defined by the coarsening of the two covariates, along with an equivalently sized caliper around a treated unit.

Because the treated unit lies near a boundary defined by the covariate coarsening, CEM fails to match it to a nearby control, while the caliper succeeds. Conversely, CEM matches the treated unit to a control in the same grid cell even though the two lie more than one caliper-width apart. Figure 2 shows how, for a fixed caliper size, CEM only guarantees that treated units lie within two units of their matched controls, whereas the caliper guarantees the distance is no more than one unit. By centering calipers on each treated unit, radius matching matches each treated unit to all nearby control units while guaranteeing imbalance and bias bounds that are twice as tight as those guaranteed by CEM. In other words, when each method is "equally sized" in terms of how large a treated unit's catchment area for possible controls is, radius matching has tighter control of bias.
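This boundary behavior can be reproduced on a single covariate (the bin width, caliper radius, and unit positions below are illustrative stand-ins):

```python
import numpy as np

bins = np.arange(0.0, 1.2, 0.2)             # CEM coarsening with bin width 0.2
cell = lambda x: int(np.digitize(x, bins))  # CEM stratum index of a value

x_t, near, far = 0.39, 0.41, 0.21           # a treated unit and two controls

cem_matches_near = cell(near) == cell(x_t)  # False: the 0.4 boundary splits them
cem_matches_far  = cell(far)  == cell(x_t)  # True: same bin, yet 0.18 apart
cal_matches_near = abs(near - x_t) <= 0.1   # True: radius-0.1 caliper accepts it
cal_matches_far  = abs(far  - x_t) <= 0.1   # False: caliper rejects the far one
```

The caliper's verdicts track actual distances, while CEM's depend on where the unit happens to sit relative to the grid.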
As a result, we can present the following proposition, which formalizes the improved bias bound of CSM relative to CEM.
Proposition 4.6.
Suppose $f$ is $\lambda$-Lipschitz with respect to the scaled distance metric $d$. Then CSM with a given covariate-wise caliper and CEM with the covariate-wise coarsening of equal catchment size for each treated unit admit worst-case bias bounds of $\lambda c$ and $2\lambda c$, respectively.
Naturally, there are tradeoffs for these improved bias bounds. Computationally, radius matching requires computing distances between each treated unit and the control units, an operation of order $O(n_T n_C)$, unlike CEM, which only requires a frequency tabulation of order $O(n)$. Non-uniform calipers are also slightly easier to implement via covariate coarsenings, though for many common covariates one can directly transform the covariate and use a uniform caliper to replicate a non-uniform one. (E.g., rather than non-uniformly coarsening income as { $0-20k, $20k-50k, $50k-100k, $100k+ }, it may be more reasonable to log-transform the income covariate and use a uniform caliper, to avoid, e.g., concluding that an individual earning $20k is as far from an individual earning $20.1k as from an individual earning $50k.) Overall, however, radius matching preserves the transparency and interpretability of CEM while significantly improving on its useful bias- and imbalance-bounding properties.
We also note that CSM offers the flexibility to identify units that are challenging to match, a feature CEM lacks. In CEM, after specifying the desired coarsening, an analyst can only determine whether a treated unit found a match by checking whether a control falls in the same stratum. With the adaptive caliper, by contrast, we can evaluate treated units by the size of their adaptive calipers, which indicates how difficult they were to match. On the other hand, CEM allows for a clean inferential strategy by viewing the bins as strata in a post-stratified experiment; with CSM, inference is less clear-cut, as we discuss below.
4.6 Bias-variance tradeoff
Reducing caliper sizes reduces the potential bias of the resulting SATT estimate, as shown by Proposition 4.4, but may increase the estimate's variance by reducing the number of matched controls and, without adaptive calipers, the number of treated units. To formalize this idea, we introduce the (conditional) mean-squared error of the SATT estimator $\hat{\tau}$:

$$\mathrm{CMSE} = \mathbb{E}\big[ (\hat{\tau} - \tau)^2 \mid \mathbf{X}, \mathbf{Z} \big],$$

where the expectation is conditioned on the observed covariates $\mathbf{X}$ and treatment assignments $\mathbf{Z}$. Then, using a working model for the control potential outcomes of $Y_i(0) = f(X_i) + \epsilon_i$, where $\epsilon_i$ is a 0-centered noise term, we can use standard algebraic manipulation to show that

$$\mathrm{CMSE} = \mathrm{bias}^2 + \mathrm{var}, \tag{5}$$

where the bias term aggregates the gaps between each treated unit's expected counterfactual outcome and that of its synthetic control,

$$\mathrm{bias} = \frac{1}{n_T} \sum_{t} \Big( \sum_{c} w_c^{(t)} f(X_c) - f(X_t) \Big). \tag{6}$$

We then have, using our working model,

$$\mathrm{var} = \frac{1}{n_T^2} \Big( \sum_{t} \sigma_t^2 + \sum_{c} w_c^2 \, \sigma_c^2 \Big), \tag{7}$$

with $\sigma_i^2$ representing the population sampling variance of unit $i$'s outcome conditional on its covariates, and $w_c = \sum_t w_c^{(t)}$ the total weight given to control unit $c$ (Kallus, 2020).
If we assume homoskedasticity of the residuals within treatment arms (i.e., $\sigma_i^2 = \sigma^2$ in the control group) and the conditions of Proposition 4.4 with caliper $c$, we can bound the CMSE as

$$\mathrm{CMSE} \le (\lambda c)^2 + \sigma^2 \Big( \frac{1}{ESS_T} + \frac{1}{ESS_C} \Big), \tag{8}$$

where the effective sample size (ESS) of a set of units with weights $w_i$ is

$$ESS = \frac{\big( \sum_i w_i \big)^2}{\sum_i w_i^2}, \tag{9}$$

with $w_t = 1$ for each treated unit, so that $ESS_T = n_T$, and $w_c = \sum_t w_c^{(t)}$ for each control unit.
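The effective sample size in Equation 9 is simple to compute; a minimal sketch:

```python
import numpy as np

def ess(w):
    """Effective sample size of a weighted set: (sum w)^2 / sum (w^2).
    Equals n for uniform weights and shrinks as weight concentrates."""
    w = np.asarray(w, dtype=float)
    return float(w.sum() ** 2 / (w ** 2).sum())

n_uniform = ess([1, 1, 1, 1])   # uniform weights: ESS equals n = 4
n_reused  = ess([2, 1, 1])      # a doubly used control drags ESS below n = 3
```

Here `n_reused` is 16/6, roughly 2.67, illustrating how re-using a control when matching with replacement reduces the control group's effective size.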
Equation 8 clarifies the relationship between caliper size and the bias-variance tradeoff. For a fixed estimand, where we do not drop or add treated units as the caliper changes, increasing the caliper naturally exposes the resulting estimate to more bias. Increasing the caliper also generally reduces variance by dispersing weight across more control units, increasing the effective sample size of the control group.
That said, in contexts where $n_C \gg n_T$, the overall variance of the average impact estimate can be dominated by the variance associated with the treated units (of order $\sigma^2 / n_T$), which remains unchanged even as more control units are added (which only shrinks the control-side term). To be more explicit, consider a single treated unit with $J$ matched controls, all given uniform weights. Here the variance associated with the treated unit is $\sigma^2$, while the variance associated with the controls is $\sigma^2 / J$ (assuming homoskedasticity). Even if $J$ were large, we would still have the initial $\sigma^2$ term. Once $J$ reaches 4 or 5, increasing the caliper size has diminishing returns on variance reduction, suggesting that it may typically be better, in terms of CMSE, to use smaller calipers.
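The arithmetic behind these diminishing returns is simple: with homoskedastic noise of variance $\sigma^2$ and $J$ uniformly weighted controls, the variance of a single treated-minus-matched-average contrast is $\sigma^2(1 + 1/J)$:

```python
# Variance of Y_t - mean(Y_c over J matched controls), in units of sigma^2.
# The treated unit's own sigma^2 never shrinks, so gains flatten quickly.
contrast_var = lambda J: 1.0 + 1.0 / J

gains = {J: contrast_var(J) for J in (1, 2, 4, 5, 20)}
# {1: 2.0, 2: 1.5, 4: 1.25, 5: 1.2, 20: 1.05}
```

Going from 1 to 4 controls cuts the variance by 37%; going from 5 to 20 saves only a further 7% or so.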
On the other hand, when matching with replacement, a single control may be assigned significant weight for multiple treated units. In these cases, the aggregate weight on some control units can exceed 1, driving the control group's effective sample size down. In other words, if the same controls are reused heavily enough, the variance associated with the control units may not be dominated by the variance associated with the treated units, even if the initial control pool is large.
4.7 Variance Estimation
Estimating standard errors for our matching method requires special attention. Abadie and Imbens, through a series of papers (2006, 2008, 2011), argued that the bootstrap is inadequate for matching estimators and developed an asymptotically valid approach for the $M$-nearest-neighbor estimator as the numbers of treated and control units grow, with $M$ remaining fixed. Otsu and Rai (2017) introduced a weighted bootstrap methodology that relies on overlap of treated and control covariates.
We instead draw from the survey sampling literature (e.g., Potthoff et al., 1992) and plug effective sample sizes and variance estimates into Equation 8. This approach has been successfully applied to various weighting estimators in causal inference (Ben-Michael et al., 2022; Lu et al., 2023; Keele et al., 2023).
Our approach differs from the Abadie and Imbens literature in several respects. First, we utilize synthetic control weights instead of averaged $M$-nearest-neighbor weights. Second, we condition on the treated units as defined by their covariates, assume a stochastic model for the units' outcomes, and make a working assumption of homoskedasticity. Third, we operate in a scenario where the number of treated units ($n_T$) is low while the number of control units ($n_C$) is high. As Ferman (2021) noted, the asymptotics in this setting differ from those Abadie and Imbens described, in which $n_T$ increases to infinity.
Our variance estimator is the following plug-in estimator:

$$\widehat{SE}^2 = \hat{\sigma}^2 \Big( \frac{1}{ESS_T} + \frac{1}{ESS_C} \Big), \tag{10}$$

where $\hat{\sigma}^2$ is an estimate of the homoskedastic residual variance (assumed the same for the treatment and control groups), and the effective sample sizes follow Equation 9. In particular, $\hat{\sigma}^2$ is a pooled variance estimator, pooling across the matched clusters:

$$\hat{\sigma}^2 = \frac{\sum_t (n_t - 1)\, \hat{\sigma}_t^2}{\sum_t (n_t - 1)},$$

where each cluster's residual variance is

$$\hat{\sigma}_t^2 = \frac{1}{n_t - 1} \sum_{c \in \mathcal{M}_t} \big( Y_c - \bar{Y}_t \big)^2,$$

with $\bar{Y}_t$ the simple average of the $n_t$ matched control outcomes in cluster $t$. We drop all clusters with $n_t = 1$, including from the calculation of $\hat{\sigma}^2$. Although we could use the weights from the synthetic control step to prioritize "good" controls, we opt for uniform weights over all matched units, as the key element is control-unit similarity, which is not necessarily optimized by the synthetic step.
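A minimal sketch of this pooled estimator, under our reading of the scheme (uniform within-cluster weights, singleton clusters dropped; the data are hypothetical):

```python
import numpy as np

def pooled_sigma2(clusters):
    """Pooled residual-variance estimate across matched clusters: weight each
    cluster's sample variance by its degrees of freedom and drop clusters
    containing a single control outcome."""
    num, den = 0.0, 0
    for y in clusters:                     # y: matched-control outcomes for one treated unit
        n = len(y)
        if n < 2:
            continue                       # singleton clusters are dropped
        num += (n - 1) * float(np.var(y, ddof=1))
        den += n - 1
    return num / den

# Three clusters; the singleton [2.0] contributes nothing.
s2 = pooled_sigma2([[1.0, 1.2, 0.8], [2.0], [0.5, 0.9]])
```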
Our estimates of $\hat{\sigma}^2$ are inflated by differences in expected outcomes driven by variation in the covariates within each cluster. In other words, if the control units are widely dispersed and the expected outcome changes rapidly with the covariates, then that variation will be absorbed by the residual terms, resulting in overly large standard errors. This phenomenon was observed in our simulations, as discussed in Section 5.4. This inflation is not necessarily bad, however, as we would expect it to grow roughly in proportion to the size of the bias term, which is driven by these same differences (see the bias term in Equation 6). One can thus view these SEs as predicting overall error (see, e.g., the discussion of standard errors as predictions of overall error in Sundberg (2003) and of expanding standard errors to include bias in Weidmann and Miratrix (2021b)).
Our inferential target is the root mean squared error (RMSE) of our estimate, conditioned on the treatment group's covariates. Other targets are possible, as the literature illustrates: Kallus (2020) focuses on the conditional standard error, essentially the variance term; Abadie and Imbens, along with Otsu and Rai (2017), target unconditional coverage; and Ferman (2021) targets the unconditional type-I error rate. In particular, as Imbens and Rubin (2015) note, conditional and unconditional standard errors differ.
5 Simulation Studies
To understand how CSM performs, we consider a range of simulation studies. First, we examine a simple simulation based on the toy examples in Section 2.2, where local matches tend to be important. We then consider a few canonical simulated datasets taken from the literature to assess general performance. Finally, we assess our inferential method.
5.1 Methods
We compare CSM to a variety of popular matching, balancing, outcome-regression, propensity-score, and doubly robust methods, as described in Table 1. (We initially also included CEM using synthetic controls within each cell and caliper matching using simple averages within each caliper, but, for clarity, we exclude these results since their performance typically lies between that of CSM and CEM.)
Method class | Method name | Description
Baseline | diff | Difference-in-means estimator
Matching | match-1NN | One nearest neighbor matching
Matching | match-CEM | Coarsened exact matching
Outcome model | or-lm | Linear model on all two-way interactions
Outcome model | or-BART | Bayesian additive regression tree (BART)
Propensity score | ps-lm | Linear (logistic) propensity score model
Propensity score | ps-BART | BART with binary outcome (probit link)
Balancing | bal-SBW1 | Stable balancing weights (SBW)
Balancing | bal-SBW2 | Stable balancing weights (SBW)
Doubly robust | dr-AIPW1 | Augmented inverse propensity weighting (AIPW)
Doubly robust | dr-AIPW2 | Augmented inverse propensity weighting (AIPW)
Doubly robust | dr-TMLE1 | Targeted maximum likelihood estimation (TMLE)
Doubly robust | dr-TMLE2 | Targeted maximum likelihood estimation (TMLE)
We use default settings for all of the algorithms to standardize comparisons. We implement BART, TMLE, and AIPW in R using defaults from the dbarts (Dorie,, 2023), tmle (Gruber and Van Der Laan,, 2012), and AIPW (Yongqi Zhong et al.,, 2021) packages, respectively. For SuperLearner, the linear models include a simple mean, linear regression, and generalized linear regression models; the machine-learning models include the linear models as well as generalized additive models, random forests, BART, and XGBoost (Chen and Guestrin,, 2016).
To standardize comparisons for the matching methods, for each dataset we use the distance metric implied by the covariate-wise caliper generated by coarsening each numeric covariate into five equally spaced bins; i.e., the diagonal scaling matrix has entries equal to one fifth of each covariate's range. We do not tune the covariate-wise caliper, assuming no domain knowledge about variable importance. We conduct CEM using the same uniform coarsening of each covariate into five bins.
5.2 Toy example simulation
To build intuition for situations in which CSM may perform well, we first show results from a simulation based on the toy examples in Section 2.2. We simulate covariates for 50 treated units from a mixture of two multivariate normal distributions, and for 225 control units from a mixture of two multivariate normal distributions centered at (0.25, 0.75) and (0.75, 0.25), all with common covariance matrices. We then add 100 control units distributed uniformly on the unit square to ensure some overlap. We finally generate each unit's outcome as a multivariate normal density bump in the two covariates, plus a constant treatment effect for treated units and homoskedastic noise. Figure 3 plots a sample simulated dataset.

Figure 4 shows the root mean-squared error and absolute bias of the point ATT estimates for the various methods.

We see that CSM performs well, even compared to complex machine-learning methods.
We emphasize that the goal of this simulation is not to demonstrate CSM's performance per se, but rather to illustrate settings in which we may expect good performance. In the toy example, the interaction between the two covariates drives both the control potential outcome function and the implicit propensity score. Because the covariates interact, joint covariate balance is very important. As a result, methods that do not target joint balance, or that otherwise lack access to the interaction between the two covariates, perform poorly due to high levels of bias.
5.3 Canonical simulations
The datasets from Kang and Schafer, (2007), Hainmueller, (2012), and the 2016 American Causal Inference Conference competition (Dorie et al.,, 2017) have canonically been used to compare methods for observational causal inference. Table 2 contains basic information about the settings we used for each dataset. Please see the original papers for further details.
Dataset | Covariates | $n_T$ | $n_C$ | Heterogeneous effects?
Kang & Schafer (2007) | 4 | | | No
Hainmueller (2012) | 6 | 50 | 250 | No
ACIC 2016 | 10 | | | Yes
Figure 5 summarizes the results of our simulations using these datasets and illustrates the tradeoffs made by CSM.

We see that CSM tends to outperform the simple matching, balancing, and outcome or propensity-score modeling approaches, but that it is outperformed by the complex machine-learning approaches known to do well on these types of datasets (Dorie et al.,, 2017). CSM is designed to maximize performance while preserving the simple intuitions underlying exact matching. Under such constraints, it is remarkable that CSM performs nearly as well as black-box machine-learning approaches on the Kang and Schafer, (2007) and Hainmueller, (2012) datasets.
The trends in Figure 5 highlight important concepts regarding locality in observational causal inference. As the number of covariates increases, the matching methods tend to deteriorate in performance relative to the complex modeling methods. Caliper-based matching methods only use local information. This behavior protects them against incorrect extrapolation, but also prevents them from correctly extrapolating in cases where overlap is unnecessary.
In many simulated (and real) datasets, extrapolating marginal effects can improve counterfactual predictions. For example, suppose increasing one covariate clearly increases outcomes among controls with low values of a second covariate, and we would like to predict counterfactual outcomes for treated units with high values of that second covariate. A linear model fit to the controls may extrapolate, assuming that increasing the first covariate always increases outcomes regardless of the second, while an exact matching estimator would not. Increasing the number of covariates exaggerates this behavior, since models extrapolate more marginal trends. Figure 5 shows that this type of model-based marginal extrapolation improves results for these simulated datasets, since the considered DGPs contain few high-dimensional interactions to falsify such extrapolation.
Matching aims to produce transparent, model-free causal inferences. As shown by Figure 5, this transparency generally comes with a cost in terms of performance. The simulations show that while avoiding model-based extrapolation controls bias in the worst case (e.g., Proposition 4.4), it can be too conservative in settings where such extrapolation is helpful. Whether these costs are outweighed by the ability to clearly explain how counterfactual predictions are made depends on the particular causal inference setting. Nonetheless, CSM performs competitively in these canonical simulations, making it an attractive option in settings where explainability is paramount.
5.4 Assessing the Quality of the Variance Estimator

We next assess the variance estimator described in Section 4.7. We revisit the example from Section 5.2 but vary the degree of overlap. Specifically, we increase the overlap by increasing the proportion of control units distributed uniformly on the unit square and reducing the proportion of controls centered at (0.25, 0.75) and (0.75, 0.25). For the range of overlap, see the three illustrative datasets in Figure 6. These scenarios correspond to the same distribution of treated units, but different pools of potential controls to select matches from. For each degree of overlap, we generate 500 trials.
Our primary estimation target is the SATT.
To capture tuning for bias reduction, we use an adaptive strategy for the caliper size: shrinking the caliper to select no more than five units in high-density regions, and expanding it to include at least one unit in low-density regions. With larger calipers, the synthetic control optimization tends to select only a few units of the matched set when the covariate set is low-dimensional, and the selected units tend to be more distant from the treated unit than others within the neighborhood defined by the caliper; in future work we could follow Ben-Michael et al. (2021b) and regularize the weights to take more advantage of rich areas of control units.
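A sketch of this adaptive rule for a single treated unit (the base caliper of 0.1 and the cap of five are the parameters described above; the distances are hypothetical):

```python
import numpy as np

def adaptive_matches(d, base=0.1, k_max=5):
    """Matched-control indices for one treated unit, given its distances d to
    all controls: keep controls inside the base caliper, capped at the k_max
    nearest in dense regions; if none fall inside, expand to the single
    nearest control. Returns the kept indices and the realized caliper."""
    order = np.argsort(d)
    within = order[d[order] <= base]
    keep = within[:k_max] if len(within) > 0 else order[:1]
    return keep, float(d[keep].max())

# Dense region: three of six controls fall inside the base caliper.
idx, cal = adaptive_matches(np.array([0.30, 0.05, 0.90, 0.07, 0.02, 0.40]))
# Sparse region: no control inside, so the caliper expands to the nearest.
idx2, cal2 = adaptive_matches(np.array([0.50, 0.30]))
```

The realized caliper per treated unit is exactly the diagnostic quantity used in the frontier plots: large values flag hard-to-match units.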
Overlap | avg $ESS_C$ | avg $\widehat{SE}$ | avg $SE$ | Bias | RMSE | Coverage |
Very Low | 23.9 | 0.12 | 0.11 | 0.04 | 0.12 | 0.95 |
Low | 39.2 | 0.10 | 0.10 | 0.02 | 0.10 | 0.94 |
Medium | 51.5 | 0.09 | 0.08 | 0.01 | 0.08 | 0.96 |
High | 61.3 | 0.08 | 0.08 | 0.01 | 0.08 | 0.95 |
Very High | 70.2 | 0.08 | 0.08 | 0.01 | 0.08 | 0.94 |
Table 3 presents the results. The "avg $\widehat{SE}$" column shows the estimated standard error averaged over the 500 iterations. The "avg $SE$" column shows the true conditional standard error, calculated as the standard deviation of the estimation error across the simulations in order to remove variation due to iterations having different true SATT values. Overall, we observe that our method slightly overestimates the true standard error, due to the inflation caused by absorbing outcome variation within clusters, as discussed in Section 4.7.
As expected, both the true standard error and the bias go down as overlap increases, due to the steadily increasing effective sample size and the improved ability to find close matches. In particular, when there is low overlap we re-use many control units, and only at high overlap does the effective control sample size approach the size of the treatment group. Additionally, with high overlap there are more control units near the concentration of treated units, allowing closer matches and thus less bias.
Finally, the “Coverage” column shows the coverage of the 95% confidence interval. The CSM method effectively maintains a small degree of bias and gives reasonably estimated standard errors, resulting in CI coverage close to the nominal 95% across all the scenarios.
An alternate form of inference would be some form of bootstrap. Unfortunately, we found that bootstrap methods, including the weighted bootstrap of Otsu and Rai (2017) applied to our context and the naïve casewise bootstrap, did not work effectively. We leave further development of bootstrap inference to future work.
6 Applied Example: Effect of Improving Education Quality in Schools in Brazil
We illustrate our approach via a within study comparison based on a randomized controlled trial, "Jovem de Futuro" (Young of the Future), of schools in Brazil, following a study by Barros et al., (2012). Data from this intervention, targeting education quality, were also analyzed as a within study comparison by Ferman, (2021).
In a within study comparison, we try to recover a known impact via our observational study method. Within study comparisons allow for assessment of how observational methods perform "in the wild." An early and famous example is the LaLonde paper (LaLonde, 1986); for a more extended discussion, see Cook et al. (2008). In particular, we take units enrolled in an experimental trial, but who did not receive treatment (the experimental control group), as our "treatment" group and match them to a larger pool of units not in the experiment at all; see Weidmann and Miratrix (2021a) for further discussion of this form of within study comparison.
Jovem de Futuro combined two efforts: (a) offering strategies and tools for school management boards to improve efficiency; and (b) offering grant money for improving education quality, conditional on the school's standardized test scores passing a certain threshold. The intervention lasted three years, and there were three rounds delivered: 2010-2012 (the 2010 implementation), 2011-2013 (the 2011 implementation), and 2012-2014 (the 2012 implementation). We focus on the 2010 implementation.
The 2010 implementation had 15 treated schools and 15 control schools in Rio de Janeiro, and 39 treated schools and 39 control schools in Sao Paulo. We set Z = 1 for the control schools that applied for the intervention program but were assigned to the no-intervention group, and Z = 0 for the non-participating schools. Since there is no actual intervention for either the Z = 1 or the Z = 0 schools, we expect the estimated "treatment effect" to be zero, if our methods succeed in removing any confounding selection bias. The sample sizes for Rio de Janeiro and for Sao Paulo are and .
For each school, we have four matching variables: the standardized test scores from 2007, 2008, and 2009, and an indicator of being in Sao Paulo. Table 4 shows the pre-matching covariate mean differences. Overall, the schools in the RCT scored lower than those not in the RCT.
Our outcome is the test score in 2010 (the outcome following the first year of intervention).
Variable | Control (Z = 0) | Treated (Z = 1) | Difference (treated - control)
Score 2007 | 4.7 | -2.8 | -7.5
Score 2008 | 0.83 | -0.99 | -1.8
Score 2009 | 2.3 | -2.5 | -4.8
In Sao Paulo | 78% | 72% | -6%
6.1 CSM Procedure
We start our CSM analysis by choosing our distance metric. Next, we set the covariate-wise calipers. For the standardized test scores from 2007 to 2009, we set the covariate-wise calipers to 0.2, meaning we want matched schools to differ by at most 0.2 standard deviations in test scores. For the Sao Paulo indicator, we set the covariate-wise caliper to a value near zero, ensuring that matching on this indicator is exact.
The distance-metric caliper is usually set to a default of 1, as done in Proposition 4.1, to give a direct interpretation to the covariate-wise calipers. In practice, however, we can tune the distance-metric caliper to guarantee each treated unit gets enough, but not too many, matched controls, thereby optimally trading off data use and potential bias (see Section 6.3). When we shrink the caliper below 1, we obtain tighter matches than initially planned with the covariate-wise calipers; when we expand it above 1, our matches can be looser than initially planned.
To tune the distance-metric caliper, we plot histograms of the distances of the top-1, top-2, and top-3 matches for each treated unit, and look for a natural break. We see that, for most treated units, even the third-closest match has a distance of around 0.35 or less. We can thus focus on bias reduction and set our distance-metric caliper to 0.35. See Appendix A.1 for further details.
Some of our treated units have no controls within this minimum caliper. For these units, we use an adaptive caliper, matching each such unit to its nearest neighbor. We then apply the synthetic control method to obtain weights for each set of matched controls. Finally, we use these weighted controls to construct the point estimate and estimated standard error, using the inferential method outlined in Section 4.7.
6.2 Assessing the Covariate Balance
Our chosen caliper results in 49 (91%) of the 54 treated units having at least one matched control within the caliper radius. These are our "feasible" treated units. Among the 49 feasible treated units, all matched controls are within 0.2 standardized scores for the years 2007, 2008, and 2009. Figure 7 illustrates how the differences in marginal means for the three test-score matching variables vary as we add infeasible treated units in order of increasing adaptive caliper size, until all 54 treated units are included.
By Proposition 4.3, the marginal mean difference of each covariate must be within the bound given by the product of that covariate's caliper and the distance-metric caliper. For the test-score covariates, this bound is 0.35 × 0.2 = 0.07; for the Sao Paulo indicator, it is essentially zero. The leftmost point in each plot in Figure 7 shows the difference in marginal means between the treated units and their synthetic controls for the 49 feasible treated units, with differences all well below the corresponding bounds. This result is not surprising: the bound represents a worst-case guarantee, and imbalances within each matched control set can cancel out.
An important use of Figure 7 is as a diagnostic of covariate balance. The plot assesses approximate joint balance through marginal covariate balance while including increasingly infeasible units. While joint and marginal balance are guaranteed for feasible units, they are not guaranteed once more infeasible units are added. In this dataset, the marginal balances remain good even when all infeasible treated units are included.

6.3 Assessing the Impact of Synthetic Controls
The synthetic control step after matching ideally improves the comparability of the controls matched to each treated unit, but it can also reduce the effective sample size of the control group by downweighting some units. We next assess this bias-variance trade-off, comparing SCM weights with classic radius matching (where we take the simple average of all close controls as the counterfactual for each treated unit in turn) and one-nearest-neighbor (1-NN) matching. Both alternatives can be viewed as weightings of the controls:
1. Simple average weight: give each matched control within a treated unit's caliper an equal weight. This is the default weighting method used in the matching literature after matches are made.
2. One-nearest-neighbor weight: give all weight to a treated unit's closest matched control (if a unit has multiple controls tied for nearest neighbor, we assign each an equal share). This weight corresponds to 1-to-1 matching.
The results for SCM, average, and 1-NN weights on the feasible treated units are shown in Table 5. We analyze the bias-variance trade-off through two metrics: the mean covariate difference (a proxy for bias, Equation 6) and the effective sample size (ESS, a proxy for variance, Equation 7). The mean covariate difference, shown in the first column of Table 5, is the first-order term in a Taylor expansion of the bias. It is also referred to as the extrapolation bias in Kellogg et al. (2021), representing how far the weighted control is from the treated unit in the covariate space. SCM performs better than the simple average and 1-NN since it is optimized to minimize this bias.
Under homoskedasticity, the variance of an estimator is inversely related to the ESS, meaning a larger ESS implies a smaller variance. ESS will be larger when more controls are used and when weights are more evenly distributed among those selected. Average weights achieve the highest ESS by assigning non-zero weights to every distinct control within the caliper distance of at least one of the 49 feasible units. In contrast, 1-NN uses only 49 distinct controls, as each treated unit is matched to a single nearest neighbor. SCM falls between these extremes, assigning zero weight to some control units otherwise within caliper distance, but using multiple controls where possible. Overall, SCM performs well in the bias-variance trade-off, achieving the lowest imbalance (bias) and moderate variance.
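The ESS here is the usual weights-based effective sample size (in the spirit of Potthoff et al., 1992). A minimal sketch of the computation, assuming a vector of aggregated control weights:

```python
import numpy as np

def effective_sample_size(w):
    """Kish-style effective sample size of a weighted control group:
    ESS = (sum of weights)^2 / (sum of squared weights)."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# Evenly spread weights use the data fully; concentrated weights do not.
ess_even = effective_sample_size(np.full(10, 0.1))     # 10 equal-weight controls
ess_skew = effective_sample_size([0.91] + [0.01] * 9)  # one dominant control
```

With equal weights over n controls the ESS equals n, and it shrinks toward 1 as weight concentrates on a single unit, which is why 1-NN sits at the low end and average weights at the high end.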
Method | Mean Individual Imbalance | Median Individual Imbalance | ESS | Number of Unique Controls
Feasible Treated Group | - | - | 49 | -
CSM | 0.057 | 0.000 | 95 | 155
Average | 0.118 | 0.091 | 202 | 580
1-NN | 0.158 | 0.148 | 49 | 49
6.4 FSATT and SATT Estimates
We finally present the estimated treatment effect on the feasible units and also show the influence of infeasible treated units on the ATT estimate. Before interpreting the results, recall that, given our within study design, we expect to see no treatment impact. In other words, assuming our estimation is sound, we should expect an “impact estimate” of zero.
Figure 8 presents the main results of the analysis, providing a series of confidence intervals given a steady increase in the number of units kept in the analysis. Each confidence interval is constructed using the method in Section 4.7. The leftmost confidence interval, representing the feasible set, covers zero, indicating no significant effect as desired. As we include more infeasible units, the ATT varies slightly, but is overall stable, and all confidence intervals include 0. This stability aligns with Figure 7, which shows covariate balances only vary slightly as more infeasible units are included.

The SATT estimates from CSM and the other two methods are presented in Table 6. Using the point estimates and standard errors, all methods conclude no treatment effect. The standard error follows a trend where 1-NN is the largest, Average is the smallest, and CSM is in the middle, consistent with the trend we observed in Section 6.3: CSM uses more data than 1-NN, but sacrifices some precision to ideally reduce bias.
Method | Point Est. | SE |
CSM | 0.035 | 0.039 |
Average | 0.013 | 0.038 |
1-NN | 0.008 | 0.053 |
7 Discussion
Matching methods enable researchers to simply and intuitively draw causal conclusions from observational data. In practice, however, standard matching approaches typically need to sacrifice either performance or transparency to achieve results comparable to modern optimization and machine-learning methods for causal inference. To address this challenge, this paper introduces Caliper Synthetic Matching. CSM builds on the spirit of exact matching by preserving four key principles: intuitive distances, local matches, ability to monitor matching success, and transparent estimates. We find that CSM can achieve comparable performance to modern methods on standard benchmark datasets, particularly when many covariates exhibit interaction effects.
CSM provides a framework for transparent causal inference, and each stage may be extended in various ways. Future work might reconsider the question of distance-metric and caliper selection, optimizing the distance-metric scaling matrix using held-out data, or incorporating estimated covariate densities to optimize caliper lengths (for work in this area, see Parikh et al., 2022). Dimension reduction on the covariates, possibly by using prognostic scores (as done in, e.g., Aikens et al., 2020), could also help mitigate the curse of dimensionality. Interpretable estimation methods other than standard synthetic controls may also be used, such as the modified synthetic control approaches proposed by Abadie and L'hour (2021) and Ben-Michael et al. (2021b). Ultimately, CSM represents a promising addition to the toolkit for researchers conducting causal inference using matching methods.
References
- Abadie et al., (2010) Abadie, A., Diamond, A., and Hainmueller, J. (2010). Synthetic control methods for comparative case studies: Estimating the effect of California's tobacco control program. Journal of the American Statistical Association, 105(490):493–505.
- Abadie and Imbens, (2006) Abadie, A. and Imbens, G. W. (2006). Large sample properties of matching estimators for average treatment effects. Econometrica, 74(1):235–267.
- Abadie and Imbens, (2008) Abadie, A. and Imbens, G. W. (2008). On the failure of the bootstrap for matching estimators. Econometrica, 76(6):1537–1557.
- Abadie and Imbens, (2011) Abadie, A. and Imbens, G. W. (2011). Bias-corrected matching estimators for average treatment effects. Journal of Business & Economic Statistics, 29(1):1–11.
- Abadie and L’hour, (2021) Abadie, A. and L’hour, J. (2021). A penalized synthetic control estimator for disaggregated data. Journal of the American Statistical Association, 116(536):1817–1834.
- Aikens and Baiocchi, (2023) Aikens, R. C. and Baiocchi, M. (2023). Assignment-control plots: a visual companion for causal inference study design. The American Statistician, 77(1):72–84.
- Aikens et al., (2020) Aikens, R. C., Greaves, D., and Baiocchi, M. (2020). A pilot design for observational studies: using abundant data thoughtfully. Statistics in Medicine, 39(30):4821–4840.
- Barros et al., (2012) Barros, R., Carvalho, M. d., Franco, S., and Rosalém, A. (2012). Impacto do projeto jovem de futuro. Est. Aval. Educ, pages 214–226.
- (9) Ben-Michael, E., Feller, A., Hirshberg, D. A., and Zubizarreta, J. R. (2021a). The balancing act in causal inference. arXiv preprint arXiv:2110.14831.
- (10) Ben-Michael, E., Feller, A., and Rothstein, J. (2021b). The augmented synthetic control method. Journal of the American Statistical Association, 116(536):1789–1803.
- Ben-Michael et al., (2022) Ben-Michael, E., Feller, A., and Rothstein, J. (2022). Synthetic controls with staggered adoption. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(2):351–381.
- Boyd et al., (2004) Boyd, S., Boyd, S. P., and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
- Chen and Guestrin, (2016) Chen, T. and Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 785–794, New York, NY, USA. ACM.
- Chernozhukov et al., (2018) Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1):C1–C68.
- Chipman et al., (2010) Chipman, H. A., George, E. I., and McCulloch, R. E. (2010). BART: Bayesian additive regression trees. The Annals of Applied Statistics, 4(1):266–298.
- Cochran and Rubin, (1973) Cochran, W. G. and Rubin, D. B. (1973). Controlling bias in observational studies: A review. Sankhyā: The Indian Journal of Statistics, Series A, pages 417–446.
- Cook et al., (2008) Cook, T. D., Shadish, W. R., and Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management: The Journal of the Association for Public Policy Analysis and Management, 27(4):724–750.
- Dehejia and Wahba, (2002) Dehejia, R. H. and Wahba, S. (2002). Propensity score-matching methods for nonexperimental causal studies. Review of Economics and Statistics, 84(1):151–161.
- Diamond and Sekhon, (2013) Diamond, A. and Sekhon, J. S. (2013). Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics, 95(3):932–945.
- Dieng et al., (2019) Dieng, A., Liu, Y., Roy, S., Rudin, C., and Volfovsky, A. (2019). Interpretable almost-exact matching for causal inference. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2445–2453. PMLR.
- Dorie, (2023) Dorie, V. (2023). dbarts: Discrete Bayesian additive regression trees sampler. R package version 0.9-23.
- Dorie et al., (2017) Dorie, V., Hill, J., Shalit, U., Scott, M., and Cervone, D. (2017). Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition. Statistical Science, 34.
- Ferman, (2021) Ferman, B. (2021). Matching estimators with few treated and many control observations. Journal of Econometrics, 225(2):295–307.
- Gruber and Van Der Laan, (2012) Gruber, S. and Van Der Laan, M. (2012). tmle: An R package for targeted maximum likelihood estimation. Journal of Statistical Software, 51:1–35.
- Hainmueller, (2012) Hainmueller, J. (2012). Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. Political analysis, 20(1):25–46.
- Hill, (2011) Hill, J. L. (2011). Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217–240.
- Ho et al., (2007) Ho, D. E., Imai, K., King, G., and Stuart, E. A. (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political analysis, 15(3):199–236.
- Iacus et al., (2011) Iacus, S. M., King, G., and Porro, G. (2011). Multivariate matching methods that are monotonic imbalance bounding. Journal of the American Statistical Association, 106(493):345–361.
- Iacus et al., (2012) Iacus, S. M., King, G., and Porro, G. (2012). Causal inference without balance checking: Coarsened exact matching. Political analysis, 20(1):1–24.
- Imai et al., (2008) Imai, K., King, G., and Stuart, E. A. (2008). Misunderstandings between experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society: Series A (Statistics in Society), 171(2):481–502.
- Imai and Ratkovic, (2014) Imai, K. and Ratkovic, M. (2014). Covariate balancing propensity score. Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(1):243–263.
- Imbens, (2004) Imbens, G. W. (2004). Nonparametric estimation of average treatment effects under exogeneity: A review. Review of Economics and Statistics, 86(1):4–29.
- Imbens and Rubin, (2015) Imbens, G. W. and Rubin, D. B. (2015). Causal inference in statistics, social, and biomedical sciences. Cambridge University Press.
- Kallus, (2020) Kallus, N. (2020). Generalized optimal matching methods for causal inference. J. Mach. Learn. Res., 21:62–1.
- Kang and Schafer, (2007) Kang, J. D. Y. and Schafer, J. L. (2007). Demystifying Double Robustness: A Comparison of Alternative Strategies for Estimating a Population Mean from Incomplete Data. Statistical Science, 22(4):523 – 539.
- Keele et al., (2023) Keele, L. J., Ben-Michael, E., Feller, A., Kelz, R., and Miratrix, L. (2023). Hospital quality risk standardization via approximate balancing weights. The Annals of Applied Statistics, 17(2):901–928.
- Kellogg et al., (2021) Kellogg, M., Mogstad, M., Pouliot, G. A., and Torgovitsky, A. (2021). Combining matching and synthetic control to tradeoff biases from extrapolation and interpolation. Journal of the American Statistical Association, 116(536):1804–1816.
- King et al., (2017) King, G., Lucas, C., and Nielsen, R. A. (2017). The balance-sample size frontier in matching methods for causal inference. American Journal of Political Science, 61(2):473–489.
- King and Nielsen, (2019) King, G. and Nielsen, R. (2019). Why propensity scores should not be used for matching. Political Analysis, 27(4):435–454.
- LaLonde, (1986) LaLonde, R. J. (1986). Evaluating the econometric evaluations of training programs with experimental data. The American Economic Review, pages 604–620.
- Lu et al., (2023) Lu, B., Ben-Michael, E., Feller, A., and Miratrix, L. (2023). Is it who you are or where you are? accounting for compositional differences in cross-site treatment effect variation. Journal of Educational and Behavioral Statistics, 48(4):420–453.
- Morucci et al., (2020) Morucci, M., Orlandi, V., Roy, S., Rudin, C., and Volfovsky, A. (2020). Adaptive hyper-box matching for interpretable individualized treatment effect estimation. In Conference on Uncertainty in Artificial Intelligence, pages 1089–1098. PMLR.
- Otsu and Rai, (2017) Otsu, T. and Rai, Y. (2017). Bootstrap inference of matching estimators for average treatment effects. Journal of the American Statistical Association, 112(520):1720–1732.
- Parikh et al., (2022) Parikh, H., Rudin, C., and Volfovsky, A. (2022). Malts: Matching after learning to stretch. Journal of Machine Learning Research.
- Potthoff et al., (1992) Potthoff, R. F., Woodbury, M. A., and Manton, K. G. (1992). “equivalent sample size” and “equivalent degrees of freedom” refinements for inference using survey weights under superpopulation models. Journal of the American Statistical Association, 87(418):383–396.
- Robins et al., (1994) Robins, J. M., Rotnitzky, A., and Zhao, L. P. (1994). Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427):846–866.
- Rosenbaum, (1989) Rosenbaum, P. R. (1989). Optimal matching for observational studies. Journal of the American Statistical Association, 84(408):1024–1032.
- Rosenbaum and Rubin, (1983) Rosenbaum, P. R. and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55.
- (49) Rosenbaum, P. R. and Rubin, D. B. (1985a). The bias due to incomplete matching. Biometrics, pages 103–116.
- (50) Rosenbaum, P. R. and Rubin, D. B. (1985b). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician, 39(1):33–38.
- Rotnitzky et al., (1998) Rotnitzky, A., Robins, J. M., and Scharfstein, D. O. (1998). Semiparametric regression for repeated outcomes with nonignorable nonresponse. Journal of the American Statistical Association, 93(444):1321–1339.
- Rubin, (1973) Rubin, D. B. (1973). Matching to remove bias in observational studies. Biometrics, pages 159–183.
- Rubin, (1980) Rubin, D. B. (1980). Bias reduction using mahalanobis-metric matching. Biometrics, pages 293–298.
- Rubin and Thomas, (2000) Rubin, D. B. and Thomas, N. (2000). Combining propensity score matching with additional adjustments for prognostic covariates. Journal of the American Statistical Association, 95(450):573–585.
- Stuart, (2010) Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical science: a review journal of the Institute of Mathematical Statistics, 25(1):1.
- Sundberg, (2003) Sundberg, R. (2003). Conditional statistical inference and quantification of relevance. Journal of the Royal Statistical Society Series B: Statistical Methodology, 65(1):299–315.
- Van Der Laan and Rubin, (2006) Van Der Laan, M. J. and Rubin, D. (2006). Targeted maximum likelihood learning. The international journal of biostatistics, 2(1).
- Van Der Laan Mark et al., (2007) Van Der Laan Mark, J. et al. (2007). Super learner. Statistical applications in genetics and molecular biology, 6(1):1–23.
- Wang et al., (2021) Wang, T., Morucci, M., Awan, M. U., Liu, Y., Roy, S., Rudin, C., and Volfovsky, A. (2021). Flame: A fast large-scale almost matching exactly approach to causal inference. The Journal of Machine Learning Research, 22(1):1477–1517.
- (60) Weidmann, B. and Miratrix, L. (2021a). Lurking inferential monsters? quantifying selection bias in evaluations of school programs. Journal of Policy Analysis and Management, 40(3):964–986.
- (61) Weidmann, B. and Miratrix, L. (2021b). Missing, presumed different: Quantifying the risk of attrition bias in education evaluations. Journal of the Royal Statistical Society Series A: Statistics in Society, 184(2):732–760.
- Zhong et al., (2021) Zhong, Y., Kennedy, E. H., Bodnar, L. M., and Naimi, A. I. (2021). AIPW: An R package for augmented inverse probability weighted estimation of average causal effects. American Journal of Epidemiology. In press.
- Zubizarreta, (2015) Zubizarreta, J. R. (2015). Stable weights that balance covariates for estimation with incomplete outcome data. Journal of the American Statistical Association, 110(511):910–922.
Appendix A Practical considerations
A.1 How to select a caliper
The choice of caliper size is a notorious practical problem for caliper-based matching methods. Ideally, the researcher would have covariate-wise calipers in mind, so that the distance-metric caliper can simply be set equal to 1, as in Proposition B.3. More generally, given a fixed distance metric, the distance-metric caliper should be chosen based on an a priori desired level of bias control rather than a post hoc assessment based on the observed data. For example, a researcher who wants to ensure that all matches lie within 0.5 standard deviations of each other on each covariate could set each covariate-wise caliper to 0.5 and keep the distance-metric caliper at 1.
In practice, however, researchers may want to select a distance-metric caliper that "optimally" trades off data use and potential bias. To visualize this tradeoff, we suggest making a histogram of the distances between the treated and nearby control units in the data: Start by computing the distance matrix used to identify the within-caliper control units for each treated unit. Then, plot the smallest distances for each treated unit on a histogram to see how "far" the control units tend to be from the treated units. By doing this, we avoid needing to estimate the multi-dimensional densities of the treated and control units, which is generally challenging.
In Figure 9, we create three histograms, of the closest, second-closest, and third-closest distances for each treated unit in the Brazilian school dataset discussed in Section 6, among units that are exactly matched on the categorical covariates. We see that 0.35, a value slightly above the 75th percentile of the third-closest distances, is a good cut-off, since most of the mass lies to its left. The 0.35 cut-off also captures most of the third-closest and second-closest distances, indicating we would get at least three matched controls for most of our treated units. In general, one could look for peaks around particular distance values, as expanding the caliper to include those peaks can greatly increase the effective sample size of the data.
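As a sketch of this tuning diagnostic, assuming a treated-by-control distance matrix `D` (the data below are simulated for illustration, not the Brazilian school data):

```python
import numpy as np

def topk_match_distances(D, k=3):
    """Return, for each treated unit (row of D), its k smallest
    distances to the control units, in increasing order."""
    return np.sort(D, axis=1)[:, :k]

# Choose a caliper near the 75th percentile of the third-closest
# distances, so most treated units retain at least three matches.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(50, 200))   # toy distance matrix
third_closest = topk_match_distances(D, k=3)[:, 2]
caliper = np.percentile(third_closest, 75)
share_with_3 = np.mean(third_closest <= caliper)
```

In practice one would plot the three columns of `topk_match_distances(D)` as histograms, as in Figure 9, and look for a natural break rather than mechanically taking a percentile.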

Appendix B Derivations and Technical details
B.1 Distance metrics in multiple dimensions
Formally, a distance metric d(·,·) on R^p is a non-negative function that satisfies the following for any points x, y, z:
1. Identity: d(x, y) = 0 if and only if x = y.
2. Symmetry: d(x, y) = d(y, x).
3. Triangle inequality: d(x, z) ≤ d(x, y) + d(y, z).
These three properties satisfy our intuitive understanding of measuring distance between two points: we measure a distance of zero if and only if the two points coincide with each other; the distance is the same regardless of which point we start measuring from; and the distance cannot be made shorter by passing through a third point.
Now consider a scaled norm on , which measures the length of a given vector as:
We say that the scaled distance metric in Equation 2 is induced by the scaled norm defined above, since for we can write:
The same relationship holds for the scaled distance metric in Equation 3 and the scaled norm
Induced distance metrics in R^p possess desirable properties. For any points x and y:
- Translation invariance: d(x + a, y + a) = d(x, y) for any vector a.
- Absolute homogeneity: d(αx, αy) = |α| d(x, y) for any scalar α.
Again, these properties are intuitive: moving two points by the same amount does not change the distance between them, and scaling two points scales the distance between them.
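As one concrete illustration (our own choice, not necessarily the exact form of Equation 2 or 3), a caliper-scaled maximum metric is induced by the scaled norm that divides each coordinate by its caliper, and it satisfies the properties above:

```python
import numpy as np

def scaled_max_dist(x, y, calipers):
    """Caliper-scaled maximum (Chebyshev) distance:
    d(x, y) = max_j |x_j - y_j| / caliper_j.
    d(x, y) <= 1 exactly when x and y differ by at most one
    caliper on every covariate."""
    x, y, calipers = map(np.asarray, (x, y, calipers))
    return np.max(np.abs(x - y) / calipers)
```

This form makes the connection to covariate-wise calipers explicit: a unit distance corresponds to hitting a caliper boundary on the worst covariate.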
In deriving our formal results below, we will need the following simple lemma about our distance metrics:
Lemma B.1.
For any translation-invariant, absolutely homogeneous distance metric d with induced norm ‖·‖, we have d(x, x + δu) = δ‖u‖ for any point x, scalar δ > 0, and unit vector u.
Proof.
d(x, x + δu) = d(0, δu) [translation invariance]
= δ d(0, u) [absolute homogeneity]
= δ‖u‖ [def. metric-induced norm]
∎
B.2 Lipschitz functions in multiple dimensions
Recall that if a function f : R → R is Lipschitz(λ), then for any x, y:
|f(y) − f(x)| ≤ λ |y − x|.
This implies that the function's derivative, where it exists, is bounded by λ: f cannot grow faster than rate λ as we go from x to y.
In higher dimensions, the function's argument is no longer scalar-valued; as a result, Lipschitz functions must be defined with respect to a distance metric. Formally speaking, we equip R^p with a distance metric d of the form given by Equation 2 or 3. Then f is Lipschitz(λ) with respect to d if for any x, y:
|f(y) − f(x)| ≤ λ d(x, y).
Notably, a function is only Lipschitz relative to a given distance metric, so, e.g., a function that is Lipschitz with respect to one distance metric may not be Lipschitz with respect to another.
Multivariate Lipschitz functions have bounded derivatives like their unidimensional counterparts. In particular, Lemma B.2 shows that the directional derivatives of any Lipschitz function are bounded.
Lemma B.2 (Lipschitz bound on directional derivative).
Suppose f is Lipschitz(λ) with respect to a translation-invariant, absolutely homogeneous distance metric d with induced norm ‖·‖. For any vector v ≠ 0, denote the unit vector u = v/‖v‖. Then for direction u,
|∇_u f(x)| ≤ λ,
whenever the directional derivative ∇_u f(x) is defined.
Proof.
|∇_u f(x)| = lim_{δ → 0} |f(x + δu) − f(x)| / δ [def. directional derivative]
≤ lim_{δ → 0} λ d(x, x + δu) / δ [def. Lipschitz]
= λ‖u‖ = λ [Lemma B.1]
∎
B.3 Proof that CSM is in the MIB (Monotonic Imbalance Bounding) Class
Iacus et al., (2011) define the monotonic imbalance bounding (MIB) class of matching methods as follows:
Definition B.1.
A matching method is MIB for a function f of the dataset with respect to a distance metric if there exists some monotonically increasing function γ on non-negative p-dimensional vectors π such that:
dist( f(X_MT(π)), f(X_MC(π)) ) ≤ γ(π). (11)
Here, f is a function that summarizes a dataset, and π is the (p-dimensional) vector of tuning parameters; in our paper, π is the vector of covariate-wise calipers. X_MT(π) and X_MC(π) are the datasets of matched treated and matched control units when we set our tuning parameters to π. The function γ is monotonically increasing if it is non-decreasing in every dimension of π and strictly increasing in at least one of them. This essentially says that as we reduce any dimension of π, the bound on the imbalance also decreases.
We show that CSM is MIB by choosing appropriate functions f and γ to verify Equation 11. We select f to be the weighted mean of the j-th covariate, where the weights are obtained from matching; that is, we define f(X_MT) and f(X_MC) as the weighted means of the j-th covariate for the treated and control units after matching. We further choose π to be the vector of covariate-wise calipers, so that Equation 11 becomes a bound on the absolute difference in weighted covariate means that grows with the calipers. This inequality is established by Proposition 4.1, hence CSM is MIB.
We now provide a proof of Proposition 4.1. First, we introduce a useful lemma:
Lemma B.3.
Given covariatewise caliper on units and , for scaling matrix :
-
(a)
-
(b)
Proof.
-
(a)
Without loss of generality, suppose for contradiction that . Then:
-
(b)
This follows from definitions:
∎
Proof of Proposition 4.1:
The last line is because
B.4 Proof of Proposition 4.2
We prove the bound on the Wasserstein distance by choosing a specific coupling:
Proof.
For :
We choose a coupling, generated as:
-
1.
Sample as , so for some .
-
2.
Sample as for the control units matched to treated unit , with their appropriate weights.
This coupling () clearly produces the correct marginals (), so we can write:
For , take the limit as increases of the above result. ∎
B.5 Proof of Proposition 4.3
Proposition 4.3, showing the bound on the covariate mean differences, is proved directly using the fact that our distance metrics are induced by norms (see Appendix B.1).
Proof.
[translation invariance]
∎
B.6 Derivations and technical details of the synthetic controls approach
Synthetic controls naturally control linear bias, due to the following observations.
Proposition B.4.
Synthetic control weights project the treated unit’s covariates onto the convex hull of the donor pool units’ covariates.
Proof.
Recall that the projection of the treated unit's covariates onto a set S in a given norm is the point of S minimizing the normed distance to the treated unit. Take S to be the convex hull of the donor-pool units' covariates. Every point of S is a convex weighted average of the donor units' covariates, so minimizing the normed distance over S, subject to non-negative weights that sum to one, is exactly the standard SCM optimization problem. ∎
The covariates of the donor pool generally form a -dimensional convex hull with a -dimensional covariate (assuming that there are at least non-collinear donor-pool units; otherwise, the dimension of the convex hull may be less than ), i.e., a -dimensional polygon (i.e., polytope) that contains the points corresponding to each of the donor pool units’ covariates, as well as the lines between those points. Geometrically, synthetic controls simply find the point in this convex hull that is “closest” to the treated unit’s covariates (i.e., they “project” the treated unit onto the convex hull), where “closeness” is defined using the given distance metric. Importantly, every point in a convex hull can be written as a convex weighted average of the points generating the hull, so the “closest” point corresponds to a set of non-negative donor-pool-unit weights that sum to one.
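In symbols (our notation, offered as a hedged sketch of the projection described above rather than the paper's exact display), the synthetic-control weights solve:

```latex
% Projection of the treated unit x_t onto the convex hull of the donor
% pool X_1, ..., X_n, in a generic norm (notation illustrative):
\begin{align*}
  \min_{w \in \mathbb{R}^n} \quad
    & \Big\| \textstyle\sum_{i=1}^n w_i X_i - x_t \Big\| \\
  \text{subject to} \quad
    & \textstyle\sum_{i=1}^n w_i = 1, \qquad w_i \ge 0 .
\end{align*}
% The constraints restrict the candidate point to the convex hull, so the
% minimizer is the hull point closest to x_t in the chosen norm.
```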
To impute the counterfactual outcome for the treated unit (i.e., the outcome for the constructed synthetic control unit), synthetic controls linearly interpolate the donor-pool units’ outcomes to the projected point. Slightly more formally:
Remark 1.
Suppose we use as our estimate of the counterfactual outcome for treated unit , using control units with convex weights . Then we may interpret as a linear interpolation of the control units’ outcomes to .
Proof.
Write , without noise. If we suppose that is linear, by the definition of a linear map we have:
I.e., may be interpreted as the evaluation of at . Convex weights ensure that this is interpolation, not extrapolation. ∎
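Concretely, the linearity step in the remark uses the following identity (in our own notation, since the original display is not reproduced):

```latex
% If f is affine, f(x) = a + b^\top x, and the weights are convex
% (\sum_i w_i = 1, w_i \ge 0), then:
\begin{equation*}
  \sum_{i} w_i f(X_i)
  \;=\; a \sum_i w_i + b^\top \sum_i w_i X_i
  \;=\; f\Big( \sum_i w_i X_i \Big),
\end{equation*}
% i.e., the weighted average of the control outcomes equals f evaluated at
% the synthetic-control covariate point, which lies inside the convex hull,
% so the convex weights guarantee interpolation rather than extrapolation.
```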
In summary, synthetic controls linearly interpolate the donor-pool units’ outcomes to the point on the convex hull closest to the treated unit. As a result, they are subject to linear interpolation bias (Kellogg et al.,, 2021) in this first stage, though the bias is controlled by the caliper, which limits the maximum distance across which they are allowed to interpolate. Synthetic controls then flatly extrapolate this outcome to the treated unit, i.e., impute the treated unit’s outcome as the linearly interpolated value. This second step is subject to potential extrapolation bias from the point on the convex hull to the location of the treated unit (Kellogg et al.,, 2021), proportional to , assuming the potential outcome function is Lipschitz. Both interpolation and extrapolation bias are therefore controlled by the caliper size in CSM.
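Putting the two stages together, a hedged sketch of the bias decomposition (our notation, loosely following Kellogg et al.,, 2021) is:

```latex
% With Lipschitz constant L, projection \tilde{x}_t of the treated unit's
% covariates x_t onto the donor-pool convex hull, and caliper c bounding
% all match distances (notation illustrative):
\begin{equation*}
  \underbrace{\text{interpolation bias}
    \;\lesssim\; L \cdot \max_i d(X_i, \tilde{x}_t)}_{\text{within the hull}}
  \qquad
  \underbrace{\text{extrapolation bias}
    \;\le\; L \cdot d(x_t, \tilde{x}_t)}_{\text{hull to treated unit}},
\end{equation*}
% and both distances are bounded by the caliper c in CSM.
```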
B.7 Optimization for finding SCMs
For a scaled distance, the SCM optimization can typically be solved directly as a quadratic programming problem. For a scaled distance, we instead transform the SCM optimization into the following linear programming problem, which we solve for each treated unit in turn:
minimize
subject to
In words, we are trying to find the smallest that “pinches” the covariate balance for each covariate. The are all free (under their constraints), so the goal of shrinking drives us to choose weights that make a small value achievable.
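As a concrete sketch (not the paper's implementation), a linear program of this shape can be solved with off-the-shelf software. Here we minimize a single slack `t` that bounds the absolute covariate imbalance in every dimension, subject to convex weights, using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog


def scm_weights_linf(X_controls, x_treated):
    """Sketch of an L-infinity SCM solved as a linear program:
    minimize t subject to |X_controls^T w - x_treated| <= t per covariate,
    sum(w) = 1, and w >= 0.  (Illustrative, not the paper's code.)"""
    n, k = X_controls.shape  # n control units, k covariates
    # Decision vector z = [w_1, ..., w_n, t]; objective: minimize t.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    # Per-covariate constraints:
    #   X^T w - t <= x_t   and   -X^T w - t <= -x_t
    A_ub = np.vstack([
        np.hstack([X_controls.T, -np.ones((k, 1))]),
        np.hstack([-X_controls.T, -np.ones((k, 1))]),
    ])
    b_ub = np.concatenate([x_treated, -x_treated])
    # Convex-weight constraint: sum(w) = 1.
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(0, None)]  # w >= 0, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]


# Toy donor pool: a treated unit inside the hull should be matched exactly
# (slack t near zero); one outside the hull gets its projection instead.
X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
w, t = scm_weights_linf(X, np.array([1.0, 0.5]))
```

The single slack `t` is what makes this a linear rather than quadratic program: it "pinches" the worst-case covariate imbalance, matching the verbal description above.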
B.8 Proof of bias reduction from synthetic controls (Proposition 4.5)
Proving Proposition 4.5 requires some additional mathematical exposition. Specifically, we require to be differentiable in order to use a Taylor expansion. However, to make effective use of the Lipschitz property, a standard multivariable Taylor expansion does not suffice; instead, we require a Taylor expansion in a distance metric.
Lemma B.5 (Taylor expansion in a distance metric).
Proof.
By standard multivariate Taylor expansion, we know that:
for the usual Euclidean norm .
We then have, using our definition of our directional derivative
which gives
Finally, for the little- remainder term, we show that if , then . We demonstrate this by proving that .
if both limits exist. We know since , so it remains to show that exists.
The above limit exists because is non-zero when . Hence . ∎
Lemma B.5 is technical, but it captures a very simple intuition. Standard multivariate Taylor expansion takes the dot product of the -dimensional gradient of with the -dimensional vector of differences between and the point . Calipers, however, only control the scalar quantities . Lemma B.5 simply rewrites standard multivariate Taylor expansion to better utilize the fact that Lipschitz functions control scalar directional derivatives (as discussed in Appendix B.2).
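In symbols (our reconstruction of the idea, since the lemma's display math is omitted above), the rewriting replaces the gradient–difference dot product with a scalar distance times a directional derivative:

```latex
% Standard multivariate Taylor expansion around x_0:
%   f(x) = f(x_0) + \nabla f(x_0)^\top (x - x_0) + o(\|x - x_0\|).
% Writing u for the direction from x_0 to x, scaled so that
% x = x_0 + d(x, x_0) u, this becomes
\begin{equation*}
  f(x) \;=\; f(x_0) + d(x, x_0)\, D_u f(x_0) + o\big( d(x, x_0) \big),
\end{equation*}
% where D_u f(x_0) is the directional derivative of f at x_0 in direction u.
% A Lipschitz f bounds the scalar |D_u f(x_0)|, and the caliper bounds the
% scalar d(x, x_0), which is exactly the form the proof needs.
```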
With our lemma, we now provide the proof for Proposition 4.5, where we use the fact that, by Lemma B.2, Lipschitz functions have bounded directional derivatives.
Proof.
Recall that by Taylor expansion of around , we have:
Consider the linear term:
where is the fixed (unknown) gradient of at . Thus, if the synthetic control matches exactly, we drop the linear term from our bias.
Using the above, and given that for all and for all , we have:
∎