Rate-Optimal Cluster-Randomized Designs for Spatial Interference
Abstract
We consider a potential outcomes model in which interference may be present between any two units but the extent of interference diminishes with spatial distance. The causal estimand is the global average treatment effect, which compares outcomes under the counterfactuals that all or no units are treated. We study a class of designs in which space is partitioned into clusters that are randomized into treatment and control. For each design, we estimate the treatment effect using a Horvitz-Thompson estimator that compares the average outcomes of units with all or no neighbors treated, where the neighborhood radius is of the same order as the cluster size dictated by the design. We derive the estimator’s rate of convergence as a function of the design and degree of interference and use this to obtain estimator-design pairs that achieve near-optimal rates of convergence under relatively minimal assumptions on interference. We prove that the estimators are asymptotically normal and provide a variance estimator. For practical implementation of the designs, we suggest partitioning space using clustering algorithms.
1 Introduction
Consider a population of experimental units. Denote by the potential outcome of unit under the counterfactual that the population is assigned treatments according to the vector , where () implies unit is assigned to treatment (control). Treatments assigned to alters can influence the ego since is a function of for , what is known as interference.
An important estimand of practical interest is the global average treatment effect
where () is the -dimensional vector of ones (zeros). This compares average outcomes under the counterfactuals that all or no units are treated. Each average can only be directly observed in the data under an extreme design that assigns all units to the same treatment arm, which would necessarily preclude observation of the other counterfactual. Common designs used in the literature, including those studied here, assign different units to different treatment arms, so neither average is directly observed in the data. Nonetheless, we show that asymptotic inference on is possible for a class of cluster-randomized designs under spatial interference where the degree of interference diminishes with distance.
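In notation we adopt here for concreteness (the symbols are ours, chosen to match standard usage), the global average treatment effect can be written as

$$\tau_n \;=\; \frac{1}{n}\sum_{i=1}^{n}\Big( Y_i(\mathbf{1}_n) - Y_i(\mathbf{0}_n) \Big),$$

where $Y_i(\mathbf{d})$ is unit $i$'s potential outcome under the assignment vector $\mathbf{d}\in\{0,1\}^n$ and $\mathbf{1}_n$, $\mathbf{0}_n$ are the $n$-dimensional vectors of ones and zeros.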
Many phenomena diffuse primarily through physical interaction. The government of a large city may wish to compare the effect of two different policing strategies on crime, but more intensive policing in one neighborhood may displace crime to adjacent neighborhoods [8, 49]. A rideshare company may wish to compare the performance of two different pricing algorithms, but these may induce behavior that generates spatial externalities, such as congestion. Other phenomena exhibiting spatial interference include infectious diseases in animal [14] and human [35] populations, pollution [18], and environmental conservation programs [36].
Much of the existing literature assumes that interference is summarized by a low-dimensional exposure mapping and that units are individually randomized into treatment or control either via Bernoulli or complete randomization [e.g. 3, 7, 16, 34, 45]. Jagadeesan et al. [23] and Ugander et al. [46] also utilize exposure mappings but depart from unit-level randomization. They propose new designs that introduce cluster dependence in unit-level assignments in order to improve estimator precision. We build on this literature by (1) studying rate-optimal choices of both cluster-randomized designs and Horvitz-Thompson estimators, (2) avoiding exposure mapping restrictions on interference, which can be quite strong [15], and (3) developing a distributional theory for the estimator and a variance estimator.
Regarding (2), most exposure mappings used in the literature imply that only units within a small, known distance from the ego can interfere with the ego’s outcome. We instead study a weaker restriction on interference similar to [31], which states that the degree of interference decreases with distance but does not necessarily zero out at any given distance. This is analogous to the widespread use of mixing-type conditions in the time series and spatial literature instead of m-dependence because the latter rules out interesting forms of autocorrelation, including models as basic as AR.
Regarding (1), we study cluster-randomized designs in which units are partitioned into spatial clusters, clusters are independently randomized into treatment and control, and the assignment of a unit is dictated by the assignment of its cluster. By introducing correlation in assignments, such designs can avoid overlap problems common under Bernoulli randomization, which improves the rate of convergence. For analytical tractability, we focus on designs in which clusters are equally sized squares, each design distinguished by the number of such squares. We pair each design with a Horvitz-Thompson estimator that compares the average outcomes of units with all or no treated neighbors, where the neighborhood radius is of the same order as the cluster size dictated by the design. See Figure 1 for a depiction of a hypothetical design and neighborhoods used to construct the estimator.
Our results inform how the analyst should choose the number of clusters (and hence, the cluster size and neighborhood radius of the estimator) to minimize the rate of convergence of the estimator. Notably, existing work on cluster randomization with interference utilizes small clusters (those with asymptotically bounded size). We show that such designs are generally asymptotically biased under the weaker restriction on interference we impose, which motivates the large-cluster designs we study.

Finally, regarding (3), we show that the estimator is asymptotically normal and provide a variance estimator. These results appear to be novel, as no existing central limit theorems seem to apply to our setup in which treatments exhibit cluster dependence, clusters can be large, and units in different clusters are spatially dependent due to interference. As usual, the variance estimator is biased due to heterogeneity in unit-level treatment effects. However, we show that, in a superpopulation setting in which potential outcomes are weakly spatially dependent, the bias is asymptotically negligible.
Based on our theory, we provide practical recommendations for implementing cluster-randomized designs in § 3.3. Of course, rate-optimality results do not determine the choice of nonasymptotic constants that are often important in practice under smaller sample sizes. Still, they constitute an important first step toward designing practical procedures. Due to the generality of the setting, which imposes quite minimal assumptions on interference, it seems reasonable to first study rate-optimality, as finite-sample optimality appears to require substantial additional structure on the problem. We note that existing results on graph cluster randomization, which require stronger restrictions on interference than this paper, are nonetheless limited to rates, and how “best” to construct clusters in practice has been an open question.
1.1 Related Literature
Most of the literature supposes interference is mediated by a network. Studying optimal design in this setting is difficult because network clusters can be highly heterogeneous in topology, and their graph-theoretic properties can closely depend on the generative model of the network [32]. We study spatial interference, and to make the optimal design problem analytically tractable, we focus on a class of designs that partitions space into equally sized squares while exploring in simulations the performance of more realistic designs that partition using clustering algorithms. We discuss in § 6 the (pessimistic) prospects of extending our approach to network interference.
There is relatively little work on optimal experimental design under interference. Viviano [50] proposes variance-minimizing two-wave experiments under network interference. Baird et al. [5] study the power of randomized saturation designs under partial interference.
A recent literature studies designs for interference that depart from unit-level randomization. A key paper motivating our work is [46], who propose graph cluster randomization designs under network interference. Ugander and Yin [47] study a new variant of these designs, and [19] consider related designs for bipartite experiments. These papers assume interference is summarized by exposure mappings, which enables the construction of unbiased estimators and use of designs in which clusters are small. Under our weaker restriction on interference, we show that large clusters are required to reduce bias, which creates a bias-variance trade-off.
Eckles et al. [15] show that graph cluster randomization can reduce the bias of common estimators for in the absence of correctly specified exposure mappings. Pouget-Abadie et al. [40] propose two-stage cluster-randomized designs to minimize bias under a monotonicity restriction on interference. Several papers [6, 23, 44] study linear potential outcome models and propose designs targeting the direct average treatment effect, rather than . Under a normal-sum model, [6] compute the mean-squared error of the difference-in-means estimator, which they use to suggest model-assisted designs.
The aforementioned papers on cluster randomization target global effects such as [also see 9, 10]. Much of the literature on interference considers fundamentally different estimands defined by exposure mappings. When these mappings are misspecified, the estimands are functions of assignment probabilities, in which case their interpretations can be specific to the experiments run [42, 43]. Hu et al. [21] (§5) views this as “largely unavoidable” in nonparametric settings with interference. Our results show that inference on , which avoids this issue, is possible under restrictions on interference weaker than those typically used in the literature. Additionally, papers in the literature impose an overlap assumption, which implicitly restricts the estimand [31]. We study cluster-randomized designs that directly satisfy overlap.
There is a large literature on cluster-randomized trials [e.g. 20, 37]. This literature predominantly studies partial interference, meaning that units are divided into clusters such that those in distinct clusters do not interfere. That is, the clusters themselves impose restrictions on interference. In our setting, clusters are determined by the design and do not restrict interference.
Finally, [4], [39], and [51] study spatial interference in a different “bipartite” setting in which treatments are assigned to units or locations that are distinct from the units whose outcomes are of interest. This shares some similarities with spatial cluster randomization, where different spatial regions are randomized into treatment, so some of the ideas here may be applicable to optimal design there.
1.2 Outline
The next section defines the model of spatial interference and the class of designs and estimators studied. In § 3, we derive the estimator’s rate of convergence, discuss rate-optimal designs, and provide practical design recommendations. In § 4, we prove that the estimator is asymptotically normal, propose a variance estimator, and characterize its asymptotic properties. We report results from a simulation study in § 5, exploring the use of spectral clustering to implement the designs. Finally, § 6 concludes.
2 Setup
Let be a set of units. We study experiments in which units are cluster-randomized into treatment and control, postponing to § 2.2 the specifics of the design. For each , let be a binary random variable where indicates that is assigned to treatment and indicates assignment to control. Let be the vector of realized treatments and denote a non-random vector of counterfactual treatments. Recall from § 1 that is the potential outcome of unit under the counterfactual that units are assigned treatments according to . Formally, for each and , is a non-random function from to . We denote ’s factual, or observed, outcome by and maintain the standard assumption that potential outcomes are uniformly bounded.
Assumption 1 (Bounded Outcomes).
.
2.1 Spatial Interference
Thus far, the model allows for unrestricted interference in the sense that may vary essentially arbitrarily in any component of . In order to obtain a positive result on asymptotic inference, it is necessary to impose restrictions on interference to establish some form of weak dependence across unit outcomes. The existing literature primarily focuses on restrictions captured by -neighborhood exposure mappings, which imply that can only interfere with if the distance between is at most . We will discuss how this assumption is potentially restrictive and establish results under weaker conditions.
We assume each unit is located in . Label each unit by its location, so that , and equip this space with the sup metric for , , and . Let , the ball of radius centered at . Under the sup metric, balls are squares, and the radius is half the side length of the square. Letting denote the origin, we consider a sequence of population regions such that
That is, units are located in the square with growing radius . Combined with the next increasing domain assumption, the number of units in the region is , but throughout, we will simply assume the number is exactly .
Assumption 2 (Increasing Domain).
There exists such that, for any and , .
This allows units to be arbitrarily irregularly spaced, subject to being minimally separated by some distance , a widely used sampling framework in the spatial literature [e.g. 25]. In contrast, “infill” asymptotic approaches that do not require minimal separation and instead assume increasingly dense sampling from a fixed region can yield nonstandard limiting behavior [28]. For some applications, the spatial distribution of units may exhibit “hotspots” with unusually high densities, perhaps making the infill approach more plausible. Some work adopts a hybrid of the two approaches [29, 30], and it may be possible to extend our results to this framework.
Let denote the set of non-negative reals and
denote the -neighborhood of . We study the following model of interference similar to that proposed by [31].
Assumption 3 (Interference).
There exists a non-increasing function such that , , and, for all ,
To interpret this, observe that maximizes over pairs of treatment assignment vectors that fix the assignments of units in ’s -neighborhood but allow assignments to freely vary outside of this neighborhood. It therefore measures the degree of spatial interference in terms of the maximum change to ’s potential outcome caused by manipulating treatments assigned to units “distant” from in the sense that . The assumption requires the degree of interference to vanish with the neighborhood radius so that treatments assigned to more distant alters interfere less with the ego. The rate at which interference vanishes is controlled by , which is required to decay at a rate faster than .
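In symbols, and under the notational assumptions that $\mathcal{N}(i,s)$ denotes the $s$-neighborhood of unit $i$ and $\psi$ the decay function, the inequality in Assumption 3 can be rendered as

$$\max_{1\le i\le n}\ \sup\Big\{ \big|Y_i(\mathbf{d}) - Y_i(\mathbf{d}')\big| \,:\, \mathbf{d},\mathbf{d}'\in\{0,1\}^n,\ d_j=d'_j \ \forall\, j\in\mathcal{N}(i,s) \Big\} \;\le\; \psi(s) \qquad \text{for all } s\ge 0,$$

with $\psi$ non-increasing and, per Remark 1 below, decaying to zero faster than $s^{-2}$ (so that $s^2\psi(s)\to 0$ as $s\to\infty$); the precise side conditions of the original statement may differ from this sketch.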
Remark 1 (Necessity of rate condition).
Assumption 3(b) of [25] and Assumption 4(c) of [27] impose the same minimum rate of decay as Assumption 3 on various measures of spatial dependence (mixing or near-epoch dependence coefficients) to establish central limit theorems. If the rate is slower, then the variance can be infinite. For example, consider a spatial process such that units are positioned on the integer lattice and for some function for any . Then
Note that , with equality achieved for all not near the boundary of the population region. Thus, for large , a finite variance requires that decay with faster than .
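To spell the example out, suppose units occupy the integer lattice in $\mathbb{R}^2$ and outcome covariances are bounded by $\psi$ evaluated at the sup distance between units, which is one way to read the example. Since at most $8s$ lattice points lie at sup-distance exactly $s$ from any unit (with equality away from the boundary of the region),

$$\mathrm{Var}\!\Big(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} Y_i\Big) \;=\; \frac{1}{n}\sum_{i=1}^{n}\sum_{s\ge 0}\ \sum_{j\,:\,\|i-j\|_\infty=s}\mathrm{Cov}(Y_i,Y_j) \;\lesssim\; \psi(0) + \sum_{s\ge 1} s\,\psi(s),$$

and the right-hand side is finite only if $\psi(s)$ decays faster than $s^{-2}$.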
We next discuss two models of interference satisfying Assumption 3. The first is the standard approach of specifying a -neighborhood exposure mapping. Such a mapping is given by with the crucial property that its dimension does not depend on , unlike that of . The approach is to assume that the low-dimensional summarizes interference by reparameterizing potential outcomes as
(1)
That is, once we fix ’s exposure mapping , its potential outcome is fully determined. No less important, it is also typically assumed that exposure mappings are restricted to a unit’s -neighborhood, where is small, meaning fixed with respect to . Formally, for any such that for all , which implies that the treatment assigned to a unit only interferes with if . In practice, choices with are most common, for example for or where . In these examples, captures the direct effect of the treatment, and captures interference from units near .
Crucially, and must be known to the analyst in this approach, which is often a strong requirement. In contrast, Assumption 3 enables the analyst to impose (1) while requiring neither to be known. Indeed, if there exists a -neighborhood exposure mapping satisfying (1), then Assumption 3 holds with for some sufficiently large.
Furthermore, Assumption 3 allows for more complex forms of interference ruled out by (1) in which interference decays more smoothly with distance, rather than being truncated at some distance . The former is analogous to mixing conditions, which are widespread in the time series and spatial literature, while the latter is analogous to m-dependence, which rules out interesting forms of autocorrelation, including models as basic as AR.
In the spatial context, our assumption accommodates, for example, the Cliff-Ord autoregressive model [11, 12], which is a workhorse model of spatial autocorrelation used in a variety of fields, including geography [17], ecology [48], and economics [2]. A typical formulation of the model is
where we assume is uniformly bounded to satisfy Assumption 1. Let be the spatial weight matrix whose th entry is . These weights typically decay with distance in a sense to be made precise below. While this model is highly stylized, the important aspect it captures is autocorrelation through the spatial autoregressive parameter . If this is nonzero, then there is no -neighborhood exposure mapping for which (1) holds, a point previously noted by [15] in the context of network interference.
To see this, first note that coherency of the model requires nonsingularity of , where is the identity matrix. Let be the inverse of this matrix and its entry corresponding to units . Then the reduced form of the model is
(2)
a spatial “moving average” model with spatial weight matrix . (See § 5 for some examples of .) Notably, can potentially depend on for any , which is ruled out if one imposes a -neighborhood exposure mapping.
Outcomes satisfying (2) are near-epoch dependent, a notion of weak spatial dependence, when the weights decay with spatial distance in the following sense:
(3) |
for some [see Proposition 5 and eq. (13) of 26]. The next result shows that this condition is sufficient for verifying Assumption 3 if .
Proposition 1.
Suppose potential outcomes are given by (2) and spatial weights satisfy (3) for some . Then Assumption 3 holds with for some that does not depend on .
Proof.
Fix and any such that for all . For , . For ,
Defining and , the inequality in Assumption 3 holds with by construction. Furthermore, by (3) and uniform boundedness of , and satisfies because . ∎
This result shows that, unlike the standard approach of imposing a -neighborhood exposure mapping, Assumption 3 can allow for richer forms of interference in which alters that are arbitrarily distant from the ego can interfere with the ego’s response.
Remark 2 (Literature).
Assumption 3 and Proposition 1 are spatial analogs of Assumptions 4 and 6 and Proposition 1 of [31] who studies interference mediated by an unweighted network, Bernoulli designs, and a different class of estimands defined by exposure mappings satisfying overlap. We study the global average treatment effect and cluster-randomized designs that induce overlap by introducing dependence in treatment assignments, and we further derive rate-optimal designs. These differences require an entirely distinct asymptotic theory.
2.2 Design and Estimator
Much of the literature on interference considers designs in which units are individually randomized into treatment and control, either via Bernoulli or complete randomization. A common problem faced by such designs is limited overlap, meaning that some realizations of the exposure mapping occur with low probability. For example, suppose that (1) holds with exposure mapping , the number of treated units in ’s -neighborhood. Then in a Bernoulli design, for large values of , is small, tending to zero with at an exponential rate. This is problematic for a Horvitz-Thompson estimator such as where since its variance grows rapidly with if either or is zero. Ugander et al. [46] propose cluster-randomized designs, which reduce this problem by deliberately introducing dependence in treatment assignments across certain units.
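To make the overlap problem concrete, the following sketch (with illustrative numbers; nothing here is taken from the paper's examples) compares the probability that a unit's entire neighborhood is treated under independent Bernoulli(1/2) assignment with the corresponding probability under a cluster design, where, as noted in Remark 3 below, the exponent is only the number of clusters the neighborhood touches.

```python
# Illustrative comparison of exposure probabilities; all values are assumptions.
p = 0.5                   # treatment probability
n_neighbors = 25          # units in the ego's neighborhood (illustrative)
clusters_touched = 4      # clusters intersecting that neighborhood (worst case in Remark 3)

# Bernoulli design: every neighbor must be treated independently.
prob_bernoulli = p ** n_neighbors      # ~3.0e-8, so the Horvitz-Thompson weight explodes

# Cluster design: whole clusters are treated as blocks.
prob_cluster = p ** clusters_touched   # 0.0625, bounded away from zero

print(prob_bernoulli, prob_cluster)
```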
We consider the following class of such designs. We assign units to mutually exclusive clusters by partitioning the population region into equally sized squares, assuming for simplicity that . That is, to obtain increasingly more clusters, we first divide the population region into four squares, then divide each of these squares into four squares, and so on, as in Figure 2. Label the squares , and call
cluster . Then the number of units in each cluster is uniformly under Assumption 2, and the radius of each cluster is
which we assume is greater than 1. We also assume there are no units on the common boundaries of different squares, so the squares partition .

A cluster-randomized design first independently assigns each cluster to treatment and control with some probability
that is fixed with respect to . Then within a treated (control) cluster , all are assigned (). In order to emphasize that we use this design in later theorems, we state it as a separate assumption.
Assumption 4 (Design).
For any , is realized according to a cluster-randomized design with clusters constructed as above.
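A minimal sketch of the design in Assumption 4, assuming the population region is the square of radius R centered at the origin and that the number of clusters is m = 4^k, as described above; variable names are ours.

```python
import numpy as np

def square_cluster_design(locations, R, k, p=0.5, seed=None):
    """Partition [-R, R]^2 into m = 4**k equal squares and cluster-randomize.

    locations: (n, 2) array of coordinates in [-R, R]^2.
    Returns (cluster_labels, unit_treatments), both of length n.
    """
    rng = np.random.default_rng(seed)
    per_axis = 2 ** k                  # squares per axis, so m = per_axis**2 = 4**k
    side = 2 * R / per_axis            # side length of each square cluster
    ix = np.clip(((locations[:, 0] + R) // side).astype(int), 0, per_axis - 1)
    iy = np.clip(((locations[:, 1] + R) // side).astype(int), 0, per_axis - 1)
    cluster = ix * per_axis + iy
    # Independent Bernoulli(p) draw per cluster; every unit inherits its cluster's draw.
    cluster_treat = rng.binomial(1, p, size=per_axis ** 2)
    return cluster, cluster_treat[cluster]
```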
Note that will be required to diverge with since a large number of clusters is needed for the estimator to concentrate. If is order , then , so clusters are asymptotically bounded in size, the usual case studied in the literature, which includes unit-level Bernoulli randomization as a special case. If is of smaller order, then cluster sizes grow with .
To construct the estimator, define the neighborhood exposure indicator
This is an indicator for whether ’s -neighborhood is entirely treated () or untreated (). Unlike -neighborhood exposure mappings, the radius will be allowed to diverge. Let . We study the Horvitz-Thompson estimator
Intuitively, estimates using the outcomes of units whose neighbors within radius are all treated. Since the radius depends on through , is a function of the number of clusters dictated by the design. Figure 1 depicts the relationship between the clusters and the -neighborhoods that determine exposure.
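In notation we adopt for concreteness, write $T_i(1)=\mathbf{1}\{D_j=1\ \forall\, j\in\mathcal{N}(i,r_n)\}$ and $T_i(0)=\mathbf{1}\{D_j=0\ \forall\, j\in\mathcal{N}(i,r_n)\}$ for the neighborhood exposure indicators; the Horvitz-Thompson estimator described above then takes the standard form

$$\hat\tau_n \;=\; \frac{1}{n}\sum_{i=1}^{n} Y_i\left(\frac{T_i(1)}{\mathrm{P}(T_i(1)=1)} - \frac{T_i(0)}{\mathrm{P}(T_i(0)=1)}\right),$$

where, by Remark 3 below, the exposure probabilities equal $p^{J_i}$ and $(1-p)^{J_i}$ with $J_i$ the number of clusters intersecting $\mathcal{N}(i,r_n)$.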
Since nontrivial designs will include both treated and untreated units, is biased for the global average treatment effect. The choice of design can trade off the size of the bias against that of the variance. In particular, small choices of (few clusters, large radii) induce lower bias and higher variance. In § 3, we discuss nearly rate-optimal choices of for which the bias is asymptotically negligible.
Remark 3 (Overlap).
Under Bernoulli randomization, overlap needs to be imposed as a separate assumption for asymptotic inference. By overlap we mean the probability weights and are uniformly bounded away from zero and one, which imposes potentially strong restrictions on the types of exposure mappings the analyst can use, as previously illustrated. In our setup, however, overlap is directly satisfied because and , where is the number of clusters that intersect ’s -neighborhood. Our choice of implies for all , so overlap holds.
Remark 4 (Neighborhood radius).
Let be the centroid of cluster . The choice of ensures that, for any unit in the “interior” of a cluster in the sense that , ’s -neighborhood also lies within that cluster, in which case the exposure probabilities are simply given by the cluster assignment probability: and . If we had instead chosen, say, , then this would be true only for the centroid, while for the remaining units, and could be as small as , which means less overlap and a more variable estimate. For the purposes of the asymptotic theory, the main requirement is that has the same asymptotic order as . If were of smaller order, then results in § 3 show that the bias of could be non-negligible, whereas if were of larger order, then in Remark 3 would grow with , overlap would be limited, and could be large.
3 Rate-Optimal Designs
We next derive the rate of convergence of as a function of , , and , which we use to obtain rate-optimal choices of . Recall that designs are parameterized by , which determines the number and sizes of clusters, and also that depends on through the neighborhood exposure radius , so we will be optimizing over both the design and the radius that determines the estimator.
3.1 Rate of Convergence
We first provide asymptotic upper bounds on the bias and variance of . For two sequences and , we write to mean and to mean .
Proof.
First, we bound the bias. If , then all units in are treated, so by Assumption 3,
The same argument applies to , so combining these results, we obtain the rate for the bias.
Next, we bound the variance. The following is an important object in our analysis and also for later constructing the variance estimator:
(4)
This is the set of units whose -neighborhoods intersect a cluster that also intersects ’s -neighborhood. We have
(5)
Note that contains units from at most 16 clusters (the worst case is when intersects four clusters), and clusters contain uniformly units by Lemma A.1 of [25]. By Assumption 1 and Remark 3, is uniformly bounded, so
Our second result provides asymptotic lower bounds.
Theorem 2.
Suppose for some . Under Assumption 4, there exists a sequence of units and potential outcomes satisfying Assumptions 1–3 such that and .
Proof.
See supplemental material [33]. ∎
The result shows that we can construct potential outcomes satisfying the assumptions of Theorem 1 such that the bias is at least order and the variance at least . As discussed in § 1, existing work on cluster randomization under interference assumes clusters have asymptotically bounded size, which, in our setting, implies . Theorem 2 implies that the bias of the Horvitz-Thompson estimator can then be bounded away from zero, showing that existing results strongly rely on the exposure mapping assumption to obtain unbiased estimates. In the absence of this assumption, it is necessary to consider designs in which cluster sizes are large to ensure the bias vanishes with .
3.2 Design Examples
Theorem 1 implies the mean-squared error of is at most of order , and Theorem 2 provides a similar asymptotic lower bound. Under either bound, the bias increases with while the variance decreases, so there exists a bias-variance trade-off in the choice of design. We next derive rates for that minimize or nearly minimize the upper bound under different assumptions on . Based on these results, we make recommendations for practical implementation in the next subsection.
Oracle design. Suppose is known to decay with at some rate . Then by definition of , a rate-optimal design chooses to minimize .
Exposure mappings. If we assume (1) holds for some -neighborhood exposure mapping, then for all . If is known, then by choosing , we have and zero bias. In this case, clusters are asymptotically bounded in size, the estimator converges at rate , and both the design and estimator qualitatively coincide with those of [46].
On the other hand, if is unknown, then for a nearly rate-optimal design, we can choose to grow at a slow rate so that it eventually exceeds any fixed . This may be achieved by choosing to grow slightly slower than , say . Then for large enough , the bias is zero, and the rate of convergence is .
Exponential decay. Common specifications of the spatial weight matrix in the Cliff-Ord model imply that decays exponentially with , for example, the row-normalized matrix
(6)
If is known to decay at some exponential rate but the exponent is unknown, then we may choose for any small for a nearly rate-optimal design, which yields a rate of convergence of , close to an -rate. This shows that rates close to are attainable in the absence of exposure mapping assumptions, despite targeting the global average treatment effect.
Worst-case decay. In practice, we may have little prior knowledge about . Recall that Assumption 3 requires the rate of decay to be no slower than for . As discussed in Remark 1, this is the slowest rate for spatial dependence that ensures a finite variance. For this rate, since is order , the bias is order . Without knowledge of , we can settle for a nearly rate-optimal design by setting and choosing to minimize , which yields and an -rate of convergence. Under this design, cluster sizes grow at the rate .
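To make the order-of-magnitude calculation behind this choice explicit, suppose, as the discussion of Theorem 1 suggests, that the bias is of order $\psi(r_n)$ with $r_n \asymp \sqrt{n/m}$ and that the variance is of order $1/m$ (a sketch under these assumptions, with unspecified constants $c$ and $C$). With worst-case decay $\psi(s)\asymp s^{-2}$,

$$\mathrm{MSE} \;\lesssim\; \psi\big(c\sqrt{n/m}\big)^2 + \frac{C}{m} \;\asymp\; \frac{m^2}{n^2} + \frac{1}{m},$$

which is minimized up to constants at $m \asymp n^{2/3}$, giving a mean-squared error of order $n^{-2/3}$, an $n^{-1/3}$ rate of convergence, and cluster sizes of order $n/m \asymp n^{1/3}$. The cluster counts used in § 5 (40, 63, 100 for $n = 250, 500, 1000$) are consistent with rounding $n^{2/3}$.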
In the last three designs, the bias is , which is of smaller order than the variance. This makes the bias negligible from an asymptotic standpoint, but it would be useful to develop bias-reduction methods. We also reiterate that, while this analysis only provides rates, this appears to be unavoidable at this level of generality. A finite-sample optimal design seems to require substantially more knowledge of the functional form of .
3.3 Practical Recommendations
The designs in the previous section rely on varying degrees of knowledge of , the rate at which interference vanishes with distance. In practice, this is likely unknown, so we recommend operating under the worst-case rate of decay discussed in the previous subsection. The default conservative choice we recommend using in practice is the near-optimal rate described there, namely
(7)
To construct the clusters, we recommend partitioning space into clusters using a clustering algorithm, such as spectral clustering. A confidence interval (CI) for is given in (11). In § 5, we explore in simulations the performance of the CI when clusters are constructed according to these recommendations.
Our large-sample theory assumes space is subdivided into evenly sized squares in order to avoid the difficult problem of optimizing over arbitrary shapes. However, since units are typically irregularly distributed in practice, division into equally sized squares may be inefficient, which is why we recommend the use of clustering algorithms. We suggest spectral clustering because it recovers, under weak conditions, low-conductance clusters [38], and low conductance is the key property of clusters utilized in our proofs, as discussed in § 6.
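A sketch of this recommended pipeline, assuming (consistently with the cluster counts reported in Table 1) that the recommendation in (7) is of order $n^{2/3}$, and taking the neighborhood radius to be half the side length of a square holding the average cluster area; the exact constants in (7) and in the radius, as well as all function and parameter names, are our own choices rather than the paper's.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_design_and_estimate(locations, draw_outcomes, p=0.5, seed=0):
    """Worst-case design of § 3.3 plus the Horvitz-Thompson estimate (a sketch).

    locations: (n, 2) array of unit coordinates.
    draw_outcomes: function mapping a treatment vector D (n,) to outcomes Y (n,).
    """
    rng = np.random.default_rng(seed)
    n = locations.shape[0]

    # Number of clusters of order n^(2/3), assumed to match (7); cf. Table 1.
    m = int(round(n ** (2 / 3)))

    # Spectral clustering of the locations with a Gaussian (RBF) affinity, as in § 3.3 and § 5.
    labels = SpectralClustering(n_clusters=m, affinity="rbf",
                                random_state=seed).fit_predict(locations)

    # Cluster-level Bernoulli(p) randomization; units inherit their cluster's draw.
    D = rng.binomial(1, p, size=m)[labels]
    Y = draw_outcomes(D)

    # Neighborhood radius of the same order as the cluster radius (our choice:
    # half the side of a square with the average cluster area).
    area = np.ptp(locations[:, 0]) * np.ptp(locations[:, 1])
    r = 0.5 * np.sqrt(area / m)

    # Exposure indicators and exact exposure probabilities p^{J_i}, (1-p)^{J_i},
    # where J_i counts the clusters intersecting i's r-neighborhood (Remark 3).
    dist = np.max(np.abs(locations[:, None, :] - locations[None, :, :]), axis=2)  # sup metric
    terms = np.empty(n)
    for i in range(n):
        nbrs = dist[i] <= r
        J_i = np.unique(labels[nbrs]).size
        all_treated = float(D[nbrs].min() == 1)
        none_treated = float(D[nbrs].max() == 0)
        terms[i] = Y[i] * (all_treated / p ** J_i - none_treated / (1 - p) ** J_i)
    return terms.mean(), labels, D
```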
4 Inference
We next state results for asymptotic inference on . Define .
Assumption 5 (Non-degeneracy).
.
This is a standard condition and reasonable to impose in light of the lower bound on the variance derived in Theorem 2.
Proof.
See § B. ∎
The result centers at its expectation, not the estimand . However, designs discussed in § 3.2 result in small bias, meaning , so we can replace with on the left-hand side. Also note that the assumption implies that cluster sizes grow with . If instead were of order , then , so by Theorem 2, we would additionally need to assume that there exists a -neighborhood exposure mapping in the sense of (1) in order to guarantee that the bias vanishes at all. In this case, it is straightforward to establish a normal approximation using existing results.
4.1 Proof Sketch
To our knowledge, there is no off-the-shelf central limit theorem that we can apply to . Under Assumption 3, the outcomes appear to be near-epoch dependent on the input process , but the treatments are cluster-dependent with growing cluster sizes, rather than -mixing, as required by [27]. To prove a central limit theorem, they split the average into two parts: its expectation conditional on the dependent input process , and a remainder that they show is small. Rather than conditioning on all treatments, we find that the following unit-specific conditioning event is more useful for proving our result.
Let be the cluster containing unit , and . Rewrite the estimator as
(8)
We first show that the last term is relatively small, to be precise, which means that, on average, is primarily determined by . The proof of this claim is somewhat complicated [and different from that of 27], but it is similar to the argument showing in (5). To then establish a central limit theorem for , we observe that the dependence between “observations” is characterized by the following dependency graph , which, roughly speaking, links two units only if they are dependent. Recalling the definition of from (4), we connect units in if and only if (or equivalently ). Then is indeed a dependency graph because, under Assumption 4, implies that the treatment assignments that determine are independent of those that determine . The result follows from a central limit theorem for dependency graphs.
The proof highlights two sources of dependence. The first-order source is the first term on the right-hand side of (8). Dependence in this average is due to cluster randomization, which induces correlation in the treatments determining across . The second-order source is the second term on the right-hand side of (8). Dependence in this average is due to interference, which decays with distance due to Assumption 3. Sävje [42] derives a similar decomposition in a different context with misspecified exposure mappings. The previous arguments show that the second-order source of dependence is small relative to the first-order source because, with large clusters, dependence induced by cluster randomization dominates dependence induced by interference. This is generally untrue with small clusters.
4.2 Variance Estimator
The proof sketch suggests that, to estimate , it suffices to account for dependence induced by cluster randomization. Define , where is defined in (4), and note that and . Let , which is equivalent to . Our proposed variance estimator is
(9)
Proof.
See supplemental material [33]. ∎
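The exact expression in (9) is not reproduced above, so the following is only a schematic plug-in of the kind the surrounding discussion describes: products of centered Horvitz-Thompson summands are accumulated over pairs of units whose dependency neighborhoods overlap, in the spirit of a HAC estimator with the dependency graph of § 4.1 playing the role of the kernel. The details (in particular the centering and normalization) are assumptions and may differ from (9).

```python
import numpy as np

def schematic_variance(Z, dep_neighborhoods):
    """Schematic dependency-neighborhood plug-in, NOT the exact formula (9).

    Z: (n,) Horvitz-Thompson summands Y_i*(T_i(1)/p_i(1) - T_i(0)/p_i(0)).
    dep_neighborhoods: list where dep_neighborhoods[i] is an index array of the
        units j whose exposure indicators share a randomized cluster with unit i's.
    """
    n = Z.shape[0]
    Zc = Z - Z.mean()              # rough centering; (9) may center differently
    sigma2 = 0.0
    for i in range(n):
        sigma2 += Zc[i] * Zc[dep_neighborhoods[i]].sum()
    sigma2 /= n                    # estimates n * Var(tau_hat)
    return sigma2
```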
The bias term is typically nonzero due to the unit-level heterogeneity. That is, does not approach zero asymptotically, except in the special case of homogeneous treatment effects where does not vary across . In the no-interference setting, it is well known that the standard estimator of the variance of the difference-in-means estimator is biased for the same reason and that consistent estimation of the variance is impossible. However, due to the term in , we will argue that typically , meaning that is asymptotically exact.
Let us first compare to its formulation under no interference. In this case, , and we replace with and with to estimate the usual average treatment effect. Furthermore, we set for all because units are independent and set since there is no longer a need to cluster units. With these changes, , and
(10)
for and . This is the well-known expression for the bias in the absence of interference [e.g. 22, Theorem 6.2].
In our setting, we have additional “covariance” terms included in due to the non-zero off-diagonals of the dependency graph . These would be problematic if they were negative and larger in magnitude than the main variance terms since that would make anti-conservative. We show that this occurs with small probability, and in fact, that is . Observe that and has the form of a HAC (heteroskedasticity and autocorrelation consistent) variance estimator [1, 13]. Hence, under conventional regularity conditions, is consistent for a variance term , in which case is non-negative in large samples, and furthermore, . To formalize this intuition, we need to specify conditions on the superpopulation from which potential outcomes are drawn. In § A, we show that, if potential outcomes are -mixing, then is asymptotically unbiased for , and furthermore, . Consequently, due to the term in its expression.
Remark 5 (Confidence interval).
As previously discussed, the bias of is for the near-optimal designs discussed at the end of § 3. Thus, for such designs, the preceding discussion justifies the use of
(11)
as an asymptotic 95-percent CI for , where is defined in (9).
Remark 6 (Literature).
Leung [31] proves a result similar to Theorem 4 but for a different variance estimator under a different design and variety of interference. Due to the lack of an analogous term, in his setting, weak dependence conditions would only ensure , in which case the estimator would be asymptotically conservative, whereas ours is asymptotically exact. He does not provide a formal result on the limit of .
5 Monte Carlo
We next present results from a simulation study illustrating the quality of the normal approximation in Theorem 3 and coverage of the CI (11) when constructing clusters using spectral clustering. To generate spatial locations, let be i.i.d. draws from . Unit locations in are given by for with . We let where is the Euclidean norm.
We set the number of clusters according to (7), rounded to the nearest integer, which corresponds to the near-optimal design under the worst-case decay discussed in § 3.2. To construct clusters, we apply spectral clustering to with the standard Gaussian affinity matrix whose th entry is . Clusters are randomized into treatment with probability . Figure 3 displays the clusters and treatment assignments for a typical simulation draw.

We generate outcomes from three different models. Let be drawn independently of the other primitives. The first model is Cliff-Ord:
with and spatial weight matrix given by the row-normalized adjacency matrix (6). As discussed in § 3, this model features exponentially decaying , in fact of order [31, Proposition 1].
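A sketch of this first data-generating process, assuming the Cliff-Ord specification includes an intercept, a direct treatment term, a neighborhood-spillover term, and an i.i.d. error, and assuming the row-normalized adjacency matrix in (6) links units within unit (Euclidean) distance of one another; the parameter values are placeholders rather than the ones used in the paper.

```python
import numpy as np

def cliff_ord_outcomes(locations, D, lam=0.3, alpha=1.0, beta=1.0, delta=1.0, seed=None):
    """Simulate Y = (I - lam*W)^{-1} (alpha + beta*D + delta*W*D + eps), a sketch of
    the Cliff-Ord model with row-normalized within-distance-1 adjacency weights."""
    rng = np.random.default_rng(seed)
    n = locations.shape[0]
    dist = np.linalg.norm(locations[:, None, :] - locations[None, :, :], axis=2)
    A = ((dist > 0) & (dist <= 1)).astype(float)            # adjacency: neighbors within distance 1
    W = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)   # row-normalization, as in (6)
    eps = rng.normal(size=n)
    rhs = alpha + beta * D + delta * (W @ D) + eps
    return np.linalg.solve(np.eye(n) - lam * W, rhs)         # reduced form (2)
```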
We construct the second and third models to explore how our methods break down when Assumption 3 is violated or close to violated. For this purpose, we use the “moving average” model (2) with and for for the two respective models, so that decays at a polynomial rate. Notably, the choice of implies that the rate of decay is slow enough that Assumption 3 can fail to hold. This is because
for some by Lemma A.1(iii) of [25]. The right-hand side does not converge for some , as required by Proposition 1. On the other hand, the choice of is large enough for Assumption 3 to be satisfied since we now replace the 3 on the right-hand side of the previous display with 4. However, in smaller samples, or 5 may not be substantially different, so our methods may still break down from the assumption being “close to” violated.
Table 1 displays the results of 5000 simulation draws. Row “Bias” displays , estimated by taking the average over the draws, while “Var” is the variance of across the draws. The next rows display the coverage of three different confidence intervals. “Our CI” corresponds to the empirical coverage of (11). “Naive CI” corresponds to (11) but replaces with the i.i.d. standard error, so the extent to which its coverage deviates from 95 percent illustrates the degree of spatial dependence. “Oracle CI” corresponds to (11) but replaces with the “oracle” SE, which is the standard deviation of across the draws. Note that the oracle SE approximates because the variance is taken over the randomness of the design as well as of the potential outcomes. Lastly, “SE” displays our standard error .
Moving Avg, | Moving Avg, | Cliff-Ord | |||||||
---|---|---|---|---|---|---|---|---|---|
Sample size | 250 | 500 | 1000 | 250 | 500 | 1000 | 250 | 500 | 1000
Clusters | 40 | 63 | 100 | 40 | 63 | 100 | 40 | 63 | 100
Our CI | 0.909 | 0.925 | 0.924 | 0.922 | 0.937 | 0.940 | 0.979 | 0.983 | 0.982 |
Naive CI | 0.530 | 0.489 | 0.447 | 0.575 | 0.539 | 0.494 | 0.918 | 0.913 | 0.916 |
Oracle CI | 0.943 | 0.943 | 0.936 | 0.950 | 0.952 | 0.949 | 0.983 | 0.982 | 0.982 |
Bias | 0.108 | 0.093 | 0.083 | 0.033 | 0.027 | 0.024 | 0.009 | 0.004 | 0.003 |
Var | 0.143 | 0.087 | 0.054 | 0.108 | 0.065 | 0.039 | 0.341 | 0.177 | 0.087 |
SE | 0.364 | 0.292 | 0.232 | 0.317 | 0.252 | 0.199 | 0.601 | 0.432 | 0.309 |
1.432 | 1.467 | 1.492 | 1.289 | 1.307 | 1.319 | 5.804 | 5.822 | 5.851 |
Note: 5,000 simulation draws. The “CI” rows show the empirical coverage of 95% CIs. “Naive” and “Oracle” respectively correspond to i.i.d. and true standard errors.
There are at most 100 clusters in all designs, and the rate of convergence is quite slow at for our choice of . Nonetheless, across all designs, the coverage of the oracle CI is close to 95 percent or above, which illustrates the quality of the normal approximation. For the Cliff-Ord model, our CI attains at least 95 percent coverage even for small sample sizes, despite being chosen suboptimally for the worst-case decay. For the moving average model with , we see some under-coverage in smaller samples due to the larger bias, which is unsurprising from the above discussion, but coverage is close to the nominal level for larger . The results for , as expected, are worse since it is deliberately constructed to violate our main assumption. Once again, our CI exhibits under-coverage due to the larger bias, but coverage improves and bias decreases as grows.
6 Conclusion
This paper studies the design of cluster-randomized experiments targeting the global average treatment effect under spatial interference. Each design is characterized by a parameter that determines the number and sizes of clusters. We propose a Horvitz-Thompson estimator that compares units with different neighborhood exposures to treatment, where the neighborhood radius is of the same order as clusters’ sizes given by the design. We asymptotically bound the estimator’s bias and variance as a function of and the degree of interference and derive rate-optimal choices of . Our lower asymptotic bound shows that designs using small clusters (those with asymptotically bounded sizes) generally result in a non-negligible asymptotic bias. On the other hand, constructing large clusters reduces the total number of clusters, resulting in a bias-variance trade-off that we seek to optimize in terms of rates through the choice of design.
In the worst case where the degree of interference is substantial, the estimator has an -rate of convergence under a nearly rate-optimal design, whereas in the best case where interference is characterized by a -neighborhood exposure mapping, the rate is under a rate-optimal design. We derive the asymptotic distribution of the estimator and provide an estimate of the variance.
Important areas for future research include data-driven choices of and and methods to reduce the bias of the estimator. However, a rigorous theory appears to require more substantive restrictions on interference than what we impose.
Our results focus on the canonical case of spatial data in . We conjecture that they can be extended to for because our proofs fundamentally rely on the following key property of Euclidean space, which is true for any dimension: it is always possible to construct many clusters with low conductance, or boundary-to-volume ratio, for example by partitioning space into hypercubes or by spectral clustering [32]. This appears in our proofs through the use of Lemma A.1 of [25], which, together with Assumption 3, is crucial to establish that spatially distant units have small covariance, despite dependence induced by cluster randomization and interference. In this sense, the technical idea behind this paper is to exploit a useful property of Euclidean space – the existence of many low-conductance clusters – to show that cluster-randomized designs may be fruitfully applied to the problem of spatial interference.
The story for network interference appears to be different. Existing cluster-randomized designs have theoretical guarantees under exposure mapping assumptions, but it is an open question whether such designs work under weaker restrictions on interference such as Assumption 3. In order to directly apply our idea in the previous paragraph, the network must possess many low-conductance clusters across which we can randomize. Unfortunately, this is a strong requirement in practice because, as discussed in [32], not only do some networks not possess multiple low-conductance clusters, but, of those that do, some apparently possess only a small number of such clusters. Because network “space” differs from Euclidean space in this fundamental aspect, under network interference, clusters can be strongly dependent in the absence of exposure mapping assumptions.
Appendix A Bias of the Variance Estimator
Characterizing the asymptotic behavior of requires conditions on the superpopulation from which units are drawn. In this section, we assume potential outcomes are random and constitute a weakly dependent spatial process (independent of ). Accordingly, we rewrite in Theorem 4 as
We require the spatial process to be -mixing, which is a standard concept of weak spatial dependence. The results we use in fact apply to the weaker concept of near-epoch dependence, but we focus on mixing since it requires less exposition.
Definition A.1.
Let be the probability space, be sub--algebras of , and
For , let , where is -algebra generated by . The -mixing coefficient of is
for , , and .
That is, for any two sets of units with respective sizes such that the minimum distance between is at least , bounds their dependence with respect to observations .
Example A.1.
Suppose that, for any and , there exists a function such that . If the unobserved heterogeneity is -mixing and is a Borel-measurable function of (a mild requirement since treatments are independent of potential outcomes), then is -mixing.
Example A.2.
Generalizing the previous example, suppose , where and is similarly defined. Under some conditions on , one can ensure that is near-epoch dependent on the input [e.g. 27, Proposition 1]. However, we only focus on mixing.
We next discuss the intuition behind our main result. Let , so that
Observe that , and is essentially a HAC variance estimator with “kernel” . More precisely, is sandwiched between two uniform kernels:
(A.1)
This is a consequence of the construction of clusters. The lower bound holds because must include ’s -neighborhood. The upper bound is achieved if is located at a corner shared by four clusters, and each such cluster intersects some -neighborhood that is maximally distant from the cluster. Notably, the bandwidths of the two kernels are of the same asymptotic order (recalling that has the same order as ). Hence, should behave as a HAC variance estimator. This has two implications.
1. should be consistent for a non-negative variance term under standard regularity conditions, so .
2. If, in the formula for , we replace with the uniform kernel in the upper bound (A.1), then we would have a positive semidefinite HAC estimator, implying a.s. [1, 13, §3.3.1]. Then in the finite-population setting of our main results, would be at worst asymptotically conservative.
We choose not to use a spatial HAC estimator for because such estimators have a reputation for being anti-conservative in smaller samples [see references in e.g. 32]. In our estimator, functions as a sort of heterogeneous bandwidth determined by the design that allows different units to have different neighborhood radii in the variance estimator, whereas HAC kernels imply homogeneous radii determined by the bandwidth. The hope is that heterogeneous radii could translate to better finite-sample properties since they directly capture the first-order dependency neighborhood.
We next state regularity conditions taken from [24], which we use to apply her Theorem 4 on consistency of HAC estimators.
Assumption A.1.
The mixing coefficient satisfies for and .
This is Assumption 2 of [24]. The substance of the condition is the requirement that the mixing coefficient decays at a sufficiently fast rate with distance . For , the rate requirement is stronger than what we require of in Assumption 3.
Assumption A.2.
(a) for all . (b) .
This is Assumption 7(a) of [24]. Part (a) is a standard mean-homogeneity condition required for HAC to be asymptotically unbiased. Such a requirement is untenable in the finite population setting of Theorem 4 because is a function of ’s potential outcomes which are generally heterogeneous across units. Heterogeneity is responsible for the appearance of the bias . However, in the superpopulation setting of this section, the assumption is much more tenable since we integrate over the randomness of the potential outcomes. The condition then requires that the mean be invariant to unit labels. The finiteness requirement in part (b) can be proven as a consequence of Assumption A.1 and moment conditions, so the only substance of the assumption is the existence of a limit.
We next discuss the implications of the theorem for (also see the discussion in § 4.2) and then conclude with the proof. The designs discussed at the end of § 3, other than under the worst-case decay, choose to be of substantially higher order than . In this case, Theorem A.1 yields
For the worst-case decay, , in which case remains asymptotically unbiased for , and we still have , so again .
Proof of Theorem A.1.
We apply Theorem 4 of [24]. Note that our setting is a simple mean estimation problem, which is a special case of her semiparametric model with and equal to our (by Assumption A.2(a)). The moments in her formula (13) for the HAC estimator correspond to our . Other than Assumption 10, her assumptions are either satisfied (increasing domain corresponds to our Assumption 2 and the moment conditions hold by our Assumption 1 and Remark 3) or are irrelevant in our setting.
Assumption 10 concerns the properties of the kernel function. For context, note that if, hypothetically, we replaced in with its upper bound in (A.1), then the kernel and bandwidth in [24] would correspond in our setting to the uniform kernel and , respectively, so that in Jenish’s notation would correspond to our .
Now, because is only bounded by kernel functions but cannot be written as one, we cannot directly verify Assumption 10. However, inspection of the proof reveals that the assumption is used as follows. First, to derive bounds on the variance of the HAC estimator (Step 1 of the proof), uniform boundedness of the kernel is used, but this is trivially satisfied by . Second, to derive bounds on the bias (Step 2 of the proof), Assumption 10 is used to show that the term as for any . In our case, corresponds to . But this has the desired property; it is in fact exactly zero for sufficiently large due to (A.1).
Hence, the conclusions of the proof of Jenish’s Theorem 4 apply to , which we now apply to prove our claims. Part (a) of our theorem follows from Step 2 of her proof. Next, in Step 1 of her proof, in our setting because the data is -mixing rather than near-epoch dependent. Accordingly, the variance bound on in that step implies that the variance of the HAC estimator is where is the bandwidth and is the dimension of the space. In our case, by (A.1), the bandwidth corresponds to , so , and part (b) of our theorem follows. ∎
Appendix B Proofs
The proofs use the following definitions. Let be the centroid of cluster . For , define
(B.1)
For , this is the “boundary” of , and as we increase , moves through contour sets within that are increasingly further from the boundary. Also, for any two sets , let .
The proofs make use of the following facts, which are a consequence of Lemma A.1 of [25] and use Assumption 2. Given that has radius , and for some universal constant that does not depend on or . Also, since is the boundary of a ball of radius , .
Proof.
Step 1. We first establish covariance bounds. Fix such that , the latter defined in (4), and set . Trivially,
First consider the case . Let be the cluster containing unit , , and . As a preliminary result, we bound the discrepancy between and .
Let , the distance between and the nearest unit in the boundary of . By Assumption 3, for any ,
The equality holds because conditions on , and by Assumption 4, implies , which implies . The inequality holds because fixes at their realized values. Similarly, , so by the law of total probability and Remark 3, for some universal constant ,
(B.2)
Define . Since , by Assumption 4. Applying the Cauchy-Schwarz and Jensen’s inequalities and (B.2) for ,
for some universal .
Next consider the case . Abbreviate , and redefine . Note that and is the diameter of a cluster, so . Consequently, following the previous argument,
(B.3)
Step 2. For any , let denote rounded down to the nearest integer. Using the covariance bounds derived in step 1,
(B.4)
where takes the part involving and the remainder. Note that can be at most because is the 1-ball centered at the centroid of , and it can be at most because and since .
As discussed at the start of this section, and for some universal by Assumption 2. Then by Assumption 3,
(B.5)
Finally,
(B.6)
∎
Proof of Theorem 3.
Recall that , and define .
Step 1. We show that
Recalling (4), the left-hand side equals
If , then . Using this fact and (B.2) with ,
For , we first establish some covariance bounds. Fix such that , and let . Let , the distance between and the nearest unit in the boundary of . By Assumption 3,
Similarly, , so by the law of total probability and Remark 3, for some universal constant ,
We derive an alternate bound for the case . Let and . Then
since . Moreover, since and is the diameter of a cluster. Consequently, by (B.3),
Applying the covariance bounds,
Step 2. We show that
First, let . By Minkowski’s inequality,
(B.7)
which is by step 1. By Assumption 5, it then suffices to show
(B.8)
We apply Lemma B.2 with and dependency graph defined after (8). The maximum degree of is at most , and . Therefore, by Assumptions 1 and 5,
Since , (B.8) follows. ∎
Lemma B.2 ([41], Theorem 3.6).
Let be random variables with dependency graph such that and . Define , , and . For ,
(B.9)
where is the Wasserstein distance.
Acknowledgments. The author thanks the referees and associate editor for helpful comments that improved the exposition of the paper.
Supplementary Material (supplement.zip): a PDF with proofs omitted in this text and code used to produce the simulation results in § 5.
References
- [1] Andrews, D. (1991). Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 817–858.
- [2] Anselin, L. (2001). Spatial Econometrics. In A Companion to Theoretical Econometrics (B. Baltagi, ed.), Chapter 14. Blackwell Publishing Ltd.
- [3] Aronow, P. and Samii, C. (2017). Estimating Average Causal Effects Under General Interference, with Application to a Social Network Experiment. The Annals of Applied Statistics 11, 1912–1947.
- [4] Aronow, P., Samii, C. and Wang, Y. (2020). Design-Based Inference for Spatial Experiments with Interference. arXiv preprint arXiv:2010.13599.
- [5] Baird, S., Bohren, J., McIntosh, C. and Özler, B. (2018). Optimal Design of Experiments in the Presence of Interference. Review of Economics and Statistics 100, 844–860.
- [6] Basse, G. and Airoldi, E. (2018). Model-Assisted Design of Experiments in the Presence of Network-Correlated Outcomes. Biometrika 105, 849–858.
- [7] Basse, G., Feller, A. and Toulis, P. (2019). Randomization Tests of Causal Effects Under Interference. Biometrika 106, 487–494.
- [8] Blattman, C., Green, D., Ortega, D. and Tobón, S. (2021). Place-Based Interventions at Scale: The Direct and Spillover Effects of Policing and City Services on Crime. Journal of the European Economic Association 19, 2022–2051.
- [9] Chin, A. (2019). Regression Adjustments for Estimating the Global Treatment Effect in Experiments with Interference. Journal of Causal Inference 7.
- [10] Choi, D. (2017). Estimation of Monotone Treatment Effects in Network Experiments. Journal of the American Statistical Association 112, 1147–1155.
- [11] Cliff, A. and Ord, J. (1973). Spatial Autocorrelation. London: Pion.
- [12] Cliff, A. and Ord, J. (1981). Spatial Processes: Models and Applications. London: Pion.
- [13] Conley, T. (1999). GMM Estimation with Cross Sectional Dependence. Journal of Econometrics 92, 1–45.
- [14] Donnelly, C., Woodroffe, R., Cox, D., Bourne, J., Gettinby, G., Le Fevre, A., McInerney, J. and Morrison, I. (2003). Impact of Localized Badger Culling on Tuberculosis Incidence in British Cattle. Nature 426, 834–837.
- [15] Eckles, D., Karrer, B. and Ugander, J. (2017). Design and Analysis of Experiments in Networks: Reducing Bias from Interference. Journal of Causal Inference 5.
- [16] Forastiere, L., Airoldi, E. and Mealli, F. (2021). Identification and Estimation of Treatment and Interference Effects in Observational Studies on Networks. Journal of the American Statistical Association 116, 901–918.
- [17] Getis, A. (2008). A History of the Concept of Spatial Autocorrelation: A Geographer’s Perspective. Geographical Analysis 40, 297–309.
- [18] Giffin, A., Reich, B., Yang, S. and Rappold, A. (2020). Generalized Propensity Score Approach to Causal Inference with Spatial Interference. arXiv preprint arXiv:2007.00106.
- [19] Harshaw, C., Sävje, F., Eisenstat, D., Mirrokni, V. and Pouget-Abadie, J. (2021). Design and Analysis of Bipartite Experiments Under a Linear Exposure-Response Model. arXiv preprint arXiv:2103.06392.
- [20] Hayes, R. and Moulton, L. (2017). Cluster Randomised Trials. Chapman and Hall/CRC.
- [21] Hu, Y., Li, S. and Wager, S. (2022). Average Direct and Indirect Causal Effects Under Interference. Biometrika (forthcoming).
- [21] {barticle}[author] \bauthor\bsnmHu, \bfnmY.\binitsY., \bauthor\bsnmLi, \bfnmS.\binitsS. and \bauthor\bsnmWager, \bfnmS.\binitsS. (\byear2022). \btitleAverage Direct and Indirect Causal Effects Under Interference. \bjournalBiometrika (forthcoming). \endbibitem
- [22] {bbook}[author] \bauthor\bsnmImbens, \bfnmG\binitsG. and \bauthor\bsnmRubin, \bfnmD.\binitsD. (\byear2015). \btitleCausal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. \bpublisherCambridge University Press. \endbibitem
- [23] {barticle}[author] \bauthor\bsnmJagadeesan, \bfnmR.\binitsR., \bauthor\bsnmPillai, \bfnmN.\binitsN. and \bauthor\bsnmVolfovsky, \bfnmA.\binitsA. (\byear2020). \btitleDesigns for Estimating the Treatment Effect in Networks with Interference. \bjournalThe Annals of Statistics \bvolume48 \bpages679–712. \endbibitem
- [24] {barticle}[author] \bauthor\bsnmJenish, \bfnmN.\binitsN. (\byear2016). \btitleSpatial Semiparametric Model with Endogenous Regressors. \bjournalEconometric Theory \bvolume32 \bpages714–739. \endbibitem
- [25] {barticle}[author] \bauthor\bsnmJenish, \bfnmN.\binitsN. and \bauthor\bsnmPrucha, \bfnmI.\binitsI. (\byear2009). \btitleCentral Limit Theorems and Uniform Laws of Large Numbers for Arrays of Random Fields. \bjournalJournal of Econometrics \bvolume150 \bpages86–98. \endbibitem
- [26] {barticle}[author] \bauthor\bsnmJenish, \bfnmN.\binitsN. and \bauthor\bsnmPrucha, \bfnmI.\binitsI. (\byear2011). \btitleOn Spatial Processes and Asymptotic Inference Under Near-Epoch Dependence. \bjournalU. Maryland working paper. \endbibitem
- [27] {barticle}[author] \bauthor\bsnmJenish, \bfnmN.\binitsN. and \bauthor\bsnmPrucha, \bfnmI.\binitsI. (\byear2012). \btitleOn Spatial Processes and Asymptotic Inference Under Near-Epoch Dependence. \bjournalJournal of Econometrics \bvolume170 \bpages178–190. \endbibitem
- [28] {barticle}[author] \bauthor\bsnmLahiri, \bfnmS.\binitsS. (\byear1996). \btitleOn Inconsistency of Estimators Based on Spatial Data Under Infill Asymptotics. \bjournalSankhyā: The Indian Journal of Statistics, Series A \bpages403–417. \endbibitem
- [29] {barticle}[author] \bauthor\bsnmLahiri, \bfnmS.\binitsS. (\byear2003). \btitleCentral Limit Theorems for Weighted Sums of a Spatial Process Under a Class of Stochastic and Fixed Designs. \bjournalSankhyā: The Indian Journal of Statistics \bpages356–388. \endbibitem
- [30] {barticle}[author] \bauthor\bsnmLahiri, \bfnmS.\binitsS. and \bauthor\bsnmZhu, \bfnmJ.\binitsJ. (\byear2006). \btitleResampling Methods for Spatial Regression Models Under a Class of Stochastic Designs. \bjournalThe Annals of Statistics \bvolume34 \bpages1774–1813. \endbibitem
- [31] {barticle}[author] \bauthor\bsnmLeung, \bfnmM.\binitsM. (\byear2022). \btitleCausal Inference Under Approximate Neighborhood Interference. \bjournalEconometrica \bvolume90 \bpages267-293. \endbibitem
- [32] {barticle}[author] \bauthor\bsnmLeung, \bfnmM.\binitsM. (\byear2022). \btitleNetwork Cluster-Robust Inference. \bjournalarXiv preprint arXiv:2103.01470. \endbibitem
- [33] {barticle}[author] \bauthor\bsnmLeung, \bfnmM.\binitsM. (\byear2022). \btitleSupplement to “Rate-Optimal Cluster-Randomized Designs for Spatial Interference”. \bjournalDOI: 10.1214/22-AOS2224SUPP. \endbibitem
- [34] {barticle}[author] \bauthor\bsnmManski, \bfnmC.\binitsC. (\byear2013). \btitleIdentification of Treatment Response with Social Interactions. \bjournalThe Econometrics Journal \bvolume16 \bpagesS1–S23. \endbibitem
- [35] {barticle}[author] \bauthor\bsnmMiguel, \bfnmE.\binitsE. and \bauthor\bsnmKremer, \bfnmM.\binitsM. (\byear2004). \btitleWorms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities. \bjournalEconometrica \bvolume72 \bpages159–217. \endbibitem
- [36] {btechreport}[author] \bauthor\bsnmPaler, \bfnmL.\binitsL., \bauthor\bsnmSamii, \bfnmC.\binitsC., \bauthor\bsnmLisiecki, \bfnmM.\binitsM. and \bauthor\bsnmMorel, \bfnmA.\binitsA. (\byear2015). \btitleSocial and Environmental Impact of the Community Rangers Program in Aceh \btypeTechnical Report, \bpublisherWorld Bank, Washington, DC. \endbibitem
- [37] {barticle}[author] \bauthor\bsnmPark, \bfnmC.\binitsC. and \bauthor\bsnmKang, \bfnmH.\binitsH. (\byear2021). \btitleAssumption-Lean Analysis of Cluster Randomized Trials in Infectious Diseases for Intent-to-Treat Effects and Spillover Effects Among a Vulnerable Subpopulation. \bjournalJournal of the American Statistical Association (forthcoming). \endbibitem
- [38] {barticle}[author] \bauthor\bsnmPeng, \bfnmR.\binitsR., \bauthor\bsnmSun, \bfnmH.\binitsH. and \bauthor\bsnmZanetti, \bfnmL.\binitsL. (\byear2017). \btitlePartitioning Well-Clustered Graphs: Spectral Clustering Works! \bjournalSIAM Journal on Computing \bvolume46 \bpages710–743. \endbibitem
- [39] {barticle}[author] \bauthor\bsnmPollmann, \bfnmM.\binitsM. (\byear2020). \btitleCausal Inference for Spatial Treatments. \bjournalarXiv preprint arXiv:2011.00373. \endbibitem
- [40] {binproceedings}[author] \bauthor\bsnmPouget-Abadie, \bfnmJ.\binitsJ., \bauthor\bsnmMirrokni, \bfnmV.\binitsV., \bauthor\bsnmParkes, \bfnmD.\binitsD. and \bauthor\bsnmAiroldi, \bfnmE.\binitsE. (\byear2018). \btitleOptimizing Cluster-Based Randomized Experiments Under Monotonicity. In \bbooktitleProceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining \bpages2090–2099. \endbibitem
- [41] {barticle}[author] \bauthor\bsnmRoss, \bfnmN.\binitsN. (\byear2011). \btitleFundamentals of Stein’s Method. \bjournalProbability Surveys \bvolume8 \bpages210–293. \endbibitem
- [42] {barticle}[author] \bauthor\bsnmSävje, \bfnmF.\binitsF. (\byear2021). \btitleCausal Inference with Misspecified Exposure Mappings. \bjournalarXiv preprint arXiv:2103.06471. \endbibitem
- [43] {barticle}[author] \bauthor\bsnmSävje, \bfnmF.\binitsF., \bauthor\bsnmAronow, \bfnmP.\binitsP. and \bauthor\bsnmHudgens, \bfnmM.\binitsM. (\byear2021). \btitleAverage Treatment Effects in the Presence of Unknown Interference. \bjournalThe Annals of Statistics \bvolume49 \bpages673–701. \endbibitem
- [44] {barticle}[author] \bauthor\bsnmSussman, \bfnmD.\binitsD. and \bauthor\bsnmAiroldi, \bfnmE.\binitsE. (\byear2017). \btitleElements of Estimation Theory for Causal Effects in the Presence of Network Interference. \bjournalarXiv preprint arXiv:1702.03578. \endbibitem
- [45] {binproceedings}[author] \bauthor\bsnmToulis, \bfnmP.\binitsP. and \bauthor\bsnmKao, \bfnmE.\binitsE. (\byear2013). \btitleEstimation of Causal Peer Influence Effects. In \bbooktitleInternational Conference on Machine Learning \bpages1489–1497. \endbibitem
- [46] {binproceedings}[author] \bauthor\bsnmUgander, \bfnmJ.\binitsJ., \bauthor\bsnmKarrer, \bfnmB.\binitsB., \bauthor\bsnmBackstrom, \bfnmL.\binitsL. and \bauthor\bsnmKleinberg, \bfnmJ.\binitsJ. (\byear2013). \btitleGraph Cluster Randomization: Network Exposure to Multiple Universes. In \bbooktitleProceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining \bpages329–337. \endbibitem
- [47] {barticle}[author] \bauthor\bsnmUgander, \bfnmJ.\binitsJ. and \bauthor\bsnmYin, \bfnmH.\binitsH. (\byear2020). \btitleRandomized Graph Cluster Randomization. \bjournalarXiv preprint arXiv:2009.02297. \endbibitem
- [48] {barticle}[author] \bauthor\bsnmValcu, \bfnmM.\binitsM. and \bauthor\bsnmKempenaers, \bfnmB.\binitsB. (\byear2010). \btitleSpatial Autocorrelation: an Overlooked Concept in Behavioral Ecology. \bjournalBehavioral Ecology \bvolume21 \bpages902–905. \endbibitem
- [49] {barticle}[author] \bauthor\bsnmVerbitsky-Savitz, \bfnmN.\binitsN. and \bauthor\bsnmRaudenbush, \bfnmS.\binitsS. (\byear2012). \btitleCausal Inference Under Interference in Spatial Settings: A Case Study Evaluating Community Policing Program in Chicago. \bjournalEpidemiologic Methods \bvolume1 \bpages107–130. \endbibitem
- [50] {barticle}[author] \bauthor\bsnmViviano, \bfnmD.\binitsD. (\byear2020). \btitleExperimental Design Under Network Interference. \bjournalarXiv preprint arXiv:2003.08421. \endbibitem
- [51] {barticle}[author] \bauthor\bsnmZigler, \bfnmC.\binitsC. and \bauthor\bsnmPapadogeorgou, \bfnmG.\binitsG. (\byear2021). \btitleBipartite Causal Inference with Interference. \bjournalStatistical Science \bvolume36 \bpages109. \endbibitem