Supplementary Materials: Stable Differentiable Causal Discovery
Appendix A Theoretical Results
A.1 Proof of Theorem 1
Before proving Theorem 1, we precisely define cyclic and acyclic matrices and prove a preliminary lemma.
Definition A.1 (Cyclic and acyclic matrices).
Take a matrix $A \in \mathbb{R}^{d \times d}_{\ge 0}$.
We say that $A$ has a cycle of length $k$ if and only if there exist indices $i_1, \dots, i_k \in \{1, \dots, d\}$ such that
(1)  $A_{i_1 i_2} A_{i_2 i_3} \cdots A_{i_{k-1} i_k} A_{i_k i_1} > 0.$
We say that $A$ is cyclic if it contains at least one cycle. We note that if $A$ contains a cycle of length $k$ for some $k \in \mathbb{N}^*$ (the set of strictly positive integers), then $A$ also contains a cycle of length $k' \le d$ (this follows from the pigeonhole principle).
We say that $A$ is acyclic if it does not contain any cycle (or, equivalently, if it does not contain any cycle of length $k \le d$).
Lemma A.2.
For any matrix $A \in \mathbb{R}^{d \times d}_{\ge 0}$:
•  $\operatorname{tr}(A^k) \ge 0$ for any $k \in \mathbb{N}^*$,
•  $A$ has a cycle of length $k$ if and only if $\operatorname{tr}(A^k) > 0$,
•  $\sum_{k=1}^{d} \operatorname{tr}(A^k) = 0$ if and only if $A$ is acyclic.
Proof.
Fix a matrix $A \in \mathbb{R}^{d \times d}_{\ge 0}$. We have
(2)  $\operatorname{tr}(A^k) = \sum_{i_1, \dots, i_k} A_{i_1 i_2} A_{i_2 i_3} \cdots A_{i_k i_1}.$
Each addend is non-negative, so $\operatorname{tr}(A^k) \ge 0$. Furthermore, the total sum is strictly positive if and only if at least one addend is strictly positive. This happens if and only if $A$ has a cycle of length $k$ by definition.
Similarly, we have
(3)  $\sum_{k=1}^{d} \operatorname{tr}(A^k) = \sum_{k=1}^{d} \sum_{i_1, \dots, i_k} A_{i_1 i_2} \cdots A_{i_k i_1}.$
If $\sum_{k=1}^{d} \operatorname{tr}(A^k) > 0$, then one addend is strictly positive, and so there exist $k \le d$ and indices $i_1, \dots, i_k$ such that $A_{i_1 i_2} \cdots A_{i_k i_1} > 0$, i.e., $A$ has a cycle. Reciprocally, if $(i_1, \dots, i_k)$ is a cycle of length $k > d$, then two of its indices are identical by the pigeonhole principle, which provides a cycle of length at most $d$; hence $\operatorname{tr}(A^{k'}) > 0$ for some $k' \le d$. Therefore, $\sum_{k=1}^{d} \operatorname{tr}(A^k) = 0$ if and only if $A$ is acyclic. ∎
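To make the lemma concrete, here is a small numerical check (ours, not from the paper) of the trace criterion on a hand-built graph:

```python
# tr(A^k) > 0 detects cycles of length k in a non-negative matrix (Lemma A.2).
import numpy as np

d = 4
A = np.zeros((d, d))
A[0, 1] = A[1, 2] = A[2, 0] = 0.5  # a 3-cycle: 0 -> 1 -> 2 -> 0
A[2, 3] = 1.0                      # a dangling edge, no extra cycle

traces = {k: np.trace(np.linalg.matrix_power(A, k)) for k in range(1, d + 1)}
print(traces)  # tr(A^3) = 3 * 0.5**3 > 0; all other traces are 0
assert sum(traces.values()) > 0    # the matrix is cyclic
```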
We recall Theorem 1.
Theorem 1 (restated). Let $(a_k)_{k \ge 1}$ be non-negative reals with $a_k > 0$ for every $k \le d$, and let $h(A) = \sum_{k=1}^{\infty} a_k \operatorname{tr}(A^k)$ be the associated power series trace (PST) constraint. Then (i) $h(A) = 0$ if and only if $A$ is acyclic; (ii) $h(A) \ge 0$ for all $A \in \mathbb{R}^{d \times d}_{\ge 0}$; and (iii) $h$ is differentiable on the set of matrices whose spectral radius is smaller than the radius of convergence $R$ of $x \mapsto \sum_k a_k x^k$, with $\nabla h(A) = \sum_{k=1}^{\infty} k\, a_k\, (A^{k-1})^{\top}$.
Proof.
Fix a matrix $A \in \mathbb{R}^{d \times d}_{\ge 0}$ and a sequence $(a_k)_{k \ge 1}$ of non-negative reals such that $a_k > 0$ for any $k \le d$.
By definition, we have
(4)  $h(A) = \sum_{k=1}^{\infty} a_k \operatorname{tr}(A^k)$
(5)  $\;\ge \sum_{k=1}^{d} a_k \operatorname{tr}(A^k).$
1. By Lemma A.2, $\operatorname{tr}(A^k) \ge 0$ for every $k$, and so $h(A) \ge 0$. This proves the second property.
2. Then, $h(A) = 0$ if and only if $\operatorname{tr}(A^k) = 0$ for all $k$ for which $a_k > 0$. Since $a_k > 0$ for any $k \le d$, we conclude that if $h(A) = 0$, then $A$ does not contain cycles of length $k \le d$, so $A$ is acyclic by Definition A.1. Reciprocally, if $A$ is acyclic, it does not contain cycles of any length, so every $\operatorname{tr}(A^k) = 0$ and $h(A) = 0$. This proves the first property.
3. Finally, if we write $R$ for the radius of convergence of $x \mapsto \sum_k a_k x^k$, then $\sum_k a_k \operatorname{tr}(A^k)$ converges absolutely over the set of matrices with spectral radius smaller than $R$, so $h$ is differentiable there with gradient $\nabla h(A) = \sum_{k=1}^{\infty} k\, a_k\, (A^{k-1})^{\top}$.
This concludes the proof.
∎
A.2 Proof of Theorem 2
We recall Theorem 2.
Theorem 2 (restated). Every PST constraint is E-unstable, V-unstable, and D-unstable.
Proof.
Take a PST constraint $h(A) = \sum_{k=1}^{\infty} a_k \operatorname{tr}(A^k)$ for some $(a_k)_{k \ge 1}$ with $a_k > 0$ for $k \le d$.
We will show the E-unstable and V-unstable results using a particular adjacency matrix $C_\eta$.
Define $C_\eta$ as the adjacency matrix of the cycle $1 \to 2 \to \dots \to d \to 1$ with edge weights of $\eta > 0$. That is:
(7)  $(C_\eta)_{ij} = \eta$ if $j = i + 1$ or $(i, j) = (d, 1)$, and $0$ otherwise.
We have $(C_\eta^k)_{ij} = \eta^k$ if $j = i + k \pmod d$ and $0$ otherwise, so $\operatorname{tr}(C_\eta^k) = d\, \eta^k$ when $d$ divides $k$, and $0$ otherwise.
We obtain, for any edge $(i, i+1)$ of the cycle,
(8)  $\big(\nabla h(C_\eta)\big)_{i, i+1} = \sum_{k=1}^{\infty} k\, a_k\, (C_\eta^{k-1})_{i+1, i} = \sum_{m=1}^{\infty} m d\, a_{md}\, \eta^{md - 1}.$
•  In particular, we have for any $\eta > 1$, $\|\nabla h(C_\eta)\|_\infty \ge d\, a_d\, \eta^{d-1}$ (since $a_d > 0$), which grows exponentially with $d$. This proves the E-instability.
•  Define $\eta_0 = \min(1, R)/2$, where $R$ is the radius of convergence of $x \mapsto \sum_k a_k x^k$.
Then, for any $\eta \le \eta_0$,
(9)   $\|\nabla h(C_\eta)\|_\infty = \sum_{m=1}^{\infty} m d\, a_{md}\, \eta^{md-1}$
(10)  $= \sum_{m=1}^{\infty} m d\, a_{md}\, \eta_0^{md-1} \left(\frac{\eta}{\eta_0}\right)^{md-1}$
(11)  $\le \left(\frac{\eta}{\eta_0}\right)^{d-1} \sum_{m=1}^{\infty} m d\, a_{md}\, \eta_0^{md-1}$
(12)  $\le d \left(\frac{\eta}{\eta_0}\right)^{d-1} S,$
(13)  where $S = \sum_{k=1}^{\infty} k\, a_k\, \eta_0^{k-1}$,
where we obtain Equation 11 by noting that $\eta / \eta_0 \le 1$ and $md - 1 \ge d - 1$. Finally, since $\eta_0 < R$, $S$ is finite. Hence the result: for $\eta < \eta_0$, the gradient vanishes exponentially fast with $d$, which proves the V-instability.
The D-instability result follows from the definition of the radius of convergence. ∎
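These instabilities can be observed numerically. The sketch below (our illustration, not the paper's code) uses the exponential PST constraint $h(A) = \operatorname{tr}(e^{A}) - d$ (the NOTEARS-style constraint, here applied directly to a non-negative matrix) on the cycle matrix $C_\eta$: the gradient along an existing edge vanishes with $d$ at fixed $\eta$, and blows up with the edge weight at fixed $d$.

```python
# Vanishing/exploding gradients of a PST constraint on the d-cycle (requires scipy).
import numpy as np
from scipy.linalg import expm

def cycle_matrix(d, eta):
    C = np.zeros((d, d))
    for i in range(d):
        C[i, (i + 1) % d] = eta
    return C

def edge_gradient(d, eta):
    # For h(A) = tr(exp(A)) - d, the gradient is exp(A)^T; we read off the entry
    # for the existing edge 0 -> 1. On C_eta it equals eta^(d-1)/(d-1)! + ...
    return expm(cycle_matrix(d, eta)).T[0, 1]

print([edge_gradient(d, 0.9) for d in (5, 10, 20, 40)])  # -> 0 exponentially fast (V-instability)
print([edge_gradient(10, eta) for eta in (1, 10, 100)])  # blows up with the weights (E-instability)
```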
A.3 Proof of Theorem 3
We recall Theorem 3.
Theorem 3 (restated). The spectral acyclicity constraint $h_\rho(A) = \rho(A)$ (the spectral radius of $A$, i.e., the largest modulus of its eigenvalues) is a valid acyclicity constraint: $h_\rho(A) = 0$ if and only if $A$ is acyclic. Moreover, $h_\rho$ is differentiable at every matrix with mutually distinct eigenvalues, with $\nabla h_\rho(A) = \frac{u v^\top}{u^\top v}$, where $u$ and $v$ are the left and right eigenvectors associated with the dominant eigenvalue.
The two properties stated in Theorem 3 are standard results.
Proof.
– We provide a proof of the first statement for the sake of completeness.
If $h_\rho(A) = 0$, then all eigenvalues of $A$ are zero. But since $\operatorname{tr}(A^k) = \sum_{i=1}^{d} \lambda_i^k$, we have $\operatorname{tr}(A^k) = 0$ for any $k$, and by Lemma A.2, $A$ is acyclic.
Assume $A$ is acyclic. Then $A^d = 0$: each entry of $A^d$ is a sum over walks of length $d$, and any such walk revisits a node (pigeonhole), which would provide a cycle. But then all eigenvalues are 0 (as for an eigenvalue $\lambda$ with associated eigenvector $v \ne 0$, we have $\lambda^d v = A^d v = 0$).
Hence, $h_\rho$ is a valid acyclicity constraint.
– magnus1985differentiating shows that $h_\rho$ is differentiable at every $A$ that has mutually distinct eigenvalues, with the gradient formula $\nabla h_\rho(A) = \frac{u v^\top}{u^\top v}$ stated above. The set of matrices with all distinct eigenvalues is dense in the set of matrices (horn2012matrix, Theorem 2.4.7.1), which proves the result.
∎
A.4 Proof of Theorem 4
We recall Theorem 4.
Theorem 4 (restated). The spectral acyclicity constraint $h_\rho$ is E-stable, V-stable, and D-stable.
Proof.
We prove each stability criterion.
– E-stable: For any $d$ and matrix $A \in \mathbb{R}^{d \times d}_{\ge 0}$, the value $h_\rho(A) = \rho(A) \le \max_i \sum_j A_{ij}$ grows at most linearly with the entries of $A$, so neither the value nor its gradient explodes exponentially with $d$ (on the cycle matrices $C_\eta$ of Theorem 2, for instance, $h_\rho(C_\eta) = \eta$ and every gradient entry equals $1/d$).
– V-stable: For any $d$ and matrix $A$ at which the gradient of $h_\rho$ is defined, $\|\nabla h_\rho(A)\|_F = \frac{\|u\|_2 \|v\|_2}{|u^\top v|} \ge 1$ by the Cauchy–Schwarz inequality, so the gradient does not vanish.
– D-stable: Every matrix has $d$ complex eigenvalues ($\mathbb{C}$ is algebraically closed), so $h_\rho$ is well defined everywhere. In addition, Theorem 3 proved that $h_\rho$ is differentiable almost everywhere.
Hence, $h_\rho$ is a stable constraint. ∎
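As a sanity check of the gradient formula (our snippet, not from the paper's code), one can compare $\frac{u v^\top}{u^\top v}$ against a finite-difference estimate of $\rho$ on a random positive matrix, whose dominant eigenvalue is real and simple by Perron–Frobenius:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
A = rng.random((d, d))

def rho(M):
    return np.abs(np.linalg.eigvals(M)).max()

# Closed-form gradient u v^T / (u^T v), with v (right) and u (left) eigenvectors
# of the dominant eigenvalue; the formula is invariant to eigenvector sign flips.
w, V = np.linalg.eig(A)
v = np.real(V[:, np.argmax(np.abs(w))])
wl, U = np.linalg.eig(A.T)
u = np.real(U[:, np.argmax(np.abs(wl))])
grad_formula = np.outer(u, v) / (u @ v)

# Finite-difference check on one coordinate.
eps = 1e-6
E = np.zeros_like(A); E[1, 2] = eps
fd = (rho(A + E) - rho(A - E)) / (2 * eps)
print(fd, grad_formula[1, 2])  # should agree to ~1e-6
```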
A.5 Proof for Stage 1 – Theorem 5
In this section, we show that if the optimization problem of stage 1 is solved exactly, without relaxation and with infinite data, then stage 1 does not remove any true causal parent.
The optimization problem solved in stage 1 is given in the main text as
  $\hat\theta \in \arg\min_{\theta} \; \hat{\mathbb{E}}\big[-\log p_\theta(x)\big] + \gamma_1 \|\theta\|_1 + \gamma_2 \|\theta\|_2^2,$
where $\hat{\mathbb{E}}$ denotes the empirical average over the training data.
With infinite data, the optimization problem writes
(14)  $\theta^* \in \arg\min_{\theta} \; \sum_{k=0}^{K} w_k\, \mathbb{E}_{x \sim p^{(k)}}\big[-\log p^{(k)}_\theta(x)\big] + \gamma_1 \|\theta\|_1 + \gamma_2 \|\theta\|_2^2,$
where $w_k$ is the proportion of data coming from intervention $k$ (with $k = 0$ denoting the observational regime).
Furthermore, in its non-relaxed form, Equation 14 above writes
(15)  $\theta^* \in \arg\min_{\theta} \; \sum_{k=0}^{K} w_k\, \mathbb{E}_{x \sim p^{(k)}}\big[-\log p^{(k)}_\theta(x)\big] + \lambda\, |G(\theta)|,$
where the L1 and L2 regularizations are reverted back into a regularization on the number of edges of the induced graph (for some $\lambda > 0$).
Since we are interested in the graph induced by $\theta$, which we write $G(\theta)$, we can rewrite Equation 15 as an optimization over graphs,
(16)  $\hat{G} \in \arg\min_{G} \; \min_{\theta : G(\theta) = G} \sum_{k=0}^{K} w_k\, \mathbb{E}_{x \sim p^{(k)}}\big[-\log p^{(k)}_\theta(x)\big] + \lambda\, |G|.$
Finally, since there is no constraint over $G$ other than no self-loops, Equation 16 can be solved as $d$ independent optimization problems, each one determining the parents of a node $j$ in the graph $G$,
(17)  $\hat{P}_j \in \arg\min_{P \subseteq \{1,\dots,d\} \setminus \{j\}} \; \min_{\theta} \sum_{k=0}^{K} w_k\, \mathbb{E}_{x \sim p^{(k)}}\big[-\log p^{(k)}_\theta(x_j \mid x_P)\big] + \lambda\, |P|.$
Furthermore, whenever $j$ is a target of intervention $k$, our model class has $p^{(k)}_\theta(x_j \mid x_P) = p^{(k)}(x_j)$ — we know we have perfect interventions and the interventions are known. So the corresponding term is not related to the coordinates of $\theta$ that define the parents of $j$. That is to say, Equation 17 is equivalent to
(18)  $\hat{P}_j \in \arg\min_{P \subseteq \{1,\dots,d\} \setminus \{j\}} \; \min_{\theta} \sum_{k : j \notin I_k} w_k\, \mathbb{E}_{x \sim p^{(k)}}\big[-\log p_\theta(x_j \mid x_P)\big] + \lambda\, |P|,$
where $I_k$ denotes the set of variables targeted by intervention $k$.
We recall Theorem 5.
Theorem 5 (restated). Under the assumptions below, the unique solution of Equation 18 is $\hat{P}_j = M_j$, the Markov boundary of $j$ in the true causal graph $G^*$. In particular, since the Markov boundary contains the parents, stage 1 does not remove any true causal parent.
The assumptions are similar to the ones detailed in brouillard2020differentiable to guarantee that differentiable causal discovery can identify causal graphs.
The assumptions are:
– We observe some observational data: $w_0 > 0$.
– The model class can express the true model: for every node $j$ and conditioning set $P$, there exists $\theta$ such that $p_\theta(x_j \mid x_P) = p(x_j \mid x_P)$.
– The observational distribution is faithful to the graph $G^*$ (that is, any edge in $G^*$ indeed results in a nonzero cause-and-effect relation in the distribution; see neapolitan2004learning for more details).
– The true distribution and any distribution of the model class have strictly positive densities. This avoids technical difficulties when forming conditional distributions (e.g., $p(x_j \mid x_P)$).
– The expectations $\mathbb{E}_{x \sim p^{(k)}}[-\log p_\theta(x)]$ are well defined (they are finite). This enables us to consider the likelihood expectations in the first place.
– The regularization strength $\lambda$ is strictly positive and small enough (see the proof for how small).
Proof.
Fix $j \in \{1, \dots, d\}$.
For clarity of notation, we rewrite Equation 18 as
(19)  $\hat{P} \in \arg\min_{P \subseteq \{1,\dots,d\} \setminus \{j\}} \; \min_{\theta} \mathbb{E}_j\big[-\log p_\theta(x_j \mid x_P)\big] + \lambda |P|,$
where the mixture over the regimes $k$ with $j \notin I_k$ is fully captured by the notation $\mathbb{E}_j$.
Then, define
(20)  $\mathcal{L}(P) = \min_{\theta} \mathbb{E}_j\big[-\log p_\theta(x_j \mid x_P)\big].$
Further, define $M_j$ to be the Markov boundary of node $j$ in the true causal graph $G^*$.
We will show that $\mathcal{L}(P) \ge \mathcal{L}(M_j)$ for any other $P$.
We compute,
(21)  $\mathcal{L}(P) = \min_{\theta} \mathbb{E}_j\big[-\log p_\theta(x_j \mid x_P)\big]$
(22)  $= \mathbb{E}_j\big[-\log p(x_j \mid x_P)\big] + \min_{\theta} \mathbb{E}_j\Big[\operatorname{KL}\big(p(\cdot \mid x_P) \,\|\, p_\theta(\cdot \mid x_P)\big)\Big]$
(23)  $\ge \mathbb{E}_j\big[-\log p(x_j \mid x_{-j})\big] = \mathcal{L}(M_j).$
Line 22 comes from adding and subtracting the term $\mathbb{E}_j[\log p(x_j \mid x_P)]$: the cross-entropy is decomposed into the entropy of the true conditional plus a KL divergence, where the second expectation is in the KL divergence. We use the assumption of strictly positive density here to define the conditional without technical difficulties.
Line 23 comes from the non-negativity of the KL divergence, the fact that conditioning on more variables cannot increase the expected negative log-likelihood, the assumption of sufficient model class capacity, and the definition of the Markov boundary. Indeed, we first have $p(x_j \mid x_{-j}) = p(x_j \mid x_{M_j})$ by definition of the Markov boundary $M_j$, and since the model class is expressive enough, there exists $\theta$ such that $p_\theta(x_j \mid x_{M_j}) = p(x_j \mid x_{M_j})$, so $\mathcal{L}(M_j) = \mathbb{E}_j[-\log p(x_j \mid x_{M_j})]$.
Let us finally define $\Delta = \min \{ \mathcal{L}(P) - \mathcal{L}(M_j) : P \text{ such that } \mathcal{L}(P) > \mathcal{L}(M_j) \} > 0$ and fix any $\lambda < \Delta / d$.
Let us assume now that $\mathcal{L}(\hat{P}) + \lambda |\hat{P}| \le \mathcal{L}(M_j) + \lambda |M_j|$ for some $\hat{P} \ne M_j$, and show that we obtain contradictions.
First, we would have $\mathcal{L}(\hat{P}) - \mathcal{L}(M_j) \le \lambda (|M_j| - |\hat{P}|) \le \lambda d < \Delta$. In particular, we deduce that $|\hat{P}| \le |M_j|$ (since $\mathcal{L}(\hat{P}) \ge \mathcal{L}(M_j)$).
Now, two possibilities:
1. If $\mathcal{L}(\hat{P}) > \mathcal{L}(M_j)$, then $\mathcal{L}(\hat{P}) - \mathcal{L}(M_j) \ge \Delta$ by definition of $\Delta$, which is absurd.
2. If $\mathcal{L}(\hat{P}) = \mathcal{L}(M_j)$, then $\mathbb{E}_j\big[\operatorname{KL}\big(p(\cdot \mid x_{-j}) \,\|\, p(\cdot \mid x_{\hat{P}})\big)\big] = 0$. This implies that $p(x_j \mid x_{-j}) = p(x_j \mid x_{\hat{P}})$ for almost all $x$, since $p$ has positive density and $w_0 > 0$. Hence, the conditionals $p(x_j \mid x_{-j})$ and $p(x_j \mid x_{\hat{P}})$ are identical, which makes $\hat{P}$ a Markov blanket of $j$. But then, by minimality of the Markov boundary in a faithful graph, we have $M_j \subseteq \hat{P}$. Remember that we had deduced $|\hat{P}| \le |M_j|$. So $\hat{P} = M_j$, which contradicts $\hat{P} \ne M_j$.
This ends the proof: the unique solution of Equation 18 is $\hat{P} = M_j$, which contains all the true causal parents of $j$. ∎
A.6 Proof for Stage 2
Since stage 1 does not remove any true causal parents, Theorem 1 of brouillard2020differentiable remains valid.
A.7 Lemma: Asymptotic Bound on the Number of Edges Returned in Stage 1
We denote the Markov boundary of node $j$ in $G^*$ by $M_j$, and recall that $M_j$ consists of the parents of $j$, the children of $j$, and the other parents of the children of $j$ (the co-parents).
The following lemma upper-bounds the theoretical number of edges returned by stage 1 when each node has at most $p$ parents.
Lemma A.3.
Assume $G^*$ is sparse such that each node has at most $p$ parents. Then, the total size of all the Markov boundaries is upper-bounded by $\sum_{j=1}^{d} |M_j| \le 2dp + dp^2 = O(dp^2)$.
Proof.
First, note that if each node has at most $p$ parents, then the total number of edges satisfies $\sum_j |\mathrm{Pa}_j| = \sum_j |\mathrm{Ch}_j| \le dp$. Finally,
(27)  $\sum_{j=1}^{d} |M_j| \le \sum_{j=1}^{d} \Big( |\mathrm{Pa}_j| + |\mathrm{Ch}_j| + \sum_{c \in \mathrm{Ch}_j} |\mathrm{Pa}_c| \Big)$
(28)  $\le \sum_{j=1}^{d} |\mathrm{Pa}_j| + \sum_{j=1}^{d} |\mathrm{Ch}_j| + p \sum_{j=1}^{d} |\mathrm{Ch}_j|$
(29)  $\le dp + dp + p \cdot dp$
(30)  $= 2dp + dp^2$
(31)  $= O(dp^2).$ ∎
Appendix B Methods
B.1 Model Details
In SDCD, the conditional distributions, $p_\theta(x_j \mid x_{-j})$, are modeled as Gaussian distributions whose mean and variance are learned by a neural network that takes all of the other variables $x_{-j}$ as input. The initial layer of the network applies $d$ independent linear transformations followed by a sigmoid nonlinearity to the input and outputs $d$ hidden states of size 10. Each hidden state contains the features used to predict one variable. Each hidden state is fed into two linear layers: one to predict the mean parameter of the conditional and one to predict the variance parameter of the conditional. For the variance, a softplus operation is applied to the output of the linear layer to constrain the variance to be strictly positive. A sketch of this architecture follows.
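Below is a minimal PyTorch sketch of our reading of this architecture. The tensor names (`W1`, `w_mu`, `w_var`) and the initialization scale are ours; only the layer sizes, nonlinearities, and the diagonal mask come from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGaussians(nn.Module):
    """p(x_j | x_-j) = N(mu_j(x), sigma_j^2(x)) for every variable j."""
    def __init__(self, d, hidden=10):
        super().__init__()
        self.W1 = nn.Parameter(0.01 * torch.randn(d, hidden, d))  # one input layer per variable
        self.b1 = nn.Parameter(torch.zeros(d, hidden))
        self.w_mu = nn.Parameter(0.01 * torch.randn(d, hidden))   # mean head
        self.b_mu = nn.Parameter(torch.zeros(d))
        self.w_var = nn.Parameter(0.01 * torch.randn(d, hidden))  # variance head
        self.b_var = nn.Parameter(torch.zeros(d))
        self.register_buffer("mask", 1.0 - torch.eye(d))          # no self-loops

    def forward(self, x):                        # x: (batch, d)
        W1 = self.W1 * self.mask[:, None, :]     # zero out x_j when predicting x_j
        h = torch.sigmoid(torch.einsum("jhi,bi->bjh", W1, x) + self.b1)
        mu = (h * self.w_mu).sum(-1) + self.b_mu
        var = F.softplus((h * self.w_var).sum(-1) + self.b_var) + 1e-6
        return mu, var                           # each of shape (batch, d)
```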
B.2 Algorithm Details
Spectral Acyclicity Constraint Estimation. As described in Theorem 3, the gradient of the spectral acyclicity constraint can be computed as $\nabla h_\rho(A) = \frac{u v^\top}{u^\top v}$, where $u$ and $v$ are the left and right eigenvectors of $A$, respectively, associated with its dominant eigenvalue. Using the power iteration method, which involves a fixed number of matrix-vector multiplications, $h_\rho(A)$ can be estimated in $O(d^2)$ time. Specifically, the updates are as follows:
  $u \leftarrow \frac{A^\top u}{\|A^\top u\|_2}, \qquad v \leftarrow \frac{A v}{\|A v\|_2}, \qquad h_\rho(A) \approx \frac{u^\top A v}{u^\top v},$
where $u$ and $v$ are initialized as random Gaussian vectors at the very first epoch of SDCD. In our implementation, we use 15 iterations to estimate the spectral acyclicity constraint value.
Importantly, we re-use the estimates of $u$ and $v$ from one epoch to the next, as we do not expect $A$ (and its eigenvectors) to change drastically between consecutive epochs.
Hence, at each epoch, we initialize $u$ and $v$ with their values from the previous epoch and perform 15 power iterations.
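A minimal sketch of this scheme, with variable names of our choosing:

```python
import torch

@torch.no_grad()
def power_iterate(A, u, v, n_iter=15, eps=1e-8):
    # 15 matrix-vector products per epoch; u, v are carried over between epochs.
    for _ in range(n_iter):
        u = A.T @ u
        u = u / (u.norm() + eps)
        v = A @ v
        v = v / (v.norm() + eps)
    return u, v

def spectral_constraint(A, u, v):
    # Rayleigh-style estimate of the dominant eigenvalue. Treating u, v as
    # constants, the gradient w.r.t. A is exactly u v^T / (u^T v).
    return (u @ A @ v) / ((u @ v) + 1e-8)

# Per epoch: u, v = power_iterate(A.detach(), u, v); loss += gamma * spectral_constraint(A, u, v)
```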
SDCD Algorithm. The SDCD algorithm follows a two-stage procedure. In the first stage, the coefficient of the spectral acyclicity constraint, $\gamma$, is fixed at zero. We use an Adam optimizer with a learning rate specific to stage 1 to perform minibatch gradient-based optimization. The coefficients of the L1 and L2 penalties, $\gamma_1$ and $\gamma_2$, respectively, are fixed throughout training. The stage 1 training loss is written as:
  $\mathcal{L}_1(\theta) = \hat{\mathbb{E}}\big[-\log p_\theta(x)\big] + \gamma_1 \|\theta\|_1 + \gamma_2 \|\theta\|_2^2.$
To prevent the model from learning implicit self-loops, the weights corresponding to the predicted variable are masked out in every hidden state output by the initial neural network layer. Thus, the prediction of each variable cannot be a function of that same variable.
In interventional regimes, the log-likelihood terms corresponding to the prediction of intervened variables are zeroed out. The intervened variables do not have to be modeled because we assume perfect interventions. A sketch of the resulting stage-1 objective follows.
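Putting the pieces together, here is our assembly of the stage-1 objective (`model` is the `ConditionalGaussians` sketch from B.1, and `intv_mask` is a hypothetical per-sample indicator that is 1 where a variable was intervened on):

```python
import torch

def stage1_loss(model, x, intv_mask, gamma_l1, gamma_l2):
    mu, var = model(x)
    # Gaussian negative log-likelihood, per sample and per variable.
    nll = 0.5 * (torch.log(2 * torch.pi * var) + (x - mu) ** 2 / var)
    nll = (nll * (1.0 - intv_mask)).sum(dim=1).mean()  # zero out intervened targets
    reg = gamma_l1 * model.W1.abs().sum() + gamma_l2 * (model.W1 ** 2).sum()
    return nll + reg
```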
Stage 1 is run for a fixed number of epochs. By default, stage 1 also has an early stopping mechanism that uses the reconstruction loss of a held-out validation set of data (sampled uniformly at random from the training set) as the early stopping metric. If the validation reconstruction loss does not achieve a new minimum after a given number of epochs, the stage 1 training loop is exited.
At the end of stage 1, the learned input-layer weights are used to compute a set of removed edges for stage 2. Let $W_1 \in \mathbb{R}^{d \times h \times d}$ represent the input-layer weights. Then, each value of the implicitly defined weighted adjacency matrix is computed as the L2 vector norm of the corresponding set of weights (i.e., $A_{ij} = \|(W_1)_{j, \cdot, i}\|_2$). This weighted adjacency matrix is discretized with a fixed threshold, $\tau_1$, such that each edge $(i, j)$ is removed if it falls below the threshold (i.e., $A_{ij} < \tau_1$), as in the sketch below.
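In code, continuing the sketches above (the names `W1` and `tau1` are ours):

```python
def implicit_adjacency(model):
    # A[i, j] = L2 norm of the first-layer weights through which x_i influences x_j.
    # model.W1 has shape (j, hidden, i); the norm over the hidden axis gives (j, i).
    return model.W1.norm(dim=1).T

def removed_edges(model, tau1):
    A = implicit_adjacency(model)
    return A < tau1  # boolean mask of edges dropped before stage 2
```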
In stage 2, the spectral acyclicity constraint is introduced. As in stage 1, we use an Adam optimizer with a stage-specific learning rate and perform minibatch gradient-based optimization. Once again, the L1 and L2 coefficients, $\gamma_1$ and $\gamma_2$, are fixed throughout training. Rather than a fixed constraint coefficient $\gamma$, SDCD takes an increment value, $\gamma_{\text{inc}}$, determining the rate at which $\gamma$ increases every epoch. The training loss for stage 2 is as follows:
  $\mathcal{L}_2(\theta) = \hat{\mathbb{E}}\big[-\log p_\theta(x)\big] + \gamma_1 \|\theta\|_1 + \gamma_2 \|\theta\|_2^2 + \gamma_t\, h_\rho(A_\theta), \qquad \gamma_t = t \cdot \gamma_{\text{inc}},$
where $t$ is the epoch index and $A_\theta$ is the implicit weighted adjacency matrix.
The same masking strategy as in stage 1 is used to prevent self-loops in $A_\theta$. In addition, the input-layer weights corresponding to the edges removed at the end of stage 1 are also masked.
Like before, the reconstruction loss terms corresponding to intervened variables are removed from the loss.
To reduce the sensitivity of stage 2 to the choice of $\gamma_{\text{inc}}$ and to prevent the acyclicity constraint term from dominating the loss, the linear increment schedule is frozen once the adjacency matrix, thresholded at the final threshold $\tau_2$, forms a DAG. In practice, the DAG check is performed every 20 epochs. If the adjacency matrix becomes cyclic again during training, the increment schedule resumes increasing from where it left off.
The early stopping metric is computed as in stage 1, but in stage 2, early stopping can only kick in once $\gamma$ has been frozen. If the schedule is resumed because a cycle is reintroduced, the early stopping counter is reset.
Lastly, once stage 2 is complete, the implicit adjacency matrix $A_\theta$ is computed and thresholded with the fixed threshold $\tau_2$. All values exceeding the threshold (i.e., $A_{ij} > \tau_2$) are considered edges in the final graph prediction.
The thresholded adjacency matrix may contain cycles if stage 2 runs to completion without hitting early stopping. To ensure a DAG, we follow the greedy edge selection procedure detailed in Algorithm 2; a runnable sketch follows.
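Here is our implementation of the greedy procedure described in Algorithm 2, using networkx for the acyclicity check:

```python
import networkx as nx

def dag_trim(A, tau):
    """Keep the heaviest edges above threshold tau that do not create a cycle."""
    d = len(A)
    candidates = [(A[i][j], i, j) for i in range(d) for j in range(d)
                  if i != j and A[i][j] > tau]
    candidates.sort(reverse=True)          # decreasing weight
    G = nx.DiGraph()
    G.add_nodes_from(range(d))
    for w, i, j in candidates:
        G.add_edge(i, j)
        if not nx.is_directed_acyclic_graph(G):
            G.remove_edge(i, j)            # adding (i, j) would create a cycle
    return set(G.edges)
```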
Pseudocode for a simplified SDCD algorithm (excludes freezing and early stopping) is provided in Algorithm 1.
Time and Space Complexity. The time complexity of each iteration of SDCD is $O(d^2)$. The forward pass in stage 1 can be computed in $O(d^2)$ time: each of the $d$ prediction problems can be computed independently in $O(d)$ time, which also allows the problems to be parallelized. Stage 2 also takes $O(d^2)$ time, as the spectral acyclicity constraint and the forward pass both take $O(d^2)$ time to compute. Thus, the time complexity of each iteration in both stages is $O(d^2)$.
If the sparsity pattern of the underlying causal graph is known beforehand such that each variable has at most $p$ parents, we can further tighten the time complexity of SDCD. By Section A.7, we know the size of the set of remaining edges after stage 1 is $O(dp^2)$. Using sparse matrix multiplication, the spectral acyclicity constraint can be computed in $O(dp^2)$, which is effectively linear in $d$ if $p \ll d$. However, this improvement only becomes significant for large $d$ (from experiments not reported in this paper).
The space complexity of the algorithm is $O(d^2)$, as the number of parameters in the input layer scales quadratically with the number of variables.
Algorithm 1 (SDCD, simplified; excludes freezing and early stopping). Input: data and hyperparameters. Initialize $\theta$ with RandomGaussianInit(). While stage-1 epochs remain: take a minibatch gradient step on the stage-1 loss. Compute the removed edges, re-initialize $\theta$ with RandomGaussianInit(), and apply the stage-2 mask. While stage-2 epochs remain: increment $\gamma$ and take a minibatch gradient step on the stage-2 loss.
Algorithm 2 (DAGTrim). Input: weighted adjacency matrix $A$ and threshold $\tau_2$. Initialize the set of final edges as empty. Collect the candidate edges above threshold $\tau_2$ and sort them by decreasing weight $A_{ij}$. For each candidate edge: if the graph with the edge added is still acyclic, add it to the final edges (we add the edge only if it does not create a cycle).
Appendix C Empirical Studies Details
C.1 Simulation Details
To judge the performance of SDCD against existing methods on both interventional and observational data, we generated simulated data according to the following procedure (a code sketch follows the list):
– Draw a random undirected graph over $d$ nodes from the Erdős–Rényi distribution, with an average of $s$ edges per node.
– Convert the undirected graph into a DAG by orienting each edge $\{i, j\}$ as $i \to j$ if $\pi(i) < \pi(j)$, where $\pi$ is a random permutation of the nodes.
– Form $d + 1$ possible intervention sets that target one variable at a time: $I_0 = \emptyset$ and $I_k = \{k\}$ for $k = 1, \dots, d$.
– Draw a set of $d$ random fully connected neural networks, each one with one 100-dimensional hidden layer. Each neural network parametrizes the mean of the observational conditional distribution of one variable given its parents.
– For intervention distribution $k$, perform a hard intervention on variable $k$ and set its conditional to a fixed distribution that does not depend on its parents.
– Draw the data according to the model, with 10,000 observational samples and 500 extra interventional samples per target variable.
– Standardize the data.
We consider several values of $s$ and $d$ to simulate different scenarios.
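The following sketch implements this pipeline (our code; the hard-intervention marginal $\mathcal{N}(0,1)$, the tanh activation, the output scaling, and the unit noise variance are assumptions where the text leaves details unspecified):

```python
import numpy as np
import networkx as nx

def simulate(d, s, n_obs=10_000, n_int=500, seed=0):
    rng = np.random.default_rng(seed)
    # Erdos-Renyi graph with ~s*d edges, oriented by a random permutation -> DAG.
    perm = rng.permutation(d)
    U = nx.gnp_random_graph(d, 2 * s / (d - 1), seed=seed)
    dag = nx.DiGraph()
    dag.add_nodes_from(range(d))
    dag.add_edges_from((i, j) if perm[i] < perm[j] else (j, i) for i, j in U.edges)
    # One random 1-hidden-layer MLP (100 units) per node for the conditional mean.
    mlps = {j: (rng.normal(size=(d, 100)), rng.normal(size=100)) for j in range(d)}

    def draw(n, target=None):
        X = np.zeros((n, d))
        for j in nx.topological_sort(dag):
            if j == target:                        # hard intervention (assumed N(0, 1))
                X[:, j] = rng.normal(size=n)
            else:
                pa = list(dag.predecessors(j))
                xin = np.zeros((n, d))
                xin[:, pa] = X[:, pa]              # only parents influence the mean
                W, wout = mlps[j]
                X[:, j] = np.tanh(xin @ W) @ wout / 10.0 + rng.normal(size=n)
        return X

    data = [draw(n_obs)] + [draw(n_int, target=k) for k in range(d)]
    X = np.concatenate(data)
    return (X - X.mean(0)) / X.std(0), dag         # standardized data and true DAG
```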
C.2 Choice of Hyperparameters
We fixed the hyperparameters (learning rates, penalty coefficients, increment value, and thresholds) to constant values. We found that these selections worked well empirically across multiple simulated datasets, and they were used in all experiments without simulation-specific fine-tuning.
Each stage was run for 2000 epochs with a batch size of 256, and the validation loss was computed over a held-out fraction of the training dataset (20% of the data) every 20 epochs for early stopping. In stage 2, the DAG check of the implicit adjacency matrix was performed every 20 epochs before the validation loss computation.
C.3 Baseline Methods
Here, we provide details on the baseline methods and cite which implementations were used for the experiments. For DCDI and DCDFG, we used the implementations from https://github.com/Genentech/dcdfg, with the default optimization parameters. For DCDFG, we used 10 modules in our benchmarks, as reported in the original paper's experiments. For GIES, we used the Python implementation from https://github.com/juangamella/gies, with the default parameters. For DAGMA, we used the original implementation from https://github.com/kevinsbello/dagma with the default parameters. For NOTEARS, we used the implementation from https://github.com/xunzheng/notears, and for NOBEARS, we used the implementation from https://github.com/howchihlee/BNGPU. For NOTEARS and NOBEARS, we found that the default thresholds for determining the final adjacency matrix performed poorly or did not return a DAG, so for each of these baselines, we followed the same procedure described in lopez2022large: we find the threshold that returns the largest possible DAG via binary search. sortnregress (reisach2021beware) is a trivial baseline meant to check that the causal graph cannot be easily inferred from the variance pattern across the variables. For this baseline, we used the implementation from https://github.com/Scriddie/Varsortability.
C.4 Robustness Checks
Below, we discuss three categories of issues that commonly arise when evaluating causal discovery methods and address each issue with a diagnostic metric.
Sparsity. Particularly when the true causal graph is sparse, SHD may favor sparser predictions since, in the extreme case, the empty graph achieves an SHD equal to the number of true edges. To show the relative performance of the benchmarked methods with respect to this trivial solution, we indicate the number of true edges for each simulated setting in the observational and interventional benchmark figures of the main text. We find that most methods outperform this baseline. Additionally, we report the F1 score and the recall of the predictions (see Figures 6 and 3), two metrics that suffer when a method predicts many false negatives. We find that SDCD still outperforms other methods on these metrics.
Identifiability. In settings with incomplete or no interventional data, the true causal graph may be unidentifiable, meaning multiple $\mathcal{I}$-Markov equivalent graphs can maximize the score (brouillard2020differentiable). Therefore, graphs in the same Markov equivalence class as the true causal graph may have positive SHD values despite being optimal with respect to the available data. As proposed in peters2014causal, we also compute an adapted version of the SHD that compares the Markov equivalence class of a method's result against the true Markov equivalence class instead of comparing the graphs themselves. This metric, called SHD-CPDAG, is computed as the SHD between the completed partially directed acyclic graph (CPDAG) of the predicted graph and the CPDAG of the true graph. Unlike the regular SHD metric, this metric is zero if two graphs are in the same equivalence class. We report it alongside SHD for our experiments in Figure 3 to better represent the results in scenarios with an unidentifiable causal graph. We find very similar results.
Simulation issues. As discussed in reisach2021beware, certain simulation processes used for causal discovery benchmarking exhibit an issue where the order of the variables, when sorted by sample variance, reflects the true causal ordering of the graph. As a result, methods that exploit this phenomenon to accurately infer the causal graph may be misrepresented. To ensure that our simulation process does not suffer from this issue and that the methods are being properly evaluated, we take two complementary steps recommended in reisach2021beware: (i) we standardize the data before it is input into any of the evaluated methods, so that no artificial sample variance information can be exploited, and (ii) we include the trivial baseline, sortnregress (reisach2021beware), which is designed to exploit sample variance artifacts from a flawed simulation and should be outperformed by an effective, scale-invariant algorithm. We find that sortnregress performs poorly, which confirms that our normalization scheme removes simulation artifacts, and we find that SDCD and its competing methods beat sortnregress by a wide margin.
Appendix D Supplementary Figures and Tables
Figure 1: The effect of constraints on the learned graph throughout training. Training with the spectral penalty $h_\rho$ (dashed purple; exactly $h_\rho$ with a hard mask on the diagonal to prevent self-loops, as implemented in SDCD) converges the fastest toward a DAG. (Left) training with each constraint as a regularization penalty. (Right) training with each constraint as an augmented Lagrangian constraint. "Threshold to DAG" is the smallest threshold at which all edges with larger weight form a DAG.

Table 1: Means and standard deviations of SHD scores over simulations from brouillard2020differentiable, comparing SDCD, DCDI-G, and DCDI-DSF for $s \in \{1, 4\}$ expected edges per node and $d \in \{10, 20\}$ nodes (edge densities of 22.2%, 10.5%, 88.9%, and 42.1%, computed as the fraction of $d(d-1)/2$, the maximum number of edges for a DAG). The "Method" column refers to the model used to simulate the causal relationships: "L" refers to a linear model, "NL-Add" to a nonlinear additive model, and "NL-NN" to a nonlinear, non-additive (neural network) model. We refer to brouillard2020differentiable for the simulation details. The results are reported alongside the values presented in the original paper. The lowest average SHD values are set in bold.

Figure 2: Training runtimes across simulations from the observational benchmark. SDCD on GPU (dashed) scales to 500 variables in under 20 minutes. Error bars indicate the standard deviation over 5 random datasets for smaller $d$ and 3 random datasets for larger $d$.

Table 2: Detailed SHD means from the observational benchmark. SDCD-GPU was only run for $d \ge 50$. All other NA values correspond to failed runs.

d   | SDCD  | SDCD-GPU | DCDI  | DCDFG  | GIES  | DAGMA | NOTEARS | NOBEARS | SCORE | sortnregress
10  | 14.7  | NA       | 24.3  | 24.6   | 27.8  | 25.3  | 35.3    | 33.5    | 14.4  | 28.6
20  | 35.7  | NA       | 31.7  | 108.0  | 123.4 | 62.0  | 75.4    | 74.2    | 118.6 | 81.6
30  | 53.8  | NA       | 55.5  | 258.8  | NA    | 89.4  | 113.1   | 113.7   | 275.9 | 134.6
40  | 64.0  | NA       | 102.4 | 426.6  | NA    | 115.3 | 147.9   | 151.1   | 454.3 | 172.9
50  | 69.9  | 68.3     | NA    | 660.8  | NA    | NA    | 183.4   | 192.0   | 619.4 | 216.6
100 | 92.7  | 89.7     | NA    | 1807.3 | NA    | NA    | 327.3   | 389.0   | NA    | 421.3
200 | 225.3 | 228.0    | NA    | 5657.3 | NA    | NA    | 619.0   | 770.0   | NA    | 824.0
300 | 350.0 | 360.0    | NA    | 7284.7 | NA    | NA    | NA      | 1149.0  | NA    | 1190.7
400 | 466.3 | 471.7    | NA    | 3779.7 | NA    | NA    | NA      | 1534.7  | NA    | 1585.0
500 | 621.7 | 621.0    | NA    | 7252.7 | NA    | NA    | NA      | 1915.7  | NA    | 1974.3

Figure 3: F1 and SHD-CPDAG metrics across simulations from the observational benchmark (observational data with increasing numbers of variables $d$). Missing data points imply the method failed to run. Error bars indicate the standard deviation over 30 random datasets.

Figure 4: SDCD on GPU (dashed) scales to 4,000 variables in under 3 hours while maintaining competitive SHD. Error bars indicate the standard deviation over 3 random datasets for smaller $d$ and 2 random datasets for the largest $d$.

Table 3: Detailed SHD means from Figure 4.

d    | SDCD-GPU
1000 | 1438.7
2000 | 3356.7
3000 | 5172.5
4000 | 7567.0

In addition to SHD, we computed precision and recall metrics over the predicted edges with respect to the true edges for both observational and interventional scenarios. The precision is the fraction of true edges among all the predicted edges. The recall is the fraction of true edges that have been correctly predicted.
Figure 5: Precision across simulations from the observational benchmark (observational data with increasing numbers of variables $d$). The SDCD(-CPU) and SDCD-GPU lines overlap, indicating consistent results. Missing data points imply the method failed to run. Error bars indicate the standard deviation over 30 random datasets for smaller $d$ and 5 random datasets for larger $d$.

Figure 6: Recall across simulations from the observational benchmark (same setup and error bars as Figure 5). The SDCD(-CPU) and SDCD-GPU lines overlap, indicating consistent results. Missing data points imply the method failed to run.

Figure 7: Precision and recall across simulations from Figure 4 (observational data with increasing numbers of variables $d$). Error bars indicate the standard deviation over 3 random datasets for smaller $d$ and 2 random datasets for the largest $d$.

Figure 8: SHD across simulations with an increasing proportion of variables intervened on, varying the total number of variables $d$ (columns) and the average number of edges per variable $s$ (rows). Extended version of the interventional benchmark figure of the main text, with DCDFG and GIES added. Boxplots over 5 random datasets.

Table 4: Detailed SHD means from Figure 8. "Density" is the fraction of the maximum number of DAG edges. GIES failed to run on $d \in \{30, 40\}$ (NA).

s | d  | density | fraction intervened | SDCD  | DCDI  | DCDFG | GIES
2 | 20 | 21%     | 0.00                | 18.0  | 24.8  | 112.3 | 80.3
2 | 20 | 21%     | 0.25                | 13.4  | 27.4  | 99.0  | 70.0
2 | 20 | 21%     | 0.50                | 9.4   | 25.4  | 109.0 | 69.7
2 | 20 | 21%     | 0.75                | 9.0   | 19.8  | 76.0  | 62.7
2 | 20 | 21%     | 1.00                | 9.0   | 18.8  | 98.0  | 65.7
2 | 30 | 13%     | 0.00                | 24.6  | 56.0  | 183.0 | NA
2 | 30 | 13%     | 0.25                | 26.8  | 60.4  | 204.3 | NA
2 | 30 | 13%     | 0.50                | 21.8  | 56.2  | 206.7 | NA
2 | 30 | 13%     | 0.75                | 16.2  | 43.2  | 207.0 | NA
2 | 30 | 13%     | 1.00                | 18.2  | 31.7  | 283.7 | NA
2 | 40 | 10%     | 0.00                | 44.2  | 109.2 | 465.0 | NA
2 | 40 | 10%     | 0.25                | 33.4  | 123.8 | 372.7 | NA
2 | 40 | 10%     | 0.50                | 33.0  | 105.6 | 479.7 | NA
2 | 40 | 10%     | 0.75                | 27.6  | 76.0  | 469.3 | NA
2 | 40 | 10%     | 1.00                | 27.0  | 63.0  | 333.7 | NA
4 | 20 | 42%     | 0.00                | 36.0  | 31.2  | 104.0 | 110.7
4 | 20 | 42%     | 0.25                | 32.2  | 33.0  | 105.7 | 110.7
4 | 20 | 42%     | 0.50                | 34.6  | 30.4  | 107.7 | 101.3
4 | 20 | 42%     | 0.75                | 25.8  | 29.6  | 102.0 | 105.7
4 | 20 | 42%     | 1.00                | 22.4  | 25.8  | 107.0 | 105.3
4 | 30 | 27%     | 0.00                | 54.0  | 57.6  | 234.7 | NA
4 | 30 | 27%     | 0.25                | 43.8  | 67.0  | 269.3 | NA
4 | 30 | 27%     | 0.50                | 39.2  | 72.4  | 262.0 | NA
4 | 30 | 27%     | 0.75                | 35.0  | 72.2  | 232.7 | NA
4 | 30 | 27%     | 1.00                | 29.0  | 60.3  | 236.0 | NA
4 | 40 | 20%     | 0.00                | 69.0  | 99.0  | 460.0 | NA
4 | 40 | 20%     | 0.25                | 56.8  | 107.0 | 438.3 | NA
4 | 40 | 20%     | 0.50                | 50.4  | 105.6 | 457.7 | NA
4 | 40 | 20%     | 0.75                | 41.4  | 97.8  | 426.3 | NA
4 | 40 | 20%     | 1.00                | 34.4  | 93.5  | 458.7 | NA
6 | 20 | 63%     | 0.00                | 51.2  | 56.6  | 112.0 | 117.7
6 | 20 | 63%     | 0.25                | 44.0  | 37.8  | 92.3  | 117.3
6 | 20 | 63%     | 0.50                | 42.2  | 29.0  | 97.0  | 112.7
6 | 20 | 63%     | 0.75                | 38.8  | 25.2  | 94.0  | 91.3
6 | 20 | 63%     | 1.00                | 34.0  | 23.8  | 93.3  | 86.0
6 | 30 | 41%     | 0.00                | 85.4  | 75.8  | 256.7 | NA
6 | 30 | 41%     | 0.25                | 79.8  | 69.2  | 260.7 | NA
6 | 30 | 41%     | 0.50                | 69.4  | 80.6  | 257.3 | NA
6 | 30 | 41%     | 0.75                | 67.0  | 86.2  | 245.3 | NA
6 | 30 | 41%     | 1.00                | 57.4  | 82.0  | 259.7 | NA
6 | 40 | 30%     | 0.00                | 95.4  | 107.8 | 460.3 | NA
6 | 40 | 30%     | 0.25                | 75.6  | 130.2 | 401.0 | NA
6 | 40 | 30%     | 0.50                | 63.6  | 146.0 | 370.0 | NA
6 | 40 | 30%     | 0.75                | 57.4  | 128.3 | 387.7 | NA
6 | 40 | 30%     | 1.00                | 47.2  | 114.5 | 469.0 | NA
8 | 20 | 84%     | 0.00                | 53.6  | 82.8  | 117.7 | 111.7
8 | 20 | 84%     | 0.25                | 51.0  | 58.2  | 108.3 | 96.7
8 | 20 | 84%     | 0.50                | 47.0  | 41.4  | 100.7 | 91.0
8 | 20 | 84%     | 0.75                | 40.8  | 26.0  | 73.7  | 86.0
8 | 20 | 84%     | 1.00                | 43.0  | 19.8  | 81.7  | 79.7
8 | 30 | 55%     | 0.00                | 111.8 | 93.4  | 255.0 | NA
8 | 30 | 55%     | 0.25                | 90.8  | 75.6  | 242.7 | NA
8 | 30 | 55%     | 0.50                | 89.6  | 78.8  | 255.0 | NA
8 | 30 | 55%     | 0.75                | 77.6  | 81.2  | 226.3 | NA
8 | 30 | 55%     | 1.00                | 71.0  | 81.6  | 222.3 | NA
8 | 40 | 41%     | 0.00                | 150.4 | 151.0 | 450.0 | NA
8 | 40 | 41%     | 0.25                | 127.0 | 188.4 | 452.0 | NA
8 | 40 | 41%     | 0.50                | 113.4 | 200.0 | 424.3 | NA
8 | 40 | 41%     | 0.75                | 104.4 | 193.0 | 426.7 | NA
8 | 40 | 41%     | 1.00                | 92.0  | 190.0 | 422.0 | NA

Figure 9: Precision across simulations from Figure 8, with an increasing proportion of variables intervened on, varying the total number of variables $d$ (columns) and the average number of edges per variable $s$ (rows). Boxplots over 5 random datasets.

Figure 10: Recall across simulations from Figure 8, with the same layout as Figure 9. Boxplots over 5 random datasets.
Table 5: Ablation study for SDCD stage 1. We report the mean SHD over three random seeds of observational data (no interventions) with a fixed number of edges per variable $s$, for a range of numbers of variables $d$. The lowest SHD values are bolded for each value of $d$.

Name             | d=10 | d=20 | d=30  | d=40
SDCD             | 14.7 | 40.3 | 54.3  | 69.0
SDCD-warm        | 14.7 | 40.7 | 55.0  | 68.7
SDCD-warm-nomask | 19.3 | 69.7 | 156.0 | 272.7
SDCD-no-s1       | 19.3 | 68.3 | 155.3 | 272.3
SDCD-no-s1-2     | 16.3 | 56.7 | 95.0  | 135.0
DCDI             | 24.0 | 35.7 | 56.7  | 87.0

We observe that the described version of SDCD performs the best out of all variations. SDCD-warm performs competitively but generally provides little benefit. SDCD-warm-nomask performs much worse than SDCD, demonstrating that enforcing the mask during stage 2 is important. SDCD-warm refers to starting stage 2 of SDCD with the input layer ported over from stage 1 instead of re-learned. SDCD-warm-nomask performs the same warm start as SDCD-warm but does not enforce the mask in stage 2. SDCD-no-s1 only performs stage 2. SDCD-no-s1-2 also only performs stage 2, but sets the regularization coefficients to the default values from stage 1. We report these values alongside DCDI.

Table 6: Ablation study for SDCD stage 2 (choice of the constraint). We report the mean SHD over five random seeds of observational data (no interventions) with a fixed number of edges per variable $s$, for a range of increasing numbers of variables $d$ (columns, left to right).

Name     | (increasing d →)
SDCD     | 13.33 | 33.47 | 54.07 | 70.80  | 76.60
SDCD-exp | 11.60 | 44.33 | 69.07 | 85.07  | 89.93
SDCD-log | 11.20 | 52.00 | 87.47 | 117.87 | 116.00

We observe that SDCD performs the best out of the three variations. Additionally, the variations using the PST constraints do not crash for any of the runs, even for the larger values of $d$. We attribute this improved stability (as compared to DCDI and DAGMA) to stage 1, since fewer non-zero parameters contribute to the value of the constraint. SDCD-exp is SDCD with the exponential PST constraint used in place of the spectral one, and SDCD-log uses the log-determinant constraint.