Neuro-PC: Causal Functional Connectivity from Neural Dynamics
Abstract
The functional connectome extends the anatomical connectome by capturing the relations between neurons according to their activity and interactions. When these relations are causal, the functional connectome maps how neural activity flows within neural circuits and enables inference of functional neural pathways, such as sensory-motor-behavioral pathways. While various approaches exist for non-causal estimation of the functional connectome, approaches that characterize the causal functional connectivity, i.e., the causal relationships between neuronal time series, are scarce. In this work, we develop the Neuro-PC algorithm, a novel methodology for inferring the causal functional connectivity between neurons from multi-dimensional time series, such as neuronal recordings. The core of our methodology is a novel adaptation of the PC algorithm, a state-of-the-art method for statistical causal inference, to the multi-dimensional time series of neural dynamics. We validate the performance of the method on network motifs with various interactions between their neurons, simulated using a continuous-time artificial network of neurons. We then apply the method to obtain the causal functional connectome for recent multi-array electrophysiological recordings from the mouse visual cortex in the presence of different stimuli. We show how features of the mapping can be used to quantify the similarities between neural responses subject to different stimuli.
1 Introduction
The anatomical connectome is the physical map of all the neurons and the connections between them within a particular part of the brain [1]. For example, the connectome of the Caenorhabditis elegans somatic nervous system contains all 279 somatic neurons and enumerates the synaptic connections between them, such as glutamatergic, cholinergic, and GABAergic synapses, as well as gap junctions [2, 3, 4]. The mapping of the anatomical connectome can be achieved with the help of imaging techniques and computer vision technologies at different scales [5, 6, 7]. While the anatomical connectome includes the backbone information on the possible ways in which neurons could interact, it does not fully reflect the “wiring diagram of the brain”, which is expected to incorporate the dynamic nature of neurons’ activity and their interactions [8, 9, 10, 11].
It is therefore desirable to obtain a more complete map reflecting the particular functions that neurons perform. Such a mapping is the functional connectome (FC), which represents the network of associations between neurons with respect to their activity over time [12]. The FC is expected to include, and facilitate inference of, the governing neuronal pathways essential for brain functioning and behavior [13]. Two neurons are said to be functionally connected if there is a significant relationship between their activity over time, where the activity can be observed by recording from the neurons over time and quantified by various measures [14]. In contrast to the anatomical connectome, the functional connectome cannot be directly observed or mapped, since the transformation from activity to a network of associations is non-unique. This indicates that even if the basic physical attributes of a neurobiological system have been extensively characterized, the functional units and the associations between them could remain poorly understood. To address these challenges, there has been increasing interest in the inference of the brain network interactions described by the functional connectome at different levels of neural organization [15, 16, 17].
Existing methods to characterize FC include approaches based on measuring correlations, such as pairwise correlation [18, 19], or a sparse covariance matrix, which performs comparatively better than correlations given limited time points [20, 21]. Furthermore, for such scenarios, regularized precision matrix approaches were proposed to better incorporate conditional dependencies between the time courses [22, 23], where the precision matrix is inferred by penalized maximum likelihood [24] to promote sparsity. For example, Allen et al. [25] proposed to find the FC by a regularized precision matrix between the time courses after Independent Component Analysis to account for time-varying noise. These methods were applied to functional Magnetic Resonance Imaging (fMRI) data to obtain the functional connectivity between different broad regions or voxels of the brain. In addition, the aforementioned methods have been extended to obtain the FC over short windows of time and to compile these into a dynamic FC which changes over time [26, 27, 28, 29, 30, 31, 32]. Furthermore, it has been proposed to replace correlations with dominant neural patterns as the measure of association between neurons. In particular, it was proposed to obtain a low-dimensional embedding through application of SVD to the neural activity and to translate the patterns into conditional probabilities for the construction of a probabilistic graphical model that corresponds to the FC [33].
The prevalent research on FC, outlined above, deals with finding associations between neural signals in a non-causal manner. That is, we would know that a neuron A and a neuron B are active in a correlated manner; however, we would not know whether the activity in neuron A causes neuron B to be active (A → B), whether it is the other way around (B → A), or whether there is a neuron C which mediates the correlation between A and B (A ← C → B). These three questions distinguish causation from correlation. In the current paper, we aim to incorporate the aspect of causality by finding the causal functional connectome, which would specifically answer the aforementioned causal questions. At the outset, insights from existing FC approaches [34, 35] communicate that the phenomenon of theoretical interest for FC research is ultimately to find the causal connections between the neural entities. It is noteworthy that results of association have also been used to make causal implications, for example, correlated brain regions have been considered as causal entities [36, 37, 38, 39]. This suggests that it would be helpful to make causal reasoning explicit in the FC methodology.
2 Related Work
In the FC literature, there have been contributions to infer causality using approaches like Granger Causality [40, 41, 42, 43] and Dynamic Causal Modeling [44]. Dynamic Causal Modeling treats the brain as a deterministic non-linear dynamical system, incorporates interactions due to the experimental equipment, and determines the strength of causal effects along the anatomical connections and their changes due to external perturbations; however, it requires a detailed mechanistic description of neuronal dynamics [45]. Granger Causality, on the other hand, is a widely used statistical method aiming to infer causality, and recent extensions have been proposed for the non-linear neural time series scenario by leveraging recurrent neural networks [46]. However, between a pair of time series, Granger Causality gauges whether one series is helpful in predicting the other while having temporal precedence [47, 48, 49], and there are studies expressing concern on the extent to which this implies that one series causes the other [50]. In several causal situations it fails to give an accurate result, for example, in the presence of a common cause: hot days cause more people to swim, and hot days cause more heat strokes, so the series counting the number of people swimming would be predictive of the series counting heat strokes, and swimming would be Granger Causal for heat stroke, even though swimming does not cause heat strokes [51]. Notwithstanding these limitations, Granger Causality is a well-known method in the neural time series scenario [52, 53]. In this work we explore modeling causality by another approach, known as Probabilistic Graphical Models [54], which gives a nuanced alternative formulation; it finds wide application in other fields like genetics and genomics [57, 58, 59, 60, 61], but remains under-explored in the neural dynamics scenario, where it is only recently gathering interest [55, 56].

In the statistics literature, the Spirtes-Glymour-Scheines (SGS) algorithm [62], the Peter-Clark (PC) algorithm [63], and the Greedy Equivalence Search (GES) algorithm [64] are popular approaches to infer causality by outputting a directed acyclic graph among the variables of interest. Under the Causal Markov Condition, which says that a variable X is independent of every other variable (except X's effects) conditional on all of its direct causes, it follows that all the common causes among the variables, as illustrated in the preceding paragraph, will be included in the directed acyclic graph output by these algorithms. Furthermore, any intervention on one variable would change only those variables that are causally "downstream" of the intervened variable [63]. However, these algorithms are developed for static data and are not suitable to be directly applied to infer causal relationships in data with dependency in the temporal domain, such as multi-dimensional time series. Aspects like non-stationarity of the time series would prevent these constraint-based causal inference algorithms from outputting the true causal structure. This creates a gap in the literature of algorithmic approaches to infer causal FC from multi-neuron recordings over time.
In this paper, we develop a novel methodology for causal inference in the multi-dimensional time series scenario, coined Neuro-PC, and show applications to neural dynamics (see Figure 1). We propose to use the PC algorithm [63] as a starting point for building the methodology, since it is one of the most widely used causal inference algorithms on static data, it is consistent under i.i.d. sampling with no latent confounders, it is faster than the SGS algorithm, and it is not greedy or structurally restrictive like GES [65]. Neuro-PC relies on adapting the PC algorithm to the multi-dimensional neural time series setup through processes such as time delay, bootstrapping and neuron ablation, making it robust and well suited to the setup. We validate the proposed methodology on simulated neural signals in the form of firing rates from continuous-time artificial neural networks set according to particular motifs. We show the importance of causality in being able to recover the generating motifs, and that Neuro-PC is able to recover the causal relationships and the generating motifs comparatively well. We further use the method to obtain the causal FC among sampled neurons in mice from electrophysiological neural signals.
We organize the paper as follows. The following section gives a brief overview of causal inference for static variables using the PC algorithm. Section 4 discusses the Neuro-PC algorithm. Section 5 presents the results of application to simulated data and validation of the performance of the algorithm, followed by Section 6 with application to neurobiological data. We conclude with a discussion.

3 Causal Functional Connectivity Inference for Static Variables - The PC Algorithm
In this section, we provide a concise summary of the PC algorithm that will be used in further sections to develop our proposed method. The PC algorithm [63], outlined in Figure 2, is a popular statistical method to infer causal connections between static random variables. The PC algorithm is a constraint-based approach, in which any consistent statistical test for conditional independence can be used to select the connectivity graph among the variables of interest. For Gaussian random variables, the standard choice for such a test is Fisher's Z-transform, as constructed in [65].
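As a concrete illustration, the Fisher Z-test for conditional independence of Gaussian variables can be sketched via the partial correlation obtained from the precision matrix. This is a minimal sketch, not the implementation referenced in [65]; the function name and the significance level are illustrative.

```python
import numpy as np
from scipy import stats

def fisher_z_test(data, i, j, cond=(), alpha=0.05):
    """Test X_i independent of X_j given X_cond for jointly Gaussian columns
    of `data` (n_samples x n_vars), via partial correlation + Fisher Z."""
    idx = [i, j] + list(cond)
    # Partial correlation of the first two variables given the rest,
    # read off the inverse covariance (precision) matrix.
    prec = np.linalg.pinv(np.cov(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    n, k = data.shape[0], len(cond)
    # Fisher Z-statistic; approximately standard normal under independence.
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return p >= alpha, p   # True => independence is not rejected
```

For a chain X → Y → Z, the test should reject independence of X and Z marginally but not reject it once Y is conditioned on.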
For our purpose of extending the PC algorithm to multi-neuron recordings, we describe the PC algorithm in the context of neural firing rates and depict it in Figure 2. The first step is to determine the undirected edges between pairs of neurons. We start with an empty graph and decide whether to put an undirected edge between a particular pair of neurons, say neurons 2 and 5 in Figure 2. In order to determine whether such an edge should exist between the two neurons, we perform a series of hypothesis tests, using the conditional independence test based on Fisher's Z-score. We first test whether the time series for neurons 2 and 5 are independent. If they are independent, the pair is left unconnected. If they are dependent, we test, for each other single neuron, whether they are conditionally independent given that neuron: for the pair of neurons 2 and 5, we test conditional independence given neuron 1, then given neuron 3, and so on. If no single neuron renders them conditionally independent, we test conditional independence given pairs of neurons, followed by triples, and so on. If every conditional independence test fails, i.e., no conditioning set renders the two neurons independent, we connect neurons 2 and 5 by an edge. We repeat the process for each pair of neurons. The obtained undirected graph is called the skeleton graph. Using rules such as the Collider Detection rule, the Known Non-Colliders rule and the Cycle Avoidance rule, the edges of the skeleton graph are then assigned directions.
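The edge-selection loop described above can be sketched as follows. This is a minimal, exhaustive variant suitable only for small numbers of variables; the helper names are hypothetical, and the conditional independence test again assumes Gaussian data.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def ci_test(data, i, j, cond, alpha=0.05):
    """Fisher Z-test for partial correlation (Gaussian assumption).
    Returns True when independence is not rejected."""
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.cov(data[:, idx], rowvar=False))
    r = np.clip(-prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1]),
                -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(data.shape[0] - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z))) >= alpha

def pc_skeleton(data, alpha=0.05):
    """Connect i--j only if no conditioning set (of any size, drawn from the
    remaining variables) renders the pair conditionally independent."""
    n_vars = data.shape[1]
    edges = set()
    for i, j in combinations(range(n_vars), 2):
        rest = [v for v in range(n_vars) if v not in (i, j)]
        if not any(ci_test(data, i, j, c, alpha)
                   for size in range(len(rest) + 1)
                   for c in combinations(rest, size)):
            edges.add((i, j))
    return edges
```

On data generated from a chain 0 → 1 → 2, the skeleton should contain the edges 0--1 and 1--2 but not 0--2.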
While the PC algorithm is simple to implement and efficient variants exist, it is of limited relevance in the context of neural time series: there are inter-temporal causal dependencies within and between the time series, and there are no replications of the recordings at each time point, so that the results rely on a single instance and are hence unreliable. Our proposed method addresses both of these aspects to make the PC algorithm a suitable approach for application to multi-dimensional neural time series data.
4 The Neuro-PC algorithm
A neuron's activity at a previous time point affects recordings at the next time point, and possibly further time points, of the same neuron and of other neurons. To incorporate these effects we would like to keep separate variables representing the previous time point of neuron i, the following time point of neuron i, and the following time point of neuron j in our setup. With this intuition, we disassociate each neuron's signal into time delayed signals, which are distinct signals whose sampled time points have a fixed time delay between them. Denoting the original signal of neuron $i$ at time point $t$ by $X_i(t)$, where $i = 1, \ldots, N$ and $t = 1, \ldots, T$, with $N$ the number of neurons and $T$ the number of time points recorded for each neuron, the time delayed signals are given by

$$X_i^{(d)}(t) = X_i(t + d), \quad d = 0, 1, \ldots, \tau,$$

where $X_i^{(d)}$ is the $d$-th time delayed signal corresponding to neuron $i$, and $\tau$ is the maximum time delay incorporated. $\tau$ represents the maximum delay of interactions among the neurons and is predetermined. Instead of the original time series $X_i$ for the neurons, we consider the disassociated time delayed series $X_i^{(0)}, \ldots, X_i^{(\tau)}$ for each neuron in the next steps. In this way, a dependency of the $d_2$-th delayed variable for neuron $j$, taking values non-sequentially, on the $d_1$-th delayed variable for neuron $i$, taking values non-sequentially, with $d_1 < d_2$, would mean that a following time point of neuron $j$ depends on a previous time point of neuron $i$ in sequence, and symmetrically with the roles of $i$ and $j$ exchanged. This formulation brings the temporal dependence in sequential neural time series down to dependence between non-sequential variables, and thereby helps to capture connectivity which is inherently sequential using algorithms designed for non-sequential data.
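The disassociation into delayed copies can be sketched as follows, following the convention that a larger index corresponds to later time points; the function name is hypothetical.

```python
import numpy as np

def time_delayed_signals(X, tau):
    """Disassociate each neuron's signal (rows of X, shape n_neurons x T)
    into tau + 1 shifted copies: the d-th copy is the original series shifted
    ahead by d time steps, so a larger d corresponds to later time points.
    All copies are aligned to a common length of T - tau samples."""
    n, T = X.shape
    out = np.empty((n, tau + 1, T - tau))
    for d in range(tau + 1):
        out[:, d, :] = X[:, d : T - tau + d]
    return out
```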
The consideration of delayed time series increases the dimensionality and nevertheless calls for a robust output. Furthermore, in order to maintain stationarity of the data, smaller windows are advantageous. For these purposes, we propose to employ a windowed bootstrap procedure which enhances the robustness of the inferred connectivity between the signals. The procedure defines time-windows in the time-delayed series over the neurons within the total duration sampled. By drawing time-windows of a fixed size uniformly with replacement from all possible overlapping time-windows on the series, we obtain bootstrapped samples of the series. For example, time-windows of width 100 msecs could be obtained as follows: first draw a starting time point $t_1$ uniformly at random, giving the window from $t_1$ to $t_1 + 100$ msecs; then draw another starting time point $t_2$ in the same manner (with replacement), giving the window from $t_2$ to $t_2 + 100$ msecs, and so on.
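A minimal sketch of drawing the overlapping bootstrap windows; the function name and the number of windows are illustrative.

```python
import numpy as np

def bootstrap_windows(T, width, n_windows, rng=None):
    """Draw start times uniformly with replacement from all overlapping
    windows of `width` time points inside a series of length T.
    Returns a list of (start, end) index pairs."""
    rng = np.random.default_rng(rng)
    starts = rng.integers(0, T - width + 1, size=n_windows)
    return [(s, s + width) for s in starts]
```

Each returned window is then used to slice the time-delayed series before feeding it into the PC step.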
The bootstrap step in Neuro-PC is thereby such that for each time-window $w$, say of width $W$, the recordings of the time delayed series within the window are fed into the PC algorithm, which generates a candidate connectivity graph, say $G_w$. To find the weight of a directed connection in $G_w$ from node $i$ to node $j$, we perform a linear regression with the time series of node $j$ as response and the time series of node $i$ as covariate. The coefficient of node $i$ found in this regression is taken to be the weight of the directed connection from node $i$ to node $j$. We repeat the process for different time windows and obtain an FC graph and matrix for each window.
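The edge-weight computation via simple linear regression can be sketched as follows (hypothetical helper; least-squares slope with an intercept term).

```python
import numpy as np

def edge_weight(x_src, x_dst):
    """Weight of a directed connection src -> dst: the slope of a simple
    linear regression of the target series on the source series."""
    A = np.column_stack([x_src, np.ones_like(x_src)])
    coef, _, _, _ = np.linalg.lstsq(A, x_dst, rcond=None)
    return coef[0]
```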
From the previous step we obtained a collection of connectivity graphs $G_w$ and matrices $A_w$ over the time-delayed variables, one for each time-window $w$. We now consolidate each time-delayed graph into a connectivity graph and matrix between the neurons themselves in the following way. If there is an edge directed from one of neuron $i$'s time delayed series at a lower delay to one of neuron $j$'s time delayed series at a higher delay, then we draw a directed edge from neuron $i$ to neuron $j$. In the connectivity matrix, we find the weight of a particular directed connection among the neurons by averaging the weights of those connections between the time-delayed series that make a lower-delay variable of neuron $i$ an ancestor of a higher-delay variable of neuron $j$. This is according to the intuition that when we say that neuron $i$ has causal influence on neuron $j$, we really mean that a previous time point of neuron $i$ has causal influence on a following time point of neuron $j$. For each of the time-windows $w$ in the bootstrap procedure we thus obtain the pair: FC graph and FC matrix over the neurons. To consolidate these time-windowed FC components into a single FC graph and matrix, we perform the next step in Neuro-PC, the consensus step. The consensus step forms a directed graph by keeping only those connections that are present in a sufficiently large fraction of the connectivity graphs obtained from the bootstrap iterations. We perform a consensus of the bootstrap connectivity matrices by averaging the weights of those connections which are preserved in the resulting connectivity graph.
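The consensus over bootstrap graphs can be sketched as follows. The retention fraction `keep_frac = 0.9` is illustrative only; the exact threshold used in the paper is not reproduced here.

```python
from collections import Counter

def consensus_graph(graphs, weights, keep_frac=0.9):
    """Keep edges present in at least keep_frac of the bootstrap graphs and
    average their weights over the graphs that contain them.
    `graphs` is a list of edge sets; `weights` is a list of {edge: weight}."""
    counts = Counter(e for g in graphs for e in g)
    kept = {e for e, c in counts.items() if c >= keep_frac * len(graphs)}
    avg = {e: sum(w[e] for w in weights if e in w) /
              sum(1 for w in weights if e in w)
           for e in kept}
    return kept, avg
```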

We include a final post-processing step to capture any connections that could be missed by the procedure so far. In this step, we ablate one neuron's time series from the dataset and proceed with the dataset with that neuron left out. We then repeat the time-delayed series and bootstrapped PC steps on the new leave-one-out neural time series. This results in a connectivity graph and matrix from each neuron's ablation. We form a consensus of these connectivity graphs and matrices to output a single graph and matrix. For the consensus, we check whether the following conditions hold: i) the edge $i \to j$ is present in the connectivity graph obtained when neuron $k$ is ablated, the graph being denoted by $G_{-k}$; ii) $i$ is an ancestor of $j$ in the connectivity graph $G$ obtained with no neurons ablated; iii) the edge $i \to j$ is absent in $G$. If these three conditions are satisfied, then the connection $i \to j$ should be present: by (ii) and (iii), the influence of $i$ on $j$ in $G$ is carried through intermediate neurons; if that influence were carried exclusively through neuron $k$, then ablating $k$ would sever every path from $i$ to $j$, and the edge $i \to j$ could not appear in $G_{-k}$, contradicting condition (i). So, under conditions (i)-(iii), the connection $i \to j$ must hold. With this understanding, in this post-processing step we check whether conditions (i)-(iii) are satisfied, and if the algorithm did not infer $i \to j$ in the connectivity graph, we update the connectivity graph between the neurons by adding the connection $i \to j$. We update the connectivity matrix by computing the weights for the added edges with the linear regression approach discussed earlier. We refer to the procedure discussed in this section, with the three steps of time delay, bootstrapped PC and ablation, as the Neuro-PC algorithm. Figure 3 visualizes the steps of Neuro-PC. The steps are outlined in Algorithm 1.
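Checking the three ablation conditions can be sketched with networkx for the ancestor queries. The helper and graph names are hypothetical, and this mirrors only the condition check, not the full Neuro-PC post-processing: it proposes adding the edge i → j when it appears in some leave-one-out graph, i is an ancestor of j in the full graph, and the direct edge is absent from the full graph.

```python
import networkx as nx

def ablation_additions(G_full, ablated_graphs):
    """Propose adding i -> j when (i) it appears in the graph with neuron k
    ablated, (ii) i is an ancestor of j in the full graph G_full, and
    (iii) i -> j is absent as a direct edge in G_full.
    `ablated_graphs` maps each ablated neuron k to its graph G_{-k}."""
    added = set()
    for k, G_k in ablated_graphs.items():
        for i, j in G_k.edges():
            if G_full.has_edge(i, j):                      # (iii) fails
                continue
            if i in G_full and j in G_full and i in nx.ancestors(G_full, j):
                added.add((i, j))                          # (i) and (ii) hold
    return added
```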
5 Application to Simulated Data
We simulate neural dynamics by Continuous Time Recurrent Neural Networks, given in Equation (1), where $r_i(t)$ is the instantaneous firing rate at time $t$ for a post-synaptic neuron $i$, $w_{ij}$ is the linear coefficient of pre-synaptic neuron $j$'s input on the post-synaptic neuron $i$, $I_i(t)$ is the input current on neuron $i$ at time $t$, $\tau_i$ is the time constant of the post-synaptic neuron, with $i, j \in \{1, \ldots, N\}$ being indices for neurons and $N$ being the total number of neurons.
$$\tau_i \frac{d r_i(t)}{dt} = -r_i(t) + \sum_{j=1}^{N} w_{ij}\, r_j(t) + I_i(t) \qquad (1)$$
In the simulation study, we consider six motifs ranging over different numbers of neurons and different connectivity patterns, shown as Motifs 1-6 in the rows of Figure 5. The time constant $\tau_i$ is set to 10 msecs for each neuron $i$. We consider $I_i(t)$ to be distributed as an independent Gaussian process with a mean of 2 and a standard deviation of 2. The signals are sampled at a fixed time gap (in msecs) over the full duration of the simulation.
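A minimal Euler-integration sketch of the dynamics in Equation (1). The i.i.d. Gaussian input drawn at every step is a simplification of the Gaussian-process input described above, and all parameter values here are illustrative rather than the paper's settings.

```python
import numpy as np

def simulate_ctrnn(W, tau, T, dt, rng=None, input_mean=2.0, input_sd=2.0):
    """Euler-integrate tau_i dr_i/dt = -r_i + sum_j W[i, j] r_j + I_i(t),
    with I_i(t) drawn i.i.d. Gaussian at each step.
    Returns the firing-rate trajectory, shape (n_steps, n_neurons)."""
    rng = np.random.default_rng(rng)
    n = W.shape[0]
    steps = int(T / dt)
    R = np.zeros((steps, n))
    r = np.zeros(n)
    for t in range(steps):
        I = rng.normal(input_mean, input_sd, size=n)
        r = r + dt / tau * (-r + W @ r + I)   # Euler step of Equation (1)
        R[t] = r
    return R
```

With a two-neuron motif where neuron 0 drives neuron 1, the downstream rate inherits fluctuations from its driver, which is what the FC inference is meant to detect.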
We compute the inferred connectivity graphs from the simulated data by the PC algorithm and by Neuro-PC, and compare them with the Ground Truth connectivity motifs set for the network. We show an example in Fig. 4 that demonstrates the graph computed by the PC Algorithm and the contribution of each step in Neuro-PC to refining the graph. In particular, this example demonstrates the typical performance of the PC Algorithm, which produces multiple spurious edges that should be filtered out. The time delay step of the Neuro-PC algorithm ensures that the edges that are essentially inter-temporal are incorporated with greater accuracy and spurious edges are excluded. The bootstrapping step adds robustness to the inferred connectivity by excluding those edges which appeared in too few of the graphs inferred from the bootstrap iterations.

Figure 5 displays the causal FC graphs and matrices inferred from the six simulations using Neuro-PC as well as the PC algorithm. Each of the two algorithms outputs a connectivity graph and matrix. In the graph, a directed edge $i \to j$ means that there is a functional connection from neuron $i$ to neuron $j$. The FC matrix provides weights to direct as well as indirect connections, wherein entry $(i, j)$ has a non-zero weight if there is a path from neuron $i$ to neuron $j$ in the FC graph. The Ground Truth FC of the motifs is based on the weights $w_{ij}$ in Equation (1): the edge $j \to i$ is in the Ground Truth FC graph if $w_{ij} \neq 0$, and entry $(j, i)$ of the Ground Truth FC matrix has a non-zero weight if there is a path from $j$ to $i$ in the Ground Truth FC graph.

We quantified the performance of the algorithms by first estimating FC matrices for 50 simulated samples from each motif with the same specifications as discussed above. The sensitivity, or true positive rate, is the proportion of the truly non-zero entries of the ground truth FC matrix that were estimated as non-zero. The specificity, or true negative rate, is the proportion of the truly zero entries that were estimated as zero. The accuracy is the proportion of the entries of the estimated FC matrices that matched the corresponding entries in the ground truth FC matrix. Neuro-PC recovers the structure of the Ground Truth with greater specificity and accuracy for all six motifs, and with at least as good sensitivity for the motifs in rows 3-6 of Figure 5.
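The three metrics can be computed from binarized FC matrices as follows (hypothetical helper; entries are treated as binary, non-zero vs. zero).

```python
import numpy as np

def fc_metrics(est, truth):
    """Sensitivity, specificity and accuracy of an estimated FC matrix
    against the ground truth, comparing non-zero patterns entrywise."""
    est_nz, true_nz = est != 0, truth != 0
    sensitivity = (est_nz & true_nz).sum() / true_nz.sum()
    specificity = (~est_nz & ~true_nz).sum() / (~true_nz).sum()
    accuracy = (est_nz == true_nz).mean()
    return sensitivity, specificity, accuracy
```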
We also compared the performance with Granger Causality for all six motifs. Granger Causality is commonly implemented in terms of a multivariate auto-regressive (MVAR) model in the multivariate time series scenario [66]. In this implementation, it is tested whether adding a prediction of one time series from another time series through the MVAR model improves the prediction of the present behavior of the former [67]. We use the Nitime Python library [68] to fit an MVAR model of order 3 with the sampling interval of the simulated signals, followed by computation of Granger Causality by the GrangerAnalyzer function. The specificity, sensitivity and accuracy chart in Figure 5 shows that Neuro-PC performed comparatively better in recovering the ground truth, with greater than 20% improvement in specificity and accuracy for Motifs 1-2 and 4-6 in Figure 5.
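For intuition, a bare-bones Granger-style comparison (a sketch, not the Nitime implementation) can be written by comparing the residual variance of a restricted autoregressive fit of one series against a full fit that also includes lags of the other series; the function name and the order-3 default mirror the setup above but are illustrative.

```python
import numpy as np

def granger_improvement(x, y, order=3):
    """Does adding lags of x reduce the residual variance of an AR model
    for y? Returns var(restricted) / var(full); values well above 1
    suggest that x helps predict y (Granger-style evidence)."""
    T = len(y)
    # Lag matrix: column k holds the series delayed by k + 1 steps.
    lags = lambda s: np.column_stack(
        [s[order - k - 1 : T - k - 1] for k in range(order)])
    target = y[order:]
    Xr = np.column_stack([lags(y), np.ones(T - order)])           # y-lags only
    Xf = np.column_stack([lags(y), lags(x), np.ones(T - order)])  # plus x-lags
    res = lambda X: target - X @ np.linalg.lstsq(X, target, rcond=None)[0]
    return np.var(res(Xr)) / np.var(res(Xf))
```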
6 Application to Neurobiological Data
We use the Visual Coding Neuropixels database from the Allen Brain Observatory [69, 70]. The database consists of sorted spike trains and local field potentials recorded simultaneously from up to six cortical visual areas, hippocampus, thalamus, and other adjacent structures of mice, while the mice passively view stimuli shown to them. The stimuli include static gratings, drifting gratings, natural scenes/images, and natural movies, which are shown to the mice with repetitions. The data have been recorded with the recently developed Neuropixels technology, which allows real-time recording from hundreds of neurons across the brain simultaneously by inserting multiple probes into the brain [71].
For the purpose of application and comparison of the methods discussed in this paper, we restrict our analysis to a 116-day-old male mouse (Session ID 791319847) with 555 neurons whose spike trains were recorded simultaneously by six Neuropixels probes. The spike trains during the entire experiment were recorded at a frequency of 1 KHz. We analyze the spike trains for four stimuli:
1. Natural scenes: 118 natural scenes selected from three databases (Berkeley Segmentation Dataset, van Hateren Natural Image Dataset, and McGill Calibrated Colour Image Database), each presented briefly for 250 ms and then replaced with the next scene image. Each scene is repeated 50 times in random order with intermittent blank intervals.
2. Static gratings: full-field sinusoidal gratings with 6 orientations (the angle of the grating), 5 spatial frequencies (the width of the grating), and 4 phases (the position of the grating), resulting in 120 stimulus conditions. Each grating is presented briefly (250 ms) before being replaced with a different orientation, spatial frequency, and phase condition; each condition is repeated 50 times, in random order with intermittent blank intervals.
3. Gabor patches: patches with 3 orientations, whose centers lie at one of a set of points in the visual field. Each Gabor patch is presented for 250 ms and then replaced by a different patch; each condition is repeated 50 times in random order with intermittent blank intervals.
4. Full-field flashes: each lasting 250 ms, followed by a blank interval of 1.75 s and then the next flash, totaling 150 repetitions.
This variety of stimuli ranges from relatively natural stimuli invoking mice's natural habitats (natural scenes) to artificial stimuli (static gratings, Gabor patches and flashes). Among the artificial stimuli, static gratings incorporate sinusoidal patterns over the full visual field, full-field flashes incorporate sharp changes in luminosity over the whole visual field in a short period of time, and Gabor patches incorporate sinusoidal patterns whose luminosity declines with distance from the center of the patch. With this choice of four stimuli we investigate how the variety of stimuli possibly invokes distinct patterns of neuronal interactions and connectivity. We exclude dynamic stimuli, like natural movies and drifting gratings, from this analysis because their results would require a more nuanced study and interpretation, which we defer to future work.

Method | Noise-level | Outcome graph | Causality
---|---|---|---
MLE | Noisy | Undirected | Non-causal
Granger Causality | Noisy | Directed | Concerns on causality
Neuro-PC | Less noisy | Directed | Markov Property for causality

Table 1: Comparison of the three FC inference methods.
Data Processing: We convert the spike trains recorded at 1 KHz to a bin size of 10 ms by aggregating, and then separate them by the start and end times of each stimulus presentation to obtain the Peri-Stimulus Time Histograms (PSTH) with bin size 10 ms. We smooth the PSTHs by a Gaussian smoothing kernel of fixed bandwidth (in ms), which provides a smoothed version of the PSTH for each neuron and each stimulus presentation. Some examples of the smoothed PSTHs are displayed in Figure 6. We use the smoothed PSTHs of the neurons over each stimulus type as input for inference of the FC between the neurons for each stimulus presentation. For each stimulus presentation, we first selected the set of neurons that were active in at least a minimum fraction of the bins in the PSTH. This was done since the inference of functional connections involving inactive neurons is expected to be noisy and unreliable. In the Neuro-PC inference, to further cope with noisy estimates, we excluded the connections that were present in less than the consensus fraction of the bootstrap iterations, as well as those connections between pairs of neurons which were detected with a p-value above the significance threshold in the skeleton graph output by the PC algorithm.
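The binning and smoothing can be sketched as follows, using scipy's `gaussian_filter1d`; the function name is hypothetical, and the 20 ms bandwidth is an illustrative value since the paper's bandwidth is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_psth(spike_times, t_start, t_end, bin_ms=10.0, sigma_ms=20.0):
    """Bin spike times (in ms) into a PSTH with bin size `bin_ms` and smooth
    it with a Gaussian kernel of bandwidth `sigma_ms` (illustrative value)."""
    edges = np.arange(t_start, t_end + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins=edges)
    # sigma is expressed in bins for the filter.
    return gaussian_filter1d(counts.astype(float), sigma=sigma_ms / bin_ms)
```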
Results: We used three methods for inferring the FC from the neural signals and compare their results: 1) undirected FC inferred by Graphical LASSO penalized Maximum Likelihood Estimation (MLE), 2) Granger Causality, and 3) Neuro-PC (ours). A summary of the results is provided in Figure 6. Table 1 outlines comparisons of the results from the three methods on three key aspects: noise level, directionality, and causal interpretation.
Visual inspection indicates that Gabor patches and full-field flashes result in a sparser matrix in comparison to natural scenes and static gratings for each of the three methods. This can be intuitively understood since natural scenes mimic the mouse's natural habitat and static gratings have a pattern covering the entire visual field of the mouse. These stimuli could encourage a stronger and more intricate response in comparison to Gabor patches, which occupy a small fraction of the visual field, and flashes, which, even though full-field, are characterized only by heightened luminosity and are devoid of patterns like gratings or rich content.
Examining the Neuro-PC graph structure, we observe that natural scenes and Gabor patches lead to a more 'asymmetric connectivity' graph in contrast to static gratings and flashes. This global structure could reflect specific features of the stimuli. Both natural scenes and Gabor patches have features that draw attention asymmetrically over the visual field, unlike static gratings and full-field flashes, which are symmetric over the visual field. In comparison, the MLE always outputs a symmetric connectivity graph, and Granger Causality does not reflect a distinct asymmetry.
We further analyze whether a change in stimulus type significantly alters the functional connectivity between any pair of neurons. In particular, we performed statistical tests to determine whether each cell in the FC matrix has a value that remains the same between pairs of stimulus types, using a paired two-sample t-test. We considered 30 trials for each stimulus type, which generated 30 values for each cell of the connectivity matrix. For each pair of stimulus types, we plotted a histogram of the p-values, where each p-value corresponds to testing whether a cell of the FC matrix incurred a change between the stimulus pair. The results are shown in Figure 7.
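The per-cell paired test can be sketched as follows (hypothetical helper; cells whose trial-wise differences are constant are skipped to avoid degenerate tests).

```python
import numpy as np
from scipy.stats import ttest_rel

def connection_pvalues(fc_a, fc_b):
    """Paired two-sample t-test per FC-matrix cell. fc_a and fc_b are arrays
    of shape (n_trials, n_neurons, n_neurons) for two stimulus types;
    returns a matrix of p-values, one per connection."""
    n_trials, n, _ = fc_a.shape
    pvals = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            a, b = fc_a[:, i, j], fc_b[:, i, j]
            if np.ptp(a - b) > 0:          # skip constant differences
                pvals[i, j] = ttest_rel(a, b).pvalue
    return pvals
```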

We observe from the histograms in Figure 7 that both MLE and Granger Causality do not exhibit visible changes in the distribution of the p-values for different stimuli, indicating similar comparisons for distinct pairs of stimulus types. In the case of Neuro-PC, in contrast, the p-value distributions reflect visible changes across stimulus-type pairs. The sparser result from Neuro-PC compared to the other approaches helps in noting changes in the histograms by visual inspection. Indeed, functional connections are expected to be sensitive to stimuli [72, 73], such that for different stimulus pairs, a change in the distribution of the p-values is warranted. Furthermore, Neuro-PC shows that some functional connections changed significantly (the connections whose p-values lie to the left of the red 0.05 p-value line) as the stimulus changed between natural scenes and flashes, static gratings and flashes, and Gabor patches and flashes, which gives insight into those pairs of neurons with functional connections that are responsive to the stimulus change. The neurons which showed a significant change of response with change in stimulus would be of interest in studying the neurobiological response to these stimuli.
For each pair of stimuli, we can also quantify the degree of functional dissimilarity as the percentage of connections that showed a strong change when switching between the two stimuli (p-value less than 0.3) according to Neuro-PC. By this metric, natural scenes vs. static gratings were most similar, while natural scenes vs. flashes, static gratings vs. flashes, and Gabor patches vs. flashes were least similar. This can be intuitively understood: natural scenes and static gratings both provide rich content and cover the full visual field, whereas flashes lack rich content and invoke fewer neuronal interactions. Gabor patches may evoke distinctly different neuronal interactions than flashes, since the former do not cover the entire visual field while the latter do.
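The dissimilarity metric described above is straightforward to compute from the per-cell p-values. The helper `dissimilarity` and the toy p-value matrix below are hypothetical; the 0.3 threshold follows the text:

```python
import numpy as np

def dissimilarity(p_values, threshold=0.3):
    """Fraction of FC connections whose p-value falls below `threshold`,
    used as a proxy for how strongly the FC changed between two stimuli."""
    p = np.asarray(p_values)
    return float((p < threshold).mean())

# Toy example: 4 of the 9 connections show a strong change (p < 0.3).
p = np.array([[0.01, 0.50, 0.20],
              [0.90, 0.25, 0.70],
              [0.04, 0.60, 0.80]])
print(dissimilarity(p))  # 4/9 ≈ 0.444
```

Ranking stimulus pairs by this quantity recovers the most-similar / least-similar ordering reported above.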
7 Discussion
In this paper, we propose a novel methodology for inferring causal functional connectivity between neurons from multiple neural time series. We demonstrated the accuracy and robustness of the method in simulations and performed a comparative study with related methods in the literature. We applied the method to a neurobiological dataset of signals from the mouse brain and compared it with other approaches. The results give insights into the FC between neurons in the mouse brain in a variety of stimulus scenarios, for example, how the FC varies with stimuli, whether changes in stimuli invoke a significant change in the FC structure, and, subsequently, the similarity between stimuli in terms of joint connectivity structures.
Notably, while the importance of and need for causal FC from neuronal recordings is recognized in the literature, e.g., as described by Reid et al. [15], there is currently a gap in obtaining the causal FC from neuronal recordings [74]. Existing methods for inference of FC obtain associative, undirected connectivity with statistical guarantees [75, 76]. The focus of the current paper is to contribute to finding the causal FC from multiple neuronal time series based on directed Probabilistic Graphical Models (PGM) [54], a formulation which has been shown effective in finding causal networks in other disciplines [57, 58, 59, 60, 61]. Directed PGM provides an alternative approach to causality from Granger Causality, which has been used in the neuroscience literature to characterize causality. However, Granger Causality has difficulties in the presence of common causes [77, 51] and depends on the multivariate autoregressive (MVAR) linear model assumption in its popular implementation [66]. The directed PGM, on the other hand, deals with common causes and is a non-parametric approach that does not require parametric model assumptions. The key step in the Neuro-PC algorithm is obtaining the directed PGM satisfying the Markov property between the neuronal time series [63]. The Markov property is the principle that if we fix the states of the neurons that directly influence a neuron of interest v, then the states of neurons connected to v only indirectly, through those direct connections, are rendered independent of v [78]. With regard to unobserved confounding variables, in its basic setup the Neuro-PC algorithm does not account for them, similarly to Granger Causality. However, replacing the PC algorithm with the FCI algorithm [63] in Step 2 (Bootstrap) of Neuro-PC would account for unobserved confounding variables, since the FCI algorithm is a modification of the PC algorithm that takes them into account.
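To illustrate how a directed PGM approach handles common causes, the sketch below implements the Fisher-z partial-correlation test of conditional independence, the kind of test typically used inside the PC algorithm for continuous data. This is a minimal sketch under Gaussian assumptions; `partial_corr_independent` is a hypothetical helper, not the paper's implementation:

```python
import numpy as np

def partial_corr_independent(x, y, z, crit=1.96):
    """Fisher-z test of X independent of Y given the conditioning set Z,
    via the partial correlation computed from the precision matrix.
    Returns True when independence is NOT rejected (at ~alpha = 0.05)."""
    n = len(x)
    data = np.column_stack([x, y] + list(z))
    prec = np.linalg.inv(np.corrcoef(data, rowvar=False))
    # Partial correlation of X and Y given the rest, from the precision matrix.
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(z) - 3)
    return abs(z_stat) < crit

rng = np.random.default_rng(1)
c = rng.normal(size=2000)            # common cause C
x = c + 0.1 * rng.normal(size=2000)  # X driven by C
y = c + 0.1 * rng.normal(size=2000)  # Y driven by C

# X and Y are strongly correlated marginally, so a pairwise test links them;
# conditioning on their common cause C typically renders them independent.
print(partial_corr_independent(x, y, []))   # marginal test
print(partial_corr_independent(x, y, [c]))  # conditional test
```

This is precisely the scenario where Granger Causality can report a spurious link between X and Y, while a PC-style conditional independence test removes it once C enters the conditioning set.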
In addition to having the PC algorithm at its core, Neuro-PC includes components targeted at the particular nature of neural interactions. In the neuroscience literature, the cause of an observed neural event (the 'effect') is often defined as a preceding neural event whose occurrence is necessary to observe the effect [15]. The time-delay approach incorporates this definition into Neuro-PC, since the obtained directed graph identifies whether the previous time values of the neurons impact the present value of a particular neuron. The bootstrapping step removes spurious connections by repeating the inference of causal FC over several subsampled blocks of the neuronal time series and discarding connections that are absent in half of the repetitions. The neuron ablation step is similar to the practice of interrogation of neural circuits [79] and intervention in the statistical causal inference literature [80].
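The time-delay and bootstrap components can be sketched as follows. The helper names are hypothetical and the edge sets are illustrative; the keep-if-present-in-at-least-half rule follows the text:

```python
import numpy as np

def lagged_design(series, lag=1):
    """Pair each time point's past values (t - lag) with present values (t),
    so a directed edge 'past of i -> present of j' respects temporal order."""
    return series[:-lag], series[lag:]

def stable_edges(edge_sets, n_repeats):
    """Keep edges appearing in at least half of the bootstrap repetitions."""
    counts = {}
    for edges in edge_sets:
        for e in edges:
            counts[e] = counts.get(e, 0) + 1
    return {e for e, c in counts.items() if c >= n_repeats / 2}

# Time delay: 1-step lag for a toy recording of 2 neurons over 5 time bins.
series = np.arange(10).reshape(5, 2)   # rows = time, columns = neurons
past, present = lagged_design(series)  # causal edges run past -> present

# Bootstrap: edge (0, 1) appears in 3 of 4 subsampled blocks and survives;
# edge (2, 0) appears once and is discarded as spurious.
repeats = [{(0, 1)}, {(0, 1), (2, 0)}, {(0, 1)}, set()]
print(stable_edges(repeats, len(repeats)))  # {(0, 1)}
```

In the full pipeline, each entry of `repeats` would be the edge set inferred by the PC step on one subsampled block of the lagged time series.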
The Neuro-PC algorithm is applicable to a wide range of recordings. In principle, it can be applied to any continuous neural signal, such as spiking rates of individual neurons, calcium dynamics of individual neurons, or fMRI recordings, and across organisms and across parts of the nervous system. In the current paper we investigated the FC from the brain of a 116-day-old male mouse for the application of Neuro-PC and comparison with related approaches. A noteworthy aspect of the dataset used in this paper is that it consists of spike trains recorded simultaneously from neurons by probes inserted in the mouse brain [81, 70]. While the existing FC literature has focused primarily on fMRI recordings from voxels in the brain that represent aggregate states of thousands of neurons [82, 83], the advent of Neuropixels opens the door to individual-neuron recordings.
In further research, we plan a more detailed analysis of the FC of the mouse brain, with more insight into the neurons involved in FC in terms of their locations in the brain, and with comparisons of the neurons in FC across different stimuli, multiple mice, and mice of different ages and sexes. We also aim to investigate the causal FC of other organisms such as C. elegans and zebrafish.
References
- [1] Cornelia I Bargmann and Eve Marder. From the connectome to brain function. Nature methods, 10(6):483, 2013.
- [2] John G White, Eileen Southgate, J Nichol Thomson, and Sydney Brenner. The structure of the nervous system of the nematode caenorhabditis elegans. Philos Trans R Soc Lond B Biol Sci, 314(1165):1–340, 1986.
- [3] Lav R Varshney, Beth L Chen, Eric Paniagua, David H Hall, and Dmitri B Chklovskii. Structural properties of the caenorhabditis elegans neuronal network. PLoS computational biology, 7(2):e1001066, 2011.
- [4] Steven J Cook, Travis A Jarrell, Christopher A Brittin, Yi Wang, Adam E Bloniarz, Maksim A Yakovlev, Ken CQ Nguyen, Leo T-H Tang, Emily A Bayer, Janet S Duerr, et al. Whole-animal connectomes of both caenorhabditis elegans sexes. Nature, 571(7763):63–71, 2019.
- [5] Yonggang Shi and Arthur W Toga. Connectome imaging for mapping human brain pathways. Molecular psychiatry, 22(9):1230–1240, 2017.
- [6] Tabinda Sarwar, Caio Seguin, Kotagiri Ramamohanarao, and Andrew Zalesky. Towards deep learning for connectome mapping: A block decomposition framework. NeuroImage, 212:116654, 2020.
- [7] C Shan Xu, Michal Januszewski, Zhiyuan Lu, Shin-ya Takemura, Kenneth Hayworth, Gary Huang, Kazunori Shinomiya, Jeremy Maitin-Shepard, David Ackerman, and Stuart Berg. A connectome of the adult drosophila central brain. BioRxiv, 2020.
- [8] Wei-Chung Allen Lee and R Clay Reid. Specificity and randomness: structure–function relationships in neural circuits. Current opinion in neurobiology, 21(5):801–807, 2011.
- [9] Nancy J Kopell, Howard J Gritton, Miles A Whittington, and Mark A Kramer. Beyond the connectome: the dynome. Neuron, 83(6):1319–1328, 2014.
- [10] Jimin Kim, William Leahy, and Eli Shlizerman. Neural interactome: Interactive simulation of a neuronal system. Frontiers in Computational Neuroscience, 13:8, 2019.
- [11] Jimin Kim, Julia A Santos, Mark J Alkema, and Eli Shlizerman. Whole integration of neural connectomics, dynamics and bio-mechanics for identification of behavioral sensorimotor pathways in caenorhabditis elegans. bioRxiv, page 724328, 2019.
- [12] R Clay Reid. From functional architecture to functional connectomics. Neuron, 75(2):209–217, 2012.
- [13] Emily S Finn, Xilin Shen, Dustin Scheinost, Monica D Rosenberg, Jessica Huang, Marvin M Chun, Xenophon Papademetris, and R Todd Constable. Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nature neuroscience, 18(11):1664–1671, 2015.
- [14] Eli Shlizerman, Konrad Schroder, and J Nathan Kutz. Neural activity measures and their dynamics. SIAM Journal on Applied Mathematics, 72(4):1260–1291, 2012.
- [15] Andrew T Reid, Drew B Headley, Ravi D Mill, Ruben Sanchez-Romero, Lucina Q Uddin, Daniele Marinazzo, Daniel J Lurie, Pedro A Valdés-Sosa, Stephen José Hanson, Bharat B Biswal, et al. Advancing functional connectivity research from association to causation. Nature neuroscience, 1(10), 2019.
- [16] Nicholas J Foti and Emily B Fox. Statistical model-based approaches for functional connectivity analysis of neuroimaging data. Current opinion in neurobiology, 55:48–54, 2019.
- [17] Xi-Nian Zuo, Ross Ehmke, Maarten Mennes, Davide Imperati, F Xavier Castellanos, Olaf Sporns, and Michael P Milham. Network centrality in the human functional connectome. Cerebral cortex, 22(8):1862–1875, 2012.
- [18] Baxter P Rogers, Victoria L Morgan, Allen T Newton, and John C Gore. Assessing functional connectivity in the human brain by fmri. Magnetic resonance imaging, 25(10):1347–1357, 2007.
- [19] Maria Giulia Preti, Thomas AW Bolton, and Dimitri Van De Ville. The dynamic functional connectome: State-of-the-art and perspectives. Neuroimage, 160:41–54, 2017.
- [20] Yuting Xu and Martin A Lindquist. Dynamic connectivity detection: an algorithm for determining functional connectivity change points in fmri data. Frontiers in neuroscience, 9:285, 2015.
- [21] Chong-Yaw Wee, Pew-Thian Yap, and Dinggang Shen. Diagnosis of autism spectrum disorders using temporally distinct resting-state functional connectivity networks. CNS neuroscience & therapeutics, 22(3):212–219, 2016.
- [22] Gaël Varoquaux, Alexandre Gramfort, Jean-Baptiste Poline, and Bertrand Thirion. Brain covariance selection: better individual functional connectivity models using population prior. In Advances in neural information processing systems, pages 2334–2342, 2010.
- [23] Stephen M Smith, Karla L Miller, Gholamreza Salimi-Khorshidi, Matthew Webster, Christian F Beckmann, Thomas E Nichols, Joseph D Ramsey, and Mark W Woolrich. Network modelling methods for fmri. Neuroimage, 54(2):875–891, 2011.
- [24] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
- [25] Elena A Allen, Eswar Damaraju, Sergey M Plis, Erik B Erhardt, Tom Eichele, and Vince D Calhoun. Tracking whole-brain connectivity dynamics in the resting state. Cerebral cortex, 24(3):663–676, 2014.
- [26] Pablo Barttfeld, Lynn Uhrig, Jacobo D Sitt, Mariano Sigman, Béchir Jarraya, and Stanislas Dehaene. Signature of consciousness in the dynamics of resting-state brain activity. Proceedings of the National Academy of Sciences, 112(3):887–892, 2015.
- [27] Ivor Cribben, Ragnheidur Haraldsdottir, Lauren Y Atlas, Tor D Wager, and Martin A Lindquist. Dynamic connectivity regression: determining state-related changes in brain connectivity. Neuroimage, 61(4):907–920, 2012.
- [28] Hilary A Marusak, Vince D Calhoun, Suzanne Brown, Laura M Crespo, Kelsey Sala-Hamrick, Ian H Gotlib, and Moriah E Thomason. Dynamic functional connectivity of neurocognitive networks in children. Human brain mapping, 38(1):97–108, 2017.
- [29] Barnaly Rashid, Eswar Damaraju, Godfrey D Pearlson, and Vince D Calhoun. Dynamic connectivity states estimated from resting fmri identify differences among schizophrenia, bipolar disorder, and healthy control subjects. Frontiers in human neuroscience, 8:897, 2014.
- [30] Chong-Yaw Wee, Sen Yang, Pew-Thian Yap, Dinggang Shen, Alzheimer’s Disease Neuroimaging Initiative, et al. Sparse temporally dynamic resting-state functional connectivity networks for early mci identification. Brain imaging and behavior, 10(2):342–356, 2016.
- [31] Ivor Cribben, Tor Wager, and Martin Lindquist. Detecting functional connectivity change points for single-subject fmri data. Frontiers in computational neuroscience, 7:143, 2013.
- [32] Eswar Damaraju, Elena A Allen, Aysenil Belger, Judith M Ford, S McEwen, DH Mathalon, BA Mueller, GD Pearlson, SG Potkin, A Preda, et al. Dynamic functional connectivity analysis reveals transient states of dysconnectivity in schizophrenia. NeuroImage: Clinical, 5:298–308, 2014.
- [33] Hexuan Liu, Jimin Kim, and Eli Shlizerman. Functional connectomics from neural dynamics: probabilistic graphical models for neuronal network of caenorhabditis elegans. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1758):20170377, 2018.
- [34] Pedro A Valdes-Sosa, Alard Roebroeck, Jean Daunizeau, and Karl Friston. Effective connectivity: influence, causality and biophysical modeling. Neuroimage, 58(2):339–361, 2011.
- [35] Joseph D Ramsey, Stephen José Hanson, Catherine Hanson, Yaroslav O Halchenko, Russell A Poldrack, and Clark Glymour. Six problems for causal inference from fmri. NeuroImage, 49(2):1545–1558, 2010.
- [36] BT Thomas Yeo, Fenna M Krienen, Jorge Sepulcre, Mert R Sabuncu, Danial Lashkari, Marisa Hollinshead, Joshua L Roffman, Jordan W Smoller, Lilla Zöllei, Jonathan R Polimeni, et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of neurophysiology, 106(3):1125, 2011.
- [37] Jonathan D Power, Alexander L Cohen, Steven M Nelson, Gagan S Wig, Kelly Anne Barnes, Jessica A Church, Alecia C Vogel, Timothy O Laumann, Fran M Miezin, Bradley L Schlaggar, et al. Functional network organization of the human brain. Neuron, 72(4):665–678, 2011.
- [38] Jonathan D Power and Steven E Petersen. Control-related systems in the human brain. Current opinion in neurobiology, 23(2):223–228, 2013.
- [39] Stephen M Smith, Peter T Fox, Karla L Miller, David C Glahn, P Mickle Fox, Clare E Mackay, Nicola Filippini, Kate E Watkins, Roberto Toro, Angela R Laird, et al. Correspondence of the brain’s functional architecture during activation and rest. Proceedings of the National Academy of Sciences, 106(31):13040–13045, 2009.
- [40] Alex Tank, Emily B Fox, and Ali Shojaie. Granger causality networks for categorical time series. arXiv preprint arXiv:1706.02781, 2017.
- [41] Anil K Seth, Adam B Barrett, and Lionel Barnett. Granger causality analysis in neuroscience and neuroimaging. Journal of Neuroscience, 35(8):3293–3297, 2015.
- [42] Karl Friston, Rosalyn Moran, and Anil K Seth. Analysing connectivity with granger causality and dynamic causal modelling. Current opinion in neurobiology, 23(2):172–178, 2013.
- [43] Hongteng Xu, Mehrdad Farajtabar, and Hongyuan Zha. Learning granger causality for hawkes processes. In International Conference on Machine Learning, pages 1717–1726, 2016.
- [44] Maksim G Sharaev, Viktoria V Zavyalova, Vadim L Ushakov, Sergey I Kartashov, and Boris M Velichkovsky. Effective connectivity within the default mode network: dynamic causal modeling of resting-state fmri data. Frontiers in human neuroscience, 10:14, 2016.
- [45] Klaas Enno Stephan, Will D Penny, Rosalyn J Moran, Hanneke EM den Ouden, Jean Daunizeau, and Karl J Friston. Ten simple rules for dynamic causal modeling. Neuroimage, 49(4):3099–3109, 2010.
- [46] Alex Tank, Ian Covert, Nicholas Foti, Ali Shojaie, and Emily Fox. Neural granger causality for nonlinear time series. arXiv preprint arXiv:1802.05842, 2018.
- [47] Patrick A Stokes and Patrick L Purdon. A study of problems encountered in granger causality analysis from a neuroscience perspective. Proceedings of the national academy of sciences, 114(34):E7063–E7072, 2017.
- [48] Rainer Dahlhaus and Michael Eichler. Causality and graphical models in time series analysis. Oxford Statistical Science Series, pages 115–137, 2003.
- [49] Ruocheng Guo, Lu Cheng, Jundong Li, P Richard Hahn, and Huan Liu. A survey of learning causality with data: Problems and methods. arXiv preprint arXiv:1809.09337, 2018.
- [50] Karl Friston. Causal modelling and brain connectivity in functional magnetic resonance imaging. PLoS biology, 7(2), 2009.
- [51] Mariusz Maziarz. A review of the granger-causality fallacy. The journal of philosophical economics: Reflections on economic and social issues, 8(2):86–105, 2015.
- [52] Xinling Guo, Qiaosheng Zhang, Amrita Singh, Jing Wang, and Zhe Sage Chen. Granger causality analysis of rat cortical functional connectivity in pain. Journal of Neural Engineering, 17(1):016050, 2020.
- [53] Jianping Qiao, Zhishun Wang, Guihu Zhao, Yuankai Huo, Carl L Herder, Chamonix O Sikora, and Bradley S Peterson. Functional neural circuits that underlie developmental stuttering. PLoS One, 12(7):e0179255, 2017.
- [54] Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009.
- [55] Hexuan Liu, Jimin Kim, and Eli Shlizerman. Functional connectomics from neural dynamics: probabilistic graphical models for neuronal network of caenorhabditis elegans. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1758):20170377, 2018.
- [56] Swathi P Iyer, Izhak Shafran, David Grayson, Kathleen Gates, Joel T Nigg, and Damien A Fair. Inferring functional connectivity in mri using bayesian network structure learning with a modified pc algorithm. Neuroimage, 75:165–175, 2013.
- [57] Huange Wang, Fred A van Eeuwijk, and Johannes Jansen. The potential of probabilistic graphical models in linkage map construction. Theoretical and Applied Genetics, 130(2):433–444, 2017.
- [58] Christine Sinoquet. Probabilistic graphical models for genetics, genomics, and postgenomics. OUP Oxford, 2014.
- [59] Raphaël Mourad, Christine Sinoquet, and Philippe Leray. Probabilistic graphical models for genetic association studies. Briefings in bioinformatics, 13(1):20–33, 2012.
- [60] Junbai Wang, Leo Wang-Kit Cheung, and Jan Delabie. New probabilistic graphical models for genetic regulatory networks studies. Journal of biomedical informatics, 38(6):443–455, 2005.
- [61] Nir Friedman. Inferring cellular networks using probabilistic graphical models. Science, 303(5659):799–805, 2004.
- [62] Clark Glymour, Peter Spirtes, and Richard Scheines. Causal inference. Erkenntnis, 35(1-3):151–189, 1991.
- [63] Peter Spirtes, Clark N Glymour, Richard Scheines, and David Heckerman. Causation, prediction, and search. MIT press, 2000.
- [64] David Maxwell Chickering. Optimal structure identification with greedy search. Journal of machine learning research, 3(Nov):507–554, 2002.
- [65] Markus Kalisch and Peter Bühlmann. Estimating high-dimensional directed acyclic graphs with the pc-algorithm. Journal of Machine Learning Research, 8(Mar):613–636, 2007.
- [66] John Geweke. Measurement of linear dependence and feedback between multiple time series. Journal of the American statistical association, 77(378):304–313, 1982.
- [67] Mingzhou Ding, Yonghong Chen, and Steven L Bressler. Granger causality: basic theory and application to neuroscience. Handbook of time series analysis: recent theoretical developments and applications, 437, 2006.
- [68] Ariel Rokem, M Trumpis, and F Perez. Nitime: time-series analysis for neuroimaging data. In Proceedings of the 8th Python in Science Conference, pages 68–75, 2009.
- [69] Saskia EJ de Vries, Jerome A Lecoq, Michael A Buice, Peter A Groblewski, Gabriel K Ocker, Michael Oliver, David Feng, Nicholas Cain, Peter Ledochowitsch, Daniel Millman, et al. A large-scale standardized physiological survey reveals functional organization of the mouse visual cortex. Nature Neuroscience, 23(1):138–151, 2020.
- [70] Allen-Brain-Observatory. Allen institute for brain science. Available from: https://portal.brain-map.org/explore/circuits/visual-coding-neuropixels, October, 2019.
- [71] James J Jun, Nicholas A Steinmetz, Joshua H Siegle, Daniel J Denman, Marius Bauza, Brian Barbarits, Albert K Lee, Costas A Anastassiou, Alexandru Andrei, Çağatay Aydın, et al. Fully integrated silicon probes for high-density recording of neural activity. Nature, 551(7679):232–236, 2017.
- [72] Ron D Frostig, Yechezkel Gottlieb, Eilon Vaadia, and Moshe Abeles. The effects of stimuli on the activity and functional connectivity of local neuronal groups in the cat auditory cortex. Brain research, 272(2):211–221, 1983.
- [73] Armen C Arevian, Vikrant Kapoor, and Nathaniel N Urban. Activity-dependent gating of lateral inhibition in the mouse olfactory bulb. Nature neuroscience, 11(1):80–87, 2008.
- [74] Pedro A Valdes-Sosa, Alard Roebroeck, Jean Daunizeau, and Karl Friston. Effective connectivity: influence, causality and biophysical modeling. Neuroimage, 58(2):339–361, 2011.
- [75] Karl J Friston. Functional and effective connectivity: a review. Brain connectivity, 1(1):13–36, 2011.
- [76] Barry Horwitz. The elusive concept of brain connectivity. Neuroimage, 19(2):466–470, 2003.
- [77] David Marc Anton Mehler and Konrad Paul Kording. The lure of causal statements: Rampant mis-inference of causality in estimated connectivity. arXiv preprint arXiv:1812.03363, 2018.
- [78] Clark Glymour, Kun Zhang, and Peter Spirtes. Review of causal discovery methods based on graphical models. Frontiers in genetics, 10:524, 2019.
- [79] Valentina Emiliani, Adam E Cohen, Karl Deisseroth, and Michael Häusser. All-optical interrogation of neural circuits. Journal of Neuroscience, 35(41):13917–13926, 2015.
- [80] Laura Schulz, Tamar Kushnir, and Alison Gopnik. Learning from doing: Intervention and causal inference. Causal learning: Psychology, philosophy, and computation, pages 67–85, 2007.
- [81] Nicholas A Steinmetz, Christof Koch, Kenneth D Harris, and Matteo Carandini. Challenges and opportunities for large-scale electrophysiology with neuropixels probes. Current opinion in neurobiology, 50:92–100, 2018.
- [82] Martijn P Van Den Heuvel and Hilleke E Hulshoff Pol. Exploring the brain network: a review on resting-state fmri functional connectivity. European neuropsychopharmacology, 20(8):519–534, 2010.
- [83] Joseph D Ramsey, Stephen José Hanson, Catherine Hanson, Yaroslav O Halchenko, Russell A Poldrack, and Clark Glymour. Six problems for causal inference from fmri. NeuroImage, 49(2):1545–1558, 2010.