
Critical behaviour at the onset of synchronization in a neuronal model

Amin Safaeesirat Physics department, Sharif University of Technology, P.O. Box 11155-9161, Tehran, Iran    Saman Moghimi-Araghi Physics department, Sharif University of Technology, P.O. Box 11155-9161, Tehran, Iran
Abstract

The coexistence of critical behaviour and oscillating patterns in brain dynamics is a very interesting issue. In this paper, we consider a model of a neuron population in which each neuron is modeled by an over-damped rotator. We find that in the space of external parameters there exist regions where the system shows synchronization. Interestingly, just at the transition point, the avalanche statistics show power-law behaviour. Also, in the case of small systems, (partial) synchronization and power-law behavior can happen at the same time.

I Introduction

Scale-free and critical behavior has long been observed in animal and human brain experiments, both in vitro and in vivo Beggs ; Peterman ; Gireesh ; Hahn ; Priesemann ; Pasq . Neuronal avalanches, the bursts of activity and spiking of neurons that spread over large areas of the brain network, exhibit power-law forms in their size and duration distributions. Other data, such as human electroencephalography (EEG) recordings, show similar patterns in the background activity of the brain Freeman ; Shriki .

The criticality of neuronal systems is supported by the existence of power-law behavior and data collapse. Being at criticality is thought to play a crucial role in some of the brain's fundamental properties, like optimal response and the storage and transfer of information Shew ; Kinouchi ; ShewPlenz ; Gautam ; Legenstein . As there is no need to tune any parameter to arrive at criticality, it was suggested that the brain is an example of Self-Organized Critical (SOC) systems Hesse . The idea of self-organized criticality was introduced by Bak, Tang, and Wiesenfeld BTW , who showed, using a simple sandpile model, that some complex systems can organize themselves towards their critical point, so that there is no need to regulate external parameters. A range of models was introduced to show that self-organized criticality is plausible in neural systems Bornholdt ; De Archangelis ; Levina ; Millmann ; Meisel ; Tetzlaff ; Kossio . However, being at criticality does not necessarily imply that the system arrives there through a self-organizing mechanism. Rather, it has been argued that neural systems may not be an example of SOC systems Bonachela . Also, for the brain network, which has a highly hierarchical and modular structure, the critical points are generalized to critical regions, similar to those of Griffiths phases Moretti ; Moosavi , and there may be no need to fine-tune a parameter. Actually, it seems that a long-term evolutionary mechanism would be enough to explain why the brain works at criticality.

Therefore, some efforts have been made to understand how power laws can arise in models of the neuronal system even if some external parameters need to be tuned Zare . Yet it remains to be understood which phase transition determines the critical state. One of the most promising candidates is the synchronization transition. It has been found that the brain is more or less maintained at the transition point to synchronization Gautam ; Poil ; Dalla , and that the criticality in avalanche statistics can happen at this transition point Fontenele ; Montakhab . Also, a general framework based on Landau-Ginzburg theory has been proposed to describe the dynamics of the cortex, and it is found that at the edge of synchronization scale-free avalanches appear Munoz . In similar studies, it is found that the bifurcation that leads to successive firings of a neuron might have an effect on the overall dynamics.

In this paper, we study a network of neurons modeled by over-damped rotators to mimic type I neurons, rather than the usual integrate-and-fire model that more or less mimics type II neurons' behaviour. Our setup is therefore similar to that of the Kuramoto model Kuramoto , and hence each oscillator is identified by a single phase variable. This helps us to study the synchronization of the system more easily. Of course, the dynamics, especially the way two neighbours affect each other, is very different from the Kuramoto model and therefore has completely different features. We explore the phase space of external parameters and find the transition lines of synchronization. The main parameters we have explored are the external stimulation power, the relative strength of inhibitory neurons to excitatory ones, and the axonal time delay. We analyze the system's avalanche size statistics at different points of the parameter space and, interestingly, observe that right at the transition the avalanche size and period statistics obey power laws.

II Models and Methods: The single neuron model

There are many models for single neuron dynamics. However, for simulating relatively large networks, simple models should be chosen. A simple model keeps only the main characteristics of the system and throws away other details. For a neuron, this main characteristic may be stated as follows: if a relatively small stimulation is applied to a neuron, it doesn't react considerably. However, a neuron that receives a slowly varying current emits a fairly regular train of spikes, and in this situation it may be regarded as an oscillator. In the language of dynamical systems, when a neuron is sufficiently stimulated, the stable fixed point is lost and its movement in phase space can be regarded as a limit cycle, at least at short times. Therefore, for the single neuron model, it is good to consider a simple model that has such a cycle when it is stimulated. The simplest candidate might be the well-known leaky integrate-and-fire (LIF) model; however, as we will discuss later, the LIF model more or less mimics the behavior of Type II neurons, that is, there is a "discontinuity" in the frequency of successive firings of the modeled neuron, or its activity, at the onset of activity. In fact the activity is continuous but with a logarithmic singularity, which in some sense is equivalent to a discontinuity. In contrast, Type I neurons are characterized mainly by the appearance of oscillations with arbitrarily low frequency as the current is injected. In our study, we prefer to use a simple model that behaves more closely to a type I neuron.

A suitable model for type I neurons is given by the equation

dϕ/dt = I − sin ϕ,  (1)

where I is the external stimulation and ϕ is the dynamical variable. The model has been considered before as a simple model for the dynamics of neurons and is closely related to the quadratic non-linear integrate-and-fire model Ermentrout-1 ; Ermentrout-2 ; Ermen .

Equation (1) can be interpreted as an over-damped rotator in the following way: consider a rod pendulum of mass m and length l driven by a constant torque in a viscous medium. The equation of motion of such a pendulum is given by:

m l² d²ϕ/dt² + b dϕ/dt + m g l sin ϕ = T  (2)

where ϕ is the angle of the rotating rod with the downward vertical, b is the viscous damping constant, g is the acceleration due to gravity, and T is the external torque.

Equation (2) is a second-order system; however, in the limit of very large b, where the rotator is over-damped, it may be approximated by a first-order system. Neglecting the inertia term m l² d²ϕ/dt² and non-dimensionalizing the resulting equation, one arrives at equation (1) Strogatz .
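To make this reduction explicit, one can drop the inertia term and measure time in units of b/(mgl); a one-line derivation consistent with Eqs. (1) and (2) reads

```latex
b\frac{d\phi}{dt} \simeq T - mgl\sin\phi
\quad\Longrightarrow\quad
\frac{d\phi}{dt'} = I - \sin\phi,
\qquad t' \equiv \frac{mgl}{b}\,t,
\quad I \equiv \frac{T}{mgl}.
```

so the dimensionless drive I is the torque measured in units of the maximum gravitational torque.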

Although equation (2) has very rich dynamics Strogatz , the over-damped limit is sufficient for modeling single neuron dynamics. It shows a saddle-node bifurcation at I = 1: when I < 1 there exist two distinct fixed points, one stable and the other unstable. If I > 1, there are no fixed points and the rotator overturns continually. The rest state of the neuron corresponds to the stable fixed point, and when I > 1 and the rotator overturns continually, the neuron fires repeatedly.

It is more convenient to take θ = ϕ − π/2 and rewrite Eq. 1 as

dθ/dt = I − cos θ.  (3)

Consider the case I < 1. The two equilibrium points are shown in figure 1.

Figure 1: Two fixed points of the system for I < 1 in the over-damped limit (very large b). Here T is the external torque exerted on the system from outside (see Eq. 2). The system will stay at its stable fixed point if there is no external stimulation. If the stimulation is strong enough that the system passes its unstable fixed point, it will return to the stable fixed point counterclockwise. We define this as a "spike" in our system.

When the system is at rest, it stands at the stable fixed point. If the rotator is stimulated slightly, it returns to the stable fixed point. However, if the stimulation is large enough to take the rotator beyond the unstable fixed point, or the threshold, it rotates a complete circle and then arrives at the stable fixed point. We call this complete rotation of the rotator a spike. Of course, it is usually considered that the "neuron" spikes when θ = π, that is, it produces an action potential right at that moment. This model can be mapped to the quadratic integrate-and-fire neuron by the transformation u = tan(θ/2) Ermen . It is easy to show that Eq. 3 models type I neurons. If the current applied to the neuron exceeds the threshold value I = 1, it spikes repeatedly. The inter-spike interval (ISI) is determined via

ISI = ∫_{t_f}^{t_f+ISI} dt = ∫_{θ_f}^{θ_f+2π} dθ/(I − cos θ) = 2π/√(I² − 1)  (4)

where t_f is the spike time and θ_f is the spike angle. Therefore, the gain function is

f(I) = 1/ISI = √(I² − 1)/(2π),  (5)

and hence it grows as √(I² − 1) for currents slightly above the threshold. Note that the gain function behaves as (I − 1)^μ with μ > 0 and is therefore continuous. For μ < 0, the function is clearly discontinuous, and in the case μ = 0 it is usually regarded as having a jump and is therefore discontinuous as well. In the case of the leaky integrate-and-fire model, with dynamics given by v̇ = I − v and threshold potential v_Th = 1, the gain function is proportional to −1/log(I − 1) ≃ (I − 1)⁰, which resembles a discontinuity, although the function is actually continuous. Therefore, to mimic type I neurons, we prefer to choose the over-damped rotator as our single-neuron model.
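As a sanity check on Eqs. (4) and (5), one can integrate Eq. (3) numerically over one revolution and compare with the closed form. The sketch below (a simple forward-Euler scheme, with helper names of our own, not the paper's code) does this:

```python
import numpy as np

def isi_numeric(I, dt=1e-5):
    """Euler-integrate dθ/dt = I - cos θ over one full revolution.

    Starting at the spike angle θ = π, the next spike occurs at θ = 3π,
    so the elapsed time is one inter-spike interval."""
    theta, t = np.pi, 0.0
    while theta < 3.0 * np.pi:
        theta += (I - np.cos(theta)) * dt
        t += dt
    return t

def isi_analytic(I):
    """Eq. (4): ISI = 2π / sqrt(I² - 1), valid for I > 1."""
    return 2.0 * np.pi / np.sqrt(I**2 - 1.0)
```

For I = 1.5 the two agree to well under a percent, and near threshold (I → 1⁺) the ISI diverges, which is exactly the type I signature discussed above.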

III Models and Methods: the network of neurons

We have considered a network composed of N neurons modeled by over-damped rotators, where 80% of them are randomly chosen to be excitatory neurons and the rest are inhibitory ones. Each neuron, regardless of being excitatory or inhibitory, receives a constant number of internal connections, that is, connections from a fixed number of neurons within the network. We denote this number by C and write C = C_E + C_I, where C_E and C_I are the numbers of connections from excitatory and inhibitory neurons, respectively. We take the ratio of inhibitory inputs to excitatory ones to be the same as the population ratio, that is, C_I/C_E = 1/4. The neurons also receive C_ext connections from external neurons. C_ext is taken to be equal to the number of excitatory connections from the network (C_ext = C_E). The external neurons fire randomly, so that the intervals between two successive firings have a Poisson distribution with average T_ext = 1/ν_ext. These external neurons effectively play the role of an external (noisy) current; therefore we set I = 0 in the dynamics of the over-damped rotators (Eq. 3) and control the external current by tuning ν_ext.

When rotator i arrives at θ = π, it fires and hence changes the phase of its neighbour j by the amount J_ij. For presynaptic excitatory neurons J_ij = J and for presynaptic inhibitory ones J_ij = −gJ. The same strength J is considered for the external (excitatory) neurons. Also, we assume that if neuron i fires at time t, the spike arrives at neuron j at t + D, that is, an axonal time delay is introduced to the system.
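A minimal sketch of how such a fixed-in-degree random wiring might be set up (the variable names, the use of NumPy, and the example value g = 4 are our own choices, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4000                      # total number of neurons
n_exc = int(0.8 * N)          # indices 0..n_exc-1 excitatory, the rest inhibitory
C_E, C_I = 80, 20             # in-degree split: C = C_E + C_I = 100, C_I/C_E = 1/4
J, g = 0.015, 4.0             # synaptic strength and relative inhibition

presyn, weight = [], []       # presyn[i]: the C presynaptic neurons of neuron i
for i in range(N):
    exc = rng.choice(n_exc, size=C_E, replace=False)
    inh = n_exc + rng.choice(N - n_exc, size=C_I, replace=False)
    presyn.append(np.concatenate([exc, inh]))
    weight.append(np.concatenate([np.full(C_E, J), np.full(C_I, -g * J)]))
```

Each neuron would additionally receive C_ext = C_E external Poisson inputs of strength J. Note that for g = 4 the mean recurrent input cancels, since C_E·J = g·C_I·J.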

Now we can write the dynamics of neuron i in the following form

∂θ_i/∂t = −cos θ_i + Σ_j J_ij Σ_k δ(t − t_j^k − D),  (6)

where the first summation is over the presynaptic neurons connected to neuron i, and the second summation is over the spikes of neuron j. The delta function δ(t − t_j^k − D) represents the kth spike from neuron j arriving at t = t_j^k + D.

Usually, the models describing networks of neurons have a handful of parameters and are difficult to analyze completely. In our model, ν_ext, g, D, J, and N are the five adjustable parameters. The role of each parameter is obvious: ν_ext controls the strength of the external drive, g controls the relative strength of inhibition to excitation within the network, D is the axonal time delay, J controls the strength of the synapses, and N is simply the total number of neurons. To make the parameter space smaller, we have fixed J = 0.015 and C = 100. We investigate the dynamical characteristics of the system by exploring considerable parts of the remaining 3-dimensional space.
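The delayed-kick update of Eq. (6) can be sketched as follows on a toy network; the all-to-all excitatory wiring, parameter values, and names are our own simplifications, chosen only to illustrate how spikes emitted at time t are delivered at t + D:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

N, J, D, dt = 50, 0.015, 1.5, 0.01       # toy size; D is the axonal delay
nu_ext, C_ext = 1.5, 40                   # external Poisson drive per neuron
W = np.full((N, N), J)                    # W[post, pre]: toy all-to-all kicks
np.fill_diagonal(W, 0.0)

delay_steps = int(round(D / dt))
theta = rng.uniform(-np.pi, np.pi, N)
# spikes emitted at step t are delivered at step t + delay_steps
buffer = deque([np.zeros(N) for _ in range(delay_steps)], maxlen=delay_steps)

for _ in range(2000):
    arriving = buffer[0]                              # spikes emitted D ago
    ext = J * rng.poisson(C_ext * nu_ext * dt, N)     # external kicks (I = 0)
    theta = theta + dt * (-np.cos(theta)) + W @ arriving + ext
    fired = theta >= np.pi                            # crossing θ = π is a spike
    theta = np.where(fired, theta - 2.0 * np.pi, theta)
    buffer.append(fired.astype(float))
```

The deque acts as a circular spike buffer of length D/Δt; with I = 0 the rest state sits at θ = −π/2 and only the Poisson drive and recurrent kicks can push a rotator past threshold.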

IV Synchronization of the neurons

We have simulated populations of neurons to find out whether synchronization happens throughout the system or not. It has been observed that in such neuron populations synchronization might happen Brunell ; Montakhab ; Shahbazi ; Luccioli . For example, in Brunell a network similar to ours is studied, and it has been observed that in the presence of delay one can find regions where the system shows synchronization. In that paper, the network is studied for different values of the external drive and of the relative strength of inhibition to excitation, and regions in this space where synchronization happens are found. Also, in Valizade17 , using a Kuramoto-like model, it has been shown that in the presence of delay a transition from asynchrony to synchrony happens when the level of input to the network is changed. Note that although we have rotators as our single neuron model, our model differs from the Kuramoto model, both in the single agent dynamics and in the interactions between two adjacent neurons: it has a preferred angle θ_0 (corresponding to the rest state of the neurons) and, when a neuron fires, it kicks the post-synaptic neuron. In the Kuramoto model, there is no preferred angle, and the interactions come from a term proportional to the sine of the angle difference of the two neurons. Quite recently, a model has been proposed in which all the above terms are present: the intrinsic angular velocity of the rotator and the sine term of the Kuramoto model, together with the preferred rest point and the kicks of our model Monuz20 . Within this model both synchronous and asynchronous phases have been observed and a rich phase diagram has been obtained. Also, at some transition points, scaling laws in the avalanche statistics are found.

Figure 2: The plot illustrates the effective potential for the rotator. The rotator tends to stay at the local minima of the function, where θ = 2kπ − cos⁻¹ I with k an integer. Due to the outside stimulation, over time the rotator will reach the local maximum, consequently spike, and go to the next local minimum. The plot is sketched for I = 0.3.

To distinguish the synchronous phase from the asynchronous one, we need a suitable order parameter which vanishes when the system is not synchronous and takes a non-zero value otherwise. In the context of Kuramoto-type rotators Kuramoto , the order parameter is defined as r = |(1/N) Σ_{k=1}^N e^{iθ_k}|, where θ_k is the phase of the kth rotator. If all the rotators are in phase, r will be unity, and if the rotators are all independent, it will be of the order of 1/√N, which becomes negligible for sufficiently large systems. Fortunately, we have phases as our dynamical variables, and at first sight one thinks of the same order parameter. However, as we discuss below, we find this definition inappropriate for our model. Our single-neuron model is given by Eq. 3. To gain better insight, let us interpret the same equation as a Langevin-type equation:

dθ_i/dt = I − cos θ_i + η_i = −∂V(θ_i)/∂θ_i + η_i,  (7)

where we have collected the effect of network spikes and external spikes in the noise term η_i and have defined the potential V as

V(θ) = −Iθ + sin θ.  (8)

This potential function is sketched schematically in figure 2 for I = 0.3.

The rotator tries to reach the local minimum, which is identified as the rest state of a neuron. However, because of the noise term, which results from the stimulations received by the neurons, it may go uphill and then fall off to the next minimum (and as a result fire). Therefore, it is very probable to find the rotator at the point cos θ = I, or equivalently to find the neuron at rest. In such a situation, even if the firings of the neurons are irregular and asynchronous, the order parameter can take a non-vanishing value. In fact, if the ratio of the neurons at rest to the total number of neurons is μ, the order parameter will be μ + O(1/√N). Therefore, especially for small external stimulation, this order parameter is useless: it does not capture synchronized "activity".

If we want to see whether the "activity" is synchronous, it is better to consider just the neurons that are firing. In the case of rotators, we may consider only the rotators with phase in the interval [π/2, 3π/2]. We will call these the spiking rotators. However, within this interval, the real part of e^{iθ} is always negative and its average will never be zero. On the other hand, within the same interval, the imaginary part of e^{iθ} (sin θ) ranges from −1 to 1. Consider the quantity r₁ = (1/N_a) Σ_k sin θ_k, where N_a is the number of spiking rotators and the summation is just over the set of spiking rotators. If the network activity is irregular, then r₁ will be of the order of 1/√N_a; however, if all the rotators spike synchronously, that is θ_k(t) ≃ f(t) for all spiking rotators, then r₁ = sin(f(t)). This function fluctuates in the interval [−1, 1], but we need just a single number to distinguish the synchronous phase from the asynchronous one. This leads us to the last step, defining the proper order parameter as

m = ⟨[(1/N_a) Σ_{k=1}^{N_a} sin θ_k]²⟩,  (9)

where ⟨·⟩ means time averaging over a period comparable with the spike period. In this way, if the network activity is irregular, m = O(1/N_a), and in the case of synchronous activity m = O(1). We just have to check that N_a is not very small; as we will see, in all the regions of our model we are interested in, the activity is large enough.
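Eq. (9) can be computed directly from recorded phases; a sketch (function and argument names are ours, not the paper's code):

```python
import numpy as np

def order_parameter(theta_t):
    """Eq. (9): time average of the squared mean of sin θ over spiking rotators.

    theta_t: array of shape (timesteps, N), phases in [0, 2π).
    Only rotators in the spiking window [π/2, 3π/2] enter the average."""
    vals = []
    for theta in theta_t:
        spiking = theta[(theta >= np.pi / 2) & (theta <= 3 * np.pi / 2)]
        if spiking.size > 0:
            vals.append(np.mean(np.sin(spiking)) ** 2)
    return float(np.mean(vals)) if vals else 0.0
```

For a fully synchronous sweep (all phases equal and moving through the window) this gives m of order one, while independent uniform phases give m = O(1/N_a).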

Figure 3: The contour plots show the order parameter (Eq. 9) of systems of 4000 neurons for different values of g (inhibitory strength), ν_ext (average external stimulation frequency), and delay. The brighter areas in the plots are the synchronous ones. The color varies between blue and pink, so that the higher the order parameter, the closer the color is to pink. Not only is the delay necessary for the existence of synchronization, but different values of the delay can also change the location of the synchronous regions. As the delay increases, they are pushed to lower ν_ext and dwindle. a) D = 0.5, b) D = 1, c) D = 1.5 and d) D = 2.5. The resolution of the graphs is Δg = 0.15 and Δν_ext = 0.06. The points shown in plot "c" are selected as representatives of different states of the system. The activity patterns and the corresponding histograms of avalanche size and period for these points are shown in Fig. 4 and Fig. 6, respectively.

In the simulations, we have considered several samples with different sizes ranging from 1000 neurons to 16000. Also, we have considered several random configurations of links with the above-mentioned statistics. As for the initial condition, we have assigned a random phase between 0 and 2π to each rotator and have numerically integrated the differential equations. The time step was set to Δt = 0.01. After the system arrives at the steady state, which typically happens in no more than 20000 temporal steps, we begin our actual measurements.

A central quantity of a neuron population is its activity. The activity time series A(t, δt) is defined as the number of spikes in the network between t and t + δt. The parameter δt should be taken much smaller than the average period of a single neuron's successive firings, but not so small that only single firings are found within each interval. We have taken δt of the order of Δt, which seems appropriate for our systems; therefore we fix δt = Δt and will simply denote A(t, δt) by A(t).
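Computing A(t) from recorded spike times amounts to a histogram with bin width δt; a sketch with names of our own:

```python
import numpy as np

def activity(spike_times, t_max, dt=0.01):
    """A(t): total number of network spikes in each bin [t, t + δt)."""
    edges = np.arange(0.0, t_max + dt, dt)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts
```

The whole network's spike times go in together, so each entry counts spikes from all neurons in one temporal bin.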

Within the same networks, we have calculated the order parameter m (Eq. 9) for different values of ν_ext (the external stimulation strength), g (the relative strength of inhibition to excitation), and the delay parameter D. Figure 3 shows the order parameter in the ν_ext–g plane for four different values of D. We have considered ν_ext > 1, where the network is active. In the simulations we have set N = 4000 and J = 0.015, and D takes the values 0.5, 1, 1.5 and 2.5 in sub-figures (a) to (d), respectively. For all values of D, there are two distinct regions where the order parameter takes non-zero values. These regions are pushed towards the line ν_ext = 1 as D is increased. In the case D = 0.5, only one of the two regions is seen in the graph, as the other is at much higher values of ν_ext. We have also checked the case D = 0 over a larger area (up to ν_ext = 60) and observed no synchronous phases. In other words, the presence of the delay plays a crucial role in this collective behaviour. For a larger delay, a smaller external stimulation is needed to arrive at synchronization. We will denote the left region by L-Synch. and the right one by R-Synch. In the L-Synch. region the excitatory neurons dominate, and in the R-Synch. region the inhibitory neurons dominate.

To see how the order parameter actually distinguishes synchronous from asynchronous activity, we have plotted the activity pattern of the system for a few points in the phase diagram of Fig. 3(c), that is, for D = 1.5. The points selected are a = (g, ν_ext) = (1, 2.5), e = (9, 1.5) and d = (4, 3.5), which lie in the L-Synch. region, the R-Synch. region, and the region where m ≃ 0, respectively. The order parameters of these points are m_a = 0.62, m_e = 0.66 and m_d = 0.001. Fig. 4 illustrates the activity patterns together with the corresponding raster plots of the system at these points. It is clear that the order parameter m is completely sensitive to synchronization. We have checked similar graphs for many different points and found the order parameter consistent with the activity patterns.

Figure 4: The activity of the system for points located at different parts of the phase diagram (Fig. 3(c)). For each point, the raster plot and the activity of the system are sketched. Also, the average activity of the network is shown by a red dashed line. The order parameters for a = (g, ν_ext) = (1, 2.5), b = (2, 1.5), c = (8, 1.17), d = (4, 3.5) and e = (9, 1.5) are 0.62, 0.001, 0.12, 0.001 and 0.66, respectively, which are completely consistent with the corresponding activity patterns. Point c is right at the margin of the synchronous and asynchronous states, where the system shows power-law behaviour. It is investigated in detail, in terms of criticality, in the next section.
Figure 5: Three cross-sections of the order parameter for systems with D = 1.5 and different sizes (the full phase diagram is shown for the system of 4000 neurons and various delays in Fig. 3). a) The g = 8 cross-section. b) The g = 1 cross-section. c) The ν_ext = 1.5 cross-section. The cross-sections, and hence the phase diagram, remain approximately unchanged as we change the size of the system.

The next point to be explored is whether the synchronous regions change if the system size is altered. We have done simulations for sizes N = 1000, 2000, 4000 and 8000. Fig. 5 shows cross-sections of the phase diagram at g = 8, g = 1 and ν_ext = 1.5. It is observed that the order parameter shows little dependence on the size of the system; the only difference is that the fluctuations are stronger in smaller systems. Also, as is clearly seen in Fig. 5(c), larger networks have sharper transitions, and in the limit N → ∞ the derivative of the order parameter will become discontinuous, which is a hallmark of a second-order phase transition. However, as we are dealing with systems with sizes up to N = 16000, we have to be careful about finite-size effects.

One last point is worth mentioning: the order parameter does depend on the number of synapses per neuron, but as we have kept this quantity constant in all simulations shown here, this effect is not observed. Such a dependence is to be expected; for example, if we reduce this number to zero there will be no synchronization, while on the other hand, if all the neurons are connected to each other, synchronization happens much more easily.

V Results: Synchronization and power-law behaviour

The phase diagram of the system (Fig. 3) suggests something like a continuous phase transition between synchronous and asynchronous regions. To assess this transition and identify possible critical behaviour, we investigate neural avalanches in the network. In the literature, the existence of scale-free neural avalanches is the most important indicator of criticality in neural systems Beggs . Generally, the network displays successive activities of various sizes s and durations d, which are called avalanches. If the system is critical, the avalanche size and duration distributions show power-law behaviour, i.e., p(s) ∼ s^{−γ_s} and p(d) ∼ d^{−γ_d} Beggs .

Identification of successive avalanches is a challenging task. One may define an avalanche as a run of non-stop continuous activity, so that at least one quiescent temporal step Δt exists between two separate avalanches. However, this definition depends on Δt; additionally, for very large systems with many neurons, such a quiescent period may happen rarely. A better mechanism for identifying the avalanches is to consider the activity of the network and to treat each period of time within which the activity is higher than the time-average of the network's activity as an avalanche. This definition has been used before in some similar systems Delpapa ; Montakhab . To avoid incorrect statistics, we have considered the activity over the threshold as the size of the avalanche VeligasSanto . Probability distributions of avalanche size and avalanche period, for networks of 4000 neurons and various values of g and ν_ext, are presented in Fig. 6. Asynchronous systems, whose order parameter is nearly zero, show sub-critical behaviour with a Poisson-like distribution, whereas synchronous systems that lie within the L-Synch. or R-Synch. regions display super-critical distributions. However, right at the transition curve, the probability distribution clearly shows a power law, though there is a bump at the end that is usually referred to as a sign of super-critical behavior. We will come back to this issue later.
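The avalanche definition above (contiguous excursions of A(t) over its time average, with the excess activity counted as the size) can be sketched as follows; the function name is ours:

```python
import numpy as np

def avalanches(A):
    """Return (sizes, durations) of excursions of A above its time average.

    Size = summed activity in excess of the threshold (as in the text);
    duration = number of time steps the excursion lasts."""
    thr = A.mean()
    sizes, durations = [], []
    start = None
    for t, above in enumerate(A > thr):
        if above and start is None:
            start = t                                  # avalanche begins
        elif not above and start is not None:
            sizes.append(float(np.sum(A[start:t] - thr)))
            durations.append(t - start)
            start = None                               # avalanche ends
    if start is not None:                              # excursion open at the end
        sizes.append(float(np.sum(A[start:] - thr)))
        durations.append(len(A) - start)
    return sizes, durations
```

Because the threshold is the time average rather than zero activity, this definition remains usable even when the network never falls completely silent.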

Figure 6: Avalanche distributions of both size (a) and period (b) for some points of the phase diagram of the system shown in Fig. 3(c). For the sub-critical points (b, d), the distributions in both plots illustrate Poissonian behavior. For the super-critical points (a, e), in addition to the Poissonian behavior, there is a bump at the end of the distributions due to the synchronization. Right between these two regimes, point c displays a power-law distribution, which is the indicator of criticality. This suggests a second-order phase transition between synchronous and asynchronous states at this point.
Figure 7: The log-log plot of ν_C(N) − ν_∞, the shift of the critical point, as a function of N, the number of neurons. The system shows good scaling with the shifting exponent λ = 0.9 ± 0.1. The best fit gives ν_∞ = 1.098 ± 0.005, which is the critical ν_ext for an infinitely large system.

Consider a cross-section like the one shown in Fig. 5(a). Increasing ν_ext from unity, the shape of the probability distribution function changes from a Poisson-like curve to a power law and then again to a Poisson-like curve, but with a bump at very large values of s. Let us call the point at which the system exhibits power-law behaviour "the scaling point". This scaling point happens to be near the transition point of the system from the asynchronous region to the synchronous one. Identification of the exact value of ν_ext for the scaling point in such a cross-section is hard, because the probability distribution function changes gradually as we increase ν_ext. In fact, moving from the asynchronous region to the synchronous one, a power-law behaviour is first found without the bump at the end. Very quickly, with a little change in the external parameters, the bump forms; the probability distribution function keeps this form over a small interval of the external parameters, and then the scaling behavior fades away. Although in the literature of neural systems the presence of a bump is usually considered a sign of super-criticality, this is not the case in the literature of critical systems and self-organized criticality. The presence of the bump can be a general feature of probability distribution functions, needed to keep the total probability equal to unity. In addition to this reasoning, in our case, when the bump is present, the higher number of large events gives a better-resolved tail with smaller statistical error, which allows us to check criticality through data collapse. Therefore, we consider the bumpy probability distribution to investigate the critical properties of the scaling point.
We have also calculated indicators like ⟨s⟩⟨s³⟩/⟨s²⟩² Pruessner to identify the scaling point; however, such parameters also show a relatively broad peak at the transition, except for our largest system sizes. For the cross-section g = 8 and for systems of different sizes, the approximate values of ν_ext at the scaling point are given in Table 1. For smaller systems, the scaling point is not so close to the transition point, but in the case of large systems the scaling point turns out to be right at the transition point.

N             1000          2000          4000           8000           16000
\nu_{c}(N)    1.34\pm 0.09  1.22\pm 0.04  1.169\pm 0.015  1.133\pm 0.007  1.117\pm 0.003

Table 1: The critical point with its corresponding error for various system sizes. As the size of the system increases, the error decreases. We have estimated \nu_{\infty}=1.098\pm 0.005.

In fact, it can be shown that this is a finite-size effect. When standard statistical-mechanics models are simulated, a size-dependent (pseudo)critical point is found. If we denote the (pseudo)critical point by \nu_{C}(N) and the actual critical point by \nu_{\infty}, then we have

|\nu_{C}(N)-\nu_{\infty}| \sim N^{-\lambda} \qquad (10)

where \lambda is the shifting exponent. Fig. 7 shows a log-log plot of \nu_{C}(N)-\nu_{\infty} as a function of the system size N. The parameters \nu_{\infty} and \lambda are obtained from the best fit. A very good linear fit is observed, with \nu_{\infty}=1.098\pm 0.005 and \lambda=0.9\pm 0.1. Within our precision, the value obtained for \nu_{\infty} coincides with the value of \nu_{\rm ext.} at which the transition to synchronization occurs; in fact, for the line g=8 we have \nu_{\rm ext.}=1.098\pm 0.005 for the transition point. We have checked the same phenomenon for a few other cross-sections, some of them horizontal and some crossing the R-Synch. region, and in all cases the scaling point coincides with the transition point. Also, the scaling exponents obtained in the following are the same within error bars.
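The fit of Eq. 10 can be reproduced from the values in Table 1. The following is a minimal sketch (pure standard library, not the authors' code), assuming a simple grid search over candidate values of \nu_{\infty} combined with an ordinary least-squares line in log-log coordinates:

```python
import math

# (N, nu_c) pairs from Table 1
data = [(1000, 1.34), (2000, 1.22), (4000, 1.169),
        (8000, 1.133), (16000, 1.117)]

def fit_line(xs, ys):
    """Ordinary least squares y = a*x + b; returns (slope, intercept, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

best = None
# Scan candidate asymptotic critical points below the smallest nu_c(N);
# for each, fit log(nu_c - nu_inf) vs log(N) and keep the straightest line.
for i in range(1, 1100):
    nu_inf = 1.0 + i * 1e-4
    if nu_inf >= min(nu for _, nu in data):
        break
    xs = [math.log(N) for N, _ in data]
    ys = [math.log(nu - nu_inf) for _, nu in data]
    slope, _, sse = fit_line(xs, ys)
    if best is None or sse < best[0]:
        best = (sse, nu_inf, -slope)   # Eq. 10: slope = -lambda

_, nu_inf, lam = best
print(f"nu_inf = {nu_inf:.3f}, lambda = {lam:.2f}")
```

With the tabulated central values this lands close to the paper's \nu_{\infty}\approx 1.10 and \lambda\approx 0.9; the quoted error bars come from the uncertainties of the individual \nu_{c}(N), which this sketch ignores.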

Let us focus on the probability distribution function at the scaling point. Fig. 8 shows log-log plots of the probability distribution functions of avalanche sizes and avalanche periods for different system sizes. Both plots show a linear part, which is the scaling region, and a clearly size-dependent cut-off, which is a hallmark of criticality. As said before, we have chosen the external parameters where the probability distributions end with a bump. This choice has no effect on the scaling exponents but reduces the statistical error, thanks to better-sampled data at the tail of the distribution. Note that the probability of avalanches at the bump is extremely small (for example, 10^{-5} for system size N=16000 in the case of the avalanche size distribution) and plays no important role in the avalanche size statistics. The next interesting point is that, in the case of the avalanche size distribution, the linear parts of the plots for different system sizes do not form a single line: the line for a larger system stands slightly above that of a smaller one. This does not happen for the avalanche periods. We will discuss this size dependence later and will see that it causes some interesting phenomena.

Figure 8: Probability distribution function of size (a) and period (b) for systems of various sizes at their critical points. The exponents of the distributions are \tau_{s}=1.87\pm 0.06 and \tau_{d}=2.3\pm 0.15 for size and period, respectively.

Despite this size dependency, the scaling exponent \tau_{s} does not depend on the size. We have obtained the exponents using the technique explained in Clauset and found \tau_{s}=1.87\pm 0.06; for the avalanche periods, \tau_{d}=2.3\pm 0.15. The values obtained from brain activity data are 1.5 and 2, respectively Beggs . Considering the error bars of the two exponents obtained here, the exponent associated with the avalanche periods, \tau_{d}, is not far from the experimental value, while \tau_{s} is completely different. As we show in the following, this exponent is an apparent exponent, and finite-size scaling reveals another (real) exponent for the system.
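For a continuous power law above a lower cut-off x_{\min}, the maximum-likelihood estimator of Clauset et al. has the closed form \hat{\tau}=1+n\left[\sum_{i}\ln(x_{i}/x_{\min})\right]^{-1}. A minimal self-check in Python, using synthetic power-law samples in place of the avalanche data (the generator and parameters are illustrative, not the paper's data set):

```python
import math
import random

def mle_exponent(samples, xmin):
    """Continuous power-law MLE (Clauset et al.): tau = 1 + n / sum(ln(x/xmin))."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic data with p(x) ~ x^{-tau} via inverse-transform sampling:
# P(X > x) = (x/xmin)^{1-tau}  =>  x = xmin * u^{-1/(tau-1)}, u uniform in (0,1]
random.seed(1)
tau_true, xmin = 1.87, 1.0
samples = [xmin * (1.0 - random.random()) ** (-1.0 / (tau_true - 1.0))
           for _ in range(50000)]

tau_hat = mle_exponent(samples, xmin)
print(f"tau_hat = {tau_hat:.3f}")  # close to tau_true = 1.87
```

The standard error of the estimator is roughly (\tau-1)/\sqrt{n}, so with 5\times 10^{4} samples the recovered exponent agrees with the input to about the third decimal place.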

To study the scaling behavior of the system more thoroughly, we have performed a Finite Size Scaling analysis (FSS) for both the avalanche sizes and their periods. If a system is critical and hence scale-invariant, the probability distribution function should be of the form

p(x)=x^{-\tilde{\tau}}\,f(x/N^{\phi}) \qquad (11)

where x can be the avalanche size or its period, and \tilde{\tau} and \phi are the scaling exponents. Of course, the above equation should be valid for x>x_{0}, where x_{0} is a lower cut-off. The function f is usually called the scaling function and depends only on the quantity x/N^{\phi}.

To check whether a system obeys the above equation, we have to perform a data collapse. This can be achieved by plotting x^{\tilde{\tau}}\times p(x) against x/N^{\phi} for the different system sizes. If, for all x greater than a lower cut-off, the data of different system sizes N collapse onto a single curve, then the probability distribution function manifestly has the scaling property. Fig. 9 shows such a data collapse for the avalanche size and the avalanche period. The scaling exponents are obtained to be \tilde{\tau}_{s}=1.53\pm 0.05 and \phi_{s}=0.72\pm 0.04 for the avalanche size, and \tilde{\tau}_{d}=2.3\pm 0.15 and \phi_{d}=0.33\pm 0.05 for the avalanche period. Interestingly, the value obtained for the scaling exponent \tilde{\tau}_{s} is different from \tau_{s}. Such a phenomenon has been observed before Chris-Prus . In fact, it is usually assumed that the scaling function f(u) has a finite value in the limit u\rightarrow 0. In some cases, however, the scaling function behaves as u^{\alpha} in that limit. If this is the case, we can write f(u)=u^{\alpha}g(u), where \lim_{u\rightarrow 0}g(u)=g_{0} is a finite number. Therefore, we will have

p(s)=s^{-\tilde{\tau}_{s}}\left(s/N^{\phi_{s}}\right)^{\alpha}g(s/N^{\phi_{s}})=s^{-\tilde{\tau}_{s}+\alpha}\,N^{-\alpha\phi_{s}}\,g(s/N^{\phi_{s}}). \qquad (12)
Figure 9: Finite-size scaling of a) the avalanche size and b) the avalanche period probability distribution functions shown in Fig. 8. The scaling parameters are \tilde{\tau}_{s}=1.53\pm 0.05 and \phi_{s}=0.72\pm 0.04 for the avalanche size, and \tilde{\tau}_{d}=2.3\pm 0.15 and \phi_{d}=0.33\pm 0.05 for the avalanche period.

In the limit of large system sizes, for avalanche sizes much smaller than N^{\phi_{s}}, the quantity s/N^{\phi_{s}} becomes very small and we can effectively take the argument of g equal to zero. Then the probability distribution function will be proportional to s^{-\tilde{\tau}_{s}+\alpha}, and hence the apparent scaling exponent \tau_{s}=\tilde{\tau}_{s}-\alpha will be observed. As seen in Fig. 9, the scaling function f(u) does not tend to a finite value as its argument goes to zero; rather, it behaves as u^{\alpha} with \alpha=-0.33\pm 0.03, that is, the scaling function becomes very large when u\rightarrow 0. Note that for each system size there is a lower cut-off equal to s_{0}/N^{\phi_{s}} below which the scaling function deviates from g(u)\sim u^{\alpha}. This analysis gives the apparent scaling exponent \tau_{s}\simeq 1.86, which was obtained before by fitting in Fig. 8.
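The data-collapse procedure described above can be illustrated with a synthetic distribution of the form of Eq. 11. Here f(u)=e^{-u} is an assumed scaling function chosen only for illustration (it is finite at the origin, unlike the u^{\alpha} behaviour found above), and the exponents are the central values quoted in the text:

```python
import math

# If p(x) = x^{-tau} f(x/N^phi), then x^{tau} * p(x) plotted against
# u = x / N^{phi} traces the same curve f(u) for every system size N.
tau, phi = 1.53, 0.72
f = lambda u: math.exp(-u)   # assumed scaling function, illustration only

def p(x, N):
    return x ** (-tau) * f(x / N ** phi)

# Evaluate the rescaled distribution on a common grid of the scaling variable u
us = [0.01 * k for k in range(1, 200)]
curves = []
for N in (1000, 4000, 16000):
    xs = [u * N ** phi for u in us]            # x chosen so that x/N^phi = u
    curves.append([x ** tau * p(x, N) for x in xs])

# The rescaled curves coincide up to floating-point noise: the data collapse
spread = max(abs(a - b) for row in zip(*curves) for a in row for b in row)
print(f"maximum spread between rescaled curves: {spread:.2e}")
```

In practice the collapse in Fig. 9 is of course run the other way: p(x) is measured for each N, and \tilde{\tau} and \phi are tuned until the rescaled empirical curves overlap.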

This size dependence of the probability distribution function has some other consequences. As an example, consider the relation between the mean avalanche size and the mean avalanche period. Assuming narrow joint distributions, s and d are functions of each other and we may write

\langle s\rangle_{d}\simeq d^{\gamma}, \qquad (13)

where \langle s\rangle_{d} is the average of s conditioned on d. In fact, Eq. 13 follows from two assumptions: first, the narrowness of the joint distribution, and second, the fact that the probability distribution functions of both s and d are given by Eq. 11. One may also find a relation among the scaling exponents \tilde{\tau}_{s}, \tilde{\tau}_{d} and \gamma. This relation reads

\gamma=\frac{1-\tilde{\tau}_{d}}{1-\tilde{\tau}_{s}}. \qquad (14)

However, this derivation assumes that \lim_{u\rightarrow 0}f(u) is finite. With simple algebra, it can be shown that if \lim_{u\rightarrow 0}f(u)\sim u^{\alpha}, the relation between the mean values of avalanche size and avalanche period can be written as

\langle s\rangle_{d}\simeq N^{\rho}\,d^{\gamma} \qquad (15)

with

\gamma=\frac{1-\tilde{\tau}_{d}}{1-\tilde{\tau}_{s}+\alpha}=\frac{1-\tilde{\tau}_{d}}{1-\tau_{s}} \qquad (16)

and

\rho=\frac{\phi_{s}\,\alpha}{1-\tau_{s}}. \qquad (17)
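The algebra behind Eqs. 15-17 can be sketched as follows (a sketch under the stated assumptions, with proportionality constants suppressed). Write the narrow-joint-distribution assumption as s\simeq N^{\rho}d^{\gamma}, take the period distribution to carry no extra N-dependence below its cut-off, and use conservation of probability, p(s)\,ds=p(d)\,dd, together with Eq. 12:

```latex
d^{-\tilde{\tau}_{d}} \;\propto\; p(s)\,\frac{ds}{dd}
\;\propto\; \left(N^{\rho}d^{\gamma}\right)^{-(\tilde{\tau}_{s}-\alpha)}
\, N^{-\alpha\phi_{s}}\, N^{\rho}\, d^{\gamma-1}.
```

Matching the powers of d gives \gamma\left(1-\tilde{\tau}_{s}+\alpha\right)=1-\tilde{\tau}_{d}, which is Eq. 16, while requiring the powers of N to cancel gives \rho\left(1-\tilde{\tau}_{s}+\alpha\right)=\alpha\phi_{s}, which is Eq. 17 after using \tau_{s}=\tilde{\tau}_{s}-\alpha.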

With the obtained values of the scaling exponents we find \gamma=1.5\pm 0.2 and \rho=0.24\pm 0.06. Fig. 10 shows a log-log plot of \langle s\rangle_{d} for different system sizes as a function of d, scaled by N^{\rho}. In this graph we have taken \rho=0.2, and the best fit to the collapsed curves gives \gamma=1.32\pm 0.08, which is consistent with previously obtained values and experimental data Friedman 2012 , although it originates from different scaling exponents \tau_{s} and \alpha.
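As a quick arithmetic check of Eqs. 16 and 17, one can plug in the central values of the exponents quoted above (this uses only the quoted numbers, not the underlying data, so the results differ slightly from the quoted \gamma and \rho, which carry the full error propagation):

```python
# Central values of the exponents quoted in the text
tau_s_tilde, tau_d_tilde = 1.53, 2.3   # collapse exponents (Eq. 11)
phi_s, alpha = 0.72, -0.33             # size cut-off exponent and f(u) ~ u^alpha
tau_s = 1.87                           # apparent size exponent from the fit

# Consistency check: tau_s should equal tau_s_tilde - alpha (Eq. 12)
print(f"tau_s_tilde - alpha = {tau_s_tilde - alpha:.2f}")  # ~1.86, within error of 1.87

gamma = (1 - tau_d_tilde) / (1 - tau_s)   # Eq. 16
rho = phi_s * alpha / (1 - tau_s)         # Eq. 17
print(f"gamma = {gamma:.2f}, rho = {rho:.2f}")
```

The resulting \gamma\approx 1.49 and \rho\approx 0.27 agree with the quoted \gamma=1.5\pm 0.2 and \rho=0.24\pm 0.06 within their error bars.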

Figure 10: Average avalanche size versus period scaled by N^{\rho}. We take \rho=0.2 for the data collapse and obtain \gamma=1.32\pm 0.08 from the best fit.

VI Summary and conclusion

In this paper, we studied the behavior of neural networks consisting of over-damped rotators as a model for the dynamics of type I single neurons. Tuning the external stimulation, the inhibitory strength, and the axonal delay time, we have identified both synchronous and asynchronous regions in the phase diagram of the system. Through finite-size analysis, we have proposed that the transition between the synchronous and asynchronous states is actually a second-order phase transition. Interestingly, we have observed that in the vicinity of the critical lines the probability distribution functions of both the size and the period of avalanches show power-law behaviour, and at these lines they exhibit a very good data collapse. The interesting point is that for smaller systems the point where the power-law behavior emerges falls inside the synchronous region; in these systems it is therefore possible to observe both criticality and synchronization. For very large systems, however, the scaling point sits right at the synchronization transition point.

Yet some questions remain unanswered. First of all, the transition line hardly touches the line on which the balance between inhibitory and excitatory neurons occurs. Also, we have not proposed a mechanism that drives the system towards the critical line. One answer might be the evolution of synapses, which is absent in our model. For example, if we devised a mechanism that strengthens the excitatory neurons in the synchronous phase and the inhibitory ones in the asynchronous phase, the transition line to the R-Synch. region would become stable. However, at this point we have not implemented any kind of synaptic dynamics in the system.

References

  • (1) Beggs, John M., and Dietmar Plenz. ”Neuronal avalanches in neocortical circuits.” Journal of neuroscience 23, no. 35 (2003): 11167-11177.
  • (2) Petermann, Thomas, Tara C. Thiagarajan, Mikhail A. Lebedev, Miguel AL Nicolelis, Dante R. Chialvo, and Dietmar Plenz. ”Spontaneous cortical activity in awake monkeys composed of neuronal avalanches.” Proceedings of the National Academy of Sciences 106, no. 37 (2009): 15921-15926.
  • (3) Gireesh, Elakkat D., and Dietmar Plenz. ”Neuronal avalanches organize as nested theta-and beta/gamma-oscillations during development of cortical layer 2/3.” Proceedings of the National Academy of Sciences 105, no. 21 (2008): 7576-7581.
  • (4) Pasquale, V., P. Massobrio, L. L. Bologna, M. Chiappalone, and S. Martinoia. ”Self-organization and neuronal avalanches in networks of dissociated cortical neurons.” Neuroscience 153, no. 4 (2008): 1354-1369.
  • (5) Hahn, Gerald, Thomas Petermann, Martha N. Havenith, Shan Yu, Wolf Singer, Dietmar Plenz, and Danko Nikolić. ”Neuronal avalanches in spontaneous activity in vivo.” Journal of neurophysiology 104, no. 6 (2010): 3312-3322.
  • (6) Priesemann, V., M. Wibral, M. Valderrama, R. Pröpper, M. Le Van Quyen, T. Geisel, J. Triesch, D. Nikolic, and M. H. J. Munk. ”Spike avalanches in vivo suggest a driven, slightly subcritical brain state.” Frontiers in Systems Neuroscience 8 (2014): 108.
  • (7) Freeman, Walter J., Linda J. Rogers, Mark D. Holmes, and Daniel L. Silbergeld. ”Spatial spectral analysis of human electrocorticograms including the alpha and gamma bands.” Journal of neuroscience methods 95, no. 2 (2000): 111-121.
  • (8) Shriki, Oren, Jeff Alstott, Frederick Carver, Tom Holroyd, Richard NA Henson, Marie L. Smith, Richard Coppola, Edward Bullmore, and Dietmar Plenz. ”Neuronal avalanches in the resting MEG of the human brain.” Journal of Neuroscience 33, no. 16 (2013): 7079-7090.
  • (9) Shew, Woodrow L., Hongdian Yang, Thomas Petermann, Rajarshi Roy, and Dietmar Plenz. ”Neuronal avalanches imply maximum dynamic range in cortical networks at criticality.” Journal of neuroscience 29, no. 49 (2009): 15595-15600.
  • (10) Shew, Woodrow L., and Dietmar Plenz. ”The functional benefits of criticality in the cortex.” The neuroscientist 19, no. 1 (2013): 88-100.
  • (11) Kinouchi, Osame, and Mauro Copelli. ”Optimal dynamical range of excitable networks at criticality.” Nature physics 2, no. 5 (2006): 348-351.
  • (12) Gautam, Shree Hari, Thanh T. Hoang, Kylie McClanahan, Stephen K. Grady, and Woodrow L. Shew. ”Maximizing sensory dynamic range by tuning the cortical state to criticality.” PLoS computational biology 11, no. 12 (2015): e1004576.
  • (13) Legenstein, Robert, and Wolfgang Maass. ”What makes a dynamical system computationally powerful.” New directions in statistical signal processing: From systems to brain (2007): 127-154.
  • (14) Hesse, Janina, and Thilo Gross. ”Self-organized criticality as a fundamental property of neural systems.” Frontiers in systems neuroscience 8 (2014): 166.
  • (15) Bak, Per, Chao Tang, and Kurt Wiesenfeld. ”Self-organized criticality: An explanation of the 1/f noise.” Physical review letters 59, no. 4 (1987): 381.
  • (16) Bornholdt, Stefan, and Thimo Rohlf. ”Topological evolution of dynamical networks: Global criticality from local dynamics.” Physical Review Letters 84, no. 26 (2000): 6114.
  • (17) Levina, Anna, J. Michael Herrmann, and Theo Geisel. ”Dynamical synapses causing self-organized criticality in neural networks.” Nature physics 3, no. 12 (2007): 857-860.
  • (18) Meisel, Christian, and Thilo Gross. ”Adaptive self-organization in a realistic neural network model.” Physical Review E 80, no. 6 (2009): 061917.
  • (19) Tetzlaff, Christian, Samora Okujeni, Ulrich Egert, Florentin Wörgötter, and Markus Butz. ”Self-organized criticality in developing neuronal networks.” PLoS Comput Biol 6, no. 12 (2010): e1001013.
  • (20) de Arcangelis, Lucilla, Carla Perrone-Capano, and Hans J. Herrmann. ”Self-organized criticality model for brain plasticity.” Physical review letters 96, no. 2 (2006): 028107.
  • (21) Millman, Daniel, Stefan Mihalas, Alfredo Kirkwood, and Ernst Niebur. ”Self-organized criticality occurs in non-conservative neuronal networks during ‘up’states.” Nature physics 6, no. 10 (2010): 801-805.
  • (22) Kossio, Felipe Yaroslav Kalle, Sven Goedeke, Benjamin van den Akker, Borja Ibarz, and Raoul-Martin Memmesheimer. ”Growing critical: self-organized criticality in a developing neural system.” Physical review letters 121, no. 5 (2018): 058301.
  • (23) Bonachela, Juan A., Sebastiano De Franciscis, Joaquín J. Torres, and Miguel A. Munoz. ”Self-organization without conservation: are neuronal avalanches generically critical?.” Journal of Statistical Mechanics: Theory and Experiment 2010, no. 02 (2010): P02015.
  • (24) Moretti, Paolo, and Miguel A. Munoz. ”Griffiths phases and the stretching of criticality in brain networks.” Nature communications 4, no. 1 (2013): 1-10.
  • (25) Moosavi, S. Amin, Afshin Montakhab, and Alireza Valizadeh. ”Refractory period in network models of excitable nodes: self-sustaining stable dynamics, extended scaling region and oscillatory behavior.” Scientific reports 7, no. 1 (2017): 1-10.
  • (26) Zare, Marzieh, and Paolo Grigolini. ”Cooperation in neural systems: bridging complexity and periodicity.” Physical Review E 86, no. 5 (2012): 051918.
  • (27) Poil, Simon-Shlomo, Richard Hardstone, Huibert D. Mansvelder, and Klaus Linkenkaer-Hansen. ”Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks.” Journal of Neuroscience 32, no. 29 (2012): 9817-9823.
  • (28) Dalla Porta, Leonardo, and Mauro Copelli. ”Modeling neuronal avalanches and long-range temporal correlations at the emergence of collective oscillations: Continuously varying exponents mimic M/EEG results.” PLoS computational biology 15, no. 4 (2019): e1006924.
  • (29) Fontenele, Antonio J., Nivaldo AP de Vasconcelos, Thaís Feliciano, Leandro AA Aguiar, Carina Soares-Cunha, Barbara Coimbra, Leonardo Dalla Porta et al. ”Criticality between cortical states.” Physical review letters 122, no. 20 (2019): 208101.
  • (30) Khoshkhou, Mahsa, and Afshin Montakhab. ”Spike-timing-dependent plasticity with axonal delay tunes networks of Izhikevich neurons to the edge of synchronization transition with scale-free avalanches.” Frontiers in Systems Neuroscience 13 (2019).
  • (31) Di Santo, Serena, Pablo Villegas, Raffaella Burioni, and Miguel A. Muñoz. ”Landau–Ginzburg theory of cortex dynamics: Scale-free avalanches emerge at the edge of synchronization.” Proceedings of the National Academy of Sciences 115, no. 7 (2018): E1356-E1365.
  • (32) Kuramoto, Yoshiki. Chemical oscillations, waves, and turbulence. Courier Corporation, 2003.
  • (33) Ermentrout, G. Bard, and David H. Terman. Mathematical foundations of neuroscience. Vol. 35. Springer Science and Business Media, 2010.
  • (34) Ermentrout, Bard. ”Type I membranes, phase resetting curves, and synchrony.” Neural computation 8, no. 5 (1996): 979-1001.
  • (35) Ermentrout, G. Bard, and Nancy Kopell. ”Parabolic bursting in an excitable system coupled with a slow oscillation.” SIAM Journal on Applied Mathematics 46, no. 2 (1986): 233-253.
  • (36) Strogatz, Steven H. Nonlinear dynamics and chaos with student solutions manual: With applications to physics, biology, chemistry, and engineering. CRC press, 2018.
  • (37) Brunel, Nicolas. ”Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons.” Journal of computational neuroscience 8, no. 3 (2000): 183-208.
  • (38) Esfahani, Zahra G., Leonardo L. Gollo, and Alireza Valizadeh. ”Stimulus-dependent synchronization in delayed-coupled neuronal networks.” Scientific reports 6, no. 1 (2016): 1-10.
  • (39) Buendía, Victor, Pablo Villegas, Raffaella Burioni, and Miguel A. Muñoz. ”Hybrid-type synchronization transitions: where marginal coherence, scale-free avalanches, and bistability live together.” arXiv preprint arXiv:2011.03263 (2020).
  • (40) Luccioli, Stefano, and Antonio Politi. ”Irregular collective behavior of heterogeneous neural networks.” Physical review letters 105, no. 15 (2010): 158104.
  • (41) Safari, Nahid, Farhad Shahbazi, Mohammad Dehghani-Habibabadi, Moein Esghaei, and Marzieh Zare. ”Spike-Phase Coupling As an Order Parameter in a Leaky Integrate-and-Fire Model.” arXiv preprint arXiv:1903.00998 (2019).
  • (42) Del Papa, Bruno, Viola Priesemann, and Jochen Triesch. ”Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network.” PloS one 12, no. 5 (2017): e0178683.
  • (43) Villegas, Pablo, Serena di Santo, Raffaella Burioni, and Miguel A. Muñoz. ”Time-series thresholding and the definition of avalanche size.” Physical Review E 100, no. 1 (2019): 012133.
  • (44) Clauset, Aaron, Cosma Rohilla Shalizi, and Mark EJ Newman. ”Power-law distributions in empirical data.” SIAM review 51, no. 4 (2009): 661-703.
  • (46) Pruessner, Gunnar. Self-organised criticality: theory, models and characterisation. Cambridge University Press, 2012.
  • (47) Christensen, Kim, Nadia Farid, Gunnar Pruessner, and Matthew Stapleton. ”On the scaling of probability density functions with apparent power-law exponents less than unity.” The European Physical Journal B 62, no. 3 (2008): 331-336.
  • (48) Friedman, Nir, Shinya Ito, Braden AW Brinkman, Masanori Shimono, RE Lee DeVille, Karin A. Dahmen, John M. Beggs, and Thomas C. Butler. ”Universal critical dynamics in high resolution neuronal avalanche data.” Physical review letters 108, no. 20 (2012): 208102.