Tight Bounds on Probabilistic Zero Forcing on Hypercubes and Grids
Abstract.
Zero forcing is a deterministic iterative graph colouring process in which vertices are coloured either blue or white, and in every round, each blue vertex with exactly one white neighbour forces that neighbour to become blue. Here we study probabilistic zero forcing, in which blue vertices have a non-zero probability of forcing each white neighbour to become blue. We explore the propagation time for probabilistic zero forcing on hypercubes and grids.
Key words and phrases:
zero forcing, probabilistic zero forcing, hypercubes, grids
1. Introduction
1.1. Definition of Zero Forcing
Zero forcing is an iterative graph colouring procedure which can model certain real-world propagation and search processes such as rumor spreading. Given a graph $G$ and a set of marked, or blue, vertices $B \subseteq V(G)$, the process of zero forcing involves the application of the zero forcing colour change rule, in which a blue vertex $u$ forces a non-blue (white) vertex $v$ to become blue if $N(u) \setminus B = \{v\}$, that is, $u$ forces $v$ to become blue if $v$ is the only white neighbour of $u$.
We say that $B$ is a zero forcing set if, when starting with $B$ as the set of initially blue vertices and iteratively applying the zero forcing colour change rule until no more vertices can be forced blue, the entire vertex set of $G$ becomes blue. Note that the order in which forces happen is arbitrary: if $u$ is in a position in which it can force $v$, this property will not be destroyed if other vertices are turned blue. As a result, we may process vertices sequentially (in any order), or all vertices that are ready to turn blue can do so simultaneously. The zero forcing number, denoted $Z(G)$, is the cardinality of the smallest zero forcing set of $G$.
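The deterministic rule can be made concrete with a short sketch (the function names are ours, not standard notation; the brute-force search for $Z(G)$ is exponential and meant only for tiny graphs):

```python
from itertools import combinations

def zero_force(adj, blue):
    """Apply the colour change rule until no more forces are possible.

    adj: dict mapping each vertex to the set of its neighbours.
    blue: iterable of initially blue vertices.
    Returns the final set of blue vertices.
    """
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for u in list(blue):
            white = adj[u] - blue
            if len(white) == 1:  # u has a unique white neighbour: force it
                blue |= white
                changed = True
    return blue

def is_zero_forcing_set(adj, blue):
    return zero_force(adj, blue) == set(adj)

def zero_forcing_number(adj):
    """Z(G) by exhaustive search over subsets; tiny graphs only."""
    for k in range(1, len(adj) + 1):
        for subset in combinations(adj, k):
            if is_zero_forcing_set(adj, subset):
                return k
```

For example, for the path on four vertices any single endpoint is a zero forcing set, so $Z = 1$, while the $4$-cycle requires two blue vertices.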
Zero forcing has sparked a lot of interest recently. Some work has been done on calculating or bounding the zero forcing number for specific structures such as graph products [12], graphs with large girth [8] and random graphs [1, 16], while others have studied variants of zero forcing such as connected zero forcing [4] or positive semi-definite zero forcing [2].
While zero forcing is a relatively new concept (first introduced in [12]), the problem has many applications to other branches of mathematics and physics. For example, zero forcing can give insight into linear and quantum controllability for systems that stem from networks. More precisely, in [5], it was shown that for both classical and quantum control systems that can be modelled by networks, the existence of certain zero forcing sets guarantees controllability of the system.
Another area closely related to zero forcing is power domination [19], which was designed to model the situation in which an electric company needs to monitor its power network continually. One method used in practice is to place phase measurement units (PMUs) throughout the network; to reduce the associated cost, one asks for the least number of PMUs necessary to observe a given network fully. To be more specific, given a network modelled by a simple graph $G$, a PMU placed at a vertex will be able to observe every adjacent vertex. Furthermore, if an observed vertex has exactly one unobserved neighbour, this observed vertex can observe this neighbour. In this way, power domination involves an observation rule compatible with the zero forcing colour change rule.
In the present paper we are concerned with a parameter associated with zero forcing known as the propagation time, which is the fewest number of rounds necessary for a zero forcing set of size $Z(G)$ to turn the entire graph blue. More formally, given a graph $G$ and a zero forcing set $B$, we generate a finite sequence of sets $B = B_0 \subseteq B_1 \subseteq \dots$, where, given $B_t$, we define $B_{t+1} = B_t \cup S_t$, where $S_t$ is the set of white vertices that can be forced in the next round if $B_t$ is the set of the blue vertices. Then the propagation time of $B$, denoted $\mathrm{pt}(G, B)$, is defined to be the smallest $t$ such that $B_t = V(G)$. The propagation time of the graph is then given by $\mathrm{pt}(G) = \min_B \mathrm{pt}(G, B)$, where the minimum is taken over all zero forcing sets $B$ of cardinality $Z(G)$. The propagation time for zero forcing has been studied in [13].
1.2. Definition of Probabilistic Zero Forcing
Zero forcing was initially formulated to bound a problem in linear algebra known as the min-rank problem [12]. In addition to this application to mathematics, zero forcing also models many real-world propagation processes. One specific application of zero forcing could be to rumor spreading, but the deterministic nature of zero forcing may not fit the chaotic nature of real-life situations. As such, probabilistic zero forcing has also been proposed and studied, where blue vertices have a non-zero probability of forcing white neighbours, even if there is more than one white neighbour. More specifically, given a graph $G$, a set of blue vertices $B$, and vertices $u \in B$ and $v \notin B$ such that $uv \in E(G)$, in a given time step, vertex $u$ will force vertex $v$ to become blue with probability
$$\Pr[u \text{ forces } v] = \frac{|N[u] \cap B|}{\deg(u)},$$
where $N[u]$ is the closed neighbourhood of $u$.
In a given round, each blue vertex will attempt to force each of its white neighbours independently. If this happens, we may say that the edge $uv$ is forced. A vertex becomes blue as long as it is forced by at least one blue neighbour, or in other words, if at least one edge incident with it is forced. Note that if $v$ is the only white neighbour of $u$, then $u$ forces $v$ with probability $1$, so given an initial set of blue vertices, the set of vertices forced via probabilistic zero forcing is always a superset of the set of vertices forced by traditional zero forcing. In this sense, probabilistic zero forcing and traditional zero forcing can be coupled. In the context of rumor spreading, the probabilistic colour change rule captures the idea that someone is more likely to spread a rumor if many of their friends have already heard it.
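One round of this probabilistic rule can be sketched as follows (a simplified illustration with our own function names; a graph is a dictionary of neighbour sets):

```python
import random

def pzf_round(adj, blue, rng=random):
    """One round of probabilistic zero forcing.

    Every blue vertex u attempts to force each of its white neighbours
    independently, succeeding with probability |N[u] ∩ B| / deg(u),
    where B is the current blue set and N[u] the closed neighbourhood.
    """
    blue = set(blue)
    forced = set()
    for u in blue:
        p = (len(adj[u] & blue) + 1) / len(adj[u])  # |N[u] ∩ B| / deg(u)
        for v in adj[u] - blue:
            if rng.random() < p:
                forced.add(v)
    return blue | forced

def pzf_propagation_time(adj, blue, rng=random):
    """Number of rounds until the whole vertex set is blue."""
    blue, t = set(blue), 0
    while len(blue) < len(adj):
        blue = pzf_round(adj, blue, rng)
        t += 1
    return t
```

Note that when $v$ is the only white neighbour of $u$ the success probability is $\deg(u)/\deg(u) = 1$, which realizes the coupling with traditional zero forcing described above.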
Under probabilistic zero forcing, given a connected graph, it is clear that starting with any non-empty subset of blue vertices will with probability 1 eventually turn the entire graph blue, so the zero forcing number of a graph is not an interesting parameter to study for probabilistic zero forcing. Initially in [17], the authors studied a parameter that quantifies how likely it is for a subset of vertices to have become a traditional zero forcing set by the first time-step at which it theoretically could under probabilistic zero forcing.
Instead, in this paper, we will be concerned with a parameter that generalizes the zero forcing propagation time. This generalization was first introduced in [11]. Given a graph $G$ and a set $B \subseteq V(G)$, let $\mathrm{pt}(G, B)$ be the random variable that outputs the propagation time when probabilistic zero forcing is run with the initial blue set $B$. For ease of notation, we will write $\mathrm{pt}(G, v) = \mathrm{pt}(G, \{v\})$. The propagation time for the graph will be defined as the random variable $\mathrm{pt}(G) = \min_{v \in V(G)} \mathrm{pt}(G, v)$. More specifically, $\mathrm{pt}(G)$ is a random variable for the experiment in which $|V(G)|$ iterations of probabilistic zero forcing are performed independently, one for each vertex of $G$, and then the minimum is taken over the propagation times for these independent iterations.
1.3. Asymptotic Notation
Our results are asymptotic in nature, that is, we will assume that $n \to \infty$. Formally, we consider a sequence of graphs $G_n$ (for example, $G_n$ is an $n$-dimensional hypercube or an $n$ by $n$ grid) and we are interested in events that hold asymptotically almost surely (a.a.s.), that is, events that hold with probability tending to 1 as $n \to \infty$.
Given two functions $f = f(n)$ and $g = g(n)$, we will write $f = O(g)$ if there exists an absolute constant $C > 0$ such that $|f(n)| \le C |g(n)|$ for all $n$, $f = \Omega(g)$ if $g = O(f)$, $f = \Theta(g)$ if $f = O(g)$ and $f = \Omega(g)$, and we write $f = o(g)$ or $f \ll g$ if $f(n)/g(n) \to 0$ as $n \to \infty$. In addition, we write $f = \omega(g)$ if $g = o(f)$ and we write $f \sim g$ if $f = (1 + o(1)) g$, that is, $f(n)/g(n) \to 1$ as $n \to \infty$.
Finally, as typical in the field of random graphs, for expressions that clearly have to be an integer, we round up or down but do not specify which: the choice of which does not affect the argument.
1.4. Results on Probabilistic Zero Forcing
In [11], the authors studied probabilistic zero forcing, and more specifically the expected propagation time for many specific structures. A summary of this work is provided in the following theorem.
Theorem 1.1 ([11]).
Let . Then
-
•
-
•
-
•
,
-
•
.
In [6], the authors used tools developed for Markov chains to analyze the expected propagation time for many small graphs. The authors also showed, among other things, that and for any connected graph , . This result was then improved in [18], where the authors showed that
for general connected graphs . In the same paper, the authors also showed that
(1.1)
where denotes the radius of the connected graph . Moreover, they provided a class of graphs for which the bounds of the theorem are tight.
In addition to the results mentioned above, the authors of [11] also considered the binomial random graph $G(n, p)$, proving the following theorem.
Theorem 1.2 ([11]).
Let $p \in (0, 1)$ be constant. Then a.a.s. we have that
However, the authors in [18] conjectured that for the random graph, a.a.s.
Of course, Theorem 1.2 can be improved immediately via (1.1) and the fact that for constant $p$, the radius of $G(n, p)$ is a.a.s. equal to $2$ (see e.g. [10]), but even with this improvement, the bound is still far from the conjectured value. In [9], the authors explored probabilistic zero forcing on $G(n, p)$ in more detail and, in particular, proved the above conjecture. Their results can be summarized in the following theorem, which shows that probabilistic zero forcing occurs much faster in $G(n, p)$ than in a general graph, as evidenced by the bounds of (1.1).
Theorem 1.3 ([9]).
Let $v$ be any vertex of $G(n, p)$.
If (in particular, if is a constant), then a.a.s.
On the other hand, if , then a.a.s.
The results of most interest to us are from [14]. The authors of that paper are concerned with hypercubes and grids, and prove the following result.
Theorem 1.4 ([14]).
The following bounds hold:
-
•
,
-
•
.
1.5. Our Results
In this paper, we improve both results from [14]. Instead of considering the expectation of the propagation time, we will calculate bounds on the propagation time that hold a.a.s. The lower bounds for both families of graphs are trivial, so the improvement is in the upper bounds.
Theorem 1.5.
The following bounds hold a.a.s.:
-
(a)
,
-
(b)
.
The bounds for the hypercube are asymptotically tight. Unfortunately, we could not prove the matching upper bound for the grid, but our upper bound is very close to the trivial lower bound, and so it supports the conjecture that a.a.s. . This conjecture is supported by independent simulations performed in [14] as well as our own.
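Simulations of this kind can be sketched as follows. This is a minimal Monte Carlo illustration with our own helper names (the paper's numerics were done in Julia), estimating the propagation time on an $n$ by $n$ grid started from the centre:

```python
import random

def grid_adj(n):
    """Adjacency sets of the n-by-n grid graph on vertices (i, j)."""
    adj = {(i, j): set() for i in range(n) for j in range(n)}
    for (i, j) in adj:
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) in adj:
                adj[(i, j)].add((i + di, j + dj))
    return adj

def propagation_time(adj, source, rng):
    """Rounds of probabilistic zero forcing until every vertex is blue."""
    blue, t = {source}, 0
    while len(blue) < len(adj):
        forced = set()
        for u in blue:
            # u forces each white neighbour independently with
            # probability |N[u] ∩ B| / deg(u).
            p = (len(adj[u] & blue) + 1) / len(adj[u])
            forced |= {v for v in adj[u] - blue if rng.random() < p}
        blue |= forced
        t += 1
    return t

def average_propagation_time(n, trials=20, seed=0):
    """Monte Carlo estimate of the mean propagation time from the centre."""
    rng = random.Random(seed)
    adj = grid_adj(n)
    centre = (n // 2, n // 2)
    return sum(propagation_time(adj, centre, rng)
               for _ in range(trials)) / trials
```

Since a vertex at graph distance $d$ from the source cannot turn blue before round $d$, every estimate is at least the eccentricity of the centre; the interesting quantity is how close the ratio to $n$ comes to the conjectured constant.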
2. Preliminaries
2.1. Chernoff inequality
Let us first state a specific instance of Chernoff’s bound that we will find useful. Let $X \in \mathrm{Bin}(n, p)$ be a random variable with the binomial distribution with parameters $n$ and $p$. Then, a consequence of Chernoff’s bound (see e.g. [15, Corollary 2.3]) is that
$$\Pr\big(|X - \mathbb{E}X| \ge \varepsilon\,\mathbb{E}X\big) \le 2 \exp\left(-\frac{\varepsilon^2\,\mathbb{E}X}{3}\right) \qquad\qquad (2.1)$$
for $0 < \varepsilon \le 3/2$. Moreover, let us mention that the bound holds in the general case in which $X = \sum_{i=1}^n X_i$ with $X_i \in \mathrm{Bernoulli}(p_i)$ and with (possibly) different $p_i$ (again, e.g. see [15] for more details).
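As a quick sanity check of a bound of this type, the following snippet (an illustrative computation with arbitrary parameters, using the form $\Pr(|X - \mathbb{E}X| \ge \varepsilon\,\mathbb{E}X) \le 2\exp(-\varepsilon^2\,\mathbb{E}X/3)$ from [15, Corollary 2.3]) compares the exact two-sided binomial tail with the Chernoff bound:

```python
import math

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Bin(n, p), computed exactly from math.comb."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_tail(n, p, eps):
    """Exact P(|X - np| >= eps * np) for X ~ Bin(n, p)."""
    mu = n * p
    return sum(binom_pmf(n, p, k) for k in range(n + 1)
               if abs(k - mu) >= eps * mu)

# Arbitrary illustrative parameters.
n, p, eps = 100, 0.5, 0.5
exact = two_sided_tail(n, p, eps)
chernoff = 2 * math.exp(-eps**2 * (n * p) / 3)  # right-hand side of (2.1)
```

The exact tail is far smaller than the bound here; Chernoff’s inequality trades tightness for the clean exponential form used throughout the proofs.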
2.2. Hoeffding–Azuma inequality
Let $X_0, X_1, \dots$ be an infinite sequence of random variables that make up a martingale; that is, for any $i \ge 1$ we have $\mathbb{E}[X_i \mid X_0, \dots, X_{i-1}] = X_{i-1}$. Suppose that there exist constants $c_i > 0$ such that $|X_i - X_{i-1}| \le c_i$ for each $i$. Then, the Hoeffding–Azuma inequality implies that for every $t > 0$,
$$\Pr\big(|X_n - X_0| \ge t\big) \le 2 \exp\left(-\frac{t^2}{2 \sum_{i=1}^n c_i^2}\right). \qquad\qquad (2.2)$$
2.3. Chernoff–Hoeffding bounds for Markov chains
This result is due to Chung, Lam, Liu and Mitzenmacher [7]. Let $M$ be a discrete time ergodic Markov chain with state space $\Omega$ and the stationary distribution $\pi$. $M$ may be interpreted as either the chain itself or the corresponding $|\Omega|$ by $|\Omega|$ transition matrix. The total variation distance between $\sigma$ and $\tau$, two distributions over $\Omega$, is defined as
$$\|\sigma - \tau\|_{TV} = \frac{1}{2} \sum_{x \in \Omega} |\sigma(x) - \tau(x)|.$$
For any $\varepsilon > 0$, the mixing time of Markov chain $M$ is defined as
$$T(\varepsilon) = \min\big\{t : \|\sigma M^t - \pi\|_{TV} \le \varepsilon\big\},$$
where $\sigma$ is an arbitrary initial distribution. We will also need a definition of the $\pi$-norm. The inner product under the $\pi$-kernel is defined as $\langle \sigma, \tau \rangle_\pi = \sum_{x \in \Omega} \sigma(x) \tau(x) / \pi(x)$. Then, the $\pi$-norm of $\sigma$ is defined as $\|\sigma\|_\pi = \sqrt{\langle \sigma, \sigma \rangle_\pi}$. Note that, in particular, $\|\pi\|_\pi = 1$.
Now we are ready to state the main result from [7]. Let $T = T(\varepsilon)$ be the $\varepsilon$-mixing time of Markov chain $M$ for $\varepsilon \le 1/8$. Let $(V_1, \dots, V_t)$ denote a $t$-step random walk on $M$ starting from an initial distribution $\varphi$ on $\Omega$. For every $i \in [t]$, let $f_i : \Omega \to [0, 1]$ be a weight function at step $i$ such that the expected weight $\mathbb{E}_{v \sim \pi}[f_i(v)] = \mu$ for all $i$. Define the total weight of the walk $(V_1, \dots, V_t)$ by $X = \sum_{i=1}^t f_i(V_i)$. There exists some constant $c$ (which is independent of $\mu$, $\delta$ and $\varepsilon$) such that for $0 \le \delta \le 1$,
$$\Pr\big(|X - \mu t| \ge \delta \mu t\big) \le c\,\|\varphi\|_\pi \exp\left(-\frac{\delta^2 \mu t}{72\,T}\right). \qquad\qquad (2.3)$$
2.4. Useful Coupling
We will be using the following obvious coupling to simplify both upper and lower bounds. Having said that, in our case we will only need it to prove upper bounds. Indeed, for lower bounds, it might be convenient to make some white vertices blue at some point of the process. Similarly, for upper bounds, one might want to make some blue vertices white. Given a graph $G$, sets $B, S \subseteq V(G)$, and $t \in \mathbb{N}$, let $P(B, S, t)$ be the event that, starting with blue set $B$, after $t$ rounds every vertex in $S$ is blue. The following simple observation was proved in [9].
Lemma 2.1 ([9]).
For all sets $B_1 \subseteq B_2 \subseteq V(G)$, $S \subseteq V(G)$, and $t \in \mathbb{N}$, we have $\Pr(P(B_1, S, t)) \le \Pr(P(B_2, S, t))$.
3. Hypercubes
This section is devoted to proving part (a) of Theorem 1.5. Let us start with a formal definition of the hypercube. The $n$-dimensional hypercube $Q_n$ has vertex set $\{0, 1\}^n$ consisting of all binary strings of length $n$, and there is an edge between two vertices if and only if their binary strings differ in exactly one bit. For $0 \le \ell \le n$, level $\ell$ of the hypercube is defined to be the set of all vertices whose binary strings contain exactly $\ell$ ones. Note that each vertex in level $\ell$ has $\ell$ neighbours in level $\ell - 1$ and $n - \ell$ neighbours in level $\ell + 1$.
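The level structure just described is easy to check computationally. In the sketch below (helper names are ours), a vertex of $Q_n$ is encoded as an $n$-bit integer and its neighbours are obtained by flipping single bits:

```python
def qn_neighbours(v, n):
    """Neighbours of v in Q_n, with vertices encoded as n-bit integers:
    flipping any single bit gives a neighbour."""
    return [v ^ (1 << i) for i in range(n)]

def level(v):
    """Level of a vertex = number of ones in its binary string."""
    return bin(v).count("1")

def check_levels(n):
    """Verify: a level-l vertex has l neighbours one level down
    and n - l neighbours one level up."""
    for v in range(1 << n):
        l = level(v)
        down = sum(1 for u in qn_neighbours(v, n) if level(u) == l - 1)
        up = sum(1 for u in qn_neighbours(v, n) if level(u) == l + 1)
        assert down == l and up == n - l
    return True
```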
Proof of Theorem 1.5(a).
Trivially, for any vertex $v$, we deterministically have that $\mathrm{pt}(Q_n) \le \mathrm{pt}(Q_n, v)$. Hence, in order to show the desired upper bound on $\mathrm{pt}(Q_n)$, it is enough to show that a.a.s. it holds for $\mathrm{pt}(Q_n, v)$ for some vertex $v$. Let $v$ be the unique vertex on level $0$; in fact, since $Q_n$ is a vertex-transitive graph, $v$ can be any vertex.
Suppose that for some integer , , the following property holds at some point of the process: all vertices at levels up to are blue (including level ). We will show that with probability property holds after an additional rounds. Since trivially holds, by the union bound (over possible values of ) we will conclude that a.a.s. holds after rounds.
Suppose that property holds from some , . By Lemma 2.1, we may assume that all vertices at level are white. It will be convenient to independently consider 3 phases after which property will hold with probability . The first phase lasts rounds. The probability that a given vertex at level stays white during this phase is at most
since . Hence, for a given vertex at level , the number of neighbours at level that turned blue during this phase is stochastically lower bounded by with . It follows from Chernoff’s bound (2.1) (applied with ) that has at most blue neighbours at level with probability at most
By the union bound (over at most vertices at level ), with probability , each vertex at level has at least blue neighbours at level at the end of the first phase. As we aim for a statement that holds with probability , we may assume that this property holds once we enter the second phase.
The second phase lasts rounds. As before, let us concentrate on a given vertex at level . Suppose that has blue neighbours for some integer such that ( has only neighbours at level so, of course, it includes neighbours at level ). The number of white neighbours of (at level ) that turned blue in one round of the process is stochastically lower bounded by with . We get from Chernoff’s bound (2.1) (applied with ) that with probability at most
By the union bound (over at most vertices at level and at most rounds), with probability , each vertex at level increases the number of blue neighbours by a multiplicative factor of each round, reaching at least blue neighbours by the end of the second phase. We may assume that this property holds once we enter the third phase.
The third (and last) phase lasts rounds. This time, let us concentrate on a given white vertex at level . This vertex has neighbours at level , each of which has at least blue neighbours. Hence, vertex stays white by the end of this phase with probability at most
By the union bound (over at most vertices at level ), with probability , all vertices at level turn blue by the end of this phase and so property holds.
Since we aim for a statement that holds a.a.s., we may assume that holds after rounds, and continue the process from there. We say that a vertex at layer is happy if all but at most neighbours at layer are blue. Similarly, a vertex at layer is very happy if not only it is happy but also all but at most neighbours at layer are blue. (Note that a happy or even a very happy vertex might still be white.) Trivially, all vertices at levels up to are happy (including level ), and all vertices at levels before level are very happy.
Suppose that for some integer , , all vertices at levels up to are happy (including level ). We will show that after one single round all vertices at layer are going to be happy and all vertices at layer are going to be very happy with probability . By the union bound (over all possible values of ), we will get that a.a.s. after less than rounds all vertices of the hypercube are going to be very happy.
For simplicity, by Lemma 2.1 we may assume that all vertices at level are white (despite the fact that they are happy). Let us concentrate on a given vertex at level . Since is happy and all of its blue neighbours at level are happy too, the probability that stays white is at most
Now, the probability that a given vertex at level is not happy is at most
Then the property that all vertices at level are happy holds by the union bound (over at most vertices at level ).
We may also show that all vertices at level are very happy. If , then all vertices at level are trivially very happy as they have less than neighbours at level , so there is nothing to show. If (but still ), then a given vertex at level is not very happy with probability at most
and the desired bound holds by the union bound (over at most vertices at level ).
At this point we may assume that all vertices of the hypercube are very happy. It is easy to see that all remaining white vertices will turn blue in one single round. Indeed, since all vertices are very happy, the probability that some vertex stays white is at most
The proof is finished. ∎
4. Grids
This section is devoted to proving part (b) of Theorem 1.5. Let us start with a formal definition of grids and tori. The $n$ by $m$ grid graph is the $nm$-vertex graph that is the Cartesian product of a path on $n$ vertices and a path on $m$ vertices. The $n$ by $m$ torus graph is the Cartesian product of a cycle on $n$ vertices and a cycle on $m$ vertices.
We can restrict our focus to the square grid and the square torus graph. For a non-square grid with $n < m$, once a central $n$ by $n$ square is all blue, the two adjacent columns turn blue with probability one on the next time-step, and so on. A similar argument holds for the non-square torus. Hence, it will be straightforward to generalize the results to the asymmetric cases, which we will do once we are done with the symmetric ones.
We define the origin of the grid to be the central vertex if $n$ is odd and one of the four central vertices if $n$ is even (the choice can be made arbitrarily, as all of them are the same up to symmetry). Since the torus is vertex-transitive, we may fix any vertex of it to be the origin. For a given positive integer we define the -principal-square to be the by sub-grid centred on the origin.
Let . We will work in phases, where in phase we start with all vertices in an -principal-square being blue and end with all vertices in an -principal-square being blue. For simplicity, by Lemma 2.1 we may assume that at the beginning of each phase the only vertices that are blue are the ones that belong to the corresponding principal-square. In fact, during each phase we will turn a few more vertices white (if needed) so that the process behaves more predictably. We aim to bound the number of time-steps it takes to go from the -principal-square to the -principal square. As a result, in total there will be at most phases. Note that the only phase that potentially differs between the grid and the torus is the final phase.
Let us start with the following simple observation that will allow us to ignore a few initial and final phases.
Lemma 4.1.
Let . Suppose that at time all vertices in a -principal-square are blue. Then with probability at time all vertices in a -principal-square are blue.
Proof.
Let , and suppose that at time all vertices in a -principal-square are blue. Note that vertices adjacent to a blue vertex with three blue neighbours turn blue with probability 1, and so all vertices adjacent to a non-corner vertex of the -principal-square will turn blue at time . There are twelve (eight if ) vertices in the -principal-square that do not turn blue deterministically: the four corner vertices and the eight (four if ) vertices adjacent to the corners; we will call the latter corner–adjacent.
A corner–adjacent vertex has a blue neighbour at time and so the probability that it stays white in one time-step is at most . The probability it remains white after steps is at most and so the probability that all corner–adjacent vertices are blue at time is at least .
Conditioning on a corner vertex having two blue neighbours at time , the probability it stays white after the next time-step is at most . The probability it is white after a further time-steps is at most . In particular, the probability that all four corner vertices are blue at time steps is at least . ∎
Lemma 4.1 tells us that with high probability any single phase takes time-steps, so we may safely ignore what happens in the first five phases and the last phase. In particular, our argument is the same whether we run the process on the grid or the torus. Hence, we may focus on the square grid graph .
Recall that . Let and suppose that at time all vertices in the -principal-square are blue. By Lemma 2.1, we may assume that these are the only blue vertices at that point of the process. We will show that with probability , at time all vertices in the -principal-square are blue, where will be an explicit, very small constant that we are not ready to introduce yet.
For simplicity, we identify the vertices of with pairs from the set
in the natural way with the origin at . In particular, note that a -principal-square contains all vertices with and . We will focus on the top-right quadrant where both coordinates are positive; by symmetry, the argument for the other quadrants will be exactly the same.
In order to control how the process progresses with time, we will pay attention to a few blue vertices at the top-right corner that are at the same distance from the origin. Hence, the following definition will be useful. For fixed positive integer , we define a -window (rooted at vertex ) to be a -tuple of vertices
where . Note that the distance from the origin of every vertex in such a -window is .
Blue vertices from a -window at distance from the origin will most likely turn blue some other vertices at distance from the origin. If that happens, we will simply move the window appropriately and continue the process. In order to compute the transition probabilities between given configurations that might occur in a -window, it will be convenient to assume that each blue vertex in the window has exactly two blue neighbours, namely the bottom and the left neighbours. We may do so based on the following observation.
Lemma 4.2.
Suppose that at time , a sub-grid of vertices are all blue for some integer . Let be the top-right corner (blue) vertex of this sub-grid. If a neighbour of turns blue at time , then it is the top-right corner of a sub-grid of vertices that are all blue.
Proof.
Consider the four vertices adjacent to . Clearly, the vertex below and the vertex to the left of are each the top-right corner of an all-blue grid. Let be the vertex to the right of . The column of vertices directly below are all blue at time with probability 1 (as each is adjacent to a blue vertex with three blue neighbours at time ). Thus, if turns blue at time , then it is the top-right corner of an all-blue sub-grid; see Figure 1. An analogous argument holds for the vertex above . ∎
Figure 1.
Now, we are ready to define an auxiliary process, which monitors the behaviour of the original process, giving a sequence of triples of -windows at times , and associated binary -tuples indicating which vertices in the -window are blue at time and which ones are white (1 represents blue and 0 represents white). This process will control the expansion of the blue principal squares, and so it can be used to upper bound the number of rounds needed to reach the end of a given phase. We will run this process for at most steps and stop it prematurely if the end of the phase is not reached after rounds. However, we will show that we do not stop prematurely with probability and, in fact, when is large enough, with this probability we will finish in almost rounds, which is a trivial lower bound for the number of rounds needed to finish a given phase.
-
(1)
We start with and time-step . Since all vertices in are blue, the corresponding configuration is . (In fact, the initial triple is not important. Another natural starting point would be to start with a -window including vertex and so would have only one blue vertex in the corresponding configuration.)
-
(2)
Given the triple , we define triple differently according to whether contains a blue vertex or not. Suppose that is a -window rooted at vertex .
-
(3)
If contains a blue vertex, let and consider the -windows
Let be whichever of or has more blue vertices at time (see Figure 2 for an illustration). If they have an equal number of blue vertices, then pick one of the two uniformly at random. Let be the corresponding configuration capturing the information about which vertices in the window are blue at time . Go to step (2).
Figure 2. A 6-window and 6-windows , as defined in Step (3) of the auxiliary process. Since has more blue vertices, .
-
(4)
If is all white (that is, all vertices in are white at time ), we set . In this situation we need to introduce an auxiliary triple that corresponds to an earlier -window that is not all white at time . To that end we choose to have the same distance from the origin as . Note that is not all zeros and represents a configuration in the -window at time ; indeed, the process is designed in such a way that no and can be all zeros at any step in this process. We may simply fix and , but our goal is to create a memoryless Markov chain, so we will follow a different strategy.
Let be a random vertex in that was blue at time . Let be the -window centred on . To be specific, if is odd then let
and if is even then let be one of
and
chosen uniformly at random. At time , is blue and it follows from Lemma 4.2 that and are blue and have three blue neighbours. (See the discussion below for details about implications of Lemma 4.2.) This means at time , the vertices are each blue and in . Applying Lemma 2.1, we may assume that any vertices in that are not one of are white at time . Let us stress again that by our choice of , is not all white at time . Go to step (2).
Recall that we start the current phase at time with all vertices in the -principal-square being blue. Moreover, we already dealt with the first few phases so . By this assumption, every vertex in the starting -window is the top-right corner of a blue sub-grid at time . Applying Lemma 4.2 recursively for each , we see that if a vertex in turns blue at time because of its blue neighbour in , then it is the top-right corner of a blue sub-grid. In particular, since we run the auxiliary process for at most steps, it will be the top-right corner of a blue sub-grid.
We are now ready to investigate the transition probabilities between configurations and guide the auxiliary process so that it yields a Markov chain. Suppose that . Let us first investigate step (3) of the auxiliary process. By the assumption of this step, is not all zeros; that is, contains a blue vertex at time . By Lemma 2.1 we may assume that at time all vertices in the next -windows and are white, and that the vertices and are white. Moreover, as discussed above, Lemma 4.2 implies that any vertex in that is blue at time has exactly two blue neighbours. Thus the probability that a vertex in turns blue at time is exactly if it has two blue neighbours in at time , if it has one blue neighbour in at time , and otherwise. In particular, the configuration representing the state of at time depends only on the configuration representing the state of at time . The transition probability between any two possible configurations and can be easily computed.
Now, let us investigate step (4). This is set up so that if is all white (that is, contains no blue vertices at time ) then deterministically consists of three centred consecutive blue vertices and all other vertices white. In particular, for odd and all white, if and otherwise. For even and all white,
Finally, for the degenerate case and all white, if is all blue, and 0 otherwise.
Since the state of at time (namely, configuration ) depends only on the state of at time (namely, configuration ) independent of the time and the choice of vertices, we can capture the behaviour of the process using a Markov chain on state space . For example, when the Markov chain has transition matrix
where the states are in the order . It is easy to see that for any this Markov chain is irreducible and aperiodic, and so it has a limiting distribution which is equal to the stationary distribution. We define to be the probability of the all white state in the limiting distribution.
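Computing the limiting distribution of such a chain, and hence the probability of the all white state, is routine. Below is a minimal power-iteration sketch on a purely hypothetical 2-state matrix (the function name and the matrix entries are our own illustration, not the paper's chain):

```python
def stationary(P, iters=5000):
    """Limiting distribution of an irreducible, aperiodic chain with
    row-stochastic transition matrix P, via power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A hypothetical 2-state chain (NOT the matrix from the paper):
# state 0 plays the role of the "all white" configuration.
P = [[0.25, 0.75],
     [0.10, 0.90]]
pi = stationary(P)
p_all_white = pi[0]  # probability of state 0 in the limit
```

Because the chain is irreducible and aperiodic, the iteration converges to the unique stationary distribution from any starting vector.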
The next lemma is the crux of our argument.
Lemma 4.3.
Take . Let and suppose that at time all vertices in the -principal-square are blue. With probability , before the end of round all vertices in the -principal-square are blue, where is the probability of the all white state in the limiting stationary distribution of the Markov chain defined above.
Applying Lemma 4.3 together with earlier observations, we immediately obtain the following corollary.
Corollary 4.4.
The following bound holds a.a.s.:
where is the probability of the all white state in the limiting stationary distribution of the Markov chain defined above.
Proof of Lemma 4.3.
Let us fix an integer . Suppose we run the auxiliary process monitoring the trajectory of the -window for steps. Let be the number of steps we spend in the all white state . We have that , since when we are in the all white state , and otherwise. The distance of from the origin is . The distance of from the origin is , since when we are in the all white state is one closer to the origin than , and otherwise is one further from the origin than .
Applying Chernoff–Hoeffding bounds for Markov chains (equation (2.3)), we have that
for , where is some positive constant depending only on the Markov chain (in particular, independent of and ). Set
(Note that implies that , which is the number of steps taken. We will show that satisfies this property below and so we may assume that the process does not stop prematurely.) Then, using that , we have that with probability the distance of from the origin is at least
and the number of rounds in this phase that passed in the original zero-forcing process is equal to
It remains to show that the -windows are travelling broadly North-East rather than North or East, in order to be sure of obtaining a principal square as opposed to a rectangle. To that end, let us define the discrepancy of a -window to be . The discrepancy captures how far from the North-East diagonal the -window is shifted. The discrepancy of is . By definition, differs from by at most one if is not all white at time , and differs from by at most if is all white at time .
Define a sequence where and for , is the least such that is all white at time (that is, ). For any for which , the probability that is at least . Hence, the probability that we do not hit an all white state after consecutive steps is at most
Thus, since there are at most terms in the sequence, we have for all with probability . Since we aim for a statement that holds with probability , we may assume that this property is deterministically satisfied.
Consider the sequence . Based on our assumption, for all
By symmetry of the auxiliary process, we know that given and integer the probability that must be the same as the probability that . Thus the sequence is a martingale. We can apply the Hoeffding-Azuma inequality (2.2) with and to see that
and so
where the additional term was needed because the last term in the sequence can be smaller than but, with the desired probability, not smaller than .
Putting all of this together, we conclude that with probability at time there is a blue vertex at distance from the origin with a discrepancy of less than . Using Lemma 4.2 for the last time, we get that this blue vertex is the top-right corner of a all blue sub-grid at time . Since , we get that
and so this all blue sub-grid entirely contains the top-right quadrant of the -principal-square.
By symmetry, the same conclusion holds for the other three quadrants and so with probability at time the -principal-square is entirely blue. The proof of the lemma is finished. ∎
The only thing remaining to finish the proof of the main theorem is to calculate the stationary distribution of the Markov chain and thus . Based on Corollary 4.4, each value of implies that a.a.s.,
When or , the transition matrix is small enough to be calculated by hand, and one can check that and , respectively. This gives us and . For , one can use a computer to calculate the exact fractions in the transition matrix and thus the exact values of . This gives
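The exact computation described above can be sketched generically: solve the linear system given by πP = π together with the normalisation Σπᵢ = 1, working over the rationals so that no precision is lost. The 2-state transition matrix below is a hypothetical stand-in, not the paper's chain; the method is the same.

```python
from fractions import Fraction

def stationary_exact(P):
    """Solve pi P = pi together with sum(pi) = 1 by Gaussian
    elimination over the rationals; P is a row-stochastic matrix
    with Fraction entries."""
    n = len(P)
    # Rows of (P^T - I) pi = 0; replace the last equation by sum(pi) = 1.
    A = [[P[j][i] - (Fraction(1) if i == j else Fraction(0)) for j in range(n)]
         for i in range(n)]
    A[-1] = [Fraction(1)] * n
    b = [Fraction(0)] * (n - 1) + [Fraction(1)]
    # Exact Gaussian elimination (only a nonzero-pivot search is needed).
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

# Hypothetical 2-state chain (not the paper's): from state 0 stay with
# probability 3/4, from state 1 stay with probability 1/2.
P = [[Fraction(3, 4), Fraction(1, 4)],
     [Fraction(1, 2), Fraction(1, 2)]]
pi = stationary_exact(P)   # exact: [Fraction(2, 3), Fraction(1, 3)]
```

Because every entry stays a `Fraction`, the answer is exact, which is what makes the by-hand and small-computer-computed values above reliable.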
For larger values of , we must resort to numerical approximations. Rounding errors become a concern for when the minimum entry in the transition matrix is close to the precision of the computer. The table below summarizes these numerical values. In particular, when we obtain and the main theorem holds.
| 2 | | |
|---|---|---|
| 3 | | |
| 4 | | |
| 5 | | |
| 6 | | |
| 7 | | |
| 8 | | |
| 9 | | |
| 10 | | |
| 11 | | |
| 12 | | |
| 13 | | |
| 14 | | |
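The numerical approximation for larger values can be sketched with plain power iteration, and the rounding-error concern raised above can be monitored via the residual of the computed fixed point. The 3-state matrix below is hypothetical, not the paper's chain.

```python
def stationary_numeric(P, iters=10_000):
    """Approximate the stationary distribution of a row-stochastic
    matrix P by power iteration; return (pi, residual), where the
    residual max_j |(pi P)_j - pi_j| measures how far pi is from
    being a true fixed point (a proxy for accumulated float error)."""
    n = len(P)
    pi = [1.0 / n] * n                       # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        s = sum(pi)                          # renormalise against float drift
        pi = [x / s for x in pi]
    res = max(abs(sum(pi[i] * P[i][j] for i in range(n)) - pi[j])
              for j in range(n))
    return pi, res

# Hypothetical 3-state ergodic chain (not the paper's).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
pi, res = stationary_numeric(P)
```

When the smallest transition probabilities approach machine precision, the residual grows and signals that the reported digits should be truncated accordingly.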
5. Acknowledgment
This research has been partially supported by NSERC and by the Ryerson University Faculty of Science Dean’s Research Fund. The numerical results presented in this paper were obtained using the Julia language [3]. We would like to thank Bogumił Kamiński from SGH Warsaw School of Economics for helping us to implement it. The program is available on-line at https://math.ryerson.ca/~pralat/.
References
- [1] D. Bal, P. Bennett, S. English, C. MacRury, and P. Prałat. Zero-forcing in random regular graphs. Journal of Combinatorics, 12(1):85–116, 2021.
- [2] F. Barioli, W. Barrett, S. Fallat, H. Hall, L. Hogben, B. Shader, P. van den Driessche, and H. van der Holst. Zero forcing parameters and minimum rank problems. Linear Algebra Appl., 433(2):401–411, 2010.
- [3] J. Bezanson, A. Edelman, S. Karpinski, and V.B. Shah. Julia: A fresh approach to numerical computing. SIAM review, 59(1):65–98, 2017.
- [4] B. Brimkov and I. Hicks. Complexity and computation of connected zero forcing. Discrete Appl. Math., 229:31–45, 2017.
- [5] D. Burgarth, D. D’Alessandro, L. Hogben, S. Severini, and M. Young. Zero forcing, linear and quantum controllability for systems evolving on networks. IEEE Trans. Automat. Control, 58(9):2349–2354, 2013.
- [6] Y. Chan, E. Curl, J. Geneson, L. Hogben, K. Liu, I. Odegard, and M. Ross. Using Markov chains to determine expected propagation time for probabilistic zero forcing. Electronic Journal of Linear Algebra, 36:318–333, 2020.
- [7] K. Chung, H. Lam, Z. Liu, and M. Mitzenmacher. Chernoff–Hoeffding bounds for Markov chains: generalized and simplified. In 29th International Symposium on Theoretical Aspects of Computer Science, page 124, 2012.
- [8] R. Davila and F. Kenter. Bounds for the zero forcing number of graphs with large girth. Theory Appl. Graphs, 2(2):Art. 1, 8, 2015.
- [9] S. English, C. MacRury, and P. Prałat. Probabilistic zero forcing on random graphs. European J. Combin., 91:103207, 22, 2021.
- [10] A. Frieze and M. Karoński. Introduction to random graphs. Cambridge University Press, Cambridge, 2016.
- [11] J. Geneson and L. Hogben. Propagation time for probabilistic zero forcing. arXiv:1812.10476, 2018.
- [12] AIM Minimum Rank-Special Graphs Work Group. Zero forcing sets and the minimum rank of graphs. Linear Algebra Appl., 428(7):1628–1648, 2008.
- [13] L. Hogben, M. Huynh, N. Kingsley, S. Meyer, S. Walker, and M. Young. Propagation time for zero forcing on a graph. Discrete Appl. Math., 160(13-14):1994–2005, 2012.
- [14] D. Hu and A. Sun. Probabilistic zero forcing on grid, regular, and hypercube graphs. arXiv:2010.12343, 2020.
- [15] S. Janson, T. Łuczak, and A. Rucinski. Random graphs. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley-Interscience, New York, 2000.
- [16] T. Kalinowski, N. Kamčev, and B. Sudakov. The zero forcing number of graphs. SIAM J. Discrete Math., 33(1):95–115, 2019.
- [17] C. Kang and E. Yi. Probabilistic zero forcing in graphs. Bull. Inst. Combin. Appl., 67:9–16, 2013.
- [18] S. Narayanan and A. Sun. Bounds on expected propagation time of probabilistic zero forcing. arXiv:1909.04482, 2019.
- [19] M. Zhao, L. Kang, and G. Chang. Power domination in graphs. Discrete Math., 306(15):1812–1816, 2006.