Central Limit Theorem for Majority Dynamics: Bribing Three Voters Suffices
Abstract
Given a graph and some initial labelling of its vertices, the majority dynamics model is the deterministic process where at each stage, every vertex simultaneously replaces its label with the majority label among its neighbors (remaining unchanged in the case of a tie). We prove—for a wide range of parameters—that if an initial assignment is fixed and we independently sample an Erdős–Rényi random graph $G(n,p)$, then after one step of majority dynamics, the number of vertices of each label follows a central limit law. As a corollary, we provide a strengthening of a theorem of Benjamini, Chan, O’Donnell, Tamuz, and Tan about the number of steps required for the process to reach unanimity when the initial assignment is also chosen randomly.
Moreover, suppose there are initially three more red vertices than blue. In this setting, we prove that if we independently sample the graph $G(n,p)$, then with probability at least , the majority dynamics process will converge to every vertex being red. This improves a result of Tran and Vu who addressed the case that the initial lead is at least 10.
1 Introduction
Majority dynamics is a model for belief propagation on a graph. In this process, each vertex of the graph has one of two opinions (which we will refer to as the colors ‘red’ and ‘blue’). Each round, the opinions of the vertices are all updated simultaneously: each vertex replaces its opinion so as to match the majority opinion held by its neighbors (in the case of a tie, a vertex does not change its opinion). This updating step is then iterated, and the resulting process is referred to as majority dynamics. This model has been used in a variety of settings including social choice theory [14, 17], economics [3, 8], and statistical physics [6, 7], and many related processes have been proposed as well (see [1, 2, 4, 24] for just a few examples).
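To make the update rule concrete, here is a minimal sketch of one synchronous round in Python; the adjacency-dictionary representation and the function name are our own, purely for illustration.

```python
def majority_step(adj, red):
    """One synchronous round of majority dynamics.

    adj: dict mapping each vertex to its set of neighbors.
    red: set of vertices currently colored red (all others are blue).
    Returns the new set of red vertices; a tie leaves a vertex unchanged.
    """
    new_red = set()
    for v, nbrs in adj.items():
        red_count = sum(1 for u in nbrs if u in red)
        blue_count = len(nbrs) - red_count
        if red_count > blue_count or (red_count == blue_count and v in red):
            new_red.add(v)
    return new_red


# Example: a star with center 0 and leaves 1, 2, 3. If leaves 1 and 2
# start red, the center turns red while every leaf copies the (blue) center.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(majority_step(star, {1, 2}))  # -> {0}
```

Note how the tie rule matters: on a 4-cycle with two adjacent red vertices, every vertex sees one red and one blue neighbor, so the coloring is frozen.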
The majority dynamics model has received considerable attention for fixed deterministic graphs, with many of the primary questions relating to the long-term behavior [13, 12]. The problem has also been studied in the setting of fixed graphs with random initial colorings [15, 21, 10]. We direct the interested reader to [16] and [19] for helpful surveys.
The situation of majority dynamics on random graphs has also been a focus of interest, but as discussed below this setting is not particularly well understood. Before diving into this literature, let us first establish the following notation to be used throughout.
Notation
The Erdős–Rényi random graph, $G(n,p)$, is defined as the graph with a vertex set of size $n$ in which each potential edge is independently included with probability $p$. Given a graph $G$ and an initial coloring $R_0$ and $B_0$, we will let $R_t$ (resp. $B_t$) denote the set of vertices that are red (resp. blue) after $t$ steps of majority dynamics. The initial coloring may be fixed or determined randomly, but in either case it will always be independent of the random graph $G$.
Asymptotic notation is understood to mean as the relevant parameter (typically $n$) tends to infinity. We say an event happens with high probability to mean its probability tends to $1$. All logarithms have base . Finally, we use the standard notation $\Phi(x)$ to denote the probability that a mean $0$, variance $1$ Gaussian random variable is at most $x$.
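Since $\Phi$ recurs throughout the paper, it may help to note that it is computable directly from the error function; the snippet below is an illustrative aside of ours, not part of the paper.

```python
import math


def Phi(x):
    """Standard normal CDF: the probability that a mean-0, variance-1
    Gaussian random variable is at most x."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


print(round(Phi(0.0), 4))  # -> 0.5
```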
Previous results for random graphs
In studying majority dynamics on random graphs, a common approach is as follows. One first tries to control the number of red vertices after one [5, 23, 22] or two [9, 23] steps (hoping that in these initial steps, one color will have managed to establish a substantial lead). After this, one typically argues that with probability tending to 1, if any color ever manages to establish a sufficiently large lead, then this lead is going to increase even if the vertices were adversarially recolored (provided this recoloring preserves the number of vertices of each color).
The second part of this strategy (i.e., showing that with high probability large leads cannot be squandered) often involves some variant of a relatively direct union bound using the fact that the random graph is sufficiently expansive. On the other hand, the first part of this strategy (i.e., controlling the number of vertices of each color after one or two steps) is considerably more nuanced.
Following this general strategy, Benjamini, Chan, O’Donnell, Tamuz, and Tan [5] obtained rough estimates for the mean and variance of under the assumption that the initial coloring is assigned uniformly at random with . Their argument was based on a combination of Fourier analysis and spectral graph techniques, and using their estimates, they were able to prove that if and , then . Thus, for , they proved that with probability at least , for a randomized initial coloring of , the color that was initially ahead will propagate to include all the vertices of the graph within at most four steps. When , their same estimates on also prove that with probability at least , the same result holds after at most 3 steps.
Analyzing the same setting, Fountoulakis, Kang, and Makai [9] improved on the first of these results by focusing instead on estimating the mean and variance of . Namely, they proved that if is chosen uniformly at random and , then with probability tending to 1, the color that was initially ahead will propagate to the entire graph within at most four steps.
For smaller values of , Zehmakan [23] studied the case that the vertices of are each initially colored blue independently with probability . For he proved that when , then with high probability, majority dynamics converges to all vertices being red after a bounded number of steps. (For smaller , there will be isolated blue vertices.) Gärtner and Zehmakan [11] also proved a similar result for -regular random graphs. See [5] for several conjectures about how the majority dynamics process evolves for values of less than and .
Finally, in a different direction, Tran and Vu [22] considered the case that the initial coloring is fixed with . In this setting, they argued that with probability at least , the graph will result in all of its vertices being red.
Our results
It is worth emphasizing that in the above papers, the key first step has been to analyze the distribution of or . In fact, essentially all of these previous results have followed from some new estimate on the mean or variance of these random variables. To quote [5], “the main task is to analyze what happens at time 1.” This brings us to our primary result.
Theorem 1.
Let , be any fixed initial coloring of vertices, and suppose we independently sample the random graph . Let denote the number of vertices that will be red after one step of majority dynamics. Let , let , and define
(i) For any choice of parameters, the variance of satisfies .
(ii) If and , then , and obeys a central limit law. Namely, for all fixed reals , we have
In particular, if and , then the hypotheses and conclusion of (ii) hold.
This provides an unqualified upper bound that , and for a wide range of parameters, it also gives a central limit theorem. Crucially, the statement is for a given value of (which may certainly depend on ) as opposed to being a random variable. Conditioning on in this way enables us to control all the moments of , whereas otherwise the variance in this quantity would be dominated by the variance of .
Theorem 1 is proven by the method of moments, but to illustrate the main challenge, consider the calculation of the variance of . The difficulty here is that one quickly arrives at this variance as a sum of covariances, but each covariance term is large in absolute value. Bounding this sum by its maximum term gives the incorrect asymptotic growth, and the key idea is therefore devising a way to analyze the large amount of cancellation in these sums. We do this by studying the relevant -biased Fourier coefficients (with random variables viewed as functions of the edges present in ).
At this point, it is worth noting that for a wide range of parameters (concretely, for and constant) the distribution of converges to a normal, but the distribution of certainly will not. This is simply because if is sufficiently far from (e.g., at least away [see Proposition 3]) then this would force to be very far away from . Thus, the mass of the distribution of is pushed very far away from , resulting in a particularly bimodal distribution (having mean , variance , but being bounded between and ). The distribution of would be even more exaggerated, and it would likely be essentially two peaks at and . For many settings of parameters, we believe the distribution of would converge to literally two point masses at and . Thus, we ask the following (with the unproven belief that is often only interesting for ).
Question 2.
How is distributed? (For concreteness, take and .)
Regardless of the above, Theorem 1 is already strong enough to imply most of the known results (including the bounds on from [9]), but to leverage this we use the following.
Proposition 3.
For each , there are constants and such that if and , then with probability at least , the following statement simultaneously holds for every choice of and every . If and
then the number of vertices of having at least as many edges to as to is at most . In particular, if and , we may take .
For , a result such as the above is not difficult to prove for sufficiently large , and in fact versions of this have also appeared in [5, 22]. But we state the above so as to improve the case of as studied in Tran and Vu. In that setting, the constants involved are frustratingly important, and a proof with becomes necessary.
Applications
From Theorem 1—and in fact merely from our variance estimate—we readily obtain:
Corollary 4.
There is a universal constant such that the following holds. Suppose we fix any initial red-blue partition , of vertices where , and we independently sample a random graph . Then for all real , we have
Moreover, if and , then we have
The above gives us sufficient control of to immediately improve several results of Benjamini, Chan, O’Donnell, Tamuz, and Tan [5], which we summarize below.
Theorem 5.
For each , there is a such that the following holds. Let , and initially assign each vertex one of two colors (independently uniformly at random). Without loss of generality, suppose this assignment initially colors at least half of the vertices red.
(a) If , then with probability at least , the majority dynamics process on will be unanimously red after at most four steps.
(b) If , then with probability at least , the majority dynamics process on will be unanimously red after at most three steps.
(c) If , then with probability at least , the majority dynamics process on will be unanimously red after at most two steps.
A proof of part (a) was the main theorem of [9], but results (b) and (c) are new. Parts (a) and (b) directly generalize Theorems 2 and 3.9 (respectively) of [5], where these events were only proven to hold with probability at least 0.4. That said, essentially the only difference in our proof is that we are able to use Corollary 4, which strengthens a weaker result of their paper (namely Theorem 3.5 of [5]). Part (c) of our result also follows in this same way, and we include it here mostly for completeness.
As a second application, the following improves upon a recent paper of Tran and Vu [22].
Theorem 6.
Suppose we have a red-blue coloring of a vertex set such that there are initially at least more red vertices than blue. We then independently sample a random graph . For , let denote the probability that the majority dynamics process on eventually results in the vertices all being colored . Then
In particular, if , and is sufficiently large, then we have .
Interpreting this in a somewhat playful light, the above result shows that if majority dynamics represents the propagation of opinions in a social network, then one side can obtain a noticeable advantage merely by bribing three extra vertices. This improves on a result of Tran and Vu [22], who proved a similar claim but with an initial lead of at least 10.
The bottleneck in our argument is our reliance on Proposition 3, for which the value of surprisingly becomes important. We make no claim that our constant is especially close to the best possible, and sufficiently improving this constant would immediately prove a version of Theorem 6 for an initial lead of a single vertex. However, it is easy to imagine that such incremental refinements might be somewhat unsatisfactory, and we instead propose the following, which gets to the heart of the matter.
Conjecture 7.
For each fixed , there is a corresponding value of such that if there are initially more red vertices than blue, then with probability at least , the majority dynamics process on will result in all vertices being red.
This conjecture, if true, would likely require a new approach, and we also believe the following stronger conjecture is true (which, in light of our central limit theorem, would exactly determine the value of in Conjecture 7).
Conjecture 8.
Given any initial coloring of vertices and any probability , if we sample (independently of the initial coloring) then with probability tending to 1, whichever color has more vertices after the first step of the majority dynamics process will have increasingly more vertices after each subsequent step until this is the unanimous color.
Simulations suggest that both conjectures are true, but we are unable to prove either even when is constant. One heuristic argument is that after the first step, one side is likely to have a relatively extreme advantage (e.g., in the case of constant, after one step one side will be winning by ). If we were to resample the random graph at this point, then such an advantage would be nearly impossible to squander. We aren’t sure how to make this argument rigorous, but we think a resolution of this either way would be very interesting.
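The simulations referred to above are easy to reproduce in spirit. The following Monte Carlo sketch is our own, with arbitrary choices of $n$, $p$, the lead, and the round cutoff; it is meant only to illustrate the setup of a fixed coloring followed by an independently sampled $G(n,p)$.

```python
import random


def sample_gnp(n, p, rng):
    """Adjacency sets of an Erdos-Renyi G(n, p) sample."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj


def majority_step(adj, red):
    """One synchronous round; a tie leaves a vertex's color unchanged."""
    new_red = set()
    for v, nbrs in adj.items():
        r = sum(1 for u in nbrs if u in red)
        if 2 * r > len(nbrs) or (2 * r == len(nbrs) and v in red):
            new_red.add(v)
    return new_red


def winner(n=51, p=0.5, lead=3, max_rounds=10, rng=None):
    """Fix a coloring in which red leads by `lead`, sample G(n, p)
    independently, and report the unanimous color (or None on cutoff)."""
    rng = rng or random.Random()
    adj = sample_gnp(n, p, rng)
    red = set(range((n + lead) // 2))  # coloring fixed before sampling G
    for _ in range(max_rounds):
        red = majority_step(adj, red)
        if len(red) == n:
            return "red"
        if not red:
            return "blue"
    return None
```

Tallying `winner()` over many independent trials gives an empirical estimate of the probability that the initially leading color prevails.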
Outline of paper
We begin in Section 2 with a brief collection of known technical results. In Section 3, we present our Fourier-analytic proof of our main result, Theorem 1. We continue in Section 4 with Proposition 3 and Corollary 4. We then show quick applications in Section 5 by deriving Theorems 5 and 6. Finally, for the sake of clarity, we prove several lemmas in the Appendix.
2 Known technical lemmas
We will need the following. Throughout, we let denote a binomial random variable with mean and variance .
Bernstein’s inequality: Let be independent random variables with for all . Then for all , we have
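As a sanity check on how such bounds behave (entirely our own aside), the snippet below compares one common form of Bernstein's bound, namely $\Pr[\sum X_i \ge t] \le \exp(-t^2/(2(\sigma^2 + Mt/3)))$ for independent mean-zero $X_i$ with $|X_i| \le M$ and total variance $\sigma^2$, against the empirical tail of a centered binomial sum.

```python
import math
import random


def bernstein_bound(t, sigma2, M):
    """One-sided Bernstein tail bound for a sum of independent
    mean-zero random variables bounded by M with total variance sigma2."""
    return math.exp(-t * t / (2.0 * (sigma2 + M * t / 3.0)))


rng = random.Random(1)
n, p, t, trials = 1000, 0.3, 40.0, 2000
sigma2 = n * p * (1 - p)  # total variance of the centered Bernoulli sum

# Empirical tail probability of Bin(n, p) - np exceeding t
hits = sum(
    (sum(1 for _ in range(n) if rng.random() < p) - n * p) >= t
    for _ in range(trials)
)
empirical = hits / trials
# Each centered Bernoulli summand is bounded by max(p, 1 - p)
bound = bernstein_bound(t, sigma2, max(p, 1 - p))
```

For these parameters the empirical tail sits comfortably below the bound, as it must.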
Lemma 9.
Let with . There is a universal constant (independent of , , and ) such that the following holds.
(i) For all , we have .
(ii) For all , we have .
Proof.
For each of these claims, we may assume that . To see this, we would first condition on the value of either or (whichever has smaller variance), and then the claim reduces to that of a single binomial (with a different value of and ).
When , the first claim is a particularly well-known anticoncentration result. As for the second claim, consider . Then by routine manipulation, we see is positive iff . Thus, we need only verify the claim for values of with . For these, we have
We bound by (i), and for in the given range, we readily see that the second term is also at most . ∎
3 Proof of Theorem 1
For each , define
and
We will use the notation that (resp. ) denotes the set of red (resp. blue) vertices after one step of majority dynamics. Then let denote the following (centered) indicator
so that by our choice of , we have . Finally, define as
Since is an affine transformation of , we need only prove a central limit theorem for . In fact, for each fixed , we will prove
(1)
where the implied constants depend only on .
Derivation of the theorem statement assuming (1)
Since , establishing (1) would immediately prove statement (i) of the theorem. As for statement (ii), suppose and . By Bernstein’s inequality, we have
Since , and , this would imply . Thus, we would need to have . Therefore, would imply that for all fixed
And since and , these error terms converge to zero for any fixed . Thus, the moments of converge to those of the standard Gaussian, which establishes a central limit theorem by the method of moments.
Finally, we need to argue that if and , then the hypotheses of (ii) hold. For this, we simply note if , then is bounded below (this is by an immediate application of the usual central limit theorem for the binomial random variables in the definition of ), and .
Setup for establishing (1)
Our proof of (1) is via standard techniques of ($p$-biased) Fourier analysis. We first give a brief review and the relevant setup, but for more, we refer the reader to O’Donnell’s helpful text [18].
Let $N = \binom{n}{2}$ and consider the space of functions mapping from $\{0,1\}^N$ to $\mathbb{R}$. Each input string $x \in \{0,1\}^N$ is indexed by the $2$-element subsets of the vertex set, and each $x$ corresponds to a graph whose edge set is $\{e : x_e = 1\}$. Consider the product measure on $\{0,1\}^N$ where each bit is independently equal to $1$ with probability $p$. With this, we identify random variables depending on $G(n,p)$ and functions in this space in the obvious way.
For each set of edges $S$, we define the function $\chi_S$ via
$$\chi_S(x) = \prod_{e \in S} \frac{x_e - p}{\sqrt{p(1-p)}},$$
with $\chi_\emptyset \equiv 1$. Then it can be checked that for all $S, T$ we have
$$\mathbb{E}[\chi_S \chi_T] = \begin{cases} 1, & \text{if } S = T,\\ 0, & \text{otherwise.}\end{cases}$$
These therefore form an orthonormal basis for this space (with inner product $\langle f, g \rangle = \mathbb{E}[fg]$), and each function $f$ can uniquely be written as
$$f = \sum_S \hat{f}(S)\, \chi_S, \qquad \text{where } \hat{f}(S) = \mathbb{E}[f \chi_S].$$
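The orthonormality underlying this expansion can be verified numerically on a tiny ground set. The following check is our own, with illustrative function names; it computes exact inner products under the $p$-biased measure by enumerating all strings.

```python
import itertools
import math


def phi(bit, p):
    """A single p-biased coordinate character: mean 0, variance 1."""
    return (bit - p) / math.sqrt(p * (1 - p))


def chi(S, x, p):
    """Product character over the coordinates in S (empty product is 1)."""
    out = 1.0
    for i in S:
        out *= phi(x[i], p)
    return out


def inner(S, T, m, p):
    """Exact E[chi_S * chi_T] under the p-biased measure on {0,1}^m."""
    total = 0.0
    for x in itertools.product((0, 1), repeat=m):
        weight = math.prod(p if b else 1.0 - p for b in x)
        total += weight * chi(S, x, p) * chi(T, x, p)
    return total
```

For instance, with `m = 3` and `p = 0.3`, `inner((0, 1), (0, 1), 3, 0.3)` evaluates to 1 while `inner((0,), (1,), 3, 0.3)` vanishes, matching orthonormality.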
We will need the following Lemma, which we prove in an Appendix.
Lemma 10.
With notation as before,
(i) We have .
(ii) Let . If , then . Moreover, .
(iii) if
(iv) if
(v) If are distinct with and , then
(vi) If and , then
We now prove the moment estimate assuming Lemma 10.
Estimating assuming Lemma 10
For each tuple consisting of (not necessarily distinct) vertices, set and . In this notation, we have .
• For each , let denote the strings in which there are exactly vertices each appearing exactly once.
We will argue that for , the contribution due to is the main term, and all of the others are small. Namely, we’ll prove
And moreover for all , we’ll show .
Expansion of
Suppose and are disjoint sets of vertices such that , and suppose for each , there is an integer . Then we have
where the sum is taken over all choices of in (so that each is independently given a subset of ).
Let denote the -element subsets of , and suppose corresponds to a term in the above summation that isn’t zero. By Lemma 10.(ii), for each we must have (since otherwise one of the Fourier coefficients would be 0). Moreover every edge of must appear in at least two sets (otherwise the expectation of the functions would be 0). Together, this implies that each edge of appears in exactly two sets, that and that for all we have . Thus, the above summation becomes
Since , Lemma 10.(iii) implies that each term is . Therefore, for any fixed we have
(2)
If there is a value for which , then we could apply part (vi) [and (iii)] of Lemma 10 to show the above expression is . Similarly, if there is a vertex for which , then we could apply (iv) to again conclude this expression is . Together, since there are at most choices for , this implies
(3)
where is the set of all perfect matchings on the vertex set .
Contribution from
Let denote the set of strings where each vertex appears at most twice. Then we have
which holds since each term of the summation is at most and each string in has an element appearing more than twice (and none appearing only once). When is odd, , so the above summation is . Otherwise, for even, (3) [together with Lemma 10.(i)] implies
Contribution from for
For ease of notation, let us now assume without loss of generality that , and let us write and where . For each , let denote the strings such that (a) contains no elements of , (b) contains no elements more than twice, and (c) for each , if appears in , then does not. Then using (2) we have
which follows since
Moreover, we have
Fixing and and expanding this product out as in (3), we see
where is the set of graphs on the vertex set such that for all , we have and there is exactly one edge of meeting . For distinct , let denote the subset of graphs containing one of the edges in . Then we have
where the last line follows from Lemma 10.(v). Therefore, we have
as desired. ∎
4 Derivations of Proposition 3 and Corollary 4
Proof of Proposition 3
Proof.
For , call a pair of sets bad iff (1) , (2) , and (3) every vertex of has at least as many neighbors in as it does in . For each pair , we will show under the hypotheses of the proposition that if is large enough (depending only on ), there is a value such that , which will prove that with probability at least , the desired conclusion holds for every set and every .
Let be fixed satisfying conditions (1) and (2) above, and define . Note that if we condition on the edges in , then the events “” are independent as ranges over . Thus, we have
where denotes the number of edges of , and this inequality holds due to the log-concavity of binomial distributions. (Namely, if , then a routine computation shows for any we have . Thus, this concavity holds for linear combinations of binomial random variables as well.)
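The log-concavity invoked here is straightforward to verify directly for small parameters; the checking snippet below is our own illustration.

```python
from math import comb


def binom_pmf(n, p, k):
    """P(Bin(n, p) = k)."""
    return comb(n, k) * p**k * (1.0 - p) ** (n - k)


def is_log_concave(n, p):
    """Check P(B = k)^2 >= P(B = k-1) * P(B = k+1) for all interior k."""
    return all(
        binom_pmf(n, p, k) ** 2 >= binom_pmf(n, p, k - 1) * binom_pmf(n, p, k + 1)
        for k in range(1, n)
    )


print(is_log_concave(30, 0.4))  # -> True
```

The inequality in fact holds with room to spare, since the ratio of consecutive pmf terms is strictly decreasing in $k$.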
Letting denote the random variable , we have
(4)
The mean of is easily computed as
And for any fixed , setting , Bernstein’s inequality implies that
where the term tends to (as ) uniformly over all .
Let . Since and since is order , the central limit theorem applies to , and for all fixed we have
Thus, putting these together we obtain
Finally, by absorbing smaller terms into the error, we obtain the desired bound. ∎
Proof of Corollary 4
Proof.
We first prove the second part of the corollary given the first. For this, we assume and .
Case 1: First suppose . Note that for all , an application of the mean value theorem implies . With a degree of foresight, let us set , which is greater than since . Since , we have
Thus, using the first part of the corollary, we obtain
Case 2: Now suppose . Numerical computation shows , so setting (which is in fact at least by assumption) we have
Thus, in either case, we have a lower bound of at least for each corresponding probability, so in both cases, we know one (if not both) of these probabilities is at least this large, which finishes the proof (with the value of replaced by ).
We now turn our attention to proving the first statement. For each let be the indicator for the event that will be red after one step of the majority dynamics process. Then . Setting , we have , and using a version of the Berry–Esseen theorem from Shevtsova [20], we obtain
Thus, since , we have
The variance of is computed in Theorem 1 (i) as . Finally, we finish by applying Chebyshev’s inequality (with ) as follows:
5 Applications
Proof of Theorem 5
Proof.
Each of the three parts of the theorem is proven from Corollary 4 in an analogous way, exactly as in [5]. We will treat the three cases simultaneously, and we assume throughout that is large enough to support our argument (valid by choosing large enough).
Let be the initial (uniformly random) coloring of . Pick a fixed small enough so that , which is possible since is distributed as a binomial random variable with mean and variance . Define the events:
• is the event that
• is the event that
• is the event that
• is the event that
• is the event that every vertex of has degree at least
If , we claim that all of these events happen simultaneously with probability at least , which—by increasing the constants of the theorem statement to allow us to assume is sufficiently large—will show they happen with probability at least . Having shown this, we will be done as follows.
(i) If , then , so we have implying . Therefore, assuming and , we would deterministically need .
(ii) If then implying . Thus, assuming and , we would need to have .
(iii) If then . Therefore, assuming , we would have , which is less than since our assumption on (and ) implies . Therefore, and would imply .
Thus, it only remains to show that if , then events and all simultaneously occur with probability at least . We prove these in sequential order except for the claim , which we present last.
First note that by our choice of . To show and , we refer the reader to Lemmas 3.6 and 3.7 of [5], which prove the following.
Lemma 11 (Benjamini, Chan, O’Donnell, Tamuz, and Tan [5]).
If , then with probability tending to 1, the graph will satisfy the following property for all . If is any initial red-blue coloring with , then after one step of majority dynamics the number of blue vertices will be at most .
Therefore, with high probability we have that will imply , and in turn implies (because ). We control by a simple union bound together with Chernoff’s inequality to obtain
which tends to since .
Finally, we must show . If , then our choice of immediately allows us to apply Corollary 4, which implies . On the other hand, if , then we would need to have (since ) and thus . Therefore, with probability at least , every vertex in has degree at least since
where the last inequality follows from Bernstein’s inequality. Therefore, if occurs, then with probability we would need (and in particular ). This is because with probability every vertex of has degree at least , so if occurs, then initially every vertex would have at least more red neighbors than blue. (So in this case, majority dynamics will typically end after only one step.)
In any case, regardless of the value of , we’ve shown , completing the proof. ∎
Proof of Theorem 6
For this proof, we need the following lemma, as proven in the Appendix.
Lemma 12.
Let where , and suppose we independently sample the random graph . Define the sets and , and let . If then for any fixed , with high probability we have
Moreover, .
Assuming this lemma, we now proceed to a proof of the theorem at hand.
Proof of Theorem 6.
Suppose there are initially more red vertices than blue, and by monotonicity of the majority dynamics process, we may first assume [since reducing only decreases the quantity in question, and this would only change the integral by at most ]. Before we run the majority dynamics process, let us consider the following coupling. We first decompose the vertices of as , where and . The vertices in will be initially red, the vertices of initially blue, and the vertices of will either be initially all red or initially all blue with these two options being equally likely.
Suppose we sample the random graph , but we have not yet decided on the color for . For each , consider the majority dynamics process that would evolve if is initially colored , and define to be either (i) T if this process does not lead to unanimity or (ii) the unanimous color that the process eventually attains.
In this language, we seek to bound . By monotonicity, we have . Using this and exploiting symmetry, we have
Let denote the number of red vertices that the graph would have after steps if is initially colored . By Proposition 3, with probability , we have the two implications
By the same reasoning as in the proof of Theorem 5, this in turn would imply with probability tending to 1. Therefore, with high probability would imply and similarly would imply .
Let denote the number of vertices that would be red after one step if the set were completely removed from . Since , we can use Lemma 12 (see above) with to argue that with high probability
for some constants and (here, the term is from the error term in the estimate of , and is to account for the number of red vertices that would be in after one step). Thus, we have
Finally, using the central limit law for given by Theorem 1 together with the fact that , we obtain that the above expression is at least
References
- [1] Mohammed Amin Abdullah and Moez Draief. Global majority consensus by local majority polling on graphs of a given degree sequence. Discrete Applied Mathematics, 180:1–10, 2015.
- [2] Gideon Amir, Rangel Baldasso, and Nissan Beilin. Majority dynamics and the median process: connections, convergence and some new conjectures. arXiv preprint arXiv:1911.08613, 2019.
- [3] Venkatesh Bala and Sanjeev Goyal. Learning from neighbours. The Review of Economic Studies, 65(3):595–621, 1998.
- [4] József Balogh and Boris G Pittel. Bootstrap percolation on the random regular graph. Random Structures & Algorithms, 30(1-2):257–286, 2007.
- [5] Itai Benjamini, Siu-On Chan, Ryan O’Donnell, Omer Tamuz, and Li-Yang Tan. Convergence, unanimity and disagreement in majority dynamics on unimodular graphs and random graphs. Stochastic Processes and their Applications, 126(9):2719–2733, 2016.
- [6] Emilio De Santis and Charles M Newman. Convergence in energy-lowering (disordered) stochastic spin systems. Journal of Statistical Physics, 110(1-2):431–442, 2003.
- [7] Sinziana M Eckner and Charles M Newman. Fixation to consensus on tree-related graphs. ALEA, 12(1):357–374, 2015.
- [8] Glenn Ellison and Drew Fudenberg. Rules of thumb for social learning. Journal of Political Economy, 101(4):612–643, 1993.
- [9] Nikolaos Fountoulakis, Mihyun Kang, and Tamás Makai. Resolution of a conjecture on majority dynamics: rapid stabilisation in dense random graphs. arXiv preprint arXiv:1910.05820, 2019.
- [10] Bernd Gärtner and Ahad N. Zehmakan. Color war: Cellular automata with majority-rule. In Frank Drewes, Carlos Martín-Vide, and Bianca Truthe, editors, Language and Automata Theory and Applications, pages 393–404, Cham, 2017. Springer International Publishing.
- [11] Bernd Gärtner and Ahad N Zehmakan. Majority model on random regular graphs. In Latin American Symposium on Theoretical Informatics, pages 572–583. Springer, 2018.
- [12] Yuval Ginosar and Ron Holzman. The majority action on infinite graphs: strings and puppets. Discrete Mathematics, 215(1-3):59–71, 2000.
- [13] Eric Goles and Jorge Olivos. Periodic behaviour of generalized threshold functions. Discrete Mathematics, 30(2):187–189, 1980.
- [14] Mark Granovetter. Threshold models of collective behavior. American Journal of Sociology, 83(6):1420–1443, 1978.
- [15] Yashodhan Kanoria, Andrea Montanari, et al. Majority dynamics on trees and the dynamic cavity method. The Annals of Applied Probability, 21(5):1694–1748, 2011.
- [16] Elchanan Mossel, Joe Neeman, and Omer Tamuz. Majority dynamics and aggregation of information in social networks. Autonomous Agents and Multi-Agent Systems, 28(3):408–429, 2014.
- [17] Vu Xuan Nguyen, Gaoxi Xiao, Xin-Jian Xu, Qingchu Wu, and Cheng-Yi Xia. Dynamics of opinion formation under majority rules on complex social networks. Scientific Reports, 10(1):1–9, 2020.
- [18] Ryan O’Donnell. Analysis of boolean functions. Cambridge University Press, 2014.
- [19] Devavrat Shah. Gossip algorithms. Now Publishers Inc, 2009.
- [20] Irina Shevtsova. On the absolute constants in the Berry–Esseen type inequalities for identically distributed summands. arXiv preprint arXiv:1111.6554, 2011.
- [21] Omer Tamuz and Ran J Tessler. Majority dynamics and the retention of information. Israel Journal of Mathematics, 206(1):483–507, 2015.
- [22] Linh Tran and Van Vu. Reaching a consensus on random networks: The power of few. arXiv preprint arXiv:1911.10279, 2019.
- [23] Ahad N. Zehmakan. Opinion Forming in Erdős–Rényi Random Graph and Expanders. In Wen-Lian Hsu, Der-Tsai Lee, and Chung-Shou Liao, editors, 29th International Symposium on Algorithms and Computation (ISAAC 2018), volume 123 of Leibniz International Proceedings in Informatics (LIPIcs), pages 4:1–4:13, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
- [24] Ahad N Zehmakan. Two phase transitions in two-way bootstrap percolation. arXiv preprint arXiv:1809.10764, 2018.
6 Appendix
Proof of Lemma 10
Proof.
We prove each claim in turn.
Claims (i) and (ii):
These essentially follow immediately from the facts that , that has mean 0, and that depends only on . We’ll prove , in the case that (the case being analogous), for which we have
And then we simply use the facts that if then [with ]. (To prove these, without loss of generality, suppose . We then condition on the value of and use that uniformly for all .)
Claims (iii) and (iv):
For these, suppose . Write as where and [so consists of edges from to a vertex of the same color and edges from to the other color]. Define the random variable
Then by conditioning on the edges in , we have (for )
Since , let . If , then by conditioning on whether , we have
Similarly if , then . In either case, if is bounded, these probabilities are at most , establishing (iii). We prove (iv) by picking some other edge and conditioning on whether or not to obtain
And this supremum is by Lemma 9.
Claim (v):
For this, we first note that by claim (ii), this sum reduces to only two terms:
Suppose and are additional vertices. From our computation in (iii), we have
By Lemma 9, the above three probabilities are all within of each other, which implies as well as are both at most . Combining this with claim (iii) then establishes (v).
Claim (vi):
For this, we use the fact that to obtain
Proof of Lemma 12
Proof.
We will prove that with high probability and are both close to their means and that each has expected value of about , which will finish the proof since . For this, we’ll focus on (the case of being almost identical). In what follows, let .
For , let denote the indicator for the event that . Then we have . So the expected value of is times , and moreover
Let . For each we have , by Lemma 9. Therefore we have
Moreover, for , the covariance is given by
where the last inequality again follows from Lemma 9. Thus, the variance of satisfies
which holds because of the assumption that . Thus, the second-moment method provides the desired concentration statement of about its mean. ∎