
2012 ACM Subject Classification: Theory of computation → Distributed computing models; Theory of computation → Dynamic graph algorithms.

Uri Meir, Tel Aviv University, Israel.
Ami Paz, Faculty of Computer Science, Universität Wien, Austria. Supported by the Austrian Science Fund (FWF): P 33775-N, Fast Algorithms for a Reactive Network Layer.
Gregory Schwartzman, Japan Advanced Institute of Science and Technology, Japan. Supported by JSPS Kakenhi Grant Numbers JP19K20216 and JP18H05291.

Copyright Uri Meir, Ami Paz, and Gregory Schwartzman.

Acknowledgements.
The authors are thankful to Seth Gilbert for interesting discussions.

Models of Smoothing in Dynamic Networks

Uri Meir    Ami Paz    Gregory Schwartzman
Abstract

Smoothed analysis is a framework suggested for mediating gaps between worst-case and average-case complexities. In a recent work, Dinitz et al. [Distributed Computing, 2018] suggested using smoothed analysis to study dynamic networks. Their aim was to explain the gap between real-world networks, which function well despite being dynamic, and the strong theoretical lower bounds for arbitrary networks. To this end, they introduced a basic model of smoothing in dynamic networks, where an adversary picks a sequence of graphs, representing the topology of the network over time, and then each of these graphs is slightly perturbed in a random manner.

The model suggested above is based on a per-round noise, and our aim in this work is to extend it to models of noise better suited for multiple rounds. This is motivated by long-lived networks, where the amount and location of noise may vary over time. To this end, we present several different models of noise. First, we extend the previous model to cases where the amount of noise is very small. Then, we move to more refined models, where the amount of noise can change between rounds, e.g., as a function of the number of changes the network undergoes. We also study a model where the noise is not spread arbitrarily over the network, but is concentrated, in each round, in the areas where changes have occurred. Finally, we study the power of an adaptive adversary, who can choose its actions in accordance with the changes that have occurred so far. We use the flooding problem as a running case study, and analyze the flooding time, which exhibits very different behaviors under the different models of noise.

keywords:
Distributed dynamic graph algorithms, Smoothed analysis, Flooding

1 Introduction

Distributed graph algorithms give a formal theoretical framework for studying networks. There is abundant literature on distributed graph algorithms modeling static networks, and recently, an increasing amount of work on dynamic networks. Yet, our understanding of the interconnections between the theoretical models and real-world networks is unsatisfactory. That is, we have strong theoretical lower bounds for many distributed settings, both static and dynamic, yet real-world communication networks function, and in a satisfactory manner.

One approach to bridging the gap between the strong theoretical lower bounds and the behaviour of real-world instances is the idea of smoothed analysis. Smoothed analysis was first introduced by Spielman and Teng [19, 18], in an attempt to explain the fact that some problems admit strong theoretical lower bounds, but in practice are solved on a daily basis. The explanation smoothed analysis suggests for this gap is that lower bounds are proved using very specific, pathological instances of a problem, which are highly unlikely to arise in practice. They support this idea by showing that some lower bound instances are extremely fragile, i.e., a small random perturbation turns a hard instance into an easy one. Spielman and Teng applied this idea to the simplex algorithm, and showed that, while it requires exponential time in the worst case, if we apply a small random noise to an instance before executing the simplex algorithm on it, the running time becomes polynomial in expectation.

Smoothed analysis was first introduced to the distributed setting in the seminal work of Dinitz et al. [8], who considered distributed dynamic networks. These are networks that change over time, and they capture many real-world systems, from blockchains, through vehicular networks, to peer-to-peer networks. They are modeled by a sequence of communication graphs, on which the algorithm needs to solve a problem despite the changes in the communication topology. Adapting the ideas of smoothed analysis to this setting is not an easy task. First, while classical smoothed analysis is applied to a single input, in the dynamic setting we deal with an infinite sequence of communication rounds, on ever-changing graphs. Second, defining the correct perturbation which the input undergoes is more challenging, as the input is discrete, as opposed to the continuous input of the simplex algorithm, where Gaussian noise is a very natural candidate.

To address the above challenges, Dinitz et al. fix a noise parameter $k>0$, and then, after the adversary picks an infinite sequence of graphs $\{G_{i}\}$, they derive a new series of graphs $\{H_{i}\}$ by perturbing every $G_{i}$ with an addition or deletion of roughly $k$ random edges. Specifically, the perturbation ($k$-smoothing) is done by choosing uniformly at random a graph within Hamming distance at most $k$ of $G_{i}$ (i.e., a graph that is different from $G_{i}$ by at most $k$ edges), which also has some desired properties (say, connectivity). Note that in the typical case $\Omega(k)$ edges are perturbed, as most of the graphs with Hamming distance at most $k$ from $G_{i}$ have Hamming distance $\Omega(k)$ from it. Using this model, they analyze the problems of flooding, hitting time, and aggregation.
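To make the perturbation concrete, the following is a minimal sketch of one way to sample such a $k$-smoothing (the function name is ours, and the networkx library is assumed only for the connectivity check; edges are sorted vertex pairs). Weighting each distance $j$ by $\binom{N}{j}$ keeps the sample uniform over the Hamming ball, and rejection keeps it uniform over the connected graphs within it.

```python
import math, random
from itertools import combinations
import networkx as nx  # assumed dependency, used only to test connectivity

def k_smooth(edges, k, n):
    """Sample a k-smoothing of the graph ([n], edges): a uniformly random
    connected graph within Hamming distance at most k of it."""
    slots = list(combinations(range(n), 2))        # all potential edges
    N = len(slots)
    # there are C(N, j) graphs at Hamming distance exactly j, so weight j accordingly
    weights = [math.comb(N, j) for j in range(k + 1)]
    while True:
        j = random.choices(range(k + 1), weights=weights)[0]
        H = set(edges)
        for e in random.sample(slots, j):          # toggle j distinct edge slots
            H ^= {e}                               # add e if absent, remove it if present
        G = nx.Graph(list(H))
        G.add_nodes_from(range(n))
        if nx.is_connected(G):                     # reject graphs outside G_allowed
            return H
```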

In this paper we present a study of models of smoothing, or put differently, a study of models of noise in dynamic networks. To this end, we focus on one of the three fundamental problems presented in the previous work — the flooding problem. In this problem, a source vertex has a message it wishes to disseminate to all other vertices in the network. In each round, every vertex that has the message forwards it to all of its neighbors, until all vertices are informed. First, note that the problem is trivially solvable in static networks, in diameter time. Second, in the dynamic setting, it is easy to construct a sequence of graphs where flooding takes $\Omega(n)$ rounds, even if each of them has a small diameter. It is thus not surprising that adding random links between vertices may accelerate the flooding process, and indeed, it was shown in [8] that in $k$-smoothed dynamic networks, the complexity of flooding drops to $\tilde{\Theta}(n^{2/3}/k^{1/3})$ (where the $\tilde{\cdot}$ sign hides $\operatorname{polylog} n$ factors). Note the huge improvement, from $\Omega(n)$ to $\tilde{O}(n^{2/3})$, even for $k=1$.
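The flooding dynamics themselves are straightforward to state in code; here is a small self-contained sketch (our own helper, with the dynamic graph given as a list of per-round edge lists):

```python
def flooding_time(rounds, n, source=0):
    """Simulate flooding: in round t, every informed vertex forwards the
    message over the edges of rounds[t - 1]; returns the number of rounds
    until all n vertices are informed (None if the sequence runs out first)."""
    informed = {source}
    for t, edges in enumerate(rounds, start=1):
        informed |= {v for e in edges for v in e   # both endpoints of any edge
                     if informed & set(e)}         # that touches an informed vertex
        if len(informed) == n:
            return t
    return None
```

On a static graph, for instance, this returns the distance from the source to the farthest vertex, matching the diameter-time observation above.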

1.1 Motivation

The significant step made by Dinitz et al. in introducing smoothed analysis to the distributed dynamic environment left many fascinating questions unanswered. One natural direction of research has to do with applying smoothed analysis to a variety of problems, but here, we take a different approach. In an attempt to understand the essential properties of smoothed analysis in our setting, we focus on questions related to the very basic concept of smoothing, and study several models of noise. We outline some of the related questions below.

The curious gap between $k=0$ and $k=1$

Dinitz et al. show a tight bound of $\tilde{\Theta}(n^{2/3}/k^{1/3})$ for the problem of flooding in dynamic networks with noise $k$. That is, as $k$ decreases, flooding becomes harder, but only up to $\tilde{\Theta}(n^{2/3})$ for a constant $k$. However, when there is no noise at all, a linear (in $n$) number of rounds is required. That is, there is a curious gap between just a tiny amount of noise and no noise at all, which stands in sharp contrast to the smooth change in the flooding time as a function of $k$, when $k$ ranges from $1$ to $\Omega(n)$. One may wonder if there is a natural way to extend the model presented by Dinitz et al. to bridge this gap.

An oblivious adversary

The results of Dinitz et al. assume an oblivious adversary. That is, the evolution of the network must be decided by the adversary in advance, without taking into account the noise added by the smoothing process. It is natural to ask what the power of an adaptive adversary is compared to an oblivious one. As smoothed analysis aims to be the middle ground between worst-case and average-case analysis, it is very natural to consider the effect of noise on the strongest possible adversary, i.e., an adaptive adversary.

Change-dependent noise

The type of noise considered by Dinitz et al. remains constant between rounds, regardless of the topology or the actions of the adversary, which corresponds to “background noise” in the network. However, it is very natural to consider cases where the amount of noise itself changes over time. More specifically, we can think of noise as an artifact of the changes the adversary performs on the system, and thus, it is natural to expect different amounts of noise when the adversary performs a single change and when it changes the whole topology of the graph. Hence, a natural model of noise in a network is one where the added noise is proportional to the amount of change attempted by the adversary, that is, $k$ is proportional to the Hamming distance between the current graph and the next one. A different natural definition of change-dependent noise is one where the noise occurs only on the changing edges, and not in the whole graph.

1.2 Our results

| Model | Adversary | Upper bound | Lower bound | Reference |
| --- | --- | --- | --- | --- |
| Integer noise | Non-adaptive | $O(n^{2/3}(1/k)^{1/3}\log n)$ | $\Omega(n^{2/3}/k^{1/3})$, for $k\leq\sqrt{n}$ | Dinitz et al. [8] |
| Fractional noise | Non-adaptive | $O(\min\{n,\ n^{2/3}(\log n/k)^{1/3}\})$ | $\Omega(\min\{n,\ n/k,\ n^{2/3}/k^{1/3}\})$ | Thm. 3.1, 3.3, 3.4 |
| Fractional noise | Adaptive | $O(\min\{n,\ n\sqrt{\log n/k}\})$ | $\Omega(\min\{n,\ n\log k/k\})$ | Thm. 3.5, 3.7 |
| Proportional noise | Non-adaptive | $O(n^{2/3}\cdot(D\log n/\epsilon)^{1/3})$ | | Thm. 4.1 |
| Proportional noise | Adaptive | $O(n)$ | $\Omega(n)$ | Thm. 4.3 |
| Targeted noise | | $O(\min\{n,\ D\log n/\epsilon^{D^{2}}\})$ | $\Omega(n)$, for $D\in\Theta(\log n)$ | Thm. 4.5, 4.7 |

Table 1: Bounds on flooding time in different models of smoothing.

To address the above points of interest, we prove upper and lower bounds, as summarized in Table 1. First, we show a natural extension of the previously presented noise model to fractional values of $k$. For $k\geq 1$, our model roughly coincides with the prior model and results, while for $0<k<1$, in our model a single random change occurs in the graph with probability $k$. In our model, we show a bound of $\tilde{\Theta}(\min\{n^{2/3}/k^{1/3},n\})$ for flooding, for values of $k$ ranging from $k=0$ to $k=\Theta(n)$, even fractional — see Theorems 3.1, 3.3, and 3.4. The flooding time thus has a clean and continuous behavior, even in the range $0<k<1$, which was not studied beforehand, providing a very natural extension of the previous model.

Next, we focus our attention on an adaptive adversary, which can choose the next graph depending on the smoothing that occurred in the current one. Here, we show that the added power indeed makes the adversary stronger, and yet, flooding can be solved in this case faster than in a setting where no smoothing is applied. Specifically, in Theorems 3.5 and 3.7 we show that in this setting flooding can be done in $\tilde{O}(n/\sqrt{k})$ rounds, and takes at least $\tilde{\Omega}(n/k)$ rounds. This result presents a strict separation between the power of an adaptive and an oblivious adversary.

We then turn our attention to a different type of noise, and introduce two new models of responsive noise — noise which depends on the changes the adversary performs. The goal of studying responsive noise is to better understand systems where the noise is an artifact of the changes, and more changes induce more noise. We consider two completely different cases: if only the amount of noise depends on the changes, then the system is less stable than in the prior models, and flooding can be delayed considerably by the adversary. On the other hand, if the same amount of noise is targeted at the changing links themselves, then the system remains stable, and flooding occurs much faster. In both models, our results surprisingly depend on a new attribute — the static diameter of the graph, $D$.

The first model of responsive noise we introduce is the proportional noise model, where the noise is randomly spread among the graph edges as before, and its amount is proportional to the number of proposed changes. We consider two types of adversaries in this setting — adaptive and oblivious. Theorem 4.1 shows that $\tilde{O}(\min\{n^{2/3}D^{1/3}/\epsilon^{1/3},n\})$ rounds are sufficient for flooding in this model with an oblivious adversary. Here, the static diameter $D$ comes into play, since the adversary can now force change-free rounds: if the adversary does not make any change, no noise occurs, the graph remains intact, and no random “shortcut edges” are created. Current lower bounds for flooding with oblivious adversaries seem to require many changes, but in the proportional noise model this results in the addition of many random edges, thus speeding up the flooding process. In addition, the upper bound suggests that the static diameter $D$ should come into play in the lower bounds, which is not the case in previous constructions. While we believe our upper bound is tight, proving lower bounds appears to be beyond the reach of current techniques.

In the proportional noise model with an adaptive adversary, we encounter another interesting phenomenon. Adjusting the upper bound analysis in a straightforward manner gives the trivial upper bound of $O(n)$, and for a good reason: an adaptive adversary can slow down the flooding time all the way to $\Omega(n)$, as shown in Theorem 4.3. The adversary for this lower bound makes only a few changes in each round, and only in necessary spots (using its adaptivity). The fact that the noise is proportional boils down to almost no noise occurring, which allows the adversary to maintain control over the network.

The second model of responsive noise we study is that of targeted noise. Here, the expected amount of noise applied is roughly the same as above, but only the edges that undergo changes are perturbed by the noise. More concretely, each change proposed by the adversary does not happen with some probability $\epsilon$. The aim here is to model networks where every attempted change may have some probability of failure. In this setting, the case of an adaptive adversary seems more natural; however, we prove our upper bound for an adaptive adversary, and the lower bound for an oblivious one, thus handling the harder cases of both sides.

Our upper bound shows a significant speedup in flooding on graphs with constant static diameter — $O(\log n)$ rounds suffice for flooding with targeted noise. This phenomenon holds even for higher static diameters: Theorem 4.5 gives an upper bound of $O((D\log_{1/\epsilon}n)/\epsilon^{D^{2}})$ rounds for flooding. This improves upon the trivial $O(n)$ whenever $D=O(\sqrt{\log_{1/\epsilon}n})$. Finally, in Theorem 4.7 we show that for larger static diameter, $D=\Theta(\log n)$, the number of rounds required for flooding is $\Omega(n)$.

Our techniques

Analyzing our new models of smoothing requires new and more global techniques. While we adopt the results of [8] for changes in a single round, our models introduce new technical challenges, as they require a multiple-round analysis.

The main technical challenge in proving upper bounds comes from the fact that one cannot even guarantee that noise occurs at every round, and when it does, the amount of noise is not fixed throughout the execution of the whole algorithm. This requires us to conduct a much more global analysis, taking into account sequences of rounds with a varying amount of noise, instead of analyzing a single round at a time. In several cases, the static diameter $D$ appears as a parameter in our results and analysis: in our model, the adversary can force change-free rounds where no noise occurs, in which case flooding happens on a static graph.

In an attempt to study the exact limitations of our noise models, we present specifically crafted lower bound networks for each model. Note that in many models our adversary is adaptive, potentially making it much stronger. This makes our lower bounds more strategy-based, and not merely fixed instances of a dynamic graph. We revise the original flooding lower bound of Dinitz et al., in order to make it more applicable to other models. We present a more detailed and rigorous proof, which indeed implies tight bounds in most of our models, and specifically, when considering adaptive adversaries.

1.3 Related work

Smoothed analysis was introduced by Spielman and Teng [19, 18] in relation to pivoting rules in the simplex algorithm. Since then, it has received much attention in sequential algorithm design; see, e.g., the survey in [19]. The first application of smoothed analysis to the distributed setting is due to Dinitz et al. [8], who apply it to the well-studied problems of aggregation, flooding, and hitting time in dynamic networks, as described above. Chatterjee et al. [5] considered the problem of minimum spanning tree construction in a distributed synchronous network, where the smoothing is in the form of random links added in each round, which can be used for communication but not as part of the tree.

The field of distributed algorithm design for dynamic networks has received considerable attention in recent years [16]. Most of the works considered models of adversarial changes [14, 1, 9, 11, 12, 15, 2, 4], while others considered random changes, or random interactions between the vertices [3, 6, 7, 10, 13]. Yet, as far as we are aware, only the two aforementioned works take the smoothed analysis approach in this context.

2 Preliminaries

2.1 Dynamic graphs

All graphs are of the form $G=(V,E)$, where $V=[n]=\{1,\ldots,n\}$ is the set of vertices. A dynamic graph $\mathcal{H}$ is a sequence $\mathcal{H}=(G_{1},G_{2},\ldots)$ of graphs $G_{i}=(V,E_{i})$ on the same set of vertices, which can be finite or infinite. The diameter of a (static) graph is the largest distance between a pair of vertices in it, and the static diameter of a dynamic graph $\mathcal{H}$ is the minimal upper bound $D$ on all the diameters of the graphs in the sequence $\mathcal{H}$.

We study synchronous distributed algorithms in this setting, where the time is split into rounds, and in round $i$ the vertices communicate over the edges of the graph $G_{i}$.

Given two graphs $G=(V,E)$ and $G'=(V,E')$ on the same set of vertices, we denote $G-G'=(V,E\setminus E')$. We also denote $G\oplus G'=(V,E\oplus E')$, where $E\oplus E'$ is the set of edges appearing in exactly one of the graphs. The size of the set $E\oplus E'$ is called the Hamming distance between $G$ and $G'$, and is denoted by $|G\oplus G'|$.

2.2 Models of noise

Our smoothed analysis is based on three models of noise, defined in this section.

For a single step, we recall the definition of $t$-smoothing [8], and add the new notion of $\epsilon$-smoothing, for a parameter $0<\epsilon<1$. At each step, we think of $G_{\mathrm{old}}$ as the current graph, and of $G_{\mathrm{adv}}$ as the future graph suggested by the adversary. The actual future graph, $G_{\mathrm{new}}$, will be a modified version of $G_{\mathrm{adv}}$, randomly chosen as a function of $G_{\mathrm{old}}$ and $G_{\mathrm{adv}}$. In addition, we consider the set $\mathcal{G}_{\mathrm{allowed}}$ of allowed graphs for a specific problem. For flooding, this is just the set of all connected graphs.

Definition 2.1.

Let $0\leq\epsilon\leq 1$ and $t\in\mathbb{N}$ be two parameters, $\mathcal{G}_{\mathrm{allowed}}$ a family of graphs, and $G_{\mathrm{old}}$ and $G_{\mathrm{adv}}$ two graphs in $\mathcal{G}_{\mathrm{allowed}}$.

  • A $t$-smoothing of $G_{\mathrm{adv}}$ is a graph $G_{\mathrm{new}}$ which is picked uniformly at random from all the graphs of $\mathcal{G}_{\mathrm{allowed}}$ that are at Hamming distance at most $t$ from $G_{\mathrm{adv}}$. The parameter $t$ is called the noise parameter.

  • An $\epsilon$-targeted smoothing of a graph $G_{\mathrm{adv}}$ with respect to $G_{\mathrm{old}}$ is a graph $G_{\mathrm{new}}$ which is constructed from $G_{\mathrm{adv}}$ by adding to it each edge of $G_{\mathrm{old}}-G_{\mathrm{adv}}$ independently at random with probability $\epsilon$, and removing each edge of $G_{\mathrm{adv}}-G_{\mathrm{old}}$ with the same probability. If the created graph is not in $\mathcal{G}_{\mathrm{allowed}}$, the process is repeated.
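In code, a single $\epsilon$-targeted smoothing step can be sketched as follows (a minimal sketch of our own, with graphs as sets of edges; the resampling upon leaving $\mathcal{G}_{\mathrm{allowed}}$ is left out):

```python
import random

def targeted_smoothing(G_old, G_adv, eps):
    """eps-targeted smoothing of G_adv w.r.t. G_old: every change proposed
    by the adversary is independently reverted with probability eps."""
    G_new = set(G_adv)
    for e in G_old - G_adv:       # edges the adversary removed
        if random.random() < eps:
            G_new.add(e)          # the removal fails: the edge is added back
    for e in G_adv - G_old:       # edges the adversary added
        if random.random() < eps:
            G_new.discard(e)      # the addition fails: the edge is removed
    return G_new                  # (repeat the process if G_new is not allowed)
```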

We are now ready to define the three types of smoothing for dynamic graphs. The first extends the definition of [8] to non-integer values $k\in\mathbb{R}_{+}$, and the other two incorporate noise that depends on the latest modifications in the graph.

For a positive real number $x$, we define the random variable $\operatorname{round}(x)$, which takes the value $\lceil x\rceil$ with probability $x-\lfloor x\rfloor$ (the fractional part of $x$), and $\lfloor x\rfloor$ otherwise.
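In code, this randomized rounding is one line, and it satisfies $\mathbb{E}[\operatorname{round}(x)]=x$:

```python
import math, random

def randomized_round(x):
    """round(x): takes ceil(x) with probability x - floor(x), floor(x) otherwise."""
    return math.ceil(x) if random.random() < x - math.floor(x) else math.floor(x)
```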

Definition 2.2.

Let $0\leq\epsilon\leq 1$ and $k\in\mathbb{R}_{+}$ be two parameters, and $\mathcal{G}_{\mathrm{allowed}}$ a family of graphs.

Let $\mathcal{H}=(G_{1},G_{2},\ldots)$ be a dynamic graph, i.e., a sequence of “temporary” graphs. Let $G'_{0}$ be a graph (if $G'_{0}$ is not explicitly specified, we assume $G'_{0}=G_{1}$).

  • A $k$-smoothed dynamic graph is the dynamic graph $\mathcal{H}'=(G'_{0},G'_{1},G'_{2},\ldots)$ defined from $\mathcal{H}$, where for each round $i>0$, $G'_{i}$ is the $t_{i}$-smoothing of $G_{i}$, where $t_{i}=\operatorname{round}(k)$.

  • An $\epsilon$-proportionally smoothed dynamic graph $\mathcal{H}'=(G'_{0},G'_{1},G'_{2},\ldots)$ is defined from $\mathcal{H}$, where for each round $i>0$, $G'_{i}$ is the $t_{i}$-smoothing of $G_{i}$, where $t_{i}=\operatorname{round}(\epsilon\cdot|G'_{i-1}\oplus G_{i}|)$.

  • An $\epsilon$-targeted smoothed dynamic graph is the dynamic graph $\mathcal{H}'=(G'_{0},G'_{1},G'_{2},\ldots)$ defined iteratively from $\mathcal{H}$, where for each round $i>0$, $G'_{i}$ is the $\epsilon$-targeted smoothing of $G_{i}$ w.r.t. $G'_{i-1}$.

All the above definitions also have adaptive versions, where $\mathcal{H}$ and $\mathcal{H}'$ are defined in parallel: the graph $G_{i}$ is chosen by an adversary after the graphs $G_{1},\ldots,G_{i-1}$ and $G'_{0},G'_{1},\ldots,G'_{i-1}$ are already chosen, and then $G'_{i}$ is defined as above. We use adaptive versions of the first two definitions.
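Putting the definitions together, one round of each of the three models can be sketched as follows, reusing the k_smooth, targeted_smoothing, and randomized_round helpers from the sketches above (here ^ is the symmetric difference of edge sets):

```python
def smooth_round(model, param, G_prev, G_adv, n):
    """One smoothing round per Definition 2.2; param is k for "k-smoothed"
    and eps otherwise. G_prev is G'_{i-1} and G_adv is G_i, as edge sets."""
    if model == "k-smoothed":
        return k_smooth(G_adv, randomized_round(param), n)   # t_i = round(k)
    if model == "proportional":
        t = randomized_round(param * len(G_prev ^ G_adv))    # t_i = round(eps * |G'_{i-1} (+) G_i|)
        return k_smooth(G_adv, t, n)
    if model == "targeted":
        return targeted_smoothing(G_prev, G_adv, param)
    raise ValueError(f"unknown model: {model}")
```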

2.3 Auxiliary lemmas

We use two basic lemmas, which help in analyzing the probability of the noise adding or removing certain edges from a given set of potential edges. These lemmas address a single round; we apply them with a different parameter in each round. The lemmas appeared as Lemmas 4.1 and 4.3 in [8], where they were used with the same parameter throughout the process.

Lemma 2.3.

There exists a constant $c_{1}>0$ such that the following holds. Let $G_{\mathrm{adv}}\in\mathcal{G}_{\mathrm{allowed}}$ be a graph, and $\emptyset\neq S\subseteq\binom{[n]}{2}$ a set of potential edges. Let $t\in\mathbb{N}$ be an integer such that $t\leq n/16$ and $t\cdot|S|\leq n^{2}/2$.

If $G_{\mathrm{new}}$ is a $t$-smoothing of $G_{\mathrm{adv}}$, then the probability that $G_{\mathrm{new}}$ contains at least one edge from $S$ is at least $c_{1}\cdot t|S|/n^{2}$.

Lemma 2.4.

There exists a constant $c_{2}>0$ such that the following holds.

Let $G_{\mathrm{adv}}\in\mathcal{G}_{\mathrm{allowed}}$ be a graph. Let $S\subseteq\binom{[n]}{2}$ be a set of potential edges such that $S\cap E_{\mathrm{adv}}=\emptyset$. Let $t\in\mathbb{N}$, such that $t\leq n/16$.

If $G_{\mathrm{new}}$ is a $t$-smoothing of $G_{\mathrm{adv}}$, then the probability that $G_{\mathrm{new}}$ contains at least one edge from $S$ is at most $c_{2}\cdot t|S|/n^{2}$.

2.4 Probability basics

In all our upper bound theorems, we show that flooding succeeds with high probability (w.h.p.), i.e., with probability at least $1-n^{-c}$ for a constant $c$ of choice. For simplicity, we prove a success probability of $1-n^{-1}$, but this can be amplified by increasing the flooding time by a multiplicative constant factor. Our lower bounds show that flooding does not succeed with probability more than $1/2$, so it definitely does not succeed w.h.p.

We will also use the following Chernoff bound (see, e.g. [17]):

Lemma 2.5.

Suppose $X_{1},\dots,X_{n}$ are independent Bernoulli random variables, with $\mathbb{E}[X_{i}]=\mu$ for every $i$. Then for any $0\leq\delta\leq 1$:

\[\Pr\left[\sum_{i=1}^{n}X_{i}\leq(1-\delta)\mu n\right]\leq e^{-\delta^{2}\mu n/2}.\]

We usually apply this bound with $\delta=0.9$.
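For instance, in the proof of Theorem 3.1 below, the bound is applied with $\delta=0.9$ to $r$ indicator variables with $\mathbb{E}[X_{i}]=k$, giving
\[\Pr\left[\sum_{i=1}^{r}X_{i}\leq 0.1\,kr\right]\leq e^{-0.405\,kr}\leq e^{-0.4\,kr}.\]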

3 Flooding in Dynamic Networks

3.1 Fractional amount of noise

We address an interesting phenomenon which occurs in [8]: by merely adding a single noisy edge per round, the flooding process dramatically speeds up, from $\Theta(n)$ in the worst case to only $\Theta(n^{2/3})$ in the worst case. To explain this gap, we present a natural notion of fractional noise: if the amount of noise $k$ is not an integer, we use an integer close to $k$, using the function $\operatorname{round}(k)$. An easy assertion of the result in [8] is that whenever $k>1$, the analysis does not change by much, and the same asymptotic bound holds. This leaves open the big gap between $k=0$ (no noise) and $k=1$. We address this question in the next two theorems, showing a smooth transition in the worst-case flooding time between these two values of $k$.

Next, we revise the upper bound in [8] to handle fractional values of $k$, including values smaller than $1$. In addition, we show a somewhat cleaner argument, which carries three additional advantages: (1) it easily extends to adaptive adversaries; (2) it extends well to other models, as seen in subsequent sections; (3) it decreases the multiplicative logarithmic factor, dropping the asymptotic gap between the upper and lower bounds to only $O(\log^{1/3}n)$. Hence, the proof of the next theorem serves as a prototype for later proofs, of upper bounds for adaptive adversaries and proportional noise.

Theorem 3.1.

Fix a real number $0<k\leq n/16$. For any $k$-smoothed dynamic graph, flooding takes $O(\min\{n,\ n^{2/3}(\log n/k)^{1/3}\})$ rounds, w.h.p.

Note that an $n$-round upper bound is trivial (using the connectivity guarantee), and that whenever $k\leq c\cdot\log n/n$ for some constant $c$, the term $n$ is the smaller one, and we need not prove anything more. (Footnote 1: This is rather intuitive: if $k$ is very small (e.g., $k\leq c/n$), then when considering $\delta n$ rounds for small enough $\delta$, no noise occurs. It is known that when no noise occurs, a linear number of rounds is required for flooding. This intuition is formalized in the lower bound later in this section.) We therefore focus on the case $k>c\cdot\log n/n$, and for this case we show an upper bound of $O(n^{2/3}(\log n/k)^{1/3})$.

Proof 3.2.

Fix $u$ to be the starting vertex, and $v$ to be some other vertex. We show that within $R=3r$ rounds, the message has been flooded to $v$ with constant probability (over the random noise). We split the $3r$ rounds into three phases as follows.

  1. The first $r$ rounds, for spreading the message to a large set of vertices we call $U$.

  2. The next $r$ rounds, for transferring the message from the set $U$ to another large set $V$, to be defined next.

  3. The last $r$ rounds, for “converging” and passing the message from the potential set $V$ to the specific vertex $v$.

We now quantify the message flooding in the three phases.

After the first $r$ rounds, the message has reached many vertices.

Let $U_{i}$ denote the set of vertices that have been flooded at time $i$. Using merely the fact that the graph is connected at all times, and assuming the flooding process is not over, we know that at each round a new vertex receives the message, implying $|U_{i+1}|\geq|U_{i}|+1$. Hence, $|U_{r}|\geq r$ (with probability $1$).

In the last $r$ rounds, many vertices can potentially flood the message to $v$.

We denote by $V_{i}$ the set of vertices from which $v$ can receive the message within the last $i$ rounds ($R-i+1,\dots,R$). Formally, fix the sequence of graphs chosen by the oblivious adversary, and apply the random noise on these graphs to obtain the last $r$ graphs of our smoothed dynamic graph. Define $V_{0}=\{v\}$, and for $i>0$, define $V_{i}$ as the union of $V_{i-1}$ with its neighbors in $G_{R-i+1}$. That is, $V_{i}$ is defined analogously to $U_{i}$, but with the opposite order of graphs. We point out that we are dealing with a stochastic process: each $V_{i}$ is a random variable that depends on the noise in the last $i$ rounds. Still, the connectivity guarantees that $|V_{i+1}|\geq|V_{i}|+1$. Therefore, we have $|V_{r}|\geq r$ (with probability $1$). (Footnote 2: Note that here we strongly rely on the obliviousness of the adversary: with an adaptive adversary, one cannot define $V_{r}$ properly before the end of the execution of the algorithm, as the adversary's choices were not yet made. The case of an adaptive adversary is discussed in the next section.)

The middle rounds.

The above processes define two randomly chosen sets, $U_{r}$ and $V_{r}$, each of size at least $r$. If $U_{r}\cap V_{r}\neq\emptyset$, then we are done, as $v$ is guaranteed to receive the message even if we ignore the middle rounds. Otherwise, we consider the set $S=U_{r}\times V_{r}$ of potential edges, and show that one of them belongs to our smoothed dynamic graph in at least one of the middle graphs, with probability at least $1-n^{-2}$, for the right value of $r$.

Let us consider the two cases separately: $k\geq 1$ (note that non-integer $k$ was not handled before), and $0<k<1$.

The case of $k\geq 1$.

In this case, we essentially claim the following: we are guaranteed to have either $\lfloor k\rfloor$ or $\lceil k\rceil$ noise at each and every round. Applying Lemma 2.3 for each such round, the probability of not adding any edge from $S$ is at most

\[\left(1-c_{1}\lfloor k\rfloor\left|S\right|/n^{2}\right)\leq\left(1-c_{1}kr^{2}/2n^{2}\right),\]

where the inequality follows from $\lfloor k\rfloor\geq k/2$ and $|S|=|U_{r}|\cdot|V_{r}|\geq r^{2}$. Thus, the probability of not adding any edge from $S$ in any of these $r$ noisy rounds is at most

\[\left(1-c_{1}kr^{2}/2n^{2}\right)^{r}\leq e^{-c_{1}r^{3}k/(2n^{2})},\]

which is upper bounded by $n^{-2}$, whenever $r\geq n^{2/3}\cdot\left[(4\log n)/(c_{1}k)\right]^{1/3}$.
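For concreteness, the threshold on $r$ follows by solving the exponent directly:
\[e^{-c_{1}r^{3}k/(2n^{2})}\leq n^{-2}\iff\frac{c_{1}r^{3}k}{2n^{2}}\geq 2\log n\iff r\geq n^{2/3}\left[\frac{4\log n}{c_{1}k}\right]^{1/3}.\]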

The case of $0<k<1$.

Note that here we can no longer use $\lfloor k\rfloor\geq k/2$, and so we turn to a somewhat more global analysis, which guarantees that “enough” rounds produce a (single) noisy edge: at each round we essentially add a noisy edge with probability $k<1$, and otherwise do nothing. Since we have $r$ rounds, and in each of them noise occurs independently, we can use a standard Chernoff bound to say that with all but probability at most $e^{-0.4kr}$, we have $(k/10)r$ rounds in which a noisy edge was added. (Footnote 3: The expectation over all rounds is $kr$ noisy edges. We later bound this error term.) For each of the $(k/10)r$ rounds, we again apply Lemma 2.3 to conclude that no edge from $S$ was added with probability at most

\[\left(1-c_{1}\left|S\right|/n^{2}\right)\leq\left(1-c_{1}r^{2}/n^{2}\right),\]

where the inequality follows again from $|S|=|U_{r}|\cdot|V_{r}|\geq r^{2}$. Thus, the probability of not adding any edge from $S$ in any of these $(k/10)r$ noisy rounds is upper bounded by

\[\left(1-c_{1}r^{2}/n^{2}\right)^{(k/10)r}\leq e^{-c_{1}kr^{3}/(10n^{2})},\]

which is upper bounded by $1/(2n^{2})$ whenever $r\geq n^{2/3}\cdot\left[(10\log n)/(c_{1}k)\right]^{1/3}$.

Recalling that we only deal with the case $k\geq c\cdot\log n/n$, and choosing the constant $c$ according to the constant $c_{1}$, we also get $r\geq 10\log n/k$. This allows us to bound the error of the Chernoff inequality: $e^{-0.4kr}\leq e^{-3\log n}\leq 1/(2n^{2})$. Union bounding over both possible errors, we get that with probability at least $1-n^{-2}$, the vertex $v$ is informed after $3r$ rounds.

To conclude, we use $r=n^{2/3}\cdot\left[(10\log n)/(c_{1}k)\right]^{1/3}$. For any value $k\geq c\cdot\log n/n$, and for any realization of $U_{r}$ and $V_{r}$, the message has been passed to $V_{r}$ within the $r$ middle rounds, with probability at least $1-n^{-2}$. So $v$ has received the message after $R=3r$ rounds with the same probability. Taking a union bound over all the vertices implies that $R$ rounds are enough to flood the whole network with probability at least $1-1/n$.

We also show a matching lower bound, restructuring the proof of the lower bound in [8]. We note that their proof actually states a lower bound of $\Omega(\min\{n/k,\ n^{2/3}/k^{1/3}\})$ that applies to any $k<n/16$ (and is indeed tight for $k=O(\sqrt{n})$). We restructure the analysis of the lower bound to fit the different constructions given in the following sections, as follows.

First, we consider the first $R$ rounds, for some parameter $R$, and show inductively that the following two main claims continue to hold for every round $i<R$ with very high probability.

  • Some pre-defined set of vertices stays uninformed.

  • Given the above, the expected number of informed vertices in total does not grow rapidly.

The growth (in the expected number of informed vertices) has a multiplicative-additive progression, potentially becoming an exponential growth (with a small exponent).

We choose $R$ such that the expected number of informed vertices is upper bounded by $\delta n$, and use Markov's inequality to show that with high probability the flooding is not yet over after $R$ rounds. We then apply a union bound over all the inductive steps, where each has a small probability of failing the inductive argument. Altogether, we show that with high probability, the flooding is not over after $R$ rounds.

In Appendix A, we use an argument along those lines in order to prove the following extensions of the lower bound from [8] to non-integer values and to fractional values (where $k<1$).

Theorem 3.3.

Fix $1\leq k\leq n/16$ (not necessarily an integer). For any $k$-smoothed dynamic graph, for the flooding process to succeed with probability at least $1/2$, it must run for $\Omega(\min\{n/k,\ n^{2/3}/k^{1/3}\})$ rounds.

In particular, whenever $k=O(\sqrt{n})$, the dominant term is $\Omega(n^{2/3}/k^{1/3})$, matching the upper bound (up to logarithmic factors).

Theorem 3.4.

Fix $0<k<1$. For any $k$-smoothed dynamic graph, for the flooding process to succeed with probability at least $1/2$, it must run for $\Omega(\min\{n,\ n^{2/3}/k^{1/3}\})$ rounds.

3.2 Adaptive vs. oblivious adversary

Here we note that the results of [8] are for the case of an oblivious (non-adaptive) adversary. We extend our results (in the more general, fractional noise regime) to the adaptive case, obtaining bounds that differ from the ones in [8]. Interestingly, our results show a separation between an adaptive and an oblivious adversary for a wide range of network noise: for constant $k$, in particular, we get a separation between the adaptive case, where no constant amount of noise speeds up the flooding below $\Omega(n)$, and the oblivious case, where $k=\omega(1/n)$ already speeds up the flooding to $o(n)$ rounds.

Theorem 3.5.

Fix $0<k\leq n/16$, not necessarily an integer. For any $k$-smoothed adaptive dynamic graph, flooding takes $O(\min\{n,\ n\sqrt{\log n/k}\})$ rounds, w.h.p.

The proof follows an argument similar to that of Theorem 3.1, where the process is broken into three phases: $u$ spreads the message to $U_{r}$, which carries it to $V_{r}$, whose vertices are sure to deliver it to the destination $v$. The major difference here is that we can no longer rely on the last phase, since an adaptive adversary sees the informed vertices and is able to act accordingly. We thus give more focus to the second phase, analyzing the chance of a direct noisy edge between the informed set $U_{r}$ and a destination vertex $v$.

Proof 3.6.

First, note that either way $n-1$ rounds are enough for flooding, as we still have a connectivity guarantee for each and every iteration. This means that for $k<\Theta(\log n)$, the theorem does not improve upon the trivial upper bound. In particular, in the following, we can therefore disregard the case of $k<1$.

We use a similar argument as in the proof of Theorem 3.1. We note that the previous proof relied on an oblivious adversary when we statistically analyzed the middle rounds, for each fixed value of $V_{r}$ (which depends on the last $r$ rounds). Assuming adaptivity, one can no longer use this analysis, and so we turn to a more simplified one: we consider only two phases, the first $r$ rounds and the last $r$ rounds. At first, connectivity assures us that enough vertices learn the message, and next we analyze the probability of one of them connecting directly to the goal vertex $v$ by a noisy edge. As before, the first phase gives us $|U_{r}|\geq r$ with probability $1$. Next, we give an analysis of the second phase, which corresponds to that of the middle rounds before.

The second phase.

Note that if $v\in U_{r}$, then we are done. Otherwise, we consider the set $S=U_{r}\times\{v\}$ of potential edges, and show that one of them belongs to our smoothed dynamic graph in at least one of the intermediate graphs, with high probability. We use the fact that for $0<k<1$ the assertion is trivial, and analyze the case of $k\geq 1$.

In this case, as before, we apply Lemma 2.3 for each round in this phase, and conclude that the probability of not adding any edge from $S$ is at most

\[1-c_{1}\lfloor k\rfloor\left|S\right|/n^{2}\leq 1-c_{1}kr/2n^{2},\]

where the inequality follows from $\lfloor k\rfloor\geq k/2$ and $|S|=|U_{r}|$. Thus, the probability of not adding any edge from $S$ in any of these $r$ noisy rounds is upper bounded by

\[\left(1-c_{1}kr/2n^{2}\right)^{r}\leq e^{-c_{1}r^{2}k/(2n^{2})},\]

which is upper bounded by $n^{-2}$, for $r=2n\cdot\sqrt{\log n/(c_{1}k)}$.
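Indeed, for this value of $r$ the exponent is
\[\frac{c_{1}r^{2}k}{2n^{2}}=\frac{c_{1}k}{2n^{2}}\cdot\frac{4n^{2}\log n}{c_{1}k}=2\log n,\]
so the bound is exactly $e^{-2\log n}=n^{-2}$.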

Using this value for $r$, for any value $k\geq 1$ and any realization of $U_{r}$, the message has been passed to $v$ with probability at least $1-n^{-2}$, within the whole $R=2r$ rounds of phases one and two. A union bound over all vertices $v\neq u$ implies that $R$ rounds are enough to flood the whole network with probability at least $1-1/n$.

When $k$ is a constant, the above result is tight, which we prove by adjusting the oblivious dynamic network from Theorem 3.3 to use the adversary's adaptivity, as follows. The basic structure of the hard instance for flooding is achieved by roughly splitting the graph into informed and uninformed vertices. In order to ensure low diameter, all the uninformed vertices are connected as a star, centered at a changing uninformed vertex called the head, which in turn connects them to a star composed of the informed vertices. The key idea in the analysis of this process is that the head at each round should not become informed via noisy edges at an earlier round, as in this case it would immediately propagate the message to all the uninformed vertices, completing the flooding. An oblivious adversary picks a sequence of heads at the beginning of the process, and this invariant is kept since the probability of any of the selected heads getting informed too early is low. However, after roughly $n^{2/3}$ rounds, the probability that a selected head is informed becomes too high. An adaptive adversary, on the other hand, can continue crafting a hard instance graph for a linear number of rounds — in each round, the adversary knows which vertices are uninformed and can pick one of them as the new head, thus overcoming this obstacle.

Theorem 3.7.

Fix $0<k\leq n/16$, not necessarily an integer. For any $k$-smoothed adaptive dynamic graph, for the flooding process to succeed with probability at least $1/2$, it must run for $\Omega(\min\{n,\ n\log k/k\})$ rounds.

Proof 3.8.

We start by formally defining a hard case for flooding with noise. The basis of our definition is the spooling graph, which was defined by Dinitz et al. [8]: at time $i$, the edges are $\{(j,i)\}_{j=1}^{i-1}$, a star around the vertex $i$ (this will be the informed spool), and the edges $\{(i+1,j)\}_{j=i+2}^{n}$, a star around the vertex $i+1$, which will be the right spool, with vertex $i+1$ crowned as the head. The spools are then connected using the additional edge $(i,i+1)$.

We define the adaptive spooling graph as follows: at first, $E_{1}=([n]\setminus\{2\})\times\{2\}$. After the first iteration, vertex $2$ learns the message and is moved to the left (informed) spool. For each round $i$, we denote by $I_{i}$ the set of informed vertices at its end. We also use $u_{i}$ for the lowest index of an uninformed vertex at the end of round $i$; formally, $u_{i}=\min\{\bar{I}_{i}\}$. We define the adaptive spooling graph at time $i+1$ to have the edge set

\[E_{i+1}=\left\{1\right\}\times(I_{i}\setminus\left\{1\right\})\;\cup\;\left\{(1,u_{i})\right\}\;\cup\;\left\{u_{i}\right\}\times(\bar{I}_{i}\setminus\left\{u_{i}\right\}).\]

In this graph, essentially, vertex $1$ is connected to all already-informed vertices, and also to the next head $u_{i}$, which is connected to all other uninformed vertices. The static diameter stays $3$ at each iteration, but now we are promised that at iteration $i+1$, the new head $u_{i}$ is uninformed at the beginning of said round.
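For concreteness, here is a small sketch of this construction (our own helper; vertices are $1,\dots,n$, informed plays the role of $I_{i}$, and at least one uninformed vertex is assumed to remain):

```python
def adaptive_spooling_edges(n, informed):
    """Edge set E_{i+1} of the adaptive spooling graph: vertex 1 spans a star
    over the informed spool, the lowest uninformed vertex u_i spans a star
    over the uninformed spool, and the edge (1, u_i) connects the two."""
    uninformed = set(range(1, n + 1)) - informed
    u = min(uninformed)                            # the new head u_i
    return ({(1, v) for v in informed if v != 1}
            | {(1, u)}
            | {(u, v) for v in uninformed if v != u})
```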

We next show that the expected number of informed vertices cannot grow by much.

Claim 1.

When taking expectation over the noise we add, we have

\[\mathbb{E}\left[\left|I_{i+1}\right|\right]\leq(1+c_{2}k/n)\cdot\mathbb{E}\left[\left|I_{i}\right|\right]+1.\]

Note that $I_{i}$ does not depend on the noise of rounds $i+1,\dots,R$, and so the left side takes expectation over $i+1$ rounds of noise, and the right side over $i$ rounds.

Intuitively, the claim states that the expected growth in the number of informed vertices is bounded by an additive-multiplicative progression (i.e., at each step the amount grows multiplicatively, and then a constant is added). This is true as the noisy edges induce a multiplicative growth, and connectivity forces at least one additional vertex (in fact, $u_{i}$ itself) to receive the message. The proof of this claim is deferred to Appendix A.

Using the claim, we analyze the progression of $A_{i}=\mathbb{E}[|I_{i}|]$, splitting into two cases:

  1. For $A_{i}\leq n/(c_{2}k)$, the additive term is larger, and so in this range $A_{i+1}\leq A_{i}+2$.

  2. For $A_{i}\geq n/(c_{2}k)$, the multiplicative term overcomes, and we get $A_{i+1}\leq(1+2c_{2}k/n)A_{i}$.

Note that we also have $A_{i+1}\geq A_{i}+1$, using connectivity, showing that these bounds on the progression are rather tight.

We next split into cases: if $k<1/c_{2}$, the additive term controls the process through $\Theta(n)$ iterations, giving us $A_{n/20}\leq n/10$. Otherwise, the multiplicative term comes into play. We denote the number of additive rounds by $r_{0}$: the minimal index such that $A_{r_{0}}\geq n/(c_{2}k)$. Since $A_{0}=1$ (only the vertex $1$ knows the message at the beginning), and we have good bounds on the progression, we conclude that $r_{0}=\Theta(n/k)$.

Next, we allow $r_{1}$ additional rounds in which the multiplicative term is dominant. We get

\[A_{r_{0}+r_{1}}\leq A_{r_{0}}(1+2c_{2}k/n)^{r_{1}}\leq\Theta(n/k)\cdot(1+2c_{2}k/n)^{r_{1}}.\]

Taking $r_{1}=\delta(n\log k/k)$ with small enough $\delta$, we have $A_{r_{0}+r_{1}}=\mathbb{E}[|I_{r_{0}+r_{1}}|]\leq n/10$.
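To verify this choice of $r_{1}$, note that
\[(1+2c_{2}k/n)^{r_{1}}\leq e^{2c_{2}kr_{1}/n}=e^{2c_{2}\delta\log k}=k^{2c_{2}\delta},\]
so $A_{r_{0}+r_{1}}\leq\Theta(n/k)\cdot k^{2c_{2}\delta}$, which is at most $n/10$ once $\delta$ is small enough relative to $c_{2}$ and the constant hidden in the $\Theta$.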

In both cases, there is a round $R$ with $\mathbb{E}[|I_{R}|]\leq n/10$. By Markov's inequality, strictly fewer than $n$ vertices are informed after $R$ rounds, with probability at least $0.9$. Note that for the first case $R=\Theta(n)$, and for the second case $R=r_{0}+r_{1}=O(n\log k/k)$, concluding the proof.

4 Responsive noise

In this section we consider responsive noise in the dynamic network, where some elements of the noise incurred in each round relate to the changes made in that round. We consider this model as a complement to the one discussed in the previous section: the parameter $k$ of “noise per round” is fit to model some internal background noise in a system. However, in a more realistic model we would expect the noise to follow different patterns in times of major changes in the network, as opposed to rounds where no changes were made at all.

To this end, we introduce two variants of responsive noise: one is proportional noise, and the other is targeted noise. In the first, the amount of noisy edges relates to the amount of changes that occurred in the last step. In the second variant, we expect the noise to specifically “target” newly modified edges. This could correspond to a somewhat limited adversary (in terms of its ability to change the graph).

In the responsive model, an interesting phenomenon occurs: an adversarial network can choose to stay still, in order to force a round where no noise occurs. For the flooding problem, this strength is limited: whenever the static diameter of each iteration is upper bounded by $D$, the waiting game can only last $D-1$ rounds. To that end, we show how this model affects the analysis of the upper bound, and yet again incorporate the new phenomenon described to devise a non-trivial lower bound.

4.1 Proportional noise

Theorem 4.1.

Fix $\epsilon>0$. For any $\epsilon$-proportionally smoothed dynamic graph with static diameter at most $D$, if no noise invokes more than $n/16$ changes, the flooding process finishes after $O(n^{2/3}\cdot(D\log n/\epsilon)^{1/3})$ rounds, w.h.p.

The proof resembles the proof of Theorem 3.1, with three phases, and handles the “waiting game” mentioned above. Given static diameter $D$, an adversarial network can only stay intact for $D-1$ rounds; thus, in the second phase, at least a $1/D$ fraction of the rounds introduce one changed edge (or more), inferring that w.h.p. noise is incurred in an $\Omega(\epsilon/D)$ fraction of the rounds.

Proof 4.2.

We start by noting that if $n\leq c\cdot D\log n/\epsilon$ for some constant $c$, this bound is larger than the trivial $O(n)$ guaranteed by connectivity, and we are done. Next, we follow a similar argument as the one in the proof of Theorem 3.1. For the oblivious case, we had three phases: the first $r$ rounds and the last $r$ rounds are always guaranteed connectivity, and so we have the random sets $U_{r},V_{r}$ such that with probability $1$ we know $|U_{r}|,|V_{r}|\geq r$. If the sets intersect, we are done, so assume they do not, and denote $S=U_{r}\times V_{r}$. Note that $|S|\geq r\cdot r=r^{2}$, and that the size of $S$ is guaranteed even though its realization depends on the first and last rounds. Since the adversary is not adaptive, we can turn to analyze the chance of adding a noisy edge from the set $S$ during the $r$ middle rounds. We note that the adversary can now play the waiting game in order to force rounds where no noise is invoked.

The middle rounds.

As we are promised a static diameter of at most $D$, we know that if the network does not change for $D$ consecutive steps, the flooding is guaranteed to finish successfully. Therefore, we can assume that within the $r$ middle rounds, once every $D$ rounds we have a round where changes happen, and therefore potential noise might occur. Overall, this guarantees us $r/D$ such rounds, where in each of them noise occurs independently with probability at least $\epsilon$ (corresponding to the amount of noise if only a single change was made). We apply a standard Chernoff bound to say that with all but probability at most $e^{-0.4\epsilon r/D}$, we have $\epsilon r/(10D)$ rounds in which at least one noisy edge was added. (Footnote 4: The expectation over all rounds is at least $\epsilon r/D$ rounds with noisy edges. We later bound this error term.) We write $t_{i}$ for the amount of noise at each such round, for which we know $1\leq t_{i}\leq n/16$, using the premise. Now, one can safely apply Lemma 2.3 with $S$ for each such round, to upper bound the probability of not adding any such potential edge:

\[\left(1-c_{1}t_{i}\left|S\right|/n^{2}\right)\leq\left(1-c_{1}r^{2}/2n^{2}\right),\]

where the inequality simply follows from $t_{i}\geq 1$ and $|S|\geq r^{2}$. Thus, the probability of not adding any edge from $S$ in any of the $\epsilon r/(10D)$ noisy rounds is upper bounded by

\[\left(1-c_{1}r^{2}/2n^{2}\right)^{\epsilon r/(10D)}\leq e^{-c_{1}r^{3}\epsilon/(20Dn^{2})},\]

which is upper bounded by $1/(2n^{2})$, whenever $r\geq 20n^{2/3}\cdot\left[D\log n/(c_{1}\epsilon)\right]^{1/3}$.

We now recall our assumption that $n\geq c\cdot D\log n/\epsilon$, using the right constant $c$ (as a function of the constant $c_{1}$). In this case, we also get $r\geq 10D\log n/\epsilon$, for which the probability of having too few noisy rounds is bounded by $e^{-0.4\epsilon r/D}\leq e^{-3\log n}\leq 1/(2n^{2})$.

By a union bound we get the following: using $r=20n^{2/3}\cdot\left[D\log n/(c_{1}\epsilon)\right]^{1/3}$, for any $D\leq\epsilon n/(c\log n)$ and any realization of $U_{r},V_{r}$, the message has been passed to $V_{r}$ within the $r$ middle rounds with probability at least $1-n^{-2}$, and so after $R=3r$ rounds, $v$ has received the message with the same probability. Using a union bound over all vertices in the network, we conclude that $R$ rounds suffice for flooding with probability at least $1-1/n$.

Adjusting the same argument for an adaptive adversary, using only two phases (as in Theorem 3.5), would give an upper bound of $O(n\cdot\sqrt{D\log n/\epsilon})$, which in this case does not improve upon the trivial $O(n)$ promised by connectivity alone.

We continue by showing that this is tight: in the proportional noise model, the changes in the network can be so subtle that they invoke little noise, and the flooding barely speeds up at all, asymptotically. We show the following lower bound, and note that the graph sequence constructed in the proof has a constant $D$, where it is hardest to manipulate.

Theorem 4.3.

Fix $\epsilon\leq 1/5$. There exists an $\epsilon$-proportionally smoothed dynamic graph with an adaptive adversary, where a flooding process must run for $\Omega(n)$ rounds in order to succeed with probability at least $1/2$.

In the current, adaptive model, the spooling graph no longer serves as a slow-flooding graph. Changing the center of the uninformed star takes $\Omega(n)$ edge changes in early rounds, and the proportional noise invoked by these changes would be devastating (e.g., for $\epsilon=\Omega(1)$).

Instead, in the following proof, the adversary relies on its adaptivity in order to maintain control over the network. This is done using a sequence of graphs that requires only $O(1)$ changes at each round.

Proof 4.4.

We modify the spooling graph in such a way that only a few edges are changed at each step. Let $G_{1},\dots,G_{n-1}$ be the sequence of graphs over $[n]$, each having the edge set

\[E_{i}=\left\{(1,j)\mid j\leq i\right\}\cup\left\{(j,n)\mid j\geq i\right\}.\]

Note that $G_{i}$ is connected through the vertex $i$. These graphs are essentially two star graphs, one around $1$ and one around $n$, connected through one other vertex (a different one at each time frame). We associate the star centered at $1$ with the set of informed vertices, and the star centered at $n$ with the set of uninformed vertices (as $1$ is the vertex from which the flooding begins).

Wishfully, we would like to say that at each time connectivity is preserved via the vertex $i$, but as soon as it receives the message, it is cut off from the uninformed part of the graph. However, the added noise would impair this argument. As we wish to use a low-maintenance dynamic graph, we cannot afford to replace the center. Therefore, we surgically cut off from the uninformed star all the vertices that learned the message via edges added by noise. This forces us to incorporate the concept of adaptivity into the lower bound. Let $G_{1}$ be as before; we define the rest of the graphs in the sequence adaptively. Let $I_{i}$ be the set of vertices that have received the message by the end of round $i$, and let $\bar{I}_{i}=V\setminus I_{i}$ be the set of uninformed vertices at the same time. We define the connecting vertex of the next graph as $u_{i+1}=\operatorname{argmin}(\bar{I}_{i})$. The graph $G_{i+1}$ in the sequence is then defined using

\[E_{i+1}=\left\{(1,j)\mid j\in I_{i}\right\}\cup\left\{(j,n)\mid j\in\bar{I}_{i}\right\}\cup\left\{(1,u_{i+1})\right\}.\]
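As with the adaptive spooling graph, this edge set is easy to state in code (our own helper; informed plays the role of $I_{i}$, and self-loops at $n$ are excluded):

```python
def two_star_edges(n, informed):
    """Edge set E_{i+1} of the modified spooling graph: a star around vertex 1
    over the informed vertices, a star around vertex n over the uninformed
    ones, and the edge (1, u_{i+1}) to the lowest uninformed vertex."""
    uninformed = set(range(1, n + 1)) - informed
    u = min(uninformed)                            # u_{i+1} = argmin of the complement
    return ({(1, j) for j in informed if j != 1}
            | {(j, n) for j in uninformed if j != n}
            | {(1, u)})
```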

Consider the first $R=\delta n$ rounds of the process, for small enough $\delta$. We show that as long as no edge of $I_{i}\times\{n\}$ was added at some round $i+1$, an event which only happens with small probability, we have:

  1. The number of required changes at each round is at most a constant.

  2. At each point in time, $|I_{j}|\leq 3j$.

For the first bullet, note that $2\leq|G'_{i-1}\oplus G_{i}|\leq 5$. Indeed, at the first round we only change $2$ edges, replacing $(u_{1},n)$ by $(1,u_{2})$. By induction, for any round $i>1$, if noise was invoked in the previous round, then since $5\epsilon\leq 1$, at most one extra edge was changed in $G'_{i-1}$. We have three options for the toggled edge:

  • If an edge was added by noise, connecting an informed vertex with some uninformed vertex wnw\neq n, we remove the edge, cut off the edge (w,n)(w,n) as well, and add (1,w)(1,w) instead.555Leaving ww connected through the vertex 11 and not the vertex who sent him the message keeps our low diameter intact. A more freely defined graph with no diameter guarantees could simply remove (w,n)(w,n) and leave it connected to the informed spool.

  • If an edge was added by noise, connecting two uninformed vertices, we cut it off immediately (rather than analysing its affect later, when one of them becomes informed).

  • If any other edge was connected (or disconnected) by noise, we simply revert the change, to keep our graph in line with its definition.

Either way, we need to add two more changes (if not done already above) which consist of removing the edge (ui1,n)(u_{i-1},n) and adding the edge (1,ui)(1,u_{i}) instead. In total, the number of changes in the next round stays between 22 and 55, proving the induction step.

For the second bullet, note that the number of noisy edges produced at each round is at most $1$, which means that at most $2$ new vertices learn the message in each round, including the one connecting the graph. This means that $\left|I_{i}\right|\leq\left|I_{i-1}\right|+2$, or alternatively $\left|I_{j}\right|\leq 1+2j\leq 3j$.

As both the above claims hold as long as $n$ has not yet learned the message, we are left to show that with high probability, this is indeed the case within $R=\delta n$ rounds. We union bound over all $R$ rounds: as long as this has not happened, the probability of $n$ getting the message at the next round is that of a noisy edge connecting $I_{i}$ to the vertex $n$.

We apply Lemma 2.4 for each round $i<R$, using $S_{i}=I_{i}\times\left\{n\right\}$, and assuming $n$ is not yet informed. Thus, the probability is at most

c_{2}\left|S_{i}\right|/n^{2}\leq c_{2}\cdot 3R/n^{2},

since $\left|S_{i}\right|=\left|I_{i}\right|\leq 3i\leq 3R$, and the number of noisy edges is at most $1$. Applying a union bound over all $R$ rounds, the probability of vertex $n$ being prematurely informed is at most

3c_{2}\cdot R^{2}/n^{2}\leq 3c_{2}\delta^{2}.

For small enough $\delta$, this is at most $0.1$, so with probability at least $0.9$, after $R$ rounds we indeed have at most $3R<n$ informed vertices, which means the flooding is not yet over.
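The union-bound step above is easy to check numerically. The following rough Monte Carlo sketch (our illustration; the constant $c_2$ of Lemma 2.4 is abstracted away by taking the noisy pair uniform over all $\binom{n}{2}$ pairs, and the bound $|I_i|\leq 3i$ is used pessimistically as an equality) estimates the probability that the sink $n$ is informed prematurely within $R=\delta n$ rounds:

```python
import random

def premature_prob(n: int, delta: float, trials: int = 10_000) -> float:
    """Estimate the chance a noisy edge hits I_i x {n} within R rounds."""
    R = int(delta * n)
    hits = 0
    for _ in range(trials):
        for i in range(1, R + 1):
            informed = 3 * i  # pessimistic bound |I_i| <= 3i
            # one noisy edge per round, uniform over the n(n-1)/2 pairs
            if random.random() < informed / (n * (n - 1) / 2):
                hits += 1
                break
    return hits / trials

if __name__ == "__main__":
    n, delta = 1000, 0.05
    R = int(delta * n)
    print(premature_prob(n, delta), "vs the bound", 3 * R * R / n**2)
```

For these sample parameters the empirical frequency lands close to the $3R^{2}/n^{2}$ bound, as expected.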

4.2 Targeted noise

For targeted noise, this new phenomenon is overshadowed by the limited ability of the adversary to make changes. We can think of targeted noise as some sort of “slow down”: the network repeatedly tries to modify some of the edges, eventually succeeding, but not before a number of rounds has passed. In this last model we show just how strong the waiting game can be: for a graph with constant static diameter (which makes the waiting game obsolete), flooding takes $O(\log n)$ rounds, with high probability. However, for larger values of $D$, the same analysis quickly fails. For $D=\Theta\left(\sqrt{\log n}\right)$ we get only the trivial bound of $O(n)$. We finish by showing an explicit construction with static diameter $D=\Theta\left(\log n\right)$ that relies strongly on the waiting game and admits a lower bound of $\Omega(n)$ rounds.

Theorem 4.5.

For any $\epsilon$-targeted smoothed dynamic graph with static diameter $D$, flooding can be done in $O(D\log n/\epsilon^{D^{2}})$ rounds, w.h.p.

Note that for $D=o(\sqrt{\log_{1/\epsilon}n})$, this improves upon the trivial fact that $n$ rounds are enough (as at each round, one new vertex must be informed, since the graph is connected). Specifically, for a constant static diameter, we get $O(\log n)$ rounds.

Proof 4.6.

Fix the starting vertex $v_{0}$, which is informed. First, we show that the probability of a vertex $v\neq v_{0}$ staying uninformed after $D$ rounds is at most $1-\epsilon^{D^{2}}$.

Consider $D$ consecutive graphs $G_{1},\ldots,G_{D}$ in the smoothed graph. As the static diameter is $D$ in every round, there exists a path $P_{v}$ from $v_{0}$ to $v$ in $G_{1}$, whose length is at most $D$. Since we deal with targeted smoothing, each edge of $P_{v}$ that exists in a graph $G_{i}$ for some $1\leq i<D$ exists in $G_{i+1}$ with probability either $1$ or $\epsilon$. So, for each such $i$, if all the edges of $P_{v}$ exist in $G_{i}$, then they all exist in $G_{i+1}$ with probability at least $\epsilon^{D}$. Hence, the probability of the path $P_{v}$ existing in all $D$ graphs is at least $(\epsilon^{D})^{D}=\epsilon^{D^{2}}$.

Fix a positive integer $t$, and consider $t$ consecutive sequences of $D$ graphs each. To each sequence we apply the above claim, and conclude that the probability of $v$ staying uninformed after these $tD$ rounds is at most

(1-\epsilon^{D^{2}})^{t}\leq e^{-t\cdot\epsilon^{D^{2}}}.

For $t=(c+1)\log n\cdot(1/\epsilon)^{D^{2}}$ sequences, the probability of $v$ not being informed after $tD$ rounds is at most $n^{-(c+1)}$. A union bound over all $n-1$ vertices that need to be informed implies that $tD$ rounds suffice to fully flood the network with probability at least $1-n^{-c}$.
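The heart of the proof is the survival probability $\epsilon^{D^{2}}$ of a single path. Here is a small sketch (ours) of this step, pessimistically assuming every path edge is targeted in every round and survives independently with probability $\epsilon$:

```python
import random

def path_survives(D: int, eps: float) -> bool:
    # D rounds, each keeping all D path edges alive w.p. eps per edge
    return all(random.random() < eps for _ in range(D * D))

if __name__ == "__main__":
    D, eps, trials = 3, 0.6, 100_000
    emp = sum(path_survives(D, eps) for _ in range(trials)) / trials
    print(f"empirical {emp:.4f} vs eps^(D^2) = {eps ** (D * D):.4f}")
```

In the actual model some edges survive with probability $1$ rather than $\epsilon$, so $\epsilon^{D^{2}}$ is only a lower bound on the survival probability, exactly as used in the proof.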

The above theorem implies that if the static diameter of all graphs in the sequence is small, roughly $O(\sqrt{\log n})$, flooding is fast. Next, we show that this is almost tight: if the diameter is in $\Omega(\log n)$, flooding cannot be done faster than in non-smoothed graphs.

Theorem 4.7.

For every constant $0<\epsilon<1$, there is a value $D\in\Theta(\log n)$ and an $\epsilon$-targeted smoothed dynamic graph such that with high probability, the diameter of the graph is $D$ and flooding on it takes $n-1$ rounds.

In order to prove this theorem, we present the dynamic cassette graph (see Figure 1). Fix $\epsilon$ and $n$ as in the theorem statement, and let $t=\lfloor c\log_{1/\epsilon}n\rfloor$ for a constant $c$ of choice. The dynamic cassette graph on vertices $V=\left\{0,\dots,n-1\right\}$ is the dynamic graph $\mathcal{H}=\left\{G_{1},\dots,G_{n}\right\}$, where $G_{i}=(V,E_{i})$ is defined by

E_{i}=\left\{(j,j+1)\mid 0\leq j<n-1\right\}\cup\left\{(0,jt)\mid 1\leq j\leq\lfloor(i-1)/t\rfloor\right\}\cup\left\{(jt,n-1)\mid\lfloor(i-1)/t\rfloor+2\leq j\leq\lfloor(n-2)/t\rfloor\right\}.
Figure 1: The cassette graph $G^{t}_{j}$, where $j=kt$ and $n$ is some multiple of $t$.

This graph is the path on $n$ vertices, with some additional edges connecting the first and last vertices to vertices in the set $\left\{jt\mid 1\leq j\leq(n-2)/t\right\}$; these will be referred to as shortcut vertices, and the additional edges to them, shortcut edges. In the first graph, $G_{1}$, all shortcut vertices but the first are connected to the last vertex, $n-1$. Then, one by one, the shortcut vertices disconnect from $n-1$, and soon after connect to $0$. At each time interval $[(j-1)t+1,jt]$, all the shortcut vertices with index strictly smaller than $jt$ are connected to the vertex $0$, and all those with index strictly higher than $jt$ are connected to $n-1$.
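For concreteness, here is a direct transcription (our sketch, with illustrative names) of the edge set $E_i$; it can be used to sanity-check which shortcut edges are present at a given round:

```python
def cassette_edges(n: int, t: int, i: int) -> set[tuple[int, int]]:
    """Edge set E_i of the (non-smoothed) dynamic cassette graph."""
    k = (i - 1) // t
    path = {(j, j + 1) for j in range(n - 1)}
    to_zero = {(0, j * t) for j in range(1, k + 1)}                     # already moved to 0
    to_last = {(j * t, n - 1) for j in range(k + 2, (n - 2) // t + 1)}  # still attached to n-1
    return path | to_zero | to_last

if __name__ == "__main__":
    n, t = 20, 3
    E1 = cassette_edges(n, t, 1)
    assert all(u != 0 or v == 1 for (u, v) in E1)  # in G_1, no shortcut touches 0 yet
```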

Consider the smoothed cassette graph $\mathcal{H}^{\prime}$, i.e., the $\epsilon$-targeted smoothed dynamic graph derived from $\mathcal{H}$. The dynamic graph $\mathcal{H}^{\prime}$ can be interpreted as undergoing the following process: during each time interval $[(j-1)t+1,jt]$, the adversary repeatedly tries to add a new edge $(0,(j-1)t)$, and remove the edge $((j+1)t,n-1)$. The targeted noise creates a slowdown that might prevent this from happening right away, yet for the right value of $t$, both changes indeed happen by the end of the time interval w.h.p. We state the following claim and direct the reader to Appendix A for a full proof.

Claim 2.

For each $2\leq j\leq(n-2)/t$, the smoothed graph $G^{\prime}_{jt}$ does not contain the edge $(0,(j-1)t)$ with probability at most $n^{-c}$, and contains the edge $((j+1)t,n-1)$ with the same probability.

If the edge $(0,(j-1)t)$ exists in the smoothed graph $G^{\prime}_{jt}$, then it also exists in all later graphs $G^{\prime}_{j^{\prime}}$ with $j^{\prime}>jt$. Similarly, if the edge $((j+1)t,n-1)$ does not exist in this graph, it also does not appear in any later graph. A union bound thus extends the last claim as follows.

Claim 3.

For each $2\leq j\leq(n-2)/t$, the smoothed graph $G^{\prime}_{jt}$ and all later graphs contain the edge $(0,(j-1)t)$ with probability at least $1-n^{-c+1}$, and all these graphs do not contain the edge $((j+1)t,n-1)$ with the same probability.

Using this claim, Theorem 4.7 can easily be proven.

Proof 4.8 (Proof of Theorem 4.7).

Consider the smoothed dynamic cassette graph $\mathcal{H}^{\prime}$. We start by analyzing its diameter.

Let $G^{\prime}_{j^{\prime}}$ be a graph in $\mathcal{H}^{\prime}$, and pick a $j$ such that $jt\leq j^{\prime}<(j+1)t$. By Claim 3, the graph $G^{\prime}_{jt}$ and all later graphs contain the edge $(0,(j-1)t)$ with probability at least $1-n^{-c+2}$. In addition, all the graphs $G_{1},\ldots,G_{(j+1)t-1}$ contain the edge $((j+2)t,n-1)$ (and note that this is not a probabilistic claim).

The distance between every two consecutive shortcut vertices is $t$, so the distance from every vertex in the graph to the closest shortcut vertex is at most $t/2$. Each shortcut vertex is directly connected to either $0$ or $n-1$ w.h.p., except for $jt$, which is connected to both by a path of length $t+1$. Finally, between $0$ and $n-1$ there is a path of length $2t+2$, through $(j-1)t$ and $(j+1)t$. Let us bound the length of a path between two vertices $i,i^{\prime}$ (with upper bounds on each part): from $i$ to its closest shortcut vertex ($t/2$ hops), to $0$ or $n-1$ ($t+1$ hops), possibly crossing between $0$ and $n-1$ ($2t+2$ hops), to the shortcut vertex closest to $i^{\prime}$ ($t+1$ hops), and to $i^{\prime}$ ($t/2$ hops). This sums to a path of length at most $5t+4=\Theta(\log n)$. A more detailed analysis can reduce this to roughly $3t$ hops.

For the flooding time, we use Claim 3 again, but now for the edges $((j+1)t,n-1)$ that do not appear in $G^{\prime}_{jt}$ or in any later graph w.h.p. A simple induction on $j^{\prime}=0,\ldots,n-1$ shows that after $j^{\prime}$ rounds, i.e., in graph $G^{\prime}_{j^{\prime}}$, only vertices $0,\ldots,j^{\prime}$ are informed. The base case is trivial. For the step, the only edges connecting informed vertices to uninformed vertices are $(j^{\prime},j^{\prime}+1)$, and edges from $0$ to shortcut vertices, which have the form $(0,tj)$ with $tj\leq j^{\prime}$; this follows from the construction and always holds. The only other type of possible edges connecting informed and uninformed vertices are of the form $(jt,n-1)$, with $jt\leq j^{\prime}$. However, the claim implies that w.h.p., by round $j^{\prime}$ none of these edges exist. Hence, before round $n-1$ not all vertices are informed, and flooding takes $n-1$ rounds w.h.p.
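As a sanity check for the two properties just used, the following self-contained sketch (ours; noise is omitted, so it checks the non-smoothed graphs only) verifies via BFS that sampled rounds of the cassette graph have diameter below the $5t+4$ bound; under smoothing, edge changes are merely delayed by roughly $t$ rounds w.h.p., which is what the proof accounts for.

```python
from collections import deque

def cassette_edges(n, t, i):
    k = (i - 1) // t
    E = {(j, j + 1) for j in range(n - 1)}
    E |= {(0, j * t) for j in range(1, k + 1)}
    E |= {(j * t, n - 1) for j in range(k + 2, (n - 2) // t + 1)}
    return E

def eccentricity(n, E, src):
    adj = [[] for _ in range(n)]
    for u, v in E:
        adj[u].append(v)
        adj[v].append(u)
    dist = [-1] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if dist[w] < 0:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist)

if __name__ == "__main__":
    n, t = 64, 4
    diam = max(eccentricity(n, cassette_edges(n, t, i), v)
               for i in (1, t + 1, 2 * t + 1) for v in range(n))
    print("sampled diameter:", diam, "  bound 5t+4 =", 5 * t + 4)
```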

References

  • [1] John Augustine, Gopal Pandurangan, Peter Robinson, and Eli Upfal. Towards robust and efficient computation in dynamic peer-to-peer networks. In SODA, pages 551–569. SIAM, 2012.
  • [2] Philipp Bamberger, Fabian Kuhn, and Yannic Maus. Local distributed algorithms in highly dynamic networks. In IPDPS, pages 33–42. IEEE, 2019.
  • [3] Leran Cai, Thomas Sauerwald, and Luca Zanetti. Random walks on randomly evolving graphs. In SIROCCO, volume 12156 of Lecture Notes in Computer Science, pages 111–128. Springer, 2020.
  • [4] Keren Censor-Hillel, Neta Dafni, Victor I. Kolobov, Ami Paz, and Gregory Schwartzman. Fast and simple deterministic algorithms for highly-dynamic networks. CoRR, abs/1901.04008, 2019.
  • [5] Soumyottam Chatterjee, Gopal Pandurangan, and Nguyen Dinh Pham. Distributed MST: A smoothed analysis. In ICDCN, pages 15:1–15:10. ACM, 2020. doi:10.1145/3369740.3369778.
  • [6] Andrea E. F. Clementi, Riccardo Silvestri, and Luca Trevisan. Information spreading in dynamic graphs. Distributed Computing, 28(1):55–73, 2015.
  • [7] Oksana Denysyuk and Luís E. T. Rodrigues. Random walks on evolving graphs with recurring topologies. In DISC, volume 8784 of Lecture Notes in Computer Science, pages 333–345. Springer, 2014.
  • [8] Michael Dinitz, Jeremy T. Fineman, Seth Gilbert, and Calvin Newport. Smoothed analysis of dynamic networks. Distributed Computing, 31(4):273–287, 2018.
  • [9] Chinmoy Dutta, Gopal Pandurangan, Rajmohan Rajaraman, Zhifeng Sun, and Emanuele Viola. On the complexity of information spreading in dynamic networks. In SODA, pages 717–736. SIAM, 2013.
  • [10] Leszek Gasieniec and Grzegorz Stachowiak. Fast space optimal leader election in population protocols. In SODA, pages 2653–2667. SIAM, 2018.
  • [11] Mohsen Ghaffari, Nancy A. Lynch, and Calvin C. Newport. The cost of radio network broadcast for different models of unreliable links. In PODC, pages 345–354. ACM, 2013.
  • [12] Bernhard Haeupler and David R. Karger. Faster information dissemination in dynamic networks via network coding. In PODC, pages 381–390. ACM, 2011.
  • [13] Dariusz R. Kowalski and Miguel A. Mosteiro. Polynomial counting in anonymous dynamic networks with applications to anonymous dynamic algebraic computations. Journal of the ACM, 67:1–17, 2020.
  • [14] Fabian Kuhn, Nancy A. Lynch, and Rotem Oshman. Distributed computation in dynamic networks. In STOC, pages 513–522. ACM, 2010.
  • [15] Fabian Kuhn, Yoram Moses, and Rotem Oshman. Coordinated consensus in dynamic networks. In PODC, pages 1–10. ACM, 2011. doi:10.1145/1993806.1993808.
  • [16] Othon Michail. An introduction to temporal graphs: An algorithmic perspective. Internet Math., 12(4):239–280, 2016.
  • [17] Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.
  • [18] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. J. ACM, 51(3):385–463, 2004.
  • [19] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis: an attempt to explain the behavior of algorithms in practice. Commun. ACM, 52(10):76–84, 2009.


Appendix A Omitted Proofs

  • Theorem 3.3

Fix $1\leq k\leq n/16$ (not necessarily an integer). For any $k$-smoothed dynamic graph, for the flooding process to succeed with probability at least $1/2$, it must run for $\Omega\left(\min\left\{n/k,n^{2/3}/k^{1/3}\right\}\right)$ rounds.

Proof A.1.

We start by re-defining the spooling graph from [8]: at time $i$, the edges are $\{(j,i)\}_{j=1}^{i-1}$, a star around the vertex $i$ (this will be the informed spool), and the edges $\{(i+1,j)\}_{j=i+2}^{n}$, a star around the vertex $i+1$, which will be the right spool. The spools are then connected using the edge $(i,i+1)$.
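A short transcription sketch (ours) of this re-defined spooling graph may help parse the indices: at time $i$, the head of the right spool is $i+1$, and it is “snatched” into the informed spool at the next time step.

```python
def spooling_edges(n: int, i: int) -> set[tuple[int, int]]:
    informed = {(j, i) for j in range(1, i)}           # star around i: informed spool
    right = {(i + 1, j) for j in range(i + 2, n + 1)}  # star around i+1: right spool
    return informed | right | {(i, i + 1)}             # bridge between the spools

if __name__ == "__main__":
    n = 8
    for i in range(2, n - 1):
        E = spooling_edges(n, i)
        assert (i, i + 1) in E and (i + 1, n) in E  # head i+1 serves the right spool
```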

Next, we consider the first $R$ rounds of flooding and show that for small enough $R$, with at least constant probability, the message has not reached all the players. We focus on the vertices $1,\dots,R$, which are going to be the heads of the right spool.

Note that the right spool is not called “the uninformed spool” for a reason: some vertices in it might get the message through the process. But every time they send the message to the head of the spool, it is “snatched” into the informed spool before it gets to spread the message to the whole right spool (we stress this observation, as it is going to serve us later in the text as well). We must deal with two bad cases:

  1. Some relevant head $j<R$ gets the message in round $\ell<j-1$, meaning that it spreads the message to everyone in round $j-1$, when it becomes the head of the right spool.

  2. At any point in the process, more than $O(R)$ vertices in the set of “right spool players” ($\left\{R+1,\dots,n\right\}$) learned the message via a noisy edge that was added to the graph.

Note that the second bad case is not about the end of the process (otherwise, it would suffice to ask that not all vertices have learned the message). Rather, we ask for fewer than $O(R)$ such vertices, as we need this to guarantee that the first bad case only happens on rare occasions.

To that end, we recall that the players are numbered $1,\dots,n$. We denote by $I_{r}$ the set of informed vertices after round $r$ of the flooding.

We carry on using a careful induction on the two claims combined. The induction is probabilistic, stating that each step has a small probability of failing, and if it does not fail, it leaves us in a rather good state for the next step. We finish by union bounding over all $R$ rounds.

Let us denote by $A_{i}$ the good event of no future head vertex getting the rumor at round $i$ (and write $A_{\leq i}$ for $A_{1}\cap\dots\cap A_{i}$). Formally, we prove by induction that w.h.p.: (1) $A_{i}$ occurs at round $i$; (2) the expected size of $I_{i}$, conditioned on $A_{\leq i}$, relates to the expected size of $I_{i-1}$.

For the first round $i=1$, only the starting vertex $1$ knows the message, and the claims hold trivially. Assume that both claims hold for round $i$; we show that, except with small probability, both also hold for round $i+1$.

We assume that $A_{\leq i}$ occurs (meaning in particular that $I_{i}\cap[R]=[i]$). Therefore, we can use Lemma 2.4 with $S=I_{i}\times([R]\setminus[i+1])$. We then know that $A_{i+1}$ (and therefore $A_{\leq i+1}$) occurs with probability at least

1-c_{2}k\left|S\right|/n^{2}\geq 1-c_{2}k\left|I_{i}\right|R/n^{2}.

We now focus on a vertex that is not a future head, $v\in\bar{I}_{i}\setminus[R]$, and note that with $k$ noisy edges in round $i+1$, similarly to Lemma 2.4, the probability of adding an edge from the set $I_{i}\times\left\{v\right\}$, given the event $A_{i}$, is at most

c_{2}k\left|I_{i}\right|/(n^{2}-\left|I_{i}\right|\cdot R)\leq 2c_{2}k\left|I_{i}\right|/n^{2},

where the statement is evident from the proof of Lemma 2.4 (in [8]), and the inequality is due to $\left|I_{i}\right|\cdot R\leq n^{2}/2$ for any realization of $I_{i}$.

Therefore, if we denote by $\mathds{1}_{v,i}$ the indicator of $v$ being informed at round $i$, we have

\mathbb{E}\left[\mathds{1}_{v,i}\mid G_{\leq i}\right]\leq 2c_{2}k\left|I_{i}\right|/n^{2}.

Next, we sum over all possible $v$’s (at most $n$ of them), to get that for any specific realization of $I_{i}$:

\mathbb{E}\left[\left|I_{i+1}\right|\mid A_{\leq i+1}\right]\leq\left|I_{i}\right|+n\cdot\frac{2c_{2}k\left|I_{i}\right|}{n^{2}}+1,

where the expectation is taken over round $i+1$ alone, the $1$ comes from the guaranteed new informed vertex (the new head), and we use the guarantee of $A_{i}$ that the previous head was not informed.

Taking an expectation over the noise in the previous $i$ rounds as well, we get:

\mathbb{E}\left[\left|I_{i+1}\right|\mid A_{\leq i+1}\right]\leq\mathbb{E}\left[\left|I_{i}\right|\mid A_{\leq i}\right]\cdot\left(1+\frac{2c_{2}k}{n}\right)+1.

Now both expectations are over all noise (note that future noise cannot affect the value of $I_{i}$).

We denote $L_{i}:=\mathbb{E}\left[\left|I_{i}\right|\mid A_{\leq i}\right]$ to analyze this multiplicative-additive progression. Up until round $R=n/(4c_{2}k)$, the progression increases by at most $2$ at each step, giving us $L_{R}\leq 2R+1\leq O(n/k)$. We do not deal with rounds where the multiplicative factor is dominant, for the sake of our union bound. We get that after $R$ rounds, the expected number of players that know the rumor is far smaller than $n$.
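The additive behavior of this progression up to $R=n/(4c_{2}k)$ is easy to verify numerically; here is a quick sketch (ours, with arbitrary sample parameters and $c_{2}=1$):

```python
def progression(n: float, k: float, c2: float = 1.0) -> tuple[float, float]:
    """Iterate L_{i+1} = L_i * (1 + 2*c2*k/n) + 1 for R = n/(4*c2*k) rounds."""
    R = int(n / (4 * c2 * k))
    L = 1.0
    for _ in range(R):
        L = L * (1 + 2 * c2 * k / n) + 1
    return L, 2 * R + 1

if __name__ == "__main__":
    L_R, bound = progression(n=10_000, k=5)
    print(f"L_R = {L_R:.1f}, additive bound 2R+1 = {bound}")  # L_R stays below the bound
```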

We now plug in the probabilities:

  1. Each inductive step has failure probability at most $c_{2}k\left|I_{i}\right|R/n^{2}$, where $\left|I_{i}\right|$ is at most $O(i)\leq O(R)$. Union bounding over all $R$ rounds, the failure probability is at most $10c_{2}kR^{3}/n^{2}$.

  2. By Markov's inequality, since the expected number of informed vertices at round $R$ is $O(n/k)$, the probability of having more than $n/2$ informed vertices at round $R$ is at most $(n/k)/(n/2)\leq O(1/k)$.

We note that the second failure probability is negligible, and the first (the union bound over all steps) is at most $1/10$ whenever $R\leq n^{2/3}/(100c_{2}k)^{1/3}$. Overall, we need both the progression to remain additive and the union bound to work, which means that, with constant probability, the flooding is not yet complete whenever

R=O\left(\min\left\{\frac{n}{4c_{2}k},\frac{n^{2/3}}{(100c_{2}k)^{1/3}}\right\}\right),

completing the proof.

  • Theorem 3.4

Fix $0<k<1$. For any $k$-smoothed dynamic graph, for the flooding process to succeed with probability at least $1/2$, it must run for $\Omega\left(\min\left\{n,n^{2/3}/k^{1/3}\right\}\right)$ rounds.

Proof A.2.

The proof follows an easier argument than the previous one, as no exponential growth in the number of informed players can ever happen. This means that our only concern is that of a future head prematurely learning the message.

Let us focus on the first $R$ rounds. Using a standard Chernoff bound, we know that with high probability at most $10kR$ rounds contain any noise at all. Every other round adds exactly one new informed player, and a noisy round can add at most $1$ more player via a noisy edge. Hence, as long as $A_{i}$ keeps occurring (the event of no future head receiving the message prematurely), the inequality $\left|I_{i}\right|\leq 3i$ holds.

However, at each non-noisy iteration, the probability of $A_{i}$ failing is exactly $0$, and at each iteration with noise, the probability of $A_{i}$ failing is at most

c_{2}\left|I_{i}\right|R/n^{2}\leq 3c_{2}iR/n^{2}\leq 3c_{2}R^{2}/n^{2}.

Applying a union bound over all $10kR$ noisy rounds, we get that the event $A_{\leq R}$ fails with probability at most

30c_{2}kR^{3}/n^{2},

which is smaller than $1/10$ whenever $R=n^{2/3}/(300c_{2}k)^{1/3}$.

This means that with high probability, indeed $\left|I_{i+1}\right|\leq\left|I_{i}\right|+2$ for any $i<R$, and thus no more than $3R$ players are informed after $R$ rounds. For any $k\geq 1/(10{,}000c_{2}n)$, we have $R<n/3$, i.e., $3R<n$, concluding that the flooding has not ended.

On the other hand, for $k\leq 1/(10{,}000c_{2}n)$, considering $\delta n$ rounds for small enough $\delta$, we get that with constant probability no round invokes any noise, which means the flooding takes $\Omega(n)$ rounds, as is the case with no noise.

  • Claim 1

    When taking expectation over the noise we add, we have

\mathbb{E}\left[\left|I_{i+1}\right|\right]\leq(1+c_{2}k/n)\cdot\mathbb{E}\left[\left|I_{i}\right|\right]+1.

Note that $I_{i}$ does not depend on the noise of rounds $i+1,\dots,R$, and so the left side takes expectation over $i+1$ rounds of noise, and the right side over $i$ rounds.

Proof A.3.

Indeed, just before the $(i+1)$-th iteration, there are exactly $\left|I_{i}\right|$ informed vertices. Therefore, each uninformed vertex $v\in\bar{I}_{i}\setminus\left\{u_{i}\right\}$ has probability at most $c_{2}k\left|I_{i}\right|/n^{2}$ of obtaining a direct edge to an informed vertex (using $S=I_{i}\times\left\{v\right\}$). As all these vertices were disconnected from $I_{i}$ before the noise, this is their only way to receive the message at this round. For $u_{i}$ we use the trivial upper bound of $1$ (in fact, it is even smaller: it has a small probability of getting disconnected). For each $v\in\bar{I}_{i}$ we denote by $\mathds{1}_{v}$ the indicator of the event that $v$ learned the message at round $i+1$. For each realization of $I_{i}$ after $i$ rounds, we have

\mathbb{E}\left[\left|I_{i+1}\right|\right]=\left|I_{i}\right|+\sum_{v\in\bar{I}_{i}}\mathbb{E}\left[\mathds{1}_{v}\right]\leq\left|I_{i}\right|+1+(\left|\bar{I}_{i}\right|-1)\cdot c_{2}k\left|I_{i}\right|/n^{2}\leq(1+c_{2}k/n)\left|I_{i}\right|+1,

where the expectation is over the noise in round $i+1$, and the second inequality uses the fact that $\left|\bar{I}_{i}\right|\leq n$.

Using linearity of expectation, and taking expectation over the noise of the first $i$ rounds on both sides, the claim is proven.

  • Claim 2

For each $2\leq j\leq(n-2)/t$, the smoothed graph $G^{\prime}_{jt}$ does not contain the edge $(0,(j-1)t)$ with probability at most $n^{-c}$, and contains the edge $((j+1)t,n-1)$ with the same probability.

Proof A.4.

We prove the first part; the second follows from a dual argument. Fix $j$, $2\leq j\leq(n-2)/t$. Consider the edge $e=(0,(j-1)t)$, and denote by $A_{i}$ the event of $G^{\prime}_{i}$ not containing the edge $e$. Under these notations, our goal is to upper bound $\Pr[A_{jt}]$.

First, note that the edge $e$ does not appear in any graph (before or after the smoothing) in the first $(j-1)t$ rounds, and so for $i\in\left\{1,\dots,(j-1)t\right\}$ we have:

\Pr[A_{i}]=1.

Recall that each iteration imposes smoothing with respect to the previous (already smoothed) graph. Starting with $G_{(j-1)t+1}$, the non-smoothed graphs include $e$, and the targeted smoothing might disrupt this addition. However, once the addition happens, the smoothing process does not target $e$ any more. So, for $i\in\left\{(j-1)t+1,\dots,jt\right\}$:

\Pr[A_{i}\mid A_{i-1}]=\epsilon\text{, and }\Pr[A_{i}\mid\bar{A}_{i-1}]=0,

which implies

\Pr[A_{i}]=\epsilon\cdot\Pr[A_{i-1}].

Starting with $\Pr[A_{(j-1)t}]=1$, we finally conclude $\Pr[A_{jt}]=\epsilon^{t}$.

For $j=2$, note that the edge $(0,t)$ is not present in any graph, so the claim regarding it trivially holds.

For the second part, note that the edge $((j+1)t,n-1)$ does not appear in any of the non-smoothed graphs $G_{(j-1)t+1},\ldots,G_{jt}$. In each of the corresponding smoothed graphs $G^{\prime}_{(j-1)t+1},\ldots,G^{\prime}_{jt}$, the edge appears only if it appeared in the previous graph, in which case it appears in the current graph with probability $\epsilon$. Hence, the probability that it still appears in the last graph $G^{\prime}_{jt}$ is at most $\epsilon^{t}=n^{-c}$.