On simple back-off in unreliable radio networks

In this paper, we study local and global broadcast in the dual graph model, which describes communication in a radio network with both reliable and unreliable links. Existing work proved that efficient solutions to these problems are impossible in the dual graph model under standard assumptions. In real networks, however, simple back-off strategies tend to perform well for solving these basic communication tasks. We address this apparent paradox by introducing a new set of constraints to the dual graph model that better generalize the slow/fast fading behavior common in real networks. We prove that in the context of these new constraints, simple back-off strategies now provide efficient solutions to local and global broadcast in the dual graph model. We also precisely characterize how this efficiency degrades as the new constraints are relaxed toward non-existence, and prove new lower bounds that establish this degradation as near optimal for a large class of natural algorithms. We conclude with a preliminary investigation of the performance of these strategies when we add further generality to the model. These results provide theoretical foundations for the practical observation that simple back-off algorithms tend to work well even amid the complicated link dynamics of real radio networks.


Introduction
In this paper, we study upper and lower bounds for efficient broadcast in the dual graph radio network model [4,12,13,3,6,5,8,7,15,9], a dynamic network model that describes wireless communication over both reliable and unreliable links. As argued in previous studies of this setting, including unpredictable link behavior in theoretical wireless network models is important because in real world deployments radio links are often quite dynamic.
The Back-Off Paradox. Existing papers [13,8,15] proved that it is impossible to solve standard broadcast problems efficiently in the dual graph model without the addition of strong extra assumptions (see related work). In real radio networks, however, which suffer from the type of link dynamics abstracted by the dual graph model, simple back-off strategies tend to perform quite well. These dueling realities seem to imply a dispiriting gap between theory and practice: basic communication tasks that are easily solved in real networks are impossible when studied in abstract models of these networks.
What explains this paradox? This paper tackles this fundamental question. As detailed below, we focus our attention on the adversary entity that decides which unreliable links to include in the network topology in each round of an execution in the dual graph model. We introduce a new type of adversary with constraints that better generalize the dynamic behavior of real radio links. We then reexamine simple back-off strategies originally introduced in the standard radio network model [2] (which has only reliable links), and prove that for reasonable parameters, these simple strategies now do guarantee efficient communication in the dual graph model combined with our new, more realistic adversary.
We also detail how this performance degrades toward the existing dual graph lower bounds as the new constraints are reduced toward non-existent, and prove lower bounds that establish these bounds to be near tight for a large and natural class of back-off strategies. Finally, we perform investigations of even more general (and therefore more difficult) variations of this new style of adversary that continue to underscore the versatility of simple back-off strategies.
We argue that these results help resolve the back-off paradox described above. When unpredictable link behavior is modeled properly, predictable algorithms prove to work surprisingly well.
The Dual Graph Model. The dual graph model describes a radio network topology with two graphs, G = (V, E) and G' = (V, E'), where E ⊆ E', V corresponds to the wireless devices, E corresponds to reliable (high quality) links, and E' \ E corresponds to unreliable (quality varies over time) links. In each round, all edges from E are included in the network topology. Also included is an additional subset of edges from E' \ E, chosen by an adversary. This subset can change from round to round. Once the topology is set for the round, the model implements the standard communication rules from the classical radio network model: a node u receives a message broadcast by its neighbor v in the topology if and only if u decides to receive and v is its only neighbor broadcasting in the round.
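As a concrete illustration, the reception rule can be simulated in a few lines; the topology encoding and function names below are our own sketch, not part of the model's formal definition.

```python
def round_outcome(topology, transmitters, receivers):
    """One round of radio-network reception: a listening node receives a
    message iff exactly one of its neighbors in this round's topology
    transmits. topology is a set of frozenset({u, v}) edges; transmitters
    maps broadcasting nodes to their messages; receivers is the set of
    listening nodes. Nodes absent from the result heard nothing, and
    cannot tell silence from collision (no collision detection)."""
    delivered = {}
    for u in receivers:
        sending = [v for v in transmitters if frozenset((u, v)) in topology]
        if len(sending) == 1:            # exactly one broadcasting neighbor
            delivered[u] = transmitters[sending[0]]
    return delivered

# A three-node path a-b-c: two broadcasting neighbors collide at b.
topo = {frozenset("ab"), frozenset("bc")}
print(round_outcome(topo, {"a": "m1", "c": "m2"}, {"b"}))  # -> {}
print(round_outcome(topo, {"a": "m1"}, {"b"}))             # -> {'b': 'm1'}
```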
We emphasize that the abstract models used in the sizable literature studying distributed algorithms in wireless settings do not claim to provide high fidelity representations of real world radio signal communication. They instead each capture core dynamics of this setting, enabling the investigation of fundamental algorithmic questions. The well-studied radio network model, for example, provides a simple but instructive abstraction of message loss due to collision. The dual graph model generalizes this abstraction to also include network topology dynamics. Studying the gaps between these two models provides insight into the hardness induced by the types of link quality changes common in real wireless networks.
The Fading Adversary. Existing studies of the dual graph model focused mainly on the information about the algorithm known to the model adversary when it makes its edge choices. In this paper, we place additional constraints on how these choices are generated.
In more detail, in each round, the adversary independently draws the set of edges from E' \ E to add to the topology from some probability distribution defined over this set. We do not constrain the properties of the distributions selected by the adversary. Indeed, it is perfectly valid for the adversary in a given round to use a point distribution that puts the full probability mass on a single subset, giving it full control over its selection for the round. We also assume the algorithm executing in the model has no advance knowledge of the distributions used by the adversary. We do, however, constrain how often the adversary can change the distribution from which it selects these edge subsets. In more detail, we parameterize the model with a stability factor, τ ≥ 1, and restrict the adversary to changing the distribution it uses at most once every τ rounds. For τ = 1, the adversary can change the distribution in every round, and is therefore effectively unconstrained and behaves the same as in the existing dual graph studies. On the other extreme, for τ = ∞, the adversary is now quite constrained in that it must draw edges independently from the same distribution for the entire execution. As detailed below, we find τ ≈ log ∆, for local neighborhood size ∆, to be a key threshold after which efficient communication becomes tractable.

Table 1: A summary of the upper and lower bounds proved in this paper, along with pointers to the corresponding theorems. In the following, n is the network size, ∆ ≤ n is an upper bound on local neighborhood size, D is the (reliable link) network diameter, and τ is the stability factor constraining the adversary.
Notice, these constraints do not prevent the adversary from inducing large amounts of changes to the network topology from round to round. For non-trivial τ values, however, they do require changes that are nearby in time to share some underlying stochastic structure. This property is inspired by the general way wireless network engineers think about unreliability in radio links. In their analytical models of link behavior (used, for example, to analyze modulation or rate selection schemes, or to model signal propagation in simulation), engineers often assume that in the short term, changes to link quality come from sources like noise and multi-path effects, which can be approximated by independent draws from an underlying distribution (Gaussian distributions are common choices for this purpose). Long term changes, by contrast, can come from modifications to the network environment itself, such as devices moving, which do not necessarily have an obvious stochastic structure, but unfold at a slower rate than short term fluctuations.
In our model, the distribution used in a given round captures short term changes, while the adversary's arbitrary (but rate-limited) changes to these distributions over time capture long term changes. Because these general types of changes are sometimes labeled slow/fast fading in the systems literature (e.g., [17]), we call our new adversary a fading adversary.
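The fading adversary's constraint (independent draws within a phase, distribution changes only at phase boundaries every τ rounds) can be sketched as follows; the generator interface and the example distributions are illustrative assumptions on our part, not part of the model definition.

```python
import random

def fading_adversary(unreliable_edges, tau, num_rounds, pick_distribution):
    """Yield the unreliable-edge subset for each round. The adversary may
    switch its distribution only at phase boundaries (every tau rounds),
    but within a phase each round is an independent draw."""
    for r in range(num_rounds):
        if r % tau == 0:                    # phase boundary: may switch
            sample = pick_distribution(r // tau)
        yield sample()                      # independent draw each round

# Example distributions: each phase keeps every unreliable edge
# independently with a phase-specific probability (a crude stand-in for
# slowly changing fading conditions).
edges = [("u", "v%d" % i) for i in range(5)]
def pick(phase):
    p = (0.9, 0.1, 0.5)[phase % 3]
    return lambda: {e for e in edges if random.random() < p}

schedule = list(fading_adversary(edges, tau=4, num_rounds=12,
                                 pick_distribution=pick))
```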
Our Results and Related Work. In this paper, we study both local and global broadcast. The local version of this problem assumes some subset of devices in a dual graph network are provided broadcast messages. The problem is solved once each receiver that neighbors a broadcaster in E receives at least one message. The global version assumes a single broadcaster starts with a message that it must disseminate to the entire network. Below we summarize the relevant related work on these problems, as well as the new bounds proved in this paper. We conclude with a discussion of the key ideas behind these new results.
Related Work. In the standard radio network model, which is equivalent to the dual graph model with E = E', Bar-Yehuda et al. [2] demonstrate that a simple randomized back-off strategy called Decay solves local broadcast in O(log² n) rounds and global broadcast in O(D log n + log² n) rounds, where n = |V| is the network size and D is the diameter of G. Both results hold with high probability in n, and were subsequently proved to be optimal or near optimal¹ [1,14,16].
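As a concrete illustration, one common presentation of a Decay-style phase cycles through the probabilities 1/2, 1/4, …, 1/n; the following sketch (our own simplification, not the pseudocode of [2]) tests whether some step isolates exactly one transmitter.

```python
import math
import random

def decay_phase(num_contenders, n, rng):
    """One phase of a Decay-style back-off (a simplified sketch of the
    routine of Bar-Yehuda et al., not their exact pseudocode): in step i
    every broadcaster transmits with probability 2**-i. The shared
    receiver gets a message in a step iff exactly one neighbor transmits."""
    for i in range(1, int(math.log2(n)) + 1):
        p = 2.0 ** -i
        if sum(rng.random() < p for _ in range(num_contenders)) == 1:
            return True
    return False

# With 16 contenders in a 1024-node network, the step with p near 1/16
# succeeds with constant probability, so most phases deliver a message.
wins = sum(decay_phase(16, 1024, random.Random(seed)) for seed in range(1000))
print(wins / 1000)   # typically well above 1/2
```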
In [12,13], it is proved that global broadcast (with constant diameter), and local broadcast require Ω(n) rounds to solve with reasonable probability in the dual graph model with an offline adaptive adversary controlling the unreliable edge selection, while [8] proves that Ω(n/ log n) rounds are necessary for both problems with an online adaptive adversary. As also proved in [8]: even with the weaker oblivious adversary, local broadcast requires Ω( √ n/ log n) rounds, whereas global broadcast can be solved in an efficient O(D log (n/D) + log 2 n) rounds, but only if the broadcast message is sufficiently large to contain enough shared random bits for all nodes to use throughout the execution. In [15], an efficient algorithm for local broadcast with an oblivious adversary is provided given the assumption of geographic constraints on the dual graphs, enabling complicated clustering strategies that allow nearby devices to coordinate randomness.
New Results. In this paper, we turn our attention to local and global broadcast in the dual graph model with a fading adversary constrained by some stability factor τ (unknown to the algorithm). We start by considering upper bounds for a simple back-off style strategy inspired by the Decay routine from [2]. This routine has broadcasters simply cycle through a fixed set of broadcast probabilities in a synchronized manner (all broadcasters use the same probability in the same round). We prove that this strategy solves local broadcast with probability at least 1 − ε, in O((∆^{1/τ̃} · τ̃²/log ∆) · log(1/ε)) rounds, where ∆ is an upper bound on local neighborhood size, and τ̃ = min{τ, log ∆}.
Notice, for τ ≥ log ∆ this bound simplifies to O(log ∆ log(1/ε)), matching the optimal results from the standard radio network model.² This performance, however, degrades toward the polynomial lower bounds from the existing dual graph literature as τ reduces from log ∆ toward a minimum value of 1. We show this degradation to be near optimal by proving that any local broadcast algorithm that uses a fixed sequence of broadcast probabilities requires Ω(∆^{1/τ} τ/log ∆) rounds to solve the problem with probability 1/2 for a given τ. For τ ∈ O(log ∆/log log ∆), we refine this bound further to Ω(∆^{1/τ} τ²/log ∆), matching our upper bound within constant factors.
We next turn our attention to global broadcast. We consider a straightforward global broadcast algorithm that uses our local broadcast strategy as a subroutine. We prove that this algorithm solves global broadcast with probability at least 1 − ε, in O((D + log(n/ε)) · ∆^{1/τ̃} τ̃²/log ∆) rounds, where D is the diameter of G, and τ̃ = min{τ, log ∆}. Notice, for τ ≥ log ∆ this bound reduces to O(D log ∆ + log ∆ log(1/ε)), matching the near optimal result from the standard radio network model. As with local broadcast, we also prove the degradation of this performance as τ shrinks to be near optimal. (See Table 1 for a summary of these results and pointers to where they are proved in this paper.)

Finally, we consider the generalized model in which we allow correlations between the distributions selected by the adversary within a given stable period of τ rounds. It turns out that in the case of arbitrary correlations, any simple algorithm that uses only cycles of length l needs time Ω(√∆/l). In particular, any of our previous algorithms would require time Ω(√∆/log ∆) in the model with arbitrary correlations. The adversary construction in this lower bound requires large changes in the degree of a node in successive steps. Such changes are unlikely in real networks, so we propose a restricted version of the adversary, in which the expected change in the degree of any node can be at most ∆^{1/(τ(1−o(1)))}. With such a restriction it is again possible to propose a simple, but slightly enhanced, back-off strategy (with a short cycle of probabilities) that works efficiently, in time O(∆^{1/τ̃} · τ̃ · log(1/ε)).

¹ The broadcast algorithm from [2] requires O(D log n + log² n) rounds, whereas the corresponding lower bound is Ω(D log(n/D) + log² n). This gap was subsequently closed by a tighter analysis of a natural variation of the simple Decay strategy used in [2].
² To make it match exactly, set ∆ = n and ε = 1/n, as is often assumed in this prior work.
Technique Discussion. Simple back-off strategies can be understood as experimenting with different guesses at the amount of contention afflicting a given receiver. If the network topology is static, this contention is fixed, and therefore so is the right guess. A simple strategy cycling through a reasonable set of guesses will soon arrive at this right guess, giving the message a good chance of propagating.
The existing lower bounds in the dual graph setting deploy an adversary that changes the topology in each round to specifically thwart that round's guess. In this way, the algorithm never has the right guess for the current round so its probability of progress is diminished. The fading adversary, by contrast, is prevented from adopting this degenerate behavior because it is required to stick with the same distribution for τ consecutive rounds. An important analysis at the core of our upper bounds reveals that any fixed distribution will be associated with a right guess defined with respect to the details of that distribution. If τ is sufficiently large, our algorithms are able to experiment with enough guesses to hit on this right guess before the adversary is able to change the distribution.
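This guess-the-contention intuition is easy to check numerically. In the sketch below (our own illustration), a listener with d broadcasting neighbors, each transmitting with probability p, hears a message when exactly one transmits; sweeping p over powers of two shows the guess nearest 1/d dominates.

```python
def success_prob(p, d):
    """Probability that a listener with d broadcasting neighbors, each
    transmitting independently with probability p, hears exactly one."""
    return d * p * (1 - p) ** (d - 1)

# Sweep the guesses p = 2^-1, ..., 2^-10 against contention d = 100: the
# guess closest to 1/d wins, and guesses far from it contribute little.
d = 100
best = max(range(1, 11), key=lambda i: success_prob(2.0 ** -i, d))
print(best, success_prob(2.0 ** -best, d))  # best guess is p = 2^-7, near 1/d
```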
More generally speaking, the difficulty of broadcast in the previous dual graph studies was not due to the ability of the topology to change dramatically from round to round (which can happen in practice), but instead due to the model's ability to precisely tune these changes to thwart the algorithm (a behavior that is hard to motivate). The dual graph model with the fading adversary preserves the former (realistic) behavior while minimizing the latter (unrealistic) behavior.

Model and Problem
We study the dual graph model of unreliable radio networks. This model describes the network topology with two graphs G = (V, E) and G' = (V, E'), where E ⊆ E'. The n = |V| vertices in V correspond to the wireless devices in the network, which we call nodes in the following. The edges in E describe reliable links (which maintain a consistently high quality), while the edges in E' \ E describe unreliable links (whose quality can vary over time). For a given dual graph, we use ∆ to describe the maximum degree in G', and D to describe the diameter of G.
Time proceeds in synchronous rounds that we label 1, 2, 3, …. For each round r ≥ 1, the network topology is described by G_r = (V, E_r), where E_r contains all edges in E plus a subset of the edges in E' \ E. The subset of edges from E' \ E is selected by an adversary. The graph G_r can be interpreted as describing the high quality links during round r. That is, if {u, v} ∈ E_r, this means the link between u and v is strong enough that u could deliver a message to v, or garble another message being sent to v at the same time.
With the topology G r established for the round, behavior proceeds as in the standard radio network model. That is, each node u ∈ V can decide to transmit or receive. If u transmits, it learns nothing about other messages transmitted in the round (i.e., the radios are half-duplex). If u receives and exactly one neighbor v of u in E r transmits, then u receives v's message. If u receives and two or more neighbors in E r transmit, u receives nothing as the messages are lost due to collision. If u receives and no neighbor transmits, u also receives nothing. We assume u does not have collision detection, meaning it cannot distinguish between these last two cases.
The Fading Adversary. A key assumption in studying the dual graph model is the set of constraints placed on the adversary that selects the unreliable edges to include in the network topology in each round. In this paper, we study a new set of constraints inspired by real network behavior. In more detail, we parameterize the adversary with a stability factor that we represent with an integer τ ≥ 1. In each round, the adversary must draw the subset of edges (if any) from E' \ E to include in the topology from a distribution defined over these edges. The adversary selects which distributions it uses. Indeed, we assume it is adaptive in the sense that it can wait until the beginning of a given round before deciding the distribution it will use in that round, basing its decision on the history of the nodes' transmit/receive behavior up to this point, including the previous messages they sent, but not including knowledge of the nodes' private random bits.
The adversary is constrained, however, in that it can change this distribution at most once every τ rounds. On one extreme, if τ = 1, it can change the distribution in every round and is effectively unconstrained in its choices. On the other extreme, if τ = ∞, it must stick with the same distribution for every round. For most of this paper, we assume the draws from these distributions are independent in each round. Toward the end, however, we briefly discuss what happens when we generalize the model to allow more correlations.
As detailed in the introduction, because these constraints roughly approximate the fast/slow fading behavior common in the study of real wireless networks, we call a dual graph adversary constrained in this manner a fading adversary.
Problem. In this paper, we study both the local and global broadcast problems. The local broadcast problem assumes a set B ⊆ V of nodes are provided with a message to broadcast. Each node in B may be provided a distinct message. Let R ⊆ V be the set of nodes in V that neighbor at least one node in B in E. The problem is solved once every node in R has received at least one message from a node in B. We assume all nodes in B start the execution during round 1, but do not require that B and R are disjoint (i.e., broadcasters can also be receivers). The global broadcast problem, by contrast, assumes a single source node in V is provided a broadcast message during round 1. The problem is solved once all nodes have received this message. Notice, local broadcast solutions are often used as subroutines to help solve global broadcast.
Uniform Algorithms. The broadcast upper and lower bounds we study in this paper focus on uniform algorithms, which require nodes to make their probabilistic transmission decisions according to a predetermined sequence of broadcast probabilities, expressed as a repeating cycle (p_1, p_2, …, p_k) of k probabilities used in synchrony. In studying global broadcast, we assume that on first receiving a message, a node can wait to start making probabilistic transmission decisions until the cycle resets. We assume these probabilities can depend on n, ∆, and τ (or worst-case bounds on these values).
For uniform algorithms in the model with a fading adversary, an important parameter of a node v is its effective degree in step t, denoted d_t(v) and defined as the number of nodes w such that (v, w) ∈ E_t and w has a message to transmit (i.e., will participate in step t).
As mentioned in the introduction, uniform algorithms, such as the decay strategy from [2], solve local and global broadcast with optimal efficiency in the standard radio network model. A major focus of this paper is to prove that they work well in the dual graph model as well, if we assume a fading adversary with a reasonable stability factor.
The fact that our lower bounds assume the algorithms are uniform technically weakens the results, as there might be non-uniform strategies that work better. In the standard radio network model, however, this does not prove to be the case: uniform algorithms for local and global broadcast match lower bounds that hold for all algorithms (c.f., the discussion in [16]).

Local broadcast
We begin by studying upper and lower bounds for the local broadcast problem. Our upper bound performs efficiently once the stability factor τ reaches a threshold of log ∆. As τ decreases toward a minimum value of 1, this efficiency degrades rapidly. Our lower bounds capture that this degradation for small τ is unavoidable for uniform algorithms. In the following we use the notation τ̃ = min{τ, log ∆}. By log n we always denote the logarithm base 2, and by ln n the natural logarithm.

Upper Bound
All uniform local broadcast algorithms behave in the same manner: the nodes in B repeatedly broadcast according to some fixed cycle of k broadcast probabilities. We formalize this strategy with algorithm RLB (Robust Local Broadcast), described below (we break out Uniform into its own procedure, as we later use it in our improved FRLB local broadcast algorithm as well).

Before we prove the complexity of RLB, we show two useful properties of any uniform algorithm. Let R_t^{(v)} denote the event that node v receives a message from some neighbor in step t.

Lemma 1. For any uniform algorithm, any node v, and any step t, if d_t(v) > 0 and the algorithm uses probability p ≤ 1/2 in step t, then Pr[R_t^{(v)}] ≥ α(2e)^{−α}, where α = p · d_t(v).

Proof. For this event to occur, exactly one among the d_t(v) neighbors of v has to transmit, and v must not transmit. Node v does not transmit with probability 1 − p if it has a message, and with probability 1 if it does not. Denoting α = p · d_t(v), we have Pr[R_t^{(v)}] ≥ (1 − p) · d_t(v) · p · (1 − p)^{d_t(v)−1} = α(1 − p)^{d_t(v)} = α((1 − p)^{1/p})^α ≥ α(2e)^{−α}, where the last step uses the fact that (1 − p)^{1/p} ≥ 1/4 ≥ 1/(2e) for p ≤ 1/2.

Lemma 2. Fix a uniform algorithm that uses probability p in step t, and let f(d) = d · p · (1 − p)^d. If d_t(v) is restricted to an interval with endpoints d_1 and d_2, then Pr[R_t^{(v)}] ≥ min{f(d_1), f(d_2)}.

Proof. If the algorithm uses probability p in step t, then Pr[R_t^{(v)}] ≥ f(d_t(v)). Seeing this expression as a function of d_t(v), we can compute the derivative and obtain that this function has a single maximum at d_t(v) = 1/(ln(1/(1 − p))). Hence, if we restrict d_t(v) to be within a certain interval, the value of the function is lower bounded by its minimum at the endpoints of the interval.
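The Uniform and RLB procedures can be sketched as follows; this is our own executable rendering of the cycle-of-probabilities strategy (the exactly-one-transmitter check stands in for an actual receiver), not the paper's pseudocode.

```python
import random

def uniform_phase(probs, contenders, rng):
    """One pass of the Uniform procedure: in step i every broadcaster
    transmits with probability probs[i], all in synchrony. As a stand-in
    for a receiver, report whether some step had exactly one transmitter."""
    return any(
        sum(rng.random() < p for _ in range(contenders)) == 1 for p in probs
    )

def rlb(reps, delta, tau, contenders, rng):
    """RLB sketch: repeat the cycle p_i = delta**(-i/tau), i = 1..tau.
    Here tau plays the role of the paper's min(tau, log delta)."""
    probs = [delta ** (-i / tau) for i in range(1, tau + 1)]
    return any(uniform_phase(probs, contenders, rng) for _ in range(reps))

# 30 contenders, delta = 64, tau = 3: some p_i is within a factor of
# delta**(1/tau) of 1/30, so enough repetitions succeed almost surely.
print(rlb(40, 64, 3, 30, random.Random(1)))
```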
Our upper bound analysis leverages the following useful lemma, which can be shown by induction on n (the left side is also known as the Weierstrass product inequality).

Lemma 3. For any x_1, …, x_n ∈ [0, 1]: 1 − ∑_{i=1}^n x_i ≤ ∏_{i=1}^n (1 − x_i) ≤ 1/(1 + ∑_{i=1}^n x_i).

To begin our analysis, we focus on the behavior of our algorithm with respect to a single receiver when we use the transmit probability sequence p_1, p_2, …, p_τ̃, where τ̃ = min{τ, log ∆} and p_i = ∆^{−i/τ̃}.

Lemma 4. Fix any receiver u ∈ R and error bound ε > 0. It follows: RLB(2 ln(1/ε) · 4e · ∆^{1/τ̃}, τ̃) delivers a message to u with probability at least 1 − ε, in time O(∆^{1/τ̃} τ̃ log(1/ε)).
Proof. It is sufficient to prove the claim for τ ≤ log ∆. For τ > log ∆ we use the algorithm for τ = log ∆. Note that any algorithm that is correct for some τ must also work for any larger τ, because the adversary may choose not to change the distribution as frequently as it is permitted to. In the case where τ ≤ log ∆ we get that ∆^{1/τ} ≥ 2.
We want to show that if the nodes from N_u ∩ B execute the procedure Uniform(τ, p_1, …, p_τ) twice, then u receives some message with probability at least log ∆/(2e∆^{1/τ} τ). Every time we execute Uniform twice, we have a total of 2τ consecutive time slots, out of which, by the definition of our model, at least τ consecutive slots have the same distribution of the additional edges, and moreover the stations try all the probabilities p_1, p_2, …, p_τ (not necessarily in this order). Let T denote the set of these τ time slots, and for i = 1, 2, …, τ let t_i ∈ T be the step in which probability p_i is used. We also denote the distribution used in the steps from set T by E^{(T)}. Hence we can denote the edges between u and its neighbors that have some message by E_part = {{u, w} ∈ E' : w ∈ N_u ∩ B}. We know that the edge sets are chosen independently from the same distribution: E_t ∼ E^{(T)} for t ∈ T. Let X_t = |E_t ∩ E_part| be the random variable counting the neighbors that are connected to u in step t and belong to B. For each i from 1 to τ we define q_i = Pr[∆^{(i−1)/τ} < X_t ≤ ∆^{i/τ}], for any t ∈ T. Observe that the probabilities q_i do not depend on t during the considered τ rounds. Moreover, since u ∈ R, u is connected via a reliable edge to at least one node in B; thus X_t ≥ 1 and ∑_{i=1}^τ q_i = 1. We would like to lower bound the probability that u receives a message in step t_i for i = 1, 2, …, τ. In the t_i-th slot the transmission probability is p_i = ∆^{−i/τ}, and the transmission choices made by the stations are independent from the choice of the edges E_{t_i} active in round t_i. Let Q_i denote the event that ∆^{(i−1)/τ} < X_{t_i} ≤ ∆^{i/τ}.
We have p_i ≤ 1/2, hence we can use Lemmas 1 and 2. Since the edge sets are chosen independently in each step, and the random choices of the stations whether to transmit or not are also independent from each other, we can multiply the per-step failure bounds across the τ steps and apply Lemma 3 to conclude that a double execution of Uniform delivers a message to u with probability at least 1/(4e∆^{1/τ}). Hence, if we execute the procedure for 2τ ln(1/ε) · 4e · ∆^{1/τ} time steps, we have at least ln(1/ε) · 4e · ∆^{1/τ} sequences of τ consecutive time steps in which the distribution over the unreliable edges is the same and the algorithm tries all the probabilities {p_1, p_2, …, p_τ}. Each of these procedures fails independently with probability at most 1 − 1/(4e∆^{1/τ}), hence the probability that all the procedures fail is at most ε.

On closer inspection of the analysis of Lemma 4, it becomes clear that if we tweak slightly the probabilities used in our algorithm, we require fewer iterations. In more detail, the probability of a successful transmission in the case where each of the x transmitters broadcasts independently with probability α/x is approximately α(2e)^{−α}. In the previous algorithm we were transmitting in successive steps with probabilities ∆^{−1/τ}, ∆^{−2/τ}, …. Thus if x = 1, we would get α = ∆^{−i/τ} in the i-th step, and the sum of the probabilities of success over τ consecutive steps would be approximately ∆^{−1/τ}. The formula α(2e)^{−α} shows that the success probability depends on α linearly if α < 1 ("too small" probability) and exponentially on α if α > 1 ("too large" probability). In the previous theorem we intuitively only use the linear term. In the next one we would like to also use, to some extent, the exponential term. If we shift all the probabilities by multiplying them by a factor of β > 1, the total success probability would be approximately β∆^{−1/τ} if x = 1 and β(2e)^{−β} if x = ∆. Thus by setting β = log_{2e} ∆/τ we maximize both these values.
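The balancing act behind the choice of β can be checked directly: with β = log_{2e} ∆/τ we have (2e)^β = ∆^{1/τ}, so the two extreme success estimates coincide. A small numeric sketch (the parameter values are our own example):

```python
import math

delta, tau = 2 ** 20, 4                    # example parameters (our choice)
beta = math.log(delta, 2 * math.e) / tau   # beta = log_{2e}(delta) / tau

def success(alpha):
    """The approximate single-step success rate alpha * (2e)**(-alpha)."""
    return alpha / (2 * math.e) ** alpha

# Extreme contention levels x = 1 and x = delta: after multiplying every
# probability by beta, the estimates beta*delta**(-1/tau) and
# success(beta) = beta*(2e)**(-beta) coincide, since (2e)**beta equals
# delta**(1/tau) by the choice of beta.
lo = beta * delta ** (-1 / tau)
hi = success(beta)
print(lo, hi)   # the two values agree up to floating-point error
```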
The following lemma makes the above intuition precise and gains a log factor in performance in algorithm FRLB (Fast Robust Local Broadcast) compared to RLB. As part of this analysis, we add a second statement to the lemma that will prove useful during our subsequent analysis of global broadcast. The correctness of this second statement is a straightforward consequence of the analysis.
Lemma 5. Fix any receiver u ∈ R and error bound ε > 0. It follows: 1. FRLB(2 ln(1/ε) · 4∆^{1/τ̃} τ̃/log_{2e} ∆, τ̃) completes local broadcast with a single receiver in time O((∆^{1/τ̃} · τ̃²/log ∆) · log(1/ε)) with probability at least 1 − ε, for any ε > 0. 2. FRLB(2, τ̃) completes local broadcast with a single receiver with probability at least log_{2e} ∆/(4∆^{1/τ̃} τ̃).

Proof. It is sufficient to prove the claim for τ ≤ log_{2e} ∆. For τ > log_{2e} ∆ we use the algorithm for τ = log_{2e} ∆. Note that any algorithm that is correct for some τ must also work for any larger τ, because the adversary may choose not to change the distribution as frequently as it is permitted to. In the case where τ ≤ log_{2e} ∆ we get that ∆^{1/τ} ≥ 2e. We want to show that if the nodes from N_u ∩ B execute the procedure Uniform(τ, p_1, …, p_τ) twice, then u receives some message with probability at least log_{2e} ∆/(4∆^{1/τ} τ). Since we execute Uniform twice, we have a total of 2τ consecutive time slots, out of which, by the definition of our model, at least τ consecutive slots have the same distribution of the edges in E' \ E, and moreover the stations try all the probabilities p_1, p_2, …, p_τ (not necessarily in this order). Let T denote the set of these τ time slots, and for i = 1, 2, …, τ let t_i ∈ T be the step in which probability p_i is used. We also denote the distribution used in the steps from set T by E^{(T)}. Observe from the definition of the algorithm that during these slots the number of participating stations does not change. Hence we can denote the edges between u and its neighbors that have some message by E_part. We know that the edge sets are chosen independently from the same distribution: E_t ∼ E^{(T)} for t ∈ T. Let X_t = |E_t ∩ E_part| be the random variable counting the neighbors that are connected to u in step t and belong to B. For each i from 1 to τ we define q_i = Pr[∆^{(i−1)/τ} < X_t ≤ ∆^{i/τ}], for any t ∈ T. Observe that the probabilities q_i do not depend on t during the considered τ rounds.
We would like to lower bound the probability that u receives a message in step t_i for i = 1, 2, …, τ. In the t_i-th slot each station with a message transmits independently with probability p_i = ∆^{−i/τ} · log_{2e} ∆/τ, and the transmission choices made by the stations are independent from the choice of the edges E_{t_i}; hence we can use Lemmas 1 and 2. Since the edge sets are chosen independently in each step and the choices of the stations are also independent, we have, by Equation (4), that u receives a message within the steps of T with probability at least log_{2e} ∆/(2∆^{1/τ} τ) − log²_{2e} ∆/(4∆^{2/τ} τ²) ≥ log_{2e} ∆/(4∆^{1/τ} τ), where the last inequality is true since, if we denote τ = (log_{2e} ∆)/α (for α ≥ 1), then ∆^{1/τ} τ = (2e)^α log_{2e} ∆/α ≥ log_{2e} ∆, hence log²_{2e} ∆/(4∆^{2/τ} τ²) ≤ log_{2e} ∆/(4∆^{1/τ} τ). This completes the proof of part 2. To prove part 1, we observe that if we execute the procedure for 2τ ln(1/ε) · 4∆^{1/τ} τ/log_{2e} ∆ time steps, we have at least ln(1/ε) · 4∆^{1/τ} τ/log_{2e} ∆ sequences of τ consecutive time steps in which the distribution over the unreliable edges is the same and the algorithm tries all the probabilities {p_1, p_2, …, p_τ}. Each of these procedures fails independently with probability at most 1 − log_{2e} ∆/(4∆^{1/τ} τ), hence the probability that all the procedures fail is at most ε.

In Lemmas 4 and 5 we studied the fate of a single receiver in R during an execution of algorithms RLB and FRLB. Here we apply this result to bound the time for all nodes in R to receive a message, therefore solving the local broadcast problem. In particular, for a desired error bound ε, if we apply these lemmas with error bound ε' = ε/n, then we end up solving the single node problem with a failure probability upper bounded by ε/n. Applying a union bound, it follows that the probability that any node from R fails to receive a message is less than ε. Formally:

Theorem 6. Fix an error bound ε > 0. It follows that algorithm FRLB(2 ln(n/ε) · 4∆^{1/τ̃} τ̃/log ∆, τ̃) solves local broadcast in O((∆^{1/τ̃} · τ̃²/log_{2e} ∆) · log(n/ε)) rounds, with probability at least 1 − ε.

Lower bound
Observe that for τ = Ω(log ∆), FRLB has a time complexity of O(log ∆ log n) rounds for ε = 1/n, which matches the performance of the optimal algorithms for this problem in the standard radio model. This emphasizes the perhaps surprising result that even large amounts of topology change do not impede simple uniform broadcast strategies, so long as there is independence between nearby changes. Once τ drops below log ∆, however, a significant gap opens between our model and the standard radio network model. Here we prove that this gap is fundamental for any uniform algorithm in our model.
In the local broadcast problem, a receiver from set R can have between 1 and ∆ neighbors in set B. The neighbors should optimally use probabilities close to the inverse of their number. But since the number of neighbors is unknown, the algorithm has to check all the values. If we look at the logarithms of the inverses of the probabilities (call them log-estimates) used in Lemma 4, we get i · log ∆/τ, for i = 1, 2, ..., τ, which are spaced equidistantly on the interval [0, log ∆]. The goal of the algorithm is to minimize the maximum gap between two adjacent log-estimates placed on this interval, since this maximizes the success probability in the worst case. With this in mind, in the proof of the following lower bound we look at the dual problem: given the predetermined sequence of probabilities used by an arbitrary uniform algorithm, we seek the largest gap between adjacent log-estimates, and then select edge distributions that take advantage of this weakness.
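The gap structure driving this dual view is easy to compute. Below is an illustrative sketch (the function name `largest_log_gap` is ours, not the paper's): it places the log-estimates log(1/p) of a probability list on the interval [0, log ∆], treated circularly as in the proof's urn arrangement, and returns the largest gap — the quantity the adversary will exploit.

```python
import math

def largest_log_gap(probs, delta):
    """Place the log-estimates log2(1/p) on the circle [0, log2 delta] and
    return the largest gap between adjacent estimates."""
    span = math.log2(delta)
    pts = sorted(min(math.log2(1 / p), span) for p in probs)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(span - pts[-1] + pts[0])  # circular wrap-around gap
    return max(gaps)
```

With equidistant estimates the maximum gap is log ∆/τ; clustering the probabilities in one region opens a much larger gap for the adversary to target.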
Proof. Consider the dual graphs G = (V, E) and G' = (V, E'), defined as shown in Figure 1. We will study local broadcast in this dual graph. Observe that the maximum degree of any node is indeed ∆ and the number of nodes is n. Nodes v_∆, v_{∆+1}, ..., v_{n−2} do not belong to B ∪ R and hence are not relevant in our analysis.
Using the sequence of probabilities p_1, p_2, ... used by algorithm A, we will define a sequence of distributions over the edges that will cause a long delay before node v receives a message. The adversary we define is allowed to change the distribution every τ steps. Accordingly, we partition the rounds into phases of length τ, which we label 1, 2, 3, .... Phase k consists of time steps I_k = {1 + (k − 1) · τ, 2 + (k − 1) · τ, ..., k · τ}. For each phase k ≥ 1, the adversary will use a distribution D_k that is defined with respect to the probabilities used by A during the rounds in phase k. In particular, let P_k = {p_i}_{i∈I_k} be the τ probabilities used by A during phase k.
We use P_k to define the distribution D_k as follows. Define ∆̂ = ∆ − 1 and let N represent ⌈log ∆̂⌉ urns labeled with numbers from 1 to ⌈log ∆̂⌉. Into these urns we place balls with numbers ⌊log(1/p_j)⌋ and ⌈log(1/p_j)⌉ for all j ∈ I_k; a ball with number i is placed into the bin with the same number. With this procedure, for each j we place two balls into adjacent bins if ⌊log(1/p_j)⌋ ≠ ⌈log(1/p_j)⌉, and a single ball in the opposite case. We arrange the bins in a circular fashion, i.e., bins ⌈log ∆̂⌉ and 1 are consecutive, and we want to find the longest sequence of consecutive empty bins. Observe that, since for each j we put either a single ball or two balls into adjacent bins, we have at most τ sequences of consecutive empty bins. Moreover, since at most 2τ bins contain a ball, there exists a sequence of consecutive empty bins of length at least (⌈log ∆̂⌉ − 2τ)/τ. Knowing that τ is an integer and that τ ≤ log ∆̂/16, we can represent ⌈log ∆̂⌉ = aτ + b + {log ∆̂}, where {log ∆̂} is the fractional part of log ∆̂ and b + {log ∆̂} < τ. We define: x = log ∆̂/τ − 3 − log(ln ∆̂/τ), y = log(ln ∆̂/τ) + 1.
We observe that for τ ≤ log ∆̂/16 we have log(ln ∆̂/τ) ≥ 4, hence x and y are both positive integers, and moreover x + y = log ∆̂/τ − 2. Hence we have already shown that there exists a sequence of consecutive empty bins of length at least x + y. Now, we pick the label of the (y + 1)-st bin in this sequence (the order of the bins follows the circular arrangement, i.e., 1 comes after ⌈log ∆̂⌉) and call it a_k. Let A_k = {⌊log(1/p_j)⌋ : j ∈ I_k}. This set contains the logarithms of all the estimates "tried" by the algorithm in the k-th phase. Now we split A_k into the elements smaller than a_k (the set A_k^{(<)}) and those at least a_k (the set A_k^{(≥)}). If a ∈ A_k^{(<)}, then a ≤ a_k − y, because there are y empty bins between bin a_k and the bin containing ball a. Symmetrically, if a ∈ A_k^{(≥)}, then a ≥ a_k + x − 1, because there are x − 1 empty bins between bin a_k and the bin containing ball a.
In our distribution D_k for phase k, we include all edges from E, plus a subset of size 2^{a_k} − 1 selected uniformly from E' \ E. This is possible since the adversary can choose to activate any subset of links among the set {(v_i, v) : i ∈ {2, ..., ∆̂}}. With this choice, the degree of v is 2^{a_k} in phase k, hence we can bound the probability that a successful transmission occurs in phase k.
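The quantity being bounded is simply the "exactly one transmitter" probability. A one-line sketch (our helper, under the standard collision model) makes the mismatch effect concrete: with d active neighbors each transmitting with probability p, success requires exactly one transmission, and the probability collapses once log(1/p) is far from log d on either side.

```python
def success_prob(d, p):
    """Pr[exactly one of d stations transmits], each independently w.p. p."""
    return d * p * (1 - p) ** (d - 1)
```

For example, success_prob(1024, 1/1024) is roughly 1/e, while both success_prob(1024, 1/4) and success_prob(1024, 2^-20) are tiny — which is exactly why the adversary steers the degree 2^{a_k} into the largest gap between the algorithm's estimates.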
Having chosen the distribution of the edges between v and {v_1, v_2, ..., v_∆̂}, we can now bound the probability of a successful transmission in any step t of the considered phase. Let S_t denote the event of a successful transmission in step t. For this event to happen, exactly one of the 2^{a_k} nodes among {v_1, v_2, ..., v_∆̂} that are connected to v must transmit. Take any step t and the corresponding probability p_t used by the algorithm. We know that a_k is chosen so that either a_k ≥ log(1/p_t) + y or a_k ≤ log(1/p_t) − x; we consider these cases separately. In the first case we know that a_k ≥ log(1/p_t) + y and a_k ≤ log ∆̂, thus p_t ≤ 2^y/∆̂ and hence, since ∆̂ ≥ 9, we get 1/(1 − p_t) ≤ 2. Moreover, since 2^{a_k}p_t ≥ 2^y ≥ 4, we have e^{−2^{a_k}p_t/2} < 1/(2^{a_k}p_t) (because e^{x/2} > x for all x), which gives the desired bound in this case. We have thus shown that the probability that v receives a message in our fixed phase k is at most 32 ln ∆̂/(∆̂^{1/τ}τ). To conclude the proof, we apply a union bound to show that the probability that v receives a message in at least one of ∆̂^{1/τ}τ/(64 ln ∆̂) − 1 phases, which require ∆̂^{1/τ}τ²/(64 ln ∆̂) − τ total rounds, is strictly less than 1/2.

In our next theorem, we refine the argument used in Theorem 7 for the case where τ is a nontrivial amount smaller than the log ∆ threshold. We will argue that for smaller τ the complexity is Ω(∆^{1/τ}τ²/log ∆), which more closely matches our best upper bound. We are able to trade this small amount of extra wiggle room in τ for a stronger lower bound because it simplifies certain probabilistic obstacles in our argument. Combined with our previous theorem, the result below shows that our upper bound is asymptotically optimal for uniform algorithms for all but a narrow range of stability factors, for which it is near tight.
Proof. In this proof we use the same graph as in Theorem 7. Let G = (V, E) and G' = (V, E').
Let p_1, p_2, ... be the fixed sequence of broadcast probabilities used by the nodes in B running A. Using this sequence we will define a sequence of distributions over the edges that will cause a long delay before node v receives a message.
As an adversary we are allowed to define an integer value l* ∈ {1, 2, ..., ⌈log ∆̂⌉} based on the l-values, and to define a distribution for phase k in which there are always 2^{l*} active links between nodes v_1, v_2, ..., v_∆̂ and v. Let s_i denote the success probability in the i-th step of the considered phase. Our goal as an adversary is to find an l* that minimizes Σ_{i=1}^τ s_i. We will show that it is always possible to find an l* such that Σ_{i=1}^τ s_i = O(∆̂^{−1/τ} log ∆̂/τ) = Θ(∆^{−1/τ} log ∆/τ). This gives us that Ω(∆^{1/τ}τ/log ∆̂) phases of τ steps each, hence in total Ω(∆^{1/τ}τ²/log ∆) steps, are needed to complete local broadcast with constant probability.
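The adversary's task in this argument — pick l* minimizing the summed per-step success probabilities — can be prototyped by brute force. A sketch (the names and the exhaustive search are ours; the proof instead locates l* inside a large gap between the algorithm's l-values):

```python
import math

def best_lstar(probs, delta_hat):
    """Adversary sketch: fix the degree of v at 2^l* for some
    l* in {0..floor(log2 delta_hat)}, minimizing the summed per-step
    success probability over the phase."""
    def s(p, d):  # Pr[exactly one of d active neighbors transmits]
        return d * p * (1 - p) ** (d - 1)
    best = None
    for lstar in range(int(math.log2(delta_hat)) + 1):
        total = sum(s(p, 2 ** lstar) for p in probs)
        if best is None or total < best[1]:
            best = (lstar, total)
    return best  # (chosen l*, minimized sum of s_i)
```

If the phase only ever tries p = 2^-5, the adversary answers with a degree as far from 2^5 as possible, driving the phase's total success probability toward zero.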
Since τ < ln ∆̂/(12 log log ∆̂), we have y ≥ 3. Observe that x + y = log ∆̂/τ and x + y ≤ log ∆̂/τ + 2 log τ − log ln ∆̂ + log(ln ∆̂/τ + 2 ln τ) = log ∆̂/τ + 2 log τ + log(1/τ + 2 ln τ/ln ∆̂) ≤ log ∆̂/τ + 2 log τ + log 3. Let ∆* denote the number of active links between v and v_1, v_2, ..., v_∆̂ in the considered phase, and let l* = log ∆*. In any step, if ∆* is such that l* ≥ y and we also have p_i ≥ 2/3, then we get s_i ≤ 9 ln ∆̂/(∆̂^{1/τ}τ²), where the last inequality holds because τ ≥ 1 and ln τ ≤ ln ∆̂. This shows that the sum of all such s_i is at most 9 ln ∆̂/(∆̂^{1/τ}τ). Consider now only steps with p_i < 2/3. Observe that for a fixed value of l*, for any i such that l_i ∉ [l* − y, l* + x − 1], we have s_i ≤ 9 ln ∆̂/(∆̂^{1/τ}τ²) (by Equations (8), (10)). Hence the sum of all such values s_i is at most 9 ln ∆̂/(∆̂^{1/τ}τ). Hence we only need to find an l* such that the sum of the values s_i for which the corresponding l_i ∈ [l* − y, l* + x] is less than (c − 9) ln ∆̂/(∆̂^{1/τ}τ). We denote the smallest and the largest l-values: l_sm = min_{i∈{1,...,τ}} l_i and l_lg = max_{i∈{1,...,τ}} l_i. We will prove the two following claims about l_sm and l_lg. First, l_sm ≤ x: otherwise we can choose l* = 0 (∆* is then equal to 1, which corresponds to exactly one active link between {v_1, v_2, ..., v_∆̂} and v), and then by Equation (8) the sum of all values s_i is at most 9 ln ∆̂/(∆̂^{1/τ}τ), which contradicts our assumption. Second, l_lg ≥ log ∆̂ − y: if this is not the case, we choose l* = ⌈log ∆̂⌉, and by Equation (10) we have that if p_i < 2/3 then s_i ≤ 9 ln ∆̂/(∆̂^{1/τ}τ²), and by Equation (6) that if p_i ≥ 2/3 then s_i ≤ 9 ln ∆̂/(∆̂^{1/τ}τ²). So the sum of all values s_i is at most 9 ln ∆̂/(∆̂^{1/τ}τ), which contradicts our assumption.
Consider now the interval Γ_1 = [l_sm, l_lg]. The two previous claims show that |Γ_1| ≥ log ∆̂ − x − y. We can now consider the placement of the values l_i on Γ_1 and analyze the gaps between adjacent values. Gap g_i is the difference between the (i + 1)-st smallest and the i-th smallest value among all values l_j that belong to Γ_1. We want to show that no gap has length at least x + y. Assume on the contrary that such a gap, between l_i and l_j, exists. Then we pick l* = l_i + y and observe that l* is an integer, is at least y larger than each smaller l-value, and is at least x − 1 smaller than each larger l-value. In such a case l* ≥ y, hence for all i such that p_i ≥ 2/3, by Equation (6) we have s_i ≤ 9 ln ∆̂/(∆̂^{1/τ}τ²), and if p_i < 2/3, then (since l* ≥ y) by Equations (8) and (10) we have s_i ≤ 9 ln ∆̂/(∆̂^{1/τ}τ²). Thus if any gap had length at least x + y, then Σ_{i=0}^τ s_i ≤ 9 ln ∆̂/(∆̂^{1/τ}τ), which contradicts our assumption.
We know that there are at most τ − 1 gaps and that they cover an area of at least log ∆̂ − x − y; hence we can lower bound the average length of a gap. Thus there exists a gap G_1 of length at least d_1. Knowing that y ≥ 3, we have d_1 ≥ x + y − 2 ≥ 1, and inside this gap we can find an integer value l*_1 that is at least y larger than the closest smaller l-value and at least x − 3 smaller than the closest larger l-value. Consider the values s_i with this choice of l*. By Equations (7) and (9), if l* = l*_1, each s_i is at most 24 ln ∆̂/(∆̂^{1/τ}τ). Consider now the interval I_1 = [l*_1 − y, l*_1 + x]. By Equations (8) and (10), for all i such that l_i ∉ I_1 and p_i < 2/3, we have s_i ≤ 9 ln ∆̂/(∆̂^{1/τ}τ²). Thus the sum of all s_i for which l_i ∉ I_1 or p_i ≥ 2/3 is at most 9 ln ∆̂/(∆̂^{1/τ}τ). Since by assumption the sum of all s_i is at least c ln ∆̂/(∆̂^{1/τ}τ), the sum of all s_i for which l_i ∈ I_1 and p_i < 2/3 has to be at least 2400 ln ∆̂/(∆̂^{1/τ}τ). By the choice of l*_1, each s_i for which l_i ∈ I_1 and p_i < 2/3 is at most 24 ln ∆̂/(∆̂^{1/τ}τ), hence we must have at least 100 such l-values. We have shown that there are at least 100 l-values inside interval I_1.
We find the smallest and the largest l-values inside I_1, calling them l^{(1)}_sm and l^{(1)}_lg, and define Γ_2 = Γ_1 \ (l^{(1)}_sm, l^{(1)}_lg) (we remove the interior of the interval [l^{(1)}_sm, l^{(1)}_lg], keeping the endpoints). We know that we removed at least 98 l-values. Since the l-values have to work for any l*, we can now argue about the average length of a gap inside Γ_2, locate a different value l*_2 in the remaining interval, and identify 100 l-values close to l*_2. But we need to make sure that |l*_1 − l*_2| ≥ x + y, since otherwise we would count the same l-values twice. We extend the interval I_1 to I*_1 = [l*_1 − (x + y), l*_1 + (x + y)], find the smallest l-value larger than any l-value inside I*_1 (call it l'_sm) and the largest l-value smaller than any l-value inside I*_1 (call it l'_lg), and define Γ*_2 = Γ_2 \ [l'_sm, l'_lg]. Now we want to show that |Γ*_2| ≥ |Γ_1| − 5(x + y). This is because |I*_1| = 2(x + y) and, by Equation (11), the length of any gap is at most x + y, hence the distance between l*_1 − (x + y) and l'_sm is at most x + y (and similarly between l*_1 + (x + y) and l'_lg). If l'_sm or l'_lg does not exist, we additionally remove no more than x + y, because the smallest l-value is at most x and the largest is at least log ∆̂ − y. This shows that we remove a total area of at most 5(x + y). Now we consider the average length of a gap in Γ*_2. We removed at least 98 l-values (because I*_1 contains I_1) and an area of at most 5(x + y) ≤ 5 log ∆̂/τ + 10 log τ + 5 log 3. Hence there exists a gap of length at least log ∆̂/τ; we pick such a gap and find an integer l*_2 that is at least y larger than the closest smaller l-value and at least x − 1 smaller than the closest larger l-value. Observe moreover that |l*_2 − l*_1| ≥ x + y, because l*_2 does not belong to the interior of interval I*_1. We define I_2 = [l*_2 − y, l*_2 + x]. Observe that I_1 and I_2 are disjoint (except possibly their endpoints). We can now argue, as for I_1, that I_2 also contains 100 l-values.
Moreover, at most one l-value can be shared between I_1 and I_2 (because the interiors of the intervals are disjoint). We then extend I_2 to I*_2, construct Γ_3 and Γ*_3, and repeat the whole procedure. This procedure identifies at least 98 unique l-values in each iteration, hence it can last for at most τ/98 iterations. But we remove an area of at most 5 log ∆̂/τ + 10 log τ + 5 log 3 per iteration, and since we assumed τ ≤ ln ∆̂/log log ∆̂, we have 5 log ∆̂/τ + 10 log τ + 5 log 3 ≤ 10 log ∆̂/τ. This leads to a contradiction, since there are only τ l-values. Hence for any choice of l_1, ..., l_τ there exists an l* such that Σ_{i=1}^τ s_i < c ln ∆̂/(∆̂^{1/τ}τ). Thus, by the union bound, the algorithm needs to run for at least ∆̂^{1/τ}τ/(c ln ∆̂) phases to accumulate a total success probability of 1/2. Since each phase lasts τ rounds, the total number of steps needed is Ω(∆^{1/τ}τ²/ln ∆).

Global Broadcast
We now turn our attention to the global broadcast problem. Our upper bound will use the same broadcast probability sequence as our best local broadcast algorithm from before. As with local broadcast, for τ ≥ log ∆ our performance nearly matches the optimal performance in the standard radio network model, and then degrades as τ shrinks toward 1. Our lower bound will establish that this degradation is near optimal for uniform algorithms in this setting. In this section we also use the notation τ̂ = min{τ, log ∆}.

Upper Bound
A uniform global broadcast algorithm requires each node to cycle through a predetermined sequence of broadcast probabilities once it becomes active (i.e., has received the broadcast message). The only slight twist in our algorithm's presentation is that we assume that once a node becomes active, it waits until the start of the next probability cycle to start broadcasting. To implement this logic in pseudocode, we use the variable Time to indicate the current global round count. We detail this algorithm below (notice that FRLB(2) is the local broadcast algorithm analyzed in Lemma 5). Proof. Similarly to the analysis of the local broadcast algorithms, we consider only the case of τ ≤ log ∆, since for any larger τ we use the algorithm for τ = log ∆. Take any station u and assume that some positive number of neighbors of u in E execute procedure FRLB(2) in parallel. Then by Lemma 5, station u receives a message from some neighbor with probability at least ln ∆/(4∆^{1/τ}τ). Note that the same number of neighbors of u execute both Uniform procedures of FRLB(2), and at least one of these neighbors has to be connected to u by a reliable link. This is true since, after receiving the message, a station waits until a time slot that is a multiple of 2τ (line 3 of the pseudocode). Hence we can treat each execution of FRLB(2) as a single phase.
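The cycle-alignment rule (wait for the next multiple of 2τ before joining) can be sketched as a toy simulation. Everything below is illustrative — the node sets, the `probs` schedule, and the absence of an adversary are our simplifications — but it shows how alignment keeps every FRLB(2) phase running with a fixed set of participants:

```python
import random

def next_cycle_start(t, tau):
    """First multiple of 2*tau that is >= t: a newly active node
    stays silent until the current cycle ends."""
    cycle = 2 * tau
    return ((t + cycle - 1) // cycle) * cycle

def global_broadcast_sim(adj_reliable, origin, tau, probs, rounds, seed=0):
    """Toy sketch: active nodes, aligned to 2*tau cycles, repeatedly run a
    uniform probability schedule; a node receives when exactly one of its
    neighbors transmits. Only reliable edges are modeled (no adversary)."""
    rng = random.Random(seed)
    active_from = {origin: 0}           # node -> first round it may transmit
    for t in range(rounds):
        p = probs[t % len(probs)]
        tx = {v for v, start in active_from.items()
              if t >= start and rng.random() < p}
        for v in list(adj_reliable):
            if v in active_from:
                continue
            senders = [u for u in adj_reliable[v] if u in tx]
            if len(senders) == 1:       # collision-free reception
                active_from[v] = next_cycle_start(t + 1, tau)
    return set(active_from)
```

On a 4-node path with the message at one end, the simulation reliably floods the whole line: each hop is a local broadcast with a single sender, repeated over many aligned cycles.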
Let o denote the originator of the message. Fix any tree T of shortest paths in graph G (e.g., a BFS tree) over the edges of E (reliable), rooted at o. We would like to bound the progress of the message on tree T. For any station u, denote by p(u) the parent of u in tree T, and define T(u) as the earliest time step in which p(u) receives the message; we set T(u) = ∞ if the message does not reach p(u). If T(u) < ∞, we consider the ln(2n/ε) · 4∆^{1/τ}τ/ln ∆ phases that follow step T(u). A phase is called successful if it succeeds in delivering the message to u, and unsuccessful otherwise. Note that (assuming T(u) < ∞) the probability that all phases are unsuccessful, for any fixed u, is at most ε/(2n). Let us denote by S the event that T(u) < ∞ for all stations u, and by S_i the event that T(u) < ∞ for all stations at distance at most i from the root in tree T. If d_i denotes the number of stations at distance i from the root in tree T, then the probability of S can be bounded by (12) and a union bound. If event S takes place, the message reaches all the nodes of the network. Clearly the message can reach a node u not necessarily from its parent p(u) in tree T, but this would only help in our analysis (it would cause the message to arrive at u sooner). Now we want to bound the number of phases it takes for the message to traverse a path in the tree. Fix any station u and let P = (o, v_1, v_2, ..., v_{D'−1}, u) denote the path from o to u in tree T (note that D' ≤ D). We denote by R_i the round in which v_i receives the message (R_{D'} denotes the round in which u receives the message) and introduce random variables ∆_i = max{0, R_i − R_{i−1}}. Conditioned on event S, the variables ∆_i are stochastically dominated by independent geometric random variables with success probability ln ∆/(4∆^{1/τ}τ). We have D' such variables, and the probability that their sum T exceeds L = 4(D' + ln(2n/ε)) · 4∆^{1/τ}τ/ln ∆ = E[T] · 4(1 + ln(2n/ε)/D') can be bounded using inequalities from [11].
Denote λ = 4(1 + ln(2n/ε)/D') and observe that (λ − 1)/2 ≥ ln λ holds since λ > 4. Taking a union bound over all stations u, we get that with probability at least 1 − ε²/(4n) the message reaches all nodes within time 4(D + ln(2n/ε)) · 4∆^{1/τ}τ/ln ∆, conditioned on S. Since S takes place with probability at least 1 − ε/2, and since each phase takes 2τ time steps, this shows that the algorithm finishes within time 8(D + ln(2n/ε)) · 4∆^{1/τ}τ²/ln ∆ with probability at least (1 − ε/2)(1 − ε²/(4n)) ≥ 1 − ε.

Lower Bound
The global broadcast lower bound of Ω(D log(n/D)), proved by Kushilevitz and Mansour [14] for the standard radio network model, clearly still holds in our setting, as the radio network model is a special case of the dual graph model where E = E'. Similarly, the Ω(log n log ∆) lower bound proved by Alon et al. [1] also applies.³ It follows that for τ ≥ log ∆ we almost match the optimal bound for the standard radio network model, and do match the time of the seminal algorithm of Bar-Yehuda et al. [2].

Figure 2: A graph used in the proof of Theorem 10.
For smaller τ , this performance degrades rapidly. Here we prove this degradation is near optimal for uniform global broadcast algorithms in our model. We apply the obvious approach of breaking the problem of global broadcast into multiple sequential instances of local broadcast (though there are some non-obvious obstacles that arise in implementing this idea). As with our local broadcast lower bounds, we separate out the case where τ is at least a 1/ log log ∆ factor smaller than our log ∆ threshold, as we can obtain a slightly stronger bound under this assumption. Proof. We assume first that D is divisible by 3 (if it is not we can decrease D by one or two nodes to make it divisible by 3, without impacting the asymptotic bounds). We construct the dual graph G, G by connecting together D/3 gadgets, G 1 , G 2 , . . . , G D/3 , as shown in Figure 2. In particular, each gadget G i is the same graph structure used to prove our local broadcast lower bound. Formally, for each i = 1, 2, , D/3, gadget G i is a dual graph We denote the set of edges connecting the gadgets by E c = {(v i , u i+1 ) : i = 1, 2, . . . , D/3 − 1}. Finally we can define the total set of nodes and edges in the complete dual graph G = (V, E) and G = (V, E ) as follows: We will show statement 1 by applying Theorem 8 to each gadget, statement 2 can be shown using the same proof by applying Theorem 7.
We bound the dissemination of a broadcast message in this graph originating at node u_1. We can view the progression of the message through the chain of gadgets G_1, G_2, ..., G_{D/3} as a sequence of local broadcasts. When the message arrives at a node u_i, it is propagated to the nodes v_1, ..., v_{∆−1} of gadget G_i, and at this point delivering the message to v_i is exactly the local broadcast problem considered in Theorem 8. In that theorem we constructed a sequence of distributions that yields a high running time, with the distribution changing exactly every τ steps, i.e., a distribution D_k used for steps 1 + (k − 1)τ, 2 + (k − 1)τ, ..., kτ. We cannot immediately apply the local broadcast result, because the adversary might not be allowed to change the distribution at the moment the message arrives in a gadget. Moreover, in the global broadcast problem, stations are allowed to delay their transmissions for some number of steps. We solve this problem by keeping the "first" distribution D_1 in each gadget until the message reaches the gadget, at which point the adversary can start the sequence of changes specified by the local broadcast lower bound.
More precisely, let p_1, p_2, ... denote the sequence of probabilities used by algorithm A, and denote its subsequences P_k = (p_{1+(k−1)τ}, p_{2+(k−1)τ}, ..., p_{kτ}). We want to use the distributions D_1, D_2, ... from Theorem 8 in such a way that if G_i is the furthest gadget reached by the message and its nodes are in phase k (i.e., are using probabilities from sequence P_k), then the distribution in gadget G_i is D_k. If the message has not reached a gadget yet, the distribution in that gadget is D_1. Finally, once the message has reached node v_i in gadget G_i, we do not change the distribution in this gadget any more. We need to show that with this construction we do not change the distribution more frequently than once per τ steps. This is true because we only change the distribution in the furthest gadget reached by the message (call it G_i), and we change it from D_k to D_{k+1} only after its stations finish phase k. The steps in which stations delay transmitting until the beginning of the next probability cycle are not counted in the variable X_i; the steps counted by X_i can be seen as a local broadcast. By Theorem 8 we have that Pr[X_i ≤ ∆̂^{1/τ}τ²/(c ln ∆̂)] ≤ 1/2 for some constant c > 1. Moreover, the variables X_i are independent, because the choices of the stations in each gadget are independent, hence we can apply a Chernoff bound to their sum X. Observe that X lower bounds the time of the global broadcast. This shows that global broadcast needs Ω(D∆^{1/τ}τ²/ln ∆) steps with probability at least 1/2. If D is not divisible by 3, we construct our graph with diameter 3⌊D/3⌋ and attach a path of D − 3⌊D/3⌋ (one or two) vertices to node v_{⌊D/3⌋}. This cannot decrease the time of broadcast, hence we get the bound Ω((D − 2)∆^{1/τ}τ²/log ∆) = Ω(D∆^{1/τ}τ²/log ∆).

Correlations
Here we explore a promising direction for the study of broadcast in realistic radio network models. The fading adversary studied above assumes that the distribution draws are independent. As we will show, interesting results are still possible in the more general case where the draws in successive steps are not necessarily independent. More precisely, in this case the adversary chooses a distribution over sequences, of length at least τ, of sets of unreliable edges. A sequence drawn from this distribution determines which unreliable edges are active in successive steps, and after at least τ steps the adversary can decide to change the distribution. In this model, we first show a simple lower bound: any uniform algorithm using a short list of probabilities of length l (our algorithms in previous sections always used lists of length min{τ, log ∆}) needs time Ω(√n/l) on some graphs. Our lower bound uses distributions over sequences of graphs in which the degrees of nodes change by a large amount in successive steps. Such large changes in degree turn out to be crucial: we show that if, in the sequence drawn from the distribution chosen by the adversary, only O(∆^{1/(τ−o(τ))}) edges adjacent to each node can change per step in expectation, then we can obtain an algorithm working in time O(∆^{1/τ}τ log(1/ε)) with probability at least 1 − ε, using a list of probabilities of length O(min{τ, log ∆}).

A Lower Bound for Correlated Distributions
The following lower bound shows that any simple back-off algorithm, similar to the ones presented in Section 3, that uses at most log ∆ probabilities requires time Ω(√∆/log ∆) if arbitrary correlations are permitted. Proposition 1. Any uniform local broadcast algorithm that repeats a procedure consisting of l probabilities requires expected time Ω(√∆/l) on some graph with ∆ = n − 2, even if τ = ∞.
Proof. Denote the procedure used by the algorithm by P. Assume for simplicity that √∆ is a natural number. As our graph we take a connected pair of stars (a similar graph was used in Theorem 7). The first star has arms v_1, v_2, ..., v_∆ and center u; its arms are connected to the center u by reliable edges. The second star has arms v_1, v_2, ..., v_∆ and center v; here the connection from v_1 to v is reliable and all the other connections are unreliable. Note that with this construction graph G is connected. All nodes except v initially hold a message.
The single distribution is defined in the following way. Let e_i = min{1/p_i, ∆}, for i = 1, 2, ..., l, be the estimates used by procedure P, and let ē_1, ē_2, ..., ē_l be the corresponding degree values used below. Let s be a number chosen uniformly at random from {1, 2, ..., l}. In our distribution, the degree of v in step t is d_t = ē_{1+r_t}, where r_t is the remainder of t + s modulo l. More precisely, in step t exactly d_t − 1 edges, chosen at random among the edges between v and v_2, v_3, ..., v_∆, are activated. Observe that before the algorithm starts, the distribution of the degree of node v in each step is simply a uniform draw from the multiset {ē_1, ē_2, ..., ē_l}. But after step 1 the sequence of degrees of v becomes deterministic and depends only on the value s of the shift. The dependencies are designed in such a way that if s = l (which happens with probability 1/l), then in any step t of the algorithm the probability p_t used by the algorithm satisfies either p_t · d_t ≥ √∆ or p_t · d_t < 1/√∆. By Lemma 1 this means that the success probability is at most 1/√∆ in each step, and hence by the union bound the success probability over the whole procedure is at most l/√∆. Thus with probability at least 1/l the algorithm has to repeat procedure P at least √∆/(2l) times to achieve a constant probability of success. Hence the expected time is Ω(√∆/l).
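The shift construction can be checked mechanically. A sketch (helper names are ours): `adversarial_degrees` produces the deterministic degree schedule d_t = ē_{1+r_t} for a given shift, and `always_mismatched` tests the engineered property that every step has p_t·d_t ≥ √∆ or p_t·d_t < 1/√∆, which caps the per-step success probability at 1/√∆ by Lemma 1.

```python
import math

def adversarial_degrees(estimates, shift, steps):
    """Degree schedule d_t = estimates[(t + shift) mod l]: random before the
    run starts, fully deterministic once the shift is fixed."""
    l = len(estimates)
    return [estimates[(t + shift) % l] for t in range(steps)]

def always_mismatched(probs, degrees, delta):
    """Check the adversary's target property: in every step,
    p_t * d_t >= sqrt(delta) or p_t * d_t < 1/sqrt(delta)."""
    root = math.sqrt(delta)
    return all(p * d >= root or p * d < 1 / root
               for p, d in zip(probs, degrees))
```

Pairing a large probability with a large degree (over-collision) and a small probability with a small degree (under-transmission) keeps every step mismatched, whereas a matched pair like p = 2^-14 against degree 2^14 breaks the property.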

Locally Limited Changes
The previous section shows that, under an adversary that is allowed to use arbitrary correlations, any simple procedure needs polynomial time in the worst case.
In this section we consider an adversary that can use correlations but cannot change the degrees too much in successive steps. Of course, once every at most τ steps the adversary is still allowed to define a completely new distribution over the unreliable edges. We want to argue that it is possible to build a simple algorithm resistant to such an adversary. Intuitively, the changes of the degree are problematic only if they are by a large (non-constant) factor. Note, by Lemma 1, that if we perturb the effective degree by only a constant factor, then the bound also changes only by a constant factor. Hence, in order to design an algorithm that is immune to such changes, we should add more "coverage" for the small-degree nodes. We do this by enhancing each phase of algorithm RLB with additional steps in which we assume that the effective degree of a node is small. The adversary may try to avoid a successful transmission in these steps by changing the degree (the adversary knows the probabilities used by the algorithm). But the restriction on the distance by which the adversary can move the degree allows us to define overlapping "zones" such that in two consecutive steps we are sure to find the degree in one of the zones. We also have to make sure that the whole phase of the new algorithm fits into τ steps. We now present algorithm RLBC (Robust Local Broadcast with Correlations). We first show that the algorithm works under an (l, τ)-deterministic adversary that can change at most l edges adjacent to each node per round, and all the edges from E' \ E once every at most τ rounds. Our algorithm will be resistant to a deterministic adversary that can change at most τ∆^{1/(τ−o(τ))} edges adjacent to each node in every step.
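One cycle of the RLBC schedule described above can be sketched as follows. This is a reconstruction under our reading of the pseudocode parameters a, k, e_1, e_2; the clamping of `a` is our addition to keep the toy version well defined for small τ̂:

```python
import math

def rlbc_cycle(delta, tau):
    """Sketch of one RLBC probability cycle: tau_hat - 2a geometric steps
    p_i = k^-i, followed by 2a alternating steps at 1/e1 and 1/e2 that add
    coverage for small effective degrees."""
    tau_hat = min(max(1, int(math.log(delta, 2 * math.e) / 2)), tau)
    a = int(tau_hat / max(1.0, math.log(tau_hat, 2 * math.e))) if tau_hat > 1 else 0
    a = min(a, (tau_hat - 1) // 2)       # our clamp: keep tau_hat - 2a >= 1
    k = delta ** (1.0 / (tau_hat - 2 * a))
    e1 = k * max(a, 1)                   # first small-degree "zone"
    e2 = k * k * tau_hat * max(a, 1)     # second, overlapping "zone"
    cycle = [k ** -i for i in range(1, tau_hat - 2 * a + 1)]
    for _ in range(a):                   # the 2a extra small-degree steps
        cycle += [1 / e1, 1 / e2]
    return cycle
```

The cycle always has exactly τ̂ steps, so a full phase fits into the window during which the adversary may not redefine the distribution.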
Then we show that it also works under a restricted fading adversary with parameters τ and l. The restricted fading adversary can change the distribution arbitrarily once every at most τ steps; if the distribution is not changed, then the expected change of the degree of any node can be at most l. Under these restrictions, the adversary can design arbitrary correlations between successive steps. We show that RLBC works against the restricted fading adversary with l of at most ∆^{1/(τ−o(τ))}.
Proof. Assume that τ ≤ log_{2e} ∆/2 and note that in this case τ̂ = τ. In the opposite case we use the algorithm for τ = log_{2e} ∆/2, which also works for any larger τ. Denote k = ∆^{1/(τ−2a)} and l = kτ/2, and observe that for τ ≥ 1000 we have a > 200, τ − 2a ≥ τ/2 and k ≥ 2. We divide the time into intervals of length τ, called cycles. In each cycle algorithm RLBC repeats the same probabilities: in the first τ − 2a steps of the cycle it uses probabilities p_i = k^{−i} for i = 1, 2, ..., τ − 2a, and in the next 2a steps it uses probabilities 1/e_1 and 1/e_2.

1 Algorithm: RLBC(r, τ)
2 τ̂ ← min{⌊log_{2e} ∆/2⌋, τ}
3 a ← τ̂/log_{2e} τ̂
4 k ← ∆^{1/(τ̂−2a)}
5 e_1 ← k · a
6 e_2 ← k² · τ̂ · a
7 repeat 2r times

We take two consecutive cycles and note that in each such pair of cycles we can find τ consecutive steps in which the distribution over the unreliable edges is the same (since global changes can happen at most once every τ steps) and in which the algorithm uses all the probabilities of a cycle. Call this sequence of steps T = [t_1, ..., t_τ]. Note that in this sequence we have either one full procedure RLB(1, τ − 2a) or parts of two procedures RLB(1, τ − 2a) (call them R_1 and R_2); in the second case, sequence T contains some suffix of R_1 and some prefix of R_2. Connect these steps together into a procedure R, which contains all the steps of procedure RLB(1, τ − 2a), executed in a possibly different order. Fix a receiver v and assume that at least one reliable neighbor of v tries to transmit a message to v. We want to show that in each such pair of cycles, v receives the message independently with probability at least p_s = 1/(8ek). We know, by the definition of the adversary, that the effective degree cannot change by too much between consecutive steps of the same cycle: |d_{t_i}(v) − d_{t_i+1}(v)| ≤ l.
We consider two cases, depending on the effective degree in the first considered step t_1. Case 1: d_{t_1}(v) ≥ 2l². Here we want to show that procedure R is successful with probability at least p_s. Observe that since l ≥ τ, we have l² ≥ lτ, and thus d_{t_i}(v) ≥ d_{t_1}(v) − lτ ≥ d_{t_1}(v)/2 and d_{t_i}(v) ≤ d_{t_1}(v) + lτ ≤ 2d_{t_1}(v) for each i = 1, 2, ..., τ; the effective degree during the whole considered sequence of steps can therefore change by a factor of at most 2. Recall from the definition of RLB that it uses probabilities p_i = k^{−i}. Consider the smallest i such that 1/p_i ≥ 2d_{t_1}(v); by the minimality of i we have 1/(kp_i) ≤ 2d_{t_1}(v). Probability p_i is used in some step of sequence T; call this step t_j. Then, by Lemmas 1 and 2, the transmission in step t_j succeeds with probability at least p_s. Case 2: d_{t_1}(v) < 2l². Here we want to show that a successful transmission occurs with probability at least p_s in one of the 2a additional steps (see lines 7−11 of the pseudocode).
Note that since d_{t_1}(v) < 2l², we have d_{t_i}(v) ≤ d_{t_1}(v) + lτ ≤ 4l². Pick two consecutive steps t_i, t_i + 1 such that in step t_i the algorithm uses probability 1/e_1 and in step t_i + 1 it uses 1/e_2. Note that the considered sequence contains at least a − 1 such pairs. Case 2.1: d_{t_i}(v) ≤ 2l. Here the probability is 1/e_1 and the degree lies in the interval [1, 2l], hence we can apply Lemmas 1 and 2. In each pair the stations make independent choices, hence the probability of failure in all the pairs is bounded by Lemma 3, where in the last inequality we use the fact that a > 20. Thus also in this case, with probability at least 1/(2ek) ≥ p_s, node v receives a message during this cycle. The two considered cases show that any two full cycles deliver the message with probability at least p_s. If we perform at least 2r = 2 ln(1/ε)/p_s cycles, then the probability that v does not receive a message is at most (1 − p_s)^{ln(1/ε)/p_s} ≤ ε.
The case of the deterministic adversary can be generalized to the stochastic restricted adversary. Proof. Fix any receiver v. We know that RLBC(8e ln(1/ε)∆^{1/τ}, τ) solves local broadcast in the presence of an (lτ, τ)-deterministic adversary. In the case with arbitrary correlations, we can still bound the probability that the degree of v changes by too much. Take any two consecutive steps t, t + 1; since the expected change of the degree is at most l, by Markov's inequality the probability that the degree changes by more than 2lτ is at most 1/(2τ). If we pick τ steps as in the proof of Theorem 11, then by the union bound, with probability at least 1/2, in each of these steps the degree changes by at most 2lτ. From here we can use the same analysis as in Theorem 11, obtaining only a constant slowdown compared to the deterministic adversary. Hence RLBC(16e ln(1/ε)∆^{1/τ}, τ) solves local broadcast against the restricted fading adversary with probability at least 1 − ε.