
Self-Stabilizing Fully Adaptive Maximal Matching

Shimon Bitton, Intel Corporation, Haifa, Israel; Yuval Emek, Technion – Israel Institute of Technology, Haifa, Israel; Taisuke Izumi, Osaka University, Japan; Shay Kutten, Technion – Israel Institute of Technology, Haifa, Israel
Abstract

A self-stabilizing randomized algorithm for mending maximal matching (MM) in synchronous networks is presented. Starting from a legal MM configuration and assuming that the network undergoes k faults or topology changes (that may occur in multiple batches), the algorithm is guaranteed to stabilize back to a legal MM configuration in time O(log k) in expectation and with high probability (in k), using constant size messages. The algorithm is simple to implement and is uniform in the sense that it does not assume unique identifiers, nor does it assume any global knowledge of the communication graph, including its size. It relies on a generic probabilistic phase synchronization technique that may be useful for other self-stabilizing problems. The algorithm compares favorably with the existing self-stabilizing MM algorithms in terms of the dependence of its run-time on k, a.k.a. fully adaptive run-time. In fact, this dependence is asymptotically optimal for uniform algorithms that use constant size messages.¹

¹ This paper is extracted from (a much longer) full version [18]. The main contribution of [18] is a self-stabilizing black-box transformer that can be applied to LCL problems in general. The self-stabilizing MM algorithm developed in the current paper is closely related to the one obtained by invoking the transformer on the (fault free) algorithm presented in Sec. 4.1, although the run-time and message size bounds of the former are better than those of the latter.

Keywords and phrases:
self-stabilization, maximal matching, fully adaptive run-time, dynamic graphs
Funding:
Yuval Emek: supported by the Bernard M. Gordon Center for Systems Engineering at the Technion.
Taisuke Izumi: supported by JSPS KAKENHI Grant Numbers 23H04385 and 22H03569.
Shay Kutten: supported by ISF grant 1346/22 and by the Grand Technion Energy Program.
Copyright and License:
© Shimon Bitton, Yuval Emek, Taisuke Izumi, and Shay Kutten; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Computer systems organization → Fault-tolerant network topologies
Related Version:
Full Version: https://arxiv.org/abs/2105.09756 [18]
Editors:
Silvia Bonomi, Letterio Galletta, Etienne Rivière, and Valerio Schiavoni

1 Introduction

Distributed systems are notoriously subject to faults and much of the effort in the design and implementation of distributed algorithms is devoted to fault tolerance issues. Introduced in the seminal paper of Dijkstra [26], self-stabilization is a fundamental approach to fault tolerance requiring that the system recovers from any number of transient faults and stabilizes back to a legal configuration within finite time. The strong guarantees of this natural approach attracted a lot of attention over the years making self-stabilization an extensively studied research field (see [27, 4] for textbooks).

The main performance measure for self-stabilizing algorithms is their stabilization run-time. With very few exceptions (discussed in Sec. 1.3), this is measured as a function of the size of the system: the more processors there are in the system, the longer it takes for the algorithm to stabilize. However, while self-stabilizing systems are guaranteed to recover from any number of transient faults, in most systems, the common scenario is that over a limited time interval, only a few faults occur. Although a small number of faults can seriously hinder the operation of the whole system [51], one may hope that the system recovers from them faster than it recovers from a larger number of faults, regardless of the total number of processors.

With this hope in mind, we turn to the notion of fully adaptive run-time that expresses the recovery time of a self-stabilizing algorithm as a function of the number of faults (supporting also dynamic topology changes), rather than the size of the system. Our goal in the current paper is to investigate this appealing complexity measure in the context of the classic maximal matching (MM) problem as we now turn to define.

1.1 Model, Problem, and Complexity Measure

Consider a distributed algorithm 𝙰𝚕𝚐 running on a message passing communication network represented as a simple undirected graph G = (V, E). Under 𝙰𝚕𝚐, the nodes operate in synchronous rounds so that in each round t ≥ 0, every node v ∈ V executes the following operations: (1) v performs local computation that includes reading from and writing to its registers; (2) v sends messages to (a subset of) its neighbors; and (3) v receives the messages sent to it by its neighbors in round t. Round t is assumed to span the time interval [t, t+1) so that time t marks the beginning of the round. The configuration of 𝙰𝚕𝚐 at time t is an abstract object that encodes the content of all nodes’ registers at that time.

Let N(v) be the set of neighbors of a node v ∈ V and let d(v) = |N(v)| be its degree. We adhere to the conventions of the port numbering model of distributed graph algorithms, where each node v ∈ V is associated with an (arbitrarily chosen) port bijection p_v : [d(v)] → N(v) so that from v’s perspective, node p_v(i) is referred to as neighbor i for each i ∈ [d(v)]. Assuming that p_v(i) = u and p_u(j) = v, when v (resp., u) sends a message to its neighbor i (resp., j), this message is written into a designated port register π_u(j) (resp., π_v(i)) that u (resp., v) can read during the local computation step of the subsequent round. We do not assume that v (resp., u) knows anything about the identity of its neighbor i (resp., j) otherwise. In fact, in the current paper, we focus on uniform algorithms, namely, the nodes do not have unique identifiers, nor do they hold any global knowledge of the graph G (including its size).

Self-Stabilizing Maximal Matching.

An edge subset M ⊆ E is a matching in G if the degree of every node in the graph (V, M) is at most 1. A matching M is maximal if M′ is not a matching for any M ⊊ M′ ⊆ E. We assume that the interface of each node v ∈ V in a distributed maximal matching (MM) algorithm includes a designated matching register μ_v ∈ {⊥} ∪ {𝚄} ∪ {1, …, d(v)} whose semantics is as follows: μ_v = ⊥ indicates that v is still undecided; μ_v = 𝚄 indicates that v is (decided to be) unmatched; and μ_v = i indicates that v is matched to its neighbor i.

To support the consistency of the nodes’ matching registers, the interface of node v also includes, for each i ∈ [d(v)], a designated (matching) status field π_v^σ(i) ∈ {⊥} ∪ {𝚄, 𝙷, 𝙴}, within the port register π_v(i), whose role is to “reflect” the content of μ_{p_v(i)}. Specifically, assuming that p_v(i) = u and p_u(j) = v, the semantics of the status field π_v^σ(i) is as follows: π_v^σ(i) = ⊥ indicates that μ_u = ⊥; π_v^σ(i) = 𝚄 indicates that μ_u = 𝚄; π_v^σ(i) = 𝙷 indicates that μ_u = j (𝙷 stands for ’(matched) here’); and π_v^σ(i) = 𝙴 indicates that μ_u ∈ [d(u)] ∖ {j} (𝙴 stands for ’(matched) elsewhere’).

Assuming that p_v(i) = u and p_u(j) = v, nodes v and u are said to be strongly matched if (1) μ_v = i and μ_u = j; and (2) π_v^σ(i) = π_u^σ(j) = 𝙷. Node v is said to be strongly unmatched if (1) μ_v = 𝚄; (2) π_v^σ(i) = 𝙴 for all i ∈ [d(v)]; and (3) every node u ∈ N(v) is strongly matched. We say that a configuration of a distributed MM algorithm 𝙰𝚕𝚐 is legal if every node v ∈ V is either strongly matched or strongly unmatched, observing that the strongly matched nodes induce a maximal matching in G; we refer to this maximal matching as the output of 𝙰𝚕𝚐.
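To make these conditions concrete, the following minimal sketch (ours, not part of the algorithm) checks whether a configuration is legal; for readability, matching registers point directly at neighbors rather than at ports (as in the convention of Sec. 2), and the names mu, sigma, and neighbors are illustrative:

```python
# mu[v]: matching register of v -- None (undecided), 'U' (unmatched),
# or a neighbor of v; sigma[v][u]: the status field that v reads on the
# port leading to u (one of None, 'U', 'H', 'E').

def strongly_matched(v, mu, sigma):
    u = mu[v]
    if u in (None, 'U'):
        return False
    return mu[u] == v and sigma[v][u] == 'H' and sigma[u][v] == 'H'

def strongly_unmatched(v, mu, sigma, neighbors):
    return (mu[v] == 'U'
            and all(sigma[v][u] == 'E' for u in neighbors[v])
            and all(strongly_matched(u, mu, sigma) for u in neighbors[v]))

def legal(mu, sigma, neighbors):
    # a configuration is legal if every node is strongly matched
    # or strongly unmatched
    return all(strongly_matched(v, mu, sigma)
               or strongly_unmatched(v, mu, sigma, neighbors)
               for v in neighbors)
```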

A distributed algorithm is self-stabilizing if it is guaranteed to reach a legal configuration in finite time from any initial configuration (see [26, 27]). This notion is captured by a malicious adversary, who knows the algorithm but is oblivious to its coin tosses, that determines the algorithm’s initial configuration.

For reasons that will become clear soon, in the current paper, the adversary is allowed to “intervene” in the algorithm’s execution also after time 0 (corresponding to the initial configuration) and modify the content of any register maintained by the algorithm, including the aforementioned port registers π_v(·) (and their status fields π_v^σ(·)) and matching registers μ_v. The adversary can also impose dynamic topology changes, including the addition (resp., removal) of nodes and edges, which are reflected in adding new port registers π_v(·) (resp., removing existing port registers) and updating the port bijections p_v. In this regard, a node v ∈ V is said to be manipulated (by the adversary) in round t if at least one of the following three events occurs during round t: (i) the content of (any of) v’s registers is modified; (ii) the port bijection p_v is modified; or (iii) v is added to the graph as a new node. (Notice that condition (ii) includes topological changes involving any edges incident on v as well as “rewiring” of v’s ports.)

Fully Adaptive Run-Time.

Let 𝙰𝚕𝚐 be a distributed self-stabilizing MM algorithm. Consider times 0 ≤ t_adv < t₀ and an integer k > 0 and suppose that (1) 𝙰𝚕𝚐 is in a legal configuration at time t_adv; (2) the adversary manipulates k nodes during the time interval [t_adv, t₀); and (3) the adversary does not manipulate any node from time t₀ onwards. We say that 𝙰𝚕𝚐 has fully adaptive run-time T(k) for some function T : ℤ_{>0} → ℝ_{>0} if it is guaranteed that there exist some time t and maximal matching M such that (i) 𝙰𝚕𝚐 is in a legal configuration with output M from time t onwards; and (ii) t ≤ t₀ + T(k). If t (and M) are random variables determined by the coin tosses of 𝙰𝚕𝚐, then we require that condition (ii) holds in expectation and with high probability, where throughout this paper, the term with high probability (abbreviated w.h.p.) refers to events that occur with probability at least 1 − k^{−c} for a constant c that can be made arbitrarily large.

It is important to point out that the number k of manipulated nodes is chosen by the adversary and is not known to the algorithm that also does not know any bound on k.

1.2 Our Contribution

Our main contribution is cast in the following theorem.

Theorem 1.1.

There exists a randomized uniform self-stabilizing MM algorithm that uses messages of size O(1) whose fully adaptive run-time is O(log k).

The reader may have noticed that the guarantees of our algorithm do not depend in any way on the size of the graph G. Indeed, Theorem 1.1 holds also for infinite graphs as long as the node degrees are finite (though not necessarily bounded).² We believe that distributed computation in infinite graphs is a fascinating research domain and advocate using the notion of (fully) adaptive run-time as a natural complexity measure when it comes to self-stabilizing algorithms in this domain.

² In infinite graphs, the notion of fully adaptive run-time can be extended to support an infinite number of manipulations as long as the manipulated nodes can be partitioned into clusters of size at most k which are sufficiently far away from each other.

As established in [48], any uniform (non-self-stabilizing) distributed MM algorithm with constant size messages requires Ω(log n) time in expectation when running on rings (and paths) of size n. This implies that any uniform self-stabilizing MM algorithm with constant size messages requires Ω(log k) time in expectation to stabilize from k ≤ n transient faults, which means that the fully adaptive run-time bound promised in Thm. 1.1 is asymptotically optimal.

1.3 Related Work and Discussion

Self-stabilizing symmetry breaking is an extensively studied research topic, see, e.g., [38] for a survey. In particular, there is a long line of works on self-stabilizing algorithms for maximal matching (and closely related problems) [42, 64, 21, 22, 40, 37, 35, 36, 57, 58, 63, 7, 46, 24, 5, 43, 65, 15]. Under synchronous schedules (assumed in the current paper), the state-of-the-art MM self-stabilization run-time upper bounds are O(log n), obtained by the (randomized uniform) algorithm of Turau [65], and O(Δ + log n), obtained by the (deterministic non-uniform) algorithm of Barenboim et al. [15], where n is the number of nodes in the graph and Δ is its maximum degree. Notice that those run-time upper bounds are not fully adaptive.

The MM problem received a lot of attention also in the literature on fault free (non-self-stabilizing) distributed graph algorithms, see, e.g., [45, 44, 54, 39, 61, 16, 32, 30, 12, 33]. Some of the algorithmic ideas in the current paper are inspired by ideas originated in that literature, e.g., [44]. Another source of inspiration is the algorithm of [66] for the related task of approximating maximum matching.

The self-stabilizing research community has a special interest in generic transformers that translate non-self-stabilizing algorithms into self-stabilizing ones, see, e.g., [10, 8, 9, 2, 1, 53, 13]. None of the existing transformers though can produce self-stabilizing algorithms that use constant size messages and run in sub-diameter time (as the algorithm developed in the current paper does). Moreover, with the exception of the transformer presented in the full version [18], none of the existing transformers (provably) guarantees a fully adaptive stabilization run-time, regardless of the graph’s size and diameter or any probabilistic assumption on the distribution of faults.

There has been a lot of interest in saving run-time in dynamic models by adapting to the actual load on the algorithm. In the non-self-stabilizing computing context, Lamport suggested that an algorithm should benefit from the common case where the number of contending processes is small [52]. A maximum matching approximation can be mended following a single topological change in constant distributed time [55]. A distributed algorithm that mends a maximal independent set after a single change in constant time was presented in [20]. Mending other local functions in a similar setting, also in constant distributed time, was addressed in [47]. Mending after a small number of changes was addressed in [59]. In the context of distributed fault tolerance, a similar motivation stands behind the notion of obstruction freedom [41].

In highly dynamic graphs, mending of various functions has been treated recently [14, 19]. Comparing these papers to the current one, their algorithms are not self-stabilizing. Moreover, the authors of [19] optimize the amortized time complexity (vs. the worst case time complexity used in the current paper) assuming that the graph is initially empty, whereas the run-time of the algorithm of [14] is expressed as a (logarithmic) function of n rather than k (so it is not adaptive). Both papers rely on messages of super-constant size.

For non-distributed algorithms, the area of dynamic algorithms started with algorithms that had already computed a complete and correct output (say, a minimum spanning tree [31]) and need to mend the output following a single change to the input. A non-distributed update of MM in dynamic graphs following the insertion or deletion of a single edge is performed in amortized (expected) O(log n) time [17] or worst case (deterministic) O(√m) time, where m denotes the number of edges [60].

In the context of self-stabilization, the notion of super-stabilization [28] is concerned with decreasing the impact of topological changes on the complexity of self-stabilizing algorithms, and the notion of containment [34] has been used to ensure that if only a single fault occurred, then the complexity is smaller than in the general fault case. For the maximal independent set problem, multiple simultaneous changes to the input were handled in [50], assuming a model that is only partially self-stabilizing (the adversary may change the data structure, but not other registers, e.g., the program counter). Moreover, while the run-time measure there is adaptive, expressed as a function of the number k of faults, it is not fully adaptive since all the faults are assumed to occur in a single batch before the mending starts. In contrast, the run-time analysis in the current paper is fully adaptive in the sense that it allows the adversary to divide the k faults (and/or topology changes) over multiple batches.

Algorithms whose time adaptivity was at least linear in the number of faults for various tasks were suggested in [25, 6, 49, 11].

1.4 Phase Based Distributed Algorithms

A common design pattern in distributed algorithms working under the synchronous fault free model is to divide the execution into phases so that each phase is composed of a fixed number ϕ of single round steps, grouped together to fulfill a task that cannot be achieved in a single round. The execution then progresses by running the phases in succession, where step 0 ≤ j ≤ ϕ−1 of phase i = 0, 1, … is executed in round t = i·ϕ + j. A key property of this design pattern is that the nodes execute their phases in synchrony so that step j of phase i is executed by all nodes in the same round.

More often than not, the algorithm runs the same code in each phase, which means that the nodes do not have to keep track of the current round t; rather, it suffices that they have access to a step oracle, denoted hereafter by 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙, that simply returns the current step j = t mod ϕ. We refer to the family of such algorithms as phase based algorithms and note that this family includes the distributed graph algorithm “classics” of [23] (3-coloring of trees), [56, 3] (maximal independent set), and [44] (maximal matching), as well as more recent distributed graph algorithms, see, e.g., [16, 29, 62, 32].
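In code, the pattern amounts to a simple driver loop; the sketch below (ours, with illustrative names) makes the role of the step oracle explicit:

```python
PHI = 3  # phase length; the MM algorithm of Sec. 4 uses phases of length 3

def get_step(t):
    # the step oracle: step j of phase i is executed in round t = i*PHI + j
    return t % PHI

def run_phase_based(step_handlers, num_rounds):
    # step_handlers[j] is the (per-node) code executed in step j of every
    # phase; in a fault free synchronous network all nodes agree on t, so
    # they execute their phases in synchrony
    for t in range(num_rounds):
        step_handlers[get_step(t)](t)
```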

The step oracle discussion is usually avoided in the literature on synchronous fault free algorithms as these algorithms can easily maintain a register that keeps track of t mod ϕ. This is not the case in the self-stabilizing setting though: the adversary may modify this register, causing the nodes to execute their phases “out of sync” indefinitely. Indeed, we cannot assume that the nodes in a self-stabilizing algorithm have free access to a step oracle, hence the adaptation of a phase based algorithm designed to operate in a fault free environment to the self-stabilization setting imposes a significant challenge (cf. [65]).

To overcome this challenge, we introduce a technique called probabilistic phase synchronization. In high level terms (refer to Sec. 3 for a formal exposition), this technique replaces the step oracle by a generic (algorithm independent) module, referred to as 𝙿𝙿𝚂, whose implementation is based on a simple ergodic Markov chain, maintained independently at each node. The state space of the Markov chain is {⊥, −1, 0, 1, …, ϕ−1}, where states 0, 1, …, ϕ−1 are identified with the ϕ steps of the phase and ⊥ and −1 are auxiliary states whose role is explained in the sequel.

Using the 𝙿𝙿𝚂 module, the adaptation of a fault free phase based algorithm 𝙰𝚕𝚐 into a self-stabilizing algorithm becomes almost a “black-box compiler” based on the following two modifications: (i) Replace each call to 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙 by a call to a procedure named 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂 that returns the current state of 𝙿𝙿𝚂. (ii) When executing the phase, node v restricts its communication to its phase synchronized neighbors, namely, adjacent nodes whose 𝙿𝙿𝚂 state agrees with that of v.

We design the 𝙿𝙿𝚂 module so that (a) if nodes u and v start a phase in the same round, then they remain phase synchronized until the phase ends; and (b) the Markov chain admits a poly(ϕ) mixing time, ensuring that every pair of adjacent nodes become phase synchronized sufficiently often. As we show in the sequel (see [18] for more details), these two properties guarantee progress on behalf of the algorithm, allowing us to bound the stabilization time. While our analysis is specific to maximal matching, the aforementioned adaptation is fairly generic and serves as the cornerstone for the transformer discussed in the full version [18].

2 Preliminaries

The algorithms are presented from the perspective of a node v ∈ V and, for clarity of the exposition, they are described as if v can address its neighbors directly, rather than through the proxy of the port bijection p_v(·). Specifically, we use 𝐼𝑛_v(u) to denote the message received by v from neighbor u ∈ N(v), recalling that this message is actually written into the port register π_v(p_v^{−1}(u)) and that v does not know u beyond its (local) port number p_v^{−1}(u). Likewise, we define m_v to be the neighbor pointed to by the matching register μ_v if μ_v ∈ [d(v)]; and m_v = μ_v otherwise (i.e., if μ_v = ⊥ or μ_v = 𝚄). Implementing our algorithm based on the proxy of the port bijections (as defined in Sec. 1.1) is straightforward.

For a node v ∈ V and a register 𝚛𝚎𝚐_v of v, let 𝚛𝚎𝚐_{v,t} denote the value of 𝚛𝚎𝚐_v at time t, that is, at the beginning of round t prior to any local computation. Notice that if v is not manipulated during the time interval [t−1, t], as is guaranteed for every t > t₀, then the value 𝚛𝚎𝚐_{v,t} is also the value of 𝚛𝚎𝚐_v at the end of round t−1.

Given some graph H and a node v in H, we use the notation N_H(v) and d_H(v) for v’s neighbor set and degree, respectively, in H.

3 Probabilistic Phase Synchronization

Let 𝙰𝚕𝚐 be a phase based (fault free) distributed algorithm with phase length ϕ. We assume that under 𝙰𝚕𝚐, each node v ∈ V calls 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙 in every round and proceeds according to the step number j ∈ {0, 1, …, ϕ−1} it receives. Our goal in this section is to apply the probabilistic phase synchronization “compiler” to 𝙰𝚕𝚐, obtaining an algorithm 𝙰𝚕𝚐′ that has no access to 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙.

Consider a node v ∈ V. The design of 𝙰𝚕𝚐′ (from 𝙰𝚕𝚐) is based on the 𝙿𝙿𝚂 module that can be viewed as an ergodic Markov chain over the state space {⊥, −1, 0, 1, …, ϕ−1} that v runs independently of the other nodes. The current state of (v’s copy of) 𝙿𝙿𝚂 is stored in a register denoted by 𝑠𝑡𝑒𝑝_v; to minimize the interface between 𝙰𝚕𝚐′ and the generic module 𝙿𝙿𝚂, the access of 𝙰𝚕𝚐′ to 𝑠𝑡𝑒𝑝_v is made by means of procedure 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂 that simply returns the current value of this register. The main idea is that each call of 𝙰𝚕𝚐 to 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙 is replaced by a call to 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂 under 𝙰𝚕𝚐′.

Besides 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂, the 𝙿𝙿𝚂 module has one additional procedure whose role is to implement the Markov chain, advancing it in every round. This procedure is denoted by 𝚂𝚝𝚎𝚙_𝙲𝚘𝚞𝚗𝚝𝚎𝚛_ϕ (see Algorithm 1) and is assumed to be invoked by 𝙿𝙿𝚂 in every round, towards the end of the local computation stage, after any call to 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂 has already been made. We emphasize that 𝚂𝚝𝚎𝚙_𝙲𝚘𝚞𝚗𝚝𝚎𝚛_ϕ is the only procedure that writes to the 𝑠𝑡𝑒𝑝_v register; any other access of 𝙰𝚕𝚐′ to this register is restricted to the read-only procedure 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂. The fundamental properties of 𝚂𝚝𝚎𝚙_𝙲𝚘𝚞𝚗𝚝𝚎𝚛_ϕ are summarized in the following observation (see Fig. 1 for an illustration of the underlying Markov chain).

Pseudocode 1 𝚂𝚝𝚎𝚙_𝙲𝚘𝚞𝚗𝚝𝚎𝚛_ϕ(), code for node v. Executed in every round at the end of the local computation stage.
1: if 𝑠𝑡𝑒𝑝_v = ⊥ then 𝑠𝑡𝑒𝑝_v ← −1 w.p. 1/2; ⊥ w.p. 1/2
2: else if 𝑠𝑡𝑒𝑝_v = ϕ−1 then 𝑠𝑡𝑒𝑝_v ← ⊥
3: else 𝑠𝑡𝑒𝑝_v ← 𝑠𝑡𝑒𝑝_v + 1
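The following sketch (ours) transcribes Pseudocode 1 directly, representing the state ⊥ by the Python constant BOT:

```python
import random

BOT = 'bot'  # the auxiliary state denoted ⊥ in the text

def step_counter(state, phi):
    # one transition of the PPS Markov chain (Pseudocode 1)
    if state == BOT:
        return -1 if random.random() < 0.5 else BOT  # line 1
    if state == phi - 1:
        return BOT                                   # line 2
    return state + 1  # line 3; also covers the -1 -> 0 transition
```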
Observation 3.1.

For every time t ≥ t₀ and node v ∈ V, we have

  • ℙ(𝑠𝑡𝑒𝑝_{v,t+1} = j | 𝑠𝑡𝑒𝑝_{v,t} = j−1) = 1 for every 0 ≤ j ≤ ϕ−1;

  • ℙ(𝑠𝑡𝑒𝑝_{v,t+1} = ⊥ | 𝑠𝑡𝑒𝑝_{v,t} = ϕ−1) = 1; and

  • ℙ(𝑠𝑡𝑒𝑝_{v,t+1} = ⊥ | 𝑠𝑡𝑒𝑝_{v,t} = ⊥) = ℙ(𝑠𝑡𝑒𝑝_{v,t+1} = −1 | 𝑠𝑡𝑒𝑝_{v,t} = ⊥) = 1/2.

This holds independently of any coin toss of v prior to time t and of any coin toss of all other nodes.

Consider some round t ≥ 0 and for j ∈ {−1, 0, 1, …, ϕ−1}, let

Σ_t^j = {u ∈ V | 𝑠𝑡𝑒𝑝_{u,t} = j}.

We say that adjacent nodes u, v ∈ V are phase synchronized in round t if they both belong to Σ_t^j for some j ∈ {−1, 0, 1, …, ϕ−1}. Notice that node v cannot be phase synchronized (with any node) in round t if 𝑠𝑡𝑒𝑝_{v,t} = ⊥.

Fix some node v ∈ V. If v ∈ Σ_t^j for some j ∈ {0, 1, …, ϕ−1}, then v runs in round t under 𝙰𝚕𝚐′ the same code that 𝙰𝚕𝚐 runs in step j with one crucial difference: v progresses as if it operates in the graph induced by Σ_t^j. In other words, for the sake of the simulation of 𝙰𝚕𝚐, node v restricts its interaction to its phase synchronized neighbors, that is, the nodes in N(v) ∩ Σ_t^j, ignoring any node in N(v) ∖ Σ_t^j. Notice that v still sends messages to all its neighbors, however the messages sent to the nodes in N(v) ∖ Σ_t^j do not carry any 𝙰𝚕𝚐-related information.

To make this possible, each node u ∈ V appends the value of 𝑠𝑡𝑒𝑝_{u,t−1} to its outgoing messages (sent to all its neighbors) in round t−1, thus allowing its neighbors v ∈ N(u) to determine whether they are phase synchronized with u in round t. Indeed, Obs. 3.1 ensures that if the message that a node v receives from its neighbor u in round t indicates that 𝑠𝑡𝑒𝑝_{u,t−1} = j−1 for some 0 ≤ j ≤ ϕ−1, then v can deduce that 𝑠𝑡𝑒𝑝_{u,t} = j unless the adversary manipulates u or v during the time interval [t−1, t].

The role of state −1 is to allow the nodes to identify their phase synchronized neighbors already in step j = 0 so that they can simulate a full phase of 𝙰𝚕𝚐. Specifically, if u ∈ Σ_t^{−1}, then node u does not do anything in round t other than appending 𝑠𝑡𝑒𝑝_{u,t} = −1 to its outgoing messages, allowing any neighbor v ∈ Σ_t^{−1} to identify that it is phase synchronized with u in the next round t+1.

Similarly to state −1, state ⊥ is also special in the sense that it does not correspond to any step of 𝙰𝚕𝚐 and as such, corresponds to an “empty code”: if 𝑠𝑡𝑒𝑝_{u,t} = ⊥, then node u does not do anything in round t other than appending 𝑠𝑡𝑒𝑝_{u,t} = ⊥ to its outgoing messages.

We now turn to bound the mixing time of the Markov chain associated with 𝙿𝙿𝚂. To this end, let S_ϕ = {⊥, −1, 0, 1, …, ϕ−1} denote its state space. The following lemma is established by showing that this ergodic Markov chain belongs to a family of Markov chains that has been identified and analyzed in [67]; its proof is deferred to Appendix A.

Lemma 3.2.

Fix some time t′ ≥ t₀, a node v, and j₀ ∈ S_ϕ. For every 0 < ϵ < 1, there exists a time t̂ = t′ + O(log(1/ϵ) · ϕ³) such that for every t ≥ t̂, it holds that

ℙ(𝑠𝑡𝑒𝑝_{v,t} = j | 𝑠𝑡𝑒𝑝_{v,t′} = j₀) ≥ 2(1−ϵ)/(ϕ+3) if j = ⊥, and ≥ (1−ϵ)/(ϕ+3) otherwise.

This holds independently of any coin toss of v prior to time t′ and of any coin toss of all other nodes.

Corollary 3.3.

Fix some time t′ ≥ t₀, a node v, and j₀ ∈ S_ϕ. For ϵ = 1 − (ϕ+3)/(2ϕ) and τ = O(ϕ³), it holds that ℙ(𝑠𝑡𝑒𝑝_{v,t′+τ} = −1 | 𝑠𝑡𝑒𝑝_{v,t′} = j₀) ≥ 1/(2ϕ). This holds independently of any coin toss of v prior to time t′ and of any coin toss of all other nodes.
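The corollary can be sanity-checked empirically by running the chain (the step_counter sketch above) from an arbitrary initial state; τ = 200 below is an illustrative stand-in for the O(ϕ³) mixing bound:

```python
def estimate_prob_minus_one(phi, tau, start, trials=100_000):
    # empirical estimate of P(step at time t'+tau is -1 | start state)
    hits = 0
    for _ in range(trials):
        state = start
        for _ in range(tau):
            state = step_counter(state, phi)
        hits += (state == -1)
    return hits / trials

# For phi = 4, the stationary probability of state -1 is 1/(phi+3) = 1/7,
# which indeed exceeds the promised threshold 1/(2*phi) = 1/8.
print(estimate_prob_minus_one(phi=4, tau=200, start=0))
```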

4 A Self-Stabilizing MM Algorithm

In this section, we develop the self-stabilizing MM algorithm promised in Thm. 1.1, referred to as 𝚂𝚂𝙼𝙼. First, we present a phase based fault free MM algorithm, referred to as 𝙵𝙵𝙼𝙼, similar in its spirit to the classic distributed algorithm of Israeli and Itai [44].³ Next, we apply our probabilistic phase synchronization technique to 𝙵𝙵𝙼𝙼 and slightly modify it to obtain 𝚂𝚂𝙼𝙼.

³ Israeli and Itai [44] present their MM algorithm under the CRCW-PRAM model, but by now, it is better known in its implementation as a distributed message passing algorithm (cf. [66]).

We use 𝑂𝑢𝑡_v(u) to denote the message sent from v to u; the algorithms are described so that the content of this register is constructed (during the local computation) gradually until it is sent to u. Recall that every message includes a status field, referred to hereafter as the σ-field, that is part of the node’s interface (see Sec. 1.1). The status field reflects the node’s matching register and is set before the message is sent (see Algorithm 2). The messages include additional fields on top of the σ-field; we use a superscript x to refer to the x-field of the message so that 𝑂𝑢𝑡_v^x(u) and 𝐼𝑛_v^x(u) denote the x-fields in the messages sent from v to u and received by v from u, respectively. For ease of reference, Table 1 summarizes the various registers of 𝙵𝙵𝙼𝙼 and 𝚂𝚂𝙼𝙼 and the values they can hold.

Pseudocode 2 𝙵𝚒𝚗𝚊𝚕𝚒𝚣𝚎_𝚊𝚗𝚍_𝚂𝚎𝚗𝚍_𝙼𝚎𝚜𝚜𝚊𝚐𝚎𝚜(), code for node v.
1: if m_v = ⊥ or m_v = 𝚄 then
2:   for all u ∈ N(v) do 𝑂𝑢𝑡_v^σ(u) ← m_v
3: else
4:   𝑂𝑢𝑡_v^σ(m_v) ← 𝙷
5:   for all u ∈ N(v) ∖ {m_v} do 𝑂𝑢𝑡_v^σ(u) ← 𝙴
6: for all u ∈ N(v) do send 𝑂𝑢𝑡_v(u) to u

4.1 Algorithm 𝙵𝙵𝙼𝙼

Like the algorithm of [44], algorithm 𝙵𝙵𝙼𝙼, presented in Algorithm 3, is phase based with a phase of length 3. In round 0 of the phase, each undecided node v tosses an unbiased coin that determines if v is active or passive (for the duration of the current phase); active nodes then send a matching request to an undecided neighbor picked uniformly at random. In round 1 of the phase, a passive node v that received a matching request from at least one neighbor picks one such neighbor u (arbitrarily) and replies that u’s matching request is accepted. Finally, in round 2 of the phase, each edge over which a matching request was accepted is added to the output MM. To implement this logic, the messages sent by 𝙵𝙵𝙼𝙼 use one additional field (on top of the aforementioned σ-field), namely, the r-field that takes values in {⊥, 𝙼𝚛𝚎𝚚, 𝙰𝚌𝚌} (𝙼𝚛𝚎𝚚 and 𝙰𝚌𝚌 stand for ’matching request’ and ’accept’, respectively).

Pseudocode 3 𝙵𝙵𝙼𝙼(), code for node v.
1: for all u ∈ N(v) do 𝑂𝑢𝑡_v^r(u) ← ⊥
2: if m_v = ⊥ and 𝐼𝑛_v^σ(u) = 𝙴 for all u ∈ N(v) then m_v ← 𝚄
3: if m_v = ⊥ then
4:   if 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙() = 0 then
5:     𝑐ℎ𝑜𝑠𝑒𝑛_v ← ⊥
6:     𝑎𝑐𝑡𝑖𝑣𝑒_v ← 𝑡𝑟𝑢𝑒 w.p. 1/2; 𝑓𝑎𝑙𝑠𝑒 w.p. 1/2
7:     if 𝑎𝑐𝑡𝑖𝑣𝑒_v then
8:       P_v ← {u ∈ N(v) | 𝐼𝑛_v^σ(u) = ⊥}
9:       if |P_v| > 0 then
10:        𝑐ℎ𝑜𝑠𝑒𝑛_v ← node picked from P_v uniformly at random
11:        𝑂𝑢𝑡_v^r(𝑐ℎ𝑜𝑠𝑒𝑛_v) ← 𝙼𝚛𝚎𝚚
12:  if 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙() = 1 then
13:    if 𝑎𝑐𝑡𝑖𝑣𝑒_v = 𝑓𝑎𝑙𝑠𝑒 then
14:      P_v ← {u ∈ N(v) | 𝐼𝑛_v^{σ,r}(u) = (⊥, 𝙼𝚛𝚎𝚚)}
15:      if |P_v| > 0 then
16:        𝑐ℎ𝑜𝑠𝑒𝑛_v ← node picked arbitrarily from P_v
17:        𝑂𝑢𝑡_v^r(𝑐ℎ𝑜𝑠𝑒𝑛_v) ← 𝙰𝚌𝚌
18:  if 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙() = 2 then
19:    if 𝑎𝑐𝑡𝑖𝑣𝑒_v then
20:      if 𝑐ℎ𝑜𝑠𝑒𝑛_v ≠ ⊥ and 𝐼𝑛_v^r(𝑐ℎ𝑜𝑠𝑒𝑛_v) = 𝙰𝚌𝚌 then m_v ← 𝑐ℎ𝑜𝑠𝑒𝑛_v
21:    else if 𝑐ℎ𝑜𝑠𝑒𝑛_v ≠ ⊥ then m_v ← 𝑐ℎ𝑜𝑠𝑒𝑛_v
22: 𝙵𝚒𝚗𝚊𝚕𝚒𝚣𝚎_𝚊𝚗𝚍_𝚂𝚎𝚗𝚍_𝙼𝚎𝚜𝚜𝚊𝚐𝚎𝚜()
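For intuition, here is a centralized simulation of 𝙵𝙵𝙼𝙼 (our sketch; it mirrors the three steps of Pseudocode 3 but lets the simulator read neighbors’ fields directly instead of exchanging messages):

```python
import random

def ffmm(neighbors, phases=100):
    # neighbors: dict mapping node -> set of adjacent nodes.
    # Returns m with m[v] in {None, 'U'} or equal to v's matched neighbor.
    m = {v: None for v in neighbors}
    for _ in range(phases):
        undecided = [v for v in neighbors if m[v] is None]
        # step 0: undecided nodes toss coins; active nodes send Mreq to a
        # uniformly random undecided neighbor
        active = {v: random.random() < 0.5 for v in undecided}
        request = {}
        for v in undecided:
            if active[v]:
                pool = [u for u in neighbors[v] if m[u] is None]
                if pool:
                    request[v] = random.choice(pool)
        # step 1: each passive node accepts one incoming Mreq (arbitrarily)
        accept = {}
        for v in undecided:
            if not active[v]:
                reqs = [u for u in undecided
                        if active[u] and request.get(u) == v]
                if reqs:
                    accept[v] = reqs[0]
        # step 2: each accepted edge joins the matching
        for v, u in accept.items():
            m[v], m[u] = u, v
        # line 2 of Pseudocode 3: an undecided node whose neighbors are all
        # matched elsewhere becomes decided-unmatched
        for v in neighbors:
            if m[v] is None and all(m[u] not in (None, 'U')
                                    for u in neighbors[v]):
                m[v] = 'U'
    return m
```

On any finite graph, the matched pairs in the returned m form a matching, and once every node is decided (matched or 'U') the matching is maximal; a logarithmic number of phases suffices for this in expectation (cf. [44]).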

4.2 Algorithm 𝚂𝚂𝙼𝙼

Our self-stabilizing algorithm 𝚂𝚂𝙼𝙼, presented in Algorithm 5, is obtained from 𝙵𝙵𝙼𝙼 through the following two modifications: (1) We add an error detection subroutine, presented in Algorithm 4, which is invoked in every round (see line 2 in Algorithm 5). In this subroutine, node v checks whether its available local information indicates that it is either strongly matched or strongly unmatched; if it is neither, then v becomes undecided by setting m_v ← ⊥. (2) We apply the probabilistic phase synchronization “compiler” of Sec. 3. To this end, the outgoing messages are augmented with an additional s-field (which stands for ’step’). This field is used to communicate v’s current step: 𝑂𝑢𝑡_v^s(·) is set to the value of 𝑠𝑡𝑒𝑝_v, obtained by calling 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂, before the message is sent (see line 23 in Algorithm 5).

Pseudocode 4 𝙳𝚎𝚝𝚎𝚌𝚝(), code for node v.
1: if m_v ∈ N(v) and 𝐼𝑛_v^σ(m_v) = 𝙷 then return
2: if m_v = 𝚄 and 𝐼𝑛_v^σ(u) = 𝙴 for all u ∈ N(v) then return
3: m_v ← ⊥
Pseudocode 5 𝚂𝚂𝙼𝙼(), code for node v.
1: for all u ∈ N(v) do 𝑂𝑢𝑡_v^r(u) ← ⊥
2: 𝙳𝚎𝚝𝚎𝚌𝚝()
3: if m_v = ⊥ and 𝐼𝑛_v^σ(u) = 𝙴 for all u ∈ N(v) then m_v ← 𝚄
4: if m_v = ⊥ then
5:   if 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂() = 0 then
6:     𝑐ℎ𝑜𝑠𝑒𝑛_v ← ⊥
7:     𝑎𝑐𝑡𝑖𝑣𝑒_v ← 𝑡𝑟𝑢𝑒 w.p. 1/2; 𝑓𝑎𝑙𝑠𝑒 w.p. 1/2
8:     if 𝑎𝑐𝑡𝑖𝑣𝑒_v then
9:       P_v ← {u ∈ N(v) | 𝐼𝑛_v^{σ,s}(u) = (⊥, −1)}
10:      if |P_v| > 0 then
11:        𝑐ℎ𝑜𝑠𝑒𝑛_v ← node picked from P_v uniformly at random
12:        𝑂𝑢𝑡_v^r(𝑐ℎ𝑜𝑠𝑒𝑛_v) ← 𝙼𝚛𝚎𝚚
13:  if 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂() = 1 then
14:    if 𝑎𝑐𝑡𝑖𝑣𝑒_v = 𝑓𝑎𝑙𝑠𝑒 then
15:      P_v ← {u ∈ N(v) | 𝐼𝑛_v^{σ,r,s}(u) = (⊥, 𝙼𝚛𝚎𝚚, 0)}
16:      if |P_v| > 0 then
17:        𝑐ℎ𝑜𝑠𝑒𝑛_v ← node picked arbitrarily from P_v
18:        𝑂𝑢𝑡_v^r(𝑐ℎ𝑜𝑠𝑒𝑛_v) ← 𝙰𝚌𝚌
19:  if 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂() = 2 then
20:    if 𝑎𝑐𝑡𝑖𝑣𝑒_v then
21:      if 𝑐ℎ𝑜𝑠𝑒𝑛_v ≠ ⊥ and 𝐼𝑛_v^r(𝑐ℎ𝑜𝑠𝑒𝑛_v) = 𝙰𝚌𝚌 then m_v ← 𝑐ℎ𝑜𝑠𝑒𝑛_v
22:    else if 𝑐ℎ𝑜𝑠𝑒𝑛_v ≠ ⊥ then m_v ← 𝑐ℎ𝑜𝑠𝑒𝑛_v
23: for all u ∈ N(v) do 𝑂𝑢𝑡_v^s(u) ← 𝙶𝚎𝚝_𝚂𝚝𝚎𝚙_𝙿𝙿𝚂()
24: 𝙵𝚒𝚗𝚊𝚕𝚒𝚣𝚎_𝚊𝚗𝚍_𝚂𝚎𝚗𝚍_𝙼𝚎𝚜𝚜𝚊𝚐𝚎𝚜()

5 Analysis

Assume that 𝚂𝚂𝙼𝙼 is in a legal configuration at time t_adv and that the adversary manipulates k > 0 nodes during the time interval [t_adv, t₀) and does not manipulate any node from time t₀ onwards. Our goal in this section is to establish Thm. 1.1 by proving that 𝚂𝚂𝙼𝙼 stabilizes to a legal configuration by time t₀ + O(log k) in expectation and w.h.p. (in k).

Recall that the phase length of 𝚂𝚂𝙼𝙼 is φ = ϕ + 1, where ϕ = 3 is the phase length of 𝙵𝙵𝙼𝙼. Although the analysis presented in this section is dedicated to 𝚂𝚂𝙼𝙼, it is performed with respect to a general parameter ϕ (and φ) so that it can be applied to self-stabilizing algorithms derived from other phase based algorithms. Since the adversary may add/remove nodes/edges, the graph may change during the time interval [t_adv, t₀); in what follows, we reserve G = (V, E) for the graph that exists at time t₀, recalling that this is also the graph at any time t > t₀.

5.1 Outline

This section presents the general structure of 𝚂𝚂𝙼𝙼’s analysis. It hinges on Prop. 5.1–5.6, whose combination yields Thm. 1.1. We start with Prop. 5.1, ensuring that once the algorithm reaches a legal configuration, it stays in a legal configuration.

Proposition 5.1.

For every t ≥ t₀ and v ∈ V, if v is strongly matched (resp., unmatched) at time t, then v is strongly matched (resp., unmatched) at time t+1.

We say that v is locally matched to node u ∈ N(v) if (i) m_v = u; and (ii) 𝐼𝑛_v^σ(u) = 𝙷; when the identity of u is not important, we may refer to v as being locally matched without explicitly mentioning u. Notice that by definition, if v is strongly matched, then v is also locally matched. Given some t ≥ t₀, let V_t be the set of nodes that are not locally matched at time t, let E_t = E ∩ (V_t × V_t), and let G_t = (V_t, E_t) be the graph induced on G by V_t. When combined with Prop. 5.1, the following proposition implies that we do not have to worry about the nodes that are no longer in G_t.

Proposition 5.2.

For every t > t₀ and v ∈ V, if v is locally matched at time t, then v is strongly matched at time t.

By combining Prop. 5.1 and 5.2 with the following proposition, we deduce that if E_t = ∅, then 𝚂𝚂𝙼𝙼 is in a legal configuration at time t+1 and will remain so indefinitely.

Proposition 5.3.

For every t > t₀ and v ∈ V_t, if d_{G_t}(v) = 0, then v is strongly unmatched at time t+1.

The rest of the analysis is devoted to showing that it takes O(log k) rounds in expectation and w.h.p. for the set E_t to become empty. Let τ = O(ϕ³) be the parameter promised in Cor. 3.3. The next proposition implies that from time t > t₀, algorithm 𝚂𝚂𝙼𝙼 stabilizes in O(ϕ⁵ / log(1 + 1/ϕ²) · log|E_t| + log k) rounds in expectation and w.h.p. (in k), which is O(log|E_t| + log k) when ϕ is a constant as in 𝚂𝚂𝙼𝙼. This means in particular that once 𝚂𝚂𝙼𝙼 reaches a configuration in which |E_t| ≤ poly(k), it takes O(log k) additional rounds in expectation and w.h.p. for the algorithm to stabilize.

Proposition 5.4.

Fix some time t > t₀ and a configuration at time t. There exist universal positive constants α and δ such that ℙ(|E_{t+τ+φ}| ≤ (1 − δ/ϕ²) · |E_t|) ≥ α/ϕ².

Unfortunately, by manipulating k nodes, the adversary can lead the algorithm to a graph G_{t₀} whose edge set is arbitrarily large (and not polynomially bounded) with respect to k. To resolve this obstacle, we prove that it takes the algorithm O(log k) rounds in expectation and w.h.p. to reduce the number of edges in G_t down to O(k²). This relies on the following proposition, derived from a nice combinatorial property of maximal matching.

Proposition 5.5.

Let S = {v ∈ V | v is strongly matched at time t₀}. There exists a node set K ⊆ V ∖ S of size |K| ≤ 2k such that I = V ∖ (S ∪ K) is an independent set in G.

Consider the sets K and I promised in Prop. 5.5 and some positive constant ξ = ξ(ϕ) whose value is determined later on. For t > t₀, let D_t = {v ∈ K ∩ V_t | d_{G_t}(v) ≥ ξk} and let T be the random variable that takes on the earliest time t such that D_t = ∅. Since I is an independent set, every edge in E_T is incident on at least one vertex in K, thus |E_T| < |K| · ξk ≤ 2ξk². Recalling that ϕ is a constant in 𝚂𝚂𝙼𝙼, the proof of Thm. 1.1 is completed due to the following proposition, implying, by standard probabilistic arguments (see Appendix B), that T ≤ t₀ + O(log k) in expectation and w.h.p.

Proposition 5.6.

Fix some time t > t₀ and a configuration at time t. There exists a universal positive constant δ such that ℙ(v ∉ D_{t+2φ+1}) ≥ δ · 2^{−ϕ} for every node v ∈ D_t.

5.2 Correctness of the Detection Process

We begin the journey towards proving Prop. 5.1–5.6 by establishing two simple structural observations regarding the operation of 𝚂𝚂𝙼𝙼.

Observation 5.7.

For every time t > t₀, node v ∈ V, and node u ∈ N(v), the following three statements are logically equivalent: (1) m_{v,t} = u; (2) 𝐼𝑛_{u,t}^σ(v) = 𝙷; (3) 𝐼𝑛_{w,t}^σ(v) = 𝙴 for every w ∈ N(v) ∖ {u}.

Observation 5.8.

For every t > t₀ and v ∈ V, if v is strongly matched at time t, then v is strongly matched at time t+1.

Proof.

Since v is strongly matched at time t, it holds that m_{v,t} = u, m_{u,t} = v, and 𝐼𝑛_{u,t}^σ(v) = 𝐼𝑛_{v,t}^σ(u) = 𝙷. The design of 𝚂𝚂𝙼𝙼 ensures that in this case, m_v = u and m_u = v after the call to 𝙳𝚎𝚝𝚎𝚌𝚝 in line 2, thus m_v = u and m_u = v before the call to 𝙵𝚒𝚗𝚊𝚕𝚒𝚣𝚎_𝚊𝚗𝚍_𝚂𝚎𝚗𝚍_𝙼𝚎𝚜𝚜𝚊𝚐𝚎𝚜 in line 24, so m_{v,t+1} = u and m_{u,t+1} = v. By Obs. 5.7, it holds that u and v are strongly matched at time t+1.

We are now ready to prove Prop. 5.1–5.3.

Proof of Prop. 5.1.

The statement regarding a strongly matched node is proved by Obs. 5.8. If v is strongly unmatched at time t, then m_{v,t} = 𝚄, 𝐼𝑛_{v,t}^σ(u) = 𝙴 for every u ∈ N(v), and all neighbors of v are strongly matched. The design of 𝚂𝚂𝙼𝙼 ensures that in this case, m_v = 𝚄 after the call to 𝙳𝚎𝚝𝚎𝚌𝚝 in line 2, hence m_v = 𝚄 at the end of the round and m_{v,t+1} = 𝚄. By Obs. 5.8, all of v’s neighbors remain strongly matched, and by Obs. 5.7, we know that 𝐼𝑛_{v,t+1}^σ(u) = 𝙴 for every u ∈ N(v).

Proof of Prop. 5.2.

Since v is locally matched to u at time t, it holds that m_{v,t} = u and 𝐼𝑛_{v,t}^σ(u) = 𝙷. By Obs. 5.7, we know that m_{u,t} = v and 𝐼𝑛_{u,t}^σ(v) = 𝙷.

Proof of Prop. 5.3.

Since d_{G_t}(v) = 0, we know that all of v’s neighbors are locally matched. By Prop. 5.2, all of v’s neighbors are also strongly matched. Since v is not locally matched, we conclude that none of v’s neighbors is strongly matched to v. In this case, by Obs. 5.7, 𝐼𝑛_{v,t}^σ(u) = 𝙴 for every u ∈ N(v). Moreover, by Prop. 5.1 and Obs. 5.7, it holds that for every u ∈ N(v), u is strongly matched at time t+1 and 𝐼𝑛_{v,t+1}^σ(u) = 𝙴.

If m_{v,t} ∈ N(v), then after the call to 𝙳𝚎𝚝𝚎𝚌𝚝 in line 2 of 𝚂𝚂𝙼𝙼 it holds that m_v = ⊥, and in line 3 of 𝚂𝚂𝙼𝙼 node v sets m_v ← 𝚄; this value does not change during round t, thus m_{v,t+1} = 𝚄. If m_{v,t} = 𝚄, then m_v = 𝚄 after the call to 𝙳𝚎𝚝𝚎𝚌𝚝 in line 2 of 𝚂𝚂𝙼𝙼, and the design of 𝚂𝚂𝙼𝙼 ensures that m_v = 𝚄 at the end of round t, thus m_{v,t+1} = 𝚄. Otherwise (m_{v,t} = ⊥), in line 3 of 𝚂𝚂𝙼𝙼 node v sets m_v ← 𝚄; this value does not change during round t, thus m_{v,t+1} = 𝚄. We conclude that v is strongly unmatched at time t+1.

5.3 Eliminating the Graph

Our goal in this section is to establish Prop. 5.4–5.6, starting with some additional definitions. For every time t ≥ t₀, let

Ṽ_t = V_t ∩ {v ∈ V | 𝑠𝑡𝑒𝑝_{v,t} = −1}

be the subset of nodes that start a phase in round t, recalling that 𝑠𝑡𝑒𝑝_{v,t} is the state of v’s 𝙿𝙿𝚂 module at time t. Let Ẽ_t = E_t ∩ (Ṽ_t × Ṽ_t) and let G̃_t = (Ṽ_t, Ẽ_t) be the graph induced on G_t by Ṽ_t.

A key feature of 𝚂𝚂𝙼𝙼 is that in each phase, node v ∈ V writes to 𝚛𝚎𝚐_v before reading from it, for every register 𝚛𝚎𝚐_v other than m_v and 𝐼𝑛_v(·). Therefore, we can conceptually assume that these registers are reset to an arbitrary default value at step −1.

Consider some graph H = (V_H, E_H). Node v ∈ V_H is said to be good in H if at least 1/3 of its neighbors u in H have degree d_H(u) ≤ d_H(v). The following lemma is established by Alon et al. [3, Lem. 4.4].

Lemma 5.9.

Let H be an undirected graph. At least 1/2 of the edges in H are incident on a good node.

Lem. 5.9 is exploited in the following two lemmas to show that sufficiently many edges are removed with a sufficiently high probability; Prop. 5.4 follows by a standard Markov inequality argument (see Appendix B).

Lemma 5.10.

Fix some time t > t₀ and a configuration at time t. There exists a universal positive constant p_g such that ℙ(v ∉ V_{t+φ}) ≥ p_g for every good node v ∈ Ṽ_t.

Proof.

Denote by d the degree of node v in G̃_t, i.e., d = d_{G̃_t}(v) > 0. Node v is good, thus there exist u₁, …, u_{⌈d/3⌉} ∈ N_{G̃_t}(v) such that d_i = d_{G̃_t}(u_i) ≤ d for every 1 ≤ i ≤ ⌈d/3⌉. Recall that by the definition of G̃_t, nodes v and u₁, …, u_{⌈d/3⌉} start a phase (in synchrony) at time t. Node v marks itself as passive in step 0 of the phase with probability 1/2; condition hereafter on this event. For 1 ≤ i ≤ ⌈d/3⌉, let B_i be the event that u_i marks itself as active and sends a 𝙼𝚛𝚎𝚚 message to v in step 0 of the phase, noticing that ℙ(B_i) = 1/(2d_i) ≥ 1/(2d). Since the events B_i are independent, it follows that the probability that none of them occurs is up-bounded by (1 − 1/(2d))^{⌈d/3⌉} < e^{−1/6}. The assertion follows since the occurrence of any of the events B_i implies that v becomes locally matched by the end of the phase, which lasts for φ rounds.
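As a quick numeric sanity check (ours), the failure bound used above holds for every degree d, by the inequality (1 − x) ≤ e^{−x}:

```python
import math

# verify (1 - 1/(2d))^ceil(d/3) < e^(-1/6) over a range of degrees; the
# inequality (1 - x) <= e^(-x) yields it for every d >= 1
assert all((1 - 1 / (2 * d)) ** math.ceil(d / 3) < math.exp(-1 / 6)
           for d in range(1, 10_000))
# together with the probability-1/2 conditioning on v being passive, this
# suggests p_g can be taken as (1/2) * (1 - e^(-1/6)), roughly 0.077
print(0.5 * (1 - math.exp(-1 / 6)))
```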

Lemma 5.11.

Fix some time t > t₀ and a configuration at time t. There exists a universal positive constant α such that 𝔼(|E_{t+τ+φ}|) ≤ (1 − α/ϕ²) · |E_t|.

Proof.

Denote m = |E_t|. Let V̂ ⊆ V_t be the (random) set of nodes v ∈ V_t such that 𝑠𝑡𝑒𝑝_{v,t+τ} = −1 and let Ê = {{u, v} ∈ E_t | u, v ∈ V̂}. For every edge e ∈ E_t, let A_e be the event that e ∈ Ê. By Cor. 3.3 (applied independently to the two endpoints, each reaching state −1 at time t+τ with probability at least 1/(2ϕ)), for every edge e = {u, v} ∈ E_t, we have

ℙ(A_e) ≥ 1/(4ϕ²). (1)

Hence,

𝔼(|Ê|) ≥ m/(4ϕ²). (2)

Let E^c_{t+τ} = E_t ∖ E_{t+τ} and notice that E^c_{t+τ} ∩ E_{t+τ+φ} = ∅. We partition the set Ê into two (possibly empty) disjoint sets, Ê ∩ E_{t+τ} and Ê^c = Ê ∩ E^c_{t+τ}. By the definition of Ẽ_{t+τ}, it holds that Ê ∩ E_{t+τ} = Ẽ_{t+τ}.

By Lem. 5.9, in the (random) graph G̃_{t+τ}, at least half of the edges are incident on (at least one) good node. Every good node in G̃_{t+τ} is removed with probability at least p_g, where p_g is the constant promised in Lem. 5.10. Hence, the expected number of removed edges in the time interval [t+τ, t+τ+φ) can be low-bounded as follows:

𝔼(|E_{t+τ} ∖ E_{t+τ+φ}|) = 𝔼(𝔼(|E_{t+τ} ∖ E_{t+τ+φ}| | |Ẽ_{t+τ}|)) ≥ (1/2) · p_g · 𝔼(|Ẽ_{t+τ}|).

By definition, |Ê| = |Ê^c| + |Ẽ_{t+τ}|, hence 𝔼(|Ê|) = 𝔼(|Ê^c|) + 𝔼(|Ẽ_{t+τ}|) and by Eq. (2), 𝔼(|Ê^c|) + 𝔼(|Ẽ_{t+τ}|) ≥ m/(4ϕ²). This allows us to low-bound the expected number of edges removed in the time interval [t, t+τ+φ) by

𝔼(|E_t ∖ E_{t+τ+φ}|) ≥ 𝔼(|Ê^c|) + 𝔼(|E_{t+τ} ∖ E_{t+τ+φ}|) ≥ 𝔼(|Ê^c|) + (1/2) · p_g · 𝔼(|Ẽ_{t+τ}|) ≥ (1/2) · p_g · (𝔼(|Ê^c|) + 𝔼(|Ẽ_{t+τ}|)) ≥ p_g · m/(8ϕ²),

thus completing the proof (take α = p_g/8).

We now turn to establishing Prop. 5.5.

Proof of Prop. 5.5.

Let A be the set of nodes in V ∖ S that are manipulated (by the adversary) during the time interval [t_adv, t₀). Let R be the set of nodes in V ∖ (S ∪ A) that are strongly matched at time t_adv, referred to hereafter as orphans. We define K = A ∪ R and establish the assertion by proving that (1) |K| ≤ 2k; and (2) I = V ∖ (S ∪ K) is an independent set in G.

For every orphan v ∈ R, let w(v) be the node with which v is strongly matched at time t_adv. Note that w(v) does not necessarily exist in G as it may have been removed during the time interval [t_adv, t₀). Since v and w(v) are no longer strongly matched at time t₀ and since v is not manipulated during the time interval [t_adv, t₀), Obs. 5.8 implies that w(v) must be manipulated during that time interval. We conclude that |R| ≤ k as the mapping defined by w is injective. The bound |K| ≤ 2k follows since |A| ≤ k.

It remains to show that I is an independent set in G. This is done by arguing that every node v ∈ I is strongly unmatched at time t_adv. This establishes the assertion, recalling that by definition, the nodes in I are not manipulated during the time interval [t_adv, t₀), hence the adversary does not introduce new edges in I × I. To that end, assume by contradiction that there exists some node v ∈ I that is not strongly unmatched at time t_adv. Since 𝚂𝚂𝙼𝙼 is in a legal configuration at time t_adv, it follows that v must be strongly matched at that time. But by definition, the nodes in V ∖ S that are strongly matched at time t_adv belong to either R or A, in contradiction to v ∈ I = V ∖ (S ∪ A ∪ R).

Corollary 5.12.

Fix some time t > t₀ and a configuration at time t. If v ∈ Ṽ_t ∩ K and d_{G̃_t}(v) ≥ 3k, then v is good in G̃_t.

Proof.

Let d = d_{G̃_t}(v). Node v has at most |K| − 1 < 2k neighbors from the set K, thus at least d − 2k ≥ (1/3) · d of v’s neighbors in G̃_t belong to Ṽ_t ∖ K = I ∩ Ṽ_t. By Prop. 5.5, every node u ∈ I ∩ Ṽ_t has degree at most 2k ≤ d, which completes the proof.

The analysis is completed by establishing Prop. 5.6.

Proof of Prop. 5.6.

For every node v ∈ V_t, let t(v) be the first time t′ ≥ t such that 𝑠𝑡𝑒𝑝_{v,t′} = ⊥. According to the definition of the 𝙿𝙿𝚂 module, it must hold that t ≤ t(v) ≤ t + φ, and notice that t(v) is fully determined by 𝑠𝑡𝑒𝑝_{v,t}. Denote by A_v the event that 𝑠𝑡𝑒𝑝_{v,t′} = ⊥ for every t(v) ≤ t′ ≤ t + φ. By Obs. 3.1, the event A_v is independent of any coin toss of v prior to time t(v) and of any coin toss of all other nodes, and

ℙ(A_v) ≥ 2^{−φ}. (3)

For every node v ∈ V_t, we augment the power of the adversary by allowing it to choose the outcome of any coin toss in the time interval [t, t(v)). Notice that this adversary can only choose the outcome of coin tosses that are within a phase and cannot choose the outcome of coin tosses of the 𝙿𝙿𝚂 module. Let S be the set of nodes v ∈ V_t that are not locally matched at time t(v), i.e., S = {v ∈ V_t | v ∈ V_{t(v)}}. Let G_S = (S, E_S) be the graph induced on G_t by S.

Fix some node v ∈ D_t. If node v is locally matched at time t(v) or d_{G_S}(v) < ξk, then v ∉ D_{t+2φ+1} with probability 1. Otherwise (v is not locally matched at time t(v) and at least ξk of its neighbors are in S), we will show that with probability Ω(2^{−φ}), node v is locally matched at time t + 2φ + 1, which implies that v ∉ D_{t+2φ+1}.

Let B be the event that d_{G̃_{t+φ+1}}(v) ≥ 3k. We start by showing that there exists α = α(φ) such that ℙ(B) ≥ α. For every u ∈ S, let Ã_u be the event that A_u occurred and 𝑠𝑡𝑒𝑝_{u,t+φ+1} = −1. By Eq. (3) and Obs. 3.1, we conclude that ℙ(Ã_u) = ℙ(A_u) · (1/2) ≥ 2^{−(φ+1)}. Moreover, the events in {Ã_u | u ∈ S} are independent.

Denote d = d_{G_S}(v) and let u₁, …, u_d be the neighbors of v in G_S. Let Y be the random variable that counts the number of occurrences of the events Ã_{u_i}. Notice that if Y ≥ 3k and Ã_v occurs, then event B occurs; moreover, the events Y ≥ 3k and Ã_v are independent and 𝔼(Y) ≥ ξk · 2^{−(φ+1)} since d ≥ ξk.

By choosing ξ = 4 · 2^{φ+1} = 2^{φ+3} and applying Chernoff’s (lower tail) bound, we conclude that

ℙ(Y < 3k) = ℙ(Y < (1 − 1/4) · ξk · 2^{−(φ+1)}) ≤ ℙ(Y < (1 − 1/4) · 𝔼(Y)) < e^{−k/8} ≤ e^{−1/8}.

Thus, ℙ(B) ≥ (1 − e^{−1/8}) · 2^{−(φ+1)}. By Cor. 5.12 and Lem. 5.10, the occurrence of B implies that v is good in G̃_{t+φ+1}, and a good node is removed with at least a constant probability by the end of the phase, i.e., at time t + 2φ + 1. We conclude that

ℙ(v ∉ D_{t+2φ+1}) ≥ ℙ(B ∧ v ∉ V_{t+2φ+1}) ≥ ℙ(v ∉ V_{t+2φ+1} | B) · ℙ(B) ≥ p_g · (1 − e^{−1/8}) · 2^{−(φ+1)},

thus establishing the assertion.

 Remark.

Unfortunately, we did not manage to prove Prop. 5.6 based on the promise of Cor. 3.3 as we did in the proof of Lem. 5.11. Instead, we used the weaker promise of Obs. 3.1, which results in an exponential, rather than polynomial, dependency on the phase length ϕ. As ϕ is a constant in 𝚂𝚂𝙼𝙼, this does not affect its asymptotic run-time.

References

  • [1] Yehuda Afek and Shlomi Dolev. Local stabilizer. Journal of Parallel and Distributed Computing, 62(5):745–765, 2002. doi:10.1006/jpdc.2001.1823.
  • [2] Yehuda Afek, Shay Kutten, and Moti Yung. The local detection paradigm and its applications to self-stabilization. Theoretical Computer Science, 186(1-2):199–229, 1997. doi:10.1016/S0304-3975(96)00286-1.
  • [3] Noga Alon, László Babai, and Alon Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. Journal of algorithms, 7(4):567–583, 1986. doi:10.1016/0196-6774(86)90019-2.
  • [4] Karine Altisen, Stéphane Devismes, Swan Dubois, and Franck Petit. Introduction to distributed self-stabilizing algorithms. Synthesis Lectures on Distributed Computing Theory, 8(1):1–165, 2019. doi:10.2200/S00908ED1V01Y201903DCT015.
  • [5] Ozkan Arapoglu and Orhan Dagdeviren. An asynchronous self-stabilizing maximal independent set algorithm in wireless sensor networks using two-hop information. In 2019 International Symposium on Networks, Computers and Communications (ISNCC), pages 1–5. IEEE, 2019. doi:10.1109/ISNCC.2019.8909189.
  • [6] Anish Arora and Hongwei Zhang. LSRP: Local stabilization in shortest path routing. IEEE/ACM Transactions on Networking, 14(3):520–531, 2006. doi:10.1145/1143396.1143402.
  • [7] Yuma Asada, Fukuhito Ooshita, and Michiko Inoue. An efficient silent self-stabilizing 1-maximal matching algorithm in anonymous networks. Journal of Graph Algorithms and Applications, 20(1):59–78, 2016. doi:10.7155/jgaa.00384.
  • [8] Baruch Awerbuch, Boaz Patt-Shamir, and George Varghese. Self-stabilization by local checking and correction (extended abstract). In 32nd Annual Symposium on Foundations of Computer Science, San Juan, Puerto Rico, 1-4 October 1991, pages 268–277. IEEE, IEEE Computer Society, 1991. doi:10.1109/SFCS.1991.185378.
  • [9] Baruch Awerbuch, Boaz Patt-Shamir, George Varghese, and Shlomi Dolev. Self-stabilization by local checking and global reset (extended abstract). In Distributed Algorithms, 8th International Workshop, WDAG ’94, Terschelling, The Netherlands, September 29 - October 1, 1994, Proceedings, volume 857, pages 326–339. Springer, Springer, 1994. doi:10.1007/BFb0020443.
  • [10] Baruch Awerbuch and George Varghese. Distributed program checking: A paradigm for building self-stabilizing distributed protocols (extended abstract). In 32nd Annual Symposium on Foundations of Computer Science, San Juan, Puerto Rico, 1-4 October 1991, pages 258–267. IEEE, IEEE Computer Society, 1991. doi:10.1109/SFCS.1991.185377.
  • [11] Yossi Azar, Shay Kutten, and Boaz Patt-Shamir. Distributed error confinement. In Proceedings of the twenty-second annual symposium on Principles of distributed computing, pages 33–42, 2003. doi:10.1145/872035.872041.
  • [12] Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela. Lower bounds for maximal matchings and maximal independent sets. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 481–497. IEEE, 2019. doi:10.1109/FOCS.2019.00037.
  • [13] Alkida Balliu, Juho Hirvonen, Darya Melnyk, Dennis Olivetti, Joel Rybicki, and Jukka Suomela. Local mending. In Merav Parter, editor, Structural Information and Communication Complexity - 29th International Colloquium, SIROCCO 2022, Paderborn, Germany, June 27-29, 2022, Proceedings, pages 1–20, 2022. doi:10.1007/978-3-031-09993-9_1.
  • [14] Philipp Bamberger, Fabian Kuhn, and Yannic Maus. Local distributed algorithms in highly dynamic networks. In 2019 IEEE International Parallel and Distributed Processing Symposium, IPDPS 2019, Rio de Janeiro, Brazil, May 20-24, 2019, pages 33–42, 2019. doi:10.1109/IPDPS.2019.00015.
  • [15] Leonid Barenboim, Michael Elkin, and Uri Goldenberg. Locally-iterative distributed (Δ + 1)-coloring and applications. J. ACM, 69(1):5:1–5:26, 2022. doi:10.1145/3486625.
  • [16] Leonid Barenboim, Michael Elkin, Seth Pettie, and Johannes Schneider. The locality of distributed symmetry breaking. J. ACM, 63(3):20:1–20:45, 2016. doi:10.1145/2903137.
  • [17] Surender Baswana, Manoj Gupta, and Sandeep Sen. Fully dynamic maximal matching in O(log n) update time. SIAM Journal on Computing, 44(1):88–113, 2015. doi:10.1137/130914140.
  • [18] Shimon Bitton, Yuval Emek, Taisuke Izumi, and Shay Kutten. Fully adaptive self-stabilizing transformer for LCL problems. CoRR, abs/2105.09756, 2024. doi:10.48550/arXiv.2105.09756.
  • [19] Keren Censor-Hillel, Neta Dafni, Victor I. Kolobov, Ami Paz, and Gregory Schwartzman. Fast and simple deterministic algorithms for highly-dynamic networks. CoRR, abs/1901.04008, 2019. doi:10.48550/arXiv.1901.04008.
  • [20] Keren Censor-Hillel, Elad Haramaty, and Zohar Karnin. Optimal dynamic distributed mis. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, pages 217–226. ACM, 2016. doi:10.1145/2933057.2933083.
  • [21] Johanne Cohen, Jonas Lefèvre, Khaled Maâmra, Laurence Pilard, and Devan Sohier. A self-stabilizing algorithm for maximal matching in anonymous networks. Parallel Process. Lett., 26(4):1650016:1–1650016:17, 2016. doi:10.1142/S012962641650016X.
  • [22] Johanne Cohen, George Manoussakis, Laurence Pilard, and Devan Sohier. A self-stabilizing algorithm for maximal matching in link-register model. In International Colloquium on Structural Information and Communication Complexity, pages 14–19. Springer, 2018. doi:10.1007/978-3-030-01325-7_2.
  • [23] Richard Cole and Uzi Vishkin. Deterministic coin tossing with applications to optimal parallel list ranking. Information and Control, 70(1):32–53, 1986. doi:10.1016/S0019-9958(86)80023-7.
  • [24] Ajoy Kumar Datta, Lawrence L. Larmore, and Toshimitsu Masuzawa. Maximum matching for anonymous trees with constant space per process. In 19th International Conference on Principles of Distributed Systems, OPODIS 2015, December 14-17, 2015, Rennes, France, volume 46 of LIPIcs, pages 16:1–16:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2015. doi:10.4230/LIPIcs.OPODIS.2015.16.
  • [25] Murat Demirbas, Anish Arora, Tina Nolte, and Nancy Lynch. A hierarchy-based fault-local stabilizing algorithm for tracking in sensor networks. In International Conference on Principles of Distributed Systems, pages 299–315. Springer, 2004. doi:10.1007/11516798_22.
  • [26] Edsger W Dijkstra. Self-stabilization in spite of distributed control. In Selected writings on computing: a personal perspective, pages 41–46. Springer, 1982.
  • [27] Shlomi Dolev. Self-stabilization. MIT press, 2000. URL: http://www.cs.bgu.ac.il/%7Edolev/book/book.html.
  • [28] Shlomi Dolev and Ted Herman. Superstabilizing protocols for dynamic distributed systems (abstract). In Proceedings of the Fourteenth Annual ACM Symposium on Principles of Distributed Computing, Ottawa, Ontario, Canada, August 20-23, 1995, page 255. ACM, ACM, 1995. doi:10.1145/224964.224993.
  • [29] Michael Elkin and Jian Zhang. Efficient algorithms for constructing (1+epsilon, beta)-spanners in the distributed and streaming models. Distributed Comput., 18(5):375–385, 2006. doi:10.1007/s00446-005-0147-2.
  • [30] Manuela Fischer. Improved deterministic distributed matching via rounding. In 31st International Symposium on Distributed Computing, DISC, pages 17:1–17:15, 2017. doi:10.4230/LIPIcs.DISC.2017.17.
  • [31] G.N. Frederickson. Data structures for on-line updating of minimum spanning trees, with applications. SIAM J. on Computing (SICOMP), 14(4):781–798, 1985. doi:10.1137/0214055.
  • [32] Mohsen Ghaffari. An improved distributed algorithm for maximal independent set. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 270–277. SIAM, 2016. doi:10.1137/1.9781611974331.ch20.
  • [33] Mohsen Ghaffari. Distributed maximal independent set using small messages. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 805–820. SIAM, 2019. doi:10.1137/1.9781611975482.50.
  • [34] Sukumar Ghosh, Arobinda Gupta, Ted Herman, and Sriram V Pemmaraju. Fault-containing self-stabilizing algorithms. In Proceedings of the fifteenth annual ACM symposium on Principles of distributed computing, pages 45–54. ACM, 1996. doi:10.1145/248052.248057.
  • [35] Wayne Goddard, Stephen T. Hedetniemi, David Pokrass Jacobs, and Pradip K. Srimani. A robust distributed generalized matching protocol that stabilizes in linear time. In 23rd International Conference on Distributed Computing Systems Workshops (ICDCS 2003 Workshops), 19-22 May 2003, Providence, RI, USA, pages 461–465. IEEE, IEEE Computer Society, 2003. doi:10.1109/ICDCSW.2003.1203595.
  • [36] Wayne Goddard, Stephen T. Hedetniemi, David Pokrass Jacobs, and Pradip K. Srimani. Self-stabilizing protocols for maximal matching and maximal independent sets for ad hoc networks. In 17th International Parallel and Distributed Processing Symposium (IPDPS 2003), 22-26 April 2003, Nice, France, CD-ROM/Abstracts Proceedings, page 162. IEEE, IEEE Computer Society, 2003. doi:10.1109/IPDPS.2003.1213302.
  • [37] Maria Gradinariu and Colette Johnen. Self-stabilizing neighborhood unique naming under unfair scheduler. In European Conference on Parallel Processing, pages 458–465. Springer, 2001. doi:10.1007/3-540-44681-8_67.
  • [38] Nabil Guellati and Hamamache Kheddouci. A survey on self-stabilizing algorithms for independence, domination, coloring, and matching in graphs. Journal of Parallel and Distributed Computing, 70(4):406–415, 2010. doi:10.1016/j.jpdc.2009.11.006.
  • [39] Michal Hanckowiak, Michal Karonski, and Alessandro Panconesi. On the distributed complexity of computing maximal matchings. SIAM Journal on Discrete Mathematics, 15(1):41–57, 2001. doi:10.1137/S0895480100373121.
  • [40] Stephen T Hedetniemi, David P Jacobs, and Pradip K Srimani. Maximal matching stabilizes in time O(m). Information Processing Letters, 80(5):221–223, 2001. doi:10.1016/S0020-0190(01)00171-5.
  • [41] Maurice Herlihy, Victor Luchangco, and Mark Moir. Obstruction-free synchronization: Double-ended queues as an example. In 23rd International Conference on Distributed Computing Systems (ICDCS 2003), 19-22 May 2003, Providence, RI, USA, pages 522–529. IEEE, IEEE Computer Society, 2003. doi:10.1109/ICDCS.2003.1203503.
  • [42] Su-Chu Hsu and Shing-Tsaan Huang. A self-stabilizing algorithm for maximal matching. Information processing letters, 43(2):77–81, 1992. doi:10.1016/0020-0190(92)90015-N.
  • [43] Can Umut Ileri and Orhan Dagdeviren. A self-stabilizing algorithm for b-matching. Theoretical Computer Science, 753:64–75, 2019. doi:10.1016/j.tcs.2018.06.042.
  • [44] Amos Israeli and Alon Itai. A fast and simple randomized parallel algorithm for maximal matching. Information Processing Letters, 22(2):77–80, 1986. doi:10.1016/0020-0190(86)90144-4.
  • [45] Richard M. Karp and Avi Wigderson. A fast parallel algorithm for the maximal independent set problem. Journal of the ACM (JACM), 32(4):762–773, 1985. doi:10.1145/4221.4226.
  • [46] Masahiro Kimoto, Tatsuhiro Tsuchiya, and Tohru Kikuno. The time complexity of Hsu and Huang’s self-stabilizing maximal matching algorithm. IEICE Transactions on Information and Systems, 93(10):2850–2853, 2010. doi:10.1587/transinf.E93.D.2850.
  • [47] Michael König and Roger Wattenhofer. On local fixing. In International Conference On Principles Of Distributed Systems, pages 191–205. Springer, 2013. doi:10.1007/978-3-319-03850-6_14.
  • [48] Kishore Kothapalli, Christian Scheideler, Melih Onus, and Christian Schindelhauer. Distributed coloring in O(√log n) bit rounds. In 20th International Parallel and Distributed Processing Symposium (IPDPS 2006), Proceedings, 25-29 April 2006, Rhodes Island, Greece. IEEE, 2006. doi:10.1109/IPDPS.2006.1639281.
  • [49] Shay Kutten and Boaz Patt-Shamir. Time-adaptive self stabilization. In Proceedings of the Sixteenth Annual ACM Symposium on Principles of Distributed Computing, pages 149–158. ACM, 1997. doi:10.1145/259380.259435.
  • [50] Shay Kutten and David Peleg. Fault-local distributed mending (extended abstract). In Proceedings of the Fourteenth Annual ACM Symposium on Principles of Distributed Computing, Ottawa, Ontario, Canada, August 20-23, 1995, pages 20–27. ACM, ACM, 1995. doi:10.1145/224964.224967.
  • [51] Leslie Lamport. Distribution, May 1987. Email message sent to a DEC SRC bulletin board at 12:23:29 PDT on 28 May 87. URL: https://www.microsoft.com/en-us/research/publication/distribution/.
  • [52] Leslie Lamport. A fast mutual exclusion algorithm. ACM Transactions on Computer Systems (TOCS), 5(1):1–11, 1987. doi:10.1145/7351.7352.
  • [53] Christoph Lenzen, Jukka Suomela, and Roger Wattenhofer. Local algorithms: Self-stabilization on speed. In Symposium on Self-Stabilizing Systems, pages 17–34. Springer, 2009. doi:10.1007/978-3-642-05118-0_2.
  • [54] Nathan Linial. Locality in distributed graph algorithms. SIAM J. Comput., 21(1):193–201, 1992. doi:10.1137/0221015.
  • [55] Zvi Lotker, Boaz Patt-Shamir, and Adi Rosén. Distributed approximate matching. SIAM Journal on Computing, 39(2):445–460, 2009. doi:10.1137/080714403.
  • [56] Michael Luby. A simple parallel algorithm for the maximal independent set problem. SIAM Journal on Computing, 15(4):1036–1053, 1986. doi:10.1137/0215074.
  • [57] Fredrik Manne, Morten Mjelde, Laurence Pilard, and Sébastien Tixeuil. A new self-stabilizing maximal matching algorithm. Theoretical Computer Science, 410(14):1336–1345, 2009. doi:10.1016/j.tcs.2008.12.022.
  • [58] Fredrik Manne, Morten Mjelde, Laurence Pilard, and Sébastien Tixeuil. A self-stabilizing 2/3-approximation algorithm for the maximum matching problem. Theoretical Computer Science, 412(40):5515–5526, 2011. doi:10.1016/j.tcs.2011.05.019.
  • [59] Darya Melnyk, Jukka Suomela, and Neven Villani. Mending partial solutions with few changes. CoRR, abs/2209.05363, 2022. doi:10.48550/arXiv.2209.05363.
  • [60] Ofer Neiman and Shay Solomon. Simple deterministic algorithms for fully dynamic maximal matching. ACM Trans. Algorithms, 12(1):7:1–7:15, 2016. doi:10.1145/2700206.
  • [61] Alessandro Panconesi and Romeo Rizzi. Some simple distributed algorithms for sparse networks. Distributed Comput., 14(2):97–100, 2001. doi:10.1007/PL00008932.
  • [62] Alex Scott, Peter Jeavons, and Lei Xu. Feedback from nature: an optimal distributed algorithm for maximal independent set selection. In Proceedings of the 2013 ACM Symposium on Principles of Distributed Computing, pages 147–156. ACM, 2013. doi:10.1145/2484239.2484247.
  • [63] Zhengnan Shi, Wayne Goddard, and Stephen T. Hedetniemi. An anonymous self-stabilizing algorithm for 1-maximal independent set in trees. Information Processing Letters, 91(2):77–83, 2004. doi:10.1016/j.ipl.2004.03.010.
  • [64] Gerard Tel. Maximal matching stabilizes in quadratic time. Information Processing Letters, 49(6):271–272, 1994. doi:10.1016/0020-0190(94)90098-1.
  • [65] Volker Turau. Making randomized algorithms self-stabilizing. In Structural Information and Communication Complexity - 26th International Colloquium, SIROCCO 2019, L’Aquila, Italy, July 1-4, 2019, Proceedings, pages 309–324, 2019. doi:10.1007/978-3-030-24922-9_21.
  • [66] Mirjam Wattenhofer and Roger Wattenhofer. Distributed weighted matching. In International Symposium on Distributed Computing, pages 335–348. Springer, 2004. doi:10.1007/978-3-540-30186-8_24.
  • [67] Elizabeth Lee Wilmer. Exact rates of convergence for some simple non-reversible Markov chains. PhD thesis, Department of Mathematics, Harvard University, 1999.

Appendix A Proving Lemma 3.2

Proof of Lem. 3.2.

To avoid cumbersome notation, we assume throughout this proof that $t_0 = 0$. Let $\{X_t\}_{t=0}^{\infty}$ be a stochastic process such that $X_t = \mathit{step}_{v,t}$. The stochastic process $\{X_t\}_{t=0}^{\infty}$ can be defined by the discrete-time Markov chain $M_\phi$ with state space $S_\phi$ as in Fig. 1.

Let $P_\phi \in [0,1]^{S_\phi \times S_\phi}$ be the time-homogeneous transition matrix of the Markov chain $M_\phi$, and let $P_\phi^t$ denote the matrix $P_\phi$ raised to the power $t$. Denote entry $(i,j) \in S_\phi \times S_\phi$ of $P_\phi$ by $P_\phi(i,j)$ and recall that for every $t, s \geq 0$ such that $t \geq 1$, it holds that $P_\phi^t(i,j) = \mathbb{P}(X_{t+s} = j \mid X_s = i)$. For convenience, let $n = |S_\phi| = \phi + 2$.

The chain $M_\phi$ is ergodic, thus there exists a unique stationary distribution on the state space $S_\phi$, which we denote by the size-$n$ vector $\boldsymbol{\pi} \in \mathbb{R}_{>0}^{S_\phi}$. It is easily verifiable that $\boldsymbol{\pi}(\bot) = \frac{2}{n+1}$ and $\boldsymbol{\pi}(j) = \frac{1}{n+1}$ for every $j \in S_\phi \setminus \{\bot\}$. We will prove that there exists $\hat{t} = O(\log(1/\epsilon) \cdot n^3)$ such that for every $t \geq \hat{t}$ it holds that

$P_\phi^t(j_0, \bot) \geq \frac{2}{n+1} \cdot (1 - \epsilon)$ (4)

and

$P_\phi^t(j_0, j) \geq \frac{1}{n+1} \cdot (1 - \epsilon)$ for every $j \in S_\phi \setminus \{\bot\}$. (5)

The chain $M_\phi$ is a special case of the coin-flip chain as defined in [67] with $p = q = 1/2$. By Proposition 5.8, Theorem 3.6, and Lemma 2.4 in [67], for every positive $\epsilon < 1$ there exists a time $t(n) = O(n^3)$ such that for every $t = \Omega(\log(1/\epsilon) \cdot t(n))$ and every $i \in S_\phi$,

$\max_{j \in S_\phi} \left| \frac{P_\phi^t(i,j)}{\boldsymbol{\pi}(j)} - 1 \right| \leq \epsilon.$

Thus, for every $t = \Omega(\log(1/\epsilon) \cdot n^3)$, we can lower-bound the probabilities in Eq. 4 and Eq. 5 by

$P_\phi^t(j_0, j) \geq \boldsymbol{\pi}(j) \cdot (1 - \epsilon).$

The proof is completed by substituting the values of $\boldsymbol{\pi}$.
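To make the convergence behavior concrete, the following minimal simulation sketch estimates the stationary distribution empirically. The transition structure used here (idle at $\bot$ with probability $1/2$, then advance deterministically through $-1, 0, \ldots, \phi-1$ and wrap back to $\bot$) is an illustrative assumption chosen to be consistent with the stated stationary distribution $\boldsymbol{\pi}(\bot) = \frac{2}{n+1}$ and $\boldsymbol{\pi}(j) = \frac{1}{n+1}$; the authoritative definition of $M_\phi$ is the one depicted in Fig. 1.

```python
import random
from collections import Counter

# A minimal simulation sketch of a coin-flip-style chain on the state space
# S_phi = {bot, -1, 0, ..., phi-1} (n = phi + 2 states). The transition
# structure assumed here -- idle at bot with probability 1/2, otherwise
# advance deterministically and wrap from phi-1 back to bot -- is an
# illustrative assumption matching the stated stationary distribution
# pi(bot) = 2/(n+1) and pi(j) = 1/(n+1), not a reproduction of Fig. 1.

BOT = "bot"

def step(state, phi):
    """One transition of the assumed chain."""
    if state == BOT:
        # Coin flip: stay at bot, or start a new phase at step -1.
        return BOT if random.random() < 0.5 else -1
    if state == phi - 1:
        return BOT          # end of phase: return to bot
    return state + 1        # deterministic advance within the phase

def empirical_distribution(phi, steps=1_000_000):
    counts, state = Counter(), BOT
    for _ in range(steps):
        state = step(state, phi)
        counts[state] += 1
    return {s: c / steps for s, c in counts.items()}

if __name__ == "__main__":
    phi = 8
    n = phi + 2
    dist = empirical_distribution(phi)
    print(f"pi(bot) ~ {dist[BOT]:.4f}  (predicted {2 / (n + 1):.4f})")
    print(f"pi(0)   ~ {dist[0]:.4f}  (predicted {1 / (n + 1):.4f})")
```

Under this assumed structure, each return cycle spends two steps at $\bot$ in expectation and one step at each of the remaining $\phi + 1$ states, which is exactly where the factor of 2 in $\boldsymbol{\pi}(\bot)$ comes from.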

Appendix B Standard Probabilistic Arguments

In this section we show how to prove Thm. 1.1 using Prop. 5.4–5.6. The same argument is also used to prove Prop. 5.4 using Lem. 5.11.

Fix some $m, \ell > 0$ and $c > 1$, and define $1 < \beta(c) < \frac{1}{1 - 1/c}$. Let $\{X_i\}_{i=0}^{\infty}$ be a stochastic process such that (1) $X_0 = m$; (2) for every $i \geq 0$, it holds that $X_i \geq 0$ with probability 1; and (3) for every $\alpha > 0$, it holds that $\mathbb{E}(X_{\alpha \cdot \ell} \mid X_{(\alpha - 1) \cdot \ell} = d) \leq (1 - \frac{1}{c}) \cdot d$. The random variable that corresponds to the run-time of an algorithm with the aforementioned properties is defined as $T = \min\{i \mid X_i < 1\}$.

Fix some $\alpha > 0$ and $d \geq 0$, and let $Y_d$ denote the random variable $X_{\alpha \cdot \ell}$ conditioned on the event $X_{(\alpha - 1) \cdot \ell} = d$. By applying Markov's inequality, we get

$\mathbb{P}\left(Y_d > \beta(c) \cdot (1 - \tfrac{1}{c}) \cdot d\right) \leq \mathbb{P}\left(Y_d > \beta(c) \cdot \mathbb{E}(Y_d)\right) < \frac{1}{\beta(c)}.$

Hence,

$\mathbb{P}\left(Y_d \leq \beta(c) \cdot (1 - \tfrac{1}{c}) \cdot d\right) \geq 1 - \frac{1}{\beta(c)}.$

This holds regardless of the values of $d$ and $\alpha$. Hence, for every $\alpha$, with probability at least $1 - \frac{1}{\beta(c)}$, the value of $X_{\alpha \cdot \ell}$ decreases by a multiplicative factor of $\beta(c) \cdot (1 - \frac{1}{c}) < 1$; we denote this event by $A_\alpha$.

Let $j = \left\lceil \log m / \log\left(\frac{1}{\beta(c) \cdot (1 - 1/c)}\right) \right\rceil$ and let $\hat{\alpha}$ denote the index of the $j$-th occurrence of an event from the set $\{A_\alpha \mid \alpha > 0\}$. Since $X_0 = m$, it holds that $X_{\hat{\alpha} \cdot \ell} < 1$, implying that $T / \ell \leq \hat{\alpha}$. It is easily verifiable that $T / \ell$ is stochastically dominated by a random variable $Y \sim \mathrm{NB}(j, 1 - 1/\beta(c))$, where $\mathrm{NB}(j, p)$ counts the independent Bernoulli($p$) trials needed to obtain $j$ successes, so $\mathbb{E}(Y) = j / (1 - 1/\beta(c)) = O(\log m)$; the proof is completed by the properties of $Y$.
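The domination argument above can also be checked empirically. The sketch below simulates a process satisfying properties (1)–(3), using an illustrative decay rule (a uniform multiplicative factor with mean exactly $1 - \frac{1}{c}$, a choice of ours rather than anything prescribed by the proof), and compares the number of $\ell$-step rounds until $X < 1$, i.e., $T/\ell$, against samples of $Y \sim \mathrm{NB}(j, 1 - 1/\beta(c))$. All concrete parameter values are hypothetical.

```python
import math
import random

# A minimal sketch of the argument above under an illustrative assumption:
# each "round" of ell steps multiplies X by an independent random factor
# whose mean is (1 - 1/c), satisfying property (3) of the process. We then
# check empirically that T/ell is dominated (here, in expectation) by
# Y ~ NB(j, 1 - 1/beta(c)). All concrete parameter values are hypothetical.

def run_process(m, c, rng):
    """Simulate one run; return the number of rounds (T/ell) until X < 1."""
    x, rounds = m, 0
    while x >= 1:
        # Illustrative decay: a uniform factor on [0, 2(1 - 1/c)], whose
        # mean is (1 - 1/c), as required by property (3).
        x *= rng.uniform(0, 2 * (1 - 1 / c))
        rounds += 1
    return rounds

def nb_sample(j, p, rng):
    """Number of Bernoulli(p) trials needed to see j successes."""
    trials = successes = 0
    while successes < j:
        trials += 1
        if rng.random() < p:
            successes += 1
    return trials

if __name__ == "__main__":
    rng = random.Random(1)
    m, c = 10_000, 2.0
    beta = 1.5  # any 1 < beta(c) < 1/(1 - 1/c) = 2 works here
    j = math.ceil(math.log(m) / math.log(1 / (beta * (1 - 1 / c))))
    runs = 10_000
    avg_T = sum(run_process(m, c, rng) for _ in range(runs)) / runs
    avg_Y = sum(nb_sample(j, 1 - 1 / beta, rng) for _ in range(runs)) / runs
    print(f"j = {j}, mean rounds = {avg_T:.2f}, mean NB bound = {avg_Y:.2f}")
```

The simulated mean round count stays well below the negative binomial bound, reflecting the slack introduced by applying Markov's inequality round by round.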

Appendix C Figures and Tables

Figure 1: The underlying Markov chain of $\mathtt{PPS}$ with state space $S_\phi = \{\bot, -1, 0, \ldots, \phi-1\}$.
Table 1: The registers maintained by $\mathtt{FFMM}$ and $\mathtt{SSMM}$.

register | values set
$\mathit{step}_v$ | $\{\bot, -1, 0, \ldots, \phi-1\}$
$m_v$ | $\{\bot, \mathtt{U}\} \cup N(v)$
$\mathit{In}_v^{\sigma}(\cdot)$, $\mathit{Out}_v^{\sigma}(\cdot)$ | $\{\bot, \mathtt{U}, \mathtt{H}, \mathtt{E}\}$
$\mathit{In}_v^{r}(\cdot)$, $\mathit{Out}_v^{r}(\cdot)$ | $\{\bot, \mathtt{Acc}, \mathtt{Mreq}\}$
$\mathit{In}_v^{s}(\cdot)$, $\mathit{Out}_v^{s}(\cdot)$ | $\{\bot, -1, 0, \ldots, \phi-1\}$
$\mathit{active}_v$ | $\{\mathrm{true}, \mathrm{false}\}$
$\mathit{chosen}_v$ | $\{\bot\} \cup N(v)$
$P(v)$ | $2^{N(v)}$
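For readers implementing the algorithms, the following sketch shows one possible in-memory encoding of the per-node register set of Table 1. The concrete Python encoding (None for $\bot$, string tags for the symbolic values $\mathtt{U}$, $\mathtt{H}$, $\mathtt{E}$, $\mathtt{Acc}$, $\mathtt{Mreq}$, and integer neighbor handles) is an illustrative assumption of ours, not part of the specification of $\mathtt{FFMM}$ or $\mathtt{SSMM}$.

```python
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, Optional, Union

# A minimal sketch of the register set of Table 1 for a single node v.
# The encoding below (None for bot, string tags for U/H/E/Acc/Mreq,
# integers as opaque neighbor handles) is an illustrative assumption.

Node = int                   # hypothetical neighbor handle
Step = Optional[int]         # None encodes bot; otherwise -1, 0, ..., phi-1

@dataclass
class Registers:
    step: Step = None                                  # step_v
    m: Union[None, str, Node] = None                   # m_v in {bot, U} u N(v)
    in_sigma: Dict[Node, Optional[str]] = field(default_factory=dict)   # {bot,U,H,E}
    out_sigma: Dict[Node, Optional[str]] = field(default_factory=dict)  # {bot,U,H,E}
    in_r: Dict[Node, Optional[str]] = field(default_factory=dict)       # {bot,Acc,Mreq}
    out_r: Dict[Node, Optional[str]] = field(default_factory=dict)      # {bot,Acc,Mreq}
    in_s: Dict[Node, Step] = field(default_factory=dict)                # {bot,-1,...,phi-1}
    out_s: Dict[Node, Step] = field(default_factory=dict)               # {bot,-1,...,phi-1}
    active: bool = False                               # active_v
    chosen: Optional[Node] = None                      # chosen_v in {bot} u N(v)
    P: FrozenSet[Node] = frozenset()                   # P(v), a subset of N(v)
```

Note that every register ranges over a constant-size domain (given $\phi$) except the neighbor-indexed ones, which is consistent with the constant-size-message guarantee stated for the algorithm.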