Almost Time-Optimal Loosely-Stabilizing Leader Election on Arbitrary Graphs Without Identifiers in Population Protocols
Abstract
The population protocol model is a computational model for passive mobile agents. We address the leader election problem, which requires electing a unique leader on arbitrary communication graphs starting from any configuration. Unfortunately, self-stabilizing leader election cannot be solved without knowing the exact number of agents; thus, we consider loosely-stabilizing leader election, which converges to safe configurations in a relatively short time and holds the specification (maintains a unique leader) for a relatively long time. When agents have unique identifiers, Sudo et al. (2019) proposed a protocol that, given an upper bound for the number of agents, converges in expected steps, where is the number of edges. When unique identifiers are not required, they also proposed a protocol that, using random numbers and given , converges in expected steps. Both protocols have a holding time of expected steps and use bits of memory. They also showed that the lower bound of the convergence time is expected steps for protocols with a holding time of expected steps given .
In this paper, we propose protocols that do not require unique identifiers. These protocols achieve convergence times close to the lower bound with increasing memory usage. Specifically, given and an upper bound for the maximum degree, we propose two protocols whose convergence times are and both in expectation and with high probability. The former protocol uses random numbers, while the latter does not require them. Both protocols utilize bits of memory and hold the specification for expected steps.
Keywords and phrases:
Population protocols, Leader election, Loose-stabilization, Self-stabilization
2012 ACM Subject Classification:
Theory of computation → Distributed algorithms
Funding:
The second author was supported by JSPS KAKENHI Grant Number 22H03569.
Editors:
Silvia Bonomi, Letterio Galletta, Etienne Rivière, and Valerio Schiavoni

1 Introduction
The population protocol model, introduced by Angluin et al. [3], is a computational model widely recognized in distributed computing and applicable to passive mobile sensor networks, chemical reaction systems, molecular computation, and similar settings. This model comprises finite state machines (called agents), which form a network (called a population). Agents’ states are updated through communication (called an interaction) between a pair of agents. A simple connected digraph (called a communication graph) determines which pairs of agents can interact. In this model, only one pair of agents interacts at each step. The interactions are determined by a uniform random scheduler.
The leader election problem is one of the most studied problems in population protocols. This problem involves agents electing a unique leader agent from the population and maintaining this unique leader forever. Angluin et al. [3] first studied this problem for complete graphs with a designated common initial state. Under this assumption, many studies have been conducted [3, 6, 11, 17], and a time- and space-optimal protocol [6] has already been proposed. Several studies also exist for arbitrary graphs [1, 2], and a time-optimal protocol [2] has already been proposed.
The self-stabilizing leader election problem requires that agents, starting from any configuration, elect a unique leader agent and maintain it forever. It is known that there is no self-stabilizing leader election protocol for arbitrary graphs [4] or complete graphs [8], and researchers have explored the problem in three ways. The first approach assumes that all agents initially know the exact number of agents [7, 8, 22]. The second approach introduces an oracle that informs agents about the existence of leaders [5, 9, 10]. The third approach relaxes the requirement of maintaining a unique leader forever, introducing the loosely-stabilizing leader election problem, where agents start from any configuration, elect a unique leader within a short time, and maintain this leader for a long time. Sudo et al. [16] first addressed this problem on complete graphs. Subsequent studies have continued to explore this problem [12, 15, 18, 19, 20, 21], as summarized in Table 1.
Sudo, Ooshita, Kakugawa, and Masuzawa [18] first addressed this problem for arbitrary graphs, and it was significantly improved by Sudo, Ooshita, Kakugawa, Masuzawa, Datta, and Lawrence [20], who introduced the novel concept of the Same Speed Timer. They proposed two protocols. The first protocol assumes that agents have unique identifiers and are given as initial knowledge. It converges within expected steps and holds the unique leader for expected steps, using bits of memory. The second protocol assumes that agents can make randomized transitions and is given as initial knowledge. It converges within expected steps and holds the unique leader for expected steps, using bits of memory. Sudo et al. also demonstrated a lower bound on the convergence time of steps for any loosely-stabilizing protocol that holds a unique leader for expected steps.
Loosely-stabilizing leader election protocols that require neither unique identifiers nor random numbers were proposed [19] and then improved [22]. The improved protocol, given and as initial knowledge, converges within expected steps and holds the unique leader for expected steps, using bits of memory [22].
Protocol | Graph | Convergence | Holding | Memory | Requisite
--- | --- | --- | --- | --- | ---
[16] | complete | | | | –
[12] | complete | | | | –
[12] | complete | | | – | –
[12] | complete | | | – | –
[18] | arbitrary | | | | agent identifiers
[18] | arbitrary | | | | random numbers
[20] | arbitrary | | | | agent identifiers
[20] | arbitrary | | | | random numbers
[20] | arbitrary | | | – | –
[19] | arbitrary | | | | –
This | arbitrary | | | | random numbers
This | arbitrary | | | | –
1.1 Our Contribution
In this paper, we propose a protocol whose convergence time is nearly optimal on anonymous (without unique identifiers) arbitrary graphs, as supported by the lower bound [20]. The lower bound for the complete graph [12] is , but this is a special case, as the lower bound [20] is known to hold even when . Given and , the protocol converges within steps if the transition is randomized, and within steps if the transition is deterministic (a transition is said to be deterministic if it does not require random numbers; even with deterministic transitions, we allow the protocol to exploit the randomness with which initiator and responder roles are chosen), both in expectation and with high probability. The protocol holds the unique leader for expected steps and utilizes bits of memory. The proposed protocol has a better convergence time than the state-of-the-art self-stabilizing leader election protocol [22], which converges within steps and requires the knowledge of .
To achieve this convergence time, we utilize the Same Speed Timer proposed in [20], which requires two-hop coloring. The self-stabilizing two-hop coloring protocol was first studied by Angluin et al. [4] and further explored by Sudo et al. [20] (see Table 2). In this paper, we propose two new self-stabilizing two-hop coloring protocols: one with randomized transitions and one with deterministic transitions. Both protocols require and as initial knowledge. The former converges within steps, both in expectation and with high probability, and uses bits of memory. The latter converges within steps, both in expectation and with high probability, and also uses bits of memory. In the latter protocol, agents generate random numbers from interactions among themselves. To ensure the independence of these random numbers, we employ a self-stabilizing normal coloring protocol to assign superiority or inferiority between adjacent agents: in each interaction, only the superior agent uses the interaction to generate random numbers. The normal coloring protocol converges within steps, both in expectation and with high probability, and utilizes bits of memory.
Protocol | Convergence | Memory | Knowledge | Requisite
--- | --- | --- | --- | ---
Angluin et al. [4] | | | – | random numbers
Angluin et al. [4] | | | – | –
Sudo et al. [20] | | | | random numbers
(this) | | | | random numbers
(this) | | | | –
2 Preliminaries
In this paper, denotes the set of natural numbers no less than one, and refers to . If we use the natural logarithm, we explicitly specify the base by writing .
A population is represented by a simple connected digraph , where () represents the set of agents, and represents the pairs of agents indicating potential interactions. An agent can interact with an agent if and only if , where is the initiator and is the responder. We assume is symmetric, that is, for every , the proposition holds. We also denote and . The diameter of is denoted by . The degree of agent is denoted by , and the maximum degree is denoted by . The upper bound of satisfies , and the upper bound of satisfies (given only , holds, thus ).
A protocol is defined as a 5-tuple , where represents the finite set of states of agents, represents the finite set of output symbols, represents the range of random numbers, is the transition function, and is the output function. When an initiator , whose state is , interacts with a responder , whose state is , the agents update their states via the transition function , using their current states and a random number , to such that . An agent whose state is outputs . A protocol has deterministic transitions if and only if holds; otherwise, it has randomized transitions. The memory usage of protocol is defined as bits.
A configuration represents the states of all agents. The set of all configurations of protocol is denoted by . A configuration transitions to by an interaction and a random number if and only if and holds. Transitioning from a configuration to by an interaction and a random number is denoted by . A uniform random scheduler determines which pair of agents interacts at each step, where (for ) is a random variable satisfying . An infinite sequence of random numbers represents a random number generated at each step, where (for ) is a random variable satisfying . Given an initial configuration , a uniform random scheduler , and a sequence of random numbers , the execution of protocol is denoted by , where (for ) holds. If and are clear from the context, we may simply write . A set of configurations is safe if and only if there is no configuration for any configuration and any execution . A protocol is silent if and only if no state changes after the execution reaches a safe configuration.
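To make the execution model concrete, here is a minimal, hypothetical Python sketch (the names `run_execution` and `max_delta` and the toy "adopt the maximum" rule are ours, not part of the model's definitions): at every step, a uniform random scheduler picks one directed edge and applies the transition function to the pair of states.

```python
import random

def run_execution(states, edges, delta, steps, seed=0):
    """Simulate an execution C0, C1, ...: at each step, a uniform random
    scheduler picks one directed edge (initiator, responder) uniformly at
    random and applies the transition function delta to the two states."""
    rng = random.Random(seed)
    states = list(states)
    for _ in range(steps):
        u, v = rng.choice(edges)   # uniform random scheduler
        r = rng.random()           # random number fed to the transition
        states[u], states[v] = delta(states[u], states[v], r)
    return states

def max_delta(p, q, r):
    """Toy protocol with deterministic transitions (it ignores r):
    both agents adopt the larger state, a simple epidemic/broadcast."""
    m = max(p, q)
    return m, m
```

On a path of three agents with symmetric edges, the largest value spreads to all agents after sufficiently many steps, illustrating how a broadcast progresses under the uniform random scheduler.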
For a protocol that solves a population protocol problem, the expected holding time and the expected convergence time are defined as follows. The specification of the problem, which is a required condition on an execution, is denoted by . For any configuration , any uniform random scheduler , and any infinite sequence of random numbers , the expected number of steps for which an execution satisfies is defined as the expected holding time, denoted . For any set of configurations , any configuration , any uniform random scheduler , and any infinite sequence of random numbers , the expected number of steps from the beginning of the execution until the configuration reaches is defined as the expected convergence time, denoted . A computation is said to finish with high probability if and only if it finishes with probability for .
The leader election problem requires that all agents output either or , where represents a leader and represents a follower. The specification of the leader election is denoted by . For an execution , the configurations satisfy if and only if there is an agent such that , and holds.
Definition 1 (Loosely-stabilizing leader election[16]).
A protocol is an -loosely-stabilizing leader election protocol if and only if there exists a set of configurations such that and holds.
A protocol is a self-stabilizing protocol for a problem if and only if there exists a set of safe configurations such that any execution starting from a safe configuration satisfies the specification of the problem (called closure), and any execution starting from any configuration reaches a safe configuration (called convergence).
Definition 2.
A protocol is a self-stabilizing normal coloring protocol if and only if there exists a non-negative integer such that for any configuration and any execution , the following conditions hold: and .
Definition 3.
A protocol is a self-stabilizing two-hop coloring protocol if and only if there exists a non-negative integer such that for any configuration and any execution , the following conditions hold: and .
3 Self-Stabilizing Two-Hop Coloring
In this section, we introduce a self-stabilizing two-hop coloring protocol with randomized transitions, alongside a self-stabilizing two-hop coloring protocol with deterministic transitions.
Two distinct agents are called two-hop located if and only if there exists such that . A graph is considered two-hop colored if and only if, for any pair of agents and that are two-hop located, and are assigned distinct colors.
The basic strategy is similar to that of Angluin et al. [4] and Sudo et al. [20]. The differences lie in the method for generating colors and the length of the array used to record the colors of interacted agents. Angluin et al. memorized all generated colors using an array of length , whereas Sudo et al. recorded only the most recent color. In our protocols, the agents record the last colors.
We present a general strategy for color collision detection. When two agents interact, they record each other’s color together with a common binary random stamp. If there is no color collision, then when they interact again, they find that they remember each other’s color with the same stamp value. Now assume that agents and have a color collision, that is, they are two-hop located with a common neighbor and have the same color. Consider the scenario in which interactions occur in the order , , , and (resp. ) records ’s (resp. ’s) color with a stamp and then records ’s color (which is also ’s color) with another stamp. When and interact again, they notice that they remember each other’s color with different stamp values and detect the color collision.
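As an illustration, the stamp mechanism can be sketched as follows in Python (the `Agent` class, the dictionary `memory`, and the `interact` function are our own hypothetical rendering, not the paper's arrays): both partners record each other's color with a fresh common binary stamp, and a mismatch between remembered stamps on a later meeting signals a collision.

```python
class Agent:
    """Hypothetical agent for sketching stamp-based collision detection."""
    def __init__(self, color):
        self.color = color
        self.memory = {}   # partner color -> stamp stored at the last meeting

def interact(a, b, rng):
    """Return True iff a color collision is detected.

    If both agents remember each other's color, the stamps they stored must
    agree (they were written in the same earlier interaction); a mismatch
    means some other agent with the same color interfered in between."""
    collision = (a.color in b.memory and b.color in a.memory
                 and a.memory[b.color] != b.memory[a.color])
    stamp = rng.randint(0, 1)      # fresh common binary stamp
    a.memory[b.color] = stamp
    b.memory[a.color] = stamp
    return collision
```

Replaying the scenario from the text (two agents sharing a color and a common neighbor) makes the neighbor overwrite its remembered stamp for that color, so the next meeting with the first agent reports a collision whenever the two stamps differ.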
In both protocols, each agent has arrays of size to record the last colors and their stamps. Both protocols are identical except for the way they generate colors. In the randomized protocol, agents generate colors using their ability to generate uniform random numbers. In the deterministic protocol, agents generate colors using the roles of initiator and responder: to generate an -digit binary random number, an agent generates one random bit according to its role (initiator or responder) in each interaction, and repeats this times. To ensure the independence of the random numbers, only one agent may use each interaction to generate random numbers. To decide which one, we use normal coloring. A graph is normal colored if and only if agents and have different colors for any pair . We call the colors assigned by normal coloring normal colors. To guarantee independence among random numbers, when two agents interact, only the agent with the larger normal color value may use the interaction to generate random numbers. Although this mechanism gives agents with smaller normal colors no chance to generate random numbers, random numbers are needed when two agents detect a color collision; in that case, two random numbers are provided as new colors by an agent that has the larger normal color value and has already generated two or more numbers.
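The role-based bit generation and the superiority rule can be sketched as follows (hypothetical names; the real protocol interleaves this with the transition function): on each interaction, only the agent with the larger normal color appends one bit, 0 if it happened to be the initiator and 1 if it was the responder.

```python
class ColoredAgent:
    """Hypothetical agent holding a normal color and pending random bits."""
    def __init__(self, normal_color):
        self.normal_color = normal_color
        self.bits = []          # bits harvested from interaction roles

def harvest_bit(initiator, responder):
    """Only the agent with the larger normal color consumes this
    interaction's role bit; adjacent normal colors are distinct, so
    exactly one of the two agents is superior."""
    if initiator.normal_color > responder.normal_color:
        initiator.bits.append(0)   # superior agent acted as initiator
    else:
        responder.bits.append(1)   # superior agent acted as responder

def as_number(agent):
    """Interpret the harvested bits as a binary number."""
    n = 0
    for b in agent.bits:
        n = (n << 1) | b
    return n
```

Because roles are assigned uniformly at random by the scheduler and each interaction contributes its bit to only one agent, the resulting numbers are uniform and independent across agents.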
3.1 Protocol
In this subsection, we introduce the randomized self-stabilizing two-hop coloring protocol . Given and , the protocol achieves convergence within steps, both in expectation and with high probability, while requiring bits of memory per agent.
An agent in the protocol has four variables: , , , and . The variable represents the two-hop color of the agent. The variable is an array that stores the colors of the last agents it interacted with. The variable is an array of size , with each entry being either or , used to record the stamp associated with each memorized color. Lastly, serves as a temporary index to locate the color of the interacting agent in .
The protocol is given by Algorithm 1 and consists of four parts: i) reading memory, ii) collision detection, iii) saving colors, and iv) stamping.
-
i)
Reading memory (lines 1–6) aims to find the color of the interaction partner in the array of recorded colors. Each agent (where ) interacting with another agent searches for the partner’s color and records its minimum index in if it exists; otherwise, it sets to .
-
ii)
Collision detection (lines 7 and 13–16) aims to generate new colors when a stamp collision is detected. To resolve a detected collision, two uniform random numbers are generated from the range . These numbers are then used to update and , respectively.
-
iii)
Saving colors (lines 8–11) aims to maintain arrays and in a Least Recently Used (LRU) fashion.
-
iv)
Stamping (lines 12 and 17–18) aims to generate a common binary stamp for the two interacting agents, and to move the color and stamp of the current interaction partner to the heads of the arrays and .
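The saving-colors step (part iii) behaves like a small LRU cache over the two parallel arrays. A stand-alone hypothetical sketch (`remember` and the list pair are our names, not the algorithm's variables):

```python
def remember(hist_color, hist_stamp, color, stamp, k):
    """Keep the last k partner colors and their stamps in LRU order:
    the current partner moves to the front, and entries beyond k drop off.
    hist_color and hist_stamp are parallel lists, updated in place."""
    if color in hist_color:
        i = hist_color.index(color)     # drop the stale entry for this color
        hist_color.pop(i)
        hist_stamp.pop(i)
    hist_color.insert(0, color)
    hist_stamp.insert(0, stamp)
    del hist_color[k:]                  # trim both arrays to length k
    del hist_stamp[k:]
```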
We have the following theorems.
Theorem 4.
Given the upper bound and , is a self-stabilizing two-hop coloring protocol with randomized transitions, and the convergence time is steps both in expectation and with high probability.
Theorem 5.
Given the upper bound and , is a self-stabilizing two-hop coloring protocol with deterministic transitions, and converges to safe configurations within steps both in expectation and with high probability.
4 Loosely-Stabilizing Leader Election
In this section, we propose a loosely-stabilizing leader election protocol. The protocol uses a self-stabilizing two-hop coloring protocol as a subroutine: with the randomized coloring protocol it has randomized transitions, and with the deterministic coloring protocol it has deterministic transitions. In both cases, the protocol holds a unique leader for expected steps and uses bits of memory. Note that even the variant with randomized transitions generates random numbers deterministically, as in the deterministic variant, outside of two-hop coloring, since this does not affect the overall complexity.
The basic strategy of leader election is as follows: i) all agents become followers; ii) some leader candidates emerge, and the number of candidates becomes with high probability; iii) if there are multiple leaders, return to i). Each agent has variables and timers: , , , , , , , , , and . Here, is a sufficiently large value for the protocol to work correctly. The status of an agent is represented by , where (leader candidate), (leader mode), and (duplication check mode) represent leaders, and represents a follower. The variable is used for detecting multiple leaders. The variable represents the identifier of leaders.
In the protocol, agents mainly use broadcast (also called epidemic or propagation) to inform others of information. In the broadcast mechanism, information from one agent is repeatedly copied (with modification if needed) to other agents when two agents interact. In order to detect the end of operations (including broadcasts) for all agents, the agents use timers. The timers decrease via Larger Time Propagation and the Same Speed Timer; using these, all timer values decrease gradually and almost synchronously. Larger Time Propagation means that when an agent interacts with an agent , (resp. ) is set to (resp. ). A variable is used to implement the Same Speed Timer and represents whether an agent may decrease its own timers in the current interaction. The Same Speed Timer decreases a timer value by when an agent interacts with the same agent twice in a row. For the Same Speed Timer, after the two-hop coloring has converged, the agents use the colors to determine whether the current partner is the last partner. A read-only variable represents the color determined by the two-hop coloring. A variable represents the color of the agent it interacted with previously. The domains of and depend on which self-stabilizing two-hop coloring protocol is used.
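Combining the two timer mechanisms, a single interaction can be sketched as below (hypothetical `TimedAgent` and `propagate`; the real protocol keeps several timers and a flag, which we collapse into one timer and a remembered partner color): both agents first adopt the larger timer value, then each counts down by one only if its current partner has the same color as its previous partner.

```python
class TimedAgent:
    """Hypothetical agent with one timer, for illustrating the mechanisms."""
    def __init__(self, color, timer):
        self.color = color
        self.timer = timer
        self.prev_color = None   # color of the previous interaction partner

def propagate(u, v):
    """Larger Time Propagation: both agents take the maximum timer value.
    Same Speed Timer: an agent decrements only when it meets a partner
    whose color equals its previous partner's color (a repeated meeting)."""
    t = max(u.timer, v.timer)
    u.timer = v.timer = t
    for x, partner in ((u, v), (v, u)):
        if x.prev_color == partner.color:
            x.timer = max(0, x.timer - 1)
        x.prev_color = partner.color
```

The count-down rate is thus tied to repeated meetings rather than raw interaction counts, which is what keeps all timers decreasing at roughly the same speed.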
4.1 Outline of
We now outline the protocol. We call a configuration satisfying certain conditions a phase. The protocol mainly has three phases: i) Global Reset, ii) Leader Generation, iii) Leader Detection. We first give an overview of the three phases, together with the roles of the timers, using the example flow shown in Fig. 1, where the height of a timer represents the relative magnitude of its value.
-
i)
Global Reset is the phase that resets all agents when some inconsistency is detected. (In Fig. 1(a), multiple leaders are detected.) A configuration is in Global Reset if there exist agents whose . When some inconsistency is detected, a Global Reset phase starts and the kill virus is created. The kill virus makes an agent a follower, sets the agent’s identifier to (a tentative value used before generating an id), and also erases the search virus. The presence of the kill virus is represented by a positive value of , which serves as its time to live (TTL). When the Global Reset phase begins, some agents’ are set to the maximum value and spread to all agents with decreasing values, as shown in Fig. 1(b). Eventually, becomes , and the Global Reset phase finishes. While , takes its maximum value and takes 0.
-
ii)
Leader Generation is the phase where agents generate leaders. In the protocol, a leader keeps the value of at its maximum, while followers propagate the value with Larger Time Propagation and the Same Speed Timer. If there is no leader (Fig. 1(c)), the values of for some followers eventually become . These followers then become candidates () (Fig. 1(d)). Each leader candidate generates a random number as an identifier using interactions; the way random numbers are generated is described in Section 3. The candidates broadcast their s to all agents, and a candidate becomes a follower () if it encounters a larger value. While generating an identifier (), takes its maximum value, and it gradually decreases after generating . When the of a candidate becomes , it becomes a leader (). When there are no candidates left, Leader Generation is finished.
-
iii)
Leader Detection is the phase where leaders determine whether there are multiple leaders. The leaders generate search viruses periodically using . When becomes , the agent starts generating the of a search virus. If there are multiple leaders, the leaders generate search viruses almost simultaneously thanks to Larger Time Propagation. When leaders generate search viruses, they generate random numbers using interactions to determine the type of the search virus. The type of the search virus, with its TTL, spreads to all agents. When two different types of search viruses meet, agents create the kill virus and move to the Global Reset phase (Fig. 1(a)). In contrast, if there is a unique leader, one search virus is periodically generated and expires (Fig. 1(e)).
The phases cycle through Global Reset, Leader Generation, and Leader Detection in this order. If there exists a unique leader, the configuration stays in the Leader Detection phase with high probability; otherwise, the phase moves to the Global Reset phase. Timers and are used so that the Global Reset and Leader Generation phases last for enough steps.
4.2 Details of
The protocol is given by Algorithm 2, Algorithm 3, and Algorithm 4. We now explain its details. The protocol has five parts: i) Two-hop Coloring, ii) Timer Count Down, iii) Reset, iv) Leader Generation, v) Leader Detection. The relationship between the three phases and the five parts is as follows: the Global Reset phase corresponds to the Reset part, the Leader Generation phase to the Generate part, and the Leader Detection phase to the Detect part. The Timer Count Down part operates throughout all phases, while the Two-hop Coloring part is completed before the phases begin. Throughout this explanation, we consider an initiator interacting with a responder .
-
i)
In line 1, the agents execute one of the two-hop coloring protocols.
-
ii)
Timer Count Down (lines 2, 4–12, and 15–22) aims to increase or decrease timers. First, the interacting agents determine whether the current partner is the same as the last partner in REPEAT_CHECK (lines 2 and 17–20) to implement the Same Speed Timer. After that, each agent saves the current partner’s color to its own . Then, agents decrease timers if holds. (lines 4–5) and (lines 6–7) are handled by Larger Time Propagation and Count Down; that is, for , is set to , and if holds, is decreased by 1 (resp. ). If neither interacting agent is a candidate (), they increase or decrease like (lines 8–10). For , if is a leader ( or ), is set to (lines 11–12). If or is a candidate (), increases or decreases in Leader Generation. increases or decreases in Leader Detection.
-
iii)
Reset (lines 3 and 23–24) aims to reset the population when some inconsistency is detected. For , if holds, sets to . In other words, becomes a follower, its identifier becomes , and it erases the search virus. This reset happens if and only if there are multiple leaders and candidates’ have not been reset to : specifically, when different search viruses meet (lines 60–61, 65–66, 69–70), and when candidates’ have not been reset (lines 27–28).
-
iv)
Generate Leader (lines 13 and 25–40) aims to generate a new leader when there are no leaders. For , if holds, each agent sets to to prevent starting Leader Generation during the Global Reset phase (lines 25–26). Firstly, for , if becomes , determines that there are no leaders, becomes a leader candidate (), and sets to (lines 27–29). At this time, if is not (i.e., it has not been reset), sets to and moves to the Global Reset phase (line 29). Secondly, candidates () generate random numbers as their own identifiers (). In each interaction, if the candidate is an initiator, is updated to ; otherwise, is updated to , until becomes no less than (lines 32–33). For the independence of random numbers, if both agents are candidates generating random numbers, the responder becomes a follower () by resetting to (lines 30–31). Thirdly, if a candidate’s becomes no less than , the candidate starts to broadcast its own to the other agents (lines 34–35). This broadcast allows all agents to learn the maximum of the candidates. For , if is a candidate and holds, becomes a follower () and sets to . For , if is a follower and holds, sets to . To avoid generating new candidates while there are candidates in the population, for , if is a follower and is a candidate, sets to (lines 36–37); that is, we consider to virtually hold . Eventually, all agents’ s become the same, and most candidates become followers (while some candidates remain). The candidates use to wait until all candidates have finished generating their s and the s have been broadcast for a sufficiently long time. A candidate decreases by 1 if its is (line 39). Finally, when a candidate’s becomes , the candidate becomes a new leader () and sets to (line 40). The range of generated identifiers () is , so there exists a unique leader with high probability.
-
v)
Detect (lines 41–66) aims to determine whether there are multiple leaders. Leaders generate search viruses every time their becomes . If or is a candidate (), the agents set their to to prevent generating a search virus (lines 41–42). Firstly, for , if is a leader ( or ) whose becomes , becomes and starts generating random numbers to determine the type of the search virus (lines 44–45). At the beginning of generating random numbers, sets to . Random numbers are generated in the same way as for generation. While generating random numbers, a leader sets its own to to announce that there exist agents generating random numbers (lines 46–47). When a leader has finished generating random numbers, it sets its own to (lines 48–49). Secondly, agents detect multiple leaders if there are multiple leaders. The generated search virus is broadcast to all agents via some agents until becomes (lines 50–59). If a follower having a search virus and a leader not having a search virus interact, except when their types are the same, they set to and the phase moves to Global Reset (lines 51–52). If a follower not having a search virus and a leader having one interact, the follower sets its own to the leader’s and its own to the leader’s (lines 53–54). If and are followers and they have different types of search viruses, they set to and move to the Global Reset phase (lines 56–57). If and are followers and there exist agents satisfying and for , is set to and is set to (lines 58–59). If and are leaders, they set to and move to the Global Reset phase (lines 60–61). Finally, both agents’ run Larger Time Propagation, and is decreased by if holds (lines 62–63). For , if holds, is set to to prevent generating a new search virus while there is a search virus in the population (line 65). For , if is a leader and becomes less than , becomes (line 66). The range of random numbers for generating types is , so when there are multiple leaders in the population, the types generated by the leaders are not all the same with high probability.
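The id broadcast inside Leader Generation follows a simple "larger id wins" rule, which can be sketched as follows (hypothetical `LEAgent` and `broadcast_id`; statuses are strings instead of the protocol's status values): both agents adopt the larger id, and a candidate that sees a larger id than its own becomes a follower.

```python
class LEAgent:
    """Hypothetical agent with a status and a (possibly tentative) id."""
    def __init__(self, status, ident):
        self.status = status   # "candidate", "leader", or "follower"
        self.id = ident

def broadcast_id(u, v):
    """Larger ids spread epidemically; outvoted candidates drop out."""
    big = max(u.id, v.id)
    for x in (u, v):
        if x.status == "candidate" and x.id < big:
            x.status = "follower"
        x.id = big
```

With distinct random ids, repeated interactions leave exactly one surviving candidate, which later becomes the unique leader.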
Lemma 6.
For any execution, all candidates’ values no less than are independent and uniform if they started to be generated during the execution. All leaders’ values no less than are independent and uniform if they were generated from the beginning of the execution.
4.3 Analysis
In this subsection, we analyze the expected convergence time and the expected holding time of . We assume , and . We will prove the following equations under these assumptions:
.
.
Here, and are the sets of configurations described later.
We define the sets of configurations to prove the above equations:
is the safe configurations of the self-stabilizing two-hop coloring.
is the set of configurations where each agent’s is the same as the last interacted agent’s color.
.
.
.
.
.
.
.
.
.
.
4.3.1 Expected Holding Time
Lemma 7.
Let and . If holds, then holds.
Proof.
Let . We assume that holds with probability at least . Then, we have . Solving this inequality gives .
We say that an agent encounters a counting interaction when it interacts with an agent such that holds.
Lemma 8.
Let and . The probability that every agent encounters less than counting interactions while is at least .
Lemma 9.
Let and . For any , and for any such that is no more than the maximum value of the domain of , when holds, the probability that holds is at least .
Lemma 10.
Let and . For any integer , and any integer satisfying , if holds, then holds.
Proof.
Since there exists an agent satisfying in , the probability that every agent’s holds in is at least by Lemma 9. Since every agent satisfies in , holds by Lemma 8. Thus, holds by the union bound. Repeating this times, we obtain by the union bound.
We can prove Lemma 11 in the same way as Lemma 10, and Lemma 12 by substituting into Lemma 10. Lemmas 13, 14, and 15 analyze the probability that configurations keep certain conditions over certain intervals.
Lemma 11.
Let and . For any integer , and any integer satisfying , if holds, then holds.
Lemma 12.
Let and . holds.
Lemma 13.
Let and . holds.
Lemma 14.
Let and . holds.
Lemma 15.
Let and . holds.
Lemma 16.
.
Proof.
4.3.2 Expected Convergence Time
We first analyze the number of interactions until all timers converge to with high probability. Let be the maximum value of the domain of timers, that is, for and for , , and .
Lemma 17.
Let and . For any , if every agent’s increases only by Larger Time Propagation (not including setting to a specific value like by leaders etc.), the number of interactions until every agent’s becomes is less than with probability at least .
Proof.
Let . From the mechanism of Larger Time Propagation, for every agent , does not become if . Thus, when every agent has decreased its timer by at least from , holds (i.e., the maximum value of decreases by at least 1). Let be a binomial random variable that represents the number of interactions in which an agent participates during interactions. From the Chernoff bound (Eq. 4.5 in [14]), . Thus, the probability that an agent interacts no less than times during interactions is at least . Let be a binomial random variable that represents the number of counting interactions that an agent encounters among the interactions in which it participates. The probability that is . Thus, the probability that an agent encounters at least one counting interaction is at least . Let denote the number of interactions until an agent decreases its by at least . Since holds, holds. By Markov’s inequality, the probability that an agent does not decrease during interactions is no more than . Thus, the probability that an agent does not decrease during interactions is no more than . By the union bound, the probability that some agent does not decrease during interactions is no more than . Let be the event that every agent decreases by at least 1 during interactions. We consider the expected number of trials until succeeds times using geometric distributions. In other words, for , let be independent geometric random variables such that . Consider the sum of independent random variables . Note that holds. From Janson’s inequality (Theorem 2.1 in [13]), . Thus, the number of trials until succeeds times is less than with probability at least . Therefore, the number of interactions until all agents’ become is with probability at least .
Lemma 18 bounds the convergence time.
Lemma 18.
Let and . The number of interactions until the configuration reaches is with probability .
Theorem 19.
Protocol is a randomized -loosely-stabilizing leader election protocol for arbitrary graphs when if is used for two-hop coloring.
Theorem 20.
Protocol is a deterministic -loosely-stabilizing leader election protocol for arbitrary graphs when if is used for two-hop coloring.
5 Conclusion
We proposed new loosely-stabilizing leader election population protocols for arbitrary graphs without identifiers. One is randomized, and the other is deterministic. The randomized protocol converges within steps and the deterministic one within steps, both in expectation and with high probability. Both protocols hold a unique leader for expected steps and utilize bits of memory. The convergence time is close to the known lower bound of .
References
- [1] Dan Alistarh, Rati Gelashvili, and Joel Rybicki. Fast Graphical Population Protocols. In 25th International Conference on Principles of Distributed Systems (OPODIS 2021), volume 217, pages 14:1–14:18, 2022. doi:10.4230/LIPICS.OPODIS.2021.14.
- [2] Dan Alistarh, Joel Rybicki, and Sasha Voitovych. Near-optimal leader election in population protocols on graphs. In Proceedings of the 2022 ACM Symposium on Principles of Distributed Computing, pages 246–256, 2022. doi:10.1145/3519270.3538435.
- [3] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. Distributed Computing, 18(4):235–253, 2006. doi:10.1007/S00446-005-0138-3.
- [4] Dana Angluin, James Aspnes, Michael J. Fischer, and Hong Jiang. Self-stabilizing population protocols. ACM Trans. Auton. Adapt. Syst., 3(4), 2008. doi:10.1145/1452001.1452003.
- [5] Joffroy Beauquier, Peva Blanchard, and Janna Burman. Self-stabilizing leader election in population protocols over arbitrary communication graphs. In Principles of Distributed Systems, pages 38–52, 2013. doi:10.1007/978-3-319-03850-6_4.
- [6] Petra Berenbrink, George Giakkoupis, and Peter Kling. Optimal time and space leader election in population protocols. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 119–129, 2020. doi:10.1145/3357713.3384312.
- [7] Janna Burman, Ho-Lin Chen, Hsueh-Ping Chen, David Doty, Thomas Nowak, Eric Severson, and Chuan Xu. Time-optimal self-stabilizing leader election in population protocols. In Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing, pages 33–44, 2021. doi:10.1145/3465084.3467898.
- [8] Shukai Cai, Taisuke Izumi, and Koichi Wada. How to prove impossibility under global fairness: On space complexity of self-stabilizing leader election on a population protocol model. Theory of Computing Systems, 50(3):433–445, 2012. doi:10.1007/S00224-011-9313-Z.
- [9] Davide Canepa and Maria Gradinariu Potop-Butucaru. Stabilizing leader election in population protocols. Research Report RR-6269, INRIA, 2007.
- [10] Michael Fischer and Hong Jiang. Self-stabilizing leader election in networks of finite-state anonymous agents. In Principles of Distributed Systems, pages 395–409, 2006. doi:10.1007/11945529_28.
- [11] Leszek Gąsieniec, Grzegorz Stachowiak, and Przemyslaw Uznanski. Almost logarithmic-time space optimal leader election in population protocols. In The 31st ACM Symposium on Parallelism in Algorithms and Architectures, pages 93–102, 2019. doi:10.1145/3323165.3323178.
- [12] Taisuke Izumi. On space and time complexity of loosely-stabilizing leader election. In Structural Information and Communication Complexity, pages 299–312, 2015. doi:10.1007/978-3-319-25258-2_21.
- [13] Svante Janson. Tail bounds for sums of geometric and exponential variables. Statistics & Probability Letters, 135:1–6, 2018.
- [14] Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005. doi:10.1017/CBO9780511813603.
- [15] Yuichi Sudo, Ryota Eguchi, Taisuke Izumi, and Toshimitsu Masuzawa. Time-Optimal Loosely-Stabilizing Leader Election in Population Protocols. In 35th International Symposium on Distributed Computing (DISC 2021), volume 209, pages 40:1–40:17, 2021. doi:10.4230/LIPICS.DISC.2021.40.
- [16] Yuichi Sudo, Junya Nakamura, Yukiko Yamauchi, Fukuhito Ooshita, Hirotsugu Kakugawa, and Toshimitsu Masuzawa. Loosely-stabilizing leader election in a population protocol model. Theoretical Computer Science, 444:100–112, 2012. doi:10.1016/J.TCS.2012.01.007.
- [17] Yuichi Sudo, Fukuhito Ooshita, Taisuke Izumi, Hirotsugu Kakugawa, and Toshimitsu Masuzawa. Time-optimal leader election in population protocols. IEEE Transactions on Parallel and Distributed Systems, 31(11):2620–2632, 2020. doi:10.1109/TPDS.2020.2991771.
- [18] Yuichi Sudo, Fukuhito Ooshita, Hirotsugu Kakugawa, and Toshimitsu Masuzawa. Loosely-stabilizing leader election on arbitrary graphs in population protocols. In Principles of Distributed Systems, pages 339–354, 2014. doi:10.1007/978-3-319-14472-6_23.
- [19] Yuichi Sudo, Fukuhito Ooshita, Hirotsugu Kakugawa, and Toshimitsu Masuzawa. Loosely stabilizing leader election on arbitrary graphs in population protocols without identifiers or random numbers. IEICE Transactions on Information and Systems, E103.D(3):489–499, 2020. doi:10.1587/TRANSINF.2019FCP0003.
- [20] Yuichi Sudo, Fukuhito Ooshita, Hirotsugu Kakugawa, Toshimitsu Masuzawa, Ajoy K. Datta, and Lawrence L. Larmore. Loosely-stabilizing leader election for arbitrary graphs in population protocol model. IEEE Transactions on Parallel and Distributed Systems, 30(6), 2019. doi:10.1109/TPDS.2018.2881125.
- [21] Yuichi Sudo, Fukuhito Ooshita, Hirotsugu Kakugawa, Toshimitsu Masuzawa, Ajoy K. Datta, and Lawrence L. Larmore. Loosely-stabilizing leader election with polylogarithmic convergence time. Theoretical Computer Science, 806:617–631, 2020. doi:10.1016/J.TCS.2019.09.034.
- [22] Yuichi Sudo, Masahiro Shibata, Junya Nakamura, Yonghwan Kim, and Toshimitsu Masuzawa. Self-stabilizing population protocols with global knowledge. IEEE Transactions on Parallel and Distributed Systems, 32(12):3011–3023, 2021. doi:10.1109/TPDS.2021.3076769.