
The Expressive Power of Uniform Population Protocols with Logarithmic Space

Philipp Czerner, Technical University of Munich, Germany · Vincent Fischer, Technical University of Munich, Germany · Roland Guttenberg, Technical University of Munich, Germany
Abstract

Population protocols are a model of computation in which indistinguishable mobile agents interact in pairs to decide a property of their initial configuration. Originally introduced by Angluin et al. in 2004 with a constant number of states, research nowadays focuses on protocols where the space usage depends on the number of agents. The expressive power of population protocols has so far, however, only been determined for protocols using o(log n) states, which compute only semilinear predicates, and for Ω(n) states. This leaves a significant gap, particularly concerning protocols with Θ(log n) or Θ(polylog n) states, which are the most common constructions in the literature. In this paper we close the gap and prove that for any ε > 0 and f ∈ Ω(log n) ∩ 𝒪(n^{1−ε}), both uniform and non-uniform population protocols with Θ(f(n)) states can decide exactly those predicates whose unary encoding lies in 𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n).

Keywords and phrases:
Population Protocols, Uniform, Expressive Power
Copyright and License:
© Philipp Czerner, Vincent Fischer, and Roland Guttenberg; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Distributed computing models
Related Version:
Preprint: https://arxiv.org/abs/2408.10027
Previous Brief Announcement in DISC 2024: https://doi.org/10.4230/LIPIcs.DISC.2024.44
Editors:
Kitty Meeks and Christian Scheideler

1 Introduction

Population protocols are a model of computation in which indistinguishable mobile agents randomly interact in pairs to decide whether their initial configuration satisfies a given property. The decision is taken by stable consensus; eventually all agents agree on whether the property holds or not, and never change their mind again. While originally introduced to model sensor networks [4], population protocols are also very close to chemical reaction networks [24], a model in which agents are molecules and interactions are chemical reactions.

Originally agents were assumed to have a finite number of states [4, 5, 6]; however, many predicates then provably require at least Ω(n) time to decide [20, 7, 1], as opposed to recent breakthroughs of 𝒪(log n) time using 𝒪(log n) or even fewer states for important tasks like leader election [9] and majority [18]. Limiting the number of states to logarithmic is important in most applications, especially the chemical reaction setting, since a number of states linear in n would imply the unrealistic number of approximately 10²³ different chemical species. Therefore most recent literature focuses on the polylogarithmic time and space setting, and determines time-space tradeoffs for various important tasks like majority [3, 1, 2, 21, 8, 18], leader election [1, 21, 9] or estimating/counting the population size [19, 15, 10, 16, 17].

This leads to the interesting open problem of characterizing the class of predicates which can be computed in polylogarithmic time using a logarithmic or polylogarithmic number of states. There is however a fundamental problem with working on this question: despite the focus on 𝒪(log n) states in recent times, the expressive power for this number of states has not yet been determined. While it is known that protocols with o(log n) states can only compute semilinear predicates [6, 14], and that with f(n) ∈ Ω(n) states the expressive power is 𝖴𝖭𝖲𝖯𝖠𝖢𝖤(n log f(n)) [14], i.e. the predicates which can be decided in 𝖭𝖲𝖯𝖠𝖢𝖤(n log f(n)) when the input is encoded in unary, the important case of logarithmically many states is open. To the best of our knowledge, the only research in this direction is [12], where the expressive power is characterised for polylog(n) states for a similar model – not population protocols themselves. Their results do not lead to a complete characterization for Θ(log n) states, since their construction is slightly too space-inefficient, simulating a (log log n)-space TM by protocols with approximately log² n states.

In this paper, we resolve this gap by proving that for functions f(n) ∈ Ω(log n) ∩ 𝒪(n^{1−ε}), where ε > 0, we have 𝖴𝖯𝖯(f(n)) = 𝖴𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n), i.e. the predicates computable by population protocols using 𝒪(f(n)) states are exactly the predicates computable by a non-deterministic Turing machine using 𝒪(f(n) log n) space with the input encoded in unary. The “U” in 𝖴𝖯𝖯(f(n)) stands for uniform: modern population protocol literature distinguishes between uniform and non-uniform protocols. In a non-uniform protocol, a different protocol may be used for every population size. While we have stated the expressive power for uniform protocols here, our complexity characterization also holds for non-uniform population protocols.

Our results complete the picture of the expressive power of uniform protocols: For o(log n) states only semilinear predicates can be computed (open for non-uniform); for a class of reasonable functions f(n) ∈ Ω(log n) ∩ 𝒪(n^{1−ε}), which contains most practically relevant functions (this will be clarified in the next section), we have 𝖴𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n) by our results; and for f ∈ Ω(n) we have 𝖴𝖭𝖲𝖯𝖠𝖢𝖤(n log f(n)). (A slight gap between 𝒪(n^{1−ε}) and Ω(n) remains.)

Main Contribution.

The technically most involved part of our result is the lower bound, i.e. constructing an 𝒪(f(n))-space uniform population protocol simulating an 𝒪(f(n) log n)-space Turing machine, or – equivalently [23], and used in our construction – simulating an 𝒪(2^{f(n) log n})-bounded counter machine. Let us briefly illustrate the main techniques and difficulties towards this result. In a nutshell, the crucial difference between o(n) and Ω(n) states is the ability to assign unique identifiers to agents and to store the population size n in a single agent. In our construction, therefore, we must distribute the value of n over multiple agents, and they must collaborate to compute operations involving it. We also introduce a novel approach for encoding the counters of the counter machine, as those described in previous publications such as [5] and [12] cannot encode large enough numbers for our purposes.

Overview.

The paper is structured as follows: In Section 2 we give preliminaries and define population protocols. Section 3 briefly states our main result and proves the upper bound for weakly uniform population protocols. The proof of the matching lower bound (even for uniform protocols) is presented in Section 4.

2 Preliminaries

We let ℕ denote the set of natural numbers including 0 and let ℤ denote the set of integers. We write log n for the binary logarithm log₂ n.

A multiset over a set Q is a multiplicity function f: Q → ℕ, which maps every q ∈ Q to its number of occurrences in the multiset f. We denote multisets using a set notation with multiplicities, i.e. {f(q₁)·q₁, …, f(qₘ)·qₘ}. We define addition f+f′ on multisets via (f+f′)(q) = f(q)+f′(q) for all q ∈ Q. Multisets are compared via inclusion, defined as f ≤ f′ ⇔ f(q) ≤ f′(q) for all q ∈ Q. If f ≤ f′, then the subtraction f′−f is defined via (f′−f)(q) = f′(q)−f(q) for all q ∈ Q. The number of elements of f is denoted |f|, defined as ∑_{f(q)≠0} f(q) if only finitely many q ∈ Q fulfil f(q) ≠ 0, and |f| = ∞ otherwise. Elements q of Q are identified with the multiset {1·q}. The set of all finite multisets over Q is denoted ℕ^Q. Given a function g: A → B, its extension ĝ to finite multisets is ĝ: ℕ^A → ℕ^B, ĝ(f) = ∑_{f(a)≠0} f(a)·{g(a)}.
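The multiset operations above can be mirrored directly with Python's `collections.Counter`; this is a sketch for illustration (the name `extend` is ours, not notation from the paper):

```python
from collections import Counter

# Multisets as Counters: f[q] is the multiplicity f(q).
f = Counter({"a": 2, "b": 1})                   # the multiset {2·a, 1·b}
g = Counter({"a": 1})                           # the multiset {1·a}

assert f + g == Counter({"a": 3, "b": 1})       # (f+g)(q) = f(q) + g(q)
assert all(g[q] <= f[q] for q in g)             # inclusion g <= f
assert f - g == Counter({"a": 1, "b": 1})       # subtraction, defined since g <= f
assert sum(f.values()) == 3                     # the number of elements |f|

# Extension of a function A -> B to finite multisets, as defined above.
def extend(func, ms):
    out = Counter()
    for a, k in ms.items():
        out[func(a)] += k
    return out

assert extend(str.upper, f) == Counter({"A": 2, "B": 1})
```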

Definition 1.

A protocol scheme 𝒫 is a 5-tuple (Q,Σ,δ,I,O) of

  • a (not necessarily finite) set of states Q,

  • a finite input alphabet Σ,

  • a transition function δ: Q × Q → Q × Q,

  • an input mapping I: Σ → Q,

  • an output mapping O: Q → {0,1}.

A configuration of 𝒫 is a finite multiset C ∈ ℕ^Q. A step C → C′ in 𝒫 consists of choosing a multiset {q₁,q₂} ≤ C and replacing {q₁,q₂} by {δ(q₁,q₂)} or {δ(q₂,q₁)}, i.e. C′ = C − {q₁,q₂} + {δ(q₁,q₂)}. The intuition is that the configuration describes for every q the number of agents in q, and a step consists of an agent in q₁ exchanging messages with an agent in q₂, upon which these two agents change into the states δ(q₁,q₂). Observe that the transition function δ distinguishes between the initiator of the exchange and the responder, while in the configuration all agents are anonymous. The number of agents is denoted n := |C|.
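A single step as just defined can be sketched as follows; the `delta` shown at the end is a hypothetical transition function used only for illustration:

```python
from collections import Counter

# One step C -> C': remove the chosen pair {q1, q2}, add the pair delta(q1, q2).
def step(C, q1, q2, delta):
    assert C[q1] >= 1 and C[q2] >= (2 if q1 == q2 else 1)   # the pair must exist in C
    D = Counter(C)
    D[q1] -= 1
    D[q2] -= 1
    r1, r2 = delta(q1, q2)      # q1 is the initiator, q2 the responder
    D[r1] += 1
    D[r2] += 1
    return +D                   # drop states with multiplicity 0

# A hypothetical delta, only for illustration: two "x" agents merge.
delta = lambda a, b: ("xx", "done") if (a, b) == ("x", "x") else (a, b)
C = Counter({"x": 3})
C2 = step(C, "x", "x", delta)
assert C2 == Counter({"x": 1, "xx": 1, "done": 1})
assert sum(C2.values()) == sum(C.values())   # steps preserve the number of agents
```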

We write →* for the reflexive and transitive closure of →, and say that a configuration C′ is reachable from C if C →* C′. A configuration C is initial if there exists a multiset w ∈ ℕ^Σ such that Î(w) = C. In that case C is the initial configuration for input w.

A configuration C is a b-consensus for b ∈ {0,1} if O(q) = b for all q such that C(q) ≠ 0, i.e. if every state which occurs in the configuration has output b. A configuration C is stable with output b if every configuration C′ reachable from C is a b-consensus.

A run ρ is an infinite sequence of configurations ρ = (C₀, C₁, …) such that C_i → C_{i+1} for all i. A run is fair if for all configurations C which occur infinitely often in ρ, i.e. such that there are infinitely many i with C_i = C, also every configuration C′ reachable from C occurs infinitely often in ρ. A run has output b if some configuration C_i along the run is stable with output b (and hence all C_j for j ≥ i are also stable with output b).

An input w ∈ ℕ^Σ has output b if every fair run starting at its corresponding initial configuration Î(w) has output b. The protocol scheme 𝒫 computes a predicate if every input w has some output. In that case the computed predicate is the mapping ℕ^Σ → {0,1} which maps w to the output of Î(w).

Example 2.

Consider Q := {0} ∪ {2^i | i ∈ ℕ}, and define δ(2^i, 2^i) = (2^{i+1}, 0); otherwise δ is the identity function. Let Σ = {x}, and let x ↦ 2^0 be the input mapping. Then a configuration is initial if every agent is in state 2^0 = 1. Intuitively, this protocol will eventually end up with the binary representation of the number of agents. Namely, each transition preserves the total sum of all agents’ values, and every actual transition (which does not simply leave the agents the same) causes an agent to enter 0, so this protocol in fact always reaches a terminal configuration. For example, if we start this protocol with 22 agents, we will eventually reach the stable configuration {1·2¹, 1·2², 1·2⁴, 19·0}, which corresponds to the binary encoding of 22 = 10110₂.
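Example 2 can be simulated directly. The sketch below (the name `run_to_terminal` is ours) schedules enabled pairs at random; as argued above, the total value is preserved and the run always reaches the same terminal configuration:

```python
import random
from collections import Counter

# Example 2: two agents with the same value 2^i (i.e. nonzero) merge into
# 2^(i+1) and 0; all other pairs are left unchanged.
def run_to_terminal(n, seed=0):
    rng = random.Random(seed)
    agents = [1] * n                     # every agent starts in state 2^0 = 1
    while True:
        counts = Counter(agents)
        enabled = [v for v, k in counts.items() if v != 0 and k >= 2]
        if not enabled:
            return counts                # terminal: no transition changes anything
        v = rng.choice(enabled)
        i = agents.index(v)
        j = agents.index(v, i + 1)
        agents[i], agents[j] = 2 * v, 0  # the only non-trivial transition

C = run_to_terminal(22)
# 22 = 10110 in binary: one agent each in states 2, 4 and 16; the other 19 in 0.
assert C == Counter({0: 19, 2: 1, 4: 1, 16: 1})
assert sum(v * k for v, k in C.items()) == 22   # the total value is preserved
```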

We now define the state complexity of a protocol scheme. A state q ∈ Q is coverable from some initial configuration C₀ if there exists a configuration C reachable from C₀ which fulfils C(q) > 0. The state complexity S(n) of 𝒫 for n agents is the number of states q ∈ Q which are coverable from some initial configuration with n agents.

Example 3.

In the scheme of Example 2, let C_n be the unique initial configuration with n agents, i.e. C_n(2^0) = n and C_n(q) = 0 otherwise. For n ≥ 2, the states coverable from C_n are exactly {0} ∪ {2^i | i ≤ ⌊log n⌋}. Hence the state complexity is S(n) = ⌊log n⌋ + 2.
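Example 3's count can be checked directly. The helper below lists the coverable states by the characterisation just stated (to cover 2^i, merge 2^i of the n agents) rather than by exploring the reachability graph; the names are ours:

```python
import math

# States of Example 2 coverable with n >= 2 agents: 0 and the powers 2^i
# with i <= floor(log2 n), since covering 2^i requires merging 2^i agents.
def coverable(n):
    return {0} | {2 ** i for i in range(int(math.log2(n)) + 1)}

def S(n):
    return len(coverable(n))

assert coverable(22) == {0, 1, 2, 4, 8, 16}
assert S(22) == math.floor(math.log2(22)) + 2    # = 6, as in Example 3
```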

As defined so far, protocol schemes are not necessarily computable. Hence actual population protocols additionally require a uniformity condition, and that S(n) is finite for all n.

Definition 4.

A uniform population protocol 𝒫 = (Q, Σ, δ, I, O) is a protocol scheme s.t. 1) the state complexity S(n) is finite for all n, and 2) there is a representation of states as binary strings and linear space Turing machines (TMs) M_δ, M_I, M_O, where

  1. M_δ: Given (the representation of) two states q₁, q₂, M_δ outputs δ(q₁,q₂).

  2. M_I: Given a multiset w, M_I outputs a representation of Î(w).

  3. M_O: Given a state q and b ∈ {0,1}, M_O checks whether O(q) = b.

We remark that “linear space”, in terms of our n, the number of agents, means 𝒪(log S(n)) space (since the input of the machine is a representation of a state).

In the literature on uniform population protocols, e.g. [13, 14, 19, 15], often agents are defined as TMs and states hence automatically assumed to be represented as binary strings. We avoid talking about the exact implementation of a protocol via TMs because it introduces an additional logarithm in the number of states and potentially confuses the reader, while most examples are clearly computable.

Example 5.

In the protocol scheme of Example 2 we represent states by the binary representation of the exponent. Clearly incrementing natural numbers or setting the number to a fixed value are possible by a linear space TM, hence this is a uniform population protocol.

Next we define a more general class of population protocols, which we call weakly uniform. This class includes all known population protocols, and our results also hold for this class, which shows that having a different protocol for every n does not strengthen the model.

Definition 6.

A finite population protocol is a protocol scheme with a finite set Q.

A population protocol 𝒫 is an infinite family (𝒫_n)_{n∈ℕ} = (Q_n, Σ, δ_n, I_n, O_n)_{n∈ℕ} of finite population protocols. The state complexity for inputs of size n is S(n) := |Q_n|.

𝒫 is weakly uniform if there exist TMs Mδ,MI,MO using 𝒪(S(n)) space which:

  1. M_δ: Given two states q₁, q₂ and n in unary, M_δ outputs δ_n(q₁,q₂).

  2. M_I: Given a multiset w with n elements, M_I outputs a representation of Î_n(w).

  3. M_O: Given a state q, b ∈ {0,1} and n in unary, M_O checks whether O_n(q) = b.

The configurations of 𝒫 with n agents are exactly the configurations of 𝒫n with n agents, and accordingly the semantics of steps, runs and acceptance are inherited from 𝒫n.

The protocol for a given population size n is allowed to differ completely from the protocol for n1 agents, as long as TMs are still able to evaluate transitions, input and output. Usually this is not fully utilised, with the most common case of a non-uniform protocol being that logn is encoded into the transition function [18].

Clearly uniform population protocols are weakly uniform. Namely, let 𝒫 = (Q, Σ, δ, I, O) be a protocol scheme. Then for every n we let Q_n be the set of states coverable by some initial configuration with n agents, similar to the definition of state complexity, and define 𝒫_n := (Q_n, Σ, δ|_{Q_n×Q_n}, I, O|_{Q_n}), where f|_A is the restriction of f to inputs in A. This protocol family computes the same predicate, and is weakly uniform with the same state complexity.

Next we define the complexity classes for our main result. Let f: ℕ → ℕ be a function. f is space-constructible if there exists a TM M which computes f using 𝒪(f(n)) space. Given a space-constructible function f: ℕ → ℕ, we denote by 𝖭𝖲𝖯𝖠𝖢𝖤(f(n)) the class of predicates computable by a non-deterministic Turing machine in 𝒪(f(n)) space. Similarly, let 𝖴𝖯𝖯(f(n)) be the class of predicates computable by uniform population protocols with 𝒪(f(n)) space, and 𝖶𝖴𝖯𝖯(f(n)) be the class of predicates computable by weakly-uniform population protocols with 𝒪(f(n)) space.

Population protocols decide predicates on multisets w ∈ ℕ^Σ, or equivalently predicates on ℕ^k for k = |Σ|. In order to compare the complexity classes defined on predicates with those defined on languages over an alphabet, we define the unary encoding of a predicate φ: ℕ^k → {0,1} as the language L_φ := {1^{x₁}#1^{x₂}#⋯#1^{x_k} | φ(x₁,x₂,…,x_k) = 1}. For any complexity class 𝒞 we can now define 𝖴𝖤𝖭𝖢(𝒞) := {φ: ℕ^k → {0,1} | k ∈ ℕ, L_φ ∈ 𝒞} as the class of predicates whose unary encoding lies in 𝒞. More specifically, we define 𝖴𝖭𝖲𝖯𝖠𝖢𝖤(f(n)) := 𝖴𝖤𝖭𝖢(𝖭𝖲𝖯𝖠𝖢𝖤(f(n))). (Previous work has instead used the complexity class 𝖲𝖭𝖲𝖯𝖠𝖢𝖤(f(n)), consisting of the symmetric languages (i.e. languages closed under permutation) over the alphabet Σ in 𝖭𝖲𝖯𝖠𝖢𝖤(f(n)), to reflect that the agents in a population protocol are unordered. We find it more intuitive to think about a unary encoding with separators, but languages with either encoding can be polynomially reduced to the other.)
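The unary encoding L_φ can be sketched in a few lines; the majority-style φ at the end is just an illustration:

```python
# Unary encoding of a predicate phi on N^k: the input (x1, ..., xk) becomes
# the word 1^x1 # 1^x2 # ... # 1^xk.
def unary_encode(xs):
    return "#".join("1" * x for x in xs)

def unary_decode(word):
    return [len(block) for block in word.split("#")]

assert unary_encode([3, 0, 2]) == "111##11"
assert unary_decode("111##11") == [3, 0, 2]

# A word is in L_phi iff phi holds on its decoding, e.g. a majority predicate:
phi = lambda xs: xs[0] > xs[1]
assert phi(unary_decode(unary_encode([3, 2])))
```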

3 Main Result

We give a characterisation of the expressive power of both uniform and weakly uniform population protocols with f(n) states, where f ∈ Ω(log n) ∩ 𝒪(n^{1−ε}) for some ε > 0. For technical reasons, we must place three limitations on f(n):

  1. f(n) = g(⌊log n⌋) for some g: ℕ → ℕ, i.e. f(n) is computable knowing only ⌊log n⌋,

  2. f(n) is space-constructible, i.e. the function f can be computed in 𝖲𝖯𝖠𝖢𝖤(f(n)), and

  3. f(n) is monotonically increasing.

All practically relevant functions fulfil these properties. For the first, we remark that “usually” f(n) ∈ Θ(f(2^{⌊log n⌋})). (The exceptions are plateau functions with large jumps.) For example, while n itself is not computable from ⌊log n⌋, we can instead use 2^{⌊log n⌋}, which is asymptotically equivalent.

In the remainder of this paper, a function f with these properties is called reasonable.

Our bound applies to uniform and weakly uniform protocols. As mentioned in the previous section, the latter includes, to the best of our knowledge, all non-uniform constructions from the literature.

Theorem 7.

Let ε > 0 and let f ∈ Ω(log n) ∩ 𝒪(n^{1−ε}) be reasonable. Then

𝖴𝖯𝖯(f(n))=𝖶𝖴𝖯𝖯(f(n))=𝖴𝖭𝖲𝖯𝖠𝖢𝖤(f(n)logn).
Proof.

This will follow directly from the upper and lower bounds given by Proposition 8 and Theorem 9. In particular, we have 𝖴𝖯𝖯(log n) = 𝖶𝖴𝖯𝖯(log n) = 𝖴𝖭𝖲𝖯𝖠𝖢𝖤(log² n).

Proposition 8.

Let ε > 0 and let f ∈ Ω(log n) ∩ 𝒪(n^{1−ε}) be space-constructible. Then

𝖴𝖯𝖯(f(n)) ⊆ 𝖶𝖴𝖯𝖯(f(n)) ⊆ 𝖴𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n).
Proof.

𝖴𝖯𝖯(f(n)) ⊆ 𝖶𝖴𝖯𝖯(f(n)) follows since uniform protocols are also weakly uniform.

Hence let (𝒫_n)_{n∈ℕ} = (Q_n, Σ, δ_n, I_n, O_n)_{n∈ℕ} be a weakly uniform population protocol computing a predicate φ. We have to show that there exists a non-deterministic TM M computing φ in 𝒪(f(n) log n) space, when given the input in unary. We employ a similar argument as in the proof of the upper bound in [11]: First observe that a configuration of 𝒫_n with n agents can be described by |Q_n| ∈ 𝒪(f(n)) many numbers up to n, i.e. can be stored using 𝒪(f(n) log n) bits. Namely, one can store the number of agents per state q ∈ Q_n. The encoding of the initial configuration can easily be calculated by simply counting the ones on the input tape corresponding to each initial state.

Since f is space-constructible, f(n)logn is space-constructible as well. By the Immerman-Szelepcsényi theorem we have 𝖭𝖲𝖯𝖠𝖢𝖤(f(n)logn)=𝖼𝗈𝖭𝖲𝖯𝖠𝖢𝖤(f(n)logn).

Since the population protocol (𝒫_n)_{n∈ℕ} computes a predicate, either every fair run starting from the initial configuration Î(w) accepts or every fair run rejects. M has to determine which of these is the case. In fact, because every fair run has the same output, we claim that some configuration C reachable from Î(w) is stable with output 1 if and only if Î(w) is accepted. By definition, an accepting run visits a configuration stable with output 1, proving one direction; for the other direction, extend the finite sequence Î(w) →* C to a fair run ρ. This run is accepting, and hence so is every other fair run.

We hence construct M as follows: M applies M_I to obtain a representation of the initial configuration Î(w). It guesses a configuration C, and checks, by repeatedly applying M_δ, that C is reachable from Î(w). It remains to check that C is stable with output 1. A configuration is not stable with output 1 if and only if some configuration C′ reachable from C contains an agent with output 0. Therefore non-stability can be checked in 𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n) by guessing C′, checking via M_O that C′ is not a 1-consensus, and checking reachability. By Immerman–Szelepcsényi, stability is hence also decidable in 𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n).
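The checks performed by M can be illustrated on a toy instance, with the nondeterministic guessing replaced by an explicit search over the (tiny) configuration space. The parity protocol used here is a hypothetical stand-in, not a construction from the paper:

```python
from collections import Counter

# Hypothetical parity protocol: states are (role, bit) with role "L" (leader)
# or "F" (follower). Two leaders merge and xor their bits; a leader overwrites
# a follower's bit. The output of a state is its bit, so fair runs stabilise
# to the consensus n mod 2.
def delta(a, b):
    (ra, xa), (rb, xb) = a, b
    if ra == "L" and rb == "L":
        return (("L", xa ^ xb), ("F", xa ^ xb))
    if ra == "L":
        return (("L", xa), ("F", xa))
    if rb == "L":
        return (("F", xb), ("L", xb))
    return (a, b)

def successors(C):
    out = set()
    for q1 in list(C):
        for q2 in list(C):
            if q1 == q2 and C[q1] < 2:
                continue                    # need two agents for a self-pair
            D = Counter(C)
            D[q1] -= 1; D[q2] -= 1
            r1, r2 = delta(q1, q2)
            D[r1] += 1; D[r2] += 1
            out.add(frozenset((+D).items()))
    return [Counter(dict(s)) for s in out]

def reachable(C0):
    seen, todo = set(), [frozenset(C0.items())]
    while todo:
        key = todo.pop()
        if key in seen:
            continue
        seen.add(key)
        todo.extend(frozenset(s.items()) for s in successors(Counter(dict(key))))
    return [Counter(dict(k)) for k in seen]

def stable_with_output(C, b):
    # C is stable with output b iff every reachable configuration is a b-consensus
    return all(all(bit == b for (_, bit) in D) for D in reachable(C))

# n = 3, all agents start as leaders with bit 1; the protocol decides "n is odd",
# so some reachable configuration is stable with output 1, and none with output 0.
C0 = Counter({("L", 1): 3})
assert any(stable_with_output(C, 1) for C in reachable(C0))
assert not any(stable_with_output(C, 0) for C in reachable(C0))
```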

4 Lower Bound

In this section, we prove the following.

Theorem 9.

Let ε > 0 and let f ∈ Ω(log n) ∩ 𝒪(n^{1−ε}) be reasonable. Then

𝖴𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n) ⊆ 𝖴𝖯𝖯(f(n)).

To do this we first fix a reasonable function f ∈ Ω(log n) ∩ 𝒪(n^{1−ε}) and a predicate φ: ℕ^Σ → {0,1} with φ ∈ 𝖴𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n). With a slight modification to the classic 3-counter simulation of an 𝒪(f(n) log n) space-bounded Turing machine described in [23], we obtain a counter machine 𝒞 with |Σ| input registers and 3 computation registers that decides φ using 𝒪(2^{f(n) log n})-bounded counters.

We now construct a population protocol 𝒫=(Q,Σ,δ,I,O) simulating 𝒞. There are three main difficulties involved in this construction:

Firstly, in order to sequence multiple operations, such that an operation only starts once the previous one has finished, we need a way of performing a zero-check, i.e. detecting whether an agent with a certain state exists. We achieve this by counting the number of agents using a binary encoding similar to [12]. By keeping track of the agents already seen in an additional counter, we can then perform loops over all agents, and so detect the absence of a certain state.

Secondly, we need to encode the counters of 𝒞, which can hold values up to 𝒪(2^{f(n) log n}). The two counter encodings described in the existing literature – counting in unary the number of agents in a special state [5], or the binary encoding of [12] – cannot encode numbers this large. We improve on the binary encoding by using digits in a higher base of Θ(n/f(n)) and counting in unary within each digit. Manipulating these digits makes heavy use of the looping construct mentioned above.
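The digit representation can be illustrated with ordinary integer arithmetic. This sketch models only the base-b digit sequence; the distribution of each digit in unary across agents is not modeled:

```python
# Digits in base b, least significant first; in the protocol each digit would
# be held in unary by a group of agents, so a base around n/f(n) with f(n)
# digits reaches values around (n/f(n))^f(n). Here only the arithmetic is shown.
def to_digits(x, base, ndigits):
    digits = []
    for _ in range(ndigits):
        digits.append(x % base)
        x //= base
    assert x == 0, "value does not fit into ndigits digits"
    return digits

def increment(digits, base):
    # digit-by-digit carry propagation, as performed via the looping construct
    for i in range(len(digits)):
        digits[i] += 1
        if digits[i] < base:
            return digits
        digits[i] = 0
    raise OverflowError("counter overflow")

d = to_digits(41, base=7, ndigits=3)
assert d == [6, 5, 0]                    # 41 = 6 + 5*7
assert increment(d, 7) == [0, 6, 0]      # 42 = 6*7
```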

The final problem, which is inherent to all population protocols, is that an arbitrary number of agents may not participate in any interactions for an arbitrarily long amount of time. Such errors are detected at some point, but this can happen arbitrarily late. We solve this by providing a way for the simulation to re-initialise itself.

The protocol consists of multiple phases:

  1. We count the number of agents and initialize the additional counters to zero. This process is detailed in Section 4.2. Section 4.3 describes how the counters are manipulated, and Section 4.4 presents a macro for looping over all agents using the counters for bookkeeping.

  2. We set up the digits, which encode the counters of 𝒞. This phase is described in Section 4.6.

  3. The instructions of 𝒞 are simulated. This is described in Section 4.7.

As a technical aside, as is usual we assume that the protocol is started with a sufficient number of agents (i.e. exceeding some constant). We argue in the proof of Theorem 9 why this is not a problem.

4.1 State Space

The states will be of the form Q = ℕ × 2^F for a finite set F of flags. A state (q,S) ∈ Q has level q. We defer precise definitions until they become relevant.

Notation.

To compactly denote sets of states characterised by flags, we, for example, write (i,Ldr0) for the set of all level i states which do not include the flag Ldr. In particular, this notation avoids mentioning other flags.

Formally, we write (i, (X^{(1)})^{b₁}, …, (X^{(k)})^{b_k}), where X^{(1)},…,X^{(k)} ∈ F and b₁,…,b_k ∈ {0,1}, to refer to the set of all states (i,S) where S ⊆ F fulfils X^{(j)} ∈ S ⇔ b_j = 1 for all j = 1,…,k.

On the right-hand side of a transition, we use the same notation with a different meaning: it refers to the state where the flags X^{(1)},…,X^{(k)} are as given, and all other flags match the corresponding state on the left-hand side. I.e. similar to an assignment command, the mentioned values are set while leaving the other flags the same as before.

We also use ∗ as a wildcard. On the left-hand side of a transition, it matches anything, and on the right-hand side, it refers to the same value as the corresponding element of the left-hand side. For example, the transition

(i,Ex1),(∗,Ex1) ↦ (i+1),(∗,Ex0)   for i ∈ ℕ

means that any two agents with flag Ex can interact. The first moves to the next level (with flags unchanged), while the second removes the Ex flag (and leaves its level unchanged).

Sometimes we want to refer to a group of flags at once: for S = {X^{(1)},…,X^{(k)}} ⊆ F and b ∈ {0,1} we write S^b instead of (X^{(1)})^b,…,(X^{(k)})^b.

4.2 Initialisation

Our first goal is to reach a configuration with one leader at level ℓ_n := ⌊log n⌋, with ℓ_n + 1 agents each storing one bit of the binary representation of n, and all other agents “ready to be reset”. Let b_{ℓ_n}⋯b_0 be the binary representation of n. Formally, we want the leader in state (ℓ_n, Ldr1, I1), exactly one counter agent in (j, Ctr1, N^{b_j}) for each j ≤ ℓ_n, and all other agents in states (∗, Ldr0, Ctr0). The flags Ldr, Ctr, Free ∈ F indicate whether the agent is currently a leader, a counter agent, or free, respectively (these are exclusive). Additionally, N, I ∈ F, where N indicates whether the bit of the counter is set, and I whether the leader should perform initialisation.

Regarding the input, we define I(X) := (0, {Ctr, N, X}) for X ∈ Σ.

In the counter, the agents perform usual bitwise increments as in Example 2, though now expressed in terms of the exponent i, and we have to leave one agent in every bit.

(i,Ctr1,N1),(i,Ctr1,N1) ↦ (i+1),(i,N0)   for i ∈ ℕ
(i,Ctr1,Na),(i,Ctr1,Nb) ↦ (i,Na+b),(i,Ctr0,Ldr1,I1)   for i ∈ ℕ, a+b ≤ 1

This uses the compact notation for transitions introduced above. Consider the first line: if two agents at level i are both responsible for the counter and have their N flag set to 1, then, regardless of any other flags, the outcome is as follows: the first agent increments i (leaving every flag unchanged), and the second agent sets N to 0, again leaving the rest as is.

For the second line: if – in the same type of encounter – at most one of the two bits Na and Nb was set, then one of the agents unsets its counter flag and becomes a leader with the I flag set to 1.

This is the way for agents to originally set the leader flag. Since we want to have only one leader, we execute a leader election subprotocol. Every time a leader is eliminated, it moves into Free, and the remaining leader re-initialises.

(i,Ldr1),(j,Ldr1) ↦ (i,I1),(0,Ldr0,Free1)   for i,j ∈ ℕ, i ≥ j
(i,Ldr1),(j,Ctr1) ↦ (j,I1),(j)   for i,j ∈ ℕ, i < j

The second line causes the leader to eventually point to the most significant bit of n.
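The initialisation phase can be simulated on a toy scale. The sketch below implements the counting, pairing and leader-election transitions above, tracking only levels and the flags Ctr, N, Ldr (the I flag and all later phases are omitted; the rule names are ours), and checks that a single leader at level ⌊log n⌋ and one counter agent per bit remain:

```python
import random

# Toy simulation of the initialisation transitions above.
def simulate_init(n, seed=0):
    rng = random.Random(seed)
    # every agent starts as a counter agent at level 0 with its N bit set
    agents = [{"lvl": 0, "Ctr": True, "N": True, "Ldr": False} for _ in range(n)]
    while True:
        moves = []
        for i, a in enumerate(agents):
            for j, b in enumerate(agents):
                if i == j:
                    continue
                if a["Ctr"] and b["Ctr"] and a["lvl"] == b["lvl"]:
                    moves.append(("ctr", i, j))
                if a["Ldr"] and b["Ldr"] and a["lvl"] >= b["lvl"]:
                    moves.append(("elim", i, j))
                if a["Ldr"] and b["Ctr"] and a["lvl"] < b["lvl"]:
                    moves.append(("bump", i, j))
        if not moves:
            return agents
        kind, i, j = rng.choice(moves)
        a, b = agents[i], agents[j]
        if kind == "ctr" and a["N"] and b["N"]:   # both bits set: carry up
            a["lvl"] += 1
            b["N"] = False
        elif kind == "ctr":                       # at most one bit: pair off,
            a["N"] = a["N"] or b["N"]             # one agent becomes a leader
            b.update(Ctr=False, N=False, Ldr=True)
        elif kind == "elim":                      # leader election
            b.update(Ldr=False, lvl=0)
        else:                                     # leader follows the counter up
            a["lvl"] = b["lvl"]

final = simulate_init(22)
ctr = sorted((a["lvl"], a["N"]) for a in final if a["Ctr"])
assert ctr == [(0, False), (1, True), (2, True), (3, False), (4, True)]  # 22 = 10110 in binary
leaders = [a for a in final if a["Ldr"]]
assert len(leaders) == 1 and leaders[0]["lvl"] == 4
```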

Let δ_init consist of the transitions of this section. For the following proof, as well as later sections, it will be convenient to denote the value of the counter: given a configuration C and X ∈ F, we write val(C,X) := ∑_{i∈ℕ} 2^i · C((i, Ctr1, X1)). For example, the goal of the initialisation is to ensure val(C,N) = n at all times.

We say that a configuration is initialised, if it has

  (1) exactly one agent in (ℓ_n, 𝖫𝖽𝗋1, 𝖢𝗍𝗋0),

  (2) exactly one agent in (i, 𝖢𝗍𝗋1, 𝖭^{b_i}) for i = 0,…,ℓ_n, where b_i is the i-th bit of n, and

  (3) all other agents in (∗, 𝖫𝖽𝗋0, 𝖢𝗍𝗋0).

Lemma 10.

Assume that each transition t ∈ δ ∖ δ_init leaves the flags 𝖫𝖽𝗋, 𝖢𝗍𝗋, 𝖭 unchanged, and does not affect the levels of agents with the 𝖫𝖽𝗋 or 𝖢𝗍𝗋 flag. Then 𝒫 eventually reaches an initialised configuration with an agent in (ℓ_n, 𝖫𝖽𝗋1, I1), and will remain in an initialised configuration.

Proof.

We will show that eventually such a configuration is reached via a δ_init transition. Since transitions in δ_init observe only the flags Ldr, Ctr, N and the levels of leader and counter agents, which by assumption no other transition can change, we can disregard all transitions in δ ∖ δ_init for the purposes of this proof.

We have that val(C,N) is invariant in all reachable configurations C, as no transition changes its value. Further, in an initial configuration we have val(C,N) = 2⁰·|C| = n. Hence the level of any agent with flags Ctr and N is at most ℓ_n.

Furthermore, let n_i denote the number of counter agents at level i. Then (n₀, n₁, …) decreases lexicographically with every δ_init transition between two counter agents. As such a transition is enabled as long as two counter agents share a level, eventually we will have exactly one agent in (i, Ctr1) for every i = 0,…,ℓ_n, and by the invariance of val(C,N) the N flags correspond to the binary representation of n, proving (2).

Therefore eventually no more leaders are created, and the leader-election transitions leave exactly one leader. All other agents are then necessarily in states (∗, Ldr0, Ctr0), proving (3). Once the last leader-election transition occurs, flag I is set on the leader and it has level ℓ_n, showing (1).

4.3 The Counter

We created a counter during initialisation, which now contains the precise number of agents. To perform arithmetic on this counter, we designate a helper agent that executes one operation at a time. This agent uses the flags F_counter := {Clr, Incr, Cmp, Swap, Done} to store the operation it is currently executing, and it uses its level to iterate over the bits of the counter. Formally, we say that an agent is a (counter) helper if it has one of the flags in F_counter.

The value stored in the counter using the N flag is immutable (to satisfy the assumptions of Lemma 10), so we use flags A,B to store two additional values in the counter agents.

The first operation clears the value in A, i.e. sets it to zero.

(i,Clr1),(i,Ctr1) ↦ (i+1),(i,A0)   for i ∈ ℕ
(i+1,Clr1),(i,Ldr1) ↦ (0,Clr0,Done1),(i)

It iterates over each bit using the level. To detect that the end has been reached, the helper communicates with the leader, which always has level ℓ_n.

To access the value stored in B, we create an operation that swaps it with A. It proceeds in much the same way.

(i,Swap1),(i,Ctr1,Aa,Bb) ↦ (i+1),(i,Ab,Ba)   for i ∈ ℕ, a,b ∈ {0,1}
(i+1,Swap1),(i,Ldr1) ↦ (0,Swap0,Done1),(i)

Incrementing is slightly more involved, but only because we do multiple things: we increase the value in A by 1, and then compare it with N. If they match, the value of A is cleared and the helper sets flag R to indicate whether this happened.

(i,Incr1),(i,Ctr1,A1) ↦ (i+1),(i,A0)   for i ∈ ℕ
(i,Incr1),(i,Ctr1,A0) ↦ (0,Incr0,Cmp1),(i,A1)   for i ∈ ℕ
(i,Cmp1),(i,Ctr1,Aa,Na) ↦ (i+1),(i)   for i ∈ ℕ, a ∈ {0,1}
(i,Cmp1),(i,Ctr1,Aa,N^{1−a}) ↦ (0,Cmp0,Done1,R0),(i)   for i ∈ ℕ, a ∈ {0,1}
(i+1,Cmp1),(i,Ldr1) ↦ (0,Cmp0,Clr1,R1),(i)   for i ∈ ℕ

Let δ_counter consist of the transitions of this section.

Observation 11.

Let C denote an initialised configuration with exactly one counter helper, which is in state (0,S). If only transitions in δ_counter are executed, C eventually reaches a configuration C′ with

  (1) exactly one counter helper, in state (0,S′), where S′ ∩ F_counter = {𝖣𝗈𝗇𝖾},

  (2) val(C′,𝖠) = 0, if 𝖢𝗅𝗋 ∈ S,

  (3) val(C′,𝖠) = val(C,𝖡) and val(C′,𝖡) = val(C,𝖠), if 𝖲𝗐𝖺𝗉 ∈ S,

  (4) val(C′,𝖠) = val(C,𝖠)+1 and 𝖱 ∉ S′, if 𝖨𝗇𝖼𝗋 ∈ S and val(C,𝖠)+1 < val(C,𝖭),

  (5) val(C′,𝖠) = 0 and 𝖱 ∈ S′, if 𝖨𝗇𝖼𝗋 ∈ S and val(C,𝖠)+1 = val(C,𝖭).

In cases (2), (4), and (5), we also have val(C′,𝖡) = val(C,𝖡).

Proof.

Each operation iterates through the bits of the counter and performs the steps according to the above specification. Once the helper reaches level ℓ_n + 1, we use Lemma 10 to deduce the existence of a leader at level ℓ_n, causing the helper to move to 𝖣𝗈𝗇𝖾. We also remark that the increment operation cannot overflow, as (by specification) val(C,𝖠)+1 ≤ val(C,𝖭).
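The three counter operations can be sketched with the helper's level-by-level traversal flattened into ordinary loops. Bit lists are least-significant-bit first; N is immutable, A and B are the two mutable values, and the return value of `incr` plays the role of the R flag:

```python
# Sequential sketch of the Clr, Swap and Incr operations of Section 4.3.
def clr(A):
    for i in range(len(A)):
        A[i] = 0

def swap(A, B):
    for i in range(len(A)):
        A[i], B[i] = B[i], A[i]

def incr(A, N):
    # assumes val(A) + 1 <= val(N), so the carry cannot run off the end
    i = 0
    while A[i] == 1:
        A[i] = 0
        i += 1
    A[i] = 1
    if A == N:          # the Cmp phase: compare bit by bit against N
        clr(A)
        return True     # R set: A wrapped around at val(N)
    return False        # R unset

N = [0, 1, 1, 0, 1]     # 22 = 10110 in binary, least significant bit first
A = [1, 0, 1, 0, 1]     # 21
B = [0, 0, 0, 0, 0]
assert incr(A, N) and A == [0, 0, 0, 0, 0]       # 21 + 1 = 22 = val(N): wrap, R set
assert not incr(A, N) and A == [1, 0, 0, 0, 0]   # 0 + 1 = 1 < 22: no wrap
swap(A, B)
assert A == [0, 0, 0, 0, 0] and B == [1, 0, 0, 0, 0]
```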

4.4 Loops

A common pattern is to iterate over all agents. To this end, we implement a loop functionality which causes a loop body to be executed precisely n−1 times.

(∗,Loop1,Body0),(∗,Done1) ↦ (∗,Loop0,LoopA1),(0,Done0,Incr1)
(∗,LoopA1),(∗,Done1,R0) ↦ (∗,LoopA0,Loop1,Body1),(∗)
(∗,LoopA1),(∗,Done1,R1) ↦ (∗,LoopA0,End1),(∗)

These transitions are to be understood as a template. Any agent can set flag Loop; the transitions above then interact with the counter and set flag Body. The agent must then execute another transition removing flag Body to commence another iteration of the loop. At some point, the transitions will instead indicate that the loop is finished by setting flag End.
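Under the behaviour guaranteed by Observation 11, the loop template acts like the following sketch: A starts at zero, each iteration issues one increment, and the End flag is raised when A wraps around at val(N) = n, so the body runs exactly n−1 times:

```python
# Sequential sketch of the loop template of Section 4.4.
def loop(n, body):
    N = [int(b) for b in bin(n)[2:][::-1]]   # bits of n, least significant first
    A = [0] * len(N)
    while True:
        i = 0                                # one Incr operation on A
        while A[i] == 1:
            A[i] = 0
            i += 1
        A[i] = 1
        if A == N:                           # R flag set: leave the loop (End)
            return
        body()                               # R unset: one more loop iteration

count = 0
def body():
    global count
    count += 1

loop(22, body)
assert count == 21                           # the body ran exactly n - 1 times
```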

4.5 Cleanup

After the initialisation of Section 4.2, most agents are in some state in (∗, Ldr0, Ctr0). We now want to move all of them into state (0,{Free}), and move the leader to (ℓ_n, {Ldr, Start}). (For intuitive explanations we sometimes elide, as here, the flags corresponding to the input Σ, but the transitions take care not to inadvertently clear them.)

During the cleanup, we need one helper agent to perform operations on the counter. The leader will appoint one such agent and mark it using Q. However, it is unavoidable that sometimes such an agent may already exist. Therefore, any counter helper can cause the leader to reset, and during a reset the leader moves any such agents to (0,{Free,T}). Additionally, while resetting the leader sets flag T on any agent it encounters.

(∗,Ldr1,I0),(∗,Q1) ↦ (∗,I1),(∗)
(∗,Ldr1,I1),(∗,Ldr0,Ctr0) ↦ (∗),(0,(F∖Σ)0,Free1,T1)
(∗,Ldr1,I1),(∗,Ctr1,T0) ↦ (∗),(∗,T1)

For the actual cleanup, the leader first appoints one free agent as helper, then uses the loop template from the previous section to iterate over all agents. Free agents are moved to (0,{Free}), and all other agents are left as-is. At the end of the loop, the helper is moved as well, and the leader enters Start, indicating that cleanup is complete. The first of the following transitions is the only transition which unsets the I flag.

(∗, Ldr1, I1), (∗, Free1) ↦ (∗, (F∖Σ)0, Ldr1, Loop1), (0, Free0, Clr1, T1, Q1)
(∗, Ldr1, Body1, Start0), (∗, T1) ↦ (∗, Body0), (∗, T0)
(∗, Ldr1, End1, Start0), (∗, Done1) ↦ (∗, End0, Start1), (0, (F∖Σ)0, Free1)

Now we are ready to prove that eventually the protocol reaches a “clean” configuration as in the following lemma. Let δcleanup denote the union of the transitions (4.3), (4.4) and (4.5) defined above.

Lemma 12.

Assume that the assumptions of Lemma 10 hold, and that every transition in δ∖(δinit∪δcleanup)

  (a) does not change 𝖨 or 𝖲𝗍𝖺𝗋𝗍,

  (b) does not reduce the number of counter helpers,

  (c) does not use any free agent or counter helper with 𝖳 set,

  (d) does not use any agent with 𝖢𝗍𝗋 set, and

  (e) does not use only counter helpers or agents in (∗, 𝖥𝗋𝖾𝖾1) or (∗, 𝖲𝗍𝖺𝗋𝗍0).

Then 𝒫 eventually reaches an initialised configuration with

  (1) exactly one agent in (ln, {𝖫𝖽𝗋, 𝖲𝗍𝖺𝗋𝗍}) and ln+1 agents in (∗, 𝖢𝗍𝗋1), and

  (2) all other agents in (0, S∪{𝖥𝗋𝖾𝖾}) for S⊆Σ, i.e. only 𝖥𝗋𝖾𝖾 and input flags are set.

Proof.

Let 𝒞 denote the set of initialised configurations with an agent in (∗, Ldr1, I1).

Lemma 10 guarantees that we reach a configuration C1𝒞. As stated there, all configurations reachable from C1 are initialised. We start by arguing that C1 reaches a configuration C2 with exactly one counter helper and one leader with I unset.

First, we note that it is possible to reach such a C2, by executing line 2 of 4.5 to remove all counter helpers, and then executing the first line of 4.5 to create one counter helper and unset I. So any fair run from C1 that does not reach such a C2 must avoid configurations in 𝒞 eventually. (If it visited 𝒞 infinitely often, by fairness it would have to reach C2 at some point.)

So we now assume that C1 is the last configuration in 𝒞 on that run. The only possibility to leave 𝒞 is to have the leader clear I, which by assumption (a) can only be done in 4.5. This transition creates a counter helper; since we do not reach C2 we thus must have multiple such helpers.

By assumption (b), the number of counter helpers can only be reduced by a transition in δinitδcleanup. Inspecting these transitions, the only candidates are line 2 of 4.5 and line 3 of 4.5. The former is only enabled at configurations in 𝒞. The latter reduces the number of counter helpers by 1 and sets flag Start on the leader. This flag, by assumption (a), cannot be cleared by any transition other than line 1 of 4.5. (Note that line 3 modifies an agent that is not the leader, and there is only one leader since we are operating within initialised configurations.)

Since Start prevents further reductions in the number of counter helpers, at least one such helper remains. Therefore, it is possible to execute the first line of 4.5 and move back to 𝒞. By fairness, this happens eventually, contradicting our assumption that 𝒞 is visited finitely often and C2 not reached, proving our first claim.

Reaching such a C2 must be done by line 1 of 4.5 (since no other transition clears I), which clears the counter and initiates a loop. As we have argued, C2 is initialised and has exactly one counter helper. We now show that all fair runs from C2 either reach 𝒞 or a configuration C3 fulfilling conditions (1-2).

By assumption (d), transitions outside of δinitδcleanup do not interact with the counter, and by (c) cannot interact with the counter helper (since it has T set). The only transitions involving the counter helper in a state other than Done are δcounter and the first line of 4.5. Since the latter moves to a configuration in 𝒞, we may assume wlog that it does not occur.

Inspecting 4.4, line 2 of 4.5 is only enabled when the counter helper is in Done. Similarly for line 3 of 4.5. So when we move the helper to another state, we can apply Observation 11 and conclude that it performs its operation correctly. (Transitions outside of δcounter may be executed, but cannot affect either the counter or the counter helper.)

This means that line 3 of 4.5 is only executed once line 2 has run exactly n−1 times. If C2(∗, Ldr0, T1) < n−1, this is not possible, and we go back to 𝒞 eventually using line 1 of 4.5. Otherwise, T is set on all non-leader agents, and we claim that it was set by lines 2-3 of 4.5. Namely, note that by assumption (c) no transition other than 4.5 may use the agents in (∗, Free1, T1) at all, and by assumption (e) no transition may be initiated using only the counter helper, the free agents, and the leader without Start.

In that case, all agents with T set must result from lines 2-3 of 4.5. Since it resets the (non-input) flags of all non-free agents, the leader will execute line 2 of 4.5 precisely n1 times, and then execute line 3 once, moving to the desired configuration.

4.6 Digits

Let g be the function such that f(x)=g(log x) for all x. For the simulation of 𝒞, we organise the agents into f(n)=g(ln) many “digits”, which are counters that count up to (roughly) n/g(ln). They do not work by storing bits individually, as the counters of the previous section do; instead, digit i is stored by having the appropriate number of agents in state (i, Digit1, N1).

Overall, the goal is to simulate registers by using multiple digits. For example, consider k digits, where digit i can store a number in {0,…,ni−1} and currently stores di. Then the number stored by this group of digits is ∑_{i=1}^{k} (n1⋯ni−1)·di. This is a generalisation of standard base-b number systems that allows every digit to have a different base ni.
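For intuition, the value of such a group of digits can be computed as in the following sketch (plain Python; the function name and interface are ours, not part of the protocol):

```python
def group_value(bases, digits):
    """Value of a mixed-radix number: sum over i of (n_1 * ... * n_{i-1}) * d_i.

    bases[i] is the base n_{i+1} of digit i+1, digits[i] its current value d_{i+1}.
    """
    assert len(bases) == len(digits)
    assert all(0 <= d < n for d, n in zip(digits, bases))
    value, weight = 0, 1
    for d, n in zip(digits, bases):
        value += weight * d  # digit i contributes d_i times the product of earlier bases
        weight *= n
    return value

# With bases (3, 5, 2) the digits (2, 4, 1) encode 2 + 3*4 + 15*1 = 29.
```

Note that k digits with bases n1,…,nk together represent exactly the values 0,…,n1⋯nk−1, which is what the register simulation below relies on.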

In the previous sections, we have made use of helper agents that could autonomously execute certain tasks (e.g. interacting with the counter). We will continue in this vein and designate a new agent for each task.

We start by distributing the free agents into the g(ln) digits. This happens in a simple round-robin fashion.

(∗, Dist1), (∗) ↦ (0, Dist0, DistA1, Loop1), (∗)
(i, DistA1, Body1), (0, Free1, V0) ↦ (i−1, Body0), (i, Free0, Digit1, V1) for i>0
(∗, DistA1, Body1), (∗, Free0, V0, T0) ↦ (∗, Body0), (∗, V1)
(0, DistA1), (i, Ldr1) ↦ (g(i)), (i) for i ∈ ℕ
(∗, DistA1, End1), (∗) ↦ (∗, DistA0, End0, DistDone1), (∗)

We use a new flag V to mark agents that have already been seen. This ensures that all available agents are distributed. The restriction to T0 is necessary to satisfy the assumptions of Lemma 12 – but once the cleanup has successfully completed, no agents will have T set.
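The net effect of the round-robin distribution can be sketched as follows (our illustration of the outcome, not of the transitions themselves; in the protocol the helper walks down the levels rather than counting upwards):

```python
def distribute(n_free, g):
    """Round-robin assignment: the k-th free agent joins digit (k mod g) + 1.

    Returns the number of agents per digit; every digit ends up with
    floor(n_free/g) or ceil(n_free/g) agents.
    """
    sizes = [0] * g
    for k in range(n_free):
        sizes[k % g] += 1
    return sizes
```

In particular, no digit receives fewer than ⌊n_free/g⌋ agents, which is the lower bound on digit capacity used in the proof of Lemma 9.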

Now we implement arithmetic operations on the digits. First, we give a subroutine to detect whether a digit is full (or empty). For the following transitions, let i,j ∈ ℕ, a,b ∈ {0,1} and M ⊆ F, with (j,M) ∉ (i, Digit1, Ma) and (j,M) ∈ (∗, Ub).

(∗, Det1), (∗) ↦ (∗, Det0, DetA1, Loop1, R0), (∗)
(i, DetA1, Body1, Ma, Ub), (i, Digit1, Ma, Ub) ↦ (i, Body0, R1), (∗, U1−b)
(i, DetA1, Body1, Ma, Ub), (j, M) ↦ (i, Body0), (j, U1−b)
(i, DetA1, End1, Ub), (∗) ↦ (i+1, DetA0, DetDone1, End0, U1−b), (∗)

This is slightly more involved. Similar to before, we mark agents that have been counted (this time using U). To avoid having to do a second loop which resets U, we instead alternate between U0 and U1 every time 4.6 is executed. In each iteration, we count agents by setting U to the opposite of the value stored in the digit helper. After the loop has completed, the digit helper then flips its own U flag.

To use this routine on digit i, we move an agent into (i,Det1,Mb), where b indicates whether we want to check that the digit is not empty (b=1) or not full (b=0). The output is returned using the R flag. (For technical reasons, the agent ends in level i+1 – this will be useful when checking multiple digits.)

There are two ways to change the value of a digit i: incrementing and decrementing. Both are analogous, so we only describe the former. The process is straightforward: we check whether digit i is already full; if it is not, we move an agent from (i,Digit1,M0) to (i,Digit1,M1). Otherwise the digit overflows; we have to set it to 0 and increment digit i+1. (This is simply adding 1 to a number represented using multiple digits in some base.)

Similar to before, let i,j ∈ ℕ, b ∈ {0,1} and M ⊆ F, with (j,M) ∉ (i, Digit1) and (j,M) ∈ (∗, Wb).

(i, DigIncr1), (∗, DetDone1) ↦ (i, DigIncr0, DigIncrA1), (i, DetDone0, Det1, M0)
(∗, DigIncrA1), (∗, DetDone1, R1) ↦ (∗, DigIncrA0, DigIncrB1), (∗)
(i, DigIncrB1), (i, Digit1, M0) ↦ (0, DigIncrB0, DigDone1), (∗, M1)
(∗, DigIncrA1), (∗, DetDone1, R0) ↦ (∗, DigIncrA0, DigIncrC1, Loop1), (∗)
(i, DigIncrC1, Body1, Wb), (i, Digit1, Wb) ↦ (i, Body0), (i, W1−b, M0)
(i, DigIncrC1, Body1, Wb), (j, M) ↦ (i, Body0), (i, W1−b)
(i, DigIncrC1, End1, Wb), (∗) ↦ (i+1, DigIncrC0, End0, DigIncr1, W1−b), (∗)

We define transitions for DigDecr analogously.
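Abstracting away the agent interactions, the increment and decrement behave like carries and borrows in a mixed-radix number. A minimal sketch (our own illustration; names and 0-indexing are ours):

```python
def dig_incr(bases, digits, i):
    """Increment digit i; a full digit is reset to 0 and the carry
    moves on to digit i+1, mirroring the overflow case above."""
    while i < len(digits):
        if digits[i] + 1 < bases[i]:   # digit not full: just increment
            digits[i] += 1
            return
        digits[i] = 0                  # overflow: reset digit, carry on
        i += 1
    raise OverflowError                # excluded by property (P1) below

def dig_decr(bases, digits, i):
    """Decrement digit i; an empty digit is set to its maximum and the
    borrow moves on to digit i+1."""
    while i < len(digits):
        if digits[i] > 0:
            digits[i] -= 1
            return
        digits[i] = bases[i] - 1
        i += 1
    raise OverflowError
```

For example, with bases (3, 5, 2), incrementing (2, 4, 0) yields (0, 0, 1): the first two digits overflow and the carry stops at the third.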

4.7 Counter Machine

In this section, we describe a subprocess that simulates instructions of 𝒞, using the digits of the previous section. Each of the |Σ|+3 registers is simulated by K=g(ln)/(|Σ|+3) digits. We write νln(r) for the function that maps each register r to its first digit. In particular, r is then simulated by digits νln(r),…,νln(r)+K−1. Formally, we have νln(r):=1+K(r−1).

4.7.1 Input

Initially, each agent holds one input in Σ. We need to initialise the |Σ| input registers of 𝒞 accordingly. We use a loop to make sure that all agents have been moved. However, both the loop and incrementing the digit use the counter stored in A by the Ctr agents; therefore, we swap A and B to switch between them.

Let X,Y ∈ Σ denote inputs, where X is stored in digits r,…,r+K−1.

(∗, Inp1, X1, O0), (∗, Done1) ↦ (∗, Inp0, InpA1), (0, Swap1)
(∗, InpA1, X1, O0), (∗, DigDone1) ↦ (∗, InpA0, InpB1, X0, O1), (r, DigIncr1)
(∗, InpB1), (∗, DigDone1) ↦ (∗, InpB0, InpC1), (∗)
(∗, InpC1), (∗, Done1) ↦ (∗, InpC0, Inp1, Loop1, Body0), (∗, Swap1)
(∗, Inp1, Body1, X1, O1), (∗, Y1, O0) ↦ (∗, X0, Y1, O0), (∗, X1, Y0, O1)
(∗, Inp1, End1, Σ0), (∗) ↦ (∗, InpDone1), (∗)

There are two considerations complicating the implementation of 4.7.1. First, the agent in Inp must count its own input. Second, the overall amount of input flags in the population must not change. We ensure the latter by marking agents with O (instead of e.g. consuming the input) and exchanging input flags (second to last line).

4.7.2 Simulating Instructions

Finally, we can start simulating the instructions of the counter machine. There are two types of instructions. Incr instructions increment a register and then go nondeterministically to one of two instructions. Decr instructions decrement a register and go to one of two instructions, depending on whether the resulting value is zero. The counter machine accepts by reaching the last instruction. We make the following assumptions on the behaviour of the counter machine:

  (P1) No increment that would cause an overflow is performed, nor is a decrement on an empty register.

  (P2) If it is possible to accept from the initial configuration, every fair run will accept eventually.

  (P3) Once reaching the final instruction, the counter machine loops and remains there.
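To fix intuitions before giving the transitions, the instruction semantics can be sketched as a small interpreter (a hedged illustration; the instruction encoding, function name and step bound are ours):

```python
import random

def run_counter_machine(program, registers, accept, bound, max_steps=100_000):
    """Interpreter for the two instruction types described above.

    program[s] = ('Incr', r, s0, s1): increment register r, then move
        nondeterministically to s0 or s1.
    program[s] = ('Decr', r, s0, s1): decrement register r, then move to s1
        if the result is zero and to s0 otherwise.
    Instruction `accept` is final; by (P3) the machine would stay there.
    """
    s = 1
    for _ in range(max_steps):
        if s == accept:
            return True
        op, r, s0, s1 = program[s]
        if op == 'Incr':
            assert registers[r] + 1 < bound   # (P1): no overflow
            registers[r] += 1
            s = random.choice((s0, s1))
        else:
            assert registers[r] > 0           # (P1): no decrement on empty
            registers[r] -= 1
            s = s1 if registers[r] == 0 else s0
    return False

# Example program: move register 1 into register 2, then accept (instruction 4).
prog = {
    1: ('Decr', 1, 2, 3),   # nonzero after decrement: go to 2; zero: go to 3
    2: ('Incr', 2, 1, 1),   # r2 += 1, back to 1
    3: ('Incr', 2, 4, 4),   # account for the last decrement, then accept
}
```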

Let 1,…,l denote the instructions of the counter machine. The subprocess simulating the machine is led by the agent with flag CM; it stores the current instruction using flag IP^s, with s ∈ {1,…,l}. Fix some instruction s=(op,r,s0,s1) ∈ {Incr,Decr} × {1,…,|Σ|+3} × {1,…,l}². If op=Incr, we increment counter νi(r) and move nondeterministically to instruction s0 or s1. Let i ∈ ℕ and b ∈ {0,1}.

(i, CM1, IP^s_1), (∗, DigDone1) ↦ (i, IP^s_0, IP^{s_b}_1), (νi(r), DigDone0, DigIncr1)

We remark that the digits have no concept of being grouped into registers – if digit i overflows during an increment, the digit helper moves on to the next digit, even if it “belongs” to a different register. For our purposes, this is not a problem, since property (P1) ensures that the last digit of a register never overflows.

If op=Decr, we decrement counter νi(r) and check whether it is zero. If so, we move to s1, else to s0.

(i, CM1, IP^s_1), (∗, DigDone1) ↦ (i, IP^s_0, IPA^s_1), (νi(r), DigDone0, DigDecr1)
(i, CM1, IPA^s_1), (∗, DigDone1) ↦ (i, IPA^s_0, IPB^s_1), (∗)
(i, CM1, IPB^s_1), (∗, DetDone1) ↦ (i, IPB^s_0, IPC^s_1), (νi(r), R0)
(i, CM1, IPC^s_1), (j, DetDone1, R0) ↦ (i), (j, DetDone0, Det1, M1) for j < νi(r+1)
(i, CM1, IPC^s_1), (νi(r+1), DetDone1, R0) ↦ (i, IPC^s_0, IP^{s_0}_1), (∗)
(i, CM1, IPC^s_1), (∗, DetDone1, R1) ↦ (i, IPC^s_0, IP^{s_1}_1), (∗)

4.7.3 Output

For the population protocol to have an output, we do a standard output broadcast. The agent simulating the counter machine outputs 1 once the machine has reached the last instruction, and 0 otherwise. All other agents copy that output.

(∗, CM1, IP^l_1), (∗) ↦ (∗, Output1), (∗)
(∗, CM1, Outputb), (∗) ↦ (∗), (∗, Outputb) for b ∈ {0,1}

And O((q,S)) := 1 if Output ∈ S, else O((q,S)) := 0.

4.7.4 Starting the Simulation

All that remains is initialising the above subprocesses. After cleanup, there will be one unique leader in Start (Lemma 12). It creates the subprocesses for the counter and the digits. Then it starts the subprocess that distributes the agents into the digits. Once that is finished, the leader starts the initialisation of the input registers, and after that, finally starts the counter machine simulation.

(∗, Start1, Go0), (∗, Free1) ↦ (∗, Go1, GoA1), (∗, Free0, Done1)
(∗, Start1, GoA1), (∗, Free1) ↦ (∗, GoA0, GoB1), (0, Free0, DetDone1)
(∗, Start1, GoB1), (∗, Free1) ↦ (∗, GoB0, GoC1), (0, Free0, DigDone1)
(∗, Start1, GoC1), (∗, Free1) ↦ (∗, GoC0, GoD1), (0, Free0, Dist1)
(∗, Start1, GoD1), (∗, DistDone1) ↦ (∗), (0, DistDone0, Inp1)
(∗, Start1, GoD1), (∗, InpDone1) ↦ (∗), (0, InpDone0, CM1, IP^1_1)

This finally allows us to prove Lemma 9.

Proof.

Let φ ∈ 𝖭𝖲𝖯𝖠𝖢𝖤(f(n) log n) denote a predicate, where φ: ℕ^Σ → {0,1}. Then there is a 2^{cf(n) log n}-bounded counter machine 𝒞 deciding φ, for some c ∈ ℕ, using Γ := |Σ|+3 registers. (The three additional counters are usually used to store the tape left of the head, the tape right of the head, and as a temporary area to perform multiplication and division by constants.)
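The parenthetical refers to the standard simulation of a space-bounded tape by counters; the following hedged sketch (our illustration) shows the core idea, with Python's % 2, // 2 and * 2 standing in for what the temporary register implements via repeated decrements:

```python
def move_head_right(left, right):
    """A binary tape split at the head: `left` holds the cells to the left
    (bit nearest the head least significant), `right` holds the head cell
    and everything to its right. Moving the head right shifts one bit from
    `right` into `left`."""
    bit = right % 2              # read the cell under the head
    return 2 * left + bit, right // 2

def move_head_left(left, right):
    """Inverse: shift one bit from `left` back into `right`."""
    bit = left % 2
    return left // 2, 2 * right + bit
```

Since the tape has length 𝒪(f(n) log n), these two numbers stay below 2^{𝒪(f(n) log n)}, matching the register bound above.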

We may assume that 𝒞 never exceeds its bounds (ensuring (P1)). Further, we can assume that 𝒞 stores its inputs in some fashion and may nondeterministically restart, as long as it has not accepted. This yields (P2). Property (P3) can easily be achieved by a syntactic modification.

Furthermore, it is enough to show that our uniform population protocol 𝒫 is correct for all inputs of size at least n0, for some constant n0, by possibly taking a product with an 𝒪(1)-state population protocol computing φ for small inputs.

We argue that there is a constant β s.t. the construction from Section 4 can simulate Γ registers that are 2^{cf(n) log n}-bounded, using g(ln) := βf(2^{ln}) digits in total. Each digit has at least (n−ln−5)/g(ln) − 1 agents and there are βf(2^{ln})/Γ digits per register. Taking the logarithm, we obtain

log( ((n−ln−5)/(βf(2^{ln})) − 1)^{βf(2^{ln})/Γ} ) ≥ (βf(n)/Γ) · log( n/(2β²f(n)) − 1 ) ≥ (βf(n)/Γ) · log( n^ε/(dβ) − 1 )

where d is a constant s.t. f(n) ≤ dn^{1−ε}. We can further lower-bound this by (εβ/Γ)·f(n) log n − 𝒪(1). Choosing a suitably large constant β, this is at least c·f(n) log n, as desired.

It remains to argue that our construction is correct. Using Lemmas 10 and 12, we know that the protocol eventually reaches a configuration with exactly one leader, a counter initialised to n, and all other agents in a well-defined state. Afterwards, at each step at most one agent can execute a transition, and correctness follows from careful inspection of the transitions defined above.

5 Conclusion

We have characterised the expressive power of population protocols with fΩ(logn)𝒪(n1ε) states. This closes the gap left open by prior research for uniform protocols, and gives the complexity for protocols with Θ(logn) or Θ(polylogn) states – the most common constructions in the literature. Our characterisation applies to both uniform and non-uniform protocols.

The upper bound uses the Immerman-Szelepcsényi theorem to argue that a nondeterministic space-bounded Turing machine can simulate the protocol and determine whether it has stabilised. Similar arguments can be found in the literature [11].

Our construction is more involved. It uses the standard idea of determining the total number of agents and then performing zero-checks, i.e. checking whether a state is absent by iterating over all agents. Using zero-checks, it is straightforward to simulate counter machines. There are two main difficulties: First, with only 𝒪(logn) states, no single agent can store n. Instead, we have to distribute that information over multiple agents (namely those with flag Ctr), and those agents must collaborate to perform computations on that number. Second, it is neither sufficient to use a constant number of counters with n agents, nor to use f(n) counters with a constant number of agents (i.e. bits). We must do both at the same time, which results in the Digit agents. This is one main point where our construction improves upon [12] and prevents the loss of log factors.

We have focused on the expressive power of protocols that can run for an arbitrary amount of time. However, time complexity plays an important role, and many constructions in the literature focus on being fast. Does limiting the running time affect the expressive power? We conjecture that such protocols can be modelled well by randomised, space-bounded Turing machines, but it is unclear whether one can obtain a characterisation in that case.

One important result about constant-state population protocols is the decidability of the verification problem [22] – a natural question is whether this result can be extended to, e.g. protocols with Θ(logn) states. Unfortunately, our characterisation answers this question in the negative. This does open the question of whether there exist subclasses that exclude our construction (and may, therefore, have a decidable verification problem), but include known constructions from the literature for e.g. the majority predicate.

Finally, one gap remains for non-uniform (or weakly uniform) protocols with o(logn) states. In particular, is it possible to decide a non-semilinear predicate with o(logn) states? We conjecture 𝖴𝖭𝖫 := 𝖴𝖤𝖭𝖢(𝖭𝖫) ⊆ 𝖶𝖴𝖯𝖯(loglogn), i.e. there is a (non-uniform) population protocol with 𝒪(loglogn) states for every predicate in 𝖴𝖭𝖫, in particular for xy=z or for deciding whether a given input x is a prime number.

References

  • [1] Dan Alistarh, James Aspnes, David Eisenstat, Rati Gelashvili, and Ronald L. Rivest. Time-space trade-offs in population protocols. In SODA 2017, pages 2560–2579. SIAM, 2017. doi:10.1137/1.9781611974782.169.
  • [2] Dan Alistarh and Rati Gelashvili. Recent algorithmic advances in population protocols. SIGACT News, 49(3):63–73, 2018. doi:10.1145/3289137.3289150.
  • [3] Dan Alistarh, Rati Gelashvili, and Milan Vojnovic. Fast and exact majority in population protocols. In PODC, pages 47–56. ACM, 2015. doi:10.1145/2767386.2767429.
  • [4] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. In PODC 2004, pages 290–299. ACM, 2004. doi:10.1145/1011767.1011810.
  • [5] Dana Angluin, James Aspnes, and David Eisenstat. Fast computation by population protocols with a leader. In DISC, volume 4167 of Lecture Notes in Computer Science, pages 61–75. Springer, 2006. doi:10.1007/11864219_5.
  • [6] Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. The computational power of population protocols. Distributed Comput., 20(4):279–304, 2007. doi:10.1007/S00446-007-0040-2.
  • [7] Amanda Belleville, David Doty, and David Soloveichik. Hardness of computing and approximating predicates and functions with leaderless population protocols. In ICALP, volume 80 of LIPIcs, pages 141:1–141:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPICS.ICALP.2017.141.
  • [8] Petra Berenbrink, Robert Elsässer, Tom Friedetzky, Dominik Kaaser, Peter Kling, and Tomasz Radzik. Time-space trade-offs in population protocols for the majority problem. Distributed Comput., 34(2):91–111, 2021. doi:10.1007/S00446-020-00385-0.
  • [9] Petra Berenbrink, George Giakkoupis, and Peter Kling. Optimal time and space leader election in population protocols. In STOC, pages 119–129. ACM, 2020. doi:10.1145/3357713.3384312.
  • [10] Petra Berenbrink, Dominik Kaaser, and Tomasz Radzik. On counting the population size. In PODC, pages 43–52. ACM, 2019. doi:10.1145/3293611.3331631.
  • [11] Michael Blondin, Javier Esparza, and Stefan Jaax. Expressive power of broadcast consensus protocols. In CONCUR, volume 140 of LIPIcs, pages 31:1–31:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.CONCUR.2019.31.
  • [12] Olivier Bournez, Johanne Cohen, and Mikaël Rabie. Homonym population protocols. Theory Comput. Syst., 62(5):1318–1346, 2018. doi:10.1007/S00224-017-9833-2.
  • [13] Ioannis Chatzigiannakis, Othon Michail, Stavros Nikolaou, Andreas Pavlogiannis, and Paul G. Spirakis. Passively mobile communicating logarithmic space machines. CoRR, abs/1004.3395, 2010. arXiv:1004.3395.
  • [14] Ioannis Chatzigiannakis, Othon Michail, Stavros Nikolaou, Andreas Pavlogiannis, and Paul G. Spirakis. Passively mobile communicating machines that use restricted space. Theor. Comput. Sci., 412(46):6469–6483, 2011. doi:10.1016/J.TCS.2011.07.001.
  • [15] David Doty and Mahsa Eftekhari. Efficient size estimation and impossibility of termination in uniform dense population protocols. In PODC, pages 34–42. ACM, 2019. doi:10.1145/3293611.3331627.
  • [16] David Doty and Mahsa Eftekhari. A survey of size counting in population protocols. Theor. Comput. Sci., 894:91–102, 2021. doi:10.1016/J.TCS.2021.08.038.
  • [17] David Doty and Mahsa Eftekhari. Dynamic size counting in population protocols. In SAND, volume 221 of LIPIcs, pages 13:1–13:18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.SAND.2022.13.
  • [18] David Doty, Mahsa Eftekhari, Leszek Gasieniec, Eric E. Severson, Przemyslaw Uznanski, and Grzegorz Stachowiak. A time and space optimal stable population protocol solving exact majority. In FOCS, pages 1044–1055. IEEE, 2021. doi:10.1109/FOCS52979.2021.00104.
  • [19] David Doty, Mahsa Eftekhari, Othon Michail, Paul G. Spirakis, and Michail Theofilatos. Brief announcement: Exact size counting in uniform population protocols in nearly logarithmic time. In DISC, volume 121 of LIPIcs, pages 46:1–46:3. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018. doi:10.4230/LIPICS.DISC.2018.46.
  • [20] David Doty and David Soloveichik. Stable leader election in population protocols requires linear time. In DISC, volume 9363 of Lecture Notes in Computer Science, pages 602–616. Springer, 2015. doi:10.1007/978-3-662-48653-5_40.
  • [21] Robert Elsässer and Tomasz Radzik. Recent results in population protocols for exact majority and leader election. Bull. EATCS, 126, 2018. URL: http://bulletin.eatcs.org/index.php/beatcs/article/view/549/546.
  • [22] Javier Esparza, Pierre Ganty, Jérôme Leroux, and Rupak Majumdar. Verification of population protocols. Acta Informatica, 54(2):191–215, 2017. doi:10.1007/S00236-016-0272-3.
  • [23] Patrick C. Fischer, Albert R. Meyer, and Arnold L. Rosenberg. Counter machines and counter languages. Math. Syst. Theory, 2(3):265–283, 1968. doi:10.1007/BF01694011.
  • [24] David Soloveichik, Matthew Cook, Erik Winfree, and Jehoshua Bruck. Computation with finite stochastic chemical reaction networks. Nat. Comput., 7(4):615–633, 2008. doi:10.1007/S11047-008-9067-Y.