Fast and Succinct Population Protocols for Presburger Arithmetic

In their 2006 seminal paper in Distributed Computing, Angluin et al. present a construction that, given any Presburger predicate as input, outputs a leaderless population protocol that decides the predicate. The protocol for a predicate of size $m$ (when expressed as a Boolean combination of threshold and remainder predicates with coefficients in binary) runs in $\mathcal{O}(m \cdot n^2 \log n)$ expected number of interactions, which is almost optimal in $n$. However, the number of states of the protocol is exponential in $m$. Blondin et al. described in STACS 2020 another construction that produces protocols with a polynomial number of states, but exponential expected number of interactions. We present a construction that produces protocols with $\mathcal{O}(m)$ states that run in expected $\mathcal{O}(m^{7} \cdot n^2)$ interactions, optimal in $n$, for all inputs of size $\Omega(m)$. For this we introduce population computers, a carefully crafted generalization of population protocols easier to program, and show that our computers for Presburger predicates can be translated into fast and succinct population protocols.


Introduction
Population protocols [4,5] are a model of computation in which indistinguishable, mobile, finite-state agents randomly interact in pairs to decide whether their initial configuration satisfies a given property, modelled as a predicate on the set of all configurations. The decision is taken by stable consensus: eventually all agents agree on whether the property holds or not, and never change their mind again. Population protocols are very close to chemical reaction networks, a model in which agents are molecules and interactions are chemical reactions.

¹ Remainder predicates cannot be directly expressed in Presburger arithmetic without quantifiers.
² If the model is extended by allowing a leader (and one considers the slightly weaker notion of convergence time), or the number of states of an agent is allowed to grow with the population size, O(n · polylog(n)) interactions can be achieved [6,3,2,13,12].

Preliminaries
Multisets. Let E be a finite set. A multiset over E is a mapping E → N, and N^E denotes the set of all multisets over E. We sometimes write multisets using set-like notation, e.g. a, 2 · b denotes the multiset v such that v(a) = 1, v(b) = 2 and v(e) = 0 for every e ∈ E \ {a, b}.
The empty multiset is also denoted ∅.

Multiset rewriting transitions.
A multiset rewriting transition, or just a transition, is a pair (r, s) ∈ N^E × N^E, also written r → s. A transition t = (r, s) is enabled at v ∈ N^E if v ≥ r, and its occurrence leads to v′ := v − r + s, written v →_t v′. The multiset v is terminal if it does not enable any transition. An execution is a finite or infinite sequence v_0, v_1, ... of multisets such that v_0 →_{t_1} v_1 →_{t_2} ··· for some sequence t_1, t_2, ... of transitions. A multiset v′ is reachable from v if there is an execution v_0, v_1, ..., v_k with v_0 = v and v_k = v′; we also say that the execution leads from v to v′. An execution is a run if it is infinite, or it is finite and its last multiset is terminal. A run v_0, v_1, ... is fair if it is finite, or it is infinite and for every multiset v, if v is reachable from v_i for infinitely many i ≥ 0, then v = v_j for infinitely many j ≥ 0.
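For concreteness, the multiset-rewriting semantics can be sketched in a few lines of Python (the function names `enabled` and `fire` are ours; multisets are `collections.Counter` objects):

```python
from collections import Counter

def enabled(v, t):
    """t = (r, s) is enabled at v iff v >= r pointwise."""
    r, _ = t
    return all(v[e] >= k for e, k in r.items())

def fire(v, t):
    """The occurrence of an enabled t = (r, s) leads from v to v - r + s."""
    r, s = t
    assert enabled(v, t)
    out = Counter(v)
    out.subtract(r)
    out.update(s)
    return +out  # drop zero counts

# the 3-way transition p, p, q -> p, q, o used later in the paper
t = (Counter({'p': 2, 'q': 1}), Counter({'p': 1, 'q': 1, 'o': 1}))
assert fire(Counter({'p': 2, 'q': 1}), t) == Counter({'p': 1, 'q': 1, 'o': 1})
```

Here `+out` discards states whose count dropped to zero, matching the convention that multisets only record positive multiplicities.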
Presburger arithmetic. Angluin et al. proved that population protocols decide exactly the predicates N^k → {0, 1} definable in Presburger arithmetic, the first-order theory of addition, which coincide with the semilinear predicates [14]. Using the quantifier elimination procedure of Presburger arithmetic, every Presburger predicate can be represented as a Boolean combination of threshold and remainder predicates. A predicate φ : N^v → {0, 1} is a threshold predicate if φ(x_1, ..., x_v) = (a_1·x_1 + ··· + a_v·x_v ≥ c), where a_1, ..., a_v, c ∈ Z, and a remainder predicate if φ(x_1, ..., x_v) = (a_1·x_1 + ··· + a_v·x_v ≡_m c), where a_1, ..., a_v ∈ Z, m ≥ 1, c ∈ {0, ..., m−1}, and a ≡_m b denotes that a is congruent to b modulo m. We call the set of these formulas quantifier-free Presburger arithmetic, or QFPA. The size of a predicate is the minimal number of bits of a formula of QFPA representing it, with coefficients written in binary.

Population Computers
Population computers are a generalization of population protocols that allows us to give very concise descriptions of our protocols for Presburger predicates.

Syntax.
A population computer is a tuple P = (Q, δ, I, O, H), where: Q is a finite set of states; multisets over Q are called configurations. δ ⊆ N^Q × N^Q is a set of multiset rewriting transitions r → s over Q such that |r| = |s| ≥ 2 and |supp(r)| ≤ 2. Further, we require that δ is a partial function, i.e. s_1 = s_2 for all transitions r → s_1 and r → s_2 of δ. A transition is binary if |r| = |s| = 2, and a population computer is binary if every transition is binary. I ⊆ Q is the set of input states, and O is an output function assigning to each set S ⊆ Q of states an output in {0, 1, ⊥}.
H ∈ N Q\I is a multiset of helper agents or just helpers. A helper configuration is a configuration C such that supp(C) ⊆ supp(H) and C ≥ H.
Graphical notation. We visualise population computers as Petri nets (see e.g. Figure 3). Places (circles) and transitions (squares) represent respectively states and transitions. To visualise configurations, we draw agents as tokens (smaller filled circles).
Semantics. Intuitively, a population computer decides which output (0 or 1) corresponds to an input C_I as follows. It adds to the agents of C_I an arbitrary helper configuration C_H of agents to produce the initial configuration C_I + C_H. Then it starts the computation and lets it stabilise to configurations of output 1 or output 0. Formally, the initial configurations of P for input C_I are all configurations of the form C_I + C_H for some helper configuration C_H. An input C_I has output b if for every initial configuration C_0 = C_I + C_H, every fair run starting at C_0 stabilises to b. A population computer P decides a predicate φ : N^I → {0, 1} if every input C_I has output φ(C_I).
Terminating and bounded computers. A population computer is bounded if no run starting at an initial configuration C is infinite, and terminating if no fair run starting at C is infinite.
Observe that bounded population computers are terminating.
Size and adjusted size. Let P = (Q, δ, I, O, H) be a population computer. We assume that O is described as a Boolean circuit with size(O) gates. For every transition t = (r → s) let |t| := |r|. The size of P is size(P) := |Q| + |H| + size(O) + Σ_{t∈δ} |t|. If P is binary, then (as for population protocols) we do not count the transitions and define the adjusted size size_2(P) := |Q| + |H| + size(O). Observe that both the size of a transition and the size of the helper multiset are the number of elements, i.e. the size in unary, strengthening our later result about the existence of succinct population computers.
Population protocols. A population computer P = (Q, δ, I, O, H) is a population protocol if it is binary, has no helpers (H = ∅), and O is a consensus output. It is easy to see that this definition coincides with the one of [5]. The speed of a binary population computer with no helpers, and so in particular of a population protocol, is defined as follows. We assume a probabilistic execution model in which at configuration C two agents are picked uniformly at random and execute a transition, if possible, moving to a configuration C′ (by assumption they enable at most one transition). This is called an interaction. Repeating this process, we generate a random execution C_0 C_1 .... We say that the execution stabilises at time t if C_t reaches only configurations C′ with O(supp(C′)) = O(supp(C_t)), and we say that P decides φ within T interactions if it decides φ and E(t) ≤ T. See e.g. [6] for more details.
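The probabilistic execution model can be sketched as follows (a toy scheduler of our own, for binary transitions only; `delta` maps an ordered pair of states to the pair of successor states, and the states used in the example are hypothetical):

```python
import random
from collections import Counter

def simulate(config, delta, seed):
    """Random scheduler for binary transitions: repeatedly pick an
    ordered pair of distinct agents uniformly at random and, if their
    states enable a transition, execute it. Stops when the current
    configuration is terminal, returning it and the interaction count."""
    rng = random.Random(seed)
    c = Counter(config)
    interactions = 0
    while True:
        present = [s for s in c if c[s] > 0]
        if not any((p, q) in delta and (p != q or c[p] >= 2)
                   for p in present for q in present):
            return +c, interactions          # terminal configuration
        pool = [s for s, k in c.items() for _ in range(k)]
        p, q = rng.sample(pool, 2)           # two distinct agents
        interactions += 1
        if (p, q) in delta:
            p2, q2 = delta[(p, q)]
            c[p] -= 1; c[q] -= 1
            c[p2] += 1; c[q2] += 1

# pairing off one 'x' against one 'y' (both orders listed in delta)
delta = {('x', 'y'): ('0', '0'), ('y', 'x'): ('0', '0')}
final, count = simulate({'x': 3, 'y': 2}, delta, seed=7)
assert final == Counter({'0': 4, 'x': 1})
```

The terminal configuration here is the same for every seed; only the number of interactions is random.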
Population computers vs. population protocols. Population computers generalise population protocols in three ways: They have non-binary transitions, but only those in which the interacting agents populate at most two states. For example, 2 · p, q → p, q, o (which in the following is written simply as p, p, q → p, q, o) is allowed, but p, q, o → p, p, q is not. They use a multiset H of auxiliary helper agents, but the addition of more helpers does not change the output of the computation. Intuitively, contrary to the case of leaders, agents do not know any upper bound on the number of helpers, and so the protocol cannot rely on this bound for correctness or speed. They have a more flexible output condition. Loosely speaking, population computers accept by stabilising the population to an accepting set of states, instead of to a set of accepting states.

Overview and Main Results
Given a predicate φ ∈ QFPA over variables x_1, ..., x_v, the rest of this paper shows how to construct a fast and succinct population protocol deciding φ. First, Section 5 gives an overview of previous constructions and explains why they are not fast or not succinct. Then we proceed in five steps:
1. Construct a predicate double(φ) ∈ QFPA over twice as many variables that satisfies double(φ)(x_1, ..., x_v, 0, ..., 0) = φ(x_1, ..., x_v).
2. Construct a succinct bounded population computer P deciding double(φ).
3. Convert P into a succinct population protocol P′ deciding φ for inputs of size Ω(|φ|).
4. Prove that P′ runs within O(n^3) interactions.
5. Use a refined running-time analysis to prove that P′ runs within O(n^2) interactions.
Section 6 constructs succinct bounded population computers for all predicates φ ∈ QFPA; this allows us to conduct steps 1 and 2. The restriction to inputs of size Ω(m) is very mild. Moreover, it can be lifted using a technique of [8], at the price of adding additional states (and at no cost regarding asymptotic speed, since the speed of the new protocol only changes for inputs of size O(m)). It is known that the majority predicate can only be decided in Ω(n^2) interactions by population protocols [1], so, as a general construction, our result is optimal w.r.t. time. Regarding space, an Ω(|φ|^{1/4}) lower bound was shown in [9], leaving a polynomial gap.

Previous Constructions: Angluin et al. and Blondin et al.
The population protocols for a quantifier-free Presburger predicate φ constructed in [5] are not succinct, i.e. do not have O(|φ|^a) states for any constant a, and those of [8] are not fast, i.e. do not have speed O(|φ|^a · n^b) for any constants a, b. We explain why with the help of some examples.
▶ Example 5. Consider the protocol of [5] for the predicate φ = (x − y ≥ 2^d). The states are the triples (ℓ, b, u), where ℓ indicates whether the agent is active (A) or passive (P), b indicates whether it currently believes that φ holds (Y) or not (N), and u ∈ {−2^d, ..., 2^d} is the agent's wealth, which can be negative. Agents for input x are initially in state (A, N, 1), and agents for y in (A, N, −1). If two passive agents meet, their encounter has no effect. If at least one agent is active, then the encounter consolidates the wealth of the two agents in one of them, truncating it to the interval [−2^d, 2^d], and both agents update their belief b according to the consolidated wealth. The protocol stabilises after O(n^2 log n) expected interactions [5], but it has 2^{d+1} + 1 states, exponentially many in |φ| ∈ Θ(d).
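The following simulation illustrates the dynamics of Example 5. It is a simplification of ours, not the exact transition of [5]: when at least one agent of a meeting pair is active, the pair's wealth is consolidated in one agent up to the cap 2^d, the other agent keeps the remainder (so no wealth is ever lost), and both agents set their belief according to whether the kept value reaches 2^d:

```python
import random

def decide(x, y, d, seed, steps=50000):
    """Simplified simulation of a protocol for x - y >= 2**d.
    Each agent is a triple [active, belief, wealth]. Returns the
    consensus belief after `steps` interactions, or None if the
    agents have not reached consensus yet."""
    rng = random.Random(seed)
    M = 2 ** d
    agents = ([[True, False, 1] for _ in range(x)]
              + [[True, False, -1] for _ in range(y)])
    for _ in range(steps):
        i, j = rng.sample(range(len(agents)), 2)
        if agents[i][0] or agents[j][0]:
            total = agents[i][2] + agents[j][2]
            kept = max(-M, min(M, total))     # consolidate, capped at +/- M
            belief = kept >= M
            agents[i] = [True, belief, kept]
            agents[j] = [False, belief, total - kept]
    beliefs = {a[1] for a in agents}
    return beliefs.pop() if len(beliefs) == 1 else None
```

With high probability all wealth ends up in a single active agent, whose belief then spreads to the whole population.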
▶ Example 6. We give a protocol for φ = (x − y ≥ 2^d) with a polynomial number of states. This is essentially the protocol of [8]. We remove states and transitions from the protocol of Example 5, retaining only the states (ℓ, b, u) such that u is a power of 2, and some of the transitions involving these states. The protocol is not yet correct. For example, for d = 1 and the input x = 2, y = 1, the protocol can reach in one step the configuration in which the three agents (two x-agents and one y-agent) are in states (A, Y, 2), (P, Y, 0), (A, N, −1), after which it gets stuck. In [8] this is solved by adding "reverse" transitions that undo previous combinations. The protocol has only Θ(d) states and transitions, but runs within Ω(n^{2^{d−2}}) interactions.

▶ Example 7. Given protocols P_1, P_2 with n_1 and n_2 states deciding predicates φ_1 and φ_2, Angluin et al. construct in [5] a protocol P for φ_1 ∧ φ_2 with n_1 · n_2 states. It follows that the number of states of a protocol for φ := φ_1 ∧ ··· ∧ φ_s grows exponentially in s, and so in |φ|. Blondin et al. give an alternative construction with polynomially many states [8, Section 5.3]. However, their construction contains transitions that, as in the previous example, reverse the effect of other transitions, and make the protocol very slow. The problem is already observed in the toy protocol with states q_1, q_2 and transitions q_1, q_1 → q_2, q_2 and q_1, q_2 → q_1, q_1.
(Similar transitions are used in the initialisation of [8].) Starting with an even number n ≥ 2 of agents in q_1, eventually all agents move to q_2 and stay there, but the expected number of interactions is Ω(2^{n/10}) [11, Appendix A.2].
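A direct simulation of this toy protocol (our own code) shows the behaviour: every run eventually puts all agents in q2, but only after many unproductive reversals:

```python
import random

def toy_run(n, seed):
    """All n agents (n even) start in q1. Transitions:
         q1, q1 -> q2, q2   (progress)
         q1, q2 -> q1, q1   (reverse)
    Counts interactions until the terminal configuration (all q2)."""
    rng = random.Random(seed)
    q1 = n                      # agents in q1; the other n - q1 are in q2
    interactions = 0
    while q1 != 0:
        interactions += 1
        pair = rng.sample(['q1'] * q1 + ['q2'] * (n - q1), 2)
        if pair == ['q1', 'q1']:
            q1 -= 2             # progress
        elif 'q1' in pair and 'q2' in pair:
            q1 += 1             # reverse: the q2 agent moves back to q1
    return interactions
```

Already for small n the seeded runs take far more interactions than the n/2 productive steps strictly needed, in line with the exponential lower bound.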

Succinct Bounded Population Computers for Presburger Predicates
In Sections 6.1 and 6.2 we construct population computers for remainder and threshold predicates in which all coefficients are powers of two. We present the remainder case in detail, and sketch the threshold case. The generalization to arbitrary coefficients is achieved by means of a gadget very similar to the one we use to compute Boolean combinations of predicates. This latter gadget is presented in Section 6.3, and so we introduce the generalization there.

Population computers for remainder predicates
Let Pow^+ := {2^0, 2^1, 2^2, ...} be the set of positive powers of 2.
We construct population computers P_φ for remainder predicates φ(x_1, ..., x_v) = (a_1·x_1 + ··· + a_v·x_v ≡_m c) whose coefficients a_1, ..., a_v are powers of two. We say that a finite multiset r over Pow^+ represents the residue rep(r) := sum(r) mod m. For example, if m = 11 then r_18 := 2^3, 2^3, 2^1 represents 7. Accordingly, we call the multisets over Pow^+ representations. A representation of degree d only contains elements of Pow^+_d := {2^0, ..., 2^d}. A support representation contains at most one copy of x for every x ∈ Pow^+; so its represented value is completely determined by the support. For example, r_18 is not a support representation of 7, but 2^5, 2^3 is.

We proceed to construct P_φ. Let us give some intuition first. P_φ has Pow^+_d ∪ {0} as set of states. We extend the notion of representation to configurations by disregarding agents in state 0; a configuration is therefore a support representation if all states except 0 have at most one agent. The initial states of P_φ are chosen so that every initial configuration for an input (x_1, ..., x_v) is a representation of the residue z := (a_1·x_1 + ··· + a_v·x_v) mod m. The transitions transform this initial representation of z into a support representation of z. Whether z ≡_m c holds or not depends only on the support of this representation, and the output function thus returns 1 for the supports satisfying z ≡_m c, and 0 otherwise. Let us now formalise the construction.

States and initial states. Let d := ⌈log_2 m⌉. The set of states is Q = Pow^+_d ∪ {0}. The set of initial states is I := {a_1, ..., a_v}. Observe that an input C_I = x_1 · a_1, ..., x_v · a_v is a representation of z, but not necessarily a support representation.

Transitions.
Transitions ensure that non-support representations, i.e. representations with two or more agents in some state q, are transformed into representations of the same residue "closer" to a support representation. For q ∈ {2^0, ..., 2^{d−1}} we introduce the transition ⟨combine⟩: q, q → 2q, 0, which replaces two agents of value q by one agent of value 2q and one agent of no value; further ⟨modulo⟩ transitions (see Figure 1) reduce the represented value modulo m. The left half of Figure 1 shows the population computer for φ = (8x + 5y ≡_11 4).
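Exhaustive ⟨combine⟩ rewriting can be sketched as follows (our own code; we ignore the degree bound and the modulo transitions, so values simply double without wrapping, which suffices to see that the residue is preserved and a support representation is reached):

```python
from collections import Counter

def rep(r, m):
    """Residue represented by a multiset r of powers of two."""
    return sum(p * k for p, k in r.items()) % m

def to_support(r):
    """Exhaustively apply <combine>: q, q -> 2q, 0. Two agents with the
    same value merge into one agent with the doubled value plus one
    valueless agent in state 0 (returned separately as a count)."""
    r = Counter(r)
    zeros = 0
    changed = True
    while changed:
        changed = False
        for p, k in list(r.items()):
            if k >= 2:
                r[p] -= 2
                r[2 * p] += 1
                zeros += 1
                changed = True
        r = +r
    return r, zeros

m = 11
r18 = Counter({8: 2, 2: 1})             # the multiset 2^3, 2^3, 2^1
s, zeros = to_support(r18)
assert rep(r18, m) == rep(s, m) == 7    # residue preserved
assert all(k == 1 for k in s.values())  # support representation
```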

Population computers for threshold predicates
We sketch the construction of population computers P_φ for threshold predicates φ(x_1, ..., x_v) = (a_1·x_1 + ··· + a_v·x_v ≥ c) in which the absolute values of all coefficients are powers of two. As the construction is similar to the construction for remainder, we will focus on the differences and refer to [11, Appendix B.2] for details.
As for remainder, we work with representations that are multisets of powers of 2. However, they represent the sum of their elements (without modulo) and we allow both positive and negative powers of 2. Similar to the remainder construction, the computer transforms any representation into a support representation without changing the represented value. Then, the computer decides the predicate using only the support of that representation.
Again, there are ⟨combine⟩ transitions that allow agents with the same value to combine. Instead of modulo transitions, ⟨cancel⟩ transitions further simplify the representation: 2^i, −2^i → 0, 0. Note that even after exhaustively applying ⟨combine⟩ and ⟨cancel⟩ there can still be many agents in 2^d or many agents in −2^d. This has consequences for the construction for general predicates of Section 6.3, where we need that computers for remainder and threshold move most agents to state 0. In the remainder construction, all but a constant number of agents are moved to 0. In contrast, the threshold construction does not have this property. Thus, we do not design a single computer for a given threshold predicate φ but a family: one for every degree d larger than some minimum degree d_0 ∈ Ω(|φ|). Intuitively, larger degrees result in a larger fraction of agents in 0.
Assume we detect agents in 2^d (−2^d is analogous). If there are many, the predicate is true. However, if there is just one, then the represented value might be small, due to negative contributions −2^0, ..., −2^{d−1}. We cannot distinguish the two cases, so we add the transition ⟨cancel 2nd highest⟩: 2^d, −2^{d−1} → 2^{d−1}, 0. It ensures that agents cannot be present in both 2^d and −2^{d−1}; therefore, an agent in 2^d certifies a value of at least 2^{d−1}. The right half of Figure 1 shows the population computer for φ = (−2x + y ≥ 5) with degree d = 4. [11, Appendix B.2] proves: For every d ≥ max{⌈log_2 c⌉ + 1, ⌈log_2 |a_1|⌉, ..., ⌈log_2 |a_v|⌉} there is a bounded computer of size O(d) that decides φ.
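The three kinds of transitions can be prototyped as a rewriting loop (our own sketch; the real computer is distributed and randomised, whereas this applies the rules exhaustively and deterministically, and does not track the agents sent to state 0):

```python
from collections import Counter

def normalise(r, d):
    """Exhaustively apply, on a multiset of values 2^i and -2^i:
         <combine>             q, q           -> 2q, 0       (|q| < 2^d)
         <cancel>              2^i, -2^i      -> 0, 0
         <cancel 2nd highest>  2^d, -2^(d-1)  -> 2^(d-1), 0
    Every rule preserves the represented value (the sum)."""
    r = Counter(r)
    top, second = 2 ** d, 2 ** (d - 1)
    changed = True
    while changed:
        changed = False
        for q in list(r):
            if r[q] >= 2 and abs(q) < top:
                r[q] -= 2; r[2 * q] += 1; changed = True      # combine
            if r[q] >= 1 and r[-q] >= 1:
                r[q] -= 1; r[-q] -= 1; changed = True         # cancel
        if r[top] >= 1 and r[-second] >= 1:                   # cancel 2nd highest
            r[top] -= 1; r[-second] -= 1; r[second] += 1; changed = True
        r = +r
    return r

out = normalise(Counter({-2: 1, 1: 7}), 4)   # input x=1, y=7 for -2x + y >= 5
assert sum(q * k for q, k in out.items()) == 5
assert all(k == 1 for k in out.values())
```

On this input the loop ends in a support representation of the value 5, as expected.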

2. Construct Subcomputers.
For every 1 ≤ i ≤ s, if φ_i is a remainder predicate, then let P_i be the computer defined in Section 6.1. If φ_i is a threshold predicate, then let P_i be the computer of Section 6.2, with d := d_0 + ⌈log_2 s⌉. We explain this choice of d in step 5.

3. Combine Subcomputers.
Take the disjoint union of the computers P_i, merging their 0 states. More precisely, rename all states q ∈ Q_i to (q)_i, with the exception of state 0. Construct a computer with the union of all the renamed states and transitions. Figure 1 shows the Petri net representation of the computer so obtained for our example. We call the combined 0 state the reservoir, as it holds agents with no value that are needed for various tasks like input distribution.

4. Input Distribution.
For each variable x_i add a corresponding new input state x_i. Then add a transition that takes an agent in state x_i and agents in 0 and distributes agents to the input states of the subcomputers that correspond to x_i. In our example, we add two states x and y and the transitions x, 0 → (1)_1, (8)_2 and y, 0, 0 → (−2)_1, (4)_2, (1)_2. The distribution for x needs one helper, because we need one agent in each subcomputer. The distribution for y needs two helpers, since it must produce one agent in P_1 and two in P_2 (5y was split into 4y + 1y). This way, once the input states are empty, the correct value is distributed to each subcomputer. Crucially, this input distribution can be fast as it is not reversible.
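The shape of these distribution transitions can be computed mechanically (our own helper function; subcomputers are numbered from 1, and state 0 is the reservoir):

```python
from collections import Counter

def distribution_transition(var, coeffs):
    """Build the input-distribution transition for variable `var`.
    coeffs[i] lists the signed powers of two into which subcomputer
    i+1's coefficient for `var` was split (e.g. 5 -> [4, 1]). The
    transition consumes the var-agent plus enough reservoir agents
    (state 0) and produces one agent per distributed power."""
    targets = [(p, i + 1) for i, powers in enumerate(coeffs) for p in powers]
    helpers = len(targets) - 1       # the var-agent itself covers one target
    lhs = Counter([var] + [0] * helpers)
    rhs = Counter(targets)
    return lhs, rhs

# y, 0, 0 -> (-2)_1, (4)_2, (1)_2 from the running example
lhs, rhs = distribution_transition('y', [[-2], [4, 1]])
assert lhs == Counter({'y': 1, 0: 2})
assert rhs == Counter({(-2, 1): 1, (4, 2): 1, (1, 2): 1})
```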

5. Add Extra Helpers.
In addition to all helpers from the subcomputers, add r − 1 more helpers to state 0. Intuitively, this allows the first input agent to be distributed. Because of our choice of d in threshold subcomputers, each subcomputer returns most agents back to state 0. More precisely, for each distribution the number of agents that do not get returned to 0 only increases by at most 1/s (per subcomputer). So in total only one agent is "consumed" per distribution, and enough agents are returned to 0 for the next distribution to occur. In our example, the number of agents that stay in each of the s = 2 subcomputers only increases by at most 1/2 per distribution. (In fact, remainder subcomputers return all distributed agents.)

6. Combine Output.

Note that we can still decide φ_i from the support of the states in the corresponding subcomputer P_i. We compute the output for φ by combining the outputs of the subcomputers P_1, ..., P_s according to B(φ_1, ..., φ_s). In our example, we set the output to 1 if and only if the output of P_1 or P_2 is 1.

Converting Population Computers to Population Protocols
In this section we prove Theorem 2. We proceed in four steps, which must be carried out in the given order. Section 7.1 converts any bounded computer P for double(φ) of size m into a binary bounded computer P_1 with O(m^2) states. Section 7.2 converts P_1 into a binary bounded computer P_2 with a marked-consensus output function (a notion defined in the section). Section 7.3 converts P_2 into a binary bounded computer P_3 for φ (not double(φ)) with a marked-consensus output function and no helpers. Section 7.4 shows that P_3 runs within O(n^3) interactions. Finally, we convert P_3 into a binary terminating (not necessarily bounded) computer P_4 with a normal consensus output and no helpers, also running within O(n^3) interactions.

Removing multiway transitions
We transform a bounded population computer with k-way transitions r → s such that |supp(r)| ≤ 2 into a binary bounded population computer. Let us first explain why the construction introduced in [9, Lemma 3], which works for arbitrary transitions r → s, is too slow. In [9], the 3-way transition t: q_1, q_2, q_3 → q′_1, q′_2, q′_3 is simulated by binary transitions t_1, t_2, t_3 plus a "revert" transition. Intuitively, the occurrence of t_1 indicates that two agents in q_1 and q_2 want to execute t, and are waiting for an agent in q_3. If the agent arrives, then all three execute t_2 t_3, which takes them to q′_1, q′_2, q′_3. Otherwise, the two agents must be able to return to q_1, q_2 to possibly execute other transitions. This is achieved by a "revert" transition that undoes t_1. The construction for a k-way transition has k − 2 such "revert" transitions. As in Example 6 and Example 7, these transitions make the final protocol very slow.
We present a gadget without "revert" transitions that works for k-way transitions r → s satisfying |supp(r)| ≤ 2. Figure 2 illustrates it, using Petri net notation, for the 5-way transition t: 3 · p, 2 · q → a, b, c, d, e; the gadget uses the states (p, 0), ..., (p, 3) and (q, 0), ..., (q, 2). Intuitively, an agent in (p, i) acts as representative for a group of i agents in state p. Agents in (p, 3) and (q, 2) commit to executing t by executing the binary transition ⟨commit⟩. After committing, they move to the states a, ..., e together with the other members of the group, who are "waiting" in the states (p, 0) and (q, 0). Note that ⟨commit⟩ is binary because of the restriction |supp(r)| ≤ 2 for multiway transitions.
To ensure correctness of the conversion, agents may also commit to a transition when they represent more than the required number of agents. In this case, the initiating agents commit to the transition and then elect representatives for the superfluous agents, before executing the transition. This requires additional intermediate states.
[11, Appendix C] formalises the gadget and proves its correctness and speed.
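The group-forming phase of the gadget is easy to simulate (our own sketch for the 5-way example t: 3 · p, 2 · q → a, b, c, d, e; the elaborations for superfluous agents are omitted):

```python
import random

def gadget_run(seed):
    """Simulate the group-forming phase for t: 3p, 2q -> a, b, c, d, e.
    An agent in (p, i) represents a group of i agents in state p:
       (p, i), (p, j) -> (p, i + j), (p, 0)   if i, j > 0 and i + j <= 3
       (q, i), (q, j) -> (q, i + j), (q, 0)   if i, j > 0 and i + j <= 2
    Once (p, 3) and (q, 2) both exist, <commit> fires and all five
    agents (the two representatives plus the waiting (., 0) agents)
    move to the output states a, ..., e."""
    rng = random.Random(seed)
    agents = [('p', 1)] * 3 + [('q', 1)] * 2
    while ('p', 3) not in agents or ('q', 2) not in agents:
        i, j = rng.sample(range(5), 2)
        (s1, c1), (s2, c2) = agents[i], agents[j]
        cap = 3 if s1 == 'p' else 2
        if s1 == s2 and c1 > 0 and c2 > 0 and c1 + c2 <= cap:
            agents[i], agents[j] = (s1, c1 + c2), (s1, 0)
    return ['a', 'b', 'c', 'd', 'e']

assert gadget_run(0) == ['a', 'b', 'c', 'd', 'e']
```

No "revert" transitions are needed: groups only grow, and commitment happens through a single binary interaction between the two representatives.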

Converting output functions to marked-consensus output functions
We convert a computer with an arbitrary output function into another one with a marked-consensus output function. An output function O is a marked-consensus output function if there are disjoint sets of states Q_0, Q_1 ⊆ Q such that for b ∈ {0, 1} we have O(S) := b whenever S ∩ Q_{1−b} = ∅ and S ∩ Q_b ≠ ∅, and O(S) := ⊥ otherwise. Intuitively, for every S ⊆ Q we have O(S) = 1 if all agents agree to avoid Q_0 (consensus), and at least one agent populates Q_1 (marked consensus). We only sketch the construction; a detailed description as well as a graphical example can be found in [11, Appendix D].
Our starting point is some bounded and binary computer P = (Q, δ, I, O, H), e.g. as constructed in Section 7.1. Let (G, E) be a Boolean circuit with only NAND gates computing the output function O. We simulate P by a computer P′ with a marked-consensus output and O(|Q| + |G|) states. This result allows us to bound the number of states of P′ by applying well-known results on the complexity of Boolean functions.
Intuitively, P ′ consists of two processes running asynchronously in parallel. The first one is (essentially, see below) the computer P itself. The second one is a gadget that simulates the execution of G on the support of the current configuration of P. Whenever P executes a transition, it raises a flag indicating that the gadget must be reset (for this, we duplicate each state q ∈ Q into two states (q, +) and (q, −), indicating whether the flag is raised or lowered). Crucially, P is bounded, and so it eventually performs a transition for the last time. This resets the gadget for the last time, after which the gadget simulates (G, E) on the support of the terminal configuration reached by P.
The gadget is designed to be operated by one state-helper for each q ∈ Q, with set of states Q_supp(q), and a gate-helper for each gate g ∈ G, with set of states Q_gate(g), defined as follows: Q_supp(q) := {q} × {0, 1, !}. These states indicate that q belongs/does not belong to the support of the current configuration (states (q, 0) and (q, 1)), or that the output has changed from 0 to 1 (state (q, !)). Q_gate(g) := {g} × {0, 1, ⊥}^3 for each gate g ∈ G, storing the current values of the two inputs of the gate and its output. Uninitialised values are stored as ⊥. Recall that a population computer must also remain correct for a larger number of helpers. This is ensured by letting all helpers populating one of these sets, say Q_supp(q), perform a leader election; whenever two helpers in states of Q_supp(q) meet, one of them becomes a non-leader, and a flag requesting a complete reset of the gadget is raised. All resets are carried out by a reset-helper with set of states Q_reset := {0, ..., |Q| + |G|}, initially in state 0. (Reset-helpers also carry out their own leader election!) Whenever a reset is triggered, the reset-helper contacts all other |Q| + |G| helpers in round-robin fashion, asking them to reset the computation.
Eventually the original protocol P has reached a terminal configuration with some support Q_term, each set Q_supp(q) and Q_gate(g) is populated by exactly one helper, and all previous resets are terminated. From this moment on, P never changes its configuration. The |Q| state-helpers detect the support Q_term of the terminal configuration by means of transitions that move them to the states Q_term × {1} and (Q \ Q_term) × {0}; the gate-helpers execute (G, E) on input Q_term by means of transitions that move them to the states describing the correct inputs and outputs for each gate. State-helpers use Q × {!} as intermediate states, indicating that the circuit must recompute its output.
It remains to choose the sets Q_0 and Q_1 of states for the marked-consensus output. We do it according to the output b of the output gate g_out ∈ G: Q_b is the set of states of Q_gate(g_out) corresponding to output b.

Removing helpers
We convert a bounded binary computer P deciding the predicate double(φ) over variables x_1, ..., x_k, x′_1, ..., x′_k into a computer P′ with no helpers deciding φ over variables x_1, ..., x_k. In [8], a protocol with helpers and set of states Q is converted into a protocol without helpers with states Q × Q. We sketch a better construction that avoids the quadratic blowup. A detailed description can be found in [11, Appendix E].
Let us give some intuition first. All agents of an initial configuration of P′ are in input states. P′ simulates P by liberating some of these agents and transforming them into helpers, without changing the output of the computation. For this, two agents in an input state x_i are allowed to interact, producing one agent in x′_i and one "liberated" agent, which can be used as a helper. This does not change the output of the computation, because double(φ)(x_1, ..., x_i − 2, ..., x_k, x′_1, ..., x′_i + 1, ..., x′_k) = double(φ)(x_1, ..., x_k, x′_1, ..., x′_k) holds by definition of double(φ). Figure 3 illustrates this idea (in the graphical Petri net notation of Section 3; initial states are highlighted). Assume P has input states x, y, x′, y′ and helpers H = q_1, q_2, q_3, q_4, as shown on the left-hand side. Assume further that P computes a predicate double(φ)(x, y, x′, y′). The computer P′ is shown on the right of the figure. The additional transitions liberate agents, and send them to the helper states H. Observe that the initial states of P′ are only x and y. Let us see why P′ decides φ(x, y). As the initial configuration of P′ for an input x, y puts no agents in x′, y′, the computer P′ produces the same output on input x, y as P on input x, y, 0, 0. Since P decides double(φ) and double(φ)(x, y, 0, 0) = φ(x, y) by the definition of double(φ), we are done. We make some remarks: P′ may liberate more agents than necessary to simulate the multiset H of helpers of P. This is not an issue, because by definition additional helpers do not change the output of the computation. If the input is too small, P′ cannot liberate enough agents to simulate H. Therefore, the new computer only works for inputs of size Ω(|H|) = Ω(|φ|). Even if the input is large enough, P′ might move agents out of input states before liberating enough helpers.
However, the computers of Section 6 can only do this if there are enough helpers in the reservoir state (see point 3. in Section 6.3). Therefore, they always generate enough helpers when the input is large enough.

An O(n^3) bound on the expected number of interactions
We show that the computer obtained after the previous conversion runs within O(n^3) interactions. We sketch the main ideas; the details are in [11, Appendix G]. We introduce potential functions that assign to every configuration a positive potential, with the property that executing any transition strictly decreases the potential. Intuitively, every transition "makes progress". We then prove two results: (1) under a mild condition, a computer has a potential function iff it is bounded, and (2) every binary computer with a potential function and no helpers, i.e. any bounded computer for which speed is defined, stabilises within O(n^3) interactions. This concludes the proof.
Observe that k-way transitions reduce the potential by k − 1, binary transitions by 1. At this point, we consider only binary computers, but this distinction becomes relevant for the refined speed analysis.
If a population computer has a potential function, then every run executes at most O(n) transitions, and so the computer is bounded. Applying Farkas' Lemma we can show that the converse holds for computers in which every state can be populated -a mild condition, since states that can never be populated can be deleted without changing the behaviour.
▶ Lemma 11. If P has a reachable configuration C q with C q (q) > 0 for each q ∈ Q, then P is bounded iff there is a potential function for P.
Consider now a binary computer with a potential function and no helpers. At every non-terminal configuration, at least one (binary) transition is enabled. The probability that two agents chosen uniformly at random enable this transition is Ω(1/n 2 ), and so a transition occurs within O(n 2 ) interactions. Since the computer has a potential function, every run executes at most O(n) transitions, and so the computer stabilises within O(n 3 ) interactions.
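The defining property of a potential function is easy to check mechanically. The sketch below (our own toy example: a combine-style computer with states 1, 2, 4, 0 and potential values chosen by hand, not the potential functions of the paper) verifies that every transition strictly decreases the total potential:

```python
from collections import Counter

def is_potential(phi, transitions):
    """phi maps each state to a non-negative number. It is a potential
    function if every transition r -> s strictly decreases the total
    potential sum_q phi(q) * C(q) of any configuration enabling it,
    i.e. if phi applied to r exceeds phi applied to s."""
    def val(ms):
        return sum(phi[q] * k for q, k in ms.items())
    return all(val(r) > val(s) for r, s in transitions)

# combine-style transitions with hand-picked potential values
phi = {1: 3, 2: 2, 4: 1, 0: 0}
transitions = [
    (Counter({1: 2}), Counter({2: 1, 0: 1})),   # 1, 1 -> 2, 0
    (Counter({2: 2}), Counter({4: 1, 0: 1})),   # 2, 2 -> 4, 0
]
assert is_potential(phi, transitions)
```

Since the potential of a configuration of n agents is O(n) and each transition decreases it by at least 1, every run of such a computer executes O(n) transitions.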
The final step to produce a population protocol is to translate computers with markedconsensus output function into computers with standard consensus output function, while preserving the number of interactions. For space reasons this construction is presented in [11, Appendix F].

Rapid Population Computers: Proving an O(n^2) Bound
We refine our running-time analysis to show that the population protocols we have constructed actually stabilise within O(n^2) interactions. We continue to use potential functions, as introduced in Section 7.4, but improve our analysis as follows: We introduce rapidly-decreasing potential functions. Intuitively, their existence shows that progress is not only possible, but also likely. We prove that they certify stabilisation within O(n^2) interactions. We introduce rapid population computers, as computers with rapidly-decreasing potential functions that also satisfy some technical conditions. We convert rapid computers into protocols with O(|φ|) states, and show that the computers of Section 6 are rapid.
In order to define rapidly-decreasing potential functions, we need a notion of "probability to execute a transition" that generalises to multiway transitions and is preserved by our conversions. At a configuration C of a protocol, the probability of executing a binary transition t = (p, q → p′, q′) is C(p)C(q)/n(n − 1). Intuitively, leaving out the normalisation factor 1/n(n − 1), the transition has "speed" C(p)C(q), proportional to the product of the number of agents in p and q. But for a multiway transition like q, q, p → r_1, r_2, r_3 the situation changes. If C(q) = 2, it does not matter how many agents are in p: the transition is always going to take Ω(n^2) interactions. We therefore define the speed of a transition as min{C(q), C(p)}^2 instead of C(q)C(p).
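This speed measure is straightforward to compute (our own sketch; we represent a transition by its left-hand side r, and treat any transition with more than two agents in r as multiway):

```python
from collections import Counter

def speed(C, r):
    """Speed of a transition with left-hand side r (|supp(r)| <= 2) at
    configuration C: the product C(p) * C(q) for a binary transition
    on two distinct states, and the square of the minimum count over
    supp(r) for a multiway transition."""
    if sum(r.values()) == 2 and len(r) == 2:
        p, q = r
        return C[p] * C[q]
    return min(C[q] for q in r) ** 2

C = Counter({'q': 2, 'p': 100})
assert speed(C, Counter({'q': 1, 'p': 1})) == 200   # binary: C(p) * C(q)
assert speed(C, Counter({'q': 2, 'p': 1})) == 4     # q, q, p -> ...: min{2, 100}^2
```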
For the remainder of this section, let P = (Q, δ, I, O, H) denote a population computer.
▶ Definition 13. Let Φ denote a potential function for P and let α ≥ 1. We say that Φ is α-rapidly decreasing at a configuration C if speed(C) ≥ (Φ(C) − Φ(C_term))^2 / α for all terminal configurations C_term reachable from C.
We have not been able to find potential functions for the computers of Section 6 that are rapidly decreasing at every reachable configuration, only at reachable configurations with sufficiently many helpers, defined below. Fortunately, that is enough for our purposes.
▶ Definition 14. C ∈ N^Q is well-initialised if C is reachable and C(I) + |H| ≤ (2/3) · n.
Observe that an initial configuration C can only be well-initialised if C(supp(H)) ∈ Ω(C(I)). We now define rapid population computers, and state the result of our improved analysis.