Towards Fully Automatic Distributed Lower Bounds
Abstract
In the past few years, a successful line of research has led to lower bounds for several fundamental local graph problems in the distributed setting. These results were obtained via a technique called round elimination. On a high level, the round elimination technique can be seen as a recursive application of a function R that takes as input a problem Π and outputs a problem R(Π) that is one round easier than Π. Applying this function recursively to concrete problems of interest can be highly nontrivial, which is one of the reasons that have made the technique difficult to approach. The contribution of our paper is threefold.
Firstly, we develop a new and fully automatic method for finding so-called fixed point relaxations under round elimination. The detection of a non-0-round-solvable fixed point relaxation of a problem Π immediately implies lower bounds of Ω(log n) and Ω(log log n) rounds for deterministic and randomized algorithms for Π, respectively.
Secondly, we show that this automatic method is indeed useful, by obtaining lower bounds for defective coloring problems. More precisely, as an application of our procedure, we show that the problem of coloring the nodes of a graph with 3 colors and defect at most a constant fraction of the maximum degree Δ requires Ω(log n) rounds for deterministic algorithms and Ω(log log n) rounds for randomized ones. Additionally, we provide a simplified proof for an existing defective coloring lower bound. We note that lower bounds for coloring problems are notoriously challenging to obtain, both in general and via the round elimination technique.
Both the first and (indirectly) the second contribution build on our third contribution: a new method to compute the one-round easier problem in the round elimination framework. This method heavily simplifies the usage of the round elimination technique, and in fact it has been successfully exploited in a recent work in order to prove quantum advantage in the distributed setting [STOC ’25].
Keywords and phrases:
round elimination, lower bounds, defective coloring
Copyright and License:
2012 ACM Subject Classification:
Theory of computation → Distributed algorithms
Funding:
This work was partially supported by the MUR (Italy) Department of Excellence 2023 - 2027 for GSSI, the European Union - NextGenerationEU under the Italian MUR National Innovation Ecosystem grant ECS00000041 - VITALITY – CUP: D13C21000430001, and the PNRR MIUR research project GAMING “Graph Algorithms and MinINg for Green agents” (PE0000013, CUP D13C24000430001).
Editor:
Dariusz R. Kowalski
Series and Publisher:
Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
In the standard setting of distributed graph algorithms, known as the LOCAL model [29, 35], the nodes of a graph G communicate over the edges of G in synchronous rounds. Initially, the nodes do not know anything about G (except for their own unique identifier and possibly some global parameters such as the number of nodes n or the maximum degree Δ), and at the end, each node must output its local part of the solution for the graph problem that needs to be solved. For example, if we intend to compute a vertex coloring of G, at the end, every node must output its own color in the final coloring. The time complexity of such a distributed algorithm is then measured as the number of rounds needed from the start until all nodes have terminated.
The study of the complexity of solving graph problems in the LOCAL model and in related distributed models has been a highly active area of research, with a variety of substantial results over the last years. Apart from very significant and insightful new algorithmic results for distributed graph problems (e.g., [20, 19, 26, 36, 21, 25, 24]), the last ten years in particular also brought astonishing progress on proving lower bounds for distributed graph problems in the LOCAL model (e.g., [17, 19, 3, 5]). Essentially all of this recent progress on lower bounds has been obtained by a technique known as round elimination. The technique works for a class of problems known as locally checkable problems [33, 16]. Informally, a locally checkable problem is a problem for which there exists a constant-time distributed algorithm satisfying the following: if a given solution is correct, the algorithm accepts on all nodes; otherwise, the algorithm rejects on at least one node. This class encompasses many of the most fundamental problems studied in the context of the LOCAL model, such as maximal matching, maximal independent set, and different variants of graph coloring.
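To make local checkability concrete, here is a small illustration of our own (not part of the paper): for maximal independent set, a node can verify correctness by inspecting only its immediate neighborhood, and a solution is correct if and only if every node accepts.

```python
def mis_checker(in_mis, neighbors_in_mis):
    """Radius-1 correctness check at a single node.

    in_mis: whether this node is in the set.
    neighbors_in_mis: membership flags of its neighbors.
    """
    if in_mis:
        # independence: no two adjacent nodes are both in the set
        return not any(neighbors_in_mis)
    # maximality: a node outside the set must have a neighbor inside
    return any(neighbors_in_mis)

def is_valid_mis(graph, mis):
    # A solution is correct iff the local checker accepts at every node.
    return all(mis_checker(v in mis, [u in mis for u in graph[v]])
               for v in graph)

# Path 0-1-2-3 as an adjacency list.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

For example, {1, 3} is accepted, while {0} is rejected (node 2 witnesses non-maximality) and {1, 2} is rejected (nodes 1 and 2 witness non-independence).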
Round Elimination.
On a very high level, round elimination works as follows. Given a problem Π provided in the proper language, the round elimination framework provides a way to mechanically construct a problem R(Π) that is exactly one round easier (under some mild assumptions). That is, if Π can be solved in T rounds, then R(Π) can be solved in T − 1 rounds (and vice versa). (Formally, round elimination has to be performed in a weaker version of the LOCAL model, which is known as the port numbering model. In the port numbering model, nodes do not have unique IDs, but they can distinguish their neighbors through different port numbers. Round elimination lower bounds in the port numbering model can then be lifted to lower bounds in the standard LOCAL model [5].) For proving a T-round lower bound on a problem Π, one then has to show that the problem obtained by applying R to Π for T − 1 times, or a relaxation of it, is not trivial, i.e., that it cannot be solved in 0 rounds.
In its modern form, round elimination was first used to show that the problems of computing a sinkless edge orientation or a Δ-vertex coloring require Ω(log log n) rounds with randomization and Ω(log n) rounds deterministically [17, 19]. (We remark that, although phrased differently, the classic proofs that 3-coloring a ring requires Ω(log* n) rounds [32, 29] can also be seen as round elimination proofs.) Subsequently, Brandt [16] showed that round elimination can be applied to essentially every locally checkable problem and that, if a problem Π is specified in the right language, the problem R(Π) can be computed in a fully automatic way. Automatic round elimination subsequently led to a plethora of new distributed lower bounds. We next list some of the highlights. In [3], it was shown that even in regular trees, computing a maximal matching requires Ω(min{Δ, log log n / log log log n}) rounds with randomized algorithms and Ω(min{Δ, log n / log log n}) rounds with deterministic algorithms. Previously, the best known lower bound as a function of Δ for this problem was only Ω(log Δ / log log Δ) [28]. By a simple reduction, the same lower bound as for maximal matching also holds for computing a maximal independent set (MIS). In later work, the same lower bound was also proven directly for the MIS problem on trees, and it was generalized in particular to the problems of computing ruling sets and of computing maximal matchings in hypergraphs, leading to tight (as a function of Δ) lower bounds for those problems [8, 4, 5, 7].
We emphasize that all the recent progress on developing new lower bounds for locally checkable problems in the LOCAL model has only been possible because the work of Brandt [16] describes an automatic and generic way to turn any locally checkable problem Π (given in the right formalism) into a locally checkable problem R(Π) that is exactly one round easier. Moreover, for much of the progress, it was crucial that there exists efficient software, as described by Olivetti in [34], that can be used to apply round elimination to concrete locally checkable problems.
Unfortunately, while round elimination has been extremely successful for proving many new lower bounds for locally checkable graph problems, the method has so far not been able to provide new lower bounds for many of the standard variants of distributed graph coloring and thus for some of the most important and most well-studied locally checkable problems. When applying round elimination to standard (Δ+1)-coloring and related graph coloring problems, the descriptions of the problems in the sequence obtained by iteratively applying R grow doubly exponentially in each round elimination step (i.e., with each application of R), and thus even the one round easier problem often becomes too complex to understand.
We are convinced that, in order to continue the present success story, further developing the existing automatic techniques will be indispensable, and the main objective of this paper is to provide more efficient and more powerful methods for finding distributed lower bounds in an automatic fashion.
Distributed Coloring.
As a concrete application, we aim to make progress towards obtaining lower bounds for distributed coloring problems. To achieve this, we consider the problem of computing a d-defective c-coloring. For two parameters d and c, a d-defective c-coloring of a graph G = (V, E) is a partition of V into c color classes so that every node v has at most d neighbors that have the same color as v. Such colorings have become an important tool in many recent distributed coloring algorithms [15, 12, 13, 11, 14, 27, 6, 10, 23]. In [23], it is also argued that further progress on defective coloring algorithms might be key towards obtaining faster distributed (Δ+1)-coloring algorithms, and proving hardness results on distributed defective coloring algorithms might therefore also provide insights into understanding the hardness of the standard (Δ+1)-coloring problem. To obtain proper colorings, defective colorings are commonly used as a subroutine in a recursive manner, and to obtain efficient coloring algorithms using few colors, it would be particularly convenient to have algorithms that efficiently compute defective colorings with c colors and defect only O(Δ/c). Such defective colorings always exist [30], and efficient distributed algorithms for computing such colorings would immediately lead to faster (Δ+1)-coloring algorithms and potentially also to faster Δ-coloring algorithms. In fact, a generalized variant of such defective colorings of line graphs has recently been used in a breakthrough result that obtains the first polylogarithmic-round algorithm for computing a (Δ+1)-edge coloring of a graph [6].
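The definition above translates directly into a local validity check. The following sketch (our own illustration, not code from the paper) checks a given coloring:

```python
def is_defective_coloring(graph, coloring, d, c):
    """Check that `coloring` uses at most c colors and that every
    node has at most d neighbors sharing its own color."""
    if len(set(coloring.values())) > c:
        return False
    return all(sum(1 for u in graph[v] if coloring[u] == coloring[v]) <= d
               for v in graph)

# Triangle: coloring all three nodes identically is 2-defective
# (every node has 2 same-colored neighbors) but not 1-defective.
K3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
SAME = {v: 0 for v in K3}
```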
In contrast, the best known algorithms for computing defective colorings with c colors and defect O(Δ/c) require time polynomial in Δ [11, 22, 14, 31]. For 2 colors, it is already known that computing (Δ − 2)-defective 2-colorings requires Ω(log n) rounds even in bounded-degree graphs [9]. This raises the important question of whether an increased number of colors can admit the desired efficient defective colorings with defect O(Δ/c). Already the case of c = 3 was wide open prior to our work and an important open problem in its own right: obtaining the desired efficient defective coloring algorithm for c = 3 would have fundamental consequences by improving the complexity of (Δ+1)-coloring (and of Δ-coloring if extendable to list defective colorings [23]), while proving a substantial lower bound for any such algorithm might pave the way for proving similar lower bounds for larger c in the future. As one of the main technical results of this paper, we show that computing ⌊Δ/3⌋-defective colorings (and in fact defective colorings with defect up to a constant fraction of Δ) with 3 colors requires Ω(log n) rounds. We conjecture that a similar result should also hold for more than 3 colors, and we hope that such a result can be proven by extending the techniques that we introduce in this paper.
1.1 Our Contributions and High-Level Ideas of Our Techniques
In the present paper, we take the task of automating round elimination and thus automating the search for distributed lower bounds one step further. In the following, we provide a high-level discussion of the contributions of the paper.
1.1.1 An Automatic Way of Generating Round Elimination Fixed Points
Chang, Kopelowitz, and Pettie [19] showed that in the LOCAL model, every locally checkable problem can either be solved deterministically in f(Δ) + O(log* n) rounds (for some function f) or has a deterministic Ω(log n) and a randomized Ω(log log n) lower bound. In the following, we call problems of the first type easy problems and problems of the second type hard problems.
Fixed Points Imply Hardness Results.
A particularly elegant way to prove that a problem is of the second type is through round elimination fixed points. A locally checkable problem Π is called a round elimination fixed point if R(Π) = Π, i.e., if the problem that is “one round easier” than Π is Π itself. We say that a problem Π is a non-trivial fixed point if Π is a round elimination fixed point that cannot be solved in 0 rounds. If a problem Π is a non-trivial fixed point, existing standard techniques directly imply that Π is a hard problem, i.e., that any deterministic algorithm to solve Π requires at least Ω(log n) rounds and every randomized such algorithm requires at least Ω(log log n) rounds (see, e.g., [5]). Moreover, we obtain the same lower bounds for Π if Π is not a fixed point itself but can be relaxed to a non-trivial fixed point Π'. In fact, while interesting problems exist that are non-trivial fixed points themselves (see, e.g., [17]), finding a non-trivial fixed point relaxation Π' for Π (which we may simply call a fixed point for Π) is a more common way to prove lower bounds for a given problem (see, e.g., [2, 5, 7]). Furthermore, as shown in [5, 7], surprisingly, fixed points can also be used to prove lower bounds on the Δ-dependency of easy problems, i.e., problems that can be solved in time f(Δ) + O(log* n).
Fixed Points Can Be Large.
In order to understand the distributed complexity of locally checkable problems, we therefore need methods to find non-trivial fixed points for such problems in case such fixed points exist. We argue that, similarly to performing and analyzing round elimination, finding new fixed points will in many cases also require some automated support. Note that in general, even for a relatively simple problem Π with a small description, the smallest fixed point relaxation of Π might be much more complex and have a much larger description than the original problem Π. Consider for example the Δ-coloring problem in Δ-regular graphs. While the problem itself can be described (for an introduction to the description of problems, see Section 1.1.3 or Section 3.2) with Δ different labels and Δ different node configurations (one for each possible color), the round elimination fixed point for Δ-coloring that has been described in [5] consists of a number of labels and node configurations that is exponential in Δ (and no smaller fixed point for Δ-coloring is known or suspected to exist). Finding fixed points for problems that are not as symmetric and not as well-behaved as Δ-coloring might quickly become infeasible when it has to be done by hand, even when using the support of existing software for performing single round elimination steps.
Our Contribution: a Procedure for Finding Fixed Points Automatically.
While the round elimination procedure is fully automatic in the sense that it gives a mechanical way to compute R(Π) as a function of Π, finding fixed point relaxations has so far been a manual process, in which one has to guess the right relaxation Π' of Π and then test whether Π' is indeed a non-trivial fixed point.
One of the major open questions in the field is whether one can automatically decide whether a given problem is easy or hard. Essentially, the only known way to prove that a problem is hard is to produce a non-trivial fixed point relaxation of it, and it is not even known whether all hard problems admit a non-trivial fixed point relaxation.
As our first main contribution, we make substantial progress on this question, by providing a method to automatically generate relaxations of a given locally checkable problem that are guaranteed to be fixed points under the round elimination framework. While there is no formal guarantee that the resulting problem is non-trivial (and hence that the procedure succeeded in giving a lower bound), this procedure is able to automatically derive all fixed point relaxations that have been manually obtained in the past. Moreover, we show that the procedure provides novel results.
Our Contribution: a First Simple Application of Our Procedure.
1.1.2 Lower Bounds for Defective Coloring Problems
Understanding the complexity of (Δ+1)-coloring is one of the major open questions in the field, and one of the very few techniques known to be able to prove lower bounds for local problems is round elimination. Unfortunately, (Δ+1)-coloring, and many other variants of coloring, behave very badly under round elimination, in the sense that the problem R(Π) constructed via round elimination is typically doubly exponentially larger than the original problem Π. Understanding defective colorings seems to be one of the main obstacles that we need to overcome in order to make progress on (Δ+1)-coloring, for different reasons:
-
While we still cannot understand the resulting problem R(Π) for these coloring problems, we can observe that it is some variant of coloring that includes, as a subproblem, variants of defective coloring. Hence, in order to prove lower bounds for (Δ+1)-coloring, we need to understand defective colorings first;
-
In [23], it has been argued that improving defective coloring algorithms might be the key for improving upper bounds for (Δ+1)-coloring.
For defective 2-coloring, hardness results are known [9]. However, such results do not say much about defective c-coloring for c ≥ 3, because defective 2-coloring is special, in the sense that this problem seems to be much harder than the case of 3 or more colors. In fact, even on graphs where each node has degree at least some large constant, even the seemingly simpler task of requiring each node of the graph to have at least two neighbors of a different color is hard. While this is not our main contribution regarding defective coloring, we show that our fixed point procedure is able to automatically obtain a non-trivial fixed point relaxation for defective 2-coloring.
The problem of computing a d-defective 3-coloring is much more interesting since, differently from defective 2-coloring, it is known to be easy for a large range of values of d. In our work, we make substantial progress on defective coloring, by proving that, for d up to a constant fraction of Δ, the d-defective 3-coloring problem is hard. Such a result is obtained by applying our fixed point procedure on defective 3-coloring. However, the obtained fixed point is so large that we needed to introduce an additional technique to let a computer verify that the resulting problem is indeed non-trivial. We believe that the three new techniques that we introduce to obtain our result on defective 3-coloring (namely, a new way of applying round elimination, a way to automatically compute fixed points, and a way to let computers verify that an entire family of problems is non-trivial) will pave the way to proving lower bounds for d-defective c-coloring for c > 3. In the following, we give more details on our contributions on defective coloring.
Our Contribution: Defective 2-Coloring.
Not many bounds on the complexity of defective colorings are known (we discuss known bounds in the full version of this paper). An exception is the case of defective colorings with 2 colors, which is well understood. By computing an MIS (which can be done in O(Δ + log* n) rounds [15]) and assigning the MIS nodes one of the two colors and the remaining nodes the other color, one obtains a (Δ − 1)-defective 2-coloring of the graph. Interestingly, the problem becomes hard if we try to go just one step further: in [9], it was shown that computing a (Δ − 2)-defective 2-coloring is a hard problem. This result has been shown via a reduction from the hardness of sinkless orientation. However, this reduction is based on the construction of virtual graphs on which the defective coloring algorithm is executed in order to obtain a sinkless orientation on the original graph, and in particular the lower bounds are not proved by providing a non-trivial fixed point. As a second application of our fixed point generation method, we show the following.
There exists a non-trivial fixed point relaxation for (Δ − 2)-defective 2-coloring.
This result is significant in light of the fundamental open question stated in [18, 5] asking whether, for every locally checkable problem that has a deterministic Ω(log n) and a randomized Ω(log log n) lower bound, such a lower bound can be proven via a round elimination fixed point. The (Δ − 2)-defective 2-coloring problem was one of only a small number of such problems for which previously no fixed point lower bound proof was known.
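The MIS-based two-coloring described above can be sketched as follows. This is our own illustration, with a sequential greedy MIS standing in for a distributed MIS algorithm; the point is the defect guarantee, not the round complexity:

```python
def greedy_mis(graph):
    # Sequential stand-in for a distributed MIS algorithm.
    mis = set()
    for v in graph:
        if not any(u in mis for u in graph[v]):
            mis.add(v)
    return mis

def two_color_via_mis(graph):
    """MIS nodes get color 0 and have defect 0 (the set is
    independent); every other node has at least one neighbor in the
    MIS, hence at most deg(v) - 1 <= Delta - 1 neighbors of its own
    color."""
    mis = greedy_mis(graph)
    return {v: 0 if v in mis else 1 for v in graph}

# 5-cycle (Delta = 2): the resulting 2-coloring has defect <= 1.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
```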
Our Contribution: Defective 3-Coloring.
As a main application of our automatic fixed point procedure, we study the defective coloring problem with 3 colors. From the arbdefective coloring lower bound of [5], it is known that d-defective 3-coloring is hard if d is below some (small) constant fraction of Δ. In [9], it was further shown that if d is above a sufficiently large constant fraction of Δ, then d-defective 3-coloring can be solved in O(log* n) rounds. By using our fixed point method, we manage to partially close this gap by proving the following statement.
For every d up to a certain constant fraction of Δ, the d-defective 3-coloring problem is a hard problem, i.e., it requires Ω(log n) rounds deterministically and Ω(log log n) rounds with randomization.
This in particular implies that there is no ⌊Δ/3⌋-defective 3-coloring algorithm that beats those time lower bounds, thereby ruling out the possibility of using defective 3-coloring as an approach for attacking Δ-coloring and (Δ+1)-coloring in the manner outlined before Section 1.1.
We note that the fixed point that we automatically generate for this problem is highly non-trivial, and that manually proving that the fixed point that we provide is indeed a fixed point would require performing a case analysis over hundreds of cases. For this reason, we do not manually prove that the fixed point that we provide is indeed a fixed point. Instead, we provide a way to automate this process, by reducing the problem of determining whether a problem is a fixed point to the problem of proving that certain systems of inequalities have no solution. The remaining task of showing that said systems have no solution can be performed automatically via computer tools. This automation process provides a partial answer to Open Question 9 in [5].
1.1.3 A More Efficient Method for Performing Round Elimination
Another major contribution of our work is a new way of applying round elimination. In particular, we provide a new procedure for computing the problem R(Π). We would like to point out that, after our work appeared online, our procedure has already proved to be extremely helpful for writing proofs based on round elimination; in fact, it has been used to prove quantum advantage in the distributed setting [1]. We now provide more details on this procedure and on how we modified it to obtain the fixed point procedure mentioned in Section 1.1.1.
As a first step, we provide a novel way of computing a locally checkable problem R(Π) that is exactly one round easier than Π. Then, we show that, by applying this procedure in a slightly modified way, instead of obtaining the problem R(Π), we obtain some problem Π' that is guaranteed to be a fixed point relaxation of Π. While in some cases the obtained problem may be solvable in 0 rounds (indeed, this must be the case when applying the procedure on an easy problem), the results presented in Section 1.1.2 are obtained by proving that the fixed points that we get by applying the procedure on defective colorings are non-trivial.
While our new procedure for applying the round elimination technique has applications in finding fixed points, it is also interesting on its own. In order to explain why, we first highlight the main issue of the standard way of applying round elimination. While, for a given locally checkable problem Π, the framework of Brandt [16] gives a fully automatic way of computing a locally checkable problem R(Π) that is exactly one round easier than Π, this computation is in general not computationally efficient. To illustrate why, we somewhat informally sketch how round elimination works (for a formal description we refer to Section 3.3).
How Round Elimination Works.
For the automatic round elimination framework, a locally checkable problem on a Δ-regular graph G is formalized on the bipartite incidence graph H between the nodes and the edges of G. (More generally, round elimination can be defined on biregular bipartite graphs or hypergraphs; see Section 3.) That is, H is obtained by adding an additional node in the middle of each edge of G. Each edge of G is thus split into two half-edges. A solution to a locally checkable problem is given by an assignment of labels from a finite alphabet Σ to all edges of H (i.e., to each half-edge of G). The validity of a solution is given by a set of allowed node and edge configurations, where a node configuration is a multiset of labels of size Δ and an edge configuration is a multiset of labels of size 2. One step of round elimination on G is done by performing two steps of round elimination on H (note that one round on G corresponds to two rounds on H). When starting from a node-centric problem Π (i.e., a problem where the nodes of H corresponding to nodes of G assign the labels to their incident half-edges), the first step transforms Π into an edge-centric problem Π_1 that is exactly one round easier on H, and the second step transforms Π_1 into a node-centric problem R(Π) that is one round easier than Π_1 on H and thus one round easier than Π on G. The label set of Π_1 is the power set of Σ, and the label set of R(Π) is the power set of the label set of Π_1. The allowed edge configurations of Π_1 are, roughly speaking, the multisets {A, B} of labels such that for all a ∈ A and b ∈ B, {a, b} is an allowed edge configuration of Π. (In the formally precise definition of the set of edge configurations of Π_1 provided in Section 3.3, we will refine this definition slightly.) The allowed node configurations of Π_1 are all the multisets {A_1, …, A_Δ} of labels (that appear in some allowed edge configuration of Π_1) such that there exists an allowed node configuration {a_1, …, a_Δ} of Π with a_i ∈ A_i for all i. In the second step, R(Π) is obtained in the same way from Π_1, but with the roles of nodes and edges exchanged. That is, in the second step, the “for all” quantifier is applied to the allowed node configurations and the “exists” quantifier is applied to the allowed edge configurations (of Π_1).
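The two quantifier steps just described can be sketched in code. The following is our own naive illustration of the rough description above: it implements neither the footnoted refinement of the edge configurations nor standard simplifications such as discarding dominated label sets. Given the configurations of the side that receives the universal quantifier and of the side that receives the existential quantifier, it computes the configurations of the new problem, whose labels are nonempty subsets of the old label set.

```python
from itertools import chain, combinations, combinations_with_replacement, product

def canon(config):
    """Canonical (order-independent) form of a multiset of labels."""
    key = lambda x: tuple(sorted(x)) if isinstance(x, frozenset) else (x,)
    return tuple(sorted(config, key=key))

def half_step(labels, forall_configs, exists_configs, k_forall, k_exists):
    forall_ok = {canon(c) for c in forall_configs}
    exists_ok = {canon(c) for c in exists_configs}
    # Labels of the new problem: nonempty subsets of the old labels.
    sets = [frozenset(s) for r in range(1, len(labels) + 1)
            for s in combinations(sorted(labels), r)]
    # Universal side: EVERY choice of one old label per set is allowed.
    new_forall = {canon(c)
                  for c in combinations_with_replacement(sets, k_forall)
                  if all(canon(ch) in forall_ok for ch in product(*c))}
    # Only labels appearing on the universal side may be used further.
    used = sorted(set(chain.from_iterable(new_forall)), key=sorted)
    # Existential side: SOME choice of one old label per set is allowed.
    new_exists = {canon(c)
                  for c in combinations_with_replacement(used, k_exists)
                  if any(canon(ch) in exists_ok for ch in product(*c))}
    return new_forall, new_exists

# Toy example: proper 2-coloring on a cycle (node arity 2, edge arity 2),
# labels A and B, node configurations {A,A} and {B,B}, edge
# configuration {A,B}. In the first half-step, the universal quantifier
# acts on the edge side and the existential one on the node side.
A, B = frozenset('A'), frozenset('B')
nf, ne = half_step(['A', 'B'], forall_configs=[('A', 'B')],
                   exists_configs=[('A', 'A'), ('B', 'B')],
                   k_forall=2, k_exists=2)
```

On this toy instance, the only surviving edge configuration of the intermediate problem is {{A}, {B}}, and its node configurations are {{A}, {A}} and {{B}, {B}}.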
The Computationally Expensive Part.
Note that, from a computational point of view, it is mainly the application of the “for all” quantifier, on the edge side when going from the original problem to the intermediate problem and, even more importantly, on the node side when going from the intermediate problem to the new problem, that is challenging. When implemented naively, one has to iterate over all possible size-2 multisets of the new labels in the first step and over all possible size-Δ multisets of the new labels in the second step. While in general the problem that is one round easier than the original problem on G can be doubly exponentially larger than the original problem, for interesting problems this is often not the case. For such more well-behaved problems, the “for all” case can potentially be computed in a much more efficient way.
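To make the naive cost concrete (our own back-of-the-envelope count): write Σ for the label set of the original problem and Σ_1 for the label set of the intermediate problem. In the second step, the candidate labels are the nonempty subsets of Σ_1, so by a standard stars-and-bars count the naive enumeration ranges over

\[
  \binom{N + \Delta - 1}{\Delta} \quad\text{candidate node configurations, where } N = 2^{|\Sigma_1|} - 1,
\]

and |Σ_1| can itself be as large as 2^{|Σ|} − 1; this is exactly the doubly exponential blow-up mentioned above.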
Our Contribution.
As our final contribution, we give a new and elegant way to perform the application of the “for all” quantifier in round elimination. The method makes use of the fact that often the node and edge configurations of a problem can be represented by a relatively small number of condensed configurations. A condensed node or edge configuration is a multiset (S_1, …, S_k) (where k = Δ for nodes and k = 2 for edges) of sets of labels, representing the set of all configurations (l_1, …, l_k) for which l_i ∈ S_i for all i ∈ {1, …, k}. We prove that the “for all” part of round elimination can be performed by a simple process that consists of steps of the following kind. In each step, we take two condensed configurations of the current problem and we combine those condensed configurations in some way to generate new condensed configurations. We then remove redundant configurations and continue until such a step cannot generate any new condensed configurations. In the end, each condensed configuration is interpreted as a multiset of labels of the new problem. We formally define the process and prove its correctness in Section 4.
Informally, we prove that each configuration of the resulting problem can be described as a binary tree, where leaf nodes are condensed configurations of the original problem, and each internal node of the tree is the configuration obtained by combining its two children.
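As an illustration of the terminology (our own sketch, not code from the paper): a condensed configuration compactly represents the set of configurations obtained by picking one label from each of its sets.

```python
from itertools import product

def expand(condensed):
    """Enumerate all configurations represented by a condensed
    configuration (S_1, ..., S_k): every (l_1, ..., l_k) with
    l_i in S_i, returned as canonical sorted tuples (multisets)."""
    return {tuple(sorted(choice)) for choice in product(*condensed)}
```

For example, on the edge side (k = 2), the condensed configuration ({A}, {B, C}) represents the two configurations {A, B} and {A, C}, while ({A, B}, {A, B}) represents the three configurations {A, A}, {A, B}, and {B, B}.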
Since our new procedure is mainly used as a tool for obtaining fixed points, we do not formally state the benefits of this new procedure. However, we informally highlight the following:
-
The new procedure avoids the cost of enumerating all possible size-Δ multisets of labels in the second step, and its running time only depends on the number of input configurations, the number of output configurations, and the height of the aforementioned trees. Such trees have height at most linear in the number of output configurations, and we observe that, for many natural problems, the height is much smaller. We thus obtain that, for many problems of interest, the running time of the new procedure is output-sensitive.
-
Thanks to the new procedure, we obtain that, in order to check whether a problem is a fixed point, it is sufficient to check that the combination of pairs of condensed configurations does not create new configurations. This not only drastically reduces the time complexity of checking whether a problem is a fixed point, but also makes it much easier to prove by hand that a problem is a fixed point: it is sufficient to consider two configurations at a time, instead of going through an exponential number of cases. We point out that, even if some friendly oracle gave us the fixed point for defective 3-coloring (which we present in the full version of this paper), we believe that, without exploiting this new procedure, proving that such a problem is indeed a fixed point would not have been possible.
2 Road Map
Further related work.
In the full version of this paper, we discuss further related work. More in detail, we discuss existing fixed points, the surprising fact that fixed points can be used to prove lower bounds as a function of Δ, and defective coloring.
Preliminaries.
In Section 3, we provide some preliminaries. We first define the model of computation and the language that we use to formally describe problems. Then, we describe the round elimination framework.
A new way of applying round elimination.
On a high level, round elimination allows us to start from a problem Π and to compute a problem R(Π) that, under some assumptions, is exactly one round easier (in the distributed setting) than Π. As will become clear in Section 3, computing R(Π) as a function of Π can be a tricky process. In Section 4, we provide a novel and simplified way to compute R(Π) as a function of Π, and in the full version of this paper we prove its correctness.
Fixed point generation.
A problem Π' is a non-trivial fixed point relaxation of Π if it satisfies the following: (i) Π' can be solved in 0 rounds if we are given a solution for Π; (ii) Π' cannot be solved in 0 rounds in the so-called port numbering model (see Section 3 for the definition of this model); (iii) by applying round elimination on Π', we obtain Π' itself.
It is known from prior work (see Theorem 1) that, if there exists a non-trivial fixed point relaxation for a problem Π, then Π requires Ω(log n) rounds for deterministic algorithms and Ω(log log n) rounds for randomized ones. In the full version of this paper, we provide an automatic way to obtain non-trivial fixed point relaxations. More in detail, we provide a procedure that takes as input a problem Π and an object D (called a diagram), and produces a problem that is always guaranteed to be a fixed point. Whether such a fixed point is non-trivial depends on Π and on the choice of D.
Selecting the right diagram.
As mentioned, the choice of the diagram may affect the triviality of the obtained fixed point. In the full version of this paper, we first provide a generic way to construct a diagram as a function of Π, which we call the default diagram. Then, we show possible ways to modify the default diagram in the case in which the fixed point obtained with the default diagram is a trivial one.
An alternative proof for the hardness of Δ-coloring.
In the full version of this paper, we show a first application of our fixed point procedure, by providing a non-trivial fixed point relaxation for the Δ-coloring problem. Such a fixed point was already shown in [5], but here we give a much simpler proof. This is not the main contribution of our work; its main purpose is to warm up the reader for what comes later.
An alternative proof for the hardness of defective $2$-coloring.
In the full version of this paper, we show another application of our fixed point procedure, by providing a non-trivial fixed point relaxation for the defective $2$-coloring problem. This is one of the few problems for which an $\Omega(\log n)$ lower bound is known by prior work [9], but a non-trivial fixed point relaxation for this problem was unknown. Whether a non-trivial fixed point relaxation exists for all problems that require $\Omega(\log n)$ deterministic rounds is one of the major open questions about round elimination, and hence we make progress in understanding it. Again, this result is not the main contribution of our work; its main purpose is to prepare the reader for what comes next.
Defective coloring.
In the full version of this paper, we use our fixed point procedure to show a lower bound for defective coloring. While the warm-up proofs require a relatively short case analysis, the proof for our main defective coloring lower bound requires analyzing hundreds of cases. For this reason, we prove that such a case analysis can be performed automatically by using computer tools. In particular, we reduce the task of checking whether a given problem is the result of applying our fixed point procedure to the task of proving that all systems of inequalities belonging to a certain finite set have no solution, which can be checked automatically via computer tools.
Open questions.
We present some open questions in the full version of this paper.
3 Preliminaries
3.1 The Model
The computational model that we consider is the standard $\mathsf{LOCAL}$ model of distributed computing [29, 35], where the nodes of a graph $G = (V, E)$ communicate over the edges of $G$. More precisely, time is divided into synchronous rounds, and in each round each node can send an arbitrarily large message to each neighbor. Moreover, between sending messages, nodes can perform any internal computation on the information they have gathered so far. In the beginning of the computation, each node $v$ is aware of its own degree $\deg(v)$, and has an internal ordering of its incident edges, represented by the ports $1, \dots, \deg(v)$ being assigned bijectively to $v$'s incident edges. We also assume that each node is aware of the number $n$ of nodes and the maximum degree $\Delta$ of the input graph. As we will prove lower bounds in this work, this assumption makes our results only stronger. Moreover, each node is equipped with some symmetry-breaking information to avoid trivial impossibilities: in the case of deterministic algorithms, each node is assigned some globally unique ID of length $O(\log n)$ bits; in the case of randomized algorithms, each node instead has access to an unlimited amount of private random bits. Each node executes the same algorithm that governs which messages a node sends (depending on the accumulated knowledge of the node) and what the node outputs at the end of the computation. Each node has to terminate at some point and then provide a local output; all local outputs together form the global solution to the problem. The (round or time) complexity of a distributed algorithm is the number of rounds until the last node terminates. In the randomized setting, as usual, the algorithms are required to be Monte Carlo algorithms that produce a correct solution with high probability, i.e., with probability at least $1 - 1/n$.
While the lower bounds we prove hold in the $\mathsf{LOCAL}$ model, for technical reasons we will also make use of the port numbering model along the way. The (deterministic) port numbering model is the same as the deterministic $\mathsf{LOCAL}$ model apart from two differences:
1. No symmetry-breaking information is provided, i.e., nodes are not equipped with IDs.
2. For each hyperedge $e$, a total order on the set of incident nodes is provided (which can be formalized via a bijection between this node set and the set $\{1, \dots, \operatorname{rank}(e)\}$, where $\operatorname{rank}(e)$ denotes the number of nodes contained in $e$).
The second difference can be seen as an analog (on the hyperedge side) of the port numbers via which the nodes can distinguish between incident hyperedges.
3.2 Problems
The problems we study in this work fall into the class of locally checkable problems. Locally checkable problems are problems that can be defined via local constraints and encompass the vast majority of problems studied in the $\mathsf{LOCAL}$ model. A modern formalism to define these problems is given by the so-called black-white formalism that we will also use in this paper. In fact, as we will see, this formalism captures locally checkable problems not only on graphs, but more generally on hypergraphs (where we will denote the maximum number of nodes in a hyperedge by $r$). In the full version of this paper we provide an example illustrating (some of) the definitions provided in this section.
The black-white formalism.
In the black-white formalism, a locally checkable problem is given as a triple $\Pi = (\Sigma, \mathcal{N}, \mathcal{E})$. Here, $\Sigma$ is a finite set of elements, called labels, and $\mathcal{N} = (\mathcal{N}_i)_{i \geq 1}$ and $\mathcal{E} = (\mathcal{E}_i)_{i \geq 1}$, where each $\mathcal{N}_i$ and $\mathcal{E}_i$ is a collection of multisets of cardinality $i$ with labels from $\Sigma$. We call $\mathcal{N}$ the node constraint of $\Pi$ and $\mathcal{E}$ the edge constraint of $\Pi$. On a hypergraph, a correct solution for $\Pi$ is an assignment of labels from $\Sigma$ to the incident node-hyperedge pairs such that, for each node $v$, the multiset of labels corresponding to $v$ is contained in $\mathcal{N}_{\deg(v)}$, and analogously for hyperedges w.r.t. the respective $\mathcal{E}_i$. More formally, let $P$ denote the set of pairs $(v, e)$ where $e$ is a hyperedge incident to $v$. A correct solution for $\Pi$ on a hypergraph $H = (V, E)$ is a mapping $\varphi \colon P \to \Sigma$ such that, for each $v \in V$, we have $\{ \varphi(v, e) \mid (v, e) \in P \} \in \mathcal{N}_{\deg(v)}$, and, for each $e \in E$, we have $\{ \varphi(v, e) \mid (v, e) \in P \} \in \mathcal{E}_{\operatorname{rank}(e)}$. Here, the rank of a hyperedge $e$ is the number of nodes contained in $e$, and the displayed sets are to be understood as multisets.
When solving a locally checkable problem in the distributed setting, each node $v$ has to output one label for each “incident” node-hyperedge pair $(v, e) \in P$ such that the induced global solution is correct. While the improvements for the general round elimination technique (discussed below) that we will obtain in this work apply to the general hypergraph setting, for the results about concrete problems that we provide we can restrict attention to the special case of graphs. In this special case, each hyperedge is of rank $2$, and consequently we will replace the edge constraint by a collection of multisets of cardinality $2$. Moreover, to simplify notation, in this case, we will set $r = 2$.
We remark that besides providing a formalism for graphs by considering them as a special case of hypergraphs, the black-white formalism provides a (different) way to encode and study problems on bipartite graphs, by identifying the “black” nodes in the bipartition with the nodes in the above formalism, and the “white” nodes with the hyperedges. This relation to bipartite graphs is also where the name “black-white formalism” comes from.
As can be observed, the definition of the problems in this formalism depends on $\Delta$ (and $r$), which provides the power to also describe important problems like $\Delta$-coloring in this formalism. If we are to be very precise, in this formalism each problem is a collection of problems indexed by $\Delta$ (and, if considered on hypergraphs, $r$). Throughout the paper, we implicitly assume that some (arbitrary) $\Delta$ (and, if required, some $r$) is fixed. Note that this does not impact the generality of our results.
Finally, we remark that, for simplicity, we consider two locally checkable problems given in the black-white formalism as identical if one can be obtained from the other by renaming the labels used to describe the latter.
Configurations.
We will use the term configuration to refer to a multiset of labels, and write it in either of the two equivalent forms $\ell_1\, \ell_2\, \cdots\, \ell_k$ and $\ell_1^{m_1}\, \ell_2^{m_2}\, \cdots\, \ell_t^{m_t}$ (where the exponents denote multiplicities). Note that the order of the $\ell_i$ does not matter (also in the second form): all configurations that can be obtained from a configuration by reordering are considered to be the same configuration. When referring to the multiset of labels assigned to the pairs incident to a fixed node $v$, we will use the term node configuration; when referring to the multiset of labels assigned to the pairs corresponding to a fixed (hyper)edge $e$, we will use the term edge configuration. Moreover, for simplicity we may slightly abuse notation by writing $L_1\, L_2\, \cdots\, L_k$ if $L_1, \dots, L_k$ are sets containing the labels $\ell_1, \dots, \ell_k$, respectively.
It will be convenient to refer to certain collections of configurations in a condensed manner. A condensed configuration is a configuration of sets of labels. The condensed configuration $L_1\, L_2\, \cdots\, L_k$ is to be understood as the set of all configurations $\ell_1\, \ell_2\, \cdots\, \ell_k$ with $\ell_i \in L_i$ for each $i$ (though we will also consider the condensed configuration as a configuration of sets when convenient). To indicate that a configuration of sets represents a condensed configuration, we will often write each set in the configuration in the form $[\ell_1\, \ell_2\, \cdots]$ (unless the set only contains one element $\ell$, in which case we will simply write the set as $\ell$).
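A condensed configuration can be unpacked mechanically; the following hedged sketch (our own helper, with sets of labels represented as Python sets and multisets as sorted tuples) expands a condensed configuration into the set of plain configurations it represents.

```python
from itertools import product

def expand(condensed):
    """All (non-condensed) configurations represented by a condensed
    configuration, i.e., all ways of picking one label per set; each
    resulting multiset is normalized to a sorted tuple."""
    return {tuple(sorted(pick)) for pick in product(*condensed)}
```

Note that normalizing to sorted tuples implements the convention that reorderings denote the same configuration.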
Diagrams.
A useful way of capturing certain aspects of problems is via so-called diagrams. A diagram is nothing else than a directed acyclic graph $D = (V_D, E_D)$ with node set $V_D$ and edge set $E_D$. The edge diagram of a problem $\Pi = (\Sigma, \mathcal{N}, \mathcal{E})$ is the diagram obtained by setting $V_D = \Sigma$ and defining $E_D$ as the set of those directed edges $(\ell, \ell')$ that satisfy $\ell \neq \ell'$ and, for every configuration $\ell_1\, \cdots\, \ell_r \in \mathcal{E}$ with $\ell_i = \ell$ for some $i$, also $\ell_1\, \cdots\, \ell_{i-1}\, \ell'\, \ell_{i+1}\, \cdots\, \ell_r \in \mathcal{E}$. When displaying a diagram, we often omit arrows that can be obtained as the composition of displayed arrows. We call a subset $S \subseteq V_D$ right-closed (w.r.t. $D$) if, for any edge $(\ell, \ell') \in E_D$, $\ell \in S$ implies $\ell' \in S$.
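Assuming edge configurations are stored as sorted tuples over a label set, the edge diagram and the right-closedness check can be sketched as follows (an illustration of the definitions above in our own code, not the paper's tooling).

```python
def edge_diagram(sigma, E):
    """Arrows (a, b) such that replacing a by b at any single position of
    any configuration of E yields again a configuration of E.
    Configurations in E are sorted tuples (multisets)."""
    arrows = set()
    for a in sigma:
        for b in sigma:
            if a == b:
                continue
            if all(tuple(sorted(conf[:i] + (b,) + conf[i + 1:])) in E
                   for conf in E
                   for i, lab in enumerate(conf) if lab == a):
                arrows.add((a, b))
    return arrows

def is_right_closed(S, arrows):
    """S is right-closed if every arrow starting in S also ends in S."""
    return all(b in S for (a, b) in arrows if a in S)
```

For instance, for $\Sigma = \{A, B, C\}$ and edge configurations $\{AA, AB, AC, BC\}$, both $B$ and $C$ can always be replaced by $A$, so the diagram has exactly the arrows $B \to A$ and $C \to A$, and $\{A\}$ is right-closed.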
3.3 The Round Elimination Technique
In this section, we give a formal introduction to round elimination. As some of the definitions provided in this section are fairly technical, the reader is encouraged to consult the illustrating example provided in the full version of this paper alongside reading the definitions.
For technical reasons, round elimination requires the considered input (hyper)graphs to be regular (and uniform). As such, we will assume throughout the paper that every node of the input (hyper)graph has the same degree $\Delta$ and every (hyper)edge has the same rank $r$ (which, in the case of graphs, is simply $2$). This also simplifies the representation of locally checkable problems $\Pi = (\Sigma, \mathcal{N}, \mathcal{E})$: now we can assume that $\mathcal{N}$ and $\mathcal{E}$ are collections of multisets of cardinalities $\Delta$ and $r$, respectively, instead of sequences of similar collections. Note that, as we will prove lower bounds in this work, the inherent restriction to regular graphs makes our results only stronger.
$\mathcal{R}$ and $\overline{\mathcal{R}}$.
At the heart of the round elimination technique lie the round elimination operators $\mathcal{R}$ and $\overline{\mathcal{R}}$, which are functions that take a locally checkable problem in the black-white formalism as input and return such a problem. More precisely, for a locally checkable problem $\Pi = (\Sigma, \mathcal{N}, \mathcal{E})$, the locally checkable problem $\mathcal{R}(\Pi) = (\Sigma', \mathcal{N}', \mathcal{E}')$ is defined as follows.
The label set of $\mathcal{R}(\Pi)$ is simply the set of non-empty subsets of $\Sigma$, i.e., $\Sigma' = 2^{\Sigma} \setminus \{\emptyset\}$. For the definition of the edge constraint of $\mathcal{R}(\Pi)$, we need the notion of a maximal configuration. Let $\mathcal{C}$ be a collection of configurations of sets of labels. Then, a configuration $L_1\, \cdots\, L_k \in \mathcal{C}$ is maximal (in $\mathcal{C}$) if there is no configuration $L'_1\, \cdots\, L'_k \in \mathcal{C}$ (of the same length) such that there exists a bijection $\sigma \colon \{1, \dots, k\} \to \{1, \dots, k\}$ satisfying $L_i \subseteq L'_{\sigma(i)}$ for all $i$ and $L_i \subsetneq L'_{\sigma(i)}$ for at least one $i$. In other words, a configuration of sets is maximal if no other configuration in the considered configuration space can be reached by enlarging (some of) the sets (and reordering the sets).
Now we can define $\mathcal{E}'$ as follows. Let $\mathcal{C}$ denote the collection of all configurations $L_1\, \cdots\, L_r$ such that $L_i \in \Sigma'$ for all $i$ and, for all choices of labels $\ell_1 \in L_1, \dots, \ell_r \in L_r$, we have $\ell_1\, \cdots\, \ell_r \in \mathcal{E}$. Then, $\mathcal{E}'$ is obtained from $\mathcal{C}$ by removing all configurations that are not maximal in $\mathcal{C}$. Finally, the node constraint $\mathcal{N}'$ of $\mathcal{R}(\Pi)$ is defined as the collection of all configurations $L_1\, \cdots\, L_\Delta$ such that each $L_i$ appears in at least one configuration from $\mathcal{E}'$ and there exists a choice of labels $\ell_1 \in L_1, \dots, \ell_\Delta \in L_\Delta$ satisfying $\ell_1\, \cdots\, \ell_\Delta \in \mathcal{N}$.
The problem $\overline{\mathcal{R}}(\Pi)$ is defined dually to $\mathcal{R}(\Pi)$, where the roles of nodes and hyperedges are reversed. More precisely, we have the following. As before, the label set is $2^{\Sigma} \setminus \{\emptyset\}$. The node constraint of $\overline{\mathcal{R}}(\Pi)$ is the collection of maximal configurations $L_1\, \cdots\, L_\Delta$ such that each $L_i$ is a non-empty subset of $\Sigma$ and, for all choices of labels $\ell_1 \in L_1, \dots, \ell_\Delta \in L_\Delta$, we have $\ell_1\, \cdots\, \ell_\Delta \in \mathcal{N}$. The edge constraint of $\overline{\mathcal{R}}(\Pi)$ is the collection of all configurations $L_1\, \cdots\, L_r$ such that each $L_i$ appears in at least one configuration from the node constraint of $\overline{\mathcal{R}}(\Pi)$ and there exists a choice of labels $\ell_1 \in L_1, \dots, \ell_r \in L_r$ satisfying $\ell_1\, \cdots\, \ell_r \in \mathcal{E}$.
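As a sanity check of the definitions, here is a hedged sketch (our own code; all function names are ours) that computes the edge constraint of $\mathcal{R}(\Pi)$ directly from the definition, i.e., by brute force over all configurations of non-empty subsets of $\Sigma$. This naive enumeration is exactly what Section 4 aims to avoid.

```python
from itertools import combinations, combinations_with_replacement, permutations, product

def nonempty_subsets(sigma):
    items = sorted(sigma)
    return [frozenset(c) for k in range(1, len(items) + 1)
            for c in combinations(items, k)]

def dominated(x, y):
    """x is strictly dominated by y if some bijection matches every set of
    x into a superset in y, with at least one strict containment."""
    r = len(x)
    return any(all(x[i] <= y[p[i]] for i in range(r)) and
               any(x[i] < y[p[i]] for i in range(r))
               for p in permutations(range(r)))

def naive_edge_constraint(sigma, E, r):
    """Edge constraint of R(Pi), computed directly from the definition:
    keep configurations of sets whose every label choice lies in E, then
    drop the non-maximal ones. E contains sorted tuples (multisets)."""
    cand = [c for c in combinations_with_replacement(nonempty_subsets(sigma), r)
            if all(tuple(sorted(pick)) in E for pick in product(*c))]
    return {c for c in cand if not any(dominated(c, d) for d in cand)}
```

For sinkless orientation (edge constraint $\{ \mathsf{I}\,\mathsf{O} \}$), the only surviving configuration of sets is $\{\mathsf{I}\}\ \{\mathsf{O}\}$, since any larger set admits a label choice outside $\mathcal{E}$.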
We will refer to the operation of deriving $\mathcal{E}'$ from $\mathcal{E}$ (and the node constraint of $\overline{\mathcal{R}}(\Pi)$ from $\mathcal{N}$) as applying the universal quantifier (to $\mathcal{E}$ and $\mathcal{N}$, respectively) and say that a problem satisfies the universal quantifier if it is the result of such an operation.
The hard part in computing $\mathcal{R}(\Pi)$ and $\overline{\mathcal{R}}(\Pi)$ is applying the universal quantifier. In fact, consider the problem $\mathcal{R}(\Pi) = (\Sigma', \mathcal{N}', \mathcal{E}')$. There is an easy way to compute $\mathcal{N}'$, which is the following. Start from all the configurations in $\mathcal{N}$, and for each configuration $\ell_1\, \cdots\, \ell_\Delta \in \mathcal{N}$ add to $\mathcal{N}'$ the condensed configuration obtained by replacing each label $\ell_i$ by the set that contains all label sets in $\mathcal{E}'$ containing $\ell_i$.
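This easy direction can be sketched as follows (our own illustration, assuming $\mathcal{E}'$ has already been computed and is given as a set of tuples of frozensets, and $\mathcal{N}$ as a set of sorted tuples of labels).

```python
def node_constraint_of_R(N, E_prime):
    """For each configuration in N, replace each label by the collection
    of all label sets that appear in E' and contain that label; the result
    is the node constraint of R(Pi) in condensed form."""
    sets_in_E_prime = {S for conf in E_prime for S in conf}
    result = set()
    for conf in N:
        condensed = tuple(
            tuple(sorted((S for S in sets_in_E_prime if lab in S), key=sorted))
            for lab in conf)
        result.add(condensed)
    return result
```

Each position of the output is itself a tuple of candidate sets, i.e., the "disjunction" written in condensed notation.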
The round elimination sequence.
In the round elimination framework, the two operators $\mathcal{R}$ and $\overline{\mathcal{R}}$ are used to define a sequence of problems that is essential for obtaining complexity lower bounds via round elimination. This sequence is defined via $\Pi_0 = \Pi$ and $\Pi_{i+1} = \mathcal{R}(\overline{\mathcal{R}}(\Pi_i))$ for all $i \geq 0$, where $\Pi$ is the given problem of interest. The following theorem provides a way to obtain lower bounds for the complexity of $\Pi$ via analyzing the $0$-round-solvability of the problems in the sequence. It is a simplified version of Theorem 7.1 from [5].
Theorem 1.
Let $\Pi_0, \Pi_1, \Pi_2, \dots$ be a sequence of problems satisfying $\Pi_{i+1} = \mathcal{R}(\overline{\mathcal{R}}(\Pi_i))$ for all $i \geq 0$. Moreover, let $T$ be an integer (that may depend on $n$ and/or $\Delta$) such that $\Pi_i$ is not $0$-round-solvable in the port numbering model for all $0 \leq i < T$. Then, if $\Pi_T$ is not $0$-round-solvable in the port numbering model, $\Pi_0$ has lower bounds of $\min\{T, \Omega(\log_\Delta n)\}$ rounds in the deterministic $\mathsf{LOCAL}$ model and $\min\{T, \Omega(\log_\Delta \log n)\}$ rounds in the randomized $\mathsf{LOCAL}$ model.
Fixed points.
As implied by Theorem 1, it is crucial for proving lower bounds via round elimination to be able to determine the $0$-round-solvability of problems in the round elimination sequence produced by the studied problem $\Pi$. A class of problems that produces very simple sequences are so-called fixed points. A locally checkable problem $\Pi$ is called a fixed point if $\Pi = \mathcal{R}(\overline{\mathcal{R}}(\Pi))$. Moreover, for a fixed point $\Pi$, the problem $\overline{\mathcal{R}}(\Pi)$ is called the intermediate problem. Note that such an intermediate problem satisfies $\overline{\mathcal{R}}(\Pi) = \overline{\mathcal{R}}(\mathcal{R}(\overline{\mathcal{R}}(\Pi)))$. We get the following corollary from Theorem 1.
Corollary 2.
Let $\Pi$ be a fixed point in the round elimination framework. Then, if $\Pi$ is not $0$-round-solvable in the port numbering model, $\Pi$ has lower bounds of $\Omega(\log_\Delta n)$ rounds in the deterministic $\mathsf{LOCAL}$ model and $\Omega(\log_\Delta \log n)$ rounds in the randomized $\mathsf{LOCAL}$ model.
0-round-solvability.
Due to Theorem 1, we are interested in determining whether a problem can be solved in $0$ rounds or not. For technical reasons, throughout the paper, whenever we consider the $0$-round-solvability of a problem, we will consider it in the port numbering model. In the port numbering model, $0$-round-solvability admits a simple characterization: a problem $\Pi = (\Sigma, \mathcal{N}, \mathcal{E})$ is $0$-round-solvable if and only if there is a configuration $\ell_1\, \cdots\, \ell_\Delta \in \mathcal{N}$ such that, for any (not necessarily distinct) labels $\ell_{i_1}, \dots, \ell_{i_r} \in \{\ell_1, \dots, \ell_\Delta\}$, it holds that $\ell_{i_1}\, \cdots\, \ell_{i_r} \in \mathcal{E}$. We will use the terms trivial and non-trivial to refer to $0$-round-solvable and non-$0$-round-solvable problems, respectively. In particular, we will be interested in trivial and non-trivial fixed points.
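The characterization above translates directly into a check; the sketch below (our own helper, for a problem given by node and edge constraints as sets of sorted tuples, with rank $r$) tests triviality in this sense.

```python
from itertools import combinations_with_replacement

def is_zero_round_solvable(N, E, r):
    """True iff some node configuration in N has the property that every
    multiset of r labels chosen from it (repetition allowed) lies in E."""
    for conf in N:
        picks = combinations_with_replacement(sorted(set(conf)), r)
        if all(tuple(sorted(p)) in E for p in picks):
            return True
    return False
```

On sinkless orientation, every node configuration contains two labels whose pairing is forbidden on edges, so the problem is non-trivial; enlarging the edge constraint to allow all pairs makes it trivial.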
For the interested reader, in the full version of this paper we provide an example of the application of the round elimination procedure to a simple problem called sinkless orientation.
4 A New Way of Applying Round Elimination
In this section, we describe a novel and simple way of applying the round elimination technique. As already discussed in Section 3.3, the hard and error-prone part in applying the $\mathcal{R}$ and $\overline{\mathcal{R}}$ operators consists in applying the universal quantifier. Let $\Pi = (\Sigma, \mathcal{N}, \mathcal{E})$ be the problem of interest, where $\mathcal{N}$ contains multisets of size $\Delta$ and $\mathcal{E}$ contains multisets of size $r$. Also, let $\Sigma' = 2^{\Sigma} \setminus \{\emptyset\}$. Recall that applying the universal quantifier means computing $\mathcal{E}'$ as follows. First, let $\mathcal{C}$ be the maximal set of configurations $L_1\, \cdots\, L_r$ such that, for all $i$, it holds that $L_i \in \Sigma'$, and all multisets $\ell_1\, \cdots\, \ell_r$ with $\ell_i \in L_i$ are in $\mathcal{E}$. Then, $\mathcal{E}'$ is obtained by removing all non-maximal configurations from $\mathcal{C}$. This definition, if implemented in a naive manner, requires considering all possible configurations of sets of labels from $\Sigma'$, and then, for each of them, checking if all possible configurations obtained by selecting one label from each set in the configuration are contained in $\mathcal{E}$.
4.1 A new way to compute $\mathcal{E}'$
We show a drastically simplified way of applying the universal quantifier that, at each point in time, requires considering only two configurations and performing elementary operations on those.
Input of the new procedure.
While, formally, the given constraint $\mathcal{E}$ is described as a set of multisets, in some cases the given constraint is described in a more compact form, that is, by providing condensed configurations. The procedure that we describe does not need to unpack condensed configurations into a set of non-condensed ones, and this feature allows applying this new procedure more easily. For this reason, we assume that $\mathcal{E}$ is described as a set of condensed configurations, that is, $\mathcal{E}$ contains multisets, where each multiset is of the form $L_1\, \cdots\, L_r$, and for all $i$ it holds that $L_i \subseteq \Sigma$. Clearly, if we are given $\mathcal{E}$ as a list of non-condensed configurations, we can convert it into this form by replacing each label with a singleton set. While we assume that the input is described as a set of condensed configurations, the output of the procedure is going to be a set of non-condensed configurations. We call the condensed configurations in $\mathcal{E}$ input configurations.
Combining configurations.
At the heart of our procedure lies an operation that combines two given configurations of sets. We now formally define what it means to combine two such configurations. Let $C_1 = L_1\, \cdots\, L_r$ and $C_2 = L'_1\, \cdots\, L'_r$ be two configurations, where the $L_i$ and $L'_i$ are sets. Let $\sigma$ be a bijection, i.e., a permutation of $\{1, \dots, r\}$. Let $j \in \{1, \dots, r\}$. Combining $C_1$ and $C_2$ w.r.t. $\sigma$ and $j$ means constructing the configuration $L''_1\, \cdots\, L''_r$ where $L''_i = L_i \cup L'_{\sigma(i)}$ if $i = j$ and $L''_i = L_i \cap L'_{\sigma(i)}$ otherwise. In other words, we consider an arbitrary perfect matching between the sets of the two configurations, and we take the union for one matched pair and the intersection for the remaining matched pairs. In Figure 1, we show an example of a combination of two configurations.
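Under the representation of a configuration as a tuple of frozensets, the combination operation can be sketched as follows (an illustration of the definition in our own code, with the permutation given as a tuple `sigma` such that position `i` of the first configuration is matched with position `sigma[i]` of the second, and positions indexed from 0).

```python
def combine(C1, C2, sigma, j):
    """Combine two configurations of sets w.r.t. a permutation sigma and a
    position j: union at position j, intersections everywhere else."""
    return tuple((C1[i] | C2[sigma[i]]) if i == j else (C1[i] & C2[sigma[i]])
                 for i in range(len(C1)))
```

For example, combining $[A\,B]\ [B\,C]$ with $A\ B$ under the identity matching, taking the union at the first position, yields $[A\,B]\ B$.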
The New Procedure.
In the following, we construct a sequence $\mathcal{E}_0, \mathcal{E}_1, \dots$ of sets of configurations until certain desirable properties are obtained. The first step of the procedure is setting $\mathcal{E}_0 = \mathcal{E}$. The next step is to apply a subroutine that creates $\mathcal{E}_{i+1}$ as a function of $\mathcal{E}_i$, and this subroutine is repeatedly applied until we get that $\mathcal{E}_{i+1} = \mathcal{E}_i$. Let the final result be $\mathcal{E}^*$.
The subroutine computes all possible combinations of pairs of configurations (including a configuration with itself) that are in $\mathcal{E}_i$, for all possible permutations $\sigma$ and for all possible choices of $j$. If a resulting configuration contains an empty set, the configuration is discarded. Let $\mathcal{E}_{i+1}$ be the set of configurations obtained by starting from the configurations in $\mathcal{E}_i$, adding the newly computed configurations, and then removing the non-maximal ones. This procedure is described more formally in Algorithm 1.
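Putting the pieces together, the following hedged Python sketch (our own code; the correctness of the underlying procedure is what the paper proves, not this snippet) iterates the combination subroutine until the set of configurations stabilizes, discarding configurations containing an empty set and keeping only the maximal ones.

```python
from itertools import permutations

def combine(C1, C2, sigma, j):
    """Union at position j, intersections elsewhere, under matching sigma."""
    return tuple((C1[i] | C2[sigma[i]]) if i == j else (C1[i] & C2[sigma[i]])
                 for i in range(len(C1)))

def dominated(x, y):
    """x is strictly dominated by y under some bijection of positions."""
    r = len(x)
    return any(all(x[i] <= y[p[i]] for i in range(r)) and
               any(x[i] < y[p[i]] for i in range(r))
               for p in permutations(range(r)))

def canonical(conf):
    """Order the sets of a configuration canonically (multiset semantics)."""
    return tuple(sorted(conf, key=sorted))

def iterate_combinations(E0):
    """Repeat: combine all pairs (all permutations sigma, all positions j),
    drop configurations containing an empty set, keep only maximal ones;
    stop when the set of configurations stabilizes."""
    current = {canonical(c) for c in E0}
    while True:
        new = set(current)
        for C1 in current:
            for C2 in current:
                r = len(C1)
                for sigma in permutations(range(r)):
                    for j in range(r):
                        c = combine(C1, C2, sigma, j)
                        if all(c):  # discard if any set is empty
                            new.add(canonical(c))
        new = {x for x in new if not any(dominated(x, y) for y in new)}
        if new == current:
            return current
        current = new
```

On the sinkless orientation edge constraint (as condensed singleton sets), every non-trivial combination produces an empty set, so the procedure stabilizes immediately, matching the brute-force computation of $\mathcal{E}'$.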
In the full version of this paper, we will first provide an example of execution of our new procedure, and then we will prove that the constraint $\mathcal{E}^*$ returned by the procedure is equal to the constraint $\mathcal{E}'$ as defined according to the definition of round elimination given in Section 3. Finally, we will also prove that the procedure always terminates.
References
- [1] Alkida Balliu, Sebastian Brandt, Xavier Coiteux-Roy, Francesco D’Amore, Massimo Equi, Francois Le Gall, Henrik Lievonen, Augusto Modanese, Dennis Olivetti, Marc-Olivier Renou, Jukka Suomela, Lucas Tendick, and Isadora Veeren. Distributed quantum advantage for local problems. In Proceedings of the 57th Annual ACM SIGACT Symposium on Theory of Computing, (STOC 2025), 2025.
- [2] Alkida Balliu, Sebastian Brandt, Yuval Efron, Juho Hirvonen, Yannic Maus, Dennis Olivetti, and Jukka Suomela. Classification of distributed binary labeling problems. In Proc. 34th Symp. on Distributed Computing (DISC), 2020.
- [3] Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela. Lower bounds for maximal matchings and maximal independent sets. In Proc. 60th IEEE Symp. on Foundations of Computer Science (FOCS), pages 481–497, 2019. doi:10.1109/FOCS.2019.00037.
- [4] Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti. Improved distributed lower bounds for MIS and bounded (out-)degree dominating sets in trees. In Proc. 40th ACM Symposium on Principles of Distributed Computing (PODC), 2021. doi:10.1145/3465084.3467901.
- [5] Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti. Distributed Δ-coloring plays hide-and-seek. In 54th ACM SIGACT Symposium on Theory of Computing (STOC), pages 464–477, 2022.
- [6] Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti. Distributed edge coloring in time polylogarithmic in Δ. In PODC ’22: ACM Symposium on Principles of Distributed Computing, Salerno, Italy, July 25 - 29, 2022, pages 15–25. ACM, 2022. doi:10.1145/3519270.3538440.
- [7] Alkida Balliu, Sebastian Brandt, Fabian Kuhn, and Dennis Olivetti. Distributed maximal matching and maximal independent set on hypergraphs. In Proc. 34th ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 2632–2676, 2023. doi:10.1137/1.9781611977554.CH100.
- [8] Alkida Balliu, Sebastian Brandt, and Dennis Olivetti. Distributed lower bounds for ruling sets. In Proc. 61st IEEE Symp. on Foundations of Computer Science (FOCS), pages 365–376, 2020. doi:10.1109/FOCS46700.2020.00042.
- [9] Alkida Balliu, Juho Hirvonen, Christoph Lenzen, Dennis Olivetti, and Jukka Suomela. Locality of not-so-weak coloring. In Structural Information and Communication Complexity - 26th International Colloquium, SIROCCO 2019, L’Aquila, Italy, July 1-4, 2019, Proceedings, volume 11639 of Lecture Notes in Computer Science, pages 37–51. Springer, 2019. doi:10.1007/978-3-030-24922-9_3.
- [10] Alkida Balliu, Fabian Kuhn, and Dennis Olivetti. Distributed edge coloring in time quasi-polylogarithmic in delta. In Proc. 39th ACM Symp. on Principles of Distributed Computing (PODC), pages 289–298, 2020. doi:10.1145/3382734.3405710.
- [11] Leonid Barenboim. Deterministic (Δ+1)-Coloring in Sublinear (in Δ) Time in Static, Dynamic, and Faulty Networks. Journal of ACM, 63(5):1–22, 2016. doi:10.1145/2979675.
- [12] Leonid Barenboim and Michael Elkin. Sublogarithmic distributed MIS algorithm for sparse graphs using Nash-Williams decomposition. Distributed Comput., 22:363–379, 2010. doi:10.1007/s00446-009-0088-2.
- [13] Leonid Barenboim and Michael Elkin. Deterministic distributed vertex coloring in polylogarithmic time. Journal of ACM, 58:23:1–23:25, 2011. doi:10.1145/2027216.2027221.
- [14] Leonid Barenboim, Michael Elkin, and Uri Goldenberg. Locally-iterative distributed (Δ+1)-coloring below Szegedy-Vishwanathan barrier, and applications to self-stabilization and to restricted-bandwidth models. In Proc. 37th ACM Symp. on Principles of Distributed Computing (PODC), pages 437–446, 2018. doi:10.1145/3212734.3212769.
- [15] Leonid Barenboim, Michael Elkin, and Fabian Kuhn. Distributed (Δ+1)-coloring in linear (in Δ) time. SIAM Journal on Computing, 43(1):72–95, 2014. doi:10.1137/12088848X.
- [16] Sebastian Brandt. An automatic speedup theorem for distributed problems. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC 2019, Toronto, ON, Canada, July 29 - August 2, 2019, pages 379–388, 2019. doi:10.1145/3293611.3331611.
- [17] Sebastian Brandt, Orr Fischer, Juho Hirvonen, Barbara Keller, Tuomo Lempiäinen, Joel Rybicki, Jukka Suomela, and Jara Uitto. A lower bound for the distributed Lovász local lemma. In Proceedings of the 48th ACM Symposium on Theory of Computing (STOC 2016), pages 479–488. ACM Press, 2016. doi:10.1145/2897518.2897570.
- [18] Sebastian Brandt and Dennis Olivetti. Truly tight-in-Δ bounds for bipartite maximal matching and variants. In Proc. 39th ACM Symp. on Principles of Distributed Computing (PODC), pages 69–78, 2020. doi:10.1145/3382734.3405745.
- [19] Yi-Jun Chang, Tsvi Kopelowitz, and Seth Pettie. An exponential separation between randomized and deterministic complexity in the LOCAL model. SIAM Journal on Computing, 48(1):122–143, 2019. doi:10.1137/17M1117537.
- [20] Yi-Jun Chang, Wenzheng Li, and Seth Pettie. An optimal distributed (Δ+1)-coloring algorithm? In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, (STOC 2018), pages 445–456, 2018. doi:10.1145/3188745.3188964.
- [21] Salwa Faour, Mohsen Ghaffari, Christoph Grunau, Fabian Kuhn, and Václav Rozhon. Local distributed rounding: Generalized to mis, matching, set cover, and beyond. In Proc. 34th ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 4409–4447, 2023. doi:10.1137/1.9781611977554.CH168.
- [22] Pierre Fraigniaud, Marc Heinrich, and Adrian Kosowski. Local conflict coloring. In Proc. 57th IEEE Symp. on Foundations of Computer Science (FOCS), pages 625–634, 2016. doi:10.1109/FOCS.2016.73.
- [23] Marc Fuchs and Fabian Kuhn. List defective colorings: Distributed algorithms and applications. In Rotem Oshman, editor, Proc. 37th Int. Symp. on Distributed Computing (DISC), volume 281 of LIPIcs, pages 22:1–22:23, 2023. doi:10.4230/LIPICS.DISC.2023.22.
- [24] Mohsen Ghaffari and Christoph Grunau. Near-optimal deterministic network decomposition and ruling set, and improved MIS. In 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS), pages 2148–2179. IEEE, 2024. doi:10.1109/FOCS61266.2024.00007.
- [25] Mohsen Ghaffari, Christoph Grunau, Bernhard Haeupler, Saeed Ilchi, and Václav Rozhon. Improved distributed network decomposition, hitting sets, and spanners, via derandomization. In Proc. 34th ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 2532–2566, 2023. doi:10.1137/1.9781611977554.CH97.
- [26] Mohsen Ghaffari, David G. Harris, and Fabian Kuhn. On derandomizing local distributed algorithms. In Proc. 59th Symp. on Foundations of Computer Science (FOCS), pages 662–673, 2018. doi:10.1109/FOCS.2018.00069.
- [27] Fabian Kuhn. Faster deterministic distributed coloring through recursive list coloring. In Proc. 32st ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 1244–1259, 2020. doi:10.1137/1.9781611975994.76.
- [28] Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. Local computation: Lower and upper bounds. Journal of ACM, 63:17:1–17:44, 2016. doi:10.1145/2742012.
- [29] Nathan Linial. Locality in Distributed Graph Algorithms. SIAM Journal on Computing, 21(1):193–201, 1992. doi:10.1137/0221015.
- [30] L. Lovász. On decompositions of graphs. Studia Sci. Math. Hungar., 1:237–238, 1966.
- [31] Yannic Maus and Tigran Tonoyan. Local conflict coloring revisited: Linial for lists. In 34th International Symposium on Distributed Computing, DISC 2020, October 12-16, 2020, Virtual Conference, volume 179 of LIPIcs, pages 16:1–16:18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.DISC.2020.16.
- [32] Moni Naor. A lower bound on probabilistic algorithms for distributive ring coloring. SIAM Journal on Discrete Mathematics, 4(3):409–412, 1991. doi:10.1137/0404036.
- [33] Moni Naor and Larry J. Stockmeyer. What can be computed locally? SIAM Journal on Computing, 24(6):1259–1277, 1995. doi:10.1137/S0097539793254571.
- [34] Dennis Olivetti. Round Eliminator: a tool for automatic speedup simulation, 2019. URL: https://github.com/olidennis/round-eliminator.
- [35] David Peleg. Distributed Computing: A Locality-Sensitive Approach. Society for Industrial and Applied Mathematics, 2000. doi:10.1137/1.9780898719772.
- [36] Václav Rozhoň and Mohsen Ghaffari. Polylogarithmic-time deterministic network decomposition and distributed derandomization. In Proceedings of 52nd Annual ACM Symposium on Theory of Computing (STOC 2020), 2020. doi:10.1145/3357713.3384298.
