Faster Dynamic -Coloring Against Adaptive Adversaries
Abstract
We consider the problem of maintaining a proper -vertex coloring in a graph on -vertices and maximum degree undergoing edge insertions and deletions. We give a randomized algorithm with amortized update time against adaptive adversaries, meaning that updates may depend on past decisions by the algorithm. This improves on the very recent -update-time algorithm by Behnezhad, Rajaraman, and Wasim (SODA 2025) and matches a natural barrier for dynamic -coloring algorithms. The main improvements are on the densest regions of the graph, where we use structural hints from the study of distributed graph algorithms.
Keywords and phrases:
Dynamic Graph Algorithms, ColoringCategory:
Track A: Algorithms, Complexity and GamesFunding:
Maxime Flin: Supported by the Icelandic Research Fund Grant No. 2310015-053.Copyright and License:
![[Uncaptioned image]](x1.png)
2012 ACM Subject Classification:
Theory of computation Dynamic graph algorithmsAcknowledgements:
Discussions with participants of Dagstuhl Seminar 24471, Graph Algorithms: Distributed Meets Dynamic, served as a key motivation for this work.Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele PuppisSeries and Publisher:

1 Introduction
In the -coloring problem, we are given a graph with maximum degree and must assign to each vertex a color such that adjacent vertices receive different colors. Despite its simplicity in the classical RAM model, this problem is surprisingly challenging in more constrained models such as distributed graphs [22, 3, 19, 8], sublinear models [1, 7, 10, 2], dynamic graphs [5, 20, 6, 4] and even recently communication complexity [16, 9].
This paper presents a fully dynamic algorithm for the -coloring problem. The vertex set is fixed of size , while edges can be inserted or deleted as long as the maximum degree remains at most at all times. We assume is known in advance. As the graph changes, our algorithm must maintain a -coloring. The time the algorithm takes to process an edge insertion or deletion is called the update time and our aim is to minimize it.
The study of dynamic algorithms for -coloring was initiated by [5], which provided a randomized algorithm with amortized update time. Independently, the authors of [6] and [20] provided randomized algorithms with amortized update time. The caveat with those algorithms is that they assume updates are oblivious: all edge insertions and deletions are fixed in advance and may not adapt to the colors chosen by the algorithm.
Maintaining a coloring against an adaptive adversary – that may base the updates on past decisions by the algorithm – is significantly harder, as the adversary can force the algorithm to recolor vertices at every update by connecting same-colored vertices. Until very recently, no update time algorithm was known for -coloring against adaptive adversaries. In a recent breakthrough, Behnezhad, Rajaraman, and Wasim [4] presented a randomized algorithm with amortized update time against adaptive adversaries111In this paper, we use to hide polylogarithmic factors.. In this paper, we significantly improve the runtime, obtaining the following result.
Theorem 1.
There exists a randomized fully dynamic algorithm maintaining a -coloring against adaptive adversaries in amortized update time with high probability.
At its core, our algorithm is based on a structural decomposition of the vertices into four layers so that vertices of the bottom layer can be colored efficiently by maintaining and searching small sets. Vertices of higher layers are colored by random color trials enhanced with a color stealing mechanism. Specifically, when a vertex cannot use a color due to a lower-level neighbor, our algorithm steals the color and recolors the lower-level vertex instead. Interestingly, the layers correspond to the order in which streaming and distributed algorithms build their colorings.
In addition to the runtime improvement, our algorithm approaches a natural barrier for dynamic coloring algorithms against adaptive adversaries. The algorithm of [4] was primarily limited by the challenge of efficiently recoloring dense regions of the graph. In contrast, our algorithm shifts this bottleneck to the recoloring of sparser vertices and the handling of random color trials in general. We elaborate on this in the following discussion.
1.1 Discussion & Open Problems
Before we explain why is a natural barrier, we first establish why poses a fundamental limit for dynamic algorithms based on random color trials. Observe that this limitation extends to simpler problems, such as -coloring.
The bottleneck for recoloring a vertex lies in efficiently verifying whether a color is available. One possibility is to iterate through all neighbors in time; another is to iterate through color classes (i.e., the set of vertices with a given color). At best, all color classes contain vertices each. So, any algorithm that checks arbitrary or random colors using the former method when is small and the latter when is large therefore has update time at best. Recall that an adaptive adversary can trigger recoloring at every update, preventing any opportunity for amortization.
To reduce the number of colors to , the approach of [13, 19, 1, 8, 4] and ours is to compute a partial coloring in which (sparse) vertices have many pairs of neighbors colored the same. The randomized argument underlying this approach is fragile, requiring a complete rerun every update. Since it is based on random color trials at every vertex, it runs in time. Amortized and balanced with the update time for small , we obtain the update time.
Problem 1.
Does there exist a fully dynamic -coloring with update time that remains robust against adaptive adversaries?
1.2 Organization of the Paper
In Section 2, we provide a high-level overview of our algorithm and compare it with [4]. Section 3 introduces the sparser-denser decomposition that forms the foundation of our approach. The core of our dynamic algorithm, along with the proof of Theorem 1, is presented in Section 4. Finally, we defer the least novel part of our algorithm – the periodic recomputation of a fresh coloring of the whole graph – to Section 5.
2 Overview
We present the key concepts underlying Theorem 1, deliberately simplifying certain technical details for the sake of intuition. We focus on the case where , as an update time of is straightforward to achieve by examining the adjacency list of the vertices.
The Sparser-Denser Decomposition.
We employ a modified version of the structural decomposition introduced by Reed [24], and further explored in [19, 1], which partitions the vertices into sparse vertices and dense clusters. We classify vertices into clusters based on their average density. Vertices in clusters with low average density are designated as sparser, while those in clusters with higher average density are considered denser (see Definition 4). This method is based on structural observations about the mean local density within a cluster (see Lemma 7), and provides a significant efficiency gain compared to the specific form of decomposition used in [4].
Coloring the Sparser Vertices.
The strategy for coloring sparser vertices is essentially the same as in [4]. Updates are performed in phases of length , with each phase beginning by computing a fresh coloring of the graph. This ensures that the adversary cannot decrease the number of available colors below because each sparser vertex starts with available colors (see Proposition 11) and the algorithm recolors at most one vertex per update. As the adversary updates the graph, we use the observation from [1, 4] that a vertex with available colors can be recolored in time by random color trials. Note that the cost of computing a fresh coloring, amortized over updates of the following phase, is .
Coloring the Denser Vertices.
We treat two types of vertices differently in each dense cluster. We exploit the fact that most vertices have sparsity not much greater than the average sparsity of their cluster. These vertices – the inliers – have neighbors outside their cluster and anti-neighbors within their cluster. This allows for a compact representation of the colors available to inliers, and a simple deterministic search suffices to recolor them. The remaining vertices – the outliers – are recolored using random color trials, similar to the strategy for sparser vertices. The key idea is that outliers may steal a color from an inlier neighbor, causing the inlier to be recolored as described above. Since most colors are available for stealing, only a few random trials are required. (See Lemma 15 for details.)
Maintaining the Color Balance.
To guarantee efficient color trials, we ensure that each color class contains vertices. This condition holds for the sparser vertices by an adaptation of [4] (see Lemma 19). For the denser vertices, we maintain a Dense Balance Invariant: each cluster contains at most two vertices of each color. To ensure that enough colors are available for inliers, we maintain a second invariant.
The Matching Invariant ensures that enough colors are repeated within each cluster. These repeated colors form a colorful matching [1], which can be maintained efficiently (again) using random color trials. With the Matching Invariant in place, we can ensure that inliers receive a color that is unique within their cluster, thereby preserving the Dense Balance Invariant.
It is noteworthy how extensively our algorithm employs color stealing: sparser vertices can steal colors from matched vertices, which can steal from outliers, and outliers can steal from inliers.
Comparison with [4].
The parameter used in [4] to distinguish between sparser and denser vertices is the of the structural decomposition [24, 19], ultimately set to . While this choice allows for a uniform treatment of dense vertices, it is costly due to its quadratic dependence on (local) sparsity. Specifically, sparse vertices have available colors, which limits the length of each phase and increases the amortized cost of recomputing fresh colorings. Additionally, maintaining the decomposition itself incurs an update time of . In contrast, we keep constant and instead use the average sparsity of dense clusters directly as a parameter to classify vertices as sparser or denser. Our method employs a two-pronged approach to recoloring denser vertices (inliers vs. outliers), with both steps relatively simple. While [4] reasons about dense vertices through perfect matchings in the palette graph of [1], we work directly with vertices and colors. The augmenting paths in their palette graph are similar to our color-stealing mechanism.
A notable technical difference concerns the colorful matching: while [4] maintains a maximal matching, we find that a sufficiently large matching suffices, allowing for simpler updates.
Recent years have seen a number of exciting results on -coloring across models, all made possible by the power of random color trials and, in most cases, with the help of Reed’s structural decomposition [24]. We find particularly interesting that analyzing the average sparsity/density of its clusters (inspired by [18, 14]) enabled us to improve on [4] down to a natural barrier. In particular, the inliers/outliers dichotomy for balancing the cost of coloring dense and sparse vertices may be of independent interest.
2.1 Notation
For an integer , we write . Throughout the paper, the input graph has a fixed vertex set . The neighborhood of is and we write its degree. For a set , let be the set of neighbors in the set. We assume that is known in advance and that, at all time, every vertex has degree at most . A non-adjacent pair of nodes is said to be an anti-edge.
A partial coloring is a function , where indicates that is not colored. For a set of nodes, let . A coloring is total when all vertices are colored, i.e., when . The coloring is proper if, for every edge , either or . In this paper, all colorings are implicitly proper. The palette of is , i.e., the set of colors not used by neighbors of . The colors of the palette are also said to be available (for ).
We say that an event holds “with high probability” – often abridged to “w.h.p.” – when it holds with probability at least for a desirably large constant .
The algorithm frequently uses classic sets and operations that can be implemented efficiently using, say, a red-black tree. They support insertions, deletions, and uniform sampling in worst-case update time.
3 The Sparser-Denser Decomposition
Our algorithm uses a modified version of the sparse-dense decomposition introduced by [24] and further extended in [19, 1] for -coloring in distributed and streaming models. For some small , these decompositions partition the vertices into a set of -sparse vertices and a number of dense clusters called -almost-cliques. We introduce now the key definitions of sparsity and almost-cliques.
Definition 2.
Vertex is -sparse if contains at most edges.
Definition 3.
A set of vertices is an -almost-clique if
-
1.
, and
-
2.
every has .
For a vertex in an almost-clique , the set of its neighbors outside and the set of its non-neighbors inside play a key role in our algorithm. We denote and the sets of external- and anti-neighbors, respectively. We use and to denote their size.
As explained earlier, we differentiate between sparse and dense vertices differently from [24, 19, 1, 4]. Namely, it suffices for us that an almost-cliques is dense on average. To make this precise, let and be the average external- and anti-degree in , respectively. We can now state the definition of our sparser-denser decomposition.
Definition 4.
Let , and a graph with maximum degree . An -sparser-denser decomposition of is a partition of its vertices into sets such that
-
1.
every is -sparse, and
-
2.
every is an -almost-clique such that .
To compare our decomposition to that of [24, 19, 1], we use their core decomposition with a constant but choose a sparsity parameter and let be the -sparse vertices along with the almost-cliques for which . This results only in -sparse vertices due to a structural results on the sparsity of dense vertices (see Lemma 7). Henceforth, we refer to vertices of as to the sparser vertices and vertices of as the denser vertices.
As explained in Section 2, we wish to color most of the denser vertices by iterating over their external- and anti-neighbors. The vertices for which this is feasible are called inliers:
Definition 5.
For each in an -sparser-denser decomposition, define
where vertices of (resp. ) are called inliers (resp. outliers).
Note that by Markov’s inequality, at least three quarters of each are inliers.
3.1 Maintaining the Decomposition
In this section, we explain how the fully dynamic algorithm of [4] for maintaining Reed’s decomposition against an adaptive adversary can be adapted to our needs:
Lemma 6.
There exists a universal constant for which the following holds. Let and . There exists a randomized fully dynamic algorithm that maintains array that induces a partition of the vertices into sets and such that, w.h.p.,
-
is an -refined partition,
-
for each , it maintains the set of anti-edges in , and
-
for each , it maintains the sets and .
The amortized update time against an adaptive adversary is with high probability.
Our algorithm builds on an -almost-clique decomposition, as defined by [1]: a vertex partition where the nodes in have -sparsity and the ’s are -almost-cliques. We filter out the ’s with sparsity more than and add them into to obtain the set . The key structural result for our partition is the following lemma:
Lemma 7.
Let and . Suppose is a vertex partition such that every vertex in is -sparse and each is an -almost-clique. Then, every such that is -sparse.
Proof.
Let and such that . First, we argue that every with is -sparse. In particular, if or , then is at least -sparse.
It was already proven in [17, Lemma 6.2] that is at least -sparse. Our definition of external degree, counting both sparse and dense neighbors outside , differs from [17] so we222A proof of this fact has already appeared in a technical report by Halldórsson, Nolin and Tonoyan [18, Lemma 3]. Since it is a short and important proof, we include it here for completeness. prove that is at least -sparse. We may assume that contains at least edges, for the claim otherwise trivially holds. By handshaking lemma, the number of edges in is
(1) |
So we lower bound for each . If , then it has at most shared neighbors with , so . If , then contains at most many edges by definition. On the other hand, of the edges in , at most are not in . Using our assumption on the number of edges in , this means that , so that . Since all external neighbors contribute at least to the sum in Eq 1, the number of edges in is at most . If , this implies that is -sparse.
We henceforth assume that and . Now, there are two cases, depending on which of or dominates.
If , then . Since holds for all vertices in , it also holds on average and we get that . Using again, we get . Since a vertex with degree at most is -sparse, is -sparse, which is -sparse.
If , then . Observe that contains anti-edges, of which at most may have an endpoint out of . So contains at least anti-edges for . So is -sparse which is -sparse.
To maintain our sparse-denser decomposition, we use the algorithm of [4] for the almost-clique decomposition.
Proposition 8 ([4, Theorem 4.1]).
For every and , there exists a fully dynamic randomized algorithm that maintains a vertex partition of any -vertex graph with maximum degree such that, w.h.p.,
-
1.
vertices of are -sparse,
-
2.
each is a -almost-clique,
-
3.
for every vertex it maintains the sets and , and
-
4.
for every almost-clique , it maintains the set of anti-edges in .
The amortized update time against an adaptive adversary is with high probability.
We can now prove that a sparser-denser decomposition can indeed be maintained.
Proof of Lemma 6.
Define and let be the universal constant such that vertices of are -sparse in Proposition 8. We use the algorithm of Proposition 8 with and define as the set containing all vertices of and the vertices from almost-cliques with . Since we assume in Lemma 6 that , every vertex of is -sparse. Every vertex of is -sparse by Lemma 7. The remaining -almost-cliques are renamed and verify that by construction. So the resulting decomposition is indeed an -sparser-denser decomposition.
The algorithm of Proposition 8 already maintains the set of anti-edges in each , so it only remains to explain how we maintain sets of external- and anti-neighbors for each denser vertex. Every time the algorithm of Proposition 8 modifies an element in or , we inspect the change and modify accordingly. Similarly, when the algorithm modifies a set of anti-edges by inserting or deleting some anti-edges , we modify the sets and accordingly. Since each inspection (and possible modification) takes time, it does not affect the complexity of the algorithm by more than a constant factor.
3.2 Clique Palette, Colorful Matching & Accounting Lemma
Similarly to [1, 15], we wish to assign dense vertices colors that are unused by vertices of their almost-clique. Together these form the clique palette.
Definition 9.
For any (possibly partial) coloring of , and almost-clique , the clique palette is the set of colors unused by vertices of . More formally, .
To ensure the clique palette always contains colors, we give the same color to a pair of vertices in the almost-clique. We say a color is repeated or redundant in when at least two vertices of hold it. Assadi, Chen, and Khanna [1] introduced the notion of a colorful matching: if colors are repeated in , there exists an -sized anti-matching in where the endpoints of a matched anti-edge have the same color.
Each redundant color in gives its vertices some slack, while using the clique palette forbids at most one color for each anti-neighbor. Accounting for both, we obtain the following bound on the number of colors in the clique palette available for each vertex:
Lemma 10 (Accounting Lemma).
Let be a (possibly partial) coloring and an almost-clique with a colorful matching of size . For every , the number of colors available for in the clique palette is at least
Proof.
For succinctness, let denote the number of uncolored vertices in . Observe how the number of vertices in relates to the size of the colorful matching by partitioning vertices according to their colors:
(2) |
where the last inequality comes from the fact that at least colors are used by at least two vertices in . Comparing the number of colors in the clique palette available to to the size of shows the contribution of redundant colors and uncolored vertices:
(by Eq 2) |
To complete the proof, observe that , so that simplifies to . Hence, the lemma.
4 The Dynamic Algorithm
In this section, we prove Theorem 1. We begin with the definition of phases and describe the properties of the coloring at the beginning of each phase. We then explain how the algorithm maintains a proper coloring over one phase. The analysis has three parts: first, we show that the algorithm maintains some invariants throughout the phase (Section 4.1); second, we prove that recoloring denser vertices is efficient (Section 4.2); finally, we analyze the re-coloring of sparser vertices (Section 4.3). Together, they imply Theorem 1 (Section 4.4).
When , the naive deterministic algorithm that scans the entire neighborhood of vertices before recoloring them (when necessary) has worst-case update time . We henceforth assume that , for a sufficiently large constant where and are the parameters of the sparser-denser decomposition.
Phases & Fresh Coloring.
We divide updates into phases of insertions and deletions and analyse the algorithm phase by phase. At the beginning of each phase, we compute a fresh coloring of the graph.
A phase is updates long, where is the constant from Proposition 11.
The algorithm recolors at most one vertex in per update, so the adversary can only remove one color per update from . By recomputing a fresh coloring every updates, we ensure that sparser vertices always have colors available, as they start the phase with more than available colors.
Proposition 11.
There exists a universal constant such that the following holds. For for some large hidden constant, let be a -decomposition of an -vertex graph with maximum degree . W.h.p., RefreshColoring runs in time, producing a total coloring such that
-
(Sparse Slack) every sparse vertex has ,
-
(Sparse Balance) for every ,
-
(Dense Balance) in each , every color is used at most twice, and
-
(Matching) each has at least redundant colors.
The description of RefreshColoring and the proof of Proposition 11 are deferred to Section 5 to preserve the flow of the paper. Henceforth, we say that “RefreshColoring succeeded” if it ends and the coloring it produces has all the aforementioned properties. Throughout each phase, we maintain the Dense Balance and Matching properties given by RefreshColoring as invariants – and refer to them as the Dense Balance Invariant and Matching Invariant.
During a phase, it helps to keep the sparser-denser partition fixed. A simple adaptation of the algorithm from Lemma 6 allows us to make this assumption. We emphasize that during a phase, the adversary may increase or decrease external- and anti-degrees of vertices and their averages. However, given the phase length, it does not materially affect the decomposition.
Lemma 12.
Let , and be as in Lemma 6. There is an algorithm with amortized update time that maintains a vertex partition such that
-
at the beginning each phase, the partition is an -decomposition,
-
the vertex partition does not change during phases,
-
at all times, denser vertices have and , and
-
for each , the values of and increase or decrease by at most one within a phase.
Proof.
Use Lemma 6 to maintain a sparser-denser decomposition with update time except that, during phases, instead of processing updates, we store them in a stash. When a new phase begins, we process all updates stored in the stash and obtain an -refined partition of the graph at that time. Clearly the vertex partition is fixed during phases. Now, observe that at the beginning of a phase, since every is an -almost-clique, the denser vertices have and . Since the adversary can insert/delete at most edges per phase, we always have that and . Similarly, the adversary may increase/decrease the sums of external- and anti-degrees by at most , hence the values of and increase/decrease by up to an additive term over the course of each phase.
The Dynamic Algorithm.
Updates are triggered by calls to Insert and Delete. They maintain the sparser-denser partition as described in Lemma 12 and, when a new phase begins, recompute a coloring from scratch using RefreshColoring. During phases, Insert and Delete update the data-structures and respectively call RecolorInsert and RecolorDelete to handle color conflicts.
Upon insertion of , RecolorInsert verifies if it creates a color conflict (line 4), in which case it uncolors . In most cases, it suffices to recolor using RecolorSparse or RecolorDense depending on whether or not (lines 10 to 13). The one exception to this concerns matched vertices. A denser vertex is matched if and only if its color is repeated by another vertex of its almost-clique. Maintaining sufficiently many matched vertices in each almost-clique (see Matching Invariant) is necessary for RecolorDense. If, after an update, there are not enough matched vertices – either because the almost-clique lost an anti-edge (line 6) or because increased (line 2) – we call IncrMatching(D), which creates a new redundant color in . To same-color pairs of (non-adjacent) vertices in , we use RecolorMatching. It proceeds similarly in recoloring sparse vertices and outliers: random color trials in and possibly stealing a color from an inlier in .
Let us list the data-structures we use. The coloring is stored as an array such that is the color of . We also maintain the color class for each . When we change the color of a vertex, we use the function Color(v, ) that sets to and updates color classes accordingly. To keep track of redundant colors, we maintain:
-
an array such that if and are in the same almost-clique and colored the same, otherwise ;
-
for each almost-clique , the set of repeated colors in .
We now explain how the recoloring procedures used in Algorithm 1 work for sparser, denser and matched vertices.
Sparser Vertices.
For recoloring a vertex in , we sample random colors in and recolor the vertex with the first color that is not used by neighbors of and used by at most one denser neighbor. As we shall see, there are at least such colors with high probability333For convenience, if no available color exist, the algorithm loops forever. As we explain later, this occurs with probability . If one wishes the algorithm to have finite expected update time, the algorithm can be modified to restart a phase if no available color was found after samples., so color tries suffice with high probability. To verify if a color is available, the algorithm loops through its color class (line 4). Finally, if the color was used by a unique denser neighbor (line 6), we steal the color from and recolor the denser vertex using RecolorDense.
Denser Vertices.
We now explain how we recolor denser unmatched vertices. Recall that inliers are such that and (Definition 5). Note that the algorithm does not maintain those sets explicitly, as it can simply test for those inequalities. Importantly, inliers always have colors available in the clique palette by the Accounting Lemma (Lemma 10). We therefore color them by computing the set of colors used by external neighbors and searching for an available color in (line 8).
We color outliers like the sparser vertices (lines 2 to 7): by sampling colors in . We allow outliers to steal the color of an unmatched inlier and then recolor the inlier instead (line 6).
To implement RecolorDense efficiently, for each almost-clique , we maintain
-
the set of unused colors , a.k.a. the clique palette,
-
the set of vertices in with color .
Those data-structures are easy to maintain with worst-case update time because each update affects at most two vertices – thus at most two almost-cliques – and each set changes by at most one element.
Recoloring a Matched Anti-Edge.
For a pair , RecolorMatching(u,v) colors – or recolors – and the same. As we cannot ensure they share many available colors in the clique palette, we allow and to pick any color that is used by neither a matched vertex (line 4) nor external neighbors (line 5). Since we only need few repeated colors, they always have colors to choose from and we can find one using random samples. However, when doing so, we might have to – in fact, most likely will – recolor an unmatched vertex (line 8).
RecolorMatching is used whenever matched vertices need to get recolored (line 12 in Algorithm 1) or when the matching size needs to be increased (line 12 in Algorithm 4). To increase the size of the matching, IncrMatching needs to find a pairs of unmatched anti-neighbors to same-color. Since we maintain a matching of size while contains and anti-degrees are at most , a random anti-edge in has both endpoints unmatched with constant probability.
4.1 Dense Invariants
We begin by showing that our recoloring algorithm maintains the dense invariants (Lemma 13). More interestingly, we show in Lemma 14 that when the Matching Invariant is broken by an update, one call to IncrMatching suffices to fix it.
Lemma 13.
Suppose the invariants hold before a call to RecolorDense(v) with or RecolorMatching(u,v) or RecolorSparse(v). If the call ends, it maintains the invariants.
Proof.
First, consider a call to RecolorDense(v). If , since is unmatched, recoloring does not affect the Matching Invariant; it maintains the Dense Balance Invariant because gets a color from , i.e., a color that is unused in . If , again since is unmatched it maintains the Matching Invariant. RecolorDense(v) colors with a color which is either unused in or used by an unmatched inlier (lines 4-5). If the color is free, the Dense Balance is maintained; if the color is used by an unmatched inlier , we uncolor and call RecolorDense(w) (line 6). As we explained before, recoloring an unmatched inlier preserves invariants.
Consider now a call to RecolorMatching(u,v). Observe that it can only increase the number of matched vertices, so it maintains the Matching Invariant. It colors and with some . So is either free or used by a unique vertex in . If a vertex uses , the algorithm uncolors and calls RecolorDense(w) (line 6), which maintains the invariants as we previously explained.
Finally, a call to RecolorSparse(v) where recolors , which does not affect the invariants. It may steal a color to one denser neighbor . If is unmatched RecolorDense(w) maintains the invariant. If is matched, the algorithm calls RecolorMatching(w, matched[w]) which can only end if and its matched vertex are same-colored. So the call to RecolorSparse maintains the invariants.
Lemma 13 shows that RecolorInsert and RecolorDelete maintain the Dense Balance Invariant since it is only affected when vertices are recolored. On the other hand, the Matching Invariant depends on the values of and , which are affected by insertions or deletions of edges.
Lemma 14.
Consider a fixed update RecolorInsert(u,v) or RecolorDelete(u,v) before which the invariant holds. When RecolorDelete(u,v) ends or when RecolorInsert(u,v) reaches line 10, the invariants holds.
Proof.
Assume that when the algorithm reaches line 2 of RecolorDelete or line 9 of RecolorInsert. It must be that because the invariant held before the update and
-
inserting an edge cannot increase and may decrease by one if the edge inserted was between matched vertices (line 6);
-
deleting an edge may increase by , which means increases by at most one, while does not change.
In either case IncrMatching is called before RecolorDelete ends or RecolorInsert reaches line 10. In order to end, IncrMatching matches a new pair of previously unmatched vertices and calls RecolorMatching(u,v) (line 12 in Algorithm 4). Then RecolorMatching(u,v) ends only after same-coloring and (line 8 in Algorithm 4), so the updated matching has size . The Dense Balance Invariant holds when the algorithms call IncrMatching (because it held before the update and uncoloring vertices does not affect it) and calls to RecolorMatching and RecolorDense preserve it (Lemma 13).
4.2 Complexity of Recoloring the Denser Vertices
We investigate the time complexity of our recoloring procedures for dense vertices and express it in terms of the maximal size of a color class at the time of the update and the sparsity parameter . Observe that the probabilistic statements of Lemmas 15, 16, and 17 are only over the randomness sampled by the algorithm during the corresponding calls to RecolorDense, RecolorMatching and IncrMatching.
Lemma 15.
Suppose that the invariants hold before a call to RecolorDense(v) where , and . W.h.p. over the random colors it samples, the call ends in
time.
Proof.
Suppose first that is an inlier, i.e., such that and . We claim that every inlier has an available color in . Observe that this set is exactly . Under the Matching Invariant, at least colors are repeated in and since is uncolored, the accounting lemma (Lemma 10) implies that
where the second inequality uses the assumption that and is an integer. To find this color, the algorithm loops over vertices of and constructs the set in time. Then it loops over and stops when it finds a color that is not in . If contains more than colors, the algorithm finds a color after time. Otherwise, the algorithm goes over at most colors; hence ends in time.
Suppose now that is an outlier. The algorithm samples colors in until it finds one that is not used by (1) a matched vertex, (2) an external neighbor, or (3) an outlier. That is a color not in (lines 4-5). We claim that this set contains at most colors. At most colors are used by an external neighbors of . By Markov’s inequality, the number of vertices with or is at most . Finally, observe that we always have . Indeed, RecolorInsert and RecolorDelete call IncrMatching only when because . Calls to RecolorMatching(u,v) where and are already matched do not increase the matching size. Adding these up, at most colors are blocked by for . A random color in therefore works w.p. , so the probability that the algorithm tries more than random colors is . To verify if a random color works, the algorithm iterates through its color class in time. Then, there might be a additional call to RecolorDense(u) where , which results in the claimed runtime.
RecolorMatching colors and the same way RecolorDense colors an outlier. The complexity of recoloring an unmatched denser vertex (line 8) is upper bounded by Lemma 15. So we obtain the following lemma:
Lemma 16.
Suppose all invariants hold before a call to RecolorMatching(u,v). W.h.p., the call ends after time.
It remains to bound the time spent on fixing the matching size with IncrMatching.
Lemma 17.
Consider a call to IncrMatching(D) before which the Dense Balance Invariants holds but not the Matching Invariant. W.h.p., it ends in time and has increased by one.
Proof.
We may assume that since otherwise the Matching Invariant holds even with an empty matching. In particular, is not empty. If , at least anti-edges of have both endpoints unmatched because contains anti-edges while the current colorful matching is incident to at most of them for, say, . The probability that IncrMatching samples anti-edges in without finding any with both endpoints available is thus . Lemma 16 then implies the result.
4.3 Recoloring the Sparser Vertices
RefreshColoring ensures vertices of have sufficiently many colors in their palette at the beginning of the phase to maintain large palettes over entire phases. Importantly, vertices in conflict only with neighbors in . Recall that is the palette of colors can adopt with respect to .
Lemma 18.
Suppose RefreshColoring succeeded. A call to RecolorSparse(v) where and ends in
time with high probability over the random colors it samples.
Proof.
Call the set of colors used by at least two denser neighbors of . The algorithm samples colors in until it finds one in and we claim that this set contains at least colors at all time. It is easy to verify that , where is the set of available colors, hence that because . So we may assume that . Now, since RefreshColoring succeeded (Proposition 11), we have when the phase begins. As each update recolors at most one sparser vertex, after updates, we have that .
Each random color trial (line 2 in Algorithm 2) recolors with probability at least
Hence, w.h.p., the algorithm samples colors before finding one that fits. To verify if a color is suitable for recoloring , the algorithm iterates through vertices of and checks if they belong to or in time. Overall, finding a suitable takes time. Then, if there is a with color , RecolorSparse makes one call to RecolorDense(w) or RecolorMatching(w, matched[w]) which recolors in time by Lemmas 15 and 16.
Overall, our algorithm is efficient only if we can maintain small enough color classes. The Dense Balance property ensures that denser vertices contribute to each color class. The contribution of vertices in is accounted for separately, using an argument over the randomness of the entire phase:
Lemma 19.
Suppose that RefreshColoring succeeded. With high probability over the randomness of the whole phase, for every , its color class contains
(3) |
sparser vertices before every update of the phase.
Proof.
Fix . We show that Eq 3 holds before the -th update of this phase with high probability. Fix and count the number of sparser vertices to join during this phase. Define to indicate if, during the -th update, a sparser vertex joins . Observe that, by the success of RefreshColoring, RecolorSparse colors with a uniform color from a set of at least colors, regardless of the recolorings that occurred earlier in the phase. Hence, the probability that it recolors with is . By the Chernoff bound with stochastic domination (Lemma 24), we have with high probability. So before the -th update, the set contains the vertices initially present after RefreshColoring (Sparse Balance property in Proposition 11) and at most new ones. This holds for all by union bound and thus implies the lemma.
4.4 Proof of Theorem 1
Let be the universal constant from Lemma 6 and define and . When , we use the naive algorithm that scans the neighborhood of the vertices after each update. When , we use Algorithms 1 and 1. We consider a fixed phase and show that the amortized update time is with high probability. Recall that a phase is a sequence of updates, where is the universal constant described in Proposition 11. Moreover, one should bear in mind that the algorithm of Lemma 6 maintains an -sparser-denser decomposition with update time such that the vertex partition is fixed throughout the phase.
Correctness & Invariants. RefreshColoring gives a total and proper -coloring and neither RecolorInsert or RecolorDelete introduce color conflicts. By induction, the invariants hold w.h.p. before every update of the phase. Indeed, by Proposition 11, if RefreshColoring ends, w.h.p., the invariants hold before the first update of the phase. By Lemmas 13 and 14, invariants are maintained by RecolorDense, RecolorMatching and RecolorSparse, and thus are preserved after each update.
Efficiency. By Proposition 11, computing the fresh coloring takes time. With high probability, RefreshColoring succeeds and henceforth we condition on its success. By Lemmas 18, 15, and 16, w.h.p., each update ends in time. On the other hand, w.h.p., before every update and for every , we have that,
because, by the Dense Balance Invariant, each of the at most almost-cliques contributes at most two to , and by Lemma 19 for the sparse vertices. With the union bound, the bounds on the update time and the color class sizes hold with high probability. Overall, w.h.p., the amortized update time during this phase is
for and . ∎
5 Computing a Fresh Coloring
Recall that every updates, the algorithm recomputes a fresh coloring with nice properties. In this section, we provide and analyse this algorithm, thus prove the following proposition.
See 11
RefreshColoring begins by uncoloring every vertex to start from a fresh state. It then uses a procedure called OneShotColoring in which every vertex of tries a random color. It is a well-known fact that the remaining uncolored vertices have “slack” (Proposition 20). The algorithm colors the remaining vertices in in a random order, which ensures that the total number of color trials is [4]. Denser vertices do not participate in OneShotColoring to ensure that the partial coloring it produced does not hinder the construction of a colorful matching. This is why the Sparse Slack guarantee is only in terms of rather than . RefreshColoring colors almost-cliques sequentially, using the coloring primitives devised in Section 4.
Generating Slack.
To analyse the slack produced by OneShotColoring, we use the following well-known results in the distributed graph literature (e.g., [8, Lemma 3.3] or [17, Lemma 6.1] or even [23]). Authors of [1, Lemma A.1] and [4] use a version of this result for .
Proposition 20.
Suppose is a graph of maximal degree . If is the coloring at the end of OneShotColoring, then a -sparse vertex with has that444we did not attempt to optimize this constant
with probability at least .
Through Proposition 20, OneShotColoring ensures the Sparse Slack property.
Lemma 21.
Let and be as described by Proposition 11. With high probability, we have that
Proof.
RecolorSparse(v) ends only after properly coloring , so if the algorithm reaches line 6, all vertices of must be properly colored. We first prove that the Sparse Slack properly holds w.h.p. after OneShotColoring, then the claim about the Sparse Balance. Both hold simultaneously w.h.p. by union bound.
We claim that when OneShotColoring ends, w.h.p., every vertex has
(4) |
Indeed, consider a fixed vertex . By definition of , the vertex is -sparse in . Since we run OneShotColoring in , let us argue that is -sparse in . We may assume that has degree at least , since it otherwise always have available colors. If has at least neighbors in , then Eq 4 holds since ignores colors of neighbors in . Otherwise, contains at least anti-edges, of which at most are incident to , so is at least -sparse in . Since we assume that for some large enough hidden constant, Proposition 20 applies to and, with probability at least , we obtain that for . This holds w.h.p. for every vertex in by union bound over its vertices. This implies Eq 4 because after OneShotColoring, only vertices of are colored, thus . Calls to RecolorSparse colors extends the coloring; thus, when the algorithm reaches line 6, the Sparse Slack property holds.
Denote by the uncolored vertices of after OneShotColoring in the order by which they are colored by RefreshColoring. Consider a fixed and let us now prove that before the -th call to RecolorSparse for fixed a , w.h.p. over the randomness of sampled so far, we have . During OneShotColoring, a vertex may adopt the color it sampled and, by the classic Chernoff Bound, w.h.p., every color is sampled by at most vertices. Define the random variable indicating if is colored with by RecolorSparse. Eq 4 implies RecolorSparse colors with a color uniformly distributed in a set of size at least . More formally, we have that . So, we expect vertices to be colored with over the first calls to RecolorSparse. The Chernoff Bound (Lemma 24) shows that, w.h.p., when the algorithm makes its -th call to RecolorSparse, the color class contains vertices. This holds w.h.p. for every and by union bound.
As [4, Appendix A.1] showed, this algorithm makes color trials (executions of line 2 in RecolorSparse) with high probability. We obtain the following corollary by union bound: OneShotColoring runs in and each of the color trials takes time by Lemma 21.
Corollary 22.
W.h.p., RefreshColoring reaches line 6 in time, every vertex in is colored, the Sparse Slack and Sparse Balance properties hold.
Coloring Denser Vertices.
Random color trials in according to a random ordering could also be used for dense vertices, but as we need to ensure the Matching and Dense Balance invariants, we use a more tailord approach albeit similar.
Lemma 23.
Let , be as described by Proposition 11. For any partial coloring outside of such that the Sparse Slack, Sparse Balance and Dense Balance hold, w.h.p., RefreshColoring extends the coloring to (lines 7 to 9) in time and such that contains a colorful matching of size and the Dense Balance holds.
Proof.
Since we begin with all vertices uncolored, IncrMatching never makes a call to RecolorDense. Using the same argument as Lemma 17, it samples at most anti-edges before finding one with both endpoints unmatched and tries colors before finding one available for both endpoints. The algorithm therefore spends
time computing a colorful matching (line 7) because . Each call to IncrMatching increases the size of the colorful matching by one, so the matching and Dense Balance invariants hold when RefreshColoring reaches line 8.
By Markov’s inequality, contains at most outliers. Since all inliers of are uncolored, the accounting lemma (Lemma 10) shows that at least colors of the clique palette are available for each outlier . Hence, ColorDense(v) where ends in time with high probability. So the coloring of outliers in takes up to time.
Suppose inliers are colored in the order . When the algorithm calls ColorDense(), at least remain uncolored in . In particular, a random color is suitable to color with probability at least where is a constant. Indeed, recall that an inlier has and while has that for some constant (Definition 4). If , then and a random color in belongs to with probability at least . Otherwise, the accounting lemma (Lemma 10) implies that , which means that a random color in is available to with probability at least . In any case, after color tries, ColorDense finds a color for with high probability. Searching the color class for each color trial, the total time necessary to color inliers of is
because from the sparse and Dense Balance and . So the time needed to color is as it takes time to color the matching and outliers and time to color inliers.
Proposition 11 follows trivially from combining Corollaries 22 and 23 for all almost-cliques.
References
- [1] Sepehr Assadi, Yu Chen, and Sanjeev Khanna. Sublinear algorithms for vertex coloring. In the Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 767–786, 2019. doi:10.1137/1.9781611975482.48.
- [2] Sepehr Assadi and Helia Yazdanyar. Simple sublinear algorithms for ( + 1) vertex coloring via asymmetric palette sparsification. In Ioana Oriana Bercea and Rasmus Pagh, editors, 2025 Symposium on Simplicity in Algorithms, SOSA 2025, New Orleans, LA, USA, January 13-15, 2025, pages 1–8. SIAM, 2025. doi:10.1137/1.9781611978315.1.
- [3] Leonid Barenboim, Michael Elkin, Seth Pettie, and Johannes Schneider. The locality of distributed symmetry breaking. J. ACM, 63(3):20:1–20:45, 2016. doi:10.1145/2903137.
- [4] Soheil Behnezhad, Rajmohan Rajaraman, and Omer Wasim. Fully dynamic ( + 1)-coloring against adaptive adversaries. In Yossi Azar and Debmalya Panigrahi, editors, Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2025, New Orleans, LA, USA, January 12-15, 2025, pages 4983–5026. SIAM, 2025. doi:10.1137/1.9781611978322.169.
- [5] Sayan Bhattacharya, Deeparnab Chakrabarty, Monika Henzinger, and Danupon Nanongkai. Dynamic algorithms for graph coloring. In Artur Czumaj, editor, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 1–20. SIAM, 2018. doi:10.1137/1.9781611975031.1.
- [6] Sayan Bhattacharya, Fabrizio Grandoni, Janardhan Kulkarni, Quanquan C. Liu, and Shay Solomon. Fully dynamic ( +1)-coloring in O(1) update time. ACM Trans. Algorithms, 18(2):10:1–10:25, 2022. doi:10.1145/3494539.
- [7] Yi-Jun Chang, Manuela Fischer, Mohsen Ghaffari, Jara Uitto, and Yufan Zheng. The complexity of (+1) coloring in congested clique, massively parallel computation, and centralized local computation. In Peter Robinson and Faith Ellen, editors, Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC 2019, Toronto, ON, Canada, July 29 - August 2, 2019, pages 471–480. ACM, 2019. doi:10.1145/3293611.3331607.
- [8] Yi-Jun Chang, Wenzheng Li, and Seth Pettie. Distributed (+1)-coloring via ultrafast graph shattering. SIAM J. Comput., 49(3):497–539, 2020. doi:10.1137/19M1249527.
- [9] Yi-Jun Chang, Gopinath Mishra, Hung Thuan Nguyen, and Farrel D. Salim. Round and communication efficient graph coloring. CoRR, abs/2412.12589, 2024. doi:10.48550/arXiv.2412.12589.
- [10] Artur Czumaj, Peter Davies, and Merav Parter. Simple, deterministic, constant-round coloring in congested clique and MPC. SIAM J. Comput., 50(5):1603–1626, 2021. doi:10.1137/20M1366502.
- [11] Benjamin Doerr. Probabilistic tools for the analysis of randomized optimization heuristics. In Benjamin Doerr and Frank Neumann, editors, Theory of Evolutionary Computation - Recent Developments in Discrete Optimization, Natural Computing Series, pages 1–87. Springer, 2020. doi:10.1007/978-3-030-29414-4_1.
- [12] Devdatt P. Dubhashi and Alessandro Panconesi. Concentration of measure for the analysis of randomized algorithms. Cambridge University Press, Cambridge, 2009. doi:10.1017/CBO9780511581274.
- [13] Michael Elkin, Seth Pettie, and Hsin-Hao Su. (2 - l)-edge-coloring is much easier than maximal matching in the distributed setting. In Piotr Indyk, editor, Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 355–370. SIAM, 2015. doi:10.1137/1.9781611973730.26.
- [14] Maxime Flin, Mohsen Ghaffari, Magnús M. Halldórsson, Fabian Kuhn, and Alexandre Nolin. Coloring fast with broadcasts. In Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2023, Orlando, FL, USA, June 17-19, 2023, pages 455–465. ACM, 2023. doi:10.1145/3558481.3591095.
- [15] Maxime Flin, Mohsen Ghaffari, Magnús M. Halldórsson, Fabian Kuhn, and Alexandre Nolin. A distributed palette sparsification theorem. In Proceedings of the 2024 ACM-SIAM Symposium on Discrete Algorithms, SODA 2024, Alexandria, VA, USA, January 7-10, 2024. SIAM, 2024. doi:10.1137/1.9781611977912.142.
- [16] Maxime Flin and Parth Mittal. (+1) vertex coloring in O(n) communication. In Proceedings of the 43rd ACM Symposium on Principles of Distributed Computing, PODC 2024, Nantes, France, June 17-21, 2024, pages 416–424. ACM, 2024. doi:10.1145/3662158.3662796.
- [17] Magnús M. Halldórsson, Fabian Kuhn, Yannic Maus, and Tigran Tonoyan. Efficient randomized distributed coloring in CONGEST. In Samir Khuller and Virginia Vassilevska Williams, editors, STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 1180–1193. ACM, 2021. doi:10.1145/3406325.3451089.
- [18] Magnús M. Halldórsson, Alexandre Nolin, and Tigran Tonoyan. Ultrafast distributed coloring of high degree graphs. CoRR, abs/2105.04700, 2021. arXiv:2105.04700.
- [19] David G. Harris, Johannes Schneider, and Hsin-Hao Su. Distributed ( +1)-coloring in sublogarithmic rounds. J. ACM, 65(4):19:1–19:21, 2018. doi:10.1145/3178120.
- [20] Monika Henzinger and Pan Peng. Constant-time dynamic ( +1)-coloring. ACM Trans. Algorithms, 18(2):16:1–16:21, 2022. doi:10.1145/3501403.
- [21] William Kuszmaul and Qi Qi. The multiplicative version of azuma’s inequality, with an application to contention analysis. CoRR, abs/2102.05077, 2021. arXiv:2102.05077.
- [22] Nathan Linial. Locality in distributed graph algorithms. SIAM J. Comput., 21(1):193–201, 1992. doi:10.1137/0221015.
- [23] Michael Molloy and Bruce Reed. Graph colouring and the probabilistic method, volume 23. Springer Science & Business Media, 2002. doi:10.1007/978-3-642-04016-0.
- [24] Bruce A. Reed. , , and . J. Graph Theory, 27(4):177–212, 1998. doi:10.1002/(SICI)1097-0118(199804)27:4\%3C177::AID-JGT1\%3E3.0.CO;2-K.
Appendix A Concentration Inequalities
We use the Chernoff Bound to prove concentration of random variables around their expected values. To comply with dependencies between our random variables, we use a stronger form than the classic variant of the bound. We refer readers to [12] or [11, Section 1.10] or [21, Corollaries 6 and 14].
Lemma 24 (Chernoff bounds with stochastic domination).
Let be random variables and be any function of the first variables with value in . Consider . If there exists such that for every , then for any ,
(5) | ||||
Conversely, if there exists such that for every , then for any , | ||||
(6) |