Optimal decremental connectivity in planar graphs

We show an algorithm for dynamic maintenance of connectivity information in an undirected planar graph subject to edge deletions. Our algorithm may answer connectivity queries of the form 'Are vertices $u$ and $v$ connected by a path?' in constant time. The queries can be intermixed with any sequence of edge deletions, and the algorithm handles all updates in $O(n)$ time. This result improves over the previously known $O(n \log n)$-time algorithm.


Introduction
The dynamic graph connectivity problem consists of maintaining connectivity information about an undirected graph which is undergoing modifications. Typically, the modifications are additions or removals of edges or vertices. In this paper we focus on the problems in which each modification adds or removes a single edge. These problems have three variants: in the incremental version, edges can only be added to the graph; in the decremental one, edges may only be removed; whereas in the fully dynamic version both edge insertions and deletions are allowed. Graph updates are intermixed with a sequence of connectivity queries of the form 'Are vertices u and w in the same connected component?' We consider the decremental connectivity problem for planar graphs, and show an algorithm that answers connectivity queries in constant time and processes any sequence of edge deletions in O(n) time. The previously known best running time of O(n log n) was obtained by using a fully dynamic algorithm.

Prior work
It is easy to see that the incremental graph connectivity can be solved using an algorithm for the union-find problem. It follows from the result of Tarjan [14] that a sequence of n edge insertions and n queries can be handled in O(nα(n)) time, where α(n) is the extremely slowly growing inverse Ackermann function.
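For concreteness, the reduction of incremental connectivity to union-find can be sketched as follows. This is a textbook union-find with path compression and union by rank (not tied to any cited implementation); each edge insertion is a union, and each query compares two roots.

```python
class DSU:
    """Union-find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, v):
        # Path halving: point visited vertices closer to the root.
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1
        return True

# Incremental connectivity: insert edges (0,1) and (1,2).
dsu = DSU(5)
dsu.union(0, 1)
dsu.union(1, 2)
print(dsu.find(0) == dsu.find(2))  # True: 0 and 2 are connected
print(dsu.find(0) == dsu.find(4))  # False: 4 is isolated
```

With both heuristics, any sequence of n insertions and n queries runs in O(nα(n)) time, matching the bound quoted above.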
There has been a long line of research on fully dynamic connectivity in general graphs [5,2,7,9,17,10,19]. The best currently known algorithms have polylogarithmic update and query time. Thorup [17] has shown a randomized algorithm with O(log n (log log n)^3) amortized update and O(log n / log log log n) query time. An algorithm by Wulff-Nilsen [19] handles updates in slightly worse O(log^2 n / log log n) amortized time, but it is deterministic and answers queries in O(log n / log log n) time. The best algorithm with a worst-case update guarantee is a randomized algorithm by Kapron, King and Mountjoy [10], which processes updates in O(log^5 n) time and answers queries in O(log n / log log n) time.
For the decremental variant, Thorup [16] has shown a randomized algorithm which processes any sequence of edge deletions in O(m log(n^2/m) + n(log n)^3 (log log n)^2) time and answers queries in constant time. If m = Θ(n^2), the update time is O(m), whereas for m = Ω(n(log n log log n)^2) it is O(m log n).
The picture is much simpler in the case of planar graphs. Eppstein et al. [4] gave a fully dynamic algorithm which handles updates and queries in O(log n) amortized time, but requires that the embedding of the graph remains fixed. For the general case (i.e., when the embedding may change), Eppstein et al. [3] gave an algorithm with O(log^2 n) amortized update time and O(log n) query time.
In planar graphs, the best known solution for the incremental connectivity problem is the union-find algorithm. However, for the special case when the final planar graph is given upfront, and the edge insertions and queries then arrive online, Gustedt [6] has shown an O(n) time algorithm. On the other hand, for the decremental problem nothing better than a direct application of the fully dynamic algorithms is known. This is different from both general graphs and trees, where the decremental connectivity problem has better solutions than what could be achieved by a simple application of its fully dynamic counterpart. In the case of general graphs, the best total update time is O(m log n) [16] (except for very sparse graphs, including planar graphs), compared to O(m log n (log log n)^3) time for the fully dynamic variant. For trees, only O(n) time is necessary to perform all updates in the decremental scenario [1], while in the fully dynamic case one can use dynamic trees and obtain O(log n) worst-case update time.
There has also been some progress in obtaining lower bounds for dynamic connectivity problems. Tarjan and La Poutré [15,13] have shown that incremental connectivity requires Ω(α(n)) time per operation on a pointer machine. Henzinger and Fredman [8] considered the fully dynamic problem in the RAM model and obtained a lower bound of Ω(log n / log log n), which also holds for plane graphs. This was improved by Demaine and Pătrașcu [12] to a lower bound of Ω(log n) in the cell-probe model; this lower bound also holds for plane graphs.

Our results
We show an algorithm for the decremental connectivity problem in planar graphs which processes any sequence of edge deletions in O(n) time and answers queries in constant time. This improves over the previous bound of O(n log n), which can be obtained by applying the fully dynamic algorithm of Eppstein et al. [4], and matches the running time of decremental connectivity on trees [1].
In fact, we present an O(n) time reduction from the decremental connectivity problem to a collection of incremental problems in graphs of total size O(n). These incremental problems have a specific structure: the set of allowed union operations forms a planar graph and is given in advance. As shown by Gustedt [6], such a problem can be solved in linear time. Our result shows that, in terms of total update time, the decremental connectivity problem in planar graphs is no harder than the incremental one. It should be noted that a union-find algorithm can process any sequence of k query or update operations in O(kα(n)) time, while our algorithm requires O(n) time to process any sequence of edge deletions and answers queries in constant time.
Moreover, since fully dynamic connectivity has a lower bound of Ω(log n) per operation (even in plane graphs), shown by Demaine and Pătrașcu [12], our result implies that in planar graphs decremental connectivity is strictly easier than fully dynamic connectivity. We suspect that the same holds for general graphs, and we conjecture that it is possible to break the Ω(log n) bound for a single operation of a decremental connectivity algorithm, or the Ω(m log n) bound for processing a sequence of m edge deletions.
Our algorithm, unlike the majority of algorithms for maintaining connectivity, does not maintain a spanning tree of the current graph. As a result, it does not have to search for a replacement edge when an edge of the spanning tree is deleted. It is based on a novel and very simple approach for detecting bridges, which alone gives O(n log n) total time. We use the fact that the deletion of an edge uw causes some connected component to split if both sides of uw belong to the same face. This condition can in turn be verified by solving an incremental connectivity problem in the dual graph. When we detect a deletion that splits a connected component, we start two parallel DFS searches from u and w to identify the smaller of the two new components. Once the first search finishes, the other one is stopped. A simple argument shows that this algorithm runs in O(n log n) time.
We then show that the DFS searches can be sped up using an r-division, that is, a decomposition of a planar graph into subgraphs of size at most r = log^2 n. This gives an algorithm running in O(n log log n) time. To further illustrate this idea, we show how to apply it recursively in order to obtain an O(n log* n) time algorithm. However, we observe that it is enough to use this recursion only twice. This is because the O(n log log n) time algorithm, as an intermediate step, reduces the problem of maintaining connectivity in the input graph to maintaining connectivity in a number of graphs of size at most r = log^2 n. By using this reduction twice, we reduce the problem to graphs of size O(log^2 log n). The number of such graphs is so small that we can simply precompute the answers for all of them and use these precomputed answers to obtain the main result of the paper. Preprocessing all graphs of bounded size is, to the best of our knowledge, an idea that has not previously been used for designing dynamic graph algorithms.

Organization of the paper
In Section 2 we introduce notation and recall some of the concepts that we later use. The following sections describe our algorithm. We start with the description of the simple O(n log n) time algorithm in Section 3, and then in every section we show an improvement in the running time.
In Section 4 we show how to use an r-division to get an O(n log log n) algorithm. Section 5 shows how to improve the reduction so that it can be used more than once, which results in an O(n log* n) time algorithm. Finally, in Section 6 we show how to solve decremental connectivity in optimal time for graphs of size O(log^2 log n), after initial preprocessing. This, combined with the reduction applied twice, gives the main result of the paper.

Preliminaries
Let G = (V, E) be an undirected, unweighted planar graph, and let n = |V|. By V(G), E(G) and F(G) we denote the sets of vertices, edges and faces of G, respectively. By Euler's formula, |V(G)| − |E(G)| + |F(G)| = 1 + |C(G)|, where C(G) is the set of connected components of G. The dual graph G* is constructed from G by embedding a single vertex in every face of G and connecting the vertices in adjacent faces of G. Note that if two faces f1, f2 share more than one edge, G* has multiple edges between f1 and f2.
In the paper we deal with algorithms that maintain the connectivity information about a graph G subject to edge deletions. By the total running time we denote the total time of handling deletions of all edges from the graph.
The identifier of a connected component (henceforth called a cc-identifier) is a value assigned to each vertex v ∈ V which uniquely identifies the connected component of v, i.e., two vertices have the same cc-identifier if and only if they belong to the same connected component. The cc-identifiers may change as edges are deleted. An algorithm maintains cc-identifiers explicitly if after every deletion it returns the list of changes to the cc-identifiers. We assume that cc-identifiers are O(log n)-bit integers. Note that an algorithm which maintains cc-identifiers explicitly can easily be turned into an algorithm with constant query time: in order to answer a query regarding two vertices, it suffices to compare their cc-identifiers, which by definition are equal if and only if the vertices are in the same connected component.
Let us now recall the notion of an r-division. A region R is an edge-induced subgraph of G. A boundary vertex of a region R is a vertex v ∈ V(R) that is incident to an edge e ∉ E(R). We denote the set of boundary vertices of a region R by ∂(R). An r-division P of G is a partition of G into O(n/r) edge-disjoint regions (which might share vertices), such that each region contains at most r vertices and O(√r) boundary vertices. The set of boundary vertices of a division P, denoted ∂(P), is the union of the sets ∂(R) over all regions R of P. Note that |∂(P)| = O(n/√r).
Let G be a planar graph. In the preprocessing phase of our algorithms, we build an r-division of G. This r-division will be updated in a natural way, as edges are deleted from G. Namely, when an edge is deleted from the graph, we update its r-division by deleting the corresponding edge. However, if we strictly follow the definition, what we obtain may no longer be an r-division.
For that reason, we loosen the definition of an r-division so that it includes the divisions obtained by deleting edges. Consider an r-division P built for a graph G. Moreover, let G' be a graph obtained from G by deleting edges, and let P' be the r-division P updated in the following way. Let R be a region of P. Then, we define the graph R' obtained from R by removing the deleted edges to be a region of P', although R' may no longer be an edge-induced subgraph of G', e.g., it may contain isolated vertices. Similarly, we define the set of boundary vertices of P' to be the set of boundary vertices of P. Note that, according to this definition, a boundary vertex v of P' may be incident to edges of a single region only (because the edges incident to v that belonged to other regions have been deleted). In the following, we say that P' is an r-division of G'.
Since Lemma 2.1 requires the graph to be biconnected and triangulated, in order to obtain an r-division for a graph which does not have these properties, we first add edges to G to make it biconnected and triangulated, then compute the r-division, and finally delete the added edges both from G and from its division.
Without loss of generality, we can assume that each vertex v ∈ V has degree at most 3. This can be assured by triangulating the dual graph in the very beginning. In particular, this assures that each vertex belongs to a constant number of regions in an r-division.
We assume that all logarithms are binary. We define log^(0) n := n and, for t ≥ 1, log^(t) n := log^(t−1)(log n). Moreover, we define the iterated logarithm log* n := min{t ∈ N : log^(t) n ≤ 1}.
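A direct transcription of these definitions (the function names are ours):

```python
import math

def iter_log(n, t):
    """log^(t) n: apply the binary logarithm t times, starting from log^(0) n = n."""
    for _ in range(t):
        n = math.log2(n)
    return n

def log_star(n):
    """log* n: the least t such that log^(t) n <= 1."""
    t = 0
    while n > 1:
        n = math.log2(n)
        t += 1
    return t

print(log_star(16))       # 3: 16 -> 4 -> 2 -> 1
print(log_star(2 ** 16))  # 4: 65536 -> 16 -> 4 -> 2 -> 1
```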

O(n log n) time algorithm
Let G be a planar graph subject to edge deletions. We call an edge deletion critical if and only if it increases the number of connected components of G, i.e., the deleted edge is a bridge in G. We first show a dynamic algorithm that for every edge deletion decides whether it is critical. It is based on a simple relation between the graph G and its dual.

Lemma 3.1.
Let G be a planar graph subject to edge deletions. There exists an algorithm that processes any sequence of edge deletions in O(n) total time and, after each deletion, reports whether it was critical.

Proof. We maintain the number of faces of G. When an edge e is deleted, we simply have to merge the faces on both sides of e (if they are distinct). This can be implemented using a union-find data structure on the vertices of the dual graph.
More formally, we build and maintain a graph D G . Initially, this is a graph consisting of vertices of G * (faces of G). When an edge is deleted from G, we add its dual edge to D G (see Fig. 1). Clearly, the connected components of D G are exactly the faces of G. Since edges are only added to D G , we can easily maintain the number of connected components in D G with a union-find data structure.
This allows us to detect critical deletions in G. After every edge deletion, we know the number of edges and vertices of G. Moreover, we know that the number of faces of G is equal to the number of connected components of D G , which we also maintain. As a result, by Euler's formula, we get the number of connected components of G, so in particular we may check if the deletion caused the number of connected components to increase. The algorithm executes O(n) find and union operations on the union-find data structure.
However, the sequence of union operations has a certain structure. Let G 1 be the initial version of the graph G (before any edge deletion). Observe that each union operation takes as arguments the endpoints of an edge of G * 1 . The variant of the union-find problem, in which the set of allowed union operations forms a planar graph given during initialization, was considered by Gustedt [6]. He showed that for this special case of the union-find problem there exists an algorithm that may execute any sequence of O(n) operations in O(n) time, given an n-vertex planar graph. Thus, we infer that our algorithm runs in O(n) time.
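The detection scheme above can be sketched as follows. This toy version assumes the embedding is given as a map from each edge to the faces on its two sides, and uses a plain union-find in place of Gustedt's linear-time structure (all names are ours):

```python
class DSU:
    """Minimal union-find with path halving."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, v):
        while self.p[v] != v:
            self.p[v] = self.p[self.p[v]]
            v = self.p[v]
        return v
    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False
        self.p[rv] = ru
        return True

def make_detector(edge_faces):
    """edge_faces: dict edge -> (face id, face id), the two sides of the edge
    in the fixed embedding. Returns delete(e) -> True iff deleting e is critical."""
    n_faces = 1 + max(max(fs) for fs in edge_faces.values())
    dual = DSU(n_faces)  # the graph D_G: starts with no edges
    def delete(e):
        f1, f2 = edge_faces[e]
        # The union fails exactly when both sides of e already lie in the
        # same face of the current graph, i.e., e is a bridge.
        return not dual.union(f1, f2)
    return delete

# A 4-cycle 0-1-2-3: inner face 0, outer face 1 on every edge.
edge_faces = {(0, 1): (0, 1), (1, 2): (0, 1), (2, 3): (0, 1), (3, 0): (0, 1)}
delete = make_detector(edge_faces)
print(delete((0, 1)))  # False: a cycle edge, the two faces merge
print(delete((1, 2)))  # True: on the remaining path, this edge is a bridge
```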
We can now use Lemma 3.1 to show a simple decremental connectivity algorithm that runs in O(n log n) total time.

Lemma 3.2.
Let G be a planar graph subject to edge deletions. There exists a decremental connectivity algorithm that for every vertex of G maintains its cc-identifier explicitly. It runs in O(n log n) total time.
Proof. We use Lemma 3.1 to detect critical deletions. When an edge uw is deleted and the deletion is not critical, nothing has to be done. Otherwise, after a critical deletion, some connected component C breaks into two components C_u and C_w (u ∈ C_u, w ∈ C_w), and we start two parallel depth-first searches from u and w. We stop both searches once the first of them finishes; w.l.o.g. assume that it is the search started from u. Thus, we know that the size of C_u is at most half the size of C. We can now iterate through all vertices of C_u and change their cc-identifiers to a new unique number. All these steps require O(|C_u|) time. The running time of the algorithm is proportional to the total number of changes of the cc-identifiers. Since every vertex changes its identifier only when the size of its connected component halves, we infer that the total running time is O(n log n).
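The two lockstep searches can be sketched as follows; this is a simplified illustration with names of our choosing, advancing both searches one vertex at a time and relabeling whichever side is exhausted first:

```python
def relabel_smaller(adj, ccid, u, w, new_id):
    """After a critical deletion of edge uw, find the smaller of the two new
    components by two lockstep DFS searches and assign it a fresh cc-identifier.
    adj: adjacency lists after the deletion; ccid: mutable list of identifiers."""
    def dfs(start):
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            yield v
            for x in adj[v]:
                if x not in seen:
                    seen.add(x)
                    stack.append(x)
        yield None  # sentinel: this side finished first

    a, b = dfs(u), dfs(w)
    visited_a, visited_b = [], []
    while True:
        va, vb = next(a), next(b)  # advance both searches in lockstep
        if va is None:
            smaller = visited_a
            break
        if vb is None:
            smaller = visited_b
            break
        visited_a.append(va)
        visited_b.append(vb)
    for v in smaller:
        ccid[v] = new_id

# Path 0-1 and path 2-3-4, after deleting edge (1, 2): side {0, 1} is smaller.
adj = {0: [1], 1: [0], 2: [3], 3: [2, 4], 4: [3]}
ccid = [0] * 5
relabel_smaller(adj, ccid, 1, 2, new_id=1)
print(ccid)  # [1, 1, 0, 0, 0]
```

The cost is proportional to the size of the smaller component, which is what the halving argument in the proof charges for.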

O(n log log n) time algorithm
In order to speed up the O(n log n) algorithm, we need to speed up the linear-time depth-first searches that are run after a critical edge deletion. We build an r-division P of G for r = log^2 n and use a separate decremental connectivity algorithm to maintain the connectivity information inside each region. On top of that, we maintain a skeleton graph that represents the connectivity between the boundary vertices (and possibly some other vertices that we consider important). Loosely speaking, since the number of boundary vertices is O(n/log n), we can afford a cost of O(log n) for maintaining the cc-identifier of each of them. The skeleton graph is also planar, but our algorithms do not use this property.
In our algorithm we will update the skeleton graph of G, as edges are deleted. Similarly to the O(n log n) algorithm, we need a way of detecting whether an edge deletion in G increases the number of connected components in the skeleton graph.

Lemma 4.2. Let G be a dynamic planar graph subject to edge deletions. Assume that we maintain its skeleton graph G_s computed for an r-division P and a skeleton set V_s. An edge deletion in G causes an increase in the number of connected components of G_s if and only if the deletion is critical in G and there exists a region of P in which the deletion disconnects some two vertices of V_s.
Before we proceed with the proof, let us note that both conditions of the lemma are necessary. In particular, a critical deletion in G may not disconnect any two vertices of the skeleton set within a region (e.g., edge uw in Fig. 2c, whose deletion does not affect the skeleton graph at all). It may also happen that the deletion is not critical in G, but inside some region it disconnects two vertices of V_s (e.g., edge xy in Fig. 2c).
Proof. Recall that two vertices of V_s are connected in G iff they are connected in G_s. (⇒) If two vertices of V_s become disconnected in G_s, they also become disconnected in G, so the edge deletion is critical. The deletion has to disconnect some two vertices within a region, because otherwise the graph G_s would not change at all. (⇐) Assume that the deletion disconnects vertices u, w ∈ V_s within a region R. Thus, the deleted edge was on some path from u to w. Since the edge deletion is critical in G, the deleted edge was a bridge in G. After the deletion there is no path from u to w in G, and consequently none in G_s.
We are ready to show the main building block of our O(n log log n) algorithm.

Lemma 4.3.
Assume there exists a decremental connectivity algorithm for planar graphs that maintains the cc-identifiers of all vertices explicitly and processes any sequence of edge deletions in f(n) total time, where f is nondecreasing. Then there exists a decremental connectivity algorithm for planar graphs that answers queries in constant time and processes any sequence of edge deletions in O(n + n · f(log^2 n)/log^2 n) total time.

Proof. We build an r-division P of G for r = log^2 n. By Lemma 2.1, this takes O(n) time. For each region of the division, we run the assumed decremental algorithm to handle edge deletions. Moreover, we use Lemma 3.1 to detect critical deletions in G.
We build the skeleton graph G_s for G, the r-division P, and the skeleton set V_s = ∂(P). We maintain G_s as edges are deleted, that is, the deletions in G are reflected in G_s. This can be done using the decremental algorithms that we run for every region. Since they maintain the cc-identifiers explicitly (we call these identifiers local cc-identifiers), we may detect the moment when some two vertices of V_s become disconnected within one region and G_s needs to be updated. Note that if a deletion causes t local cc-identifiers to change, we may update G_s in O(t) time, so the time for updating G_s is linear in the number of local cc-identifiers that change.
For every vertex of G_s, we maintain its cc-identifier (called a global cc-identifier). Once G_s is updated after an edge deletion, we use Lemma 4.2 to check whether the number of connected components of G_s has increased. According to the lemma, it suffices to check whether the deletion is critical in G (this is reported by the algorithm of Lemma 3.1) and whether some two skeleton vertices became disconnected within some region (this can be checked easily by inspecting the changes of the local cc-identifiers).
When we detect that the number of connected components of the skeleton graph G_s has increased, similarly to the O(n log n) algorithm, we run two parallel DFS searches in G_s to identify the smaller of the two new connected components. After that, we update the global cc-identifiers.
In order to answer a query regarding two vertices u and w, we perform two checks. First, if the vertices belong to the same region, we check whether there exists a path connecting them that does not contain any boundary vertices. This can be done by querying the decremental algorithm for the appropriate region.
Then, we check whether there is a path from u to w that contains some boundary vertex. For each of the two vertices, we find an arbitrary boundary vertex (b_u and b_w, respectively) that it is connected to (note that with no additional overhead we may maintain, for each region and each local cc-identifier, a list of boundary vertices with this cc-identifier). Then, we check whether b_u and b_w have the same global cc-identifier.
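The two-check query can be sketched as follows. All helper names here are hypothetical stand-ins for the structures described above, instantiated on a toy two-region instance:

```python
def connected(u, w, region_of, local_query, boundary_rep, global_ccid):
    """Two-check query. Hypothetical helpers: region_of(v) -> region of v;
    local_query(R, u, w) -> are u, w connected inside region R;
    boundary_rep(v) -> a boundary vertex in v's local component, or None;
    global_ccid(b) -> global cc-identifier of boundary vertex b."""
    # Check 1: a path inside a common region.
    if region_of(u) == region_of(w) and local_query(region_of(u), u, w):
        return True
    # Check 2: a path passing through some boundary vertex.
    bu, bw = boundary_rep(u), boundary_rep(w)
    if bu is None or bw is None:
        return False
    return global_ccid(bu) == global_ccid(bw)

# Toy instance: region A = {0, 1, 2} (edge 0-1, vertex 2 isolated),
# region B = {3, 4} (edge 3-4); boundary vertices 1 and 3 share the
# global cc-identifier 7 in the skeleton graph.
region_of = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B'}.get
local = {('A', 0, 1), ('A', 1, 0), ('B', 3, 4), ('B', 4, 3)}
local_query = lambda R, u, w: u == w or (R, u, w) in local
boundary_rep = {0: 1, 1: 1, 2: None, 3: 3, 4: 3}.get
global_ccid = {1: 7, 3: 7}.get

print(connected(0, 4, region_of, local_query, boundary_rep, global_ccid))  # True
print(connected(2, 4, region_of, local_query, boundary_rep, global_ccid))  # False
```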
Let us now analyze the running time. The algorithm of Lemma 3.1 requires O(n) time. The decremental algorithms run inside the regions take O(n · f(r)/r) = O(n · f(log^2 n)/log^2 n) time. Lastly, we bound the running time of the DFS searches performed to update the global cc-identifiers. We use an argument similar to the one in the proof of Lemma 3.2. The skeleton graph has O(n/log n) vertices, and each global cc-identifier can change at most O(log(n/log n)) = O(log n) times. Hence, the DFS searches require O((n/log n) · log n) = O(n) time. The lemma follows.
By applying Lemma 4.3 to the algorithm of Lemma 3.2, for which f(n) = O(n log n), we obtain a decremental connectivity algorithm that processes any sequence of edge deletions in O(n · f(log^2 n)/log^2 n) = O(n log log n) total time and answers queries in constant time.

O(n log * n) time algorithm
In order to obtain a faster algorithm, we would like to use Lemma 4.3 multiple times, starting from the O(n log n) algorithm, and each time applying the lemma to the algorithm obtained in the previous step. This, however, cannot be done directly. While the lemma requires an algorithm that maintains all cc-identifiers explicitly, it does not produce an algorithm with this property. We deal with this problem in this section.
Observe that in the proof of Lemma 4.3 we only used the algorithms to maintain the cc-identifiers of the vertices of the skeleton set. We show that we can adapt our algorithms to maintain only some of the cc-identifiers.

Lemma 5.1. Assume there exists a decremental connectivity algorithm for planar graphs that, given a graph G = (V, E) and a set V_e ⊆ V (called an explicit set):
• maintains cc-identifiers of the vertices of V_e explicitly,
• processes updates in f(n) + O(|V_e| log n) time,
• may return the cc-identifier of any vertex in g(n) time,
where f(n) and g(n) are nondecreasing functions.
Then, there exists a decremental connectivity algorithm for planar graphs which, given a graph G = (V, E) and a set V_e ⊆ V:
• maintains cc-identifiers of the vertices of V_e explicitly,
• processes updates in O(n + |V_e| log n + n · f(log^2 n)/log^2 n) time,
• may return the cc-identifier of any vertex in g(log^2 n) + O(1) time.
Proof. We build an r-division P of G for r = log^2 n. By Lemma 2.1, this takes O(n) time. We also build a skeleton graph G_s, taking the skeleton set V_s := V_e ∪ ∂(P).
For each region of P, we run the assumed decremental connectivity algorithm. Observe that in the proof of Lemma 4.3, we only need these algorithms to explicitly maintain the cc-identifiers of vertices of V_s. Thus, the set of explicit vertices for the algorithm run in a region R is V_s ∩ V(R). The decremental algorithm run for R maintains the local cc-identifiers of these vertices.
We maintain the global cc-identifiers in the skeleton graph G_s in the same way as in the proof of Lemma 4.3. The only difference is that now the skeleton set V_s is bigger. Since V_s = V_e ∪ ∂(P), this requires O(n + |V_s| log n) = O(n + (|V_e| + n/√r) log n) = O(n + (|V_e| + n/log n) log n) = O(n + |V_e| log n) time. Thus, the update time is O(n + |V_e| log n + n · f(log^2 n)/log^2 n).
Since the cc-identifiers of vertices of G_s are maintained explicitly, in particular we explicitly maintain the cc-identifiers of the vertices of V_e. It remains to describe the process of computing the global cc-identifier of an arbitrary vertex v ∈ V. We first query the decremental algorithm run for the region R containing v (in case v is a boundary vertex, we may use an arbitrary region containing it) to obtain the local cc-identifier of v. We then check whether there exists a vertex in V_s ∩ V(R) that has the same local cc-identifier as v. Since the local cc-identifiers of elements of V_s ∩ V(R) are maintained explicitly, at no additional overhead we may simply maintain lists of these vertices, grouped by their local cc-identifier. If there is a vertex among V_s ∩ V(R) with the same local cc-identifier as v, we return its global cc-identifier (maintained explicitly). Otherwise, we return a new cc-identifier by encoding as an integer the pair consisting of the identifier of the region containing v (which requires O(log n) bits) and the local cc-identifier of v (which requires O(log log n) bits). Thus, obtaining the cc-identifier of an arbitrary vertex requires g(log^2 n) + O(1) time.
In order to obtain a faster algorithm, we use Lemma 5.1 multiple times. We prove inductively that for t = 1, 2, . . . there exists an algorithm A_t which processes updates in O(tn + n log^(t) n + |V_e| log n) time and returns the cc-identifier of any vertex in O(t) time. The basis of the induction (algorithm A_1) is the algorithm of Lemma 3.2, which maintains all cc-identifiers explicitly. Now, consider t > 1, and denote by f_t(n) the update time of algorithm A_t. We construct algorithm A_t by applying Lemma 5.1 to A_{t−1}. The total update time is

O(n + |V_e| log n + n · f_{t−1}(log^2 n)/log^2 n)
  = O(n + |V_e| log n + (n/log^2 n) · ((t−1) log^2 n + log^2 n · log^(t−1) log^2 n))
  = O(n + |V_e| log n + n((t−1) + log^(t−1) log^2 n))
  = O(tn + |V_e| log n + n log^(t−1) log^2 n)
  = O(tn + |V_e| log n + n log^(t) n).

For t = log* n and V_e = ∅ we obtain an algorithm that processes all updates in O(n log* n) time and answers queries in O(log* n) time.
From a formal point of view, a comment regarding the recursion is necessary. When applying Lemma 5.1, we reduce the problem of maintaining connectivity in a graph on n vertices to a collection of O(n/r) graphs of size at most r. In principle, this statement allows the total size of all graphs to grow by a constant factor each time we apply the recursion. However, this cannot happen, as we divide the graph using an r-division; in particular, when creating smaller subproblems we partition the edges of the graph. In the following section, where we show the main result of the paper, we apply Lemma 5.1 only twice, so this issue does not arise.

O(n) time algorithm
In this section we finally show an algorithm that runs in O(n) time. We view Lemma 5.1 as a reduction from the problem of maintaining connectivity in a graph of size n to the same problem in a collection of graphs of size log^2 n, whose total size is O(n). The algorithm run for a region R is given the set V_e ∩ V(R) as its explicit set. Moreover, the query time increases by a constant. This reduction has an overhead of O(n + |V_e| log n).
If we use V_e = ∅ and apply this reduction twice, we obtain that in order to maintain connectivity in an n-vertex graph, it suffices to maintain connectivity in graphs of at most O(log^2 log n) vertices and total size O(n). We also pay O(n) for this reduction. However, the number of graphs on at most O(log^2 log n) vertices is so small that we can simply precompute their connected components.

Lemma 6.1.
Let t = O(log^2 log n). After o(n)-time preprocessing, there exists a decremental connectivity algorithm for planar graphs on at most t vertices that, given an explicit set V_e, maintains the cc-identifiers of the vertices of V_e explicitly, processes each edge deletion in constant time plus the number of reported changes, and answers queries in constant time.

Proof. We will call the set V_e the explicit set. The state of the algorithm is uniquely described by the current set of edges in the graph and the explicit set. There are 2^(t(t−1)/2) labeled graphs on t vertices (including non-planar ones) and O(2^t) possible explicit sets. Thus, there are 2^(O(t^2)) possible states, which, for t = O(log^2 log n), gives 2^(O(log^4 log n)) = 2^(o(log n)) = o(n). In particular, each state can be encoded as a binary string of length O(log^4 log n), which fits in a single machine word.
For each state, we precompute cc-identifiers. Moreover, for each pair of state and an edge to be deleted, we compute the changes to the cc-identifiers of vertices in the explicit set. Observe that if the edge deletion is critical, we simply need to compute the set of vertices in the smaller out of the two connected components that are created and store the intersection of this set and V e . These vertices should be assigned new, unique cc-identifiers.
We encode the graph by a binary word of length O(log^4 log n), where each bit represents an edge between some pair of vertices. Thus, when an edge is deleted, we may compute the new state of the algorithm in constant time by switching off a single bit. For any planar graph and any sequence of deletions, the number of changes of cc-identifiers of vertices of V_e is O(|V_e| log n) (by an analysis similar to the one from the proof of Lemma 3.2). The query time is constant, since the cc-identifiers are maintained explicitly. For each of the 2^(O(log^4 log n)) states, we require O(log^4 log n) preprocessing time. Thus, the total preprocessing time is o(n).
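A toy version of this precomputation, feasible only for very small t (as with t = O(log^2 log n) in the lemma); names and the tiny triangle example are ours. Each graph is a bitmask over the t(t−1)/2 vertex pairs, a deletion clears one bit, and the components of every state are tabulated upfront:

```python
from itertools import combinations

def precompute_components(t):
    """For every labeled graph on t vertices, encoded as a bitmask over the
    t*(t-1)/2 vertex pairs, precompute a cc-identifier for each vertex."""
    pairs = list(combinations(range(t), 2))
    table = {}
    for mask in range(1 << len(pairs)):
        # Union-find over the edges present in this mask.
        parent = list(range(t))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for i, (u, w) in enumerate(pairs):
            if mask >> i & 1:
                parent[find(u)] = find(w)
        table[mask] = tuple(find(v) for v in range(t))
    return pairs, table

# Triangle on 3 vertices: deleting one edge keeps it connected,
# deleting a second edge isolates a vertex.
pairs, table = precompute_components(3)
mask = (1 << len(pairs)) - 1              # all three edges present
mask &= ~(1 << pairs.index((0, 1)))       # delete edge (0, 1): one bit flip
print(table[mask][0] == table[mask][1])   # True: still connected via vertex 2
mask &= ~(1 << pairs.index((1, 2)))       # delete edge (1, 2)
print(table[mask][0] == table[mask][1])   # False: vertex 1 is now isolated
```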
By applying Lemma 5.1 to the algorithm of Lemma 6.1, and then applying Lemma 5.1 again to the resulting algorithm, we obtain the main result of the paper.

Theorem 6.2. There exists a decremental connectivity algorithm for planar graphs that supports updates in O(n) total time and answers queries in constant time.

Conclusion and open problems
We have shown a reduction from the decremental connectivity problem in planar graphs to incremental connectivity. As a result, we obtain an algorithm for decremental connectivity that processes all updates in optimal O(n) time and answers queries in constant time. This shows that the total time complexity of the decremental problem is not Ω(n log n), which seemed to be a natural bound. In other words, we have shown that a lower bound of Ω(n log n), analogous to the lower bound of [12], cannot hold for decremental algorithms in planar graphs. We actually conjecture that even for general graphs there exists an o(n log n) time decremental algorithm.
An interesting open question is to determine the worst-case time complexity of a single operation for decremental connectivity in planar graphs, which is not yet fully understood. Contrary to the incremental problem, no nontrivial lower bounds are known.