Succinct Data Structures for Chordal Graphs with Bounded Leafage or Vertex Leafage
Abstract
We improve the recent succinct data structure result of Balakrishnan et al. for chordal graphs with bounded vertex leafage (SWAT 2024). Chordal graphs form a widely studied graph class that can be characterized as the intersection graphs of subtrees of a host tree; such a tree together with its subtrees is called a tree representation of the chordal graph. The vertex leafage and leafage parameters of a chordal graph concern the existence of a tree representation with a bounded number of leaves in either the subtrees representing the vertices or the host tree itself, respectively.
We simplify the lower bound proof of Balakrishnan et al., which applied only to chordal graphs with bounded vertex leafage, and extend it to a lower bound proof for chordal graphs with bounded leafage as well. For both classes of graphs, the information-theoretic lower bound we (re-)obtain is bits, where the leafage or vertex leafage of the graph is at most . We further extend the range of the parameter to as well.
Then we give a succinct data structure using bits to answer adjacent queries, which test the adjacency between pairs of vertices, in time compared to the time of the data structure of Balakrishnan et al. For the neighborhood query which lists the neighbours of a given vertex, our query time is per neighbour compared to per neighbour.
We also extend the data structure ideas to obtain a succinct data structure for chordal graphs with bounded leafage , answering an open question of Balakrishnan et al. Our succinct data structure, which uses bits, has query time for the adjacent query and per neighbour for the neighborhood query. Using slightly more space (an additional bits for any ) allows distance queries, which compute the number of edges in the shortest path between two given vertices, to be answered in time as well.
Keywords and phrases: Chordal Graph, Leafage, Vertex Leafage, Succinct Data Structure
2012 ACM Subject Classification: Theory of computation → Data compression; Theory of computation → Data structures design and analysis
Funding: This work is supported by NSERC.
Editors: Pat Morin and Eunjin Oh
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Chordal graphs and related graph classes are among the most well studied classes of graphs [13, 10, 20, 18, 23, 11, 29, 6]. One of the first instances where chordal graphs are encountered is in the study of Gaussian elimination of sparse matrices [29], which leads to a characterization of chordal graphs based on an elimination ordering. Another characterization is that there are no induced cycles of length four or more [10]. A third characterization, which is more suitable for data structures, is the characterization as an intersection graph. For each chordal graph , there exists a host tree (which is unrooted) and subtrees of (one for each vertex), such that two vertices and are adjacent in if and only if and intersect at some node of [18].
It is this last characterization of chordal graphs that allows us to (easily) generate many interesting subclasses of chordal graphs, by imposing conditions on either the subtrees or the host tree . (This is not the only way to characterize these subclasses of graphs. For instance, interval graphs can also be characterized as chordal graphs with no asteroidal triples.) For example, if we impose the condition that the host tree is a path (which also forces the subtrees themselves to be paths), then the subclass of graphs we generate is the class of interval graphs [7]. If we instead only impose that the subtrees themselves are paths, then we generate the class of path graphs [17]. If we further impose conditions on these path subtrees, such as selecting a root in the host tree and insisting that each path is a subpath of a root-to-leaf path, then we obtain an RDV (rooted, directed, vertex) graph [24].
If we impose the condition that the host tree has at most leaves (as the tree is unrooted, a leaf is a degree 1 node), then this parameter is known as the leafage of the chordal graph [23]. We may alternatively impose that only the subtrees rather than the host tree have at most leaves each (i.e., at most degree 1 nodes each). Then this parameter is known as the vertex leafage [11]. A chordal graph with bounded leafage has a tree representation where the host tree has at most leaves, and a chordal graph with bounded vertex leafage has a tree representation where all subtrees have at most leaves. For , this corresponds to interval graphs and path graphs, respectively, whereas for , we obtain all chordal graphs. Thus, the leafage parameter measures how close a chordal graph is to an interval graph, and the vertex leafage parameter measures how close a chordal graph is to a path graph. See Figure 1.
We observe that if the host tree has at most leaves, then all subtrees do as well. Therefore, the class of chordal graphs with leafage is a subclass of the class of chordal graphs with vertex leafage . Consequently, any data structure for vertex leafage applies to leafage, and any lower bound for leafage applies to vertex leafage.
An enumeration of chordal graphs and of path graphs gives lower bounds for any data structure able to compute adjacency, degree and neighbourhood queries. A chordal graph requires at least bits [25, 31], while both interval graphs and path graphs ([9] plus the above observation) require at least bits (we will use to denote ), which gives some bounds for the extreme values of the leafage parameter.
In this paper, we study (static) data structures for (unweighted) chordal graphs parametrized by the leafage and vertex leafage parameters, with an emphasis on space usage. Our aim is to store a chordal graph parametrized by either the leafage or vertex leafage parameter, using information-theoretic minimal space within an additive lower-order term, while supporting the following queries efficiently: , which tests the adjacency of the vertices and , , which returns a list of the neighbours of the vertex , and , which returns the distance (the number of edges on the shortest path) between and . We aim to improve the work of Balakrishnan et al. [3] by eliminating their linear dependence on the vertex leafage parameter in the adjacent query, and the quadratic dependence in the neighborhood query, for chordal graphs with bounded vertex leafage. Furthermore, we investigate an open question of theirs which asks for a simple representation of chordal graphs with bounded leafage that supports faster queries; they only specifically considered bounded vertex leafage, and provided lower and upper bounds accordingly.
1.1 Related Work
The most pertinent work is the recent data structure of Balakrishnan et al. [3] for chordal graphs with bounded vertex leafage. They showed an information-theoretic lower bound of bits, for parameter range . For the upper bound, they gave a data structure occupying bits which answers adjacent queries in time. For neighborhood queries, they require an additional bits, with query time where is the degree of the query vertex.
There are many succinct data structures for various classes of graphs related to chordal graphs. Munro and Wu [25] gave a succinct data structure for general chordal graphs, occupying bits, with query times for adjacent, for neighborhood and for distance, for any . This was improved by He et al. [22] to remove a factor of in the time bound for queries, so that adjacent has time, neighborhood has time, and distance has time.
The two graph classes with leafage and vertex leafage 2, interval graphs and path graphs, have been studied by Acan et al. [1], He et al. [21], Balakrishnan et al. [4] and He et al. [22]. The upshot of these works is that interval graphs can be represented succinctly using bits while supporting adjacent, neighborhood [1] and distance [21] queries, all in optimal time. For path graphs, a succinct bit data structure supports adjacent in time and neighborhood in the same time per neighbour. To achieve time, they use bits, for any constant [22].
Lastly, there are results for other related classes of intersection graphs (of various geometric and combinatorial objects), such as permutation graphs (intersection graphs of line segments between two parallel lines) [30], graphs with small tree-width (subgraphs of chordal graphs with small maximum clique) [14], and -boxicity graphs (a higher-dimensional analogue of interval graphs) [2].
1.2 Our Results
We give lower bounds and data structures for chordal graphs with bounded vertex leafage or bounded leafage. Our results for the former class of graphs extend the range of parameter values applicable in the lower bounds and improve the query times in the data structures, compared to the work of Balakrishnan et al. [3]. Our results for the latter class of graphs answer an open question posed by them.
Lower Bounds.
With respect to lower bounds for chordal graphs with bounded vertex leafage , Balakrishnan et al. showed an information-theoretic lower bound of bits, applying to the parameter range . We extend the range to with the same lower bound, by using a simpler host tree and a simpler full colouring scheme rather than a partial colouring scheme. This further allows us to extend our construction to give a lower bound for as well, covering all values of . In this range, the lower bound we obtain has the form , where .
For chordal graphs with bounded leafage, we prove that they have the same lower bound as chordal graphs with bounded vertex leafage, namely bits, when . Our construction also allows us to extend the parameter to . Again, the lower bound we obtain has the form , where (for a different function ). This is the first information-theoretic lower bound for this class of graphs.
Data Structures for Bounded Vertex Leafage.
We give a succinct data structure occupying bits compared to that of bits of Balakrishnan et al. [3]. Thus our data structure is succinct for rather than just . Furthermore, we achieve query time for adjacent query compared to time for their data structure. For the neighborhood query, we use time per neighbour rather than per neighbour. This is around a factor of faster for adjacent and a factor of for neighborhood, which is significant, especially if is large. A comparison can be seen in Table 1.
To do this, rather than breaking the subtree into a set of paths and storing them within a path graph data structure, we simply store the preorder numbers of the leaves of the tree within a dictionary. This allows us both to compress the space and to forgo the data structures that would be needed to identify paths coming from the same subtree, which saves a factor of in the time complexities. Next, by storing the leaves of the subtree together, we are able to identify a neighbour of a vertex using a single leaf of , rather than possibly returning every leaf of , saving the second factor of in the neighborhood query.
Data Structures for Bounded Leafage.
Using the additional structure that the host tree has at most leaves, we construct the first succinct data structure for chordal graphs with bounded leafage such that , using bits of space. Our data structure supports adjacent in time and neighborhood in time per neighbour. Using an additional bits (for any constant ), we also support distance in time. We do this by breaking the host tree into paths, and storing the leaf of the subtree on each path (if one exists). This allows us to identify leaves of by this path number, leading to more efficient accesses.
All our results are obtained in the word RAM model with bit words.
| Source | Space for Adjacency (bits) | Adjacency | * | Neighbour |
|---|---|---|---|---|
| [3] | ||||
| Thm 11 | ||||
| Thm 12 |
2 Preliminaries
In this section, we introduce the existing data structures that our solutions use as building blocks. As we will be discussing both graphs and trees at the same time, we will use the term vertex for the graph and the term node for trees.
Sets of Integers.
We begin by considering various methods of storing sets of non-negative integers (viewed in sorted order).
A key succinct data structure is the bitvector: a length array of bits. A bitvector supports three operations. 1) or simply : return the -th bit of the bitvector. 2) : return the number of 1 bits in the prefix . Symmetrically for . 3) : return the index of the -th 1 bit in . Symmetrically for . We note that a bitvector can be viewed as storing the set of integers given by the indices of its 1 bits; under this view, select returns the -th smallest integer of the set. The bitvector we will use is the result of Clark and Munro:
Lemma 1 ([12]).
A bitvector of length can be succinctly represented using bits to support and access in time.
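To make the semantics of these operations concrete, the following is a minimal plain-Python sketch of access, rank and select (the class name and indexing conventions are ours for illustration; this is a naive reference, not the succinct structure of Lemma 1):

```python
# Naive reference semantics of bitvector operations (illustrative
# only; not the o(n)-overhead structure of Clark and Munro).
class BitVector:
    def __init__(self, bits):
        self.bits = list(bits)  # sequence of 0/1 values

    def access(self, i):
        # the i-th bit (0-indexed)
        return self.bits[i]

    def rank1(self, i):
        # number of 1 bits in the prefix bits[0..i]
        return sum(self.bits[:i + 1])

    def select1(self, j):
        # index of the j-th 1 bit (1-indexed j)
        seen = 0
        for idx, b in enumerate(self.bits):
            seen += b
            if seen == j:
                return idx
        raise ValueError("fewer than j ones")
```

Viewing the bitvector as a set of integers, `select1(j)` returns the j-th smallest element, as noted above.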
For a sparse set of integers, we will use the dictionary result of Raman et al. [28].
Lemma 2.
A subset of integers from can be represented using bits, to support select queries in time.
The predecessor and successor queries on a set of integers are: 1) : returns the largest element of no greater than , i.e. , and symmetrically, 2) : returns the smallest element of no less than . When the integers are bounded (and duplicates are allowed), we may employ the folklore solution, which stores the termwise differences of the sequence in unary in a bitvector.
Lemma 3.
Let be a multi-set of non-negative integers bounded by . Then can be stored in bits to answer pred and succ queries in time.
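As a sketch of this folklore encoding (the function names and the explicit scanning loop are ours; a real implementation would answer pred with a constant number of rank/select calls on the bitvector):

```python
# Encode a sorted multiset S bounded by U as termwise differences in
# unary: one 1 per element, one 0 per unit increase of the value.
def encode(S, U):
    bits, prev = [], 0
    for x in S:  # S is assumed sorted
        bits += [0] * (x - prev) + [1]
        prev = x
    return bits + [0] * (U - prev)

def pred(bits, x):
    # Largest element <= x, or None. When a 1 is read, the value of
    # that element equals the number of 0s seen so far; elements <= x
    # are exactly the 1s occurring before the (x+1)-th 0.
    zeros, best = 0, None
    for b in bits:
        if b == 0:
            zeros += 1
            if zeros > x:
                break
        else:
            best = zeros
    return best
```

The succ query is symmetric, scanning for the first 1 at or after the (x+1)-th run of 0s.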
Ordinal Trees.
As we will be studying chordal graphs, using primarily the characterization that they are intersection graphs of subtrees in a tree, we will need a data structure storing said tree. Succinct ordinal tree data structures have a long line of study, with new queries being added as recently as He et al. [21].
Lemma 4 ([21]).
An ordinal tree can be represented succinctly in bits to support a wide variety of operations in time. These operations include: 1) , which returns the index in the preorder traversal at which we encounter the node , 2) , which returns the lowest common ancestor of nodes and , 3) , which returns the parent of a node and 4) , which returns the -th child of a node .
We will refer to of a node as its preorder number, and use it to identify nodes. Thus, when we refer to a node , is treated as an integer, and we may write for the next node encountered in the preorder traversal of . One application of ordinal trees is to range minimum/maximum queries, through a Cartesian tree. For an array of integers, a range minimum query or RMQ takes two indices with and returns the index of the minimum (or symmetrically, maximum) value in the subarray . We use the following result:
Lemma 5 ([15]).
There is a data structure for the RMQ problem that uses bits of space and answers queries in query time.
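For illustration, a classic sparse-table RMQ achieves the constant query time of Lemma 5, though with a superlinear number of words rather than the succinct space bound; this sketch and its naming are ours:

```python
# Sparse-table RMQ: precompute minima of all power-of-two windows.
# O(n log n) words of space and O(1) queries (illustrative only; the
# structure of Lemma 5 uses far less space).
class RMQ:
    def __init__(self, a):
        self.a = a
        n = len(a)
        self.table = [list(range(n))]  # windows of length 1
        j = 1
        while (1 << j) <= n:
            prev, row = self.table[-1], []
            for i in range(n - (1 << j) + 1):
                l, r = prev[i], prev[i + (1 << (j - 1))]
                row.append(l if a[l] <= a[r] else r)
            self.table.append(row)
            j += 1

    def query(self, i, j):
        # index of the minimum of a[i..j], inclusive
        k = (j - i + 1).bit_length() - 1
        l, r = self.table[k][i], self.table[k][j - (1 << k) + 1]
        return l if self.a[l] <= self.a[r] else r
```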
Orthogonal Range Searching.
In the 2-dimensional orthogonal range reporting query, we are asked to preprocess a set of 2-dimensional points so that, given an axis-aligned query rectangle , we can return the list of points lying in the rectangle, i.e. . Bose et al. [8] designed a succinct data structure for this problem when the points have integral coordinates on the grid.
We will need the following two variants of this query, which deal with many points sharing the same -coordinate. First is the orthogonal range distinctness query: given an axis-aligned rectangle , return the set of distinct -coordinates such that there exists a point that lies in the rectangle . Second is the orthogonal range distinct maximum/minimum query: given an axis-aligned rectangle , return, for each distinct -coordinate in the set of points , the point with the maximum/minimum -coordinate among the points in the rectangle with -coordinate .
We may solve these two queries by a slight modification of the data structure of Bose et al. An inspection of their orthogonal range reporting query algorithm shows the following. Their algorithm identifies the set of -coordinates each of which has a point in the given query rectangle. For each such -coordinate identified, the set of points with that -coordinate is represented in a bitvector, and the query rectangle defines a subrange of that bitvector. The algorithm traverses the bitvector by using select for each of the bits, and then recovers the coordinates of the points represented. Thus, we may select the first or the last 1 in the subrange of the bitvector defined by the query rectangle, which returns the minimum or maximum point.
Lemma 6.
Let be a set of points in an grid. can be represented using bits to support orthogonal range reporting in time, where is the size of the output. It can further support orthogonal range distinctness, and orthogonal range distinct minimum/maximum queries in time, where is the number of distinct -coordinates among the points in the rectangle.
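A naive reference implementation of the two query variants may clarify their semantics (illustrative only; Lemma 6 supports them within a succinct grid representation, and the function names here are ours):

```python
# Naive reference for the two variants of Lemma 6.
def range_distinct_x(points, x1, x2, y1, y2):
    # distinct x-coordinates having a point in [x1,x2] x [y1,y2]
    return sorted({x for (x, y) in points
                   if x1 <= x <= x2 and y1 <= y <= y2})

def range_distinct_min_y(points, x1, x2, y1, y2):
    # for each distinct x in the rectangle, the point of minimum y
    best = {}
    for (x, y) in points:
        if x1 <= x <= x2 and y1 <= y <= y2:
            if x not in best or y < best[x]:
                best[x] = y
    return sorted(best.items())
```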
3 Lower Bound for Bounded Leafage and Vertex Leafage
In this section, we will re-derive the lower bound result of Balakrishnan et al. [3] for graphs with bounded vertex leafage . We will then generalize the result for a larger set of values for . We will also derive lower bound results for graphs with bounded leafage , which was not considered by Balakrishnan et al.
Since chordal graphs with bounded leafage are also graphs with bounded vertex leafage, a lower bound for chordal graphs with bounded leafage also applies to chordal graphs with bounded vertex leafage. We will still give different constructions for these two classes of graphs, to emphasize how they can be extended when .
3.1 Bounded Vertex Leafage
As in the lower bounds shown in Munro and Wu [25], Wormald [31] and Bender et al. [5], we will mainly look at split graphs. A split graph (similar to a bipartite graph) is a graph whose vertices are separated into two groups and : is a complete graph, and is an independent set. Any edge between a vertex and is allowed. It is easy to see that any split graph is a chordal graph by constructing a tree representation as follows: the host tree (a star) consists of 1 node as the root and children of the root as leaves, one for each vertex of . Let denote the leaves corresponding to the vertices . For each vertex of , the subtree that corresponds to it is the singleton node . For each vertex of , the subtree that corresponds to it is the root of the tree plus the nodes of adjacent to (i.e. ). See Figure 2. In this fashion, the subtrees corresponding to vertices of always intersect each other at the root, so the vertices of form a clique as required. The subtrees corresponding to vertices of never intersect, so forms an independent set as required, and every edge between and exists since the subtree of extends to the node that represents .
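The construction above can be sketched as follows (a plain-Python sketch; the node encodings, vertex names, and edge representation are our illustrative assumptions):

```python
# Star tree representation of a split graph: independent-set vertices
# map to singleton leaves; clique vertices map to the root plus the
# leaves of their independent-set neighbours.
def split_graph_representation(independent, clique, edges):
    # edges: pairs (v, w) with v in the independent set, w in the clique
    root = "r"
    subtrees = {v: {("leaf", v)} for v in independent}
    for w in clique:
        nbrs = {v for (v, u) in edges if u == w}
        subtrees[w] = {root} | {("leaf", v) for v in nbrs}
    return subtrees

def adjacent(subtrees, a, b):
    # two vertices are adjacent iff their subtrees share a node
    return bool(subtrees[a] & subtrees[b])
```

Note that the clique vertices are pairwise adjacent through the shared root even when no edge pair lists them, matching the argument above.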
We will now give an information-theoretic lower bound for chordal graphs with bounded vertex leafage by counting a subclass of these graphs.
Theorem 7.
Any data structure supporting adjacency queries on chordal graphs on vertices with bounded vertex leafage such that requires at least bits.
Furthermore, if for a fixed constant then it requires at least bits where is the maximum of the function
in the range , and this maximum is always achieved in the range . If , it requires bits.
Proof.
First consider the case where .
Let , and let be the sizes of the independent set and the clique respectively for our split graph. Furthermore, assume that is large enough that we have (i.e. ). We also have the estimate . This arises from Stirling’s approximation and that :
The last step uses by the limit definition of : .
For each vertex of we can select vertices from to be adjacent to. So all subtrees in the tree representation constructed above have at most leaves, and thus the graph obtained has bounded vertex leafage . The number of graphs we can generate is , since each vertex of ’s neighbourhood could be any size subset of . Note that if we treat this construction as a labelled graph, then all graphs we construct are distinct, as different sets of vertices of lead to a different neighbourhood and hence a different graph. Moving from labelled graphs to unlabelled graphs, we generate each unlabelled graph at most times due to the number of potential isomorphisms of the graph. Thus the lower bound on the space needed for vertex leafage graphs is at least
Now consider the case in which for some constant . (We will show that for the lower bound we obtain is bits, which matches the lower bound for all chordal graphs. Thus, this would be the lower bound for all as well.) Let be a variable such that . Set and . Applying Stirling’s approximation, the number of graphs we generate is
assuming that . In the case that , we do not choose elements from but rather elements, since that maximizes the binomial coefficient. Doing so gives
Thus the lower bound is where given a fixed is the maximum of the function on the range defined by
is clearly increasing on the first interval, positive when and zero when . Taking the derivative, we see that is obtained at some zero of the function:
Since in the second range, we see that and . Thus, in order for this sum to evaluate to 0, , so that . When , we see that is indeed a solution, which evaluates matching the original chordal graph lower bound.
For a plot of the value of (compared to the value of ), see Figure 3.
3.2 Bounded Leafage
We now consider chordal graphs with bounded leafage (i.e. the host tree has at most leaves), by subdividing the edges of the tree representation of the split graph with leaves.
Theorem 8.
Any data structure supporting adjacency queries on chordal graphs on vertices with bounded (vertex) leafage such that requires at least bits.
If for some fixed constant , then it requires at least bits, where is the maximum of the function:
Here denotes the fractional part of .
Proof.
We define the host tree to be an (extended) star, with a central node of degree together with paths of length between and a leaf. We set the subtrees representing vertices of the chordal graph (which we will call static vertices) to be distinct singleton nodes on the paths, excluding . For the remaining vertices (which we will call variable vertices), their subtrees will consist of the root plus a (potentially empty) prefix of each of the paths (i.e. a union of such prefixes). Thus any two variable vertices with different subtrees have different sets of static vertices as neighbours. See Figure 4.
The number of different possible variable vertices is , since for each of the paths we have different lengths. Since each of the variable vertices can be one of these, the total number of ways we can choose our variable vertices is . The total number of graphs we can generate is thus , where each graph may be generated at most times. Thus our information-theoretic lower bound is:
This applies for . To extend to we may set and choose the number of static vertices to be and the number of variable vertices to be without changing the above bound.
In the case that , we set the size of to be with . We set the length of each of the paths to be the same. That is, there are paths of length and paths of length (recall that we use to denote the fractional part of ). Enumerating as above, we obtain a lower bound that is:
Finally, we choose (depending on ) as to maximize this term.
For a diagram of the value of this lower bound, compared to that of the lower bound for bounded vertex leafage chordal graphs in the previous subsection, see Figure 3.
4 Data Structures for Chordal Graphs with bounded Vertex Leafage
In this section we will give data structures for chordal graphs with bounded vertex leafage . We will aim for bits of space but, as in Balakrishnan et al. [3], allow some extra space to support neighbourhood more efficiently. We note that this differs from the lower bound (outside of lower-order terms) only in the versus terms (i.e. a difference of bits). We note that if , this is a lower-order term and can be ignored. Otherwise, if , then the difference is a lower-order term: . Thus in both instances, we may ignore this difference as it is a lower-order term. Hence such a data structure would be succinct for all . We give two data structures, where the second uses slightly more space and more time for the adjacent query, but uses less space for the neighborhood query.
We will begin with a tree representation of , , such that each has at most leaves. Root the tree arbitrarily. For a subtree , denote the unique node with the smallest depth as the apex of (or of ). Furthermore, using the following lemma, we may assume that every node of is the apex of some subtree .
Lemma 9.
Let be a chordal graph with bounded vertex leafage . Then there is a tree representation such that the subtree of any vertex of has at most leaves and that every node of is the apex of some vertex. Furthermore, if has bounded leafage , then there exists a tree representation with at most leaves such that every node of is the apex of some vertex.
Proof.
We will convert any tree representation of into one with the property that every node of is the apex of some vertex.
Suppose we have a tree representation such that some node is not the apex of any vertex. Then any subtree containing must also contain the parent of . Merge into its parent (i.e. the children of become the children of the parent of ). Any two subtrees and that intersect at also intersect at the parent of , so no intersections are created or destroyed, maintaining a valid tree representation of . Merging the node into its parent does not increase the number of leaves of , nor the number of leaves of any subtree , and so remains a valid tree representation with bounded leafage or vertex leafage .
Repeating this process to eliminate all such nodes , we obtain a tree representation with the property that every node is the apex of some subtree .
Since each node of is the apex of some vertex, there are at most nodes in . Next, we make the following modification to the tree representation to ensure that the apex of each vertex has at least two children in . This allows us to compute via the LCA of the leftmost and rightmost leaves of . We achieve this goal by adding two new children to each node of , as the rightmost children of . For each vertex whose apex has 1 child in , we expand to one of the newly added children of in . For each vertex whose apex has 0 children in , we expand to both newly added children of in . This adds new nodes to so that , and retains the property that every internal node is the apex of some subtree (leaves of may no longer be).
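The merging step of Lemma 9 can be sketched as follows (a naive quadratic-time sketch; the parent-dictionary encoding and function names are our illustrative choices):

```python
def eliminate_non_apices(parent, subtrees):
    # parent: node -> parent (root maps to None); subtrees: vertex ->
    # set of nodes. Repeatedly merge a node that is no subtree's apex
    # into its parent; any subtree containing such a node also
    # contains the parent, so intersections are preserved.
    def depth(u):
        d = 0
        while parent[u] is not None:
            u, d = parent[u], d + 1
        return d

    while True:
        apices = {min(t, key=depth) for t in subtrees.values()}
        victim = next((u for u in parent
                       if u not in apices and parent[u] is not None),
                      None)
        if victim is None:
            return parent, subtrees
        p = parent[victim]
        for w in parent:                 # children reattach to p
            if parent[w] == victim:
                parent[w] = p
        del parent[victim]
        for t in subtrees.values():
            t.discard(victim)            # t already contains p
```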
We first give a succinct data structure which allows us to recover the graph. We sort the vertices by the preorder numbers of their leftmost leaves, and label the vertices . Let , where is the number of vertices whose leftmost leaf is the tree node . Then gives the preorder number of the leftmost leaf of .
For each vertex , we will need to compute predecessor and successor queries on the preorder numbers of the leaves of . For each vertex , let be the -th leaf of in preorder. We store the set of at most integers, (i.e. excluding the leftmost leaf) using Lemma 2, in bits. Using select, we may retrieve the -th leaf in time. Thus for simplicity, we may view as an array. Using binary search, we can support pred and succ queries on this array in time.
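The binary search over select can be sketched as follows (the callback interface is our illustrative stand-in: select(i) returns the i-th smallest stored leaf preorder number, as supplied by Lemma 2):

```python
def pred(select, m, x):
    # Largest i in [1, m] with select(i) <= x; None if select(1) > x.
    # Each probe costs one select query, so O(log m) queries total.
    lo, hi, ans = 1, m, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if select(mid) <= x:
            ans, lo = mid, mid + 1
        else:
            hi = mid - 1
    return ans
```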
We may speed this up slightly by storing a subset of the array to obtain an approximate predecessor. As this subset is sparse, we will use a fusion tree [16, 27].
Lemma 10 ([16, 27]).
A set of elements can be stored in bits to support select, pred and succ queries in time, where is the word size in the word RAM model.
We store evenly spaced elements of the array in a fusion tree (where ), using space. To compute a predecessor, we first compute an approximate predecessor using the fusion tree in time. This gives a predecessor which may be too small. We linearly scan the segment defined by this entry and the next sampled entry to find the true predecessor, using time. Setting , we can find the predecessor using time.
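The sampling scheme can be sketched as follows (a plain sorted list stands in for the fusion tree, and the approximate step is a linear scan over the samples; in the actual structure both steps are much faster):

```python
def make_sampled_pred(arr, b):
    # arr: sorted values; every b-th element is sampled. A query finds
    # the last sample <= x (the approximate predecessor), then scans
    # the block of b elements starting there for the true predecessor.
    samples = arr[::b]

    def pred(x):
        if samples[0] > x:
            return None
        j = 0
        for idx, s in enumerate(samples):  # stand-in for the fusion tree
            if s <= x:
                j = idx
        start = j * b
        best = None
        for v in arr[start:start + b]:     # scan one block of size b
            if v <= x:
                best = v
        return best

    return pred
```

The scan never needs to pass the next sampled entry, since that entry (if it exists) exceeds x.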
This is enough for us to reconstruct the graph, as for each vertex , the leaves of can be computed: the left-most using , and the other leaves by scanning . This defines the tree since the apex is the lowest common ancestor of the first and last leaves of (as we ensured that it has degree in ).
We concatenate the arrays , padding ones that are smaller (i.e. when the subtree has fewer than leaves) so that they are all the same size. Thus we may access each array in time, without spending extra overhead for pointers to each individual array. We will do this for all other similar structures (such as the fusion trees) to avoid the overhead from pointers. The total space used so far is the bitvector , the arrays , fusion trees, and the succinct tree storing . Thus the total space needed so far is bits.
4.1 Supporting adjacent Query
We first find the leftmost and rightmost leaves of . The leftmost leaf can be computed using as above. The rightmost is . We then compute the apex using LCA on the host tree . We compare the apexes of the two subtrees and for an ancestor-descendant relationship (i.e. check if ). If they are not in such a relationship, then and are disjoint, and and are not adjacent. When and are in an ancestor-descendant relationship, we may, without loss of generality, assume that is an ancestor of .
We claim that intersects if and only if some leaf of is a descendant of . For one direction, if some leaf of is a descendant of , then the path from to this leaf contains . Since this path is contained in , . Conversely, suppose that intersects at some node . Then the path from to again contains , so is a descendant of . Any leaf descendant of in is a descendant of .
To check this condition (some leaf of is a descendant of ), we first compute the range of preorder numbers of the subtree rooted at . The smallest such preorder number is , and the largest is the last child of . Let be this range of preorder numbers. We wish to check if contains any leaf nodes of . To do so, we look at the predecessors of and in the sequence of leaf nodes of : and . We observe that either or are leaf nodes (so or ) or a leaf in the range would cause . Thus computing whether a leaf of is a descendant of can be done using a constant number of bitvector, succinct tree, and predecessor queries, which takes time.
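Putting the steps of this subsection together, the adjacency test can be sketched as follows (with naive stand-ins for the succinct primitives: apex_of is an LCA callback on preorder numbers, desc_range(a) returns the preorder interval of the subtree rooted at a, and each vertex is given by the sorted preorder numbers of its subtree's leaves; all names are our illustrative assumptions):

```python
def adjacent(leaves_u, leaves_v, apex_of, desc_range):
    # Compute both apices as LCAs of the extreme leaves, then test the
    # ancestor-descendant condition via preorder intervals. If the
    # apex of u is an ancestor of the apex of v, then u and v are
    # adjacent iff some leaf of u descends from the apex of v
    # (and symmetrically).
    au = apex_of(leaves_u[0], leaves_u[-1])
    av = apex_of(leaves_v[0], leaves_v[-1])
    lo_u, hi_u = desc_range(au)
    lo_v, hi_v = desc_range(av)
    if lo_u <= av <= hi_u:   # au is an ancestor of av (or au == av)
        return any(lo_v <= l <= hi_v for l in leaves_u)
    if lo_v <= au <= hi_v:   # av is an ancestor of au
        return any(lo_u <= l <= hi_u for l in leaves_v)
    return False             # apices unrelated: subtrees are disjoint
```

In the data structure, each any(...) scan is replaced by the constant number of predecessor queries described above.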
4.2 Supporting neighborhood Queries
We now consider efficiently computing the neighbourhood of a vertex . We will break the neighbourhood into two cases, those vertices such that is a descendant of (or equal to) , and those vertices such that is an ancestor of .
4.2.1 Case 1: vertices with a descendant of
For any case 1 neighbour , . We will traverse , and list the vertices whose apex is the node of that we are currently traversing. To traverse , we may simulate a post-order traversal of by starting at the leftmost leaf of and using parent and LCA. This takes time.
To report the vertices whose apex is a given node of , we define the operation apex_list which takes a node of and returns all vertices such that .
To support apex_list, we store a bitvector , where is the number of vertices with apex at the node with preorder number . Then is the number of vertices with apex (using the convention that the root of the tree has preorder number 1). We also store an array of the vertices, ordered by their apexes. Then computes the range of that stores the appropriate vertices and returns them from , that is,
As apex_list computes each neighbour with a given apex in time per neighbour, and we ensured that each internal node of is the apex of some vertex, the time complexity is , where is the degree of the vertex . The term accounts for the leaves, which may not give a neighbour. We may easily remove this term by storing a length bitvector which, for each leaf of , stores a 1 if the leaf contains the apex of some neighbour , or if it is the leftmost child of an internal node of (so that we do not skip any internal node of even if all its children contribute no neighbours). During the post-order traversal of , we may skip the leaves marked 0 using select, visiting only those leaves which have a neighbour. Thus, we return all case 1 neighbours in time per neighbour.
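A naive sketch of apex_list over this encoding (we assume, for illustration, that B is laid out as one 1 per node in preorder followed by one 0 per vertex with that apex; the real structure answers the same query with a constant number of rank and select calls):

```python
def apex_list(B, A, t):
    # Decode the per-node counts from B, then return the contiguous
    # range of the apex-sorted vertex array A belonging to node t
    # (t is a preorder number, 1-indexed).
    per = []
    for b in B:
        if b:
            per.append(0)    # a new node starts
        else:
            per[-1] += 1     # one more vertex with this apex
    start = sum(per[:t - 1])
    return A[start:start + per[t - 1]]
```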
The total space needed is a length bit vector , a length bitvector for each vertex (total bits) and a size bit array .
Lastly, we note that this is essentially storing a permutation of the vertices, between the two orderings obtained by sorting by leftmost leaf of and by sorting by apex of .
4.2.2 Case 2: vertices with an ancestor of
As discussed in adjacent, vertices in this case have the property that some leaf of is a descendant of . To compute these, we will store the 2D orthogonal range reporting data structure of Lemma 6. The points we store are the set
This structure stores the leaves of all vertices . Thus we will convert the adjacency criterion into something that can be expressed using axis-aligned rectangles.
We again compute the range of preorder numbers of the subtree rooted at as , where and is the rightmost child of . For the -coordinate, we may use and to ensure that the leaf is within the subtree rooted at . For the -coordinate, since we use the label of the vertex rather than its preorder number, we need to convert and into the appropriate vertex labels. To determine the set of labels of vertices whose leftmost leaf falls within the subtree rooted at , we find the 0-bits (which represent vertices) which belong to the -th and -th 1 bit in (which denote the delimiting nodes in ). That is, we have , which finds the first 0 after the -th 1 in , and , which finds the first 0 before the -th 1 in .
Consider which contains and such that is an ancestor of . Since is an ancestor of , at least one leaf is outside of the subtree rooted at (since has degree at least 2), and since contains the node , at least one leaf is within the subtree rooted at . There are two cases: either the leftmost leaf of is outside of the subtree rooted at , or the leftmost leaf falls within this subtree and at least one other leaf falls outside of it.
In the first case, consider the rectangle, . A point that falls within this rectangle has the property that so that the leaf is a descendant of . Furthermore, since , the leftmost leaf of falls outside of the subtree rooted at . See Figure 5.
In the second case, the leftmost leaf of the vertex falls in the subtree rooted at but some other leaf does not. This is captured by the rectangle .
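A brute-force sketch of the two-rectangle decomposition (the concrete rectangle coordinates below are our reading of the construction, and a linear scan stands in for the orthogonal range distinctness queries of Lemma 6):

```python
def case2_neighbours(points, lx, rx, lv, rv):
    """Case 2 neighbours of a vertex u via two axis-aligned rectangles.

    points: (leaf preorder number, vertex label) pairs, one per stored leaf;
    [lx, rx]: preorder range of the subtree rooted at the apex of u;
    [lv, rv]: labels of vertices whose leftmost leaf lies in [lx, rx].
    """
    out = set()
    for x, y in points:
        # Rectangle 1: some leaf inside the subtree, but the leftmost
        # leaf (hence the label) comes earlier: [lx, rx] x [1, lv - 1].
        if lx <= x <= rx and y < lv:
            out.add(y)
        # Rectangle 2: leftmost leaf inside the subtree, another leaf
        # after the subtree in preorder: [rx + 1, n] x [lv, rv].
        if x > rx and lv <= y <= rv:
            out.add(y)
    return sorted(out)
```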
Using the orthogonal range distinctness query, we retrieve each vertex once, since all the points corresponding to the same vertex have the same -coordinate by construction. Thus, by Lemma 6, the time complexity is per neighbour. The space cost of the 2D orthogonal range reporting data structure (on points) is bits.
Theorem 11.
Let be a chordal graph with bounded vertex leafage . Then there exists a data structure occupying bits which answers adjacent in time. With additional bits, it can answer neighborhood in time per neighbour.
4.3 Second data structure
We note that the previous data structure stores the coordinates of the leaves of the subtrees twice, once in the array and once in the orthogonal range searching data structure to support neighborhood queries. This causes the space to be unnecessarily large to support neighborhood queries. We observe that the orthogonal range search data structure can be used to mimic access into the array. By doing so, the time complexity for adjacent increases slightly, from to , but the overall space cost can be reduced.
Recall that our orthogonal range search data structure stores the points:
To be able to compute all the queries, we need to be able to simulate access to the array . For our purposes, this means predecessor and successor queries.
First, we note that the relevant points correspond to the rectangle . Thus, in time, we may retrieve the preorder numbers of each of the leaves for a given vertex.
For the predecessor and successor queries, the relevant points of our query form a horizontal strip, containing a single -coordinate. Since a predecessor query corresponds to the point in the rectangle with the largest -coordinate, this corresponds to an orthogonal range distinct maximum query, with a single -coordinate in the output. Thus, we may support it in time. Successor queries are symmetric.
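A minimal sketch of such a strip query (a linear scan stands in for the orthogonal range maximum structure; all names are illustrative):

```python
def leaf_predecessor(points, v, x):
    """Largest stored leaf preorder number of vertex v that is <= x.

    In the second data structure this is an orthogonal range maximum
    query over the horizontal strip y = v, so only a single
    y-coordinate appears in the output.
    """
    xs = [px for px, py in points if py == v and px <= x]
    return max(xs) if xs else None
```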
Therefore, since accessing the array via predecessor and successor queries on the orthogonal range search data structure incurs a cost of , we obtain the following theorem:
Theorem 12.
Let be a chordal graph with bounded vertex leafage . There exists a data structure occupying bits which answers adjacent in time and, using additional bits, neighborhood in time per neighbour.
4.4 Distances
In this section we investigate the distance query in chordal graphs with bounded vertex leafage . In the same vein as Munro and Wu, we show a close relationship to a variation of the set intersection oracle problem. The set intersection oracle (or set disjointness) problem is the following:
Preprocess a collection of sets with each set from some universe . Answer queries of the form: given , is ?
There are a few ways to measure the size of this problem. One way is , the number of sets. Another is , the total number of elements in the sets. As such, there are conjectures related to both of these measurements.
Patrascu and Roditty [26] gave the following conjecture, based on the number of sets in the input.
Conjecture 13 (Conj. 3 [26]).
Let be a collection of sets, with each set from some universe . Then any data structure answering set disjointness queries in time, must use bits of space, even if for a large enough constant .
Observe that with bits we can write down the result matrix, which allows the computation of the query in time, and so this conjecture essentially states that this is the best we can do, even with small universes.
Using the total sizes of the sets as the input size, Goldstein et al. [19] gave the following conjecture.
Conjecture 14 (Conj 3. [19]).
Let be a collection of sets, and let be the total number of elements of the sets. Then any data structures answering set disjointness queries in time must use space.
These conjectures suggest that the set intersection oracle problem is quite difficult to solve without using trivial solutions such as writing down all the answers, or writing down the answers for large sets and naively iterating through smaller sets (i.e. we could check for every , is ?, which would be quick if was small).
Recall our lower bound construction proof. For we constructed two sets (which is a clique) and (which is an independent set). In this construction, for two vertices , exactly when there exists some vertex that is adjacent to both. Otherwise, we pick any neighbour of and any neighbour of . They both lie in , which is a clique, so they are adjacent. Hence the distance would be 3. Thus is either 2 or 3 depending on the condition: there exists some adjacent to both and .
Fix and construct the set . Then the existence of adjacent to both and is exactly the non-emptiness of the intersection .
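A tiny sketch of this observation on the lower-bound split graph (explicit sets and illustrative names stand in for the construction):

```python
def distance_B(S, u, v):
    """Distance between two independent-set vertices u, v of the split
    graph: 2 if their clique neighbourhoods intersect, else 3 (via any
    pair of clique neighbours, which are adjacent within the clique).
    S maps each independent-set vertex to its set of clique neighbours."""
    return 2 if S[u] & S[v] else 3
```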
Munro and Wu further gave the reduction as follows: given the sets , construct the split graph with and . For each , the neighbours of are exactly the set . Then the distance query on answers the set disjointness problem.
If we do exactly the same thing, we encounter an obstacle. Since each vertex in has at most neighbours in in a bounded vertex leafage chordal graph, this corresponds to the condition that for every element , it belongs to at most of the sets . This implies that , and also that bits suffice to write down the results of all queries, since each element causes at most pairs of sets to intersect.
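The counting argument can be checked directly: each universe element occurring in at most k sets contributes at most k(k-1)/2 intersecting pairs. A short sketch (illustrative names):

```python
from collections import defaultdict

def intersecting_pairs(sets, k):
    """Enumerate all intersecting pairs of sets, assuming every universe
    element appears in at most k of them. Each element contributes at
    most k*(k-1)/2 pairs, so there are at most N*k*(k-1)/2 pairs in
    total, where N is the total size of the sets."""
    occurs = defaultdict(list)
    for i, s in enumerate(sets):
        for e in s:
            occurs[e].append(i)
    pairs = set()
    for idxs in occurs.values():
        assert len(idxs) <= k  # the sparsity-like condition
        for a in range(len(idxs)):
            for b in range(a + 1, len(idxs)):
                pairs.add((idxs[a], idxs[b]))
    return pairs
```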
Using this sparsity-like condition, we give the following conjecture:
Conjecture 15.
Let be a collection of sets with universe such that for every , appears in at most sets. Any data structure which has query time must use bits of space.
For chordal graphs with bounded vertex leafage, unless the above conjecture is false, we are unable to answer distance queries in time using succinct space (or even slightly more space).
5 Data Structure for Chordal Graphs with Bounded Leafage
In this section, we consider data structures for chordal graphs with bounded leafage . We note that a chordal graph with bounded leafage is also a chordal graph with bounded vertex leafage , and thus the data structures of the previous section apply. As this is a more structured class of graphs, our goal is to achieve much better query times, on top of not requiring additional space for neighbourhood queries. Furthermore, this structure allows us to bypass the difficulties with supporting the distance query. We will highlight some of the differences in our data structure that are made possible by this additional structure, which allows for more efficient computation.
By definition, a chordal graph with bounded leafage has a tree representation that consists of a host tree with at most leaves. First, root at one of its leaves. By doing so, we only need to worry about the other leaves of the tree. We may apply Lemma 9, to ensure that and that every node of is the apex of some subtree .
Since there are at most leaves besides the root, we may break the tree down into disjoint paths , where each path contains one leaf of the tree, except , which contains both a leaf and the root (which was also a leaf of the unrooted tree by construction). For simplicity, we may choose the paths in a preorder traversal, by descending down the tree until we reach the first leaf to obtain the first path, then following the preorder traversal to reach the second leaf to obtain the second path, etc. In this way, the preorder traversal of the tree gives blocks of nodes corresponding to the paths. In the following, we will refer to leaves of a subtree which lie on some path . However, it may be the case that is non-empty but no leaf of exists on . To ensure that each path which contains nodes from “has” a leaf of , we will consider the largest-depth node of on a path to be a leaf. See Figure 6: the subtree on the right, depicted by dashed black lines, has a “leaf” at the node of depth 4, on the red path ().
Because there are only paths, for a subtree to have leaves, one of its leaves must also be the apex. Therefore, rather than sorting by leftmost leaf as in the case of bounded vertex leafage, we now sort by apex instead; obtaining the apex also obtains the leftmost leaf when the two coincide.
Sort the vertices by the preorder number of their apex, breaking ties arbitrarily and label the vertices as by their index in this sorted order. From now on, when we refer to a vertex , we refer to its label (i.e. is an integer between 1 and ). Store a bitvector , where is the number of vertices whose apex is the node with preorder number . Then the preorder number of the apex of a vertex is . We note that by Lemma 9, this bitvector is well defined, as .
For each vertex and for each of the paths, we store the depth (in the path) of the largest depth node (i.e. the leaf of on that path) of on that path, and if does not intersect that path, we store instead. Let be the depth of this leaf of on path . We store the partial sums (to ensure that the terms are increasing). Applying Lemma 2, and using the fact that the total sum equals (as the paths are disjoint, so their total lengths equal ), and that , this uses bits. To compute , we have .
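One plausible way to realize this encoding (the +1 shift is our illustrative choice for making the prefix sums non-decreasing; plain lists stand in for the succinct partial-sum structure of Lemma 2):

```python
from itertools import accumulate

def encode_leaf_depths(depths):
    """depths: for one vertex, the k values d_1..d_k, where d_j is the
    depth within path j of the deepest node of its subtree on that path,
    or -1 if the subtree misses the path. Prefix sums of d_j + 1 form a
    non-decreasing sequence suitable for a partial-sum encoding."""
    return list(accumulate(d + 1 for d in depths))

def leaf_depth(prefix, j):
    """Recover d_j in O(1) from two consecutive prefix sums (j 0-based)."""
    return prefix[j] - (prefix[j - 1] if j > 0 else 0) - 1
```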
We note that since the -th leaf always falls on the path, instead of being arbitrarily placed in the tree as in the bounded vertex leafage case, we may retrieve it in time, rather than requiring predecessor queries, which take longer.
Finally we store a bitvector (path number) in the preorder traversal of the tree , with a 1 bit if a node is the first node of some path . Given a node (with preorder number ), it lies on the path .
The total space needed so far is bits.
Again, using these data structures, we are able to compute the exact subtree for any vertex given as an integer denoting its position in sorted order: the root of is the node with preorder number , and we store the at most leaves of the tree by their depths . For path , the first node (by preorder number) of the path is , so the leaf of is the node of with preorder number (if ). Thus, by reconstructing the subtrees, we are able to recover the graph.
Now we will show how to answer the queries efficiently.
5.1 adjacent
Find and using . Then by performing LCA in , determine if one is the ancestor of the other. If not, return not adjacent. Otherwise, suppose that is an ancestor of . Let be the path number of . We check if the -th leaf of , , has depth at least that of (relative to ). If so, return adjacent, and return not adjacent otherwise. This uses a constant number of bitvector and succinct tree operations, and so takes time.
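A plain-Python sketch of this test (dictionaries stand in for the succinct bitvector, LCA, and depth machinery; the data layout is illustrative):

```python
def adjacent(u, v, apex, pre, post, path_of, depth_in_path, leaf_depth):
    """Adjacency test for the bounded-leafage structure.

    apex[w]: apex node of subtree T_w; pre/post: preorder and postorder
    numbers of host-tree nodes (for ancestor tests); path_of[t] and
    depth_in_path[t]: which path node t lies on and its depth there;
    leaf_depth[w][p]: depth within path p of the deepest node of T_w on
    that path (-1 if T_w misses the path).
    """
    def is_anc(x, y):  # x is an ancestor of (or equal to) y
        return pre[x] <= pre[y] and post[y] <= post[x]

    a, b = apex[u], apex[v]
    if is_anc(b, a):            # make apex[u] the ancestor
        u, v, a, b = v, u, b, a
    elif not is_anc(a, b):
        return False            # apexes unrelated: subtrees are disjoint
    p = path_of[b]
    # T_u contains apex[v] iff T_u reaches down to b's depth on b's path
    return leaf_depth[u][p] >= depth_in_path[b]
```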
5.2 neighborhood
We consider two cases. Case 1: vertices adjacent to such that is an ancestor of . Case 2: vertices adjacent to such that is a descendant of (or equal to) .
Case 1.
Let be the path number of , and let be the depth of relative to this path. As in adjacent, we want vertices whose -th leaf has depth at least as much as . Compared to the previous section, knowing that we only need to consider the -th leaf, rather than an arbitrary leaf, allows us to forgo an orthogonal range search data structure. Let be a conceptual array. We are looking for indices such that (so that it is an ancestor) and . It is well known that this can be accomplished in time per index by equipping with an RMQ data structure (Lemma 5). The space needed is bits, as we need one RMQ data structure per path.
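The well-known RMQ-based enumeration can be sketched as follows (a linear scan stands in for the O(1)-time range-maximum structure of Lemma 5):

```python
def report_at_least(L, lo, hi, d, out):
    """Append to out every index i in [lo, hi] with L[i] >= d.

    With an O(1) range-maximum structure on L this costs O(1) per
    reported index: find the maximum, stop if it is below the threshold,
    otherwise report it and recurse on the two remaining subranges.
    """
    if lo > hi:
        return
    m = max(range(lo, hi + 1), key=lambda i: L[i])  # RMQ(lo, hi)
    if L[m] < d:
        return                      # nothing in this range qualifies
    out.append(m)
    report_at_least(L, lo, m - 1, d, out)
    report_at_least(L, m + 1, hi, d, out)
```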
Case 2.
We consider each path in order. For each vertex, we store a length bitvector indicating whether the -th leaf is , and using select we iterate over each path with a node in . For path , if is non-empty, let be the first and last vertices of . is simply the -th leaf, and is the first node of the path, which we obtain using , except on the path that is on, in which case . Since we sorted the vertices by their apex, the set of vertices with apex in forms a consecutive block, which we may find using select on as . This takes time per neighbour.
5.3 Iteration through Neighbourhood
It may be the case that for a graph algorithm, storing the entire neighbourhood of a vertex, especially over a large number of recursive calls, is too costly. Instead, it would be desirable to iterate over the list of neighbours, so that at any point, only a constant amount of space is needed to record how far into the list of neighbours the computation currently is.
To do so, we consider the two cases in the query. Let be the apex of . First consider the second case, where we output neighbours of such that is a descendant of in the host tree . In this case, we examined each of the -paths which contained a node of ; then, for each node on the -th path , we returned all vertices with that node as its apex. The vertices whose apex is on this subpath form a single interval of vertex labels. Therefore, to record where we are in this computation, we need to store the current path number (from which we may compute the subpath in time) and the current position in this interval. This uses words of space.
Now consider the first case, where we output neighbours of such that is an ancestor of . Let be the path number of . In this case, we used an RMQ query on the conceptual array which stored the leaf depths of each vertex on the -th path (which may be -1 if a vertex does not have a node on the -th path). In this case, we wished to output all indices in the prefix such that .
A solution to this was given by Tsakalidis et al. [30] in their work for permutation graphs. Their extension to the regular RMQ data structure can be stated as:
Lemma 16 (Lemma 3.2 of [30]).
Let be a static array of comparable elements. For any constant , there is a data structure using bits of space on top of that supports the following queries in time (making as many queries to ) and using words of working memory:
-
1.
range-maximum queries (or symmetrically range-minimum queries), RMQ,
-
2.
next-above queries (or symmetrically next-below), , enumerating in amortized time.
Formally, implicitly defines a sequence via if and null otherwise, and = if null, and null otherwise. Then we require .
We note that this data structure requires access to the array , as opposed to the data structure of Lemma 5. Here , and accessing elements of can be done in time. Lastly, we note that the operation iterates over all indices that we need.
Therefore, combining these two cases, we may store a reference to where we are in the list of neighbours in words.
5.4 distance
In the previous section, for chordal graphs with bounded vertex leafage, we argued that it is difficult to support the distance query, due to a close connection to the set disjointness problem. Applying the same reduction, we see that we no longer have such an obstacle. Indeed, as the host tree contains at most leaves, the instances of the set disjointness problem that can be reduced to chordal graphs with bounded leafage are exactly those with at most input sets. As we noted, we may store the output matrix in this case using bits, which is much less than bits. Thus, there is hope of supporting the distance query in the bounded leafage case.
To compute the distance between two vertices, we will use the greedy algorithm of Munro and Wu [25] for general chordal graphs. Informally, to connect two subtrees and (using a sequence of subtrees), at least one subtree must pass through the lowest common ancestor of and . To reach this LCA from , we greedily pick the subtree intersecting such that is as high up in the tree as possible (and repeat).
The successor operation on the tree is defined by ( are nodes of ), where is the smallest-depth node such that there exists a vertex with and . A shortest path between and can be found by first computing the node , then repeatedly taking the successor operation from and . We stop when the next successor operation would give a node that has smaller depth than . The result of the two chains of successor operations is two nodes (an ancestor of ) and (an ancestor of ). Lastly, we compute whether there exists a subtree such that and . The shortest path between any vertex with apex and any vertex with apex is if such a subtree exists and if no such exists.
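One greedy step of this algorithm can be sketched as follows (explicit node sets stand in for the succinct machinery of Munro and Wu; all names are illustrative):

```python
def succ(x, subtrees, depth):
    """One successor step: among all subtrees containing node x, jump to
    the smallest-depth (highest) node that any of them reaches.

    subtrees: dict mapping each vertex to the set of host-tree nodes of
    its subtree; depth: dict mapping each node to its depth.
    """
    best = x
    for nodes in subtrees.values():
        if x in nodes:
            top = min(nodes, key=lambda t: depth[t])  # highest node of T_w
            if depth[top] < depth[best]:
                best = top
    return best
```

Iterating succ from a node yields the chain of nodes described above, one graph edge per step.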
We assume that (i.e. and are not in an ancestor-descendant relationship), since if one is the ancestor of the other, then there is a single chain of successor operations and we do not need to compute if exists or not.
Suppose that we are given and , two nodes of . Let be the two paths containing and . For now, assume that and are such that no node of one path is an ancestor of a node of the other path. For each node of and , we wish to store a bit which is 1 if there exists a vertex whose subtree contains the two nodes. This allows us to finish the distance computation by checking this bit using the nodes and . Naively, this will use bits of space if we store a bit for each pair of nodes of the two paths.
Let be this bit for the node at depth in and depth in .
Lemma 17.
The bitarray consists of a block of 1s followed by a block of 0s.
Proof.
Let be the node of depth on the path . Let be an index such that . Then there exists some subtree , which contains both and the depth node of . Since we assume that no node of is an ancestor of (and vice versa), must also contain the node at depth on path . Hence any 1 bit of is preceded by another 1 bit, and we conclude that this bitarray consists of a block of 1s followed by a block of 0s.
Using the above lemma, we may describe this bitvector using a single number: the index of the last 1. Thus, consider the array of length which contains these indices of the last 1.
Lemma 18.
The array is non-increasing.
Proof.
We show that for all . Let be the subtree which realizes . That is, it is the subtree which, among all subtrees containing the node of depth on , contains the deepest node on (which is of depth ). Similarly, let be the subtree which realizes . Note that since contains the node , it must also contain (we assumed that no node of either path is an ancestor of a node of the other path). Therefore, by definition of , its deepest node on is at least as deep as that of .
Now, as is a non-increasing (i.e. sorted) sequence of length with maximum value , we may store it using bits by Lemma 3. To obtain , we compute whether , and if so, the bit is 1.
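The construction and decoding can be sketched as follows (explicit node sets stand in for the succinct representation, and the monotonicity of Lemma 18 is asserted under the no-ancestor assumption):

```python
def build_D(path_a, path_b, subtrees):
    """D[i] = deepest index j on path_b such that some subtree contains
    both the depth-i node of path_a and the depth-j node of path_b
    (-1 if none); this is the 'index of the last 1' array of the text.

    Assumes neither path contains an ancestor of a node of the other,
    so D is non-increasing (Lemma 18) and compresses via Lemma 3.
    """
    D = []
    for x in path_a:
        best = -1
        for nodes in subtrees.values():
            if x in nodes:
                for j, y in enumerate(path_b):
                    if y in nodes:
                        best = max(best, j)
        D.append(best)
    assert all(D[t] >= D[t + 1] for t in range(len(D) - 1))
    return D

def bridge_bit(D, i, j):
    """The bit for (depth i on path_a, depth j on path_b): 1 iff some
    subtree covers both nodes, recovered as j <= D[i]."""
    return j <= D[i]
```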
Next, we deal with the case that some nodes of and are in an ancestor-descendant relationship. Since we assume that and are not in an ancestor-descendant relationship, if the paths and contain nodes that are in an ancestor-descendant relationship, we may ignore them, as we will never query those pairs. Suppose that are ancestors of the nodes of (observe that only one path can contain such nodes, and they must form a prefix); then we perform the above calculations using , and store so that the sequence is still non-increasing. Note that for this pair of paths, will never be accessed, so their values can be anything. See Figure 7.
The total size of these non-increasing sequences, stored using Lemma 3, is then bits. The time to compute the distance query is . We may also compute the shortest path by storing the vertex giving rise to the value of . This takes a further bits.
The total space needed is bits for the necessary data structures from Munro and Wu’s data structure. Our data structure for computing the existence of the vertex uses bits for the distance query. However, bits are needed to store the vertices themselves for the spath query, which asks for the vertices on a shortest path.
Putting the above together, we obtain the following theorem:
Theorem 19.
Let be a chordal graph with bounded leafage . There exists a data structure occupying bits of space supporting adjacent queries in time, and neighborhood in time per neighbour. Furthermore, we may iterate through the neighbourhood using words to denote the current neighbour in the list, in amortized time per neighbour.
We may support distance queries using additional bits in time (for any constant ), and shortest path queries using additional bits in time per vertex on the path.
References
- [1] Hüseyin Acan, Sankardeep Chakraborty, Seungbum Jo, and Srinivasa Rao Satti. Succinct encodings for families of interval graphs. Algorithmica, 83(3):776–794, 2021. doi:10.1007/s00453-020-00710-w.
- [2] Girish Balakrishnan, Sankardeep Chakraborty, Seungbum Jo, N. S. Narayanaswamy, and Kunihiko Sadakane. Succinct data structure for graphs with d-dimensional t-representation. In Ali Bilgin, James E. Fowler, Joan Serra-Sagristà, Yan Ye, and James A. Storer, editors, Data Compression Conference, DCC 2024, Snowbird, UT, USA, March 19-22, 2024, page 546. IEEE, 2024. doi:10.1109/DCC58796.2024.00063.
- [3] Girish Balakrishnan, Sankardeep Chakraborty, N. S. Narayanaswamy, and Kunihiko Sadakane. Succinct data structure for chordal graphs with bounded vertex leafage. In Hans L. Bodlaender, editor, 19th Scandinavian Symposium and Workshops on Algorithm Theory, SWAT 2024, June 12-14, 2024, Helsinki, Finland, volume 294 of LIPIcs, pages 4:1–4:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPIcs.SWAT.2024.4.
- [4] Girish Balakrishnan, Sankardeep Chakraborty, N.S. Narayanaswamy, and Kunihiko Sadakane. Succinct data structure for path graphs. Information and Computation, 296:105124, 2024. doi:10.1016/j.ic.2023.105124.
- [5] E. A. Bender, L. B. Richmond, and N. C. Wormald. Almost all chordal graphs split. Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics, 38(2):214–221, 1985. doi:10.1017/S1446788700023077.
- [6] C. Berge. Some classes of perfect graphs. In Graph Theory and Theoretical Physics, pages 155–165. Academic Press, London-New York, 1967.
- [7] Kellogg S. Booth and George S. Lueker. Testing for the consecutive ones property, interval graphs, and graph planarity using pq-tree algorithms. J. Comput. Syst. Sci., 13(3):335–379, 1976. doi:10.1016/S0022-0000(76)80045-1.
- [8] Prosenjit Bose, Meng He, Anil Maheshwari, and Pat Morin. Succinct orthogonal range search structures on a grid with applications to text indexing. In Frank K. H. A. Dehne, Marina L. Gavrilova, Jörg-Rüdiger Sack, and Csaba D. Tóth, editors, Algorithms and Data Structures, 11th International Symposium, WADS 2009, Banff, Canada, August 21-23, 2009. Proceedings, volume 5664 of Lecture Notes in Computer Science, pages 98–109. Springer, 2009. doi:10.1007/978-3-642-03367-4_9.
- [9] Boris Bukh and R. Amzi Jeffs. Enumeration of interval graphs and -representable complexes, 2023. doi:10.48550/arXiv.2203.12063.
- [10] Peter Buneman. A characterisation of rigid circuit graphs. Discret. Math., 9(3):205–212, 1974. doi:10.1016/0012-365X(74)90002-8.
- [11] Steven Chaplick and Juraj Stacho. The vertex leafage of chordal graphs. Discret. Appl. Math., 168:14–25, 2014. Fifth Workshop on Graph Classes, Optimization, and Width Parameters, Daejeon, Korea, October 2011. doi:10.1016/j.dam.2012.12.006.
- [12] David Clark. Compact PAT trees. PhD thesis, University of Waterloo, 1997. URL: http://hdl.handle.net/10012/64.
- [13] G. A. Dirac. On rigid circuit graphs. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 25:71–76, 1961. URL: https://api.semanticscholar.org/CorpusID:120608513.
- [14] Arash Farzan and Shahin Kamali. Compact navigation and distance oracles for graphs with small treewidth. Algorithmica, 69(1):92–116, 2014. doi:10.1007/s00453-012-9712-9.
- [15] Johannes Fischer and Volker Heun. A new succinct representation of rmq-information and improvements in the enhanced suffix array. In Bo Chen, Mike Paterson, and Guochuan Zhang, editors, Combinatorics, Algorithms, Probabilistic and Experimental Methodologies, pages 459–470, Berlin, Heidelberg, 2007. Springer Berlin Heidelberg. doi:10.1007/978-3-540-74450-4_41.
- [16] Michael L. Fredman and Dan E. Willard. Surpassing the information theoretic bound with fusion trees. J. Comput. Syst. Sci., 47(3):424–436, 1993. doi:10.1016/0022-0000(93)90040-4.
- [17] Fănică Gavril. A recognition algorithm for the intersection graphs of paths in trees. Discrete Mathematics, 23(3):211–227, 1978. doi:10.1016/0012-365X(78)90003-1.
- [18] Fǎnicǎ Gavril. The intersection graphs of subtrees in trees are exactly the chordal graphs. Journal of Combinatorial Theory, Series B, 16(1):47–56, 1974. doi:10.1016/0095-8956(74)90094-X.
- [19] Isaac Goldstein, Tsvi Kopelowitz, Moshe Lewenstein, and Ely Porat. Conditional lower bounds for space/time tradeoffs. In Faith Ellen, Antonina Kolokolova, and Jörg-Rüdiger Sack, editors, Algorithms and Data Structures, pages 421–436, Cham, 2017. Springer International Publishing. doi:10.1007/978-3-319-62127-2_36.
- [20] Martin Charles Golumbic. Algorithmic graph theory and perfect graphs. Elsevier, 1980.
- [21] Meng He, J. Ian Munro, Yakov Nekrich, Sebastian Wild, and Kaiyu Wu. Distance oracles for interval graphs via breadth-first rank/select in succinct trees. In Yixin Cao, Siu-Wing Cheng, and Minming Li, editors, 31st International Symposium on Algorithms and Computation, ISAAC 2020, December 14-18, 2020, Hong Kong, China (Virtual Conference), volume 181 of LIPIcs, pages 25:1–25:18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.ISAAC.2020.25.
- [22] Meng He, J. Ian Munro, and Kaiyu Wu. Succinct data structures for path graphs and chordal graphs revisited. In Ali Bilgin, James E. Fowler, Joan Serra-Sagristà, Yan Ye, and James A. Storer, editors, Data Compression Conference, DCC 2024, Snowbird, UT, USA, March 19-22, 2024, pages 492–501. IEEE, 2024. doi:10.1109/DCC58796.2024.00057.
- [23] In-Jen Lin, Terry A. McKee, and Douglas B. West. The leafage of a chordal graph. Discussiones Mathematicae Graph Theory, 18(1):23–48, 1998. doi:10.7151/dmgt.1061.
- [24] Clyde L. Monma and Victor K.-W. Wei. Intersection graphs of paths in a tree. Journal of Combinatorial Theory, Series B, 41(2):141–181, 1986. doi:10.1016/0095-8956(86)90042-0.
- [25] J. Ian Munro and Kaiyu Wu. Succinct data structures for chordal graphs. In 29th International Symposium on Algorithms and Computation, ISAAC 2018, December 16-19, 2018, Jiaoxi, Yilan, Taiwan, pages 67:1–67:12, 2018. doi:10.4230/LIPIcs.ISAAC.2018.67.
- [26] Mihai Patrascu and Liam Roditty. Distance oracles beyond the thorup-zwick bound. SIAM J. Comput., 43(1):300–311, 2014. doi:10.1137/11084128X.
- [27] Mihai Patrascu and Mikkel Thorup. Dynamic integer sets with optimal rank, select, and predecessor search. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 166–175, 2014. doi:10.1109/FOCS.2014.26.
- [28] Rajeev Raman, Venkatesh Raman, and Srinivasa Rao Satti. Succinct indexable dictionaries with applications to encoding k-ary trees, prefix sums and multisets. ACM Transactions on Algorithms, 3(4):43, November 2007. doi:10.1145/1290672.1290680.
- [29] Donald J Rose. Triangulated graphs and the elimination process. Journal of Mathematical Analysis and Applications, 32(3):597–609, 1970. doi:10.1016/0022-247X(70)90282-9.
- [30] Konstantinos Tsakalidis, Sebastian Wild, and Viktor Zamaraev. Succinct permutation graphs. Algorithmica, 85(2):509–543, 2023. doi:10.1007/s00453-022-01039-2.
- [31] Nicholas C. Wormald. Counting labelled chordal graphs. Graphs and Combinatorics, 1(1):193–200, 1985. doi:10.1007/BF02582944.
