Improved Circular Dictionary Matching
Abstract
The circular dictionary matching problem is an extension of the classical dictionary matching problem where every string in the dictionary is interpreted as a circular string: after reading the last character of a string, we can move back to its first character. The circular dictionary matching problem is motivated by applications in bioinformatics and computational geometry.
In 2011, Hon et al. [ISAAC 2011] showed how to efficiently solve circular dictionary matching queries within compressed space by building on Mantaci et al.’s eBWT and Sadakane’s compressed suffix tree. The proposed solution is based on the assumption that the strings in the dictionary are all distinct and non-periodic, no string is a circular rotation of some other string, and the strings in the dictionary have similar lengths.
In this paper, we consider arbitrary dictionaries, and we show how to solve circular dictionary matching queries in $O((m + occ)\log n)$ time within compressed space using $n\log\sigma(1 + o(1)) + O(n) + O(d\log n)$ bits, where $n$ is the total length of the dictionary, $m$ is the length of the pattern, $occ$ is the number of occurrences, $d$ is the number of strings in the dictionary and $\sigma$ is the size of the alphabet. Our solution is based on an extension of the suffix array to arbitrary dictionaries and a sampling mechanism for the LCP array of a dictionary inspired by recent results in graph indexing and compression.
Keywords and phrases: Circular pattern matching, dictionary matching, suffix tree, compressed suffix tree, suffix array, LCP array, Burrows-Wheeler Transform, FM-index
2012 ACM Subject Classification: Theory of computation → Pattern matching
Acknowledgements: I would like to thank Kunihiko Sadakane for introducing me to the topic.
Funding: Funded by the Helsinki Institute for Information Technology (HIIT).
Editors: Paola Bonizzoni and Veli Mäkinen
Series and Publisher: Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
The Burrows-Wheeler transform (BWT) [12] and the FM-index [27] support pattern matching on the compressed representation of a single string. If we want to encode a collection of strings, we may append a distinct end-of-string character to each string and store the Burrows-Wheeler transform of the concatenation of the strings. However, this approach is only a naive extension of the BWT, and it can significantly increase the size of the alphabet. In 2007, Mantaci et al. introduced the eBWT [40], a more sophisticated and elegant extension of the BWT to a collection of strings. When sorting all suffixes of all strings in the collection, Mantaci et al. define the mutual order between two suffixes $\alpha$ and $\beta$ to be the mutual order of $\alpha^\omega$ and $\beta^\omega$ in the lexicographic order, where $\alpha^\omega$ and $\beta^\omega$ are the infinite strings obtained by concatenating $\alpha$ with itself and $\beta$ with itself infinitely many times (see [13] for a recent paper on different variants of the eBWT). From the definition of the eBWT we obtain that, if we extend the backward search mechanism of the FM-index to multisets of strings, we are not matching a pattern against all suffixes $\alpha$, but against all infinite strings $\alpha^\omega$. Equivalently, we are interpreting each string in the collection as a circular string in which, after reaching the last character of the string, we can go back to its first character.
In 2011, Hon et al. applied the framework of the eBWT to solve circular dictionary matching queries [36], although they explicitly pointed out the relationship between their techniques (which extend Sadakane's compressed suffix tree [45] to a collection of strings) and the eBWT only in a subsequent paper [32]. The circular dictionary matching problem is an extension of the dictionary matching problem, which admits a classical solution based on Aho-Corasick automata [1], as well as more recent solutions within compressed space [8, 34]. Other variations of the same problem include dictionary matching with gaps [6, 35, 5] or mismatches [30], dictionary matching in the streaming model [16], dynamic dictionary matching [31] and internal dictionary matching [14]. The circular dictionary matching problem is motivated by several applications, see [38]. In bioinformatics, the genomes of herpes simplex virus (HSV-1) and many other viruses exist as circular strings [47], and microbial samples collected directly from the environment, without isolating and culturing them, contain circular chromosomes [26, 46]. In computational geometry, a polygon can be stored by listing its vertices in clockwise order, and matching a polygon independently of the starting vertex is an inherently circular problem. The circular dictionary matching problem has also been studied from other perspectives (average-case behavior [37], approximate variants [15]).
1.1 Our Contribution
Consider a dictionary $\mathcal{D} = \{T_1, T_2, \ldots, T_d\}$ of total length $n$ on an alphabet of size $\sigma$. In [36], Hon et al. show that, by storing a data structure of $O(n\log\sigma)$ bits, it is possible to solve circular dictionary matching queries in $O((m + occ)\log n)$ time, where $m$ is the length of the pattern and $occ$ is the number of occurrences. This result holds under three assumptions: (i) the $T_j$'s are distinct and not periodic, (ii) no $T_j$ is a circular rotation of some other $T_{j'}$, and (iii) the $T_j$'s have bounded aspect ratio, namely, $\max_j |T_j| = O(\min_j |T_j|)$. Assumption (ii) was made explicit only in the subsequent paper [32], where Hon et al. also mention that, in an extension of the paper, they will show how to remove assumptions (i)-(ii). Assumption (iii) is required to store a space-efficient data structure supporting longest common prefix (LCP) queries. In [33], Hon et al. sketched a new data structure for LCP queries that uses $O(n)$ bits without requiring assumption (iii), but all details are again deferred to an extended version.
The main result of our paper is Theorem 1. In particular, we give two main contributions.
- We obtain results that are valid for an arbitrary dictionary $\mathcal{D}$, without imposing any restriction. To this end, in Section 3.2 we introduce a more general compressed suffix array for an arbitrary collection of strings. In particular, we support LCP functionality within only $O(n)$ bits by using a sampling mechanism inspired by recent results [23, 17] on Wheeler automata.
- We provide a self-contained proof of our result and a full example of the data structure for solving circular dictionary matching queries. The original paper [36] contains the main ideas needed to extend Sadakane's compressed suffix tree to a multiset of strings, but it is very dense, and proofs are often only sketched or missing, which may explain why the additional properties needed to potentially remove assumptions (i)-(iii) were only observed in subsequent papers. We will provide an intuitive explanation of several steps by using graphs (and in particular cycles, see Figure 2), consistently with a research line that has shown how the suffix array, the Burrows-Wheeler transform and the FM-index admit a natural interpretation in the graph setting [29, 3, 24, 22, 18, 4, 2, 25, 20].
We will also show that the time bound of Theorem 1 can be improved when the number of occurrences $occ$ is large.
The paper is organized as follows. In Section 2 we introduce the circular dictionary matching problem. In Section 3 we define our compressed suffix tree. In Section 4 we present an algorithm for circular dictionary matching. Due to space constraints, most proofs and some auxiliary results can be found in the full version [21].
2 Preliminaries
Let $\Sigma$ be a finite alphabet of size $\sigma$. We denote by $\Sigma^*$ the set of all finite strings on $\Sigma$ (including the empty string $\varepsilon$) and by $\Sigma^\omega$ the set of all countably infinite strings on $\Sigma$. For example, if $\Sigma = \{a, b\}$, then $aab$ is a string in $\Sigma^*$ and $ababab\cdots$ is a string in $\Sigma^\omega$. If $\alpha \in \Sigma^*$, we denote by $|\alpha|$ the length of $\alpha$. If $1 \le i \le |\alpha|$, we denote by $\alpha[i]$ the $i$-th character of $\alpha$, and if $1 \le i, j \le |\alpha|$, we define $\alpha[i..j] = \alpha[i]\alpha[i+1]\cdots\alpha[j]$. If $i > j$, we define $\alpha[i..j] = \varepsilon$. Analogously, if $\alpha \in \Sigma^\omega$, then $\alpha[i]$ is the $i$-th character of $\alpha$, we have $\alpha[i..j] = \alpha[i]\alpha[i+1]\cdots\alpha[j]$ for every $1 \le i \le j$, and if $i \ge 1$ we have $\alpha[i..\infty] = \alpha[i]\alpha[i+1]\alpha[i+2]\cdots$.
If $\alpha \in \Sigma^* \setminus \{\varepsilon\}$, then for every integer $k \ge 1$ the string $\alpha^k \in \Sigma^*$ is defined by $\alpha^k = \alpha\alpha\cdots\alpha$, where $\alpha$ is repeated $k$ times, and the string $\alpha^\omega \in \Sigma^\omega$ is defined by $\alpha^\omega = \alpha\alpha\alpha\cdots$, where $\alpha$ is repeated infinitely many times. Given $\alpha, \beta \in \Sigma^* \cup \Sigma^\omega$, let $\mathrm{lcp}(\alpha, \beta)$ be the length of the longest common prefix between $\alpha$ and $\beta$.
We say that a string $\alpha \in \Sigma^*$ is primitive if for every $\beta \in \Sigma^*$ and for every integer $k \ge 1$, if $\alpha = \beta^k$, then $\beta = \alpha$ and $k = 1$. For every $\alpha \in \Sigma^* \setminus \{\varepsilon\}$, there exist exactly one primitive string $\beta$ and exactly one integer $k \ge 1$ such that $\alpha = \beta^k$ (see [39, Prop. 1.3.1]); we say that $\beta$ is the root of $\alpha$ and we write $\beta = \mathrm{root}(\alpha)$.
Let $S$ be a string of length $n$. We define the following queries on $S$: (i) $\mathsf{access}(S, i)$: given $1 \le i \le n$, return $S[i]$; (ii) $\mathsf{rank}_c(S, i)$: given $c \in \Sigma$ and $0 \le i \le n$, return $|\{1 \le i' \le i \mid S[i'] = c\}|$; (iii) $\mathsf{select}_c(S, i)$: given $c \in \Sigma$ and $1 \le i \le \mathsf{rank}_c(S, n)$, return the unique $i'$ such that $S[i'] = c$ and $\mathsf{rank}_c(S, i') = i$. To handle limit cases easily, it is expedient to define $\mathsf{select}_c(S, 0) = 0$. A bit array of length $n$ can be stored using a data structure (called bitvector) of $n + o(n)$ bits that supports $\mathsf{access}$, $\mathsf{rank}$ and $\mathsf{select}$ queries in $O(1)$ time [42].
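To make the rank/select machinery concrete, here is a minimal Python sketch of a bitvector supporting these three queries. It is naive (it spends linear extra space on auxiliary tables and makes no attempt at succinctness) and only illustrates the interface assumed throughout the paper; the $n + o(n)$-bit, $O(1)$-time structures of [42] are considerably more sophisticated.

```python
class Bitvector:
    """naive rank/select/access over a 0/1 array (1-based queries)"""
    def __init__(self, bits):
        self.bits = list(bits)
        self.prefix_ones = [0]                  # prefix_ones[i] = ones in B[1..i]
        for b in self.bits:
            self.prefix_ones.append(self.prefix_ones[-1] + b)
        self.pos = {0: [], 1: []}               # positions of zeros and of ones
        for i, b in enumerate(self.bits, start=1):
            self.pos[b].append(i)

    def access(self, i):
        return self.bits[i - 1]

    def rank(self, c, i):
        ones = self.prefix_ones[i]
        return ones if c == 1 else i - ones

    def select(self, c, i):                     # select(c, 0) = 0, as in the text
        return 0 if i == 0 else self.pos[c][i - 1]

B = Bitvector([1, 0, 0, 1, 0, 1])
assert B.rank(1, 4) == 2 and B.select(1, 3) == 6 and B.access(4) == 1
```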
We consider a fixed total order $\preceq$ on $\Sigma$ and we extend it to $\Sigma^* \cup \Sigma^\omega$ lexicographically. For every $\alpha, \beta \in \Sigma^* \cup \Sigma^\omega$, we write $\alpha \prec \beta$ if $\alpha \preceq \beta$ and $\alpha \ne \beta$. In our examples (see e.g. Figure 1), $\Sigma$ is a subset of the English alphabet and $\preceq$ is the usual order on letters. In our data structures, $\Sigma = \{1, 2, \ldots, \sigma\}$ and $\preceq$ is the usual order such that $1 \prec 2 \prec \cdots \prec \sigma$.
2.1 Circular Dictionary Matching
Let $\alpha, \beta \in \Sigma^*$, with $\alpha$ nonempty. For every $1 \le i \le |\beta| - |\alpha| + 1$, we say that $\alpha$ occurs in $\beta$ at position $i$ if $\alpha = \beta[i..i+|\alpha|-1]$.
Let $\alpha \in \Sigma^* \setminus \{\varepsilon\}$ and let $1 \le k \le |\alpha|$. The $k$-th circular suffix of $\alpha$ is the string $\alpha[k..|\alpha|]\alpha[1..k-1]$. For example, if $\alpha = abaab$, then the first circular suffix of $\alpha$ is $abaab$, the second one is $baaba$, and the fifth one is $babaa$.
Let $d \ge 1$, and let $T_1, T_2, \ldots, T_d \in \Sigma^*$ be nonempty strings. We say that $\mathcal{D} = \{T_1, T_2, \ldots, T_d\}$ is a dictionary. We assume that the alphabet $\Sigma$ is effective, that is, every character in $\Sigma$ occurs in some $T_j$ (our results could be extended to larger alphabets by using standard tools such as dictionaries [42, 22]). Define the total length of $\mathcal{D}$ to be $n = \sum_{j=1}^{d} |T_j|$ (see Figure 1(a) for an example). For every $1 \le i \le n$, the $i$-th circular suffix $R_i$ of $\mathcal{D}$ is the $i'$-th circular suffix of $T_j$, where $j$ is the largest integer in $\{1, \ldots, d\}$ such that $|T_1| + \cdots + |T_{j-1}| < i$ and $i'$ is such that $i = |T_1| + \cdots + |T_{j-1}| + i'$. In other words, the $i$-th circular suffix of $\mathcal{D}$ is the circular suffix starting at position $i$ in the concatenation $T_1 T_2 \cdots T_d$ (see Figure 1 for examples). Given a pattern $P \in \Sigma^*$, where $m = |P| \ge 1$, define:
$$\mathsf{CDM}(P) = \{(i, j, k) \mid 1 \le i \le m,\ 1 \le j \le d,\ 1 \le k \le |T_j|,\ \text{the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$}\}$$
and let $occ = |\mathsf{CDM}(P)|$. For example, if $P = ababa$ and $T_1 = ab$, then $(1, 1, 1), (3, 1, 1), (2, 1, 2), (4, 1, 2) \in \mathsf{CDM}(P)$, because both circular suffixes $ab$ and $ba$ of $T_1$ occur twice in $P$.
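The following Python sketch is a naive, quadratic-time reference implementation of the query we have just defined, under the conventions above (1-based positions, occurrences reported as triples $(i, j, k)$). The dictionary and the pattern are hypothetical; the sketch only pins down the semantics of $\mathsf{CDM}(P)$, not the indexed algorithm of this paper.

```python
def circular_suffix(T, k):
    """the k-th circular suffix (rotation) of T, 1-based"""
    return T[k - 1:] + T[:k - 1]

def cdm_naive(dictionary, P):
    """all triples (i, j, k): the k-th circular suffix of T_j occurs in P at i"""
    m, result = len(P), set()
    for j, T in enumerate(dictionary, start=1):
        for k in range(1, len(T) + 1):
            R = circular_suffix(T, k)
            for i in range(1, m - len(R) + 2):
                if P[i - 1:i - 1 + len(R)] == R:
                    result.add((i, j, k))
    return result

# the example given above: both rotations of T_1 = "ab" occur twice in "ababa"
assert cdm_naive(["ab"], "ababa") == {(1, 1, 1), (3, 1, 1), (2, 1, 2), (4, 1, 2)}
```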
The main result of this paper is the following.
Theorem 1.
Let $\mathcal{D} = \{T_1, \ldots, T_d\}$ be a dictionary of total length $n$. Then, $\mathcal{D}$ can be encoded using a data structure of $n\log\sigma(1 + o(1)) + O(n) + O(d\log n)$ bits such that, given a string $P$ of length $m$, we can compute $\mathsf{CDM}(P)$ in $O((m + occ)\log n)$ time.
We can also improve the time bound in Theorem 1 to $O(m\log n + n)$, without needing to know the value $occ$ in advance. This is useful if $occ$ is large.
Following Hon et al. [36], we will build a data structure that extends Sadakane's compressed suffix tree to $\mathcal{D}$. We will achieve the space bound in Theorem 1 as follows. (i) We will store the FM-index of $\mathcal{D}$ using $n\log\sigma(1 + o(1)) + O(n)$ bits, see Theorem 9. (ii) We will introduce a notion of suffix array (Section 3.2) and a notion of longest common prefix (LCP) array (Section 3.3). Storing the suffix array and the LCP array explicitly would require $\Theta(n\log n)$ bits so, in both cases, we will only store a sample (Theorem 10 and Theorem 13), leading to a compressed representation of both arrays. (iii) We will store some auxiliary data structures ($O(n) + O(d\log n)$ bits in total): eight bitvectors, a data structure supporting range minimum queries on the LCP array (see Section 3.3), a data structure storing the topology of the suffix tree (see Section 3.4), a data structure storing the topology of an auxiliary tree (see Section 4), and a data structure supporting range minimum queries on an auxiliary array (see Section 4).
3 The Compressed Suffix Tree of $\mathcal{D}$
In this section, we extend Sadakane's compressed suffix tree to the dictionary $\mathcal{D}$. To this end, we will introduce a BWT-based encoding of $\mathcal{D}$ (Section 3.1), a compressed suffix array (Section 3.2), an LCP array (Section 3.3), and the topology of the suffix tree (Section 3.4).
3.1 The Burrows-Wheeler Transform and the FM-index of $\mathcal{D}$
Let us consider our dictionary $\mathcal{D} = \{T_1, \ldots, T_d\}$ (recall that $n = \sum_{j=1}^{d} |T_j|$). We can naturally define a bijective function $f$ from the set $\{(j, i') \mid 1 \le j \le d,\ 1 \le i' \le |T_j|\}$ to the set $\{1, \ldots, n\}$ such that, if $f(j, i') = i$, then the $i$-th character of the concatenation $T_1 T_2 \cdots T_d$ is the $i'$-th character of $T_j$ (see Figure 1 for an example). Let us define a bitvector $B_{\mathsf{len}}$ to compute $f$ and $f^{-1}$ in $O(1)$ time. $B_{\mathsf{len}}$ is the bitvector of length $n$ that contains exactly $d$ ones such that, for every $1 \le j \le d$, the number of consecutive zeros after the $j$-th one is $|T_j| - 1$ (see Figure 1(a)). Assume that $\mathsf{select}_1(B_{\mathsf{len}}, d+1) = n + 1$. Given $1 \le i \le n$, we have $f^{-1}(i) = (j, i')$, where $j = \mathsf{rank}_1(B_{\mathsf{len}}, i)$ and $i' = i - \mathsf{select}_1(B_{\mathsf{len}}, j) + 1$. Conversely, given $j$ and $i'$, we have $f(j, i') = \mathsf{select}_1(B_{\mathsf{len}}, j) + i' - 1$. Note that in $O(1)$ time we can also compute $|T_j|$ for every $1 \le j \le d$, because $|T_j| = \mathsf{select}_1(B_{\mathsf{len}}, j+1) - \mathsf{select}_1(B_{\mathsf{len}}, j)$.
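As an illustration, the following Python sketch implements the mapping $f$, its inverse and the computation of $|T_j|$ exactly as described, using rank/select queries on the boundary bitvector (rank and select are computed naively here, and the dictionary is hypothetical).

```python
D = ["abaab", "ab", "ba"]                      # hypothetical dictionary
bits = []
for T in D:
    bits.extend([1] + [0] * (len(T) - 1))      # a 1, then |T_j| - 1 zeros
n, d = len(bits), len(D)

def rank1(i):
    return sum(bits[:i])                       # naive rank

def select1(j):                                # naive select, select1(d + 1) = n + 1
    return n + 1 if j == d + 1 else [p for p, b in enumerate(bits, 1) if b][j - 1]

def f(j, ip):                                  # (j, i') -> position in concatenation
    return select1(j) + ip - 1

def f_inv(i):                                  # position in concatenation -> (j, i')
    j = rank1(i)
    return j, i - select1(j) + 1

def length(j):                                 # |T_j| via two select queries
    return select1(j + 1) - select1(j)

assert f(2, 1) == 6 and f_inv(7) == (2, 2) and length(1) == 5
```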
To exploit the circular nature of the dictionary, we will not sort the finite strings $R_i$, but we will sort the infinite strings $R_i^\omega$, and the suffix tree of $\mathcal{D}$ will be the trie of the $R_i^\omega$'s. Notice that we may have $R_i^\omega = R_{i'}^\omega$ for distinct $i, i'$ (see Figure 1 for an example). The next lemma shows that this happens exactly when $R_i$ and $R_{i'}$ have the same root.
Lemma 2.
Let $\mathcal{D}$ be a dictionary, and let $1 \le i, i' \le n$. Then, $R_i^\omega = R_{i'}^\omega$ if and only if $\mathrm{root}(R_i) = \mathrm{root}(R_{i'})$.
The next step is to sort the "suffixes" $R_i^\omega$. Since in general the $R_i^\omega$'s are not pairwise distinct, we will use an ordered partition (see Figure 1(b)).
Definition 3.
Let $\mathcal{D}$ be a dictionary.
- Let $I_1, I_2, \ldots, I_{n'}$ be the ordered partition of $\{1, \ldots, n\}$ such that, for every $1 \le s, t \le n'$, (i) if $i$ and $i'$ are in the same $I_s$, then $R_i^\omega = R_{i'}^\omega$ and (ii) if $s < t$, $i \in I_s$ and $i' \in I_t$, then $R_i^\omega \prec R_{i'}^\omega$.
- For every $1 \le s \le n'$, let $W_s = R_i^\omega$, where $i$ is such that $i \in I_s$.
Note that $W_s$ is well defined because it does not depend on the choice of $i$ by the definition of $I_s$. Note also that $W_1 \prec W_2 \prec \cdots \prec W_{n'}$.
Let $1 \le i \le n$, and assume that $f^{-1}(i) = (j, i')$. Define $\mathrm{succ}(i)$ as follows: (i) if $i' < |T_j|$, then $\mathrm{succ}(i) = i + 1$, and (ii) if $i' = |T_j|$, then $\mathrm{succ}(i) = f(j, 1)$. In other words, $\mathrm{succ}(i)$ equals $i + 1$, modulo staying within the same string of $\mathcal{D}$ (see Figure 1(a) for an example). For every $1 \le i \le n$, we can compute $\mathrm{succ}(i)$ in $O(1)$ time by using the bitvector $B_{\mathsf{len}}$: if $i = n$ or $B_{\mathsf{len}}[i+1] = 1$, then $\mathrm{succ}(i) = \mathsf{select}_1(B_{\mathsf{len}}, \mathsf{rank}_1(B_{\mathsf{len}}, i))$, and otherwise $\mathrm{succ}(i) = i + 1$.
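A minimal Python sketch of $\mathrm{succ}$, computed as above from the same boundary bitvector (naive rank/select, hypothetical dictionary):

```python
D = ["abaab", "ab", "ba"]                      # hypothetical dictionary
bits = []
for T in D:
    bits.extend([1] + [0] * (len(T) - 1))
n = len(bits)

def rank1(i):
    return sum(bits[:i])

def select1(j):
    return [p for p, b in enumerate(bits, 1) if b][j - 1]

def succ(i):
    """i + 1, modulo staying within the same string of the dictionary"""
    if i == n or bits[i] == 1:    # bits[i] (0-based) is position i + 1: a new string starts
        return select1(rank1(i))  # wrap around to the first position of i's string
    return i + 1

assert [succ(i) for i in range(1, 6)] == [2, 3, 4, 5, 1]   # inside T_1 = "abaab"
assert succ(7) == 6                                        # T_2 = "ab" sits at positions 6, 7
```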
If $k \ge 0$, define $\mathrm{succ}^k = \mathrm{succ} \circ \cdots \circ \mathrm{succ}$ ($k$ times). Since the function $\mathrm{succ}$ is a permutation of the set $\{1, \ldots, n\}$, we always have $R_{\mathrm{succ}^k(i)}^\omega = R_i^\omega[k+1..\infty]$ for every $1 \le i \le n$ and every $k \ge 0$. Let us prove that $\mathrm{succ}$ yields a permutation of the $I_s$'s.
Lemma 4.
Let $\mathcal{D}$ be a dictionary, and let $1 \le s \le n'$. Then, there exists $1 \le t \le n'$ such that $\mathrm{succ}(i) \in I_t$ for every $i \in I_s$. Moreover, if $i \in I_s$, we have $R_{\mathrm{succ}(i)}^\omega = W_s[2..\infty]$.
For every $1 \le s \le n'$, let $\psi(s) = t$, where $t$ is as in Lemma 4. Notice that $\psi$ is a permutation of $\{1, \ldots, n'\}$ because it is surjective: indeed, for every $1 \le t \le n'$, if we pick any $i' \in I_t$, and consider $i$ such that $\mathrm{succ}(i) = i'$ and $s$ such that $i \in I_s$, then $\psi(s) = t$. Moreover, for every $1 \le s \le n'$ define $\lambda(s) = W_s[1]$. By Lemma 4, we know that $W_s = \lambda(s) W_{\psi(s)}$ for every $1 \le s \le n'$.
We can visualize $\psi$ by drawing a (directed edge-labeled) graph $G_{\mathcal{D}} = (V, E)$, where $V = \{1, \ldots, n'\}$ and $E = \{(s, \psi(s), \lambda(s)) \mid 1 \le s \le n'\}$ (an edge $(s, t, c)$ goes from $s$ to $t$ and is labeled $c$), see Figure 2. Since $\psi$ is a permutation of $V$, every node of $G_{\mathcal{D}}$ has exactly one incoming edge and exactly one outgoing edge, so $G_{\mathcal{D}}$ is the disjoint union of some cycles. Moreover, for every $1 \le s \le n'$ the infinite string that we can read starting from node $s$ and following the edges of the corresponding cycle is $W_s$, because we know that $W_s = \lambda(s) W_{\psi(s)}$ for every $1 \le s \le n'$ (see Figure 2 for an example).
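The following Python sketch builds $\psi$ and $\lambda$ naively for a hypothetical dictionary, comparing two infinite strings through their prefixes of length $2n$ (a length that suffices to distinguish them, cf. Lemma 12 below), and checks that reading edge labels along a cycle spells out the corresponding $W_s$.

```python
D = ["abaab", "ab", "ba"]                      # hypothetical dictionary
n = sum(len(T) for T in D)

def omega_prefix(R, length=None):
    """prefix of R^omega of the given length (default 2n)"""
    length = 2 * n if length is None else length
    return (R * (length // len(R) + 1))[:length]

rots = [T[k:] + T[:k] for T in D for k in range(len(T))]   # all circular suffixes
keys = sorted({omega_prefix(R) for R in rots})             # W_1 < W_2 < ... < W_{n'}
cls = {W: s for s, W in enumerate(keys, 1)}

psi, lam = {}, {}
for R in rots:
    s = cls[omega_prefix(R)]
    lam[s] = R[0]                                  # lambda(s) = W_s[1]
    psi[s] = cls[omega_prefix(R[1:] + R[:1])]      # rotating R by one = succ

# following edges from node s and concatenating their labels spells out W_s
s, read = 1, ""
for _ in range(10):
    read, s = read + lam[s], psi[s]
assert keys[0].startswith(read)
```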
We will not explicitly build the graph $G_{\mathcal{D}}$ to solve circular dictionary matching queries, but $G_{\mathcal{D}}$ naturally captures the cyclic nature of $\mathcal{D}$ and will help us detect some properties of the $W_s$'s. For example, since we know that the infinite string starting from node $s$ is $W_s$, and $W_1 \prec W_2 \prec \cdots \prec W_{n'}$, we can easily infer the following properties (see Figure 2): if we consider two edges $(s, \psi(s), \lambda(s))$ and $(t, \psi(t), \lambda(t))$, then (i) if $\lambda(s) \prec \lambda(t)$, then $s < t$, and (ii) if $\lambda(s) = \lambda(t)$ and $\psi(s) < \psi(t)$, then $s < t$. We can formalize these properties in our graph-free setting as follows.
Lemma 5.
Let $\mathcal{D}$ be a dictionary, and let $1 \le s, t \le n'$ and $c, c' \in \Sigma$ be such that $c = \lambda(s)$ and $c' = \lambda(t)$.
1. (Property 1) If $c \prec c'$, then $s < t$.
2. (Property 2) If $c = c'$ and $\psi(s) < \psi(t)$, then $s < t$.
We now want to extend the backward search [27] to our dictionary $\mathcal{D}$. We will again use $G_{\mathcal{D}}$ to guide our intuition. Given $c \in \Sigma$ and given $U \subseteq \{1, \ldots, n'\}$, let $\mathrm{bs}(U, c)$ be the set of all $s$ such that there exists an edge $(s, t, c)$ for some $t \in U$ (see Figure 2 for an example). Notice that in the example $U$ is convex (that is, it is a set of consecutive integers), and $\mathrm{bs}(U, c)$ is also convex. This is true in general: $\mathrm{bs}(U, c)$ is always convex if $U$ is convex. Indeed, if $s < s' < s''$ and $s, s'' \in \mathrm{bs}(U, c)$, then from the properties of $G_{\mathcal{D}}$ mentioned before Lemma 5 we first conclude that the edge leaving node $s'$ is labeled with $c$, and then we infer that the node $\psi(s')$ reached by this edge must be such that $\psi(s) \le \psi(s') \le \psi(s'')$, which implies $\psi(s') \in U$ and hence $s' \in \mathrm{bs}(U, c)$. We can now formally define $\mathrm{bs}(U, c)$ in our graph-free setting and prove that $\mathrm{bs}(U, c)$ is convex if $U$ is convex.
Definition 6.
Let $\mathcal{D}$ be a dictionary, let $U \subseteq \{1, \ldots, n'\}$ and let $c \in \Sigma$. Define $\mathrm{bs}(U, c) = \{s \in \{1, \ldots, n'\} \mid \lambda(s) = c \text{ and } \psi(s) \in U\}$.
Note that, if $\alpha \in \Sigma^*$ and $U = \{s \mid \alpha \text{ is a prefix of } W_s\}$, then $\mathrm{bs}(U, c) = \{s \mid c\alpha \text{ is a prefix of } W_s\}$.
Lemma 7.
Let $\mathcal{D}$ be a dictionary, let $U \subseteq \{1, \ldots, n'\}$ and let $c \in \Sigma$. If $U$ is convex, then $\mathrm{bs}(U, c)$ is convex.
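The following Python sketch implements one backward-search step of Definition 6 naively, by scanning all classes; in the actual data structure, $\mathrm{bs}$ is supported with rank/select queries on the BWT (Theorem 9 below). As in the classical FM-index, feeding the pattern right to left maintains the set of all $W_s$'s prefixed by the part of the pattern read so far.

```python
D = ["abaab", "ab", "ba"]                      # hypothetical dictionary
n = sum(len(T) for T in D)
omega = lambda R, L=2 * n: (R * (L // len(R) + 1))[:L]
rots = [T[k:] + T[:k] for T in D for k in range(len(T))]
keys = sorted({omega(R) for R in rots})
n1 = len(keys)                                 # n', the number of classes

psi = {}
for R in rots:
    psi[keys.index(omega(R)) + 1] = keys.index(omega(R[1:] + R[:1])) + 1

def bs(U, c):
    """classes s with lambda(s) = c and psi(s) in U (Definition 6)"""
    return {s for s in range(1, n1 + 1) if keys[s - 1][0] == c and psi[s] in U}

# matching the pattern "ab" right to left against all the W_s's
U = set(range(1, n1 + 1))
for c in reversed("ab"):
    U = bs(U, c)
assert U == {s for s in range(1, n1 + 1) if keys[s - 1].startswith("ab")}
```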
Let us define the Burrows-Wheeler transform (BWT) of $\mathcal{D}$.
Definition 8.
Let $\mathcal{D}$ be a dictionary. Define the string $\mathsf{BWT}_{\mathcal{D}}$ of length $n'$ such that $\mathsf{BWT}_{\mathcal{D}}[s] = \lambda(\psi^{-1}(s))$ for every $1 \le s \le n'$.
In other words, $\mathsf{BWT}_{\mathcal{D}}[s]$ is the label of the edge reaching node $s$, for every $1 \le s \le n'$ (see Figure 1(b) and Figure 2). The data structure that we will define in Theorem 9 is based on two sequences, $F$ and $B_F$, that are related to $\mathsf{BWT}_{\mathcal{D}}$ (see Figure 1(b)). The sequence $F$ is obtained from $\mathsf{BWT}_{\mathcal{D}}$ by sorting its elements. We know that for every pair of edges $(s, \psi(s), \lambda(s))$ and $(t, \psi(t), \lambda(t))$, if $\lambda(s) \prec \lambda(t)$, then $s < t$, so $F[s]$ is the label of the edge leaving node $s$, which implies $F[s] = \lambda(s)$ for every $1 \le s \le n'$. The bitvector $B_F$ has length $n'$ and is obtained by marking with 1 the beginning of each equal-letter run in $F$. Formally, for every $1 \le s \le n'$, we have $B_F[s] = 1$ if and only if $s = 1$ or $F[s] \ne F[s-1]$. Since the alphabet is effective, for every $c \in \Sigma$ the set $\{s \mid \mathsf{select}_1(B_F, c) \le s \le \mathsf{select}_1(B_F, c+1) - 1\}$ consists of all $s$ such that the edge leaving node $s$ is labeled with $c$ (recall that $\Sigma = \{1, \ldots, \sigma\}$ in our data structures, and assume $\mathsf{select}_1(B_F, \sigma + 1) = n' + 1$).
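The next Python sketch builds $\mathsf{BWT}_{\mathcal{D}}$, $F$ and the run-marking bitvector naively for a hypothetical dictionary, and checks the property $F[s] = \lambda(s)$ discussed above.

```python
D = ["abaab", "ab", "ba"]                      # hypothetical dictionary
n = sum(len(T) for T in D)
omega = lambda R, L=2 * n: (R * (L // len(R) + 1))[:L]
rots = [T[k:] + T[:k] for T in D for k in range(len(T))]
keys = sorted({omega(R) for R in rots})
n1 = len(keys)

psi, lam = {}, {}
for R in rots:
    s = keys.index(omega(R)) + 1
    lam[s], psi[s] = R[0], keys.index(omega(R[1:] + R[:1])) + 1

psi_inv = {t: s for s, t in psi.items()}               # psi is a permutation
BWT = "".join(lam[psi_inv[s]] for s in range(1, n1 + 1))  # label of edge reaching s
F = "".join(sorted(BWT))                                  # BWT sorted
assert all(F[s - 1] == lam[s] for s in range(1, n1 + 1))  # F[s] = lambda(s)
B_F = [1 if s == 1 or F[s - 1] != F[s - 2] else 0 for s in range(1, n1 + 1)]
```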
We can now extend the properties of the Burrows-Wheeler transform and the FM-index to $\mathcal{D}$. In particular, we will show that $\mathsf{BWT}_{\mathcal{D}}$ is an encoding of $G_{\mathcal{D}}$. This result shows that $\mathsf{BWT}_{\mathcal{D}}$ is related to the eBWT of Mantaci et al. [40], but there are two main differences: (1) we do not need assumptions (i)-(ii) in Section 1 to define $\mathsf{BWT}_{\mathcal{D}}$ (in particular, the $T_j$'s need not be primitive), and (2) $\mathsf{BWT}_{\mathcal{D}}$ is an encoding of $G_{\mathcal{D}}$ but not of $\mathcal{D}$ (namely, $\mathsf{BWT}_{\mathcal{D}}$ encodes the cyclic nature of the $T_j$'s and not a specific circular suffix of each $T_j$). To extend the FM-index to $\mathcal{D}$, we will once again base our intuition on $G_{\mathcal{D}}$.
Recall that two (directed edge-labeled) graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$ are isomorphic if there exists a bijection $\phi$ from $V_1$ to $V_2$ such that for every $u, v \in V_1$ and for every $c \in \Sigma$ we have $(u, v, c) \in E_1$ if and only if $(\phi(u), \phi(v), c) \in E_2$. In other words, two graphs are isomorphic if they are the same graph up to renaming the nodes.
Theorem 9.
Let $\mathcal{D}$ be a dictionary.
1. $\mathsf{BWT}_{\mathcal{D}}$ is an encoding of $G_{\mathcal{D}}$, that is, given $\mathsf{BWT}_{\mathcal{D}}$, we can retrieve $G_{\mathcal{D}}$ up to isomorphism.
2. There exists a data structure of $n\log\sigma(1 + o(1)) + O(n)$ bits encoding $\mathsf{BWT}_{\mathcal{D}}$ that supports the following queries in $O(\log\sigma)$ time: (i) $\mathsf{access}$, $\mathsf{rank}$, and $\mathsf{select}$ queries on $\mathsf{BWT}_{\mathcal{D}}$, (ii) $\mathrm{bs}(U, c)$: given a convex set $U = \{l, l+1, \ldots, r\}$ and $c \in \Sigma$, decide if $\mathrm{bs}(U, c)$ is empty and, if it is nonempty, return $l'$ and $r'$ such that $\mathrm{bs}(U, c) = \{l', l'+1, \ldots, r'\}$, (iii) $\psi$: given $1 \le s \le n'$, return $t$ such that $t = \psi(s)$, and (iv) $\psi^{-1}$: given $1 \le s \le n'$, return $t$ such that $\psi(t) = s$.
Note that, even if $\mathsf{BWT}_{\mathcal{D}}$ is an encoding of $G_{\mathcal{D}}$, it is not an encoding of $\mathcal{D}$ (in particular, from $\mathsf{BWT}_{\mathcal{D}}$ we cannot recover $\mathcal{D}$). This is true even when all $I_s$'s are singletons, because $\mathsf{BWT}_{\mathcal{D}}$ only stores circular strings, so we cannot retrieve the bitvector that marks the beginning of each string (we cannot even retrieve the specific enumeration $T_1, T_2, \ldots, T_d$ of the strings in $\mathcal{D}$).
3.2 The Compressed Suffix Array of $\mathcal{D}$
We know that $I_1, \ldots, I_{n'}$ is an ordered partition of $\{1, \ldots, n\}$, and we know that $W_1 \prec W_2 \prec \cdots \prec W_{n'}$. The suffix array of $\mathcal{D}$ should be defined in such a way that we can answer the following query: given $1 \le s \le n'$, return the set $I_s$.
In the following, we say that a position $1 \le i \le n$ refers to the string $T_j$ if $f^{-1}(i) = (j, i')$ for some $i'$. Every position refers to exactly one string. Notice that a set $I_s$ may contain elements that refer to the same string $T_j$. For example, if $T_j = abab$, then the first and the third circular suffixes of $T_j$ are both equal to $abab$, so the two corresponding positions belong to the same $I_s$ and both refer to $T_j$ (see Figure 1 for another example). This happens because $T_j$ is not a primitive string. Let us show that, if we know any element of $I_s$ referring to $T_j$, we can retrieve all elements of $I_s$ referring to $T_j$.
For every $1 \le j \le d$, let $\rho_j$ be the root of $T_j$, and let $p_j$ be the index in $\{1, \ldots, n\}$ corresponding to the first character of $T_j$ (that is, $p_j = f(j, 1)$). Then, $|T_j|$ is divisible by $|\rho_j|$, and for every position $i$ referring to $T_j$ the root of $R_i$ is the $k$-th circular suffix of $\rho_j$, where $k = ((i - p_j) \bmod |\rho_j|) + 1$.
Fix $1 \le s \le n'$ and $1 \le j \le d$, and let $i^*$ be any position in $I_s$ referring to $T_j$. For every $i$ referring to $T_j$, we have $i \in I_s$ if and only if $R_i^\omega = R_{i^*}^\omega$, if and only if $\mathrm{root}(R_i) = \mathrm{root}(R_{i^*})$ (by Lemma 2), if and only if $i - i^*$ is a multiple of $|\rho_j|$. Then, if $Z$ is the set of all elements of $I_s$ referring to $T_j$, we have $Z = \{p_j + ((i^* - p_j) \bmod |\rho_j|) + q|\rho_j| \mid 0 \le q < |T_j| / |\rho_j|\}$ (see Figure 1(b) for an example).
To compute $Z$ we need to compute $p_j$ and $|\rho_j|$. We have already seen that we can compute $p_j$ in $O(1)$ time using $B_{\mathsf{len}}$ (see Section 3.1), and we now store a bitvector $B_{\mathsf{root}}$ to compute $|\rho_j|$ in $O(1)$ time (see Figure 3). The bitvector $B_{\mathsf{root}}$ contains exactly $d$ ones, we have $B_{\mathsf{root}}[1] = 1$, and for every $1 \le j \le d$ the number of consecutive zeros after the $j$-th one is $|\rho_j| - 1$. Then, $|\rho_j| = \mathsf{select}_1(B_{\mathsf{root}}, j+1) - \mathsf{select}_1(B_{\mathsf{root}}, j)$ for every $1 \le j \le d$ (with the convention that $\mathsf{select}_1(B_{\mathsf{root}}, d+1)$ equals the length of $B_{\mathsf{root}}$ plus one).
Let us define a suffix array for $\mathcal{D}$. Let $\approx$ be the equivalence relation on $\{1, \ldots, n\}$ such that for every $1 \le i, i' \le n$ we have $i \approx i'$ if and only if $i$ and $i'$ belong to the same $I_s$ and refer to the same string $T_j$. Fix $1 \le s \le n'$. Notice that $I_s$ is the union of some $\approx$-equivalence classes. Let $C_s$ be the subset of $I_s$ obtained by picking the smallest element of each $\approx$-equivalence class contained in $I_s$ (see Figure 1(b) for an example). Then $\mathsf{SA}_{\mathcal{D}}$ is the array obtained by concatenating the elements in $C_1$, the elements in $C_2$, $\ldots$, the elements in $C_{n'}$ (see Figure 3), where the elements in each $C_s$ are sorted from smallest to largest. Equivalently, the elements in each $C_s$ are sorted according to the indexes of the strings to which they refer (by definition, two distinct elements of $C_s$ cannot refer to the same string). We also define a bitvector $B_{\mathsf{SA}}$ to mark the beginning of each $C_s$ (see again Figure 3). More precisely, $B_{\mathsf{SA}}$ contains exactly $n'$ ones and for every $1 \le s \le n'$ the number of consecutive zeros after the $s$-th one is $|C_s| - 1$.
Fix $i \in I_s$, where $i$ refers to string $T_j$. Note that there exists $k$ such that $\mathsf{SA}_{\mathcal{D}}[k] = i$ if and only if $i \in C_s$, if and only if $i$ is the smallest element of its $\approx$-equivalence class. Moreover, for every $k$ we have that the set $Z$ obtained from $\mathsf{SA}_{\mathcal{D}}[k]$ as shown above is the set of all elements of $I_s$ referring to $T_j$, where $s$ and $j$ are such that $\mathsf{SA}_{\mathcal{D}}[k] \in I_s$ and $\mathsf{SA}_{\mathcal{D}}[k]$ refers to $T_j$. In particular, $\mathsf{SA}_{\mathcal{D}}$ is an array of length $n''$, where $n''$ is the number of $\approx$-equivalence classes (so $n' \le n'' \le n$).
The suffix array $\mathsf{SA}_{\mathcal{D}}$ has the desired property: given $1 \le s \le n'$, we can compute the set $I_s$ as follows. We first retrieve $C_s$ by observing that its elements are stored in $\mathsf{SA}_{\mathcal{D}}[l], \mathsf{SA}_{\mathcal{D}}[l+1], \ldots, \mathsf{SA}_{\mathcal{D}}[r]$, where $l = \mathsf{select}_1(B_{\mathsf{SA}}, s)$ and $r = \mathsf{select}_1(B_{\mathsf{SA}}, s+1) - 1$. Then, for every $l \le k \le r$, we know that $\mathsf{SA}_{\mathcal{D}}[k]$ refers to string $T_{j_k}$, where $j_k = \mathsf{rank}_1(B_{\mathsf{len}}, \mathsf{SA}_{\mathcal{D}}[k])$, and we can compute the set $Z_k$ of all elements of $I_s$ referring to $T_{j_k}$ as shown above. Then, we have $I_s = \bigcup_{k=l}^{r} Z_k$ and the union is disjoint.
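The following Python sketch illustrates the expansion step on a hypothetical dictionary containing the non-primitive string $abab$: given one position of a class referring to $T_j$, it enumerates the whole class by stepping through the range of $T_j$ in multiples of $|\mathrm{root}(T_j)|$ (roots are computed naively here, whereas the data structure stores their lengths in the bitvector $B_{\mathsf{root}}$).

```python
D = ["abab", "ab", "ba"]       # hypothetical dictionary; T_1 is not primitive

def root_len(T):
    """length of the primitive root of T (naive)"""
    for r in range(1, len(T) + 1):
        if len(T) % r == 0 and T == T[:r] * (len(T) // r):
            return r

def expand(i_star, j):
    """all positions equivalent to i_star among those referring to T_j"""
    p = sum(len(T) for T in D[:j - 1]) + 1     # first position of T_j
    r, L = root_len(D[j - 1]), len(D[j - 1])
    off = (i_star - p) % r
    return [p + off + q * r for q in range(L // r)]

# (abab)^omega is reached from positions 1 and 3, which form one class of T_1
assert expand(1, 1) == [1, 3] and expand(3, 1) == [1, 3]
```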
Storing $\mathsf{SA}_{\mathcal{D}}$ explicitly can require up to $\Theta(n\log n)$ bits, so to achieve the space bound in Theorem 1 we need a sampling mechanism similar to the one used in the compressed suffix array of a string: we will sample some entries, and we will reach a sampled entry by using the backward search to navigate the string (see [42]). More precisely, in our setting we will use the $\psi^{-1}$ query of Theorem 9 to navigate $G_{\mathcal{D}}$ in a backward fashion. Note that in the conventional case of a string, if we start from the end of the string and we apply the backward search, we can reach each position of the string, but in our setting the graph $G_{\mathcal{D}}$ is the disjoint union of some cycles (see Figure 2), and the backward search does not allow navigating from a cycle to some other cycle. This means that we will need to sample at least one value per cycle. In the worst case, we have $d$ cycles (one for each $T_j$), so in the worst case we need to sample at least $d$ values.
After choosing an appropriate sampling factor, we store the sampled values in an array (respecting the order in $\mathsf{SA}_{\mathcal{D}}$), and we use a bitvector to mark the entries of $\mathsf{SA}_{\mathcal{D}}$ that have been sampled (see Figure 3). We finally obtain the following theorem.
Theorem 10.
Let $\mathcal{D}$ be a dictionary. By storing $O(n) + O(d\log n)$ bits, we can compute each entry $\mathsf{SA}_{\mathcal{D}}[k]$ in $O(\log n)$ time.
We conclude that, by storing the data structure of Theorem 10, for every $1 \le s \le n'$ we can compute $I_s$ in $O(|I_s|\log n)$ time.
3.3 The LCP Array of $\mathcal{D}$
Let us define the longest common prefix (LCP) array of $\mathcal{D}$. We know that $W_1 \prec W_2 \prec \cdots \prec W_{n'}$, so it is natural to give the following definition (see Figure 1(b) and Figure 4(a)).
Definition 11.
Let $\mathcal{D}$ be a dictionary. Let $\mathsf{LCP}_{\mathcal{D}}$ be the array such that $\mathsf{LCP}_{\mathcal{D}}[s] = \mathrm{lcp}(W_{s-1}, W_s)$ for every $2 \le s \le n'$.
Note that each $\mathsf{LCP}_{\mathcal{D}}[s]$ is finite because the $W_s$'s are pairwise distinct. Let us prove a stronger result: $\mathsf{LCP}_{\mathcal{D}}[s] < 2n$ for every $2 \le s \le n'$ (Lemma 12). This implies that we can store an entry of the LCP array using $O(\log n)$ bits.
Lemma 12.
Let $\mathcal{D}$ be a dictionary. Then, $\mathsf{LCP}_{\mathcal{D}}[s] < 2n$ for every $2 \le s \le n'$.
Storing the LCP array explicitly can require $\Theta(n\log n)$ bits, so to achieve the space bound of Theorem 1 we need a sampling mechanism similar to the one for suffix arrays in Section 3.2.
Recall that, given an array $A[1..p]$ of integers, we can define range minimum queries as follows: for every $1 \le l \le r \le p$, $\mathsf{RMQ}_A(l, r)$ returns an index $l \le k \le r$ such that $A[k] = \min\{A[k'] \mid l \le k' \le r\}$. There exists a data structure of $2p + o(p)$ bits that can solve a query in $O(1)$ time without accessing $A$ [28]; moreover, the data structure always returns the smallest $k$ such that $A[k] = \min\{A[k'] \mid l \le k' \le r\}$.
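For concreteness, here is a classical sparse-table RMQ in Python. It uses $O(p\log p)$ words instead of $2p + o(p)$ bits and it accesses $A$ at query time, so it is only a stand-in for the structure of [28]; like that structure, it returns the leftmost position of the minimum.

```python
def build_rmq(A):
    """sparse-table RMQ; returns a query function rmq(l, r), 0-based inclusive"""
    p = len(A)
    K = max(1, p.bit_length())
    table = [list(range(p))]               # table[0][i] = i (intervals of length 1)
    for k in range(1, K):
        half, prev, row = 1 << (k - 1), table[k - 1], []
        for i in range(p - (1 << k) + 1):  # intervals of length 2^k
            l, r = prev[i], prev[i + half]
            row.append(l if A[l] <= A[r] else r)
        table.append(row)

    def rmq(l, r):                         # leftmost minimum of A[l..r]
        k = (r - l + 1).bit_length() - 1
        a, b = table[k][l], table[k][r - (1 << k) + 1]
        return a if A[a] <= A[b] else b
    return rmq

A = [3, 1, 4, 1, 5]
rmq = build_rmq(A)
assert rmq(0, 4) == 1 and rmq(2, 4) == 3
```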
Let us store a data structure for solving range minimum queries on $\mathsf{LCP}_{\mathcal{D}}$ in $O(1)$ time. We will use this data structure to retrieve each entry of the LCP array from our sampled LCP array. The main idea is to use the sampling mechanism in [23]: an auxiliary graph allows determining which values will be sampled, based on a sampling parameter (see Figure 4(b)). Then, we define an array that stores the sampled values (respecting the order in $\mathsf{LCP}_{\mathcal{D}}$, see Figure 4(a)), and a bitvector to mark the sampled entries of $\mathsf{LCP}_{\mathcal{D}}$ (see Figure 4(a)). By choosing an appropriate sampling parameter, we obtain the following result.
Theorem 13.
Let $\mathcal{D}$ be a dictionary. By storing $O(n)$ bits, we can compute each entry $\mathsf{LCP}_{\mathcal{D}}[s]$ in $O(\log n)$ time.
3.4 The Topology of the Suffix Tree of $\mathcal{D}$
We can now introduce the suffix tree $\mathsf{ST}_{\mathcal{D}}$ of our dictionary $\mathcal{D}$ (the details are discussed in the full version); refer to Figure 5(a) for an example. The (infinite) set of all finite strings that we can read starting from the root is $\mathsf{Pref} = \{\alpha \in \Sigma^* \mid \alpha \text{ is a prefix of } W_s \text{ for some } 1 \le s \le n'\}$. Every node is an interval $[l, r]$, with $1 \le l \le r \le n'$, that describes the strings that can be read starting from the root by traversing every edge on the path in full and then reading a nonempty prefix of the last edge; in particular, a string reaches node $[l, r]$ if and only if it is a prefix of $W_s$ exactly for $l \le s \le r$ (see Figure 5(a)). The suffix tree contains exactly $n'$ leaves (namely, $[s, s]$ for every $1 \le s \le n'$), and the set of all nodes has size at most $2n' - 1$. The set of all strings reaching a node $[l, r]$ is finite if and only if $[l, r]$ is an internal node, and in this case the longest string reaching $[l, r]$ has length $\mathsf{len}([l, r]) = \min\{\mathsf{LCP}_{\mathcal{D}}[s] \mid l + 1 \le s \le r\}$, which can be computed by using the LCP array and a range minimum query. The suffix tree is an ordinal tree, and any ordinal tree with $p$ nodes can be encoded using its balanced parenthesis representation, a bit array that we can build incrementally as follows. Visit all nodes of the tree in depth-first search order (respecting the order of the children of each node) starting from the root. Every time we encounter a new node $v$ we append a $1$, and as soon as we leave the subtree rooted at $v$ we append a $0$. Hence the balanced parenthesis representation is a bit array containing exactly $p$ values equal to $1$ and $p$ values equal to $0$ (see Figure 5(c) for the balanced parenthesis representation of $\mathsf{ST}_{\mathcal{D}}$). Navarro and Sadakane showed how to support navigational queries on the balanced parenthesis representation of an ordinal tree in compact space and constant time (see the full version), and we store such a data structure for $\mathsf{ST}_{\mathcal{D}}$ (which requires $O(n)$ bits). To correctly switch between node intervals and the balanced parenthesis representation of $\mathsf{ST}_{\mathcal{D}}$, we also store a bitvector $B_{\mathsf{leaf}}$ of length $2p$ such that for every $1 \le q \le 2p$ we have $B_{\mathsf{leaf}}[q] = 1$ if and only if (i) the $q$-th parenthesis is open and (ii) the index $q$ corresponds to a leaf of $\mathsf{ST}_{\mathcal{D}}$ (see Figure 5(c)).
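The balanced parenthesis representation is easy to reproduce in a few lines of Python; the sketch below performs the depth-first traversal described above on a toy ordinal tree (supporting navigational queries on this bit array, as in Navarro and Sadakane's structure, is of course another matter).

```python
def balanced_parentheses(children, root):
    """children: dict mapping a node to the ordered list of its children"""
    bits, stack = [], [(root, False)]
    while stack:
        node, leaving = stack.pop()
        if leaving:
            bits.append(0)                        # closing parenthesis
        else:
            bits.append(1)                        # opening parenthesis
            stack.append((node, True))
            for c in reversed(children.get(node, [])):
                stack.append((c, False))
    return bits

# a small ordinal tree: r -> (u, v), u -> (w)
tree = {"r": ["u", "v"], "u": ["w"]}
assert balanced_parentheses(tree, "r") == [1, 1, 1, 0, 0, 1, 0, 0]
```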
4 Solving Circular Dictionary Matching Queries
In the following, we consider a pattern $P \in \Sigma^*$ of length $m \ge 1$, and we want to compute $\mathsf{CDM}(P)$.
Fix $1 \le i \le m$. We need to compute all pairs $(j, k)$ such that the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$. In general, if $\alpha \in \mathsf{Pref}$, then $\alpha$ is a prefix of some $W_s$, hence the set of all $s$ such that $\alpha$ is a prefix of $W_s$ is a nonempty interval, which means that $\alpha$ identifies a node of $\mathsf{ST}_{\mathcal{D}}$. This means that, if the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$, then there exists $1 \le s \le n'$ such that $P[i..i+|T_j|-1]$ is a prefix of $W_s$, and then every prefix of $P[i..i+|T_j|-1]$ is also in $\mathsf{Pref}$ (because every prefix of a string in $\mathsf{Pref}$ is also in $\mathsf{Pref}$). Consequently, by considering the largest $\ell$ such that $P[i..i+\ell-1] \in \mathsf{Pref}$, we know that we only need to consider the node of $\mathsf{ST}_{\mathcal{D}}$ identified by $P[i..i+\ell-1]$ to compute all pairs $(j, k)$ such that the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$. We can then give the following definition.
Definition 14.
Let $\mathcal{D}$ be a dictionary, and let $P \in \Sigma^*$ be a pattern of length $m \ge 1$. For every $1 \le i \le m$, let $\ell_i$ be the largest integer such that $0 \le \ell_i \le m - i + 1$ and $P[i..i+\ell_i-1] \in \mathsf{Pref}$.
Note that $\ell_i$ is well-defined for every $1 \le i \le m$ because $P[i..i-1] = \varepsilon \in \mathsf{Pref}$. In the following, let $x_i$ be the node of $\mathsf{ST}_{\mathcal{D}}$ identified by $P[i..i+\ell_i-1]$.
Fix $1 \le i \le m$, $1 \le j \le d$ and $1 \le k \le |T_j|$, and let $1 \le s \le n'$ be such that $W_s$ is the $\omega$-power of the $k$-th circular suffix of $T_j$. If $s$ belongs to the interval $x_i$, then $P[i..i+\ell_i-1]$ is a prefix of $W_s$ and so the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$ if and only if $|T_j| \le \ell_i$. If $s$ does not belong to the interval $x_i$, we know that $P[i..i+\ell_i-1]$ is not a prefix of $W_s$, but it might still be true that a prefix of $P[i..m]$ is equal to a prefix of $W_s$. Let $g$ be the length of the longest prefix of $W_s$ that is a prefix of $P[i..m]$ (or equivalently, let $g$ be the largest integer such that $W_s[1..g] = P[i..i+g-1]$). Then, the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$ if and only if $|T_j| \le g$. To compute $g$, first notice that every prefix of $P[i..m]$ that belongs to $\mathsf{Pref}$ reaches an ancestor of $x_i$, and every string reaching a strict ancestor of $x_i$ is a strict prefix of $P[i..i+\ell_i-1]$. As a consequence, we can first compute the nearest ancestor $z$ of $x_i$ for which $s$ belongs to the interval $z$, and then $g$ is the length of the longest string reaching node $z$, that is, $g = \mathsf{len}(z)$.
The following lemma captures our intuition.
Lemma 15.
Let $\mathcal{D}$ be a dictionary, let $P \in \Sigma^m$, let $1 \le i \le m$ and let $1 \le s \le n'$.
1. Assume that $s$ belongs to the interval $x_i$. Then, for every $(j, k)$ such that $W_s$ is the $\omega$-power of the $k$-th circular suffix of $T_j$, we have that the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$ if and only if $|T_j| \le \ell_i$.
2. Assume that $s$ does not belong to the interval $x_i$. Let $y$ be the highest ancestor of $x_i$ in $\mathsf{ST}_{\mathcal{D}}$ whose interval does not contain $s$. Then, $y$ is not the root of $\mathsf{ST}_{\mathcal{D}}$, because the interval of the root contains every class. Moreover, let $z$ be the parent of $y$ in $\mathsf{ST}_{\mathcal{D}}$. Then, $s$ belongs to the interval $z$ and $\mathrm{lcp}(P[i..m], W_s) = \mathsf{len}(z)$, and for every $(j, k)$ such that $W_s$ is the $\omega$-power of the $k$-th circular suffix of $T_j$, we have that the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$ if and only if $|T_j| \le \mathsf{len}(z)$.
Fix $1 \le i \le m$. To implement Lemma 15, we should navigate $\mathsf{ST}_{\mathcal{D}}$ starting from node $x_i$, so we need to know $x_i$ and $\ell_i$. Moreover, if $s$ belongs to the interval $x_i$ and $W_s$ is the $\omega$-power of the $k$-th circular suffix of $T_j$, we know that the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$ if and only if $|T_j| \le \ell_i$, so we also need to compute $|T_j|$ (which we can do in $O(1)$ time, see Section 3.1). In the full version, we present an algorithm to compute all the $x_i$'s and all the $\ell_i$'s. The algorithm shares many similarities with Ohlebusch et al.'s algorithm for computing matching statistics using the FM-index and the LCP array of a string [44].
For every $1 \le i \le m$, let $\mathsf{occ}_i$ be the set of all pairs $(j, k)$ such that the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$, and let $occ = \sum_{i=1}^{m} |\mathsf{occ}_i| = |\mathsf{CDM}(P)|$. Computing $\mathsf{CDM}(P)$ is equivalent to computing $\mathsf{occ}_i$ for every $1 \le i \le m$. In view of Theorem 1, it will be sufficient to show that we can compute each $\mathsf{occ}_i$ in $O((1 + |\mathsf{occ}_i|)\log n)$ time, because then we can compute $\mathsf{CDM}(P)$ in $O((m + occ)\log n)$ time.
Fix $1 \le i \le m$. Lemma 15 suggests that, to compute $\mathsf{occ}_i$, we can proceed as follows. We start from node $x_i$, we consider every $s$ in the interval $x_i$ and every $(j, k)$ such that $W_s$ is the $\omega$-power of the $k$-th circular suffix of $T_j$, and we decide whether the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$ (we will see later how to do this efficiently). Next, we navigate from node $x_i$ up to the root. Every time we move from a node $y$ to its parent $z$, we consider every $s$ in the interval $z$ but not in the interval $y$ and every $(j, k)$ as before, and we decide whether the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$ (again, we will see later how to do this efficiently). To navigate from $x_i$ to the root, in the worst case we visit many nodes – say $h$ nodes. If each of these $h$ steps leads to discovering at least one element of $\mathsf{occ}_i$, we can hope to achieve the bound $O((1 + |\mathsf{occ}_i|)\log n)$, but in the worst case many steps are not successful, $|\mathsf{occ}_i|$ is small and we cannot achieve the bound $O((1 + |\mathsf{occ}_i|)\log n)$.
Notice that $P$ and $i$ only determine the initial node $x_i$: once we are navigating up to the root, we do not need to consult the pattern to assess whether we have found an element of $\mathsf{occ}_i$, because by Lemma 15 the $k$-th circular suffix of $T_j$ occurs in $P$ at position $i$ if and only if $|T_j| \le \mathsf{len}(z)$, and this condition does not depend on $P$ or $i$. This means that we can determine which nodes will allow discovering an element of $\mathsf{occ}_i$ before knowing the pattern (that is, at indexing time). We can then give the following definition which, crucially, does not depend on $P$ or $i$ (see Figure 5(a)).
Definition 16.
Let $\mathcal{D}$ be a dictionary, and let $y$ be a node of $\mathsf{ST}_{\mathcal{D}}$. We say that $y$ is marked if one of the following is true: (i) $y = [1, n']$ (i.e., $y$ is the root of $\mathsf{ST}_{\mathcal{D}}$), or (ii) $y \ne [1, n']$ (i.e., $y$ is not the root of $\mathsf{ST}_{\mathcal{D}}$) and, letting $z$ be the parent of $y$ in $\mathsf{ST}_{\mathcal{D}}$, there exists $s$ in the interval $z$ but not in the interval $y$ and there exists $(j, k)$ such that $W_s$ is the $\omega$-power of the $k$-th circular suffix of $T_j$ and $|T_j| \le \mathsf{len}(z)$.
When we navigate from $x_i$ to the root, we should skip all non-marked nodes. Notice that the set of all marked nodes induces a new tree structure. More precisely, consider the ordinal tree $\mathcal{M}$ defined as follows. The set of nodes of $\mathcal{M}$ is the set of all marked nodes in $\mathsf{ST}_{\mathcal{D}}$; in particular, the root of $\mathsf{ST}_{\mathcal{D}}$ belongs to $\mathcal{M}$. We can now build $\mathcal{M}$ incrementally. We traverse all nodes of $\mathsf{ST}_{\mathcal{D}}$ in depth-first search order, respecting the order of the children of each node (this is the same order used for the balanced parenthesis representation of $\mathsf{ST}_{\mathcal{D}}$). The first marked node that we encounter is the root of $\mathsf{ST}_{\mathcal{D}}$, which will be the root of $\mathcal{M}$. Then, every time we encounter a marked node $v$ distinct from the root, let $v'$ be the nearest strict ancestor of $v$ in $\mathsf{ST}_{\mathcal{D}}$ that is marked, namely, $v'$ is the first marked node that we encounter after $v$ in the (backward) path from $v$ to the root of $\mathsf{ST}_{\mathcal{D}}$. Then, $v'$ will be the parent of $v$ in $\mathcal{M}$, and if $v'$ has already been assigned $c$ children, then $v$ will be its $(c+1)$-th smallest child. See Figure 5(b) for an example, and see Figure 5(c) for the balanced parenthesis representation of $\mathcal{M}$. In addition to Navarro and Sadakane's data structure for the tree $\mathsf{ST}_{\mathcal{D}}$, we also store the same data structure for the tree $\mathcal{M}$, which also requires $O(n)$ bits.
We also remember which nodes of $\mathsf{ST}_{\mathcal{D}}$ are marked by using a bitvector $B_{\mathsf{mark}}$ of length $2p$ (where $p$ is the number of nodes of $\mathsf{ST}_{\mathcal{D}}$) such that for every $1 \le q \le 2p$ we have $B_{\mathsf{mark}}[q] = 1$ if and only if $q$ is one of the two positions, in the balanced parenthesis representation of $\mathsf{ST}_{\mathcal{D}}$, corresponding to a marked node of $\mathsf{ST}_{\mathcal{D}}$. More precisely, we build $B_{\mathsf{mark}}$ as follows. We visit all nodes of $\mathsf{ST}_{\mathcal{D}}$ in depth-first search order, respecting the order of the children of each node. Every time we encounter a new node $v$ we append a $1$ if $v$ is marked and a $0$ if $v$ is not marked, and as soon as we leave the subtree rooted at $v$ we append a $1$ if $v$ is marked and a $0$ if $v$ is not marked (see Figure 5(c)). By using $B_{\mathsf{mark}}$, in $O(1)$ time (i) we can move from the balanced parenthesis representation of $\mathsf{ST}_{\mathcal{D}}$ to the balanced parenthesis representation of $\mathcal{M}$ and back, and (ii) we can determine the nearest marked ancestor of a node (see the full version).
Define the array $E$ of length $n''$ (the length of $\mathsf{SA}_{\mathcal{D}}$) such that $E[k] = |T_j|$ for every $1 \le k \le n''$, where $T_j$ is the string to which $\mathsf{SA}_{\mathcal{D}}[k]$ refers (see Figure 3). We will not store $E$, but only a data structure supporting range minimum queries on $E$ in $O(1)$ time, which requires $O(n)$ bits (see Section 3.3).
We now have all the ingredients to prove Theorem 1, our main claim. As we have seen, it will suffice to compute each $\mathsf{occ}_i$ in $O((1 + |\mathsf{occ}_i|)\log n)$ time. We start from node $x_i$, and for every $s$ in the interval $x_i$ we determine whether the corresponding circular suffixes occur in $P$ at position $i$. By Lemma 15, we need to check whether $|T_j| \le \ell_i$, so we repeatedly solve range minimum queries on $E$, starting from the interval of entries of $\mathsf{SA}_{\mathcal{D}}$ corresponding to $x_i$, to find all $k$'s for which the circular suffixes associated with $\mathsf{SA}_{\mathcal{D}}[k]$ occur in $P$ at position $i$, in time proportional to the number of such occurrences. Next, we compute the lowest marked ancestor of $x_i$, and we use $\mathcal{M}$ to determine all marked ancestors of $x_i$. Let $y$ be one of these ancestors, and let $z$ be its parent in $\mathsf{ST}_{\mathcal{D}}$. For every $s$ in the interval $z$ but not in the interval $y$, we determine whether the corresponding circular suffixes occur in $P$ at position $i$. By Lemma 15, we need to check whether $|T_j| \le \mathsf{len}(z)$, so we repeatedly solve range minimum queries on $E$, starting from the two intervals of entries of $\mathsf{SA}_{\mathcal{D}}$ corresponding to the elements of the interval $z$ to the left and to the right of the interval $y$, to find all $k$'s for which the circular suffixes associated with $\mathsf{SA}_{\mathcal{D}}[k]$ occur in $P$ at position $i$, in time proportional to the number of such occurrences.
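The reporting step based on range minimum queries can be sketched as follows in Python: to list all entries of an interval of $E$ that are at most a given threshold, we query the minimum, report it if it qualifies, and recurse on the two remaining halves, so the number of queries is proportional to the number of reported entries. The array and the threshold below are hypothetical.

```python
def report_at_most(E, rmq, l, r, threshold, out):
    """append to out all k in [l, r] with E[k] <= threshold (0-based, inclusive)"""
    if l > r:
        return
    k = rmq(l, r)                      # position of the minimum of E[l..r]
    if E[k] > threshold:               # no entry of this interval qualifies
        return
    out.append(k)
    report_at_most(E, rmq, l, k - 1, threshold, out)
    report_at_most(E, rmq, k + 1, r, threshold, out)

E = [5, 2, 4, 2, 7, 3]
rmq = lambda l, r: min(range(l, r + 1), key=E.__getitem__)   # stand-in RMQ
out = []
report_at_most(E, rmq, 0, 5, 3, out)
assert sorted(out) == [1, 3, 5]        # E[1] = 2, E[3] = 2, E[5] = 3
```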
5 Conclusions and Future Work
We have shown how to improve and extend previous results on circular dictionary matching. In the literature, much effort has been devoted to designing construction algorithms for the data structure of Hon et al. [36] and, implicitly, the eBWT of Mantaci et al. [40]. All available approaches first build the eBWT and the suffix array of circular strings, and then use the suffix array of circular strings to build the remaining auxiliary data structures. In [32], Hon et al. showed that the eBWT and the suffix array can be built in $O(n\log n)$ time using $O(n\log\sigma)$ bits of working space. Bannai et al. [7] improved the time bound to $O(n)$ by showing that the bijective BWT can be built in linear time (Bonomo et al. [10, Section 6] showed how to reduce in linear time the problem of computing the eBWT to the problem of computing the bijective BWT). A more direct algorithm was proposed by Boucher et al. [11]. However, all these algorithms are still based on assumptions (i)-(ii) in Section 1 and cannot immediately be applied to our setting, in which we consider an arbitrary dictionary. After building the eBWT and the suffix array, it is possible to build the remaining auxiliary data structures in $O(n\log n)$ time using $O(n\log\sigma)$ bits of working space [33]. Some proofs in [33] are only sketched, but it is not too difficult to show that, if we do not insist on achieving $O(n\log\sigma)$ bits of working space, it is possible to build the remaining auxiliary data structures from the eBWT and the suffix array in linear time.
In a companion paper, we will show that the data structure of Theorem 1 can be built in $O(n)$ time. The main technical issue is understanding how to remove assumptions (i)-(ii) in Section 1 when building the suffix array. The algorithm by Boucher et al. [11] is a recursive algorithm based on the SAIS algorithm [43] for building the suffix array. SAIS divides all suffixes into two categories (S-type and L-type). We will show that, to remove assumptions (i)-(ii), we can use three (and not two) categories, building on a recent recursive algorithm [19] for constructing the "suffix array" of a deterministic automaton.
The natural question is whether the data structure of Theorem 1 (or a similar data structure with the same functionality) can be built in $O(n)$ time within $O(n\log\sigma)$ bits of working space. We conjecture that this is possible but, thinking of the case $d = 1$, this problem should be at least as difficult as building the compressed suffix tree of a string in $O(n)$ time and $O(n\log\sigma)$ working bits, which requires advanced techniques [41, 9].
References
- [1] Alfred V. Aho and Margaret J. Corasick. Efficient string matching: an aid to bibliographic search. Commun. ACM, 18(6):333–340, June 1975. doi:10.1145/360825.360855.
- [2] Jarno Alanko, Nicola Cotumaccio, and Nicola Prezza. Linear-time minimization of Wheeler DFAs. In 2022 Data Compression Conference (DCC), pages 53–62. IEEE, 2022. doi:10.1109/DCC52660.2022.00013.
- [3] Jarno Alanko, Giovanna D’Agostino, Alberto Policriti, and Nicola Prezza. Regular languages meet prefix sorting. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 911–930. SIAM, 2020. doi:10.1137/1.9781611975994.55.
- [4] Jarno N Alanko, Davide Cenzato, Nicola Cotumaccio, Sung-Hwan Kim, Giovanni Manzini, and Nicola Prezza. Computing the LCP array of a labeled graph. In 35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024.
- [5] Amihood Amir, Tsvi Kopelowitz, Avivit Levy, Seth Pettie, Ely Porat, and B Riva Shalom. Mind the gap! Online dictionary matching with one gap. Algorithmica, 81:2123–2157, 2019. doi:10.1007/S00453-018-0526-2.
- [6] Amihood Amir, Avivit Levy, Ely Porat, and B Riva Shalom. Dictionary matching with a few gaps. Theoretical Computer Science, 589:34–46, 2015. doi:10.1016/J.TCS.2015.04.011.
- [7] Hideo Bannai, Juha Kärkkäinen, Dominik Köppl, and Marcin Piątkowski. Constructing the bijective and the extended Burrows-Wheeler transform in linear time. In 32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021.
- [8] Djamal Belazzougui. Succinct dictionary matching with no slowdown. In Amihood Amir and Laxmi Parida, editors, Combinatorial Pattern Matching, pages 88–100, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. doi:10.1007/978-3-642-13509-5_9.
- [9] Djamal Belazzougui, Fabio Cunial, Juha Kärkkäinen, and Veli Mäkinen. Linear-time string indexing and analysis in small space. ACM Transactions on Algorithms (TALG), 16(2):1–54, 2020. doi:10.1145/3381417.
- [10] Silvia Bonomo, Sabrina Mantaci, Antonio Restivo, Giovanna Rosone, and Marinella Sciortino. Sorting conjugates and suffixes of words in a multiset. International Journal of Foundations of Computer Science, 25(08):1161–1175, 2014.
- [11] Christina Boucher, Davide Cenzato, Zsuzsanna Lipták, Massimiliano Rossi, and Marinella Sciortino. Computing the original eBWT faster, simpler, and with less memory. In Thierry Lecroq and Hélène Touzet, editors, String Processing and Information Retrieval, pages 129–142, Cham, 2021. Springer International Publishing. doi:10.1007/978-3-030-86692-1_11.
- [12] Michael Burrows and David J. Wheeler. A block-sorting lossless data compression algorithm. Technical report, Systems Research Center, 1994.
- [13] Davide Cenzato, Veronica Guerrini, Zsuzsanna Lipták, and Giovanna Rosone. Computing the optimal BWT of very large string collections. In 2023 Data Compression Conference (DCC), pages 71–80, 2023. doi:10.1109/DCC55655.2023.00015.
- [14] Panagiotis Charalampopoulos, Tomasz Kociumaka, Manal Mohamed, Jakub Radoszewski, Wojciech Rytter, and Tomasz Waleń. Internal dictionary matching. Algorithmica, 83(7):2142–2169, 2021. doi:10.1007/S00453-021-00821-Y.
- [15] Panagiotis Charalampopoulos, Tomasz Kociumaka, Jakub Radoszewski, Solon P Pissis, Wojciech Rytter, Tomasz Waleń, and Wiktor Zuba. Approximate circular pattern matching. In ESA 2022-30th Annual European Symposium on Algorithms, 2022.
- [16] Raphaël Clifford, Allyx Fontaine, Ely Porat, Benjamin Sach, and Tatiana Starikovskaya. Dictionary matching in a stream. In Algorithms-ESA 2015: 23rd Annual European Symposium, Patras, Greece, September 14-16, 2015, Proceedings, pages 361–372. Springer, 2015. doi:10.1007/978-3-662-48350-3_31.
- [17] Alessio Conte, Nicola Cotumaccio, Travis Gagie, Giovanni Manzini, Nicola Prezza, and Marinella Sciortino. Computing matching statistics on Wheeler DFAs. In 2023 Data Compression Conference (DCC), pages 150–159, 2023. doi:10.1109/DCC55655.2023.00023.
- [18] Nicola Cotumaccio. Graphs can be succinctly indexed for pattern matching in time. In 2022 Data Compression Conference (DCC), pages 272–281, 2022. doi:10.1109/DCC52660.2022.00035.
- [19] Nicola Cotumaccio. Prefix sorting DFAs: A recursive algorithm. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Schloss-Dagstuhl-Leibniz Zentrum für Informatik, 2023.
- [20] Nicola Cotumaccio. A Myhill-Nerode theorem for generalized automata, with applications to pattern matching and compression. In 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024.
- [21] Nicola Cotumaccio. Improved circular dictionary matching, 2025. arXiv:2504.03394.
- [22] Nicola Cotumaccio, Giovanna D’Agostino, Alberto Policriti, and Nicola Prezza. Co-lexicographically ordering automata and regular languages - part I. J. ACM, 70(4), August 2023. doi:10.1145/3607471.
- [23] Nicola Cotumaccio, Travis Gagie, Dominik Köppl, and Nicola Prezza. Space-time trade-offs for the LCP array of Wheeler DFAs. In Franco Maria Nardini, Nadia Pisanti, and Rossano Venturini, editors, String Processing and Information Retrieval, pages 143–156, Cham, 2023. Springer Nature Switzerland. doi:10.1007/978-3-031-43980-3_12.
- [24] Nicola Cotumaccio and Nicola Prezza. On indexing and compressing finite automata. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2585–2599. SIAM, 2021. doi:10.1137/1.9781611976465.153.
- [25] Nicola Cotumaccio and Catia Trubiani. Convex Petri nets. In Michael Köhler-Bussmeier, Daniel Moldt, and Heiko Rölke, editors, Proceedings of the International Workshop on Petri Nets and Software Engineering 2024 co-located with the 45th International Conference on Application and Theory of Petri Nets and Concurrency (PETRI NETS 2024), June 24 - 25, 2024, Geneva, Switzerland, volume 3730 of CEUR Workshop Proceedings. CEUR-WS.org, 2024. URL: https://ceur-ws.org/Vol-3730/poster1.pdf.
- [26] Jonathan A Eisen. Environmental shotgun sequencing: its potential and challenges for studying the hidden world of microbes. PLoS biology, 5(3):e82, 2007.
- [27] Paolo Ferragina and Giovanni Manzini. Indexing compressed text. J. ACM, 52(4):552–581, July 2005. doi:10.1145/1082036.1082039.
- [28] Johannes Fischer and Volker Heun. Space-efficient preprocessing schemes for range minimum queries on static arrays. SIAM Journal on Computing, 40(2):465–492, 2011. doi:10.1137/090779759.
- [29] Travis Gagie, Giovanni Manzini, and Jouni Sirén. Wheeler graphs: A framework for BWT-based data structures. Theoretical computer science, 698:67–78, 2017. doi:10.1016/J.TCS.2017.06.016.
- [30] Paweł Gawrychowski and Tatiana Starikovskaya. Streaming dictionary matching with mismatches. Algorithmica, pages 1–21, 2019.
- [31] Shay Golan, Tomasz Kociumaka, Tsvi Kopelowitz, and Ely Porat. Dynamic dictionary matching in the online model. In Algorithms and Data Structures: 16th International Symposium, WADS 2019, Edmonton, AB, Canada, August 5–7, 2019, Proceedings 16, pages 409–422. Springer, 2019. doi:10.1007/978-3-030-24766-9_30.
- [32] Wing-Kai Hon, Tsung-Han Ku, Chen-Hua Lu, Rahul Shah, and Sharma V. Thankachan. Efficient algorithm for circular Burrows-Wheeler transform. In Juha Kärkkäinen and Jens Stoye, editors, Combinatorial Pattern Matching, pages 257–268, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg. doi:10.1007/978-3-642-31265-6_21.
- [33] Wing-Kai Hon, Tsung-Han Ku, Rahul Shah, and Sharma V. Thankachan. Space-efficient construction algorithm for the circular suffix tree. In Johannes Fischer and Peter Sanders, editors, Combinatorial Pattern Matching, pages 142–152, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg. doi:10.1007/978-3-642-38905-4_15.
- [34] Wing-Kai Hon, Tsung-Han Ku, Rahul Shah, Sharma V. Thankachan, and Jeffrey Scott Vitter. Faster compressed dictionary matching. Theoretical Computer Science, 475:113–119, 2013. doi:10.1016/j.tcs.2012.10.050.
- [35] Wing-Kai Hon, Tak-Wah Lam, Rahul Shah, Sharma V Thankachan, Hing-Fung Ting, and Yilin Yang. Dictionary matching with a bounded gap in pattern or in text. Algorithmica, 80:698–713, 2018. doi:10.1007/S00453-017-0288-2.
- [36] Wing-Kai Hon, Chen-Hua Lu, Rahul Shah, and Sharma V Thankachan. Succinct indexes for circular patterns. In Algorithms and Computation: 22nd International Symposium, ISAAC 2011, Yokohama, Japan, December 5-8, 2011. Proceedings 22, pages 673–682. Springer, 2011. doi:10.1007/978-3-642-25591-5_69.
- [37] Costas S Iliopoulos, Solon P Pissis, and M Sohel Rahman. Searching and indexing circular patterns. Algorithms for Next-Generation Sequencing Data: Techniques, Approaches, and Applications, pages 77–90, 2017. doi:10.1007/978-3-319-59826-0_3.
- [38] Costas S. Iliopoulos and M. Sohel Rahman. Indexing circular patterns. In Shin-ichi Nakano and Md. Saidur Rahman, editors, WALCOM: Algorithms and Computation, pages 46–57, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg. doi:10.1007/978-3-540-77891-2_5.
- [39] M. Lothaire. Combinatorics on words, volume 17. Cambridge university press, 1997.
- [40] Sabrina Mantaci, Antonio Restivo, Giovanna Rosone, and Marinella Sciortino. An extension of the Burrows–Wheeler transform. Theoretical Computer Science, 387(3):298–312, 2007. doi:10.1016/J.TCS.2007.07.014.
- [41] J Ian Munro, Gonzalo Navarro, and Yakov Nekrich. Space-efficient construction of compressed indexes in deterministic linear time. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 408–424. SIAM, 2017. doi:10.1137/1.9781611974782.26.
- [42] Gonzalo Navarro. Compact Data Structures: A Practical Approach. Cambridge University Press, 2016.
- [43] Ge Nong, Sen Zhang, and Wai Hong Chan. Two efficient algorithms for linear time suffix array construction. IEEE Transactions on Computers, 60(10):1471–1484, 2011. doi:10.1109/TC.2010.188.
- [44] Enno Ohlebusch, Simon Gog, and Adrian Kügel. Computing matching statistics and maximal exact matches on compressed full-text indexes. In Edgar Chavez and Stefano Lonardi, editors, String Processing and Information Retrieval, pages 347–358, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. doi:10.1007/978-3-642-16321-0_36.
- [45] Kunihiko Sadakane. Compressed suffix trees with full functionality. Theor. Comp. Sys., 41(4):589–607, December 2007. doi:10.1007/s00224-006-1198-x.
- [46] Carola Simon and Rolf Daniel. Metagenomic analyses: past and future trends. Applied and environmental microbiology, 77(4):1153–1161, 2011.
- [47] Blair L Strang and Nigel D Stow. Circularization of the herpes simplex virus type 1 genome upon lytic infection. Journal of virology, 79(19):12487–12494, 2005.