Wavelet Tree, Part II: Text Indexing
Abstract
The Wavelet Tree data structure introduced by Grossi, Gupta, and Vitter [11] is a space-efficient technique for rank and select queries that generalizes from binary symbols to an arbitrary multisymbol alphabet. Over the last two decades, it has become a pivotal tool in modern full-text indexing and data compression because of its capabilities for compressing and indexing data, with many applications to information retrieval, genome analysis, data mining, and web search. In this paper, we survey the fascinating history and impact of Wavelet Trees; no doubt many more developments are yet to come. Our survey borrows some content from the authors' earlier works.
This paper is divided into two parts: The first part gives a brief history of Wavelet Trees, including its varieties and practical implementations, which appears in the Festschrift dedicated to Roberto Grossi [4]; the second part (this one) deals with Wavelet Tree-based text indexing and is included in the Festschrift dedicated to Giovanni Manzini.
Keywords and phrases: Wavelet tree, data compression, text indexing, compressed suffix array, Burrows-Wheeler transform, rank and select
2012 ACM Subject Classification: Theory of computation → Data structures design and analysis; Theory of computation → Data compression
Acknowledgements: The authors would like to thank Hongwei Huo for helpful comments and figures.
Editors: Paolo Ferragina, Travis Gagie, and Gonzalo Navarro
Series and Publisher: Open Access Series in Informatics (OASIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
The field of compressed full-text indexing [24] involves the design of data structures (aka indexes) that support fast substring matching using small amounts of space. For a text string $T$ of $n$ symbols over an arbitrary alphabet $\Sigma$ of size $\sigma$ and a pattern $P$ of $m$ symbols, the goal of text indexing is to preprocess $T$ using succinct space so that queries like the following can be quickly answered: (1) count the number of occurrences of $P$ in $T$; (2) locate the positions in $T$ where $P$ occurs; and (3) starting at text position $i$, extract the length-$\ell$ substring $T[i..i+\ell-1]$.
A main goal is to create an index whose size is roughly equal to the size of the text in compressed format, with search performance comparable to that of the well-known indexes on uncompressed text, such as suffix trees and suffix arrays. Some compressed data structures are in addition self-indexes, in that the data structure encapsulates the original text and thus can quickly recreate any portion of it. As a result, the original text is not needed: it can be discarded and replaced by the (self-)index.
Most compressed indexing techniques developed in the last two decades make use of the powerful Wavelet Tree data structure, developed by Grossi, Gupta, and Vitter [11], with many applications to information retrieval, genome analysis, data mining, and web search. The Wavelet Tree supports fast rank and select queries on any string from an arbitrary alphabet. As such, it extends the space-efficient rank-select data structures for binary sequences [26, 24] to general alphabets of arbitrary size.
A Wavelet Tree represents any string from a general multisymbol alphabet as a hierarchical set of binary strings. It has the following key property for purposes of compression: If each such binary string is encoded according to its information-theoretic minimum size, then the original string is compressed to its information-theoretic minimum size.
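To make the hierarchical decomposition concrete, here is a minimal Python sketch of our own (an illustration, not the exact construction of [11]): the sorted alphabet is halved at each node, each node stores one bitvector, and the two halves are handled recursively. The function name and representation are ours. On the alphabet {#, a, c, g, t} of the running example of Section 2, the root split is {#, a, c} versus {g, t}, which is the root bitvector shown in the last column of Table 1.

```python
def wavelet_tree(s, alphabet=None):
    """Return a nested dict with one bitvector per node; alphabet halves per level."""
    if alphabet is None:
        alphabet = sorted(set(s))
    if len(alphabet) <= 1:
        return None                                  # leaf: one symbol, no bits needed
    half = (len(alphabet) + 1) // 2                  # left child gets the ceiling
    left = set(alphabet[:half])
    bits = [0 if c in left else 1 for c in s]        # this node's bitvector
    return {
        "alphabet": alphabet,
        "bits": bits,
        "left":  wavelet_tree([c for c in s if c in left], alphabet[:half]),
        "right": wavelet_tree([c for c in s if c not in left], alphabet[half:]),
    }

wt = wavelet_tree("tcaaaatatatgcaacatatagtattagattgtat#")
print(wt["bits"][:6])   # bits for t, c, a, a, a, a -> [1, 0, 0, 0, 0, 0]
```

Compressing each node's bitvector to its information-theoretic minimum (e.g., with the RRR method [26]) yields the key property stated above.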
In this paper we discuss how Wavelet Trees can be used to construct an efficient self-index for a text $T$ using space related to the higher-order entropy of $T$. The two main data structures to which Wavelet Trees are applied for that purpose are the compressed suffix array (CSA) [12, 28, 11] and the FM-index [6, 7].
2 Preliminaries
We refer the reader to PART I, which is included in the Festschrift dedicated to Roberto Grossi [4], for the notation used in this paper, the history and background of the ubiquitous Wavelet Tree, and the key definitions of 0th order entropy and higher-order entropy.
The suffix array gives the positions of all the suffixes of $T$ in lexicographical order. We use $SA$ to denote the suffix array ($SA[0..n-1]$) of $T$: it is the permutation of $\{0, 1, \ldots, n-1\}$ that lists the starting positions of the suffixes of $T$ in lexicographical order. We use $SA^{-1}$ to denote the inverse suffix array, such that $SA^{-1}[SA[i]] = i$. Table 1 provides a running example.
The key notion of the neighbor function $\Phi$ [12, 28] is illustrated in Table 1 and formally defined as follows: Given a suffix $T_j = T[j..n-1]$ of $T$ and its index $i$ in the suffix array (i.e., $SA[i] = j$), let $T_{j+1}$ be the suffix (a.k.a. neighbor) formed by removing the first symbol of $T_j$. The value $\Phi(i)$ of the neighbor function is the index of $T_{j+1}$ in the suffix array. More formally,
$$\Phi(i) \;=\; SA^{-1}\big[\,(SA[i] + 1) \bmod n\,\big].$$
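The following Python sketch (0-based indexing, naive suffix sorting, variable names ours) builds $SA$, $SA^{-1}$, and $\Phi$ for the running example of Table 1:

```python
T = "tcaaaatatatgcaacatatagtattagattgtat#"   # '#' is the sentinel (smallest symbol)
n = len(T)

SA = sorted(range(n), key=lambda j: T[j:])  # naive suffix sorting, fine for a demo
ISA = [0] * n                               # inverse suffix array SA^{-1}
for i, j in enumerate(SA):
    ISA[j] = i

Phi = [ISA[(SA[i] + 1) % n] for i in range(n)]
print(Phi[:3])                              # [31, 2, 4]
```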
Let us designate by $w_x$ (as in PART I) the substring associated with an individual context $x$ of $k$ symbols. Wavelet Trees are a natural and elegant way to compress each substring $w_x$ so that the code length per symbol is the 0th-order entropy $H_0(w_x)$. It thus follows by Definition 3 in PART I of $k$th-order entropy that the cumulative encoding of all the substrings $w_x$ achieves the $k$th-order entropy of the full string $T$. We elaborate on this key idea in Section 4.
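We do not restate Definition 3 of PART I here, but in standard empirical-entropy notation (cf. [21]) the identity it rests on reads
$$n\,H_k(T) \;=\; \sum_{x \in \Sigma^k} |w_x|\,H_0(w_x),
\qquad
H_0(w) \;=\; \sum_{c \,:\, n_c > 0} \frac{n_c}{|w|}\,\log_2 \frac{|w|}{n_c},$$
where $n_c$ denotes the number of occurrences of symbol $c$ in $w$.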
Table 1: Running example for the text $T$ = tcaaaatatatgcaacatatagtattagattgtat#. Rows are indexed $i = 0, \ldots, 35$ from top to bottom. Row $i$ shows the text symbol $T[i]$ (read top to bottom, the first column spells $T$), the $i$th cyclic rotation of $T$ in lexicographic order (its first four and last two symbols; the first symbol is the $F$-column entry and the last symbol is the BWT entry $L[i]$), and the bit assigned to $L[i]$ by the root of a Wavelet Tree over $L$ (0 for symbols in {#, a, c}, 1 for symbols in {g, t}).

| $T[i]$ | $F$ |  |  |  | … |  | $L$ | WT root bit |
|---|---|---|---|---|---|---|---|---|
| t | # | t | c | a | … | a | t | 1 |
| c | a | a | a | a | … | t | c | 0 |
| a | a | a | a | t | … | c | a | 0 |
| a | a | a | c | a | … | g | c | 0 |
| a | a | a | t | a | … | a | a | 0 |
| a | a | c | a | t | … | c | a | 0 |
| t | a | g | a | t | … | t | t | 1 |
| a | a | g | t | a | … | a | t | 1 |
| t | a | t | # | t | … | g | t | 1 |
| a | a | t | a | g | … | a | t | 1 |
| t | a | t | a | t | … | a | c | 0 |
| g | a | t | a | t | … | a | a | 0 |
| c | a | t | a | t | … | a | t | 1 |
| a | a | t | g | c | … | a | t | 1 |
| a | a | t | t | a | … | g | t | 1 |
| c | a | t | t | g | … | a | g | 1 |
| a | c | a | a | a | … | # | t | 1 |
| t | c | a | a | c | … | t | g | 1 |
| a | c | a | t | a | … | a | a | 0 |
| t | g | a | t | t | … | t | a | 0 |
| a | g | c | a | a | … | a | t | 1 |
| g | g | t | a | t | … | t | t | 1 |
| t | g | t | a | t | … | t | a | 0 |
| a | t | # | t | c | … | t | a | 0 |
| t | t | a | g | a | … | a | t | 1 |
| t | t | a | g | t | … | t | a | 0 |
| a | t | a | t | # | … | t | g | 1 |
| g | t | a | t | a | … | c | a | 0 |
| a | t | a | t | a | … | a | a | 0 |
| t | t | a | t | g | … | t | a | 0 |
| t | t | a | t | t | … | a | g | 1 |
| g | t | c | a | a | … | t | # | 0 |
| t | t | g | c | a | … | t | a | 0 |
| a | t | g | t | a | … | a | t | 1 |
| t | t | t | a | g | … | t | a | 0 |
| # | t | t | g | t | … | g | a | 0 |
3 Text Indexing
The two main paradigms used for modern text indexing are the compressed suffix array (CSA) [12, 28, 11] and the FM-index [6, 7]. We use the notation $T[0..n-1]$ to denote a text of $n$ symbols drawn from an alphabet $\Sigma$ of size $\sigma$. Both the CSA and the FM-index make use of the suffix array concept.
3.1 Compressed Suffix Array (CSA)
The CSA represents the text string $T$ by encoding the neighbor function $\Phi$ [12, 28]. The sequence of values $\Phi(0), \Phi(1), \ldots, \Phi(n-1)$ suffices to recreate the original text $T$, using some supporting data structures. For example, if we know the symbol that appears at the beginning of the $i$th suffix in lexicographic order, we can easily determine the neighboring symbol in the text (at position $SA[i]+1$ in the text) by computing $\Phi(i)$. For each symbol $c$, if we keep the count of the number of suffixes of $T$ that begin with a symbol smaller than or equal to $c$ in lexicographic order, then the desired symbol is simply the first symbol $c$ in lexicographic order whose count exceeds $\Phi(i)$. We can then continue taking steps forward from one neighbor to the next to recover additional symbols.
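As a sanity check, the decoding walk just described can be written in a few lines of Python (continuing the sketch of Section 2; in a real CSA only $\Phi$, the bucket counts, and the starting index $SA^{-1}[0]$ would be stored, in compressed form):

```python
from bisect import bisect_right
from itertools import accumulate

T = "tcaaaatatatgcaacatatagtattagattgtat#"; n = len(T)
SA = sorted(range(n), key=lambda j: T[j:])          # built here only to set up Phi
ISA = [0] * n
for i, j in enumerate(SA):
    ISA[j] = i
Phi = [ISA[(SA[i] + 1) % n] for i in range(n)]

syms = sorted(set(T))                               # alphabet in lexicographic order
ends = list(accumulate(T.count(c) for c in syms))   # count of suffixes starting <= c

def first_symbol(i):
    """First symbol of the i-th suffix: the first symbol whose count exceeds i."""
    return syms[bisect_right(ends, i)]

i, out = ISA[0], []                                 # start at the suffix T[0..n-1]
for _ in range(n):
    out.append(first_symbol(i))                     # symbol at the current position
    i = Phi[i]                                      # step forward to the neighbor
print("".join(out) == T)                            # True
```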
To encode $\Phi$, we conceptually partition its values into context lists according to each symbol $c \in \Sigma$: the context of $\Phi(i)$ is defined as the symbol that begins the $i$th suffix in lexicographic order. More generally, we denote by $k$-context a length-$k$ string. Suppose we have two suffixes at indices $i$ and $j$ (where $i < j$) in the suffix array that both start with the same symbol (context) $c$. Then if we remove the first symbol from each suffix, the two suffixes that remain must maintain the same relative ordering in the suffix array: $\Phi(i) < \Phi(j)$.
Property 1.
The neighbor function $\Phi$ is a piecewise increasing sequence; that is, its values are increasing within each context list.
We show the neighbor function $\Phi$ for the text $T$ = tcaaaatatatgcaacatatagtattagattgtat# (see Table 1). The suffixes starting with the symbol a span indices 1 through 15 of the suffix array, and their $\Phi$ values form the increasing sequence $\langle 2, 4, 5, 11, 18, 19, 22, 23, 25, 27, 28, 29, 32, 34, 35 \rangle$, which we call a's context list. For a given $k$ (to be specified later), we further partition each $c$'s context list into sublists according to the $k$-context $x$. Here the $k$-context of $\Phi(i)$ denotes the $k$-symbol prefix of the suffix starting at position $SA[i]+1$, and its preceding symbol is $c = T[SA[i]]$. For the text shown in Table 1, the symbol a precedes the 2-context at in the suffix aatatat…: removing the leading a leaves the suffix atatat…, whose 2-symbol prefix is at. As another example, the 2-context at, associated with indices 8 through 15 of the suffix array, further decomposes a's context list into the sublist $\langle 11 \rangle$. The entries in each sublist also form an increasing sequence.
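Continuing the Python sketch above, the partition into context lists and sublists (and the numbers just quoted) can be checked directly; the sublist keys $\langle y, x\rangle$ follow the definition in the text:

```python
from collections import defaultdict

T = "tcaaaatatatgcaacatatagtattagattgtat#"; n = len(T)
SA = sorted(range(n), key=lambda j: T[j:])
ISA = [0] * n
for i, j in enumerate(SA):
    ISA[j] = i
Phi = [ISA[(SA[i] + 1) % n] for i in range(n)]

k = 2
sublists = defaultdict(list)
for i in range(n):
    y = T[SA[i]]                        # symbol preceding the k-context
    nxt = (SA[i] + 1) % n
    x = T[nxt:nxt + k]                  # k-context: prefix of the next suffix
    sublists[(y, x)].append(Phi[i])

a_list = [Phi[i] for i in range(n) if T[SA[i]] == "a"]
print(a_list)                # [2, 4, 5, 11, 18, 19, 22, 23, 25, 27, 28, 29, 32, 34, 35]
print(sublists[("a", "at")]) # [11]
print(sorted(v for (y, x), vs in sublists.items() if x == "at" for v in vs))
                             # [8, 9, 10, 11, 12, 13, 14, 15]: contiguous, cf. Property 2
```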
The next property is key for the efficient coding of the neighbor function $\Phi$:
Property 2.
For any $k$-context $x$, all the entries in the sublists $\langle y, x\rangle$, over all symbols $y$ for which the sublist is nonempty, form an index range of contiguous values in the suffix array $SA$. Moreover, all $k$-contexts that precede $x$ in lexicographic order create an index range immediately prior to the index range of $x$ in the suffix array.
For coding purposes, this property allows us to normalize each entry in each sublist $\langle y, x\rangle$ to the range from 1 to $n_x$, where $n_x$ is the number of suffixes whose first $k+1$ symbols are of the form $y\,x$ for some symbol $y$. For example, in Table 1, for $x$ = t, the union of the sublists $\langle y, \texttt{t}\rangle$ over all $y$ is the contiguous range $[23..35]$ of 13 values.
In Section 4 we will see how the ability of Wavelet Trees to encode individual sublists in their 0th-order entropy results in a cumulative higher-order compression for the full text $T$.
3.2 FM-Index
The FM-index involves accessing the Burrows-Wheeler transform (BWT [1]), which appears as the column $L$ in the example in Table 1. Adjacent entries in the $L$ column tend to share similar statistics because the suffixes that follow them in the text are adjacent in lexicographic order, and hence begin with the same symbols. As an example, $L[8..15]$ = ttcatttg is the substring of $L$ formed by all the symbols of $T$ that precede the 2-context at, as you can observe in Table 1.
The close relation between the CSA and the BWT follows from the fact that the neighbor function $\Phi$ is the inverse of the so-called $LF$ function of the BWT (which maps the index of a symbol in the $L$ column to the index where that symbol appears, rotated forward, in the $F$ column). In other words, $j = \Phi(i)$ if and only if $i = LF(j)$, as can be seen from Table 1.
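The inverse relation is easy to verify in code (again a small Python sketch of ours; the stable sort realizes the rank-preserving mapping from $L$ to $F$):

```python
T = "tcaaaatatatgcaacatatagtattagattgtat#"; n = len(T)
SA = sorted(range(n), key=lambda j: T[j:])
ISA = [0] * n
for i, j in enumerate(SA):
    ISA[j] = i
Phi = [ISA[(SA[i] + 1) % n] for i in range(n)]
L = [T[(SA[i] - 1) % n] for i in range(n)]         # the BWT column

order = sorted(range(n), key=lambda i: (L[i], i))  # stable: F-positions of L's symbols
LF = [0] * n
for f_pos, l_pos in enumerate(order):
    LF[l_pos] = f_pos

assert all(LF[Phi[i]] == i for i in range(n))      # j = Phi(i)  iff  i = LF(j)
```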
Building $LF$ is straightforward for symbols that occur only once, as is the case for # in our running example of Table 1. But computing $LF$ efficiently is no longer trivial for recurring symbols. Nonetheless, it can be solved in optimal $O(n + \sigma)$ time thanks to a clever algorithm that uses an auxiliary vector $C$ of size $\sigma$ and is shown in Algorithm 1. For the sake of description, we assume that the array $C$ is indexed by a symbol rather than by an integer. The first for-loop in Algorithm 1 computes, for each symbol $c$, the number $\mathrm{occ}(c)$ of its occurrences in $L$, and thus it sets $C[c] = \mathrm{occ}(c)$. Then, the second for-loop turns these symbol-wise occurrences into a cumulative sum, namely $C[c] = \sum_{b < c} \mathrm{occ}(b)$. We notice that $C[c]$ now gives the first (0-based) position in $F$ where the symbol $c$ occurs. Therefore, before the last for-loop starts, $C[c]$ is the landing position in $F$ of the first $c$ in $L$ (we thus know the $LF$-mapping for the first occurrence of every alphabet symbol). Finally, the last for-loop scans the column $L$, and whenever it encounters the symbol $c = L[i]$, it sets $LF(i) = C[c]$ and then increments $C[c]$. So the algorithm maintains the invariant that $C[c] = \sum_{b < c} \mathrm{occ}(b) + k$ after $k$ occurrences of $c$ in $L$ have been processed. The time complexity of this computation is $O(n + \sigma)$.
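Algorithm 1 is not reproduced here, but a faithful Python rendering of the three loops just described looks as follows (0-based positions; $C$ is a dictionary indexed by symbol, as in the text):

```python
def compute_lf(L):
    # Loop 1: C[c] = occ(c), the number of occurrences of c in L.
    C = {}
    for c in L:
        C[c] = C.get(c, 0) + 1
    # Loop 2: turn the counts into cumulative sums over smaller symbols,
    # so that C[c] is now the (0-based) position in F of the first c.
    total = 0
    for c in sorted(C):
        C[c], total = total, total + C[c]
    # Loop 3: scan L; each occurrence of c lands at F-position C[c], after
    # which C[c] is incremented, preserving the invariant stated in the text.
    LF = [0] * len(L)
    for i, c in enumerate(L):
        LF[i] = C[c]
        C[c] += 1
    return LF
```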
Like the neighbor function $\Phi$ in the CSA, the BWT suffices to recreate the original text $T$. To do so, it uses a backward scan supported by the $LF$-mapping, in $O(n)$ time and space. The algorithm starts from the last symbol of $T$, the special symbol #, which occurs at $F[0]$; it then proceeds by moving one symbol at a time to the left in $T$, deploying the properties of the $LF$-mapping: if we know the symbol in the $F$ column at index $i$, then the preceding symbol in the text is the symbol $L[i]$ in the $L$ column on the same row $i$. Given this observation, the backward reconstruction of $T$ first maps the current symbol occurring in $L$, say $L[i]$, to its corresponding copy in $F$, hence $F[LF(i)]$; and then takes the symbol $L[LF(i)]$, which is ensured by the properties of the cyclic rotations of the rows of the BWT to precede it in $T$ (see above). This double step, which returns the algorithmic focus to the $L$ column, allows us to move one symbol leftward in $T$. Hence, repeating this for $n$ steps, we can reconstruct the original input string $T$ in $O(n)$ time. As far as the construction of the BWT is concerned, we mention that it can be done by building the suffix array data structure via a plethora of time- and space-efficient algorithms that now abound in the literature (see e.g. [17, 16, 22]) or directly via several elegant approaches (see e.g. [14, 25, 18, 23]).
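Putting the two pieces together, the backward reconstruction is a one-loop Python sketch (it assumes, as in the running example, that the sentinel # terminates $T$ and therefore begins row 0; compute_lf is the function sketched above):

```python
def invert_bwt(L):
    n = len(L)
    LF = compute_lf(L)       # from the previous sketch
    T = ["#"] * n            # the sentinel '#' is T[n-1] and begins row 0
    i = 0                    # row 0 is the rotation starting with '#'
    for j in range(n - 2, -1, -1):
        T[j] = L[i]          # L[i] precedes row i's first symbol in the text
        i = LF[i]            # move to the row where that symbol appears in F
    return "".join(T)

bwt = "tcacaattttcatttgtgaattaatagaaag#ataa"   # the L column of Table 1
print(invert_bwt(bwt))       # tcaaaatatatgcaacatatagtattagattgtat#
```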
But how useful is it to permute $T$ via the BW-Transform? If we read the column $F$ of Table 1, we notice a sequence of possibly long runs of equal symbols. It is clear that by compressing that sequence with a simple RLE-compressor (namely, one that substitutes each run of equal symbols $c^\ell$ with the pair $\langle c, \ell\rangle$) we would get a high compression ratio. But $F$ does not allow us to return to the original string, because all strings consisting of 15 a, 3 c, 4 g, and 13 t symbols (plus the terminator #) would produce the same $F$. It can be proved [1] that the only column allowing us to return to $T$, for all possible $T$, is precisely $L$. That column can be highly compressible because it satisfies the so-called locally homogeneous property, namely, that substrings of $L$ consist of a few distinct symbols. The reason comes from the fact that if we take a string (context) of length $k$ (as in Definition 3 in PART I of $k$th-order entropy), say $x$, the symbols immediately preceding each occurrence of $x$ in $T$ occur contiguously in $L$. The nice point here is that many real sources (called Markovian) do exist that generate data sequences, other than texts, that are made locally homogeneous by the Burrows-Wheeler Transform and thus can be highly compressed by simple compressors.
This basic principle is deployed in the design of the well-known tool bzip2 (whose ideas have now arrived at version 3; see https://en.wikipedia.org/wiki/Bzip2 and https://github.com/kspalaiologos/bzip3), whose algorithmic pipeline we briefly sketch. Let us for a moment focus on two simple algorithms, namely Move-To-Front (MTF for short) and Run-Length Encoding (RLE for short, mentioned above). The former maps symbols into integers, and the latter maps runs of equal symbols into pairs. We observe that RLE is indeed a compressor, because the output sequence may be reduced in length in the presence of long runs of equal symbols; MTF, instead, is a transform that can be turned into a compressor by encoding the integers via proper variable-length encoders. In general, the compression performance of these algorithms on their own is very poor, while the BWT magically turns them into a very effective combination: first apply MTF to $L$, then feed the result to RLE, and finally encode its output with a statistical coder like Huffman or arithmetic coding. That's it. (For a scholarly discussion of the BWT and the FM-index, the reader is directed to the book [2].)
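For concreteness, here is a minimal Python sketch of the two middle stages of that pipeline (MTF, then RLE); the final statistical coder is omitted, and the initial MTF table order is an arbitrary choice of ours:

```python
def mtf_encode(s, alphabet):
    table = sorted(alphabet)             # agreed-upon initial table
    out = []
    for c in s:
        r = table.index(c)               # rank of c in the current table
        out.append(r)
        table.insert(0, table.pop(r))    # move c to the front
    return out

def rle_encode(seq):
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1                       # extend the current run
        out.append((seq[i], j - i))      # emit the pair <symbol, run length>
        i = j
    return out

L = "tcacaattttcatttgtgaattaatagaaag#ataa"   # BWT column of Table 1
print(rle_encode(mtf_encode(L, set(L))))     # local homogeneity => many 0s and runs
```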
Manzini [21] showed that this, and other pipelines based upon the BWT, can represent a text string $T$ of $n$ symbols using space proportional to $n$ times its $k$th-order empirical entropy $H_k(T)$, with guarantees stronger than the ones ensured by Lempel-Ziv-based compressors (such as gzip). Later analyses using Wavelet Trees reduced the constant factor in front of the $n H_k(T)$ term to 1 [11, 8]. Manzini's result generated increased interest in the BWT, whose impact on data compression, and not just compressed text indexing, was highlighted by [5] as a sort of compression booster.
4 Compression and Boosting to Higher-Order Entropy
The original motivation for Wavelet Trees was to extend the RRR method [26] from binary sequences to sequences composed of symbols from general alphabets, in order to achieve 0th-order entropy. That capability is important because if a text $T$'s neighbor function values $\Phi$ [12, 11, 27] (in the case of compressed suffix arrays) or its BWT (the $L$ column) [7, 24] (in the case of the FM-index) are decomposed into substrings based upon $T$'s contexts of length $k$, then we can use a Wavelet Tree to compress each of those substrings to its 0th-order entropy (plus lower-order terms). The key point is that the resulting cumulative encoding will achieve a compression equal to the $k$th-order entropy of $T$ (plus lower-order terms).
This notion of achieving high-order entropy based upon individual encodings at 0th-order entropy (later called "boosting" in [3, 5]) is inherent in Definition 3 in PART I of $k$th-order entropy, in which $n\,H_k(T)$ is expressed as a sum, over all $k$-contexts $x$, of the terms $|w_x|\,H_0(w_x)$. Grossi, Gupta, and Vitter used Wavelet Trees to provide the first demonstration of the asymptotically optimal space bound (i.e., $n\,H_k(T)$ with leading coefficient 1) for compressed suffix arrays and for the Burrows-Wheeler transform [11].
In order to better illustrate this powerful idea, let us consider a generic 0th-order statistical compressor $C_0$ whose performance, in bits per symbol, over a string $s$ is bounded by $H_0(s) + \mu(s)$. We notice that the overhead function $\mu(s) = 2/|s|$ is the one achieved by arithmetic coding, and it is $\mu(s) = 1$ for Huffman coding. In order to turn $C_0$ into an effective $k$th-order compressor $C_k$, for a fixed $k > 0$, we can compute the Burrows-Wheeler Transform $L$ of the input string $T$, take all possible length-$k$ substrings $x$ of $T$, and then partition the column $L$ into the substrings $w_x$ (each one formed by the last symbols of the rows prefixed by $x$). Therefore, the final step is just to compress each $w_x$ by $C_0$, and concatenate the output bit sequences by alphabetically increasing $x$.
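The construction of $C_k$ is easy to simulate. The following Python sketch partitions the BWT of the running example by 2-contexts and evaluates the summation $\sum_x |w_x|\,H_0(w_x)$ used in the bound derived next (cyclic prefixes are used for the few rows near the sentinel, a harmless simplification):

```python
from collections import Counter, defaultdict
from math import log2

T = "tcaaaatatatgcaacatatagtattagattgtat#"; n = len(T)
SA = sorted(range(n), key=lambda j: T[j:])
L = [T[(SA[i] - 1) % n] for i in range(n)]             # BWT column

def H0(w):
    """Empirical 0th-order entropy of w, in bits per symbol."""
    cnt = Counter(w)
    return sum(m / len(w) * log2(len(w) / m) for m in cnt.values())

k = 2
parts = defaultdict(list)                              # the substrings w_x
for i in range(n):
    x = "".join(T[(SA[i] + d) % n] for d in range(k))  # k-symbol prefix of row i
    parts[x].append(L[i])

bits = sum(len(w) * H0(w) for w in parts.values())     # = n * H_k(T)
print(bits / n, "bits/symbol at order", k, "vs", H0(L), "at order 0")
```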
As far as the compression performance in bits per symbol is concerned, we easily derive that it can be bounded as $\frac{1}{n}\sum_x |w_x|\,\big(H_0(w_x) + \mu(w_x)\big) = H_k(T) + \frac{1}{n}\sum_x |w_x|\,\mu(w_x)$ bits, where we have applied Definition 3 in PART I of $H_k$ to the summation of the terms $|w_x|\,H_0(w_x)$. Now, if $C_0$ is the arithmetic coder, the previous bound is $H_k(T) + O(\sigma^k/n)$ bits, since $\mu(w_x) = 2/|w_x|$ and there are at most $\sigma^k$ nonempty contexts. In [5] the authors showed other, stronger upper bounds on the performance of $C_k$ and, more importantly, that one actually does not need to fix $k$ in advance, since there is a so-called Compression Booster that identifies in optimal linear time a partition of $L$ that achieves a compression ratio at least as good as the one obtained by $C_k$, up to an additive lower-order term, for any possible $k$ in the allowable range. The algorithm is a surprisingly simple and elegant greedy-based algorithm; we refer the interested reader to that paper.
4.1 Using a Single Wavelet Tree with Implicit Partitions of Context
In the previous section, we decomposed the string being compressed (such as the neighbor sequence $\Phi$ of the CSA or the $L$ column of the FM-index) into contexts. Each individual context was then encoded to its 0th-order entropy, and by Definition 3 in PART I of $k$th-order entropy, the sum of the resulting individual terms gives a total of $n\,H_k(T)$ bits plus lower-order terms (the boosting effect). In some cases, the results hold for all $k$ in the allowed range $k \le \alpha \log_\sigma n$, for some constant $0 < \alpha < 1$.
Foschini et al. [9] showed how $k$th-order entropy can be achieved by using a single Wavelet Tree, without explicit partitioning into $k$-contexts. Intuitively, there are two reasons:
-
One reason is that the individual Wavelet Trees that would be constructed for each context can be found within the single Wavelet Tree constructed on the entire BWT string $L$, as illustrated in Figure 1.
-
The other reason follows from using a method like run-length encoding (RLE) or gap encoding (Gap), in conjunction with a prefix coder such as the Elias $\gamma$ or $\delta$ code to encode the lengths, in order to encode the bit arrays of the nodes of the single Wavelet Tree. On the plus side, there is thus no need for data structures to keep track of the statistics of many individual contexts. On the negative side, the coding method RLE or Gap does not know the $k$-context boundaries. However, Foschini et al. [9] showed that RLE adapts to the statistics of each $k$-context it encounters, and the positives make up for the negatives. By superimposing hypothetical $k$-context boundaries on the string for purposes of analysis, one can show that the resulting RLE implicitly encodes each $k$-context in roughly twice its 0th-order entropy space, and by Definition 3 in PART I of $k$th-order entropy, the resulting overall encoding achieves a leading space term of roughly $2\,n\,H_k(T)$ bits. In effect, it achieves implicit boosting.
In implicit boosting, there is no need to specify $k$ in advance; the compression results hold for all $k$ in the allowed range $k \le \alpha \log_\sigma n$, $0 < \alpha < 1$. Foschini et al. [9] used RLE with $\gamma$ encoding (sketched below), but other coding methods could be used, possibly switching dynamically among several methods (along with bits that indicate the choices made) [3, 13, 15].
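For completeness, a tiny Python sketch of the Elias $\gamma$ code used to write down the run lengths: a positive integer $m$ whose binary representation has $b$ bits is emitted as $b-1$ zeros followed by those $b$ bits.

```python
def gamma_encode(m):
    assert m >= 1
    b = m.bit_length()
    return "0" * (b - 1) + format(m, "b")     # b-1 zeros, then m in binary

def gamma_decode(bits, pos=0):
    z = 0
    while bits[pos + z] == "0":
        z += 1                                # count leading zeros: value has z+1 bits
    m = int(bits[pos + z : pos + 2 * z + 1], 2)
    return m, pos + 2 * z + 1                 # decoded value and next read position

print([gamma_encode(m) for m in (1, 2, 5)])   # ['1', '010', '00101']
```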
Gog et al. [10] compress the BWT by using the RRR method [26] to encode the Wavelet Trees of fixed-size blocks of the BWT, rather than using partitions based upon higher-order contexts; they showed that the resulting compression achieves the $n\,H_k(T)$ leading space term. Mäkinen and Navarro [19, 20] showed the surprising result that RRR [26] applied to a single Wavelet Tree of the entire BWT sequence $L$ or neighbor sequence $\Phi$ (i.e., without any partitioning) also achieves the $n\,H_k(T)$-bits leading space term, for a reason similar to that of [9] described above; careful analysis in [19, 20] showed that the subblocks formed by [26] within each Wavelet Tree node implicitly encode each $k$th-order context in 0th-order entropy space plus lower-order terms. Grossi, Vitter, and Xu [13] present interesting experiments on the practical performance of these approaches.
References
- [1] M. Burrows and D. J. Wheeler. A block-sorting lossless data compression algorithm. Research Report 124, Digital Systems Research Center, Palo Alto, CA, 1994.
- [2] Paolo Ferragina. Pearls of Algorithm Engineering. Cambridge University Press, 2023.
- [3] Paolo Ferragina, Raffaele Giancarlo, and Giovanni Manzini. The myriad virtues of Wavelet Trees. Information and Computation, 207(8):849–866, 2009. doi:10.1016/j.ic.2008.12.010.
- [4] Paolo Ferragina, Raffaele Giancarlo, Giovanni Manzini, Giovanna Rosone, Rossano Venturini, and Jeffrey Scott Vitter. Wavelet Tree, Part I: A Brief History. Appearing in the Festschrift dedicated to Roberto Grossi.
- [5] Paolo Ferragina, Raffaele Giancarlo, Giovanni Manzini, and Marinella Sciortino. Boosting textual compression in optimal linear time. Journal of the ACM, 52(4):688–713, July 2005. doi:10.1145/1082036.1082043.
- [6] Paolo Ferragina and Giovanni Manzini. Opportunistic data structures with applications. In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science (FOCS’00), pages 390–398, Washington, DC, November 2000. IEEE Computer Society. doi:10.1109/SFCS.2000.892127.
- [7] Paolo Ferragina and Giovanni Manzini. Indexing compressed text. Journal of the ACM, 52(4):552–581, 2005. doi:10.1145/1082036.1082039.
- [8] Paolo Ferragina, Giovanni Manzini, Veli Mäkinen, and Gonzalo Navarro. Compressed representations of sequences and full-text indexes. ACM Transactions on Algorithms, 3(2):Article 20, 2007. An earlier version appeared in Proceedings of the 11th Symposium on String Processing and Information Retrieval (SPIRE), 150–160, October 2004.
- [9] Luca Foschini, Roberto Grossi, Ankur Gupta, and Jeffrey S. Vitter. When indexing equals compression: Experiments with compressing suffix arrays and applications. ACM Transactions on Algorithms, 2(4):611–639, 2006. This work combined earlier versions from SIAM Symposium on Discrete Algorithms (SODA), January 2004, and “Fast compression with a static model in high-order entropy,” Data Compression Conference (DCC), March 2004. doi:10.1145/1198513.1198521.
- [10] Simon Gog, Juha Kärkkäinen, Dominik Kempa, Matthias Petri, and Simon J. Puglisi. Fixed block compression boosting in FM-Indexes: Theory and practice. Algorithmica, 81:1370–1391, 2019. doi:10.1007/S00453-018-0475-9.
- [11] Roberto Grossi, Ankur Gupta, and Jeffrey S. Vitter. High-order entropy-compressed text indexes. In Proceedings of the 14th ACM-SIAM Symposium on Discrete Algorithms (SODA’03), pages 841–850, New York, NY, January 2003. Association for Computing Machinery.
- [12] Roberto Grossi and Jeffrey S. Vitter. Compressed suffix arrays and suffix trees with applications to text indexing and string matching. SIAM Journal on Computing, 35(2):378–407, 2005. An earlier version appeared in Proceedings of the 32nd ACM Symposium on Theory of Computing (STOC’00), May 2000, 397–406. doi:10.1137/S0097539702402354.
- [13] Roberto Grossi, Jeffrey S. Vitter, and Bojian Xu. Wavelet trees: From theory to practice. In 2011 First International Conference on Data Compression, Communications and Processing, pages 210–221, June 2011. doi:10.1109/CCP.2011.16.
- [14] Wing-Kai Hon, Kunihiko Sadakane, and Wing-Kin Sung. Breaking a time-and-space barrier in constructing full-text indices. SIAM J. Comput., 38(6):2162–2178, 2009. doi:10.1137/070685373.
- [15] Hongwei Huo, Zongtao He, Pengfei Liu, and Jeffrey S. Vitter. FM-Adaptive: A practical data-aware FM-index, 2025. Submitted.
- [16] Juha Kärkkäinen, Peter Sanders, and Stefan Burkhardt. Linear work suffix array construction. Journal of the ACM, 53(6):918–936, 2006. doi:10.1145/1217856.1217858.
- [17] Dong Kyue Kim, Jeong Seop Sim, Heejin Park, and Kunsoo Park. Constructing suffix arrays in linear time. Journal of Discrete Algorithms, 3(2):126–142, 2005. doi:10.1016/J.JDA.2004.08.019.
- [18] Alan Kuhnle, Taher Mun, Christina Boucher, Travis Gagie, Ben Langmead, and Giovanni Manzini. Efficient construction of a complete index for pan-genomics read alignment. In Lenore J. Cowen, editor, Research in Computational Molecular Biology - 23rd Annual International Conference, RECOMB 2019, Washington, DC, USA, May 5-8, 2019, Proceedings, volume 11467 of Lecture Notes in Computer Science, pages 158–173. Springer, 2019. doi:10.1007/978-3-030-17083-7_10.
- [19] Veli Mäkinen and Gonzalo Navarro. Implicit compression boosting with applications to self-indexing. In Proceedings of the 14th International Symposium on String Processing and Information Retrieval (SPIRE’07), pages 229–241, Heidelberg, Germany, 2007. Springer-Verlag Berlin. doi:10.1007/978-3-540-75530-2_21.
- [20] Veli Mäkinen and Gonzalo Navarro. Dynamic entropy-compressed sequences and full-text indexes. ACM Transactions on Algorithms, 4(3):article 32, 2008.
- [21] Giovanni Manzini. An analysis of the Burrows-Wheeler transform. Journal of the ACM, 48(3):407–430, 2001. An earlier version appeared in Proceedings of the 10th Symposium on Discrete Algorithms (SODA ’99), January 1999, 669–677. doi:10.1145/382780.382782.
- [22] Giovanni Manzini and Paolo Ferragina. Engineering a lightweight suffix array construction algorithm. In Rolf H. Möhring and Rajeev Raman, editors, Algorithms - ESA 2002, 10th Annual European Symposium, Rome, Italy, September 17-21, 2002, Proceedings, volume 2461 of Lecture Notes in Computer Science, pages 698–710. Springer, 2002. doi:10.1007/3-540-45749-6_61.
- [23] J. Ian Munro, Gonzalo Navarro, and Yakov Nekrich. Fast compressed self-indexes with deterministic linear-time construction. Algorithmica, 82(2):316–337, 2020. A conference version of this paper appeared in Proc. ISAAC 2017. doi:10.1007/S00453-019-00637-X.
- [24] Gonzalo Navarro and Veli Mäkinen. Compressed full-text indexes. ACM Computing Surveys, 39(1):Article 2, 2007.
- [25] Daisuke Okanohara and Kunihiko Sadakane. A linear-time Burrows-Wheeler transform using induced sorting. In Jussi Karlgren, Jorma Tarhio, and Heikki Hyyrö, editors, String Processing and Information Retrieval, 16th International Symposium, SPIRE 2009, Saariselkä, Finland, August 25-27, 2009, Proceedings, volume 5721 of Lecture Notes in Computer Science, pages 90–101. Springer, 2009. doi:10.1007/978-3-642-03784-9_9.
- [26] Rajeev Raman, Venkatesh Raman, and S. Srinivasa Rao. Succinct indexable dictionaries with applications to encoding $k$-ary trees, prefix sums and multisets. ACM Transactions on Algorithms, 3(4):Article 43, 2007. doi:10.1145/1290672.1290680.
- [27] Kunihiko Sadakane. Compressed suffix trees with full functionality. Theory Comput. Syst., 41(4):589–607, 2007. doi:10.1007/S00224-006-1198-X.
- [28] Kunihiko Sadakane. Compressed text databases with efficient query algorithms based on the compressed suffix array. In Proceedings of the 11th Symposium on Algorithms and Computation (ISAAC’00), pages 410–421, Banff Alberta, Canada, August 2000. Springer-Verlag Berlin, Heidelberg.
