Combined Search and Encoding for Seeds,
with an Application to Minimal Perfect Hashing
Abstract
Randomised algorithms often employ methods that can fail and that are retried with independent randomness until they succeed. Randomised data structures therefore often store indices of successful attempts, called seeds. If $n$ such seeds are required (e.g., for $n$ independent substructures) the standard approach is to compute for each $i \in [n]$ the smallest successful seed $S_i$ and store the sequence $S = (S_1, \dots, S_n)$.
The central observation of this paper is that this is not space-optimal. We present a different algorithm that computes a sequence $S^* = (S_1^*, \dots, S_n^*)$ of successful seeds such that the entropy of $S^*$ undercuts the entropy of $S$ by $\Theta(n)$ bits in most cases. To achieve a memory consumption of $\mathrm{OPT} + \varepsilon n$ bits, the expected number of inspected seeds increases by a factor of $\mathcal{O}(1/\varepsilon)$.
We demonstrate the usefulness of our findings with a novel construction for minimal perfect hash functions that, for $n$ keys and any $\varepsilon \in (0,1)$, has space requirement $\log_2(e)n + \mathcal{O}(\varepsilon n)$ bits and construction time $\mathcal{O}(n/\varepsilon)$. All previous approaches only support fixed $\varepsilon$ or have construction times that increase exponentially with $1/\varepsilon$. Our implementation beats the construction throughput of the state of the art by more than two orders of magnitude for small $\varepsilon$.
Keywords and phrases: Random Seed, Encoding, Bernoulli Process, Backtracking, Perfect Hashing
2012 ACM Subject Classification: Theory of computation → Randomness, geometry and discrete structures; Theory of computation → Data compression; Theory of computation → Data structures design and analysis
Supplementary Material: Software (Source code): https://github.com/ByteHamster/ConsensusRecSplit [23], archived at swh:1:dir:eeef738e12bea287eae54a77ce81241587a91767; Software (Experiments): https://github.com/ByteHamster/MPHF-Experiments [17], archived at swh:1:dir:7b02eda8ffa5a9d61a086ff2fc34fb0fbdc3eff4
Acknowledgements: This paper is based on the bachelor's thesis of Jonatan Ziegler [34].
Editors: Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman
Series and Publisher: Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
The construction of randomised data structures can sometimes fail, so the data structures need to store indices of successful attempts called seeds.¹ (¹ Throughout this paper, we use the Simple Uniform Hashing Assumption, as do many works before us [5, 29, 28, 22, 21, 14, 7]. This is equivalent to saying that we assume access to an infinite sequence of uniformly random and independent hash functions, each identified by its index (seed). The assumption is an adequate model for practical hash functions and can be justified in theory using the "split-and-share" trick [6].) In this paper, we present a technique that finds and encodes successful seeds in a space-efficient and time-efficient way.
When searching a single seed, we may imagine an infinite sequence of i.i.d. Bernoulli random variables, called a Bernoulli process, that indicate which natural numbers lead to success and that can be inspected one by one. To model the task of finding a sequence of seeds we use a sequence of Bernoulli processes as follows. In this paper $\mathbb{N}_0 := \{0, 1, 2, \dots\}$ and $[n] := \{1, \dots, n\}$.
Definition 1 (Seed Search and Encoding Problem (SSEP)).
Let $n \in \mathbb{N}$ and $p_1, \dots, p_n \in (0,1]$. Given for each $i \in [n]$ a Bernoulli process $X_i = (X_i(s))_{s \in \mathbb{N}_0}$ with parameter $p_i$, the task is to craft a bitstring $B$ that encodes for each $i \in [n]$ a successful seed $S_i$, i.e. $X_i(S_i) = 1$.
Our primary goals are to minimise the length $|B|$ of $B$ in bits, and to minimise for each $i \in [n]$ the number $T_i$ of inspected Bernoulli random variables from $X_i$. Moreover, the time to decode $S_i$ given $B$ and $i$ should be constant. The optimal values for the expectations of $|B|$ and $T_i$ are the following (for a proof see the full version of this paper [24]).
$$\mathbb{E}[|B|] \;\ge\; \mathrm{OPT}_B := \sum_{i \in [n]} \log_2(1/p_i) \qquad \text{and} \qquad \mathbb{E}[T_i] \;\ge\; 1/p_i. \tag{1}$$
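The following sketch (ours, not from the paper's implementation) makes the model of Definition 1 concrete: under the Simple Uniform Hashing Assumption, inspecting $X_i(s)$ amounts to one hash evaluation of the pair $(i, s)$, here simulated with a splitmix64-style mixer.

```cpp
#include <cstdint>

// Model sketch for Definition 1: X_i(s) behaves like an independent
// Bernoulli(p_i) coin. A splitmix64 finaliser stands in for the
// idealised hash function; all names here are our own.
static uint64_t mix(uint64_t z) {
    z += 0x9e3779b97f4a7c15ULL;
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

// Inspecting X_i(s): returns true ("success") with probability roughly p_i.
static bool inspect(uint64_t i, uint64_t s, double p_i) {
    return (mix(mix(i) + s) >> 11) * 0x1.0p-53 < p_i;  // uniform [0,1) < p_i
}
```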
1.1 Ad-Hoc Solutions
We briefly consider two natural strategies for solving the SSEP to build intuition. The obvious strategy (MIN) is to encode the smallest successful seeds, i.e. define
$$S_i^{\mathrm{MIN}} := \min\{s \in \mathbb{N}_0 \mid X_i(s) = 1\} \quad \text{for } i \in [n],$$
and let $B$ be an optimal encoding of $S^{\mathrm{MIN}} = (S_1^{\mathrm{MIN}}, \dots, S_n^{\mathrm{MIN}})$. Perhaps surprisingly, this requires $\Omega(n)$ bits more than the optimum if $p_1, \dots, p_n$ are bounded away from $1$.
Lemma 2 (MIN is fast but not space-efficient, proof in full version [24]).
If $B$ encodes $S^{\mathrm{MIN}}$ then $\mathbb{E}[T_i] = 1/p_i$ for all $i \in [n]$ but $\mathbb{E}[|B|] \ge \mathrm{OPT}_B + \Omega(n)$.
Another simple strategy (UNI) is to encode the smallest seed that is simultaneously successful for all Bernoulli processes, i.e. we define
$$S^{\mathrm{UNI}} := \min\{s \in \mathbb{N}_0 \mid \forall i \in [n]\colon X_i(s) = 1\} \quad \text{and} \quad S_i := S^{\mathrm{UNI}} \text{ for all } i \in [n].$$
This yields a bitstring of minimal length but requires expected total time $\prod_{i \in [n]} 1/p_i$, which is exponential in $n$ if $p_1, \dots, p_n$ are bounded away from $1$.
Lemma 3 (UNI is space-efficient but slow, proof in full version [24]).
If $B$ encodes $S^{\mathrm{UNI}}$ then $\mathbb{E}[|B|] \le \mathrm{OPT}_B + \mathcal{O}(1)$ but $\mathbb{E}\big[\sum_{i \in [n]} T_i\big] = \Omega\big(\prod_{i \in [n]} 1/p_i\big)$.
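As a toy demonstration (our code, not the paper's): with $p_i = 1/2$ the MIN search inspects two seeds per process in expectation, while the UNI search needs about $2^n$ attempts.

```cpp
#include <cstdint>
#include <cstdio>

// MIN vs. UNI on n simulated Bernoulli(1/2) processes, using a
// splitmix64-style hash as the reproducible random source (our sketch).
static uint64_t mix(uint64_t z) {
    z += 0x9e3779b97f4a7c15ULL;
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}
static bool X(uint64_t i, uint64_t s) { return mix(mix(i) + s) & 1; }  // p = 1/2

int main() {
    const uint64_t n = 8;
    // MIN: n independent searches; each smallest seed is geometric.
    for (uint64_t i = 0; i < n; i++) {
        uint64_t s = 0;
        while (!X(i, s)) s++;
        printf("MIN seed %llu: %llu\n", (unsigned long long)i, (unsigned long long)s);
    }
    // UNI: one seed successful for all processes; expected ~2^n candidates.
    uint64_t s = 0;
    for (bool ok = false; !ok; ) {
        ok = true;
        for (uint64_t i = 0; i < n && ok; i++) ok = X(i, s);
        if (!ok) s++;
    }
    printf("UNI seed: %llu\n", (unsigned long long)s);
}
```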
1.2 The Consensus Algorithm
We present an algorithm for the SSEP designed to simultaneously get close to both optima. We call it Combined Search and Encoding for Successful Seeds, or Consensus for short. We summarise the guarantees of the stronger of two variants here.
Theorem 4 (Main result, informal version of Theorem 7).
For any $\varepsilon > 0$ the full Consensus algorithm with high probability solves the SSEP with $|B| = \mathrm{OPT}_B + \varepsilon n + \mathcal{O}(\log n)$ and $\mathbb{E}[T_i] = \mathcal{O}(1/(\varepsilon p_i))$ for all $i \in [n]$. Each seed $S_i$ is a bitstring of fixed length found at a predetermined offset in $B$.
Note that if the geometric mean of $p_1, \dots, p_n$ is at most $\frac12$ then $\mathrm{OPT}_B \ge n$ and the leading term $\varepsilon n$ in our space overhead is at most $\varepsilon \cdot \mathrm{OPT}_B$.² (² It is possible to get $|B| = (1+\varepsilon)\mathrm{OPT}_B + \mathcal{O}(\log n)$ and $\mathbb{E}[T_i] = \mathcal{O}(1/(\varepsilon p_i))$ in general. This requires work if many $p_i$ are close to $1$ such that $\mathrm{OPT}_B \ll n$. In that case we can "summarise" subsequences of $X_1, \dots, X_n$ using essentially the UNI strategy and then apply our main theorem to a shorter sequence of processes with success probabilities at most $\frac12$. We decided against integrating this improvement into our arguments as it interferes with clarity.)
Intuition.
First reconsider the MIN-strategy. It involves the sequence $S^{\mathrm{MIN}}$ with some entropy $H(S^{\mathrm{MIN}})$. There are two issues with storing $S^{\mathrm{MIN}}$. The superficial issue is that each number in the sequence follows a geometric distribution, and it is not obvious how to encode $S^{\mathrm{MIN}}$ in space close to $H(S^{\mathrm{MIN}})$ while preserving fast random access to each component.³ (³ The standard solution [32] using Golomb-Rice coding [11, 31] comes with $\Theta(n)$ bits of overhead.) The deeper issue is that $S^{\mathrm{MIN}}$ is not what a space-optimal approach should encode in the first place. This is demonstrated by the fact that even an optimal encoding of $S^{\mathrm{MIN}}$ needs space $H(S^{\mathrm{MIN}}) = \mathrm{OPT}_B + \Theta(n)$, which UNI undercuts by $\Theta(n)$ bits.
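To make the gap concrete (our example): for $p_i = \frac12$ the smallest successful seed satisfies $\Pr[S_i^{\mathrm{MIN}} = s] = 2^{-(s+1)}$, so

$$H(S_i^{\mathrm{MIN}}) \;=\; \sum_{s \ge 0} (s+1)\, 2^{-(s+1)} \;=\; 2 \qquad \text{while} \qquad \log_2(1/p_i) \;=\; 1,$$

i.e. storing the smallest successful seed inherently pays a full extra bit per seed in this case.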
Consensus deals with both issues simultaneously. The idea is to store for each $i \in [n]$ a natural number $a_i \in [0, c_i)$ where $c_i \approx 2^{\varepsilon}/p_i$. This allows for fixed-width encoding of $(a_1, \dots, a_n)$ using $\sum_{i \in [n]} \log_2 c_i$ bits. If we simply used $S_i = a_i$ then the expected number of successful choices for $a_i$ would be $c_i p_i \approx 2^{\varepsilon} > 1$. The problem would be, of course, that the actual number of successful choices might be zero for many $i$. So instead we interpret $a_i$ as a seed fragment and define $S_i$ to be a concatenation of seed fragments up to and including $a_i$. This way, if we run out of choices for $a_i$, we can hope that a different successful choice for earlier seed fragments will allow for a successful choice for $a_i$ as well. We can construct the sequence $(a_1, \dots, a_n)$ using backtracking. We explain our algorithm in detail in Section 2. The main technical challenge is to bound the additional cost this brings.
1.3 Motivation: Seed Sequences in Randomised Data Structures
Randomised data structures frequently use seeds to initialise pseudo-random number generators or as additional inputs to hash functions. Since a seed normally counts how often a construction has been restarted, a large seed implies that a lot of work has been carried out, meaning any reasonably fast algorithm yields seeds that are only a few bits to a few bytes in size. This does not, however, imply that seeds contribute only negligibly to overall memory consumption. That is because some data structures partition their input into many small parts called buckets and store a seed for each of these buckets. The seeds may even make up the majority of the stored data. This is the case for many constructions of perfect hash functions, but it also occurs in a construction of compressed static functions [2], which, in turn, enable space-efficient solutions to the approximate membership problem and the relative membership problem. The way we phrased the SSEP implicitly assumes that buckets are handled independently. We discuss a slight generalisation in the full version [24].
Minimal Perfect Hash Functions.
As an application of Consensus we discuss the construction of minimal perfect hash functions (MPHFs). An MPHF on a set $S$ of $n$ keys maps these keys to a range of size $n$ without collisions. The function does not necessarily have to store $S$ explicitly. The space lower bound is $\log_2(e)n - \mathcal{O}(\log n)$ bits assuming the universe of possible keys has size $\omega(n)$ [1, 26, 25, 9]. While a succinct construction has long been known [12], its space overhead is fixed to $\Theta(n (\log\log n)^2/\log n)$ bits and large in practice. Many recent practical approaches for constructing MPHFs involve a brute-force search for seed values that map certain subsets of the keys in favourable ways [30, 7, 14, 22, 21]. Those that do all combine aspects of the MIN-strategy (store smallest working seeds) and the UNI-strategy (let a single seed do lots of work) and achieve a space budget of $\log_2(e)n + \varepsilon n$ bits only in time $n \cdot 2^{\Omega(1/\varepsilon)}$ (see Section 5.1). In this paper we break this barrier for the first time. While we give the details in Section 5, the clean version of our result is the following.
Theorem 5 (Bucketed Consensus-RecSplit).
For any $\varepsilon \in (0,1)$ there is an MPHF data structure with space requirement $\log_2(e)n + \mathcal{O}(\varepsilon n)$ bits, expected construction time $\mathcal{O}(n/\varepsilon)$ and expected query time $\mathcal{O}(\log(1/\varepsilon))$.
1.4 Overview of the Paper
The structure of the paper is as follows. In Section 2, we explain Consensus in detail and state the main theoretical results. In Sections 3 and 4 we then prove these results. We give an application of Consensus to minimal perfect hashing in Section 5 and also briefly evaluate our practical implementation. Finally, we conclude the paper in Section 6.
2 Combined Search and Encoding for Successful Seeds
In this section we present two variants of our Consensus algorithm. The first aims for conceptual clarity but produces seeds of up to $\Theta(n)$ bits that cannot be decoded efficiently. The second is more technical but solves this problem.
In the following we always assume that $n$, $\varepsilon$ and $p_1, \dots, p_n \in (0,1]$ are given and that $X_i = (X_i(s))_{s \in \mathbb{N}_0}$ is a Bernoulli process with parameter $p_i$ for all $i \in [n]$.
A Branching Process.
Let $(c_i)_{i \in [n]}$ be a sequence of natural numbers.⁴ (⁴ It may help to imagine that $c_i \approx 2^{\varepsilon}/p_i$.) Consider the infinite tree visualised in Figure 1. It has $n+1$ layers, indexed from $0$ to $n$. The root in layer $0$ has an infinite number of children and nodes in layer $i \in [n-1]$ have $c_{i+1}$ children each, with $0$-based indexing in both cases. Edges between layers $i-1$ and $i$ are associated with the variables $X_i(0), X_i(1), \dots$ from top to bottom, where $X_i(s) = 0$ indicates a broken edge depicted as a dotted line. For convenience, we make another definition. For any $i \in [n]$ and any sequence $(a_1, \dots, a_i)$ of choices, consider the node reached from the root by taking in each layer $j \in [i]$ the edge to the $a_j$-th child. We define $C(a_1, \dots, a_i)$ to be the index of the reached node within layer $i$.
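In code, $C$ is a mixed-radix number; the following helper (our sketch of the definition above, with our own naming) computes the node index from the choices.

```cpp
#include <cstdint>
#include <vector>

// Our reading of C(a_1, ..., a_i): a mixed-radix number whose leading
// digit a_1 (the root's child index) is unbounded, while digit a_j for
// j >= 2 has radix c_j. Vectors are 1-based via an unused slot 0.
uint64_t nodeIndex(const std::vector<uint64_t>& a,  // choices a[1..i]
                   const std::vector<uint64_t>& c,  // radices c[2..i]
                   int i) {
    uint64_t idx = a[1];
    for (int j = 2; j <= i; j++) idx = idx * c[j] + a[j];
    return idx;  // index of the reached node within layer i
}
```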
The Simplified Consensus Algorithm.
In Consensus, we search for a non-broken path from the root of the tree to a leaf node using a DFS, see Algorithm 1. Note that such a path exists with probability $1$ because the root has infinitely many children. The path is given by the sequence $(a_1, \dots, a_n)$ of choices made in each layer. The returned sequence can be encoded as a bitstring $B$. Then $S_i := C(a_1, \dots, a_i)$ is a successful seed for each $i \in [n]$ by construction. The merits of the approach are summarised in the following theorem.
Theorem 6 (Performance of Simplified Consensus).
Given $n$, $\varepsilon \in (0,1)$ and $p_1, \dots, p_n \in (0,1]$, let $(c_i)_{i \in [n]}$ be a sequence of natural numbers satisfying⁵
$$\Big|\sum_{j=a}^{b} \log_2(c_j p_j) - \varepsilon(b-a+1)\Big| \le 1 \quad \text{for all } 1 \le a \le b \le n.$$
(⁵ Ideally we would want $c_i = 2^{\varepsilon}/p_i$ but the integer constraint on the $c_i$ forces us to be more lenient.) Then Algorithm 1 solves the SSEP with $\mathbb{E}[|B|] \le \mathrm{OPT}_B + \varepsilon n + \log_2(1/\varepsilon) + \mathcal{O}(1)$ and $\mathbb{E}[T_i] = \mathcal{O}(1/(\varepsilon p_i))$ for all $i \in [n]$.
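For concreteness, here is a compact rendering (ours, not Algorithm 1 verbatim) of the DFS just described; it backtracks out of a layer once all $c_i$ children of the current node have turned out broken.

```cpp
#include <cstdint>
#include <vector>

// Sketch of simplified Consensus under our reading of the tree: DFS for an
// unbroken root-to-leaf path. X(i, s) models the Bernoulli variable of the
// s-th edge between layers i-1 and i; c[i] is the branching factor of
// layer i (c[1] is unused: the root has infinitely many children).
// Returns idx with idx[i] = C(a_1, ..., a_i), i.e. the seed S_i.
std::vector<uint64_t> consensusDfs(int n, const std::vector<uint64_t>& c,
                                   bool (*X)(int, uint64_t)) {
    std::vector<uint64_t> a(n + 2, 0);    // current choice per layer
    std::vector<uint64_t> idx(n + 1, 0);  // node index per layer
    int i = 1;
    while (i <= n) {
        if (i > 1 && a[i] >= c[i]) {      // all children broken: backtrack
            a[i] = 0;
            i--;
            a[i]++;                       // try the next sibling above
            continue;
        }
        uint64_t s = (i == 1) ? a[1] : idx[i - 1] * c[i] + a[i];
        if (X(i, s)) {                    // unbroken edge: descend
            idx[i] = s;
            i++;
        } else {                          // broken edge: next sibling
            a[i]++;
        }
    }
    return idx;                           // terminates with probability 1
}
```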
Full Consensus Algorithm.
The main issues with the simplified Consensus algorithm are that $(a_1, \dots, a_n)$ is awkward to encode in binary and that using $C(a_1, \dots, a_i)$ directly as the seed makes for seeds of size up to $\Theta(n)$ bits, which are inefficient to compute and handle. We solve these issues with three interventions.
1. We introduce a word size $w$ and encode $a_1$ using $w$ bits (hoping that it does not overflow, see below).
2. We demand that $c_2, \dots, c_n$ are powers of two, i.e. $c_i = 2^{\ell_i}$ for $\ell_i \in \mathbb{N}_0$. With our definition of "$C$", the binary representation of the number $C(a_1, \dots, a_i)$ now simply arises by concatenating the binary representations of $a_1, \dots, a_i$ (where $a_1$ is encoded using $w$ bits and $a_j$ for $j \ge 2$ is encoded using $\ell_j$ bits).
3. Instead of defining $S_i$ to be $C(a_1, \dots, a_i)$, we define $S_i$ to be the $w$-bit suffix of $C(a_1, \dots, a_i)$. We will show that this truncation is unlikely to cause collisions, i.e. any two sequences $(a_1, \dots, a_i)$ and $(a'_1, \dots, a'_i)$ we encounter have distinct suffixes. Intuitively, this guarantees that the search algorithm will always see "fresh" random variables and does not "notice" the change (a decoding sketch follows this list).
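Interventions 2 and 3 make decoding trivial, since every seed is a fixed-width window of $B$. The following sketch (ours; all names hypothetical) reads the $w$-bit seed $S_i$ ending at its precomputed offset $o_i$.

```cpp
#include <cstdint>
#include <vector>

// Decoding sketch: with c_i = 2^{l_i}, the bitstring B is the concatenation
// a_1 (w bits), a_2 (l_2 bits), ..., a_n (l_n bits). Seed S_i is the w-bit
// window of B ending at offset o_i = w + l_2 + ... + l_i.
struct PackedBits {
    std::vector<uint64_t> words;  // bit j of B = bit (j % 64) of words[j / 64]
    uint64_t read(uint64_t pos, int len) const {  // len <= 64 bits at [pos, pos+len)
        uint64_t lo = words[pos / 64] >> (pos % 64);
        if (pos % 64 + len > 64)  // window straddles a word boundary
            lo |= words[pos / 64 + 1] << (64 - pos % 64);
        return (len == 64) ? lo : (lo & ((uint64_t(1) << len) - 1));
    }
};

// Constant-time query: extract S_i given its precomputed offset o_i.
uint64_t seed(const PackedBits& B, uint64_t o_i, int w) {
    return B.read(o_i - w, w);
}
```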
The full algorithm is given as Algorithm 2, using notation defined in Figure 2. Our formal result is as follows.
Theorem 7 (Performance of Full Consensus).
Let $\alpha \ge 1$ be an arbitrary constant. Given $n$, $\varepsilon \in (0,1)$ and $p_1, \dots, p_n \in (0,1]$, let $(\ell_i)_{i \in [n]}$ be the unique sequence of numbers such that
$$\sum_{j \le i} \ell_j = \Big\lceil \sum_{j \le i} \big(\log_2(1/p_j) + \varepsilon\big) \Big\rceil \quad \text{for all } i \in [n].$$
Assume we run Algorithm 2 with these parameters and $w \ge (2 + 3\alpha)\log_2\big(\sum_{i \in [n]} \frac{1}{\varepsilon p_i}\big) + 10$.⁶ (⁶ We did not attempt to improve the constant "10" and expect that $w = 64$ is enough in practice.) Then there is an event $G$ with $\Pr[G] \ge 1 - \mathcal{O}(n^{-\alpha})$ that implies that the run solves the SSEP, producing a bitstring $B$ with $|B| = w + \sum_{i=2}^{n} \ell_i$ containing $S_i$ as the $w$ bits ending at offset $o_i := w + \sum_{j=2}^{i} \ell_j$ for all $i \in [n]$. Moreover, $\mathbb{E}[T_i \mid G] = \mathcal{O}(1/(\varepsilon p_i))$ for all $i \in [n]$.
In particular the result guarantees that $S_i$ can be inferred efficiently given $B$ and $i$, provided that $o_i$ can be computed efficiently given $i$. In practice it is fine to use approximations of $\log_2(1/p_j)$.⁷ (⁷ Our implementation uses fixed-point arithmetic (integers counting "microbits") to avoid issues related to floating-point arithmetic not being associative and distributive.) Note that the expectation of $T_i$ is only bounded conditioned on a good event $G$, which is defined in Section 4. To obtain an unconditionally bounded expectation, Algorithm 2 should be modified to give up early in failure cases.⁸ (⁸ To do so we need not decide whether $G$ occurs. We can simply count the number of steps and abort when a limit is reached.)
Special Cases and Variants.
An important special case is $p_1 = \dots = p_n =: p$. With $\mathrm{OPT}_B = n \log_2(1/p)$ the space lower bound is then simply $n \log_2(1/p)$ bits. If we apply Theorem 7 with some $\varepsilon > 0$ then the lengths $\ell_i$ of the seed fragments are all either $\lfloor \log_2(1/p) + \varepsilon \rfloor$ or $\lceil \log_2(1/p) + \varepsilon \rceil$ with an average approaching $\log_2(1/p) + \varepsilon$. Assuming $\log_2(1/p) \notin \mathbb{N}$, it may be tempting to use $\varepsilon := \lceil \log_2(1/p) \rceil - \log_2(1/p)$ such that all seed fragments have the same length $\lceil \log_2(1/p) \rceil$. Similarly, we could pick $\varepsilon$ such that $\log_2(1/p) + \varepsilon$ is a "nice" rational number, e.g. $\log_2(1/p) + \varepsilon = z + \frac12$ for $z \in \mathbb{N}$ such that the $\ell_i$ alternate between $z$ and $z + 1$.
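For illustration (our numbers, plugging into the definition of the $\ell_i$ from Theorem 7): with $p = 1/3$ and $\varepsilon = 0.1$ we get $\log_2(1/p) + \varepsilon \approx 1.685$ and

$$\sum_{j \le i} \ell_j = \lceil 1.685\, i \rceil \quad\Longrightarrow\quad (\ell_1, \ell_2, \ell_3, \ell_4, \ell_5, \ell_6, \ell_7, \dots) = (2, 2, 2, 1, 2, 2, 1, \dots),$$

so the fragment lengths alternate between $1$ and $2$ bits with an average approaching $1.685$.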
In some cases it may be reasonable to choose $\varepsilon \ge 1$. Even though Consensus then ceases to be more space efficient than MIN, it still has the advantage that seeds are bitstrings of fixed lengths in predetermined locations. We decided against formally covering this case in Theorem 7, but argue for the following modification in the full version [24] of this paper.
Remark 8.
The guarantees of Theorem 7 also hold for $\varepsilon \ge 1$, except that $\mathbb{E}[T_i \mid G] = \mathcal{O}(1/p_i)$ for all $i \in [n]$.
It may also be worthwhile to consider variants of Consensus that use different values of the overhead parameter $\varepsilon$ in different parts of the seed sequence to minimise some overall cost. Such an idea makes sense in general if we augment the SSEP to include specific costs associated with testing the Bernoulli random variables belonging to the $i$-th family.
3 Analysis of Simplified Consensus
We begin by reducing the claim of Theorem 6 to a purely mathematical statement given in Lemma 9, which we then prove in Section 3.2.
3.1 Reduction of Theorem 6 to Lemma 9
Recall the tree defined in Section 2 and illustrated in Figure 1. For any $i \in \{0, \dots, n\}$ consider a node $v$ in layer $i$. Let $q_i$ be the probability that there is an unbroken path from $v$ to a node in layer $n$. (This probability is indeed the same for all nodes in layer $i$.) In the last layer, we have $q_n = 1$ by definition. For all $i < n$ we have $q_i = 1 - (1 - p_{i+1} q_{i+1})^{c_{i+1}}$ since $p_{i+1} q_{i+1}$ is the probability that an unbroken path via a specific child of $v$ exists, and since there is an independent chance for each of the $c_{i+1}$ children. The technical challenge of this section is to show that the $q_i$ are not too small.
Lemma 9.
Under the conditions of Theorem 6 we have $q_i \ge \hat q = \Omega(\varepsilon)$ for all $i \in \{0, \dots, n\}$.
This lemma implies Theorem 6 as follows.
Proof of Theorem 6.
Let $T_i$ for $i \in [n]$ be the number of Bernoulli random variables from the family $X_i$ that are inspected. This corresponds to the number of edges between layers $i-1$ and $i$ that are inspected by the DFS. With probability $p_i$ such an edge is unbroken and, if it is unbroken, then with probability $q_i$ the DFS will never backtrack out of the corresponding subtree and hence never inspect another edge from the same layer. This implies $\mathbb{E}[T_i] \le \frac{1}{p_i q_i}$ and thus $\mathbb{E}[T_i] = \mathcal{O}(1/(\varepsilon p_i))$, where the last step uses Lemma 9.
Concerning the space to encode $B$, there are two parts to consider. The "fixed width" numbers $a_2, \dots, a_n$ can be encoded using $\sum_{i=2}^{n} \log_2 c_i$ bits. By the assumption on $(c_i)$ we have $\sum_{i=2}^{n} \log_2(c_i p_i) \le \varepsilon n + 1$ and can bound the space by
$$\sum_{i=2}^{n} \log_2 c_i \;\le\; \sum_{i=2}^{n} \log_2(1/p_i) + \varepsilon n + 1 \;\le\; \mathrm{OPT}_B + \varepsilon n + \mathcal{O}(1).$$
The "variable width" number $a_1$ has distribution $\mathrm{Geom}(p_1 q_1)$ because $a_1$ is the $0$-based index of the first child of the root with an unbroken path to layer $n$. Again by Lemma 9 we have $\mathbb{E}[a_1] \le \frac{1}{p_1 q_1} = \mathcal{O}(\frac{1}{\varepsilon p_1})$. Using Jensen's inequality [15, 27] and the fact that $\log_2$ is concave, the expected number of bits to encode $a_1$ is hence bounded by
$$\mathbb{E}[\log_2(a_1 + 1)] + \mathcal{O}(1) \;\le\; \log_2\big(\mathbb{E}[a_1] + 1\big) + \mathcal{O}(1) \;=\; \log_2(1/p_1) + \log_2(1/\varepsilon) + \mathcal{O}(1).$$
Summing the contributions of both parts yields a bound on $\mathbb{E}[|B|]$ as claimed. ◻
3.2 Proof of Lemma 9
To understand the proof of Lemma 9 it may help to consider a simplified case first. Assume $p_i = p$ and $c_i = c$ for all $i$ with $cp = 2^{\varepsilon}$, i.e. $cp$ is slightly larger than $1$. We have $q_{i-1} = 1 - (1 - pq_i)^{c}$, which motivates defining $f(q) := 1 - (1 - pq)^{c}$ such that $q_i$ arises by iteratively applying $f$ to an initial value of $q_n = 1$. From the illustration in Figure 3 we see that $q_i$ can never drop below the largest fixed point $q^*$ of $f$, and it is not hard to show that $q^* = \Omega(\varepsilon)$.
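A back-of-the-envelope version of the fixed-point claim (our sketch, using the bound $(1 - pq)^c \le 1 - cpq + (cpq)^2/2$ derived in Lemma 10(i) below):

$$f(q) \;\ge\; cpq - \tfrac{(cpq)^2}{2} \;\ge\; q \qquad \text{whenever} \qquad q \;\le\; \frac{2(cp - 1)}{(cp)^2},$$

so with $cp = 2^{\varepsilon} = 1 + \Theta(\varepsilon)$ every $q$ below a threshold of order $\varepsilon$ satisfies $f(q) \ge q$, and the largest fixed point is indeed $q^* = \Omega(\varepsilon)$.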
Our full proof has to deal with a setting that is much less clean. The values $p_i$ and $c_i$ depend on $i$, yielding a distinct function $f_i$ for each $i$, and $c_i p_i \ge 2^{\varepsilon}$ is only true in the sense of a geometric average over consecutive indices. We hence analyse compositions $f_a \circ \dots \circ f_b$ for sufficiently large $b - a$. We begin with some definitions.
- Let $f_i(q) := 1 - (1 - p_i q)^{c_i}$. Note that this gives $q_{i-1} = f_i(q_i)$ for $i \in [n]$.
- For any set $I = \{a, \dots, b\} \subseteq [n]$ of consecutive indices let $p_I := \prod_{i \in I} p_i$, $c_I := \prod_{i \in I} c_i$, and $f_I := f_a \circ \dots \circ f_b$. Note that the assumption of Theorem 6 gives $c_I p_I \ge 2^{\varepsilon |I| - 1}$.
- Let $k := \lceil 3/\varepsilon \rceil$. Its role is to be a number satisfying $k = \mathcal{O}(1/\varepsilon)$, $\varepsilon k \ge 3$, and hence $2^{\varepsilon k - 1} \ge 4$. In particular, if $I$ is a set of consecutive indices with $|I| = k$ then $c_I p_I \ge 4$.
- Lastly, let $q^* := \varepsilon/16$ and $\hat q := q^*/4$.
These definitions are designed to make the proof of the following lemma as simple as possible. No attempt was made to optimise constant factors.
Lemma 10.
Let $I \subseteq [n]$ be a set of consecutive indices with $|I| \le k$. The following holds.
(i) For all $q \in [0,1]$ and $i \in [n]$: $f_i(q) \ge c_i p_i q\,\big(1 - \frac{c_i p_i q}{2}\big)$.
(ii) If $0 \le q \le \frac{1}{2 c_I p_I}$ then $f_I(q) \ge c_I p_I q\,(1 - c_I p_I q)$.
(iii) If $q \ge q^*$ and $|I| = k$ then $f_I(q) \ge q^*$.
Proof.
(i) The exponential function satisfies the following bounds: $1 - x \le e^{-x} \le 1 - x + x^2/2$ for $x \ge 0$. We get the left claim by integrating $e^{-t} \le 1$ for $t \in [0, x]$ to get $1 - e^{-x} \le x$. Integrating $1 - t \le e^{-t}$ again for $t \in [0, x]$ gives the right claim. Applying both to $(1 - p_i q)^{c_i} \le e^{-c_i p_i q}$ we get
$$f_i(q) = 1 - (1 - p_i q)^{c_i} \;\ge\; 1 - e^{-c_i p_i q} \;\ge\; c_i p_i q\,\Big(1 - \tfrac{c_i p_i q}{2}\Big).$$
(ii) We use induction on $j := |I|$. For $j = 1$ we have $I = \{i\}$ and $f_I = f_i$, as well as $c_i p_i q \le \frac12$, and the claim holds by (i). Let now $I = \{a\} \cup I'$ with $|I| = j \ge 2$ and $I' := I \setminus \{a\}$ for $a := \min I$. We assume the claim holds for $I'$. Let $x := f_{I'}(q)$. A union bound gives $x \le c_{I'} p_{I'} q$ and hence $c_a p_a x \le c_I p_I q \le \frac12$, so combining the induction hypothesis with (i) yields the claim.
(iii) By monotonicity of $f_I$ we may pass to $q' := \min\big(q, \frac{1}{2 c_I p_I}\big)$. By applying (ii) and using $c_I p_I \ge 4$ we get
$$f_I(q) \;\ge\; f_I(q') \;\ge\; c_I p_I q'\,(1 - c_I p_I q') \;\ge\; \tfrac12 \min\big(c_I p_I q, \tfrac12\big) \;\ge\; \tfrac12 \min\big(4 q^*, \tfrac12\big) \;\ge\; q^*. \qquad ◻$$
The proof of Lemma 9 is now immediate:
Proof of Lemma 9.
Fix $i$ and write $n - i = mk + r$ with $0 \le r < k$. Starting from $q_n = 1 \ge q^*$ and applying Lemma 10(iii) $m$ times to consecutive blocks of $k$ indices gives $q_{i+r} \ge q^*$. For the remaining block $I$ of $r < k$ indices the assumption of Theorem 6 still gives $c_I p_I \ge 2^{\varepsilon r - 1} \ge \frac12$, so the computation from Lemma 10(iii) yields $q_i \ge \frac12 \min\big(\frac{q^*}{2}, \frac12\big) = q^*/4 = \hat q = \Omega(\varepsilon)$. ◻
4 Analysis of Full Consensus
Full Consensus performs a DFS in a tree as depicted in Figure 1 in much the same way as simplified Consensus, with two differences. First, the root node only has $2^w$ children (not infinitely many) and the probability that an unbroken path exists is therefore strictly less than $1$. Secondly, when examining the outgoing edge of index $a_i$ of some node in layer $i-1$, then its status is no longer linked to $X_i(C(a_1, \dots, a_i))$ for $C(a_1, \dots, a_i) \ge 2^w$. Instead $C(a_1, \dots, a_i)$ is divided as $C(a_1, \dots, a_i) = u \cdot 2^w + r$ where $r \in [0, 2^w)$ and $X_i(r)$ is used. To prepare for the proof of Theorem 7, we present a coupling to, and a claim about simplified Consensus.
A Coupling between Simplified and Full Consensus.
Consider the following natural coupling. Let $X_i(s)$ for $i \in [n]$ and $s \in \mathbb{N}_0$ be the random variables underlying simplified Consensus and $X'_i(r)$ for $i \in [n]$ and $r \in [0, 2^w)$ the random variables underlying full Consensus. The coupling should ensure that $X'_i(s \bmod 2^w) = X_i(s)$ for all pairs $(i, s)$ such that simplified Consensus inspects $X_i(s)$ and does not at an earlier time inspect $X_i(s')$ with $s' \equiv s \pmod{2^w}$. In other words, for any pair $(i, r)$, if simplified Consensus inspects at least one variable $X_i(s)$ with $s \equiv r \pmod{2^w}$ then $X'_i(r)$ equals the earliest such variable that is inspected.
This implies that the runs of simplified and full Consensus behave identically as long as, firstly, $a_1$ never reaches $2^w$ and, secondly, simplified Consensus never repeats a suffix, by which we mean inspecting $X_i(s)$ for some pair $(i, s)$ such that $X_i(s')$ with $s' \equiv s \pmod{2^w}$ and $s' \ne s$ was previously inspected. In the following, by a step of Consensus we mean one iteration of its main while-loop, which involves inspecting exactly one random variable.
Claim 11.
Simplified Consensus repeats a suffix in step $t$ with probability at most $t/2^w$.
Proof.
Assume simplified Consensus is about to inspect $X_i(s)$ in step $t$. Call $r \in [0, 2^w)$ a blocked suffix if there exists $s'$ such that $s' \equiv r \pmod{2^w}$ and $X_i(s')$ was previously inspected. This necessitates $s' \ne s$. Clearly, step $t$ repeats a suffix if $s \bmod 2^w$ is blocked. A crucial observation is that the DFS has exhaustively explored all branches of the tree associated with any prefix less than the current one and the order in which this exploration has been carried out is no longer relevant. The symmetry between all nodes of the same layer therefore implies that every suffix is blocked with the same probability. Since at most $t - 1$ suffixes can be blocked at step $t$, the probability that $s \bmod 2^w$ is blocked (given our knowledge of $t$) is at most $(t-1)/2^w \le t/2^w$. ◻
Proof of Theorem 7.
First note that the choice of $(\ell_i)_{i \in [n]}$ made in Theorem 7 amounts to a valid choice of $c_i = 2^{\ell_i}$ for applying Theorem 6. Indeed, for each $i \in [n]$ there exists some $\delta_i \in [0,1)$ related to rounding up such that
$$\sum_{j \le i} \ell_j = \sum_{j \le i} \big(\log_2(1/p_j) + \varepsilon\big) + \delta_i, \quad \text{hence} \quad \Big|\sum_{j=a}^{b} \log_2(c_j p_j) - \varepsilon(b - a + 1)\Big| = |\delta_b - \delta_{a-1}| \le 1$$
as required. The number $N$ of steps taken by simplified Consensus satisfies $\mathbb{E}[N] = \sum_{i \in [n]} \mathbb{E}[T_i] = \mathcal{O}\big(\sum_{i \in [n]} \frac{1}{\varepsilon p_i}\big)$. By Markov's inequality $N$ exceeds its expectation by a factor of $n^{\alpha}$ with probability at most $n^{-\alpha}$, so with probability at least $1 - n^{-\alpha}$ simplified Consensus terminates within $N_{\max} := n^{\alpha}\, \mathbb{E}[N]$ steps.
By Claim 11 and a union bound, the probability that simplified Consensus repeats a suffix within its first $N_{\max}$ steps is at most $N_{\max}^2/2^w$. By choosing $w$ as in the theorem this probability is $\mathcal{O}(n^{-\alpha})$. This choice also guarantees that $a_1$ never exceeds $2^w$ during the first $N_{\max}$ steps. Let now $G$ be the event that simplified Consensus terminates within $N_{\max}$ steps without repeating a suffix. By a union bound $\Pr[G] \ge 1 - \mathcal{O}(n^{-\alpha})$ and by our coupling $G$ implies that simplified and full Consensus explore the same sequence of nodes in their DFS and terminate with the same sequence $(a_1, \dots, a_n)$ with $a_1 < 2^w$. In particular full Consensus succeeds in that case as claimed.
The updated bound on $|B|$ simply reflects that $a_1$, which in simplified Consensus had expected encoding length $\log_2(1/p_1) + \log_2(1/\varepsilon) + \mathcal{O}(1)$, now has fixed length $w$. Lastly, if $T_i$ and $T'_i$ denote the numbers of Bernoulli trials from the $i$-th family inspected by simplified and full Consensus, respectively, then conditioned on $G$ we have $T'_i = T_i$, so
$$\mathbb{E}[T'_i \mid G] \;\le\; \frac{\mathbb{E}[T_i]}{\Pr[G]} \;=\; \mathcal{O}(1/(\varepsilon p_i)). \qquad ◻$$
5 Application to Minimal Perfect Hashing
In this section, we apply the Consensus idea to the area of minimal perfect hash functions (MPHFs). Recall from Section 1.3 that an MPHF for a given set $S$ of $n$ keys maps each key in $S$ to a unique integer from $[n]$.
5.1 Existing Strategies for Constructing MPHFs
We begin with a brief overview of established approaches with a focus on those we need. Detailed explanations are found in [18, 19].
Partitioning.
Most approaches randomly partition the input into small buckets using a hash function and then construct an MPHF on each bucket separately. The natural way to obtain an MPHF on the entire input involves storing the bucket-MPHFs and prefix sums for the bucket sizes. Storing the prefix sums can be avoided when the partitioning uses a $k$-perfect hash function, as is done in ShockHash-Flat [21], and as we do in Section 5.3.
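To make the composition concrete, here is a minimal sketch (ours; the member names are hypothetical) of how a global MPHF value is assembled from a partitioning hash function, prefix sums, and per-bucket MPHFs.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the standard partitioning composition: the global value of a
// key is its bucket's prefix sum plus the in-bucket MPHF value.
struct PartitionedMphf {
    std::vector<uint64_t> prefix;            // prefix[b] = total size of buckets 0..b-1
    uint64_t (*bucketOf)(uint64_t key);      // the partitioning hash function
    uint64_t (*bucketEval)(uint64_t b, uint64_t key);  // MPHF of bucket b (hypothetical hook)

    uint64_t operator()(uint64_t key) const {
        uint64_t b = bucketOf(key);
        return prefix[b] + bucketEval(b, key);  // collision-free across buckets
    }
};
```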
Hagerup and Tholey [12] demonstrate that partitioning into tiny buckets of size $\mathcal{O}(\log n/\log\log n)$, when suitably combined with exhaustive tabulation of MPHFs on tiny inputs, yields an approach with construction time $\mathcal{O}(n)$, query time $\mathcal{O}(1)$ and space overhead $\mathcal{O}(n (\log\log n)^2/\log n)$ bits. Unfortunately the analysis requires astronomical values of $n$ [4] and is restricted to these rather large $n$.
Below we explain monolithic variants of MPHFs, i.e. constructions not using partitioning, which are typically combined with partitioning to obtain faster bucketed variants.
Recursive Splitting.
A recursive splitting strategy [7] involves a predefined splitting tree, which is a rooted tree with $n$ leaves.⁹ (⁹ In the original paper [7], the lowest level is suppressed, with correspondingly changed terminology.) See Figure 4 (left) for an example. An MPHF for an input set of size $n$ is given by a family of functions containing for each internal node $v$ of the tree a splitting hash function $f_v$ that maps keys to the children of $v$. Together, the family must describe a recursive partitioning of the input set into parts of size $1$. The MPHF is queried for a key $x$ by starting at the root. The splitting hash functions are evaluated on $x$ and the resulting path is followed until a leaf is reached, the index of which is returned. Each $f_v$ is identified by a seed that is found using trial and error. In the original paper [7] seeds are stored using Golomb-Rice coding [11, 31]. To keep the space overhead low, the last layer of the splitting tree uses $d$-way splits for relatively large $d$ (5-16 in practice), which makes splitting hash functions costly to find.
Multiple-Choice Hashing.
We randomly assign a set of candidate positions $\{h_1(x), \dots, h_d(x)\}$ to each key $x$, hoping that a choice $\sigma(x) \in [d]$ can be made for each key such that $x \mapsto h_{\sigma(x)}(x)$ is injective. A perfect hash function is then given by a retrieval data structure that stores $\sigma$ [4, 10, 20]. A recent variant ShockHash [22, 21] achieves record space efficiency by using the aggressive configuration $d = 2$ and range size $n$ (combined with partitioning and recursive splitting). Many retries are needed until $\sigma$ exists.
Bucket Placement.
A general strategy with many variants assigns a random bucket to each key and considers for each bucket several (hash) functions with range $[n]$. A choice $\sigma(b)$ is stored for each bucket $b$ such that the resulting combined function is a bijection. Typically, the values $\sigma(b)$ are chosen greedily in decreasing order of bucket sizes [30, 14, 1], but backtracking has also been considered [33, 8] (in a different sense than in Consensus).
5.2 Monolithic Consensus-RecSplit
We now describe a variant of recursive splitting in an idealised setting with $n = 2^L$ that uses Consensus to be particularly space efficient. The idea is simple. The data structure consists of $n - 1$ two-way splitting hash functions, i.e. hash functions with range $\{0, 1\}$, arranged in a balanced binary tree with $n$ leaves, see Figure 4 (right).
This means that during construction for a key set $S$ we have to select the splitting hash functions from top to bottom such that each function splits the set of keys arriving at its node perfectly in half. The data structure is entirely given by a sequence of seeds that identify the splitting hash functions. If we search and encode these seeds using the Consensus algorithm, we obtain the following result.
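Testing a candidate seed is straightforward; the following sketch (ours, with a hypothetical stand-in hash) checks whether a seed splits the keys at a node perfectly in half, which is the success event analysed below.

```cpp
#include <cstdint>
#include <vector>

// hashBit(key, s) stands in for a splitting hash function with range {0, 1};
// the names are our own, not from the paper's implementation.
static bool hashBit(uint64_t key, uint64_t s) {
    uint64_t z = key ^ (s * 0x9e3779b97f4a7c15ULL);
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return (z ^ (z >> 31)) & 1;
}

// A seed is successful iff exactly half of the keys hash to 0.
bool isPerfectSplit(const std::vector<uint64_t>& keys, uint64_t s) {
    size_t zeros = 0;
    for (uint64_t k : keys) zeros += !hashBit(k, s);
    return zeros == keys.size() / 2;
}
```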
Theorem 12 (Monolithic Consensus-RecSplit).
Let $n = 2^L$ for $L \in \mathbb{N}$ and $\varepsilon \in (0,1)$. Monolithic Consensus-RecSplit is an MPHF data structure with space requirement $\log_2(e)n + \mathcal{O}(\varepsilon n + \log^2 n)$ bits, expected construction time $\mathcal{O}(n/\varepsilon + n^{3/2})$ and query time $\mathcal{O}(\log n)$.
Proof.
We use a separate Consensus data structure per level of the splitting tree and construct them from top to bottom. Consider level $j \in \{0, \dots, L-1\}$ of the tree, which contains $2^j$ nodes responsible for $m := 2^{L-j}$ keys each. The probability that a given seed for a splitting hash function amounts to a perfect split of $m$ keys is $\binom{m}{m/2} 2^{-m}$ by a simple counting argument. A standard approximation gives $p_j = \Theta(2^{(j-L)/2})$.
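Spelled out (our derivation via Stirling's approximation): each of the $2^m$ colourings of the keys is equally likely and exactly $\binom{m}{m/2}$ of them are perfect splits, so

$$p_j \;=\; \binom{m}{m/2}\, 2^{-m} \;=\; \sqrt{\frac{2}{\pi m}}\,\big(1 + \mathcal{O}(1/m)\big) \;=\; \Theta\big(2^{(j-L)/2}\big).$$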
By applying Theorem 7 with $2^j$ seeds, success probability $p_j$ and overhead parameter $\varepsilon_j := \min\big(\varepsilon \cdot 2^{3(L-j-1)/4}, 1\big)$ we obtain a data structure encoding $2^j$ successful seeds. The expected number of seeds inspected is $\mathcal{O}(1/(\varepsilon_j p_j))$ for each node, which corresponds to expected work of $\mathcal{O}(2^{L-j}/(\varepsilon_j p_j))$ per node when seeds are tested by hashing all relevant keys.¹⁰ (¹⁰ Dividing the work per node by $2^{L-j}$ gives work $\mathcal{O}(1/(\varepsilon_j p_j))$ per key. Our choice of $\varepsilon_j$ (as long as the minimum does not kick in) is designed to ensure that using extra bits amounts to roughly the same reduction of work in each level, such that we have a consistent trade-off between time and space.) For the total expected work in level $j$ there are, up to constant factors, the following two options:
$$\text{if } \varepsilon_j < 1 \text{ then} \quad 2^j \cdot \frac{2^{L-j}}{\varepsilon_j p_j} = \Theta\Big(\frac{n}{\varepsilon} \cdot 2^{-(L-j)/4}\Big), \qquad \text{if } \varepsilon_j = 1 \text{ then} \quad 2^j \cdot \frac{2^{L-j}}{p_j} = \Theta\big(n \cdot 2^{(L-j)/2}\big).$$
The formula for the first case is maximised for $j = L - 1$ giving $\mathcal{O}(n/\varepsilon)$. The formula for the second case is maximised for $j = 0$ giving $\mathcal{O}(n^{3/2})$. Since both describe geometric series we get a total construction time of $\mathcal{O}(n/\varepsilon + n^{3/2})$ as desired.
The space requirement for level $j$ is $2^j\big(\log_2(1/p_j) + \varepsilon_j\big) + \mathcal{O}(\log n)$ bits. To bound the sum we look at the terms $2^j \log_2(1/p_j)$ and $2^j \varepsilon_j$ separately. For the first, we will use that $\prod_{j=0}^{L-1} p_j^{2^j} = n!/n^n$, which can be checked by induction on $L$: the case $L = 1$ is clear. For $L > 1$, splitting off the root, we have
$$\prod_{j=0}^{L-1} p_j^{2^j} = \binom{n}{n/2} 2^{-n} \cdot \left(\frac{(n/2)!}{(n/2)^{n/2}}\right)^{2} = \frac{n!}{(n/2)!^2\, 2^n} \cdot \frac{(n/2)!^2}{(n/2)^{n}} = \frac{n!}{n^n}.$$
The leading terms in the space bound therefore add up to
$$\sum_{j=0}^{L-1} 2^j \log_2(1/p_j) = \log_2\frac{n^n}{n!} \le \log_2(e)\, n.$$
The overhead terms $2^j \varepsilon_j$ are geometrically increasing in $j$. This is because
$$\frac{2^{j+1} \varepsilon_{j+1}}{2^{j} \varepsilon_{j}} = 2 \cdot 2^{-3/4} = 2^{1/4} > 1$$
as long as the minimum in the definition of $\varepsilon_j$ does not kick in (and the ratio is at least $2^{1/4}$ in all cases). The sum of the overhead terms is therefore dominated by the last term giving $\mathcal{O}(2^{L-1}\varepsilon_{L-1}) = \mathcal{O}(\varepsilon n)$. Overall we get, as claimed,
$$\log_2(e)\, n + \mathcal{O}(\varepsilon n) + \mathcal{O}(\log^2 n).$$
A query needs to extract one seed from each of the $L = \log_2 n$ Consensus data structures. Since each level uses uniform values for $p_j$ and $\varepsilon_j$, the offset of a seed fragment is essentially the product $i\big(\log_2(1/p_j) + \varepsilon_j\big)$, which is easy to compute in constant time. This is sufficient to find each seed in constant time, so we get a query time of $\mathcal{O}(\log n)$ in total. ◻
5.3 Bucketed Consensus-RecSplit
We now replace the upper layers of Monolithic Consensus-RecSplit with a minimal $k$-perfect hash function [13]. This reduces query time as there are fewer layers to traverse and reduces construction time because the splits in the top layers are the most expensive ones to compute. Most importantly, it enables efficiently handling input sizes that are not powers of two.
For any $k \in \mathbb{N}$, a minimal $k$-perfect hash function for a set $S$ of $n$ keys is a data structure mapping the elements of $S$ to buckets indexed with $1, \dots, \lceil n/k \rceil$ such that each bucket receives exactly $k$ keys. If $k$ does not divide $n$ then bucket $\lceil n/k \rceil$ receives between $1$ and $k - 1$ leftover keys. It has long been known that the space lower bound for such a data structure is $\Theta(n \log(k)/k)$ bits [1], shown to be $(1 - o(1))\, \frac{\log_2\sqrt{2\pi k}}{k}\, n$ in [16]. In the full version of this paper [24] we outline a compact minimal $k$-perfect hash function with, in expectation, $\mathcal{O}(n \log(k)/k)$ bits of space, constant query time, and linear construction time.
We call our resulting MPHF Bucketed Consensus-RecSplit. Its performance is as follows (previously stated in Section 1.3).
Theorem 5 (Bucketed Consensus-RecSplit). [Restated, see original statement.]
For any $\varepsilon \in (0,1)$ there is an MPHF data structure with space requirement $\log_2(e)n + \mathcal{O}(\varepsilon n)$ bits, expected construction time $\mathcal{O}(n/\varepsilon)$ and expected query time $\mathcal{O}(\log(1/\varepsilon))$.
Proof.
The idea is to use our monolithic construction but replace the top-most layers with a $k$-perfect hash function. More precisely, we choose $k := 2^{\lceil \frac{4}{3}\log_2(1/\varepsilon) \rceil}$ and construct a minimal $k$-perfect hash function $F$ for the input set. As discussed in the full version, the construction time of $F$ is $\mathcal{O}(n)$, the expected query time is $\mathcal{O}(1)$ and the space consumption is $\mathcal{O}(n \log(k)/k) = \mathcal{O}(\varepsilon n)$ bits, all well within the respective budgets. The resulting buckets of $k$ keys each (a power of $2$) are recursively split like in our monolithic data structure, which takes at most $\log_2(e)n + \mathcal{O}(\varepsilon n)$ bits of space (the additive $\mathcal{O}(\log n \log k)$ term is dominated by $\varepsilon n$). Since we make no assumptions on $n$ there might be up to $k - 1$ leftover keys assigned to an extra bucket. For these we construct a separate MPHF $F'$ with a range covering the remaining $n \bmod k$ output values. Since the extra bucket contains $\mathcal{O}(1/\varepsilon^{4/3})$ keys, pretty much any compact (not necessarily succinct) MPHF is good enough here, provided it supports queries in logarithmic time. Queries evaluate $F$ first and, if mapped to a regular bucket, evaluate a sequence of splits, or, if mapped to the extra bucket, evaluate $F'$. This takes time $\mathcal{O}(\log k) = \mathcal{O}(\log(1/\varepsilon))$ in the worst case.
Concerning construction time, note that the top-most level in use has nodes of size $k$ and the definition of $\varepsilon_j$ we made in Theorem 12 simplifies to $\varepsilon_j = \varepsilon \cdot 2^{3(\log_2(k) - j - 1)/4}$ without the "$\min$". By an argument in Theorem 12, the construction time is then dominated by the bottom layer and bounded by $\mathcal{O}(n/\varepsilon)$ as claimed. ◻
5.4 Experiments
To demonstrate that Consensus-RecSplit is viable in practice, we give an implementation and compare it to other MPHF constructions from the literature.¹¹ (¹¹ We run single-threaded experiments on an Intel i7 11700 processor with a base clock speed of 2.5 GHz. The sizes of the L1 and L2 data caches are 48 KiB and 512 KiB per core, and the L3 cache has a size of 16 MiB. The page size is 4 KiB. We use the GNU C++ compiler version 11.2.0 with optimization flags -O3 -march=native.) Like previous papers [19, 21, 22, 14, 3], we use 100 million random string keys of uniform random length as input. The source code is available on GitHub [17, 23]. We only include the most space-efficient previous approaches with space consumption below 1.6 bits per key and refer to [18, 19] for a detailed comparison of state-of-the-art perfect hashing approaches with larger space consumption. Our implementation departs from the algorithmic description in Section 5.3 by using the threshold-based $k$-perfect hash function of [21, 13]. We postpone further tuning and parallelisation efforts to a future paper.
Figure 5 shows the trade-off between space consumption and construction time of Consensus-RecSplit. In the full version [24] we also give a table. The performance of all previous approaches degrades abruptly for smaller overheads because of their exponential dependency on $1/\varepsilon$, while Consensus-RecSplit shows an approximately linear dependence on $1/\varepsilon$ as predicted in Theorem 5. The space lower bound for MPHFs is $\log_2(e) \approx 1.443$ bits per key. The previously most space-efficient approach, bipartite ShockHash-RS [21], achieves 1.489 bits per key. In about 15% of the construction time, Consensus-RecSplit achieves a space consumption of just 1.444 bits per key. At a space consumption of 1.489 bits per key, Consensus-RecSplit is more than 350 times faster to construct than the previous state of the art.
Queries take 150–250 ns depending on $\varepsilon$ and therefore the depth of the splitting tree (see full version), so their performance is not too far from compact configurations of RecSplit (100 ns) and bipartite ShockHash-RS (125 ns). Future work can improve on the query performance by storing the seeds needed for each query closer to each other. We can adapt bucketed Consensus-RecSplit by constructing one Consensus data structure storing seeds bucket by bucket instead of an independent data structure for each layer. Preliminary experiments show up to 30% faster queries with moderately slower construction at the same space consumption. Splittings higher up in the tree not only have a smaller success probability, but are also more costly to test. We are therefore incentivised to select a different space overhead parameter for each splitting within the same Consensus data structure. We leave this generalisation of Consensus to non-uniform $\varepsilon$ for future work.
6 Conclusion
This paper concerns data structures that store sequences of successful seeds, i.e. values known to influence pseudo-random processes in desirable ways. Normally these seeds are first found independently of each other using trial and error and then encoded separately. In this paper we show that, perhaps surprisingly, this wastes $\Theta(1)$ bits per seed compared to a lower bound.
We present the Consensus approach, which combines the search and encoding of successful seeds and reduces the space overhead to $\varepsilon$ bits per seed at the price of increasing the required work by a factor of $\mathcal{O}(1/\varepsilon)$.
To demonstrate the merits of our ideas in practice, we apply them to the construction of minimal perfect hash functions. Consensus-RecSplit is the first MPHF to achieve a construction time that depends only linearly on the inverse space overhead $1/\varepsilon$. Despite using the same recursive splitting idea as [7], the space overhead is, for the same construction throughput, up to two orders of magnitude smaller in practice.
Future Work.
Given our promising results, we intend to continue working on Consensus-RecSplit, in particular looking into parallelisation and improving query times. We also believe that Consensus can be applied to other randomised data structures, including but not limited to other perfect or $k$-perfect hash functions, retrieval data structures and AMQ filters. We give a first result on $k$-perfect hash functions using Consensus in [13].
References
- [1] Djamal Belazzougui, Fabiano C. Botelho, and Martin Dietzfelbinger. Hash, displace, and compress. In ESA, volume 5757 of Lecture Notes in Computer Science, pages 682–693. Springer, 2009. doi:10.1007/978-3-642-04128-0_61.
- [2] Djamal Belazzougui and Rossano Venturini. Compressed static functions with applications. In SODA, pages 229–240. SIAM, 2013. doi:10.1137/1.9781611973105.17.
- [3] Dominik Bez, Florian Kurpicz, Hans-Peter Lehmann, and Peter Sanders. High performance construction of RecSplit based minimal perfect hash functions. In ESA, volume 274 of LIPIcs, pages 19:1–19:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPICS.ESA.2023.19.
- [4] Fabiano C. Botelho, Rasmus Pagh, and Nivio Ziviani. Practical perfect hashing in nearly optimal space. Inf. Syst., 38(1):108–131, 2013. doi:10.1016/J.IS.2012.06.002.
- [5] Martin Dietzfelbinger and Friedhelm Meyer auf der Heide. A new universal class of hash functions and dynamic hashing in real time. In ICALP, volume 443 of Lecture Notes in Computer Science, pages 6–19. Springer, 1990. doi:10.1007/BFB0032018.
- [6] Martin Dietzfelbinger and Michael Rink. Applications of a splitting trick. In ICALP (1), volume 5555 of Lecture Notes in Computer Science, pages 354–365. Springer, 2009. doi:10.1007/978-3-642-02927-1_30.
- [7] Emmanuel Esposito, Thomas Mueller Graf, and Sebastiano Vigna. RecSplit: Minimal perfect hashing via recursive splitting. In ALENEX, pages 175–185. SIAM, 2020. doi:10.1137/1.9781611976007.14.
- [8] Edward A. Fox, Lenwood S. Heath, Qi Fan Chen, and Amjad M. Daoud. Practical minimal perfect hash functions for large databases. Commun. ACM, 35(1):105–121, 1992. doi:10.1145/129617.129623.
- [9] Michael L. Fredman and J’anos Koml’os. On the size of separating systems and families of perfect hash functions. SIAM J. Algebraic Discrete Methods, 5(1):61–68, 1984. doi:10.1137/0605009.
- [10] Marco Genuzio, Giuseppe Ottaviano, and Sebastiano Vigna. Fast scalable construction of (minimal perfect hash) functions. In SEA, volume 9685 of Lecture Notes in Computer Science, pages 339–352. Springer, 2016. doi:10.1007/978-3-319-38851-9_23.
- [11] Solomon W. Golomb. Run-length encodings. IEEE Trans. Inf. Theory, 12(3):399–401, 1966. doi:10.1109/TIT.1966.1053907.
- [12] Torben Hagerup and Torsten Tholey. Efficient minimal perfect hashing in nearly minimal space. In STACS, volume 2010 of Lecture Notes in Computer Science, pages 317–326. Springer, 2001. doi:10.1007/3-540-44693-1_28.
- [13] Stefan Hermann, Sebastian Kirmayer, Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. Engineering minimal k-perfect hash functions. In ESA, LIPIcs. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2025.
- [14] Stefan Hermann, Hans-Peter Lehmann, Giulio Ermanno Pibiri, Peter Sanders, and Stefan Walzer. PHOBIC: perfect hashing with optimized bucket sizes and interleaved coding. In ESA, volume 308 of LIPIcs, pages 69:1–69:17. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ESA.2024.69.
- [15] Johan Ludwig William Valdemar Jensen. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta mathematica, 30(1):175–193, 1906. doi:10.1007/BF02418571.
- [16] Florian Kurpicz, Hans-Peter Lehmann, and Peter Sanders. PaCHash: Packed and compressed hash tables. In ALENEX, pages 162–175. SIAM, 2023. doi:10.1137/1.9781611977561.CH14.
- [17] Hans-Peter Lehmann. ByteHamster/MPHF-Experiments. Software, swhId: swh:1:dir:7b02eda8ffa5a9d61a086ff2fc34fb0fbdc3eff4 (visited on 2025-07-16). URL: https://github.com/ByteHamster/MPHF-Experiments, doi:10.4230/artifacts.23790.
- [18] Hans-Peter Lehmann. Fast and Space-Efficient Perfect Hashing. PhD thesis, Karlsruhe Institute of Technology, Germany, 2024. doi:10.5445/IR/1000176432.
- [19] Hans-Peter Lehmann, Thomas Mueller, Rasmus Pagh, Giulio Ermanno Pibiri, Peter Sanders, Sebastiano Vigna, and Stefan Walzer. Modern minimal perfect hashing: A survey. CoRR, abs/2506.06536, 2025. doi:10.48550/arXiv.2506.06536.
- [20] Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. SicHash - small irregular cuckoo tables for perfect hashing. In ALENEX, pages 176–189. SIAM, 2023. doi:10.1137/1.9781611977561.CH15.
- [21] Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. ShockHash: Near optimal-space minimal perfect hashing beyond brute-force. CoRR, abs/2310.14959, 2024. doi:10.48550/arXiv.2310.14959.
- [22] Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. ShockHash: Towards optimal-space minimal perfect hashing beyond brute-force. In ALENEX, pages 194–206. SIAM, 2024. doi:10.1137/1.9781611977929.15.
- [23] Hans-Peter Lehmann, Peter Sanders, Stefan Walzer, and Jonatan Ziegler. ByteHamster/ConsensusRecSplit. Software, swhId: swh:1:dir:eeef738e12bea287eae54a77ce81241587a91767 (visited on 2025-07-16). URL: https://github.com/ByteHamster/ConsensusRecSplit, doi:10.4230/artifacts.23789.
- [24] Hans-Peter Lehmann, Peter Sanders, Stefan Walzer, and Jonatan Ziegler. Combined search and encoding for seeds, with an application to minimal perfect hashing. CoRR, abs/2502.05613, 2025. doi:10.48550/arXiv.2502.05613.
- [25] Harry G. Mairson. The program complexity of searching a table. In FOCS, pages 40–47. IEEE, 1983. doi:10.1109/SFCS.1983.76.
- [26] Kurt Mehlhorn. On the program size of perfect and universal hash functions. In FOCS, pages 170–175. IEEE, 1982. doi:10.1109/SFCS.1982.80.
- [27] Michael Mitzenmacher and Eli Upfal. Probability and computing: Randomization and probabilistic techniques in algorithms and data analysis. Cambridge university press, 2017.
- [28] Anna Pagh and Rasmus Pagh. Uniform hashing in constant time and optimal space. SIAM J. Comput., 38(1):85–96, 2008. doi:10.1137/060658400.
- [29] Anna Pagh, Rasmus Pagh, and Milan Ruzic. Linear probing with constant independence. In STOC, pages 318–327. ACM, 2007. doi:10.1145/1250790.1250839.
- [30] Giulio Ermanno Pibiri and Roberto Trani. PTHash: Revisiting FCH minimal perfect hashing. In SIGIR, pages 1339–1348. ACM, 2021. doi:10.1145/3404835.3462849.
- [31] Robert F. Rice. Some practical universal noiseless coding techniques. Jet Propulsion Laboratory, JPL Publication, 1979.
- [32] Ian H Witten, Alistair Moffat, and Timothy C Bell. Managing gigabytes: compressing and indexing documents and images. Morgan Kaufmann, 1999.
- [33] Wei-Pang Yang and M. W. Du. A backtracking method for constructing perfect hash functions from a set of mapping functions. BIT, 25(1):148–164, 1985. doi:10.1007/BF01934995.
- [34] Jonatan Ziegler. Compacting minimal perfect hashing using symbiotic random search. Bachelor’s thesis, Karlsruhe Institute for Technology (KIT), 2025. doi:10.5445/IR/1000177814.
