
Combined Search and Encoding for Seeds,
with an Application to Minimal Perfect Hashing

Hans-Peter Lehmann, Karlsruhe Institute of Technology, Germany · Peter Sanders, Karlsruhe Institute of Technology, Germany · Stefan Walzer, Karlsruhe Institute of Technology, Germany · Jonatan Ziegler, Karlsruhe Institute of Technology, Germany
Abstract

Randomised algorithms often employ methods that can fail and that are retried with independent randomness until they succeed. Randomised data structures therefore often store indices of successful attempts, called seeds. If $n$ such seeds are required (e.g., for independent substructures) the standard approach is to compute for each $i \in [n]$ the smallest successful seed $S_i$ and store $S = (S_1, \dots, S_n)$.

The central observation of this paper is that this is not space-optimal. We present a different algorithm that computes a sequence $S' = (S_1', \dots, S_n')$ of successful seeds such that the entropy of $S'$ undercuts the entropy of $S$ by $\Omega(n)$ bits in most cases. To achieve a memory consumption of $\mathrm{OPT} + \varepsilon n$, the expected number of inspected seeds increases by a factor of $\mathcal{O}(1/\varepsilon)$.

We demonstrate the usefulness of our findings with a novel construction for minimal perfect hash functions that, for $n$ keys and any $\varepsilon \in [n^{-3/7}, 1]$, has space requirement $(1+\varepsilon)\,\mathrm{OPT}$ and construction time $\mathcal{O}(n/\varepsilon)$. All previous approaches only support $\varepsilon = \omega(1/\log n)$ or have construction times that increase exponentially with $1/\varepsilon$. Our implementation beats the construction throughput of the state of the art by more than two orders of magnitude for $\varepsilon \le 3\%$.

Keywords and phrases:
Random Seed, Encoding, Bernoulli Process, Backtracking, Perfect Hashing
Copyright and License:
© Hans-Peter Lehmann, Peter Sanders, Stefan Walzer, and Jonatan Ziegler; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Randomness, geometry and discrete structures; Theory of computation → Data compression; Theory of computation → Data structures design and analysis
Related Version:
Extended Version: https://arxiv.org/abs/2502.05613 [24]
Supplementary Material:
Software  (Source code): https://github.com/ByteHamster/ConsensusRecSplit [23]
archived at Software Heritage: swh:1:dir:eeef738e12bea287eae54a77ce81241587a91767
Acknowledgements:
This paper is based on the bachelor’s thesis of Jonatan Ziegler [34].
Funding:
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 882500). This work was also supported by funding from the pilot program Core Informatics at KIT (KiKIT) of the Helmholtz Association (HGF).
Editors:
Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman

1 Introduction

The construction of randomised data structures can sometimes fail, so the data structures need to store indices of successful attempts, called seeds.¹ In this paper, we present a technique that finds and encodes successful seeds in a space-efficient and time-efficient way.

¹ Throughout this paper, we use the Simple Uniform Hashing Assumption, as do many works before us [5, 29, 28, 22, 21, 14, 7]. This is equivalent to saying that we assume access to an infinite sequence of uniformly random and independent hash functions, each identified by its index (seed). The assumption is an adequate model for practical hash functions and can be justified in theory using the "split-and-share" trick [6].

When searching for a single seed, we may imagine an infinite sequence of i.i.d. Bernoulli random variables, called a Bernoulli process, that indicate which natural numbers lead to success and that can be inspected one by one. To model the task of finding a sequence of seeds we use a sequence of Bernoulli processes as follows. In this paper $[n] \coloneqq \{1,\dots,n\}$ and $0 \in \mathbb{N}$.

Definition 1 (Seed Search and Encoding Problem (SSEP)).

Let $n \in \mathbb{N}$ and $p_1, \dots, p_n \in (0,1]$. Given for each $i \in [n]$ a Bernoulli process $B^{(i)} = (B^{(i)}_j)_{j \in \mathbb{N}}$ with parameter $p_i$, the task is to craft a bitstring $M$ that encodes for each $i \in [n]$ a successful seed $S_i$, i.e. $B^{(i)}_{S_i} = 1$.

Our primary goals are to minimise the length of $M$ in bits, and to minimise for each $i \in [n]$ the number $T_i$ of inspected Bernoulli random variables from $B^{(i)}$. Moreover, the time to decode $S_i$ given $p_1, \dots, p_n$, $M$ and $i \in [n]$ should be constant. The optimal values for the expectation of $|M|$ and $T_i$ are the following (for a proof see the full version of this paper [24]).

$M^{\mathrm{OPT}} = \sum_{i=1}^{n} \log(1/p_i)$ and $T_i^{\mathrm{OPT}} = 1/p_i$ for $i \in [n]$. (1)

1.1 Ad-Hoc Solutions

We briefly consider two natural strategies for solving the SSEP to build intuition. The obvious strategy (MIN) is to encode the smallest successful seeds, i.e. define

$S_i \coloneqq \min\{j \in \mathbb{N} \mid B^{(i)}_j = 1\}$ and $S^{\mathrm{MIN}} \coloneqq (S_1, \dots, S_n)$

and let $M$ be an optimal encoding of $S^{\mathrm{MIN}}$. Perhaps surprisingly, this requires $\Omega(n)$ bits more than the optimum if $p_1, \dots, p_n$ are bounded away from 1.

Lemma 2 (MIN is fast but not space-efficient, proof in full version [24]).

If $M$ encodes $S^{\mathrm{MIN}}$ then $\mathbb{E}[T_i] = T_i^{\mathrm{OPT}}$ for all $i \in [n]$ but $\mathbb{E}[|M|] \ge M^{\mathrm{OPT}} + \sum_{i=1}^{n} (1-p_i)$.

Another simple strategy (UNI) is to encode the smallest seed that is simultaneously successful for all Bernoulli processes, i.e. we define

$S \coloneqq \min\{j \in \mathbb{N} \mid \forall i \in [n] : B^{(i)}_j = 1\}$ and $S_i \coloneqq S$ for all $i \in [n]$.

This yields a bitstring $M$ of minimal length but requires total time $\sum_{i=1}^{n} T_i = \exp(\Omega(n))$ if $p_1, \dots, p_n$ are bounded away from 1.

Lemma 3 (UNI is space-efficient but slow, proof in full version [24]).

If $M$ encodes $S$ then $\mathbb{E}[|M|] = M^{\mathrm{OPT}} + \mathcal{O}(1)$ but $\mathbb{E}\big[\sum_{i=1}^{n} T_i\big] \ge \prod_{i=1}^{n} 1/p_i \gg \sum_{i=1}^{n} T_i^{\mathrm{OPT}}$.
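To make the two ad-hoc strategies concrete, here is a minimal Python sketch of both; the simulation of Bernoulli processes via `random.random()` and all helper names are our own illustration, not part of the paper's implementation.

```python
import random

def make_processes(p):
    """Simulated Bernoulli processes: B(i, j) is drawn lazily once, then fixed."""
    memo = {}
    def B(i, j):
        if (i, j) not in memo:
            memo[(i, j)] = random.random() < p[i]
        return memo[(i, j)]
    return B

def strategy_min(p):
    """MIN (Lemma 2): for each process, the smallest successful seed.
    Optimal search time, but encoding the result wastes Omega(n) bits."""
    B = make_processes(p)
    seeds = []
    for i in range(len(p)):
        j = 0
        while not B(i, j):
            j += 1
        seeds.append(j)
    return seeds

def strategy_uni(p):
    """UNI (Lemma 3): the smallest seed that succeeds for all processes at once.
    Near-optimal space, but the expected search time is prod(1/p_i)."""
    B = make_processes(p)
    j = 0
    while not all(B(i, j) for i in range(len(p))):
        j += 1
    return [j] * len(p)
```

Already for $p_i = \frac14$ and $n = 8$, `strategy_uni` needs about $4^8 \approx 65{,}000$ seed values in expectation, while `strategy_min` inspects only about $4n = 32$ Bernoulli variables in total.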

1.2 The Consensus Algorithm

We present an algorithm for the SSEP designed to simultaneously get close to both optima. We call it Combined Search and Encoding for Successful Seeds, or Consensus for short. We summarise the guarantees of the stronger of two variants here.

Theorem 4 (Main result, informal version of Theorem 7).

For any $\varepsilon > 0$ the full Consensus algorithm with high probability solves the SSEP with $|M| \le M^{\mathrm{OPT}} + \varepsilon n + \mathcal{O}(\log n)$ and $\mathbb{E}[T_i] = \mathcal{O}(T_i^{\mathrm{OPT}}/\varepsilon)$ for all $i \in [n]$. Each seed $S_i$ is a bitstring of fixed length found at a predetermined offset in $M$.

Note that if the geometric mean of $p_1, \dots, p_n$ is at most $\frac12$ then $M^{\mathrm{OPT}} \ge n$ and the leading term $\varepsilon n$ in our space overhead is at most $\varepsilon M^{\mathrm{OPT}}$.²

² It is possible to get $\mathbb{E}[|M|] \le M^{\mathrm{OPT}} + \varepsilon\min(M^{\mathrm{OPT}}, n) + \mathcal{O}(\log n)$ and $\mathbb{E}[T_i] = \mathcal{O}(T_i^{\mathrm{OPT}}/\varepsilon)$ in general. This requires work if many $p_i$ are close to 1 such that $M^{\mathrm{OPT}} = o(n)$. In that case we can "summarise" subsequences of $p_1, \dots, p_n$ using essentially the UNI strategy and then apply our main theorem to a shorter sequence $\tilde{p}_1, \dots, \tilde{p}_{n'}$ with $n' \le M^{\mathrm{OPT}} + \mathcal{O}(1)$. We decided against integrating this improvement into our arguments as it interferes with clarity.

Intuition.

First reconsider the MIN-strategy. It involves the sequence $S^{\mathrm{MIN}} = (S_1, \dots, S_n)$ with some entropy $H(S^{\mathrm{MIN}})$. There are two issues with storing $S^{\mathrm{MIN}}$. The superficial issue is that each number in the sequence follows a geometric distribution, and it is not obvious how to encode $S^{\mathrm{MIN}}$ in space close to $H(S^{\mathrm{MIN}})$ while preserving fast random access to each component.³ The deeper issue is that $S^{\mathrm{MIN}}$ is not what a space-optimal approach should encode in the first place. This is demonstrated by the fact that even an optimal encoding of $S^{\mathrm{MIN}}$ needs space $H(S^{\mathrm{MIN}})$, which UNI undercuts by $\Omega(n)$ bits.

³ The standard solution [32] using Golomb-Rice coding [11, 31] comes with $\Omega(n)$ bits of overhead.

Consensus deals with both issues simultaneously. The idea is to store for each $i \in [n]$ a natural number $1 \le \sigma_i \le 2^{\varepsilon}/p_i$. This allows for fixed width encoding of $\sigma_i$ using $\lceil \varepsilon + \log 1/p_i \rceil$ bits. If we simply used $S_i = \sigma_i$ then the expected number of successful choices for $\sigma_i$ would be $2^{\varepsilon} > 1$. The problem would be, of course, that the actual number of successful choices might be zero for many $i \in [n]$. So instead we interpret $\sigma_i$ as a seed fragment and define $S_i$ to be a concatenation of seed fragments up to and including $\sigma_i$. This way, if we run out of choices for $\sigma_i$, we can hope that a different successful choice for earlier seed fragments will allow for a successful choice for $\sigma_i$ as well. We can construct the sequence $(\sigma_1, \dots, \sigma_n)$ using backtracking. We explain our algorithm in detail in Section 2. The main technical challenge is to bound the additional cost this brings.

1.3 Motivation: Seed Sequences in Randomised Data Structures

Randomised data structures frequently use seeds to initialise pseudo-random number generators or as additional inputs to hash functions. Since a seed normally counts how often a construction has been restarted, a seed S implies that work Ω(S+1) has been carried out, meaning any reasonably fast algorithm yields seeds that are only a few bits to a few bytes in size. This does not, however, imply that seeds contribute only negligibly to overall memory consumption. That is because some data structures partition their input into many small parts called buckets and store a seed for each of these buckets. The seeds may even make up the majority of the stored data. This is the case for many constructions of perfect hash functions, but also occurs in a construction of compressed static functions [2], which, in turn, enable space-efficient solutions to the approximate membership problem and the relative membership problem. The way we phrased the SSEP implicitly assumes that buckets are handled independently. We discuss a slight generalisation in the full version [24].

Minimal Perfect Hash Functions.

As an application of Consensus we discuss the construction of minimal perfect hash functions (MPHFs). An MPHF on a set $X$ of keys maps these keys to the range $[|X|]$ without collisions. The function does not necessarily have to store $X$ explicitly. The space lower bound is $\mathrm{OPT}_{\mathrm{MPHF}} = |X|\log(e) - \mathcal{O}(\log|X|)$ bits assuming the universe of possible keys has size $\Omega(|X|^2)$ [1, 26, 25, 9]. While a succinct construction has long been known [12], its space overhead is fixed to $\varepsilon = \Theta(\frac{\log^2\log n}{\log n})$ and large in practice. Many recent practical approaches for constructing MPHFs involve a brute force search for seed values that map certain subsets of the keys in favourable ways [30, 7, 14, 22, 21]. Those that do, all combine aspects of the MIN-strategy (store smallest working seeds) and the UNI-strategy (let a single seed do lots of work) and achieve a space budget of $(1+\varepsilon)\cdot\mathrm{OPT}_{\mathrm{MPHF}}$ only in time $|X|\cdot\exp(\Omega(1/\varepsilon))$ (see Section 5.1). In this paper we break this barrier for the first time. While we give the details in Section 5, the clean version of our result is the following.

Theorem 5 (Bucketed Consensus-RecSplit).

For any $\varepsilon \in [n^{-3/7}, 1]$ there is an MPHF data structure with space requirement $(1+\mathcal{O}(\varepsilon))\,\mathrm{OPT}_{\mathrm{MPHF}}$, expected construction time $\mathcal{O}(n/\varepsilon)$ and expected query time $\mathcal{O}(\log 1/\varepsilon)$.

1.4 Overview of the Paper

The structure of the paper is as follows. In Section 2, we explain Consensus in detail and state the main theoretical results. In Sections 3 and 4 we then prove these results. We give an application of Consensus to minimal perfect hashing in Section 5 and also briefly evaluate our practical implementation. Finally, we conclude the paper in Section 6.

2 Combined Search and Encoding for Successful Seeds

In this section we present two variants of our Consensus algorithm. The first aims for conceptual clarity but produces seeds of length Ω(n) bits that cannot be decoded efficiently. The second is more technical but solves this problem.

In the following we always assume that $n \in \mathbb{N}$, $p_1, \dots, p_n \in (0,1]$ and $\varepsilon \in (0,1]$ are given and $B^{(i)} = (B^{(i)}_j)_{j \in \mathbb{N}}$ is a Bernoulli process with parameter $p_i$ for all $i \in [n]$.

A Branching Process.

Let $k_1, \dots, k_n \in \mathbb{N}$ be a sequence of numbers.⁴ Consider the infinite tree visualised in Figure 1. It has $n+2$ layers, indexed from 0 to $n+1$. The root in layer 0 has an infinite number of children and nodes in layer $i$ for $1 \le i \le n$ have $k_i$ children each, with 0-based indexing in both cases. Edges between layers $i$ and $i+1$ are associated with the variables $B^{(i)}_0, B^{(i)}_1, B^{(i)}_2, \dots$ from top to bottom, where $B^{(i)}_j = 0$ indicates a broken edge depicted as a dotted line. For convenience, we make another definition. For any $0 \le \ell \le n$ and any sequence $(\sigma_0, \sigma_1, \dots, \sigma_\ell) \in \mathbb{N} \times \{0,\dots,k_1-1\} \times \dots \times \{0,\dots,k_\ell-1\}$ consider the node reached from the root by taking in each layer $0 \le i \le \ell$ the edge to the $\sigma_i$-th child. We define $\sigma_0 \diamond \dots \diamond \sigma_\ell$ to be the index of the reached node within layer $\ell+1$.

⁴ It may help to imagine that $k_i \approx 2^{\varepsilon}/p_i$.
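In code, the operator "$\diamond$" is a mixed-radix composition of child indices; a minimal sketch (the function name is ours):

```python
def node_index(sigma, k):
    """sigma_0 ◇ sigma_1 ◇ ... ◇ sigma_l: index, within its layer, of the node
    reached by taking child sigma[i] in each layer. sigma[0] is unbounded,
    sigma[i] must lie in range(k[i-1]) for i >= 1."""
    idx = sigma[0]
    for s, children in zip(sigma[1:], k):
        idx = idx * children + s  # descend: parent index * fan-out + child index
    return idx

# Example: child 3 of the layer-1 node with index 2, fan-out 5, has index 13.
assert node_index([2, 3], [5]) == 13
```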

Figure 1: A visualisation of the tree underlying the search of the simplified Consensus algorithm.

The Simplified Consensus Algorithm.

In Consensus, we search for a non-broken path from the root of the tree to a leaf node using a DFS, see Algorithm 1. Note that such a path exists with probability 1 because the root has infinitely many children. The path is given by the sequence of choices $(\sigma_0, \dots, \sigma_n) \in \mathbb{N} \times \{0,\dots,k_1-1\} \times \dots \times \{0,\dots,k_n-1\}$ made in each layer. The returned sequence can be encoded as a bitstring $M$. Then $S_i \coloneqq \sigma_0 \diamond \dots \diamond \sigma_i$ is a successful seed for each $i \in [n]$ by construction. The merits of the approach are summarised in the following theorem.

Algorithm 1 Simplified Consensus.
Algorithm 2 Full Consensus.
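To make the DFS of Algorithm 1 concrete, the following Python sketch runs the backtracking search on simulated Bernoulli processes; names are ours and the sketch favours clarity over efficiency.

```python
import random

def simplified_consensus(p, k):
    """DFS for an unbroken root-to-leaf path (a sketch of Algorithm 1).

    p[i-1], k[i-1] play the roles of p_i and k_i. Returns (sigma_0, ..., sigma_n);
    the seed of process i is then S_i = sigma_0 ◇ ... ◇ sigma_i.
    """
    n = len(p)
    memo = {}  # lazily drawn, then fixed, Bernoulli variables B_j^(i)

    def B(i, j):  # is the j-th edge between layers i and i+1 unbroken?
        if (i, j) not in memo:
            memo[(i, j)] = random.random() < p[i - 1]
        return memo[(i, j)]

    sigma = [0] * (n + 1)
    i = 1  # layer of the node whose outgoing edges we are testing
    while i <= n:
        node = sigma[0]  # recompute sigma_0 ◇ ... ◇ sigma_{i-1}
        for l in range(1, i):
            node = node * k[l - 1] + sigma[l]
        if sigma[i] < k[i - 1] and B(i, node * k[i - 1] + sigma[i]):
            i += 1                   # edge unbroken: descend one layer
        elif sigma[i] + 1 < k[i - 1]:
            sigma[i] += 1            # edge broken: try the next sibling
        else:
            sigma[i] = 0             # all children exhausted: backtrack
            i -= 1
            if i == 0:
                sigma[0] += 1        # the root has infinitely many children
                i = 1
            else:
                sigma[i] += 1
    return sigma
```

For instance, `simplified_consensus([0.25] * 8, [5] * 8)` matches the uniform setting of Figure 3.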
Theorem 6 (Performance of Simplified Consensus).

Given $n \in \mathbb{N}$, $p_1, \dots, p_n \in (0,1]$ and $\varepsilon \in (0,1]$, let $k_1, \dots, k_n \in \mathbb{N}$ be a sequence of numbers satisfying⁵ $\prod_{j=1}^{i} p_j k_j 2^{-\varepsilon} \in [1,2]$ for all $i \in [n]$. Then Algorithm 1 solves the SSEP with $\mathbb{E}[T_i] = \mathcal{O}(T_i^{\mathrm{OPT}}/\varepsilon)$ and $\mathbb{E}[|M|] \le M^{\mathrm{OPT}} + \varepsilon n + \log 1/\varepsilon + \mathcal{O}(1)$.

⁵ Ideally we would want $k_j \approx 2^{\varepsilon}/p_j$, but the integer constraint on the $k_j$ forces us to be more lenient.

Full Consensus Algorithm.

The main issues with the simplified Consensus algorithm are that $(\sigma_0, \dots, \sigma_n)$ is awkward to encode in binary and that using $\sigma_0 \diamond \dots \diamond \sigma_i$ directly as the seed $S_i$ makes for seeds of size $\Omega(n)$ bits, which are inefficient to compute and handle. We solve these issues with three interventions.

1. We introduce a word size $w \in \mathbb{N}$ and encode $\sigma_0$ using $w$ bits (hoping that it does not overflow, see below).

2. We demand that $k_1, \dots, k_n$ are powers of two, i.e. $k_i = 2^{\ell_i}$ for $\ell_i \in \mathbb{N}$. With our definition of "$\diamond$", the binary representation of the number $\sigma_0 \diamond \dots \diamond \sigma_i$ now simply arises by concatenating the binary representations of $\sigma_0, \dots, \sigma_i$ (where $\sigma_j$ is encoded using $\ell_j$ bits and $\sigma_0$ is encoded using $w$ bits).

3. Instead of defining $S_i$ to be $\sigma_0 \diamond \dots \diamond \sigma_i$, we define $S_i$ to be the $w$-bit suffix of $\sigma_0 \diamond \dots \diamond \sigma_i$ (see the code sketch below). We will show that this truncation is unlikely to cause collisions, i.e. any two distinct sequences $(\sigma_0, \dots, \sigma_i)$ and $(\sigma_0', \dots, \sigma_i')$ we encounter have distinct suffixes $S_i \ne S_i'$. Intuitively, this guarantees that the search algorithm will always see "fresh" random variables and does not "notice" the change.

Figure 2: The bitstring M returned by the full Consensus algorithm and the notation used to refer to relevant substrings.

The full algorithm is given as Algorithm 2, using notation defined in Figure 2. Our formal result is as follows.

Theorem 7 (Performance of Full Consensus).

Let $c > 0$ be an arbitrary constant. Given $n \in \mathbb{N}$, $p_1, \dots, p_n \in (n^{-c}, 1]$ and $\varepsilon \in (n^{-c}, 1]$, let $\ell_1, \dots, \ell_n \in \mathbb{N}$ be the unique sequence of numbers such that

$\sum_{j=1}^{i} \ell_j = \Big\lceil \varepsilon i + \sum_{j=1}^{i} \log(1/p_j) \Big\rceil$ for $i \in [n]$.

Assume we run Algorithm 2 with these parameters and $w \ge (2+7c)\log n$.⁶ Then there is an event $E$ with $\Pr[E] = 1-\mathcal{O}(n^{-c})$ that implies that the run solves the SSEP, producing a bitstring $M$ with $|M| = \lceil M^{\mathrm{OPT}} + \varepsilon n \rceil + w$ containing $S_i \in \{0,1\}^w$ at offset $\sum_{j=1}^{i} \ell_j$ for all $i \in [n]$. Moreover, $\mathbb{E}[T_i \mid E] = \mathcal{O}(T_i^{\mathrm{OPT}}/\varepsilon)$ for all $i \in [n]$.

⁶ We did not attempt to improve the constant "$2+7c$" and expect that $w = 64$ is enough in practice.

In particular the result guarantees that $S_i$ can be inferred efficiently given $M$ and $i$, provided that $\sum_{j=1}^{i} \log(1/p_j)$ can be computed efficiently given $i$. In practice it is fine to use approximations of the $p_i$.⁷ Note that the expectation of $T_i$ is only bounded conditioned on a good event $E$, which is defined in Section 4. To obtain an unconditionally bounded expectation, Algorithm 2 should be modified to give up early in failure cases.⁸

⁷ Our implementation uses fixed point arithmetic (integers counting "microbits") to avoid issues related to floating point arithmetic not being associative and distributive.

⁸ To do so we need not decide whether $E$ occurs. We can simply count the number of steps and abort when a limit is reached.
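For illustration, here is a sketch of the decoding side using the fixed-point "microbits" idea from the footnote; the scale and helper names are our own choices, not those of the implementation.

```python
SCALE = 1 << 20  # fixed-point scale: one "microbit" = 2^-20 bits

def seed_offset(i, eps_micro, cost_micro):
    """Bit offset of S_i in M, i.e. ceil(eps*i + sum_{j<=i} log2(1/p_j)),
    evaluated exactly on integers. cost_micro[j] ~ log2(1/p_{j+1}) * SCALE;
    a real decoder would precompute prefix sums of cost_micro for O(1) access."""
    total = eps_micro * i + sum(cost_micro[:i])
    return -(-total // SCALE)  # ceiling division

def extract_seed(M, i, eps_micro, cost_micro, w):
    """S_i is the w-bit substring of M at its predetermined offset.
    M is modelled as a string of '0'/'1' characters here."""
    off = seed_offset(i, eps_micro, cost_micro)
    return M[off:off + w]
```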

Special Cases and Variants.

An important special case is $p_1 = \dots = p_n = p$. With $b = \log 1/p$ the space lower bound is then simply $M^{\mathrm{OPT}} = bn$ bits. If we apply Theorem 7 with $\varepsilon \in (0,1)$ then the lengths $\ell_1, \dots, \ell_n$ of the seed fragments $\sigma_1, \dots, \sigma_n$ are all either $\lfloor b+\varepsilon \rfloor$ or $\lceil b+\varepsilon \rceil$ with an average approaching $b+\varepsilon$. Assuming $b \notin \mathbb{N}$ it may be tempting to use $\varepsilon = \lceil b \rceil - b$ such that all seed fragments have the same length $b+\varepsilon = \lceil b \rceil$. Similarly, we could pick $\varepsilon$ such that $b+\varepsilon$ is a "nice" rational number, e.g. $b+\varepsilon = \lceil b \rceil + \frac12$, for which $\ell_1, \ell_2, \dots$ alternate between $\lceil b \rceil + 1$ and $\lceil b \rceil$.
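A quick sketch (our own helper) confirming the fragment lengths in this uniform case:

```python
import math

def fragment_lengths(n, b_plus_eps):
    """ell_i = ceil(i*(b+eps)) - ceil((i-1)*(b+eps)): the fragment lengths of
    Theorem 7 in the uniform case p_1 = ... = p_n = p with b = log2(1/p)."""
    return [math.ceil(i * b_plus_eps) - math.ceil((i - 1) * b_plus_eps)
            for i in range(1, n + 1)]

# For b in (1,2) and b+eps = ceil(b) + 1/2 = 2.5, lengths alternate 3, 2, 3, 2, ...
print(fragment_lengths(6, 2.5))  # [3, 2, 3, 2, 3, 2]
```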

In some cases it may be reasonable to choose ε>1. Even though Consensus ceases to be more space efficient than MIN, it still has the advantage that seeds are bitstrings of fixed lengths in predetermined locations. We decided against formally covering this case in Theorem 7, but argue for the following modification in the full version [24] of this paper.

 Remark 8.

The guarantees of Theorem 7 also hold for $\varepsilon \in [1, \log n]$, except that

$\mathbb{E}[T_i \mid E] = T_i^{\mathrm{OPT}} \cdot \big(1 + \exp(-\Omega(2^{\varepsilon})) + n^{-c}\big)$ (instead of $\mathbb{E}[T_i \mid E] = \mathcal{O}(T_i^{\mathrm{OPT}}/\varepsilon)$).

It may also be worthwhile to consider variants of Consensus that use different values of the overhead parameter $\varepsilon$ in different parts of $M$ to minimise some overall cost. Such an idea makes sense in general if we augment the SSEP to include specific costs $c_i$ associated with testing the Bernoulli random variables belonging to the $i$-th family.

3 Analysis of Simplified Consensus

We begin by reducing the claim of Theorem 6 to a purely mathematical statement given in Lemma 9, which we then prove in Section 3.2.

3.1 Reduction of Theorem 6 to Lemma 9

Recall the tree defined in Section 2 and illustrated in Figure 1. For any $i \in [n+1]$ consider a node $v$ in layer $i$. Let $q_i$ be the probability that there is an unbroken path from $v$ to a node in layer $n+1$. (This probability is indeed the same for all nodes in layer $i$.) In the last layer, we have $q_{n+1} = 1$ by definition. For all $1 \le i \le n$ we have $q_i = 1-(1-p_i q_{i+1})^{k_i}$ since $p_i q_{i+1}$ is the probability that an unbroken path via a specific child of $v$ exists, and since there is an independent chance for each of the $k_i$ children. The technical challenge of this section is to show that the $q_i$ are not too small.
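The recurrence is straightforward to evaluate numerically; the following sketch (our own code) also reproduces the fixed point shown in Figure 3:

```python
def success_probs(p, k):
    """q[i] = probability of an unbroken path from a layer-i node to layer n+1,
    computed bottom-up via q_i = 1 - (1 - p_i * q_{i+1}) ** k_i."""
    n = len(p)
    q = [0.0] * (n + 2)
    q[n + 1] = 1.0
    for i in range(n, 0, -1):
        q[i] = 1 - (1 - p[i - 1] * q[i + 1]) ** k[i - 1]
    return q

# Uniform example from Figure 3: p = 1/4, k = 5, eps = log2(5/4) ~ 0.32.
# q_1 approaches the fixed point x* ~ 0.449 of f(x) = 1 - (1 - x/4)**5.
print(success_probs([0.25] * 50, [5] * 50)[1])
```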

Lemma 9.

Under the conditions of Theorem 6 we have $q_i = \Omega(\varepsilon)$ for all $i \in [n]$.

This lemma implies Theorem 6 as follows.

Proof of Theorem 6.

Let $T_i$ for $i \in [n]$ be the number of Bernoulli random variables from the family $(B^{(i)}_j)_{j \in \mathbb{N}}$ that are inspected. This corresponds to the number of edges between layers $i$ and $i+1$ that are inspected by the DFS. With probability $p_i$ such an edge is unbroken and, if it is unbroken, then with probability $q_{i+1}$ the DFS will never backtrack out of the corresponding subtree and hence never inspect another edge from the same layer. This implies $T_i \sim \mathrm{Geom}(p_i q_{i+1})$ and thus $\mathbb{E}[T_i] = \frac{1}{p_i q_{i+1}} = \mathcal{O}(T_i^{\mathrm{OPT}}/\varepsilon)$ where the last step uses $T_i^{\mathrm{OPT}} = 1/p_i$ and Lemma 9.

Concerning the space to encode $M = (\sigma_0, \dots, \sigma_n)$, there are two parts to consider. The "fixed width" numbers $(\sigma_1, \dots, \sigma_n) \in \{0,\dots,k_1-1\} \times \dots \times \{0,\dots,k_n-1\}$ can be encoded using $\lceil \log \prod_{i \in [n]} k_i \rceil$ bits. By the assumption on $(k_i)_{i \in [n]}$ we have $\prod_{i \in [n]} k_i \le 2 \cdot \prod_{i \in [n]} 2^{\varepsilon}/p_i$ and can bound the space by

$1 + \log \prod_{i \in [n]} k_i \le 1 + \log\Big(2 \prod_{i \in [n]} \frac{2^{\varepsilon}}{p_i}\Big) \le 2 + \varepsilon n + \sum_{i \in [n]} \log \frac{1}{p_i} = M^{\mathrm{OPT}} + \varepsilon n + \mathcal{O}(1).$

The "variable width" number $1+\sigma_0$ has distribution $1+\sigma_0 \sim \mathrm{Geom}(q_1)$ because $\sigma_0$ is the 0-based index of the first node in layer 1 with an unbroken path to layer $n+1$. Again by Lemma 9 we have $\mathbb{E}[1+\sigma_0] = \frac{1}{q_1} = \mathcal{O}(1/\varepsilon)$. Using Jensen's inequality [15, 27] and the fact that $\log_2$ is concave, the expected number of bits to encode $\sigma_0$ is hence bounded by

$\mathbb{E}[\log(1+\sigma_0)] \le 1 + \log \mathbb{E}[1+\sigma_0] = 1 + \log(\mathcal{O}(1/\varepsilon)) = \log 1/\varepsilon + \mathcal{O}(1).$

Summing the contributions of both parts yields a bound on 𝔼[|M|] as claimed.

3.2 Proof of Lemma 9

To understand the proof of Lemma 9 it may help to consider a simplified case first. Assume $p_1 = \dots = p_n = p$ and $k_1 = \dots = k_n = k$ with $kp = 2^{\varepsilon}$, i.e. $k$ is slightly larger than $\frac{1}{p}$. We have $q_i = 1-(1-pq_{i+1})^k$, which motivates defining $f(x) = 1-(1-px)^k$ such that $q_i$ arises by iteratively applying $f$ to an initial value of $q_{n+1} = 1$. From the illustration in Figure 3 we see that $q_i$ can never drop below the largest fixed point $x^*$ of $f$, and it is not hard to show that $x^* = \Omega(\varepsilon)$.

Figure 3: For $p_1 = \dots = p_n = \frac14$ and $k_1 = \dots = k_n = 5$ (thus $\varepsilon = \log(5/4) \approx 0.32$) the values $q_{n+1}, q_n, q_{n-1}, \dots$ (marked as ticks on the x-axis) arise by iteratively applying $f(x) = 1-(1-px)^k$ to the starting value of 1. Each $q_i$ is at least the fixed point $x^* \approx 0.45$ of $f$.

Our full proof has to deal with a setting that is much less clean. The values $p_i$ and $k_i$ depend on $i$, yielding a distinct function $f_i$ for each $i \in [n]$, and $p_i k_i \approx 2^{\varepsilon}$ is only true in the sense of a geometric average over $\Omega(1/\varepsilon)$ consecutive indices. We hence analyse compositions $f_a \circ \dots \circ f_{b-1}$ for sufficiently large $b-a = \Theta(1/\varepsilon)$. We begin with some definitions.

  • Let $f_i(x) = 1-(1-p_i x)^{k_i}$. Note that this gives $q_i = (f_i \circ \dots \circ f_n)(1)$ for $i \in [n+1]$.

  • For any set $I = \{a, \dots, b-1\} \subseteq [n]$ let $P_I = \prod_{i \in I} p_i$, $K_I = \prod_{i \in I} k_i$, and $F_I = f_a \circ \dots \circ f_{b-1}$. Note that the assumption of Theorem 6 gives

    $P_I K_I \cdot 2^{-\varepsilon|I|} = \underbrace{P_{\{1,\dots,b-1\}} K_{\{1,\dots,b-1\}} 2^{-\varepsilon(b-1)}}_{\in [1,2]} \cdot \Big(\underbrace{P_{\{1,\dots,a-1\}} K_{\{1,\dots,a-1\}} 2^{-\varepsilon(a-1)}}_{\in [1,2]}\Big)^{-1} \in [\tfrac12, 2].$

  • Let $\varepsilon' \coloneqq \lceil 2\varepsilon^{-1} \rceil^{-1}$. Its role is to be a number satisfying $\varepsilon' = \Theta(\varepsilon)$, $\varepsilon'^{-1} \in \mathbb{N}$, and $(2^{\varepsilon})^{\varepsilon'^{-1}} \in [4,8]$. In particular, if $I = \{a, \dots, b-1\}$ with $b-a = \varepsilon'^{-1}$ then

    $P_I K_I = \underbrace{P_I K_I 2^{-\varepsilon|I|}}_{\in [\frac12, 2]} \cdot \underbrace{2^{\varepsilon|I|}}_{\in [4,8]} \in [2, 16].$

  • Lastly, let $C \coloneqq 8$ and $\varepsilon'' \coloneqq \varepsilon'/C$.

These definitions are designed to make the proof of the following lemma as simple as possible. No attempt was made to optimise constant factors.

Lemma 10.

Let $I = \{a, \dots, b-1\} \subseteq [n]$. The following holds.

  (i) For all $i \in [n]$ and $x \in [0,1]$: $f_i(x) \ge p_i k_i x - (p_i k_i x)^2/2$.

  (ii) If $b-a \le \varepsilon'^{-1}$ then $F_I(x) \ge P_I K_I x (1-C|I|x)$ for $x \in [0, \varepsilon'']$.

  (iii) If $b-a = \varepsilon'^{-1}$ then $F_I(\varepsilon''/2) \ge \varepsilon''/2$.

Proof.

  (i) $e^{-x}$ satisfies the bounds $1-x \le e^{-x} \le 1-x+x^2/2$ for $x > 0$. We get the left claim by integrating $e^{-t} \le 1$ for $t \in [0,x]$ to get $1-e^{-x} \le x$. Integrating again for $t \in [0,x]$ gives the right claim. Applying both to $f_i(x)$ we get

    $f_i(x) = 1-(1-p_i x)^{k_i} \ge 1-e^{-p_i k_i x} \ge 1-\big(1-p_i k_i x+(p_i k_i x)^2/2\big) = p_i k_i x - (p_i k_i x)^2/2.$

  (ii) We use induction on $b-a$. For $b-a = 0$ we have $I = \emptyset$, $F_I = \mathrm{id}$ as well as $P_I = K_I = 1$ and the claim holds. Let now $I = \{a, \dots, b-1\}$ with $b-a \le \varepsilon'^{-1}$ and $I' = \{a+1, \dots, b-1\}$. We assume the claim holds for $I'$. Let $x \in [0, \varepsilon'']$. Using that $f_a$ is monotonic, the induction hypothesis, claim (i), the fact that $C|I'|x \le C\varepsilon'^{-1}\varepsilon'' = 1 < 2$ (hence $(1-C|I'|x)^2 \le 1$), and $P_I K_I \le 16 = 2C$, we get

    $F_I(x) = (f_a \circ F_{I'})(x) \ge f_a\big(P_{I'} K_{I'} x (1-C|I'|x)\big)$
    $\ge p_a k_a P_{I'} K_{I'} x (1-C|I'|x) - \big(p_a k_a P_{I'} K_{I'} x (1-C|I'|x)\big)^2/2$
    $= P_I K_I x (1-C|I'|x) - \big(P_I K_I x (1-C|I'|x)\big)^2/2$
    $\ge P_I K_I x (1-C|I'|x) - (P_I K_I x)^2/2 = P_I K_I x \big(1 - x(C|I'| + P_I K_I/2)\big)$
    $\ge P_I K_I x \big(1 - x(C|I'| + C)\big) = P_I K_I x (1-C|I|x),$

    where the last step uses $|I| = |I'|+1$.

  (iii) By applying (ii) and using $C\varepsilon'^{-1}\varepsilon'' = 1$ as well as $P_I K_I \ge 2$ we get

    $F_I(\varepsilon''/2) \overset{\text{(ii)}}{\ge} P_I K_I \cdot \frac{\varepsilon''}{2}\Big(1 - C\varepsilon'^{-1}\frac{\varepsilon''}{2}\Big) = P_I K_I \cdot \frac{\varepsilon''}{2} \cdot \frac12 \ge \varepsilon''/2.$

The proof of Lemma 9 is now immediate:

Proof of Lemma 9.

Let $i \in [n]$. Partition $I = \{i, \dots, n\}$ into contiguous intervals $I_1, \dots, I_k$ such that $F_I = F_{I_1} \circ \dots \circ F_{I_k}$ with $|I_1| \le \varepsilon'^{-1}$ and $|I_j| = \varepsilon'^{-1}$ for $j \in \{2, \dots, k\}$. Then by repeated applications of Lemma 10 (iii), one application of Lemma 10 (ii) and the monotonicity of the relevant functions we get

$q_i = F_I(1) \ge F_I(\varepsilon''/2) = F_{I_1}\big(F_{I_2}(\dots(F_{I_k}(\varepsilon''/2)))\big) \overset{\text{(iii)}}{\ge} F_{I_1}(\varepsilon''/2)$
$\overset{\text{(ii)}}{\ge} P_{I_1} K_{I_1} \frac{\varepsilon''}{2}\Big(1 - C|I_1|\frac{\varepsilon''}{2}\Big) = \underbrace{P_{I_1} K_{I_1} 2^{-\varepsilon|I_1|}}_{\ge \frac12} \cdot \underbrace{2^{\varepsilon|I_1|}}_{\ge 1} \cdot \frac{\varepsilon''}{2} \cdot \underbrace{\Big(1 - C|I_1|\frac{\varepsilon''}{2}\Big)}_{\ge \frac12} \ge \frac{\varepsilon''}{8} = \Omega(\varepsilon).$

4 Analysis of Full Consensus

Full Consensus performs a DFS in a tree as depicted in Figure 1 in much the same way as simplified Consensus, with two differences. First, the root node only has $2^w$ children (not infinitely many) and the probability that an unbroken path exists is therefore strictly less than 1. Secondly, when examining the outgoing edge of index $\sigma_i$ of some node in layer $i$, its status is no longer linked to $B^{(i)}_{\Sigma}$ for $\Sigma = \sigma_0 \diamond \dots \diamond \sigma_i$. Instead $\Sigma$ is divided as $\Sigma = \mathrm{prefix}(\Sigma)\,\mathrm{suffix}(\Sigma)$ where $\mathrm{suffix}(\Sigma) \in \{0,1\}^w$ and $B^{(i)}_{\mathrm{suffix}(\Sigma)}$ is used. To prepare for the proof of Theorem 7, we present a coupling to, and a claim about, simplified Consensus.

A Coupling between Simplified and Full Consensus.

Consider the following natural coupling. Let $B^{(i)}_j$ for $i \in [n]$ and $j \in \mathbb{N}$ be the random variables underlying simplified Consensus and $\tilde{B}^{(i)}_S$ for $i \in [n]$ and $S \in \{0,1\}^w$ the random variables underlying full Consensus. The coupling should ensure that $B^{(i)}_{\Sigma} = \tilde{B}^{(i)}_{\mathrm{suffix}(\Sigma)}$ for all pairs $(i, \Sigma)$ such that simplified Consensus inspects $B^{(i)}_{\Sigma}$ and does not at an earlier time inspect $B^{(i)}_{\Sigma'}$ with $\mathrm{suffix}(\Sigma') = \mathrm{suffix}(\Sigma)$. In other words, for any pair $(i, S)$, if simplified Consensus inspects at least one variable $B^{(i)}_{\Sigma}$ with $\mathrm{suffix}(\Sigma) = S$ then $\tilde{B}^{(i)}_S$ equals the earliest such variable that is inspected.

This implies that the runs of simplified and full Consensus behave identically as long as, firstly, $\sigma_0$ never reaches $2^w$ and, secondly, simplified Consensus never repeats a suffix, by which we mean inspecting $B^{(i)}_{\Sigma}$ for some pair $(i, \Sigma)$ such that $B^{(i)}_{\Sigma'}$ with $\mathrm{suffix}(\Sigma') = \mathrm{suffix}(\Sigma)$ was previously inspected. In the following, by a step of Consensus we mean one iteration of its main while-loop, which involves inspecting exactly one random variable.

Claim 11.

Simplified Consensus repeats a suffix in step $t$ with probability at most $t \cdot 2^{-w}$.

Proof.

Assume simplified Consensus is about to inspect $B^{(i)}_{\Sigma}$. Call $S \in \{0,1\}^w$ a blocked suffix if there exists $\Sigma'$ such that $S = \mathrm{suffix}(\Sigma')$ and $B^{(i)}_{\Sigma'}$ was previously inspected. This necessitates $\mathrm{prefix}(\Sigma') < \mathrm{prefix}(\Sigma)$. Clearly, step $t$ repeats a suffix if $\mathrm{suffix}(\Sigma)$ is blocked. A crucial observation is that the DFS has exhaustively explored all branches of the tree associated with any prefix less than $\mathrm{prefix}(\Sigma)$ and the order in which this exploration has been carried out is no longer relevant. The symmetry between all nodes of the same layer therefore implies that every suffix is blocked with the same probability. Since at most $t$ suffixes can be blocked at step $t$, the probability that $\mathrm{suffix}(\Sigma)$ is blocked (given our knowledge of $(i, \Sigma)$) is at most $t \cdot 2^{-w}$.

Proof of Theorem 7.

First note that the choice of $(\ell_1, \dots, \ell_n)$ made in Theorem 7 amounts to a valid choice of $(k_1 = 2^{\ell_1}, \dots, k_n = 2^{\ell_n})$ for applying Theorem 6. Indeed, for each $i \in [n]$ there exists some $\mathrm{err}_i \in [1,2)$ related to rounding up such that

$\prod_{j=1}^{i} p_j k_j 2^{-\varepsilon} = \Big(\prod_{j=1}^{i} p_j\Big) 2^{\sum_{j=1}^{i} \ell_j} 2^{-\varepsilon i} = \Big(\prod_{j=1}^{i} p_j\Big) 2^{\lceil \varepsilon i + \sum_{j=1}^{i} \log(1/p_j) \rceil} 2^{-\varepsilon i} = \Big(\prod_{j=1}^{i} p_j\Big) 2^{\varepsilon i + \sum_{j=1}^{i} \log(1/p_j)} \cdot \mathrm{err}_i \cdot 2^{-\varepsilon i} = \mathrm{err}_i \in [1,2)$

as required. The number $T = \sum_{i=1}^{n} T_i$ of steps taken by simplified Consensus satisfies $\mathbb{E}[T] = \mathcal{O}\big(\sum_{i=1}^{n} \frac{1}{p_i \varepsilon}\big) = \mathcal{O}(n^{1+2c})$. By Markov's inequality $T$ exceeds its expectation by a factor of $n^c$ with probability at most $n^{-c}$, so with probability at least $1-\mathcal{O}(n^{-c})$ simplified Consensus terminates within $T_{\mathrm{limit}} = n^{1+3c}$ steps.

By Claim 11 and a union bound, the probability that simplified Consensus repeats a suffix within its first $T_{\mathrm{limit}}$ steps is at most $\sum_{t=1}^{T_{\mathrm{limit}}} t/2^w = \mathcal{O}(T_{\mathrm{limit}}^2/2^w) = \mathcal{O}(n^{2+6c}/2^w)$. By choosing $w \ge (2+7c)\log n$ this probability is $\mathcal{O}(n^{-c})$. This choice also guarantees that $\sigma_0$ never exceeds $2^w$ during the first $T_{\mathrm{limit}}$ steps. Let now $E = \{T \le T_{\mathrm{limit}}\} \cap \{\text{no repeated suffix in the first } T_{\mathrm{limit}} \text{ steps}\}$. By a union bound $\Pr[E] = 1-\mathcal{O}(n^{-c})$ and by our coupling $E$ implies that simplified and full Consensus explore the same sequence of nodes in their DFS and terminate with the same sequence $\sigma_0, \dots, \sigma_n$ with $\sigma_0 < 2^w$. In particular full Consensus succeeds in that case as claimed.

The updated bound on $|M|$ simply reflects that $\sigma_0$, which in simplified Consensus had expected length $\log 1/\varepsilon + \mathcal{O}(1)$, now has fixed length $w = \mathcal{O}(\log n)$. Lastly, if $T_i$ and $\tilde{T}_i$ denote the number of Bernoulli trials from the $i$-th family inspected by simplified and full Consensus, respectively, then conditioned on $E$ we have $T_i = \tilde{T}_i$, so

$\mathbb{E}[\tilde{T}_i \mid E] = \mathbb{E}[T_i \mid E] \le \mathbb{E}[T_i]/\Pr[E] = \mathcal{O}(\mathbb{E}[T_i]) = \mathcal{O}(T_i^{\mathrm{OPT}}/\varepsilon).$

5 Application to Minimal Perfect Hashing

In this section, we apply the Consensus idea to the area of minimal perfect hash functions (MPHFs). Recall from Section 1.3 that an MPHF for a given set X of n keys maps each key in X to a unique integer from [n].

5.1 Existing Strategies for Constructing MPHFs

We begin with a brief overview of established approaches with a focus on those we need. Detailed explanations are found in [18, 19].

Partitioning.

Most approaches randomly partition the input into small buckets using a hash function and then construct an MPHF on each bucket separately. The natural way to obtain an MPHF on the entire input involves storing the bucket-MPHFs and prefix sums for the bucket sizes. Storing the prefix sums can be avoided when the partitioning uses a k-perfect hash function, as is done in ShockHash-Flat [21], and as we do in Section 5.3.

Hagerup and Tholey [12] demonstrate that partitioning into tiny buckets of size $\mathcal{O}(\frac{\log n}{\log\log n})$, when suitably combined with exhaustive tabulation of MPHFs on tiny inputs, yields an approach with construction time $\mathcal{O}(n)$, query time $\mathcal{O}(1)$ and space overhead $\varepsilon = \Theta(\frac{\log^2\log n}{\log n})$. Unfortunately the analysis requires astronomical values of $n \ge \max(\exp(\omega(1/\varepsilon)), 2^{150})$ [4] and is restricted to these rather large $\varepsilon$.

Below we explain monolithic variants of MPHFs, i.e. constructions not using partitioning, which are typically combined with partitioning to obtain faster bucketed variants.

Recursive Splitting.

A recursive splitting strategy [7] involves a predefined splitting tree, which is a rooted tree with $n$ leaves.⁹ See Figure 4 (left) for an example with $n = 11$. An MPHF for an input set of size $n$ is given by a family of functions containing for each internal node $v$ of the tree a splitting hash function $f_v$ that maps keys to the children of $v$. Together, the family $(f_v)_v$ must describe a recursive partitioning of the input set into parts of size 1. The MPHF is queried for a key $x$ by starting at the root. The splitting hash functions are evaluated on $x$ and the resulting path is followed until a leaf is reached, the index of which is returned. Each $f_v$ is identified by a seed that is found using trial and error. In the original paper [7] seeds are stored using Golomb-Rice coding [11, 31]. To keep the space overhead low, the last layer of the splitting tree uses $\ell$-way splits for relatively large $\ell$ (5–16 in practice), which makes splitting hash functions costly to find.

⁹ In the original paper [7], the lowest level is suppressed, with correspondingly changed terminology.

Multiple-Choice Hashing.

We randomly assign a set $\{h_1(x), \dots, h_k(x)\} \subseteq [n(1+\varepsilon)]$ of candidates to each key, hoping that a choice $\sigma(x) \in \{1, \dots, k\}$ can be made for each key such that $x \mapsto h_{\sigma(x)}(x)$ is injective. A perfect hash function is then given by a retrieval data structure that stores $\sigma$ [4, 10, 20]. A recent variant ShockHash [22, 21] achieves record space efficiency by using the aggressive configuration $\varepsilon = 0$ and $k = 2$ (combined with partitioning and recursive splitting). Many retries are needed until $\sigma$ exists.

Bucket Placement.

A general strategy with many variants assigns a random bucket $f(x) \in \{1, \dots, b\}$ to each key and considers for each bucket $i \in [b]$ several (hash) functions $g_{i,1}, g_{i,2}, \dots$ with range $[n]$. A choice $\sigma(i)$ is stored for each bucket such that $x \mapsto g_{f(x),\sigma(f(x))}(x)$ is a bijection. Typically, the values $\sigma(i)$ are chosen greedily in decreasing order of bucket sizes [30, 14, 1], but backtracking has also been considered [33, 8] (in a different sense than Consensus).

Figure 4: A general splitting tree and a balanced binary splitting tree as used in Section 5.2.

5.2 Monolithic Consensus-RecSplit

We now describe a variant of recursive splitting in an idealised setting with $n = 2^d$ that uses Consensus to be particularly space efficient. The idea is simple. The data structure consists of $n-1$ two-way splitting hash functions, i.e. hash functions with range $\{0,1\}$, arranged in a balanced binary tree with $n$ leaves, see Figure 4 (right).

This means that during construction for a key set $S$ we have to select the splitting hash functions from top to bottom such that each function splits the part of $S$ arriving at its node perfectly in half. The data structure is entirely given by a sequence of $n-1$ seeds that identify the splitting hash functions. If we search and encode these seeds using the Consensus algorithm, we obtain the following result.

Theorem 12 (Monolithic Consensus-RecSplit).

Let $n = 2^d$ for $d \in \mathbb{N}$ and $\varepsilon \in [\frac{1}{n}, 1]$. Monolithic Consensus-RecSplit is an MPHF data structure with space requirement $\mathrm{OPT}_{\mathrm{MPHF}} + \mathcal{O}(\varepsilon n + \log^2 n)$, expected construction time $\mathcal{O}(\max(n^{3/2}, \frac{n}{\varepsilon}))$ and query time $\mathcal{O}(\log n)$.

Proof.

We use a separate Consensus data structure per level of the splitting tree and construct them from top to bottom. Consider level $\ell \in \{0, \dots, d-1\}$ of the tree, which contains $2^{\ell}$ nodes responsible for $n_\ell = 2^{d-\ell}$ keys each. The probability that a given seed for a splitting hash function amounts to a perfect split for $n_\ell$ keys is $p(n_\ell) = \binom{n_\ell}{n_\ell/2} \cdot 2^{-n_\ell}$ by a simple counting argument. A standard approximation gives $p(n_\ell) = \Theta(1/\sqrt{n_\ell})$.
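To illustrate the counting argument and the per-seed test, here is a sketch; the keyed BLAKE2 hash is our stand-in for the seeded hash functions of an actual implementation:

```python
import hashlib
from math import comb

def bit(key: bytes, seed: int) -> int:
    """Pseudo-random bit for (key, seed), i.e. the side this key is sent to."""
    h = hashlib.blake2b(key, digest_size=1, key=seed.to_bytes(8, "little"))
    return h.digest()[0] & 1

def splits_in_half(keys: list[bytes], seed: int) -> bool:
    """Is the split perfect, i.e. does exactly half of the keys map to each side?"""
    return sum(bit(k, seed) for k in keys) == len(keys) // 2

def p_split(m: int) -> float:
    """Success probability p(m) = binom(m, m/2) * 2**-m = Theta(1/sqrt(m))."""
    return comb(m, m // 2) / 2 ** m

# p_split(256) ~ 0.0498, matching the Theta(1/sqrt(m)) approximation.
```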

By applying Theorem 7 with $p_1 = \dots = p_{2^\ell} = p(n_\ell)$, $\varepsilon_\ell = \min(1, \varepsilon \cdot n_\ell^{3/4})$ and $w = \mathcal{O}(\log n)$ we obtain a data structure encoding $2^{\ell}$ successful seeds. The expected number of seeds inspected is $\mathcal{O}(\frac{1}{p(n_\ell)\varepsilon_\ell})$ for each node, which corresponds to expected work of $\mathcal{O}(\frac{n_\ell}{p(n_\ell)\varepsilon_\ell}) = \mathcal{O}(n_\ell^{3/2}/\varepsilon_\ell)$ per node when seeds are tested by hashing all $n_\ell$ relevant keys.¹⁰ For the total expected work $w_\ell$ in level $\ell$ there are, up to constant factors, the following two options.

¹⁰ Dividing the work per node by $\varepsilon_\ell$ gives $n_\ell^{3/2}/\varepsilon_\ell^2$. Our choice of $\varepsilon_\ell \propto n_\ell^{3/4}$ (as long as the $\min(1,\cdot)$ does not kick in) is designed to ensure that using extra bits amounts to roughly the same reduction of work in each level, such that we have a consistent trade-off between time and space.

if $\varepsilon_\ell = \varepsilon \cdot n_\ell^{3/4}$ then $w_\ell = \frac{n_\ell^{3/2}}{\varepsilon n_\ell^{3/4}} \cdot 2^{\ell} = \frac{n_\ell^{3/4} \cdot 2^{\ell}}{\varepsilon} = \frac{2^{(d-\ell)\cdot 3/4+\ell}}{\varepsilon} = \frac{n^{3/4} \cdot 2^{\ell/4}}{\varepsilon}$.
if $\varepsilon_\ell = 1$ then $w_\ell = n_\ell^{3/2} \cdot 2^{\ell} = 2^{(d-\ell)\cdot 3/2+\ell} = n^{3/2} \cdot 2^{-\ell/2}$.

The formula for the first case is maximised for $\ell = d-1$ giving $w_{d-1} = \Theta(n/\varepsilon)$. The formula for the second case is maximised for $\ell = 0$ giving $w_0 = n^{3/2}$. Since both describe geometric series we get $\sum_{\ell=0}^{d-1} w_\ell = \mathcal{O}(w_0 + w_{d-1}) = \mathcal{O}(n^{3/2} + n/\varepsilon) = \mathcal{O}(\max(n^{3/2}, n/\varepsilon))$ as desired.

The space requirement for level $\ell$ is $|M_\ell| = 2^{\ell} \cdot \log(1/p(n_\ell)) + \varepsilon_\ell 2^{\ell} + \mathcal{O}(\log n)$. To bound the sum $\sum_{\ell=0}^{d-1} |M_\ell|$ we look at the terms in $|M_\ell|$ separately. For the first, we will use that $\prod_{\ell=0}^{d-1} p(2^{d-\ell})^{2^{\ell}} = \frac{n!}{n^n}$, which can be checked by induction: $d = 0$ is clear. For $d > 0$ we have

$\prod_{\ell=0}^{d-1} p(2^{d-\ell})^{2^{\ell}} = p(n) \cdot \prod_{\ell=1}^{d-1} p(2^{d-\ell})^{2^{\ell}} = p(n) \cdot \Big(\prod_{\ell=1}^{d-1} p(2^{d-\ell})^{2^{\ell-1}}\Big)^2$
$= p(n) \cdot \Big(\prod_{\ell=0}^{d-2} p(2^{(d-1)-\ell})^{2^{\ell}}\Big)^2 \overset{\text{Ind.}}{=} p(n) \cdot \Big(\frac{(n/2)!}{(n/2)^{n/2}}\Big)^2 = \binom{n}{n/2} 2^{-n} \cdot \Big(\frac{(n/2)!}{(n/2)^{n/2}}\Big)^2 = \frac{n!}{n^n}.$

The leading terms in $|M_\ell|$ therefore add up to

$\sum_{\ell=0}^{d-1} 2^{\ell} \cdot \log(1/p(n_\ell)) = \log\Big(1\Big/\prod_{\ell=0}^{d-1} p(n_\ell)^{2^{\ell}}\Big) = \log\Big(\frac{n^n}{n!}\Big) = n\log e - \mathcal{O}(\log n).$

The overhead terms $\varepsilon_\ell \cdot 2^{\ell}$ are geometrically increasing in $\ell$. This is because

$\frac{\varepsilon_{\ell+1} \cdot 2^{\ell+1}}{\varepsilon_\ell \cdot 2^{\ell}} = \frac{\min(1, \varepsilon \cdot n_{\ell+1}^{3/4})}{\min(1, \varepsilon \cdot n_\ell^{3/4})} \cdot 2 = \frac{\min(1, \varepsilon \cdot (2^{d-\ell-1})^{3/4})}{\min(1, \varepsilon \cdot (2^{d-\ell})^{3/4})} \cdot 2 \ge 2^{-3/4} \cdot 2 = 2^{1/4} > 1.$

The sum of the overhead terms is therefore dominated by the last term $\varepsilon_{d-1} 2^{d-1} \le \varepsilon n \cdot 2^{-1/4}$, giving $\sum_{\ell=0}^{d-1} \varepsilon_\ell \cdot 2^{\ell} \le \varepsilon n/(2^{1/4}-1) < 6\varepsilon n$. Overall we get as claimed

$\mathrm{space} = \sum_{\ell=0}^{d-1} |M_\ell| \le n\log e + 6\varepsilon n + \mathcal{O}(\log^2 n).$

A query needs to extract one seed from each of the $d = \log n$ Consensus data structures. Since each uses uniform values $p_1 = \dots = p_{2^\ell} = p(n_\ell)$, the product $\prod_{j=1}^{i} p_j = p(n_\ell)^i$ is easy to compute in constant time, which is sufficient to find each seed in constant time, so we get a query time of $\mathcal{O}(d)$ in total.
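In this uniform case the seed offsets require no stored prefix sums at all; a sketch (our own helper) of the constant-time offset computation within one level:

```python
import math

def uniform_seed_offset(i, log_inv_p, eps):
    """Offset of the i-th seed in a level whose processes all share the same
    success probability p: ceil(i * (eps + log2(1/p))), per Theorem 7."""
    return math.ceil(i * (eps + log_inv_p))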

5.3 Bucketed Consensus-RecSplit

We now replace the upper layers of Monolithic Consensus-RecSplit with a minimal k-perfect hash function [13]. This reduces query time as there are fewer layers to traverse and reduces construction time because the splits in the top layers are the most expensive ones to compute. Most importantly, it enables efficiently handling input sizes that are not powers of two.

For any $k \in \mathbb{N}$ a minimal $k$-perfect hash function for a set $S$ of $n$ keys is a data structure mapping the elements of $S$ to buckets indexed with $[\lceil n/k \rceil]$ such that each bucket $i \le \lfloor n/k \rfloor$ receives exactly $k$ keys. If $k$ does not divide $n$ then bucket $\lceil n/k \rceil$ receives between 1 and $k-1$ leftover keys. It has long been known that the space lower bound for such a data structure is $n(\log e - \frac{1}{k}\log\frac{k^k}{k!})$ [1], shown to be approximately $\frac{n}{2k}\log(2\pi k)$ in [16]. In the full version of this paper [24] we outline a compact minimal $k$-perfect hash function with, in expectation, $\mathcal{O}(\frac{n}{k}\log k)$ bits of space, constant query time, and linear construction time.

We call our resulting MPHF Bucketed Consensus-RecSplit. Its performance is as follows (previously stated in Section 1.3).

Theorem 5 (Bucketed Consensus-RecSplit). [Restated, see original statement.]

For any $\varepsilon \in [n^{-3/7}, 1]$ there is an MPHF data structure with space requirement $(1+\mathcal{O}(\varepsilon))\,\mathrm{OPT}_{\mathrm{MPHF}}$, expected construction time $\mathcal{O}(n/\varepsilon)$ and expected query time $\mathcal{O}(\log 1/\varepsilon)$.

Proof.

The idea is to use our monolithic construction but replace the top-most layers with a $k$-perfect hash function. More precisely, we choose $k = 2^{\lceil \frac43 \log 1/\varepsilon \rceil} = \Theta(\varepsilon^{-4/3})$ and construct a minimal $k$-perfect hash function $F$ for the input set. As discussed in the full version, the construction time of $F$ is $\mathcal{O}(n)$, the expected query time is $\mathcal{O}(1)$ and the space consumption is $\mathcal{O}(\frac{n}{k}\log k) = \mathcal{O}(\varepsilon n)$ bits, all well within the respective budgets. The resulting $\lfloor n/k \rfloor$ buckets of $k$ keys each (a power of 2) are recursively split like in our monolithic data structure, which takes at most $\mathrm{OPT}_{\mathrm{MPHF}} + \mathcal{O}(\varepsilon n)$ bits of space (the $\mathcal{O}(\log^2 n)$ term is dominated by $\mathcal{O}(\varepsilon n)$). Since we make no assumptions on $n$ there might be up to $k-1$ leftover keys assigned to an extra bucket. For these we construct a separate MPHF $F'$ with range $\{\lfloor n/k \rfloor \cdot k + 1, \dots, n\}$. Since $k = \Theta(\varepsilon^{-4/3}) = \Theta(\varepsilon \cdot \varepsilon^{-7/3}) = \mathcal{O}(\varepsilon n)$ using $\varepsilon \ge n^{-3/7}$, pretty much any compact (not necessarily succinct) MPHF is good enough here, provided it supports queries in logarithmic time. Queries evaluate $F$ first and, if mapped to a regular bucket, evaluate a sequence of $\log k$ splits, or, if mapped to the extra bucket, evaluate $F'$. This takes $\mathcal{O}(\log k) = \mathcal{O}(\log 1/\varepsilon)$ time in the worst case.

Concerning construction time, note that the top-most level in use has nodes of size $n_\ell = k \approx \varepsilon^{-4/3}$ and the definition of $\varepsilon_\ell$ we made simplifies to $\varepsilon_\ell = n_\ell^{3/4} \cdot \varepsilon$ without the "$\min(1,\cdot)$". By the argument from the proof of Theorem 12, the construction time is then dominated by the bottom layer and bounded by $\mathcal{O}(n/\varepsilon)$ as claimed.

Figure 5: Experimental evaluation of very space-efficient MPHF algorithms, showing the trade-off between space consumption and construction throughput. Because Consensus-RecSplit is focused on space consumption, we only include approaches that achieve below 1.6 bits per key.

5.4 Experiments

To demonstrate that Consensus-RecSplit is viable in practice, we give an implementation and compare it to other MPHF constructions from the literature.¹¹ Like previous papers [19, 21, 22, 14, 3], we use 100 million random string keys of uniform random length [10..50] as input. The source code is available on GitHub [17, 23]. We only include the most space-efficient previous approaches with space consumption below 1.6 bits per key and refer to [18, 19] for a detailed comparison of state-of-the-art perfect hashing approaches with larger space consumption. Our implementation departs from the algorithmic description in Section 5.3 by using the threshold-based $k$-perfect hash function [21, 13]. We postpone further tuning and parallelisation efforts to a future paper.

¹¹ We run single-threaded experiments on an Intel i7 11700 processor with a base clock speed of 2.5 GHz. The sizes of the L1 and L2 data caches are 48 KiB and 512 KiB per core, and the L3 cache has a size of 16 MiB. The page size is 4 KiB. We use the GNU C++ compiler version 11.2.0 with optimization flags -O3 -march=native.

Figure 5 shows the trade-off between space consumption and construction time of Consensus-RecSplit. In the full version [24] we also give a table. The performance of all previous approaches degrades abruptly for smaller overheads because of their exponential dependency on $1/\varepsilon$, while Consensus-RecSplit shows an approximately linear dependence on $1/\varepsilon$, as predicted by Theorem 5. The space lower bound for MPHFs is $\log_2 e \approx 1.443$ bits per key. The previously most space-efficient approach, bipartite ShockHash-RS [21], achieves 1.489 bits per key. In about 15% of the construction time, Consensus-RecSplit achieves a space consumption of just 1.444 bits per key. At a space consumption of 1.489 bits per key, Consensus-RecSplit is more than 350 times faster to construct than the previous state of the art.

Queries take 150–250 ns depending on k and therefore the depth of the splitting tree (see full version), so their performance is not too far from compact configurations of RecSplit (100 ns) and bipartite ShockHash-RS (125 ns). Future work can improve on the query performance by storing the seeds needed for each query closer to each other. We can adapt bucketed Consensus-RecSplit by constructing one Consensus data structure storing seeds bucket by bucket instead of an independent data structure for each layer. Preliminary experiments show up to 30% faster queries with moderately slower construction at the same space consumption. Splittings higher up in the tree not only have a smaller success probability, but are also more costly to test. We are therefore incentivised to select a different space overhead parameter ε for each splitting within the same Consensus data structure. We leave this generalisation of Consensus to non-uniform ε for future work.

6 Conclusion

This paper concerns data structures that store sequences of successful seeds, i.e. values known to influence pseudo-random processes in desirable ways. Normally these seeds are first found independently of each other using trial and error and then encoded separately. In this paper we show that, perhaps surprisingly, this wastes Ω(1) bits per seed compared to a lower bound.

We present the Consensus approach, which combines search and encoding of successful seeds and reduces the space overhead to 𝒪(ε) bits at the price of increasing the required work by a factor of 𝒪(1/ε).

To demonstrate the merits of our ideas in practice, we apply them to the construction of minimal perfect hash functions. Consensus-RecSplit is the first MPHF to achieve a construction time that only depends linearly on the inverse space overhead. Despite using the same recursive splitting idea as [7], the space overhead is, for the same construction throughput, up to two orders of magnitude smaller in practice.

Future Work.

Given our promising results, we intend to continue working on Consensus-RecSplit, in particular looking into parallelisation and improving query times. We also believe that Consensus can be applied to other randomised data structures, including but not limited to other perfect or k-perfect hash functions, retrieval data structures and AMQ-filters. We give a first result on k-perfect hash functions using Consensus in [13].

References

  • [1] Djamal Belazzougui, Fabiano C. Botelho, and Martin Dietzfelbinger. Hash, displace, and compress. In ESA, volume 5757 of Lecture Notes in Computer Science, pages 682–693. Springer, 2009. doi:10.1007/978-3-642-04128-0_61.
  • [2] Djamal Belazzougui and Rossano Venturini. Compressed static functions with applications. In SODA, pages 229–240. SIAM, 2013. doi:10.1137/1.9781611973105.17.
  • [3] Dominik Bez, Florian Kurpicz, Hans-Peter Lehmann, and Peter Sanders. High performance construction of RecSplit based minimal perfect hash functions. In ESA, volume 274 of LIPIcs, pages 19:1–19:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPICS.ESA.2023.19.
  • [4] Fabiano C. Botelho, Rasmus Pagh, and Nivio Ziviani. Practical perfect hashing in nearly optimal space. Inf. Syst., 38(1):108–131, 2013. doi:10.1016/J.IS.2012.06.002.
  • [5] Martin Dietzfelbinger and Friedhelm Meyer auf der Heide. A new universal class of hash functions and dynamic hashing in real time. In ICALP, volume 443 of Lecture Notes in Computer Science, pages 6–19. Springer, 1990. doi:10.1007/BFB0032018.
  • [6] Martin Dietzfelbinger and Michael Rink. Applications of a splitting trick. In ICALP (1), volume 5555 of Lecture Notes in Computer Science, pages 354–365. Springer, 2009. doi:10.1007/978-3-642-02927-1_30.
  • [7] Emmanuel Esposito, Thomas Mueller Graf, and Sebastiano Vigna. RecSplit: Minimal perfect hashing via recursive splitting. In ALENEX, pages 175–185. SIAM, 2020. doi:10.1137/1.9781611976007.14.
  • [8] Edward A. Fox, Lenwood S. Heath, Qi Fan Chen, and Amjad M. Daoud. Practical minimal perfect hash functions for large databases. Commun. ACM, 35(1):105–121, 1992. doi:10.1145/129617.129623.
  • [9] Michael L. Fredman and János Komlós. On the size of separating systems and families of perfect hash functions. SIAM J. Algebraic Discrete Methods, 5(1):61–68, 1984. doi:10.1137/0605009.
  • [10] Marco Genuzio, Giuseppe Ottaviano, and Sebastiano Vigna. Fast scalable construction of (minimal perfect hash) functions. In SEA, volume 9685 of Lecture Notes in Computer Science, pages 339–352. Springer, 2016. doi:10.1007/978-3-319-38851-9_23.
  • [11] Solomon W. Golomb. Run-length encodings. IEEE Trans. Inf. Theory, 12(3):399–401, 1966. doi:10.1109/TIT.1966.1053907.
  • [12] Torben Hagerup and Torsten Tholey. Efficient minimal perfect hashing in nearly minimal space. In STACS, volume 2010 of Lecture Notes in Computer Science, pages 317–326. Springer, 2001. doi:10.1007/3-540-44693-1_28.
  • [13] Stefan Hermann, Sebastian Kirmayer, Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. Engineering minimal k-perfect hash functions. In ESA, LIPIcs. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2025.
  • [14] Stefan Hermann, Hans-Peter Lehmann, Giulio Ermanno Pibiri, Peter Sanders, and Stefan Walzer. PHOBIC: perfect hashing with optimized bucket sizes and interleaved coding. In ESA, volume 308 of LIPIcs, pages 69:1–69:17. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ESA.2024.69.
  • [15] Johan Ludwig William Valdemar Jensen. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta mathematica, 30(1):175–193, 1906. doi:10.1007/BF02418571.
  • [16] Florian Kurpicz, Hans-Peter Lehmann, and Peter Sanders. PaCHash: Packed and compressed hash tables. In ALENEX, pages 162–175. SIAM, 2023. doi:10.1137/1.9781611977561.CH14.
  • [17] Hans-Peter Lehmann. ByteHamster/MPHF-Experiments. Software, swhId: swh:1:dir:7b02eda8ffa5a9d61a086ff2fc34fb0fbdc3eff4 (visited on 2025-07-16). URL: https://github.com/ByteHamster/MPHF-Experiments, doi:10.4230/artifacts.23790.
  • [18] Hans-Peter Lehmann. Fast and Space-Efficient Perfect Hashing. PhD thesis, Karlsruhe Institute of Technology, Germany, 2024. doi:10.5445/IR/1000176432.
  • [19] Hans-Peter Lehmann, Thomas Mueller, Rasmus Pagh, Giulio Ermanno Pibiri, Peter Sanders, Sebastiano Vigna, and Stefan Walzer. Modern minimal perfect hashing: A survey. CoRR, abs/2506.06536, 2025. doi:10.48550/arXiv.2506.06536.
  • [20] Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. SicHash - small irregular cuckoo tables for perfect hashing. In ALENEX, pages 176–189. SIAM, 2023. doi:10.1137/1.9781611977561.CH15.
  • [21] Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. ShockHash: Near optimal-space minimal perfect hashing beyond brute-force. CoRR, abs/2310.14959, 2024. doi:10.48550/arXiv.2310.14959.
  • [22] Hans-Peter Lehmann, Peter Sanders, and Stefan Walzer. ShockHash: Towards optimal-space minimal perfect hashing beyond brute-force. In ALENEX, pages 194–206. SIAM, 2024. doi:10.1137/1.9781611977929.15.
  • [23] Hans-Peter Lehmann, Peter Sanders, Stefan Walzer, and Jonatan Ziegler. ByteHamster/ConsensusRecSplit. Software, swhId: swh:1:dir:eeef738e12bea287eae54a77ce81241587a91767 (visited on 2025-07-16). URL: https://github.com/ByteHamster/ConsensusRecSplit, doi:10.4230/artifacts.23789.
  • [24] Hans-Peter Lehmann, Peter Sanders, Stefan Walzer, and Jonatan Ziegler. Combined search and encoding for seeds, with an application to minimal perfect hashing. CoRR, abs/2502.05613, 2025. doi:10.48550/arXiv.2502.05613.
  • [25] Harry G. Mairson. The program complexity of searching a table. In FOCS, pages 40–47. IEEE, 1983. doi:10.1109/SFCS.1983.76.
  • [26] Kurt Mehlhorn. On the program size of perfect and universal hash functions. In FOCS, pages 170–175. IEEE, 1982. doi:10.1109/SFCS.1982.80.
  • [27] Michael Mitzenmacher and Eli Upfal. Probability and computing: Randomization and probabilistic techniques in algorithms and data analysis. Cambridge University Press, 2017.
  • [28] Anna Pagh and Rasmus Pagh. Uniform hashing in constant time and optimal space. SIAM J. Comput., 38(1):85–96, 2008. doi:10.1137/060658400.
  • [29] Anna Pagh, Rasmus Pagh, and Milan Ruzic. Linear probing with constant independence. In STOC, pages 318–327. ACM, 2007. doi:10.1145/1250790.1250839.
  • [30] Giulio Ermanno Pibiri and Roberto Trani. PTHash: Revisiting FCH minimal perfect hashing. In SIGIR, pages 1339–1348. ACM, 2021. doi:10.1145/3404835.3462849.
  • [31] Robert F. Rice. Some practical universal noiseless coding techniques. Jet Propulsion Laboratory, JPL Publication, 1979.
  • [32] Ian H. Witten, Alistair Moffat, and Timothy C. Bell. Managing gigabytes: compressing and indexing documents and images. Morgan Kaufmann, 1999.
  • [33] Wei-Pang Yang and M. W. Du. A backtracking method for constructing perfect hash functions from a set of mapping functions. BIT, 25(1):148–164, 1985. doi:10.1007/BF01934995.
  • [34] Jonatan Ziegler. Compacting minimal perfect hashing using symbiotic random search. Bachelor’s thesis, Karlsruhe Institute for Technology (KIT), 2025. doi:10.5445/IR/1000177814.