
Beyond Logarithmic Bounds: Querying in Constant Expected Time with Learned Indexes

Luis Alberto Croquevielle, Imperial College London, UK
Guang Yang, Imperial College London, UK
Liang Liang, Imperial College London, UK
EPFL, Lausanne, Switzerland
Ali Hadian, Imperial College London, UK
Thomas Heinis, Imperial College London, UK
Abstract

Learned indexes leverage machine learning models to accelerate query answering in databases, showing impressive practical performance. However, theoretical understanding of these methods remains incomplete. Existing research suggests that learned indexes have superior asymptotic complexity compared to their non-learned counterparts, but these findings have been established under restrictive probabilistic assumptions. Specifically, for a sorted array with n elements, it has been shown that learned indexes can find a key in O(log(logn)) expected time using at most linear space, compared with O(logn) for non-learned methods.

In this work, we prove O(1) expected time can be achieved with at most linear space, thereby establishing the tightest upper bound so far for the time complexity of an asymptotically optimal learned index. Notably, we use weaker probabilistic assumptions than prior research, meaning our work generalizes previous results. Furthermore, we introduce a new measure of statistical complexity for data. This metric exhibits an information-theoretical interpretation and can be estimated in practice. This characterization provides further theoretical understanding of learned indexes, by helping to explain why some datasets seem to be particularly challenging for these methods.

Keywords and phrases:
Learned Indexes, Expected Time, Stochastic Processes, Rényi Entropy
Copyright and License:
© Luis Alberto Croquevielle, Guang Yang, Liang Liang, Ali Hadian, and Thomas Heinis; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Predecessor queries
Related Version:
Extended Version: https://arxiv.org/abs/2405.03851 [6]
Funding:
This work is funded by DNAMIC (grant 101115389), NEO (grant 101115317), and ANID Chile through the Scholarship Program (DOCTORADO BECAS CHILE/2023 - 72230222).
Editors:
Sudeepa Roy and Ahmet Kara

1 Introduction

Query answering is of central importance in database systems, and developing efficient algorithms for this task is a key research problem. These algorithms often use an index structure to access records faster, at the cost of higher space usage to store the index. Arguably, the most common scenario in query answering involves point queries, where the aim is to retrieve all tuples where an attribute has an exact value [12, 21]. Also common are range queries, which ask for all tuples where an attribute is within a given interval.

Solutions based on hash tables can answer point queries efficiently [5], but are ineffective in handling range queries. To generalize well to these cases, the relevant attribute is usually stored as a sorted array A. In this configuration, answering point and range queries essentially comes down to finding an element in a sorted array. A point query for value q can be answered by finding the position of q in the array if it exists. A range query matching an interval [q, q′] can be answered by finding the first element greater than or equal to q, and then scanning the array A from that position until an element greater than q′ is found, or the array ends.

Since the array A is sorted, binary search can be used to find q or the first element greater than q. If A has n elements (also called keys), binary search takes O(logn) operations. This is asymptotically optimal in a random-access machine (RAM) model of computation, for comparison-based searching [17]. However, other index structures such as B+Trees [13, 19] are preferred in practice. They have the same asymptotic complexity but make use of hardware properties to improve performance.
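The reduction of point and range queries to sorted-array search can be sketched in Python (our illustration, with made-up data; the standard bisect module provides the binary search):

```python
import bisect

A = [3, 7, 7, 12, 19, 25, 31]  # sorted attribute values (example data)

def point_query(A, q):
    """Position of q in sorted array A, or -1 if absent (binary search, O(log n))."""
    i = bisect.bisect_left(A, q)
    return i if i < len(A) and A[i] == q else -1

def range_query(A, lo, hi):
    """All keys in [lo, hi]: binary-search the first key >= lo, then scan right."""
    i = bisect.bisect_left(A, lo)
    out = []
    while i < len(A) and A[i] <= hi:
        out.append(A[i])
        i += 1
    return out
```

Both queries cost one O(log n) search; the range query additionally pays for the output size.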

It was suggested in [18] that classical index structures can be understood as models that take a key as input and predict its position in the sorted array. From this perspective, indexes such as B+Tree are potentially suboptimal because they do not take advantage of patterns in the data to improve performance. This has led to the introduction of learned indexes [18, 16, 8, 10, 11], which use machine learning models to predict the position of a key.

Learned indexes combine predictive and corrective steps, as illustrated in Figure 1. In the predictive step, a machine learning model is used to estimate the position of a key. Since this estimation may be wrong, a corrective step finds the exact position, usually by searching around the position estimated by the model. In principle, a learned index can improve upon classical methods by limiting the search to a small range, determined by the prediction error, rather than having to search the entire array. Most learned indexes use simple machine learning models, such as piecewise linear functions, which are fast to compute.

Figure 1: Predictive and corrective steps of a learned index.

While learned indexes exhibit impressive experimental performance [8, 11, 14, 16], their theoretical understanding is still at an early stage. Existing research shows that specific learned indexes can achieve better asymptotic space or time complexity than traditional structures [34, 9], but these results rely on disparate and potentially restrictive assumptions. Furthermore, some datasets are notoriously challenging for learned indexes [24], and no theoretical analysis has yet offered an adequate explanation for this.

In this work, we analyze the asymptotic expected complexity of learned indexes. Specifically, we prove that learned indexes can achieve constant expected query time with at most linear space, constituting the best upper bound so far for the performance of an asymptotically optimal learned index. Our proof is constructive, meaning we provide a specific index that achieves this complexity. This type of index has been considered before [34], but we contribute tighter complexity bounds than previously available. Compared to prior research, our complexity bounds hold under a more general probabilistic model, and in particular, our results do not require any independence assumptions for the data generating process.

We also address the question of why certain datasets are challenging for learned indexes. It has been suggested that, in those cases, the data distribution is inherently hard to learn [24], and several metrics have been proposed to capture this difficulty [34, 9, 32]. We introduce a new measure of data complexity ρf with better mathematical properties. Our metric is related to Rényi entropy in information theory, helping to provide a theoretical grounding for ρf. Also, ρf can be statistically estimated, making it potentially useful in practice to predict a priori how well a learned index will perform. Our experiments support this idea.

2 Preliminaries

2.1 Data

Learned indexes use statistical models. Therefore, theoretical analysis of learned indexes is done under a probabilistic model of some kind [34, 9]. In general, keys are assumed to come from a data generating process with specified properties. For instance, in [34] it is assumed that the keys are independent and identically distributed, while [9] uses a completely different model. We strive to generalize the probabilistic setting as much as possible.

Formally, we model a database attribute X as a stochastic process X = {X_i}_{i∈ℕ}. We assume all random variables X_i have the same cumulative distribution function F. For any time n, we can observe the data X_1, X_2, …, X_n generated so far. Sorting the {X_i}_{i=1}^n gives rise to a new set of random variables, usually called the order statistics [7], which we denote X_(1), X_(2), …, X_(n). We consider the sorted array A at time n to be formed by this last group of random variables, such that A[i] = X_(i) for all i = 1, …, n. Duplicate keys are possible in principle, and do not constitute a problem. In that case, X_(i+1) = X_(i) for some i and all analysis remains the same.

That way, in our setting a stochastic process generates the data in attribute X. For each n, there is a sorted array A with the first n keys from this process, and a learned index can be built to search this array. For now, we do not assume anything about F. Also, we do not assume independence of the {Xi}i=1n variables. In Sections 3 and 4, we analyze our results under certain probabilistic assumptions. In Section 5, we compare our model and results to those of prior work.

2.2 Learning Problem

Assume array A has n keys. The learning problem can be formalized by the introduction of a 𝐫𝐚𝐧𝐤 function. For any number q, 𝐫𝐚𝐧𝐤A(q) is defined as the number of elements in A that are smaller than or equal to q, that is

$$\mathrm{rank}_A(q) = \sum_{i=1}^{n} \mathbf{1}_{X_{(i)} \le q},$$

where 𝟏 is the indicator function. We omit the subscript A if it is clear from context. For a point query with value q, rank(q) gives the position of q in the array if it exists. For a range query matching an interval [q, q′], the relevant records are found by scanning the array A, starting at position rank(q) and ending when an element greater than q′ is found. Since rank is sufficient to answer these queries, we focus on the problem of learning the rank function under specific probabilistic assumptions.
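Concretely (our sketch, not the paper's), rank_A coincides with the insertion point returned by Python's bisect.bisect_right, which we can check against the indicator-sum definition:

```python
import bisect

def rank(A, q):
    """rank_A(q): number of keys in the sorted array A that are <= q."""
    return bisect.bisect_right(A, q)

A = [2, 4, 4, 9, 15]
for q in [1, 2, 4, 9.5, 20]:
    # matches the definition rank_A(q) = sum of indicators 1{X_(i) <= q}
    assert rank(A, q) == sum(1 for x in A if x <= q)
```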

2.3 Model of Computation

The model of computation we use is the Random Access Machine (RAM) under the uniform cost criterion [1]. In this model, each instruction requires one unit of time and each register (storing an arbitrary integer) uses one unit of space. This is in contrast to part of the existing literature. In [34] the computation model is such that O(logn) space units are needed to store an integer n. On the other hand, in [9] an I/O model of computation is used [20], which assumes the existence of an external memory (e.g., disk) from which data is read and written to. Under this model, time complexity consists only of the number of I/O operations.

We argue the RAM model under the uniform cost criterion is better suited for analysing learned indexes than either of these two. On the one hand, numbers are usually represented as data types with a fixed number of bits (e.g., long int in C++). On the other hand, most learned indexes work in main memory and avoid the use of disk storage. To make results comparable, in Section 5 we restate all previous complexity results using our RAM model.

Regarding space complexity, we take the standard approach of only counting the redundant space introduced by the index, and not the space necessary to store the array A in the first place [10]. Regarding time complexity, we do not consider the time needed to train the model, just for finding the keys. We focus the asymptotic analysis on expected time, where the expectation is taken over the realizations of the stochastic process and the choice of query.

For any n, let Rn be a procedure that takes the sorted array A=[X(1),,X(n)] as input, and outputs an index structure Rn(A). Denote by S(Rn(A)) the space overhead of Rn(A). Also, for any q denote by T(Rn(A),q) the query time for q over array A when using the index Rn(A). We are interested in bounds for S and T that do not depend on A or q. With that in mind, we denote

$$\bar{S}_n = \sup_{A=[X_{(1)},\dots,X_{(n)}]} S(R_n(A)), \qquad \bar{T}_n = \mathbb{E}[T(R_n(A),q)],$$

where 𝔼 denotes expected value. In Section 3.1 we state asymptotic results for S¯n and T¯n.

3 Main Results and ESPC Index

3.1 Complexity Bounds

We mentioned above that all the X_i share the same cumulative distribution function. Formally, we consider that the X_i are defined on a probability space (Ω, ℱ, ℙ), such that F_{X_i}(x) := ℙ(X_i ≤ x) is the same for all i, and is denoted by F. Now, assume F can be characterized by a square-integrable density function f and define ρ_f as:

$$\rho_f = \|f\|_2^2 = \int f^2(x)\,dx.$$

Moreover, assume f has bounded support [a,b], that is, f(x) = 0 for all x ∉ [a,b]. This can be a reasonable assumption in many practical applications [22, 23] where there are known bounds to the values of data: for instance, for storing medical information such as test results, or personal financial information. Our results can be extended to the case of f with unbounded support, by incurring a penalty on space usage. In Section 4.3.3 we show how this penalty relates to the tails of the probability distribution.
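As a concrete illustration (ours, not from the paper), ρ_f has a closed form for simple densities: the uniform density on [a,b] has f ≡ 1/(b−a), so ρ_f = 1/(b−a) and (b−a)ρ_f = 1. The sketch below estimates ρ_f numerically with a midpoint Riemann sum; for the triangular density 2(x−a)/(b−a)² the exact value is 4/(3(b−a)):

```python
def rho_f(f, a, b, m=100_000):
    """Estimate rho_f = integral of f(x)^2 over the support [a, b] (midpoint rule)."""
    h = (b - a) / m
    return sum(f(a + (i + 0.5) * h) ** 2 for i in range(m)) * h

a, b = 2.0, 6.0
uniform = lambda x: 1.0 / (b - a)                    # rho_f = 1/(b-a) = 0.25
triangular = lambda x: 2.0 * (x - a) / (b - a) ** 2  # rho_f = 4/(3*(b-a)) = 1/3
```

A peaked density concentrates mass and drives ρ_f up, which by the theorems below translates into a larger constant in the expected query time.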

The main complexity results are stated in Theorems 1 and 2. Both are attained by the ESPC index, which we introduce in Section 3.2. Both theorems as stated here assume that the query parameter follows the same probability distribution as the keys. This is done for ease of presentation. In Section 4.3.2 we prove that these results can be generalized to include the case when that assumption does not hold.

Theorem 1.

Suppose f has support [a,b] and ρ_f < ∞. Define ρ = log((b−a)ρ_f). Then, there is a procedure R_n for building learned indexes such that S̄_n = O(n) and T̄_n = O(ρ). That is, for an array A with n keys an index can be built with space overhead O(n) and expected query time O(ρ). Since ρ is independent of n, expected time is asymptotically O(1).

Theorem 2.

Suppose f has support [a,b] and ρ_f < ∞. Define ρ = log((b−a)ρ_f). Then, there is a procedure R_n for building learned indexes such that S̄_n = O(n/log n) and T̄_n = O(ρ + log(log n)). That is, for an array A with n keys an index can be built with space overhead O(n/log n) and expected query time O(ρ + log(log n)). Since ρ is independent of n, expected time is asymptotically O(log(log n)).

We prove Theorems 1 and 2 in Section 4.3.1, as corollaries of a more general result.

3.2 Equal-Split Piecewise Constant Index

All complexity results in Section 3.1 are achieved by the Equal-Split Piecewise Constant (ESPC) index, which we now describe. As explained in Section 1, learned indexes usually combine predictive and corrective steps. The ESPC index follows this idea, and is remarkably simple to define and implement. It is important to mention that a similar design, the PCA Index, was analyzed in [34], but the complexity bounds we prove for this type of index are tighter and more general. Moreover, there are some design differences. First, the ESPC index uses exponential search for the corrective step, while the PCA Index uses binary search. More importantly, the PCA Index presumes knowledge of the interval [a,b] where the distribution is supported, a potentially restrictive assumption that we dispense with for the ESPC index.

The general idea is to learn a function 𝐫^ to approximate the 𝐫𝐚𝐧𝐤 function up to some error. Then 𝐫𝐚𝐧𝐤 can be computed exactly by first evaluating 𝐫^ and then correcting the error by use of an exponential search algorithm [2]. If the prediction error is small, this gives a procedure for computing 𝐫𝐚𝐧𝐤 exactly in low expected time. The approximator function 𝐫^ is built as a piecewise constant function defined over equal-length subintervals. This equal-length property ensures that 𝐫^ can be evaluated in constant time.

We now describe how the approximator r̂ is defined. Take an array A formed by the sorted keys X_(1), …, X_(n). For a positive integer K, divide the range [X_(1), X_(n)] into K subintervals of equal length, which we denote by {I_k}_{k=1}^K. Figure 2 illustrates this type of partition for a dataset with 40 keys and K = 4. Formally, for each k = 1, …, K the k-th subinterval is defined as I_k = [t_{k−1}, t_k] where

$$t_k = X_{(1)} + k\delta \quad\text{with}\quad \delta = \frac{X_{(n)} - X_{(1)}}{K}.$$
Figure 2: Partition of key range into four equal-length intervals, with approximator of 𝐫𝐚𝐧𝐤.

Notice that all intervals have the same length, but not necessarily the same number of keys. Now, we want 𝐫^ to be defined as a piecewise constant function. If q does not belong to any Ik, this means that either q<X(1) or q>X(n), so that 𝐫𝐚𝐧𝐤(q)=0 or 𝐫𝐚𝐧𝐤(q)=n, respectively. On the other hand, for each subinterval Ik the ESPC index stores a constant value r^k which approximates 𝐫𝐚𝐧𝐤 over Ik. Combining these ideas, we define 𝐫^ as

$$\hat{\mathbf{r}}(q) = \begin{cases} 0 & \text{if } q < X_{(1)} \\ \hat{r}_1 & \text{if } q \in I_1 \\ \;\vdots & \\ \hat{r}_K & \text{if } q \in I_K \\ n & \text{if } q > X_{(n)}. \end{cases}$$

Figure 2 represents this type of piecewise approximator over equal-length subintervals. It remains to define the {r̂_k}_{k=1}^K values. Since rank is an increasing function, for all q ∈ I_k it holds that rank(t_{k−1}) ≤ rank(q) ≤ rank(t_k). This motivates a candidate for r̂_k in the form of

$$\hat{r}_k = \frac{1}{2}\big(\mathrm{rank}(t_{k-1}) + \mathrm{rank}(t_k)\big) = \mathrm{rank}(t_{k-1}) + \frac{n_k}{2},$$

where n_k = rank(t_k) − rank(t_{k−1}) is the number of keys in I_k. As we prove in Section 4.1, this means that r̂_k can approximate rank(q) for any q ∈ I_k with error at most n_k/2, so that the quality of r̂ as an approximator depends on the expected number of keys per interval. Algorithm 1 details the index construction, based on our definitions for {I_k} and {r̂_k}.

Algorithm 1 Construction of ESPC index.
Input: Sorted array A, positive integer K
Output: ESPC index for A with K equal-length subintervals
1  n ← length(A)
2  δ ← (X_(n) − X_(1)) / K
3  N ← array of length K with all zeros
4  // Compute number of keys in each subinterval, store in N
5  for i = 1 to n do
6    k ← ⌈(1/δ)(A[i] − X_(1))⌉
7    N[k] ← N[k] + 1
   end for
8  r ← array of length K
9  r[1] ← N[1]/2
10 // Compute rank estimator r̂_k for each subinterval, store in r
11 for k = 2 to K do r[k] ← r[k−1] + (N[k−1] + N[k])/2
12 return R = (r, δ)

The approximator r̂ can be evaluated in constant time for any q. If q < X_(1) then r̂(q) = 0, and if q > X_(n) then r̂(q) = n. If neither is true, then q belongs to some interval I_k. In that case, r̂(q) = r̂_k where k is given by k = ⌈K(q − X_(1))/(X_(n) − X_(1))⌉ = ⌈(1/δ)(q − X_(1))⌉. To find rank(q) exactly, the ESPC index then performs an exponential search on array A around position r̂(q). We prove in Section 4 that the expected value of the approximation error |rank(q) − r̂(q)| is low enough to establish the results of Section 3.1. Algorithm 2 formalizes this evaluation process. It assumes the existence of a procedure ExponentialSearch(A, i, q) which uses exponential search around index i to find the position of q in array A.

Algorithm 2 Evaluation of ESPC index.
Input: Sorted array A, query q, ESPC index R = (r, δ)
Output: Exact value of rank(q)
1 if q < X_(1) then return 0
2 else if q > X_(n) then return n
3 else
4   k ← ⌈(1/δ)(q − X_(1))⌉
5   i ← r[k]
6   return ExponentialSearch(A, i, q)
end if

4 Complexity Proofs and Analysis

4.1 Preliminary Results

We now turn to an analysis of the space and time complexity of the ESPC index. In the following, assume the sorted array A has n elements and the index is built with K subintervals {I_k}_{k=1}^K. As seen from Algorithm 1, an ESPC index can be represented as a tuple R = (r, δ) where r is an array of length K storing the approximators {r̂_k}_{k=1}^K and δ is the length of the subintervals. For q ∈ ℝ, recall that r̂(q) is the approximation for rank(q) before the exponential search (see Algorithm 2).

In the RAM model with the uniform cost criterion, each stored value uses 1 unit of space. Hence, the space used to store an ESPC index R = (r, δ) is O(K). Additionally, as seen from Algorithm 2, the evaluation for any q takes constant time, plus the exponential search. An exponential search takes time O(2 log ε) = O(log ε) [2], where ε is the difference between the starting position and the correct position. This is summarized in the following result.

Proposition 3.

Let A be a sorted array with n entries, and R be an ESPC index for A with K subintervals. Then, R uses O(K) space and, given q, the ESPC index R can be used to find rank(q) in O(log ε) time, where ε = |rank(q) − r̂(q)|.

In light of Proposition 3, a way to estimate the time complexity of the ESPC index is to bound the expected value of ε as a function of K. Hence, our focus is on finding an expression for the approximation error ε=|𝐫𝐚𝐧𝐤(q)𝐫^(q)|. We start with the following result, which was mentioned in passing in Section 3.2.

Lemma 4.

Let k ∈ {1,…,K} and q ∈ I_k. Then, it holds that |rank(q) − r̂(q)| ≤ n_k/2, where n_k is the number of keys that fall in interval I_k.

The proof is straightforward and can be found in the extended version of the paper [6]. We now introduce other necessary notation. First, notice that if f has bounded support [a,b], then [X_(1), X_(n)] ⊆ [a,b] holds with probability 1. Now, for a given K we can define a set of subintervals {J_k}_{k=1}^K that partition the interval [a,b] into K equal-length parts, similarly to how the {I_k}_{k=1}^K form a partition of [X_(1), X_(n)]. See Figure 3 for an illustration with K = 3.

From this definition, it may be the case that some I_k is not entirely contained within just one J_ℓ interval, as Figure 3 illustrates for the case of I_2. Notice however that if this happens, then I_k must be contained in the union of two consecutive J_ℓ intervals, as exemplified by I_2 ⊆ J_1 ∪ J_2 in Figure 3. We formalize this observation below, which is easily verified.

Fact 5.

Suppose [X_(1), X_(n)] ⊆ [a,b]. Take k, ℓ ∈ {1,…,K} such that I_k ∩ J_ℓ ≠ ∅. Then either I_k ⊆ J_{ℓ−1} ∪ J_ℓ or I_k ⊆ J_ℓ ∪ J_{ℓ+1} holds. As a consequence, it also holds that I_k ⊆ J_{ℓ−1} ∪ J_ℓ ∪ J_{ℓ+1}, where J_0 and J_{K+1} are defined as the empty set for completeness.

Figure 3: Equal-length partitions for [X(1),X(n)] (given by {Ik}) and [a,b] (given by {Jk}).

4.2 Main Probabilistic Bound

Recapping, by Proposition 3 the ESPC index time complexity for input q depends on the approximation error ε = |rank(q) − r̂(q)|. By Lemma 4, the error ε critically depends on the number of keys in the subintervals {I_k}. We now state and prove the key results which provide bounds for the approximation error. As mentioned in Section 3.1, we assume that the density f has bounded support [a,b] for some a, b. That is, f(x) = 0 for any x ∉ [a,b]. Our results can be extended to the case of f with unbounded support by incurring a penalty in terms of space complexity. Notice that the {J_k} implicitly define a random variable, which models how the {X_i}_{i=1}^n are distributed among the subintervals. This can be characterized by a parameter (p_1, …, p_K) where p_k = ℙ(X_i ∈ J_k). Notice that p_k is not indexed by i because all X_i follow the same distribution. In this context, the core of the time complexity analysis is contained in the following theorem.

Theorem 6.

Suppose f has support [a,b]. Let A be an array consisting of the {X_i}_{i=1}^n sorted in ascending order. Let r̂ be the approximator of the ESPC index with K subintervals, as described in Section 3.2, and denote the approximation error for q as ε(q) = |rank(q) − r̂(q)|. Then, if the query parameter q has the same density function f as the {X_i}, it holds that
$$\mathbb{E}[\varepsilon] \le \frac{3n}{2}\sum_{k=1}^{K} p_k^2.$$

Proof.

We know that 𝔼[ε] = 𝔼[𝔼[ε | X_1, …, X_n]], by the tower property of conditional expectation. The random nature of the error depends on both the choice of q and the values of {X_i}_{i=1}^n. To see this, recall that the variables X_1, …, X_n determine the value of both rank and r̂. The inner expected value gives

$$\mathbb{E}[\varepsilon \mid X_1,\dots,X_n] = \int \varepsilon(q) f(q)\,dq = \int_{X_{(1)}}^{X_{(n)}} \varepsilon(q) f(q)\,dq,$$

where the last equality comes from the fact that ε(q) = 0 if q ∉ [X_(1), X_(n)]. Replacing ε(q) = |rank(q) − r̂(q)| and noticing that the {I_k} form a partition of [X_(1), X_(n)]:

$$\mathbb{E}[\varepsilon \mid X_1,\dots,X_n] = \sum_{k=1}^{K}\int_{I_k} |\mathrm{rank}(q) - \hat{\mathbf{r}}(q)|\, f(q)\,dq \le \frac{1}{2}\sum_{k=1}^{K} n_k \int_{I_k} f(q)\,dq,$$

where n_k is the number of keys in subinterval I_k and we have used Lemma 4 to bound the error within each I_k. Now, denote by n_{k,ℓ} the number of keys in I_k ∩ J_ℓ. Then, we can write

$$\mathbb{E}[\varepsilon \mid X_1,\dots,X_n] \le \frac{1}{2}\sum_{k=1}^{K}\left(\sum_{\ell=1}^{K} n_{k,\ell}\right)\int_{I_k} f(q)\,dq = \frac{1}{2}\sum_{\ell=1}^{K}\sum_{k=1}^{K} n_{k,\ell}\int_{I_k} f(q)\,dq, \tag{1}$$

where first we have written n_k as Σ_{ℓ=1}^{K} n_{k,ℓ} and then we have exchanged the order of summation. Now, denote by L(k) the set of indices ℓ such that I_k ∩ J_ℓ ≠ ∅. Notice that n_{k,ℓ} = 0 if ℓ ∉ L(k). Hence, inequality (1) becomes:

$$\mathbb{E}[\varepsilon \mid X_1,\dots,X_n] \le \frac{1}{2}\sum_{\ell=1}^{K}\sum_{k:\,\ell\in L(k)} n_{k,\ell}\int_{I_k} f(q)\,dq. \tag{2}$$

By Fact 5 we know that I_k ⊆ J_{ℓ−1} ∪ J_ℓ ∪ J_{ℓ+1} for all ℓ and k such that ℓ ∈ L(k). Using the notation p_ℓ = ℙ(X_i ∈ J_ℓ) = ∫_{J_ℓ} f(q) dq introduced above, from inequality (2) we get

$$\mathbb{E}[\varepsilon \mid X_1,\dots,X_n] \le \frac{1}{2}\sum_{\ell=1}^{K}(p_{\ell-1} + p_\ell + p_{\ell+1})\sum_{k:\,\ell\in L(k)} n_{k,\ell} = \frac{1}{2}\sum_{\ell=1}^{K}(p_{\ell-1} + p_\ell + p_{\ell+1})\, m_\ell, \tag{3}$$

where m_ℓ denotes the number of keys in J_ℓ. Now, applying expected value on both sides of inequality (3), we get that 𝔼[ε] = 𝔼[𝔼[ε | X_1, …, X_n]] ≤ ½ Σ_{ℓ=1}^{K} (p_{ℓ−1} + p_ℓ + p_{ℓ+1}) 𝔼[m_ℓ]. Furthermore, we have

$$\mathbb{E}[m_\ell] = \mathbb{E}\left[\sum_{i=1}^{n}\mathbf{1}_{X_i\in J_\ell}\right] = \sum_{i=1}^{n}\mathbb{E}[\mathbf{1}_{X_i\in J_\ell}] = \sum_{i=1}^{n}\mathbb{P}(X_i\in J_\ell) = n\,p_\ell.$$

Replacing in the expression for 𝔼[ε], we get the following inequality:

$$\mathbb{E}[\varepsilon] \le \frac{1}{2}\sum_{\ell=1}^{K}(p_{\ell-1} + p_\ell + p_{\ell+1})\,\mathbb{E}[m_\ell] = \frac{n}{2}\sum_{\ell=1}^{K}\left(p_{\ell-1}p_\ell + p_\ell^2 + p_\ell p_{\ell+1}\right). \tag{4}$$

Now, using the Cauchy–Schwarz inequality and the fact that p_0 = ℙ(X_i ∈ J_0) = 0, we get:

$$\sum_{\ell=1}^{K} p_{\ell-1}p_\ell \le \left(\sum_{\ell=1}^{K} p_{\ell-1}^2\,\sum_{\ell=1}^{K} p_\ell^2\right)^{1/2} = \left(\sum_{\ell=1}^{K-1} p_\ell^2\,\sum_{\ell=1}^{K} p_\ell^2\right)^{1/2} \le \left(\sum_{\ell=1}^{K} p_\ell^2\,\sum_{\ell=1}^{K} p_\ell^2\right)^{1/2} = \sum_{\ell=1}^{K} p_\ell^2.$$

Similarly, using the Cauchy–Schwarz inequality and the fact that p_{K+1} = 0, we can show that Σ_{ℓ=1}^{K} p_ℓ p_{ℓ+1} ≤ Σ_{ℓ=1}^{K} p_ℓ². Finally, applying these inequalities in (4), we get the desired result:

$$\mathbb{E}[\varepsilon] \le \frac{n}{2}\sum_{\ell=1}^{K}\left(p_{\ell-1}p_\ell + p_\ell^2 + p_\ell p_{\ell+1}\right) \le \frac{n}{2}\left(\sum_{\ell=1}^{K} p_\ell^2 + \sum_{\ell=1}^{K} p_\ell^2 + \sum_{\ell=1}^{K} p_\ell^2\right) = \frac{3n}{2}\sum_{\ell=1}^{K} p_\ell^2.$$

Theorem 6 is the key tool for proving our results, as we show in Section 4.3.
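As a sanity check (ours, not part of the paper), the bound of Theorem 6 can be probed empirically. For i.i.d. Uniform(0,1) keys, each J_k has p_k = 1/K exactly, so the bound reads 𝔼[ε] ≤ (3n/2)(1/K); with K = n this equals 3/2. The script below estimates 𝔼[ε] by Monte Carlo:

```python
import bisect
import math
import random

def trial_error(n, K, rng):
    """One realization of |rank(q) - r_hat(q)| for the ESPC approximator."""
    A = sorted(rng.random() for _ in range(n))
    delta = (A[-1] - A[0]) / K
    N = [0] * (K + 1)
    for x in A:
        k = min(max(math.ceil((x - A[0]) / delta), 1), K)
        N[k] += 1
    r = [0.0] * (K + 1)
    before = 0
    for k in range(1, K + 1):
        r[k] = before + N[k] / 2        # midpoint rank estimate, as in Algorithm 1
        before += N[k]
    q = rng.random()                    # query drawn from the same distribution
    if q < A[0]:
        pred = 0.0
    elif q > A[-1]:
        pred = float(n)
    else:
        k = min(max(math.ceil((q - A[0]) / delta), 1), K)
        pred = r[k]
    return abs(bisect.bisect_right(A, q) - pred)

rng = random.Random(42)
n = K = 256
trials = 2000
empirical = sum(trial_error(n, K, rng) for _ in range(trials)) / trials
bound = 1.5 * n / K                     # (3n/2) * sum_k p_k^2 with p_k = 1/K
```

The empirical mean error sits comfortably below the bound, consistent with the theorem.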

4.3 Proof of Asymptotic Complexity

We now use Theorem 6 to prove the results stated in Section 3.1. For this, we need to relate the error bound in Theorem 6 to ρ_f = ‖f‖₂². Notice that

$$p_k^2 = \left(\int_{J_k} f(x)\,dx\right)^2 \le \left(\int_{J_k} 1\,dx\right)\left(\int_{J_k} f^2(x)\,dx\right) = |J_k|\int_{J_k} f^2(x)\,dx,$$

where |J_k| denotes the length of interval J_k and we have used the Cauchy–Schwarz inequality to bound (∫_{J_k} f(x) dx)². Substituting this in the error bound from Theorem 6 and using the fact that |J_k| = (b−a)/K for all k, we obtain

$$\mathbb{E}[\varepsilon] \le \frac{3n}{2}\sum_{k=1}^{K} p_k^2 \le \frac{3n}{2}\sum_{k=1}^{K}\frac{(b-a)}{K}\int_{J_k} f^2(x)\,dx = \frac{3(b-a)}{2}\cdot\frac{n}{K}\sum_{k=1}^{K}\int_{J_k} f^2(x)\,dx.$$
Since {J_k} is a partition of [a,b] and f is square-integrable:
$$\mathbb{E}[\varepsilon] \le \frac{3(b-a)}{2}\cdot\frac{n}{K}\int_a^b f^2(x)\,dx = \frac{3(b-a)}{2}\cdot\frac{n}{K}\int f^2(x)\,dx = \frac{3(b-a)\rho_f}{2}\cdot\frac{n}{K}.$$

We can summarize the main error bound for the ESPC index in the following proposition.

Proposition 7.

Under the conditions of Theorem 6, and assuming ρ_f < ∞, it holds that

$$\mathbb{E}[\varepsilon] \le \frac{3(b-a)\rho_f}{2}\cdot\frac{n}{K}.$$

The complexity results from Section 3.1 are proved in Section 4.3.1 as corollaries of Proposition 7. In Sections 4.3.2 and 4.3.3 we discuss two important generalizations of our results. In Section 5, we present further analysis and a comparison to previous results.

4.3.1 Proofs of Main Complexity Bounds

Proof of Theorem 1.

Let n ∈ ℕ and consider the following procedure R_n for generating an index: given a sorted array A = [X_(1), …, X_(n)], the output R_n(A) consists of an ESPC index built with K = n subintervals. By Proposition 3, this means that the space overhead is S(R_n(A)) = O(K) = O(n). As this is independent of the specific values stored in A, we have S̄_n = O(n). On the other hand, by Theorem 6 and Proposition 7 we can bound the expected prediction error by

$$\mathbb{E}[\varepsilon] \le \frac{3(b-a)\rho_f}{2}\cdot\frac{n}{n} = \frac{3(b-a)\rho_f}{2}.$$

Furthermore, Jensen’s inequality implies

$$\mathbb{E}[\log\varepsilon] \le \log\mathbb{E}[\varepsilon] \le \log((b-a)\rho_f) + \log(3/2) \le \log((b-a)\rho_f) + 1.$$

By Proposition 3, we know the query time for q for the ESPC index is O(log ε(q)). Accordingly, the expected query time is T̄_n = O(𝔼[log ε]) = O(log((b−a)ρ_f) + 1) = O(log((b−a)ρ_f)). This concludes the proof.

Proof of Theorem 2.

We use the same argument as in the proof of Theorem 1, but setting K = n/log n subintervals instead of K = n.

4.3.2 Distribution of query parameter

As can be seen from the statement of Theorem 6, we have assumed that q has the same probability density function f as the {Xi}. We believe this to be a reasonable approximation in many contexts, but our analysis can be extended to q otherwise distributed, which constitutes an important generalization with respect to previous work [34].

Proposition 8.

Assume the conditions of Theorem 6. Additionally, suppose ρ_f < ∞ and that the search parameter q distributes according to a density function g. Further suppose that g is square-integrable and denote ρ_g = ‖g‖₂². Then, it holds that

$$\mathbb{E}[\varepsilon] \le \frac{3(b-a)\sqrt{\rho_f\rho_g}}{2}\cdot\frac{n}{K}.$$

See Appendix A for the proof. Hence, for the general case where q does not distribute according to f, our results retain essentially the same form. For ease of presentation, we assume g = f in subsequent analyses. By Proposition 8, this assumption entails no loss of generality; for q otherwise distributed, we substitute √(ρ_f ρ_g) in place of ρ_f.

4.3.3 Unbounded support

Proposition 7 can be extended to f with unbounded support. For a, b ∈ ℝ denote by A_{a,b} the event that [X_(1), X_(n)] ⊄ [a,b]. Take values of a, b such that ℙ(A_{a,b}) ≤ 1/log n. Then, the expected query time is O(𝔼[log ε]), where the expected value can be written as

$$\mathbb{E}[\log\varepsilon] = E_1 + E_2 \quad\text{where}\quad E_1 = \mathbb{E}[\log\varepsilon \mid A_{a,b}^C]\,\mathbb{P}(A_{a,b}^C) \;\;\text{and}\;\; E_2 = \mathbb{E}[\log\varepsilon \mid A_{a,b}]\,\mathbb{P}(A_{a,b}).$$

Since the prediction error ε can never exceed n, the E2 term can be bounded as

$$E_2 = \mathbb{E}[\log\varepsilon \mid A_{a,b}]\,\mathbb{P}(A_{a,b}) \le \log n\cdot\mathbb{P}(A_{a,b}) \le \log n\cdot\frac{1}{\log n} = 1.$$

On the other hand, the event A_{a,b}^C means that [X_(1), X_(n)] ⊆ [a,b]. This allows us to apply Jensen's inequality along with Proposition 7 to get

$$E_1 = \mathbb{E}[\log\varepsilon \mid A_{a,b}^C]\,\mathbb{P}(A_{a,b}^C) \le \mathbb{E}[\log\varepsilon \mid A_{a,b}^C] \le \log\mathbb{E}[\varepsilon \mid A_{a,b}^C] \le \log\left(\frac{3(b-a)\rho_f}{2}\cdot\frac{n}{K}\right).$$

Putting everything together, this means that expected query time is

$$O(\mathbb{E}[\log\varepsilon]) = O\left(\log\left(\frac{3(b-a)\rho_f}{2}\cdot\frac{n}{K}\right) + 1\right) = O\left(\log\left((b-a)\rho_f\cdot\frac{n}{K}\right)\right). \tag{5}$$

Hence, time complexity depends on the values of a, b that guarantee ℙ(A_{a,b}) ≤ 1/log n. These values depend on how heavy-tailed the distribution is. We propose the following two variations of Theorem 1, with other results being available based on different assumptions.

Theorem 9.

Suppose the {X_i} have finite mean μ and variance σ², and that ρ_f < ∞. Then, there is a procedure R_n for building learned indexes with S̄_n = O(n√(n log n)) and T̄_n = O(ρ), where ρ = log(2σρ_f). Since ρ is independent of n, expected time is asymptotically O(1).

Theorem 10.

Suppose ρ_f < ∞ and that the distribution of the {X_i} is subexponential [30], meaning there is a constant C > 0 such that ℙ(|X_i| ≥ x) ≤ 2e^{−Cx} for all x ≥ 0. Then, there is a procedure R_n for building learned indexes with S̄_n = O(n log n) and T̄_n = O(ρ), where ρ = log(4ρ_f/C). Since ρ is independent of n, expected time is asymptotically O(1).

We include below the proof of Theorem 9. For Theorem 10, the proof is available in the extended version of the paper [6]. It can also be viewed as a corollary of Lemma 3.4 in [34].

Proof of Theorem 9.

Set a = μ − σ√(n log n) and b = μ + σ√(n log n). It holds that

$$\mathbb{P}(A_{a,b}) = \mathbb{P}\left([X_{(1)},X_{(n)}]\not\subseteq[a,b]\right) = \mathbb{P}\left(\bigcup_{i=1}^{n}\{X_i\notin[a,b]\}\right) \le \sum_{i=1}^{n}\mathbb{P}(X_i\notin[a,b]),$$
where the last inequality is due to the union bound. By definition of a and b:
$$\mathbb{P}(A_{a,b}) \le \sum_{i=1}^{n}\mathbb{P}\left(\{X_i < \mu-\sigma\sqrt{n\log n}\}\cup\{X_i > \mu+\sigma\sqrt{n\log n}\}\right) = \sum_{i=1}^{n}\mathbb{P}\left(|X_i-\mu| > \sigma\sqrt{n\log n}\right) \le \sum_{i=1}^{n}\frac{1}{n\log n} = \frac{1}{\log n},$$

where the last upper bound comes from Chebyshev's inequality. Since ℙ(A_{a,b}) ≤ 1/log n, equation (5) applies, and replacing a and b gives

$$O(\mathbb{E}[\log\varepsilon]) = O\left(\log\left((b-a)\rho_f\cdot\frac{n}{K}\right)\right) = O\left(\log\left(2\sigma\sqrt{n\log n}\,\rho_f\cdot\frac{n}{K}\right)\right).$$

Hence, building an ESPC index with K = n√(n log n) intervals makes space usage O(n√(n log n)) and guarantees that expected query time is O(log(2σρ_f)). This concludes the proof.
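A quick empirical check (ours, not part of the paper) of the interval choice in this proof: with a = μ − σ√(n log n) and b = μ + σ√(n log n), escapes from [a,b] should be rarer than the Chebyshev guarantee of 1/log n. Here we try Exponential(1) data, for which μ = σ = 1:

```python
import math
import random

rng = random.Random(7)
n, trials = 1000, 500
mu, sigma = 1.0, 1.0                            # Exponential(1) has mean 1 and std 1
half_width = sigma * math.sqrt(n * math.log(n)) # the proof's choice of interval radius

escapes = 0
for _ in range(trials):
    sample = [rng.expovariate(1.0) for _ in range(n)]
    if min(sample) < mu - half_width or max(sample) > mu + half_width:
        escapes += 1
# escapes / trials estimates P(A_{a,b}); Chebyshev guarantees at most 1/log(n)
```

For light-tailed data the Chebyshev bound is very loose, which is exactly why Theorem 10 trades the variance assumption for subexponential tails and a much smaller interval.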

5 Analysis and Benchmarking

We now take a closer look at the expected time complexity. Consider an ESPC index with K subintervals for a sorted array A with n keys. Then, by Theorem 6 and Jensen’s inequality:

$$\mathbb{E}[\log\varepsilon] \le \log\mathbb{E}[\varepsilon] \le \log\left(\frac{3n}{2}\sum_{k=1}^{K} p_k^2\right) = \log\left(\frac{3n}{2}\right) + \log\left(\sum_{k=1}^{K} p_k^2\right) \tag{6}$$

where p_k = ℙ(X_i ∈ J_k). Recall that (p_1, …, p_K) constitutes an induced discrete probability distribution, describing the probabilities of the {X_i} falling in the different {J_k} intervals.

5.1 Analysis of Alternative Index Design

The term log(Σ_{k=1}^{K} p_k²) is related to a notion of entropy and is minimized when (p_1, …, p_K) is a uniform distribution (see Section 5.2). Hence, we could conceivably get better performance by trying to induce p_k = 1/K for all k. Consider then an alternative design where the {I_k} subintervals are defined in such a way that they cover the [a,b] range (instead of the [X_(1), X_(n)] range) and we aim to enforce p_k = 1/K for all k, rather than an equal-length condition. This requires knowledge of a, b, and F, which we usually lack, but several strategies can be explored to estimate them from the data.

This new index design would refine the intervals where we expect data to be more concentrated, which seems to be a reasonable strategy. However, there is a trade-off. Choosing the subintervals so that p_k = 1/K means that they will not be equal-length, unless the {X_i} distribute uniformly. Consequently, for a given query parameter q, we can no longer find its subinterval in constant time, as we did before with k ← ⌈(1/δ)(q − X_(1))⌉. This version of the index would need a new strategy to find the relevant subinterval I_k such that q ∈ I_k.

A standard strategy would be to create a new array A′ formed by the left endpoints of the {I_k}, that is, A′ = [t_0, …, t_{K−1}]. Now, for a given search parameter q we can find the relevant I_k with k = rank_{A′}(q), and then proceed with lines 5 and 6 of Algorithm 2. Notice that the index is now hierarchical, with A′ at the top and A (segmented into the {I_k}) at the bottom. From equation (6), expected time at the bottom layer would scale as

$$\mathbb{E}[\log\varepsilon_{\mathrm{bot}}] \le \log\left(\frac{3n}{2}\sum_{k=1}^{K} p_k^2\right) = \log\left(\frac{3n}{2}\sum_{k=1}^{K}\frac{1}{K^2}\right) = \log\left(\frac{3n}{2K}\right).$$

On the other hand, the top layer has K keys {t_0, …, t_{K−1}}. Since the index is designed so that p_k = 1/K, this means F(t_i) = i/K. As a consequence, the keys in A′ approximately follow distribution F, with a better approximation as K → ∞. Suppose we use an ESPC index with K′ subintervals to speed up computation of rank_{A′}. By Proposition 7 the expected time at the top layer will be dominated by 𝔼[log ε_top] ≤ log(3(b−a)ρ_f K/(2K′)), and the total expected time can be bounded as

𝔼[log ε_bot] + 𝔼[log ε_top] ≤ log(3n/(2K)) + log(3(b−a)ρ_f K/(2K′)) = log(9(b−a)ρ_f n/(4K′)).

Notice that (b−a)ρ_f is still present in this bound. Now, for instance, with K = K′ = n we get twice the space usage of Theorem 1 and the same asymptotic expected query time. This indicates that asymptotic complexity remains the same for a hierarchical or tree-like version of the index, so the basic ESPC index gives a simpler proof of our results.

5.2 Bounds for Expected Query Time

Complexity due to number of keys.

Equation (6) provides an interesting insight. We see that the expected log-error (and hence, the expected query time) can be decomposed into two separate sources of complexity. The first is due exclusively to the number of keys n, which suggests that the query problem gets increasingly difficult as new keys are inserted, even if the underlying distribution of the data remains the same. This helps to explain why learned indexes need updating [33, 26] as new keys arrive, even when the same dataset is used to simulate the initial keys and the subsequent arrivals.

Information-theoretical considerations.

The sum Σ_{k=1}^K p_k² is related to the Rényi entropy [25]. For a discrete random variable P with possible outcomes 1, …, K and corresponding probabilities {p_k}_{k=1}^K, the Rényi entropy of order α is defined as H_α(P) = (1/(1−α)) log(Σ_{k=1}^K p_k^α) for α > 0, α ≠ 1. The cases α = 0, 1, ∞ are defined as limits, with H_1(P) giving the Shannon entropy [25]. The Rényi entropy can be understood as a generalization of the Shannon entropy which preserves most of its properties. Consequently, equation (6) can be written as:

𝔼[log ε] ≤ log(3n/2) − H_2(P). (7)

H_2 appears in cryptography [4, 27] due to its relation with the collision probability Σ_{k=1}^K p_k², and is maximized by the uniform distribution [27], that is, when p_k = 1/K for all k. This suggests that the negative entropy −H_2(P) might be a good measure of statistical complexity for learned indexes, beyond the ESPC index. It captures a notion of distance to the uniform distribution U, which is easy to learn for the type of simple models commonly used in learned indexes. This notion of distance can be formalized by writing H_2(P) = log K − D_2(P‖U), where D_2(P‖U) is the Rényi divergence of order 2, defined analogously to the KL divergence [29].
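The quantities in equation (7) are straightforward to compute for a discrete distribution. The sketch below evaluates H_2(P) and D_2(P‖U) directly from their definitions and illustrates the identity H_2(P) = log K − D_2(P‖U):

```python
import math

def renyi_entropy(p, alpha=2.0):
    """H_alpha(P) = (1/(1-alpha)) log(sum_k p_k^alpha), for alpha > 0, alpha != 1."""
    return math.log(sum(pk ** alpha for pk in p)) / (1.0 - alpha)

def renyi_divergence_from_uniform(p):
    """D_2(P || U) for the uniform distribution U over K outcomes: log(K * sum_k p_k^2)."""
    return math.log(len(p) * sum(pk * pk for pk in p))

p = [0.5, 0.25, 0.125, 0.125]
h2 = renyi_entropy(p)                     # H_2(P) = -log(sum_k p_k^2)
d2 = renyi_divergence_from_uniform(p)
# Identity from the text: H_2(P) = log K - D_2(P || U); uniform maximizes H_2.
```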

Characterization via the stochastic process.

A similar insight can be reached by reference to the density function f. By Proposition 7 and Jensen’s inequality, we know

𝔼[log ε] ≤ log(3(b−a)ρ_f n/(2K)) = log(3/2) + log n + log(b−a) − log(K) − h_2(X_i),

where we use h_α(X) = (1/(1−α)) log(‖f‖_α^α) to denote the Rényi entropy for continuous random variables [31], similar to how differential entropy is defined in analogy to the Shannon entropy; in particular, h_2(X_i) = −log ρ_f. This last expression provides a useful bound for the expected log-error, by separating the effect of the number n of keys, the number K of subintervals (which represents space usage), the support length b−a, and an inherent characteristic h_2(X_i) of the stochastic process.
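As a numerical illustration of the relation h_2(X) = −log ρ_f, the sketch below approximates ρ_f = ‖f‖₂² by a midpoint Riemann sum for the density f(x) = 2x on [0, 1], chosen because ρ_f = 4/3 is known in closed form:

```python
import math

def rho_f_numeric(f, a, b, m=200_000):
    """Midpoint Riemann sum for rho_f = ||f||_2^2 = integral of f(x)^2 over [a, b]."""
    w = (b - a) / m
    return sum(f(a + (i + 0.5) * w) ** 2 for i in range(m)) * w

f = lambda x: 2.0 * x        # a density on [0, 1] with rho_f = 4/3 in closed form
rho = rho_f_numeric(f, 0.0, 1.0)
h2 = -math.log(rho)          # h_2(X) = -log(rho_f)
```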

5.3 Comparison with Existing Methods

In this section, we compare our probabilistic model and complexity bounds to previous work. All results are stated in terms of our RAM model. Table 1 summarizes the main points.

Table 1: Space and expected time complexity for learned indexes and B+Tree. The probabilistic assumptions are not necessary for the index to work, but they are necessary to guarantee the stated performance. For the last three rows, it is assumed that f has bounded support [a, b].

| Method | Space | Expected time | Parameters / Notation | Probabilistic Model |
| B+Tree | O(n/b) | O(log n) | Branching factor b | None |
| PGM [10] | O(n/ε²) | O(log n) | Maximum error ε | Gaps {X_{i+1} − X_i} i.i.d. with finite mean μ and variance σ², ε ≫ σ/μ |
| PCA [34] | O(ρ^{1+δ} n^{1+δ/2}) | O(log(1/δ)) | δ > 0, ρ = (b−a)ρ₁ | {X_i} i.i.d., density f, f(x) ≤ ρ₁ < ∞ for all x |
| RDA [34] | O(ρn) | O(log log n) | ρ = ρ₁/ρ₂ | Same as PCA, plus 0 < ρ₂ ≤ f(x) for all x |
| ESPC | O(n), O(n/log n) | O(ρ), O(ρ + log log n) | K intervals, ρ = log((b−a)ρ_f) | {X_i} share density f, ρ_f = ‖f‖₂² < ∞ |
Non-learned methods.

There are many classical algorithms for searching a sorted array. They tend to be comparison-based and have O(log n) expected time, unless strong assumptions are made (e.g., interpolation search takes O(log log n) time on average if the keys are uniformly distributed [17]). We use the B+Tree as a representative. A B+Tree has space overhead O(n/b), where b is the branching factor. Since b does not depend on n, asymptotically this is just O(n). The number of comparisons required to find a key is O(log n). This represents both worst-case and average-case complexity, with no probabilistic assumptions.

PGM-index.

In the absence of probabilistic assumptions, the space overhead of the PGM-index is O(n/ε) and expected time is O(log n). Here, ε is a hyperparameter of the method and is independent of n, so this constitutes the same asymptotic complexity as B+Trees. Under certain assumptions, the space overhead can be strengthened to O(n/ε²) [9], representing a constant factor improvement (i.e., same asymptotic space complexity).

In contrast to [34] and our work, the assumptions in [9] concern the gaps between keys. Specifically, the {X_{i+1} − X_i} are assumed to be independent and identically distributed (i.i.d.), with finite mean μ and variance σ². It is also assumed that ε ≫ σ/μ. This setting is not directly comparable to ours, in the sense that neither is a particular instantiation of the other. However, the assumptions described above are very strong in our view and seem hard to justify for most practical applications. Also, the ε ≫ σ/μ condition constrains the choice of this key hyperparameter.

PCA Index and RDA Index.

In [34] several indexes are presented, the most relevant for us being the PCA and RDA indexes. The PCA Index is similar to the ESPC index, with some differences as explained in Section 3.2, including the use of binary search instead of exponential search. The RDA Index is similar to the Recursive Model Index (RMI) from [18]. All results below are stated in terms of the RAM model under the uniform cost criterion.
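Since the use of exponential search (rather than binary search) is one of the differences just mentioned, the following sketch shows a generic galloping search from a predicted position — not the paper's exact implementation. It doubles the step until the true rank is bracketed and then binary-searches the bracket, using O(log err) comparisons where err is the prediction error:

```python
import bisect

def exponential_search(A, q, pred):
    """Locate rank(q) (keys <= q) in sorted array A, starting from position pred.
    Doubles the step until the true rank is bracketed, then binary-searches the
    bracket: O(log err) comparisons, with err = |rank(q) - pred|."""
    n = len(A)
    pred = max(0, min(pred, n))
    step = 1
    if pred < n and A[pred] <= q:            # gallop to the right
        lo, hi = pred, min(pred + step, n)
        while hi < n and A[hi] <= q:
            lo, hi = hi, min(hi + step, n)
            step *= 2
    else:                                    # gallop to the left
        lo, hi = max(pred - step, 0), pred
        while lo > 0 and A[lo] > q:
            lo, hi = max(lo - step, 0), lo
            step *= 2
    return bisect.bisect_right(A, q, lo, hi)

A = list(range(0, 1_000_000, 2))             # even numbers, so rank(137) = 69
r = exponential_search(A, 137, pred=60)
```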

In [34], the same assumption regarding bounded support [a, b] for f is needed, so we consider that case in this section. Extensions to unbounded support incur a penalty in terms of space complexity, dependent on the tails of the distribution. We first describe the PCA Index. For any δ > 0, there is a PCA Index with space overhead O((b−a)^{1+δ} ρ₁^{1+δ} n^{1+δ/2}) and expected time complexity O(log(1/δ)). Here, ρ₁ is an upper bound for f. Since it is independent of n, the space overhead is asymptotically O(n^{1+δ/2}). A direct comparison to the ESPC index is possible through Theorem 1, which provides a strictly stronger result, with constant expected time and truly linear space.

On the other hand, the RDA Index has space overhead O((ρ₁/ρ₂) n), where ρ₂ is a lower bound for f and ρ₁ is an upper bound (as in the case of the PCA Index). Asymptotically in n, this is O(n). Expected query time in this case is O(log log n). A direct comparison to the ESPC index is possible through Theorem 2, which provides a strictly stronger result, with O(log log n) expected time and sublinear space overhead.

Besides improving the complexity bounds, we also generalize the domain of application of our results. As part of the probabilistic model for the PCA and RDA indexes, it is assumed that the {X_i} are i.i.d. with density f [34]. The PCA Index assumes f is bounded above, that is, there exists some fixed ρ₁ < ∞ such that f(x) ≤ ρ₁ for all x. Notice that this condition implies that f is square-integrable, so our assumptions are strictly weaker:

ρ_f = ∫ f²(x) dx ≤ ρ₁ ∫ f(x) dx = ρ₁ < ∞.

The RDA Index further assumes that f is bounded away from 0, that is, there exists some ρ₂ > 0 such that f(x) ≥ ρ₂ for all x. Moreover, as described in [34], access to the values of a, b, and ρ₁ is required to build the PCA Index, while the RDA Index requires knowledge of ρ₁ and ρ₂. This seems like a strong assumption in practice. Finally, the analysis in [34] holds less generally than ours because it requires the {X_i} to be independent, and it (implicitly) depends on the query parameter q having the same distribution as the {X_i}.

6 Experimental Findings

We implement the ESPC index and evaluate the results in light of the theoretical analysis.

6.1 Data

In our experiments, we use four datasets from the Searching on Sorted Data (SOSD) benchmark [15, 24], which has become standard in testing learned indexes [28]. Each dataset consists of a sorted array of keys, which can be used to build an index and simulate queries. Two of the datasets we use (usparse, normal) are synthetically generated. The other two (amzn, osm) come from real-world data. In particular, osm is known to be challenging for learned indexes [24].

6.2 Experimental Design

Our complexity bounds (Theorems 1 and 2) are corollaries of Propositions 3 and 7, which are the key results to validate empirically. We do this in the following way. Each dataset consists of a sorted array of 2×10⁸ keys. We use a subsample of n = 10⁷ keys, so that experiments are less computationally expensive. For a fixed subsample, we take different values of K and for each:

  1.

    We build the ESPC index with K subintervals. Space overhead is estimated by reporting the amount of memory used by the index. From Proposition 3, this should be O(K). In terms of experiments, we expect a linear relationship between memory overhead and K.

  2.

    We sample Q keys from A. We compute rank for these keys using the index and measure the prediction error. We then average these values to estimate the expected error. From Proposition 7, this should be smaller than 3(b−a)ρ_f n/(2K). In practice, we use an estimate ρ̂_f of ρ_f (see Section 6.4) and all data is re-scaled to the [0, 1] interval, so that we consider b−a = 1. For the experiments, we plot the average prediction error alongside (3ρ̂_f n)/(2K), as functions of K. We expect the average error curve to be below (3ρ̂_f n)/(2K).

We use Q = 3×10⁷. For K we use the values 10³, 5×10³, 10⁴, 5×10⁴, 10⁵, and 2×10⁵.
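The experimental loop above can be replicated in miniature. The sketch below uses synthetic uniform keys (so ρ_f = 1 and b − a ≈ 1), builds K equal-length subintervals storing boundary ranks, and predicts with the midpoint of each subinterval's rank range — an assumption standing in for the ESPC predictor defined earlier in the paper. The measured average error should fall below the bound 3ρ_f n/(2K):

```python
import bisect, random

random.seed(42)
n, K = 100_000, 1_000
A = sorted(random.random() for _ in range(n))   # uniform keys: rho_f = 1, b - a ~ 1

lo, hi = A[0], A[-1]
w = (hi - lo) / K
# c[k] = rank of the left endpoint of subinterval I_k; c[K] = n closes the last one.
c = [bisect.bisect_left(A, lo + k * w) for k in range(K)] + [n]

def r_hat(q):
    """Predict rank(q) as the midpoint of its subinterval's rank range (assumption)."""
    k = min(int((q - lo) / w), K - 1)
    return 0.5 * (c[k] + c[k + 1])

Q = [random.choice(A) for _ in range(30_000)]   # queries sampled from the keys
avg_err = sum(abs(bisect.bisect_right(A, q) - r_hat(q)) for q in Q) / len(Q)
bound = 3 * n / (2 * K)   # 3 (b-a) rho_f n / (2K), with rho_f = 1 and b - a = 1
```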

6.3 Results

In terms of storage, our experiments show a perfect linear relationship between memory overhead and K, with memory overhead = 32K bytes. This result is in accordance with the O(K) estimate in Proposition 3. The agreement between experiments and theory is to be expected here, because the space overhead of the ESPC index does not depend on probabilistic factors.

On the other hand, Figure 4 shows how the prediction error changes with the number of subintervals K. We plot the average experimental error along with the theoretical bound for the expected error. In accordance with Proposition 7, this theoretical bound is given by (3ρ̂_f n)/(2K). As expected, for all datasets this expression serves as an upper bound for the average experimental error, and both curves have a similar shape. The results show that Proposition 7 has good predictive power, even for real-world data.

Figure 4: Average experimental error and theoretical bound for expected error, as functions of the number K of subintervals. Top row shows synthetic datasets, bottom row shows real-world datasets. The plots use a logarithmic scale for both axes, which prevents clustering of the points.

6.4 Analysis of ρ_f

As seen in Figure 4, the experimental error follows the overall shape described by the theoretical bound in Proposition 7. According to the theory, the constants involved (e.g., the intercept of the curves) should depend on ρf, so it is crucial to have estimates for this metric. We develop a procedure that can provide an estimate ρ^f of ρf, based on statistical techniques to estimate the density f. Full details are available in the extended version of the paper [6]. The values computed with this method are (usparse, 1.20), (normal, 3.89), (amzn, 1.72), (osm, 32.57). The value for osm is an order of magnitude greater than the rest, meaning it is farther away from the uniform distribution as measured by the Rényi divergence, helping to explain why it is challenging. The dataset with the lowest value is usparse. Since its underlying distribution is uniform, this is in line with the analysis in Section 5.2. Figure 4 shows that higher values of ρ^f correspond to higher experimental errors.
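Since the estimation procedure itself appears only in the extended paper [6], the following is a simple histogram-based stand-in, not the authors' method: with bin width w and bin frequencies p_i, f is approximated by p_i/w on bin i, so ∫f² ≈ Σ_i p_i²/w. For uniform data it should return a value near 1, and larger values for more concentrated data:

```python
import random

def rho_hat(keys, bins=1_000):
    """Histogram estimate of rho_f = integral of f(x)^2, after re-scaling the
    data to [0, 1] as in the experiments. With bin width w = 1/bins and bin
    frequencies p_i, f is approximated by p_i / w on bin i, so the integral
    is approximately sum_i p_i^2 / w. (A simple stand-in for the estimator
    of the extended paper, which is not reproduced here.)"""
    lo, hi = min(keys), max(keys)
    scaled = [(x - lo) / (hi - lo) for x in keys]
    w = 1.0 / bins
    counts = [0] * bins
    for x in scaled:
        counts[min(int(x / w), bins - 1)] += 1
    n = len(keys)
    return sum((c / n) ** 2 for c in counts) / w

random.seed(7)
uniform_keys = [random.random() for _ in range(100_000)]        # rho_f = 1
skewed_keys = [random.random() ** 0.5 for _ in range(100_000)]  # density 2x, rho_f = 4/3
r_unif, r_skew = rho_hat(uniform_keys), rho_hat(skewed_keys)
```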

7 Conclusions

Theoretical understanding of learned indexes has not kept up with their practical development. We narrow this gap by proving that learned indexes exhibit strong theoretical guarantees under a very general probabilistic model. In particular, we prove learned indexes can achieve O(1) expected query time with at most linear space. We introduce a specific index that achieves this performance, and we find a general bound for the expected time in terms of the number of keys and the Rényi entropy of the distribution. From this result, we derive a metric ρf that can be used to characterize the difficulty of learning the data. We describe a procedure for estimating ρf for real data, and we show that the complexity bounds hold in practice. Our results help explain the good experimental performance of learned indexes, under the most general conditions considered so far in the literature. As future work, it is theoretically important to prove lower bounds for expected query time. We believe this type of result may hinge on information-theoretical properties (e.g., through rate-distortion theory [3]), which our analysis shows are relevant for learned indexes. On the other hand, it is also important to extend our analysis to the case of dynamic datasets, where the probability distribution may exhibit drift over time and updates are necessary. This is an important case in applications, and has not been considered in theoretical analysis.

References

  • [1] Alfred V. Aho and John E. Hopcroft. The Design and Analysis of Computer Algorithms. Addison-Wesley Longman Publishing Co., Inc., USA, 1st edition, 1974.
  • [2] Jon Louis Bentley and Andrew Chi-Chih Yao. An almost optimal algorithm for unbounded searching. Information Processing Letters, 5(3):82–87, 1976. doi:10.1016/0020-0190(76)90071-5.
  • [3] Toby Berger. Rate-distortion theory. Wiley Encyclopedia of Telecommunications, 2003.
  • [4] Christian Cachin. Entropy measures and unconditional security in cryptography. PhD thesis, ETH Zurich, 1997. URL: https://d-nb.info/950686247.
  • [5] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Third Edition. The MIT Press, Cambridge, MA, USA, 3rd edition, 2009.
  • [6] Luis Croquevielle, Guang Yang, Liang Liang, Ali Hadian, and Thomas Heinis. Querying in constant expected time with learned indexes. arXiv preprint arXiv:2405.03851, 2024.
  • [7] H.A. David and H.N. Nagaraja. Order Statistics. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc., Hoboken, NJ, USA, 2004.
  • [8] Jialin Ding, Umar Farooq Minhas, Jia Yu, Chi Wang, Jaeyoung Do, Yinan Li, Hantian Zhang, Badrish Chandramouli, Johannes Gehrke, Donald Kossmann, David Lomet, and Tim Kraska. Alex: An updatable adaptive learned index. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, SIGMOD ’20, pages 969–984, New York, NY, USA, 2020. Association for Computing Machinery. doi:10.1145/3318464.3389711.
  • [9] Paolo Ferragina, Fabrizio Lillo, and Giorgio Vinciguerra. Why are learned indexes so effective? In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3123–3132, Virtual, 13–18 July 2020. PMLR. URL: http://proceedings.mlr.press/v119/ferragina20a.html.
  • [10] Paolo Ferragina and Giorgio Vinciguerra. The PGM-index: A fully-dynamic compressed learned index with provable worst-case bounds. Proc. VLDB Endow., 13(8):1162–1175, April 2020. doi:10.14778/3389133.3389135.
  • [11] Alex Galakatos, Michael Markovitch, Carsten Binnig, Rodrigo Fonseca, and Tim Kraska. Fiting-tree: A data-aware index structure. In Proceedings of the 2019 International Conference on Management of Data, SIGMOD ’19, pages 1189–1206, New York, NY, USA, 2019. Association for Computing Machinery. doi:10.1145/3299869.3319860.
  • [12] Rodrigo González, Szymon Grabowski, Veli Mäkinen, and Gonzalo Navarro. Practical implementation of rank and select queries. In Poster Proc. Volume of 4th Workshop on Efficient and Experimental Algorithms (WEA), pages 27–38, Athens, Greece, 2005. CTI Press and Ellinika Grammata.
  • [13] Goetz Graefe et al. Modern B-tree techniques. Foundations and Trends in Databases, 3(4):203–402, 2011. doi:10.1561/1900000028.
  • [14] Ali Hadian and Thomas Heinis. Shift-table: A low-latency learned index for range queries using model correction. In Yannis Velegrakis, Demetris Zeinalipour-Yazti, Panos K. Chrysanthis, and Francesco Guerra, editors, Proceedings of the 24th International Conference on Extending Database Technology, EDBT 2021, Nicosia, Cyprus, March 23 - 26, 2021, pages 253–264. OpenProceedings.org, 2021. doi:10.5441/002/edbt.2021.23.
  • [15] Andreas Kipf, Ryan Marcus, Alexander van Renen, Mihail Stoian, Alfons Kemper, Tim Kraska, and Thomas Neumann. SOSD: A benchmark for learned indexes. arXiv preprint arXiv:1911.13014, 2019. arXiv:1911.13014.
  • [16] Andreas Kipf, Ryan Marcus, Alexander van Renen, Mihail Stoian, Alfons Kemper, Tim Kraska, and Thomas Neumann. Radixspline: A single-pass learned index. In Proceedings of the Third International Workshop on Exploiting Artificial Intelligence Techniques for Data Management, aiDM ’20, New York, NY, USA, 2020. Association for Computing Machinery. doi:10.1145/3401071.3401659.
  • [17] Donald E. Knuth. The Art of Computer Programming, Volume 3: (2nd Ed.) Sorting and Searching. Addison Wesley Longman Publishing Co., Inc., USA, 1998.
  • [18] Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, and Neoklis Polyzotis. The case for learned index structures. In Proceedings of the 2018 International Conference on Management of Data, SIGMOD ’18, pages 489–504, New York, NY, USA, 2018. Association for Computing Machinery. doi:10.1145/3183713.3196909.
  • [19] Justin J. Levandoski, David B. Lomet, and Sudipta Sengupta. The bw-tree: A b-tree for new hardware platforms. In Proceedings of the 2013 IEEE International Conference on Data Engineering (ICDE 2013), ICDE ’13, pages 302–313, USA, 2013. IEEE Computer Society. doi:10.1109/ICDE.2013.6544834.
  • [20] Ling Liu and M Tamer Özsu. Encyclopedia of Database Systems, volume 6. Springer New York, NY, USA, 2009.
  • [21] Scott Lloyd and Maya Gokhale. Near memory key/value lookup acceleration. In Proceedings of the International Symposium on Memory Systems, MEMSYS ’17, pages 26–33, New York, NY, USA, 2017. Association for Computing Machinery. doi:10.1145/3132402.3132434.
  • [22] Zhanyu Ma and Arne Leijon. Coding bounded support data with beta distribution. In 2010 2nd IEEE InternationalConference on Network Infrastructure and Digital Content, pages 246–250, 2010. doi:10.1109/ICNIDC.2010.5657779.
  • [23] Zhanyu Ma, Andrew E. Teschendorff, Arne Leijon, Yuanyuan Qiao, Honggang Zhang, and Jun Guo. Variational bayesian matrix factorization for bounded support data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(4):876–889, 2015. doi:10.1109/TPAMI.2014.2353639.
  • [24] Ryan Marcus, Andreas Kipf, Alexander van Renen, Mihail Stoian, Sanchit Misra, Alfons Kemper, Thomas Neumann, and Tim Kraska. Benchmarking learned indexes. Proc. VLDB Endow., 14(1):1–13, September 2020. doi:10.14778/3421424.3421425.
  • [25] Alfréd Rényi. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, volume 4, pages 547–562, Oakland, CA, USA, 1961. University of California Press.
  • [26] Amirhesam Shahvarani and Hans-Arno Jacobsen. Parallel index-based stream join on a multicore cpu. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, SIGMOD ’20, pages 2523–2537, New York, NY, USA, 2020. Association for Computing Machinery. doi:10.1145/3318464.3380576.
  • [27] Maciej Skórski. Shannon entropy versus renyi entropy from a cryptographic viewpoint. In Jens Groth, editor, Cryptography and Coding, pages 257–274, Cham, 2015. Springer International Publishing. doi:10.1007/978-3-319-27239-9_16.
  • [28] Zhaoyan Sun, Xuanhe Zhou, and Guoliang Li. Learned index: A comprehensive experimental evaluation. Proc. VLDB Endow., 16(8):1992–2004, April 2023. doi:10.14778/3594512.3594528.
  • [29] Tim van Erven and Peter Harremos. Rényi divergence and kullback-leibler divergence. IEEE Transactions on Information Theory, 60(7):3797–3820, 2014. doi:10.1109/TIT.2014.2320500.
  • [30] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, UK, 2018.
  • [31] Susana Vinga and Jonas S Almeida. Rényi continuous entropy of dna sequences. Journal of theoretical biology, 231(3):377–388, 2004.
  • [32] Chaichon Wongkham, Baotong Lu, Chris Liu, Zhicong Zhong, Eric Lo, and Tianzheng Wang. Are updatable learned indexes ready? Proc. VLDB Endow., 15(11):3004–3017, July 2022. doi:10.14778/3551793.3551848.
  • [33] Guang Yang, Liang Liang, Ali Hadian, and Thomas Heinis. FLIRT: A fast learned index for rolling time frames. In Julia Stoyanovich, Jens Teubner, Nikos Mamoulis, Evaggelia Pitoura, Jan Mühlig, Katja Hose, Sourav S. Bhowmick, and Matteo Lissandrini, editors, Proceedings 26th International Conference on Extending Database Technology, EDBT 2023, Ioannina, Greece, March 28-31, 2023, pages 234–246, Ioannina, Greece, 2023. OpenProceedings.org. doi:10.48786/edbt.2023.19.
  • [34] Sepanta Zeighami and Cyrus Shahabi. On distribution dependent sub-logarithmic query time of learned indexing. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org, 2023.

Appendix A Proof of Proposition 8

Proof of Proposition 8.

The proof follows closely the argument used to prove Theorem 6 and Proposition 7. By the tower property of conditional expectation, 𝔼[ε] = 𝔼[𝔼[ε | X_1, …, X_n]], where the inner expected value gives

𝔼[ε | X_1, …, X_n] = ∫ ε(q) g(q) dq = ∫_{X_(1)}^{X_(n)} ε(q) g(q) dq,

where the last equality comes from the fact that for q ∉ [X_(1), X_(n)] we know the exact value of rank(q), which is either 0 or n, and hence we have that ε(q) = 0. Replacing ε(q) = |rank(q) − r̂(q)| and noticing that the {I_k} form a partition of [X_(1), X_(n)]:

𝔼[ε | X_1, …, X_n] = Σ_{k=1}^K ∫_{I_k} |rank(q) − r̂(q)| g(q) dq.

By Lemma 4 we can bound the error within each Ik:

𝔼[ε | X_1, …, X_n] ≤ (1/2) Σ_{k=1}^K n_k ∫_{I_k} g(q) dq,

where n_k is the number of keys in subinterval I_k. Now, denote by n_{k,ℓ} the number of keys in I_k ∩ J_ℓ. Then, we can write

𝔼[ε | X_1, …, X_n] ≤ (1/2) Σ_{k=1}^K (Σ_{ℓ=1}^K n_{k,ℓ}) ∫_{I_k} g(q) dq = (1/2) Σ_{ℓ=1}^K Σ_{k=1}^K n_{k,ℓ} ∫_{I_k} g(q) dq, (8)

where first we have written n_k as Σ_{ℓ=1}^K n_{k,ℓ} and then we have exchanged the order of summation. Now, denote by L(k) the set of indices ℓ such that I_k ∩ J_ℓ ≠ ∅. Notice that n_{k,ℓ} = 0 if ℓ ∉ L(k). Hence, inequality (8) becomes:

𝔼[ε | X_1, …, X_n] ≤ (1/2) Σ_{ℓ=1}^K Σ_{k : ℓ ∈ L(k)} n_{k,ℓ} ∫_{I_k} g(q) dq. (9)

By Fact 5 we know that I_k ⊆ J_{ℓ−1} ∪ J_ℓ ∪ J_{ℓ+1} for all ℓ and k such that ℓ ∈ L(k). Now, define q_ℓ as ℙ(q ∈ J_ℓ) for each ℓ = 1, …, K, analogously to how the p_ℓ are defined as ℙ(X_i ∈ J_ℓ). Using this notation, from inequality (9) we get

𝔼[ε | X_1, …, X_n] ≤ (1/2) Σ_{ℓ=1}^K Σ_{k : ℓ ∈ L(k)} n_{k,ℓ} (∫_{J_{ℓ−1}} g(q) dq + ∫_{J_ℓ} g(q) dq + ∫_{J_{ℓ+1}} g(q) dq)
= (1/2) Σ_{ℓ=1}^K (q_{ℓ−1} + q_ℓ + q_{ℓ+1}) Σ_{k : ℓ ∈ L(k)} n_{k,ℓ}
= (1/2) Σ_{ℓ=1}^K (q_{ℓ−1} + q_ℓ + q_{ℓ+1}) m_ℓ, (10)

where m_ℓ denotes the number of keys in J_ℓ. Now, taking expectations on both sides of inequality (10), we get that

𝔼[ε] = 𝔼[𝔼[ε | X_1, …, X_n]] ≤ (1/2) Σ_{ℓ=1}^K (q_{ℓ−1} + q_ℓ + q_{ℓ+1}) 𝔼[m_ℓ]. (11)

Furthermore, we have

𝔼[m_ℓ] = 𝔼[Σ_{i=1}^n 𝟏{X_i ∈ J_ℓ}] = Σ_{i=1}^n 𝔼[𝟏{X_i ∈ J_ℓ}] = Σ_{i=1}^n ℙ(X_i ∈ J_ℓ) = n p_ℓ.

Replacing 𝔼[m_ℓ] = n p_ℓ in inequality (11), we get:

𝔼[ε] ≤ (1/2) Σ_{ℓ=1}^K (q_{ℓ−1} + q_ℓ + q_{ℓ+1}) 𝔼[m_ℓ] = (n/2) Σ_{ℓ=1}^K (q_{ℓ−1} p_ℓ + q_ℓ p_ℓ + q_{ℓ+1} p_ℓ). (12)

Now, by the Cauchy–Schwarz inequality we know that

Σ_{ℓ=1}^K q_{ℓ−1} p_ℓ ≤ (Σ_{ℓ=1}^K q_{ℓ−1}² · Σ_{ℓ=1}^K p_ℓ²)^{1/2} = (Σ_{ℓ=0}^{K−1} q_ℓ² · Σ_{ℓ=1}^K p_ℓ²)^{1/2} ≤ (Σ_{ℓ=1}^K q_ℓ² · Σ_{ℓ=1}^K p_ℓ²)^{1/2},

where we have used the fact that q_0 = ℙ(q ∈ J_0) = 0, since J_0 is defined as the empty set. Similarly, by the Cauchy–Schwarz inequality and the fact that q_{K+1} = 0, we can prove

Σ_{ℓ=1}^K q_ℓ p_ℓ ≤ (Σ_{ℓ=1}^K q_ℓ² · Σ_{ℓ=1}^K p_ℓ²)^{1/2},  Σ_{ℓ=1}^K q_{ℓ+1} p_ℓ ≤ (Σ_{ℓ=1}^K q_ℓ² · Σ_{ℓ=1}^K p_ℓ²)^{1/2}.

Replacing in (12) we get the new inequality

𝔼[ε] ≤ (3n/2) (Σ_{ℓ=1}^K q_ℓ²)^{1/2} (Σ_{ℓ=1}^K p_ℓ²)^{1/2}.

Now, since both f and g are square-integrable functions, the same argument from Proposition 7 can be used to prove that

Σ_{ℓ=1}^K q_ℓ² ≤ ((b−a)/K) ρ_g,  and  Σ_{ℓ=1}^K p_ℓ² ≤ ((b−a)/K) ρ_f.

Putting everything together:

𝔼[ε] ≤ (3n/2) (Σ_{ℓ=1}^K q_ℓ² · Σ_{ℓ=1}^K p_ℓ²)^{1/2} ≤ (3n/2) (((b−a)/K) ρ_g · ((b−a)/K) ρ_f)^{1/2} = 3(b−a)√(ρ_f ρ_g) n / (2K).

This concludes the proof.
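The three Cauchy–Schwarz applications above combine into 𝔼[ε] ≤ (3n/2)(Σq²)^{1/2}(Σp²)^{1/2}; this step can be sanity-checked numerically for arbitrary distributions p and q, with the padding q_0 = q_{K+1} = 0 used in the proof:

```python
import math, random

random.seed(3)
K = 50

def random_distribution(K):
    v = [random.random() for _ in range(K)]
    s = sum(v)
    return [x / s for x in v]

p = random_distribution(K)
q = random_distribution(K)
qq = [0.0] + q + [0.0]   # pad with q_0 = q_{K+1} = 0, as in the proof

# Left side of (12) without the n/2 factor; right side is the combined bound.
lhs = sum((qq[l - 1] + qq[l] + qq[l + 1]) * p[l - 1] for l in range(1, K + 1))
rhs = 3 * math.sqrt(sum(x * x for x in q)) * math.sqrt(sum(x * x for x in p))
```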