B-Treaps Revised: Write Efficient Randomized Block Search Trees with High Load
Abstract
Uniquely represented (UR) data structures represent each logical state with a unique storage state. We study the problem of maintaining a dynamic set of keys from a totally ordered universe in this context. UR structures are also called “strongly history independent” structures in the literature.
We introduce a two-layer data structure called -Randomized Block Search Tree (RBST) that is uniquely represented and suitable for external memory (EM). Though RBSTs naturally generalize the well-known binary Treaps, several new ideas are needed to analyze the expected search, update, and storage efficiency in terms of block-reads, block-writes, and blocks stored. We prove that searches have block-reads, that dynamic updates perform block-writes and block-reads, and that -RBSTs have an asymptotic load-factor of at least for every .
Thus -RBSTs improve on the known, uniquely represented B-Treap [Golovin; ICALP’09]. Compared with non-UR structures, the RBST is also, to the best of our knowledge, the first external memory structure that is storage-efficient and has a non-amortized, write-efficient update bound.
Keywords and phrases: Unique Representation, Randomization, Top-Down Analysis, Block Search Tree, Write-Efficiency, Storage-Efficiency
2012 ACM Subject Classification: Theory of computation → Design and analysis of algorithms
Funding: This work was supported under the Australian Research Council Discovery Projects funding scheme (project number DP180102870). This project has received funding from the European Research Council (ERC).
Editors: Pat Morin and Eunjin Oh
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Organizing a set of keys such that insertions, deletions, and searches are supported is one of the most basic problems in computer science, driving the development and analysis of structures with various guarantees and performance trade-offs. Binary search trees, for example, have several known height-balancing methods (e.g. AVL and Red-Black trees) and weight-balancing methods (e.g. BB[α] trees), some providing writes for each update [17].
Block Search Trees are generalizations of binary search trees that store in every tree node up to keys and child pointers, where the array size of the blocks is a (typically) fixed parameter. Since such layouts enforce a certain data locality, block structures are central for read- and write-efficient access in machine models with a memory hierarchy.
External Memory and deterministic B-Trees variants.
In the classic External Memory (EM) model, the data items of a problem instance must be transferred in blocks of a fixed size between a block-addressable external memory and an internal Main Memory (MM), which can store up to data items and perform computations. Typically, and the MM can only store a small number of blocks throughout processing, say . Though Vitter’s Parallel Disk Model can formalize more general settings [30, Section ], we are interested in the most basic case with one, merely block-addressable, EM device and one computing device with MM. The cost of a computation is typically measured in the number of block-reads from EM (Is), the number of block-writes to EM (Os), or the total number of IOs performed on EM. For basic algorithmic tasks like range search, block search trees with support algorithms with asymptotically optimal total IOs (e.g. [30, Section ]). Classic B-Trees [4] guarantee that every block, besides the root, has a load-factor of at least and that all leaves are at equal depth, using a deterministic balance criterion. The B*-Tree variant [23, Section 6.2.4] guarantees a load-factor of at least , based on a (deterministic) overflow strategy that shares keys between the two neighboring nodes on equal levels. Yao’s Fringe Analysis [32] showed that inserting keys in random order into (deterministically balanced) B-Trees yields an expected asymptotic load-factor of , and the expected asymptotic load-factor is for B*-Trees. For general workloads however, maintaining higher load-factors (with even wider overflow strategies) further increases the update write-bounds in block search trees [3, 24]. The B+-Tree variant for key-value pairs, which stores the values separately from the keys to increase the load-factor and thus the fan-out, is heavily used in practical file systems, see e.g. [14].
The IO-optimized Bε-Tree [13, 7] provides smooth trade-offs between amortized IOs per update and block-reads (Is) for searches. (Bε-trees are usually called “write-optimized”, though they aim to minimize total IOs and not writes (Os).) E.g. tuning parameter retains the bounds of B-Trees and provides improved bounds. In the fully update-IO-optimized case () however, searches have merely a bound on the number of block-reads. The main design idea to achieve this is to augment the non-leaf nodes of a B-Tree with additional “message buffers” so that a key insertion can be stored close to the root. Messages then only need propagation, further down the tree, once a message buffer is full (cf. [13, Section ] and [7]). Clearly, the state of either of these (deterministic) structures depends heavily on the actual sequence in which the keys were inserted.
Uniquely Represented Data Structures.
For security or privacy reasons, some applications, see e.g. [19], require data structures that do not reveal any information about historical states to an observer of the memory representation, i.e. evidence of keys that were deleted or evidence of the keys’ insertion sequence. Data structures are called uniquely represented (UR) if they have this strong history independence property. For example, sorted arrays are UR, but the aforementioned search trees are not UR due to deterministic rebalancing. Early results [28, 29, 1] show lower and upper bounds on comparison-based search in UR data structures on the pointer machine. Using UR hash tables [9, 26] in the RAM model, pointer structures can also be mapped uniquely into memory. (See [9, Thm. ], [10, Sec. ].)
Randomized Search Trees are defined by inserting keys, one at a time, in order of a random permutation into an initially empty binary search tree. The well-known Treap [27, 31] maintains the tree shape induced by the permutation order and supports efficient insertions, deletions, and searches (see also [25, Chapter 1.3]). Searches read with high probability tree nodes, any insertion (or deletion) performs expected rotations (writes), and the expected subtree weight beneath the updated key is . Besides UR, these remarkably strong bounds are central in the design of simple MM algorithms for dynamic problems. The constant write bound of Treaps, and of some Red-Black Tree variants, is particularly valuable for persistent data structures [16, 22]; that is, amortized write bounds are not sufficient to design fully-persistent data structures with linear space. The logarithmic weight bound, for example, is particularly valuable for obtaining a query time for order-queries in dynamic sets with expected update time [12, Section 5]. This is a central sub-problem in designing fully-persistent data structures, see [18] for example. Further important examples include asymmetric read/write cost settings [8, 5] and parallel computation [11], or simply storage media that can only endure few write-cycles.
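The constant expected rotation count of Treaps is easy to observe empirically. Below is a minimal Python sketch of a classic Treap insertion with rotations; it is an illustration only (key range, seed, and node layout are arbitrary choices), not the EM structure studied in this paper:

```python
import random

class Node:
    __slots__ = ("key", "prio", "left", "right")
    def __init__(self, key, prio):
        self.key, self.prio = key, prio
        self.left = self.right = None

rotations = 0  # counts structural writes caused by rotations

def rotate_right(t):
    global rotations
    rotations += 1
    l = t.left; t.left = l.right; l.right = t
    return l

def rotate_left(t):
    global rotations
    rotations += 1
    r = t.right; t.right = r.left; r.left = t
    return r

def insert(t, key, prio):
    # standard BST insertion, followed by rotations that restore the
    # min-heap order on the (random) priorities
    if t is None:
        return Node(key, prio)
    if key < t.key:
        t.left = insert(t.left, key, prio)
        if t.left.prio < t.prio:
            t = rotate_right(t)
    else:
        t.right = insert(t.right, key, prio)
        if t.right.prio < t.prio:
            t = rotate_left(t)
    return t

random.seed(1)
root, n = None, 1000
for k in random.sample(range(10 * n), n):
    root = insert(root, k, random.random())
print(rotations / n)  # expected value is below 2 rotations per insertion
```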
Golovin [20] introduced the B-Treap as the first UR data structure for Byte-addressable External Memory (BEM). The B-Treap originates from a certain block storage layout of an associated (binary) Treap, i.e. the block tree is obtained, bottom-up, by iteratively placing subtrees of a certain size in one block node. (E.g. byte-addressable memory is required for intra-block child pointers, and each key maintains a size field that stores its subtree size in the binary Treap, resulting in logarithmic writes; cf. Os in Table 1.) The author shows bounds for B-Treaps under certain parameter conditions: If is at least polylogarithmic, specifically for some , then updates have expected block-writes and range-queries have expected block-reads, where is the output size (see Theorem 1 in [20]). The paper also discusses experimental data, from a non-dynamic implementation, that shows an expected load-factor close to [20, Sec.]. Though this storage layout approach allows leveraging known bounds of Treaps to analyze B-Treaps, the block size conditions are limiting for applications of B-Treaps in algorithms. Since the update algorithm is rather involved, Golovin [21] proposed a simpler, Skip-List based EM approach for a UR solution.
The known Skip-List based approaches [21] and [6] avoid the restrictions on , however both require an involved form of history-independent hash table for strings of key values (see [21, Section 2.2.2]). Moreover, the B-Skip-List [21] provides only a logarithmic depth bound (see [6, Lemma 15]), and the EM Skip-List in [6] offers only a weaker notion of History Independence than UR and has logarithmic block-writes (cf. [6, proof of Lem. 19]).
| UR Tree      | Model | Operation   | Bound [# Blocks] |
| B-Treap [20] | BEM   | Size        |                  |
|              |       | Update (Os) | , if .           |
|              |       | Search (Is) |                  |
| -RBST        | EM    | Size        |                  |
|              |       | Update (Os) |                  |
|              |       | Update (Is) |                  |
|              |       | Search (Is) |                  |
Contribution and Paper Outline.
Our work provides the first UR EM search tree that is generally applicable for all block sizes , whilst providing bounds that improve on the only known UR EM search tree solution, B-Treap [20].
We introduce a new design technique, and provide a self-contained analysis, that leads to a two-layer randomized data structure called -Randomized Block Search Trees (RBSTs), where is the block size and a tuning parameter for the space-efficiency of our secondary layer (see Section 2). Without the secondary structures, the RBSTs are precisely the distribution of trees that are obtained by inserting the keys, one at a time, under a random order into an initially empty block search tree (e.g. yields the Treap distribution). RBSTs are UR and the algorithms for searching are simple. In Section 2.1, we give a partial-rebuild algorithm for dynamic updates that occupies blocks of temporary MM storage and performs a number of writes (Os) to EM that is proportional to the structural change in the RBST. Its number of reads (Is) is bounded within a small factor of the writes. We prove three performance bounds for -RBSTs in terms of block-reads, block-writes, and blocks stored.
Section 3 shows that searching reads expected blocks, and that RBST depth is with high probability for all . If , RBST depth is with high probability . Section 4 shows that -RBSTs have an expected asymptotic load-factor of at least for every . Combining both analysis techniques, we show in Section 5 that dynamic updates perform expected block-writes and expected block-reads.
See Table 1 for an overview of the write bounds (Os), and read bounds (Is), of the known bottom-up B-Treap and our proposed top-down RBST. To compare the update write-bounds (Os), note that RBSTs have constant writes for all that are admissible in B-Treaps, whereas B-Treaps write at least logarithmic blocks. This is a significant asymptotic improvement. To compare the update read-bounds (Is), note that the factor for all with that are admissible in B-Treaps, whereas the factor for close to . That is, the RBST bounds improve on all B-Treap bounds. Thus, RBSTs are simple UR search trees, for all block sizes , that simultaneously provide improved search, storage utilization, and write-efficiency, compared to the bottom-up B-Treap [20].
Comparing also with non-UR structures, -RBSTs are, to the best of our knowledge, the first EM structure that provides storage-efficiency and write-efficiency for every update. The decisive novelty of our tree structure is that all block-nodes of the primary layer use the maximum fan-out and that the block-nodes of the secondary layer use a weight-proportional fan-out, i.e. proportional to the number of keys that are stored in the subtree. The design of our secondary layer structures has thus nothing in common with the “message-buffers” in -trees. Our result is incomparable to IO-optimized -trees: Though RBSTs are UR and provide better bounds for space, searching (Is), and writes (Os), the block-reads (Is) for updates are and thus worse than the IO-bound of the (non-UR) -Tree.
2 Randomized Block Search Trees
Unlike the well-known B-Trees that store between and keys in every block and have all leaves at equal depth, our definition of -RBSTs does not strictly enforce a minimum load for every single node. Like binary Treaps, the shape of an RBST , over a set of keys, is defined by an incremental process that inserts the keys in a random order into an initially empty block search tree structure, i.e. with ascending priority values . The actual algorithms for dynamic updates are discussed in Section 2.1.
This section defines the basic tree structure that supports both reporting queries (i.e. range search) and order-statistic queries (i.e. range counting and searching for the -th smallest value). Section 2.1 then shows how to avoid explicitly maintaining the subtree weights of nodes close to the root, which yields improved update write-bounds for the case that order-statistic queries are not required in applications. Next, we define the structure of RBSTs and state their invariants.
Let array size and buffer threshold be fixed integer parameters. Every RBST block node stores a fixed-size array for keys, child pointers, and one parent pointer. We use the key-value of minimum priority, from those keys contained in the node’s array, as the UR label of the block-node. Every child pointer is organized as a pair, storing the pointer to the child block and the total number of keys contained in that subtree. (Thus subtree weights are immediately known to the parent, without reading the respective child blocks.) All internal blocks of an RBST are full, i.e. they store exactly keys. Any block has one of two possible states, either non-buffering () or buffering (). Though -RBSTs will use , we first give the definition of the primary tree without the use of buffers (), i.e. every internal block remains non-buffering.
For , the tree shape is defined by the following incremental process that inserts the keys in ascending order of their priority . Starting at the root block, the insertion of key first locates the leaf block of the tree using the search tree property. If the leaf is non-full, is emplaced in the sorted key array of the node. Otherwise, a new child block is allocated (according to the search tree property) and is stored therein. This concludes the definition of the tree structure for .
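The incremental process above can be sketched in a few lines of Python. The block size `B = 4`, the integer keys, and the seed are arbitrary choices for illustration, and the insertion order plays the role of ascending priority values; buffers are omitted (buffer threshold zero):

```python
import random

B = 4  # hypothetical block size for illustration

class Block:
    def __init__(self):
        self.keys = []        # sorted key array, at most B keys
        self.children = {}    # separator-interval index -> child Block

def insert(root, key):
    # descend along the search-tree property; the first non-full block on
    # the path absorbs the key, otherwise a new child block is allocated
    node = root
    while len(node.keys) >= B:
        i = sum(1 for k in node.keys if k < key)   # interval index of key
        node = node.children.setdefault(i, Block())
    node.keys.append(key)
    node.keys.sort()

random.seed(42)
keys = random.sample(range(1000), 100)  # insertion order = priority order
root = Block()
for k in keys:
    insert(root, k)
print(len(root.keys))  # → 4; the root holds the B smallest-priority keys
```

Note that once a block is full its key array never changes again: every later key has a larger priority and descends into a child, which matches the invariant that internal blocks store the smallest-priority keys of their subtree.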
Figure 1: Active separators of a block are shown with dotted vertical lines and the horizontal lines are . The left tree has and the right tree has buffering subtrees. Note that all blocks on a root-to-leaf path, except the last, are full.
Invariant 1: Any internal block stores those keys that have the smallest priority values in its subtree. (For example, and yield the well-known Treap and search trees with fan-out .)
See Figure 1 (left) for an example on keys with that consists of blocks. It demonstrates that leaves may merely store one key in their block. We address this storage inefficiency issue next.
The “high-level idea” for is to delay generating new leaves until the subtree contains a sufficiently large number of keys. To this end, we use secondary structures, which we call buffers, that replace the storage layout of all those subtrees that contain keys. (Though we use the word “buffer” as it describes this purpose best, our definition has nothing to do with the “message buffers” in [13, 2].)
Invariant 2: All remaining block nodes of the primary structure are full and their children are either a primary block or a buffer.
There are several possible ways for UR organization of buffers, for example using a list of at most blocks that are sorted by key values. We propose the following structure that leads to stronger bounds.
Secondary UR Trees: Buffers with weight proportional fan-out.
Our buffers are search trees that consist of nodes similar to the primary tree. They also store the keys with the smallest priority values, from the subtree, in their block. However, the fan-out of internal nodes varies based on the number of keys in the subtree. Define as the fan-out bound for a subtree of weight . We defer the calibration of the parameter to the start of the proof of Theorem 9. The subtree weight of buffering nodes is maintained explicitly, with the aforementioned in-pointers. To obtain UR, the buffer layout is again defined recursively solely based on the set of keys in the buffer and the random permutation .
For fan-out bound , the subtree root has at most one child, which yields a list of blocks. The keys inside each block are stored in ascending key order, and the blocks in the list have ascending priority values. That is, the first block contains those keys with the smallest priorities, the second block contains, from the remaining keys, those with the smallest priorities, and so forth.
For fan-out bound , we also store the keys with the smallest priorities in the root. We call those keys with the smallest priority values the active separators of this block. The remaining keys in the subtree are partitioned in sets of key values, using the active separators only. Each of these key sets is organized recursively, in a buffer, and the resulting trees are referenced as the children of the active separators.
This concludes the definition of our tree structure.
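The recursive buffer layout can be sketched as follows. This is an illustration under stated assumptions: the concrete fan-out bound, its calibration constant `ALPHA`, and the convention that a fan-out of `f` uses the `f - 1` smallest-priority root keys as active separators are our choices, not the paper's calibration (which is deferred to the analysis):

```python
import random

B = 4      # hypothetical block size
ALPHA = 2  # hypothetical calibration constant for the fan-out bound

def fanout(w):
    # hypothetical weight-proportional fan-out bound for weight w, capped at B
    return max(1, min(B, w // (ALPHA * B)))

def build_buffer(items):
    # items: list of (priority, key) pairs; the layout depends only on the
    # key set and the priority order, so the buffer is uniquely represented
    if not items:
        return None
    items = sorted(items)  # ascending priority
    node = {"keys": sorted(k for _, k in items[:B]), "children": []}
    rest = items[B:]
    if not rest:
        return node
    f = fanout(len(items))
    if f == 1:
        # fan-out 1: the remaining keys form a list of blocks in priority
        # order, with keys sorted inside each block
        node["children"] = [build_buffer(rest)]
    else:
        # the f - 1 smallest-priority root keys act as active separators
        # and partition the remaining keys; each part recurses
        seps = sorted(k for _, k in items[:f - 1])
        parts = [[] for _ in range(f)]
        for p, k in rest:
            parts[sum(1 for s in seps if s < k)].append((p, k))
        node["children"] = [build_buffer(part) for part in parts if part]
    return node

random.seed(5)
buf = build_buffer([(random.random(), k) for k in range(40)])
print(len(buf["keys"]))  # → 4
```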
See the right part of Figure 1 for an example of the -RBST on the same permutation (-coordinates) and keys (-coordinates) that consists of blocks. (See [31] for Cartesian Tree representations.) The active separators are shown in black, non-active separators in gray, and the block nodes as rectangles.
Some remarks on the definition of primary and secondary RBST nodes are in order. The buffer is UR since the storage layout only depends on the priority order of the keys, the number of keys in the buffer, and the order of the key values in it. Note that summing the weight values of the child pointers yields the weight of the subtree, without additional block reads. Moreover, whenever an update in the secondary structure brings the buffer’s root above the threshold, i.e. , we have that its root contains those keys with the smallest priorities from the set and all keys are active separators, i.e. the buffer root immediately provides both invariants of blocks from the primary tree.
Note that this definition of RBSTs not only naturally contains the randomized search trees of block size , but also allows a smooth trade-off towards a simple list of blocks (respectively and ). Though leaves are non-empty, it is possible that the tree shape defined by the random permutation yields leaves with load as low as in RBSTs.
As stated at the beginning of this section, the definition above does not yield an efficient algorithm to actually maintain RBSTs. The remainder of this section specifies our algorithms for searching, dynamic insertion, and dynamic deletion in RBSTs. Our analysis is presented in Sections 3, 4, and 5.
Successor and Range Search in -RBSTs.
As with ordinary B-Trees, we determine, with a binary search on the key array, the successor of the search value , i.e. the smallest active separator key in the block that has a value of at least , and follow the respective child pointer. If the current search node is buffering, then we additionally check the set of non-active separators in the block (i.e. all keys) during the search to find the actual successor of from . The search descends until the list, i.e. block nodes with fan-out , in the respective RBST buffer is found. Since the lists are sorted by priority, we check all keys, by scanning the blocks of the list, to determine the successor of . This point-search extends to range-queries in the ordinary way by considering the two search paths of the range’s boundaries.
To summarize, successor and range search occupies blocks in main memory, does not write blocks, and the number of block-reads is , where is the depth of the RBST.
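The descent can be sketched as follows for the primary tree alone (a plain Python illustration with a hypothetical block size; buffer lists and non-active separators are omitted). The search tracks the best candidate seen so far and reads one block per level:

```python
import bisect
import random

B = 4  # hypothetical block size

class Block:
    def __init__(self):
        self.keys, self.children = [], {}

def insert(root, key):
    # same incremental process as the definition of the primary tree
    node = root
    while len(node.keys) >= B:
        i = bisect.bisect_left(node.keys, key)
        node = node.children.setdefault(i, Block())
    bisect.insort(node.keys, key)

def successor(root, q):
    # smallest stored key >= q: binary search inside each block, then
    # descend into the matching separator interval
    best, node = None, root
    while node is not None:
        i = bisect.bisect_left(node.keys, q)
        if i < len(node.keys):
            cand = node.keys[i]
            best = cand if best is None else min(best, cand)
        node = node.children.get(i)
    return best

random.seed(7)
keys = random.sample(range(1000), 200)  # insertion order = priority order
root = Block()
for k in keys:
    insert(root, k)
print(successor(root, 500))
```

The candidate shrinks monotonically along the path because each child subtree lies strictly below the separator that was just inspected, so the minimum over the visited blocks is the true successor.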
2.1 Insertion and Deletion via Partial-Rebuilds
As with Treaps, the tree shape of -RBSTs is solely determined by the permutation of the keys. For , any update method that transforms into and vice-versa is a suitable algorithm for insertions and deletions. Unlike Treaps, which use rotations to always perform insertions and deletions at the leaf level, rotations in block search trees seem challenging to realize and analyze. We propose an update algorithm that rebuilds subtrees in a top-down fashion, which has vague similarities to rebalancing Scapegoat Trees.
The main difficulty for RBST update algorithms is to achieve IO-bounds for EM-access that are (near) proportional to the structural change in the trees, while having a worst-case bound on the temporary blocks needed in main memory. The partial-rebuild algorithm we introduce in this section has bounds on the number of block-writes (Os) in terms of the structural change of the update; the expected Os and Is are analyzed in Section 5.
Observation 1.
Let be a permutation on the key universe and the keys in the tree, after and before the insertion of . Let be the number of blocks in that are different from the blocks in and the number of blocks in that are different from the blocks in . Let and be the depth of the subtrees in and that contain those blocks.
The rebuild algorithm in this section performs block-writes to EM, reads blocks, and uses blocks of temporary MM storage during the update.
Our rebuild-algorithm aims at minimizing the number of block-writes (Os) in an update, using a greedy top-down approach that builds the subtree of in auxiliary EM storage while reading the keys in from UR EM. On completion, we delete the obsolete blocks from UR memory and write the result blocks back to UR EM to obtain the final tree . This way, our update algorithm will only require blocks of temporary storage in MM. The basic idea is as follows. Given a designated separator-interval when assembling block for writing to auxiliary EM, we determine those keys with the -smallest priority values by searching in , place them in the array of , determine the fan-out and the active separators of block node , and recursively construct the remaining subtrees for those keys within each section between active separators of . Eventually, we free the obsolete blocks of from UR and move the resulting subtree from the auxiliary to UR memory.
Next we discuss all possible rebuild cases when inserting a new key to the tree, their realization is discussed thereafter. For a block node , let denote the set of keys stored in the subtree of , the keys stored in the array of itself, , and . (For example, the key with is the UR label of the block . See also Figure 1 for .)
Consider the search path for in . Starting at the root, we stop at the first node that i) must store in its array (i.e. ) or ii) that requires rebuilding (i.e. must increase its fan-out). To check for case ii), we use the augmented child references of buffering node to determine the number of keys in the resulting subtree . Thus, both fan-outs, pre-insertion and post-insertion, are known without additional block reads from the stored subtree weights.
If neither i) nor ii) occurs, the search halts in a buffer list () and we simply insert the key and rewrite all subsequent blocks of the list in one linear pass. Note that, if the list is omitted (as there are no keys in the active separator interval), the insertion must simply allocate a new block that solely stores to generate the output RBST. It remains to specify the insertion algorithm for case i) and ii). See Figure 2 for an overview of all cases.
In ii), is a buffer node that must increase its fan-out. It suffices to build two trees from one subtree, each containing the top keys in their root. If the new key is contained in this interval, we pass as additional “carry” parameter to the procedure. If is not contained in this interval, the insertion continues on the respective child of that must contain . (See Figure 2 bottom.) Note that subtrees of are built in auxiliary EM, regardless of .
In i), is an internal node that stores keys and there are four cases, depending on whether is an active separator in and whether the fan-out stays or increases (see Figure 2). In all cases, it suffices to rebuild at most three subtrees: two trees for the two separator-intervals that are incident to , and one tree that contains the two subtrees of the separator-intervals incident to the key that is displaced from block by the insertion of , i.e. .
To complete the insertion algorithm, it remains to specify our top-down greedy rebuild procedure for a given interval between two active separators.
First, we isolate the task of finding the relevant keys, needed for writing the next block to auxiliary EM within the following range-search function top.
Given a subtree root in , a positive integer , and query range , we compute those keys from that have the smallest priority values, together with their respective new subtree weights , based on a naïve search that visits (in ascending key order) all descendants of that overlap with the query range.
For our and rebuilds, we run the respective top-query on the designated interval range and check if is stored in the output block or beneath it.
Then we determine the fan-out from the range count results , which determines the active separators in the output block.
We allocate one block in auxiliary EM for each non-empty child-reference and set their parent pointers.
Finally, we recursively rebuild the subtrees for the intervals of the active sections, passing them the reference to the subtree root in that holds all necessary keys for rebuilding its subtree.
Note that for this form of recursion, using parent pointers allows to implement the rebuild procedure such that only a constant number of temporary blocks of MM storage suffice.
After the subtrees are built in the auxiliary EM, we delete the old blocks from UR and move the blocks to UR to obtain the tree .
Note that deletion entails the same cases for rebuilding sections of keys between active separators. Moreover, if a deletion leads to a buffering block (), then all its children are buffering and thus store their subtree weight together with the reference to it. Thus, determining the fan-out for deletions requires no additional block reads.
Clearly, the number of block writes is and the total number of block reads of all top-calls is , from a charging argument:
That is, any one block of the blocks in old tree is only read by a top call if the key range of intersects the query range of the top call, i.e. .
Consequently contains the smallest key, the largest key, or all keys of .
The number of either such reads that is charged to a block (from the blocks) is bounded by , since those blocks from the output tree that issue the calls to top have nested intervals that are contained in each other, by the search tree property (of the result blocks).
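The greedy top-down rebuild, with top as the range-search helper, can be sketched as follows. This is an illustration under simplifying assumptions: the old subtree is represented as an in-memory list of (priority, key) pairs rather than EM blocks, all separators are treated as active, and buffer/fan-out handling is omitted:

```python
import random

def top(items, m, lo, hi):
    # keys with the m smallest priorities inside the open range (lo, hi);
    # items stands in for the old subtree that would be read from EM
    in_range = sorted((p, k) for p, k in items if lo < k < hi)
    return in_range[:m], len(in_range)  # block content and range count

def rebuild(items, lo, hi, B=4):
    # greedy top-down rebuild of one separator interval: materialize the
    # next block, then recurse between its (active) separators
    head, w = top(items, B, lo, hi)
    if w == 0:
        return None
    keys = sorted(k for _, k in head)
    node = {"keys": keys, "children": []}
    bounds = [lo] + keys + [hi]
    for a, b in zip(bounds, bounds[1:]):
        child = rebuild(items, a, b, B)
        if child is not None:
            node["children"].append(child)
    return node

random.seed(3)
items = [(random.random(), k) for k in range(50)]
tree = rebuild(items, float("-inf"), float("inf"))
print(sorted(tree["keys"]))
```

Only a constant number of partially assembled blocks is live at any time on the recursion path, which mirrors the constant temporary MM storage of the actual algorithm.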
Our bounds for Observation 1 in terms of writes (Os) and reads (Is) will follow from our analysis of the depth and size of RBSTs in the next sections.
3 Bounds for Depth
Since successful tree searches terminate earlier, it suffices to analyze the query cost for keys , i.e. unsuccessful searches. In this section, we bound the expected block reads for searching for some fixed in the block trees , where the randomness is over the permutations , i.e. the set of bijections .
Consider the sequence of blocks of an RBST on the search path from the root to some fixed . Since the subtree weight is a (strictly) decreasing function, we have that the fan-out of the nodes is a non-increasing function that eventually assumes a value . From the definition of , we have that there are (in the worst case) at most blocks in the search path with . Thus, it suffices to bound the expected number of internal blocks with , which all have the search tree property .
Lemma 2.
Let be a set of keys, , and an integer. The expected number of primary tree nodes () in an -RBST on that contain in their interval is . The expected number of secondary tree nodes () that have in their interval is .
In particular, the expected number of blocks in the search path of is .
Further, the depth of an -RBST on is with high probability less than .
The block structure of RBSTs does not allow a straightforward extension of the Treap analysis based on “Ancestor-Events” [27], or of the “Backward-Analysis” of point-location cost in geometric search structures [15, Chapter 6.4], to obtain bounds on block-reads (Is) that are stronger than the high-probability -depth bound of ordinary Treaps. Our proof technique is vaguely inspired by a top-down analysis technique of randomized algorithms for static problems that considers the “average conflict list” size. However, several new ideas are needed to analyze a dynamic setting where the location is user-specified and not randomly selected by the algorithm. Our proof uses a partition in layers whose sizes increase exponentially with factor and a bidirectional counting process of keys in a layer of that partition.
Proof.
Recall that there are in the worst case nodes with fan-out on the search path of . Consider the active separators of the RBST blocks that contain in their search-interval. This set consists of two sequences and of strictly decreasing and increasing key values, respectively. Since their priority values are strictly increasing, i.e. , we have from the known bounds of ordinary Treaps that with high probability [27, Lem. 4.8]. Thus, RBST depth is with high probability less than .
To show our sharper search bound, let random variable be the number of primary tree nodes that contain in their interval . Partition with intervals of the form for indices . E.g. permutation induces an assignment of keys to an unique layer index, i.e. with . For internal node let and is the maximum over the set. Note that , since every internal node stores exactly keys. Thus, counts nodes that either have both or have that has a larger layer index than . We bound the expected number of nodes of the first kind, since there are in the worst case at most from the second kind. Defining for every layer index a random variable that counts the tree nodes in that layer
we have that , where is the total number of blocks of the first kind. The remainder of the proof shows bounds for .
Let for each index , , and . Thus, , and . Define to be the number of consecutive keys from that are less than but not contained in . Analogously, random variable counts those larger than and is the number of consecutive keys of , whose range contain , but do not contain elements from . Since all separators of primary nodes are active and internal nodes store exactly keys, we have for every .
Next we bound based on a sequence of binary events: Starting at , we consider the elements in descending key-order and count as “successes” if until one “failure” () is observed. If has no more elements, the experiment continues on in ascending key order. Defining as the number of successes on termination, we have . The probability to obtain failure, after observing successes, is , which is at least for all and .
Hence where random variable , i.e. the number of trials to obtain one failure in a sequence of independent Bernoulli trials with failure probability . Since , we have .
We thus have for all , which shows the lemma’s statement for the primary tree nodes.
Since there are in the worst case nodes with fan-out on a search path, it remains to bound the expected number of secondary nodes with a fan-out in on a search path. If there are any such nodes to count, consider the top-most buffering node and let be its subtree weight. For any fixed integer , random variables regarding the subtree only depend on the relative order of the keys, which are uniform from . Since the expected number of keys in either of the sections is , the expected number of keys in the section of has an upper bound of the form . Since the bound holds for all , this bound holds unconditionally. The lemma’s expectation bound on the nodes follows from the fact that all secondary nodes (with ) of a search path store exactly keys. Thus, the expected number of such nodes is .
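As a quick empirical sanity check of the logarithmic depth behaviour, the primary tree alone (buffers omitted) can be built and measured; the parameters n = 2000 and B = 8 and the seed are arbitrary choices for illustration:

```python
import math
import random

def build(keys, B):
    # primary tree only: insert keys in random priority order; a full
    # block routes the key into its separator interval's child
    root = {"keys": [], "children": {}}
    for key in keys:
        node = root
        while len(node["keys"]) >= B:
            i = sum(1 for k in node["keys"] if k < key)
            node = node["children"].setdefault(i, {"keys": [], "children": {}})
        node["keys"].append(key)
    return root

def depth(node):
    return 1 + max((depth(c) for c in node["children"].values()), default=0)

random.seed(0)
n, B = 2000, 8
d = depth(build(random.sample(range(10 * n), n), B))
print(d, round(math.log(n, B), 2))  # observed depth vs. log_B(n)
```

The measured depth stays within a small multiple of log_B(n), in line with the high-probability depth bound.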
Next, we show our depth bound that improves on the simpler bound above. We first give a tail-estimate for the number of primary nodes and then an estimate for the number of secondary nodes on a search-path to the fixed query point .
Lemma 3.
Let . The random variable is with high probability .
We will tighten the analysis above by considering the relative priority orders of the keys that are counted in layer , where . Indeed, even if random variable exceeds its mean by a large factor, the blocks of a search path to can only contain all keys if their relative priority order also has a very specific, deep block structure. This observation allows us to obtain a much stronger tail bound.
Proof.
Let be the number of keys from that are stored in the block of a node on the search path to , i.e. . We will show for the random variables that to obtain a tail bound that is strong enough for a union bound over all possible queries . Recall from Stirling’s formula that for all .
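The Stirling estimate invoked here is presumably the standard lower bound $k! \ge (k/e)^k$ (the exact form is lost in this rendering); a quick numeric sanity check of that standard inequality:

```python
import math

# Standard consequence of Stirling's formula: k! >= (k/e)^k for all k >= 1,
# equivalently log(k!) >= k * (log k - 1).  Checked via lgamma(k+1) = log(k!).
def stirling_lower_bound_holds(k):
    return math.lgamma(k + 1) >= k * (math.log(k) - 1)

print(all(stirling_lower_bound_holds(k) for k in range(1, 5000)))
```

The gap between the two sides is of order log(sqrt(2*pi*k)), so the comparison is numerically robust for all tested k.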
To observe keys from layer stored in blocks of a search path, the keys stored in the last block must all have priorities larger than the first keys, the keys in the second to last block must all have priorities larger than the first keys, and so forth. Further, each of the relative orders of any one such batch of keys does not affect whether they are stored in the block of a search path. Thus the probability that keys have relative priorities to form blocks of a search path is at most
where the last inequality is due to and being true for all . Thus .
For , and we have
From the layer partition, we have that the outcome of is independent of , i.e. bounds for are with respect to and regardless of the actual set , and all relative priority-orders of the keys in are equally likely. Since across all layers the probability bound for the sum is maximized when all factors have equal thresholds, we have
where the third-last inequality uses that , the second-last inequality uses that and , and the last inequality uses that .
Lemma 4.
The depth of a buffering subtree is with high probability . If , then the depth of a buffering subtree is with high probability .
We believe that Lemma 4 cannot be improved substantially without redesigning the secondary structures, update algorithms, and proofs. Specifically, parameter constellations with small provide a major obstacle in obtaining better bounds.
Proof.
For the number of secondary tree nodes, which have and store exactly keys, we observe that having such nodes on a search path requires that the topmost node has one child with weight , that the second topmost node has one child with weight , and so forth, where the RBST parameter . Thus, at least nodes must have weight , and consequently a fan-out value of at least . Using our bound of Corollary 17 on the fan-out of a child, we obtain that the probability of having such nodes on a search path is at most , i.e. a high-probability bound for all .
Thus, the number of secondary nodes is at most with probability at least , and with probability at least .
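The depth bounds above generalize the classic binary-Treap situation, where the depth of a search path is O(log n) with high probability. As a point of reference (this is the binary baseline, not the RBST itself), the following sketch measures search-path depth using the classic characterization that a key k is an ancestor of a query key q iff k holds the minimum priority among all keys between k and q:

```python
import random

def treap_depth_of_key(priorities, q):
    """Depth of key q in the binary treap on keys 0..n-1: key k is an
    ancestor of q iff k holds the minimum priority among all keys with
    index between k and q (inclusive)."""
    depth = 0
    for k in range(len(priorities)):
        lo, hi = min(k, q), max(k, q)
        if priorities[k] == min(priorities[lo:hi + 1]):
            depth += 1  # counts q itself plus all its proper ancestors
    return depth

def average_depth(n, trials=60, seed=7):
    rng = random.Random(seed)
    pr, total = list(range(n)), 0
    for _ in range(trials):
        rng.shuffle(pr)
        total += treap_depth_of_key(pr, rng.randrange(n))
    return total / trials

# expected depth of a random key in a random treap is roughly 2*ln(n)
print(average_depth(256))
```

For n = 256 the average lands near 2 ln 256, about 11, in line with the logarithmic depth that the block-structured bounds of Lemmas 3 and 4 refine.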
4 Bounds for Size using a Top-Down Analysis
Our analysis will frequently use the following characterization of the partitions of a set of keys that are induced by the elements of the smallest priority values from the set.
Observation 5.
There is a bijective mapping between the partitions on keys, induced by the first keys from , and the solutions to the equation , where the variables are non-negative integers. This bijection implies that the solutions of the equation occur with the same probability.
In other words, is the number of keys in the -th section beneath the root of , where . Thus, can be considered as the number of consecutive keys from that are between the -th and -th key stored in the root. For example, in an RBST on the keys and block size , a root block consisting of the key values , , is characterized by the assignment . Next we analyze the effect of our secondary structures, since the size of RBSTs without buffer () can be dramatically large. E.g. the expected number of blocks for subtrees of size is
In this example, buffering stores the whole subtree using only one additional block.
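The stars-and-bars correspondence of Observation 5 can be verified exhaustively for small parameters. The sketch below (variable names n and B are illustrative, standing for the key count and the number of root separators) identifies the root's separators with the smallest-priority keys and checks that the induced section-size vectors are exactly the non-negative solutions of x_1 + ... + x_{B+1} = n - B, each arising from exactly one separator set:

```python
from itertools import combinations
from math import comb

def section_sizes(n, separators):
    """Gap sizes induced by the sorted separator positions among keys 0..n-1."""
    s = sorted(separators)
    bounds = [-1] + s + [n]
    return tuple(bounds[i + 1] - bounds[i] - 1 for i in range(len(bounds) - 1))

def check_bijection(n, B):
    """Each B-subset of top-priority keys yields a distinct non-negative
    solution of x_1 + ... + x_{B+1} = n - B, and every solution occurs."""
    comps = {section_sizes(n, sep) for sep in combinations(range(n), B)}
    assert len(comps) == comb(n, B)  # injective, and the counts match exactly
    assert all(sum(c) == n - B and len(c) == B + 1 for c in comps)
    return True

print(check_bijection(10, 3))
```

Since every B-subset of keys is equally likely to receive the B smallest priorities under a uniformly random permutation, the bijection shows each solution occurs with probability 1 / C(n, B), as Observation 5 states.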
4.1 Buffering for UR trees with load-factor
Our top-down analysis of the expected size, and thus load-factors, of -RBSTs frequently uses the following counting and index exchange arguments.
Lemma 6 (Exchange).
We have for all .
Proof.
Due to Observation 5, each solution to occurs with the same probability. Hence, we can calculate by counting the number of solutions where and dividing it by the total number of solutions.
Thus, .
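The counting step in Lemma 6 can be made concrete with the shift substitution: solutions with a fixed coordinate at least t correspond bijectively to solutions of the same equation with right-hand side reduced by t. The snippet below (illustrative names; i indexes a section, t a threshold) compares exhaustive enumeration of equally likely separator sets against the resulting ratio of binomial counts, and checks that the probability is the same for every section, as the exchange argument asserts:

```python
from itertools import combinations
from math import comb

def tail_probability(n, B, i, t):
    """Exact P[x_i >= t], enumerating all equally likely separator sets."""
    hits = total = 0
    for sep in combinations(range(n), B):
        bounds = [-1] + sorted(sep) + [n]
        total += 1
        hits += (bounds[i + 1] - bounds[i] - 1 >= t)
    return hits / total

def shifted_formula(n, B, t):
    # substituting x_i -> x_i + t leaves the solutions of
    # x_1 + ... + x_{B+1} = (n - B) - t, of which there are C(n - t, B)
    return comb(n - t, B) / comb(n, B)

print(abs(tail_probability(10, 3, 0, 4) - shifted_formula(10, 3, 4)) < 1e-12)
```

Both the section-independence and the closed-form ratio hold exactly, since each separator set is equally likely by Observation 5.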
Next, we show our size bound for an -RBST with keys, where the priorities are from a uniform distribution over the permutations in . Our analysis crucially relies on the basic fact that restricting on an arbitrary key subset of cardinality yields the uniform distribution on . Random variable denotes the space, i.e. the number of blocks used by the RBST on a set of keys, denotes the number of full blocks, and the number of non-full blocks. Next we show a probability bound for the event that a given section, say the -th, contains a number of keys in a certain range.
Lemma 7.
For any and , we have .
Proof.
We have and .
These basic facts allow us to compute the following expressions.
Lemma 8.
We have .
Proof.
By elementary computation, we have
| (1) | ||||
Note that an RBST on keys with block size has . Thus, we have from Lemma 7 that
| (2) | |||
and the lemma follows by using (2) for the last summation in (1). We are now ready to state our size bound.
Theorem 9 (Size).
The expected number of non-full blocks in an -key -RBST is at most .
Our proof is by induction on , where the base cases are due to the number of non-full blocks that are occupied by the secondary buffer structures (Appendix A).
Proof.
Observation 18 states that for all with . Moreover, Theorem 21 shows that for . (Recall that .) Next, we set sufficiently large for the following proof. For simplicity, we will use and instead of and respectively. We can conclude from Theorem 21 that for ,
The last inequality holds because and .
For , we prove the theorem by induction. For each index , define as the number of non-full blocks in the -th section beneath the root (of an RBST with keys). Note that only depends on , i.e. the number of keys in the -th section, and their relative priorities. We thus have
Using Lemma 6, we have that the summation term has equal value for each section . Since, for , the root is full and not counted in , we have
| (3) |
To prove the inequality it suffices to show that the right-hand side of (3) is at most . This is true if and only if . Since and , it suffices to show that the value of the sum is not positive. This holds if and only if
| (4) |
Using Lemma 7 on the left-hand and Lemma 8 on the right-hand side, it remains to show
| (5) |
By elementary computation, this inequality holds for all . (See Appendix B.)
Since each full block contains distinct keys, i.e. , we have shown that . Thus, the load-factor, i.e. the relative utilization of the allocated blocks, satisfies:
Corollary 10 (Load-Factor).
Let and . The expected number of blocks in an -key -RBST is at most , i.e. the expected load-factor is at least .
Proof.
The load-factor is . Since is a convex function, Jensen’s inequality gives . Using Theorem 9 yields .
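The convexity step in this proof is the familiar direction of Jensen's inequality for x -> 1/x: the mean of reciprocals dominates the reciprocal of the mean. A minimal sketch with synthetic block counts (the values are hypothetical, not derived from an RBST) shows the inequality holds pointwise on any empirical distribution:

```python
import random

def empirical_jensen(samples=10_000, seed=3):
    """For any positive random variable S, E[1/S] >= 1/E[S] because x -> 1/x
    is convex; Corollary 10 applies this with S the number of allocated blocks."""
    rng = random.Random(seed)
    vals = [rng.randint(10, 20) for _ in range(samples)]
    mean_of_inverse = sum(1.0 / v for v in vals) / samples
    inverse_of_mean = samples / sum(vals)
    return mean_of_inverse, inverse_of_mean

mi, im = empirical_jensen()
print(mi >= im)  # always True: this is the AM-HM inequality on the sample
```

Because the inequality is a deterministic consequence of convexity, no concentration argument is needed to pass from the expected block count of Theorem 9 to the load-factor lower bound.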
Corollary 11 (Range-Search).
Let . The expected number of block-reads for range reporting in -RBSTs is , where is the number of results.
5 Bounds for Dynamic Updates via Partial-Rebuilding
The weight-proportional fan-out in the design of our secondary UR tree structure has the advantage that it avoids the need for additional restructuring work whenever a subtree must change its state between buffering and non-buffering due to updates (cf. Section 2.1). Together with the space bound from the last section, the observation that the partial-rebuild algorithm for updating RBSTs rebuilds at most three subtrees beneath the affected node, regardless of its fan-out, allows us to show our bound on the expected structural change.
Using the counting technique from Lemma 2, it is easy to show that is at most . The proof of the theorem will use the following, stronger bound.
Lemma 12 (Subtree Weight).
Let be a key and random variable denote the subtree weight of node that stores . Then .
Proof.
Let random variable be the largest set of consecutive keys that have and . Note that , and iff . Recall from binary Treaps that for any , i.e. where . We will show an upper bound of by arguing for the case . Indeed, if holds for given , then
Observe that partitions the keys in into keys smaller than and keys larger than , where . Though the priority orders of a set of keys are restricted to those that have , we have that all relative orders of the keys in remain equally likely. Consider the case that contains exactly many separators smaller than for some given integer , i.e. there are separators larger than . Let the random variables denote the key sets of the respective sections smaller than and for the sections larger than . That is, we have and regardless of the priority orders. We consider the relative -orders of those separators in and use the fact (regarding ancestors in binary Treaps) that the entire keys of the -th section are in if , which has probability . Now, using that for all , we have
Since , we have that , thus .
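The binary-Treap fact recalled in this proof is the classic ancestor characterization: a key y lies in the subtree of key x exactly when x holds the minimum priority among all keys between x and y, which happens with probability 1/(d+1) for key distance d. A simulation sketch (key values and parameters are illustrative):

```python
import random

def in_subtree(priorities, x, y):
    """Classic treap fact: key y lies in the subtree rooted at key x iff x
    holds the minimum priority among all keys between x and y (inclusive)."""
    lo, hi = min(x, y), max(x, y)
    return priorities[x] == min(priorities[lo:hi + 1])

def membership_probability(n, x, y, trials=100_000, seed=5):
    rng = random.Random(seed)
    pr = list(range(n))
    hits = 0
    for _ in range(trials):
        rng.shuffle(pr)
        hits += in_subtree(pr, x, y)
    return hits / trials

# under uniformly random priorities the probability is exactly 1/(|x - y| + 1),
# so the expected subtree weight of x is sum over y of 1/(|x - y| + 1) = O(log n)
print(membership_probability(20, 4, 9))
```

Summing these membership probabilities over all keys recovers the logarithmic expected subtree weight that Lemma 12 sharpens for the block setting.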
Theorem 13.
The expected number of writes (Os) of an update in an -key -RBST is . Updates have expected reads (Is).
Recall that our algorithm reads blocks only from one search path and during the top-queries.
To bound and from Observation 1, we first analyze the writes (Os).
This extends to the reads (Is), using a high-probability bound for depth (e.g. Lemma 2 or Lemma 3 and 4), since both random variables are in the worst-case .
Proof.
We show our bound on the expected total structural change for deleting a key from the RBST. (Insertion of the key would count the same number of blocks.) Let be the sets of keys before and after the deletion of , respectively, where . Let be the subtree root that is subject to the partial rebuild algorithm from Section 2.1. From Lemma 2, determining the search path of takes expected reads (Is). Either is a primary node with or is a buffering node with .
For , we have in the worst-case at most blocks in the buffer’s list.
For a primary node with , contains in its block . From Lemma 12, we have that .
Now, for any given integer of keys in the subtree, we have from Theorem 9 that the expected number of blocks is and, from linearity of expectation, that three of the subtrees contain at most blocks.
Thus, the contribution towards the expected number of block-writes is at most . Moreover, using the high-probability bound of Lemma 2, we have that the depth of the subtree of is less than for any given integer .
Thus, the contribution towards the total number of block-reads, issued by the top-queries of the update operation (cf. Observation 1), is at most .
For a buffering node with , has . By Theorem 21, the subtree rooted at has expected blocks, and writing three of its subtrees has expected block-writes. For the block-reads, consider the number of keys in some active section of . If for some , then the worst-case depth of this child is . By Lemma 14, the probability is at most . Thus, the expected number of block-reads is for each of the three affected subtrees of .
References
- [1] Arne Andersson and Thomas Ottmann. New Tight Bounds on Uniquely Represented Dictionaries. SIAM J. Comput., 24(5):1091–1101, 1995. doi:10.1137/S0097539792241102.
- [2] Lars Arge. The buffer tree: A technique for designing batched external data structures. Algorithmica, 37(1):1–24, 2003. doi:10.1007/s00453-003-1021-x.
- [3] Ricardo A. Baeza-Yates. The expected behaviour of B+-trees. Acta Informatica, 26(5):439–471, 1989. doi:10.1007/BF00289146.
- [4] Rudolf Bayer and Edward M. McCreight. Organization and maintenance of large ordered indices. Acta Informatica, 1:173–189, 1972. doi:10.1007/BF00288683.
- [5] Naama Ben-David, Guy E. Blelloch, Jeremy T. Fineman, Phillip B. Gibbons, Yan Gu, Charles McGuffey, and Julian Shun. Parallel algorithms for asymmetric read-write costs. In Proc. Symposium on Parallelism in Algorithms and Architectures (SPAA’16), pages 145–156, 2016. doi:10.1145/2935764.2935767.
- [6] Michael A. Bender, Jonathan W. Berry, Rob Johnson, Thomas M. Kroeger, Samuel McCauley, Cynthia A. Phillips, Bertrand Simon, Shikha Singh, and David Zage. Anti-persistence on persistent storage: History-independent sparse tables and dictionaries. In Proc. 35th Symposium on Principles of Database Systems (PODS’16), pages 289–302. ACM, 2016. doi:10.1145/2902251.2902276.
- [7] Michael A. Bender, Martin Farach-Colton, Jeremy T. Fineman, Yonatan R. Fogel, Bradley C. Kuszmaul, and Jelani Nelson. Cache-oblivious streaming B-trees. In Proc. Symposium on Parallelism in Algorithms and Architectures (SPAA’07), pages 81–92. ACM, 2007. doi:10.1145/1248377.1248393.
- [8] Guy E. Blelloch, Jeremy T. Fineman, Phillip B. Gibbons, Yan Gu, and Julian Shun. Sorting with asymmetric read and write costs. In Proc. Symposium on Parallelism in Algorithms and Architectures (SPAA’15), pages 1–12, 2015. doi:10.1145/2755573.2755604.
- [9] Guy E. Blelloch and Daniel Golovin. Strongly History-Independent Hashing with Applications. In Proc. 48th Symposium on Foundations of Computer Science (FOCS’07), 2007. doi:10.1109/FOCS.2007.36.
- [10] Guy E. Blelloch, Daniel Golovin, and Virginia Vassilevska. Uniquely Represented Data Structures for Computational Geometry. In Joachim Gudmundsson, editor, Proc. 11th Scandinavian Workshop on Algorithm Theory (SWAT’08), pages 17–28. Springer, 2008. doi:10.1007/978-3-540-69903-3_4.
- [11] Guy E. Blelloch and Margaret Reid-Miller. Fast set operations using treaps. In Proc. 10th Symposium on Parallel Algorithms and Architectures (SPAA’98), pages 16–26, 1998. doi:10.1145/277651.277660.
- [12] Milutin Brankovic, Nikola Grujic, André van Renssen, and Martin P. Seybold. A simple dynamization of trapezoidal point location in planar subdivisions. In Proc. 47th International Colloquium on Automata, Languages, and Programming (ICALP’20), pages 18:1–18:18, 2020. doi:10.4230/LIPIcs.ICALP.2020.18.
- [13] Gerth Stølting Brodal and Rolf Fagerberg. Lower bounds for external memory dictionaries. In Proc. 14th Symposium on Discrete Algorithms (SODA’03), pages 546–554, 2003. URL: https://dl.acm.org/doi/10.5555/644108.644201.
- [14] Douglas Comer. The ubiquitous B-tree. ACM Comput. Surv., 11(2):121–137, 1979. doi:10.1145/356770.356776.
- [15] Mark de Berg, Otfried Cheong, Marc J. van Kreveld, and Mark H. Overmars. Computational Geometry: Algorithms and Applications, 3rd Edition. Springer, 2008. doi:10.1007/978-3-540-77974-2.
- [16] James R. Driscoll, Neil Sarnak, Daniel Dominic Sleator, and Robert Endre Tarjan. Making data structures persistent. J. Comput. Syst. Sci., 38(1):86–124, 1989. doi:10.1016/0022-0000(89)90034-2.
- [17] Amr Elmasry, Mostafa Kahla, Fady Ahdy, and Mahmoud Hashem. Red-black trees with constant update time. Acta Informatica, 56(5):391–404, 2019. doi:10.1007/s00236-019-00335-9.
- [18] Eric Demaine et al. Lecture notes on persistent data structures. https://web.archive.org/web/20230601205149/http://courses.csail.mit.edu/6.851/spring21/scribe/lec1.pdf.
- [19] Daniel Golovin. Uniquely represented data structures with applications to privacy. PhD thesis, Carnegie Mellon University, 2008.
- [20] Daniel Golovin. B-Treaps: A Uniquely Represented Alternative to B-Trees. In Proc. 36th International Colloquium on Automata, Languages, and Programming (ICALP’09), pages 487–499, 2009. doi:10.1007/978-3-642-02927-1_41.
- [21] Daniel Golovin. The B-skip-list: A simpler uniquely represented alternative to B-trees. CoRR, abs/1005.0662, 2010. arXiv:1005.0662.
- [22] Dana Jansens. Persistent binary search trees. https://web.archive.org/web/20230526175144/https://cglab.ca/~dana/pbst/, 2014. Accessed: 2023-05-26.
- [23] Donald E. Knuth. The Art of Computer Programming, Volume III: Sorting and Searching. Addison-Wesley, 1973.
- [24] Klaus Küspert. Storage Utilization in B*-Trees with a Generalized Overflow Technique. Acta Informatica, 19:35–55, 1983. doi:10.1007/BF00263927.
- [25] Ketan Mulmuley. Computational Geometry: An Introduction Through Randomized Algorithms. Prentice Hall, 1994.
- [26] Moni Naor and Vanessa Teague. Anti-persistence: history independent data structures. In Proc. 33rd Symposium on Theory of Computing (STOC’01), pages 492–501, 2001. doi:10.1145/380752.380844.
- [27] Raimund Seidel and Cecilia R. Aragon. Randomized Search Trees. Algorithmica, 16(4/5):464–497, 1996. doi:10.1007/BF01940876.
- [28] Lawrence Snyder. On uniquely represented data structures. In Proc. 18th Symposium on Foundations of Computer Science (SFCS’77), pages 142–146, 1977. doi:10.1109/SFCS.1977.22.
- [29] Rajamani Sundar and Robert Endre Tarjan. Unique Binary-Search-Tree Representations and Equality Testing of Sets and Sequences. SIAM J. Comput., 23(1):24–44, 1994. doi:10.1137/S0097539790189733.
- [30] Jeffrey Scott Vitter. External memory algorithms and data structures. ACM Comput. Surv., 33(2):209–271, 2001. doi:10.1145/384192.384193.
- [31] Jean Vuillemin. A unifying look at data structures. Commun. ACM, 23(4):229–239, 1980. doi:10.1145/358841.358852.
- [32] Andrew Chi-Chih Yao. On Random 2-3 Trees. Acta Informatica, 9:159–170, 1978. doi:10.1007/BF00289075.
Appendix A Induction Base Case: Expected Size of Buffering Trees
Lemma 14.
Let be some non-negative integer random variables where . For integers and , we have the tail bounds
Proof.
Since for , , the lemma holds.
Lemma 15.
For , we have .
Proof.
.
For a subtree of size , fan-out parameter is equal to . Therefore, for with , we have . Moreover, for , if and only if .
Lemma 16.
For a subtree of size , where , and all , we have
where is the number of keys in the i-th section beneath the root.
Proof.
Since , we have
Using Lemma 15, we subtract from the numerator and denominator of . Thus
Since is at most , we have is positive and conclude
as stated.
Corollary 17.
For , we have
Observation 18.
For a subtree with , the data structure is a simple list and has at most one non-full block.
Lemma 19.
For a subtree with , the expected number of non-full blocks .
Proof.
We will design an algorithm calculating an upper bound on the number of non-full blocks. The root has keys, and the remaining keys are randomly split into two sections with and keys, i.e. . For , if , there is at most one non-full block, and the algorithm does not need to proceed in this section anymore. If , to observe non-full blocks, the keys should be partitioned again. It is impossible for both and to have since it implies that , which is a contradiction. So at each iteration, either the algorithm stops or continues with one of the sections. In the last step, there are two sections each with at most one non-full block. Thus, the expected number of iterations plus is an upper bound on .
Clearly, is indeed an increasing function of n, so losing keys of the root at each iteration results in fewer non-full blocks. To obtain a weaker upper bound, we assume that at each step, the key with the highest priority splits the remaining keys, instead of keys, into two new sections. These keys include the other keys of the root as well. It suffices to show that the expected number of iterations is at most .
At iteration , keys with the highest priorities split keys into sections with keys, where . Round occurs if any of the s exceeds . Since the top keys are chosen uniformly at random, all solutions to the equation have equal probabilities. Next, we find an upper bound on for . Set parameters , , and of Lemma 14 to , , and respectively. Thus
Since is at most , we have
The last inequality is an application of Lemma 15. Using the union bound, we have
This is the probability of round k+1 occurring, conditioned on round k having occurred. Define to be the number of iterations and to be an indicator random variable of whether the -th iteration happened. If
It follows that
Lemma 20.
For a subtree with , the expected number of non-full blocks .
Proof.
keys beneath the root are separated into three sections with and keys. As in Lemma 19, we can prove that it is impossible to have for every , or for some . By the symmetry of the random variables , we can conclude
Lemma 16 implies that . (Set and .)
To complete the proof we show that .
Consequently, we get the desired inequality .
The two top-priority keys split the keys into three parts with , , and keys, where . The i-th part includes all keys beneath the root and some extra keys from the root, so for each , we have .
The same technique as in Lemma 14 yields .
Combining the previous inequalities together with implies
Subtract from the numerator and denominator. By Lemma 15, we have:
as required.
Theorem 21.
For a subtree of size , we have that the expected number of non-full blocks .
Proof.
We will prove the statement by induction on . Observation 18, Lemma 19, and Lemma 20 provide the bases of the induction. The problem is divided into subproblems by using random variables .
Note that , so the induction hypothesis is applicable to it. For , we will use the induction hypothesis and the base cases to obtain
Appendix B Additional Proofs
Lemma 22.
Consider a subtree with keys, where is equal to . We have:
Proof.
The proof is by induction on . For the statement is
| (6) | ||||
This is true because .
Assume the lemma holds for each . We will prove the lemma for , i.e. we will show:
holds for all . The inequality is true for , since
Hence, it suffices to observe that
which is true.
For , we will use the identity equation together with the induction assumption. We have
