
Tight Bounds for Heavy-Hitters and Moment Estimation in the Sliding Window Model

Shiyuan Feng EECS, Peking University, Beijing, China William Swartworth Carnegie Mellon University, Pittsburgh, PA, USA David Woodruff Carnegie Mellon University, Pittsburgh, PA, USA
Abstract

We consider the heavy-hitters and $F_p$ moment estimation problems in the sliding window model. For $F_p$ moment estimation with $1 < p \le 2$, we show that it is possible to give a $(1\pm\epsilon)$ multiplicative approximation to the $F_p$ moment with $2/3$ probability on any given window of size $n$ using $\tilde{O}(\frac{1}{\epsilon^p}\log^2 n + \frac{1}{\epsilon^2}\log n)$ bits of space. We complement this result with a lower bound showing that our algorithm gives tight bounds up to factors of $\log\log n$ and $\log\frac{1}{\epsilon}$. As a consequence of our $F_2$ moment estimation algorithm, we show that the heavy-hitters problem can be solved on an arbitrary window using $O(\frac{1}{\epsilon^2}\log^2 n)$ space, which is tight.

Keywords and phrases:
sketching, streaming, heavy hitters, sliding window, moment estimation
Category:
Track A: Algorithms, Complexity and Games
Copyright and License:
© Shiyuan Feng, William Swartworth, and David Woodruff; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation Sketching and sampling
Related Version:
Full Version: http://arxiv.org/abs/2504.21175
Funding:
Authors William Swartworth and David Woodruff were supported by a Simons Investigator Award and Office of Naval Research award number N000142112647.
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

In some situations, such as monitoring network traffic, data is being received at an enormous rate and it is not always worthwhile to store the entire dataset. Instead we may only be interested in some statistics of the data, which we can hope to estimate from a data structure that uses much less space. This motivated the development of streaming algorithms, which process a sequence of updates one at a time, while maintaining some small sketch of the underlying data.

In some applications, it makes sense to focus on only the most recent updates. For instance, when tracking IP addresses over a network [20, 13], we might be more interested in tracking the heavy users on a week-to-week basis rather than over longer time periods. Even if we are interested in longer time periods, we may still be interested in obtaining more fine-grained information about how the heavy users change over time.

This type of problem motivated the sliding window model for streaming algorithms which has a considerable line of work [12, 8, 7, 21]. There are several versions of the sliding window model, but for us our main goal is to respond to queries about the previous n elements of the stream.

We consider perhaps the two most foundational problems in streaming algorithms: moment estimation and heavy-hitters. Both of these problems have received an enormous amount of attention in the streaming literature, beginning with [11], who proposed the CountSketch algorithm for finding heavy items, all the way to [6], who gave the optimal space bound for insertion-only streams (streams where items may be added, but not removed), removing one of the log factors of CountSketch. While fully tight bounds are known for heavy-hitters (up to a $\log(1/\epsilon)$ factor in insertion streams), in the sliding window model tight bounds are still unknown. Previous work [7] gave a $\log^2 n$ lower bound, and a $\log^3 n$ upper bound in the sliding window model. We close this gap, and show that a $\log^2 n$ dependence suffices for computing heavy-hitters on a particular sliding window.

In the $F_p$ moment estimation problem, the goal is to approximate the $p$th moment of the frequency vector of all items inserted into the stream. Like heavy-hitters, this problem has a long history going back to [1]. Since then, a line of work has led to nearly tight bounds both for the large moments $p>2$, which do not admit polylogarithmic sized sketches [2, 15], as well as the small moments $1 \le p \le 2$ [14, 18] that we consider here. While this problem has received considerable attention in the sliding window model [8, 10, 21], there still remains a gap in the literature. We aim to close this gap for the problem of estimating the $F_p$ moment on a sliding window.

1.1 Our models

The sliding window model

There are several slightly different models that one can consider. By default we consider an insertion-only stream of length $m$ consisting of items $x_1,\ldots,x_m$. In the sliding window model, there is a window of size $W$ consisting of the $W$ most recent updates to the stream. A correct sliding window algorithm may be queried for the desired statistic at any time $t$, and it must produce a correct estimate for the statistic on the stream $x_{t-W+1},\ldots,x_t$ of the $W$ most recent updates. When we refer to the failure probability for such an algorithm, we mean the probability of failure for a single query (for an arbitrary value of $t$).

One could also consider the problem of tracking a stream statistic over the sliding window at all times. This could be accomplished in our framework simply by making the failure probability 1/m.

One could also consider a more general model where there is no fixed window. Instead, at a given query time $t$, the algorithm receives a positive integer $n$ and must estimate the stream statistic on the portion of the stream in $(t-n,t]$. The difference is that $n$ is now not known until runtime. Our algorithms all apply to this more general model, and our lower bounds hold in the standard sliding window model. This model may be useful if one wants to observe how the stream has changed up to the present time. For instance, if we were monitoring network traffic, we might be interested in which IP addresses occurred frequently in the past hour, day, week, etc.

𝑭𝒑 moment estimation

In the version of the $(\varepsilon,\delta,F_p)$ moment estimation problem that we consider, we receive a series of insertion-only updates $x_1,\ldots,x_m$. At any point in time $t$ the algorithm may be asked to output an estimate $\hat{F}$ of the $F_p$ moment of the window $W=[t-n+1,t]$ consisting of updates $x_{t-n+1},\ldots,x_t$. If $f_i(W)$ is the frequency count of universe item $i$ over the window, then the $F_p$ moment over the window is defined to be $F_p(f(W))=\sum_i (f_i(W))^p$. In order to be correct, the algorithm must satisfy $\hat{F}=(1\pm\varepsilon)F_p(f(W))$ with probability at least $1-\delta$.

Heavy hitters

We say that item $i$ is $(\varepsilon,p)$-heavy for a frequency vector $f$ if $f_i\ge\varepsilon\|f\|_p$. In the $(\varepsilon,p)$-heavy hitters problem the goal is to output a collection $\mathcal{H}$ of $O(\varepsilon^{-p})$ universe items such that all $(\varepsilon,p)$-heavy items for $f(W)$ are contained in $\mathcal{H}$, and such that all elements of $\mathcal{H}$ are at least $c\varepsilon$-heavy, where $c>0$ is a constant.

1.2 Prior Work

[8] introduced the smooth histogram approach for solving problems in the sliding window model. For adjacent substreams $A$, $B$, and $C$, a function $f$ is $(\alpha,\beta)$-smooth if $(1-\beta)f(A\cup B)\le f(B)$ implies $(1-\alpha)f(A\cup B\cup C)\le f(B\cup C)$, for some parameters $0<\beta\le\alpha<1$. Throughout the stream, they use the smooth histogram to remove unnecessary information and only keep track of the times at which $F_p$ changes by a $(1+\epsilon^p)$ factor. For $p\in(1,2]$, their work achieves space $O(\epsilon^{-(2+p)}\log^3 n)$.

[21] introduced a new tool called a Difference Estimator. The idea is to estimate the difference between $F_p(u+v)$ and $F_p(v)$ with additive error $\epsilon F_p(v)$; given that $\min(F_p(u),F_p(u+v)-F_p(v))\le\gamma F_p(v)$, the dimension of the sketch only needs to be $\tilde{O}(\gamma^{2/p}\epsilon^{-2})$. They use the idea in [8] to maintain a constant factor partition on the top level. Then, using a binary tree-like structure to give a fine-grained partition, they use a Difference Estimator to exclude the part that is not in the window.

[6] gives an algorithm using $\tilde{O}(\epsilon^{-2}\log n)$ bits of space that provides strong tracking of $\ell_2$ in insertion-only streams, and [4] generalizes this to $p\in(0,2)$. They introduce a weak tracking property, which is a key motivation for our Strong Estimator.

For heavy hitters, [7] gives an $\tilde{O}(\epsilon^{-p}\log^3 n)$ algorithm, with the $\ell_2$ estimation being the bottleneck. If given the $\ell_2$ estimate over the window, their algorithm works in space $\tilde{O}(\epsilon^{-p}\log^2 n)$. This is the result we apply to obtain improved heavy-hitter bounds from our $F_2$ estimator. They also prove an $\Omega(\epsilon^{-p}\log^2 n)$ lower bound for the heavy hitter problem in the sliding window model.

1.3 Our Results

Table 1: Results in the Sliding Window Model.

  Problem (in Sliding Window Model)      Previous Bound                              New Bound
  $\ell_p$-Heavy Hitters, $p\in(0,2]$    $\tilde{O}(\epsilon^{-p}\log^3 n)$,         $\tilde{O}(\epsilon^{-p}\log^2 n)$
                                         $\Omega(\epsilon^{-p}\log^2 n)$ [7]
  $F_p$ Estimation, $p\in(1,2]$          $\tilde{O}(\epsilon^{-2}\log^3 n)$ [21]     $\tilde{O}(\epsilon^{-p}\log^2 n+\epsilon^{-2}\log n)$,
                                                                                     $\Omega(\epsilon^{-p}\log^2 n+\epsilon^{-2}\log n)$

𝑭𝒑 estimation

We give an algorithm for Fp moment estimation for 1<p2 that achieves optimal bounds in the sliding window model, in terms of the accuracy ε and the window size n.

Theorem 1.

There is an algorithm that uses

$O\left(\left(\epsilon^{-p}\log^2 n+\epsilon^{-2}\log n\log^4\tfrac{1}{\epsilon}\right)\cdot\left(\log\tfrac{1}{\epsilon}+(\log\log n)^2\right)\right)=\tilde{O}(\varepsilon^{-p}\log^2 n+\varepsilon^{-2}\log n)$

bits of space, and solves the $(\varepsilon,F_p)$ moment estimation problem in the sliding window model.

Note that this result is for approximating the $F_p$ on any fixed window with $2/3$ probability. To track $F_p$ at all times, and guarantee correctness at all times with good probability, one could apply our algorithm $O(\log m)$ times in parallel, and take the median estimate at each time.

Heavy hitters

As a consequence of our F2 approximation scheme, we obtain an optimal heavy-hitters algorithm by combining with the results of [7].

Theorem 2.

Given $\epsilon>0$ and $0<p\le2$, there exists an algorithm in the sliding window model that, with probability at least $\frac{2}{3}$, outputs all indices $i\in[U]$ for which the frequency $f_i\ge\epsilon\,\ell_p^W$, and reports no indices $i\in[U]$ for which $f_i\le\frac{\epsilon}{12}\,\ell_p^W$. The algorithm has space complexity $O(\varepsilon^{-p}\log^2 n(\log\frac{1}{\epsilon}+\log\log n))$.

An earlier version of [7] claimed this bound for the stronger problem of tracking the heavy-hitters on the window at all times. Unfortunately, this original bound was incorrect, and their algorithm actually gives a $\log^3 n$ dependence, even in our setting where we are interested in obtaining good success probability for a single window.

Lower bounds

We also show that our $F_p$ estimation algorithms are tight in the sliding window model up to $\log\frac{1}{\varepsilon}\cdot\log\log n$ factors. Specifically, we show

Theorem 3.

Fix $p\in(1,2]$. Suppose that $\mathcal{A}$ is a streaming algorithm that operates on a window of size $n$, and outputs a $(1\pm\varepsilon)$ approximation of $F_p$ on any fixed window with $9/10$ probability. For $n\ge U$, $\mathcal{A}$ must use at least

$\Omega\left(\varepsilon^{-p}\log^2(\varepsilon U)+\varepsilon^{-2}\log(\varepsilon^{1/p}U)\right)$

space.

The first term in the lower bound is our contribution. The second term applies to general insertion-only streams, and follows from a recent work of [5]. While the lower bound only applies to $p>1$, our proof also applies to $p=1$ if the stream is allowed to contain empty insertions that do not affect the frequency counts, but that move the window. (Without empty insertions, $p=1$ is trivial, since $F_1$ is always the window size.)

1.4 Notation

We use $m$ to denote the length of the stream, $U$ to denote the universe size, and $u_1,\ldots,u_m\in[U]$ to denote the elements in the stream. For the sliding window, we use $n$ to denote the window size. We use $\mathrm{poly}(n)$ to denote a constant degree polynomial in $n$. We assume $m\le\mathrm{poly}(n)$ and $U\le\mathrm{poly}(n)$.

For an interval $[l,r]\subseteq[1,m]$, we use $x(l,r)$ to denote the frequency vector of elements $u_l,u_{l+1},\ldots,u_r$. For $p\in(0,2]$, we use $\ell_p(l,r)$ to denote the $\ell_p$ norm of the frequency vector $x(l,r)$, more specifically $\|x(l,r)\|_p$, and $F_p(l,r)$ to denote the $F_p$ moment of the frequency vector $x(l,r)$, more specifically $\|x(l,r)\|_p^p$. For a window $W=[l,r]$, we also use $\ell_p^W$ or $F_p^W$ to denote the $\ell_p$ norm or $F_p$ frequency moment on window $W$. We use $a=(1\pm\epsilon)b$ to denote $a\in[(1-\epsilon)b,(1+\epsilon)b]$.

1.5 Technical Overview

𝑭𝒑 estimation

To motivate our approach, consider approximating $F_2$ to within a constant factor. We first recall the framework introduced by [9] for the sliding window model. The rough idea is to start a new $F_p$ estimation sketch at each point in the stream. This clearly gives a sliding window estimate, but uses far too much space. However, the observation is that if sketch $i$ and sketch $i+2$ give similar estimates for $F_p$ at a given point in the stream, then they will continue to do so for the remainder of the stream. The estimate given by sketch $i+1$ is sandwiched between the estimates from the neighboring sketches, so sketch $i+1$ is redundant, and we may safely delete it. After deleting all redundant sketches, we are left with $O(\log n)$ sketches (for constant $\varepsilon$) with geometrically increasing $F_p$ estimates.
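As an illustration, this redundancy-deletion rule can be sketched in a few lines of Python. This is a minimal sketch under our own naming; `ExactF2` is a stand-in for a real (randomized) $F_2$ sketch, used only to show the deletion logic:

```python
class ExactF2:
    """Stand-in for an F2 sketch: tracks the exact second moment (illustrative only)."""
    def __init__(self):
        self.counts = {}
        self.f2 = 0

    def update(self, item):
        c = self.counts.get(item, 0)
        self.f2 += 2 * c + 1          # (c+1)^2 - c^2
        self.counts[item] = c + 1

    def estimate(self):
        return self.f2


def process(stream, beta=0.5):
    """Start a sketch at every stream position; delete sketch i+1 whenever
    sketches i and i+2 give estimates within a (1 - beta) factor."""
    sketches = []  # (start_index, sketch), ordered by start; estimates decrease in i
    for t, item in enumerate(stream):
        sketches.append((t, ExactF2()))
        for _, sk in sketches:
            sk.update(item)
        i = 0
        while i + 2 < len(sketches):
            if sketches[i + 2][1].estimate() >= (1 - beta) * sketches[i][1].estimate():
                del sketches[i + 1]   # sandwiched between i and i+2, hence redundant
            else:
                i += 1
    return [start for start, _ in sketches]
```

On a stream of distinct items, the surviving start times have geometrically decreasing suffix $F_2$ values, so only $O(\log n)$ sketches remain at any time.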

Each of these $F_2$ sketches uses $O(\log n\log\frac{1}{\delta})$ bits of space to estimate $F_2$ with probability $1-\delta$, so this approach requires $O(\log^2 n\log\frac{1}{\delta})$ bits. Since we only store $\log n$ sketches at a time, it might seem that we can take $\delta=O(\frac{1}{\log n})$. This is a subtle point and has been a source of error in prior work.

In fact we cannot take $\delta=O(\frac{1}{\log n})$ if we are only guaranteed that our sketches correctly estimate $F_2$ with probability at least $1-\delta$. To see why, suppose that each sketch we create is correct with probability at least $1-\delta$ and is otherwise allowed to be adversarially incorrect with probability $\delta$. With $\delta=O(1/\log n)$, it is likely that two consecutive sketches will eventually both be incorrect. In that case, they can conspire to sandwich and wipe out all sketches in front of them, from which we will not be able to recover. While extreme, this example illustrates that an $F_p$ estimation sketch with a failure probability guarantee is not sufficient for us. This is why prior approaches for estimating the $\ell_p$ norm or $F_p$ moment in sliding windows require $\Omega(\log^3 n)$ bits of space, with one of the logarithmic factors arising from the need for sketches with high success rates. These high-success-rate sketches necessitate union bounds over all sketches, even though most of them are not kept by the algorithm when we make a query.

Of course we should not expect sketches to fail this way in practice. Could there be a somewhat stronger guarantee that rules out this type of adversarial behavior? In this work, we overcome this limitation by introducing a new tool: the Strong Estimator. This estimator enables a refined analysis that avoids the union-bound overhead, allowing us to achieve the first space complexity which breaks the $O(\log^3 n)$ bit barrier.

The intuition behind the Strong Estimator is that for a window $[l,r]$, we do not need our estimator to be correct (meaning, give a multiplicative $\epsilon$ approximation) on every sub-interval of this window; an additive error of an $\epsilon$ fraction of the $\ell_p$ norm of the window suffices for maintaining the timestamps. Thus we can avoid the union bound needed for all sub-intervals to be correct. The intuition is similar to the weak tracking of [4].

In Section 3, we show that the Strong Estimator combined with the simplest algorithm in [8] suffices to give an $\epsilon$ approximation to the $\ell_p$ norm of the sliding window. However, this simple algorithm uses $\tilde{O}(\epsilon^{-3p}\log^2 n)$ bits of space, which has a high dependence on $\epsilon$.

In Section 4, we introduce the Difference Estimator of [21] to improve the $\epsilon$ dependence. We first use our simple algorithm to give a constant factor partition on the top level. We then use a binary tree-like structure to make a more detailed partition between the top-level timestamps, and use a Difference Estimator to exclude the contribution that is not in the window. The algorithm in [21] requires $\tilde{O}(\epsilon^{-2}\log^3 n)$ space. The extra $\log n$ factor again comes from the high success probability requirement. Using the Strong Estimator and the structure of their algorithm, we can obtain an algorithm using $O(\epsilon^{-2}\log^2 n)$ bits of memory. We further observe that the bottleneck of the algorithm comes from the Difference Estimator, which can be stored as rounded values using the rounding procedure in [16]. We thus improve the space to $\tilde{O}(\epsilon^{-p}\log^2 n+\epsilon^{-2}\log n)$ bits, which gives a separation between $p=2$ and $p<2$. We further prove a lower bound on $F_p$ moment estimation over sliding windows, see below, showing that our algorithm is nearly optimal.

Heavy Hitters

[7] showed that if one has an $\ell_2$ estimation algorithm over a sliding window, then one can find the $\ell_2$ heavy hitters with $\tilde{O}(\epsilon^{-2}\log^2 n)$ additional bits of space. [17] shows that an $\epsilon^{p/2}$-heavy hitter algorithm for $\ell_2$ also finds the $\epsilon$-heavy hitters for $\ell_p$. Thus, using our simplest algorithm to provide a constant factor $\ell_2$ estimate over the window gives us an algorithm that uses $\tilde{O}(\epsilon^{-p}\log^2 n)$ bits of space and returns all the $\epsilon$-$\ell_p$ heavy hitters. This matches the $\Omega(\epsilon^{-p}\log^2 n)$ lower bound for heavy hitters in [7].

Lower Bounds

For our lower bound for $F_p$ estimation, we construct a stream that consists of roughly $\log(n)$ blocks, where each block has half the $F_p$ value of the prior block. Within each block, we place $\varepsilon^{-p}$ items that are each $(\varepsilon,p)$-heavy for the block, and arrange them so that each of these heavy items occurs in a contiguous “mini-block”. Overall, there are $\frac{1}{\varepsilon^p}\log(n)$ of these “mini-blocks”.

The main observation is that a correct streaming algorithm for $F_p$-moment estimation in the sliding window model must remember the location of each mini-block. Doing so requires $\log n$ bits of space per mini-block as long as there are at least, say, $n^{0.1}$ disjoint positions that each block can occur in, resulting in our $\Omega(\frac{1}{\varepsilon^p}\log^2 n)$ lower bound.

To see why our algorithm must remember each location, consider one of the heavy items $x$ in our construction, and suppose we want to decide whether the mini-block for $x$ occurs entirely before or entirely after stream index $i$. We first shift the sliding window so that it begins at index $i$. We would like to decide whether or not $x$ occurs in the resulting window. By the geometric decay of the blocks, $x$ remains heavy for the entire window. Then, to decide if $x$ remains in the window, we append $Q$ copies of $x$ to the window, where $Q$ is a rough estimate of the $F_p$ over the window. If $x$ does not occur in the window, then appending the $x$'s increases the $F_p$ by $Q^p$. On the other hand, if $x$ does occur roughly $\varepsilon Q$ times in the window, then the $F_p$ increases by $(\varepsilon Q+Q)^p-(\varepsilon Q)^p=\Theta((1+\varepsilon)Q^p)$. Since $Q^p$ is approximately the $F_p$ moment of the window, a $\Theta(\varepsilon)$ heavy-hitters algorithm can distinguish between these two cases. A minor issue is that, as stated, we would need to append the $x$'s without shifting the window past the index $i$. However, we can easily fix this by appending the $x$'s before appending the singletons that accomplish the shift. To make the above intuition precise, we give a reduction from the IndexGreater communication game in Section 5.
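As a quick numeric sanity check of the gap used in this reduction (the particular values of $p$, $\varepsilon$, and $Q$ below are arbitrary illustrative choices of ours, not from the construction):

```python
# Appending Q copies of item x to the window changes Fp by Q**p if x is
# absent, and by (eps*Q + Q)**p - (eps*Q)**p if x already occurs ~eps*Q times.
p, eps, Q = 1.5, 0.1, 10**6

increase_absent = Q**p
increase_present = (eps * Q + Q)**p - (eps * Q)**p

# The two increases differ by a multiplicative (1 + Theta(eps)) factor,
# which is exactly what a Theta(eps)-accurate algorithm can detect.
assert increase_present > (1 + eps / 2) * increase_absent
```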

2 Preliminary Results

In this section, we introduce some basic definitions and lemmas that form the foundation of our Strong Estimator. We also recall previous results for heavy hitters. We first require the following definition of $p$-stable distributions.

Definition 4 (p-stable distribution. [22]).

For $0<p\le2$, there exists a probability distribution $\mathcal{D}_p$ called the $p$-stable distribution such that for any positive integer $n$, with $Z_1,\ldots,Z_n\sim\mathcal{D}_p$ independently and any vector $x\in\mathbb{R}^n$, we have $\sum_{i=1}^n Z_i x_i\sim\|x\|_p Z$ for $Z\sim\mathcal{D}_p$.

The probability density function $f_X$ of a $p$-stable random variable $X$ satisfies $f_X(x)=\Theta\left(\frac{1}{1+|x|^{1+p}}\right)$ for $p<2$, while the normal distribution corresponds to $p=2$. Moreover, [19] details standard methods for generating $p$-stable random variables by taking $\theta$ uniformly at random from the interval $[-\frac{\pi}{2},\frac{\pi}{2}]$, $r$ uniformly at random from the interval $[0,1]$, and setting

$X=f(r,\theta)=\dfrac{\sin(p\theta)}{\cos^{1/p}(\theta)}\cdot\left(\dfrac{\cos(\theta(1-p))}{\log\frac{1}{r}}\right)^{\frac{1}{p}-1}$

These p-stable random variables are crucial to obtaining our Strong Estimator.
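For concreteness, the generation method above can be implemented in a few lines. This is a minimal sketch (the function name `p_stable` is ours); for $p=1$ it reduces to $\tan\theta$, a standard Cauchy variable:

```python
import math
import random

def p_stable(p, rng=random):
    """Draw one standard symmetric p-stable variable using the formula above."""
    theta = rng.uniform(-math.pi / 2, math.pi / 2)
    r = rng.random()
    while r == 0.0:                    # guard against log(1/0)
        r = rng.random()
    return (math.sin(p * theta) / math.cos(theta) ** (1 / p)) \
        * (math.cos(theta * (1 - p)) / math.log(1 / r)) ** (1 / p - 1)
```

A quick empirical check: for $p=1$ the median of the absolute value of a standard Cauchy variable is exactly 1, so the sample median of $|X|$ over many draws should be close to 1.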

Definition 5 (Strong Estimator).

Let $\epsilon_1,\epsilon_2\in(0,1)$ with $\epsilon_2\le\epsilon_1$, let $p\in(0,2]$, and let $u_1,u_2,\ldots,u_m$ be the sequence of stream elements. Let $x(i,j)$ denote the frequency vector for elements $u_i,\ldots,u_j$. We say that $f_p$ has the $(\varepsilon_1,\varepsilon_2,\delta)$ Strong Estimator property on the window $[l,r]$ if, with probability $1-\delta$, we have the bound $f_p(x(a,b))=(1\pm\epsilon_1)\ell_p(a,b)\pm\epsilon_2\ell_p(l,r)$ for all sub-windows $[a,b]\subseteq[l,r]$ simultaneously. We say that $f_p$ has the $(\varepsilon_1,\varepsilon_2,\delta)$ Strong Estimator property if it has the $(\varepsilon_1,\varepsilon_2,\delta)$ Strong Estimator property on all windows $[l,r]$.

For simplicity, we will use $f_p(a,b)$ to denote the estimate $f_p(x(a,b))$.

To support the construction and analysis of such an estimator, we rely on several probabilistic tools. Lemma 6 gives a Chernoff-type concentration bound for k-wise independent random variables. Lemma 7 provides tail bounds on the supremum of inner products between a p-stable random vector and a sequence with an increasing frequency vector.

Lemma 6 ([3]).

Let $X_1,\ldots,X_n\in\{0,1\}$ be a sequence of $k$-wise independent random variables, and let $\mu=\mathbb{E}\sum X_i$. Then

$\forall\lambda>0,\quad \Pr\left(\sum X_i\ge(1+\lambda)\mu\right)\le\exp(-\Omega(\min\{\lambda,\lambda^2\}\mu))+\exp(-\Omega(k))$
Lemma 7 ([4]).

Let $x^{(1)},x^{(2)},\ldots,x^{(m)}\in\mathbb{R}^n$ satisfy $0\le x^{(1)}\le x^{(2)}\le\cdots\le x^{(m)}$ entrywise. Let $Z\in\mathbb{R}^n$ have $k$-wise independent entries marginally distributed according to $\mathcal{D}_p$. Then for some $C_p$ depending only on $p$,

$\Pr\left(\sup_{k\le m}|\langle Z,x^{(k)}\rangle|\ge\lambda\|x^{(m)}\|_p\right)\le C_p\left(\frac{1}{\lambda^{2p/(2+p)}}+\frac{1}{k^{1/p}}\right)$

Finally, we include two additional lemmas that are essential for our heavy hitter detection algorithm in the sliding window model.

Lemma 8, due to Braverman et al. [7], shows that a constant-factor approximation to the $\ell_2$ norm within a window suffices to recover all $\epsilon$-heavy hitters under the $\ell_2$ norm, with provable accuracy guarantees and near-optimal space complexity.

Lemma 8 ([7]).

For a fixed window $W$, if given a $\frac{1}{2}$ approximation of $\ell_2^W$, then there is an algorithm that outputs all the $\epsilon$ heavy hitters for $\ell_2$ within the window, and can guarantee that all the output elements have frequency at least $\frac{\epsilon}{12}\ell_2^W$. This algorithm uses $O(\epsilon^{-2}\log^2 n(\log\frac{1}{\epsilon}+\log\log n))$ bits of space and has success probability $\frac{2}{3}$.

To extend this result to the $\ell_p$ setting for any $p\in(0,2]$, we use Lemma 9, which establishes that any algorithm capable of recovering $\epsilon^{p/2}$-heavy hitters under $\ell_2$ (with a suitable tail guarantee) also successfully identifies all $\epsilon$-heavy hitters under the $\ell_p$ norm.

Lemma 9 ([17]).

For any $p\in(0,2]$, any algorithm that returns the $\epsilon^{p/2}$-heavy hitters for $\ell_2$ satisfying the tail guarantee also finds the $\epsilon$-heavy hitters for $\ell_p$.

3 Warmup: Constant-factor 𝑭𝒑 approximation

In this section, we introduce our construction of the Strong Estimator. This estimator enables us to design a constant-factor approximation algorithm for $F_p$, and in combination with previous results, it leads to a nearly optimal algorithm for identifying $\epsilon$-heavy hitters over sliding windows.

3.1 Constructing a Strong Estimator

First, we show how to construct a space-efficient Strong Estimator as defined in Definition 5.

Construction 10.

Let $d$, $r$, and $s$ be parameters to be specified later. The Strong Estimator takes $\Pi\in\mathbb{R}^{d\times U}$ to be a random matrix with entries drawn according to $\mathcal{D}_p$, such that the rows are $r$-wise independent and all entries within a row are $s$-wise independent. For a frequency vector $x(a,b)$, the estimate $f_p(x(a,b))$ is given by $\mathrm{median}(|(\Pi x(a,b))_1|,|(\Pi x(a,b))_2|,\ldots,|(\Pi x(a,b))_d|)$.
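To make the median estimator concrete, here is a minimal sketch for the $p=1$ case (Cauchy entries). For simplicity it uses fully independent entries rather than the limited independence of Construction 10, and all names are ours:

```python
import math
import random

def cauchy_matrix(d, U, rng):
    """d x U matrix of i.i.d. standard Cauchy entries (tan of a uniform angle)."""
    return [[math.tan(rng.uniform(-math.pi / 2, math.pi / 2)) for _ in range(U)]
            for _ in range(d)]

def estimate_l1(Pi, x):
    """median(|(Pi x)_1|, ..., |(Pi x)_d|).  By 1-stability each coordinate of
    Pi x is ||x||_1 times a standard Cauchy, and the median of |Cauchy| is
    exactly 1, so the median of the absolute coordinates estimates ||x||_1."""
    sketch = [sum(row[j] * x[j] for j in range(len(x))) for row in Pi]
    coords = sorted(abs(v) for v in sketch)
    return coords[len(coords) // 2]
```

For general $p$ one would instead draw the entries from $\mathcal{D}_p$ and rescale the median by the median of $|\mathcal{D}_p|$.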

The following lemma shows that, by appropriately setting the parameters d, r, and s, the above construction yields a valid Strong Estimator.

Lemma 11.

Let $\epsilon_1,\epsilon_2\in(0,1)$ with $\epsilon_2\le\epsilon_1$, $\delta\in(0,1)$, and $p\in[1,2]$. Given a stream of elements $u_1,u_2,\ldots,u_m$, Construction 10 yields an $(\epsilon_1,\epsilon_2,\delta)$ Strong Estimator by setting the parameters as follows:

$d=\Theta\left(\epsilon_1^{-2}\left(\log\tfrac{1}{\epsilon_2}+\log\tfrac{1}{\delta}\right)\right),\quad r=\Theta\left(\log\tfrac{1}{\epsilon_2}+\log\tfrac{1}{\delta}\right),\quad s=\Theta(\epsilon_1^{-p}).$

To show the space efficiency, we also analyze the space complexity of storing the matrix Π used in the construction.

Lemma 12.

The memory required to store the matrix $\Pi$ for an $(\epsilon_1,\epsilon_2,\delta)$ Strong Estimator is

$O\left(\epsilon_1^{-p}\left(\log\tfrac{1}{\epsilon_2}+\log\tfrac{1}{\delta}\right)\log(mU)\right)$ bits.

Lastly, we show an additional property of the Strong Estimator, which is crucial for solving $F_p$ estimation over sliding windows.

Lemma 13 (Additional Property for the Strong Estimator).

The $(\varepsilon_1,\varepsilon_2,\delta)$ Strong Estimator $f_p$ has the following property:

For a sequence of elements in the stream $u_1,u_2,\ldots,u_m$ and a window $[l,r]$, with probability $1-\delta\log r$, for any interval $[a,b]$ with $1\le a<l$ and $l\le b\le r$, we have $f_p(x(a,b))=(1\pm(\epsilon_1+2\epsilon_2))\ell_p(a,b)\pm2\epsilon_2\ell_p(l,r)$.

3.2 Constant-factor 𝑭𝒑 approximation

First, we recall the algorithm given in [8]: it maintains $O(\epsilon^{-p}\log n)$ timestamps to track the times at which $\ell_p$ changes by a factor of $(1+\epsilon^p)$. This algorithm requires $O(\log^3 n)$ bits of space, where one $\log n$ factor accounts for the number of sketches, another $\log n$ arises from the need for high success probability in order to apply a union bound over all sketches, and the final $\log n$ comes from the fact that each entry in the sketch requires $\log n$ bits.

Our key improvement over their algorithm is that, by using the Strong Estimator, we no longer require the sketches to have high success probability. The Strong Estimator provides guarantees over all sub-intervals directly, eliminating the need for a union bound.

We present an $\epsilon$-approximation algorithm for $\ell_p$ over sliding windows that uses $\tilde{O}(\epsilon^{-3p}\log^2 n)$ space. This is the first algorithm achieving $o(\log^3 n)$ space for this problem. As $\ell_p$ estimation and $F_p$ estimation are fundamentally the same, we do not distinguish between the two here.

The following theorem formalizes the guarantees of our approach:

Theorem 14.

For any fixed window $W$, with probability $1-\delta$, Algorithm 1 gives an $\epsilon$ approximation of $\ell_p^W$, and uses $O(\epsilon^{-3p}\log^2 n(\log\frac{1}{\epsilon}+\log\frac{1}{\delta}+\log\log n))$ bits of space.

We now apply our algorithm to the heavy hitter problem in sliding windows. Combining our $\frac{1}{2}$-approximation for $\ell_2$ with known results for heavy hitters (Lemma 8 and Lemma 9), we obtain the following result:

Theorem 15 (Main Theorem on Heavy Hitters).

Given $\epsilon>0$ and $0<p\le2$, there exists an algorithm in the sliding window model that, with probability at least $\frac{2}{3}$, outputs all indices $i\in[U]$ for which the frequency $f_i\ge\epsilon\,\ell_p^W$, and reports no indices $i\in[U]$ for which $f_i\le\frac{\epsilon}{12}\,\ell_p^W$. The algorithm has space complexity (in bits) $O(\frac{1}{\epsilon^p}\log^2 n(\log\frac{1}{\epsilon}+\log\log n))$.

Proof.

We use Algorithm 1 to obtain a $\frac{1}{2}$ approximation of $\ell_p^W$; the rest of the proof follows by Lemma 8 and Lemma 9.

Algorithm 1 Smooth Histogram with Strong Estimator.

Input: Stream $a_1,a_2,\ldots\in[U]$, window length $n$, approximation ratio $\epsilon\in(0,1)$, error rate $\delta\in(0,1)$, and parameter $p$.

Output: At each time $i$, output a $(1\pm\epsilon)$ approximation to $\ell_p$ in the window $[i-n,i]$.

Initialization:

  1. Generate the matrix $\Pi\in\mathbb{R}^{d\times U}$ for an $(\frac{\epsilon^p}{64},\frac{\epsilon^p}{128},\delta'=\frac{\delta}{O(\log m)})$ Strong Estimator, where $d=O(\epsilon^{-2p}\log\frac{1}{\epsilon\delta'})$.

  2. Generate the random matrix $\pi\in\mathbb{R}^{d'\times U}$ for an Indyk $p$-stable Sketch, where $d'=O(\epsilon^{-2}\log\frac{1}{\delta})$.

Let $f(D)$ be the estimated value of $\|x(D)\|_p$ from the Strong Estimator on data stream $D$, and $g(D)$ be the estimated value of $\|x(D)\|_p$ from the Indyk $p$-stable Sketch.

 

During the Stream:

Define $t_k(j)$ to be the $k$-th timestamp maintained when the stream reaches time $j$. The following lemma shows that the $\ell_p$ norm does not decrease significantly between two consecutive timestamps.

Lemma 16.

Let $p\in[1,2]$. For a fixed window $W=[i-n,i]$, with probability $1-\delta$, either $t_1(i)+1=t_2(i)$, or $\ell_p(t_2(i),i)\ge(1-\frac{\epsilon}{2})\ell_p(t_1(i),i)$.

We now prove Theorem 14.

Proof of Theorem 14.

Our Indyk $p$-stable Sketch gives an $\frac{\epsilon}{4}$ approximation with probability $1-\delta$. By a union bound over the event of Lemma 16 and the event that the Indyk $p$-stable Sketch is correct, and by adjusting the constant factor for $\delta$, we obtain an $\epsilon$ approximation of $\ell_p^W$ with probability $1-\delta$.

For the space, as the value $f$ is bounded by $\mathrm{poly}(n)$, the number of timestamps is bounded by $O(\epsilon^{-p}\log n)$.

For each timestamp, we keep a vector of dimension $d=O(\epsilon^{-2p}\log\frac{\log m}{\epsilon\delta})$ for the Strong Estimator, and a vector of dimension $d'=O(\epsilon^{-2}\log\frac{1}{\delta})$ for the Indyk $p$-stable Sketch. So the total space for the sketches is $O(\epsilon^{-3p}\log^2 n(\log\frac{1}{\epsilon}+\log\log n+\log\frac{1}{\delta}))$, as we have assumed $m\le\mathrm{poly}(n)$. The space to store the matrix is of lower order by Lemma 12.

4 Improving the 𝜺 dependence

4.1 Preliminary Results

Here we use $F(u)$ to denote $\|u\|_p^p$ for a frequency vector $u$.

First, we introduce the definition and construction of the Difference Estimator, which is essential in our algorithm that gives a tight bound on $F_p$ estimation over sliding windows.

Definition 17 (Difference Estimator. Definition 8.2, [21]).

Given a stream $\mathcal{S}$ and fixed times $t_1$, $t_2$, and $t_3$, let frequency vectors $u$ and $v$ be induced by the updates of $\mathcal{S}$ between times $[t_1,t_2)$ and $[t_2,t_3)$. Given an accuracy parameter $\varepsilon>0$ and a failure probability $\delta\in(0,1)$, a streaming algorithm $\mathcal{C}(t_1,t_2,t,\gamma,\varepsilon,\delta)$ is a $(\gamma,\varepsilon,\delta)$-suffix difference estimator for a function $F$ if, with probability at least $1-\delta$, it outputs an additive $\varepsilon F(v+w_t)$ approximation to $F(u+v+w_t)-F(v+w_t)$ for all frequency vectors $w_t$ induced by $[t_3,t)$ for times $t>t_3$, given $\min(F(u),F(u+v)-F(v))\le\gamma F(v)$ for a ratio parameter $\gamma\in(0,1]$.

Lemma 18 (Construction of Difference Estimator. Lemma 4.9, Lemma 3.6 [21]).

Let the dimension $d=O\left(\gamma^{2/p}\varepsilon^{-2}\left(\log\frac{1}{\varepsilon}+\log\frac{1}{\delta}\right)\right)$.

For $0<p<2$, it suffices to use a sketching matrix $A\in\mathbb{R}^{d\times U}$ of i.i.d. entries drawn from the $p$-stable distribution $\mathcal{D}_p$, with dimension $d$, to obtain a $(\gamma,\varepsilon,\delta)$-difference estimator for $F_p$. To store such a matrix $A$ requires $O(d\log n(\log\log n)^2)$ space.

Moreover, the estimate of $F(u+v)-F(v)$ is given by:

  1. For a parameter $q=3$, let each $z_i=\prod_{j=q(i-1)+1}^{qi}|(Au+Av)_j|^{p/q}$ and $z_i'=\prod_{j=q(i-1)+1}^{qi}|(Av)_j|^{p/q}$.

  2. Output the arithmetic mean of $(z_1-z_1'),(z_2-z_2'),\ldots,(z_{d/q}-z_{d/q}')$.

For $p=2$, it suffices to use a sketching matrix $M\in\mathbb{R}^{d\times U}$, in which each entry $M_{i,j}$ is a 4-wise independent random sign scaled by $\frac{1}{\sqrt{d}}$, to obtain a $(\gamma,\epsilon,\delta)$-difference estimator for $F_2$. To store such a matrix $M$ requires $O(d\log n)$ space.

The estimate of $F(u+v)-F(v)$ is given by $\|M(u+v)\|_2^2-\|Mv\|_2^2$.
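The $p=2$ case is simple enough to sketch directly. The following illustrative Python uses i.i.d. random signs instead of the 4-wise independent ones in the lemma, and the helper names are ours:

```python
import random

def sign_sketch_matrix(d, U, rng):
    """d x U matrix of random +/-1 entries scaled by 1/sqrt(d)."""
    s = d ** 0.5
    return [[rng.choice((-1.0, 1.0)) / s for _ in range(U)] for _ in range(d)]

def f2_difference(M, u, v):
    """Estimate F2(u+v) - F2(v) as ||M(u+v)||^2 - ||Mv||^2, which is unbiased
    since E||Mx||^2 = ||x||^2 for every fixed vector x."""
    n = len(u)
    uv = [u[j] + v[j] for j in range(n)]
    s_uv = [sum(row[j] * uv[j] for j in range(n)) for row in M]
    s_v = [sum(row[j] * v[j] for j in range(n)) for row in M]
    return sum(a * a for a in s_uv) - sum(a * a for a in s_v)
```

The point of the dimension bound is the variance: when $F_2(u)\le\gamma F_2(v)$, the error of this difference is driven by the small vector $u$, so roughly $d=O(\gamma\varepsilon^{-2})$ rows already give additive error $\varepsilon F_2(v)$, far fewer than the $\varepsilon^{-2}$ rows needed to estimate $F_2(u+v)$ and $F_2(v)$ separately to that precision.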

Next, we introduce the rounding procedure described in [16]. Although this procedure is originally designed for a distributed setting, the process of merging two sketches can be interpreted as two nodes transmitting their respective sketches to a parent node, which then performs a new rounding operation based on the information received from its children.

Definition 19 ([16] Rounding random variable).

For any real value $r$, let $i_r\in\mathbb{Z}$ and $\alpha_r\in\{-1,1\}$ be such that $(1+\gamma)^{i_r}\le\alpha_r r\le(1+\gamma)^{i_r+1}$. Now fix $p_r$ such that:

$\alpha_r r=p_r(1+\gamma)^{i_r+1}+(1-p_r)(1+\gamma)^{i_r}$

We then define the rounding random variable $\Gamma_\gamma(r)$ by

$\Gamma_\gamma(r)=\begin{cases}0 & \text{if } r=0\\ \alpha_r(1+\gamma)^{i_r+1} & \text{with probability } p_r\\ \alpha_r(1+\gamma)^{i_r} & \text{with probability } 1-p_r\end{cases}$
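A minimal implementation of this rounding (our own sketch; the helper name `round_rv` is ours):

```python
import math
import random

def round_rv(r, gamma, rng=random):
    """Gamma_gamma(r): round r to a signed power of (1 + gamma), unbiasedly."""
    if r == 0:
        return 0.0
    alpha = 1.0 if r > 0 else -1.0
    i = math.floor(math.log(abs(r)) / math.log(1 + gamma))
    lo = (1 + gamma) ** i
    hi = (1 + gamma) ** (i + 1)
    p = (abs(r) - lo) / (hi - lo)     # chosen so that E[round_rv(r, gamma)] = r
    return alpha * (hi if rng.random() < p else lo)
```

Since the output is determined by its sign and the level $i_r$, it can be stored in $O(\log\log n+\log\frac{1}{\gamma})$ bits per coordinate (for values bounded by $\mathrm{poly}(n)$) rather than $O(\log n)$, which is where the rounding saves space.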
Procedure 1 (Recursive Randomized Rounding).
  1. Choose a random vector $Z\in\mathbb{R}^n$ using shared randomness.

  2. Receive rounded sketches $r_{j_1},r_{j_2},\ldots,r_{j_{t_j}}$ from the $t_j$ children of node $j$ in the prior layer (if any such children exist).

  3. Compute $x_j=\langle X_j,Z\rangle+r_{j_1}+r_{j_2}+\cdots+r_{j_{t_j}}$.

  4. Compute $r_j=\Gamma_\gamma(x_j)$. If player $j\ne\mathcal{C}$, then send $r_j$ to the parent node of $j$ in $T$. If $j=\mathcal{C}$, then output $r_j$ as the approximation to $\langle Z,\mathcal{X}\rangle$.

Lemma 20 (Lemma 2, [16]).

Let $U$ be the universe size, $m$ be the total number of points in the graph, $d$ be the diameter of this graph, and $\epsilon,\delta\in(0,1)$. Fix $p\in[1,2]$, and let $Z=(Z_1,Z_2,\ldots,Z_U)\sim\mathcal{D}_p^U$. Then the above procedure, when run with $\gamma=(\epsilon\delta/(d\log(Um)))^C$ for a sufficiently large constant $C$, produces an estimate $r_\mathcal{C}$ of $\langle Z,\mathcal{X}\rangle$, held at the center vertex $\mathcal{C}$, such that $\mathbb{E}[r_\mathcal{C}]=\langle Z,\mathcal{X}\rangle$. Moreover, over the randomness used to draw $Z$, with probability $1-\delta$ for $p<2$, and with probability $1-e^{-1/\delta}$ for Gaussian $Z$, we have $\mathbb{E}[(r_\mathcal{C}-\langle Z,\mathcal{X}\rangle)^2]\le(\epsilon/\delta)^2\|\mathcal{X}\|_p^2$. Thus, with probability at least $1-O(\delta)$, we have

$|r_\mathcal{C}-\langle Z,\mathcal{X}\rangle|\le\epsilon\|\mathcal{X}\|_p$

Moreover, if $Z=(Z_1,Z_2,\ldots,Z_n)\in\mathbb{R}^n$ where each $Z_i\in\{-1,1\}$ is a 4-wise independent Rademacher variable, then the above bound holds with $p=2$ (and with probability $1-\delta$).

Now, we give our own rounding procedure.

Procedure 2.

Suppose we want to update the rounded sketch of a window $[l,r]$. The sketch has a matrix $\Pi\sim\mathcal{D}_p^{d\times U}$ for $p\in(0,2)$. For $p=2$, the sketch has a matrix $\Pi\in\{-1,1\}^{d\times U}$, with each entry being a Rademacher variable and each row being 4-wise independent. The precise sketch for window $[l,r]$ is $((\Pi x(l,r))_1,\ldots,(\Pi x(l,r))_d)\in\mathbb{R}^d$. Let $\mathcal{P}(\Pi x(l,r))\in\mathbb{R}^d$ be the sketch vector after the rounding procedure.

There are two cases. The first is being given a non-rounded sketch, and then rounding the sketch. In this case, for each $i\in[d]$, $\mathcal{P}(\Pi x(l,r))_i=\Gamma_\gamma((\Pi x(l,r))_i)$.

The second case is being given the rounded sketches of two sub-intervals $[l,k]$ and $[k+1,r]$. For this case, $\mathcal{P}(\Pi x(l,r))_i=\Gamma_\gamma(\mathcal{P}(\Pi x(l,k))_i+\mathcal{P}(\Pi x(k+1,r))_i)$.

Lemma 21.

Let $[l,r]$ be a window and $\Pi\in\mathbb{R}^{d\times U}$ be the sketching matrix (meaning each entry is sampled from $\mathcal{D}_p$ if $0<p<2$, or is a Rademacher variable if $p=2$), and suppose we run the procedure $\mathcal{P}$ described in Procedure 2 with parameter $\gamma=\left(\frac{\epsilon\delta}{dm\log(Um)}\right)^C$ to get a rounded vector for the sketch $\Pi x(l,r)$. Then for all $i\in[d]$, we have $|\mathcal{P}(\Pi x(l,r))_i-(\Pi x(l,r))_i|\le\epsilon\|x(l,r)\|_p$.

4.2 Algorithm Overview

The structure of our algorithm follows [21]. Let the current time be $r$ and the query window be $W=[r-n+1,r]$. Our algorithm first uses Algorithm 1 to maintain a constant factor partition on the top level. We will have $O(\log n)$ timestamps on the top level; let them be $t_1<t_2<\cdots<t_s$.

For each of the O(log n) pieces of the stream, we partition the substream [t_i, t_{i+1}] into more fine-grained timestamps. We maintain a binary-tree-like structure with β = O(log(1/ϵ^p)) levels, where each level j keeps roughly O(2^j) timestamps. Define t_{i,j,k} to be the k-th timestamp on level j under top-level timestamp t_i.

We use the Difference Estimator to estimate the difference in F_p between two nearby timestamps and the current time r, in order to exclude the contribution from t_1 to r−n. Since the Difference Estimators are maintained in a binary-tree-like structure, we only need to query a constant number of them on each level. Hence, if each Difference Estimator incurs an additive error of η·F_p^W, our final approximation has an additive error of O(ηβ)·F_p^W. A value η = O(ϵ/log(1/ϵ)) therefore suffices, and the Difference Estimators use O(ϵ^{−2} log^2 n · poly(log(1/ϵ))) space in total.
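To see why η = O(ϵ/log(1/ϵ)) suffices, a small numerical sanity check (with hypothetical constants of our own choosing, not the paper's) confirms that the O(ηβ) combined error stays below ϵ:

```python
import math

def total_additive_error(eps, p=2.0, queries_per_level=4, c=32):
    """Total additive error (relative to F_p^W) when beta levels each
    contribute a constant number of difference-estimator queries with
    per-query additive error eta.  The constants queries_per_level, c,
    and beta = ceil(p*log2(1/eps)) are hypothetical stand-ins."""
    beta = math.ceil(p * math.log2(1 / eps))
    eta = eps / (c * math.log2(1 / eps))
    return beta * queries_per_level * eta
```

For instance, with eps = 0.01 and p = 2 the combined error is well below eps, since the 1/log(1/ϵ) factor in η cancels the β = O(log(1/ϵ^p)) number of levels.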

To maintain the binary-tree-like structure for the Difference Estimators, the algorithm of [21] uses p-stable sketches with high success probability; by a union bound, these sketches can then always rule out unnecessary timestamps and retain only the essential ones.

Our algorithm instead relies on the analysis of the Strong Estimator, and no such union bound is available (this is what saves a factor of log n), so a more careful analysis is required. In what follows, we introduce our new techniques.

Algorithm 2 Moment Estimation in the Sliding Window with Optimal Space.

Input: Stream u_1,u_2,… ∈ [U], window length n, approximation ratio ϵ ∈ (0,1), error rate δ ∈ (0,1), and parameter p.

Output: At each time i, output a (1+ε)-approximation to F_p for the window [i−n,i].

Initialization: β ← log(100·max(p,1)/ϵ^{max(p,1)}), η ← ϵ/(2^{25} log(1/ϵ)), γ_j ← 2^{3−j} for all j ∈ [β], δ_dif ← δ/(2^{25} log(1/ϵ)).

  1. Let ϵ_0 = 1/2; prepare the matrix Π_top for an (ϵ_0^p/64, ϵ_0^p/128, δ_SE = δ/O(log m)) Strong Estimator, to maintain a constant-factor partition at the top level.

  2. Prepare the matrix π for an instance of Indyk's p-stable Sketch with dimension O(ϵ^{−2} log(1/δ)).

  3. To maintain the subroutine levels, for each j ∈ [β], prepare C·2^j·log n random matrices Π_i for a large constant C, each responsible for a (1/8, 2^{1/p}·ϵ^2/32, δ_sub = δ/O(C·ϵ^{−p} log n)) Strong Estimator. Store them in a queue q_j.

  4. For each level j ∈ [β], generate the matrix Π_dif for the (γ_j, η, δ_dif)-Difference Estimator, with dimension d = O(γ_j^{2/p}·η^{−2}·(log(1/η)+log(1/δ_dif))).

4.3 Rounding Techniques

In this subsection, we present several space-saving techniques that allow us to maintain correctness guarantees while significantly reducing the storage requirements of the sketching data structures. First, we describe how rounding can be applied to the Strong Estimator sketches fi,j,k to reduce the number of bits required per entry. Next, we address the challenge of bounding the number of timestamps in the absence of a global union bound, introducing a rotating random set technique that probabilistically limits the number of active sketches. Finally, we apply a fine-grained rounding scheme to compress both the Indyk’s p-stable Sketch and Difference Estimator sketches, ensuring that their space usage remains within our overall complexity bounds.

Storing the Sketch via Strong Estimator 𝒇𝒊,𝒋,𝒌

We now describe how to apply rounding to the estimates computed by Strong Estimator to achieve space efficiency.

Throughout the algorithm, the sketch f_{i,j,k} is used to estimate the ℓ_p norm over the interval from timestamp t_{i,j,k} to either another timestamp t or the current time. While we maintain exact estimates for intervals ending at the current time, we observe that estimates ending at a previous timestamp t can be rounded to save space. Specifically, we round f_{i,j,k}(t_{i,j,k}, t) to the nearest power of (1+1/16). The following claim shows that this rounding preserves the Strong Estimator property up to a small additive loss:

Claim 22.

Let u_1,…,u_m be a stream and [l,r] an arbitrary window. Suppose f satisfies the (ϵ_1, ϵ_2) Strong Estimator property on [l,r]. Define f̂(a,b) to be f(a,b) rounded to the nearest power of (1+1/16). Then f̂ satisfies the (ϵ_1 + 1/8, (1+1/16)·ϵ_2) Strong Estimator property on [l,r].

Thus, in our analysis, since we use a (1/8, 2^{1/p}·ϵ^2/32, δ_sub)-Strong Estimator, we may safely treat the rounded sketches as (1/4, 2^{1/p}·ϵ^2/30, δ_sub)-Strong Estimators without loss of generality.

However, directly storing O(s^2) rounded values (where s = O(ϵ^{−p} log n) is the number of timestamps) would still be space-inefficient. To address this, we introduce the following compression technique:

Technique 1.

For each timestamp t_{i,j,k}, we maintain an array of length O(log n). The c-th entry stores the rank of the earliest timestamp t such that f_{i,j,k}(t_{i,j,k}, t) ≥ (1+1/16)^c.

A potential issue with this scheme is that the value f_{i,j,k}(t_{i,j,k}, t_u) might be "overestimated" due to a later timestamp t_v with f_{i,j,k}(t_{i,j,k}, t_v) > f_{i,j,k}(t_{i,j,k}, t_u). However, this does not violate the Strong Estimator property, since f_{i,j,k}(t_{i,j,k}, t_u) < f_{i,j,k}(t_{i,j,k}, t_v) < (1+ϵ_1)·‖x(t_{i,j,k}, t_v)‖_p + ϵ_2·‖x(l,r)‖_p < (1+ϵ_1)·‖x(t_{i,j,k}, t_u)‖_p + ϵ_2·‖x(l,r)‖_p.

Therefore, this approximation remains valid, and the total space required to store all the Strong Estimator sketches is reduced to O(s · log n · log s) bits.
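Technique 1 can be sketched as follows (illustrative Python with names of our own; for simplicity we recover the floor power of (1+1/16) rather than the nearest one):

```python
import math

BASE = 1 + 1 / 16

def compress(values):
    """values[u] = f_{i,j,k}(t_{i,j,k}, t_u) for the timestamp of rank u.
    Returns an array whose c-th entry is the rank of the earliest
    timestamp t with f >= BASE**c (None if no timestamp reaches it)."""
    if not values:
        return []
    max_c = int(math.log(max(values), BASE)) + 1
    table = []
    for c in range(max_c + 1):
        threshold = BASE ** c
        table.append(next((u for u, v in enumerate(values) if v >= threshold), None))
    return table

def lookup(table, u):
    """Recover a rounded estimate for the timestamp of rank u: the largest
    power BASE**c first attained by some timestamp of rank <= u."""
    best = 0
    for c, rank in enumerate(table):
        if rank is not None and rank <= u:
            best = c
    return BASE ** best
```

Each entry is a rank (O(log s) bits), and there are O(log n) entries per timestamp, which is exactly the O(s · log n · log s) total above.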

Challenges in Bounding the Number of Timestamps

The analysis in [21] relies on applying a union bound over all sketches and all times, which introduces an additional logn factor in the space complexity. In contrast, we leverage the Strong Estimator to eliminate the need for such a union bound. However, this also introduces new challenges.

Consider the current window W=[r−n+1,r], and let l be the earliest index such that F_p(l,r) ≤ 2F_p^W. Conditioned on the Strong Estimator satisfying its guarantee over [l,r], we can show that the number of timestamps at the first level is bounded and provides reliable guidance for constructing the Difference Estimators.

The complication is that this guarantee only holds with probability 1−δ, and without a global union bound we cannot ensure that the number of timestamps is bounded at all times. For instance, if the same matrix Π were reused across all Strong Estimator instances, then in the unlikely event that this matrix fails, the number of timestamps could grow without bound.

To mitigate this, we introduce the following technique:

Technique 2 (Rotating Random Sets).

For each level j ∈ [β], we maintain a queue q_j holding C·2^j·log n independent random matrices Π, for a sufficiently large constant C.

Each Strong Estimator instance fi,j,k uses a distinct matrix Π from the queue that has not been used before. When a sketch is deleted, its corresponding matrix is returned to qj, making it available for reuse.
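The bookkeeping behind Technique 2 can be sketched as follows (illustrative Python; the class and method names are our own):

```python
from collections import deque

class RotatingMatrixPool:
    """Queue q_j of pre-generated random matrices for level j.  A new
    Strong Estimator instance draws a matrix not currently in use; when
    the instance is deleted, its matrix returns to the queue."""

    def __init__(self, matrices):
        self.queue = deque(matrices)  # matrices currently available
        self.in_use = {}              # sketch id -> matrix

    def acquire(self, sketch_id):
        if not self.queue:
            # The low-probability bad event ruled out by Lemma 23.
            raise RuntimeError("queue exhausted at this level")
        self.in_use[sketch_id] = self.queue.popleft()
        return self.in_use[sketch_id]

    def release(self, sketch_id):
        self.queue.append(self.in_use.pop(sketch_id))
```

Intuitively, rotating through many independent matrices prevents a single bad matrix from corrupting all Strong Estimator instances simultaneously, which is what makes the bound of Lemma 23 possible without a global union bound.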

Lemma 23.

Fix any time t. With probability at least 1−1/poly(n), the queue q_j is non-empty for all levels j ∈ [β]. This ensures that there are at most O(2^j log n) active timestamps at level j.

Hence, by a union bound over all levels and times, with high probability the total number of timestamps across all levels remains bounded by O(ϵ^{−p} log n) throughout the entire stream.

Rounding the Indyk’s p-stable Sketch and Difference Estimator

Our algorithm maintains only O(log n) timestamps at the top level. However, each of these timestamps stores an instance of Indyk's p-stable Sketch, which uses O(ϵ^{−2} log n) bits, and the total space for the Difference Estimator sketches at each level is also Õ(ϵ^{−2} log n). To reduce the overall space to Õ(ϵ^{−p} log^2 n + ϵ^{−2} log n), we apply the rounding technique from [16] to compress the sketch values.

Lemma 24 (Correctness of rounding Indyk’s p-stable Sketch ).

Let ϵ,δ ∈ (0,1), let [l,r] be a window, let Π ∈ ℝ^{d×U} be the sketch matrix, let γ=(ϵδ/(dm log(Um)))^C, and let 𝒫(Πx(l,r)) be the sketch vector after running rounding Procedure 2 with parameter γ.

Then median(|𝒫(Πx(l,r))_1|, …, |𝒫(Πx(l,r))_d|) = median(|(Πx(l,r))_1|, …, |(Πx(l,r))_d|) ± ϵ‖x(l,r)‖_p with probability 1−δ.

Proof.

The proof follows directly from Lemma 21.

Setting the parameters ϵ,δ of Lemma 24 to ϵ/4 and δ/4, we can store the rounded vector in O(d·log((1/γ)·log n)) = O(d·(log(1/(ϵδ)) + log log n)) bits of space.

Next, we prove the correctness of rounding the Difference Estimator; for p=2, the argument is the same as that of Lemma 24.

Lemma 25 (Correctness of rounding the Difference Estimator ).

Let ϵ_0,δ_0 ∈ (0,1) and q=3, let u,v be two frequency vectors, let A ∈ ℝ^{d×U} be the sketch matrix, and let γ=(ϵ_0δ_0/(dm log(Um)))^C. For p ∈ (0,2), recall that the Difference Estimator estimates the difference between ‖u+v‖_p^p and ‖v‖_p^p by computing the mean of the quantities Σ_{j=q(i−1)+1}^{qi} |(Au+Av)_j|^{p/q} − Σ_{j=q(i−1)+1}^{qi} |(Av)_j|^{p/q}. Then the mean of Σ_{j=q(i−1)+1}^{qi} |(𝒫(Au+Av))_j|^{p/q} − Σ_{j=q(i−1)+1}^{qi} |(𝒫(Av))_j|^{p/q} deviates from the original estimate by at most O((ϵ_0·d^2/δ_0^2)^{q/p})·F_p(v), with probability 1−2δ_0.

Thus, setting ϵ_0 = O(ϵδ^2/d^2) and δ_0 = O(δ), we can store each Difference Estimator in O(d·(log(1/(ϵδ)) + log log n)) bits of space.

4.4 Nearly Optimal 𝑭𝒑 Estimation

Now we introduce our main result for Fp estimation.

Theorem 26.

Let p ∈ [1,2]. For any fixed window W, with probability 2/3, Algorithm 2 gives a (1±ϵ)-approximation of F_p^W, and uses Õ(ϵ^{−p} log^2 n + ϵ^{−2} log n) bits of space; more specifically, O((ϵ^{−p} log^2 n + ϵ^{−2} log n·log^4(1/ϵ))·(log(1/ϵ) + (log log n)^2)).

Let the query window be W=[r−n+1,r], let l be the smallest index such that F_p(l,r) ≤ 2F_p^W, and let W′=[l,r]. First, we prove that our top-level Strong Estimator gives a constant-factor approximation. The proof can be easily seen from Lemma 16.

Lemma 27 (Constant factor partitions in top level).

With probability 1−δ, F_p^W ≤ F_p(t_1,r) ≤ 2F_p^W and (1/2)·F_p^W ≤ F_p(t_2,r) ≤ F_p^W.

Proof.

We use an (ϵ^p/64, ϵ^p/64, δ_SE) Strong Estimator with ϵ=1/2. By Lemma 16, ‖x(t_2,r)‖_p ≥ (3/4)·‖x(t_1,r)‖_p. So F_p(t_1,r) ≤ (4/3)^p·F_p(t_2,r) ≤ 2F_p^W and F_p(t_2,r) ≥ (3/4)^p·F_p(t_1,r) ≥ (1/2)·F_p^W.

Let ℰ be the event that the top-level Strong Estimator gives a constant-factor partition, and that at the subroutine levels all O(ϵ^{−p} log n) instances of the Strong Estimator satisfy the (1/2, 2^{1/p}·ϵ^2/30) Strong Estimator property on the window W′=[l,r]. At the subroutine levels we use a (1/2, 2^{1/p}·ϵ^2/30, δ_Sub < δ/O(ϵ^{−p} log n)) Strong Estimator, so by a union bound, all O(ϵ^{−p} log n) instances satisfy the Strong Estimator property on W′ with probability 1−δ.

So by adjusting the constant in δ, the event ℰ happens with probability 1−δ.

We now show an upper bound on the moment of each substream whose contribution is estimated by the difference estimator, i.e., the difference estimators are well-defined.

Lemma 28 (Upper bounds on splitting times).

Let p ∈ [1,2]. Conditioned on ℰ, we have that for each j ∈ [β] and all k, either t_{1,j,k+1} = t_{1,j,k}+1 or F_p(t_{1,j,k}, t_{1,j,k+1}) ≤ 2^{−j−7}·F_p^W.

Lemma 29 (Accuracy of difference estimators. Lemma A.3 [21]).

Let p ∈ [1,2]. Conditioned on the event of Lemma 28 holding, we have for each j ∈ [β] and all k that

SDiffEst(t_{i,j,k−1}, t_{i,j,k}, t, γ_j, η, δ) gives an additive η·F_p(t_{i,j,k}, t) approximation to F_p(t_{i,j,k−1}, t) − F_p(t_{i,j,k}, t), with probability at least 1−δ.

Lemma 30 (Geometric upper bounds on splitting times. Lemma A.4 [21]).

Let p ∈ [1,2]. Conditioned on the event of Lemma 28 holding, we have for each j ∈ [β] and each k that either t_{i,j,k+1} = t_{i,j,k}+1 or F_p(t_{i,j,k}, m) − F_p(t_{i,j,k+1}, m) ≤ 2^{−j/p}·2^p·F_p(t_{i,j,k+1}, m).

Lemma 31 (Geometric Lower bounds on splitting times).

Let p ∈ [1,2]. Conditioned on ℰ, we have for each j ∈ [β] and each k that

F_p(t_{i,j,k−1}, t_{i,j,k+1}) > 2^{−j−21}·F_p^W.
Lemma 32 (Number of level j difference estimators. Lemma A.6 [21]).

Let p ∈ [1,2]. Conditioned on the geometric lower bounds on splitting times (Lemma 31), we have for each j ∈ [β] that k ≤ 2^{j+23} for any t_{1,j,k}.

We next bound the number of level-j difference estimators that can occur from the end of the previous level-(j−1) difference estimator to the time when the sliding window begins. We say a difference estimator 𝒞_{1,j,k} is active if k ∈ [a_j, b_j] for the indices a_j and b_j defined in StitchSW. The active difference estimators at each level are the algorithms whose outputs are subtracted from the initial rough estimate to form the final estimate of F_p^W.

Lemma 33 (Number of active level j difference estimators. Lemma A.7 [21]).

Conditioned on the upper bounds and geometric lower bounds on splitting times (Lemmas 28 and 31), for p ∈ [1,2] and each j ∈ [β], let a_j be the smallest index such that t_{1,j,a_j} ≥ c_{j−1} and let b_j be the largest index such that t_{1,j,b_j} ≤ r−n. Then conditioned on ℰ, we have b_j − a_j ≤ 2^{24}.

Lemma 34 (Correctness of sliding window algorithm).

For p ∈ [1,2], Algorithm 2 outputs a (1±ε)-approximation to F_p^W with probability 1−δ.

Now we will prove the space bound.

Lemma 35.

Let δ be the constant 1/3. Then Algorithm 2 uses O((ϵ^{−p} log^2 n + ϵ^{−2} log n·log^4(1/ϵ))·(log(1/ϵ) + (log log n)^2)) bits of space.

Combining Lemma 34, which establishes the approximation guarantee, with the space bound given in Lemma 35, we conclude the proof of Theorem 26.

5 Lower bounds

We consider the following communication problem, which is a generalization of the classic GreaterThan communication problem.

Problem 36.

In the IndexGreater(N,k) problem, Alice receives a sequence (x_1,…,x_k) ∈ [2^N]^k. Bob receives an index j and a number y ∈ [2^N]. Alice sends a one-way message to Bob, and Bob must decide whether y > x_j or y ≤ x_j.

Lemma 37.

Problem 36 requires Ω(Nk) communication to solve with 2/3 probability.

This lower bound is known; for instance, [7] applies it in their lower bounds as well. We supply a short proof in the appendix, via a reduction from the somewhat more standard Augmented Index problem.

Lemma 38.

Let p ∈ (1,2], and let n,N,k,r be parameters satisfying the following:

  • n ≤ U

  • 2^r·N·k^{1−1/p} ≤ n^{1−1/p}

  • 2^r·N·k ≤ n

Suppose that 𝒜 is a streaming algorithm that operates on sliding windows of size n, and gives a (1 ± 1/(12k^{1/p})) multiplicative approximation to F_p on any fixed window with probability 9/10, over the universe [U].

Then 𝒜 uses at least Ω(k·r·log N) space.

Proof.

See the appendix of the full version.

Theorem 39.

Fix p ∈ (1,2]. Suppose that 𝒜 is a streaming algorithm that operates on a window of size n, and outputs a (1±ε)-approximation of F_p on any fixed window with probability 9/10. Suppose that ε ≥ n^{−1/p}. Then 𝒜 must use at least

Ω( (p−1)^2·ε^{−p}·log^2(ε·min(n,U)^{1/p}) + ε^{−2}·log(ε^{1/p}·min(U,n)) )

space. In particular, for constant p and n ≥ U, 𝒜 must use at least

Ω( ε^{−p}·log^2(εU) + ε^{−2}·log(ε^{1/p}·U) )

space.

Proof.

The second term in the lower bound is shown by [5] (their Theorem 11) for general insertion-only streams.

For the first term, first suppose that n ≤ U. In Lemma 38, set k = 1/ε^p, and set 2^r and N to both be Θ((ε^{p−1}·n^{1−1/p})^{1/2}), so that the conditions of Lemma 38 hold. This gives the stated bound for n ≤ U. To handle the case where n ≥ U, note that we can reduce from the n ≤ U setting: simply duplicate each item in the stream n/U times.
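The parameter arithmetic above can be checked numerically (a sanity check of our own, not part of the proof):

```python
def check_conditions(eps, p, n, U):
    """Verify numerically that k = 1/eps^p together with
    2^r = N = (eps^{p-1} * n^{1-1/p})^{1/2} satisfies the three
    conditions of Lemma 38 (requires eps >= n^{-1/p})."""
    k = eps ** (-p)
    M = (eps ** (p - 1) * n ** (1 - 1 / p)) ** 0.5  # common value of 2^r and N
    tol = 1.000001  # tolerance for floating-point equality cases
    cond1 = n <= U
    cond2 = M * M * k ** (1 - 1 / p) <= n ** (1 - 1 / p) * tol
    cond3 = M * M * k <= n * tol
    return cond1 and cond2 and cond3
```

Note that the second condition holds with equality under this setting, and the third reduces exactly to the assumption ε ≥ n^{−1/p}.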

References

  • [1] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pages 20–29, 1996. doi:10.1145/237814.237823.
  • [2] Ziv Bar-Yossef, Thathachar S Jayram, Ravi Kumar, and D Sivakumar. An information statistics approach to data stream and communication complexity. Journal of Computer and System Sciences, 68(4):702–732, 2004. doi:10.1016/J.JCSS.2003.11.006.
  • [3] Mihir Bellare and John Rompel. Randomness-efficient oblivious sampling. In Proceedings 35th Annual Symposium on Foundations of Computer Science, pages 276–287. IEEE, 1994. doi:10.1109/SFCS.1994.365687.
  • [4] Jarosław Błasiok, Jian Ding, and Jelani Nelson. Continuous monitoring of ℓ_p norms in data streams. arXiv preprint arXiv:1704.06710, 2017.
  • [5] Mark Braverman and Or Zamir. Optimality of frequency moment estimation. arXiv preprint arXiv:2411.02148, 2024. doi:10.48550/arXiv.2411.02148.
  • [6] Vladimir Braverman, Stephen R Chestnut, Nikita Ivkin, Jelani Nelson, Zhengyu Wang, and David P Woodruff. BPTree: an ℓ_2 heavy hitters algorithm using constant memory. In Proceedings of the 36th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, pages 361–376, 2017.
  • [7] Vladimir Braverman, Elena Grigorescu, Harry Lang, David P Woodruff, and Samson Zhou. Nearly optimal distinct elements and heavy hitters on sliding windows. arXiv preprint arXiv:1805.00212, 2018. arXiv:1805.00212.
  • [8] Vladimir Braverman and Rafail Ostrovsky. Smooth histograms for sliding windows. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS’07), pages 283–293, 2007. doi:10.1109/FOCS.2007.55.
  • [9] Vladimir Braverman and Rafail Ostrovsky. Smooth histograms for sliding windows. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS’07), pages 283–293. IEEE, 2007. doi:10.1109/FOCS.2007.55.
  • [10] Vladimir Braverman and Rafail Ostrovsky. Effective computations on sliding windows. SIAM Journal on Computing, 39(6):2113–2131, 2010. doi:10.1137/090749281.
  • [11] Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. In International Colloquium on Automata, Languages, and Programming, pages 693–703. Springer, 2002. doi:10.1007/3-540-45465-9_59.
  • [12] Mayur Datar, Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Maintaining stream statistics over sliding windows. SIAM journal on computing, 31(6):1794–1813, 2002. doi:10.1137/S0097539701398363.
  • [13] Erik D Demaine, Alejandro López-Ortiz, and J Ian Munro. Frequency estimation of internet packet streams with limited space. In European Symposium on Algorithms, pages 348–360. Springer, 2002. doi:10.1007/3-540-45749-6_33.
  • [14] Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of the ACM (JACM), 53(3):307–323, 2006. doi:10.1145/1147954.1147955.
  • [15] Piotr Indyk and David Woodruff. Optimal approximations of the frequency moments of data streams. In Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, pages 202–208, 2005. doi:10.1145/1060590.1060621.
  • [16] Rajesh Jayaram and David P. Woodruff. Towards optimal moment estimation in streaming and distributed models. ACM Trans. Algorithms, 19(3), June 2023. doi:10.1145/3596494.
  • [17] Hossein Jowhari, Mert Sağlam, and Gábor Tardos. Tight bounds for Lp samplers, finding duplicates in streams, and related problems. In Proceedings of the Thirtieth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS ’11, pages 49–58, New York, NY, USA, 2011. Association for Computing Machinery. doi:10.1145/1989284.1989289.
  • [18] Daniel M Kane, Jelani Nelson, and David P Woodruff. On the exact space complexity of sketching and streaming small norms. In Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms, pages 1161–1178. SIAM, 2010. doi:10.1137/1.9781611973075.93.
  • [19] John P. Nolan. Stable distributions: models for heavy tailed data, 2001. URL: https://api.semanticscholar.org/CorpusID:117487370.
  • [20] Subhabrata Sen and Jia Wang. Analyzing peer-to-peer traffic across large networks. In Proceedings of the 2nd ACM SIGCOMM Workshop on Internet measurment, pages 137–150, 2002. doi:10.1145/637201.637222.
  • [21] David P Woodruff and Samson Zhou. Tight bounds for adversarially robust streams and sliding windows via difference estimators. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 1183–1196. IEEE, 2022.
  • [22] Vladimir M Zolotarev. One-dimensional stable distributions, volume 65. American Mathematical Soc., 1986.