Approximating the packedness of polygonal curves

In 2012 Driemel et al. \cite{DBLP:journals/dcg/DriemelHW12} introduced the concept of $c$-packed curves as a realistic input model. When $c$ is a constant, they gave a near-linear time $(1+\varepsilon)$-approximation algorithm for computing the Fr\'echet distance between two $c$-packed polygonal curves. Since then a number of papers have used the model. In this paper we consider the problem of computing the smallest $c$ for which a given polygonal curve in $\mathbb{R}^d$ is $c$-packed. We present two approximation algorithms. The first algorithm is a $2$-approximation algorithm and runs in $O(dn^2 \log n)$ time. In the case $d=2$ we develop a faster algorithm that returns a $(6+\varepsilon)$-approximation and runs in $O((n/\varepsilon^3)^{4/3} \operatorname{polylog}(n/\varepsilon))$ time. We also implemented the first algorithm and computed the approximate packedness value for 16 sets of real-world trajectories. The experiments indicate that the notion of $c$-packedness is a useful realistic input model for many curves and trajectories.


Introduction
Worst-case analysis often fails to accurately estimate the performance of an algorithm for real-world data. One reason for this is that the traditional analysis of algorithms and data structures is done only in terms of the number of elementary objects in the input; it does not take into account their distribution. Problems with traditional analysis have led researchers to analyse algorithms under certain assumptions on the input [16], which are often satisfied in practice. By doing this, complicated hypothetical inputs are hopefully precluded, and the worst-case analysis yields bounds which better reflect the behaviour of the algorithms in practical situations.
In computational geometry, realistic input models were introduced by van der Stappen and Overmars [39] in 1994. They studied motion planning among fat obstacles. Since then a range of models have been proposed, including uncluttered scenes [15], low density [40], and simple-cover complexity [33], to name a few. De Berg et al. [16] gave algorithms for computing the model parameters for planar polygonal scenes. In their paper they motivated why such algorithms are important.
Such algorithms can be used to verify whether a certain model is appropriate for a certain application domain. Moreover, some algorithms require the value of the model parameter as input in order to work correctly, e.g. the range searching data structure for fat objects developed by Overmars and van der Stappen [35]. Finally, computing the model parameters of a given input can be useful for selecting the algorithm best tailored to that specific input.
In this paper we will study polygonal curves in R^d. The Fréchet distance [22] is probably the most popular distance measure for curves. In 1995, Alt and Godau [3] presented an O(n^2 log n) time algorithm for computing the Fréchet distance between two polygonal curves of complexity n. This was later improved by Buchin et al. [29], who showed that the continuous Fréchet distance can be computed in O(n^2 √(log n) (log log n)^{3/2}) expected time. Any attempt to find a much faster algorithm was proven to be futile when Bringmann [6] showed that, assuming the Strong Exponential Time Hypothesis, the Fréchet distance cannot be computed in strongly subquadratic time, i.e., in time O(n^{2−ε}) for any ε > 0.
In an attempt to break the quadratic lower bound for realistic curves, Driemel et al. [18] introduced a new family of realistic curves, so-called c-packed curves, which has since gained considerable attention [11,17,19,27,28]. A curve π is c-packed if for any ball B, the length of the portion of π contained in B is at most c times the radius of B. In their paper they considered the problem of computing the Fréchet distance between two c-packed curves and presented a (1 + ε)-approximation algorithm with running time O(cn/ε + cn log n), which was later improved to O((cn/√ε) log^2(1/ε) + cn log n) by Bringmann and Künnemann [7].

Other models for realistic curves have also been studied. Closely related to c-packedness is γ-density, which was introduced by van der Stappen et al. [40] for obstacles, and modified to polygonal curves in [18]. A set of objects is γ-low-density if, for any ball of any radius, the number of objects intersecting the ball that are larger than the ball is less than γ. Aronov et al. [5] studied so-called backbone curves, which are used to model protein backbones in molecular biology. Backbone curves are required to have, roughly, unit edge length and a given minimal distance between any pair of vertices. Alt et al. [4] introduced κ-straight curves, which are curves where the arc length between any two points on the curve is at most a constant κ times their Euclidean distance. They also introduced κ-bounded curves, which are a generalization of κ-straight curves. It has been shown [2] that one can decide in O(n log n) time whether a given curve is a backbone curve, κ-straight or κ-bounded.
From the above discussion and the fact that the c-packed model has gained in popularity, we study two natural and important questions in this paper.

1. Given a curve π, how fast can one (approximately) compute the smallest c for which π is c-packed?
2. Are real-world trajectory data c-packed for some reasonable value of c?
Vigneron [41] gave an FPTAS for optimizing the sum of algebraic functions. The algorithm can be applied to compute a (1 + ε)-approximation of the c-packedness value of a polygonal curve in R^d in O((n/ε)^{d+2} log^{d+2}(n/ε)) time. However, working with balls is complicated (see Section 1.1), and in this paper we will therefore consider a simplified version of c-packedness. Instead of balls we will use (d-)cubes; that is, we say that a curve π is c-packed if for any cube S, the length of the portion of π contained in S is at most c · r, where r is half the side length of S. Note that under this definition, a c-packed curve using the "ball" definition is a (√d · c)-packed curve in the "cube" definition, while a c-packed curve using the "cube" definition is also a c-packed curve in the "ball" definition. From now on we will use the "cube" definition of c-packed curves.
To the best of our knowledge, the only known algorithm for computing the packedness of a polygonal curve, apart from applying the tool by Vigneron [41], is by Gudmundsson et al. [25], who gave a cubic time algorithm for polygonal curves in R^2. They consider the problem of computing "hotspots" for a given polygonal curve, but their algorithm can also compute the packedness of a polygonal curve. We provide two sub-cubic time approximation algorithms for the packedness of a polygonal curve.
Our first result is a simple O(dn^2 log n) time 2-approximation algorithm for d-dimensional polygonal curves. We also implemented this algorithm and tested it on 16 data sets to estimate the packedness value for real-world trajectory data. As expected, the value varies wildly, both between different data sets and within the same data set. However, about half the data sets had an average packedness value less than 10, which indicates that c-packedness is a useful and realistic model for many real-world data sets.
Our second result is a faster O*(n^{4/3}) time (6+ε)-approximation algorithm for polygonal curves in the plane, where the O*(·) notation hides polylogarithmic factors and the dependence on ε. We achieve this faster algorithm by applying Callahan and Kosaraju's Well-Separated Pair Decomposition (WSPD) to select O(n) squares, and then approximating the packedness values of these squares with a multi-level data structure. Note that our approach of building a data structure and then performing a linear number of square packedness queries solves a generalised instance of Hopcroft's problem. Hopcroft's problem asks: given a set of n points and n lines in the plane, does any point lie on a line? An Ω(n^{4/3}) lower bound for Hopcroft's problem was given by Erickson [20]. Hence, it is unlikely that our approach, or a similar approach, can lead to a considerably faster algorithm.

Preliminaries and our results
Let π = p_1, . . ., p_n be a polygonal curve in R^d and let s_i = (p_i, p_{i+1}) for 1 ≤ i < n. Let H be a closed convex region in R^d. The function Υ(H) = Σ_{i=1}^{n−1} |s_i ∩ H| describes the total length of the trajectory π inside H. In the original definition of c-packedness, H is a ball.
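The quantity Υ(H) reduces to clipping each segment against the cube and summing the clipped lengths. A minimal sketch in Python (the helper names `clipped_length` and `upsilon` are ours, not from the paper):

```python
import numpy as np

def clipped_length(p, q, center, r):
    """Length of the segment pq inside the axis-aligned cube with the
    given center and radius r (half the side length), found by clipping
    the segment against the d bounding slabs (Liang-Barsky style)."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    center = np.asarray(center, float)
    d = q - p
    t0, t1 = 0.0, 1.0                      # parameter range of the clipped part
    for k in range(len(center)):
        lo, hi = center[k] - r, center[k] + r
        if d[k] == 0.0:
            if not (lo <= p[k] <= hi):     # parallel to this slab and outside it
                return 0.0
        else:
            ta, tb = sorted(((lo - p[k]) / d[k], (hi - p[k]) / d[k]))
            t0, t1 = max(t0, ta), min(t1, tb)
    return max(0.0, t1 - t0) * float(np.linalg.norm(d))

def upsilon(curve, center, r):
    """Upsilon(H): total length of the polygonal curve inside the cube H
    with the given center and radius."""
    return sum(clipped_length(curve[i], curve[i + 1], center, r)
               for i in range(len(curve) - 1))
```

For example, for the curve (0,0), (2,0) and the square with center (1,0) and radius 0.5, the clipped portion runs from x = 0.5 to x = 1.5, so `upsilon` returns 1.0 and the packedness value of that square is Υ/r = 2.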
As mentioned in the introduction, we will consider H to be an axis-aligned cube instead of a ball. The reason for our choice was argued for R^2 in [25], and for completeness we include their arguments here.
If H is a square, then each piece of Υ(H) is a simple linear function, i.e. of the form γ(x) = ax + b for some a, b ∈ R. The description of each piece of Υ has constant size and can be evaluated in constant time. However, if H is a disc, the intersection points of the boundary of H with the trajectory π are no longer simple linear equations in terms of the center and radius of H, so Υ becomes a piecewise sum of square roots of polynomial functions. These square-root functions give rise to algebraic issues that cannot be easily resolved when maximising the function Υ(H)/r. For this reason, we will consider H to be a square instead of a disc.
The function Υ(H) = Σ_{i=1}^{n−1} |s_i ∩ H| describes the total length of the polygonal curve inside H. Similarly, Ψ(H) = Υ(H)/r denotes the packedness value of H, where the radius r of a cube is half its side length. Our aim is to find a cube H* with centre p* and radius r* that has the maximum packedness value for a given polygonal curve π.
The following two theorems summarise the main results of this paper.
Theorem 1. Given a polygonal curve π of size n in R^d, one can compute a 2-approximate packedness value for π in O(dn^2 log n) time.
Theorem 2. Given a polygonal curve π of size n in R^2, one can compute a (6+ε)-approximate packedness value for π in O((n/ε^3)^{4/3} polylog(n/ε)) time.

Theorem 1 is presented in Section 2 and Theorem 2 is presented in Section 3. Experimental results on the packedness values for real-world data sets are given in Section 2.1.

A 2-approximation algorithm
Given a polygonal curve π in R^d, let H* be a d-cube with centre p* and radius r* that has a maximum packedness value. Our approximation algorithm builds on two observations. The first observation is that, given a center p ∈ R^d, one can in O(dn log n) time find, among all possible d-cubes centered at p, the d-cube that has the largest packedness value. The second observation is that there exists a d-cube centered at a vertex of π that has a packedness value at least half the packedness value of H*.

Before we present the algorithm we need some notation. Let H^p_r be the d-cube with center p and radius r. Fix a point p in R^d, and consider Ψ as a function of r. More formally, let ψ_p(r) = Ψ(H^p_r). Gudmundsson et al. [25] showed properties of ψ_p(r) that we generalize to R^d and restate as Lemma 3. As a corollary we get:

Corollary 4. Let r_1, r_2 be the radii of two consecutive break points of ψ_p(r), where r_2 > r_1. It holds that max_{r ∈ [r_1, r_2]} ψ_p(r) = max{ψ_p(r_1), ψ_p(r_2)}, that is, the maximum value is obtained either at r_1 or at r_2.
Proof. According to Lemma 3, the function ψ_p(r) is a hyperbolic function of the form a(1/r) + b in the range r ∈ [r_1, r_2], with derivative −a/r^2. This implies that ψ_p(r) is either monotonically decreasing or monotonically increasing in [r_1, r_2]. As a result, the maximum value of ψ_p(r) is attained either at r_1 or at r_2.
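Corollary 4 can be checked numerically on a single hyperbolic piece (the coefficients below are illustrative, not from the paper):

```python
import numpy as np

# Between two consecutive break points, psi_p(r) is a single hyperbolic
# piece a/r + b (Lemma 3) and is therefore monotone on the interval, so
# its maximum is attained at an endpoint -- the content of Corollary 4.
a, b = 3.0, -1.0          # illustrative coefficients of one piece
r1, r2 = 0.5, 2.0         # one interval between break points
psi = lambda r: a / r + b

samples = np.linspace(r1, r2, 10001)          # dense sampling of [r1, r2]
assert psi(samples).max() <= max(psi(r1), psi(r2)) + 1e-12
```

This is why the sweep algorithm below only evaluates ψ_p at the event radii: no interior radius can do better than the nearest break points.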
Next we state the algorithm for the first observation. The general idea is to use a plane sweep, scaling the d-cube H with centre p by increasing its radius from 0 to ∞. The (d−1)-faces of H are bounded by 2d hyperplanes in R^d. When H expands from p, it can first meet a segment of π in one of two ways: (i) a vertex of the segment lies on one of H's (d−1)-faces, or (ii) an interior point of the segment lies on a (d−2)-face of H. In the first case, the segment can change to intersect a different (d−1)-face of H at most d−1 times, depending on its position relative to the center of H and its components in all d dimensions.
Similarly, a segment of the second case can change to intersect a different (d−1)-face of H O(d) times. Thus each segment has O(d) event points, and there are O(dn) events in total. Sort the events by their radii r_1, . . ., r_m (m = O(dn)) in increasing order. Perform the sweep by increasing the radius r starting at r = 0, and continue until all events have been encountered.
Recall that Υ(H) is the total length of the trajectory π inside H. For each r_i, 1 ≤ i ≤ m, we compute ψ_p(r_i) = Υ(H^p_{r_i})/r_i. For two consecutive radii r_i and r_{i+1}, Υ(H^p_{r_i}) and Υ(H^p_{r_{i+1}}) can differ in one of three ways. First, H^p_{r_{i+1}} may include a vertex not in H^p_{r_i}, in which case the set of contributing edges may increase by up to two. Second, H^p_{r_{i+1}} may intersect an edge not intersected by H^p_{r_i}. Finally, an edge in H^p_{r_i} may intersect a different (d−1)-face. We can compute a function ∆(r_i, r_{i+1}) that describes these changes in constant time. We then have Υ(H^p_{r_{i+1}}) = Υ(H^p_{r_i}) + ∆(r_i, r_{i+1}), so we can compute Υ(H^p_{r_{i+1}}) from Υ(H^p_{r_i}) in constant time (in R^2 this is similar to [13]). Apart from sorting the event points, we compute ψ_p(r_i) for every r_i, 1 ≤ i ≤ m, in O(dn) total time, and return the radius argmax_{r_1 ≤ r_i ≤ r_m} ψ_p(r_i) as the result. Hence, the total running time is O(dn log n).
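The fixed-center sweep can be sketched in simplified form. The version below is a brute-force planar sketch (all names are ours): it evaluates ψ_p only at the vertex-event radii, i.e. the Chebyshev distances from p to the vertices, whereas the paper's full event set also contains the edge events, and it recomputes Υ from scratch per event, so it runs in O(n^2) rather than the paper's O(dn log n) incremental update.

```python
import numpy as np

def seg_len_in_square(p, q, c, r):
    """Length of segment pq inside the axis-aligned square with center c
    and radius r (half the side length), via parametric slab clipping."""
    p, q, c = np.asarray(p, float), np.asarray(q, float), np.asarray(c, float)
    d, t0, t1 = q - p, 0.0, 1.0
    for k in range(2):
        if d[k] == 0.0:
            if not (c[k] - r <= p[k] <= c[k] + r):
                return 0.0
        else:
            ta, tb = sorted(((c[k] - r - p[k]) / d[k], (c[k] + r - p[k]) / d[k]))
            t0, t1 = max(t0, ta), min(t1, tb)
    return max(0.0, t1 - t0) * float(np.linalg.norm(d))

def best_psi_at_center(curve, c):
    """Simplified sweep sketch: evaluate psi_p(r) at the vertex-event radii
    only (Chebyshev distances from c to the vertices), relying on
    Corollary 4's principle that only break points need checking."""
    c = np.asarray(c, float)
    radii = sorted(np.abs(np.asarray(v, float) - c).max() for v in curve)
    best = 0.0
    for r in radii:
        if r == 0:
            continue
        ups = sum(seg_len_in_square(curve[i], curve[i + 1], c, r)
                  for i in range(len(curve) - 1))
        best = max(best, ups / r)
    return best
```

For the segment from (−1, 0) to (1, 0) with center p = (0, 0), the only event radius is 1, where the square contains the whole segment and ψ_p(1) = 2/1 = 2.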
Note that the break points of ψ_p(r) are the event points. The correctness follows immediately from Corollary 4, which tells us that we only need to consider the set of event points. To summarise: given a center p ∈ R^d, one can find the d-cube centered at p with the largest packedness value in O(dn log n) time. Next we prove Lemma 6, which concerns ψ_p(r) restricted to a single segment.

Proof. The function is zero until an interior point of s is encountered. After encountering the interior point, and before encountering a vertex of s, the segment s ∩ H^p_r is a chord between two boundary points of H^p_r. Suppose we normalise the size of the d-cube H^p_r to be unit-sized. Then the length of the chord is normalised to ψ_p(r). Before normalisation, the segment s ∩ H^p_r had fixed gradient and fixed orthogonal distance to the center p. After normalisation, the chord has fixed gradient and decreasing distance to the center p. Therefore its length ψ_p(r) is non-decreasing as it approaches the diameter of H^p_r.

Lemma 7. There exists a d-cube H with center at a vertex of π such that Ψ(H) ≥ (1/2) · Ψ(H*), where H* is the d-cube having the highest packedness value for π.
Proof. Consider H*. We will construct a d-cube H that is centered at a vertex p of π and contains H*. We will then prove that H has packedness value at least (1/2) · Ψ(H*), which proves the lemma. To construct H, we consider two cases.

Case 1: The d-cube H* does not contain a vertex of π; see Fig. 1(a). Scale H* until its boundary hits a vertex. Let H_1 denote the d-cube obtained from the scaling and let v be the vertex on the (d−1)-face of H_1. According to Lemma 6, we know that Ψ(H_1) ≥ Ψ(H*).
Let H_2 be the d-cube centered at v with radius twice the radius of H_1, as illustrated in Fig. 1(a). Clearly H_2 contains H_1, so Υ(H_2) ≥ Υ(H_1), and since the radius of H_2 is twice that of H_1 we get Ψ(H_2) ≥ (1/2) · Ψ(H_1) ≥ (1/2) · Ψ(H*). Case 2, where H* contains a vertex of π, is illustrated in Fig. 1(b).

Table 1: The table lists 16 real-world data sets. The second and third columns show the number of curves and the maximum complexity of a curve in the set. The following three columns list the minimum, maximum and average approximate packedness values. The rightmost column states the average ratio between c and n for the data sets.

Experimental results
We implemented the above algorithm to test the approximate packedness of real-world trajectory data. We ran the algorithm on 16 data sets, kindly provided to us by the authors of [26]. Table 2 summarises the data sets and is taken from [26]. The minimum/maximum/average (approximate) packedness values and the ratio between c and n for each data set are listed in Table 1. Both the Geolife data set and the Taxi data set consist of over 20k trajectories, many of which are very large. For the experiments we randomly sampled 1,000 trajectories from each of these sets.
Although these are only sixteen data sets, it is clear that the notion of c-packedness is a reasonable model for many real-world data sets. For example, the maximum packedness value over all trajectories in the first eight data sets is less than 50, and the average (approximate) packedness value is below 15. Looking at the ratio between c and n, we can see that for many data sets the value of c is considerably smaller than n.
Consider the task of computing the continuous Fréchet distance between two trajectories. For two trajectories of complexity n, computing the distance requires O*(n^2) time (even for an O(1)-approximation), while a (1 + ε)-approximation can be obtained for c-packed trajectories in O*(cn) time. Thus the algorithm by Driemel et al. [18] for c-packed curves is likely to be more efficient than the general algorithm for these data sets.

Linear number of good squares
To prove that it suffices to consider a linear number of squares we will use the well-known Well-Separated Pair Decomposition (WSPD) by Callahan and Kosaraju [8].
Let A and B be two finite sets of points in R^d and let s > 0 be a real number. We say that A and B are well-separated with respect to s if there exist two disjoint balls C_A and C_B of the same radius, such that C_A contains the bounding box of A, C_B contains the bounding box of B, and the distance between C_A and C_B is at least s times their radius (see Definition 9). A well-separated pair decomposition (WSPD) for a point set S, with respect to s, is a sequence {A_1, B_1}, . . ., {A_m, B_m} of pairs of non-empty subsets of S, for some integer m, such that:

1. for each i with 1 ≤ i ≤ m, A_i and B_i are well-separated with respect to s, and
2. for any two distinct points p and q of S, there is exactly one index i with 1 ≤ i ≤ m such that p ∈ A_i and q ∈ B_i, or p ∈ B_i and q ∈ A_i.

The integer m is called the size of the WSPD.

Lemma 10 (Callahan and Kosaraju [8]). Given a set V of n points in R^d, and given a real number s > 0, a well-separated pair decomposition for V, with separation ratio s, consisting of O(s^d n) pairs, can be computed in O(n log n + s^d n) time.

Now we are ready to construct a set S of squares. Compute a well-separated pair decomposition W = {(A_1, B_1), . . ., (A_m, B_m)} with separation constant s = 720/ε for the vertex set of π. For every well-separated pair (A_i, B_i) ∈ W, 1 ≤ i ≤ m, construct two squares that will be added to S as follows. Pick an arbitrary point a ∈ A_i and an arbitrary point b ∈ B_i. Construct one square with center a and radius r, and one square with center b and radius r, where r = max{|a.x − b.x|, |a.y − b.y|} + (ε/120) · |ab|. The two squares are added to S. It follows immediately from Lemma 10 that the number of squares in S is O(n/ε^2) and that one can construct S in O(n log n + n/ε^2) time.
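The square construction step is easy to sketch once a WSPD is available. The sketch below (our own function name) assumes the WSPD of the curve's vertices, with separation s = 720/ε, has already been computed and is given as a list of (A_i, B_i) point-list pairs; building the WSPD itself (Callahan and Kosaraju [8]) is not shown.

```python
import math

def packedness_candidate_squares(pairs, eps):
    """Construct the candidate square set S of Section 3.1 from a list of
    well-separated pairs (A_i, B_i), each a non-empty list of 2D points.
    Returns (center, radius) tuples, two squares per pair."""
    squares = []
    for A, B in pairs:
        a, b = A[0], B[0]                      # arbitrary representatives
        dist = math.hypot(a[0] - b[0], a[1] - b[1])
        # radius from the paper: Chebyshev gap plus an eps/120 slack term
        r = max(abs(a[0] - b[0]), abs(a[1] - b[1])) + (eps / 120.0) * dist
        squares.append((a, r))
        squares.append((b, r))
    return squares
```

Since each pair contributes two squares and a WSPD with s = 720/ε has O(n/ε^2) pairs, the output has the O(n/ε^2) size claimed above.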
To prove the approximation factor of the algorithm we will first need the following technical lemma.

Lemma 11. Let H^p_{r_1} and H^p_{r_2}, with r_2 > r_1, be two squares with centre p such that H^p_{r_2} \ H^p_{r_1} contains no vertices of π in its interior. For any value r_x, with r_1 ≤ r_x ≤ r_2, it holds that ψ_p(r_x) ≤ ψ_p(r_1) + 2 · ψ_p(r_2).

Proof. Let M_x = Υ(H^p_{r_x}) − Υ(H^p_{r_1}) and M_2 = Υ(H^p_{r_2}) − Υ(H^p_{r_1}), so that ψ_p(r_x) = Υ(H^p_{r_1})/r_x + M_x/r_x. Since r_x ≥ r_1, the first term of ψ_p(r_x) is bounded by ψ_p(r_1). Next we consider the second term.
There are no vertices of π inside H^p_{r_2} \ H^p_{r_1}, so the segments contributing to M_x either cross H^p_{r_2} \ H^p_{r_1} and have no intersection with H^p_{r_1}, like segment I in Fig. 2(a), or intersect H^p_{r_1} and cross H^p_{r_2} \ H^p_{r_1}, like segment II in Fig. 2(a).
Let l_x denote a segment's contribution to M_x. Let us consider how l_x/r_x changes as r_x grows from r_1 to r_2.

1. Segment of Type I: We know from Lemma 6 that l_x/r_x is non-decreasing as r_x grows from r_1 to r_2, so l_x/r_x ≤ l_2/r_2.

2. Segment of Type II: Consider a segment s of Type II and the subsegment (or two subsegments) s ∩ (H^p_{r_2} \ H^p_{r_1}). A subsegment s' of s has one endpoint q on the boundary of H^p_{r_1} and one endpoint on the boundary of H^p_{r_2}. Let E be the side of H^p_{r_1} containing q, let t be the midpoint of E, let d_0 = |qt|, and let θ be the acute angle between s' and E. There are two cases.
Case (i): The subsegment s' does not cross the line pt in the region H^p_{r_2} \ H^p_{r_1}, as in Figure 2(b). Let r'_x be the radius at which s' crosses a corner of H^p_{r_x}. When r_x < r'_x, l_x/r_x continues to increase strictly, so l_x/r_x ≤ l_2/r_2.
Case (ii): The subsegment s' crosses the line pt in the region H^p_{r_2} \ H^p_{r_1}, as shown in Figure 2(c), with r'_x defined as above. When r_x < r'_x, l_x/r_x increases strictly. When r_x ≥ r'_x, l_x/r_x may begin to decrease; however, it can lose at most a factor of two. In all cases, l_x/r_x ≤ 2 · l_2/r_2, and thus M_x/r_x ≤ 2 · M_2/r_2. We get ψ_p(r_x) ≤ ψ_p(r_1) + 2 · ψ_p(r_2).

Due to Lemma 7, it suffices to consider squares with center at a vertex of π to obtain a 2-approximation. Combining this with Lemma 11, it suffices to consider squares with center at a vertex of π and a vertex of π on the boundary to obtain a 6-approximation. Using the WSPD argument we have reduced our set of squares to a linear number of squares, and we will now argue that S must contain a square that has a high packedness factor. Let H* be a square with a maximum packedness value of π.

Lemma 12. There exists a square S ∈ S such that Ψ(S) ≥ Ψ(H*)/(6 + ε/8).

Proof. From Lemmas 7 and 11 we know that there exists a square H with centre at a vertex p of π and whose boundary contains a vertex q of π such that Ψ(H) ≥ (1/6) · Ψ(H*). According to the construction of S, there exists a square S ∈ S such that S has its centre at a point a ∈ A_i and has radius r = max{|a.x − b.x|, |a.y − b.y|} + (ε/120) · |ab|. Lemma 8, together with the choice of separation constant s = 720/ε, implies that S is only marginally larger than H, from which the claimed bound follows.

The data structure uses a segment tree T built on the x-intervals of the segments of π as its primary tree. For each internal node v of T, let F(v) denote the canonical set of segments stored at v. As a minor modification, we also store at v all the segments stored in the subtree rooted at v, including F(v). We denote this set by L(v).
The main benefit of this minor modification is that when an interval stabbing query is performed only O(log n) canonical subsets are required to identify all the segments intersecting the interval.Next we show how to build associated data structures for L(v) and F (v) for each internal node v in T .

Three associated data structures
Consider querying the segment tree T with a square S ∈ S. There are three different cases that can occur, and for each of these cases we will build an associated data structure. That is, each internal node will have three types of associated data structures. Consider a query S = [x, x'] × [y, y'], and let µ_l and µ_r be the leaf nodes in T where the searches for the boundary values x and x' end. See Figure 3 for an illustration of the search and the three cases; an internal node v is of one of the types A, B or C shown there.

Figure 3: The primary tree T, and the three types of nodes in T that can be encountered during a query.

Associated data structure for Type A nodes:
For a Type A node v we need to compute the length of all segments stored in the subtree with root v within the y-interval [y, y']. Let s_1, . . ., s_m be the set of m segments stored in L(v), and let Y = y_1, . . ., y_{2m} denote the y-coordinates of the endpoints of the segments in L(v), ordered from bottom to top. To simplify the description we assume that the values are distinct.
Let δ(y) denote the total length of the portions of the segments in L(v) below y. For two consecutive y-values y_i and y_{i+1}, the set of edges contributing to δ(y_i) and δ(y_{i+1}) has increased or decreased by one, so we can compute a function ∆(y_i, y_{i+1}) that describes these changes in constant time. We then have δ(y_{i+1}) = δ(y_i) + ∆(y_i, y_{i+1}), and thus we can compute δ(y_{i+1}) from δ(y_i) in constant time after sorting the events. Hence, we can compute all the δ(y_i)-values and all the ∆(y_i, y_{i+1}) in O(m log m) time.
Given a y-value y', one can compute δ(y') as δ(y_i) + ((y' − y_i)/(y_{i+1} − y_i)) · ∆(y_i, y_{i+1}), where y_i is the largest y-value in Y smaller than y'. Hence, our associated data structure for Type A nodes is a binary tree over the values in Y, where each leaf stores the values y_i, δ(y_i) and ∆(y_i, y_{i+1}). The tree can be computed in O(m log m) time using linear space, and can answer queries in O(log m) time.
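The Type A prefix table and its interpolation query can be sketched directly. In the sketch below (function names ours), each segment of L(v) is abstracted as its y-extent and total length, since all segments at a Type A node are handled purely by their y-behaviour; for brevity the table is filled by an O(m^2) brute-force loop rather than the paper's O(m log m) incremental ∆ computation, and a sorted-list lookup stands in for the binary tree.

```python
import bisect

def build_type_a(segments):
    """Sketch of the Type A associated structure. `segments` is a list of
    (y_lo, y_hi, length) triples. Returns a query function delta(y) giving
    the total length of the segment portions below y; delta is piecewise
    linear with break points at endpoint y-coordinates, so precomputing it
    at the break points and interpolating is exact."""
    ys = sorted({y for s in segments for y in (s[0], s[1])})

    def delta_exact(y):
        total = 0.0
        for y_lo, y_hi, length in segments:
            if y <= y_lo:
                continue
            frac = 1.0 if y >= y_hi else (y - y_lo) / (y_hi - y_lo)
            total += frac * length
        return total

    table = [delta_exact(y) for y in ys]   # brute force; paper: O(m log m)

    def delta(y):
        i = bisect.bisect_right(ys, y) - 1
        if i < 0:
            return 0.0
        if i >= len(ys) - 1:
            return table[-1]
        t = (y - ys[i]) / (ys[i + 1] - ys[i])
        return table[i] + t * (table[i + 1] - table[i])

    return delta
```

A Type A query for the y-interval [y, y'] is then answered as delta(y') − delta(y).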
Lemma 13. The associated data structures for Type A nodes in T can be constructed in O(n log^2 n) time using O(n log^2 n) space. Given a query square S for an associated data structure of Type A stored in an internal node v, the total length of the segments of L(v) within the y-interval of S can be returned in O(log n) time.

Associated data structure for Type B nodes:
The associated data structure for a Type B node v is built to handle the case where the query square S = [x, x'] × [y, y'] intersects either the left boundary (x_l) or the right boundary (x_r) of Int(v), but not both. The two cases are symmetric, and we will only describe the case when S intersects the right boundary.
The data structure returns a value M that is an upper bound on the length of the segments of F(v) within S and a lower bound on the length of the segments of F(v) within S^+, where S^+ is a slightly expanded version of S; see Figure 4(c). Formally, |F(v) ∩ S| ≤ M ≤ |F(v) ∩ S^+|. If S^+ spans Int(v), then we use a binary tree on F(v) to answer the query in logarithmic time, similar to the associated data structure for Type A nodes.
If S^+ does not span Int(v), then the query is performed on the Type B associated data structures. We first show how to construct these data structures, and then show how to handle a query. Recall that all the segments in F(v) span the interval Int(v). Let s_1, . . ., s_m be the set of m segments in F(v) and let µ(s_i) be the angle of inclination of s_i. The angle of inclination for segments with slope in the interval [0, 1) lies in the interval [0, π/4). Partition F(v) into κ_1 sets F_1(v), . . ., F_{κ_1}(v) such that for any segment s_i ∈ F_j(v) it holds that (j − 1) · π/(4κ_1) ≤ µ(s_i) < j · π/(4κ_1). Consider one such partition F_j(v) = {s^j_1, . . ., s^j_{m_j}}. Build a balanced binary search tree T^j_r on the y-coordinates of the right endpoints of the segments in F_j(v). The data structure can be constructed in O(m_j log m_j) time using linear space. Given a y-interval as a query, the number of right endpoints in T^j_r within the interval can be reported in O(log m_j) time. From the above description, and the fact that Σ_{v∈T} |F(v)| = O(n log n), it immediately follows that the total construction time for all the Type B nodes is O(n log^2 n) and the total amount of space required is O(n log n). This completes the construction of the data structures on F(v).
It remains to show how to handle a query, i.e. how to compute M. We focus first on computing an M that upper bounds |F(v) ∩ S|; we later prove that M lower bounds |F(v) ∩ S^+|. There are two steps in computing M. The first step is to count the number of segments that intersect S. The second step is to multiply this count by the maximum possible length of intersection between such a segment and S. This clearly yields an upper bound on |F(v) ∩ S|. To obtain suitable maximum lengths in the second step, we need to subdivide the partitions F_j further.
Given these subdivisions L_j, our first step is to perform a range counting query in T^j_r for each interval ℓ ∈ L_j. Our second step is to multiply this count by the maximum length of intersection between S and any segment in F_j with its right endpoint in ℓ. The product of these two values is clearly an upper bound on the length of intersection between S and the segments in F_j with right endpoint in ℓ. Finally, we sum over all subdivisions ℓ ∈ L_j, and then over all partitions F_j, to obtain a value M that upper bounds |S ∩ F(v)|.

To see that M also lower bounds |S^+ ∩ F(v)|, consider a counted segment B and a segment A realising the maximum intersection length for the same interval ℓ. If B has a greater angle of inclination than A, let p be the left endpoint of A ∩ S, let x_p be the x-coordinate of p, and let p' be the point on B with x-coordinate x_p; the distance between p and p' is bounded, so the intersection of B with the expanded square S^+ is at least as long as A ∩ S. For ℓ ∈ I^2_j, the left endpoint of A ∩ S lies on the left boundary of S, and similarly the left endpoint of B ∩ S lies on the left boundary of S. The case ℓ ∈ I^3_j is very similar to the first case and is left as an exercise for the reader.
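The count-times-maximum-length estimate itself is simple arithmetic once the range counts and the per-interval maximum lengths are available. A minimal sketch (our own function name; the two inputs stand for the outputs of the range counting queries in T^j_r and the precomputed maximum intersection lengths):

```python
def upper_bound_M(counts, max_lens):
    """Two-step Type B estimate: counts[j][k] is the number of segments of
    partition F_j with right endpoint in the k-th subdivision interval, and
    max_lens[j][k] is the maximum possible length of intersection between
    such a segment and the query square S. Summing count * max_length over
    all intervals and partitions upper bounds |F(v) ∩ S|."""
    return sum(c * m
               for cj, mj in zip(counts, max_lens)
               for c, m in zip(cj, mj))
```

For example, two segments with maximum length 0.5 in one interval plus one segment with maximum length 2.0 in another give M = 2·0.5 + 1·2.0 = 3.0.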
We summarise the associated data structures for Type B nodes with Lemma 14.

Associated data structure for Type C nodes:
The associated data structure for Type C nodes has some similarities with the associated data structure for Type B nodes; however, it is a much harder case since we cannot use the ordering on the y-coordinates of the right endpoints as for the Type B nodes. Instead we will precompute an approximation of |F(v) ∩ S| for every internal node v in T and every square S ∈ S that lies entirely within Int(v). Recall from Section 3.1 that S is a set of size O(n/ε^2) that is guaranteed to contain a square that gives a (6 + ε/8)-approximation.
Let S(v) denote the subset of squares in S that lie entirely within Int(v). The stored value for a square S ∈ S(v) is an upper bound on |F(v) ∩ S| and a lower bound on |F(v) ∩ S^+|, where S^+ is the same as the one defined for Type B nodes; see Figure 4(c). If S^+ intersects the left or right boundary of Int(v), then the query is performed as for a Type B node instead of a Type C node.
The associated data structure will be built on the set F(v); hence all segments span the interval Int(v). Let s_1, . . ., s_m be the segments in F(v) and let µ(s_i) be the angle of inclination of s_i. Partition F(v) into κ_1 sets F_1(v), . . ., F_{κ_1}(v) by angle of inclination, as for the Type B nodes, where κ_1 = √2/ε. Using a combination of the approach we used for Type B nodes and a result by Agarwal [1], we can prove the following lemma.
Lemma 16. Given a set F_j(v) = {s^j_1, . . ., s^j_{n_1}} of line segments (as defined above) and a set S(v) = {S_1, . . ., S_{n_2}} of squares lying entirely within Int(v), one can compute, in O((n_1 + κ_2 · n_2)^{4/3} log^{(w+2)/3}(n_1 + κ_2 · n_2)) time using O((n_1 + κ_2 · n_2)^{4/3}/log^{(2w+1)/3}(n_1 + κ_2 · n_2)) space, where w is a constant < 3.33, a set of n_2 real values {M^j_1(v), . . ., M^j_{n_2}(v)} such that for every i, 1 ≤ i ≤ n_2, the value M^j_i(v) is an upper bound on |F_j(v) ∩ S_i| and a lower bound on |F_j(v) ∩ S^+_i|.

Proof. For each square S ∈ S(v), partition the left side and the bottom side of S into 2κ_2 subsegments of equal length, where κ_2 = 16/ε. Note that a line segment in F_j(v) can intersect at most one of the subsegments, since the segments have positive slopes.
Agarwal [1] showed that, given a set of n_r red line segments and another set of n_b blue line segments, one can count, for each red segment, the number of blue segments intersecting it in overall O(n^{4/3} log^{(w+2)/3} n) time using O(n^{4/3}/log^{(2w+1)/3} n) space, where n = n_r + n_b and w is a constant < 3.33. Note that there are faster algorithms, e.g. [10], but to the best of the authors' knowledge they do not return the number of red-blue intersections for each red segment.
Let the subsegments along the bottom and left sides of every square S ∈ S(v) be our red set of segments; hence n_r = 2κ_2 · n_2. Let the n_b = n_1 segments in F_j(v) be our blue segments. Applying the algorithm by Agarwal [1] immediately gives that in O((n_1 + κ_2 · n_2)^{4/3} polylog(n_1 + κ_2 · n_2)) time we can compute the number of segments in F_j(v) that intersect each subsegment of the squares in S(v).
For a subsegment ℓ of S_i ∈ S(v), let a_{F_j}(ℓ) be the number of segments in F_j(v) that intersect ℓ, and let b_{F_j}(ℓ) be the maximum possible length of intersection between a segment in F_j(v) and S_i. Set M^j_i(ℓ) = a_{F_j}(ℓ) · b_{F_j}(ℓ), which is an upper bound on the total length of intersection between S_i and the segments in F_j(v) intersecting ℓ. To get an upper bound on the total length of intersection between the segments in F_j(v) and S_i, denoted M^j_i(v), we simply sum the M^j_i(ℓ)-values over all subsegments ℓ of S_i. This is performed for each square S_i ∈ S(v), 1 ≤ i ≤ n_2.
Using a similar analysis as in Lemma 14, we get the bounds stated in the lemma.

For each set F_j(v) apply Lemma 16, and for each square S_i ∈ S(v) precompute the value M_i = Σ_{j=1}^{κ_1} M^j_i(v). Store all the squares lying within the interval Int(v) in a balanced binary search tree, along with their precomputed values M_i. Note that each square of S can appear at most once on each level of the primary segment tree T. Furthermore, an edge s ∈ π can straddle at most two intervals on a level; as a result, the total time spent on one level of the segment tree to build the Type C associated data structures is O((n/ε^3)^{4/3} log^{(w+2)/3}(n/ε)). Since the number of levels in T is O(log n), the total time required to build all the Type C associated data structures is O((n/ε^3)^{4/3} log^{(w+5)/3}(n/ε)). Putting all the pieces together we get:

Lemma 17. The Type C associated data structures for T can be computed in O((n/ε^3)^{4/3} polylog(n/ε)) time using O(n^{4/3}/ε^4) space. Given a query square S ∈ S(v), a value M(v) can be returned in O(log(n/ε^2)) time such that |F(v) ∩ S| ≤ M(v) ≤ |F(v) ∩ S^+|.

Putting it together
In the previous section we showed how to construct a two-level data structure that uses a modified segment tree as the primary tree, and a set of associated data structures for the internal nodes of the primary tree. The primary tree requires O(n log n) space, and the complexity of the associated data structures is dominated by the Type C nodes, which require O((n/ε^3)^{4/3} log^{(w+5)/3}(n/ε)) time and O((n/ε^3)^{4/3}/log^{(2w+1)/3}(n/ε)) space to construct. Given a query square S ∈ S, a value M is returned in O((1/ε^2) · log^2 n) time such that Υ(S) ≤ M ≤ Υ(S^+).
According to Lemma 12 there exists a square $S \in \mathcal{S}$ whose packedness value is within a factor $(6+\varepsilon/8)$ of the maximum packedness value of $\pi$. Using the data structure described in this section we therefore obtain a $(6+\varepsilon/8)(1+\varepsilon/8)$-approximation. Since $\varepsilon$ is assumed to be at most $1$, this is at most a $(6+\varepsilon)$-approximation, and Theorem 2 follows.
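The final bound can be checked directly: $(6+\varepsilon/8)(1+\varepsilon/8) = 6 + \frac{7\varepsilon}{8} + \frac{\varepsilon^2}{64} \le 6 + \varepsilon$ whenever $\varepsilon \le 1$. A quick numeric confirmation of this arithmetic:

```python
def approx_factor(eps):
    """Combined factor from Lemma 12 and the data-structure query."""
    return (6 + eps / 8) * (1 + eps / 8)

# For every eps in (0, 1] the combined factor is at most 6 + eps.
for k in range(1, 101):
    eps = k / 100
    assert approx_factor(eps) <= 6 + eps
```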

Concluding remarks
In this paper we gave two approximation algorithms for the packedness value of a polygonal curve. The obvious open question is whether one can obtain a fast and practical $(1+\varepsilon)$-approximation.
We also computed approximate packedness values for 16 real-world data sets, and the experiments indicate that the notion of $c$-packedness is a useful realistic input model for curves and trajectories.

Lemma 3. The function $\psi_p(r)$ is a piecewise hyperbolic function. The pieces of $\psi_p(r)$ are of the form $a(1/r)+b$, for $a, b \in \mathbb{R}$, and the breakpoints of $\psi_p(r)$ correspond to $d$-cubes $H$ where: (i) a vertex of $\pi$ lies on a $(d-1)$-face of $H$, or (ii) a $(d-2)$-face (in $\mathbb{R}^3$, an edge) of $H$ intersects an edge of $\pi$.
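A concrete one-segment example in the plane illustrates the lemma (a toy instance of our own): take $p$ at the origin, the growing square $H_r^p = [-r,r]^2$, and the segment $s$ from $(1,0)$ to $(3,0)$. Then $|s \cap H_r^p|$ is $0$ for $r < 1$, $r-1$ for $1 \le r \le 3$, and $2$ for $r > 3$, so $\psi_p(r)$ equals $0$, then $1 - 1/r$, then $2/r$: every piece has the form $a(1/r)+b$, and the breakpoints $r=1$ and $r=3$ occur exactly when a face of $H_r^p$ meets a vertex of $s$.

```python
def clipped_length(r):
    # length of the segment from (1,0) to (3,0) inside the square [-r, r]^2
    return max(0.0, min(3.0, r) - 1.0)

def psi(r):
    # psi_p(r) = |s ∩ H_r^p| / r for the toy instance above
    return clipped_length(r) / r

# Middle piece: psi(r) = 1 - 1/r  (a = -1, b = 1)
assert abs(psi(2.0) - (1 - 1 / 2.0)) < 1e-12
# Last piece:   psi(r) = 2/r      (a = 2,  b = 0)
assert abs(psi(4.0) - 2 / 4.0) < 1e-12
```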

Now we are ready to prove the second observation.

Lemma 6. Consider the function $\psi_p(r)$ for a single segment $s$, i.e., $\psi_p(r) = \frac{|s \cap H_r^p|}{r}$. If the first point of $s$ encountered by $H_r^p$ is an interior point of $s$, then the function is non-decreasing from $r=0$ until $H_r^p$ encounters a vertex of $s$.

Figure 1 Illustrating the two cases in the proof of Lemma 7: Case 1 in (a) and Case 2 in (b).

Two point sets $A$ and $B$ in $\mathbb{R}^d$ are well-separated with respect to $s$ if there are two $d$-balls $C_A$ and $C_B$ such that (1) $C_A$ and $C_B$ have the same radius, (2) $C_A$ contains the bounding box of $A$ and $C_B$ contains the bounding box of $B$, and (3) the distance between $C_A$ and $C_B$ is at least $s$ times the radius of $C_A$ and $C_B$. The real number $s$ is called the separation ratio.

Lemma 8. Let $s > 0$ be a real number, let $A$ and $B$ be two sets in $\mathbb{R}^d$ that are well-separated with respect to $s$, let $a$ and $a'$ be two points in $A$, and let $b$ and $b'$ be two points in $B$. Then (1) $|aa'| \le (2/s) \cdot |ab|$, and (2) $|a'b'| \le (1 + 4/s) \cdot |ab|$.

Definition 9. Let $S$ be a set of $n$ points in $\mathbb{R}^d$, and let $s > 0$ be a real number. A well-separated pair decomposition (WSPD) for $S$, with respect to $s$, is a sequence $\{A_1, B_1\}, \ldots, \{A_m, B_m\}$ of pairs of non-empty subsets of $S$, for some integer $m$, such that:
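For reference, both parts of Lemma 8 follow from the definition of well-separatedness in a few lines; writing $\rho$ for the common radius of $C_A$ and $C_B$, a short derivation (our own, using the standard argument) is:

```latex
% Part (1): a, a' \in C_A, while |ab| \ge d(C_A, C_B) \ge s\rho, hence
|aa'| \le 2\rho = \frac{2}{s}\,s\rho \le \frac{2}{s}\,|ab|.
% Part (2): the triangle inequality, with part (1) applied on both sides:
|a'b'| \le |a'a| + |ab| + |bb'|
       \le \frac{2}{s}|ab| + |ab| + \frac{2}{s}|ab|
       = \Bigl(1 + \frac{4}{s}\Bigr)|ab|.
```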

Figure 2 (a) An illustration of the proof of Lemma 11 and the two types of segments that are considered. (b) Showing Case 1, and (c) Case 2 of Type II segments.

Figure 4 (a) A query square $S$ and the set $F(v)$. (b) Illustrating the three intervals $I_j^1$, $I_j^2$ and $I_j^3$. (c) The expanded square $S^+$.

By setting $\kappa_1 = 16\sqrt{2}/\varepsilon$ and $\kappa_2 = 16/\varepsilon$ we can prove the following.

Lemma 14. $M \le |F(v) \cap S^+|$.

Proof. Consider an arbitrary interval $\ell \in L_j$. Let $A$ be a possible segment in $F_j(v)$ with right endpoint on $\ell$ that maximises $|A \cap S|$, and let $B$ be any segment in $F_j(v)$ with right endpoint on $\ell$. It suffices to prove that $|A \cap S| \le |B \cap S^+|$. We have three cases depending on the position of $\ell$. Let $B' = B \cap S$ and let $B'' = B \cap (S^+ \setminus S)$.

$\ell \in I_j^1$: If the right endpoint of $B$ lies above $A$ then we can move $B$ vertically downward until its right endpoint coincides with the right endpoint of $A$; this will not increase $|B \cap S^+|$. If $B$ has a smaller angle of inclination than $A$ then $|A \cap S| - |B'| \le \frac{\sqrt{2}}{\kappa_1} \cdot |x_r - x| = \frac{\varepsilon}{16} \cdot |x_r - x|$. However, the length of $B''$ is at least $\frac{\varepsilon}{8} \cdot |x_r - x|$, hence
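The choice $\kappa_1 = 16\sqrt{2}/\varepsilon$ makes the loss term in the proof exactly $\frac{\varepsilon}{16}\,|x_r - x|$, which is half of the $\frac{\varepsilon}{8}\,|x_r - x|$ length guaranteed for $B''$. A quick numeric check of this arithmetic:

```python
import math

for eps in (0.1, 0.25, 0.5, 1.0):
    kappa1 = 16 * math.sqrt(2) / eps
    loss = math.sqrt(2) / kappa1  # coefficient of |x_r - x| lost inside S
    gain = eps / 8                # coefficient of |x_r - x| gained in S+ \ S
    assert abs(loss - eps / 16) < 1e-12
    assert loss <= gain / 2 + 1e-12
```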

Lemma 15. The associated data structure for Type B nodes can be constructed in $O(n \log^2 n)$ time using $O(n \log n)$ space. Given a query square $S = [x, x'] \times [y, y']$ for an associated data structure of Type B stored in an internal node $v$, a real value $M$ can be returned.

Moreover, $|s \cap S_i| \le b_{F_j}(\ell) \le |s \cap S_i^+|$, and $|F_j(v) \cap S_i| \le M_i^j(v) \le |F_j(v) \cap S_i^+|$, which proves the lemma.
Case 2: The $d$-cube $H^*$ contains one or more vertices, see Fig. 1(b). Let $v$ be a vertex inside $H^*$, and let $H_2$ be the $d$-cube with center at $v$ and radius twice that of $H^*$. Again, $H_2$ completely contains $H^* \cap \pi$, so $\Psi(H_2) \ge \frac{1}{2}\Psi(H^*)$, as required. In both cases we have constructed a $d$-cube $H_2$ centered at a vertex of $\pi$ for which $\Psi(H_2) \ge \frac{1}{2}\Psi(H^*)$, which proves the lemma.

Table 2
Real data sets, showing number of input trajectories n, dimensions d, average number of simplified vertices per trajectory, and a description.