Evolving Distributions Under Local Motion
Abstract
Geometric data sets that arise in modern applications are often very large and change dynamically over time. A popular framework for dealing with such data sets is the evolving data framework, where a discrete structure continuously varies over time due to the unseen actions of an evolver, which makes small changes to the data. An algorithm probes the current state through an oracle, and the objective is to maintain a hypothesis of the data set's current state that is close to its actual state at all times. In this paper, we apply this framework to maintaining a set of point objects in motion in d-dimensional Euclidean space. To model the uncertainty in the object locations, both the ground truth and hypothesis are based on spatial probability distributions, and the distance between them is measured by the Kullback-Leibler divergence (relative entropy). We introduce a simple and intuitive motion model in which, with each time step, the distance that any object can move is a fraction of the distance to its nearest neighbor. We present an algorithm that, in steady state, guarantees a distance of O(n) between the true and hypothesized placements. We also show that for any algorithm in this model, there is an evolver that can generate a distance of Ω(n), implying that our algorithm is asymptotically optimal.
Keywords and phrases: Evolving data, tracking, imprecise points, local-motion model, online algorithms
2012 ACM Subject Classification: Theory of computation → Computational geometry
Acknowledgements: The authors would like to thank Michael Goodrich for suggesting the question of how to model multidimensional evolving point sets.
Editors: Pat Morin and Eunjin Oh
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Many modern computational applications are characterized by two qualities: data sets are massive and vary over time. A fundamental question is how to maintain some combinatorial structure that is a function of such a data set. The combination of size and dynamics makes maintaining such a structure challenging. Given the large data sizes, single-shot algorithms may be too slow, and common dynamic algorithms, which support explicit requests for insertions and deletions, may not be applicable because structural changes are unseen by the algorithm.
Anagnostopoulos, Kumar, Mahdian, Upfal, and Vandin introduced a model for handling such data sets, called the evolving data framework [2]. In this framework, the structure varies repeatedly over time through the unseen actions of an evolver, which makes small, stochastic changes to the data set. The algorithm can probe the current state locally using an oracle. With the aid of this oracle, the algorithm maintains a hypothesis of the current state of the structure that is as close as possible to its actual state at all times. The similarity between the hypothesis and the current state is measured through some distance function. The algorithm’s objective is to achieve, in the steady state, a small distance between the hypothesis and the actual state. This framework has been applied to a variety of problems [1, 6, 21, 17, 36].
Consider, for example, sorting. The data consists of a set of objects over some total order. The evolver repeatedly selects a random pair of adjacent objects and swaps them. The oracle is given two objects and returns their relative order. The objective of the algorithm is to maintain an order that is as close to the current state as possible, where the distance is measured in terms of the Kendall tau distance, that is, the number of pairwise order inversions [22]. It has been shown that a Kendall tau distance of O(n) is achievable, and this is optimal [6, 5]. It is noteworthy that the optimal algorithm in the evolving framework is based on a simple quadratic-time sequential algorithm, and not on O(n log n)-time algorithms, as one might expect. The sorting problem has been generalized to tracking labels on a tree in [1], laying the foundations for a geometric framework for evolving data.
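To make the framework concrete, the following sketch (ours, not taken from [6, 5]; the `compare` oracle and the list-based hypothesis are assumed interfaces) illustrates the flavor of the quadratic-time strategy: repeatedly sweep over adjacent pairs of the hypothesized order, query the comparison oracle, and swap whenever an inversion is observed.

```python
def track_sorted_order(compare, hypothesis, passes):
    """Sketch of the quadratic-time strategy for sorting evolving data:
    repeatedly sweep over adjacent pairs of the hypothesized order, query the
    comparison oracle, and swap locally whenever an inversion is observed.
    `compare(a, b)` is an assumed oracle reporting the *current* relative
    order (negative if a precedes b, positive otherwise)."""
    n = len(hypothesis)
    for _ in range(passes):
        for k in range(n - 1):  # one bubble-sort-like pass
            if compare(hypothesis[k], hypothesis[k + 1]) > 0:
                hypothesis[k], hypothesis[k + 1] = hypothesis[k + 1], hypothesis[k]
    return hypothesis
```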
This paper focuses on the question of how to maintain a set of points whose positions evolve continuously in real d-dimensional space. In motion tracking applications, object movement is recorded through various technologies, including GPS-enabled mobile devices [31], RFID tags [28], and camera-based sensing [35]. Examples include the movement and migration of animals on land and in oceans, traffic and transport, defense and surveillance, and analysis of human behavior (see, e.g., [13, 24]).
Imprecision and uncertainty are unavoidable problems that arise in evolving systems. Due to sensor latency, the exact location of any object cannot be known with certainty. In the best case, any algorithm can maintain only an approximation to the current state. In order to bound the degree of uncertainty, it is necessary to impose restrictions on object motion. In traditional applications of the evolving data framework, the evolver acts randomly. This is not a reasonable assumption in practice, where moving objects are subject to physical laws or may have a sense of agency [15, 32, 34]. Much work has focused on realistic models of motion, but these can be difficult to analyze theoretically. A widely used model assumes that objects have a maximum speed limit, and as time passes, an object can be inferred to lie within a ball whose radius grows linearly over time based on this speed limit [9, 12, 20]. However, in practice, the speed at which any object can move depends on the congestion within its local environment.
In multidimensional space, there is no intrinsic total order, and it is less clear what it means to accurately track imprecise moving objects. Given the inherent uncertainty, we model the location of each object in terms of a spatial probability distribution. The distance between the actual state of the system and our hypothesis is naturally defined as the relative entropy (Kullback-Leibler divergence) between these two distributions. The Kullback-Leibler (KL) divergence is a fundamental measure in statistics that quantifies how one probability distribution differs from another [23]. It has numerous applications in statistical inference, coding theory, and machine learning, among other areas [10]. In our case, it serves two intuitive purposes. First, it measures how different the truth is from our hypothesis. Second, it can be used to quantify the additional information required to encode the actual location of each object, based on our hypothesis [10].
For the evolution of our data, we adopt a locality-sensitive stochastic motion model. We assume that with each time step, the motion of an object is constrained by its immediate environment, which we call the local-motion model. In this model, the distance that any object can move in a single time step is a fixed fraction of the distance to its nearest neighbor. The support of this object’s probability distribution is a region of size proportional to the nearest-neighbor distance. This model has the advantage that it does not impose arbitrary speed limits on the movement of objects, it is invariant under transformations that preserve relative distances (e.g., translation, rotation, and uniform scaling), and it satisfies the observed phenomenon that objects in dense environments have less personal space [19, 16], and exhibit slower movement than those in sparse environments [29, 4].
To control the communication complexity in determining object locations, we do not require that the oracle return the exact object positions. Instead, we assume access to an oracle that is given a Euclidean ball and the index of an object. It returns a pair of bits indicating whether this object lies within the ball and (if so) whether there is any other object of the set within the ball. Since every object's location is subject to uncertainty, the same query on the oracle might result in different outcomes at different times, and our algorithm is robust to such variations.
In our framework, the evolver and the algorithm operate asynchronously and in parallel. With each time step, the evolver selects an arbitrary object of the set and moves it in accordance with the local-motion model. (This need not be random and may even be adversarial.) This information is hidden from our algorithm. The algorithm selects an object and invokes the oracle on this object. Based on the oracle’s response, the algorithm updates the current hypothesized distribution for this point. Thus, the evolver and algorithm are involved in a pursuit game, with the evolver incrementally changing object distributions (possibly in an adversarial manner) and the algorithm updating its hypothesized distributions. The algorithm’s goal is to minimize the relative entropy between these distributions at all times. Our computational model and a formal statement of our results are presented in Section 2.
2 Problem Formulation and Results
Objects
The objects that populate our system can be thought of as imprecise points [8, 14, 25] that are drawn independently from probability distributions that depend on the proximity to other objects. More formally, at any fixed time step, let denote a point set of size in -dimensional Euclidean space, and let denote the index set . For each , let denote the distance to its nearest neighbor in . Given and nonnegative real , let denote the closed Euclidean ball of radius centered at , and let denote the uniform probability distribution over this ball. Given a real , where , define the -local feature region of to be . Let denote the uniform probability distribution, , and let denote this set of distributions. (Our choice of a uniform probability distribution is not critical to our approach; in the full version of the paper we show that our results apply to a much broader class of distributions.) We refer to as the truth or ground truth, as it represents the true state of the system, subject to the given imprecision. Together, and define a set of independent random variables , with distributed as (see Figure 1(a)).
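For concreteness, here is a small sketch of how the ground truth for one time step could be realized: compute each point's nearest-neighbor distance, form the local feature ball of radius proportional to it, and draw one sample per object uniformly from that ball. The parameter name `beta` for the local-feature scale factor is our own choice.

```python
import numpy as np

def local_feature_radii(centers, beta):
    """Radius of each object's local feature ball: beta times the distance to
    its nearest neighbor (beta stands in for the paper's scale factor)."""
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    np.fill_diagonal(dists, np.inf)
    return beta * dists.min(axis=1)

def sample_truth(centers, radii, rng):
    """Draw one sample per object, uniformly from its local feature ball."""
    n, d = centers.shape
    dirs = rng.normal(size=(n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    mags = rng.random(n) ** (1.0 / d)  # uniform radial law within a d-ball
    return centers + radii[:, None] * (mags[:, None] * dirs)
```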
The Local-Motion Model
As mentioned in the introduction, there are many ways to model the realistic motion of a collection of objects in an environment. A natural requirement is that each object’s motion is affected by the presence of nearby objects. Although there are many ways to incorporate this information (see, e.g., [29]), we have chosen a very simple model, where velocities are influenced by the distance to the nearest neighbor.
We think of the objects of our system as moving continuously over time, and our algorithm queries their state at regular discrete time steps. From the algorithm's perspective, objects move, or "evolve," over time in the following manner. Given a parameter , where , at each time step, an entity called the evolver selects an object and moves it by a distance of at most in any direction of its choosing (see Figure 1). While the value of is known, the action of the evolver, including the choice of the object and the movement, is hidden from the algorithm. Throughout, we assume that the evolver is a strong adversary, which means that it has access to our algorithm and the input set [7].
For a nonnegative integer time step , let and denote the underlying centers and distributions, respectively, at this time. To simplify notation, we omit the superscript when talking about the current time. We assume that there exists a bounding region in the form of a Euclidean ball centered at the origin. At all times, the points are restricted to lie within . The algorithm has knowledge of this ball. Define the system’s initial aspect ratio to be . Given any positive constant , let denote a factor- expansion of this ball about the origin.
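The following sketch illustrates a single evolver step under the local-motion model. The parameter name `alpha` for the motion factor and the projection back onto the bounding ball are our own simplifications, not taken from the paper.

```python
import numpy as np

def evolver_step(centers, index, displacement, alpha, bounding_radius):
    """One step of the local-motion evolver (a sketch): the chosen object
    moves by at most alpha times its current nearest-neighbor distance, in
    any direction; points are kept inside the bounding ball."""
    p = centers[index]
    others = np.delete(centers, index, axis=0)
    nn_dist = np.linalg.norm(others - p, axis=1).min()
    step = np.asarray(displacement, dtype=float)
    length = np.linalg.norm(step)
    if length > alpha * nn_dist:                 # enforce the local-motion bound
        step *= (alpha * nn_dist) / length
    q = p + step
    if np.linalg.norm(q) > bounding_radius:      # simplification: project back into the bounding ball
        q *= bounding_radius / np.linalg.norm(q)
    centers[index] = q
    return centers
```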
Oracle
Consider the state of the system at any fixed time . Knowledge about the current state of the system is provided by an entity called the oracle. It is given a Euclidean ball and an object index . Recall that for each , denotes a random variable distributed as . Given the ball and the index, the oracle returns a pair of bits:
- the first bit indicates whether the sampled location of the queried object lies within the ball, and
- the second bit indicates whether there exists some other object of the set whose sampled location lies within the ball.
The first element of the pair is used to estimate the location of the queried object, and the second is used to estimate the distance to its nearest neighbor. Because the sampled object locations are random variables, so is the oracle's response. Consistent with previous applications of the evolving data framework (see, e.g., [3]), we intentionally made the oracle as weak as possible, implying that our algorithm can be used with stronger oracles.
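A minimal sketch of this two-bit oracle, assuming `samples` holds one fresh draw per object from its true distribution (all names are ours):

```python
import numpy as np

def oracle(samples, center, radius, i):
    """The two-bit oracle: given a query ball (center, radius) and an index i,
    report (a) whether the sampled location of object i lies in the ball, and
    (b) whether the sampled location of some other object lies in it."""
    inside = np.linalg.norm(samples - center, axis=1) <= radius
    in_i = bool(inside[i])
    other_in = bool(np.any(np.delete(inside, i)))
    return in_i, other_in
```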
Hypotheses and Distance
The algorithm maintains a hypothesis of the current object locations, which is defined to be a set of spatial probability distributions in . The distance between the truth and the current hypothesis , denoted , is defined as the sum of Kullback-Leibler divergences from the hypothesized distributions to the true ones. For , let
where denotes the measure over and define
(1)
As a baseline for comparisons, we introduce a naïve hypothesis, denoted , which assumes no information about the locations of the objects, other than the fact that they lie within the bounding region. Recall that denotes this bounding region and denotes a factor- expansion about its center. Since all the points of lie within and , is guaranteed to contain all the -local feature regions. For all , let be the uniform distribution over , and let . Regardless of the initial truth, the initial distance of the th object satisfies the following bound.
Let . Under our assumption that and are constants, we have . The initial aspect ratio, , depends on the initial configuration of the points of , and can be arbitrarily high. In the best case, when the points are uniformly distributed in , we have and hence . The objective of our algorithm is to maintain a significantly smaller bound on this distance.
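For intuition, the KL divergence between uniform distributions over nested balls has a simple closed form, which is the calculation underlying the bound above (the radii r and R below are generic placeholders rather than the paper's notation):

\[
D_{\mathrm{KL}}\bigl(U_{B(x,r)} \,\big\|\, U_{B(y,R)}\bigr)
  = \int_{B(x,r)} \frac{1}{\operatorname{vol}(B(x,r))}
    \ln\frac{1/\operatorname{vol}(B(x,r))}{1/\operatorname{vol}(B(y,R))}\,dz
  = \ln\frac{\operatorname{vol}(B(y,R))}{\operatorname{vol}(B(x,r))}
  = d\,\ln\frac{R}{r},
\]

valid whenever B(x,r) is contained in B(y,R); if the smaller ball is not contained in the larger one, the divergence is infinite. Thus each object initially contributes on the order of d times the logarithm of the ratio between the bounding radius and its local feature radius, which is why the initial distance grows with the aspect ratio.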
The combination of evolver, oracle, and distance function constitute a model of evolving motion, which we henceforth call the -local-motion model.
Class of Algorithms
We assume that the algorithm that maintains the hypothesis runs in discrete steps over time. With each step, it may query the oracle a constant number of times, perform a constant amount of work, and then update the current set of hypotheses. The number of oracle queries is independent of but can depend on the dimension , the local feature scale factor , and the motion factor . In our case, this work takes the form of updating the hypothesis for the object that was queried. In the purest form of the evolving data framework, the evolver and algorithm alternate [2]. Instead, similar to the generalized framework proposed in [1], we assume that there is a fixed speed-up factor, denoted . When amortized over the entire run, the ratio of the number of steps taken by the algorithm to the number taken by the evolver does not exceed .
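The interaction can be pictured as the following driver loop (a sketch; `evolver_step` and `algorithm_step` are assumed callables): for every step of the hidden evolver, the algorithm gets, in an amortized sense, a constant number of its own steps.

```python
def run(evolver_step, algorithm_step, speedup, evolver_steps):
    """Interleave the (hidden) evolver with the tracking algorithm at a fixed
    speed-up factor.  Each algorithm step may make O(1) oracle queries and
    perform O(1) work, per the model in this section."""
    for t in range(evolver_steps):
        evolver_step(t)               # hidden from the algorithm
        for _ in range(speedup):      # amortized speed-up factor
            algorithm_step()
```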
Objective and Results
The objective of the algorithm is as follows. Given any starting ground-truth configuration, after an initial "burn-in" period, the algorithm guarantees that the hypothesis is within a bounded distance of the truth, subject to the model assumptions and the given speed-up factor. Our main result is that there exists an algorithm with a constant speed-up factor that maintains a distance of O(n) in steady state.
Theorem 1.
Consider a set of evolving objects in under the -local-motion model, for constants and , where . There exists an algorithm with constant speed-up and burn-in time such that, for all , this algorithm maintains a distance of O(n).
The algorithm and its analysis are presented in Section 4. Given that no algorithm can guarantee an exact match between hypothesis and truth, it is natural to wonder how close this is to optimal. In Section 3, we show that it is asymptotically optimal by proving that for any algorithm and any constant speed-up factor, there exists an evolver that can force a distance of Ω(n) in steady state.
Why the KL Divergence?
The principal challenge in generalizing the evolving framework from simple 1-dimensional applications like sorting to a multidimensional setting is the lack of an obvious distance measure that captures how close the hypothesis is to the current state. Our approach is motivated by an information-theoretic perspective. The KL divergence serves as a measure of how different the actual object distribution is from our hypothesis . (Note that the KL divergence is asymmetric, which is to be expected, given the asymmetric roles of the truth and our hypothesis.)
As an example of this information-theoretic approach, consider the following application in coding theory and space quantization [10]. The objective is to communicate the location of an imprecise point with any arbitrary resolution . From Shannon’s source coding theorem [26], the theoretical lower bound for the expected number of bits required for communication is given by the Shannon entropy [30]
where is a cell (a -dimensional box) of size in , and , by a slight abuse of notation, is the probability associated with a cell under the underlying probability distribution function . This is roughly achieved by a Huffman coding [18] of the space around , which intuitively assigns to an event a number of bits proportional to the log of the inverse of its probability measure; in our case, the event is that the point lies within a cell of size .
Since the ’s are not available to us, we use a similar strategy of encoding the space using . Therefore, in expectation, we use roughly
bits. When is arbitrarily small, the difference . Therefore, roughly represents the expected number of additional bits needed to represent the location of ’s individually using just ’s, compared to the absolute theoretical limit.
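This coding interpretation can be checked numerically on any discretization of space: the expected code length of a code built for the hypothesis exceeds the entropy of the truth by exactly the KL divergence. A small sketch (the cell-probability arrays are assumed inputs):

```python
import numpy as np

def extra_bits(p_cells, h_cells):
    """Expected extra code length (in bits) when encoding the truth using a
    code designed for the hypothesis, on a common discretization into cells:
    cross-entropy minus entropy, which equals the KL divergence."""
    p = np.asarray(p_cells, dtype=float)
    h = np.asarray(h_cells, dtype=float)
    mask = p > 0
    entropy = -np.sum(p[mask] * np.log2(p[mask]))
    cross_entropy = -np.sum(p[mask] * np.log2(h[mask]))
    return cross_entropy - entropy
```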
Paper Organization
The remainder of the paper is organized as follows. In the next section, we present Theorem 2, which provides a lower bound on the distance achievable by any algorithm in our model. In Section 4, we present the algorithm and analyze its steady-state performance. In Section 5, we discuss possible relaxations to the assumptions made in our model.
3 Lower Bound
In this section, we show that maintaining a distance of O(n) is the best we can hope to achieve. This is established in the following theorem. Due to space limitations, the proof has been deferred to Section A.1.
Theorem 2.
For any algorithm , there exists a starting configuration and an evolver (with knowledge of ) in the local-motion model such that, for any positive integer , there exists a later time at which the distance between the truth and the hypothesis is Ω(n).
The proof follows a structure similar to other lower-bound proofs in the evolving data framework [3, 6, 1]. Intuitively, over a period of time of length proportional to , the algorithm can inspect the locations of only a constant fraction of the points. During this time, the evolver can move a sufficient number of uninspected points so that the overall distance increases by Ω(n).
4 Algorithm
In this section, we present our algorithm and analyze its performance. We begin by defining our hypothesis. For every index , we define two parameters: , and . For we set the hypothesis probability density for the th point to be the -fold product of independent Cauchy distributions, one per coordinate, with center at and scaling parameter [33]. That is,
(2: Hypothesis Def.)
(Note that this differs from the standard multivariate Cauchy [33].) If we let denote the probability density function for the standard -fold product of Cauchy distributions (centered at the origin, with unit scale), we can express this equivalently as
It will be convenient to define the hypothesis ball , which we use to illustrate . Note that, unlike our truth probabilities , our hypothesis distributions have unbounded support. Our algorithm modifies the parameters and in response to information received from the oracle.
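A sketch of the hypothesis density and its sampler, under our own parameter names (`center` for the distribution's center and `scale` for the common per-coordinate scaling parameter):

```python
import numpy as np

def hypothesis_density(x, center, scale):
    """Density of the hypothesis at x: a d-fold product of one-dimensional
    Cauchy densities, each centered at the matching coordinate of `center`
    with a common scaling parameter `scale`.  (This is the product form noted
    above, not the standard multivariate Cauchy.)"""
    z = (np.asarray(x, dtype=float) - np.asarray(center, dtype=float)) / scale
    return float(np.prod(1.0 / (np.pi * scale * (1.0 + z ** 2))))

def sample_hypothesis(center, scale, rng):
    """Sample each coordinate independently from the corresponding Cauchy."""
    return np.asarray(center, dtype=float) + scale * rng.standard_cauchy(len(center))
```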
4.1 Potential Function
Due to the subtleties of tracking the Kullback-Leibler divergence, we introduce a potential function to aid in the analysis. For each object index , we define an individual potential function , which bounds the distance to within a constant. The total potential will be the sum of these functions.
Let denote the Euclidean distance between the center of the actual distribution and the center of the hypothesis ball. Let denote the radius of the local feature of . Recall that is the radius of the hypothesis ball (see Figure 2). We define the individual and system potentials to be
(3)
The following lemma shows that, up to additive and scalar constants that depend on the dimension, distances can be bounded above by this potential.
Lemma 3.
There exists a constant (depending on dimension) such that for any , , and hence .
Proof.
Recall that is the probability density function of a uniform distribution over a ball of radius . Thus, for , , where denotes the volume of the unit Euclidean ball in , and . Therefore,
(4)
For and , by the triangle inequality, we have
Therefore,
for a suitably chosen that depends only on the dimension.
Remark (Potential function and approximating pairwise distances).
The potential function we define here is symmetric. It is remarkable that a symmetric function bounds an asymmetric function under the choice of Cauchy distributions as hypotheses.
To demonstrate the value of our potential function, suppose that for , for some constant . This implies that , which further implies for some constant . Similarly, for some constant . By the triangle inequality, the distance between the centers of two hypothesis balls satisfies
Since , we have , for some constant .
Since is a constant approximation for , it immediately follows that we can maintain a constant-weight approximation of structures like the Euclidean minimum spanning tree and the Euclidean traveling salesman tour on by constructing the same on ’s, the centers of the hypotheses ’s.
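As an illustration of this observation, a minimum spanning tree computed on the hypothesis centers can stand in for one on the true centers. The sketch below uses SciPy's dense construction for simplicity; it is quadratic in the number of objects, which is fine for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import distance_matrix

def emst_weight(centers):
    """Weight of the Euclidean minimum spanning tree on the hypothesis
    centers, computed from a dense pairwise distance matrix."""
    dists = distance_matrix(centers, centers)
    return minimum_spanning_tree(dists).sum()
```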
An important feature of our choice of a potential function is that the evolver cannot change its value by more than a constant with each step. Recall that the evolver selects an object index and moves by a distance of at most in any direction. This results in at most an fraction change in the values of and , and hence the resulting change in is bounded by a constant. However, note that the movement of could potentially affect the local feature sizes of a number of other objects in the system. We can show by a packing argument that there are only a constant number of indices whose value is affected. Furthermore, these values are only changed by a constant. This is encapsulated in the following lemma. Its proof has been deferred to Section A.1.
Lemma 4.
Each step of the evolver increases the potential by at most a constant.
4.2 The Algorithm
Next, we present our algorithm. The algorithm runs in parallel to the evolver, running faster by a speed-up factor of (whose value will be derived in our analysis). Each step of the algorithm involves a probe of an object by the oracle, and following that, a possible modification of a hypothesis, . The total running time of the algorithm is proportional to the number of steps.
Let us begin with a high-level explanation. The details are provided in Algorithm TrackByZoom. Recall that we are given a set of objects, each represented by , a uniform probability distribution over a ball , where is the distance to its nearest neighbor and is the local feature scale. The points are restricted to lie within a bounding ball centered at the origin. The algorithm maintains a hypothesis in the form of Cauchy distributions, each represented by a hypothesis ball , where is the center of the distribution and is the distribution’s scaling parameter.
The algorithm begins with an initial hypothesis, where each hypothesis ball is just . This reflects the fact that we effectively assume nothing about the location of , except that it lies within the bounding ball. The algorithm then proceeds in a series of iterations, where each iteration handles all the indices in order. The handling of each index involves two processes, called zoom-out and zoom-in.
In the zoom-out process, we query the oracle on index to check whether (1) the sampled point lies within its hypothesis ball and (2) at least one other lies within a concentric ball whose radius is larger by a factor of . If so, we expand the hypothesis ball by a factor of . (We show in Lemma 5 that this guarantees that the hypothesis ball now contains the local feature ball .) We then proceed to the zoom-in process. If not, we double the hypothesis ball and repeat (see Figure 3).
In the zoom-in process, we first check whether lies within the hypothesis ball. (The evolver could have moved .) If not, we return to the zoom-out process. If so, we next check the -expansion of this ball. If there is no other in this expanded ball, we accept the current hypothesis for , and go on to the next point in the set. (We show in Lemma 10 that at this point in time is bounded by a constant). If there is such an however, we may need to shrink the hypothesis ball for the current index. We cover the hypothesis ball with a collection of balls whose radii are smaller by a constant factor, and we test whether lies within any of them (see Figure 4). As soon as we find one, we shrink the hypothesis ball about this ball. We repeat this process until one of the earlier conditions applies (causing us to return to the zoom-out process or move on to the next point).
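The following sketch captures the control flow of one round of this process for a single index, written against the two-bit oracle interface (here `oracle(center, radius, i)` is assumed to return the two bits). The expansion factor `gamma`, the shrink factor, and the grid covering are illustrative placeholders; the actual constants and covering are as derived in Lemma 5 and illustrated in Figure 4.

```python
import itertools
import numpy as np

def track_index(oracle, i, center, radius, gamma=4.0, shrink=0.5):
    """One round of the TrackByZoom zoom-out/zoom-in loop for index i
    (a sketch with placeholder constants).  Returns the updated hypothesis
    ball (center, radius)."""
    center = np.asarray(center, dtype=float)
    while True:
        # Zoom-out: grow until the ball contains i's sample and another
        # object appears in the gamma-expanded concentric ball.
        while True:
            in_ball, _ = oracle(center, radius, i)
            _, other_near = oracle(center, gamma * radius, i)
            if in_ball and other_near:
                radius *= gamma          # now large enough to contain the local feature ball
                break
            radius *= 2.0                # otherwise double and retry
        # Zoom-in: shrink while the hypothesis ball is still loose.
        while True:
            in_ball, _ = oracle(center, radius, i)
            if not in_ball:              # the evolver moved object i; zoom out again
                break
            _, other_near = oracle(center, gamma * radius, i)
            if not other_near:           # tight enough; accept this hypothesis
                return center, radius
            for offset in _cover_offsets(center.shape[0]):
                small_center = center + offset * radius
                if oracle(small_center, shrink * radius, i)[0]:
                    center, radius = small_center, shrink * radius
                    break
            else:                        # placeholder covering missed; zoom out again
                break

def _cover_offsets(d):
    """Relative centers (in units of the current radius) of a simple grid
    covering of a ball by smaller balls; a stand-in for the paper's covering."""
    return [np.array(p) for p in itertools.product((-0.5, 0.0, 0.5), repeat=d)]
```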
4.3 Analysis
In this section, we analyze the distance that the algorithm maintains with the ground truth. Recall that the initial hypotheses are based on the bounding ball , which is centered at the origin. Letting , we have , as defined in Eq. (2: Hypothesis Def.). Using Eq. (3), the initial potential function satisfies:
(5)
Suppose the conditions in lines 8 and 23 are satisfied. We prove the following lemma to justify the expansion factor . The proof is rather technical and has been deferred to the full version of the paper.
Lemma 5 (Expansion Factor).
If and , then .
Let us consider how Algorithm TrackByZoom affects the potential. For a particular index , during the zoom-out process, we double the radius of the hypothesis ball. If the Euclidean distance between the centers of the hypothesis and the local-feature ball is much larger than , then decreases by an additive constant. However, once is greater than , the algorithm risks increasing . We show that the number of steps in which increases is a constant for every index . During the zoom-in process, on the other hand, we maintain the invariant that the hypothesis ball contains the local-feature ball (Lemma 5), which implies that . Therefore, decreases by a constant amount whenever we shrink the hypothesis ball by a constant factor. However, if the evolver moves while index is being processed, our algorithm may transition to the zoom-out process again. Hence, if the evolver executes some steps in the th iteration, our algorithm may increase by only . We obtain the following lemma. The proof is deferred to Section A.1.
Lemma 6 (Algorithm’s contribution towards ).
For any iteration , let denote the number of steps executed by the evolver. Then TrackByZoom increases in steps, each time by . In each of the remaining steps of the th iteration, it decreases by in an amortized sense.
Let be the potential function at the start of the th iteration, and let be the time taken by the th iteration of the algorithm. At the conclusion of the zoom-in process for a particular index , the algorithm fixes a hypothesis that is at a reasonable distance from the truth . In fact, we show in Lemma 10 that at this point in time, is , a constant. Therefore, by Lemma 6, TrackByZoom, running at a constant speed-up, spends roughly time (within constant factors) in reducing to . Summing over all indices, we see that the time taken by the th iteration, , is . Intuitively, for a sufficiently large speed-up factor, the actions of the evolver can only affect by a small fraction of this amount. We summarize these observations in the following lemma. The proof is deferred to Section A.1.
Lemma 7 (Iteration time proportional to Potential).
There exists a constant speed-up factor for TrackByZoom such that .
We are now ready to derive how changes over the course of multiple iterations. We show that for a sufficiently large (constant) speed-up factor , after iterations converges to . (Recall that is the initial aspect ratio.)
Lemma 8.
There exists a constant speed-up factor for Algorithm TrackByZoom and , such that for all ,
Proof.
By Lemma 4, in the th iteration the evolver takes at most steps and increases by at most . Therefore, by Lemma 6, TrackByZoom increases by . Since the algorithm runs at a speed-up factor it takes steps, and by Lemma 6, it decreases by at least , for some . Accounting for all the increases and decreases we have:
There exists a sufficiently large such that , for . So,
By Lemma 7, for a sufficiently large (constant) , there exists such that . Therefore,
By Eq. (5), , and since there exists such that , we have
(6)
Observe that this is whenever .
If , then by Lemma 7, . Thus, increases by at most throughout iteration . Therefore, for any time during iteration , as well.
Using Eq. (6) we observe that for all , . By Lemma 7, this implies that . Hence, for some , we have , for all .
By combining Lemmas 8 and 3, we complete the proof of Theorem 1. It is noteworthy that there is a trade-off between the speed-up factor and the motion parameter . Recall that with each step, the evolver can move by a distance of up to . If we reduce this parameter to , then it takes the evolver at least steps to effect the same change as before. Thus, by reducing the local-motion factor, it is possible to achieve a speed-up factor of . Formally, we have:
Corollary 9.
There exists a motion parameter such that for a set of evolving objects in under the -local-motion model, where , Algorithm TrackByZoom (with speed-up factor 1) maintains a distance of O(n) for all .
5 Extensions and Conclusion
The local-motion model introduced in Section 2 assumes a uniform probability distribution for each object. However, the algorithm TrackByZoom, with an appropriate speed-up factor, can be applied to a wider class of probability distributions. A distribution in this class has , the volume of the th object, as its support, and is scaled according to , the local feature size. Many natural truncated distributions [11], such as the uniform distribution, the truncated normal distribution [27], and the truncated Cauchy distribution, belong to this family. Details are presented in the full version of the paper.
An additional refinement to the motion model we have suggested might involve redefining the local feature region and the speed of a point based on the distance to its -nearest neighbors for a fixed , instead of just the nearest-neighbor distance. We conjecture that our principal results extend to this generalization as well, given a suitable range counting oracle. Yet another variation of the model could involve letting the uncertainty associated with a point have unbounded support (e.g., a normal rather than a truncated normal distribution). This is a significantly harder problem, as a point may lie outside its local feature region with constant probability, requiring any maintenance algorithm to have a super-constant speed-up. We aim to study these questions in the future.
References
- [1] A. Acharya and D. M. Mount. Optimally tracking labels on an evolving tree. In Proc. 34th Canad. Conf. Comput. Geom., pages 1–8, 2022.
- [2] A. Anagnostopoulos, R. Kumar, M. Mahdian, and E. Upfal. Sorting and selection on dynamic data. Theoretical Computer Science, 412(24):2564–2576, 2011. doi:10.1016/j.tcs.2010.10.003.
- [3] A. Anagnostopoulos, R. Kumar, M. Mahdian, E. Upfal, and F. Vandin. Algorithms on evolving graphs. In Proc. 3rd Innov. Theor. Comput. Sci. Conf., pages 149–160, 2012. doi:10.1145/2090236.2090249.
- [4] M. Beermann and A. Sieben. The connection between stress, density, and speed in crowds. Scientific Reports, 13:13626, 2023. doi:10.1038/s41598-023-39006-8.
- [5] J. J. Besa, W. E. Devanny, D. Eppstein, M. T. Goodrich, and T. Johnson. Optimally sorting evolving data. In 45th Internat. Colloq. Autom., Lang., and Prog., pages 81:1–81:13, 2018. doi:10.4230/LIPIcs.ICALP.2018.81.
- [6] J. J. Besa, W. E. Devanny, D. Eppstein, M. T. Goodrich, and T. Johnson. Quadratic time algorithms appear to be optimal for sorting evolving data. In Proc. 20th Workshop Algorithm Eng. and Exp., pages 87–96, 2018.
- [7] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 2005.
- [8] K. Buchin, M. Löffler, P. Morin, and W. Mulzer. Delaunay triangulation of imprecise points simplified and extended. Algorithmica, 61:674–693, 2011. doi:10.1007/s00453-010-9430-0.
- [9] D. Busto, W. Evans, and D. Kirkpatrick. Minimizing interference potential among moving entities. In Proc. 30th Annu. ACM-SIAM Sympos. Discrete Algorithms, pages 2400–2418, 2019. doi:10.5555/3310435.3310582.
- [10] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 2012. doi:10.1002/047174882X.
- [11] Y. Dodge. The Oxford Dictionary of Statistical Terms. Oxford Univ. Press, 2003. doi:10.1002/sim.1812.
- [12] W. Evans and D. Kirkpatrick. Minimizing query frequency to bound congestion potential for moving entities at a fixed target time. Algorithms, 17:246, 2024. doi:10.3390/a17060246.
- [13] J. Gudmundsson, P. Laube, and T. Wolle. Computational movement analysis. In W. Kresse and D. Danko, editors, Handbook of Geogr. Inf., pages 725–741. Springer, 2012. doi:10.1007/978-3-540-72680-7_22.
- [14] L. J. Guibas, D. Salesin, and J. Stolfi. Epsilon geometry: Building robust algorithms from imprecise computations. In Proc. Fifth Annu. Sympos. Comput. Geom., pages 208–217, 1989. doi:10.1145/73833.73857.
- [15] D. Helbing, L. Buzna, A. Johansson, and T. Werner. Self-organized pedestrian crowd dynamics: Experiments, simulations, and design solutions. Transportation Science, 39:1–24, 2005. doi:10.1287/trsc.1040.0108.
- [16] O. Hesham and G. Wainer. Context-sensitive personal space for dense crowd simulation. In Proc. Symp. Simulation for Architecture and Urban Design, pages 1–8, 2017. URL: https://dl.acm.org/doi/10.5555/3289787.3289806.
- [17] Q. Huang, X. Liu, X. Sun, and J. Zhang. Partial sorting problem on evolving data. Algorithmica, 79:960–983, 2017. doi:10.1007/s00453-017-0295-3.
- [18] D. A. Huffman. A method for the construction of minimum-redundancy codes. Proceedings of the IRE, 40:1098–1101, 1952. doi:10.1109/JRPROC.1952.273898.
- [19] U. Jain. Effects of population density and resources on the feeling of crowding and personal space. J. Social Psych., 127:331–338, 1987. doi:10.1080/00224545.1987.9713699.
- [20] S. Kahan. A model for data in motion. In Proc. 23rd Annu. ACM Sympos. Theory Comput., pages 265–277, 1991. doi:10.1145/103418.103449.
- [21] V. Kanade, N. Leonardos, and F. Magniez. Stable matching with evolving preferences. Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 2016.
- [22] M. G. Kendall. A new measure of rank correlation. Biometrika, 30:81–93, 1938. doi:10.2307/2332226.
- [23] S. Kullback and R. A. Leibler. On information and sufficiency. Ann. Math. Statist., 22:79–86, 1951. doi:10.1214/aoms/1177729694.
- [24] P. Laube. Computational Movement Analysis. Springer Cham, 2014. doi:10.1007/978-3-319-10268-9.
- [25] M. Löffler and M. van Kreveld. Largest and smallest convex hulls for imprecise points. Algorithmica, 56:235–269, 2010. doi:10.1007/s00453-008-9174-2.
- [26] D. J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003. URL: https://books.google.com/books?id=AKuMj4PN_EMC.
- [27] F. Nielsen. Statistical divergences between densities of truncated exponential families with nested supports: Duo Bregman and duo Jensen divergences. Entropy, 24:421, 2022. doi:10.3390/e24030421.
- [28] R. Pan, Z. Han, T. Liu, H. Wang, J. Huang, and W. Wang. An RFID tag movement trajectory tracking method based on multiple RF characteristics for electronic vehicle identification ITS applications. Sensors, 23:7001, 2023. doi:10.3390/s23157001.
- [29] A. Seyfried, B. Steffen, W. Klingsch, and M. Boltes. The fundamental diagram of pedestrian movement revisited. J. Stat. Mech. Theory Exp., 2005:P10002, 2005. doi:10.1088/1742-5468/2005/10/P10002.
- [30] C. E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27:623–656, 1948. doi:10.1002/j.1538-7305.1948.tb00917.x.
- [31] S. M. Tomkiewicz, M. R. Fuller, J. G. Kie, and K. K. Bates. Global positioning system and associated technologies in animal behaviour and ecological research. Phil. Trans. R. Soc. B, 365:2163–2176, 2010. doi:10.1098/rstb.2010.0090.
- [32] M. van Kreveld, M. Löffler, F. Staals, and L. Wiratma. A refined definition for groups of moving entities and its computation. Internat. J. Comput. Geom. Appl., 28(2):181–196, 2018. doi:10.4230/LIPIcs.ISAAC.2016.130.
- [33] S. Verdú. The Cauchy distribution in information theory. Entropy, 25:346, 2023. doi:10.3390/e25020346.
- [34] L. Wiratma, M. Löffler, and F. Staals. An experimental comparison of two definitions for groups of moving entities. In Proc. 10th Internat. Conf. Geogr. Inf. Sci., pages 64:1–64:6, 2018. doi:10.4230/LIPIcs.GIScience.2018.64.
- [35] S. Yi, H. Li, and X. Wang. Understanding pedestrian behaviors from stationary crowd groups. In Proc. IEEE Conf. Comput. Vis. Pattern Recogn., pages 3488–3496, 2015. doi:10.1109/CVPR.2015.7298971.
- [36] Y. Zou, G. Zeng, Y. Wang, X. Liu, X. Sun, J. Zhang, and Q. Li. Shortest paths on evolving graphs. In H. T. Nguyen and V. Snasel, editors, Computational Social Networks, pages 1–13. Springer, 2016. doi:10.1007/978-3-319-42345-6_1.
Appendix A Appendix
A.1 Deferred Proofs
Note that some of the restatements of the lemmas given here provide additional details, which were not present in the original statements.
The next lemma applies whenever Algorithm TrackByZoom has completed its processing of an object on Line 20. It shows that the potential value for this object does not exceed a constant.
Lemma 10 (Done tracking ).
There exists a constant , such that whenever Algorithm TrackByZoom completes its processing of any object , .
Proof.
The algorithm moves on to the next point when the condition in line 19 is satisfied ( and ). Since , intersect. Moreover, if is the current nearest neighbor of , implies lies outside the expansion of (see Figure 5).
Now, because is the nearest neighbor of , , and hence . By the triangle inequality, the following series of inequalities holds:
implying that .
Now consider the hypothesis ball, , set by the algorithm in the previous step (via Line 26). From Lemma 5, we have . Let . Since the algorithm shrinks the hypothesis ball by at least a factor of 2 during the zoom-in process, is at least . And therefore . We also have (see Figure 5). Therefore,
Theorem 2 (Lower Bound on the Distance).
For any algorithm , there exists a starting configuration and an evolver (with knowledge of ) in the local-motion model such that, for any positive integer , there exists a later time at which the distance between the truth and the hypothesis is Ω(n).
Proof.
We construct a point set on the real line (), and we will mention at the end how to adapt the proof to any dimension . Given the small constant , the local feature constant, we define to be the following set of points in .
Let , the distance corresponding to the first point in the tuple. We similarly define and . Let denote the current distance to ’s nearest neighbor. The local feature interval for , denoted , is . Recall that is the uniform probability over . Therefore, , where is the diameter of . Now
By the concavity of the log function and Jensen’s inequality, we have
(7)
We may assume that (for otherwise we can set and are done). This implies there are only many ’s such that . Call the remaining , ’s the proximal set, denoted .
We are now ready to describe the evolver’s actions. It chooses to stay dormant until time . For a large enough constant , consider the time interval . The algorithm , running at a speed-up factor of , can modify at most hypotheses in the time interval. The evolver chooses not to move any of those points. Therefore can only reduce by over this time interval. Now there are at least members in the proximal set , whose hypotheses were not altered by . Call this set the stable set, denoted .
For , the evolver selects some members from . Call that set . To be specific, for all , the evolver chooses to move away from some times, by a distance of exactly , where is the distance of from its current nearest neighbor. (Note that the nearest neighbor will remain throughout these operations.) Given this value of , after the conclusion of these operations, the distance between and is at least 2. For , the local feature of changes to an interval , where . Now, for , using a similar analysis as Eq. (7), we have
(8)
Thus, for all , we have and . Therefore,
If as well, then similarly from Eq. (8), we have
But this yields a contradiction, since and imply that
Therefore, for , , and since , we have .
For a general dimension , we define , in the same way except that all the points lie on a single axis. The evolver also moves these points along that particular axis. The rest of the analysis involves integration over a region of space rather than an interval, but it is straightforward and carries over to .
Lemma 11 (Evolver’s contribution to ).
Every step of the evolver increases the potential function by at most a constant.
Proof.
Suppose the evolver chooses to move at time . Now is the nearest-neighbor distance of at time . Therefore, moves by a distance of at most , implying . Similarly . Therefore,
(9)
Let be the set of indices of points whose nearest neighbor at time 0 was , or whose nearest neighbor at time 1 was . Only points with indices in this set have their potential function changed. If , , and do not change. Since moves by at most distance, the nearest neighbor distance changes by at most as well. Therefore,
(10)
Since , we also have
(11)
If was the nearest neighbor of , then , and hence . Using the last fact, and dividing Eq. (10) by , we have . Similarly, if was the nearest neighbor of , then using Eq. (11) we have . Therefore for . Hence,
(12)
Finally, we show that . To that end, we solve a more general problem: what is the maximum number of points in whose nearest neighbor is ? Without loss of generality, let be the nearest neighbor of , such that . Let 's nearest neighbor be . Consider the line segment , and its intersection with the ball centered at and passing through . Call the intersection point . For any general point , we similarly define . Note that . By the triangle inequality, we have . Since is the nearest neighbor of , we have . Therefore . Therefore, for any such that is the nearest neighbor of and , we have . Let . Draw a ball of radius 1/2 around and around every member of . These balls are pairwise non-overlapping. However, all of them are contained within a ball of radius around . Therefore . Observing that , and using Eqs. (9) and (12), we obtain the result.
Lemma 12 (Algorithm’s contribution towards ).
For any iteration , let denote the number of steps executed by the evolver. Then TrackByZoom increases in steps, each time by . In each of the remaining steps of the th iteration, it decreases by in an amortized sense.
Proof.
For a particular index , we first concentrate on the zoom-out routine of Algorithm TrackByZoom (see Line 6). Assume for the time being that the evolver leaves untouched; we show later how to handle the case where the evolver moves while the algorithm is processing the index . The algorithm increases by a constant factor in Line 12. Therefore, decreases by a constant amount whenever . Now suppose that at time , for the first time during the zoom-out routine. This implies that a 2-expansion of in line 12 contains the entire . Therefore, a further -expansion definitely contains another , , satisfying the condition in line 8. We conclude that once , the algorithm increases by only a constant amount before moving on to the zoom-in routine of the algorithm (line 14).
Now, we look at the zoom-in routine of the algorithm (line 14). The algorithm only zooms in as long as the condition in line 23 is satisfied. Since we expand with an expansion factor in line 26, using Lemma 5 we conclude that in every step during the zoom-in routine. That implies throughout the zoom-in process, further implying .
In line 24, we subdivide the hypothesis ball into balls and make a constant number of queries to the oracle. Therefore, reduces by a factor of , and hence reduces by at least a constant amount. We amortize this reduction over the calls to the oracle, so that each zoom-in step accounts for a constant reduction in .
If the evolver moved while the algorithm was processing the index , there is a possibility that might move outside of during the zoom-in routine. We take care of this condition in line 16 by initiating the zoom-out process. Since the algorithm possibly increases by a constant amount when switching from zoom-out to zoom-in, we conclude that for every step of the evolver, our algorithm might increase by at most a constant amount as well.
Lemma 13 (Iteration time proportional to Potential).
There exists a constant speed-up factor for TrackByZoom such that .
Proof.
By Lemma 4, the evolver can only increase the potential by during the iteration. By Lemma 6 we observe that the algorithm increases by some amount that is . Since by Lemma 10, the algorithm reduces each to at most , it reduces the overall potential by at most . For a speed-up factor , the algorithm TrackByZoom spends time to do so, implying , further implying , for a sufficiently large constant .
To see why , we follow a similar logic, except to observe that the evolver can decrease the potential by at most as well. By Lemma 10, the algorithm reduces the overall potential by at least , which takes it at least time, implying that , resulting in .
