
O(1)-Round MPC Algorithms for Multi-Dimensional Grid Graph Connectivity, Euclidean MST and DBSCAN

Junhao Gan, The University of Melbourne, Australia; Anthony Wirth, The University of Melbourne, Australia; Zhuo Zhang, The University of Melbourne, Australia
Abstract

In this paper, we investigate three fundamental problems in the Massively Parallel Computation (MPC) model: (i) grid graph connectivity, (ii) approximate Euclidean Minimum Spanning Tree (EMST), and (iii) approximate DBSCAN.

Our first result is an O(1)-round Las Vegas (i.e., succeeding with high probability) MPC algorithm for computing the connected components of a d-dimensional c-penetration grid graph ((d,c)-grid graph), where both d and c are positive integer constants. In such a grid graph, each vertex is a point with integer coordinates in ℤ^d, and an edge can exist only between two distinct vertices at ℓ∞-norm distance at most c. To our knowledge, the best existing way to compute the connected components (CC's) of (d,c)-grid graphs in the MPC model is to run the state-of-the-art MPC CC algorithms designed for general graphs: they achieve O(log log n + log D) [10] and O(log log n + log(1/λ)) [8] rounds, respectively, where D is the diameter and λ is the spectral gap of the graph. Building on our grid graph connectivity technique, our second main result is an O(1)-round Las Vegas MPC algorithm for computing an approximate Euclidean MST. The existing state-of-the-art result on this problem is the O(1)-round MPC algorithm proposed by Andoni et al. [5], which only guarantees an approximation of the overall weight in expectation. In contrast, our algorithm not only guarantees a deterministic overall weight approximation, but also achieves a deterministic edge-wise weight approximation. The latter property is crucial to many applications, such as finding the Bichromatic Closest Pair and Single-Linkage Clustering. Last, but not least, our third main result is an O(1)-round Las Vegas MPC algorithm for computing an approximate DBSCAN clustering in O(1)-dimensional Euclidean space.

Keywords and phrases:
Massively Parallel Computation, Graph Connectivity, Grid Graphs, Euclidean Minimum Spanning Tree, DBSCAN
Copyright and License:
© Junhao Gan, Anthony Wirth, and Zhuo Zhang; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Massively parallel algorithms
Related Version:
Full Version: https://arxiv.org/abs/2501.12044 [23]
Funding:
This work is in part supported by ARC DE190101118 and DP230102908.
Editors:
Sudeepa Roy and Ahmet Kara

1 Introduction

Effective parallel systems for large-scale datasets, such as MapReduce [17], Dryad [27], Spark [37], and Hadoop [34], have received considerable attention in recent years. The Massively Parallel Computation (MPC) model [28, 24, 9] was proposed to provide a solid theoretical abstraction for the modern study of parallelism.

In this paper, we propose several O(1)-round Las Vegas algorithms, each succeeding with high probability, in the MPC model, for solving three fundamental problems: (i) connectivity and minimum spanning forest (MSF) on multi-dimensional grid graphs, (ii) approximate Euclidean Minimum Spanning Tree (EMST), and (iii) approximate DBSCAN.

The MPC Model.

In the MPC model, the input is of size n and evenly distributed among m machines. Each machine is equipped with a strictly sub-linear local memory of size Θ(s), where s = n/m = n^α for some constant α ∈ (0,1) [5, 28, 24]. An MPC algorithm runs in (synchronous) rounds; each round consists of two phases, one after the other: the communication phase first, and then the local computation phase. In the communication phase, each machine can send to and receive from the other machines O(s) data in total. In the computation phase, each machine performs computation on the O(s) data stored in its local memory, and prepares the data to be sent in the communication phase of the next round. The efficiency of an MPC algorithm is measured by the total number of rounds.
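To make the round structure concrete, here is a toy single-process sketch of one MPC round (our own illustration, not part of the paper; `route` and `local_step` are hypothetical names):

```python
# Toy simulation of one MPC round: a communication phase that routes
# messages between machines, then a local computation phase on each
# machine's data. This is an illustrative sketch only.

def mpc_round(states, route, local_step):
    """states: one local state per machine.
    route(states)[i] = list of (dest, msg) sent by machine i.
    local_step(state, inbox) -> new local state."""
    inboxes = [[] for _ in states]
    for outbox in route(states):           # communication phase
        for dest, msg in outbox:
            inboxes[dest].append(msg)
    return [local_step(st, inbox)           # local computation phase
            for st, inbox in zip(states, inboxes)]

# Example: every other machine sends its value to machine 0, which sums them.
states = [1, 2, 3, 4]
route = lambda sts: [[]] + [[(0, v)] for v in sts[1:]]
local_step = lambda st, inbox: st + sum(inbox)
states = mpc_round(states, route, local_step)   # states[0] == 10
```

In the real model each machine may, of course, send and receive only O(s) words per round; the sketch ignores that bookkeeping.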

Main Result 1: Connectivity on Grid Graphs

We consider a class of graphs called d-dimensional c-penetration grid graphs ((d,c)-grid graphs, for short). Specifically, a graph G=(V,E) is a (d,c)-grid graph if it satisfies: (i) the dimensionality d ≥ 2 is a constant and c ≥ 1 is an integer; (ii) each node in V is a d-dimensional point with integer coordinates in ℤ^d; (iii) for each edge (u,v) ∈ E, 0 < ‖u − v‖∞ ≤ c holds, i.e., u and v are distinct vertices and their coordinates differ by at most c on every dimension. It can be verified that, when c is a constant, by definition, each node in a (d,c)-grid graph can have at most (2c+1)^d − 1 ∈ O(1) neighbours; hence, a (d,c)-grid graph is sparse, i.e., |E| ∈ O(|V|). In particular, for c = 1, a (d,1)-grid graph is the common well-known d-dimensional grid graph.
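As a quick illustration (our own sketch, not the paper's code), the edge condition of a (d,c)-grid graph can be checked directly from the coordinates:

```python
def linf(u, v):
    """l_infinity distance between two integer points."""
    return max(abs(a - b) for a, b in zip(u, v))

def may_have_edge(u, v, c):
    """An edge (u, v) may exist in a (d, c)-grid graph iff
    0 < ||u - v||_inf <= c, i.e., u != v and every coordinate
    differs by at most c."""
    return u != v and linf(u, v) <= c

# In d = 2 with c = 2, each vertex has at most (2c+1)^d - 1 = 24 neighbours.
candidates = [(i, j) for i in range(-2, 3) for j in range(-2, 3)
              if may_have_edge((0, 0), (i, j), 2)]
assert len(candidates) == 24
```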

In real-world applications, grid graphs are often useful for modeling 3D meshes and terrains [11]. More importantly, grid graphs are extremely useful in many algorithmic designs for solving various computational geometry problems, such as Approximate Nearest Neighbour Search [7], Euclidean Minimum Spanning Tree [26], and DBSCAN [4]. Therefore, the study of algorithms on grid graphs has become an important topic in the research community. Our first result is the following theorem:

Theorem 1 ([*]).

Given a (d,c)-grid graph G=(V,E) with c ∈ O(1), there exists a Las Vegas MPC algorithm with local memory Θ(s) = Θ(|V|^α) per machine, for an arbitrary constant α ∈ (max{(d+1)/(d+2), 4/5}, 1), which computes all the connected components of G in O(1) rounds with high probability. Specifically, the algorithm assigns to each node the ID of the connected component to which the node belongs.

Computing connected components (CC’s) of sparse graphs in the MPC model is notoriously challenging. In particular, the well accepted 2-CYCLE Conjecture [36, 32] says that:

Every MPC algorithm, using n^{1−Ω(1)} local memory per machine and O(n) space in total, requires Ω(log n) rounds to correctly solve, with high probability, the 1-vs-2-Cycle problem, which asks to distinguish whether the input graph consists of just one cycle with n nodes or two cycles with n/2 nodes each.

Theoretically speaking, disproving the 2-CYCLE Conjecture is as "easy" as finding just a single constant α < 1 and an algorithm that solves the 1-vs-2-Cycle problem with O(n^α) per-machine local memory and O(n) total space in o(log n) rounds. However, according to recent hardness results, the conjecture is robust and hard to refute [32].

Even worse, since the input instance of the 1-vs-2-Cycle problem is so simple, it eliminates the hope of computing CC's in o(log n) rounds for many classes of "simple" graphs, such as path graphs, trees and planar graphs, which are often simpler cases for many fundamental problems. As a result, it remains very challenging to find a large class of graphs on which the CC problem can be solved in o(log n) rounds.

State-of-the-art works [6, 8, 10, 13] consider general sparse graphs parameterised by the diameter D and the spectral gap λ. Specifically, Andoni et al. [6] propose an O(log D · log log n)-round randomized algorithm, where n is the number of nodes. Assadi et al. [8] give an algorithm with O(log log n + log(1/λ)) rounds. Recently, Behnezhad et al. [10] improved the bound of Andoni et al. to O(log D + log log n) with a randomized algorithm, while Coy and Czumaj [13] propose a deterministic MPC algorithm achieving the same round bound. However, all these state-of-the-art general algorithms still require O(log n) rounds for solving the CC problem on (d,c)-grid graphs in the worst case, as the diameter D of a (d,c)-grid graph can be as large as n and the spectral gap, λ, can be as small as 1/n.

Therefore, our Theorem 1 is not only an immediate improvement over these state-of-the-art results for the CC problem on (d,c)-grid graphs, but, interestingly, also identifies a large class of sparse graphs, namely (d,c)-grid graphs with c ∈ O(1), on which the CC problem can be solved in O(1) rounds.

We note that it is the geometric property of (d,c)-grid graphs that makes them admit O(1)-round CC algorithms. As we shall discuss in Section 2, our key technique is the so-called Orthogonal Vertex Separator for multi-dimensional grid graphs proposed by Gan and Tao [22], originally introduced to efficiently compute CC's on (d,1)-grid graphs in the External Memory model [3]. However, in the MPC model, it appears difficult to compute such separators in O(1) rounds. To overcome this, we relax the Orthogonal Vertex Separator to a "weaker yet good-enough" version for our purpose of designing O(1)-round MPC algorithms, and extend this technique to (d,c)-grid graphs, which is crucial to our O(1)-round EMST and DBSCAN algorithms. (A similar technique of using small-size geometric separators, though not the same as ours, is applied to computing Contour Trees on terrains [33] in O(1) rounds in the MPC model. Moreover, the idea of a crucial technique in our separator computation, ε-approximation, stems from the KD-tree construction in the MPC model in [2].)

Moreover, with standard modifications, our CC algorithm can also compute the Minimum Spanning Forests (MSF) on (d,c)-grid graphs:

Corollary 2 ([*]).

Given an edge-weighted (d,c)-grid graph G=(V,E) with c ∈ O(1), there exists a Las Vegas MPC algorithm with local memory Θ(s) = Θ(|V|^α) per machine, for an arbitrary constant α ∈ (max{(d+1)/(d+2), 4/5}, 1), which computes a Minimum Spanning Forest of G in O(1) rounds with high probability.

Main Result 2: Approximate EMST

By enhancing our MSF technique for (d,c)-grid graphs, our second main result is a new O(1)-round Las Vegas MPC algorithm for computing an approximate Euclidean Minimum Spanning Tree (EMST). Specifically, the problem is defined as follows. Given a set, P, of n points with integer coordinates in d-dimensional space ℝ^d, where d is a constant and the coordinate values are in [0,Δ] with Δ = n^{O(1)}, let G = (V,E) be an implicit edge-weighted complete graph on P such that V = P and the weight of each edge (u,v) ∈ E is the Euclidean distance between u and v. The goal of the EMST problem is to compute an MST of the implicit complete graph, G, of P. The EMST problem is one of the most fundamental and important problems in computational geometry and has sparked significant attention in machine learning [18, 25, 30], data clustering [15, 14] and optimization [16, 31]. The existing state-of-the-art result is an O(1)-round Monte Carlo MPC algorithm proposed by Andoni et al. [5] for computing an approximate EMST. Their algorithm possesses the following properties: (i) it achieves O(1) rounds in the worst case and works for every constant α ∈ (0,1); and (ii) in expectation, it returns a spanning tree T of the implicit graph G whose total edge weight is at most (1+ρ) times the total edge weight of the exact EMST T*, where the approximation factor ρ > 0 is a constant. In comparison, we have the following theorem:

Theorem 3 ([*]).

Given a set P of n points in d-dimensional space ℝ^d with integer coordinate values in [0,Δ] for Δ = n^{O(1)}, and a constant approximation factor ρ > 0, there exists a Las Vegas MPC algorithm with local memory Θ(s) = Θ(n^α) per machine, for an arbitrary constant α ∈ (max{(d+1)/(d+2), 4/5}, 1), which computes a spanning tree T of the implicit graph G of P in O(1) rounds with high probability, and T deterministically satisfies the following:

  • Overall Weight Approximation: the total edge weight of T is at most (1+ρ) times the total edge weight of the exact EMST T*;

  • Edge-Wise Weight Approximation: for every edge (u,v) in the exact EMST T* with weight w(u,v), there must exist a path from u to v in T such that the maximum weight of the edges on this path is at most (1+ρ)·w(u,v).

Our algorithm improves on Andoni et al.'s approximate EMST algorithm in two ways. First, our algorithm computes a feasible ρ-approximate EMST deterministically, while theirs can only guarantee this in expectation. Second, and most importantly, the approximate EMST computed by our algorithm further guarantees the edge-wise approximation, which Andoni et al.'s algorithm cannot achieve. We note that the edge-wise approximation guarantee is crucial to solving many downstream algorithmic problems, such as ρ-approximate bichromatic closest pair [1], Single-Linkage Clustering [38] and other related problems [35, 19].

Main Result 3: Approximate DBSCAN

Our third main result is an O(1)-round MPC algorithm for computing a ρ-approximate DBSCAN clustering as defined in [20]. To the best of our knowledge, this is the first O(1)-round algorithm in the MPC model for solving the DBSCAN problem.

Theorem 4 ([*]).

Given a set P of n points in d-dimensional space ℝ^d, where d is a constant, two DBSCAN parameters, ε and minPts, and a constant approximation factor ρ > 0, there exists a Las Vegas MPC algorithm with local memory Θ(s) = Θ(n^α) per machine, for an arbitrary constant α ∈ (max{(d+1)/(d+2), 4/5}, 1), which computes a ρ-approximate DBSCAN clustering of P in O(1) rounds with high probability.

Derandomization

As we shall see shortly, the randomness of all our Las Vegas algorithms comes only from the randomized computation of a (1/r)-approximation for some r. Indeed, it is known that such approximations can be computed in O(1) MPC rounds deterministically [2]. Therefore, all our algorithms can be derandomized, at the cost of slightly shrinking the feasible value range of the local memory parameter, α, to (max{(d+1)/(d+2), 6/7}, 1).

Full Paper

The proofs of all the theorems, lemmas and claims marked with [*] can be found in the full version of this paper, on arXiv [23].

2 Grid Graph Connectivity and MSF

In this section, we introduce our O(1)-round MPC algorithms for computing CC's and MSF's of (d,c)-grid graphs with c ∈ O(1), succeeding with high probability.

Implicit Graphs and Implicit (𝒅,𝒄)-Grid Graphs.

For ease of explaining our techniques for solving the EMST problem in the next section, we first introduce the notion of implicit graphs, denoted by G = (V, ϕ).

Definition 5 (Implicit Graph).

An implicit graph G = (V, ϕ) is a graph consisting of a node set V and an edge formation rule, ϕ, where:

  • each node u in V is associated with O(1) words of information, e.g., O(1)-dimensional coordinates;

  • the edge formation rule, ϕ, is a function (of size bounded by O(1) words) on distinct node pairs: for any two distinct nodes u, v ∈ V, ϕ(u,v) returns, based on the O(1)-word information associated with u and v: (i) whether edge (u,v) exists in G, and (ii) the weight of (u,v), if the edge exists and the weight is well defined.

By the above definition, the size of every implicit graph is bounded by O(|V|). An implicit graph can be converted to an explicit graph by materialising all the edges with the edge formation rule ϕ. If the corresponding explicit graph of an implicit graph G = (V, ϕ) is a (d,c)-grid graph, we say G = (V, ϕ) is an implicit (d,c)-grid graph. For example, given a set P of n points in ℝ^d with integer coordinates in [0,Δ], the implicit complete Euclidean graph G_comp of P can be considered as an implicit (d,Δ)-grid graph G = (P, ϕ), where ϕ(u,v) returns dist(u,v), the Euclidean distance between u and v, as the edge weight. Another example is the unit-disk graph [12] in ℝ^d under the ℓ∞-norm, which can essentially be regarded as the implicit (d,1)-grid graph with the edge formation rule: there exists an edge between two vertices if and only if their ℓ∞-norm distance is at most 1. However, as we will see in our EMST algorithm, the notion of implicit (d,c)-grid graphs is more general, owing to the flexibility in specifying the edge formation rule. For example, in addition to the coordinates, each node can also carry the ID of the connected component that it belongs to, and the edge formation rule can further impose that there be no edge between two nodes having the same connected component ID.
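To make the definition concrete, the two example edge formation rules above can be written as small functions (a sketch; the function names are ours):

```python
import math

def euclidean_rule(u, v):
    """Rule of the implicit complete Euclidean graph G_comp, viewed as a
    (d, Delta)-grid graph: every distinct pair is an edge, weighted by
    the Euclidean distance dist(u, v)."""
    return True, math.dist(u, v)

def unit_disk_rule(u, v):
    """Rule of the unit-disk graph under the l_infinity norm, i.e., an
    implicit (d, 1)-grid graph: edge iff the l_infinity distance is <= 1."""
    exists = max(abs(a - b) for a, b in zip(u, v)) <= 1
    return exists, None
```

Each rule reads only the O(1)-word information (here, the coordinates) of the two nodes, as the definition requires.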

2.1 Pseudo 𝒔-Separator and Its Existence

Figure 1: A (d,c)-grid graph G with d=2 and c=2 and a separator. The blue regions are the target c-dividers: the vertices in them are separator vertices. Removing all the separator vertices disconnects G into 5 sub-graphs.

In this subsection, we extend the notion of Orthogonal Vertex Separator [22] to a weaker yet good-enough version for our purpose of solving the CC and MSF problems on implicit (d,c)-grid graphs in the MPC model. We call this new notion a Pseudo s-Separator; recall that Θ(s) is the size of the local memory, per machine, in the MPC model:

Definition 6 (Pseudo s-Separator).

Consider an implicit (d,c)-grid graph G = (V, ϕ) and a parameter s < |V|. A set S ⊆ V is a pseudo s-separator of G if it satisfies:

  • |S| ∈ O(c · (|V|/s^{1/d}) · log s);

  • removing the vertices in S disconnects G into h ∈ O(|V|/s) sub-graphs G_i = (V_i, ϕ), i ∈ [1,h], such that V_i ∩ V_j = ∅ for all i ≠ j, V_i ⊆ V and |V_i| ∈ O(s).

Figure 1 shows an example. Next, we show that if 1 ≤ c ≤ s^{1/d}/3 and |V| > s, a pseudo s-separator must exist in an implicit (d,c)-grid graph G = (V, ϕ). While our proof mimics that in [22], it requires some new non-trivial ideas. We first introduce some useful notation.

Minimum Bounding Space of 𝑮.

Consider an implicit (d,c)-grid graph G = (V, ϕ); let [x_min(i), x_max(i)] be the value range of V on the i-th dimension (for i ∈ {1, 2, …, d}), where x_min(i) and x_max(i) are the smallest and largest coordinate values of the points in V on dimension i, respectively. The minimum bounding space of G is defined as mbr(V) = [x_min(1), x_max(1)] × [x_min(2), x_max(2)] × ⋯ × [x_min(d), x_max(d)]. Moreover, we denote the projection of mbr(V) on the i-th dimension by mbr(V)[i] = [x_min(i), x_max(i)].

𝒄-Divider.

Consider the minimum bounding space mbr(V); a c-divider on the i-th dimension at coordinate x, denoted by π(i,x), is an axis-parallel rectangular range whose projection on dimension i is [x, x+c−1] and whose projection on each dimension j ≠ i is mbr(V)[j]. With a c-divider π(i,x), the vertex set V is partitioned into three parts: (i) the left part V_left = {u ∈ V : u[i] ≤ x−1}, (ii) the right part V_right = {u ∈ V : u[i] ≥ x+c}, and (iii) the boundary part S_π = V ∩ π(i,x), where u[i] is the i-th coordinate of a vertex u.

We say that π(i,x) separates G into the two sub-graphs induced by the left and right parts, V_left and V_right, respectively, and all the vertices in S_π are called separator vertices. Observe that for any u ∈ V_left and any v ∈ V_right, |u[i] − v[i]| ≥ c+1, implying ‖u − v‖∞ ≥ c+1. As a result, there cannot exist any edge between such u and v. With this notation, we show the following Binary Partition Lemma.

Lemma 7 (Binary Partition Lemma).

Consider an implicit (d,c)-grid graph G = (V, ϕ) with 1 ≤ c ≤ s^{1/d}/3 and |V| > s; there exists a c-divider π(i,x) partitioning V into {S_π, {V_left, V_right}} such that:

  • |V_left| ≥ |V|/(4(d+1)) and |V_right| ≥ |V|/(4(d+1)), and

  • |S_π| ≤ 2c(1+d)^{1/d} · |V|^{1−1/d}.

Proof.

We prove this lemma with a constructive algorithm. Given the vertex set, V, our algorithm first finds two integers, y_j and z_j, for each dimension j ∈ [1,d]. Specifically, y_j is the largest integer such that the number of vertices in V located to the "left" of y_j is at most |V|/(2(1+d)), while z_j is the smallest integer such that the number of vertices in V located to the "right" of z_j is at most |V|/(2(1+d)). Formally, y_j = max{x : |{v ∈ V : v[j] < x}| ≤ |V|/(2(1+d))} and z_j = min{x : |{v ∈ V : v[j] > x}| ≤ |V|/(2(1+d))}, where v[j] is the coordinate of v on dimension j.

Consider the hyper-box in ℝ^d whose projection on dimension j is [y_j, z_j], for j ∈ [1,d]. By the definition of y_j and z_j, this box contains in total at least |V|·(1 − 2d/(2(d+1))) = |V|/(d+1) vertices of V. Since the coordinates of the points are all integers, the volume of the box must be at least |V|/(d+1); that is, ∏_{j=1}^{d} (z_j − y_j + 1) ≥ |V|/(d+1). Thus, there must exist some dimension i such that z_i − y_i + 1 ≥ (|V|/(1+d))^{1/d}. We fix this dimension, i. Let S_{π(i,x)} be the set of all vertices in V falling in the c-divider π(i,x). Within dimension i, each vertex in V can be contained in at most c consecutive c-dividers. As a result, we have ∑_{x ∈ [y_i, z_i − c]} |S_{π(i,x)}| ≤ c|V|. Since there are z_i − c − y_i + 1 terms in the summation, there exists an integer x ∈ [y_i, z_i − c] with

|S_{π(i,x)}| ≤ c|V| / (z_i − y_i + 1 − c) ≤ c|V| / ((|V|/(1+d))^{1/d} − c) ≤ 2c(1+d)^{1/d} · |V|^{1−1/d},

where the second inequality comes from the bound on z_i − y_i + 1, and the third inequality is due to c ≤ s^{1/d}/3 ≤ (1/2)·(|V|/(1+d))^{1/d}, as |V| > s. Such a π(i,x) is the desired c-divider.

Next, we bound the size of V_left with respect to π(i,x); bounding |V_right| is analogous. Since x ≥ y_i, there are only two possible cases:

  • If x > y_i, then by the definition of y_i, the number of vertices in V to the left of π is at least |V| · 1/(2d+2) ≥ |V|/(4d+4).

  • If x = y_i, the number of vertices in V to the left of π is at least |V|/(2d+2) − |S_{π(i,x)}| ≥ |V| · (1/(2d+2) − 2c(d+1)^{1/d}|V|^{−1/d}) ≥ |V|/(4d+4).

Either way, |V_left| ≥ |V|/(4(d+1)) holds; Lemma 7 thus follows.
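The constructive proof translates into a small single-machine sketch (ours; it assumes integer coordinates and that the preconditions of Lemma 7 hold, so that the scanned range of candidate dividers is non-empty):

```python
def find_c_divider(V, c):
    """Sketch of the proof of Lemma 7: compute y_j, z_j per dimension,
    fix a dimension i with the widest box side z_i - y_i + 1, then pick
    x in [y_i, z_i - c] minimising the boundary size |S_pi(i, x)|."""
    n, d = len(V), len(V[0])
    t = n // (2 * (1 + d))                 # threshold |V| / (2(1+d))
    y, z = [], []
    for j in range(d):
        coords = sorted(v[j] for v in V)
        y.append(coords[t])                # <= t points strictly to the left
        z.append(coords[n - 1 - t])        # <= t points strictly to the right
    i = max(range(d), key=lambda j: z[j] - y[j] + 1)
    x = min(range(y[i], z[i] - c + 1),
            key=lambda x0: sum(1 for v in V if x0 <= v[i] <= x0 + c - 1))
    left  = [v for v in V if v[i] <= x - 1]
    sep   = [v for v in V if x <= v[i] <= x + c - 1]
    right = [v for v in V if v[i] >= x + c]
    return (i, x), left, sep, right
```

The lemma only needs *some* x in the range with a small boundary; scanning for the minimiser, as above, is one simple way to find it.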

The Multi-Partitioning Algorithm.

Given an implicit (d,c)-grid graph G = (V, ϕ) with 1 ≤ c ≤ s^{1/d}/3 and |V| > s, initialize S ← ∅ and perform the following steps:

  • apply Lemma 7 on G to obtain {S_π, {V_left, V_right}};

  • S ← S ∪ S_π;

  • if |V_left| > s, recursively apply Lemma 7 on G_left = (V_left, ϕ);

  • if |V_right| > s, recursively apply Lemma 7 on G_right = (V_right, ϕ).
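A compact single-machine sketch of this recursion (ours; for brevity, a plain median cut along the widest dimension stands in for the c-divider guaranteed by Lemma 7):

```python
def multi_partition(V, s, c):
    """Recursively remove c-divider boundaries until every remaining part
    has at most s vertices. Returns (S, parts): the collected separator
    vertices and the vertex sets of the induced sub-graphs."""
    if len(V) <= s:
        return [], [V]
    d = len(V[0])
    # stand-in for Lemma 7: cut at the median along the widest dimension
    i = max(range(d), key=lambda j: max(v[j] for v in V) - min(v[j] for v in V))
    x = sorted(v[i] for v in V)[len(V) // 2]
    sep   = [v for v in V if x <= v[i] <= x + c - 1]
    left  = [v for v in V if v[i] <= x - 1]
    right = [v for v in V if v[i] >= x + c]
    S, parts = sep, []
    for side in (left, right):
        S_side, parts_side = multi_partition(side, s, c)
        S += S_side
        parts += parts_side
    return S, parts
```

Removing S leaves parts that pairwise share no edge, since any two vertices on opposite sides of a c-divider differ by at least c+1 on dimension i.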

Lemma 8 ([*]).

The vertex set S obtained by the above Multi-Partitioning Algorithm is a pseudo s-separator of the implicit (d,c)-grid graph G = (V, ϕ), where 1 ≤ c ≤ s^{1/d}/3 and |V| > s.

A construction algorithm proves existence, hence:

Theorem 9.

For any implicit (d,c)-grid graph G = (V, ϕ) with 1 ≤ c ≤ s^{1/d}/3 and |V| > s, a pseudo s-separator S of G must exist.

Remark.

While our Pseudo s-Separator (PsSep) looks similar to the notion of the Orthogonal Vertex Separator (OVSep) proposed by Gan and Tao [22], we emphasize that some crucial differences between them are worth noting. First, the OVSep was originally proposed for solving problems on (d,1)-grid graphs (in the External Memory model). Our PsSep supports implicit (d,c)-grid graphs for parameter c up to the non-constant value s^{1/d}/3 in the MPC model. This extension, discussed in Section 3, plays a significant role in our O(1)-round algorithm for computing approximate EMST's. Second, and most importantly, it is still unclear how (and appears difficult) to compute the OVSep in O(1) MPC rounds. In contrast, our PsSep is designed to overcome this technical challenge and can be computed in O(1) MPC rounds. However, as discussed in the next subsection, because of the use of ε-approximation in the construction algorithm of PsSep, the separator size at each recursion level no longer decreases geometrically, leading to a logarithmic blow-up in the overall size of PsSep. Despite this blow-up, by setting the parameters carefully, we show that our PsSep still fits in the local memory of one machine, and thus it remains good enough for the purpose of computing CC's on implicit (d,c)-grid graphs and solving the Approximate EMST and Approximate DBSCAN problems in O(1) rounds in the MPC model.

2.2 𝑶(𝟏)-Round Pseudo 𝒔-Separator Algorithm

In this subsection, we show an O(1)-round MPC algorithm for computing a pseudo s-separator of an implicit (d,c)-grid graph with 1 ≤ c ≤ s^{1/d}/3. The Multi-Partitioning Algorithm proves the existence of a pseudo s-separator, but a straightforward simulation of this algorithm in the MPC model is insufficient to achieve O(1) rounds. To overcome this, we resort to the technique of ε-approximation [29, 2].

2.2.1 Preliminaries

Range Space and 𝜺-Approximation.

A range space Σ is a pair (X, ℛ), where X is a ground set and ℛ is a family of subsets of X. The elements of X are points and the elements of ℛ are ranges. For Y ⊆ X, the projection of ℛ on Y is ℛ|_Y = {q ∩ Y : q ∈ ℛ}. Given a range space Σ = (X, ℛ) and ε ∈ [0,1], a subset X′ ⊆ X is called an ε-approximation of Σ if, for every range q ∈ ℛ, we have | |X′ ∩ q|/|X′| − |X ∩ q|/|X| | ≤ ε.

Fact 1 ([29, 2]).

Consider the range space (V, ℛ), where the ground set V is a set of n points in ℝ^d and ℛ = {V ∩ ψ : ψ is an axis-parallel rectangular range in ℝ^d}. A random sample Ṽ of V with size |Ṽ| ∈ Θ(r² log n) is a (1/r)-approximation of (V, ℛ) with high probability.

Throughout this paper, we consider axis-parallel rectangular ranges only. For simplicity, we may simply say that Ṽ is a (1/r)-approximation of V.
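A quick numerical illustration of Fact 1 (our own demo; the constants, seed, and query box are arbitrary): a uniform random sample of size roughly Θ(r² log n) approximates the fraction of points inside an axis-parallel box to within 1/r, with high probability.

```python
import random

random.seed(0)                               # reproducible demo
n, r = 100_000, 10
V = [(random.randint(0, 999), random.randint(0, 999)) for _ in range(n)]
S = random.sample(V, 4 * r * r * 17)         # ~ Theta(r^2 log n) samples

def frac_in_box(pts, lo, hi):
    """Fraction of pts inside the axis-parallel box [lo, hi]."""
    inside = sum(all(l <= x <= h for x, l, h in zip(p, lo, hi)) for p in pts)
    return inside / len(pts)

err = abs(frac_in_box(V, (100, 100), (499, 599))
          - frac_in_box(S, (100, 100), (499, 599)))
assert err <= 1 / r                          # the 1/r guarantee, w.h.p.
```

In the algorithm, such a sample is gathered in O(1) MPC rounds and then used in local memory in place of the full vertex set.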

2.2.2 Our Pseudo 𝒔-Separator Algorithm

Our MPC algorithm comprises several super rounds, each performing:

  • obtain an ε-approximation, Ṽ, of V, tuning ε so that Ṽ fits in the local memory of one machine;

  • perform roughly s^{O(1)} "good enough" binary partitions in local memory.

Our algorithm then recurs simultaneously on each of the resulting induced sub-graphs, if applicable, in the next super round. As we will show, each super round decreases the size of the graph by a factor of s^{Θ(1)}, so there can be at most O(log_{s^{Θ(1)}}(|V|/s)) = O(1) super rounds. Moreover, by Fact 1, an ε-approximation can be obtained by sampling, which can be achieved in O(1) MPC rounds; detailed implementations can be found in the appendix of our full paper [23]. Therefore, our algorithm runs in O(1) MPC rounds.

Define r = 2s^{1/d}. To ensure that a (1/r)-approximation Ṽ of V fits in the local memory of size O(s), without loss of generality, in the following we assume that the dimensionality, d, is at least 3. Otherwise, for the case d = 2, we can "lift" the dimensionality to 3 by adding a dummy dimension to all the points in V; the bounds derived for d = 3 then apply.

The challenge lies in how to find a good-enough c-divider π (compared to that in Lemma 7) to separate V in local memory, with access to a (1/r)-approximation Ṽ of V only.

Lemma 10 ([*]).

Given V with |V| > s and d ≥ 3, with access only to a (1/r)-approximation Ṽ of V, where r = 2s^{1/d} and |Ṽ| ∈ O(s), we can find a c-divider π in mbr(V) which satisfies:

  • |V_left| ≥ |V|/(8(d+1)) and |V_right| ≥ |V|/(8(d+1)),

  • |S_π| ≤ 4c(1+d)^{1/d} · |V|/s^{1/d}.

Lemma 10 states that, with access only to a (1/r)-approximation of V, we can compute a good-enough c-divider π: it separates V into two parts, each of size at least |V|/(8(d+1)), while the size of the separator S_π is still bounded by O(c|V|/s^{1/d}), asymptotically the same as in Lemma 7.

By a more careful analysis, interestingly, with a sample set Ṽ of size Θ(s), one can show that Ṽ is in fact, with high probability, a (1/(3r^{1+l}))-approximation, for some constant l ∈ (0, 1/2). The (1/(3r^{1+l}))-approximation property of Ṽ is sufficient to invoke Lemma 10 multiple times, i.e., to make multiple partitions, with Ṽ in local memory. We formalise this with the following crucial lemma.

Lemma 11 ([*]).

Consider a (1/(3r^{1+l}))-approximation Ṽ of the range space Σ = (V, ℛ), where |V| > r^l·s, l = min{1/3, log_r(|V|/s)}, and ℛ is the set of all possible axis-parallel rectangular ranges in mbr(V). Let ψ be an arbitrary range in ℛ. Define V₁ = V ∩ ψ and Ṽ₁ = Ṽ ∩ ψ.

If |Ṽ₁| ≥ 2|Ṽ|/r^l, then we have: (i) Ṽ₁ is a (1/r)-approximation of Σ₁ = (V₁, ℛ₁), where ℛ₁ = {q ∩ V₁ : q ∈ ℛ}, and (ii) |V₁| > s. Therefore, Lemma 10 is applicable to V₁ and Ṽ₁.

Our MPC Algorithm for Computing Pseudo 𝒔-Separator
One Super Round Implementation.

Based on Lemma 11, given an implicit (d,c)-grid graph G = (V, ϕ), where d ≥ 3, |V| > r^l·s for l = min{1/3, log_r(|V|/s)}, 1 ≤ c ≤ s^{1/d}/3 and r = 2s^{1/d}, the implementation of one super round is shown in Algorithm 1.

Algorithm 1 One Super Round Implementation for Pseudo s-Separator.
The Overall Pseudo 𝒔-Separator Algorithm.

Given an implicit (d,c)-grid graph G = (V, ϕ), where d ≥ 3, |V| > r^l·s for l = min{1/3, log_r(|V|/s)}, 1 ≤ c ≤ s^{1/d}/3 and r = 2s^{1/d}:

  • perform a new super round simultaneously on each induced sub-graph G′ = (V′, ϕ) with |V′| > r^{l′}·s, where l′ = min{1/3, log_r(|V′|/s)};

  • terminate when no more super rounds can be performed, at which point all the induced sub-graphs have no more than s vertices.

Lemma 12 ([*]).

After a super round, the input implicit (d,c)-grid graph G = (V, ϕ) is separated into h disconnected induced sub-graphs G_i = (V_i, ϕ), for i ∈ [1,h], such that h ∈ O(r^l) and |V_i| ∈ O(|V|/r^l) for all i ∈ [1,h].

Algorithm 2 Connected Component Algorithm.
Lemma 13 ([*]).

When our Overall Pseudo s-Separator algorithm terminates, let S be the set of all vertices in V falling in target c-dividers in the applications of Lemma 10. The set S is a pseudo s-separator of the input implicit (d,c)-grid graph G.

Lemma 14 ([*]).

The total number of super rounds of our Overall Pseudo s-Separator algorithm is bounded by O(d/α), and the total number of MPC rounds is bounded by O(1).

Putting the above lemmas together, we have:

Theorem 15.

Given an implicit (d,c)-grid graph G = (V, ϕ) with d ≥ 3 and 1 ≤ c ≤ s^{1/d}/3, there exists a Las Vegas MPC algorithm with local memory Θ(s) = Θ(|V|^α) per machine, for every constant α ∈ (0,1), which computes a pseudo s-separator of G in O(1) rounds with high probability.

2.3 Grid Graph Connectivity Algorithms

Next, we introduce our MPC algorithms for computing CC's and MSF's of implicit (d,c)-grid graphs with 1 ≤ c ≤ s^{1/d}/3.

Bounding the Feasible Range for 𝜶.

Recall that Theorem 15 shows that a pseudo s-separator S of G can be computed in O(1) rounds with high probability, and that this algorithm works for every constant α ∈ (0,1). However, as we shall see shortly, both our CC and MSF algorithms require the pseudo s-separator, S, to fit in the local memory of size O(s) of a single machine; that is, they require |S| ≤ s. From the proofs of Lemma 10 and Lemma 13, we know that |S| ≤ 4c · (|V|/s^{1/d}) · log s. Recalling that s = |V|^α, our algorithm works when α ≥ (d+1)/(d+2) for d ≥ 3. Combining this with the case d = 2, a feasible range for α is (max{(d+1)/(d+2), 4/5}, 1).

Connectivity and Minimum Spanning Forest.

Consider a pseudo s-separator S of G; let G_i = (V_i, ϕ), for i ∈ [1,h], be the sub-graphs of G induced by removing S. By Definition 6, we have h ∈ O(|V|/s) and |V_i| ∈ O(s). For each G_i, denote the projection of mbr(V_i) on dimension j by [x_i(j), y_i(j)], for i ∈ [1,h] and j ∈ [1,d]. Consider the hyper-box H_i whose projection on each dimension j is [x_i(j) − c, y_i(j) + c]; we say that H_i is the extended region of G_i. Moreover, we define the extended graph of G_i, denoted by G_i⁺ = (V_i⁺, ϕ), as the graph induced by the vertices in V_i⁺ = V_i ∪ S_i, where S_i = S ∩ H_i is the set of separator vertices falling in the extended region of G_i. Figure 1 shows an example of G_i and G_i⁺. Clearly, by choosing a constant α ∈ (max{(d+1)/(d+2), 4/5}, 1), we have |S| ∈ O(s) and hence |V_i⁺| ∈ O(s). Therefore, G_i⁺ fits in the local memory of one machine. Without loss of generality, we assume that G_i⁺ is stored completely on a machine M_i.

By incorporating the techniques of the existing External Memory CC and MSF algorithms for grid graphs [22] (the detailed MPC implementations are shown in Algorithm 2 and Algorithm 3, respectively, in Appendix A), we have the following theorem:

Theorem 16 ([*]).

Given an implicit (edge-weighted) (d,c)-grid graph G = (V, ϕ) with 1 ≤ c ≤ s^{1/d}/3, there exists a Las Vegas MPC algorithm with local memory Θ(s) = Θ(|V|^α) per machine, for an arbitrary constant α ∈ (max{(d+1)/(d+2), 4/5}, 1), which computes all the connected components (resp., a minimum spanning forest) of G in O(1) rounds with high probability.

3 𝑶(𝟏)-Round Approximate EMST

An Overview of Our 𝝆-Approximate EMST Algorithm.

The basic idea of our ρ-approximate EMST algorithm is to mimic Kruskal's algorithm. Specifically, our algorithm first uses "short" edges to connect the connected components found so far, and then gradually considers "longer" edges. It runs in super rounds, each of which takes O(1) MPC rounds. In the i-th super round, our algorithm constructs a representative implicit (d,c)-grid graph G_i = (V_i, ϕ_i) which considers only those edges with weight at most l_i = c^i · (ρ/√d)^{i−1}, where c = s^{1/d}/3. Next, it invokes our O(1)-round MSF algorithm on G_i, and then starts a new super round, repeating this process until a spanning tree T is obtained.

The Algorithm Steps.

Consider a set P of n points in d-dimensional space ℝ^d with coordinate values in [0,Δ]. Let V₁ = P and set c = s^{1/d}/3. Our MPC ρ-approximate EMST algorithm constructs a series of implicit (d,c)-grid graphs, denoted by G_i = (V_i, ϕ_i), for i ∈ {1, 2, …}.

  • Augmented Node Information: each vertex u in V_i is associated with two pieces of O(1)-size information: (i) the connected component id of u, denoted by u.id, and (ii) the original point p ∈ P which u currently represents, denoted by u.pt.

  • Edge Formation Rule: for any two distinct nodes u, v ∈ V_i, in building E_i, the edge (u,v) exists if and only if the Euclidean distance dist(u,v) ≤ l_i, where l_i = c^i · (ρ/√d)^{i−1}; if it exists, its weight w(u,v) is 0 if u.id = v.id, and dist(u,v) otherwise.

Initially, V_1 = P, and for each node u ∈ V_1, u.id is the id of the point and u.pt is the point itself. Our ρ-approximate EMST algorithm runs in super rounds; in the i-th super round (for i = 1, 2, …), it works as follows:

  • Solve Stage: MSF Computation on Gi=(Vi,i):

    • invoke our O(1)-round MPC algorithm (in Theorem 16) to compute an MSF of G_i. In particular, let u.cid be a temporary variable denoting the current connected component id of u ∈ V_i in G_i. Observe the difference between u.id and u.cid: the former records the connected component id of u from the previous super round, and is used to determine the edge weights in E_i in the current round;

    • let 𝒯i be the set of edges in the MSF of Gi; remove edges with zero weight from 𝒯i;

    • for each edge (x,y) ∈ 𝒯_i, replace it with the edge (x.pt, y.pt) between the original points that x and y represent;

    • keep track of the total number of edges added to T = ∪_i 𝒯_i; once n − 1 edges have been added in total, stop and return T as a ρ-approximate EMST of P;

  • Sketch Stage: Computing Vi+1 from Vi:

    • initialize V_{i+1} ← ∅; impose a grid on V_i with cell side length l_i · ρ/√d;

    • for each non-empty grid cell g, add the bottom-left corner point t of g to V_{i+1}; moreover, set t.pt ← u.pt and t.id ← u.cid, where u is an arbitrary node in V_i ∩ g;
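The super-round structure above can be illustrated with a sequential, single-machine sketch (not the MPC implementation). The helper `approx_emst` is hypothetical: it replaces the distributed MSF computation with plain Kruskal restricted to edges of length at most l_i, and tracks components with one union-find over the original points.

```python
import math
from itertools import combinations

def approx_emst(P, c, rho):
    """Sequential sketch of the super rounds; P is a list of d-dim tuples."""
    d, n = len(P[0]), len(P)
    parent = list(range(n))                 # union-find over the original points
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    nodes = [(p, j) for j, p in enumerate(P)]   # (coordinates, u.pt)
    T, i = [], 1
    while len(T) < n - 1:
        l_i = (c ** i) * (rho / math.sqrt(d)) ** (i - 1)
        # Solve stage: Kruskal restricted to edges of length <= l_i; an edge has
        # weight 0 iff its endpoints are already in one component, so only
        # cross-component edges enter the output tree T.
        cand = sorted((math.dist(u[0], v[0]), u[1], v[1])
                      for u, v in combinations(nodes, 2)
                      if math.dist(u[0], v[0]) <= l_i)
        for _w, pu, pv in cand:
            ru, rv = find(pu), find(pv)
            if ru != rv:
                parent[ru] = rv
                T.append((pu, pv))          # the edge (u.pt, v.pt)
                if len(T) == n - 1:
                    return T
        # Sketch stage: snap nodes to the bottom-left corners of grid cells
        # of side l_i * rho / sqrt(d); keep one representative per cell.
        side = l_i * rho / math.sqrt(d)
        cells = {}
        for p, pt in nodes:
            key = tuple(math.floor(x / side) for x in p)
            cells.setdefault(key, (tuple(k * side for k in key), pt))
        nodes = list(cells.values())
        i += 1
    return T
```

With c = 4 and ρ = √2/2 in two dimensions, l_i doubles each round (4, 8, 16, …), matching the running example below.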

MPC Implementation Details.
  • At the beginning of the Solve Stage, the vertex set V_i is distributed across the machines. The MSF algorithm of Theorem 16 is then invoked on G_i = (V_i, E_i), and the resulting set of edges, 𝒯_i, is stored distributed across the machines. By local computation, each edge (x,y) ∈ 𝒯_i can be converted to the edge (x.pt, y.pt) between the original points.

  • To implement the Sketch Stage (i.e., computing V_{i+1} from V_i), each machine first computes the grid-cell coordinates of each vertex in V_i locally. Our algorithm then sorts all the vertices in V_i by their grid-cell coordinates. After sorting, all the vertices falling in the same grid cell g are stored either on just one machine or on a contiguous sequence of machines. Next, each machine generates the bottom-left corner point t, according to our algorithm, for each grid cell g in its local memory. Duplicate points t are then removed by Duplicate Removal, an atomic MPC operation whose implementation details can be found in Appendix A. The vertex set V_{i+1} for the next super round is thus constructed.
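The sort-then-deduplicate pattern of the Sketch Stage can be sketched sequentially as follows; the helper `sketch_stage` is our own illustration (a real implementation sorts across machines and removes duplicates with the atomic MPC operation).

```python
def sketch_stage(nodes, side):
    """Sketch stage as sort-by-cell plus one representative per cell.
    nodes: list of (coords, pt, cid) triples with non-negative coordinates."""
    keyed = sorted((tuple(int(x // side) for x in p), p, pt, cid)
                   for p, pt, cid in nodes)
    reps, seen = [], None
    for key, _p, pt, cid in keyed:
        if key != seen:                      # duplicate removal: keep first per cell
            reps.append((tuple(k * side for k in key), pt, cid))
            seen = key
    return reps
```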

Figure 2: A running example of our approximate EMST algorithm.
A Running Example.

Figure 2 gives a running example of our approximate EMST algorithm. The input set P = {v_1, …, v_9} contains nine points in 2-dimensional space with integer coordinate values in [0, 15]. Our algorithm solves this instance in three super rounds. The top row of figures shows the edges added to the result edge set ∪_i 𝒯_i by the end of round i. The bottom row shows the vertex set V_i and the MSF of G_i.

In this example, l_1 = 4 and ρ = √2/2. In the first round, our algorithm constructs G_1 = (V_1, E_1). The vertex set V_1 with the augmented node information is shown in the first bottom-row figure. The MSF of G_1, denoted by 𝒯_1, is computed and its edges are shown in the figure. After the MSF computation, each vertex u ∈ V_1 computes its component id u.cid. All the zero-weight edges are deleted from 𝒯_1 and the remaining edges are highlighted in red. For each edge (u_1, u_2) ∈ 𝒯_1, its corresponding edge (u_1.pt, u_2.pt) is marked as an output edge, i.e., an edge in the approximate EMST T returned by our algorithm, as shown in the first top-row figure. Next, the sketch stage starts and G_2 = (V_2, E_2) is constructed: a grid with cell side length l_1 · ρ/√d = 2 is imposed on V_1, and V_2 is generated, as shown in the second bottom-row figure. In the second round, with l_2 = 8, two new edges are added to T. The same process is repeated with l_3 = 16 in the third round, and one more edge is added to T. Since T now contains eight edges in total, it is returned as an approximate EMST of the nine input points.

Remark on 𝑮𝒊=(𝑽𝒊,𝓔𝒊).

From the construction of G_i in our algorithm, at first glance, G_i (for i ≥ 2) is an implicit (d, c · l_{i−1} · ρ/√d)-grid graph, which looks ineligible for our MSF algorithm proposed in Theorem 16. However, observe that the coordinate values of the nodes in V_i are all multiples of l_{i−1} · ρ/√d; therefore, by a simple scaling, G_i is indeed an implicit (d,c)-grid graph.

Lemma 17 ([*]).

Our ρ-approximate EMST algorithm takes O(1) MPC rounds with high probability.

Let T be the union of 𝒯_i over all i = 1, 2, …. We prove that T is a (2ρ)-approximate EMST of P, achieving both overall and edge-wise approximation. By halving the constant approximation parameter ρ, a ρ-approximate EMST is obtained.

Lemma 18 ([*]).

T is a Euclidean spanning tree of P.

Lemma 19 (Edge-Wise Approximation Guarantee [*]).

Consider an exact EMST T* of P; for any edge (u,v) ∈ T*, there must exist a path 𝒫(u,v) from u to v in T such that every edge (x,y) on this path has weight w(x,y) ≤ (1 + 2ρ) · w(u,v).

Lemma 20 (Overall Approximation Guarantee [*]).

Consider an exact EMST T* of P; the total edge weight of T is at most (1 + 2ρ) times the total edge weight of T*.

4 𝑶(𝟏)-Round Approximate DBSCAN

4.1 DBSCAN and 𝝆-Approximate DBSCAN

Core and Non-Core.

Let P be a set of n points in d-dimensional Euclidean space ℝ^d, where the dimensionality d ≥ 2 is a constant. For any two points u, v ∈ P, the Euclidean distance between u and v is denoted by dist(u,v). There are two parameters: (i) a radius ε > 0 and (ii) a core point threshold minPts ≥ 1, which is a constant integer. For a point v ∈ P, define B(v,ε) as the ball centred at v with radius ε. If B(v,ε) covers at least minPts points in P, then v is a core point; otherwise, v is a non-core point.
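The core/non-core definition amounts to a simple predicate, sketched below by brute force (the helper `label_core_points` is our own illustration; the paper's algorithm avoids this quadratic scan).

```python
import math

def label_core_points(P, eps, min_pts):
    """p is a core point iff |B(p, eps) ∩ P| >= min_pts; note B(p, eps) contains p itself."""
    return [sum(math.dist(p, q) <= eps for q in P) >= min_pts for p in P]
```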

Primitive Clusters and Clusters.

Consider an implicit core graph G = (P_core, E), where P_core is the set of all the core points in P, and there exists an edge between two core points u and v if and only if dist(u,v) ≤ ε. Each connected component C of G is defined as a primitive cluster. For a primitive cluster C, let C′ be the set of all the non-core points p ∈ P such that dist(p,C) = min_{u∈C} dist(p,u) ≤ ε. The union C ∪ C′ is defined as a DBSCAN cluster.

The DBSCAN Problem.

Given a set of points P, a radius ε > 0 and a core point threshold minPts ≥ 1, the goal of the DBSCAN problem is to compute all the DBSCAN clusters.

𝝆-Approximate DBSCAN.

Given an extra approximation parameter ρ > 0, the only difference is in the definition of the primitive clusters. Specifically, consider a conceptual ρ-approximate core graph G_ρ = (P_core, E_ρ), where the edge formation rule E_ρ works as follows. For any two core points u, v ∈ P_core: if dist(u,v) ≤ ε, E_ρ(u,v) must report that (u,v) is in G_ρ; if dist(u,v) > (1+ρ)·ε, E_ρ(u,v) must report that (u,v) is not in G_ρ; otherwise, the answer of E_ρ(u,v) does not matter.

Due to this does-not-matter case, G_ρ is not unique. Consider a fixed G_ρ; each connected component C of G_ρ is defined as a primitive cluster with respect to G_ρ, and a ρ-approximate DBSCAN cluster is obtained by adding non-core points to a primitive cluster C in the same way as in the exact version. The goal of the ρ-approximate DBSCAN problem is to compute the ρ-approximate DBSCAN clusters with respect to an arbitrary G_ρ.

4.2 Our MPC Algorithm

Our approximate DBSCAN algorithm consists of three main steps: (i) identify core points; (ii) compute primitive clusters from a G_ρ; and (iii) assign non-core points to primitive clusters. As shown in the existing ρ-approximate DBSCAN algorithm in [21], the third step, assigning non-core points, can be achieved by an algorithm analogous to core point identification. Next, we give an MPC algorithm for the first two steps.

4.2.1 Identify Core Points

We first show how to identify all the core points of a DBSCAN problem instance in O(1/α) MPC rounds. Our goal is to label each point p ∈ P as to whether or not it is a core point. In the following, we assume that the constant parameter α in the MPC model satisfies α ∈ (1/2, 1), and hence m ≤ s, that is, the number of machines is no more than the local memory size s.

Consider a grid imposed on the data space ℝ^d with grid-cell side length l = ε/√d. Each grid cell g is represented by the coordinates (x_1, x_2, …, x_d) of its bottom-left corner; namely, each grid cell g is essentially a hyper-cube [x_1·l, (x_1+1)·l) × ⋯ × [x_d·l, (x_d+1)·l). Each point p ∈ P is contained in one and exactly one grid cell, denoted by g(p). The size of a grid cell g, denoted by |g|, is defined as the number of points in P contained in g. If |g| > 0, then g is a non-empty grid cell. If a non-empty grid cell g contains at least minPts points of P, then g is a dense grid cell; otherwise, g is a sparse grid cell. Given a non-empty grid cell g, the neighbour grid cell set of g is defined as Nei_g = {g′ ∣ ∃ p ∈ g, q ∈ g′ s.t. dist(p,q) ≤ ε} (note here that p and q are not necessarily points in P). A grid cell g′ ∈ Nei_g is called a neighbour grid cell of g. It can be verified that |Nei_g| < (2⌈√d⌉+3)^d = O(1) [20, 22].
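The cell assignment and the neighbour-cell bound can be sketched as follows (the helpers `cell_of` and `neighbour_cells` are our own illustration; `neighbour_cells` enumerates the superset of candidate neighbours that yields the stated O(1) bound, without pruning cells farther than ε).

```python
import math
from itertools import product

def cell_of(p, l):
    """g(p): the cell whose bottom-left corner c satisfies p in [c_j*l, (c_j+1)*l) per axis."""
    return tuple(int(math.floor(x / l)) for x in p)

def neighbour_cells(g, l, eps):
    """Candidate neighbours of cell g: per-axis offsets up to ceil(eps/l) + 1.
    With l = eps/sqrt(d), this gives the bound |Nei_g| <= (2*ceil(sqrt(d)) + 3)**d."""
    d = len(g)
    r = math.ceil(eps / l) + 1      # offsets beyond r leave a gap larger than eps
    return [tuple(g[j] + off[j] for j in range(d))
            for off in product(range(-r, r + 1), repeat=d)]
```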

Our core point identification algorithm contains the following three steps:

Put Points into Grid Cells.

Given a point p, the coordinates of the grid cell g(p) containing p can be calculated from the coordinates of p. Define P_g = P ∩ g as the set of all the points in P falling in g. By sorting all the points in P by the coordinates of the grid cells they belong to, in lexicographical order, which can be achieved in O(log_s n) MPC rounds [24], all the points in P can be arranged on the machines with the following properties:

  • Each machine stores points from consecutive (in the sorted order) non-empty grid cells. Those non-empty grid cells are called the support grid cells of the machine.

  • The points in the same grid cell g are stored on either only one machine or a contiguous sequence of machines. Thus, the IDs of the machines that store P_g – the storage locations of the grid cell g – constitute an interval and can be represented by the left and right endpoints (i.e., the smallest and largest machine IDs) of this interval in O(1) words.

  • The number of support grid cells per machine is bounded by O(s), and the total number of non-empty grid cells is bounded by O(n).
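The storage-location intervals implied by the properties above can be computed as sketched below; the helper `storage_locations` is hypothetical and assumes, for simplicity, that after sorting, machine k holds entries k·s through (k+1)·s − 1.

```python
def storage_locations(sorted_points, s):
    """Map each grid cell to its interval (smallest, largest) of machine IDs.
    sorted_points: list of (cell, point) pairs already sorted by cell."""
    loc = {}
    for idx, (cell, _p) in enumerate(sorted_points):
        m = idx // s                         # machine holding this entry
        lo, hi = loc.get(cell, (m, m))
        loc[cell] = (min(lo, m), max(hi, m))
    return loc
```

Since entries of one cell are contiguous in the sorted order, the resulting machine IDs always form an interval, matching the O(1)-word representation above.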

Calculate Non-empty Neighbour Grid Cell Storage Locations.

In this step, we aim to compute, for each non-empty grid cell, the storage locations of all its non-empty neighbour grid cells. Since |Nei_g| ≤ (2⌈√d⌉+3)^d = O(1), it takes O(1) space to store the location intervals of all its (possibly empty) neighbour grid cells.

At a high level, our algorithm (Algorithm 4 in Appendix A) takes four super rounds to compute the non-empty neighbour grid cell storage locations, as follows:

  • Super Round 1: For each support grid cell g_j on a machine M_i, and for each of its possible neighbours g, our algorithm builds a 4-tuple (g_j.coord, M_i, g.coord, g.loc), where g.coord denotes the coordinates of grid cell g, M_i denotes the machine ID, and the g.loc term is a placeholder, NULL at this stage, to be filled with the storage location of g if g is non-empty. Next, these 4-tuples are sorted by the third term, g.coord, across all machines. Therefore, at the end of this super round, each machine stores a subset of these 4-tuples in its memory.

  • Super Round 2: Each machine M broadcasts the minimum and maximum g.coord values of the 4-tuples it stores, denoted by [M.min(g.coord), M.max(g.coord)]. As a result, at the end of this super round, every machine knows the g.coord value range of every other machine.

  • Super Round 3: Each machine M_i sends a 2-tuple (g_j.coord, g_j.loc), for each of its support grid cells g_j, to those machines M with g_j.coord ∈ [M.min(g.coord), M.max(g.coord)]. Specifically, the second term of the 2-tuple, g_j.loc, is the storage location of g_j, which was computed in the previous "put points into grid cells" phase.

  • Super Round 4: After receiving all the 2-tuples (g_j.coord, g_j.loc) with g_j.coord ∈ [M.min(g.coord), M.max(g.coord)] in Super Round 3, the previous placeholder, i.e., the fourth term g.loc in each 4-tuple stored on machine M, can now be filled properly in M's local memory. Each resulting 4-tuple (g_j.coord, M_i, g.coord, g.loc) is then sent back to machine M_i accordingly.

Lemma 21 ([*]).

The above algorithm, i.e., Algorithm 4, takes O(1) rounds. In each round, the amount of data sent and received by each machine is bounded by O(s).

Output the Point Labels.

Algorithm 5 in Appendix A gives a detailed implementation; a running example of Algorithm 5 is shown in Figure 3. This algorithm labels each point in P as to whether or not it is a core point. If a grid cell g is dense, i.e., |g| ≥ minPts, all the points in P_g are labelled as core points. For each point p in each sparse grid cell g(p) (i.e., with |g(p)| < minPts) on each machine M, we count the number of points of P in the ball B(p,ε) centred at p with radius ε. Observe that a point q falls in B(p,ε) only if either q is in the same grid cell as p or g(q) is a neighbour grid cell of g(p). Our algorithm thus sends p to all the storage locations of (i.e., the machines storing) the neighbour grid cells of g(p). In the local memory of each of those machines, the distances between p and all the points in the neighbour grid cells can be computed. Then, the numbers of points falling in B(p,ε) on those machines are sent back to machine M (where p is stored), and each point p in a sparse cell can be labelled accordingly.
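The labelling logic above can be sketched sequentially (a single-machine stand-in for Algorithm 5; the helper `label_with_grid` is our own illustration): dense cells are labelled wholesale, and points in sparse cells count B(p,ε) only over their own and neighbouring cells.

```python
import math
from collections import defaultdict
from itertools import product

def label_with_grid(P, eps, min_pts):
    """Label each point of P as core (True) / non-core (False) via the grid."""
    d = len(P[0])
    l = eps / math.sqrt(d)                       # grid-cell side length
    cells = defaultdict(list)
    for p in P:
        cells[tuple(int(math.floor(x / l)) for x in p)].append(p)
    r = math.ceil(eps / l) + 1                   # neighbour-cell offset radius
    labels = {}
    for g, pts in cells.items():
        if len(pts) >= min_pts:                  # dense cell: every point is core
            for p in pts:
                labels[p] = True
        else:                                    # sparse cell: inspect neighbour cells
            for p in pts:
                cnt = sum(math.dist(p, q) <= eps
                          for off in product(range(-r, r + 1), repeat=d)
                          for q in cells.get(tuple(g[j] + off[j] for j in range(d)), ()))
                labels[p] = cnt >= min_pts
    return labels
```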

Figure 3: A running example of Algorithm 5 with minPts = 3. The input points are in six non-empty grid cells stored across five machines M_1, …, M_5. Specifically, g_2, g_4, g_5 are dense grid cells and the points in them are core points, while g_1, g_3, g_6 are sparse grid cells. To label the points in these sparse grid cells, Algorithm 5 performs two MPC rounds. Take g_3, stored on M_1, as an example; the neighbour grid cell set of g_3 is Nei_{g_3} = {g_1, g_2, g_4}, and their storage locations are: M_1 for both g_1 and g_2, and M_2, M_3, M_4 for g_4. Communication between M_1 and itself (i.e., local computation) and between M_1 and M_2, M_3, M_4 is incurred, shown by the blue arrows in the figure.
Lemma 22 ([*]).

The above algorithm, i.e., Algorithm 5, performs two MPC rounds. In each round, the amount of data sent and received by each machine is bounded by O(s).

Lemma 23 (Core Point Identification [*]).

Consider a set P of n points; given a radius parameter ε > 0 and a core point threshold minPts ≥ 1, which is a constant integer, there exists an MPC algorithm with local memory Θ(s) = Θ(n^α) per machine, for an arbitrary constant α ∈ (1/2, 1), which identifies all the core points in O(1/α) rounds.

4.2.2 Compute Primitive Clusters

Our algorithm works as follows:

  • construct an implicit (d,c)-grid graph G = (V, E):

    • initialize V ← ∅; impose a grid on P_core with cell side length ρε/(2√d);

    • for each non-empty grid cell g, add the bottom-left corner point t of g to V;

    • set E to contain only those edges (t_1, t_2) with dist(t_1, t_2) ≤ (1 + ρ/2)·ε, and return dist(t_1, t_2) as the weight of such an edge; as a result, G is c-penetration with c = (1 + ρ/2)·ε / (ρε/(2√d)) = O(1);

  • invoke our MPC algorithm in Theorem 16 on the implicit (d,c)-grid graph G = (V, E) to compute an MSF of G, and associate a connected component ID with each point t ∈ V;

  • for each point t ∈ V, assign t's connected component ID to every core point in the grid cell that t represents;

  • group all the core points in Pcore by their connected component ID’s;

  • return each group as a primitive cluster; denote the set of these primitive clusters by 𝒞;
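The steps above can be sketched sequentially as follows. The helper `primitive_clusters` is our own illustration: it replaces the distributed MSF of Theorem 16 with a plain union-find over the snapped cell representatives, but follows the same snapping radius ρε/(2√d) and edge threshold (1 + ρ/2)·ε.

```python
import math
from collections import defaultdict

def primitive_clusters(core_pts, eps, rho):
    """Group core points into rho-approximate primitive clusters (sequential sketch)."""
    d = len(core_pts[0])
    side = rho * eps / (2 * math.sqrt(d))        # grid-cell side length
    cells = defaultdict(list)
    for p in core_pts:
        cells[tuple(int(math.floor(x / side)) for x in p)].append(p)
    reps = {g: tuple(k * side for k in g) for g in cells}   # bottom-left corners
    parent = {g: g for g in cells}               # union-find over non-empty cells
    def find(g):
        while parent[g] != g:
            parent[g] = parent[parent[g]]
            g = parent[g]
        return g
    gs = list(cells)
    for i, g1 in enumerate(gs):                  # connect representatives within (1 + rho/2)*eps
        for g2 in gs[i + 1:]:
            if math.dist(reps[g1], reps[g2]) <= (1 + rho / 2) * eps:
                parent[find(g1)] = find(g2)
    out = defaultdict(list)
    for g, pts in cells.items():                 # propagate component IDs back to core points
        out[find(g)].extend(pts)
    return list(out.values())
```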

Lemma 24 ([*]).

The above primitive cluster computing algorithm returns legal ρ-approximate primitive clusters in O(1) rounds with high probability.

5 Conclusion

In this paper, we studied the problems of grid graph connectivity, approximate EMST and approximate DBSCAN in the MPC model with strictly sub-linear local memory per machine. We gave Las Vegas algorithms (succeeding with high probability), which can be derandomized, solving these problems in O(1) rounds. Given the importance of the problems studied in this paper, we believe that our O(1)-round MPC algorithms and the pseudo s-separator technique will be of independent interest for solving other related problems in the MPC model.

References

  • [1] Pankaj K Agarwal, Herbert Edelsbrunner, Otfried Schwarzkopf, and Emo Welzl. Euclidean minimum spanning trees and bichromatic closest pairs. In Proceedings of the sixth annual symposium on Computational geometry, pages 203–210, 1990. doi:10.1145/98524.98567.
  • [2] Pankaj K. Agarwal, Kyle Fox, Kamesh Munagala, and Abhinandan Nath. Parallel algorithms for constructing range and nearest-neighbor searching data structures. In Proceedings of the 35th ACM PODS 2016, pages 429–440. ACM, 2016. doi:10.1145/2902251.2902303.
  • [3] Alok Aggarwal and Jeffrey Scott Vitter. The input/output complexity of sorting and related problems. Commun. ACM, 31(9):1116–1127, 1988. doi:10.1145/48529.48535.
  • [4] Rakesh Agrawal, Johannes Gehrke, Dimitrios Gunopulos, and Prabhakar Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. In Proceedings of the 1998 ACM SIGMOD international conference on Management of data, 1998.
  • [5] Alexandr Andoni, Aleksandar Nikolov, Krzysztof Onak, and Grigory Yaroslavtsev. Parallel algorithms for geometric graph problems. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 574–583. ACM, 2014. doi:10.1145/2591796.2591805.
  • [6] Alexandr Andoni, Zhao Song, Clifford Stein, Zhengyu Wang, and Peilin Zhong. Parallel graph connectivity in log diameter rounds. In Mikkel Thorup, editor, 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, 2018. doi:10.1109/FOCS.2018.00070.
  • [7] Sunil Arya, David M Mount, Nathan S Netanyahu, Ruth Silverman, and Angela Y Wu. An optimal algorithm for approximate nearest neighbor searching fixed dimensions. Journal of the ACM (JACM), 45(6):891–923, 1998. doi:10.1145/293347.293348.
  • [8] Sepehr Assadi, Xiaorui Sun, and Omri Weinstein. Massively parallel algorithms for finding well-connected components in sparse graphs. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC 2019, 2019.
  • [9] Paul Beame, Paraschos Koutris, and Dan Suciu. Communication steps for parallel query processing. Journal of the ACM (JACM), 64(6):1–58, 2017. doi:10.1145/3125644.
  • [10] Soheil Behnezhad, Laxman Dhulipala, Hossein Esfandiari, Jakub Lacki, and Vahab Mirrokni. Near-optimal massively parallel graph connectivity. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 1615–1636. IEEE, 2019. doi:10.1109/FOCS.2019.00095.
  • [11] Patrick Burger and Hans-Joachim Wuensche. Fast multi-pass 3d point segmentation based on a structured mesh graph for ground vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 2150–2156. IEEE, 2018. doi:10.1109/IVS.2018.8500552.
  • [12] Brent N Clark, Charles J Colbourn, and David S Johnson. Unit disk graphs. Discrete mathematics, 86(1-3):165–177, 1990. doi:10.1016/0012-365X(90)90358-O.
  • [13] Sam Coy and Artur Czumaj. Deterministic massively parallel connectivity. In STOC ’22: 54th Annual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20 - 24, 2022, pages 162–175. ACM, 2022. doi:10.1145/3519935.3520055.
  • [14] Sam Coy, Artur Czumaj, and Gopinath Mishra. On parallel k-center clustering. In Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2023, Orlando, FL, USA, June 17-19, 2023, pages 65–75. ACM, 2023. doi:10.1145/3558481.3591075.
  • [15] Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý. Fully scalable MPC algorithms for clustering in high dimension. CoRR, abs/2307.07848, 2023. doi:10.48550/arXiv.2307.07848.
  • [16] Artur Czumaj, Christiane Lammersen, Morteza Monemizadeh, and Christian Sohler. (1+ε)-approximation for facility location in data streams. In Proceedings of SODA 2013, 2013.
  • [17] Jeffrey Dean and Sanjay Ghemawat. Mapreduce: simplified data processing on large clusters. Commun. ACM, 51(1):107–113, 2008. doi:10.1145/1327452.1327492.
  • [18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. arXiv:1810.04805.
  • [19] Hu Ding and Jinhui Xu. Solving the chromatic cone clustering problem via minimum spanning sphere. In International Colloquium on Automata, Languages, and Programming, pages 773–784. Springer, 2011. doi:10.1007/978-3-642-22006-7_65.
  • [20] Junhao Gan and Yufei Tao. Dbscan revisited: Mis-claim, un-fixability, and approximation. In Proceedings of the 2015 ACM SIGMOD international conference on management of data, pages 519–530, 2015. doi:10.1145/2723372.2737792.
  • [21] Junhao Gan and Yufei Tao. On the hardness and approximation of euclidean dbscan. ACM Transactions on Database Systems (TODS), 42(3):1–45, 2017. doi:10.1145/3083897.
  • [22] Junhao Gan and Yufei Tao. An i/o-efficient algorithm for computing vertex separators on multi-dimensional grid graphs and its applications. J. Graph Algorithms Appl. (JGAA), 22(2):297–327, 2018. doi:10.7155/JGAA.00471.
  • [23] Junhao Gan, Anthony Wirth, and Zhuo Zhang. o(1)-round mpc algorithms for multi-dimensional grid graph connectivity, emst and dbscan, 2025. arXiv:2501.12044.
  • [24] Michael T Goodrich, Nodari Sitchinava, and Qin Zhang. Sorting, searching, and simulation in the mapreduce framework. In International Symposium on Algorithms and Computation, pages 374–383. Springer, 2011. doi:10.1007/978-3-642-25591-5_39.
  • [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society, 2016. doi:10.1109/CVPR.2016.90.
  • [26] Piotr Indyk. Algorithms for dynamic geometric problems over data streams. In Proceedings of the thirty-sixth annual ACM Symposium on Theory of Computing, pages 373–380, 2004. doi:10.1145/1007352.1007413.
  • [27] Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly. Dryad: distributed data-parallel programs from sequential building blocks. In Proceedings of the 2007 EuroSys Conference, Lisbon, Portugal, March 21-23, 2007, pages 59–72. ACM, 2007. doi:10.1145/1272996.1273005.
  • [28] Howard J. Karloff, Siddharth Suri, and Sergei Vassilvitskii. A model of computation for mapreduce. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas, USA, January 17-19, 2010, 2010. doi:10.1137/1.9781611973075.76.
  • [29] Yi Li, Philip M. Long, and Aravind Srinivasan. Improved bounds on the sample complexity of learning. J. Comput. Syst. Sci., 62(3):516–527, 2001. doi:10.1006/JCSS.2000.1741.
  • [30] Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings, 2013. URL: http://arxiv.org/abs/1301.3781.
  • [31] Morteza Monemizadeh. Facility location in the sublinear geometric model. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023), 2023.
  • [32] Danupon Nanongkai and Michele Scquizzato. Equivalence classes and conditional hardness in massively parallel computations. Distributed computing, 35(2):165–183, 2022. doi:10.1007/S00446-021-00418-2.
  • [33] Abhinandan Nath, Kyle Fox, Pankaj K Agarwal, and Kamesh Munagala. Massively parallel algorithms for computing tin dems and contour trees for large terrains. In Proceedings of the 24th ACM SIGSPATIAL, 2016.
  • [34] Tom White. Hadoop - The Definitive Guide: Storage and Analysis at Internet Scale (3. ed., revised and updated). O’Reilly, 2012. URL: http://www.oreilly.de/catalog/9781449311520/index.html.
  • [35] Jie Xue. Colored range closest-pair problem under general distance functions. In Proceedings of ACM-SIAM SODA, pages 373–390. SIAM, 2019. doi:10.1137/1.9781611975482.24.
  • [36] Grigory Yaroslavtsev and Adithya Vadapalli. Massively parallel algorithms and hardness for single-linkage clustering under lp-distances. In 35th International Conference on Machine Learning (ICML’18), 2018.
  • [37] Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, and Ion Stoica. Spark: Cluster computing with working sets. In 2nd USENIX Workshop on Hot Topics in Cloud Computing, HotCloud’10, Boston, MA, USA, June 22, 2010, 2010. URL: https://www.usenix.org/conference/hotcloud-10/spark-cluster-computing-working-sets.
  • [38] Yan Zhou, Oleksandr Grygorash, and Thomas F Hain. Clustering with minimum spanning trees. International Journal on Artificial Intelligence Tools, 20(01):139–177, 2011. doi:10.1142/S0218213011000061.

Appendix A Appendix

Here, we include pseudocode for the main algorithms in our paper.

Algorithm 3 MSF Algorithm.
Algorithm 4 Location of Non-empty Neighbor Grid Cell.
Algorithm 5 Output the Point Label.