O(1)-Round MPC Algorithms for Multi-Dimensional Grid Graph Connectivity, Euclidean MST and DBSCAN
Abstract
In this paper, we investigate three fundamental problems in the Massively Parallel Computation (MPC) model: (i) grid graph connectivity, (ii) approximate Euclidean Minimum Spanning Tree (EMST), and (iii) approximate DBSCAN.
Our first result is an O(1)-round Las Vegas (i.e., succeeding with high probability) MPC algorithm for computing the connected components of a d-dimensional ρ-penetration grid graph (a (d, ρ)-grid graph), where both d and ρ are positive integer constants. In such a grid graph, each vertex is a point with integer coordinates, and an edge can only exist between two distinct vertices whose L∞-norm distance is at most ρ. To our knowledge, the best existing approach for computing the connected components (CC's) on (d, ρ)-grid graphs in the MPC model is to run the state-of-the-art MPC CC algorithms that are designed for general graphs: they achieve O(log D + log log N) [10] and O(log log N + log(1/λ)) [8] rounds, respectively, where D is the diameter and λ is the spectral gap of the graph. With our grid-graph connectivity technique, our second main result is an O(1)-round Las Vegas MPC algorithm for computing a (1+ε)-approximate Euclidean MST. The existing state-of-the-art result on this problem is the O(1)-round MPC algorithm proposed by Andoni et al. [5], which only guarantees a (1+ε)-approximation on the overall weight in expectation. In contrast, our algorithm not only guarantees a deterministic overall-weight (1+ε)-approximation, but also achieves a deterministic edge-wise weight (1+ε)-approximation. The latter property is crucial to many applications, such as finding the Bichromatic Closest Pair and Single-Linkage Clustering. Last, but not least, our third main result is an O(1)-round Las Vegas MPC algorithm for computing an ε-approximate DBSCAN clustering in d-dimensional Euclidean space.
Keywords and phrases:
Massively Parallel Computation, Graph Connectivity, Grid Graphs, Euclidean Minimum Spanning Tree, DBSCAN
2012 ACM Subject Classification:
Theory of computation → Massively parallel algorithms
Funding:
This work is in part supported by ARC DE190101118 and DP230102908.
Editors:
Sudeepa Roy and Ahmet Kara

1 Introduction
Effective parallel systems for large-scale datasets, such as MapReduce [17], Dryad [27], Spark [37] and Hadoop [34], have received considerable attention in recent years. The Massively Parallel Computation (MPC) model [28, 24, 9] has been proposed to provide a solid theoretical abstraction for the modern study of parallelism.
In this paper, we propose several O(1)-round Las Vegas algorithms in the MPC model, each succeeding with high probability, for solving three fundamental problems: (i) connectivity and minimum spanning forest (MSF) on multi-dimensional grid graphs, (ii) approximate Euclidean Minimum Spanning Tree (EMST), and (iii) approximate DBSCAN.
The MPC Model.
In the MPC model, the input is of size N and evenly distributed among the machines. Each machine is equipped with a strictly sub-linear local memory of size M = Θ(N^α), where α ∈ (0, 1) is some constant [5, 28, 24]. An MPC algorithm runs in (synchronous) rounds; each round proceeds in two phases, one after the other: the communication phase first, and then the local computation phase. In the communication phase, each machine can send to and receive from the other machines O(M) data in total. In the computation phase, each machine performs computation on the data stored in its local memory, and prepares the data to be sent in the communication phase of the next round. The efficiency of an MPC algorithm is measured by the total number of rounds.
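As a toy illustration of the round structure just described, the following sequential sketch simulates synchronous rounds with an O(M) communication budget per machine. All names here (`Machine`, `run_rounds`) are ours, not part of any real MPC framework.

```python
# Toy sequential simulation of the MPC model: the input is spread over
# machines, each with local memory of size M. Illustrative only.
class Machine:
    def __init__(self, data):
        self.data = list(data)   # local memory contents
        self.outbox = []         # (destination machine, item) pairs

def run_rounds(machines, local_step, num_rounds, M):
    """One round = communication phase (at most O(M) data sent and
    received per machine) followed by a local computation phase."""
    for _ in range(num_rounds):
        inboxes = [[] for _ in machines]
        for m in machines:
            assert len(m.outbox) <= M          # send budget
            for dest, item in m.outbox:
                inboxes[dest].append(item)
            m.outbox = []
        for i, m in enumerate(machines):
            assert len(inboxes[i]) <= M        # receive budget
            m.data.extend(inboxes[i])
            local_step(i, m)                   # computation phase
    return machines
```

For example, a global maximum can be computed in one round by having every machine send its local maximum to machine 0.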
Main Result 1: Connectivity on Grid Graphs
We consider a class of graphs called d-dimensional ρ-penetration grid graphs (for short, (d, ρ)-grid graphs). Specifically, a graph G = (V, E) is a (d, ρ)-grid graph if it satisfies: (i) the dimensionality d is a constant and ρ ≥ 1 is an integer; (ii) each node in V is a d-dimensional point with integer coordinates; (iii) for each edge (u, v) ∈ E, 0 < ‖u − v‖∞ ≤ ρ holds, i.e., u and v are distinct vertices and their coordinates differ by at most ρ on every dimension. It can be verified that, by definition, when ρ is a constant, each node in a (d, ρ)-grid graph can have at most (2ρ + 1)^d − 1 neighbours, and hence a (d, ρ)-grid graph is sparse, i.e., |E| = O(|V|). In particular, for ρ = 1, a (d, 1)-grid graph is the common well-defined d-dimensional grid graph.
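The edge condition and the neighbour bound above can be checked directly; the function names in this small sketch are ours.

```python
# The (d, rho)-grid-graph edge condition: an edge may exist only
# between distinct integer points whose L_infinity distance is <= rho.
def linf(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

def may_be_edge(u, v, rho):
    return u != v and linf(u, v) <= rho

def max_neighbours(d, rho):
    # All integer points of the (2*rho+1)^d box around a vertex,
    # excluding the vertex itself: the sparsity bound from the text.
    return (2 * rho + 1) ** d - 1
```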
In real-world applications, grid graphs are often useful in the modeling of 3D meshes and terrains [11]. More importantly, grid graphs are extremely useful in many algorithmic designs for solving various computational geometry problems, such as Approximate Nearest Neighbour Search [7], Euclidean Minimum Spanning Tree [26] and DBSCAN [4]. Therefore, the study of algorithms on grid graphs has become an important topic in the research community. Our first result is this theorem:
Theorem 1 ([*]).
Given a (d, ρ)-grid graph G = (V, E) with n = |V|, there exists a Las Vegas MPC algorithm with M = Θ(n^α) local memory per machine, for an arbitrary constant α ∈ (0, 1), which computes all the connected components of G in O(1) rounds with high probability. Specifically, the algorithm assigns to each node the ID of the connected component to which the node belongs.
Computing connected components (CC’s) of sparse graphs in the MPC model is notoriously challenging. In particular, the well accepted 2-CYCLE Conjecture [36, 32] says that:
Every MPC algorithm, using O(n^α) local memory per machine for a constant α ∈ (0, 1) and poly(n) space in total, requires Ω(log n) rounds to correctly solve, with high probability, the 1-vs-2-Cycle problem, which asks to distinguish whether the input graph consists of just one cycle with n nodes or two cycles with n/2 nodes each.
Theoretically speaking, disproving the 2-CYCLE Conjecture is "as easy as" finding just a single constant α ∈ (0, 1) and an algorithm that solves the 1-vs-2-Cycle problem with O(n^α) per-machine local memory and poly(n) total space in o(log n) rounds. However, according to recent hardness results, the conjecture is robust and hard to refute [32].
Even worse, since the input instance of the 1-vs-2-Cycle problem is so simple, the conjecture eliminates the hope of computing CC's in o(log n) rounds even for many classes of "simple" graphs, such as path graphs, trees and planar graphs, which are often the easier cases for many fundamental problems. As a result, it is still very challenging to find a large class of graphs on which the CC problem can be solved in o(log n) rounds.
State-of-the-art works [6, 8, 10, 13] consider general sparse graphs that are parameterised by the diameter D and the spectral gap λ. Specifically, Andoni et al. [6] propose an O(log D · log log_{m/n} n)-round randomized algorithm, where n is the number of nodes and m the number of edges. Assadi et al. [8] give an algorithm with O(log log n + log(1/λ)) rounds. Recently, Behnezhad et al. [10] improved the bound of Andoni et al. to O(log D + log log_{m/n} n) randomized rounds, while Coy and Czumaj [13] proposed a deterministic MPC algorithm achieving the same round bound. However, all these state-of-the-art general algorithms still require Ω(log n) rounds for solving the CC problem on (d, ρ)-grid graphs in the worst case, as the diameter of a (d, ρ)-grid graph can be as large as Θ(n) and the spectral gap λ can be as small as 1/poly(n).
Therefore, our Theorem 1 is not only an immediate improvement over these state-of-the-art known results for the CC problem on (d, ρ)-grid graphs, but, interestingly, it also exhibits a large class of sparse graphs, namely (d, ρ)-grid graphs, on which the CC problem can be solved in O(1) rounds.
We note that it is the geometric property of (d, ρ)-grid graphs that makes them admit O(1)-round CC algorithms. As we shall discuss in Section 2, our key technique is the so-called Orthogonal Vertex Separator for multi-dimensional grid graphs proposed by Gan and Tao [22], which was originally introduced to efficiently compute CC's on grid graphs in the External Memory model [3]. However, in the MPC model, it appears difficult to compute such separators in O(1) rounds. To overcome this, we relax the Orthogonal Vertex Separator to a "weaker yet good-enough" version for our purpose of designing O(1)-round MPC algorithms, and we extend this technique to (d, ρ)-grid graphs, which is crucial to our O(1)-round EMST and DBSCAN algorithms. (A similar technique of using small-size geometric separators, though not the same as ours, is applied to computing Contour Trees on terrains [33] in O(1) rounds in the MPC model. Moreover, the idea of a crucial technique in our separator computation, ε-approximation, stems from the KD-tree construction in the MPC model in [2].)
Moreover, with standard modifications, our CC algorithm can also compute the Minimum Spanning Forests (MSF) on -grid graphs:
Corollary 2 ([*]).
Given an edge-weighted (d, ρ)-grid graph G = (V, E) with n = |V|, there exists a Las Vegas MPC algorithm with M = Θ(n^α) local memory per machine, for an arbitrary constant α ∈ (0, 1), which computes a Minimum Spanning Forest of G in O(1) rounds with high probability.
Main Result 2: Approximate EMST
By enhancing our MSF technique for (d, ρ)-grid graphs, our second main result is a new O(1)-round Las Vegas MPC algorithm for computing an approximate Euclidean Minimum Spanning Tree (EMST). Specifically, the problem is defined as follows. Given a set P of n points with integer coordinates in d-dimensional space, where d is a constant and the coordinate values are in [Δ] with Δ = poly(n), let G_P be the implicit edge-weighted complete graph on P such that the weight of each edge (u, v) is the Euclidean distance dist(u, v) between u and v. The goal of the EMST problem is to compute an MST of the implicit complete graph G_P of P. The EMST problem is one of the most fundamental and important problems in computational geometry and has sparked significant attention in machine learning [18, 25, 30], data clustering [15, 14] and optimization [16, 31]. The existing state-of-the-art result is an O(1)-round Monte Carlo MPC algorithm proposed by Andoni et al. [5] for computing an approximate EMST. Their algorithm possesses the following properties: (i) it achieves O(1) rounds in the worst case and works for all constant α ∈ (0, 1); and (ii) in expectation, it returns a spanning tree of the implicit graph G_P whose total edge weight is at most (1+ε) times the total edge weight of the exact EMST of P, where the approximation factor ε > 0 is a constant. In comparison, we have the following theorem:
Theorem 3 ([*]).
Given a set P of n points in d-dimensional space with coordinate values in [Δ] for Δ = poly(n), and a constant approximation factor ε > 0, there exists a Las Vegas MPC algorithm with M = Θ(n^α) local memory per machine, for an arbitrary constant α ∈ (0, 1), which computes a spanning tree T of the implicit complete graph G_P of P in O(1) rounds with high probability, such that T deterministically satisfies the following:
-
Overall Weight Approximation: the total edge weight of T is at most (1+ε) times the total edge weight of an exact EMST of P;
-
Edge-Wise Weight Approximation: for every edge (u, v) in an exact EMST with weight dist(u, v), there must exist a path from u to v in T such that the maximum weight of the edges on this path is at most (1+ε) · dist(u, v).
Our algorithm improves upon Andoni et al.'s approximate EMST algorithm in two ways. First, our algorithm computes a feasible (1+ε)-approximate EMST deterministically, while theirs guarantees the approximation only in expectation. Second, and most importantly, the approximate EMST computed by our algorithm further guarantees the edge-wise approximation, which Andoni et al.'s algorithm cannot achieve. We note that the edge-wise approximation guarantee is crucial to solving many downstream algorithmic problems, such as (1+ε)-approximate bichromatic closest pair [1], Single-Linkage Clustering [38] and other related problems [35, 19].
Main Result 3: Approximate DBSCAN
Our third main result is an O(1)-round MPC algorithm for computing an ε-approximate DBSCAN clustering as defined in [20]. To the best of our knowledge, this is the first O(1)-round algorithm in the MPC model for solving the DBSCAN problem.
Theorem 4 ([*]).
Given a set P of n points in d-dimensional space, where d is a constant, two DBSCAN parameters, a radius r > 0 and a core-point threshold MinPts = O(1), and a constant approximation factor ε > 0, there exists a Las Vegas MPC algorithm with M = Θ(n^α) local memory per machine, for an arbitrary constant α ∈ (0, 1), which computes an ε-approximate DBSCAN clustering of P in O(1) rounds with high probability.
Derandomization
As we shall see shortly, the randomness of all our Las Vegas algorithms comes only from the randomized computation of an ε-approximation for some ε > 0. Indeed, it is known that such approximations can be computed in O(1) MPC rounds deterministically [2]. Therefore, all our algorithms can be derandomized, at the cost of slightly shrinking the feasible value range of the local-memory parameter M.
Full Paper
The proofs of all the theorems, lemmas and claims marked with [*] can be found in the full version of this paper, on arXiv [23].
2 Grid Graph Connectivity and MSF
In this section, we introduce our O(1)-round MPC algorithms for computing CC's and MSF's on (d, ρ)-grid graphs, succeeding with high probability.
Implicit Graphs and Implicit (d, ρ)-Grid Graphs.
For the ease of explanation of our techniques for solving the EMST problem in the next section, we first introduce the notion of implicit graphs, denoted by G = (V, f).
Definition 5 (Implicit Graph).
An implicit graph G = (V, f) is a graph consisting of a node set V and an edge formation rule f, where:
-
each node in V is associated with O(1) words of information, e.g., d-dimensional coordinates;
-
the edge formation rule f is a function (of size bounded by O(1) words) on distinct node pairs: for any two distinct nodes u, v ∈ V, f(u, v) returns, based on the O(1)-word information associated with u and v: (i) whether the edge (u, v) exists in G, and (ii) the weight of (u, v) if the edge exists and the weight is well defined.
By the above definition, the size of every implicit graph G = (V, f) is bounded by O(|V|). An implicit graph can be converted to an explicit graph by materialising all the edges with the edge formation rule f. If the corresponding explicit graph of an implicit graph G is a (d, ρ)-grid graph, we say that G is an implicit (d, ρ)-grid graph. For example, given a set P of points in d-dimensional space with coordinate values in [Δ], the implicit complete Euclidean graph G_P of P can be considered as an implicit (d, Δ)-grid graph, where f returns dist(u, v), the Euclidean distance between u and v, as the edge weight. Another example is the unit-disk graph [12] under the L2-norm, which can essentially be regarded as an implicit grid graph with the edge formation rule: there exists an edge between two vertices if and only if their L2-norm distance is at most 1. However, as we will see in our EMST algorithm, the notion of implicit (d, ρ)-grid graphs is more general, owing to the flexibility of the edge-formation-rule specification. For example, in addition to the coordinates, each node can also carry the ID of the connected component it belongs to, and the edge formation rule can further impose that there be no edge between two nodes having the same connected-component ID.
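A minimal sketch of such an edge-formation rule, combining a distance threshold with the component-id condition mentioned above; the dictionary-based node representation is ours, chosen only for illustration.

```python
# Minimal implicit-graph sketch: each node carries O(1) words
# (coordinates plus a component id); edges exist only through an
# edge-formation rule evaluated on pairs of nodes.
import math

def make_rule(threshold):
    """Edge rule: an edge (u, v) exists iff u and v currently lie in
    different components and are within `threshold`; its weight is the
    Euclidean distance. Returns None when the edge does not exist."""
    def rule(u, v):
        if u["cid"] == v["cid"]:
            return None                      # suppressed edge
        w = math.dist(u["coord"], v["coord"])
        return w if w <= threshold else None
    return rule
```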
2.1 Pseudo M-Separator and Its Existence
In this subsection, we extend the notion of the Orthogonal Vertex Separator [22] to a weaker yet good-enough version for our algorithm-design purposes for solving the CC and MSF problems on implicit (d, ρ)-grid graphs in the MPC model. We call this new notion a Pseudo M-Separator; recall that M is the size of local memory, per machine, in the MPC model:
Definition 6 (Pseudo -Separator).
Consider an implicit (d, ρ)-grid graph G = (V, f) and a parameter M. A set S ⊆ V is a pseudo M-separator of G if it satisfies:
-
|S| = O(M);
-
removing the vertices in S disconnects G into sub-graphs G_1, G_2, …, G_t, such that |V_i| = O(M) for all i ∈ [t], and there is no edge between G_i and G_j for any i ≠ j.
Figure 1 shows an example. Next, we show that, when n and ρ are suitably bounded, a pseudo M-separator must exist in an implicit (d, ρ)-grid graph G. While our proof mimics that in [22], it does require some new non-trivial ideas. We first introduce some useful notation.
Minimum Bounding Space of V.
Consider an implicit (d, ρ)-grid graph G = (V, f); let R_i be the value range of V on the i-th dimension (for i ∈ [d]), whose endpoints are the smallest and largest coordinate values of the points in V on dimension i, respectively. The minimum bounding space of V is defined as MBS(V) = R_1 × R_2 × ⋯ × R_d. Moreover, we denote the projection of MBS(V) on the i-th dimension by MBS_i(V).
ρ-Divider.
Consider the minimum bounding space MBS(V); a ρ-divider on the i-th dimension at coordinate c, denoted by D(i, c), is an axis-parallel rectangular range whose projection on dimension i is [c, c + ρ − 1] and whose projection on every other dimension j ≠ i is MBS_j(V). With a ρ-divider D = D(i, c), the vertex set V is partitioned into three parts: (i) the left part V_1 = {v ∈ V : v_i < c}, (ii) the right part V_2 = {v ∈ V : v_i > c + ρ − 1}, and (iii) the boundary part B_D = {v ∈ V : c ≤ v_i ≤ c + ρ − 1}, where v_i is the coordinate of a vertex v on dimension i.
We say that D separates G into the two sub-graphs induced by the left and the right part, i.e., G_1 = (V_1, f) and G_2 = (V_2, f), respectively, and all the vertices in B_D are called separator vertices. Observe that for any u ∈ V_1 and any v ∈ V_2, v_i − u_i > ρ, implying ‖u − v‖∞ > ρ. As a result, there cannot exist any edge between such u and v. With this notation, we show the following Binary Partition Lemma.
Lemma 7 (Binary Partition Lemma).
Consider an implicit (d, ρ)-grid graph G = (V, f) with n = |V| and ρ ≤ (n/2)^{1/d}/2; there exists a ρ-divider D partitioning G into G_1 = (V_1, f) and G_2 = (V_2, f) such that:
-
|V_1| ≥ n/(4d) and |V_2| ≥ n/(4d), and
-
|B_D| = O(ρ · n^{1−1/d}).
Proof.
We prove this lemma with a constructive algorithm. Given the vertex set V, our algorithm first finds two integers, l_i and r_i, for each dimension i ∈ [d]. Specifically, l_i is the largest integer such that the number of vertices in V located to the "left" of l_i is at most n/(4d), while r_i is the smallest integer such that the number of vertices in V located to the "right" of r_i is at most n/(4d). Formally, l_i = max{x : |{v ∈ V : v_i < x}| ≤ n/(4d)} and r_i = min{x : |{v ∈ V : v_i > x}| ≤ n/(4d)}, where v_i is the coordinate of v on dimension i.
Consider the hyper-box in MBS(V) whose projection on each dimension i is [l_i, r_i], for i ∈ [d]. By the definition of l_i and r_i, this box contains in total at least n − 2d · n/(4d) = n/2 vertices of V. Since the coordinates of the points are all integers, the volume of the box must be at least n/2. That is, ∏_{i ∈ [d]} (r_i − l_i + 1) ≥ n/2. Thus, there must exist some dimension i such that r_i − l_i + 1 ≥ (n/2)^{1/d}. Next, we fix this dimension i. Let B_{D(i,c)} be the set of all vertices in V falling in the ρ-divider D(i, c). Within dimension i, each vertex in V can be contained in at most ρ consecutive ρ-dividers. As a result, we have Σ_c |B_{D(i,c)}| ≤ ρ · n, where c ranges over [l_i + 1, r_i − ρ]. Since there are at least (n/2)^{1/d} − 1 − ρ terms in the summation, there exists an integer c with
|B_{D(i,c)}| ≤ ρ · n / ((n/2)^{1/d} − 1 − ρ) ≤ 2ρ · n / (n/2)^{1/d} = O(ρ · n^{1−1/d}),
where the second inequality comes from the bound on ρ and the third is due to (n/2)^{1/d} ≥ n^{1/d}/2 as d ≥ 1. Such D(i, c) is a desired ρ-divider.
Next, we bound the size of V_1 with respect to D(i, c); bounding V_2 is analogous. Since c ∈ [l_i + 1, r_i − ρ], there are only two possible relations between c and l_i + 1:
-
If c = l_i + 1, then V_1 = {v ∈ V : v_i < l_i + 1}; by the maximality of l_i, the number of vertices in V to the left of D(i, c) is more than n/(4d).
-
If c > l_i + 1, then V_1 ⊇ {v ∈ V : v_i < l_i + 1}, and hence the number of vertices in V to the left of D(i, c) is again more than n/(4d).
Either way, |V_1| ≥ n/(4d) holds; Lemma 7 thus follows.
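The constructive argument above can be mocked up sequentially: restrict candidate divider positions to the middle portion of each dimension, then keep a divider with a small boundary. The concrete thresholds below (the middle half of each dimension) are illustrative stand-ins for the paper's constants.

```python
# Sequential sketch of the binary-partition search: scan divider
# positions per dimension and keep one minimising the boundary size
# while both sides retain a constant fraction of the vertices.
def binary_partition(points, rho):
    n = len(points)
    d = len(points[0])
    best, parts = None, None
    for i in range(d):
        coords = sorted(p[i] for p in points)
        lo, hi = coords[n // 4], coords[(3 * n) // 4]   # middle half
        for c in range(lo, hi + 1):
            left = [p for p in points if p[i] < c]
            right = [p for p in points if p[i] > c + rho - 1]
            boundary = [p for p in points if c <= p[i] <= c + rho - 1]
            cand = (len(boundary), i, c)
            if best is None or cand < best:
                best, parts = cand, (left, right, boundary)
    _, dim, c = best
    return dim, c, parts
```

No edge can cross the returned divider, since left and right vertices differ by more than ρ on dimension `dim`.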
The Multi-Partitioning Algorithm.
Given an implicit (d, ρ)-grid graph G = (V, f) with n = |V| and ρ within the bound of Lemma 7, initialize S = ∅ and perform the following steps: repeatedly apply the Binary Partition Lemma (Lemma 7) to every current sub-graph having more than M vertices, add the separator vertices B_D of the obtained ρ-divider to S, and recurse on the two resulting sub-graphs; stop when every sub-graph has at most M vertices.
Lemma 8 ([*]).
The vertex set S, obtained by the above Multi-Partitioning Algorithm, is a pseudo M-separator of the implicit (d, ρ)-grid graph G, where n = |V| and ρ is within the bound of Lemma 7.
Since a constructive algorithm proves existence, we immediately have:
Theorem 9.
For any implicit (d, ρ)-grid graph G with n = |V| and ρ within the bound of Lemma 7, a pseudo M-separator of G must exist.
Remark.
While our Pseudo M-Separator (PsSep) looks similar to the notion of the Orthogonal Vertex Separator (OVSep) proposed by Gan and Tao [22], we emphasize that some crucial differences between them are worth noticing. First, the OVSep was originally proposed for solving problems on (d, 1)-grid graphs (in the External Memory model), whereas our PsSep supports implicit (d, ρ)-grid graphs for ρ up to a non-constant value in the MPC model. This extension, discussed in Section 3, plays a significant role in our O(1)-round algorithm for computing approximate EMST's. Second, and most importantly, it is still unclear how (and appears difficult) to compute the OVSep in O(1) MPC rounds. In contrast, our PsSep is proposed exactly to overcome this technical challenge and can be computed in O(1) MPC rounds. However, as discussed in the next subsection, because of the application of ε-approximation in the construction algorithm of the PsSep, the separator size at each recursion level no longer decreases geometrically, leading to a logarithmic blow-up in the overall separator size of the PsSep. Despite this size blow-up, by setting the parameters carefully, we show that our PsSep still fits in the local memory of one machine, and thus it remains good enough for the purposes of computing CC's on implicit (d, ρ)-grid graphs and solving the approximate EMST and approximate DBSCAN problems in O(1) rounds in the MPC model.
2.2 O(1)-Round Pseudo M-Separator Algorithm
In this subsection, we show an O(1)-round MPC algorithm for computing a pseudo M-separator for implicit (d, ρ)-grid graphs. The Multi-Partitioning Algorithm proves the existence of a pseudo M-separator, but a straightforward simulation of this algorithm in the MPC model is insufficient to achieve O(1) rounds. To overcome this, we resort to the technique of ε-approximation [29, 2].
2.2.1 Preliminaries
Range Space and ε-Approximation.
A range space is a pair Σ = (X, R), where X is a ground set and R is a family of subsets of X. The elements of X are points and the elements of R are ranges. For A ⊆ X, the projection of R on A is R|_A = {r ∩ A : r ∈ R}. Given a range space Σ = (X, R) and ε ∈ (0, 1), a subset A ⊆ X is called an ε-approximation of Σ if for every range r ∈ R, we have | |r ∩ A| / |A| − |r ∩ X| / |X| | ≤ ε.
Fact 1 ([29, 2]).
Consider the range space Σ = (X, R), where the ground set X is a set of n points in d-dimensional space and R is the set of all axis-parallel rectangular ranges. A random sample set A of X with size Θ((1/ε²) · log n) is an ε-approximation of Σ with high probability.
Throughout this paper, we consider axis-parallel rectangular ranges only. For simplicity, we may simply say that A is an ε-approximation of X.
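An empirical illustration of the guarantee in Fact 1, specialised to 1-D interval ranges: in a random sample A of V, every range's sampled fraction stays within ε of its true fraction. The sample size used in the test is a heuristic stand-in for the Θ((1/ε²)·log n) bound, and the function name is ours.

```python
# Check the eps-approximation property over a given family of ranges.
import random

def is_eps_approx(V, A, ranges, eps):
    for lo, hi in ranges:
        true_frac = sum(lo <= x <= hi for x in V) / len(V)
        samp_frac = sum(lo <= x <= hi for x in A) / len(A)
        if abs(true_frac - samp_frac) > eps:
            return False
    return True
```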
2.2.2 Our Pseudo M-Separator Algorithm
Our MPC algorithm comprises several super rounds, each performing the following:
-
obtain an ε-approximation A of V, with ε tuned so that A fits in the local memory of a single machine;
-
perform a batch of "good enough" binary partitions on A in the local memory.
Our algorithm then recurses simultaneously on each of the resulting induced sub-graphs, if applicable, in the next super round. As we will show, each super round decreases the size of every remaining sub-graph by a factor polynomial in M, and hence there can be at most O(1) super rounds. Moreover, by Fact 1, an ε-approximation can be obtained by random sampling, which can be achieved in O(1) MPC rounds; detailed implementations can be found in the appendix of our full paper [23]. Therefore, our algorithm runs in O(1) MPC rounds.
To ensure that an ε-approximation of V fits in the local memory of size M, without loss of generality, we assume in the following that the dimensionality d is at least 2. Otherwise, for the case d = 1, we can "lift" the dimensionality to 2 by adding a dummy dimension to all the points in V; the bounds derived for d = 2 then apply.
The challenge lies in how to find a good enough (compared to that in Lemma 7) ρ-divider to separate V in local memory with access to an ε-approximation of V only.
Lemma 10 ([*]).
Given V with n = |V| and ρ within the bound of Lemma 7, with access only to an ε-approximation A of V (for a suitably small ε), we can find a ρ-divider D in MBS(V) which satisfies:
-
|V_1| = Ω(n/d) and |V_2| = Ω(n/d),
-
|B_D| = O(ρ · n^{1−1/d}).
Lemma 10 suggests that, with access only to an ε-approximation A of V, we can compute a good enough ρ-divider D: it separates V into two parts, each of size Ω(n/d), while the size of the separator is still bounded by O(ρ · n^{1−1/d}), the same as in Lemma 7.
By a more careful analysis, interestingly, one can show that a random sample set A of size Θ(M) is, with high probability, an ε-approximation for a suitable ε. This ε-approximation property of A is sufficient to invoke Lemma 10 multiple times, i.e., to make multiple partitions, with A residing in local memory. We formalise this with the following crucial lemma.
Lemma 11 ([*]).
Consider an ε-approximation A of the range space Σ = (V, R), where V is the point set and R is the set of all possible axis-parallel rectangular ranges in d-dimensional space. Let r be an arbitrary range in R. Define V' = V ∩ r and A' = A ∩ r.
If |A'| is sufficiently large relative to ε · |A|, then we have: (i) A' is an ε'-approximation of V' for a suitably larger ε', and (ii) |V'| can be estimated from |A'| up to small error. Therefore, Lemma 10 is applicable on V' and A'.
Our MPC Algorithm for Computing the Pseudo M-Separator
One Super Round Implementation.
Based on Lemma 11, given an implicit (d, ρ)-grid graph G = (V, f), the implementation of one super round is shown in Algorithm 1.
The Overall Pseudo M-Separator Algorithm.
Given an implicit (d, ρ)-grid graph G = (V, f),
-
perform a new super round simultaneously on each of the current induced sub-graphs G_i with |V_i| > M;
-
terminate when no more super rounds can be performed, in which case all the induced sub-graphs have no more than M vertices.
Lemma 12 ([*]).
After a super round, the input implicit (d, ρ)-grid graph G is separated into pairwise-disconnected induced sub-graphs G_i, for i ∈ [t], such that each |V_i| is smaller than |V| by a factor polynomial in M.
Lemma 13 ([*]).
When our Overall Pseudo M-Separator algorithm terminates, let S be the set of all the vertices of V falling in the target ρ-dividers found in the applications of Lemma 10. The set S is a pseudo M-separator of the input implicit (d, ρ)-grid graph G.
Lemma 14 ([*]).
The total number of super rounds of our overall pseudo M-separator algorithm is bounded by O(1), and so is the total number of MPC rounds.
Putting the above lemmas together, we have:
Theorem 15.
Given an implicit (d, ρ)-grid graph G = (V, f) with n = |V| and ρ within the bound of Lemma 7, there exists a Las Vegas MPC algorithm with M = Θ(n^α) local memory per machine, for all constant α ∈ (0, 1), which computes a pseudo M-separator of G in O(1) rounds with high probability.
2.3 Grid Graph Connectivity Algorithms
Next, we introduce our MPC algorithms for computing CC's and MSF's on implicit (d, ρ)-grid graphs.
Bounding the Feasible Range for α.
Recall that Theorem 15 shows that a pseudo M-separator S of G can be computed in O(1) rounds with high probability, and that this algorithm works for all constant α ∈ (0, 1). However, as we shall see shortly, both our CC and MSF algorithms require the pseudo M-separator S to fit in the local memory of a single machine; as a result, they require |S| = O(M). The proofs of Lemma 10 and Lemma 13 give a bound on |S| in terms of n, ρ and d. Recalling that M = Θ(n^α), our algorithm works when α is at least a threshold determined by d (for d ≥ 2); combining the case of d = 1, a feasible range of α is obtained.
Connectivity and Minimum Spanning Forest.
Consider a pseudo M-separator S of G; let G_1, …, G_t be the sub-graphs of G induced by removing S, where t is the number of sub-graphs. By Definition 6, |V_i| = O(M) for all i ∈ [t] and |S| = O(M). For each G_i, denote the projection of its minimum bounding space on dimension j by [l_j, r_j], for j ∈ [d]. Consider the hyper-box R_i whose projection on each dimension j is [l_j − ρ, r_j + ρ]; we say that R_i is the extended region of G_i. Moreover, we define the extended graph of G_i, denoted by G_i^+, as the graph induced by the vertices in V_i ∪ S_i, where S_i ⊆ S is the set of separator vertices falling in the extended region of G_i. Figure 1 shows an example of R_i and G_i^+. Clearly, |V_i ∪ S_i| = O(M), and hence, by choosing a suitable constant in the memory bound, G_i^+ fits in the local memory of one machine. Moreover, without loss of generality, we assume that G_i^+ is stored completely in a machine M_i.
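The stitching idea behind the CC algorithm can be sketched sequentially: each sub-graph reports local component labels, and the edges incident to separator vertices are used to merge labels with a union-find, mirroring how the separator (which fits on one machine) glues components together. All names here are ours.

```python
# Merge local component labels across sub-graphs via separator edges.
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def stitch(local_labels, cross_edges):
    """local_labels: vertex -> label from its sub-graph's local CC run.
    cross_edges: edges with at least one separator endpoint."""
    uf = UnionFind()
    for u, v in cross_edges:
        uf.union(local_labels[u], local_labels[v])
    return {v: uf.find(lab) for v, lab in local_labels.items()}
```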
By incorporating the techniques of the existing External Memory CC and MSF algorithms for grid graphs [22] (the detailed MPC implementations are shown in Algorithm 2 and Algorithm 3 in Appendix A, respectively), we have the following theorem:
Theorem 16 ([*]).
Given an implicit (edge-weighted) (d, ρ)-grid graph G with n = |V|, there exists a Las Vegas MPC algorithm with M = Θ(n^α) local memory per machine, for an arbitrary constant α in the feasible range discussed above, which computes all the connected components (resp., a minimum spanning forest) of G in O(1) rounds with high probability.
3 O(1)-Round Approximate EMST
An Overview of Our (1+ε)-Approximate EMST Algorithm.
The basic idea of our (1+ε)-approximate EMST algorithm is to mimic Kruskal's algorithm. Specifically, our algorithm first uses "short" edges to connect the connected components found so far, and then gradually considers "longer" edges. Our algorithm runs in h = O(1) super rounds, each of which takes O(1) MPC rounds. In the j-th super round, our algorithm constructs a representative implicit grid graph G_j which only considers edges of weight at most a threshold l_j, where l_1 < l_2 < ⋯ < l_h grow geometrically. Next, it invokes our O(1)-round MSF algorithm on G_j, and then starts a new super round, repeating this process until a spanning tree is obtained.
The Algorithm Steps.
Consider a set P of n points in d-dimensional space with coordinate values in [Δ]. Let h = O(1) be the number of super rounds and let l_1 < l_2 < ⋯ < l_h be the corresponding length thresholds. Our MPC (1+ε)-approximate EMST algorithm constructs a series of implicit grid graphs, denoted by G_j = (V_j, f_j), for j ∈ [h].
-
Augmented Node Information: each vertex u in V_j is associated with two pieces of O(1)-size information: (i) a connected-component id of u, denoted by cid(u), and (ii) the original point which u currently represents, denoted by rep(u).
-
Edge Formation Rule: for any two distinct nodes u, v ∈ V_j, f_j works as follows: the edge (u, v) exists if and only if the Euclidean distance dist(u, v) ≤ l_j; if it exists, its weight w(u, v) is 0 if cid(u) = cid(v), and dist(u, v) otherwise.
Initially, V_1 = P, and for each node u ∈ V_1, cid(u) is the id of the point and rep(u) is the point itself. Our (1+ε)-approximate EMST algorithm runs in h super rounds; in the j-th super round (for j ∈ [h]), it works as follows:
-
Solve Stage: MSF Computation on G_j:
–
invoke our O(1)-round MPC algorithm (in Theorem 16) to compute an MSF of G_j. In particular, let cid'(u) be a temporary variable denoting the current connected-component id of u in G_j. Observe the difference between cid(u) and cid'(u): the former records the connected-component id of u in the previous super round, and is used to determine the edge weights in G_j in the current round;
–
let E_j be the set of edges in the MSF of G_j; remove the edges with zero weight from E_j;
–
initialize T_j = ∅; for each edge (u, v) ∈ E_j, add the edge (rep(u), rep(v)) to T_j;
–
keep track of the total number of edges added to the output sets T_1, T_2, …; when, in total, n − 1 edges have been added, stop and return their union as a (1+ε)-approximate EMST of P;
-
Sketch Stage: Computing G_{j+1} from G_j:
–
initialize V_{j+1} = ∅; impose a grid on the space with cells of side length Θ(ε · l_j);
–
for each non-empty grid cell C, add the bottom-left corner point u of C to V_{j+1}; moreover, set cid(u) = cid'(v) and rep(u) = rep(v), where v is an arbitrary node of V_j in C.
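The super-round loop above can be mocked up sequentially as follows. This is a sketch under simplifying assumptions, not the paper's exact MPC procedure: a doubling length schedule, grid side ε·l_j, and a plain Kruskal pass in the Solve stage.

```python
# Sequential mock-up of the super-round loop: round j adds MSF edges
# of length at most l between different components (Solve stage), then
# snaps the surviving representatives to a grid of side eps * l and
# doubles l (Sketch stage). Requires eps <= 1/sqrt(d) so that a cell's
# diameter is at most l, i.e., all nodes of a cell already share a
# component after the Solve stage.
import math
from itertools import combinations

def approx_emst(points, eps=0.5):
    n = len(points)
    parent = list(range(n))                  # union-find over originals
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    out, l = [], 1.0
    # each node: (coordinates, component index, original representative)
    nodes = [(p, i, i) for i, p in enumerate(points)]
    while len(out) < n - 1:
        # Solve stage: a Kruskal pass over "short" edges only.
        cand = sorted((math.dist(a[0], b[0]), a, b)
                      for a, b in combinations(nodes, 2))
        for w, a, b in cand:
            if w > l:
                break
            ra, rb = find(a[1]), find(b[1])
            if ra != rb:
                parent[ra] = rb
                out.append((a[2], b[2]))     # edge between originals
        if len(out) == n - 1:
            break
        # Sketch stage: keep one node per non-empty grid cell.
        side = eps * l
        cells = {}
        for coord, c, orig in nodes:
            cell = tuple(math.floor(x / side) for x in coord)
            if cell not in cells:
                snapped = tuple(v * side for v in cell)
                cells[cell] = (snapped, find(c), orig)
        nodes = list(cells.values())
        l *= 2
    return out
```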
MPC Implementation Details.
-
At the beginning of the Solve Stage, the vertex set V_j is distributed across the machines. Thus, the MSF algorithm of Theorem 16 is invoked on G_j, and the resulting set of edges, E_j, is stored in a distributed fashion across the machines. By local computations, the edges in E_j can be converted to the edges in T_j.
-
To implement the Sketch Stage (i.e., computing G_{j+1} from G_j), each machine first computes the grid-cell coordinates of each vertex of V_j in its local memory. Our algorithm then sorts all the vertices in V_j by their grid-cell coordinates. After sorting, all the vertices falling in the same grid cell are stored either in just one machine or in a contiguous sequence of machines. Next, each machine generates the bottom-left corner point u, according to our algorithm, for each grid cell in its local memory. Duplicate points are then removed by Duplicate Removal, an atomic MPC operation whose implementation details can be found in Appendix A. The vertex set V_{j+1} for the next super round is thus constructed.
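The Sketch stage hinges on sorting by grid-cell id: after sorting, all vertices of one cell are contiguous, so duplicate removal is a local scan. A sequential stand-in for that sort-then-dedup pattern (function name ours):

```python
# Sort points by grid cell, then keep one corner point per cell.
import math

def dedup_by_cell(points, side):
    out, last = [], None
    for cell, p in sorted((tuple(math.floor(c / side) for c in p), p)
                          for p in points):
        if cell != last:                     # first vertex of this cell
            out.append(tuple(c * side for c in cell))  # cell corner
            last = cell
    return out
```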
A Running Example.
Figure 2 gives a running example of our approximate EMST algorithm. The input set P contains nine points in 2-dimensional space with integer coordinates. Our algorithm solves this problem in three super rounds. The top row of the figures shows the edges that are labeled to be in the result edge set at the end of round j. The bottom row shows the vertex set of G_j and the MSF of G_j.
In this example, h = 3. In the first round, our algorithm constructs G_1. The vertex set V_1 with the augmented node information is shown in the first figure at the bottom. The MSF of G_1 is computed and its edges are shown in the figure. After the computation of the MSF, each vertex u computes its component id cid'(u). All the zero-weight edges are deleted from the MSF and the remaining edges are highlighted in red. For each remaining edge (u, v), its corresponding edge (rep(u), rep(v)) is marked as an output edge, i.e., an edge in the approximate EMST returned by our algorithm, as shown in the first top-row figure. Next, the Sketch Stage starts, and G_2 is constructed as follows: a grid with side length Θ(ε · l_1) is imposed, and V_2 is generated, as shown in the second bottom-row figure. At the end of the second round, two new edges are added to the final output edge set of the approximate EMST. The same process is repeated with G_3 in the third round, and one more edge is added. Since the output now contains in total eight edges, it is returned as an approximate EMST of the input nine points.
Remark on G_j.
From the construction of G_j in our algorithm, at first glance, G_j is an implicit grid graph whose vertices do not have integer coordinates, which looks ineligible for our MSF algorithm proposed in Theorem 16. However, observe that the coordinates of the nodes in V_j are all multiples of the grid side length; therefore, by a simple scaling, G_j is indeed an implicit (d, ρ)-grid graph.
Lemma 17 ([*]).
Our (1+ε)-approximate EMST algorithm takes O(1) MPC rounds with high probability.
Let T be the union of T_j over all j ∈ [h]. We prove that T is a (1+2ε)-approximate EMST of P, achieving both overall and edge-wise approximation; by decreasing the constant approximation parameter by half, a (1+ε)-approximate EMST can be obtained.
Lemma 18 ([*]).
T is a Euclidean spanning tree of P.
Lemma 19 (Edge-Wise Approximation Guarantee [*]).
Consider an exact EMST T* of P; for any edge (u, v) ∈ T*, there must exist a path from u to v in T such that every edge on this path has weight at most (1+2ε) · dist(u, v).
Lemma 20 (Overall Approximation Guarantee [*]).
Consider an exact EMST of ; the total edge weight of is at most times the total edge weight of .
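Since the inline formulas above were elided in extraction, the two guarantees of Lemmas 19 and 20 can be restated in notation assumed here (our own symbols, not taken verbatim from the source): write for the tree returned by our algorithm, for an exact EMST of the input , for Euclidean edge weight, and for the constant approximation parameter.

```latex
% Assumed notation: T is the returned spanning tree, T* an exact
% EMST of the input point set P, w(.) the Euclidean edge weight,
% and \varepsilon the constant approximation parameter.
%
% Edge-wise guarantee (cf. Lemma 19): for every edge (u,v) of T*,
% T contains a u-v path on which every edge e satisfies
\[
  w(e) \;\le\; (1+\varepsilon)\,\lVert u - v \rVert_2 .
\]
% Overall guarantee (cf. Lemma 20):
\[
  w(T) \;\le\; (1+\varepsilon)\, w(T^{\ast}),
  \qquad\text{where } w(T) = \sum_{e \in T} w(e).
\]
```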
4 -Round Approximate DBSCAN
4.1 DBSCAN and -Approximate DBSCAN
Core and Non-Core.
Let be a set of points in -dimensional Euclidean space , where the dimensionality is a constant. For any two points , the Euclidean distance between and is denoted by . There are two parameters: (i) a radius and (ii) a core point threshold , which is a constant integer. For a point , define as the ball centred at with radius . If covers at least points in , then is a core point; otherwise, is a non-core point.
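As a point of reference for this definition, here is a minimal sequential sketch of the core/non-core test (not our MPC algorithm). The names `is_core`, `eps`, and `min_pts` are our own, standing in for the elided radius and core-point-threshold symbols.

```python
import math

def is_core(p, points, eps, min_pts):
    """p is a core point iff the ball B(p, eps) covers at least
    min_pts points of the input (p itself included)."""
    return sum(1 for q in points if math.dist(p, q) <= eps) >= min_pts

# With eps = 1.5 and min_pts = 3, a point in a tight trio is a core
# point, while an isolated point is not.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0)]
```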
Primitive Clusters and Clusters.
Consider an implicit core graph , where is the set of all the core points in , and an edge exists between two distinct core points and if . Each connected component of is defined as a primitive cluster. For a primitive cluster , let be the set of all the non-core points such that . The union of is defined as a DBSCAN cluster.
The DBSCAN Problem.
Given a set of points , a radius and a core threshold , the goal of the DBSCAN problem is to compute all the DBSCAN clusters.
-Approximate DBSCAN.
Given an extra approximation parameter , the only difference from the exact version lies in the definition of the primitive clusters. Specifically, consider a conceptual -approximate core graph , where the edge formation rule works as follows. For any two core points : if , must report that the edge is in ; if , must report that the edge is not in ; otherwise, the answer of does not matter.
Due to the does-not-matter case, is not unique. Consider a graph ; each connected component of is defined as a primitive cluster with respect to . An -approximate DBSCAN cluster is then obtained by adding non-core points to a primitive cluster in the same way as in the exact version. The goal of the -approximate DBSCAN problem is to compute the -approximate DBSCAN clusters with respect to an arbitrary .
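To make the does-not-matter case concrete, the following sketch gives one legal instantiation of the approximate edge rule; the function name and parameters (`eps`, `rho`) are our own. Any predicate that accepts all pairs within the radius and rejects all pairs beyond the radius times one plus the approximation parameter is equally legal.

```python
import math

def approx_edge(p, q, eps, rho):
    """One legal instantiation of the approximate edge rule: report
    an edge for every pair within eps * (1 + rho). Pairs within eps
    are accepted (as required), pairs beyond eps * (1 + rho) are
    rejected (as required), and in-between pairs are accepted, which
    is one of the two allowed answers."""
    return math.dist(p, q) <= eps * (1 + rho)
```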
4.2 Our MPC Algorithm
Our approximate DBSCAN algorithm consists of three main steps: (i) identify the core points; (ii) compute the primitive clusters from a ; and (iii) assign non-core points to primitive clusters. As shown in the existing -approximate DBSCAN algorithm in [21], the third step, assigning non-core points, can be achieved by an algorithm analogous to core point identification. Next, we give an MPC algorithm for the first two steps.
4.2.1 Identify Core Points
We first show how to identify all the core points of a DBSCAN problem instance in MPC rounds. Our goal is to label each point according to whether or not it is a core point. In the following, we assume that the constant parameter in the MPC model satisfies , and hence , that is, the number of machines is no more than the local memory size .
Consider a grid imposed on the data space with grid-cell length . Each grid cell is represented by the coordinates of its left-most corner; namely, each grid cell is essentially a hypercube . Each point is contained in exactly one grid cell, denoted by . The size of a grid cell , denoted by , is defined as the number of points of contained in . If , then is a non-empty grid cell. If a non-empty grid cell contains at least points of , then is a dense grid cell; otherwise, is a sparse grid cell. Given a non-empty grid cell , the neighbour grid cell set of is defined as (note that and are not necessarily points in ). A grid cell in is called a neighbour grid cell of . It can be verified that [20, 22].
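A sequential sketch of the grid bookkeeping in this paragraph: the helper names are ours, and the grid-cell side length (elided above) is taken as an explicit parameter.

```python
from collections import defaultdict

def cell_of(p, side):
    """Cell coordinates of the grid cell (a hypercube of the given
    side length) that contains point p."""
    return tuple(int(c // side) for c in p)

def classify_cells(points, side, min_pts):
    """Group points by their cell, then split the non-empty cells
    into dense (holding at least min_pts points) and sparse ones."""
    cells = defaultdict(list)
    for p in points:
        cells[cell_of(p, side)].append(p)
    dense = {c for c, ps in cells.items() if len(ps) >= min_pts}
    return dict(cells), dense, set(cells) - dense
```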
Our core point identification algorithm contains the following three steps:
Put Points into Grid Cells.
Given a point , the coordinates of the grid cell that contains can be calculated from the coordinates of . Define as the set of all the points of falling in . By sorting all the points in by the coordinates of their grid cells in lexicographical order, which can be achieved in MPC rounds [24], all the points in can be arranged in the machines with the following properties:
- Each machine stores points from consecutive (in the sorted order) non-empty grid cells. These non-empty grid cells are called the support grid cells of the machine.
- The points in the same grid cell are stored either in only one machine or in a contiguous sequence of machines. Thus, the IDs of the machines that store – the storage locations of the grid cell – form an interval and can be represented by the left and right endpoints (i.e., the smallest and largest machine IDs) of this interval in words.
- The number of support grid cells of each machine is bounded by , and the total number of non-empty grid cells is bounded by .
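The key consequence of the lexicographical sort is the second property: each cell's points occupy a contiguous interval of machines. A sequential sketch (names are ours; machines are simulated by fixed-capacity slots of sorted point positions):

```python
def storage_intervals(sorted_cells, capacity):
    """sorted_cells: one cell id per point, already in lexicographic
    order. Points are packed into machines of the given capacity;
    because the order is sorted, the machines holding any one cell
    form a contiguous interval [first, last] of machine IDs."""
    intervals = {}
    for idx, cell in enumerate(sorted_cells):
        m = idx // capacity  # machine holding the idx-th point
        lo, hi = intervals.get(cell, (m, m))
        intervals[cell] = (min(lo, m), max(hi, m))
    return intervals
```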
Calculate Non-empty Neighbour Grid Cell Storage Locations.
In this step, for each non-empty grid cell, we aim to compute the storage locations of all its non-empty neighbour grid cells. Since the size of is bounded by , it takes space to store the location intervals of all its (possibly empty) neighbour grid cells.
At a high level, our algorithm (Algorithm 4 in Appendix A) takes four super rounds to compute the non-empty neighbour grid cell storage locations, as follows:
- Super Round 1: For each support grid cell in a machine , and for each of its possible neighbours , our algorithm builds a 4-tuple , where denotes the coordinates of grid cell , denotes the machine ID, and the term is a placeholder with value NULL at this stage, to be filled with the storage location of if is non-empty. Next, these 4-tuples are sorted by the third term, , across all machines. Therefore, at the end of this super round, each machine stores a subset of these -tuples in its memory.
- Super Round 2: Each machine broadcasts the minimal and maximal values of the -tuples it stores, denoted by . As a result, at the end of this super round, each machine knows the value range of every other machine.
- Super Round 3: Each machine sends a -tuple , for each of its support grid cells , to those machines having . Specifically, the second term of the -tuple, , is the storage location of , which was computed in the previous “put points into grid cells” phase.
- Super Round 4: After receiving all the in Super Round 3, the placeholder (the fourth term ) in each -tuple stored in machine can now be filled properly in ’s local memory. Each of the resulting -tuples is then sent back to machine accordingly.
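The net effect of the four super rounds is a mapping from each non-empty cell to the storage intervals of its non-empty neighbours. A sequential stand-in that computes the same mapping directly (the distributed tuple exchange above is what makes this fit the per-machine memory bounds; the names are ours, and we assume here the one-per-axis neighbourhood, while the paper's neighbour set definition is elided above and may be wider):

```python
from itertools import product

def neighbour_locations(intervals, d=2):
    """For every non-empty cell (a key of `intervals`), collect the
    storage interval of each non-empty cell whose coordinates differ
    by at most one in every dimension."""
    out = {}
    for cell in intervals:
        nbrs = {}
        for delta in product((-1, 0, 1), repeat=d):
            if all(dx == 0 for dx in delta):
                continue  # skip the cell itself
            nb = tuple(x + dx for x, dx in zip(cell, delta))
            if nb in intervals:  # only non-empty neighbours
                nbrs[nb] = intervals[nb]
        out[cell] = nbrs
    return out
```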
Lemma 21 ([*]).
The above algorithm, i.e., Algorithm 4, takes rounds. In each round, the amount of data sent and received by each machine is bounded by .
Output the Point Labels.
Algorithm 5 in Appendix A gives a detailed implementation, and a running example of Algorithm 5 is shown in Figure 3. This algorithm labels each point in according to whether or not it is a core point. If a grid cell is dense, i.e., , all the points in are labelled as core points. For each point in each sparse grid cell (i.e., with ) in each machine , we count the number of points of in the ball centred at with radius , denoted by . Observe that a point falls in only if either and are in the same grid cell or is a neighbour grid cell of . Our algorithm therefore sends to all the storage locations of (i.e., the machines storing) the neighbour grid cells of . In the local memory of each of those machines, the distances between and all the points in those neighbour grid cells can be computed. The numbers of points falling in found in those machines are then sent back to machine (where is stored). Therefore, the points in sparse cells can be labelled accordingly.
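A sequential sketch of this counting step. For simplicity it counts the ball population for every point (not only sparse-cell points) and assumes a cell side equal to the radius, so that all points within the radius lie in cells whose coordinates differ by at most one per axis; the paper's dense-cell shortcut and its (elided) cell side are not reproduced here, and all names are ours.

```python
import math
from itertools import product

def core_labels(cells, eps, min_pts, d=2):
    """For each point, count the input points inside B(p, eps),
    looking only at the point's own cell and the 3^d - 1 cells
    around it (valid when the cell side is eps)."""
    labels = {}
    for c, ps in cells.items():
        for p in ps:
            count = 0
            for delta in product((-1, 0, 1), repeat=d):
                nb = tuple(x + dx for x, dx in zip(c, delta))
                count += sum(1 for q in cells.get(nb, ())
                             if math.dist(p, q) <= eps)
            labels[p] = count >= min_pts
    return labels
```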
Lemma 22 ([*]).
The above algorithm, i.e., Algorithm 5, performs two MPC rounds. In each round, the amount of data sent and received by each machine is bounded by .
Lemma 23 (Core Point Identification [*]).
Consider a set of points; given a radius parameter and a core point threshold, , which is a constant integer, there exists an MPC algorithm with local memory per machine, for arbitrary constant , which identifies all the core points in rounds.
4.2.2 Compute Primitive Clusters
Our algorithm works as follows:
- construct an implicit -grid graph :
– initialize ; impose a grid on with side length ;
– for each non-empty grid cell , add the bottom-left corner point of to ;
– set to consider only those edges with , and to return as its weight; as a result, is -penetration with ;
- invoke our MPC algorithm in Theorem 16 on the implicit -grid graph to compute an MSF of , and associate a connected component ID with each point ;
- for each point , assign ’s connected component ID to each core point in the grid cell that represents;
- group all the core points in by their connected component IDs;
- return each group as a primitive cluster; denote the set of these primitive clusters by .
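A sequential sketch of the steps above, with union-find standing in for the MSF-based connected component computation of Theorem 16. Connecting two cells whenever some pair of their core points is within the exact radius is one legal choice for the approximate core graph; the names and the quadratic cell scan are ours.

```python
import math
from collections import defaultdict

def primitive_clusters(core_points, eps, side):
    """Group core points by grid cell, connect two cells when some
    pair of their points is within eps, and return the connected
    components (the primitive clusters) via union-find."""
    cells = defaultdict(list)
    for p in core_points:
        cells[tuple(int(c // side) for c in p)].append(p)

    ids = {c: i for i, c in enumerate(cells)}
    parent = list(range(len(ids)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    cell_list = list(cells)
    for i, c1 in enumerate(cell_list):
        for c2 in cell_list[i + 1:]:
            if any(math.dist(p, q) <= eps
                   for p in cells[c1] for q in cells[c2]):
                parent[find(ids[c1])] = find(ids[c2])  # union

    groups = defaultdict(list)
    for c, ps in cells.items():
        groups[find(ids[c])].extend(ps)
    return list(groups.values())
```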
Lemma 24 ([*]).
The above primitive cluster computing algorithm returns legal -approximate primitive clusters in rounds with high probability.
5 Conclusion
In this paper, we study the problems of grid graph connectivity, approximate EMST, and approximate DBSCAN in the MPC model with strictly sublinear local memory per machine. We present Las Vegas algorithms (succeeding with high probability), which can be derandomized, that solve these problems in rounds. Given the importance of the problems studied in this paper, we believe that our -round MPC algorithms and the pseudo -separator technique will be of independent interest for solving other related problems in the MPC model.
References
- [1] Pankaj K Agarwal, Herbert Edelsbrunner, Otfried Schwarzkopf, and Emo Welzl. Euclidean minimum spanning trees and bichromatic closest pairs. In Proceedings of the sixth annual symposium on Computational geometry, pages 203–210, 1990. doi:10.1145/98524.98567.
- [2] Pankaj K. Agarwal, Kyle Fox, Kamesh Munagala, and Abhinandan Nath. Parallel algorithms for constructing range and nearest-neighbor searching data structures. In Proceedings of the 35th ACM PODS 2016, pages 429–440. ACM, 2016. doi:10.1145/2902251.2902303.
- [3] Alok Aggarwal and Jeffrey Scott Vitter. The input/output complexity of sorting and related problems. Commun. ACM, 31(9):1116–1127, 1988. doi:10.1145/48529.48535.
- [4] Rakesh Agrawal, Johannes Gehrke, Dimitrios Gunopulos, and Prabhakar Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. In Proceedings of the 1998 ACM SIGMOD international conference on Management of data, 1998.
- [5] Alexandr Andoni, Aleksandar Nikolov, Krzysztof Onak, and Grigory Yaroslavtsev. Parallel algorithms for geometric graph problems. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 574–583. ACM, 2014. doi:10.1145/2591796.2591805.
- [6] Alexandr Andoni, Zhao Song, Clifford Stein, Zhengyu Wang, and Peilin Zhong. Parallel graph connectivity in log diameter rounds. In Mikkel Thorup, editor, 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, 2018. doi:10.1109/FOCS.2018.00070.
- [7] Sunil Arya, David M Mount, Nathan S Netanyahu, Ruth Silverman, and Angela Y Wu. An optimal algorithm for approximate nearest neighbor searching fixed dimensions. Journal of the ACM (JACM), 45(6):891–923, 1998. doi:10.1145/293347.293348.
- [8] Sepehr Assadi, Xiaorui Sun, and Omri Weinstein. Massively parallel algorithms for finding well-connected components in sparse graphs. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC 2019, 2019.
- [9] Paul Beame, Paraschos Koutris, and Dan Suciu. Communication steps for parallel query processing. Journal of the ACM (JACM), 64(6):1–58, 2017. doi:10.1145/3125644.
- [10] Soheil Behnezhad, Laxman Dhulipala, Hossein Esfandiari, Jakub Lacki, and Vahab Mirrokni. Near-optimal massively parallel graph connectivity. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 1615–1636. IEEE, 2019. doi:10.1109/FOCS.2019.00095.
- [11] Patrick Burger and Hans-Joachim Wuensche. Fast multi-pass 3d point segmentation based on a structured mesh graph for ground vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 2150–2156. IEEE, 2018. doi:10.1109/IVS.2018.8500552.
- [12] Brent N Clark, Charles J Colbourn, and David S Johnson. Unit disk graphs. Discrete mathematics, 86(1-3):165–177, 1990. doi:10.1016/0012-365X(90)90358-O.
- [13] Sam Coy and Artur Czumaj. Deterministic massively parallel connectivity. In STOC ’22: 54th Annual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20 - 24, 2022, pages 162–175. ACM, 2022. doi:10.1145/3519935.3520055.
- [14] Sam Coy, Artur Czumaj, and Gopinath Mishra. On parallel k-center clustering. In Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2023, Orlando, FL, USA, June 17-19, 2023, pages 65–75. ACM, 2023. doi:10.1145/3558481.3591075.
- [15] Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý. Fully scalable MPC algorithms for clustering in high dimension. CoRR, abs/2307.07848, 2023. doi:10.48550/arXiv.2307.07848.
- [16] Artur Czumaj, Christiane Lammersen, Morteza Monemizadeh, and Christian Sohler. (1+ε)-approximation for facility location in data streams. In Proceedings of SODA 2013, 2013.
- [17] Jeffrey Dean and Sanjay Ghemawat. Mapreduce: simplified data processing on large clusters. Commun. ACM, 51(1):107–113, 2008. doi:10.1145/1327452.1327492.
- [18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. arXiv:1810.04805.
- [19] Hu Ding and Jinhui Xu. Solving the chromatic cone clustering problem via minimum spanning sphere. In International Colloquium on Automata, Languages, and Programming, pages 773–784. Springer, 2011. doi:10.1007/978-3-642-22006-7_65.
- [20] Junhao Gan and Yufei Tao. Dbscan revisited: Mis-claim, un-fixability, and approximation. In Proceedings of the 2015 ACM SIGMOD international conference on management of data, pages 519–530, 2015. doi:10.1145/2723372.2737792.
- [21] Junhao Gan and Yufei Tao. On the hardness and approximation of euclidean dbscan. ACM Transactions on Database Systems (TODS), 42(3):1–45, 2017. doi:10.1145/3083897.
- [22] Junhao Gan and Yufei Tao. An i/o-efficient algorithm for computing vertex separators on multi-dimensional grid graphs and its applications. J. Graph Algorithms Appl. (JGAA), 22(2):297–327, 2018. doi:10.7155/JGAA.00471.
- [23] Junhao Gan, Anthony Wirth, and Zhuo Zhang. -round MPC algorithms for multi-dimensional grid graph connectivity, EMST and DBSCAN, 2025. arXiv:2501.12044.
- [24] Michael T Goodrich, Nodari Sitchinava, and Qin Zhang. Sorting, searching, and simulation in the mapreduce framework. In International Symposium on Algorithms and Computation, pages 374–383. Springer, 2011. doi:10.1007/978-3-642-25591-5_39.
- [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society, 2016. doi:10.1109/CVPR.2016.90.
- [26] Piotr Indyk. Algorithms for dynamic geometric problems over data streams. In Proceedings of the thirty-sixth annual ACM Symposium on Theory of Computing, pages 373–380, 2004. doi:10.1145/1007352.1007413.
- [27] Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly. Dryad: distributed data-parallel programs from sequential building blocks. In Proceedings of the 2007 EuroSys Conference, Lisbon, Portugal, March 21-23, 2007, pages 59–72. ACM, 2007. doi:10.1145/1272996.1273005.
- [28] Howard J. Karloff, Siddharth Suri, and Sergei Vassilvitskii. A model of computation for mapreduce. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas, USA, January 17-19, 2010, 2010. doi:10.1137/1.9781611973075.76.
- [29] Yi Li, Philip M. Long, and Aravind Srinivasan. Improved bounds on the sample complexity of learning. J. Comput. Syst. Sci., 62(3):516–527, 2001. doi:10.1006/JCSS.2000.1741.
- [30] Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings, 2013. URL: http://arxiv.org/abs/1301.3781.
- [31] Morteza Monemizadeh. Facility location in the sublinear geometric model. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023), 2023.
- [32] Danupon Nanongkai and Michele Scquizzato. Equivalence classes and conditional hardness in massively parallel computations. Distributed computing, 35(2):165–183, 2022. doi:10.1007/S00446-021-00418-2.
- [33] Abhinandan Nath, Kyle Fox, Pankaj K Agarwal, and Kamesh Munagala. Massively parallel algorithms for computing tin dems and contour trees for large terrains. In Proceedings of the 24th ACM SIGSPATIAL, 2016.
- [34] Tom White. Hadoop - The Definitive Guide: Storage and Analysis at Internet Scale (3. ed., revised and updated). O’Reilly, 2012. URL: http://www.oreilly.de/catalog/9781449311520/index.html.
- [35] Jie Xue. Colored range closest-pair problem under general distance functions. In Proceedings of ACM-SIAM SODA, pages 373–390. SIAM, 2019. doi:10.1137/1.9781611975482.24.
- [36] Grigory Yaroslavtsev and Adithya Vadapalli. Massively parallel algorithms and hardness for single-linkage clustering under lp-distances. In 35th International Conference on Machine Learning (ICML’18), 2018.
- [37] Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, and Ion Stoica. Spark: Cluster computing with working sets. In 2nd USENIX Workshop on Hot Topics in Cloud Computing, HotCloud’10, Boston, MA, USA, June 22, 2010, 2010. URL: https://www.usenix.org/conference/hotcloud-10/spark-cluster-computing-working-sets.
- [38] Yan Zhou, Oleksandr Grygorash, and Thomas F Hain. Clustering with minimum spanning trees. International Journal on Artificial Intelligence Tools, 20(01):139–177, 2011. doi:10.1142/S0218213011000061.
Appendix A Appendix
Here, we include pseudocode for the main algorithms in our paper.