Fitting Tree Metrics and Ultrametrics in Data Streams
Abstract
Fitting distances to tree metrics and ultrametrics are two widely used methods in hierarchical clustering, primarily explored within the context of numerical taxonomy. Formally, given a positive distance function $D$ over a set of points, the goal is to find a tree (or an ultrametric) $T$ including all elements of the set, such that the difference between the distances among vertices in $T$ and those specified by $D$ is minimized. Numerical taxonomy was first introduced by Sneath and Sokal [Nature 1962], and since then it has been studied extensively in both biology and computer science.
In this paper, we initiate the study of ultrametric and tree metric fitting problems in the semi-streaming model, where the distances between pairs of elements of a set $V$ (with $|V| = n$), defined by the function $D$, can arrive in an arbitrary order. We study these problems under various distance norms: the $\ell_0$ objective, which aims to minimize the number of modified entries of $D$ needed to fit a tree metric or an ultrametric; the $\ell_1$ objective, which seeks to minimize the total sum of distance errors across all pairs of points; and the $\ell_\infty$ objective, which focuses on minimizing the maximum error incurred by any entry of $D$.
-
Our first result addresses the $\ell_0$ objective. We provide a single-pass polynomial-time $\tilde{O}(n)$-space $O(1)$-approximation algorithm for ultrametrics and prove that no single-pass exact algorithm exists, even with exponential time.
-
Next, we show that the algorithm for $\ell_0$ implies an $O(\Delta/\delta)$-approximation for the $\ell_1$ objective, where $\Delta$ is the maximum and $\delta$ is the minimum absolute difference between distances in the input. This bound matches the best-known approximation achieved by a combinatorial algorithm in the RAM model whenever $\Delta/\delta = O(\log n \log\log n)$.
-
For the $\ell_\infty$ objective, we provide a complete characterization of the ultrametric fitting problem. First, we present a single-pass polynomial-time $\tilde{O}(n)$-space 2-approximation algorithm and show that no better than a 2-approximation is possible, even with exponential time. Furthermore, we show that with an additional pass, it is possible to achieve a polynomial-time exact algorithm for ultrametrics.
-
Finally, we extend all these results to tree metrics by using only one additional pass through the stream and without asymptotically increasing the approximation factor.
Keywords and phrases: Streaming, Clustering, Ultrametrics, Tree metrics, Distance fitting
Category: Track A: Algorithms, Complexity and Games
Funding: Debarati Das: Research supported by NSF grant 2337832.
2012 ACM Subject Classification: Theory of computation → Streaming models; Theory of computation → Sketching and sampling; Theory of computation → Facility location and clustering
Editors: Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis
1 Introduction
Hierarchical clustering is a method of cluster analysis that builds a hierarchy of clusters by starting with each data point as its own cluster and successively merging the two closest clusters until all points are merged into a single cluster or a stopping criterion is met. This method involves creating a dendrogram, a tree-like diagram, that records the sequence of merges or splits, allowing for easy visualization and interpretation of the hierarchical structure. Hierarchical clustering uses various distance metrics (e.g., Euclidean, Manhattan, cosine) and linkage criteria (e.g., single, complete, average, Ward’s method), providing flexibility to tailor the analysis to specific data characteristics and clustering goals. It is versatile across different types of data, including numerical, categorical, and binary data, and has become the preferred method for analyzing gene expression data [35] and constructing phylogenetic trees [3, 43]. Consequently, hierarchical clustering plays a significant role in both theory and practice across various domains, such as image processing to group similar regions within images [48], social network analysis to identify communities within a network [14], and business and marketing to segment customers based on behavior, preferences, or purchasing patterns [46].
Tree metrics and ultrametrics are fundamental measures used in hierarchical clustering, where the distance between any two points is defined by the cost of the unique path connecting them in a tree-like structure. Formally, given a distance function $D$ over a set $V$, the goal is to find a tree with positive edge weights, encompassing all elements of $V$ as vertices. This tree should best match the distances specified by $D$. In the case of ultrametrics, the tree must be rooted, and all elements of $V$ must appear as leaf nodes at the same depth.
The task of fitting distances with tree metrics and ultrametrics, often referred to as the numerical taxonomy problem, has been a subject of interest since the 1960s [18, 53, 54]. One of the pioneering works in this area was presented by Cavalli-Sforza and Edwards in 1967 [18]. Different formulations of the optimal fit for a given distance function lead to various objectives, such as minimizing the number of disagreements (using the $\ell_0$ norm of the error vector), minimizing the sum of differences (using the $\ell_1$ norm), and minimizing the maximum error (using the $\ell_\infty$ norm).
Deploying hierarchical clustering (HC) algorithms faces significant challenges due to scalability issues, particularly with the rise of data-intensive applications and evolving datasets. As data volumes continue to grow, there is an urgent need for efficient algorithms tailored for large-scale models such as streaming, local algorithms, MPC, and dynamic models, given the large input sizes relative to available resources. In this work we study hierarchical clustering, focusing on tree metrics and ultrametrics in the semi-streaming model. The model supports incremental updates, keeping the information about the clusters current without the need to reprocess the entire dataset. This adaptability makes hierarchical clustering highly valuable for applications such as network monitoring and social media analysis, where real-time insights are essential [47, 49, 52].
A recent result [5] studied hierarchical clustering (HC) in the graph streaming model, providing a polynomial-time, single-pass, $\tilde{O}(n)$-space approximation algorithm for HC. When the available space is substantially smaller, the authors show that no algorithm can estimate the value of the optimal hierarchical tree within a non-trivial factor, even with multiple passes over the input and exponential time.
A special case of the ultrametric fitting problem, where the tree depth is two, is the Correlation Clustering problem. In this problem, given a complete graph with edges labeled either similar (0) or dissimilar (1), the objective is to partition the vertices into clusters so as to minimize the number of disagreements. After a decade of extensive research on correlation clustering in the semi-streaming setting [2, 6, 9, 10, 23, 28, 29], a recent breakthrough [32] introduced a single-pass algorithm that achieves an approximation factor better than 3 using $\tilde{O}(n)$ space. This directly improves upon two independent works [15, 51], both presenting single-pass algorithms achieving a $(3+\varepsilon)$-approximation using $\tilde{O}(n)$ space.
However, our understanding of streaming algorithms for larger depths, particularly within the context of ultrametrics and tree metrics, is very limited. The challenge arises from the fact that, unlike correlation clustering, which deals with only two distinct input distances, this problem may involve up to $\Theta(n^2)$ different distances, especially in a highly noisy input. Although the optimal output tree can be defined using at most $O(n)$ of these distances, identifying them is non-trivial. As a result, in the worst case, it may be necessary to store all observed input distances, which would require quadratic space if done naively. Additionally, the hierarchical nature of clusters at various tree depths introduces inherent dependencies among clusters at different levels. This complexity makes it highly challenging to adapt the ideas used in streaming algorithms for correlation clustering to ultrametrics and tree metrics. In this paper, we offer the first theoretical guarantees by providing several algorithmic results for fitting distances using ultrametrics and tree metrics in the semi-streaming setting under various distance norms.
1.1 Other Related Works
1.1.1 Ultrametrics and Tree metrics
The numerical taxonomy problem, which involves fitting distances with tree metrics and ultrametrics, was first introduced by Cavalli-Sforza and Edwards in 1967 [18]. Day demonstrated that this problem is NP-hard for both the $\ell_1$ and $\ell_2$ norms, in the context of both tree metrics and ultrametrics [34]. Moreover, these problems are APX-hard [20], as inferred from the APX-hardness of Correlation Clustering, which rules out the possibility of a polynomial-time approximation scheme. On the algorithmic side, Harb, Kannan, and McGregor [42] developed an approximation for the $\ell_1$ objective in the ultrametric fitting problem whose factor depends on $k$, the number of distinct distances in the input. Ailon and Charikar [3] improved this to an $O(\log n \log\log n)$-approximation, which they extended to tree metrics using a reduction from Agarwala et al. [1]. A recent breakthrough [24] presented the first constant-factor approximation for the $\ell_1$ objective for both ultrametrics and tree metrics.
The $\ell_0$ objective was first investigated in [26], which developed a novel constant-factor approximation algorithm. Charikar and Gao [19] improved the approximation guarantee to 5. For the weighted ultrametric violation distance, where the weights satisfy the triangle inequality, they provided an approximation whose factor depends on the number of distinct values in the input. Kipouridis [44] further extended these results to tree metrics.
Research into $\ell_\infty$ numerical taxonomy began in the early 1990s. It was discovered by several authors that the $\ell_\infty$ case of the ultrametric fitting problem is solvable in polynomial time (in fact, in time linear in the input), and it is the only norm with that property, whereas the problem of $\ell_\infty$ tree fitting is APX-hard [1, 22, 39, 45]. Since then, the Best-Fit Ultrametrics/Tree-Metrics problems have been extensively studied from both mathematical and computational perspectives [4, 12, 13, 22, 25, 27, 36, 50].
1.1.2 Correlation Clustering
The classic correlation clustering problem, introduced by Bansal, Blum, and Chawla [7], can be visualized as a special case of ultrametrics where the tree’s depth is bounded by two. Correlation clustering serves as a fundamental building block for constructing ultrametrics and tree metrics. Despite being APX-hard [20], extensive research [7, 20, 21, 30, 31] has aimed at finding efficient approximation algorithms, with the latest being a 1.437-approximation [16]. Correlation clustering also boasts a rich body of literature and has been extensively studied across various models designed for large datasets, including streaming [2, 6, 9, 31], MPC [29], MapReduce [23], and dynamic models [8, 11, 33].
1.1.3 Metric Violation Distance
Another counterpart of the ultrametric violation distance problem is the metric violation distance problem, which requires embedding an arbitrary distance matrix into a metric space while minimizing the $\ell_0$ objective. While only a hardness of approximation of 2 is known, [38, 40, 41] provided algorithms with an approximation ratio of $O(\mathrm{OPT}^{1/3})$. An exponential improvement of the approximation guarantee, to $O(\log n)$, was achieved in [26]. The maximization version of this problem is also well-motivated by its application in designing metric filtration schemes for analyzing chromosome structures, as studied in [37].
1.2 Our Contributions
In this work, we examine the problem of fitting tree metrics and ultrametrics in the semi-streaming model, focusing on the $\ell_0$, $\ell_1$, and $\ell_\infty$ objectives.
Note that storing the tree alone requires $\Omega(n)$ words of space. Since we are working within the semi-streaming model, where $\tilde{O}(n)$ space is permitted, this is a natural consideration.
Our results apply to the most general semi-streaming setting, where the entries of the input distance matrix $D$, of size $\Theta(n^2)$, arrive one by one in some arbitrary, possibly adversarial, order. Notably, all our algorithms require either one or two passes over the data while achieving constant factor approximations in polynomial time. Before discussing the key contributions of this work, we provide a formal definition of the problem.
Problem: Best-Fit Ultrametrics/Tree-Metrics
Input: A set $V$ of $n$ elements and a distance matrix $D$ over $V$.
Desired Output: An ultrametric (resp. tree metric) $T$ that spans $V$ and fits $D$ in the sense of minimizing $\|T - D\|_p$, for the objective $p \in \{0, 1, \infty\}$ under consideration.
For $\ell_0$, the aim is to minimize the total number of errors. In other words, each pair comes with a request regarding its distance in the output tree, and our goal is to construct a tree that satisfies as many of these requests as possible, minimizing the total number of pairs whose distances are altered. This fundamental problem for ultrametrics, also known as the Ultrametric Violation Distance problem, was first investigated in [26], in which a novel constant-factor approximation algorithm in the RAM model was developed. Charikar and Gao [19] further improved the approximation guarantee to 5.
We present for this problem a single-pass algorithm, in the semi-streaming setting, that provides a constant approximation and succeeds with high probability. We remark that a straightforward adaptation of this algorithm to the RAM model yields a near-linear time algorithm (its running time is $\tilde{O}(n^2)$, while the input size is $\Theta(n^2)$), improving over the best known running time from [26]. (In [26], the exact running time of the algorithm was not analyzed, but there exist inputs on which it must perform a super-constant number of repetitions of a flat-clustering algorithm, each repetition taking at least quadratic time.) Due to space constraints, several proofs are deferred to the full version [17].
Theorem 1.
There exists a single pass polynomial time semi-streaming algorithm that w.h.p. $O(1)$-approximates the $\ell_0$ Best-Fit Ultrametrics problem.
Next, we show that this result also implies an approximation for the $\ell_1$ objective.
Corollary 2.
Let $\delta$ (resp. $\Delta$) be the smallest (resp. largest) absolute difference between distinct distances in $D$, for an $\ell_1$ Best-Fit Ultrametrics instance. There exists a single pass polynomial time semi-streaming algorithm that w.h.p. $O(\Delta/\delta)$-approximates the $\ell_1$ Best-Fit Ultrametrics problem.
Proof.
Let $U_0$ (resp. $U_1$) be an ultrametric minimizing $\|U_0 - D\|_0$ (resp. $\|U_1 - D\|_1$), and let $U$ be the output of the algorithm from Theorem 1 (thus $\|U - D\|_0 \le O(1) \cdot \|U_0 - D\|_0$).
It holds that $\|U_1 - D\|_1 \ge \delta \cdot \|U_1 - D\|_0$, as in the $\ell_1$ objective we pay at least $\delta$ for each pair having a disagreement. Similarly, $\|U_1 - D\|_0 \ge \|U_0 - D\|_0$, by definition of $U_0$. Putting everything together, $\|U - D\|_1 \le \Delta \cdot \|U - D\|_0 \le O(1) \cdot \Delta \cdot \|U_0 - D\|_0 \le O(1) \cdot \Delta \cdot \|U_1 - D\|_0 \le O(\Delta/\delta) \cdot \|U_1 - D\|_1$.
It is interesting to note that for the $\ell_1$ objective, most recent approximation algorithms in the offline setting are not combinatorial, making it a significant challenge to adapt them to the semi-streaming model. The best known combinatorial approximation for $\ell_1$ Best-Fit Ultrametrics and Tree-Metrics is $O(\log n \log\log n)$ [3, 42], achieved through the so-called pivoting algorithm. (The authors of [3] state the approximation factor in terms of the number of distinct distances; this is due to a simplification they make in the paper, namely that the distances are all consecutive positive integers starting from 1. There exist inputs on which the analysis of the pivoting algorithm is tight.) Unfortunately, this algorithm is very challenging to adapt to a single-pass semi-streaming setting, as it generalizes the PIVOT-based algorithm for Correlation Clustering, which, despite extensive research, has not been adapted to single-pass semi-streaming settings without significant modifications [9, 10, 15, 51]. Surprisingly, our approximation for the $\ell_1$ objective is derived directly from the algorithm for the $\ell_0$ objective, eliminating the need to explicitly use this pivoting approach.
Further, we contrast Theorem 1 by ruling out the possibility of a single-pass exact algorithm, even with sub-quadratic space and exponential time. For this, we provide a new lower bound for the correlation clustering problem, showing that any single-pass streaming algorithm with sub-quadratic space can neither output the optimal clustering nor maintain its cost.
Theorem 3.
Any randomized single pass streaming algorithm that, with probability greater than $2/3$, either solves the correlation clustering problem or maintains the cost of an optimal correlation clustering solution requires $\Omega(n^2)$ bits.
We then extend this result to Best-Fit Ultrametrics under every finite $\ell_p$ objective (including $p = 0$), using the fact that correlation clustering is a special case of these problems (see e.g. [3]).
Corollary 4.
For every finite $p$ (including $p = 0$), any randomized single pass streaming algorithm that, with probability greater than $2/3$, either solves $\ell_p$ Best-Fit Ultrametrics or just outputs the error of an optimal ultrametric solution requires $\Omega(n^2)$ bits.
Next, we consider the $\ell_\infty$ objective, where the goal is to minimize the maximum error. We provide a complete characterization of Best-Fit Ultrametrics in the semi-streaming model, including a single-pass algorithm with an approximation factor of 2.
Theorem 5.
There exists a single pass polynomial time semi-streaming algorithm that 2-approximates the $\ell_\infty$ Best-Fit Ultrametrics problem.
We contrast Theorem 5 by showing that this is the best approximation factor achievable using a single pass, even with sub-quadratic space and exponential time.
Theorem 6.
Any randomized one-pass streaming algorithm for $\ell_\infty$ Best-Fit Ultrametrics with an approximation factor strictly less than 2 and a success probability greater than $2/3$ requires $\Omega(n^2)$ bits of space.
Moreover, we demonstrate that allowing two passes is sufficient for an exact solution. Therefore, we provide optimal tradeoffs between the number of passes and the approximation factor in all scenarios.
Theorem 7.
There exists a two-pass polynomial time semi-streaming algorithm that computes an exact solution to the $\ell_\infty$ Best-Fit Ultrametrics problem.
We show that all aforementioned algorithms can be extended to tree metrics. This is achieved by providing reductions to the corresponding ultrametrics problems, requiring only one additional pass over the stream. The reductions used for the $\ell_0$ and $\ell_\infty$ objectives differ significantly from each other.
Theorem 8.
There exists a two-pass polynomial time semi-streaming algorithm that w.h.p. $O(1)$-approximates the $\ell_0$ Best-Fit Tree-Metrics problem.
Using the same arguments as in Corollary 2, we obtain an analogous result for the $\ell_1$ objective.
Corollary 9.
Let $\delta$ (resp. $\Delta$) be the smallest (resp. largest) absolute difference between distinct distances in $D$, for an $\ell_1$ Best-Fit Tree-Metrics instance. There exists a two-pass polynomial time semi-streaming algorithm that w.h.p. $O(\Delta/\delta)$-approximates the $\ell_1$ Best-Fit Tree-Metrics problem.
Theorem 10.
There exists a two-pass polynomial time semi-streaming algorithm that 6-approximates the $\ell_\infty$ Best-Fit Tree-Metrics problem.
1.3 Technique Overview.
We provide an overview of the most technically novel contribution of our work, namely the results regarding the $\ell_0$ Best-Fit Ultrametrics algorithm in the semi-streaming model (more details in Section 3).
1.3.1 Why previous approaches cannot be adapted
In general, it is difficult to ensure a hierarchical structure while providing non-trivial approximation guarantees. In Hierarchical Clustering research, such results usually rely on one of two standard approaches, namely the top-down (divisive) approach and the bottom-up (agglomerative) approach. In fact, with the exception of [24], results for Best-Fit Ultrametrics under the $\ell_0$ and $\ell_1$ objectives [3, 19, 24, 26, 42] all rely on the divisive approach.
Non-divisive approaches
The only relevant result applying a non-divisive approach is that of [24], which crucially relies on a large LP. Unfortunately, it is not known how to solve such an LP in streaming.
Divisive approaches
A divisive algorithm starts with the root node (containing the whole vertex set), computes its children (clusters at some lower height) based on some division strategy, and recurses on its children. Different division strategies have been employed, with the most prominent ones using the solution to an LP, or attempting to satisfy a particular (usually randomly chosen) element called the pivot, or solving some flat clustering problem. In what follows, we discuss why existing division strategies do not work in our case.
Correlation Clustering.
Perhaps the most straightforward approach is to solve a (flat) clustering problem: for each pair of vertices $u, v$, we ideally want them together if $D(u,v)$ is at most the current level's threshold, and apart otherwise. This corresponds to the Correlation Clustering problem, which returns a clustering violating as few of our preferences as possible. Unfortunately, this approach does not work for Best-Fit Ultrametrics, as certain choices that appear good locally (on a particular height) may be catastrophic globally.
The first result for Best-Fit Ultrametrics.
The authors of [26] overcame the shortcomings of Correlation Clustering as a division strategy by solving a particular flavor of it, called Agreement Correlation Clustering. This guaranteed further structural properties that could be leveraged to provide an $O(1)$-approximation for Best-Fit Ultrametrics. However, this approach is too strong to guarantee in streaming, since one can recover adjacency information through black-box calls to an Agreement Correlation Clustering subroutine. This of course requires $\Omega(n^2)$ bits of memory, while in streaming we only have $\tilde{O}(n)$.
Other results for Best-Fit Ultrametrics.
1.3.2 Our techniques
Best-Fit Ultrametrics
Our streaming algorithm is a divisive algorithm. In the divisive framework, each level of the tree is defined by a distinct distance from the input, which allows each level to be visualized as an instance of the correlation clustering problem. In this instance, two vertices are connected if their distance is at most the threshold associated with that level; otherwise, they are not connected. Following this, the different layers of the ultrametric tree are built by repeatedly applying a division strategy in a top-down fashion. Here, we highlight the techniques we develop to design a semi-streaming algorithm that uses only a single pass and computes an $O(1)$-approximation for the $\ell_0$ Best-Fit Ultrametrics problem.
Distances Summary.
The first fundamental challenge is identifying which distances should be preserved in the constructed ultrametric, given that the input may contain $\Theta(n^2)$ distinct distances. A divisive algorithm may need to perform its division strategy on every level defined by such a distance (of course, sometimes it may decide not to divide anything); however, in the semi-streaming setting, we cannot even afford to store all these distances. Instead, we work with a compressed set of distances that effectively captures all the important information. More formally, we focus on distances $w$ for which there exists at least one vertex $u$ such that the number of vertices at distance less than $w$ from $u$ is significantly smaller than the number of vertices at distance at most $w$ from $u$. Using this notion, we demonstrate that it is sufficient to consider only a near-linear number of distances to achieve a good approximation.
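To make this notion concrete, the following offline illustration (the naming and the shrinkage factor `eps`, standing in for "significantly smaller", are our assumptions) computes, for a distance matrix given as a dictionary of dictionaries, the distances that at least one vertex considers important in the above sense.

```python
from collections import defaultdict

def summary_distances(points, D, eps=0.5):
    """Offline illustration of the distance summary described above: keep a
    distance w for vertex v whenever the number of vertices at distance < w
    from v is smaller by a (1 - eps) factor than the number at distance <= w."""
    kept = defaultdict(set)
    for v in points:
        dists = sorted(D[v][u] for u in points if u != v)
        below, i = 0, 0                    # below = |{u : D(v, u) < w}|
        while i < len(dists):
            w = dists[i]
            j = i
            while j < len(dists) and dists[j] == w:
                j += 1
            at_most = j                    # |{u : D(v, u) <= w}|
            if below <= (1 - eps) * at_most:
                kept[v].add(w)
            below, i = at_most, j
    # the global summary is the union over all vertices
    return set().union(*kept.values()) if kept else set()
```

Each vertex can contribute only logarithmically many such distances (its neighborhood size grows by a constant factor at every kept threshold), so the union has near-linear size, which is the property exploited above.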
Agreement Sketches.
A key component in many (flat) clustering algorithms (including the first algorithm for Correlation Clustering [7], which inspired many others, such as [6, 8, 26, 29]) involves comparing the sets of neighbors of two vertices. While our division strategy also builds on such comparisons, both the hierarchical nature and the streaming constraints of our setting present unique challenges. In Correlation Clustering each vertex has only a single set of neighbors; however, in our hierarchical setting, each layer of the tree is associated with a different distance threshold, producing different sets of neighbors for a vertex. In the worst case, there can be $\Theta(n)$ such sets per vertex, and building a sketch for each can require up to quadratic space.
To address this, we build a new sketch for a node only when its set of neighbors changes significantly. The intuition here is that if the neighborhood of a node has not changed substantially, then a precomputed sketch for a nearby neighborhood will suffice. However, implementing this in a semi-streaming setting, where distances between pairs can arrive in any arbitrary order, is challenging. Since we cannot store the distances from a given node to all other nodes simultaneously, identifying significant changes in a node's neighborhood becomes difficult. To manage this, we develop a new technique that combines random sampling with a pruning strategy, ensuring that the overall space required to store all the sketches is $\tilde{O}(n)$.
In this approach, we build each sketch by randomly sampling nodes. Assuming the neighborhood size has dropped substantially, we expect the correct sketch to reach a certain size. Notably, the set of neighbors of a node only shrinks as the distance decreases. Thus, for a specific weight threshold, if the sample (or sketch) size grows considerably, this indicates that the neighborhood has not changed much, so we disregard that weight threshold and delete the corresponding sample from the sketch. Specifically, for each node, we build and store sketches only when the neighborhood size shrinks by a constant factor. Consequently, we consider at most $O(\log n)$ different sizes, and storing the sketches for all sizes takes only polylogarithmic space per node. Therefore, the total space required to store the sketches of all the nodes is bounded by $\tilde{O}(n)$. Moreover, the sketches ensure that for each node $u$ and each weight $w$, there exists a weight $w'$ such that the neighborhoods of $u$ at $w$ and at $w'$ differ very little, and we have built a sketch corresponding to the neighborhood at weight $w'$.
Across-Levels Correlations.
In divisive algorithms, while building a new level of the ultrametric tree, the recursions performed depend on the divisions computed at previous recursion levels. In this sense, the divisive framework can be viewed as an adaptive adversary for the division strategy we need to perform. This is not an issue when deterministic division strategies are used (e.g., as in [26]), but it becomes particularly problematic in our case, where we are forced to use random sketches because of the issue with the $\Theta(n^2)$ distinct distances.
The challenge here arises from the fact that for a given vertex we do not build a new sketch for each level of the tree. Instead, we only construct a sketch when the set of neighbors changes substantially. Consequently, multiple levels of the tree must reuse the same sketch, which increases the correlation among clusters at different levels. This makes it difficult to ensure concentration bounds when limiting the overall error.
To address this, our approach aims to "limit" the dependencies by ensuring our algorithm has only a logarithmic recursion depth (as opposed to the linear recursion depth of straightforward divisive approaches). This allows us to afford independent randomness by using a new sketch for each recursion depth. To reduce the recursion depth, we make the following observation: if the correlation clustering subroutine identifies a large cluster (e.g., containing a constant fraction of the vertices), we can detect this cluster without explicitly applying the correlation clustering algorithm (thus omitting the requirement of using a sketch). This is because all vertices within this cluster have large degrees, while those outside have very small degrees. Therefore, it suffices to identify the vertices with small degrees and remove them to generate the new cluster. It is important to note that the degree calculation must consider the entire graph, not just the subgraph induced by the current cluster being considered. Otherwise, intra-recursion dependencies could be introduced, and thus the logarithmic recursion depth guarantee might not suffice.
Within-Level Correlations.
Correlation issues do not only occur vertically (across levels), but also horizontally (within the same level), as most algorithms for correlation clustering compute a cluster, and then recurse on the rest of the elements. However in our case, such an adaptive construction may lead to the possibility of reusing sketches, making it difficult to ensure concentration.
We overcome these issues in several ways. First, we use these sketches to compute the agreement among vertices (i.e., the similarity between the sets of neighbors of each pair of vertices) before we start constructing any clusters. Then, we propose an algorithm that relies solely on our agreement sketches and decomposes into independent events, thus requiring us to consult the sketches of each layer only once. By using the agreement precomputation and our proposed clustering algorithm, we ensure concentration while limiting the error for all clusters within a level.
$S$-Structural Clustering.
Finally, as argued in Section 1.3.1, we need a division strategy that is different from the existing Agreement Correlation Clustering of [26]. That is because it can be proven that solving Agreement Correlation Clustering on arbitrary subgraphs requires $\Omega(n^2)$ bits of memory.
Instead, we introduce $S$-Structural Clustering, which is inspired by Agreement Correlation Clustering. The key distinction is that we now require a clustering of a subset $S$ of the vertices to satisfy the structural properties, while also considering edges with only one endpoint in $S$. This distinction is exactly what allows us to generalize our proposed algorithm to solve $S$-Structural Clustering by relying solely on the global neighborhoods of its vertices. Interestingly, the resulting time complexity of our general algorithm only depends on the size of the subgraph, as we compress all the necessary global information through our sketches. Finally, we remark that both the construction of the sketches (Section 3.1) and the introduction of $S$-Structural Clustering (Section 3.2.2) are novel contributions of our work and could be of independent interest.
Best-Fit Tree-Metrics
In [44] it is shown how to reduce Best-Fit Tree-Metrics to Best-Fit Ultrametrics. In this approach, however, one needs to create $n$ different instances of Best-Fit Ultrametrics, which is not feasible in the semi-streaming model. In this work, we show that solving a logarithmic number of these instances, chosen at random, suffices.
Our initial approach requires three passes over the stream: one for a preprocessing step implicitly constructing the Best-Fit Ultrametrics instances, one to solve these instances (and post-process them to extract trees that solve the original problem), and a final one to figure out which one of the logarithmically many trees we need to output (the one with the smallest cost is picked).
We further improve the number of passes to two, by eliminating the need for the final pass. To do that, we note that there are many trees with "tiny" cost relative to the input; let $\mathcal{T}$ be the set containing these trees. By the triangle inequality, all trees in $\mathcal{T}$ have "small" cost relative to each other. If we create a graph with the trees as nodes, and an edge between two nodes whenever their cost relative to each other is small, we then show that with high probability this graph contains a big clique. Finally, we show that any node (corresponding to a tree) from a big clique is a good enough approximation of the original input, even if it is not in $\mathcal{T}$.
$\ell_\infty$ Results
Regarding $\ell_\infty$ Best-Fit Ultrametrics, we show that the existing exact algorithm [39] can be straightforwardly adapted to a two-pass semi-streaming algorithm. Naturally, as the problem has been solved exactly, no research has focused on approximation algorithms. In this work we show that the solution to a related problem ($\ell_\infty$ Min-Decrement Ultrametrics) 2-approximates $\ell_\infty$ Best-Fit Ultrametrics. Then we adapt the exact solution for $\ell_\infty$ Min-Decrement Ultrametrics [39] to obtain a single-pass semi-streaming algorithm.
We also show that no single-pass semi-streaming algorithm can give a better-than-2 approximation, for otherwise we could compress any graph into sub-quadratic space. Together, these results completely characterize $\ell_\infty$ Best-Fit Ultrametrics in the semi-streaming setting, regarding both the optimal number of passes and the optimal approximation factor.
For $\ell_\infty$ Best-Fit Tree-Metrics, there exists a reduction to Ultrametrics [1], blowing up the approximation factor by 3. Adapting it to the semi-streaming setting requires one additional pass through the stream. Using it with our 2-approximation for $\ell_\infty$ Best-Fit Ultrametrics (rather than with the exact algorithm, as done in [1]), we need two passes (instead of three).
2 Preliminaries
We start by presenting useful notations we employ throughout the text. We use $xy$ to denote an unordered pair $\{x, y\}$. We use the term distance matrix to refer to a function from the unordered pairs of a ground set $V$ to the non-negative reals. Let $D$ be a distance matrix. For ease of notation, we write $D(x,y)$ instead of $D(xy)$. We slightly abuse notation and say that $D(x,x) = 0$ for any $x$. For $p \ge 1$, $\|D\|_p = \left(\sum_{xy} |D(x,y)|^p\right)^{1/p}$ is the $\ell_p$ norm of $D$. We extend the notation to $p = 0$: in this case, $\|D\|_0$ denotes the number of pairs $xy$ such that $D(x,y) \neq 0$. We even say $\|D\|_0$ is the $\ell_0$ norm of $D$, despite $\ell_0$ not being a norm.
If $T$ is a tree and $x, y$ are two nodes in $T$, then we write $T(x,y)$ to denote the distance between $x$ and $y$ in $T$. An ultrametric $U$ is a metric with the property that $U(x,y) \le \max(U(x,z), U(z,y))$ for all $x, y, z$. It holds that $U$ is an ultrametric on $V$ iff there exists a rooted tree $T$ spanning $V$ such that all elements of $V$ are leaves of $T$, the depth of all leaves is the same, and $T(x,y) = U(x,y)$ for all $x, y \in V$. We call trees with these properties ultrametric trees.
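As a quick illustration of the three-point condition, the following brute-force check (the function name and representation are ours, not the paper's) verifies whether a finite distance function is an ultrametric.

```python
def is_ultrametric(points, dist):
    """Brute-force check of the strong triangle inequality that defines an
    ultrametric: dist(x, z) <= max(dist(x, y), dist(y, z)) for all triples."""
    for x in points:
        for y in points:
            for z in points:
                if dist(x, z) > max(dist(x, y), dist(y, z)):
                    return False
    return True
```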
In the semi-streaming model, the input is again a distance matrix $D$ on a vertex set $V$. Let $n = |V|$. Our algorithm has $\tilde{O}(n)$ available space, and the entries of $D$ arrive one-by-one, in an arbitrary order, as pairs of the form $(xy, D(x,y))$. For simplicity, we use the standard assumption that each distance fits in $O(\log n)$ bits of memory.
We let $E_{\le w}$ be the set of pairs $xy$ such that $D(x,y) \le w$. We define $N_w(x)$, the set of neighbors of $x$ at level $w$, to be the vertices $y$ such that $D(x,y) \le w$ (including $x$ itself), and the degree $\deg_w(x)$ of $x$ at level $w$ to be $|N_w(x)|$. We simply write $N(x)$ and $\deg(x)$ (instead of $N_w(x)$ and $\deg_w(x)$) when $w$ is clear from the context. Given an ultrametric tree $T$, a cluster at level $w$ is a maximal set of leaves whose pairwise distances in $T$ are all at most $w$. It is straightforward that a cluster at level $w$ corresponds to the set of leaves descending from a node of $T$ at height $w$. Abusing notation, and only when it is clear from the context, we refer to this node as a cluster at level $w$ as well.
Regarding $\ell_0$, it is sufficient to focus on ultrametrics whose distances are also entries of $D$. That is because if an ultrametric $U$ does not have this property, we can create an ultrametric $U'$ with this property such that $\|U' - D\|_0 \le \|U - D\|_0$ (folklore). To do this, simply modify every distance in $U$ to the smallest entry of $D$ that is at least as large (if no such entry of $D$ exists, then we modify the distance to be the maximum entry of $D$).
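The folklore rounding can be sketched as follows (the dictionary representation and the function name are ours). Since the rounding map is non-decreasing, the result remains an ultrametric, and no agreeing pair becomes a disagreement.

```python
import bisect

def round_to_input_values(U, D_values):
    """Round every ultrametric distance up to the smallest input value that is
    at least as large; distances exceeding every input value are mapped to the
    maximum input value."""
    vals = sorted(set(D_values))
    rounded = {}
    for pair, d in U.items():
        i = bisect.bisect_left(vals, d)      # index of the smallest value >= d
        rounded[pair] = vals[i] if i < len(vals) else vals[-1]
    return rounded
```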
3 Ultrametrics
In this section, we show how to $O(1)$-approximate $\ell_0$ Best-Fit Ultrametrics with a single pass in the semi-streaming model. Formally, we show the following.
See Theorem 1.
Our algorithm consists of two main phases. In the streaming phase, we construct efficient sketches that capture the essential information of the input matrix $D$. That is, we store a compressed representation of $D$ which, unlike $D$, has a reduced size of $\tilde{O}(n)$ rather than $\Theta(n^2)$ values (hereafter called weights). Yet, we will show in Section 3.1 that for every weight $w$ and every vertex $u$, a weight $w'$ is stored such that $N_w(u)$ and $N_{w'}(u)$ are roughly the same. This guarantee enables us to approximate both the size of a neighborhood and the size of the intersection of two different neighborhoods.
The second step is a post-stream process that carefully utilizes the precomputed sketches while addressing the adaptivity challenges discussed in Section 1.3.2. In Section 3.2 we show how to compute the $S$-Structural Clustering subroutine, which we will use as our division strategy. In Section 3.3 we present our main algorithm, which uses this subroutine and the distances summary as black boxes to construct the ultrametric tree. In the full version, we establish the necessity of approximation in the streaming setting by proving that computing an optimal solution requires $\Omega(n^2)$ bits of memory.
3.1 Construction of Sketches
This section outlines the process for constructing sketches that enable our algorithm's implementation. For now, we consider large neighborhoods, of at least polylogarithmic size. While a similar approach was used in [29] for the problem of correlation clustering (in [29], the authors claim polylogarithmic-size sketches for each vertex; however, we are unable to verify this, since the random set there is constructed by selecting each vertex with a probability that is lower-bounded by a constant, so each random set has linear size, and consequently the sketch of a high-degree vertex has linear rather than polylogarithmic size), the challenge here is different. Unlike correlation clustering, where each vertex has only a single set of neighbors, each layer of the tree in our context is associated with a different distance threshold. Thus each varying threshold can produce a different set of neighbors for a vertex. In the worst case, there can be $\Theta(n)$ such sets, and building a sketch for each changing set of neighbors of each vertex can require up to quadratic space (or even cubic, if implemented naively).
We denote the weight of an edge $e = xy$ by $w(e) = D(x,y)$. Each sketch is constructed for a specific vertex $u$ with a predetermined size $s$ chosen from a geometrically increasing set of $O(\log n)$ candidate sizes, whose growth factor is controlled by a small constant parameter to be adjusted. Each sketch will encapsulate a neighborhood of the vertex of size roughly $s$, and allow us to compare the common intersection of two different neighborhoods. Let $w_s(u)$ be the largest weight for which $\deg_{w_s(u)}(u) \le s$. We call the size $s$ relevant for vertex $u$ if such a weight exists.
To obtain the sketches, for each candidate size $s$ we start by generating a random subset $R_s$ of $V$, by sampling each vertex of $V$ independently, prior to processing the stream, with a probability inversely proportional to $s$ (up to polylogarithmic factors). For each vertex $u$, each relevant size $s$, and each candidate size $s'$ close to $s$, we define a sketch. Every sketch consists of (i) an estimate of the corresponding threshold weight $w_s(u)$, and (ii) an (almost) random sample of the corresponding neighborhood, along with the weight of each sampled edge. To achieve this, we store a collection of edge sets, where all edges in the same set share a common weight, which we store alongside the set, and we keep track of the set corresponding to the largest retained weight. Furthermore, we ensure that the overall size of all the collections of a single sketch is polylogarithmic in $n$.
The purpose of incorporating two size parameters, $s$ and $s'$, into the sketch is to enable comparisons of neighborhoods that have slightly different, but relatively close, sizes. Yet, for simplicity, the reader may assume $s' = s$ for the following construction and claims. We now describe the process of constructing a specific sketch given the input stream; a simplified illustration in code follows the list.
1. Initialize the sketch to be empty and its edge counter to zero.
2. Upon the arrival of an edge $e$ in the stream: if $e$ is not incident on $u$, continue to the next edge.
3. Else, if the other endpoint of $e$ was not sampled into the random set associated with the size parameter $s'$, continue to the next edge.
4. Else, if $w(e)$ is at least as large as a weight that has already been pruned from the sketch (see step 5(c)), continue to the next edge.
5. Otherwise proceed as follows:
(a) If there is a collection of edges with an associated weight of $w(e)$, add the edge $e$ to it. Otherwise, create a new collection containing $e$, stored alongside the weight $w(e)$.
(b) Increase the edge counter by one.
(c) If the edge counter exceeds its (polylogarithmic) capacity, delete the collection with the largest associated weight, decrease the counter by the number of deleted edges, and record the deleted weight as pruned.
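The following simplified, single-vertex implementation illustrates the steps above. The sampling probability (roughly $c \log n / s$), the capacity bound, and the exact form of the pruning rule are assumptions of ours, not the paper's parameters; the sketch is meant only to show how the prune-largest-weight rule keeps the stored sample small.

```python
import random
from collections import defaultdict

class VertexSketch:
    """Simplified single-(u, s) sketch illustrating the pruning rule above.
    Assumptions (ours): the random set R samples each vertex w.p. ~ c*log(n)/s,
    and at most `capacity` sampled edges are kept, always discarding the
    collection with the largest weight on overflow."""

    def __init__(self, u, vertices, s, capacity, c=4):
        self.u = u
        p = min(1.0, c * max(1, len(vertices)).bit_length() / s)  # ~ c*log(n)/s
        self.R = {v for v in vertices if random.random() < p}     # pre-stream sample
        self.capacity = capacity
        self.collections = defaultdict(list)   # weight -> sampled edges of that weight
        self.count = 0
        self.pruned_above = float("inf")       # weights >= this were already pruned

    def process(self, x, y, w):
        if self.u not in (x, y):
            return                              # edge not incident on u
        other = y if x == self.u else x
        if other not in self.R:
            return                              # other endpoint was not sampled
        if w >= self.pruned_above:
            return                              # this weight level was already pruned
        self.collections[w].append((x, y, w))
        self.count += 1
        if self.count > self.capacity:          # overflow: drop the largest weight level
            wmax = max(self.collections)
            self.count -= len(self.collections.pop(wmax))
            self.pruned_above = wmax

    def governing_weight(self):
        return max(self.collections) if self.collections else None
```

Here `process` would be invoked once per arriving stream entry; keeping one such object per vertex and per pair of size parameters corresponds to the collection of sketches whose total storage is analyzed in Claim 11.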
After processing all the edges, we output the stored collections; furthermore, whenever the size $s$ is relevant for $u$, we let the largest weight among the retained collections be the governing weight of the sketch parametrized by $s$ and $s'$. Namely, the governing weight is the weight associated with the neighborhood that a sketch parametrized by $s$ and $s'$ encapsulates. The next claim shows that these sketches can be stored in the semi-streaming setting.
Claim 11.
The sketches of all vertices $u$, over all relevant sizes $s$ and all corresponding size parameters $s'$, can be constructed and stored in $\tilde{O}(n)$ bits overall.
The next claim demonstrates that, for every vertex and every relevant size $s$, there is a sketch with a governing weight $w^*$ such that $\deg_{w^*}(u)$ is a good approximation of $s$.
Claim 12.
With high probability, for each vertex $u$ and each relevant size $s$, the governing weight $w^*$ of the corresponding sketch satisfies that $\deg_{w^*}(u)$ is within a constant factor of $s$.
We now extend this result to every weight $w$, and show how to obtain a sketch whose neighborhood is a good approximation of $N_w(u)$.
Claim 13.
For each vertex $u$ and each weight $w$ such that $\deg_w(u)$ is sufficiently large, we can report a sketch associated with some size $s$ and governing weight $w^* \le w$ such that, with high probability, $N_{w^*}(u)$ contains all but a small constant fraction of $N_w(u)$.
We conclude this section by providing another data structure, storing the nearest neighbors of each vertex $u$, denoted by $N_{\mathrm{small}}(u)$. This will allow us to compare neighborhoods of small size. The implementation of $N_{\mathrm{small}}(u)$ uses a priority queue of predefined and fixed (polylogarithmic) size: we add every edge incident to $u$ to the priority queue, together with its associated weight, evicting the farthest stored neighbor whenever the queue overflows. This leads to the following claim.
Claim 14.
For each vertex $u$, $N_{\mathrm{small}}(u)$ can be stored in polylogarithmically many bits, and it contains the (polylogarithmically many) nearest neighbors of $u$.
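A minimal sketch of this priority-queue structure (the function name, the value of `k`, and the use of a binary heap are our choices):

```python
import heapq

def stream_nearest_neighbors(u, stream, k):
    """Maintain the k nearest neighbors of u over a stream of (x, y, weight)
    entries, using a fixed-size max-heap (weights negated), in the spirit of
    the N_small(u) structure described above; k would be polylogarithmic."""
    heap = []                                   # (-weight, neighbor), farthest on top
    for x, y, w in stream:
        if u not in (x, y):
            continue
        other = y if x == u else x
        if len(heap) < k:
            heapq.heappush(heap, (-w, other))
        elif -heap[0][0] > w:                   # current farthest is farther than w
            heapq.heapreplace(heap, (-w, other))
    return sorted((-nw, v) for nw, v in heap)   # (weight, neighbor), nearest first
```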
In this section, we have outlined the construction of sketches occupying a total space of $\tilde{O}(n)$, showing that for each vertex $u$ and weight $w$, we can report a sketch, associated with some governing weight $w^*$, that with high probability is a random sample of a neighborhood of roughly the same size as $N_w(u)$. In the next section, we will demonstrate how these sketches can be utilized to estimate the size of the symmetric difference of two neighborhoods in a way that supports the algorithm's requirements, justifying the need for maintaining several sketches for each choice of the size parameters $s$ and $s'$.
3.2 Structural Clustering
In this section, we introduce an algorithm that requires a single pass over the input stream to solve $S$-Structural Clustering. This extends the notion of Agreement Correlation Clustering from [26], to which we refer as $V$-Structural Clustering. Our semi-streaming algorithm hinges on the key idea that clusters should be formed from vertices that share almost similar neighborhoods. We emphasize that our algorithm is also applicable in the standard RAM (non-streaming) setting, where it runs in near-linear time, improving over the running time of the algorithm previously known for $V$-Structural Clustering.
We begin our presentation in Section 3.2.1 with an algorithm solving $V$-Structural Clustering, designed so that it can be adapted to the semi-streaming model. Then, in Section 3.2.2 we show how our proposed algorithm can be extended to compute $S$-Structural Clustering, which is our division strategy for constructing ultrametrics in Section 3.3. Finally, in Section 3.2.3 we show how to actually implement these algorithms in the semi-streaming model, by utilizing the sketches outlined in Section 3.1.
3.2.1 Algorithm for V-Structural Clustering
We begin by solving the Structural Clustering problem for the entire vertex set $V$. Our graph is $G_w = (V, E_{\le w})$ for some weight $w$. First we present the definitions of agreement and heavy vertices, as in [29]. The parameters $\beta$ and $\lambda$ that appear in the following definitions are sufficiently small constants. Furthermore, we denote by $A \triangle B$ the symmetric difference of two sets, that is, $A \triangle B = (A \setminus B) \cup (B \setminus A)$.
Definition 15 (agreement).
Two vertices $u, v$ are in $\beta$-agreement iff $|N(u) \triangle N(v)| < \beta \cdot \max(|N(u)|, |N(v)|)$, which means that $u$ and $v$ share most of their neighbors. $\mathrm{Agr}_\beta(u)$ is the set of vertices in $\beta$-agreement with $u$.
Definition 16 (heavy).
We say that a vertex $u$ is $\lambda$-heavy if $|\mathrm{Agr}_\beta(u) \cap N(u)| \ge (1 - \lambda) \cdot |N(u)|$, which means that most of its neighbors are in agreement with $u$. Denote by $\mathrm{heavy}(u)$ the $\lambda$-heaviness indicator of vertex $u$.
Computing the $\beta$-agreement set of a vertex and its $\lambda$-heaviness indicator is a crucial part of the algorithm. Normally, both can be computed exactly by applying the definitions, even using a deterministic algorithm. However, in the semi-streaming model, we can only approximate $\mathrm{Agr}_\beta(u)$ and $\mathrm{heavy}(u)$, with high probability. In Section 3.2.3 we show that, using the sketches outlined in Section 3.1, we can achieve a sufficient approximation that allows us to solve Structural Clustering for $V$ with high probability.
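For concreteness, the exact (non-streaming) computations prescribed by Definitions 15 and 16 can be written as follows; the parameter names `beta` and `lam` mirror our notation above, and `N` is assumed to map each vertex to its neighbor set (including the vertex itself).

```python
def in_agreement(Nu, Nv, beta):
    """Definition 15: u, v are in beta-agreement iff the symmetric difference
    of their neighborhoods is less than beta * max(|N(u)|, |N(v)|)."""
    return len(Nu ^ Nv) < beta * max(len(Nu), len(Nv))

def is_heavy(u, N, beta, lam):
    """Definition 16 (exact computation): u is heavy iff at least a (1 - lam)
    fraction of its neighbors are in beta-agreement with u."""
    agreeing = sum(1 for v in N[u] if in_agreement(N[u], N[v], beta))
    return agreeing >= (1 - lam) * len(N[u])
```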
We allow the following relaxations. Let $\widetilde{\mathrm{Agr}}_\beta(u)$ be a set containing all vertices that are in $\beta$-agreement with $u$ and no vertices that are not in $\beta'$-agreement with $u$, for a slightly larger parameter $\beta' > \beta$; an analogous relaxed set is used for a second pair of nearby agreement parameters. Finally, let $\widetilde{\mathrm{heavy}}(u)$ be a method that returns true if $u$ is $\lambda$-heavy and false if $u$ is not $\lambda'$-heavy, for a slightly larger parameter $\lambda' > \lambda$.
With these tools and definitions at our disposal, we can introduce Algorithm 1. It is important to note that this algorithm is executed after the stream has been processed, using the sketches alone. Given the respective sketches, the time complexity of the algorithm is near-linear in the number of vertices.
We now prove that Algorithm 1 is guaranteed to return a set of disjoint clusters that satisfy the required structural properties. We are referring to the special properties of $V$-Structural Clustering, which are expressed in terms of the definitions of important and everywhere-dense groups, as in [26].
Definition 17 (important group).
Given a correlation clustering instance, we say that a group of vertices $K$ is important if any vertex $v \in K$ is adjacent to at least a large constant fraction of the vertices in $K$ and has at most a small constant fraction of its neighbors outside of $K$.
Definition 18 (everywhere dense).
Given a correlation clustering instance, we say that a group of vertices $K$ is everywhere dense if any vertex $v \in K$ is adjacent to at least a constant fraction of the vertices of $K$.
Lemma 19 outlines the supplementary properties required for a correlation clustering algorithm to qualify as structural, demonstrated in the context of Algorithm 1.
Lemma 19 (structural properties).
Suppose $\beta \le \beta_0$ for a small enough parameter $\beta_0$. Let $\mathcal{C}$ be the set of clusters returned by Algorithm 1. Then, for any important group of vertices $K$, there is a cluster $C \in \mathcal{C}$ such that $K \subseteq C$, and $C$ does not intersect any other important group of vertices disjoint from $K$. Moreover, every cluster in $\mathcal{C}$ is everywhere dense.
Proof.
In order to prove Lemma 19, we require the following claims.
Claim 20.
Suppose $\beta \le \beta_0$. Assume $u, v$ are two vertices for which $\widetilde{\mathrm{heavy}}(u)$ and $\widetilde{\mathrm{heavy}}(v)$ both return true. If $v$ is not part of $\widetilde{\mathrm{Agr}}_\beta(u)$, then the sets $\widetilde{\mathrm{Agr}}_\beta(u)$ and $\widetilde{\mathrm{Agr}}_\beta(v)$ are disjoint.
Claim 21.
Suppose $\beta \le \beta_0$. Every cluster returned by Algorithm 1 is everywhere dense.
Claim 22.
Suppose $\beta \le \beta_0$. If a pair of vertices $u, v$ belongs to the same important group, then $u$ and $v$ are in $\beta$-agreement.
Claim 23.
Suppose $\beta \le \beta_0$. If $u$ and $v$ belong to two disjoint important groups, then $u$ and $v$ are not in $\beta$-agreement.
Following Claim 20 and Claim 21, the clusters returned by the algorithm are disjoint and everywhere dense. Next, we show the properties related to important groups. First, let $u$ be a vertex in some important group $K$; then $u$ has at least a large constant fraction of its neighbors within $K$, and according to Claim 22, it is in $\beta$-agreement with all vertices in $K$. Thus, every vertex that belongs to an important group is heavy, which also implies that any such vertex is part of a non-singleton cluster.
Now, assume $u, v$ belong to the same important group $K$, and that $u$ belongs to a cluster $C$ created in step 3 of the algorithm around some heavy vertex $p$; we will show that $v \in C$. Because $u$ and $v$ are in $\beta$-agreement, $u$ also belongs to the set $\widetilde{\mathrm{Agr}}_\beta(v)$. However, the intersection of $\widetilde{\mathrm{Agr}}_\beta(v)$ and $\widetilde{\mathrm{Agr}}_\beta(p)$ at the vertex $u$ implies, based on Claim 20, that vertex $v$ necessarily belongs to the cluster $C$.
It remains to prove that if $u$ and $v$ belong to disjoint important groups, they cannot be part of the same cluster. Suppose, to the contrary, that both $u$ and $v$ belong to the same cluster $C$, formed around a heavy vertex $p$. Note that $u$ is heavy, as it belongs to some important group. As such, since $u$ is in $C$, the sets $\widetilde{\mathrm{Agr}}_\beta(u)$ and $\widetilde{\mathrm{Agr}}_\beta(p)$ are not disjoint (as both contain $u$). According to Claim 20, as both $u$ and $p$ are heavy, $p$ must also be in $\widetilde{\mathrm{Agr}}_\beta(u)$. Similarly, $p$ belongs to $\widetilde{\mathrm{Agr}}_\beta(v)$ as well. Thus, $\widetilde{\mathrm{Agr}}_\beta(u)$ and $\widetilde{\mathrm{Agr}}_\beta(v)$ intersect at $p$. However, by Claim 20, this intersection implies that $u$ and $v$ must be in agreement, contradicting Claim 23.
3.2.2 S-Structural Clustering
So far we have provided an algorithm for $V$-Structural Clustering. However, what we really need for Best-Fit Ultrametrics is the more general problem of $S$-Structural Clustering, as defined in Lemma 24. Note that it is different from Agreement Correlation Clustering on the induced subgraph $G_w[S]$, because $S$-Structural Clustering also considers edges with only one endpoint in $S$.
Lemma 24.
Suppose $\beta \le \beta_0$ for a small enough parameter $\beta_0$. For every $S \subseteq V$ we can output a set of disjoint clusters covering $S$. This clustering ensures that for every important group of vertices $K \subseteq S$, there is a cluster $C$ such that $K \subseteq C$, and $C$ does not intersect any other important group of vertices contained in $S$. Moreover, every cluster is everywhere dense.
In the rest of this section we provide a construction of $S$-Structural Clustering by reducing the problem to $V$-Structural Clustering. We start by generalizing Definitions 15 and 16 presented earlier.
Definition 25 (subset agreement).
Two vertices $u, v \in S$ are in $\beta$-agreement inside $S$ if $|(N(u) \triangle N(v)) \cap S| + |N(u) \setminus S| + |N(v) \setminus S| < \beta \cdot \max(|N(u)|, |N(v)|)$, which means that $u$ and $v$ share most of their neighbors inside $S$. Denote by $\mathrm{Agr}^S_\beta(u)$ the set of vertices that are in $\beta$-agreement with $u$ inside $S$.
Definition 26 (subset heavy).
A vertex $u \in S$ is $\lambda$-heavy inside $S$ if $|\mathrm{Agr}^S_\beta(u) \cap N(u)| \ge (1 - \lambda) \cdot |N(u)|$. This means that most of its neighbors are in agreement with $u$ inside $S$. Denote by $\mathrm{heavy}^S(u)$ the $\lambda$-heaviness indicator of vertex $u$ inside $S$.
Note that Definitions 25 and 26 are more general than Definitions 15 and 16, since we can derive the latter by substituting $S = V$. The extra term, which accounts for neighbors outside of $S$, has an intuitive meaning that will become clear later during the reduction. Similarly to the previous section, we need to approximate the new sets $\mathrm{Agr}^S_\beta(u)$ and indicators $\mathrm{heavy}^S(u)$, which we do in Section 3.2.3.
We finally prove Lemma 24 by reducing $S$-Structural Clustering, for a set $S$ in the correlation clustering instance $G_w$, to $V$-Structural Clustering in a specially constructed instance $H_S$. The instance $H_S$ can be seen as a transformation of $G_w$ that preserves all internal edges within $S$ and replaces all external neighbors of internal vertices with dummy vertices, ensuring that our subset-specific definitions of agreement (Definition 25) and heaviness (Definition 26) applied to $G_w$ correspond exactly to the original Definitions 15 and 16 applied to $H_S$. Therefore, to produce the desired $S$-Structural Clustering, it suffices to run Algorithm 1 on $H_S$ with the same parameters. The complete proof can be found in the full version.
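A sketch of this transformation, under our own naming (`build_subset_instance`, tuple-labeled dummy vertices); `N` maps each vertex of the original instance to its neighbor set.

```python
def build_subset_instance(S, N):
    """Construct an instance in the spirit of H_S above: keep all edges with
    both endpoints in S, and replace every neighbor outside S of a vertex u in S
    by a fresh dummy vertex adjacent only to u."""
    S = set(S)
    H = {u: set() for u in S}
    fresh = 0
    for u in S:
        for v in N[u]:
            if v == u:
                continue
            if v in S:
                H[u].add(v)
            else:
                dummy = ("dummy", fresh)
                fresh += 1
                H[u].add(dummy)
                H[dummy] = {u}
    return H
```

Every external neighbor of a vertex $u \in S$ becomes a private dummy neighbor of $u$, so neighborhood sizes are preserved while vertices of $S$ share no neighbors outside $S$; this is the effect that the extra term in Definitions 25 and 26 accounts for.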
3.2.3 Computing Agreements
Building on the algorithm from Section 3.2.2, we now describe its adaptation to the semi-streaming model. Since this algorithm only requires computing approximations of agreements inside $S$ and of heaviness queries, we prove:
Lemma 27.
The following statements hold with high probability:
1. For a given $S$ and every pair $u, v \in S$, we can output:
- "yes" if $u$ and $v$ are in $\beta$-agreement inside $S$;
- "no" if $u$ and $v$ are not in $\beta'$-agreement inside $S$, for a slightly larger parameter $\beta' > \beta$.
2. For every $u \in S$, we can output:
- "yes" if $u$ is $\lambda$-heavy inside $S$;
- "no" if $u$ is not $\lambda'$-heavy inside $S$, for a slightly larger parameter $\lambda' > \lambda$.
We can now establish S-Structural Clustering in the semi-streaming model.
Theorem 28.
Suppose $\beta \le \beta_0$ for a small enough parameter $\beta_0$. Given access to the sketches of all vertices in $V$, with high probability, for every $S \subseteq V$ we can output a set of disjoint clusters covering $S$. This clustering ensures that for every important group of vertices $K \subseteq S$, there is a cluster $C$ such that $K \subseteq C$, and $C$ does not intersect any other important group of vertices contained in $S$. Moreover, every cluster is everywhere dense.
3.3 Main Algorithm
Due to space limitations, we provide only a description and pseudocode of the main algorithm.
To run our main algorithm it suffices to have access to certain black boxes established in the previous sections. Ideally, we would like to have access to a summary of the input distances, to estimates of neighborhood sizes, and to be able to repeatedly compute $S$-Structural Clustering for instances given by an adaptive adversary. We show that even though we cannot generally guarantee the last requirement, it suffices to guarantee it for a particular (technical) type of adaptive adversary.
We first need the following definitions. Let $W$ be the set of weights stored in our summary of the input distances. For a weight $w$, let $\mathrm{next}(w)$ be the smallest weight in $W$ that is larger than $w$. Similarly, for any $w$ we let $\mathrm{prev}(w)$ be the largest value in $W$ that is smaller than $w$. We say that $W$ is a compressed set if, for every vertex $u$ and every weight $w$, there exists a weight $w' \in W$ with $w' \le w$ such that $N_{w'}(u)$ and $N_w(u)$ have roughly the same size. Finally, let $f$ be a function of the form $f(x) = c \cdot x$, for a sufficiently small constant $c$.
Our algorithm (see Algorithm 2 for the pseudocode) is a divisive algorithm, running $S$-Structural Clustering at each level to divide a cluster. However, it then performs a different division strategy for the largest cluster. This different strategy for the largest cluster allows us to guarantee that each vertex only participates in a logarithmic number of $S$-Structural Clustering computations, and is only possible if the size of the largest cluster has not dropped by a constant factor.
More formally, our algorithm takes as arguments a set $S$ (initially the whole vertex set $V$) and a distance $w$ (initially the maximum distance). First, it creates a tree node at distance $w$ from the leaves, whose leaf-descendants are all the vertices in $S$. Then it uses an $S$-Structural Clustering subroutine, and for each resulting cluster $C$ whose size is at most a constant fraction of $|S|$, it recurses on $(C, \mathrm{prev}(w))$. The roots of the trees created from each of these recursions then become children of the new node.
Subsequently, for the largest cluster we perform the following postprocessing. Let $S'$ be this largest cluster and let $w' = \mathrm{prev}(w)$.
- If there are at most $f(|S'|)$ vertices in $S'$ with large estimated degree (larger than a fixed constant fraction of $|S'|$), then we recurse on $(S', w')$; the root of the tree created from this recursion becomes a child of the node created for $S$.
- Otherwise, we let $B$ contain the vertices of $S'$ whose estimated degree is small (less than that fraction of $|S'|$), and recurse on $(B, w')$. The root of the tree created from this recursion becomes a child of the node created for $S$, and then we update $S' \leftarrow S' \setminus B$. Finally, we repeat the postprocessing again, but this time on the updated set $S'$ (instead of the original one) and at level $\mathrm{prev}(w')$ (instead of $w'$).
The idea of the postprocessing is that nodes whose degree drops significantly cannot be in a huge cluster without incurring a big cost. The challenging part is showing that keeping the rest of the nodes together in $S'$ is sufficient.
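The description above can be condensed into the following high-level sketch. The subroutines `cluster` and `est_degree` are stand-ins for the $S$-Structural Clustering subroutine and the sketch-based global degree estimates, and the constants `FRAC` and `SMALL` are placeholders of ours for the paper's thresholds.

```python
FRAC, SMALL = 2 / 3, 1 / 3        # placeholder constants for the paper's thresholds

def prev_weight(w, weights):
    """Largest summarized weight strictly below w (None if there is none)."""
    smaller = [x for x in weights if x < w]
    return max(smaller) if smaller else None

def build_ultrametric(S, w, weights, cluster, est_degree):
    """High-level sketch of the divisive procedure described above (Algorithm 2
    in the paper). `cluster(S, w)` stands in for the S-Structural Clustering
    subroutine and `est_degree(v, w)` for the sketch-based *global* degree
    estimate at level w."""
    node = {"height": w, "members": set(S), "children": []}
    if len(S) <= 1 or w is None:
        return node
    parts = cluster(S, w)
    big = max(parts, key=len)
    for C in parts:                        # ordinary recursion on the smaller clusters
        if C is not big:
            node["children"].append(
                build_ultrametric(C, prev_weight(w, weights), weights, cluster, est_degree))
    Sp, wp = set(big), prev_weight(w, weights)
    while Sp:                              # postprocessing of the largest cluster
        if wp is None:
            node["children"].append(build_ultrametric(Sp, wp, weights, cluster, est_degree))
            break
        large = [v for v in Sp if est_degree(v, wp) > SMALL * len(Sp)]
        if len(large) <= FRAC * len(Sp):   # the huge cluster is dissolving: recurse normally
            node["children"].append(build_ultrametric(Sp, wp, weights, cluster, est_degree))
            break
        B = {v for v in Sp if est_degree(v, wp) <= SMALL * len(Sp)}
        if B:                              # peel off the low-degree vertices
            node["children"].append(build_ultrametric(B, wp, weights, cluster, est_degree))
            Sp -= B
        wp = prev_weight(wp, weights)
    return node
```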
References
- [1] Richa Agarwala, Vineet Bafna, Martin Farach, Mike Paterson, and Mikkel Thorup. On the approximability of numerical taxonomy. SIAM J. Comput, 28(3):1073–1085, 1999.
- [2] Kook Jin Ahn, Graham Cormode, Sudipto Guha, Andrew McGregor, and Anthony Wirth. Correlation clustering in data streams. Algorithmica, 83:1980–2017, 2021. doi:10.1007/S00453-021-00816-9.
- [3] Nir Ailon and Moses Charikar. Fitting tree metrics: Hierarchical clustering and phylogeny. SIAM J. Comput., 40(5):1275–1291, 2011. Announced at FOCS 2005. doi:10.1137/100806886.
- [4] Federico Ardila. Subdominant matroid ultrametrics. Annals of Combinatorics, 8:379–389, 2005.
- [5] Sepehr Assadi, Vaggos Chatziafratis, Jakub Lacki, Vahab Mirrokni, and Chen Wang. Hierarchical clustering in graph streams: Single-pass algorithms and space lower bounds. In COLT, volume 178 of Proceedings of Machine Learning Research, pages 4643–4702. PMLR, 2022. URL: https://proceedings.mlr.press/v178/assadi22a.html.
- [6] Sepehr Assadi and Chen Wang. Sublinear time and space algorithms for correlation clustering via sparse-dense decompositions. In ITCS, volume 215 of LIPIcs, pages 10:1–10:20. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.ITCS.2022.10.
- [7] Nikhil Bansal, Avrim Blum, and Shuchi Chawla. Correlation clustering. In 43rd Symposium on Foundations of Computer Science (FOCS 2002), page 238. IEEE Computer Society, 2002. doi:10.1109/SFCS.2002.1181947.
- [8] Soheil Behnezhad, Moses Charikar, Vincent Cohen-Addad, Alma Ghafari, and Weiyun Ma. Fully dynamic correlation clustering: Breaking 3-approximation. CoRR, abs/2404.06797, 2024. doi:10.48550/arXiv.2404.06797.
- [9] Soheil Behnezhad, Moses Charikar, Weiyun Ma, and Li-Yang Tan. Almost 3-approximate correlation clustering in constant rounds. In 63rd IEEE Annual Symposium on Foundations of Computer Science, FOCS, pages 720–731, 2022. doi:10.1109/FOCS54457.2022.00074.
- [10] Soheil Behnezhad, Moses Charikar, Weiyun Ma, and Li-Yang Tan. Single-pass streaming algorithms for correlation clustering. In Nikhil Bansal and Viswanath Nagarajan, editors, Proceedings of the 2023 ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 819–849. SIAM, 2023. doi:10.1137/1.9781611977554.CH33.
- [11] Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi Hajiaghayi, Cliff Stein, and Madhu Sudan. Fully dynamic maximal independent set with polylogarithmic update time. In FOCS, pages 382–405. IEEE Computer Society, 2019. doi:10.1109/FOCS.2019.00032.
- [12] Daniel Irving Bernstein. L-infinity optimization to Bergman fans of matroids with an application to phylogenetics. SIAM Journal on Discrete Mathematics, 34(1):701–720, 2020. doi:10.1137/18M1218741.
- [13] Daniel Irving Bernstein and Colby Long. L-infinity optimization to linear spaces and phylogenetic trees. SIAM Journal on Discrete Mathematics, 31(2):875–889, 2017. doi:10.1137/16M1101027.
- [14] Ronald L Breiger, Scott A Boorman, and Phipps Arabie. An algorithm for clustering relational data with applications to social network analysis and comparison with multidimensional scaling. Journal of Mathematical Psychology, 12(3):328–383, 1975.
- [15] Mélanie Cambus, Fabian Kuhn, Etna Lindy, Shreyas Pai, and Jara Uitto. A (3 + ε)-approximate correlation clustering algorithm in dynamic streams. In Proceedings of the 2024 ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 2861–2880. SIAM, 2024. doi:10.1137/1.9781611977912.101.
- [16] Nairen Cao, Vincent Cohen-Addad, Euiwoong Lee, Shi Li, Alantha Newman, and Lukas Vogl. Understanding the cluster linear program for correlation clustering. In STOC, pages 1605–1616. ACM, 2024. doi:10.1145/3618260.3649749.
- [17] Amir Carmel, Debarati Das, Evangelos Kipouridis, and Evangelos Pipis. Fitting tree metrics and ultrametrics in data streams, 2025. arXiv:2504.17776.
- [18] L. L. Cavalli-Sforza and A. W. F. Edwards. Phylogenetic analysis models and estimation procedures. The American Journal of Human Genetics, 19:233–257, 1967.
- [19] Moses Charikar and Ruiquan Gao. Improved approximations for ultrametric violation distance. In David P. Woodruff, editor, Proceedings of the 2024 ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 1704–1737. SIAM, 2024. doi:10.1137/1.9781611977912.68.
- [20] Moses Charikar, Venkatesan Guruswami, and Anthony Wirth. Clustering with qualitative information. J. Comput. Syst. Sci., 71(3):360–383, 2005. Announced at FOCS 2003. doi:10.1016/J.JCSS.2004.10.012.
- [21] Shuchi Chawla, Konstantin Makarychev, Tselil Schramm, and Grigory Yaroslavtsev. Near optimal LP rounding algorithm for correlation clustering on complete and complete k-partite graphs. In STOC, pages 219–228. ACM, 2015. doi:10.1145/2746539.2746604.
- [22] Victor Chepoi and Bernard Fichet. ℓ∞-approximation via subdominants. Journal of Mathematical Psychology, 44(4):600–616, 2000.
- [23] Flavio Chierichetti, Nilesh N. Dalvi, and Ravi Kumar. Correlation clustering in mapreduce. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 641–650. ACM, 2014. doi:10.1145/2623330.2623743.
- [24] Vincent Cohen-Addad, Debarati Das, Evangelos Kipouridis, Nikos Parotsidis, and Mikkel Thorup. Fitting distances by tree metrics minimizing the total error within a constant factor. In 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021, pages 468–479. IEEE, 2021. doi:10.1109/FOCS52979.2021.00054.
- [25] Vincent Cohen-Addad, Rémi de Joannis de Verclos, and Guillaume Lagarde. Improving ultrametrics embeddings through coresets. In ICML, volume 139 of Proceedings of Machine Learning Research, pages 2060–2068. PMLR, 2021. URL: http://proceedings.mlr.press/v139/cohen-addad21a.html.
- [26] Vincent Cohen-Addad, Chenglin Fan, Euiwoong Lee, and Arnaud De Mesmay. Fitting metrics and ultrametrics with minimum disagreements. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 301–311. IEEE, 2022. doi:10.1109/FOCS54457.2022.00035.
- [27] Vincent Cohen-Addad, Karthik C. S., and Guillaume Lagarde. On efficient low distortion ultrametric embedding. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 2078–2088. PMLR, 2020. URL: http://proceedings.mlr.press/v119/cohen-addad20a.html.
- [28] Vincent Cohen-Addad, Silvio Lattanzi, Andreas Maggiori, and Nikos Parotsidis. Online and consistent correlation clustering. In ICML, volume 162 of Proceedings of Machine Learning Research, pages 4157–4179. PMLR, 2022. URL: https://proceedings.mlr.press/v162/cohen-addad22a.html.
- [29] Vincent Cohen-Addad, Silvio Lattanzi, Slobodan Mitrovic, Ashkan Norouzi-Fard, Nikos Parotsidis, and Jakub Tarnawski. Correlation clustering in constant many parallel rounds. In ICML, volume 139 of Proceedings of Machine Learning Research, pages 2069–2078, 2021. URL: http://proceedings.mlr.press/v139/cohen-addad21b.html.
- [30] Vincent Cohen-Addad, Euiwoong Lee, Shi Li, and Alantha Newman. Handling correlated rounding error via preclustering: A 1.73-approximation for correlation clustering. In FOCS, pages 1082–1104. IEEE, 2023. doi:10.1109/FOCS57990.2023.00065.
- [31] Vincent Cohen-Addad, Euiwoong Lee, and Alantha Newman. Correlation clustering with Sherali-Adams. In FOCS, pages 651–661. IEEE, 2022. doi:10.1109/FOCS54457.2022.00068.
- [32] Vincent Cohen-Addad, David Rasmussen Lolck, Marcin Pilipczuk, Mikkel Thorup, Shuyi Yan, and Hanwen Zhang. Combinatorial correlation clustering. In STOC, pages 1617–1628, 2024. doi:10.1145/3618260.3649712.
- [33] Mina Dalirrooyfard, Konstantin Makarychev, and Slobodan Mitrovic. Pruned pivot: Correlation clustering algorithm for dynamic, parallel, and local computation models. CoRR, abs/2402.15668, 2024. doi:10.48550/arXiv.2402.15668.
- [34] William H.E. Day. Computational complexity of inferring phylogenies from dissimilarity matrices. Bulletin of Mathematical Biology, 49(4):461–467, 1987.
- [35] Patrik D’haeseleer. How does gene expression clustering work? Nature Biotechnology, 23:1499–1501, 2005.
- [36] Andreas Dress, Barbara Holland, Katharina T. Huber, Jack H. Koolen, Vincent Moulton, and Jan Weyer-Menkhoff. Δ additive and Δ ultra-additive maps, Gromov's trees, and the Farris transform. Discrete Applied Mathematics, 146(1):51–73, 2005.
- [37] Geet Duggal, Rob Patro, Emre Sefer, Hao Wang, Darya Filippova, Samir Khuller, and Carl Kingsford. Resolving spatial inconsistencies in chromosome conformation measurements. Algorithms for Molecular Biology, 8(1):1–10, 2013.
- [38] Chenglin Fan, Benjamin Raichel, and Gregory Van Buskirk. Metric violation distance: Hardness and approximation. In Artur Czumaj, editor, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 196–209. SIAM, 2018. doi:10.1137/1.9781611975031.14.
- [39] Martin Farach, Sampath Kannan, and Tandy Warnow. A robust model for finding optimal evolutionary trees. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pages 137–145, 1993. doi:10.1145/167088.167132.
- [40] Anna C. Gilbert, Albert Gu, Christopher Ré, Atri Rudra, and Mary Wootters. Sparse recovery for orthogonal polynomial transforms. In Artur Czumaj, Anuj Dawar, and Emanuela Merelli, editors, 47th International Colloquium on Automata, Languages, and Programming, ICALP 2020, July 8-11, 2020, Saarbrücken, Germany (Virtual Conference), volume 168 of LIPIcs, pages 58:1–58:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.ICALP.2020.58.
- [41] Anna C. Gilbert and Lalit Jain. If it ain’t broke, don’t fix it: Sparse metric repair. In 55th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2017, Monticello, IL, USA, October 3-6, 2017, pages 612–619. IEEE, 2017. doi:10.1109/ALLERTON.2017.8262793.
- [42] Boulos Harb, Sampath Kannan, and Andrew McGregor. Approximating the best-fit tree under ℓp norms. In APPROX-RANDOM, pages 123–133, 2005. doi:10.1007/11538462_11.
- [43] Patrick K. Kimes, Yufeng Liu, David Neil Hayes, and James Stephen Marron. Statistical significance for hierarchical clustering. Biometrics, 73(3):811–821, 2017.
- [44] Evangelos Kipouridis. Fitting tree metrics with minimum disagreements. In 31st Annual European Symposium on Algorithms (ESA 2023), 2023.
- [45] Mirko Křivánek. The complexity of ultrametric partitions on graphs. Information Processing Letters, 27(5):265–270, 1988. doi:10.1016/0020-0190(88)90090-7.
- [46] Bipul Kumar, Arun Sharma, Sanket Vatavwala, and Prashant Kumar. Digital mediation in business-to-business marketing: A bibliometric analysis. Industrial Marketing Management, 85:126–140, 2020.
- [47] Pei Lee, Laks VS Lakshmanan, and Evangelos E Milios. Incremental cluster evolution tracking from highly dynamic network data. In 2014 IEEE 30th International Conference on Data Engineering, pages 3–14. IEEE, 2014.
- [48] Sanghoon Lee and M.M. Crawford. Unsupervised multistage image classification using hierarchical clustering with a Bayesian similarity measure. IEEE Transactions on Image Processing, 14(3):312–320, 2005. doi:10.1109/TIP.2004.841195.
- [49] Sebastian Lühr and Mihai Lazarescu. Incremental clustering of dynamic data streams using connectivity based representative points. Data & Knowledge Engineering, 68(1):1–27, 2009. doi:10.1016/J.DATAK.2008.08.006.
- [50] Bin Ma, Lusheng Wang, and Louxin Zhang. Fitting distances by tree metrics with increment error. Journal of Combinatorial Optimization, 3:213–225, 1999. doi:10.1023/A:1009837726913.
- [51] Konstantin Makarychev and Sayak Chakrabarty. Single-pass pivot algorithm for correlation clustering. Keep it simple! In Advances in Neural Information Processing Systems, volume 36, pages 6412–6421, 2023.
- [52] Pedro Pereira Rodrigues, João Gama, and João Pedroso. Hierarchical clustering of time-series data streams. IEEE Transactions on Knowledge and Data Engineering, 20(5):615–627, 2008. doi:10.1109/TKDE.2007.190727.
- [53] Peter H.A. Sneath and Robert R. Sokal. Numerical taxonomy. Nature, 193(4818):855–860, 1962.
- [54] Peter H.A. Sneath and Robert R. Sokal. Numerical taxonomy. The principles and practice of numerical classification. Freeman, 1963.