Lipschitz Decompositions of Finite $\ell_p$ Metrics
Abstract
Lipschitz decomposition is a useful tool in the design of efficient algorithms involving metric spaces. While many bounds are known for different families of finite metrics, the optimal parameters for $n$-point subsets of $\ell_p$, for $2<p<\infty$, remained open, see e.g. [Naor, SODA 2017]. We make significant progress on this question and establish the bound $\beta \lesssim_p (\log n)^{1-1/p}$. Building on prior work, we demonstrate applications of this result to two problems, high-dimensional geometric spanners and distance labeling schemes. In addition, we sharpen a related decomposition bound for $\ell_p$, $1<p<2$, due to Filtser and Neiman [Algorithmica 2022].
Keywords and phrases: Lipschitz decompositions, metric embeddings, geometric spanners

Funding: Robert Krauthgamer: Work partially supported by the Israel Science Foundation grant #1336/23, by the Israeli Council for Higher Education (CHE) via the Weizmann Data Science Research Center, and by a research grant from the Estate of Harry Schutzman.

2012 ACM Subject Classification: Theory of computation → Computational geometry; Theory of computation → Sparsification and spanners

Acknowledgements: We thank Arnold Filtser and Ofer Neiman for helpful discussions.

Editors: Oswin Aichholzer and Haitao Wang
1 Introduction
The pursuit of approximating metric spaces by simpler structures has inspired the development of fundamental concepts, such as graph spanners [47, 46] and low-distortion embeddings into various spaces [37, 7], both of which have a wide range of algorithmic applications. Many of these results, including for instance [7, 48, 17, 30, 6, 19], rely on various notions of decomposition of a metric space into low-diameter clusters, and these decompositions are most often randomized. One extensively studied notion, see e.g. [13, 24, 17, 25], is Lipschitz decomposition (also called separating decomposition), which informally is a random partition of a metric space into low-diameter clusters, with a guarantee that nearby points are likely to belong to the same cluster.
Definition 1.1 (Lipschitz decomposition [7]).
Let $(X,d_X)$ be a metric space. A distribution $\mathcal{D}$ over partitions of $X$ is called $(\beta,\Delta)$-Lipschitz if

1. for every partition $P\in\operatorname{supp}(\mathcal{D})$, all clusters $C\in P$ satisfy $\operatorname{diam}(C)\le\Delta$; and
2. for all $x,y\in X$, $\Pr_{P\sim\mathcal{D}}[P(x)\ne P(y)]\le\beta\cdot\frac{d_X(x,y)}{\Delta}$,

where $P(x)$ denotes the cluster of $P$ containing $x$, and $\operatorname{diam}(C):=\sup_{u,v\in C} d_X(u,v)$.
Typical applications require such decompositions where $\Delta$ is not known in advance, or even for multiple values of $\Delta$ (say, for every power of $2$). We naturally seek a small $\beta$ and thus define the (optimal) decomposition parameter of $X$ as
$$\beta(X):=\inf\bigl\{\beta>0:\ \text{for every }\Delta>0,\ X\text{ admits a }(\beta,\Delta)\text{-Lipschitz decomposition}\bigr\},$$
and we extend this to a family of metric spaces $\mathcal{F}$ by defining $\beta(\mathcal{F}):=\sup_{X\in\mathcal{F}}\beta(X)$.
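As a quick illustration of these definitions (a standard example, not specific to this paper), the real line satisfies $\beta(\mathbb{R})\le 1$: for any $\Delta>0$, partition $\mathbb{R}$ into the intervals $[(k+\theta)\Delta,(k+1+\theta)\Delta)$ for $k\in\mathbb{Z}$, where $\theta$ is uniform in $[0,1)$. Every cluster has diameter at most $\Delta$, and two points $x,y$ are separated exactly when a random boundary falls between them, which happens with probability $\min\{1,|x-y|/\Delta\}$.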
Obtaining bounds on the decomposition parameter of various metrics (and families of metrics) is of significant algorithmic importance, and we list in Table 1 several known bounds. One fundamental example where we know of (nearly) tight bounds is the metric space $\ell_p^d$, for $p\ge1$, which stands for $\mathbb{R}^d$ equipped with the $\ell_p$ norm. For $1\le p\le 2$, we have $\beta(\ell_p^d)\asymp d^{1/p}$ due to [13], and for $2\le p<\infty$ we have $\beta(\ell_p^d)\asymp_p\sqrt{d}$ due to [40] (see discussion therein about an incorrect claim made in [13]).¹ Observe that an upper bound for $\ell_p^d$ immediately extends to all subsets of it, implying in particular a bound for the family of all finite subsets of $\ell_p^d$. These bounds depend on $d$, and are thus most suitable for low-dimensional settings.

¹ Throughout, the notation $\lesssim$ (and $\asymp$) hides universal constant factors, and $\lesssim_p$ (and $\asymp_p$) hides factors that depend only on $p$.
We focus on finite metrics $X\subset\ell_p$, aiming to bound $\beta(X)$ in terms of $n=|X|$, which is often useful in high-dimensional settings. For instance, it is well-known that $\beta(X)\lesssim\log n$ for the family of all $n$-point metric spaces $X$ [7]. To write this assertion more formally, define $\beta_n(Y):=\sup\{\beta(X):X\subseteq Y,\ |X|\le n\}$, and then the above asserts that $\beta_n(\ell_\infty)\lesssim\log n$, where we used that every finite metric embeds isometrically in $\ell_\infty$. For the family of $n$-point Euclidean metrics, combining [13] with the famous JL Lemma [27] immediately yields $\beta_n(\ell_2)\lesssim\sqrt{\log n}$, which is tight by [13]. For $n$-point metrics in $\ell_p$, $1\le p\le 2$, we have $\beta_n(\ell_p)\lesssim_p(\log n)^{1/p}$ due to [35, 40], nearly matching the lower bound of $\Omega((\log n)^{1/p})$ from [13]. However, for $n$-point metrics in $\ell_p$, $2<p<\infty$, to the best of our knowledge, the only known upper bound is $\beta_n(\ell_p)\lesssim\log n$, obtained by trivially applying the results for general $n$-point metric spaces. The following question was raised by Naor [40, Question 1], see also [41, Question 83].
Question 1.2 ([40]).
Is it true that for every $p\in(2,\infty)$, $\beta_n(\ell_p)=o_p(\log n)$? More ambitiously, is it true that $\beta_n(\ell_p)\lesssim_p\sqrt{\log n}$?
Our main result, in Theorem 1.3, answers the first part of this question in the affirmative. Additionally, we show in Section 2 an analogous result for another notion of decomposability that was introduced in [22] (which we call capped decomposition) and is particularly suited for high-dimensional geometric spanners.
Table 1: Known bounds on the decomposition parameter for various families of metric spaces; here $d$ denotes the dimension and $n$ the number of points.

| Family of Metrics | $\beta$ or $\beta_n$ | Reference | Comments |
|---|---|---|---|
| $\ell_p^d$ spaces, $1\le p\le 2$ | $\beta\asymp d^{1/p}$ | [13] | |
| $\ell_p^d$ spaces, $2\le p<\infty$ | $\beta\asymp_p\sqrt{d}$ | [40] | |
| finite metrics | $\beta_n\asymp\log n$ | [7] | |
| $\ell_2$ space (Euclidean) | $\beta_n\asymp\sqrt{\log n}$ | [13] | |
| $\ell_p$ spaces, $1\le p\le 2$ | $\beta_n\lesssim_p(\log n)^{1/p}$ | [35, 40] | |
| $\ell_p$ spaces, $2<p<\infty$ | $\beta_n\lesssim_p(\log n)^{1-1/p}$ | Theorem 1.3 | conjectured $\asymp_p\sqrt{\log n}$ |
| doubling constant $\lambda$ | $\beta\lesssim\log\lambda$ | [24] | |
| $K_r$-minor-free graphs | $\beta\lesssim r$ | [1, 18] | conjectured $\asymp\log r$ |
| graphs with genus $g$ | $\beta\lesssim\log g$ | [36, 1] | |
| graphs with treewidth $k$ | $\beta\asymp\sqrt{\log k}$ | [20] | |
Geometric Spanners.
A spanner with stretch $t\ge1$ (in short, a $t$-spanner) for a finite metric $(X,d_X)$ is a graph $G=(X,E)$ that satisfies $d_X(x,y)\le d_G(x,y)\le t\cdot d_X(x,y)$ for all $x,y\in X$, meaning that the shortest-path distance in the graph approximates the original distance within factor $t$, where by definition every edge $(x,y)\in E$ has weight $d_X(x,y)$. Of particular interest are spanners that are sparse, meaning they contain a small number of edges, ideally linear in $n=|X|$. Another important parameter is the lightness of a spanner, defined as the total weight of its edges divided by the weight of a minimum spanning tree of $(X,d_X)$. Clearly, the lightness is at least $1$. These spanners are called geometric because the input is a metric space (rather than a graph). They are natural and useful representations of a metric, and as such, have been studied extensively, see the surveys [16, 49, 2]. Spanners for $n$-point metrics in low-dimensional spaces (e.g., in fixed-dimensional Euclidean space or doubling metrics) are well studied and well understood. For instance, metrics with bounded doubling dimension admit $(1+\epsilon)$-spanners with near-optimal sparsity and lightness [12, 34].
However, in high-dimensional spaces, our understanding of spanners is rather limited. Har-Peled, Indyk, and Sidiropoulos [25] showed that every $n$-point Euclidean metric admits an $O(t)$-spanner with $n^{1+O(1/t^2)}$ edges, for every $t\ge1$. Filtser and Neiman [22] extended this result to all metric spaces that admit a certain decomposition that we call capped decomposition (Definition 2.4), showing that in those spaces, it is possible to construct spanners that are both sparse and light. In particular, they showed that every $n$-point subset of $\ell_2$ has an $O(t)$-spanner with $n^{1+O(1/t^2)}$ edges and lightness $n^{O(1/t^2)}$, for every $t\ge1$. It remained open whether the spaces $\ell_p$ for $2<p<\infty$ admit the aforementioned capped decomposition. To the best of our knowledge, all known spanners for these spaces have a tradeoff of stretch $O(t)$ with sparsity $n^{1+O(1/t)}$, which holds trivially for all metric spaces, e.g., via the greedy construction sketched below.
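For concreteness, the trivial tradeoff just mentioned is realized by the classic greedy spanner (a folklore construction analyzed via a girth argument, included here only for illustration and not part of the constructions of [25, 22]):

```python
import heapq

def spanner_distance(adj, s, target, cutoff):
    """Dijkstra from s, abandoning paths longer than cutoff."""
    best = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > best.get(u, float("inf")) or d > cutoff:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def greedy_spanner(n, dist, t):
    """Classic greedy (2t-1)-spanner: scan pairs by increasing distance and
    add an edge only if the spanner built so far already stretches the pair
    by more than 2t-1; a girth argument gives n^(1+O(1/t)) edges."""
    pairs = sorted((dist(i, j), i, j) for i in range(n) for j in range(i + 1, n))
    adj = [[] for _ in range(n)]
    edges = []
    for d, i, j in pairs:
        if spanner_distance(adj, i, j, (2 * t - 1) * d) > (2 * t - 1) * d:
            adj[i].append((j, d))
            adj[j].append((i, d))
            edges.append((i, j, d))
    return edges
```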
1.1 Our Results
Our main contribution is the construction of a Lipschitz decomposition for finite $\ell_p$ metrics, $2<p<\infty$, as follows.
Theorem 1.3.
Let $2<p<\infty$. Then $\beta_n(\ell_p)\lesssim_p(\log n)^{1-1/p}$. That is, for every $n$-point metric $X\subset\ell_p$ and every $\Delta>0$, there exists an $\bigl(O_p((\log n)^{1-1/p}),\Delta\bigr)$-Lipschitz decomposition of $X$.
Previously, this bound was known only for the extreme values $p=2$ and $p\gtrsim\log n$, and in these cases it is actually tight. More precisely, for $p=2$ our bound coincides with the well-known bound $\beta_n(\ell_2)\asymp\sqrt{\log n}$ [13], and for $p\gtrsim\log n$ it is known that $\beta_n(\ell_p)\asymp\log n$, because all $n$-point metrics embed into $\ell_p$ with $O(1)$-distortion [38]. For intermediate values, say fixed $p\in(2,\infty)$, our bound is the first one to improve over the bound $\beta_n(\ell_p)\lesssim\log n$, which applies to all $n$-point metrics, and it leaves a gap from the lower bound $\beta_n(\ell_p)\gtrsim\sqrt{\log n}$ that follows from Dvoretzky's Theorem [15].
We compare our bound with those for other metric spaces in Table 1.
The proof of Theorem 1.3 appears in Section 2.1, and has interesting technical features. It relies on two known decompositions of finite metrics, one for general metrics and one for Euclidean metrics, that are composed via a metric-embedding tool called the Mazur map. Our decomposition method is data-dependent, i.e., not oblivious to the data, and we discuss this intriguing aspect in Sections 2.1 and 5.
Note added in proof.
Shortly after this work was posted online, two groups working in parallel [31, 42] improved our result in Theorem 1.3 to $\beta_n(\ell_p)\lesssim_p\sqrt{\log n}$ for every $2<p<\infty$, thereby resolving in the affirmative also the second part of Question 1.2. These two papers design a recursive process that relies on the technique developed here for proving Theorem 1.3; see each paper for its dependence on $p$.
Geometric Spanners for $\ell_p$, $2<p<\infty$.
We then use similar ideas to obtain a new bound for another notion of decomposability that was introduced in [22], and which we call capped decomposition; this immediately yields geometric spanners in $\ell_p$, for $2<p<\infty$. While for $p=2$ these spanners coincide with the known bounds from [25, 22], for fixed $p\in(2,\infty)$, our spanners are the first improvement over the trivial bounds that hold for all metric spaces.
Theorem 1.4.
Let $2<p<\infty$ and $t\ge1$. Then every $n$-point metric $X\subset\ell_p$ admits an $O(t)$-spanner of size $n^{1+O_p(1/t^{q})}$ and lightness $n^{O_p(1/t^{q})}$, where $q$ is such that $\frac1p+\frac1q=1$.
The proof of this theorem appears in Section 2.2, and includes both the spanner construction, which follows [22], and our new bound for capped decomposition, which is the main technical result.
Geometric Spanners for $\ell_p$, $1<p\le2$.
We also sharpen the known spanner results for $\ell_p$ spaces with $1<p<2$, which say that every $n$-point subset admits an $O(t)$-spanner with $n^{1+O(\log t/t^{p})}$ edges and lightness $n^{O(\log t/t^{p})}$, for every $t\ge1$ [22]. We improve upon this result by eliminating the $\log t$ factor in the exponent.
Theorem 1.5.
Let $1<p\le2$ and $t\ge1$. Then every $n$-point metric $X\subset\ell_p$ admits an $O(t)$-spanner of size $n^{1+O(1/t^{p})}$ and lightness $n^{O(1/t^{p})}$.
The proof of this theorem, presented in Section 3, follows the construction of [22], but replaces a key step, in which they rely on results from [43], with results from [4]. Interestingly, our improved spanner bound "matches" the bounds of Theorem 1.4, up to duality between $p$ and $q$.
Distance Labeling Schemes.
Distance labeling for a metric space $(X,d_X)$ assigns to each point $x\in X$ a label $\ell(x)$, so that one can later recover (perhaps approximately) the distance between any two points in $X$ based only on their labels (without knowledge of the metric space). It was formulated in [45], motivated by applications in distributed computing, and has been studied intensively, see e.g. [23, 21]. An immediate corollary of our main result in Theorem 1.3 is a distance labeling scheme for finite metrics in $\ell_p$ for $2<p<\infty$, as follows.
Theorem 1.6.
Let $2<p<\infty$. Then the family of $n$-point metrics in $\ell_p$ with pairwise distances in the range $[1,r]$ admits a distance labeling scheme with approximation $O_p\bigl((\log n)^{1/q}\bigr)$ and label size $O(\log r\cdot\log n)$ bits, where $q$ is such that $\frac1p+\frac1q=1$.
A formal definition of the distance labeling model and a proof of Theorem 1.6 appear in Section 4.
1.2 Related Work
We focus on Lipschitz decomposition and on capped decomposition, which was introduced in [22], but the literature studies several different decompositions of metric spaces into low-diameter clusters, see e.g. [39, 19]. In particular, the notion of padded decomposition [48, 29] is closely related and was used extensively, see for example [48, 8, 35, 39, 30]. While a Lipschitz decomposition guarantees that nearby points are likely to be clustered together, a padded decomposition guarantees that each point is, with good probability, in the same cluster together with all its nearby points. Remarkably, if a metric space admits a padded decomposition then it also admits a Lipschitz decomposition with almost the same parameters [35]; however, the other direction is not true, as demonstrated by $\ell_p$ spaces with $p>2$.
The problem of computing efficiently the optimal decomposition parameters for an input metric space was studied in [32]. Specifically for Lipschitz decomposition, they show that $\beta(X)$ can be $O(1)$-approximated in polynomial time (in $n=|X|$).
2 Decompositions and Spanners in $\ell_p$ for $2<p<\infty$
In this section we consider finite subsets of $\ell_p$ for $2<p<\infty$. We first present (in Section 2.1) a new Lipschitz decomposition, which proves Theorem 1.3. Next, we show (in Section 2.2) a new construction of capped decomposition, a related notion of decomposability that was introduced in [22] without a concrete name. Finally, we obtain (in Section 2.3) new spanners, which proves Theorem 1.4; this is actually an immediate corollary of our capped decomposition, obtained by following the spanner construction of [22].
2.1 Lipschitz Decomposition in $\ell_p$ for $2<p<\infty$
Before presenting the proof of Theorem 1.3, we first provide the intuition behind it. A common approach in many algorithms for metric spaces is to embed the given metric into a simpler one (e.g., a tree metric), solve the problem in the target metric, and then pull back this solution to the original metric. For our purpose of constructing a Lipschitz decomposition of $X\subset\ell_p$, $2<p<\infty$, a natural idea is to seek a low-distortion embedding of $X$ into $\ell_2$, because we already have decompositions for that space, namely $\beta_n(\ell_2)\lesssim\sqrt{\log n}$. Ideally, the embedding into $\ell_2$ would be oblivious, meaning that it embeds the entire $\ell_p$ (not only $X$) into $\ell_2$, but unfortunately such an embedding does not exist (it would imply oblivious dimension reduction, which is provably impossible already in $\ell_1$ [14]). We get around this limitation by employing a data-dependent approach, where the decomposition depends on the input set $X$. More precisely, we use Mazur maps, which provide a low-distortion embedding from $\ell_p$ to $\ell_2$, but only for sets of bounded diameter (see Corollary 2.3). We thus first decompose $X$ into bounded-diameter subsets by applying a standard Lipschitz decomposition (one that is applicable to every $n$-point metric). The final decomposition is obtained by pulling back the solution (clusters) found in $\ell_2$.
We proceed to introduce some technical results needed for our proof of Theorem 1.3. The first one is a well-known bound for Lipschitz decomposition of a finite metric.
Theorem 2.1 ([7]).
Every $n$-point metric space admits an $(O(\log n),\Delta)$-Lipschitz decomposition for every $\Delta>0$.
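One standard way to achieve such a decomposition is randomized ball carving, in the spirit of [7, 13]; the following Python sketch is illustrative (the radius distribution and constants are one common choice, not necessarily the ones used in [7]).

```python
import random

def ball_carving(n, dist, delta):
    """Sample one partition of {0, ..., n-1} into clusters of diameter <= delta:
    process all points as candidate centers in random order, with one shared
    random radius r in [delta/4, delta/2]; each point joins its first covering
    center. A standard analysis shows that a pair x, y ends up separated with
    probability O(log n) * dist(x, y) / delta."""
    order = random.sample(range(n), n)
    r = random.uniform(delta / 4, delta / 2)
    cluster = [-1] * n
    for c in order:
        for x in range(n):
            if cluster[x] == -1 and dist(x, c) <= r:
                cluster[x] = c
    return cluster
```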
Next, we define the Mazur map, which is an explicit embedding from $\ell_p$ to $\ell_q$ for $p>q\ge1$. The image of an input vector is computed in each coordinate separately, by raising the absolute value to power $p/q$ while keeping the original sign. The next theorem appears in [10], where it is stated as an adaptation of [11], and we will actually need the immediate corollary that follows it.
Theorem 2.2 ([11, 10]).
Let $2<p<\infty$ and $R>0$, and let $M_R:\ell_p\to\ell_2$ be the Mazur map scaled down by factor $R$, i.e., $M_R(x):=M(x/R)$ where $M(z)_i:=\operatorname{sign}(z_i)\,|z_i|^{p/2}$. Then for all $x,y$ such that $\|x\|_p,\|y\|_p\le R$,
$$ c_p\Bigl(\frac{\|x-y\|_p}{R}\Bigr)^{p/2}\ \le\ \|M_R(x)-M_R(y)\|_2\ \le\ \frac{p}{2}\cdot\frac{\|x-y\|_p}{R}, $$
where $c_p>0$ depends only on $p$.
Corollary 2.3.
Let $2<p<\infty$. Every $n$-point set $S\subset\ell_p$ with diameter at most $R$ admits an embedding $\varphi:S\to\ell_2$ such that for all $x,y\in S$,
$$ \frac{R}{C_p}\Bigl(\frac{\|x-y\|_p}{R}\Bigr)^{p/2}\ \le\ \|\varphi(x)-\varphi(y)\|_2\ \le\ \|x-y\|_p, $$
where $C_p\ge1$ depends only on $p$. (Indeed, translate $S$ to lie in the ball of radius $R$ around the origin and take $\varphi:=\frac{2R}{p}M_R$; then $C_p=\frac{p}{2c_p}$.)
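In code, the embedding of Corollary 2.3 amounts to a coordinatewise power map; the following sketch uses the illustrative normalization $2R/p$ from Theorem 2.2 and suppresses the constant $C_p$.

```python
import numpy as np

def mazur_embed(X, p, R):
    """Embed a cluster X (rows = points of l_p, diameter <= R) into l_2,
    as in Corollary 2.3: shift into the ball of radius R around the first
    point, apply the coordinatewise Mazur map z -> sign(z)|z|^(p/2), and
    rescale by 2R/p so that the map is non-expanding (Theorem 2.2)."""
    Y = (X - X[0]) / R                          # into the unit ball of l_p
    M = np.sign(Y) * np.abs(Y) ** (p / 2)       # Mazur map into l_2
    return (2 * R / p) * M
```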
Proof of Theorem 1.3.
Let $\Delta>0$, and let $X\subset\ell_p$ be an $n$-point metric space for $2<p<\infty$. Denote $\Delta_1:=\Delta\cdot(\log n)^{1/p}$ and $\Delta_2:=\frac{\Delta_1}{C_p}\bigl(\frac{\Delta}{\Delta_1}\bigr)^{p/2}$, where $C_p$ is the constant from Corollary 2.3. Construct a partition $P$ of $X$ in the following steps:
1. Construct for $X$ an $(O(\log n),\Delta_1)$-Lipschitz decomposition $P_1$ using Theorem 2.1.
2. Embed each cluster $C\in P_1$ into $\ell_2$ using the embedding $\varphi_C$ provided by Corollary 2.3 for $R=\Delta_1$.
3. For each embedded cluster $\varphi_C(C)$, construct an $(O(\sqrt{\log n}),\Delta_2)$-Lipschitz decomposition $P_C$, using the decomposition of [13] combined with the JL Lemma.
4. The final decomposition $P$ is obtained by taking the preimage under $\varphi_C$ of every cluster of every $P_C$.
It is easy to see that $P$ is indeed a partition of $X$, consisting of the clusters $\varphi_C^{-1}(C')$ over all $C\in P_1$ and $C'\in P_C$. Next, consider $x,y\in X$ and let us bound $\Pr[P(x)\ne P(y)]$. Observe that a pair of points can be separated only in steps 1 or 3. Therefore,
$$\Pr[P(x)\ne P(y)]\ \le\ O(\log n)\cdot\frac{\|x-y\|_p}{\Delta_1}\ +\ O(\sqrt{\log n})\cdot\frac{\|\varphi_C(x)-\varphi_C(y)\|_2}{\Delta_2}\ \le\ \Bigl(\frac{O(\log n)}{\Delta_1}+\frac{O(\sqrt{\log n})}{\Delta_2}\Bigr)\cdot\|x-y\|_p\ =\ O_p\bigl((\log n)^{1-1/p}\bigr)\cdot\frac{\|x-y\|_p}{\Delta},$$
where the second inequality follows because each $\varphi_C$ is non-expanding on its cluster $C$, and the final equality follows by substituting our choice of $\Delta_1$ and $\Delta_2$.
It remains to show that the final clusters all have diameter at most $\Delta$. Let $x,y$ be in the same cluster, i.e., $P(x)=P(y)$. Then $x$ and $y$ lie in the same cluster $C\in P_1$, and $\varphi_C(x),\varphi_C(y)$ lie in the same cluster of $P_C$. Combining the maximum possible diameter $\Delta_1$ of $C$ and $\Delta_2$ of the clusters of $P_C$ with the contraction guarantee of $\varphi_C$ (Corollary 2.3), we get
$$\frac{\Delta_1}{C_p}\Bigl(\frac{\|x-y\|_p}{\Delta_1}\Bigr)^{p/2}\ \le\ \|\varphi_C(x)-\varphi_C(y)\|_2\ \le\ \Delta_2\ =\ \frac{\Delta_1}{C_p}\Bigl(\frac{\Delta}{\Delta_1}\Bigr)^{p/2}.$$
Rearranging this, we obtain $\|x-y\|_p\le\Delta$, which completes the proof.
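Schematically, the whole proof can be assembled end to end as follows, reusing the two sketches above (plus numpy). For simplicity, step 3 below reuses generic ball carving, whereas the proof uses the Euclidean decomposition of [13] with the JL Lemma, which is what saves the extra $\sqrt{\log n}$ factor; all constants are illustrative.

```python
import math
import numpy as np

def decompose_lp(X, p, delta):
    """Schematic sketch of the data-dependent decomposition in the proof of
    Theorem 1.3. X is an (n x d) numpy array of points in l_p; returns one
    sampled cluster label per point. The scales follow the text, with C_p = p
    standing in for the unspecified constant of Corollary 2.3."""
    n = len(X)
    logn = max(math.log(n), 1.0)
    delta1 = delta * logn ** (1 / p)                        # step 1: coarse scale
    C_p = p                                                 # illustrative constant
    delta2 = (delta1 / C_p) * (delta / delta1) ** (p / 2)   # step 3: Euclidean scale
    dist_p = lambda i, j: float(np.sum(np.abs(X[i] - X[j]) ** p) ** (1 / p))
    coarse = ball_carving(n, dist_p, delta1)                # step 1
    labels = [None] * n
    for c in set(coarse):
        idx = [i for i in range(n) if coarse[i] == c]
        Y = mazur_embed(X[idx], p, delta1)                  # step 2: into l_2
        dist_2 = lambda a, b, Y=Y: float(np.linalg.norm(Y[a] - Y[b]))
        fine = ball_carving(len(idx), dist_2, delta2)       # step 3
        for pos, i in enumerate(idx):
            labels[i] = (c, fine[pos])                      # step 4: pull back
    return labels
```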
2.2 Capped Decomposition in $\ell_p$ for $2<p<\infty$
We now present our construction of capped decomposition, which is a notion of decomposability that was introduced in [22] without a concrete name. We start with its definition, and then present our construction.
Definition 2.4.
Let $(X,d_X)$ be a metric space. A distribution $\mathcal{D}$ over partitions of $X$ is called $(t,\delta,\Delta)$-capped if

1. for every partition $P\in\operatorname{supp}(\mathcal{D})$, all clusters $C\in P$ have $\operatorname{diam}(C)\le\Delta$; and
2. for every $x,y\in X$ such that $d_X(x,y)\le\Delta/t$, $\Pr_{P\sim\mathcal{D}}[P(x)=P(y)]\ge\delta$.
Observe that here, unlike in Lipschitz decomposition, we have a guarantee on the probability that points are clustered together only if they are within distance $\Delta/t$ of each other, hence the name "capped decomposition". Moreover, the probability bound $\delta$ does not depend on the exact value of $d_X(x,y)$. We say that $X$ admits a $(t,\delta)$-capped decomposition, where possibly $t=t(n)$ and $\delta=\delta(n)$, if it admits a $(t,\delta,\Delta)$-capped decomposition for every $\Delta>0$. A family of metrics admits a $(t,\delta)$-capped decomposition if every metric in the family admits a $(t,\delta)$-capped decomposition.
Theorem 2.5.
Let $2<p<\infty$. Then every $n$-point metric $X\subset\ell_p$ admits a $(t,n^{-O_p(1/t^{q})})$-capped decomposition for all $t\ge1$, where $q$ is such that $\frac1p+\frac1q=1$.
Previously, such a decomposition was known only for the extreme case $p=2$ by [22], see Proposition 2.6, and our bound above in fact converges to their bound when $p\to2$. Our proof of Theorem 2.5 is similar to that of Theorem 1.3, and relies on two known capped decompositions, which we introduce next, together with the Mazur map embedding of Corollary 2.3.
Proposition 2.6 ([22]).
Every $n$-point subset of $\ell_2$ admits a $(t,n^{-O(1/t^{2})})$-capped decomposition for all $t\ge1$.
Proposition 2.7 (Implicit in [39]).
Every $n$-point metric space admits a $(t,n^{-O(1/t)})$-capped decomposition for all $t\ge1$.
Proof of Theorem 2.5.
Let $t\ge1$ and $\Delta>0$. Let $X$ be an $n$-point subset of $\ell_p$, where $2<p<\infty$, and let $q$ be such that $\frac1p+\frac1q=1$. Denote $t_1:=t^{q}$ and $\Delta_1:=\Delta\cdot t_1/t$, and set $\Delta_2:=\frac{\Delta_1}{C_p}\bigl(\frac{\Delta}{\Delta_1}\bigr)^{p/2}$ and $t_2:=t\cdot\Delta_2/\Delta$, where $C_p$ is the constant from Corollary 2.3. Construct a partition $P$ of $X$ in the following steps:
1. Construct for $X$ a $(t_1,n^{-O(1/t_1)},\Delta_1)$-capped decomposition $P_1$ using Proposition 2.7.
2. Embed each cluster $C\in P_1$ into $\ell_2$ using the embedding $\varphi_C$ provided by Corollary 2.3 for $R=\Delta_1$.
3. For each embedded cluster $\varphi_C(C)$, construct a $(t_2,n^{-O(1/t_2^{2})},\Delta_2)$-capped decomposition $P_C$ using Proposition 2.6.
4. The final decomposition $P$ is obtained by taking the preimage under $\varphi_C$ of every cluster of every $P_C$.
It is easy to see that $P$ is indeed a partition of $X$, consisting of the clusters $\varphi_C^{-1}(C')$ over all $C\in P_1$ and $C'\in P_C$. Next, consider $x,y\in X$ with $\|x-y\|_p\le\Delta/t$ and let us bound $\Pr[P(x)=P(y)]$. Observe that $\|x-y\|_p\le\Delta/t=\Delta_1/t_1$, and moreover $\|\varphi_C(x)-\varphi_C(y)\|_2\le\|x-y\|_p\le\Delta/t=\Delta_2/t_2$, and therefore
$$\Pr[P(x)=P(y)]\ \ge\ n^{-O(1/t_1)}\cdot n^{-O(1/t_2^{2})}\ \ge\ n^{-O_p(1/t^{q})},$$
where the first inequality uses that each $\varphi_C$ is non-expanding on its cluster $C$ (and that the two decompositions are sampled independently), and the second follows by substituting $t_1=t^{q}$ and $t_2\asymp_p t^{q/2}$.
It remains to show that each cluster has diameter at most $\Delta$. Let $x,y$ be in the same cluster, i.e., $P(x)=P(y)$. Then $x$ and $y$ lie in the same cluster $C\in P_1$, and $\varphi_C(x),\varphi_C(y)$ lie in the same cluster of $P_C$. Combining the maximum possible diameter $\Delta_1$ of $C$ and $\Delta_2$ of the clusters of $P_C$ with the contraction guarantee of $\varphi_C$ (Corollary 2.3), we get
$$\frac{\Delta_1}{C_p}\Bigl(\frac{\|x-y\|_p}{\Delta_1}\Bigr)^{p/2}\ \le\ \|\varphi_C(x)-\varphi_C(y)\|_2\ \le\ \Delta_2\ =\ \frac{\Delta_1}{C_p}\Bigl(\frac{\Delta}{\Delta_1}\Bigr)^{p/2}.$$
Rearranging this, we obtain $\|x-y\|_p\le\Delta$, which completes the proof.
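For intuition on the choice of parameters (bookkeeping only, with constants depending on $p$ suppressed), substituting the definitions of $\Delta_1,\Delta_2$ gives $t_2\asymp_p t\,(t_1/t)^{1-p/2}$, so the two exponents above satisfy
$$\frac{1}{t_1}+\frac{1}{t_2^{2}}\ \asymp_p\ \frac{1}{t_1}+\frac{t_1^{\,p-2}}{t^{p}},$$
which is minimized (up to constants) when $t_1^{\,p-1}\asymp t^{p}$, i.e., $t_1\asymp t^{p/(p-1)}=t^{q}$; with this choice both terms are $O_p(1/t^{q})$.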
2.3 Spanners in $\ell_p$ for $2<p<\infty$
We can now prove Theorem 1.4, by applying the following spanner construction of [22].
Theorem 2.8 ([22]).
Let $X$ be an $n$-point metric space admitting a $(t,\delta)$-capped decomposition for some $t\ge1$ and $\delta\in(0,1]$. Then there exists an $O(t)$-spanner for $X$ with $\tilde O(n/\delta)$ edges and lightness $\tilde O(t/\delta)$.
Proof of Theorem 1.4.
The proof follows directly by combining Theorem 2.5 and Theorem 2.8, as we can assume $t\lesssim\log n$ without loss of generality.
3 Spanners in $\ell_p$ for $1<p\le2$
This section presents an improved construction of geometric spanners in $\ell_p$ for $1<p\le2$. Previously, for $1<p<2$, $O(t)$-spanners of size $n^{1+O(\log t/t^{p})}$ for all $t\ge1$ were constructed in [22]; in particular, setting $t=\tilde\Theta((\log n)^{1/p})$ yields a $\tilde O((\log n)^{1/p})$-spanner of near-linear size $\tilde O(n)$. We first present in Section 3.1 two different constructions of near-linear-size spanners with a slightly better stretch. Then, in Section 3.2, we use yet another technique, namely Locality Sensitive Hashing (LSH), to slightly improve the construction of [22] for general stretch $O(t)$.
3.1 Spanners of Near-Linear Size
We slightly improve the near-linear-size spanner construction of [22] by shaving an $O((\log\log n)^{1/p})$ factor from the stretch, as follows.
Theorem 3.1.
For every fixed $1<p\le2$, every $n$-point metric $X\subset\ell_p$ admits an $O((\log n)^{1/p})$-spanner of size $\tilde O(n)$.
We present two related but different proofs for this theorem. Both are based on modifying the spanner algorithm for $\ell_2$ from [25], and therefore we start with an overview of that algorithm. Given an input set $X\subset\ell_2$, the algorithm begins by constructing a hierarchical set of $2^i$-nets $N_i$, for $i=0,1,\dots,\lceil\log\operatorname{diam}(X)\rceil$, where we assume that the minimum and maximum distances in $X$ are $1$ and $\operatorname{diam}(X)$, respectively. Then, for each level $i$, it constructs an $(O(\sqrt{\log n}),t\cdot2^i)$-Lipschitz decomposition of $N_i$ by combining the JL Lemma [27] with the Lipschitz decomposition of [13]. For each cluster in it, the algorithm adds to the spanner edges in a star-like fashion, meaning that all cluster points are connected to one arbitrary point within the cluster. The last two steps are repeated $n^{O(1/t^2)}\cdot O(\log n)$ times to ensure that, with high probability, for each level $i$, every $x,y\in N_i$ with $\|x-y\|_2\le2^i$ are clustered together in at least one of the repetitions. It is shown in [25] that this algorithm constructs an $O(t)$-spanner of size $n^{1+O(1/t^2)}$.
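The following Python sketch mirrors this outline schematically: it takes the decomposition sampler and the number of repetitions as parameters, and, for brevity, operates on all points at every level rather than on $2^i$-nets (which [25] use to control the size).

```python
def star_spanner(n, dist, t, reps, decompose):
    """Schematic sketch of the spanner of [25]: at every distance scale,
    sample 'reps' low-diameter decompositions and connect each cluster by
    a star. 'decompose(n, dist, delta)' is any partition sampler, e.g.,
    ball_carving. Assumes the minimum interpoint distance is 1."""
    dmax = max(dist(i, j) for i in range(n) for j in range(i + 1, n))
    edges = set()
    scale = 1.0
    while scale <= 2 * dmax:
        for _ in range(reps):
            cluster = decompose(n, dist, t * scale)   # clusters of diameter <= t * scale
            center = {}
            for x in range(n):
                c = cluster[x]
                if c in center:                        # star edge to the cluster's center
                    edges.add((min(x, center[c]), max(x, center[c]), dist(x, center[c])))
                else:
                    center[c] = x
        scale *= 2
    return edges
```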
Proof of Theorem 3.1 via Lipschitz Decomposition.
Observe that the above algorithm of [25] uses the fact that the points lie in $\ell_2$ only for the construction of Lipschitz decompositions, and relies on an optimal decomposition for finite Euclidean metrics, with $\beta_n(\ell_2)\lesssim\sqrt{\log n}$, to conclude that the spanner's stretch is $O(\sqrt{\log n})$ at near-linear size. For finite $\ell_p$ metrics, $1<p\le2$, we can use instead a Lipschitz decomposition from [35, 40], which has $\beta_n(\ell_p)\lesssim_p(\log n)^{1/p}$, to conclude the claimed stretch.
We next present a proof that modifies the algorithm of [25] differently, and relies on a decomposition that is similar to a Lipschitz decomposition but has slightly weaker guarantees. Interestingly, this technique yields a slightly stronger result than Theorem 3.1, where $p$ need not be fixed and may depend on $n$. We proceed to introduce some technical results from [9] regarding a weak form of dimensionality reduction in $\ell_p$, for $1\le p\le2$, which are needed for our proof.
Definition 3.2 ([44]).
Let $(X,d_X)$, $(Y,d_Y)$ be metric spaces and let $[a,b]$ be a real interval. An embedding $f:X\to Y$ is called $[a,b]$-range preserving with distortion $D$ if there exists a scaling factor $s>0$ such that for all $x,y\in X$:

1. If $d_X(x,y)\in[a,b]$, then $s\cdot d_X(x,y)\le d_Y(f(x),f(y))\le D\cdot s\cdot d_X(x,y)$.
2. If $d_X(x,y)<a$, then $d_Y(f(x),f(y))\le D\cdot s\cdot a$.
3. If $d_X(x,y)>b$, then $d_Y(f(x),f(y))\ge s\cdot b$.
We say that $X$ admits an $R$-range preserving embedding into $Y$ with distortion $D$, if for every $a>0$, there exists an $[a,aR]$-range preserving embedding into $Y$ with distortion $D$.
Theorem 3.3 ([9]).
Let $1\le p\le2$. For every $n$-point set $X\subset\ell_p$, and for every range parameter $R\ge2$, there exists an $R$-range preserving embedding $f:X\to\ell_p^{k}$ with distortion $O(1)$, such that $k=O(\log n)$ (where the implicit constant may depend on $R$).
Proof of Theorem 3.1 via Weak Dimension Reduction.
Observe that the above algorithm of [25] only requires the decomposition of each net $N_i$ to ensure that points $x,y\in N_i$ with $\|x-y\|\le2^i$ are clustered together with probability at least $n^{-O(1/t^2)}$, and that the diameter of all clusters is at most $O(t\cdot2^i)$; of course, for $\ell_p$ metrics, $1<p\le2$, we replace the $1/t^2$ factor in the exponent with $1/t^p$. A careful examination shows that these properties are preserved by first reducing the dimension using the range-preserving embedding provided by Theorem 3.3 with $a=2^i$ and $R=O(t)$, and then constructing a Lipschitz decomposition for the image points, which lie in $\ell_p^{O(\log n)}$, using [13].
3.2 Spanners with Stretch-Size Tradeoff
We now present, in Theorem 1.5, a construction of $O(t)$-spanners in $\ell_p$, where $1<p\le2$, of size $n^{1+O(1/t^{p})}$ for all $t\ge1$, which slightly improves over the $O(t)$-spanners of size $n^{1+O(\log t/t^{p})}$ from [22]. It is worth noting that Theorem 1.5 generalizes the result of Theorem 3.1, and thus provides an alternative proof for it.
Definition 3.4 (LSH [26]).
Let $\mathcal{H}$ be a family of hash functions mapping a metric space $(X,d_X)$ to some universe $U$. We say that $\mathcal{H}$ is $(r,cr,p_1,p_2)$-sensitive if for every $x,y\in X$, the following is satisfied:

1. If $d_X(x,y)\le r$, then $\Pr_{h\in\mathcal{H}}[h(x)=h(y)]\ge p_1$.
2. If $d_X(x,y)\ge cr$, then $\Pr_{h\in\mathcal{H}}[h(x)=h(y)]\le p_2$.

Such $\mathcal{H}$ is called an LSH family with parameter $\rho=\frac{\log(1/p_1)}{\log(1/p_2)}$.
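As a concrete folklore illustration of this definition (unrelated to the specific families used below), a randomly shifted uniform grid is an LSH family for $\ell_1$ with parameter $\rho=O(1/c)$:

```python
import numpy as np

def grid_lsh(d, r, rng=None):
    """Folklore LSH for l_1 on R^d: a randomly shifted grid with cell width
    w = 4r. Points at l_1 distance <= r collide with probability >= 3/4,
    while points at l_1 distance >= c*r collide with probability <= e^(-c/4),
    so the parameter is rho = log(4/3) / (c/4) = O(1/c)."""
    rng = rng or np.random.default_rng()
    w = 4.0 * r
    shift = rng.uniform(0.0, w, size=d)
    return lambda x: tuple(np.floor((np.asarray(x) + shift) / w).astype(int))
```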
Lemma 3.5 ([22]).
Let $(X,d_X)$ be a metric space such that for every $r>0$ and $c\ge1$, there exists an $(r,cr,p_1,p_2)$-sensitive LSH family with parameter $\rho$. Then $X$ admits an $(O(c),n^{-O(\rho)})$-capped decomposition.
For $p=2$, the LSH family constructed in [3] has parameter $\rho=O(1/c^{2})$ and can be used in Lemma 3.5 to conclude that $\ell_2$ admits a $(t,n^{-O(1/t^{2})})$-capped decomposition for every $t\ge1$ [22], thereby proving Theorem 1.5 for this case of $p=2$. In a similar fashion, an LSH family constructed in [43] for $\ell_p$, $1<p<2$, was used in [22] to show that these spaces admit a $(t,n^{-O(\log t/t^{p})})$-capped decomposition. We observe that this result can be improved by replacing the LSH family from [43] with an alternative one that is briefly mentioned in [4], and consequently prove Theorem 1.5. For completeness, we reproduce this LSH family for $\ell_p$, where $1<p<2$.
Lemma 3.6 ([4]).
Let $1<p<2$, $r>0$, $c\ge1$, and large enough $d$. Then there exists an $(r,cr,p_1,p_2)$-sensitive LSH family for $\ell_p^{d}$ with parameter $\rho=O(1/c^{p})$.
Proof.
Let $1<p<2$, $r>0$, $c\ge1$, and sufficiently large $d$. Let $f$ be the isometric embedding of the $\frac p2$-snowflake of $\ell_p^{d}$ into $\ell_2$ from [28, Theorem 4.1], i.e., $\|f(x)-f(y)\|_2=\|x-y\|_p^{p/2}$ for all $x,y\in\ell_p^{d}$. Take $r':=r^{p/2}$ and $c':=c^{p/2}$, and let $\mathcal{H}'$ be the $(r',c'r',p_1,p_2)$-sensitive LSH family for $\ell_2$ with parameter $\rho=O(1/c'^{2})$ from [3]. Observe that, for every $x,y\in\ell_p^{d}$, if $\|x-y\|_p\le r$, then $\|f(x)-f(y)\|_2\le r'$, and thus
$$\Pr_{h\in\mathcal{H}'}\bigl[h(f(x))=h(f(y))\bigr]\ \ge\ p_1.$$
Similarly, if $\|x-y\|_p\ge cr$, then $\|f(x)-f(y)\|_2\ge c'r'$, and hence
$$\Pr_{h\in\mathcal{H}'}\bigl[h(f(x))=h(f(y))\bigr]\ \le\ p_2.$$
We therefore conclude that $\mathcal{H}:=\{h\circ f:\ h\in\mathcal{H}'\}$ is an $(r,cr,p_1,p_2)$-sensitive LSH family for $\ell_p^{d}$ with parameter $\rho=O(1/c'^{2})=O(1/c^{p})$.
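In code, the composition in this proof is immediate; in the sketch below, both the snowflake embedding and the Euclidean LSH sampler are treated as black boxes with hypothetical interfaces.

```python
def lp_lsh(euclidean_lsh, snowflake, p, r, c):
    """Sketch of Lemma 3.6. Both arguments are assumed black boxes:
    snowflake(x) is a hypothetical isometric snowflake embedding with
    ||snowflake(x) - snowflake(y)||_2 = ||x - y||_p^(p/2), and
    euclidean_lsh(radius, gap) samples a hash function from a Euclidean
    LSH family with parameter O(1/gap^2). Composing them at radius r^(p/2)
    and gap c^(p/2) yields an l_p family with parameter O(1/c^p)."""
    h = euclidean_lsh(radius=r ** (p / 2), gap=c ** (p / 2))
    return lambda x: h(snowflake(x))
```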
Proof of Theorem 1.5.
The proof follows immediately by constructing a capped decomposition based on Lemma 3.5 and Lemma 3.6, and using it in the spanner construction from Theorem 2.8.
4 Distance Labeling
In the distance labeling model, a scheme is designed for an entire family $\mathcal{F}$ of $n$-point metrics (and in some scenarios, all these metrics have the same point set, e.g., different graphs on the same vertex set). A scheme is an algorithm that preprocesses each metric in $\mathcal{F}$ and assigns to each point $x$ a label $\ell(x)$.
Definition 4.1.
A scheme is a distance labeling with approximation $\rho\ge1$ and label size $s$ for $\mathcal{F}$ if

1. every label (for every point in every metric in $\mathcal{F}$) consists of at most $s$ bits; and
2. there is an algorithm that, given the labels $\ell(x),\ell(y)$ of two points $x,y$ in a metric $(X,d_X)\in\mathcal{F}$ (but not given $X$ or the points $x,y$), outputs an estimate $E$ that satisfies $d_X(x,y)\le E\le\rho\cdot d_X(x,y)$.
The following theorem was presented in [24] with limited details, and we include a proof of it below for completeness.
Theorem 4.2 ([24]).
Let $\mathcal{F}$ be a family of $n$-point metrics, and assume that all the pairwise distances in all metrics in $\mathcal{F}$ are in the range $[1,r]$. Then $\mathcal{F}$ admits a distance-labeling scheme with approximation $O(\beta(\mathcal{F}))$ and label size $O(\log r\cdot\log n)$ bits.
It is straightforward to see that Theorem 1.6 follows by combining Theorem 4.2 and Theorem 1.3.
Proof of Theorem 4.2.
We first describe the preprocessing algorithm, denoting $\beta:=\beta(\mathcal{F})$. Perform the following steps for all levels $i=0,1,\dots,\lceil\log r\rceil$. Begin by constructing a $(\beta,2^i)$-Lipschitz decomposition, and observe that every two points $x,y$ with $d_X(x,y)\le2^i/(4\beta)$ are separated with probability at most $1/4$. Then, assign a random bit to each cluster, and observe that if two points are at distance greater than $2^i$, they always fall in different clusters, hence the probability that they are assigned the same bit is exactly $1/2$; and if they are at distance at most $2^i/(4\beta)$, they are assigned the same bit with probability at least $7/8$. Repeat the last two steps $k=O(\log n)$ times, and then with high probability, every two points $x,y$ are assigned the same bit at least $\frac34k$ times if $d_X(x,y)\le2^i/(4\beta)$, and fewer than $\frac34k$ times if $d_X(x,y)>2^i$. Finally, label each point by concatenating the bits assigned to its cluster in all the repetitions at all levels.
The label-size analysis is straightforward: there are $O(\log r)$ levels with $k=O(\log n)$ bits each. It remains to show that, given two labels $\ell(x),\ell(y)$, it is possible to approximate $d_X(x,y)$ within factor $O(\beta)$. This can be achieved by identifying the smallest level $i$ such that $x$ and $y$ are assigned the same bit at least $\frac34k$ times, and outputting the estimate $E=2^i$; the above analysis (used in contrapositive form) implies that $2^{i-1}/(4\beta)<d_X(x,y)\le2^i$, where for $i=0$ the lower bound holds trivially by the convention that all distances are at least $1$.
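The scheme is short in code as well; the following sketch uses the ball_carving sampler from Section 2.1 as an illustrative stand-in for a $(\beta,\Delta)$-Lipschitz decomposition.

```python
import math
import random

def build_labels(n, dist, r, k, decompose):
    """Sketch of the labeling of Theorem 4.2: for each scale 2^i, sample k
    decompositions, assign each cluster a random bit, and record each
    point's bit. Labels have O(k log r) bits."""
    L = int(math.ceil(math.log2(r))) + 1
    labels = [[] for _ in range(n)]
    for i in range(L):
        for _ in range(k):
            cluster = decompose(n, dist, 2.0 ** i)
            bit = {c: random.getrandbits(1) for c in set(cluster)}
            for x in range(n):
                labels[x].append(bit[cluster[x]])
    return labels, L

def estimate(lx, ly, k, L):
    """Decode two labels: the smallest scale with >= 3k/4 agreeing bits is
    an O(beta)-approximation of the distance, as argued above."""
    for i in range(L):
        agree = sum(a == b for a, b in zip(lx[i * k:(i + 1) * k], ly[i * k:(i + 1) * k]))
        if 4 * agree >= 3 * k:
            return 2.0 ** i
    return 2.0 ** L
```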
5 Future Directions
Lipschitz Decompositions.
We stress that our decomposition in Theorem 1.3 employs a data-dependent approach, and is not oblivious to the input set $X$ (as is, say, the decomposition for $\ell_2$ in [13], even when applied together with the JL Lemma). In retrospect, this feature is perhaps not very surprising, because data-dependent approaches have already been shown to be effective for central problems, such as nearest neighbor search [5, 33]. We thus mention that a major open problem in the field is whether dimension reduction is possible in $\ell_p$ for $p\ne2$; we know that for $p=1$ this is not possible via an oblivious mapping [14], raising the question whether data-dependent mappings can overcome this limitation.
Geometric Spanners.
The geometric spanners in [25, 22] for $\ell_p$, $1\le p\le2$, are not known to be optimal, i.e., we do not know of matching lower bounds, except for the more restricted case of $1$-hop spanners [25]. We conjecture that tight instances exist in these spaces, i.e., that the spanner bounds obtained in [25, 22] are optimal for every stretch $O(t)$. We similarly do not know of matching lower bounds for the geometric spanners in $\ell_p$, for fixed $2<p<\infty$, that we obtain in Theorem 1.4, and it is quite plausible that our upper bounds are not tight. We do know, however, based on known results, that for every $t\ge1$, there exist tight instances in $\ell_p$ for $p\gtrsim\log n$.
References
- [1] Ittai Abraham, Cyril Gavoille, Anupam Gupta, Ofer Neiman, and Kunal Talwar. Cops, robbers, and threatening skeletons: padded decomposition for minor-free graphs. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC ’14, pages 79–88. Association for Computing Machinery, 2014. doi:10.1145/2591796.2591849.
- [2] Reyan Ahmed, Greg Bodwin, Faryad Darabi Sahneh, Keaton Hamm, Mohammad Javad Latifi Jebelli, Stephen Kobourov, and Richard Spence. Graph spanners: A tutorial review. Computer Science Review, 37:100253, 2020. doi:10.1016/j.cosrev.2020.100253.
- [3] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In 47th Annual IEEE Symposium on Foundations of Computer Science, pages 459–468. IEEE, 2006. doi:10.1109/FOCS.2006.49.
- [4] A. Andoni and P. Indyk. Nearest neighbors in high-dimensional spaces. In J. E. Goodman and J. O’Rourke, editors, Handbook of Discrete and Computational Geometry, chapter 43, pages 1135–1150. CRC Press, 3rd edition, 2017. doi:10.1201/9781315119601.
- [5] Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, Ilya P. Razenshteyn, and Erik Waingarten. Data-dependent hashing via nonlinear spectral gaps. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, pages 787–800. ACM, 2018. doi:10.1145/3188745.3188846.
- [6] S. Arora, J. R. Lee, and A. Naor. Euclidean distortion and the sparsest cut. J. Amer. Math. Soc., 21(1):1–21, 2008.
- [7] Y. Bartal. Probabilistic approximation of metric spaces and its algorithmic applications. In 37th Annual Symposium on Foundations of Computer Science, pages 184–193. IEEE, 1996.
- [8] Y. Bartal. Graph decomposition lemmas and their role in metric embedding methods. In 12th Annual European Symposium on Algorithms, volume 3221 of LNCS, pages 89–97. Springer, 2004. doi:10.1007/978-3-540-30140-0_10.
- [9] Yair Bartal and Lee-Ad Gottlieb. Dimension reduction techniques for $\ell_p$ ($1<p<2$) with applications. In 32nd International Symposium on Computational Geometry, SoCG 2016, volume 51 of LIPIcs, pages 16:1–16:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPICS.SOCG.2016.16.
- [10] Yair Bartal and Lee-Ad Gottlieb. Approximate nearest neighbor search for $\ell_p$-spaces ($2<p<\infty$) via embeddings. Theoretical Computer Science, 757:27–35, 2019. doi:10.1016/j.tcs.2018.07.011.
- [11] Yoav Benyamini and Joram Lindenstrauss. Geometric nonlinear functional analysis, volume 48. American Mathematical Soc., 1998.
- [12] Glencora Borradaile, Hung Le, and Christian Wulff-Nilsen. Greedy spanners are optimal in doubling metrics. In Proceedings of the 2019 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2371–2379, 2019. doi:10.1137/1.9781611975482.145.
- [13] M. Charikar, C. Chekuri, A. Goel, S. Guha, and S. Plotkin. Approximating a finite metric by a small number of tree metrics. In 39th Annual Symposium on Foundations of Computer Science, pages 379–388, 1998.
- [14] Moses Charikar and Amit Sahai. Dimension reduction in the $\ell_1$ norm. In 43rd Symposium on Foundations of Computer Science (FOCS 2002), pages 551–560. IEEE Computer Society, 2002. doi:10.1109/SFCS.2002.1181979.
- [15] A. Dvoretzky. Some results on convex bodies and Banach spaces. In Proc. Internat. Sympos. Linear Spaces (Jerusalem, 1960), pages 123–160. Jerusalem Academic Press, Jerusalem, 1961.
- [16] David Eppstein. Spanning trees and spanners. In Handbook of Computational Geometry, pages 425–461. North Holland / Elsevier, 2000. doi:10.1016/B978-044482537-7/50010-3.
- [17] J. Fakcharoenphol, S. Rao, and K. Talwar. A tight bound on approximating arbitrary metrics by tree metrics. J. Comput. Syst. Sci., 69(3):485–497, 2004. doi:10.1016/j.jcss.2004.04.011.
- [18] Arnold Filtser. On strong diameter padded decompositions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019, volume 145 of LIPIcs, pages 6:1–6:21. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.APPROX-RANDOM.2019.6.
- [19] Arnold Filtser. Scattering and sparse partitions, and their applications. ACM Trans. Algorithms, 20(4):30:1–30:42, 2024. doi:10.1145/3672562.
- [20] Arnold Filtser, Tobias Friedrich, Davis Issac, Nikhil Kumar, Hung Le, Nadym Mallek, and Ziena Zeif. Optimal padded decomposition for bounded treewidth graphs. CoRR, abs/2407.12230, 2024. doi:10.48550/arXiv.2407.12230.
- [21] Arnold Filtser, Lee-Ad Gottlieb, and Robert Krauthgamer. Labelings vs. embeddings: On distributed and prioritized representations of distances. Discrete & Computational Geometry, 71:849–871, 2024. doi:10.1007/s00454-023-00565-2.
- [22] Arnold Filtser and Ofer Neiman. Light spanners for high dimensional norms via stochastic decompositions. Algorithmica, 84(10):2987–3007, 2022. doi:10.1007/s00453-022-00994-0.
- [23] C. Gavoille, D. Peleg, S. Pérennes, and R. Raz. Distance labeling in graphs. Journal of Algorithms, 53(1):85–112, 2004. doi:10.1016/J.JALGOR.2004.05.002.
- [24] A. Gupta, R. Krauthgamer, and J. R. Lee. Bounded geometries, fractals, and low-distortion embeddings. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 534–543, 2003. doi:10.1109/SFCS.2003.1238226.
- [25] Sariel Har-Peled, Piotr Indyk, and Anastasios Sidiropoulos. Euclidean spanners in high dimensions. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, pages 804–809. SIAM, 2013. doi:10.1137/1.9781611973105.57.
- [26] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In 30th Annual ACM Symposium on Theory of Computing, pages 604–613, 1998. doi:10.1145/276698.276876.
- [27] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in modern analysis and probability (New Haven, Conn., 1982), pages 189–206. Amer. Math. Soc., Providence, RI, 1984.
- [28] Nigel J. Kalton. The nonlinear geometry of Banach spaces. Revista Matematica Complutense, 21(1):7–60, 2008. URL: http://eudml.org/doc/42299.
- [29] R. Krauthgamer and J. R. Lee. The intrinsic dimensionality of graphs. Combinatorica, 27(5):551–585, 2007. doi:10.1007/s00493-007-2183-y.
- [30] R. Krauthgamer, J. R. Lee, M. Mendel, and A. Naor. Measured descent: A new embedding method for finite metrics. Geometric And Functional Analysis, 15(4):839–858, 2005. doi:10.1007/s00039-005-0527-6.
- [31] Robert Krauthgamer, Nir Petruschka, and Shay Sapir. The power of recursive embeddings for metrics, 2025. doi:10.48550/arXiv.2503.18508.
- [32] Robert Krauthgamer and Tim Roughgarden. Metric clustering via consistent labeling. Theory of Computing, 7(5):49–74, 2011. doi:10.4086/toc.2011.v007a005.
- [33] Deepanshu Kush, Aleksandar Nikolov, and Haohua Tang. Near neighbor search via efficient average distortion embeddings. In 37th International Symposium on Computational Geometry, SoCG 2021, volume 189 of LIPIcs, pages 50:1–50:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPICS.SOCG.2021.50.
- [34] Hung Le and Shay Solomon. A unified framework for light spanners. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, pages 295–308. ACM, 2023. doi:10.1145/3564246.3585185.
- [35] J. R. Lee and A. Naor. Metric decomposition, smooth measures, and clustering. Unpublished manuscript, 2003. URL: https://web.math.princeton.edu/~naor/homepagefiles/cluster.pdf.
- [36] James R. Lee and Anastasios Sidiropoulos. Genus and the geometry of the cut graph. In 21st Annual ACM-SIAM Symposium on Discrete Algorithms, pages 193–201. SIAM, 2010. doi:10.1137/1.9781611973075.18.
- [37] N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. Combinatorica, 15(2):215–245, 1995. doi:10.1007/BF01200757.
- [38] J. Matoušek. On embedding expanders into $\ell_p$ spaces. Israel J. Math., 102:189–197, 1997.
- [39] M. Mendel and A. Naor. Ramsey partitions and proximity data structures. J. Eur. Math. Soc., 9(2):253–275, 2007.
- [40] Assaf Naor. Probabilistic clustering of high dimensional norms. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, pages 690–709. SIAM, 2017. doi:10.1137/1.9781611974782.44.
- [41] Assaf Naor. Extension, separation and isomorphic reverse isoperimetry, volume 11 of Mem. Eur. Math. Soc. European Mathematical Society (EMS), 2024. doi:10.4171/MEMS/11.
- [42] Assaf Naor and Kevin Ren. $\ell_p$ has nontrivial Euclidean distortion growth when $p>4$, 2025. arXiv:2502.10543.
- [43] Huy L. Nguyen. Approximate nearest neighbor search in $\ell_p$. CoRR, abs/1306.3601, 2013. arXiv:1306.3601.
- [44] R. Ostrovsky and Y. Rabani. Polynomial-time approximation schemes for geometric min-sum median clustering. J. ACM, 49(2):139–156, 2002. doi:10.1145/506147.506149.
- [45] D. Peleg. Proximity-preserving labeling schemes. Journal of Graph Theory, 33(3):167–176, 2000. doi:10.1002/(SICI)1097-0118(200003)33:3<167::AID-JGT7>3.0.CO;2-5.
- [46] D. Peleg and A. A. Schäffer. Graph spanners. J. Graph Theory, 13(1):99–116, 1989. doi:10.1002/jgt.3190130114.
- [47] David Peleg and Jeffrey D. Ullman. An optimal synchronizer for the hypercube. SIAM J. Comput., 18(4):740–747, 1989. doi:10.1137/0218050.
- [48] S. Rao. Small distortion and volume preserving embeddings for planar and Euclidean metrics. In Proceedings of the 15th Annual Symposium on Computational Geometry, pages 300–306. ACM, 1999. doi:10.1145/304893.304983.
- [49] Uri Zwick. Exact and approximate distances in graphs – A survey. In Algorithms – ESA 2001, 9th Annual European Symposium, volume 2161 of Lecture Notes in Computer Science, pages 33–48. Springer, 2001. doi:10.1007/3-540-44676-1_3.