21 Search Results for "Alman, Josh"


Document
Near Optimal Dual Fault Tolerant Distance Oracle

Authors: Dipan Dey and Manoj Gupta

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We present a dual fault-tolerant distance oracle for undirected and unweighted graphs. Given a set F of two edges, as well as a source node s and a destination node t, our oracle returns the length of the shortest path from s to t that avoids F in O(1) time with high probability. The space complexity of our oracle is Õ(n²), making it nearly optimal in terms of both space and query time. Prior to our work, Duan and Pettie [SODA 2009] designed a dual fault-tolerant distance oracle that required Õ(n²) space and O(log n) query time. In addition to improving the query time, our oracle is much simpler than the previous approach.
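
For concreteness, the query semantics can be illustrated by the trivial per-query baseline that such oracles aim to beat: rerun a BFS on the graph with the two failed edges removed. This is a minimal sketch of the problem statement only (the graph encoding and function name are ours), not the authors' Õ(n²)-space data structure.

from collections import deque

def distance_avoiding(adj, s, t, f1, f2):
    """Length of the shortest s-t path avoiding failed edges f1, f2 in an
    undirected, unweighted graph given as an adjacency-list dict.
    Naive baseline: one BFS per query, O(n + m) time."""
    failed = {frozenset(f1), frozenset(f2)}
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if frozenset((u, v)) in failed or v in dist:
                continue
            dist[v] = dist[u] + 1
            queue.append(v)
    return float("inf")  # t unreachable once f1, f2 are removed

# Example: 4-cycle 0-1-2-3-0 plus the chord 0-2.  Normally dist(0,2) = 1;
# avoiding edges (0,2) and (0,1) forces the detour 0-3-2 of length 2.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(distance_avoiding(adj, 0, 2, (0, 2), (0, 1)))  # -> 2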

Cite as

Dipan Dey and Manoj Gupta. Near Optimal Dual Fault Tolerant Distance Oracle. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 45:1-45:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dey_et_al:LIPIcs.ESA.2024.45,
  author =	{Dey, Dipan and Gupta, Manoj},
  title =	{{Near Optimal Dual Fault Tolerant Distance Oracle}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{45:1--45:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.45},
  URN =		{urn:nbn:de:0030-drops-211164},
  doi =		{10.4230/LIPIcs.ESA.2024.45},
  annote =	{Keywords: Distance Sensitive Oracle, Dual Fault Distance Oracle}
}
Document
Removing the log Factor from (min,+)-Products on Bounded Range Integer Matrices

Authors: Dvir Fried, Tsvi Kopelowitz, and Ely Porat

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We revisit the problem of multiplying two square matrices over the (min, +) semi-ring, where all entries are integers from a bounded range [-M : M] ∪ {∞}. The current state of the art for this problem is a simple O(M n^ω log M) time algorithm by Alon, Galil and Margalit [JCSS'97], where ω is the exponent in the runtime of the fastest matrix multiplication (FMM) algorithm. We design a new simple algorithm whose runtime is O(M n^ω + M n² log M), thereby removing the log M factor in the runtime if ω > 2 or if n^ω = Ω(n² log n).
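
As a reference point for what is being computed, here is the definition-level cubic-time (min,+) product (a naive baseline, not the paper's FMM-based algorithm); entries follow the bounded-range setting above, with ∞ allowed.

import math

def min_plus_product(A, B):
    """(min,+) product C[i][j] = min_k A[i][k] + B[k][j] for square matrices
    given as lists of lists; entries may be math.inf.  Definition-level O(n^3)
    algorithm, the baseline that FMM-based O(M n^omega + M n^2 log M) beats."""
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k] == math.inf:
                continue
            for j in range(n):
                s = A[i][k] + B[k][j]
                if s < C[i][j]:
                    C[i][j] = s
    return C

A = [[0, 2], [math.inf, 0]]
B = [[0, 5], [1, 0]]
print(min_plus_product(A, B))  # [[0, 2], [1, 0]]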

Cite as

Dvir Fried, Tsvi Kopelowitz, and Ely Porat. Removing the log Factor from (min,+)-Products on Bounded Range Integer Matrices. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 57:1-57:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{fried_et_al:LIPIcs.ESA.2024.57,
  author =	{Fried, Dvir and Kopelowitz, Tsvi and Porat, Ely},
  title =	{{Removing the log Factor from (min,+)-Products on Bounded Range Integer Matrices}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{57:1--57:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.57},
  URN =		{urn:nbn:de:0030-drops-211283},
  doi =		{10.4230/LIPIcs.ESA.2024.57},
  annote =	{Keywords: FMM, (min , +)-product, FFT}
}
Document
Parameterized Dynamic Data Structure for Split Completion

Authors: Konrad Majewski, Michał Pilipczuk, and Anna Zych-Pawlewicz

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We design a randomized data structure that, for a fully dynamic graph G updated by edge insertions and deletions and integers k, d fixed upon initialization, maintains the answer to the Split Completion problem: whether one can add k edges to G to obtain a split graph. The data structure can be initialized on an edgeless n-vertex graph in time n ⋅ (k d ⋅ log n)^{𝒪(1)}, and the amortized time complexity of an update is 5^k ⋅ (k d ⋅ log n)^{𝒪(1)}. The answer provided by the data structure is correct with probability 1-𝒪(n^{-d}).
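
For intuition: a split graph is one whose vertex set can be partitioned into a clique and an independent set, so Split Completion asks whether at most k added edges can produce such a partition. The brute-force static check below (exponential in n, names ours) only illustrates the problem the dynamic data structure maintains; it is not the paper's algorithm.

from itertools import combinations

def split_completion_possible(n, edges, k):
    """Can we add at most k edges to the n-vertex graph (V = 0..n-1,
    edges = iterable of pairs) so that it becomes a split graph?
    Brute force over all clique/independent-set partitions."""
    E = {frozenset(e) for e in edges}
    vertices = range(n)
    for r in range(n + 1):
        for clique in combinations(vertices, r):
            independent = [v for v in vertices if v not in clique]
            # Added edges cannot delete existing ones, so the independent
            # side must already be edgeless.
            if any(frozenset((u, v)) in E for u, v in combinations(independent, 2)):
                continue
            # Count the edges missing inside the intended clique.
            missing = sum(1 for u, v in combinations(clique, 2)
                          if frozenset((u, v)) not in E)
            if missing <= k:
                return True
    return False

# Example: the 4-cycle 0-1-2-3-0 needs one chord to become a split graph.
cycle = {(0, 1), (1, 2), (2, 3), (3, 0)}
print(split_completion_possible(4, cycle, 0))  # False
print(split_completion_possible(4, cycle, 1))  # True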

Cite as

Konrad Majewski, Michał Pilipczuk, and Anna Zych-Pawlewicz. Parameterized Dynamic Data Structure for Split Completion. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 87:1-87:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{majewski_et_al:LIPIcs.ESA.2024.87,
  author =	{Majewski, Konrad and Pilipczuk, Micha{\l} and Zych-Pawlewicz, Anna},
  title =	{{Parameterized Dynamic Data Structure for Split Completion}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{87:1--87:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.87},
  URN =		{urn:nbn:de:0030-drops-211587},
  doi =		{10.4230/LIPIcs.ESA.2024.87},
  annote =	{Keywords: parameterized complexity, dynamic data structures, split graphs}
}
Document
Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion

Authors: Samuel McCauley

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Approximate nearest neighbor search (ANN) data structures have widespread applications in machine learning, computational biology, and text processing. The goal of ANN is to preprocess a set S so that, given a query q, we can find a point y whose distance from q approximates the smallest distance from q to any point in S. For most distance functions, the best-known ANN bounds for high-dimensional point sets are obtained using techniques based on locality-sensitive hashing (LSH). Unfortunately, space efficiency is a major challenge for LSH-based data structures. Classic LSH techniques require a very large amount of space, oftentimes polynomial in |S|. A long line of work has developed intricate techniques to reduce this space usage, but these techniques suffer from downsides: they must be hand-tailored to each specific LSH, are often complicated, and their space reduction comes at the cost of significantly increased query times. In this paper we explore a new way to improve the space efficiency of LSH using function inversion techniques, originally developed in (Fiat and Naor 2000). We begin by describing how function inversion can be used to improve LSH data structures. This gives a fairly simple, black-box method to reduce LSH space usage. Then, we give a data structure that leverages function inversion to improve the query time of the best known near-linear space data structure for approximate nearest neighbor search under Euclidean distance: the ALRW data structure of (Andoni, Laarhoven, Razenshteyn, and Waingarten 2017). ALRW was previously shown to be optimal among "list-of-points" data structures for both Euclidean and Manhattan ANN; thus, in addition to giving improved bounds, our results imply that list-of-points data structures are not optimal for Euclidean or Manhattan ANN.
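
As background for the discussion of list-of-points data structures, here is a toy bit-sampling LSH index for Hamming distance over binary vectors. All names and parameters are illustrative; this is neither the paper's function-inversion-based structure nor the ALRW data structure, just the generic pattern whose space usage the paper targets (more tables mean better recall but more space).

import random

class BitSamplingLSH:
    """Toy LSH index for binary vectors under Hamming distance.  Each of L
    tables hashes a point by k randomly sampled coordinates; near points
    collide in some table with good probability.  A classic "list-of-points"
    structure: space grows with L, which is the cost discussed above."""

    def __init__(self, dim, k=8, L=10, seed=0):
        rng = random.Random(seed)
        self.samples = [tuple(rng.randrange(dim) for _ in range(k)) for _ in range(L)]
        self.tables = [{} for _ in range(L)]

    def _keys(self, x):
        return [tuple(x[i] for i in coords) for coords in self.samples]

    def insert(self, x):
        for table, key in zip(self.tables, self._keys(x)):
            table.setdefault(key, []).append(x)

    def query(self, q):
        # Return the closest colliding candidate and its Hamming distance.
        best, best_d = None, float("inf")
        for table, key in zip(self.tables, self._keys(q)):
            for x in table.get(key, []):
                d = sum(a != b for a, b in zip(x, q))
                if d < best_d:
                    best, best_d = x, d
        return best, best_d

dim = 32
rng = random.Random(1)
points = [tuple(rng.randrange(2) for _ in range(dim)) for _ in range(200)]
index = BitSamplingLSH(dim)
for p in points:
    index.insert(p)
q = points[0][:30] + (1 - points[0][30], points[0][31])  # near-duplicate of points[0]
print(index.query(q)[1])  # -> 1 with (very) high probability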

Cite as

Samuel McCauley. Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 88:1-88:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mccauley:LIPIcs.ESA.2024.88,
  author =	{McCauley, Samuel},
  title =	{{Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{88:1--88:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.88},
  URN =		{urn:nbn:de:0030-drops-211590},
  doi =		{10.4230/LIPIcs.ESA.2024.88},
  annote =	{Keywords: similarity search, locality-sensitive hashing, randomized algorithms, data structures, space efficiency, function inversion}
}
Document
RANDOM
Matrix Multiplication Reductions

Authors: Ashish Gola, Igor Shinkar, and Harsimran Singh

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
In this paper we study a worst case to average case reduction for the problem of matrix multiplication over finite fields. Suppose we have an efficient average case algorithm that, given two random matrices A, B, outputs a matrix that has a non-trivial correlation with their product A ⋅ B. Can we transform it into a worst case algorithm that outputs the correct answer for all inputs, without incurring a significant overhead in the running time? We present two results in this direction.
- Two-sided error in the high agreement regime. We begin with a brief remark about a reduction for high agreement algorithms, i.e., algorithms which agree with the correct output on a large (say > 0.9) fraction of entries, and show that the standard self-correction of linearity allows us to transform such algorithms into algorithms that work in the worst case (a sketch of this self-correction appears below).
- One-sided error in the low agreement regime. Focusing on average case algorithms with one-sided error, we show that over 𝔽₂ there is a reduction that takes an O(T) time average case algorithm which, given a random input A, B, outputs a matrix that agrees with A ⋅ B on at least 51% of the entries (i.e., has only a slight advantage over the trivial algorithm), and transforms it into an Õ(T) time worst case algorithm that outputs the correct answer for all inputs with high probability.
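
The self-correction step for the high-agreement regime can be illustrated by the folklore masking identity over 𝔽₂: AB = (A+R)(B+S) + (A+R)S + R(B+S) + RS for uniformly random R, S, where each of the four calls sees marginally uniform inputs. The sketch below (names ours) shows only that identity; the repetition/majority amplification, and the paper's one-sided-error low-agreement reduction, are omitted.

import numpy as np

def self_correct_matmul(avg_case_mult, A, B, rng):
    """Worst-case F_2 matrix product from a routine that is correct on most
    uniformly random input pairs (high-agreement regime).  Over F_2,
        AB = (A+R)(B+S) + (A+R)S + R(B+S) + RS
    for any R, S, and each of the four calls sees marginally uniform inputs."""
    n = A.shape[0]
    R = rng.integers(0, 2, size=(n, n), dtype=np.uint8)
    S = rng.integers(0, 2, size=(n, n), dtype=np.uint8)
    terms = (avg_case_mult(A ^ R, B ^ S)
             ^ avg_case_mult(A ^ R, S)
             ^ avg_case_mult(R, B ^ S)
             ^ avg_case_mult(R, S))
    return terms & 1

rng = np.random.default_rng(0)

def honest(A, B):            # stand-in for an average-case algorithm;
    return (A @ B) % 2       # here it happens to be always correct

n = 4
A = rng.integers(0, 2, size=(n, n), dtype=np.uint8)
B = rng.integers(0, 2, size=(n, n), dtype=np.uint8)
assert np.array_equal(self_correct_matmul(honest, A, B, rng), (A @ B) % 2)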

Cite as

Ashish Gola, Igor Shinkar, and Harsimran Singh. Matrix Multiplication Reductions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gola_et_al:LIPIcs.APPROX/RANDOM.2024.34,
  author =	{Gola, Ashish and Shinkar, Igor and Singh, Harsimran},
  title =	{{Matrix Multiplication Reductions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.34},
  URN =		{urn:nbn:de:0030-drops-210274},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.34},
  annote =	{Keywords: Matrix Multiplication, Reductions, Worst case to average case reductions}
}
Document
Finer-Grained Hardness of Kernel Density Estimation

Authors: Josh Alman and Yunfeng Guan

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
In batch Kernel Density Estimation (KDE) for a kernel function f : ℝ^m × ℝ^m → ℝ, we are given as input 2n points x^{(1)}, …, x^{(n)}, y^{(1)}, …, y^{(n)} ∈ ℝ^m in dimension m, as well as a vector w ∈ ℝⁿ. These inputs implicitly define the n × n kernel matrix K given by K[i,j] = f(x^{(i)}, y^{(j)}). The goal is to compute a vector v ∈ ℝⁿ which approximates Kw, i.e., with ||Kw - v||_∞ < ε ||w||₁. For illustrative purposes, consider the Gaussian kernel f(x,y) := e^{-||x-y||₂²}. The classic approach to this problem is the famous Fast Multipole Method (FMM), which runs in time n ⋅ O(log^m(ε^{-1})) and is particularly effective in low dimensions because of its exponential dependence on m. Recently, as the higher-dimensional case m ≥ Ω(log n) has seen more applications in machine learning and statistics, new algorithms have focused on this setting: an algorithm using discrepancy theory, which runs in time O(n / ε), and an algorithm based on the polynomial method, which achieves inverse polynomial accuracy in almost linear time when the input points have bounded squared diameter B = o(log n). A recent line of work has proved fine-grained lower bounds, with the goal of showing that the "curse of dimensionality" arising in FMM is necessary assuming the Strong Exponential Time Hypothesis (SETH). Backurs et al. [NeurIPS 2017] first showed the hardness of a variety of Empirical Risk Minimization problems including KDE for Gaussian-like kernels in the case with high dimension m = Ω(log n) and large scale B = Ω(log n). Alman et al. [FOCS 2020] later developed new reductions in roughly this same parameter regime, leading to lower bounds for more general kernels, but only for very small error ε < 2^{-log^{Ω(1)}(n)}. In this paper, we refine the approach of Alman et al. to show new lower bounds in all parameter regimes, closing gaps between the known algorithms and lower bounds. For example:
- In the setting where m = C log n and B = o(log n), we prove Gaussian KDE requires n^{2-o(1)} time to achieve additive error ε < Ω(m/B)^{-m}, matching the performance of the polynomial method up to low-order terms.
- In the low dimensional setting m = o(log n), we show that Gaussian KDE requires n^{2-o(1)} time to achieve ε such that log log(ε^{-1}) > Ω̃((log n)/m), matching the error bound achievable by FMM up to low-order terms. To our knowledge, no nontrivial lower bound was previously known in this regime.
Our approach also generalizes to any parameter regime and any kernel. For example, we achieve similar fine-grained hardness results for any kernel with slowly-decaying Taylor coefficients, such as the Cauchy kernel. Our new lower bounds make use of an intricate analysis of the "counting matrix", a special case of the kernel matrix focused on carefully-chosen evaluation points. As a key technical lemma, we give a novel approach to bounding the entries of its inverse by using Schur polynomials from algebraic combinatorics.
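
To fix notation, the quantity being approximated is the matrix-vector product Kw. A naive exact computation for the Gaussian kernel, in quadratic time (the benchmark the lower bounds above are matched against), might look as follows; function and variable names are ours.

import numpy as np

def gaussian_kde_matvec(X, Y, w):
    """Exact batch Gaussian KDE: returns K @ w where
    K[i, j] = exp(-||x_i - y_j||_2^2).  Naive O(n^2 m) baseline."""
    # Squared Euclidean distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    K = np.exp(-np.maximum(sq, 0.0))   # clamp tiny negatives from roundoff
    return K @ w

rng = np.random.default_rng(0)
n, m = 100, 10                          # n points in dimension m
X, Y = rng.normal(size=(n, m)), rng.normal(size=(n, m))
w = rng.normal(size=n)
print(gaussian_kde_matvec(X, Y, w)[:3])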

Cite as

Josh Alman and Yunfeng Guan. Finer-Grained Hardness of Kernel Density Estimation. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 35:1-35:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alman_et_al:LIPIcs.CCC.2024.35,
  author =	{Alman, Josh and Guan, Yunfeng},
  title =	{{Finer-Grained Hardness of Kernel Density Estimation}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{35:1--35:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.35},
  URN =		{urn:nbn:de:0030-drops-204311},
  doi =		{10.4230/LIPIcs.CCC.2024.35},
  annote =	{Keywords: Kernel Density Estimation, Fine-Grained Complexity, Schur Polynomials}
}
Document
Track A: Algorithms, Complexity and Games
Faster Algorithms for Dual-Failure Replacement Paths

Authors: Shiri Chechik and Tianyi Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Given a simple weighted directed graph G = (V, E, ω) on n vertices as well as two designated terminals s, t ∈ V, our goal is to compute the shortest path from s to t avoiding any pair of presumably failed edges f₁, f₂ ∈ E, a natural generalization of the classical replacement path problem, which considers single edge failures only. This dual-failure replacement paths problem was recently studied by Vassilevska Williams, Woldeghebriel and Xu [FOCS 2022], who designed a cubic time algorithm for general weighted digraphs which is conditionally optimal; in the same paper, for unweighted graphs where ω ≡ 1, the authors presented an algebraic algorithm with runtime Õ(n^{2.9146}), as well as a conditional lower bound of n^{8/3-o(1)} against combinatorial algorithms. However, it was unknown in their work whether fast matrix multiplication is necessary for a subcubic runtime in unweighted digraphs. As our primary result, we present the first truly subcubic combinatorial algorithm for dual-failure replacement paths in unweighted digraphs. Our runtime is Õ(n^{3-1/18}). Besides, we also study algebraic algorithms for digraphs with small integer edge weights from {-M, -M+1, ⋯, M-1, M}. As our secondary result, we obtain a runtime of Õ(Mn^{2.8716}), which is faster than the previous bound of Õ(M^{2/3}n^{2.9144} + Mn^{2.8716}) from [Vassilevska Williams, Woldeghebriel and Xu, 2022].
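
For concreteness, the trivial baseline for a single failure pair is one shortest-path computation on the graph with the two edges removed (here Dijkstra, so assuming nonnegative weights); answering all O(m²) failure pairs this way is the naive benchmark the algorithms above improve on. A minimal sketch with names of our choosing, not the paper's method:

import heapq

def shortest_path_avoiding(adj, s, t, f1, f2):
    """Shortest s-t distance in a nonnegatively weighted digraph avoiding two
    failed edges f1, f2 (each a (u, v) pair).  adj maps u -> list of (v, w).
    One Dijkstra per failure pair: the trivial per-query baseline."""
    failed = {f1, f2}
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if (u, v) in failed:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

adj = {0: [(1, 1), (2, 5)], 1: [(2, 1)], 2: [(3, 1)], 3: []}
print(shortest_path_avoiding(adj, 0, 3, (1, 2), (9, 9)))  # 0->2->3, cost 6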

Cite as

Shiri Chechik and Tianyi Zhang. Faster Algorithms for Dual-Failure Replacement Paths. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 41:1-41:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chechik_et_al:LIPIcs.ICALP.2024.41,
  author =	{Chechik, Shiri and Zhang, Tianyi},
  title =	{{Faster Algorithms for Dual-Failure Replacement Paths}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{41:1--41:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.41},
  URN =		{urn:nbn:de:0030-drops-201849},
  doi =		{10.4230/LIPIcs.ICALP.2024.41},
  annote =	{Keywords: graph algorithms, shortest paths, replacement paths}
}
Document
Track A: Algorithms, Complexity and Games
It’s Hard to HAC Average Linkage!

Authors: MohammadHossein Bateni, Laxman Dhulipala, Kishen N. Gowda, D. Ellis Hershkowitz, Rajesh Jayaram, and Jakub Łącki

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Average linkage Hierarchical Agglomerative Clustering (HAC) is an extensively studied and applied method for hierarchical clustering. Recent applications to massive datasets have driven significant interest in near-linear-time and efficient parallel algorithms for average linkage HAC. We provide hardness results that rule out such algorithms. On the sequential side, we establish a runtime lower bound of n^{3/2-ε} on n node graphs for sequential combinatorial algorithms under standard fine-grained complexity assumptions. This essentially matches the best-known running time for average linkage HAC. On the parallel side, we prove that average linkage HAC likely cannot be parallelized even on simple graphs by showing that it is CC-hard on trees of diameter 4. On the possibility side, we demonstrate that average linkage HAC can be efficiently parallelized (i.e., it is in NC) on paths and can be solved in near-linear time when the height of the output cluster hierarchy is small.
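
For readers unfamiliar with the graph formulation: average-linkage HAC repeatedly merges the two clusters whose average inter-cluster similarity (total edge weight between them divided by the product of their sizes) is largest. A naive sketch of this process follows (names ours); it is the slow baseline, not the near-linear-time or parallel algorithms discussed above.

def average_linkage_hac(n, sim):
    """Naive average-linkage HAC on a similarity graph.
    sim: dict mapping frozenset({u, v}) -> similarity weight (missing = 0).
    Repeatedly merges the two clusters with the largest average similarity
    and records the merges."""
    clusters = {i: {i} for i in range(n)}
    merges = []
    def avg(a, b):
        total = sum(sim.get(frozenset((u, v)), 0.0)
                    for u in clusters[a] for v in clusters[b])
        return total / (len(clusters[a]) * len(clusters[b]))
    while len(clusters) > 1:
        ids = list(clusters)
        score, a, b = max(((avg(a, b), a, b)
                           for i, a in enumerate(ids) for b in ids[i + 1:]),
                          key=lambda t: t[0])
        merges.append((sorted(clusters[a]), sorted(clusters[b]), score))
        clusters[a] |= clusters.pop(b)
    return merges

# Two tight pairs {0,1} and {2,3} joined by a weak edge.
sim = {frozenset((0, 1)): 1.0, frozenset((2, 3)): 0.9, frozenset((1, 2)): 0.1}
for step in average_linkage_hac(4, sim):
    print(step)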

Cite as

MohammadHossein Bateni, Laxman Dhulipala, Kishen N. Gowda, D. Ellis Hershkowitz, Rajesh Jayaram, and Jakub Łącki. It’s Hard to HAC Average Linkage!. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bateni_et_al:LIPIcs.ICALP.2024.18,
  author =	{Bateni, MohammadHossein and Dhulipala, Laxman and Gowda, Kishen N. and Hershkowitz, D. Ellis and Jayaram, Rajesh and {\L}\k{a}cki, Jakub},
  title =	{{It’s Hard to HAC Average Linkage!}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.18},
  URN =		{urn:nbn:de:0030-drops-201613},
  doi =		{10.4230/LIPIcs.ICALP.2024.18},
  annote =	{Keywords: Clustering, Hierarchical Graph Clustering, HAC, Fine-Grained Complexity, Parallel Algorithms, CC}
}
Document
Track A: Algorithms, Complexity and Games
Finer-Grained Reductions in Fine-Grained Hardness of Approximation

Authors: Elie Abboud and Noga Ron-Zewi

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We investigate the relation between δ and ε required for obtaining a (1+δ)-approximation in time N^{2-ε} for closest pair problems under various distance metrics, and for other related problems in fine-grained complexity. Specifically, our main result shows that if it is impossible to (exactly) solve the (bichromatic) inner product (IP) problem for vectors of dimension c log N in time N^{2-ε}, then there is no (1+δ)-approximation algorithm for (bichromatic) Euclidean Closest Pair running in time N^{2-2ε}, where δ ≈ (ε/c)² (where ≈ hides polylog factors). This improves on the prior result due to Chen and Williams (SODA 2019) which gave a smaller polynomial dependence of δ on ε, on the order of δ ≈ (ε/c)⁶. Our result implies in turn that no (1+δ)-approximation algorithm exists for Euclidean closest pair for δ ≈ ε⁴, unless an algorithmic improvement for IP is obtained. This in turn is very close to the approximation guarantee of δ ≈ ε³ for Euclidean closest pair, given by the best known algorithm of Alman, Chan, and Williams (FOCS 2016). By known reductions, a similar result follows for a host of other related problems in fine-grained hardness of approximation. Our reduction combines the hardness of approximation framework of Chen and Williams, together with an MA communication protocol for IP over a small alphabet, that is inspired by the MA protocol of Chen (Theory of Computing, 2020).
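
To make the quantitative gain concrete (ignoring the polylog factors hidden by ≈, and purely as an illustration of the scaling), take c = 10 and ε = 0.1:

  \[
    \delta_{\text{new}} \approx \left(\frac{\varepsilon}{c}\right)^{2} = 10^{-4},
    \qquad
    \delta_{\text{prior}} \approx \left(\frac{\varepsilon}{c}\right)^{6} = 10^{-12} .
  \]

Under the same hardness assumption for IP, the new reduction therefore already rules out a (1+10^{-4})-approximation for Euclidean Closest Pair in time N^{2-2ε}, whereas the earlier reduction of Chen and Williams only ruled out the much finer (1+10^{-12})-approximation.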

Cite as

Elie Abboud and Noga Ron-Zewi. Finer-Grained Reductions in Fine-Grained Hardness of Approximation. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ICALP.2024.7,
  author =	{Abboud, Elie and Ron-Zewi, Noga},
  title =	{{Finer-Grained Reductions in Fine-Grained Hardness of Approximation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.7},
  URN =		{urn:nbn:de:0030-drops-201507},
  doi =		{10.4230/LIPIcs.ICALP.2024.7},
  annote =	{Keywords: Fine-grained complexity, conditional lower bound, fine-grained reduction, Approximation algorithms, Analysis of algorithms, Computational geometry, Computational and structural complexity theory}
}
Document
Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming

Authors: Josh Alman, Ethan Turok, Hantao Yu, and Hengzhi Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Generalizing work of Künnemann, Paturi, and Schneider [ICALP 2017], we study a wide class of high-dimensional dynamic programming (DP) problems in which one must find the shortest path between two points in a high-dimensional grid given a tensor of transition costs between nodes in the grid. This captures many classical problems which are solved using DP such as the knapsack problem, the airplane refueling problem, and the minimal-weight polygon triangulation problem. We observe that for many of these problems, the tensor naturally has low tensor rank or low slice rank. We then give new algorithms and a web of fine-grained reductions to tightly determine the complexity of these problems. For instance, we show that a polynomial speedup over the DP algorithm is possible when the tensor rank is a constant or the slice rank is 1, but that such a speedup is impossible if the tensor rank is slightly super-constant (assuming SETH) or the slice rank is at least 3 (assuming the APSP conjecture). We find that this characterizes the known complexities for many of these problems, and in some cases leads to new faster algorithms.
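
As a concrete instance of the one-dimensional setting this work generalizes, the least-weight subsequence DP studied by Künnemann, Paturi, and Schneider looks as follows; the rank-1 cost in the example is the kind of low-rank structure under which, per the results above, polynomial speedups are possible. A textbook sketch (names ours), not the paper's high-dimensional algorithms.

def least_weight_subsequence(n, w):
    """Classic least-weight subsequence DP:
        dp[0] = 0,  dp[j] = min_{0 <= i < j} dp[i] + w(i, j).
    Quadratic-time textbook version; w is an arbitrary cost function."""
    INF = float("inf")
    dp = [0.0] + [INF] * n
    for j in range(1, n + 1):
        dp[j] = min(dp[i] + w(i, j) for i in range(j))
    return dp[n]

# Rank-1 cost w(i, j) = a[i] * b[j]: the structured regime discussed above.
a = [1.0, 2.0, 0.5, 3.0, 1.0]
b = [0.0, 1.0, 1.0, 2.0, 1.0, 1.0]
print(least_weight_subsequence(5, lambda i, j: a[i] * b[j]))  # -> 1.0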

Cite as

Josh Alman, Ethan Turok, Hantao Yu, and Hengzhi Zhang. Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 4:1-4:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alman_et_al:LIPIcs.ITCS.2024.4,
  author =	{Alman, Josh and Turok, Ethan and Yu, Hantao and Zhang, Hengzhi},
  title =	{{Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{4:1--4:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.4},
  URN =		{urn:nbn:de:0030-drops-195324},
  doi =		{10.4230/LIPIcs.ITCS.2024.4},
  annote =	{Keywords: Fine-grained complexity, Dynamic programming, Least-weight subsequence}
}
Document
Matrix Multiplication and Number on the Forehead Communication

Authors: Josh Alman and Jarosław Błasiok

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Three-player Number On the Forehead communication may be thought of as a three-player Number In the Hand promise model, in which each player is given the inputs that are supposedly on the other two players' heads, and promised that they are consistent with the inputs of the other players. The set of all allowed inputs under this promise may be thought of as an order-3 tensor. We surprisingly observe that this tensor is exactly the matrix multiplication tensor, which is widely studied in the design of fast matrix multiplication algorithms. Using this connection, we prove a number of results about both Number On the Forehead communication and matrix multiplication, each by using known results or techniques about the other. For example, we show how the Laser method, a key technique used to design the best matrix multiplication algorithms, can also be used to design communication protocols for a variety of problems. We also show how known lower bounds for Number On the Forehead communication can be used to bound properties of the matrix multiplication tensor such as its zeroing out subrank. Finally, we substantially generalize known methods based on slice-rank for studying communication, and show how they directly relate to the matrix multiplication exponent ω.
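
For reference, the matrix multiplication tensor referred to here is the order-3 tensor (standard definition, stated only for context):

  \[
    \langle a, b, c \rangle \;=\; \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{c} x_{ij}\, y_{jk}\, z_{ki} ,
  \]

and the exponent ω of matrix multiplication is governed by the growth of the rank of these tensors, ω = inf{ τ : R(⟨n,n,n⟩) = O(n^τ) }. The observation described above is that the allowed input triples in the three-player Number On the Forehead promise model form exactly this tensor.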

Cite as

Josh Alman and Jarosław Błasiok. Matrix Multiplication and Number on the Forehead Communication. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{alman_et_al:LIPIcs.CCC.2023.16,
  author =	{Alman, Josh and B{\l}asiok, Jaros{\l}aw},
  title =	{{Matrix Multiplication and Number on the Forehead Communication}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{16:1--16:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.16},
  URN =		{urn:nbn:de:0030-drops-182861},
  doi =		{10.4230/LIPIcs.CCC.2023.16},
  annote =	{Keywords: Number on the forehead, communication complexity, matrix multiplication}
}
Document
Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation

Authors: Amol Aggarwal and Josh Alman

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
For any real numbers B ≥ 1 and δ ∈ (0,1) and function f: [0,B] → ℝ, let d_{B; δ}(f) ∈ ℤ_{> 0} denote the minimum degree of a polynomial p(x) satisfying sup_{x ∈ [0,B]} |p(x) - f(x)| < δ. In this paper, we provide precise asymptotics for d_{B; δ}(e^{-x}) and d_{B; δ}(e^x) in terms of both B and δ, improving both the previously known upper bounds and lower bounds. In particular, we show d_{B; δ}(e^{-x}) = Θ(max{√(B log(δ^{-1})), log(δ^{-1})/log(B^{-1} log(δ^{-1}))}) and d_{B; δ}(e^{x}) = Θ(max{B, log(δ^{-1})/log(B^{-1} log(δ^{-1}))}), and we explicitly determine the leading coefficients in most parameter regimes. Polynomial approximations for e^{-x} and e^x have applications to the design of algorithms for many problems, including in scientific computing, graph algorithms, machine learning, and statistics. Our degree bounds show both the power and limitations of these algorithms. We focus in particular on the Batch Gaussian Kernel Density Estimation problem for n sample points in Θ(log n) dimensions with error δ = n^{-Θ(1)}. We show that the running time one can achieve depends on the square of the diameter of the point set, B, with a transition at B = Θ(log n) mirroring the corresponding transition in d_{B; δ}(e^{-x}):
- When B = o(log n), we give the first algorithm running in time n^{1 + o(1)}.
- When B = κ log n for a small constant κ > 0, we give an algorithm running in time n^{1 + O(log log κ^{-1} / log κ^{-1})}. The log log κ^{-1} / log κ^{-1} term in the exponent comes from analyzing the behavior of the leading constant in our computation of d_{B; δ}(e^{-x}).
- When B = ω(log n), we show that time n^{2 - o(1)} is necessary assuming SETH.
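
A rough way to see the growth of d_{B; δ}(e^{-x}) numerically is to fit Chebyshev polynomials of increasing degree on [0, B] and record when the sup error on a dense grid drops below δ. This is only an illustrative probe (a least-squares fit merely upper-bounds the true minimax degree); function and parameter names are ours.

import numpy as np

def approx_min_degree(f, B, delta, max_deg=120):
    """Empirical proxy for d_{B; delta}(f): smallest degree whose Chebyshev
    least-squares fit on a dense grid of [0, B] has sup error below delta
    on that grid.  Illustrative only, not a minimax computation."""
    x = np.linspace(0.0, B, 4000)
    y = f(x)
    for d in range(max_deg + 1):
        p = np.polynomial.Chebyshev.fit(x, y, d)
        if np.max(np.abs(p(x) - y)) < delta:
            return d
    return None

# The required degree grows roughly like sqrt(B log(1/delta)) once B is large.
for B in (1.0, 4.0, 16.0):
    print(B, approx_min_degree(lambda t: np.exp(-t), B, 1e-6))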

Cite as

Amol Aggarwal and Josh Alman. Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 22:1-22:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{aggarwal_et_al:LIPIcs.CCC.2022.22,
  author =	{Aggarwal, Amol and Alman, Josh},
  title =	{{Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{22:1--22:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.22},
  URN =		{urn:nbn:de:0030-drops-165846},
  doi =		{10.4230/LIPIcs.CCC.2022.22},
  annote =	{Keywords: polynomial approximation, kernel density estimation, Chebyshev polynomials}
}
Document
Track A: Algorithms, Complexity and Games
Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras

Authors: Josh Alman and Dean Hirsch

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We design the first efficient sensitivity oracles and dynamic algorithms for a variety of parameterized problems. Our main approach is to modify the algebraic coding technique from static parameterized algorithm design, which had not previously been used in a dynamic context. We particularly build off of the "extensor coding" method of Brand, Dell and Husfeldt [STOC'18], employing properties of the exterior algebra over different fields. For the k-Path detection problem for directed graphs, it is known that no efficient dynamic algorithm exists (under popular assumptions from fine-grained complexity). We circumvent this by designing an efficient sensitivity oracle, which preprocesses a directed graph on n vertices in 2^k poly(k) n^{ω+o(1)} time, such that, given 𝓁 updates (mixing edge insertions and deletions, and vertex deletions) to that input graph, it can decide in time 𝓁² 2^k poly(k), with high probability, whether the updated graph contains a path of length k. We also give a deterministic sensitivity oracle requiring 4^k poly(k) n^{ω+o(1)} preprocessing time and 𝓁² 2^{ω k + o(k)} query time, and obtain a randomized sensitivity oracle for the task of approximately counting the number of k-paths. For k-Path detection in undirected graphs, we obtain a randomized sensitivity oracle with O(1.66^k n³) preprocessing time and O(𝓁³ 1.66^k) query time, and a better bound for undirected bipartite graphs. In addition, we present the first fully dynamic algorithms for a variety of problems: k-Partial Cover, m-Set k-Packing, t-Dominating Set, d-Dimensional k-Matching, and Exact k-Partial Cover. For example, for k-Partial Cover we show a randomized dynamic algorithm with 2^k poly(k) polylog(n) update time, and a deterministic dynamic algorithm with 4^k poly(k) polylog(n) update time. Finally, we show how our techniques can be adapted to deal with natural variants on these problems where additional constraints are imposed on the solutions.
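
For orientation, "k-Path detection" asks whether a graph contains a simple path on k vertices. The classic static way to test this is color coding (Alon, Yuster, and Zwick), sketched below; this is deliberately not the paper's extensor-coding approach or its sensitivity oracle, and only fixes the problem being maintained. Names and parameters are ours.

import random

def has_k_path(adj, k, trials=300, rng=random.Random(0)):
    """Color-coding test for a simple directed path on k vertices.
    One-sided error: a 'True' answer is always correct; with enough trials a
    k-path is found with high probability if one exists."""
    vertices = list(adj)
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in vertices}
        # reach[v] = color subsets (bitmasks) of colorful paths ending at v
        reach = {v: {1 << color[v]} for v in vertices}
        for _ in range(k - 1):
            new = {v: set() for v in vertices}
            for u in vertices:
                for v in adj[u]:
                    for mask in reach[u]:
                        if not mask & (1 << color[v]):
                            new[v].add(mask | (1 << color[v]))
            reach = new
        full = (1 << k) - 1
        if any(full in masks for masks in reach.values()):
            return True
    return False  # probably no k-path

# Directed path 0 -> 1 -> 2 -> 3 plus an extra edge.
adj = {0: [1], 1: [2], 2: [3], 3: [], 4: [0]}
print(has_k_path(adj, 4))  # True with high probability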

Cite as

Josh Alman and Dean Hirsch. Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 9:1-9:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{alman_et_al:LIPIcs.ICALP.2022.9,
  author =	{Alman, Josh and Hirsch, Dean},
  title =	{{Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{9:1--9:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.9},
  URN =		{urn:nbn:de:0030-drops-163504},
  doi =		{10.4230/LIPIcs.ICALP.2022.9},
  annote =	{Keywords: sensitivity oracles, k-path, dynamic algorithms, parameterized algorithms, set packing, partial cover, exterior algebra, extensor, algebraic algorithms}
}
Document
OV Graphs Are (Probably) Hard Instances

Authors: Josh Alman and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
A graph G on n nodes is an Orthogonal Vectors (OV) graph of dimension d if there are vectors v_1, …, v_n ∈ {0,1}^d such that nodes i and j are adjacent in G if and only if ⟨v_i,v_j⟩ = 0 over ℤ. In this paper, we study a number of basic graph algorithm problems, except where one is given as input the vectors defining an OV graph instead of a general graph. We show that for each of the following problems, an algorithm solving it faster on such OV graphs G of dimension only d = O(log n) than in the general case would refute a plausible conjecture about the time required to solve sparse MAX-k-SAT instances:
- Determining whether G contains a triangle.
- More generally, determining whether G contains a directed k-cycle for any k ≥ 3.
- Computing the square of the adjacency matrix of G over ℤ or 𝔽₂.
- Maintaining the shortest distance between two fixed nodes of G, or whether G has a perfect matching, when G is a dynamically updating OV graph.
We also prove some complementary results about OV graphs. We show that any problem which is NP-hard on constant-degree graphs is also NP-hard on OV graphs of dimension O(log n), and we give two problems which can be solved faster on OV graphs than in general: Maximum Clique, and Online Matrix-Vector Multiplication.
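
To make the object concrete, here is how an OV graph is materialized from its defining vectors, together with a naive triangle check. Building the adjacency explicitly already takes n²d time; the results above suggest that, given only the vectors with d = O(log n), problems such as triangle detection cannot be solved much faster than in general graphs. Names are ours.

import numpy as np

def ov_graph(vectors):
    """Adjacency matrix of the Orthogonal Vectors graph: nodes i, j are
    adjacent iff <v_i, v_j> = 0 over the integers."""
    V = np.asarray(vectors, dtype=np.int64)
    inner = V @ V.T
    A = (inner == 0)
    np.fill_diagonal(A, False)
    return A

def has_triangle(A):
    # Naive check: any i < j < k with all three edges present.
    n = A.shape[0]
    return any(A[i, j] and A[j, k] and A[i, k]
               for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n))

vectors = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
A = ov_graph(vectors)
print(has_triangle(A))  # True: vectors 0, 1, 2 are pairwise orthogonal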

Cite as

Josh Alman and Virginia Vassilevska Williams. OV Graphs Are (Probably) Hard Instances. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 83:1-83:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{alman_et_al:LIPIcs.ITCS.2020.83,
  author =	{Alman, Josh and Vassilevska Williams, Virginia},
  title =	{{OV Graphs Are (Probably) Hard Instances}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{83:1--83:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.83},
  URN =		{urn:nbn:de:0030-drops-117686},
  doi =		{10.4230/LIPIcs.ITCS.2020.83},
  annote =	{Keywords: Orthogonal Vectors, Fine-Grained Reductions, Cycle Finding}
}
Document
Limits on the Universal Method for Matrix Multiplication

Authors: Josh Alman

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
In this work, we prove limitations on the known methods for designing matrix multiplication algorithms. Alman and Vassilevska Williams [Alman and Williams, 2018] recently defined the Universal Method, which substantially generalizes all the known approaches including Strassen’s Laser Method [V. Strassen, 1987] and Cohn and Umans' Group Theoretic Method [Cohn and Umans, 2003]. We prove concrete lower bounds on the algorithms one can design by applying the Universal Method to many different tensors. Our proofs use new tools for upper bounding the asymptotic slice rank of a wide range of tensors. Our main result is that the Universal Method applied to any Coppersmith-Winograd tensor CW_q cannot yield a bound on ω, the exponent of matrix multiplication, better than 2.16805. By comparison, it was previously only known that the weaker "Galactic Method" applied to CW_q could not achieve an exponent of 2. We also study the Laser Method (which is, in principle, a highly special case of the Universal Method) and prove that it is "complete" for matrix multiplication algorithms: when it applies to a tensor T, it achieves ω = 2 if and only if it is possible for the Universal Method applied to T to achieve ω = 2. Hence, the Laser Method, which was originally used as an algorithmic tool, can also be seen as a lower bounding tool. For example, in their landmark paper, Coppersmith and Winograd [Coppersmith and Winograd, 1990] achieved a bound of ω ≤ 2.376, by applying the Laser Method to CW_q. By our result, the fact that they did not achieve ω = 2 implies a lower bound on the Universal Method applied to CW_q. Indeed, if it were possible for the Universal Method applied to CW_q to achieve ω = 2, then Coppersmith and Winograd’s application of the Laser Method would have achieved ω = 2.
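
For context, the slice rank used in these upper bounds is the standard notion: for a trilinear form T in variable groups x, y, z,

  \[
    \operatorname{slice\ rank}(T) \;=\; \min\Bigl\{\, r \;:\; T = \sum_{\ell=1}^{r} f_\ell \cdot g_\ell \,\Bigr\} ,
  \]

where in each summand f_ℓ is a linear form in one of the three variable groups and g_ℓ is a bilinear form in the remaining two. Slice rank never exceeds rank, and, as stated above, upper bounds on the asymptotic slice rank of the tensors in question are the tool behind the 2.16805 barrier for the Universal Method.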

Cite as

Josh Alman. Limits on the Universal Method for Matrix Multiplication. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{alman:LIPIcs.CCC.2019.12,
  author =	{Alman, Josh},
  title =	{{Limits on the Universal Method for Matrix Multiplication}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.12},
  URN =		{urn:nbn:de:0030-drops-108347},
  doi =		{10.4230/LIPIcs.CCC.2019.12},
  annote =	{Keywords: Matrix Multiplication, Laser Method, Slice Rank}
}
  • Refine by Author
  • 10 Alman, Josh
  • 3 Vassilevska Williams, Virginia
  • 2 Chen, Lijie
  • 2 Williams, R. Ryan
  • 1 Abboud, Elie
  • Show More...

  • Refine by Classification
  • 4 Theory of computation → Design and analysis of algorithms
  • 4 Theory of computation → Problems, reductions and completeness
  • 2 Theory of computation
  • 2 Theory of computation → Algebraic complexity theory
  • 2 Theory of computation → Circuit complexity
  • Show More...

  • Refine by Keyword
  • 2 Circuit Lower Bounds
  • 2 Derandomization
  • 2 Fine-Grained Complexity
  • 2 Fine-grained complexity
  • 2 Matrix Multiplication
  • Show More...

  • Refine by Type
  • 21 document

  • Refine by Publication Year
  • 10 2024
  • 5 2019
  • 2 2022
  • 1 2017
  • 1 2018
  • Show More...
