Search Results

Documents authored by Alman, Josh


Document
Finer-Grained Hardness of Kernel Density Estimation

Authors: Josh Alman and Yunfeng Guan

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
In batch Kernel Density Estimation (KDE) for a kernel function f : ℝ^m × ℝ^m → ℝ, we are given as input 2n points x^{(1)}, …, x^{(n)}, y^{(1)}, …, y^{(n)} ∈ ℝ^m in dimension m, as well as a vector w ∈ ℝⁿ. These inputs implicitly define the n × n kernel matrix K given by K[i,j] = f(x^{(i)}, y^{(j)}). The goal is to compute a vector v ∈ ℝⁿ which approximates Kw, i.e., with ||Kw - v||_∞ < ε ||w||₁. For illustrative purposes, consider the Gaussian kernel f(x,y) := e^{-||x-y||₂²}. The classic approach to this problem is the famous Fast Multipole Method (FMM), which runs in time n ⋅ O(log^m(ε^{-1})) and is particularly effective in low dimensions because of its exponential dependence on m. Recently, as the higher-dimensional case m ≥ Ω(log n) has seen more applications in machine learning and statistics, new algorithms have focused on this setting: an algorithm using discrepancy theory, which runs in time O(n / ε), and an algorithm based on the polynomial method, which achieves inverse polynomial accuracy in almost linear time when the input points have bounded square diameter B < o(log n).

A recent line of work has proved fine-grained lower bounds, with the goal of showing that the "curse of dimensionality" arising in FMM is necessary assuming the Strong Exponential Time Hypothesis (SETH). Backurs et al. [NeurIPS 2017] first showed the hardness of a variety of Empirical Risk Minimization problems including KDE for Gaussian-like kernels in the case with high dimension m = Ω(log n) and large scale B = Ω(log n). Alman et al. [FOCS 2020] later developed new reductions in roughly this same parameter regime, leading to lower bounds for more general kernels, but only for very small error ε < 2^{-log^{Ω(1)}(n)}.

In this paper, we refine the approach of Alman et al. to show new lower bounds in all parameter regimes, closing gaps between the known algorithms and lower bounds. For example:
- In the setting where m = C log n and B = o(log n), we prove Gaussian KDE requires n^{2-o(1)} time to achieve additive error ε < Ω(m/B)^{-m}, matching the performance of the polynomial method up to low-order terms.
- In the low-dimensional setting m = o(log n), we show that Gaussian KDE requires n^{2-o(1)} time to achieve ε such that log log(ε^{-1}) > Ω̃((log n)/m), matching the error bound achievable by FMM up to low-order terms. To our knowledge, no nontrivial lower bound was previously known in this regime.

Our approach also generalizes to any parameter regime and any kernel. For example, we achieve similar fine-grained hardness results for any kernel with slowly-decaying Taylor coefficients, such as the Cauchy kernel. Our new lower bounds make use of an intricate analysis of the "counting matrix", a special case of the kernel matrix focused on carefully-chosen evaluation points. As a key technical lemma, we give a novel approach to bounding the entries of its inverse by using Schur polynomials from algebraic combinatorics.
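For reference, the following is a minimal brute-force sketch in Python (an illustration with hypothetical sizes, not taken from the paper): it computes Kw exactly for the Gaussian kernel in Θ(n²m) time, which is the quadratic baseline that the paper's algorithms and SETH-based lower bounds are measured against.

```python
# Illustrative only: a brute-force baseline for batch Gaussian KDE,
# computing K @ w exactly in O(n^2 * m) time. The paper studies which
# approximation guarantees can beat this quadratic cost; none of its
# algorithms or reductions are reproduced here.
import numpy as np

def gaussian_kde_exact(X, Y, w):
    """Return the vector Kw, where K[i, j] = exp(-||X[i] - Y[j]||_2^2)."""
    # Pairwise squared distances via ||x - y||^2 = ||x||^2 - 2<x, y> + ||y||^2.
    sq = (X**2).sum(1)[:, None] - 2.0 * X @ Y.T + (Y**2).sum(1)[None, :]
    K = np.exp(-sq)   # n x n Gaussian kernel matrix
    return K @ w      # an approximation v need only satisfy ||Kw - v||_inf < eps * ||w||_1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 500, 20    # hypothetical sizes; the hard regimes in the paper are phrased in terms of m vs. log n
    X, Y = rng.normal(size=(n, m)), rng.normal(size=(n, m))
    w = rng.normal(size=n)
    print(gaussian_kde_exact(X, Y, w)[:5])
```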

Cite as

Josh Alman and Yunfeng Guan. Finer-Grained Hardness of Kernel Density Estimation. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 35:1-35:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alman_et_al:LIPIcs.CCC.2024.35,
  author =	{Alman, Josh and Guan, Yunfeng},
  title =	{{Finer-Grained Hardness of Kernel Density Estimation}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{35:1--35:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.35},
  URN =		{urn:nbn:de:0030-drops-204311},
  doi =		{10.4230/LIPIcs.CCC.2024.35},
  annote =	{Keywords: Kernel Density Estimation, Fine-Grained Complexity, Schur Polynomials}
}
Document
Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming

Authors: Josh Alman, Ethan Turok, Hantao Yu, and Hengzhi Zhang

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Generalizing work of Künnemann, Paturi, and Schneider [ICALP 2017], we study a wide class of high-dimensional dynamic programming (DP) problems in which one must find the shortest path between two points in a high-dimensional grid given a tensor of transition costs between nodes in the grid. This captures many classical problems which are solved using DP such as the knapsack problem, the airplane refueling problem, and the minimal-weight polygon triangulation problem. We observe that for many of these problems, the tensor naturally has low tensor rank or low slice rank. We then give new algorithms and a web of fine-grained reductions to tightly determine the complexity of these problems. For instance, we show that a polynomial speedup over the DP algorithm is possible when the tensor rank is a constant or the slice rank is 1, but that such a speedup is impossible if the tensor rank is slightly super-constant (assuming SETH) or the slice rank is at least 3 (assuming the APSP conjecture). We find that this characterizes the known complexities for many of these problems, and in some cases leads to new faster algorithms.
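As context for this class of problems, here is a minimal Python sketch (an illustration, not the paper's algorithms) of the one-dimensional special case: the classical least-weight subsequence recurrence T[j] = min_{0 ≤ i < j}(T[i] + c[i][j]) solved by the textbook quadratic dynamic program. The paper studies when low tensor rank or slice rank of the transition costs permits a polynomial speedup over this kind of baseline.

```python
# A minimal sketch of the least-weight subsequence (LWS) recurrence that the
# paper's high-dimensional framework generalizes: T[0] = 0 and
#     T[j] = min_{0 <= i < j} ( T[i] + c[i][j] ),
# solved by the standard O(n^2) dynamic program. The cost matrix c is a
# hypothetical stand-in for the transition-cost tensor studied in the paper;
# the rank-based speedups and fine-grained reductions are not reproduced.
import numpy as np

def lws(c):
    """c: (n+1) x (n+1) cost matrix; returns T[n], the least weight of a path 0 -> n."""
    n = c.shape[0] - 1
    T = np.full(n + 1, np.inf)
    T[0] = 0.0
    for j in range(1, n + 1):
        T[j] = min(T[i] + c[i, j] for i in range(j))
    return T[n]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 200
    costs = rng.random((n + 1, n + 1))
    print(lws(costs))
```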

Cite as

Josh Alman, Ethan Turok, Hantao Yu, and Hengzhi Zhang. Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 4:1-4:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alman_et_al:LIPIcs.ITCS.2024.4,
  author =	{Alman, Josh and Turok, Ethan and Yu, Hantao and Zhang, Hengzhi},
  title =	{{Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{4:1--4:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.4},
  URN =		{urn:nbn:de:0030-drops-195324},
  doi =		{10.4230/LIPIcs.ITCS.2024.4},
  annote =	{Keywords: Fine-grained complexity, Dynamic programming, Least-weight subsequence}
}
Document
Matrix Multiplication and Number on the Forehead Communication

Authors: Josh Alman and Jarosław Błasiok

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Three-player Number On the Forehead communication may be thought of as a three-player Number In the Hand promise model, in which each player is given the inputs that are supposedly on the other two players' heads, and promised that they are consistent with the inputs of the other players. The set of all allowed inputs under this promise may be thought of as an order-3 tensor. We surprisingly observe that this tensor is exactly the matrix multiplication tensor, which is widely studied in the design of fast matrix multiplication algorithms. Using this connection, we prove a number of results about both Number On the Forehead communication and matrix multiplication, each by using known results or techniques about the other. For example, we show how the Laser method, a key technique used to design the best matrix multiplication algorithms, can also be used to design communication protocols for a variety of problems. We also show how known lower bounds for Number On the Forehead communication can be used to bound properties of the matrix multiplication tensor such as its zeroing out subrank. Finally, we substantially generalize known methods based on slice-rank for studying communication, and show how they directly relate to the matrix multiplication exponent ω.
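To make the central object concrete, here is a small Python sketch (an assumed illustration, not from the paper) that builds the n × n matrix multiplication tensor as an order-3 0/1 array; the abstract identifies exactly this tensor with the set of allowed inputs in the three-player Number On the Forehead promise model.

```python
# Illustrative sketch: the n x n matrix multiplication tensor <n, n, n>,
# viewed as an order-3 0/1 tensor whose axes are indexed by the entry
# positions of three matrices X, Y, Z. Entry ((i, j), (j', k), (k', i')) is 1
# exactly when j == j', k == k', i == i', i.e. when x_{ij} y_{jk} z_{ki} is a
# term of the trilinear form tr(XYZ).
import numpy as np

def mm_tensor(n):
    T = np.zeros((n * n, n * n, n * n), dtype=np.int8)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                T[i * n + j, j * n + k, k * n + i] = 1
    return T

if __name__ == "__main__":
    T = mm_tensor(2)
    print(T.shape, int(T.sum()))  # (4, 4, 4) and 8 = 2^3 nonzero entries
```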

Cite as

Josh Alman and Jarosław Błasiok. Matrix Multiplication and Number on the Forehead Communication. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{alman_et_al:LIPIcs.CCC.2023.16,
  author =	{Alman, Josh and B{\l}asiok, Jaros{\l}aw},
  title =	{{Matrix Multiplication and Number on the Forehead Communication}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{16:1--16:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.16},
  URN =		{urn:nbn:de:0030-drops-182861},
  doi =		{10.4230/LIPIcs.CCC.2023.16},
  annote =	{Keywords: Number on the forehead, communication complexity, matrix multiplication}
}
Document
Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation

Authors: Amol Aggarwal and Josh Alman

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
For any real numbers B ≥ 1 and δ ∈ (0,1) and function f: [0,B] → ℝ, let d_{B;δ}(f) ∈ ℤ_{>0} denote the minimum degree of a polynomial p(x) satisfying sup_{x ∈ [0,B]} |p(x) - f(x)| < δ. In this paper, we provide precise asymptotics for d_{B;δ}(e^{-x}) and d_{B;δ}(e^x) in terms of both B and δ, improving both the previously known upper bounds and lower bounds. In particular, we show d_{B;δ}(e^{-x}) = Θ(max{√(B log(δ^{-1})), log(δ^{-1}) / log(B^{-1} log(δ^{-1}))}) and d_{B;δ}(e^{x}) = Θ(max{B, log(δ^{-1}) / log(B^{-1} log(δ^{-1}))}), and we explicitly determine the leading coefficients in most parameter regimes.

Polynomial approximations for e^{-x} and e^x have applications to the design of algorithms for many problems, including in scientific computing, graph algorithms, machine learning, and statistics. Our degree bounds show both the power and limitations of these algorithms. We focus in particular on the Batch Gaussian Kernel Density Estimation problem for n sample points in Θ(log n) dimensions with error δ = n^{-Θ(1)}. We show that the running time one can achieve depends on the square of the diameter of the point set, B, with a transition at B = Θ(log n) mirroring the corresponding transition in d_{B;δ}(e^{-x}):
- When B = o(log n), we give the first algorithm running in time n^{1 + o(1)}.
- When B = κ log n for a small constant κ > 0, we give an algorithm running in time n^{1 + O(log log κ^{-1} / log κ^{-1})}. The log log κ^{-1} / log κ^{-1} term in the exponent comes from analyzing the behavior of the leading constant in our computation of d_{B;δ}(e^{-x}).
- When B = ω(log n), we show that time n^{2 - o(1)} is necessary assuming SETH.
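The quantity d_{B;δ} can be explored numerically. The Python sketch below (an assumed illustration, not the paper's technique) computes an upper bound on d_{B;δ}(e^{-x}) by increasing the degree of a Chebyshev interpolant on [0,B] until its uniform error drops below δ; the paper's contribution is the precise asymptotics of the optimal degree, which interpolation alone does not give.

```python
# A numerical sketch (not the paper's proof technique): estimate how large a
# degree suffices for a Chebyshev interpolant of e^{-x} on [0, B] to have
# uniform error below delta. Chebyshev interpolation only gives an upper
# bound on the optimal degree d_{B;delta}(e^{-x}) studied in the abstract.
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_degree_upper_bound(B, delta, max_deg=200):
    grid = np.linspace(0.0, B, 4001)          # dense grid for the sup-norm estimate
    target = np.exp(-grid)
    for d in range(1, max_deg + 1):
        # Interpolate at d+1 Chebyshev nodes mapped from [-1, 1] to [0, B].
        t = np.cos(np.pi * (np.arange(d + 1) + 0.5) / (d + 1))
        nodes = 0.5 * B * (t + 1.0)
        coeffs = C.chebfit(t, np.exp(-nodes), d)
        err = np.abs(C.chebval(2.0 * grid / B - 1.0, coeffs) - target).max()
        if err < delta:
            return d
    return None

if __name__ == "__main__":
    for B in (1.0, 4.0, 16.0):
        print(B, cheb_degree_upper_bound(B, delta=1e-6))
```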

Cite as

Amol Aggarwal and Josh Alman. Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 22:1-22:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{aggarwal_et_al:LIPIcs.CCC.2022.22,
  author =	{Aggarwal, Amol and Alman, Josh},
  title =	{{Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{22:1--22:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.22},
  URN =		{urn:nbn:de:0030-drops-165846},
  doi =		{10.4230/LIPIcs.CCC.2022.22},
  annote =	{Keywords: polynomial approximation, kernel density estimation, Chebyshev polynomials}
}
Document
Track A: Algorithms, Complexity and Games
Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras

Authors: Josh Alman and Dean Hirsch

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We design the first efficient sensitivity oracles and dynamic algorithms for a variety of parameterized problems. Our main approach is to modify the algebraic coding technique from static parameterized algorithm design, which had not previously been used in a dynamic context. We particularly build off of the "extensor coding" method of Brand, Dell and Husfeldt [STOC'18], employing properties of the exterior algebra over different fields.

For the k-Path detection problem for directed graphs, it is known that no efficient dynamic algorithm exists (under popular assumptions from fine-grained complexity). We circumvent this by designing an efficient sensitivity oracle, which preprocesses a directed graph on n vertices in 2^k poly(k) n^{ω+o(1)} time, such that, given 𝓁 updates (mixing edge insertions and deletions, and vertex deletions) to that input graph, it can decide in time 𝓁² 2^k poly(k), with high probability, whether the updated graph contains a path of length k. We also give a deterministic sensitivity oracle requiring 4^k poly(k) n^{ω+o(1)} preprocessing time and 𝓁² 2^{ω k + o(k)} query time, and obtain a randomized sensitivity oracle for the task of approximately counting the number of k-paths. For k-Path detection in undirected graphs, we obtain a randomized sensitivity oracle with O(1.66^k n³) preprocessing time and O(𝓁³ 1.66^k) query time, and a better bound for undirected bipartite graphs.

In addition, we present the first fully dynamic algorithms for a variety of problems: k-Partial Cover, m-Set k-Packing, t-Dominating Set, d-Dimensional k-Matching, and Exact k-Partial Cover. For example, for k-Partial Cover we show a randomized dynamic algorithm with 2^k poly(k) polylog(n) update time, and a deterministic dynamic algorithm with 4^k poly(k) polylog(n) update time. Finally, we show how our techniques can be adapted to deal with natural variants on these problems where additional constraints are imposed on the solutions.

Cite as

Josh Alman and Dean Hirsch. Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 9:1-9:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{alman_et_al:LIPIcs.ICALP.2022.9,
  author =	{Alman, Josh and Hirsch, Dean},
  title =	{{Parameterized Sensitivity Oracles and Dynamic Algorithms Using Exterior Algebras}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{9:1--9:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.9},
  URN =		{urn:nbn:de:0030-drops-163504},
  doi =		{10.4230/LIPIcs.ICALP.2022.9},
  annote =	{Keywords: sensitivity oracles, k-path, dynamic algorithms, parameterized algorithms, set packing, partial cover, exterior algebra, extensor, algebraic algorithms}
}
Document
OV Graphs Are (Probably) Hard Instances

Authors: Josh Alman and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
A graph G on n nodes is an Orthogonal Vectors (OV) graph of dimension d if there are vectors v_1, …, v_n ∈ {0,1}^d such that nodes i and j are adjacent in G if and only if ⟨v_i,v_j⟩ = 0 over ℤ. In this paper, we study a number of basic graph algorithm problems, except that one is given as input the vectors defining an OV graph instead of a general graph. We show that for each of the following problems, an algorithm solving it faster on such OV graphs G of dimension only d = O(log n) than in the general case would refute a plausible conjecture about the time required to solve sparse MAX-k-SAT instances:
- Determining whether G contains a triangle.
- More generally, determining whether G contains a directed k-cycle for any k ≥ 3.
- Computing the square of the adjacency matrix of G over ℤ or 𝔽₂.
- Maintaining the shortest distance between two fixed nodes of G, or whether G has a perfect matching, when G is a dynamically updating OV graph.

We also prove some complementary results about OV graphs. We show that any problem which is NP-hard on constant-degree graphs is also NP-hard on OV graphs of dimension O(log n), and we give two problems which can be solved faster on OV graphs than in general: Maximum Clique, and Online Matrix-Vector Multiplication.
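For concreteness, here is a minimal Python sketch (an assumed illustration, not from the paper) of the OV-graph definition above: it builds the adjacency matrix explicitly from the vectors in O(n²d) time, which is exactly the cost the paper asks whether one can avoid when d = O(log n) and only the vectors are given.

```python
# Illustrative sketch of the OV-graph construction: given 0/1 vectors
# v_1, ..., v_n of dimension d, nodes i and j are adjacent exactly when
# <v_i, v_j> = 0 over the integers.
import numpy as np

def ov_graph(V):
    """V: n x d 0/1 integer matrix; returns the n x n 0/1 adjacency matrix."""
    V = V.astype(np.int64)
    inner = V @ V.T                   # integer inner products <v_i, v_j>
    A = (inner == 0).astype(np.int8)  # adjacent iff the inner product is exactly 0
    np.fill_diagonal(A, 0)            # keep the graph simple (no self-loops)
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    V = (rng.random((8, 5)) < 0.5).astype(np.int8)
    print(ov_graph(V))
```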

Cite as

Josh Alman and Virginia Vassilevska Williams. OV Graphs Are (Probably) Hard Instances. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 83:1-83:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{alman_et_al:LIPIcs.ITCS.2020.83,
  author =	{Alman, Josh and Vassilevska Williams, Virginia},
  title =	{{OV Graphs Are (Probably) Hard Instances}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{83:1--83:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.83},
  URN =		{urn:nbn:de:0030-drops-117686},
  doi =		{10.4230/LIPIcs.ITCS.2020.83},
  annote =	{Keywords: Orthogonal Vectors, Fine-Grained Reductions, Cycle Finding}
}
Document
Limits on the Universal Method for Matrix Multiplication

Authors: Josh Alman

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
In this work, we prove limitations on the known methods for designing matrix multiplication algorithms. Alman and Vassilevska Williams [Alman and Williams, 2018] recently defined the Universal Method, which substantially generalizes all the known approaches including Strassen’s Laser Method [V. Strassen, 1987] and Cohn and Umans' Group Theoretic Method [Cohn and Umans, 2003]. We prove concrete lower bounds on the algorithms one can design by applying the Universal Method to many different tensors. Our proofs use new tools for upper bounding the asymptotic slice rank of a wide range of tensors. Our main result is that the Universal Method applied to any Coppersmith-Winograd tensor CW_q cannot yield a bound on ω, the exponent of matrix multiplication, better than 2.16805. By comparison, it was previously only known that the weaker "Galactic Method" applied to CW_q could not achieve an exponent of 2.

We also study the Laser Method (which is, in principle, a highly special case of the Universal Method) and prove that it is "complete" for matrix multiplication algorithms: when it applies to a tensor T, it achieves ω = 2 if and only if it is possible for the Universal Method applied to T to achieve ω = 2. Hence, the Laser Method, which was originally used as an algorithmic tool, can also be seen as a lower bounding tool. For example, in their landmark paper, Coppersmith and Winograd [Coppersmith and Winograd, 1990] achieved a bound of ω ≤ 2.376 by applying the Laser Method to CW_q. By our result, the fact that they did not achieve ω = 2 implies a lower bound on the Universal Method applied to CW_q. Indeed, if it were possible for the Universal Method applied to CW_q to achieve ω = 2, then Coppersmith and Winograd’s application of the Laser Method would have achieved ω = 2.

Cite as

Josh Alman. Limits on the Universal Method for Matrix Multiplication. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{alman:LIPIcs.CCC.2019.12,
  author =	{Alman, Josh},
  title =	{{Limits on the Universal Method for Matrix Multiplication}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.12},
  URN =		{urn:nbn:de:0030-drops-108347},
  doi =		{10.4230/LIPIcs.CCC.2019.12},
  annote =	{Keywords: Matrix Multiplication, Laser Method, Slice Rank}
}
Document
An Illuminating Algorithm for the Light Bulb Problem

Authors: Josh Alman

Published in: OASIcs, Volume 69, 2nd Symposium on Simplicity in Algorithms (SOSA 2019)


Abstract
The Light Bulb Problem is one of the most basic problems in data analysis. One is given as input n vectors in {-1,1}^d, which are all independently and uniformly random, except for a planted pair of vectors with inner product at least ρ · d for some constant ρ > 0. The task is to find the planted pair. The most straightforward algorithm leads to a runtime of Ω(n²). Algorithms based on techniques like Locality-Sensitive Hashing achieve runtimes of n^{2 - O(ρ)}; as ρ gets small, these approach quadratic. Building on prior work, we give a new algorithm for this problem which runs in time O(n^{1.582} + nd), regardless of how small ρ is. This matches the best known runtime due to Karppa et al. Our algorithm combines techniques from previous work on the Light Bulb Problem with the so-called "polynomial method in algorithm design," and has a simpler analysis than previous work. Our algorithm is also easily derandomized, leading to a deterministic algorithm for the Light Bulb Problem with the same runtime of O(n^{1.582} + nd), improving previous results.
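To make the problem statement concrete, the Python sketch below (an assumed illustration, not the paper's O(n^{1.582} + nd) algorithm) generates a Light Bulb instance with a planted pair of correlation ρ and recovers it with the trivial quadratic-time brute-force search, i.e. the Ω(n²) baseline mentioned in the abstract.

```python
# Illustrative only: an instance generator for the Light Bulb Problem and the
# trivial O(n^2 d) brute-force search. The planted pair has inner product
# about rho * d in expectation; all other pairs are independent and uniform.
import numpy as np

def planted_instance(n, d, rho, rng):
    X = rng.choice([-1, 1], size=(n, d))
    i, j = 0, 1                                # plant the correlated pair at rows 0 and 1
    agree = rng.random(d) < (1 + rho) / 2      # each coordinate agrees with probability (1+rho)/2
    X[j] = np.where(agree, X[i], -X[i])        # so E[<X[i], X[j]>] = rho * d
    perm = rng.permutation(n)                  # hide the pair by shuffling the rows
    where = lambda r: int(np.where(perm == r)[0][0])
    return X[perm], (where(i), where(j))

def brute_force(X):
    G = X @ X.T                                # all pairwise inner products, O(n^2 d)
    np.fill_diagonal(G, -10**9)                # ignore self inner products
    i, j = np.unravel_index(np.argmax(G), G.shape)
    return (min(i, j), max(i, j))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X, planted = planted_instance(n=400, d=2000, rho=0.2, rng=rng)
    print(brute_force(X), tuple(sorted(planted)))  # these match with high probability
```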

Cite as

Josh Alman. An Illuminating Algorithm for the Light Bulb Problem. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 2:1-2:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{alman:OASIcs.SOSA.2019.2,
  author =	{Alman, Josh},
  title =	{{An Illuminating Algorithm for the Light Bulb Problem}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{2:1--2:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.2},
  URN =		{urn:nbn:de:0030-drops-100289},
  doi =		{10.4230/OASIcs.SOSA.2019.2},
  annote =	{Keywords: Light Bulb Problem, Polynomial Method, Finding Correlations}
}
Document
Further Limitations of the Known Approaches for Matrix Multiplication

Authors: Josh Alman and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
We consider the techniques behind the current best algorithms for matrix multiplication. Our results are threefold.
(1) We provide a unifying framework, showing that all known matrix multiplication running times since 1986 can be achieved from a single very natural tensor - the structural tensor T_q of addition modulo an integer q.
(2) We show that if one applies a generalization of the known techniques (arbitrary zeroing out of tensor powers to obtain independent matrix products in order to use the asymptotic sum inequality of Schönhage) to an arbitrary monomial degeneration of T_q, then there is an explicit lower bound, depending on q, on the bound on the matrix multiplication exponent ω that one can achieve. We also show upper bounds on the value α that one can achieve, where α is such that n × n^α × n matrix multiplication can be computed in n^{2+o(1)} time.
(3) We show that our lower bound on ω approaches 2 as q goes to infinity. This suggests a promising approach to improving the bound on ω: for variable q, find a monomial degeneration of T_q which, using the known techniques, produces an upper bound on ω as a function of q. Then, take q to infinity. It is not ruled out, and hence possible, that one can obtain ω = 2 in this way.
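For reference, the Python sketch below (an assumed illustration) builds the structural tensor T_q of addition modulo q in one common formulation, as the 0/1 tensor supported on the triples with k ≡ i + j (mod q); the monomial degenerations and zeroing-out arguments analyzed in the paper are not reproduced.

```python
# Illustrative sketch of one common formulation of the structural tensor of
# addition modulo q: the order-3 0/1 tensor over Z_q x Z_q x Z_q with
# T_q[i, j, k] = 1 exactly when k = i + j (mod q). Up to relabeling one axis
# (k -> -k mod q), this is the same as requiring i + j + k = 0 (mod q).
import numpy as np

def structural_tensor(q):
    T = np.zeros((q, q, q), dtype=np.int8)
    for i in range(q):
        for j in range(q):
            T[i, j, (i + j) % q] = 1
    return T

if __name__ == "__main__":
    T = structural_tensor(3)
    print(T.sum(axis=2))  # every (i, j) pair has exactly one completing k, so this is all-ones
```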

Cite as

Josh Alman and Virginia Vassilevska Williams. Further Limitations of the Known Approaches for Matrix Multiplication. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 25:1-25:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{alman_et_al:LIPIcs.ITCS.2018.25,
  author =	{Alman, Josh and Vassilevska Williams, Virginia},
  title =	{{Further Limitations of the Known Approaches for Matrix Multiplication}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{25:1--25:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.25},
  URN =		{urn:nbn:de:0030-drops-83609},
  doi =		{10.4230/LIPIcs.ITCS.2018.25},
  annote =	{Keywords: matrix multiplication, lower bound, monomial degeneration, structural tensor of addition mod p}
}
Document
Dynamic Parameterized Problems and Algorithms

Authors: Josh Alman, Matthias Mnich, and Virginia Vassilevska Williams

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
Fixed-parameter algorithms and kernelization are two powerful methods to solve NP-hard problems. Yet, so far those algorithms have been largely restricted to static inputs. In this paper we provide fixed-parameter algorithms and kernelizations for fundamental NP-hard problems with dynamic inputs. We consider a variety of parameterized graph and hitting set problems which are known to have f(k) n^{1+o(1)} time algorithms on inputs of size n, and we consider the question of whether there is a data structure that supports small updates (such as edge/vertex/set/element insertions and deletions) with an update time of g(k) n^{o(1)}; such an update time would be essentially optimal. Update and query times independent of n are particularly desirable. Among many other results, we show that Feedback Vertex Set and k-Path admit dynamic algorithms with f(k) log^{O(1)} n update and query times for some function f depending on the solution size k only.

We complement our positive results by several conditional and unconditional lower bounds. For example, we show that unlike their undirected counterparts, Directed Feedback Vertex Set and Directed k-Path do not admit dynamic algorithms with n^{o(1)} update and query times even for constant solution sizes k ≤ 3, assuming popular hardness hypotheses. We also show that unconditionally, in the cell probe model, Directed Feedback Vertex Set cannot be solved with update time that is purely a function of k.

Cite as

Josh Alman, Matthias Mnich, and Virginia Vassilevska Williams. Dynamic Parameterized Problems and Algorithms. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 41:1-41:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{alman_et_al:LIPIcs.ICALP.2017.41,
  author =	{Alman, Josh and Mnich, Matthias and Vassilevska Williams, Virginia},
  title =	{{Dynamic Parameterized Problems and Algorithms}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{41:1--41:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.41},
  URN =		{urn:nbn:de:0030-drops-74419},
  doi =		{10.4230/LIPIcs.ICALP.2017.41},
  annote =	{Keywords: Dynamic algorithms, fixed-parameter algorithms}
}