Search Results

Documents authored by Bennett, Huck


Document
RANDOM
Matrix Multiplication Verification Using Coding Theory

Authors: Huck Bennett, Karthik Gajulapalli, Alexander Golovnev, and Evelyn Warton

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study the Matrix Multiplication Verification Problem (MMV) where the goal is, given three n × n matrices A, B, and C as input, to decide whether AB = C. A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in Õ(n²) time, and a longstanding challenge is to (partially) derandomize it while still running in faster than matrix multiplication time (i.e., in o(n^ω) time). To that end, we give two algorithms for MMV in the case where AB - C is sparse. Specifically, when AB - C has at most O(n^δ) non-zero entries for a constant 0 ≤ δ < 2, we give (1) a deterministic O(n^(ω-ε))-time algorithm for constant ε = ε(δ) > 0, and (2) a randomized Õ(n²)-time algorithm using δ/2 ⋅ log₂ n + O(1) random bits. The former algorithm is faster than the deterministic algorithm of Künnemann (ESA, 2018) when δ ≥ 1.056, and the latter algorithm uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses log₂ n + O(1) random bits (itself fewer than Freivalds’s algorithm uses). Our algorithms are simple and use techniques from coding theory. Let H be a parity-check matrix of a Maximum Distance Separable (MDS) code, and let G = (I | G') be a generator matrix of a (possibly different) MDS code in systematic form. Our deterministic algorithm uses fast rectangular matrix multiplication to check whether HAB = HC and H(AB)^T = HC^T, and our randomized algorithm samples a uniformly random row g' from G' and checks whether g'AB = g'C and g'(AB)^T = g'C^T. We additionally study the complexity of MMV. We first show that all algorithms in a natural class of deterministic linear algebraic algorithms for MMV (including ours) require Ω(n^ω) time. We also show a barrier to proving a super-quadratic running time lower bound for matrix multiplication (and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally, we study relationships between natural variants and special cases of MMV (with respect to deterministic Õ(n²)-time reductions).
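As an illustration, here is a minimal Python sketch (assuming numpy) of both the classic Freivalds check and the row-sampling variant described above, over a toy prime field. The Vandermonde row below is an illustrative stand-in for a row of the MDS generator part G'; sampling a full field element, as done here, uses more random bits than the paper's carefully sized code.

import numpy as np

P = 1_000_003  # toy prime; all matrices are assumed reduced mod P

def freivalds(A, B, C, rng):
    # Classic Freivalds check: one random 0/1 vector x, then test
    # A(Bx) = Cx in O(n^2) time; always accepts when AB = C.
    x = rng.integers(0, 2, size=A.shape[0])
    return np.array_equal(A @ (B @ x % P) % P, C @ x % P)

def coded_check(A, B, C, rng):
    # Row-sampling variant sketched in the abstract: take a random
    # Reed-Solomon-style (Vandermonde) row g and test both g(AB) = gC
    # and g(AB)^T = gC^T, each in O(n^2) time.
    n = A.shape[0]
    a = int(rng.integers(1, P))
    g = np.array([pow(a, i, P) for i in range(n)])
    rows_ok = np.array_equal((g @ A % P) @ B % P, g @ C % P)
    cols_ok = np.array_equal((g @ B.T % P) @ A.T % P, g @ C.T % P)  # g(AB)^T = (gB^T)A^T
    return rows_ok and cols_ok

rng = np.random.default_rng(0)
n = 200
A, B = rng.integers(0, P, (n, n)), rng.integers(0, P, (n, n))
C = A @ B % P
assert freivalds(A, B, C, rng) and coded_check(A, B, C, rng)
C[3, 7] = (C[3, 7] + 1) % P          # plant a single sparse error
assert not coded_check(A, B, C, rng) # a single planted error is always caught by the row check here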

Cite as

Huck Bennett, Karthik Gajulapalli, Alexander Golovnev, and Evelyn Warton. Matrix Multiplication Verification Using Coding Theory. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 42:1-42:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.APPROX/RANDOM.2024.42,
  author =	{Bennett, Huck and Gajulapalli, Karthik and Golovnev, Alexander and Warton, Evelyn},
  title =	{{Matrix Multiplication Verification Using Coding Theory}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{42:1--42:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.42},
  URN =		{urn:nbn:de:0030-drops-210352},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.42},
  annote =	{Keywords: Matrix Multiplication Verification, Derandomization, Sparse Matrices, Error-Correcting Codes, Hardness Barriers, Reductions}
}
Document
Topological k-Metrics

Authors: Willow Barkan, Huck Bennett, and Amir Nayyeri

Published in: LIPIcs, Volume 293, 40th International Symposium on Computational Geometry (SoCG 2024)


Abstract
Metric spaces (X, d) are ubiquitous objects in mathematics and computer science that allow for capturing pairwise distance relationships d(x, y) between points x, y ∈ X. Because of this, it is natural to ask what useful generalizations there are of metric spaces for capturing "k-wise distance relationships" d(x_1, …, x_k) among points x_1, …, x_k ∈ X for k > 2. To that end, Gähler (Math. Nachr., 1963) (and perhaps others even earlier) defined k-metric spaces, which generalize metric spaces, and most notably generalize the triangle inequality d(x₁, x₂) ≤ d(x₁, y) + d(y, x₂) to the "simplex inequality" d(x_1, …, x_k) ≤ ∑_{i=1}^k d(x_1, …, x_{i-1}, y, x_{i+1}, …, x_k). (The definition holds for any fixed k ≥ 2, and a 2-metric space is just a (standard) metric space.) In this work, we introduce strong k-metric spaces, k-metric spaces that satisfy a topological condition stronger than the simplex inequality, which makes them "behave nicely." We also introduce coboundary k-metrics, which generalize 𝓁_p metrics (and in fact all finite metric spaces induced by norms) and minimum bounding chain k-metrics, which generalize shortest path metrics (and capture all strong k-metrics). Using these definitions, we prove analogs of a number of fundamental results about embedding finite metric spaces including Fréchet embedding (isometric embedding into 𝓁_∞) and isometric embedding of all tree metrics into 𝓁₁. We also study relationships between families of (strong) k-metrics, and show that natural quantities, like simplex volume, are strong k-metrics.
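As a quick numerical sanity check (not a proof), the following Python sketch, assuming numpy, verifies on random points that triangle area (the k = 3 simplex volume) satisfies the simplex inequality above.

import numpy as np

def area(a, b, c):
    # Area of the triangle abc in R^3 (the k = 3 simplex volume).
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

rng = np.random.default_rng(1)
for _ in range(1000):
    x1, x2, x3, y = rng.standard_normal((4, 3))
    lhs = area(x1, x2, x3)
    rhs = area(y, x2, x3) + area(x1, y, x3) + area(x1, x2, y)
    assert lhs <= rhs + 1e-9  # simplex inequality: replace each x_i by y and sum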

Cite as

Willow Barkan, Huck Bennett, and Amir Nayyeri. Topological k-Metrics. In 40th International Symposium on Computational Geometry (SoCG 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 293, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{barkan_et_al:LIPIcs.SoCG.2024.13,
  author =	{Barkan, Willow and Bennett, Huck and Nayyeri, Amir},
  title =	{{Topological k-Metrics}},
  booktitle =	{40th International Symposium on Computational Geometry (SoCG 2024)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-316-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{293},
  editor =	{Mulzer, Wolfgang and Phillips, Jeff M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2024.13},
  URN =		{urn:nbn:de:0030-drops-199585},
  doi =		{10.4230/LIPIcs.SoCG.2024.13},
  annote =	{Keywords: k-metrics, metric embeddings, computational topology, simplicial complexes}
}
Document
RANDOM
Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes

Authors: Huck Bennett and Chris Peikert

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
We give a simple proof that the (approximate, decisional) Shortest Vector Problem is NP-hard under a randomized reduction. Specifically, we show that for any p ≥ 1 and any constant γ < 2^{1/p}, the γ-approximate problem in the 𝓁_p norm (γ-GapSVP_p) is not in RP unless NP ⊆ RP. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of γ-GapSVP_p using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known NP-hardness results for GapSVP_p with p < ∞, our reduction uses randomness. Indeed, it is a notorious open problem to prove NP-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Transactions on Information Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding n-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an O(√log n) factor of Minkowski’s bound. This asymptotically matches the best known distance for decoding near Minkowski’s bound, due to Mook and Peikert (IEEE Transactions on Information Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
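For concreteness, here is a minimal Python sketch (assuming numpy) of "Construction A" applied to a toy Reed-Solomon code; the parameters are illustrative and not those used in the hardness proof. The lattice is L(C) = {x ∈ ℤ^n : x mod q ∈ C}, so lattice membership reduces to a codeword test over GF(q).

import numpy as np

def rank_mod(M, q):
    # Gaussian elimination over GF(q), q prime; returns the rank of M.
    M = M.copy() % q
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[r] = M[r] * pow(int(M[r, c]), q - 2, q) % q
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % q
        r += 1
    return r

def in_lattice(x, G, q):
    # x lies in L(C) iff (x mod q) is in the row space of G over GF(q).
    v = (x % q).reshape(1, -1)
    return rank_mod(np.vstack([G, v]), q) == rank_mod(G, q)

q, k = 13, 4
a = np.arange(1, q)                  # n = q - 1 distinct evaluation points
G = np.array([[pow(int(ai), i, q) for ai in a] for i in range(k)])  # RS generator
rng = np.random.default_rng(2)
m = rng.integers(0, q, k)
x = m @ G % q + q * rng.integers(-3, 4, q - 1)  # codeword lift + q * integer shift
assert in_lattice(x, G, q)
# minimum distance n - k + 1 = 9 > 1, so a unit shift leaves the lattice:
assert not in_lattice(x + np.eye(q - 1, dtype=int)[0], G, q)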

Cite as

Huck Bennett and Chris Peikert. Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.APPROX/RANDOM.2023.37,
  author =	{Bennett, Huck and Peikert, Chris},
  title =	{{Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.37},
  URN =		{urn:nbn:de:0030-drops-188622},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.37},
  annote =	{Keywords: Lattices, Shortest Vector Problem, Reed-Solomon codes, NP-hardness, derandomization}
}
Document
Improved Hardness of BDD and SVP Under Gap-(S)ETH

Authors: Huck Bennett, Chris Peikert, and Yi Tang

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We show improved fine-grained hardness of two key lattice problems in the 𝓁_p norm: Bounded Distance Decoding to within an α factor of the minimum distance (BDD_{p, α}) and the (decisional) γ-approximate Shortest Vector Problem (GapSVP_{p,γ}), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show: 1) For all p ∈ [1, ∞), there is no 2^{o(n)}-time algorithm for BDD_{p, α} for any constant α > α_kn, where α_kn = 2^{-c_kn} < 0.98491 and c_kn is the 𝓁₂ kissing-number constant, unless non-uniform Gap-ETH is false. 2) For all p ∈ [1, ∞), there is no 2^{o(n)}-time algorithm for BDD_{p, α} for any constant α > α^‡_p, where α^‡_p is explicit and satisfies α^‡_p = 1 for 1 ≤ p ≤ 2, α^‡_p < 1 for all p > 2, and α^‡_p → 1/2 as p → ∞, unless randomized Gap-ETH is false. 3) For all p ∈ [1, ∞) ⧵ 2ℤ and all C > 1, there is no 2^{n/C}-time algorithm for BDD_{p, α} for any constant α > α^†_{p, C}, where α^†_{p, C} is explicit and satisfies α^†_{p, C} → 1 as C → ∞ for any fixed p ∈ [1, ∞), unless non-uniform Gap-SETH is false. 4) For all p > p₀ ≈ 2.1397, p ∉ 2ℤ, and all C > C_p, there is no 2^{n/C}-time algorithm for GapSVP_{p, γ} for some constant γ > 1, where C_p > 1 is explicit and satisfies C_p → 1 as p → ∞, unless randomized Gap-SETH is false. Our results for BDD_{p, α} improve and extend work by Aggarwal and Stephens-Davidowitz (STOC, 2018) and Bennett and Peikert (CCC, 2020). Specifically, the quantities α_kn and α^‡_p (respectively, α^†_{p,C}) significantly improve upon the corresponding quantity α_p^* (respectively, α_{p,C}^*) of Bennett and Peikert for small p (but arise from somewhat stronger assumptions). In particular, Item 1 improves the smallest value of α for which BDD_{p, α} is known to be exponentially hard in the Euclidean norm (p = 2) to an explicit constant α < 1 for the first time under a general-purpose complexity assumption. Items 1 and 3 crucially use the recent breakthrough result of Vlăduţ (Moscow Journal of Combinatorics and Number Theory, 2019), which showed an explicit exponential lower bound on the lattice kissing number. Finally, Item 4 answers a natural question left open by Aggarwal, Bennett, Golovnev, and Stephens-Davidowitz (SODA, 2021), which showed an analogous result for the Closest Vector Problem.

Cite as

Huck Bennett, Chris Peikert, and Yi Tang. Improved Hardness of BDD and SVP Under Gap-(S)ETH. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.ITCS.2022.19,
  author =	{Bennett, Huck and Peikert, Chris and Tang, Yi},
  title =	{{Improved Hardness of BDD and SVP Under Gap-(S)ETH}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.19},
  URN =		{urn:nbn:de:0030-drops-156151},
  doi =		{10.4230/LIPIcs.ITCS.2022.19},
  annote =	{Keywords: lattices, lattice-based cryptography, fine-grained complexity, Bounded Distance Decoding, Shortest Vector Problem}
}
Document
Hardness of Bounded Distance Decoding on Lattices in 𝓁_p Norms

Authors: Huck Bennett and Chris Peikert

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the 𝓁_p norm. We prove that BDD_{p, α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) ⧵ 2ℤ and constants C > 1, ε > 0, there is no 2^((1-ε)n/C)-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p₁ ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in 𝓁_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
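To make the promise concrete, here is a toy brute-force Python sketch (assuming numpy); it bears no resemblance to the high-dimensional regime the hardness results address, where exhaustive search is hopeless.

import itertools
import numpy as np

def bdd_brute_force(B, t, p=2, radius=3):
    # Exhaustive search over small coefficient vectors; exponential in the
    # dimension, so illustrative only.
    n = B.shape[1]
    best, best_d = None, float("inf")
    for c in itertools.product(range(-radius, radius + 1), repeat=n):
        v = B @ np.array(c)
        d = np.linalg.norm(t - v, ord=p)
        if d < best_d:
            best, best_d = v, d
    return best, best_d

# Toy BDD_{2, 0.4} instance: lattice Z^2 (minimum distance lambda_1 = 1),
# target promised to lie within 0.4 * lambda_1 of the lattice.
B, lam1, alpha = np.eye(2), 1.0, 0.4
t = np.array([0.3, -0.2])            # dist(t, Z^2) ≈ 0.36 <= alpha * lam1
v, d = bdd_brute_force(B, t)
assert d <= alpha * lam1 and np.allclose(v, [0, 0])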

Cite as

Huck Bennett and Chris Peikert. Hardness of Bounded Distance Decoding on Lattices in 𝓁_p Norms. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.CCC.2020.36,
  author =	{Bennett, Huck and Peikert, Chris},
  title =	{{Hardness of Bounded Distance Decoding on Lattices in 𝓁\_p Norms}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.36},
  URN =		{urn:nbn:de:0030-drops-125881},
  doi =		{10.4230/LIPIcs.CCC.2020.36},
  annote =	{Keywords: Lattices, Bounded Distance Decoding, NP-hardness, Fine-Grained Complexity}
}
Document
On Percolation and NP-Hardness

Authors: Huck Bennett, Daniel Reichman, and Igor Shinkar

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
The edge-percolation and vertex-percolation random graph models start with an arbitrary graph G, and randomly delete edges or vertices of G with some fixed probability. We study the computational hardness of problems whose inputs are obtained by applying percolation to worst-case instances. Specifically, we show that a number of classical NP-hard graph problems remain essentially as hard on percolated instances as they are in the worst case (assuming NP ⊈ BPP). We also prove hardness results for other NP-hard problems such as Constraint Satisfaction Problems, where random deletions are applied to clauses or variables. We focus on proving the hardness of the Maximum Independent Set problem and the Graph Coloring problem on percolated instances. To show this we establish the robustness of the corresponding parameters α(⋅) and χ(⋅) to percolation, which may be of independent interest. Given a graph G, let G' be the graph obtained by randomly deleting edges of G. We show that if α(G) is small, then α(G') remains small with probability at least 0.99. Similarly, we show that if χ(G) is large, then χ(G') remains large with probability at least 0.99.
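A small Python simulation (standard library only, brute-force α(⋅), toy sizes) illustrating the edge-percolation model and the robustness phenomenon: starting from a graph with small independence number, α of the percolated graph stays small in most trials.

import itertools
import random

def percolate_edges(edges, p, rng):
    # Delete each edge independently with fixed probability p.
    return [e for e in edges if rng.random() >= p]

def alpha(n, edges):
    # Brute-force independence number; exponential in n, toy sizes only.
    adj = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for S in itertools.combinations(range(n), size):
            if all(frozenset(pr) not in adj for pr in itertools.combinations(S, 2)):
                return size
    return 0

n, trials = 10, 200
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # K_10: alpha = 1
rng = random.Random(0)
small = sum(alpha(n, percolate_edges(edges, 0.3, rng)) <= 3 for _ in range(trials))
print(f"alpha(G') <= 3 in {small}/{trials} trials")  # stays small in most trials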

Cite as

Huck Bennett, Daniel Reichman, and Igor Shinkar. On Percolation and NP-Hardness. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 80:1-80:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.ICALP.2016.80,
  author =	{Bennett, Huck and Reichman, Daniel and Shinkar, Igor},
  title =	{{On Percolation and NP-Hardness}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{80:1--80:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.80},
  URN =		{urn:nbn:de:0030-drops-62056},
  doi =		{10.4230/LIPIcs.ICALP.2016.80},
  annote =	{Keywords: percolation, NP-hardness, random subgraphs, chromatic number}
}
Document
On the Lattice Distortion Problem

Authors: Huck Bennett, Daniel Dadush, and Noah Stephens-Davidowitz

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
We introduce and study the Lattice Distortion Problem (LDP). LDP asks how "similar" two lattices are, i.e., what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one. As our first contribution, we show that the distortion between any two lattices is approximated up to an n^{O(log n)} factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a 2^{O(n log log n / log n)} factor of optimal in polynomial time and within an n^{O(log n)} factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.
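For concreteness, a short Python sketch (assuming numpy) of the central quantity: any linear bijection from L(B₁) onto L(B₂) has the form T = B₂UB₁^{-1} for a unimodular U, and in the Euclidean norm the distortion of T is its condition number σ_max(T)/σ_min(T); LDP asks to minimize this over U.

import numpy as np

def distortion(T):
    # Max expansion times max contraction of lengths: sigma_max / sigma_min.
    s = np.linalg.svd(T, compute_uv=False)
    return s[0] / s[-1]

def lattice_map(B1, B2, U):
    # Linear bijection sending L(B1) onto L(B2); U must be unimodular.
    return B2 @ U @ np.linalg.inv(B1)

B1 = np.eye(2)
B2 = np.array([[1.0, 0.5],
               [0.0, 1.0]])
print(distortion(lattice_map(B1, B2, np.eye(2))))  # an upper bound on the LDP value for (B1, B2)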

Cite as

Huck Bennett, Daniel Dadush, and Noah Stephens-Davidowitz. On the Lattice Distortion Problem. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.ESA.2016.9,
  author =	{Bennett, Huck and Dadush, Daniel and Stephens-Davidowitz, Noah},
  title =	{{On the Lattice Distortion Problem}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.9},
  URN =		{urn:nbn:de:0030-drops-63519},
  doi =		{10.4230/LIPIcs.ESA.2016.9},
  annote =	{Keywords: lattices, lattice distortion, lattice isomorphism, geometry of numbers, basis reduction}
}