14 Search Results for "Stephens-Davidowitz, Noah"


Document
APPROX
The (Im)possibility of Simple Search-To-Decision Reductions for Approximation Problems

Authors: Alexander Golovnev, Siyao Guo, Spencer Peters, and Noah Stephens-Davidowitz

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
We study the question of when an approximate search optimization problem is harder than the associated decision problem. Specifically, we study a natural and quite general model of black-box search-to-decision reductions, which we call branch-and-bound reductions (in analogy with branch-and-bound algorithms). In this model, an algorithm attempts to minimize (or maximize) a function f: D → ℝ_{≥ 0} by making oracle queries to h_f : 𝒮 → ℝ_{≥ 0} satisfying

min_{x ∈ S} f(x) ≤ h_f(S) ≤ γ ⋅ min_{x ∈ S} f(x) (*)

for some γ ≥ 1 and any subset S in some allowed class of subsets 𝒮 of the domain D. (When the goal is to maximize f, h_f instead yields an approximation to the maximal value of f over S.) We show tight upper and lower bounds on the number of queries q needed to find even a γ'-approximate minimizer (or maximizer) for quite large γ' in a number of interesting settings, as follows.
- For arbitrary functions f : {0,1}ⁿ → ℝ_{≥ 0}, where 𝒮 contains all subsets of the domain, we show that no branch-and-bound reduction can achieve γ' ≲ γ^{n/log q}, while a simple greedy approach achieves essentially γ^{n/log q}.
- For a large class of MAX-CSPs, where 𝒮 := {S_w} contains each set of assignments to the variables induced by a partial assignment w, we show that no branch-and-bound reduction can do significantly better than essentially a random guess, even when the oracle h_f guarantees an approximation factor of γ ≈ 1+√{log(q)/n}.
- For the Traveling Salesperson Problem (TSP), where 𝒮 := {S_p} contains each set of tours extending a path p, we show that no branch-and-bound reduction can achieve γ' ≲ (γ-1) n/log q. We also prove a nearly matching upper bound in our model.
These results show an oracle model in which approximate search and decision are strongly separated. (In particular, our result for TSP can be viewed as a negative answer to a question posed by Bellare and Goldwasser (SIAM J. Comput. 1994), though only in an oracle model.) We also note two alternative interpretations of our results. First, if we view h_f as a data structure, then our results unconditionally rule out black-box search-to-decision reductions for certain data structure problems. Second, if we view h_f as an efficiently computable heuristic, then our results show that any reasonably efficient branch-and-bound algorithm requires more guarantees from its heuristic than simply Eq. (*). Behind our results is a "useless oracle lemma," which allows us to argue that under certain conditions the oracle h_f is "useless," and which might be of independent interest. See also the full version [Alexander Golovnev et al., 2022].
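
The "simple greedy approach" in the first bullet can be made concrete. Below is a minimal Python sketch (our illustration, not the paper's construction): it fixes b bits of the input per round, querying the oracle on every extension of the current prefix and keeping the most promising one. With n/b rounds of 2^b queries each, the total query count is q ≈ (n/b)⋅2^b, and the error compounds to roughly γ^{n/b}, i.e., γ^{O(n/log q)} for b ≈ log q. Representing h_f as a callable on prefixes (cylinder sets) is an assumption of the sketch.

```python
import itertools

def greedy_minimize(h, n, b):
    """Greedy branch-and-bound minimization sketch.

    h(prefix) is assumed to return a gamma-approximation to the minimum
    of f over all n-bit strings extending `prefix` (a cylinder set).
    We fix b bits per round; with n/b rounds of 2^b queries each, the
    total query count is q = (n/b) * 2^b and the approximation error
    compounds to at most roughly gamma^(n/b).
    """
    prefix = ()
    while len(prefix) < n:
        block = min(b, n - len(prefix))
        # Query the oracle on every extension of the current prefix by
        # `block` more bits and keep the most promising candidate.
        prefix = min(
            (prefix + ext for ext in itertools.product((0, 1), repeat=block)),
            key=h,
        )
    return prefix

# Toy example: f counts ones, and an exact oracle (gamma = 1) is
# h = lambda p: sum(p), since the remaining bits can always be zeroed.
# greedy_minimize(lambda p: sum(p), n=8, b=2) returns (0,) * 8.
```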

Cite as

Alexander Golovnev, Siyao Guo, Spencer Peters, and Noah Stephens-Davidowitz. The (Im)possibility of Simple Search-To-Decision Reductions for Approximation Problems. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{golovnev_et_al:LIPIcs.APPROX/RANDOM.2023.10,
  author =	{Golovnev, Alexander and Guo, Siyao and Peters, Spencer and Stephens-Davidowitz, Noah},
  title =	{{The (Im)possibility of Simple Search-To-Decision Reductions for Approximation Problems}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.10},
  URN =		{urn:nbn:de:0030-drops-188351},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.10},
  annote =	{Keywords: search-to-decision reductions, oracles, constraint satisfaction, traveling salesman, discrete optimization}
}
Document
On Seedless PRNGs and Premature Next

Authors: Sandro Coretti, Yevgeniy Dodis, Harish Karthikeyan, Noah Stephens-Davidowitz, and Stefano Tessaro

Published in: LIPIcs, Volume 230, 3rd Conference on Information-Theoretic Cryptography (ITC 2022)


Abstract
Pseudorandom number generators with input (PRNGs) are cryptographic algorithms that generate pseudorandom bits from accumulated entropic inputs (e.g., keystrokes, interrupt timings, etc.). This paper studies in particular PRNGs that are secure against premature next attacks (Kelsey et al., FSE '98), a class of attacks leveraging the fact that a PRNG may produce an output (which could be seen by an adversary!) before enough entropy has been accumulated. Practical designs adopt either unsound entropy-estimation methods to prevent such attacks (as in Linux's /dev/random) or sophisticated pool-based approaches as in Yarrow (macOS/FreeBSD) and Fortuna (Windows). The only prior theoretical study of premature next attacks (Dodis et al., Algorithmica '17) considers either a seeded setting or assumes a constant entropy rate, and thus falls short of providing and validating practical designs. Assuming the availability of a random seed is particularly problematic, first because this requires us to somehow generate a random seed without using our PRNG, but also because we must ensure that the entropy inputs to the PRNG remain independent of the seed. Indeed, all practical designs are seedless. However, prior works on seedless PRNGs (Coretti et al., CRYPTO '19; Dodis et al., ITC '21, CRYPTO '21) do not consider premature next attacks. The main goal of this paper is to investigate the feasibility of theoretically sound seedless PRNGs that are secure against premature next attacks. To this end, we make the following contributions:
1) We prove that it is impossible to achieve seedless PRNGs that are secure against premature next attacks, even in a rather weak model. Namely, the impossibility holds even when the entropic inputs to the PRNG are independent. In particular, our impossibility result holds in settings where seedless PRNGs are otherwise possible.
2) Given the above impossibility result, we investigate whether existing seedless pool-based approaches meant to overcome premature next attacks in practical designs provide meaningful guarantees in certain settings. Specifically, we show the following.
3) We introduce a natural condition on the entropic input and prove that it implies security of the round-robin entropy accumulation PRNG used by Windows 10, called Fortuna. Intuitively, our condition requires the input entropy "not to vary too wildly" within a given round-robin round.
4) We prove that the "root pool" approach (also used in Windows 10) is secure for general entropy inputs, provided that the system's state is not compromised after system startup.
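
To make the round-robin pool idea concrete, here is a toy Python sketch in the spirit of Fortuna. It is our illustration under stated assumptions, not the Windows 10 or original Fortuna design: the pool count, the SHA-256 hash, the ratcheting rule, and the "pool i is drained when 2^i divides the reseed counter" schedule are all illustrative choices.

```python
import hashlib

class FortunaSketch:
    """Toy round-robin entropy accumulator in the spirit of Fortuna.

    Inputs are routed to pools in round-robin order; pool i is drained
    into the key only on reseeds whose index is divisible by 2**i, so
    pool i accumulates roughly 2**i inputs between uses.
    """

    def __init__(self, num_pools=32):
        self.pools = [hashlib.sha256() for _ in range(num_pools)]
        self.next_pool = 0
        self.reseed_count = 0
        self.key = b"\x00" * 32

    def absorb(self, entropy_input: bytes):
        # Round-robin routing of entropic inputs to pools.
        self.pools[self.next_pool].update(entropy_input)
        self.next_pool = (self.next_pool + 1) % len(self.pools)

    def reseed(self):
        self.reseed_count += 1
        material = self.key
        for i, pool in enumerate(self.pools):
            if self.reseed_count % (2 ** i) == 0:
                material += pool.digest()
                self.pools[i] = hashlib.sha256()  # drain the pool
        self.key = hashlib.sha256(material).digest()

    def next(self, nbytes=32) -> bytes:
        # The "next" output. If this is called before enough entropy has
        # accumulated, this is exactly where a premature-next attack bites.
        out = hashlib.sha256(self.key + b"out").digest()
        self.key = hashlib.sha256(self.key + b"ratchet").digest()
        return out[:nbytes]
```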

Cite as

Sandro Coretti, Yevgeniy Dodis, Harish Karthikeyan, Noah Stephens-Davidowitz, and Stefano Tessaro. On Seedless PRNGs and Premature Next. In 3rd Conference on Information-Theoretic Cryptography (ITC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 230, pp. 9:1-9:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{coretti_et_al:LIPIcs.ITC.2022.9,
  author =	{Coretti, Sandro and Dodis, Yevgeniy and Karthikeyan, Harish and Stephens-Davidowitz, Noah and Tessaro, Stefano},
  title =	{{On Seedless PRNGs and Premature Next}},
  booktitle =	{3rd Conference on Information-Theoretic Cryptography (ITC 2022)},
  pages =	{9:1--9:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-238-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{230},
  editor =	{Dachman-Soled, Dana},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2022.9},
  URN =		{urn:nbn:de:0030-drops-164870},
  doi =		{10.4230/LIPIcs.ITC.2022.9},
  annote =	{Keywords: seedless PRNGs, pseudorandom number generators, PRNG, Fortuna, premature next}
}
Document
Improved Hardness of BDD and SVP Under Gap-(S)ETH

Authors: Huck Bennett, Chris Peikert, and Yi Tang

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We show improved fine-grained hardness of two key lattice problems in the 𝓁_p norm: Bounded Distance Decoding to within an α factor of the minimum distance (BDD_{p, α}) and the (decisional) γ-approximate Shortest Vector Problem (GapSVP_{p,γ}), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show:
1) For all p ∈ [1, ∞), there is no 2^{o(n)}-time algorithm for BDD_{p, α} for any constant α > α_kn, where α_kn = 2^{-c_kn} < 0.98491 and c_kn is the 𝓁₂ kissing-number constant, unless non-uniform Gap-ETH is false.
2) For all p ∈ [1, ∞), there is no 2^{o(n)}-time algorithm for BDD_{p, α} for any constant α > α^‡_p, where α^‡_p is explicit and satisfies α^‡_p = 1 for 1 ≤ p ≤ 2, α^‡_p < 1 for all p > 2, and α^‡_p → 1/2 as p → ∞, unless randomized Gap-ETH is false.
3) For all p ∈ [1, ∞) ⧵ 2ℤ and all C > 1, there is no 2^{n/C}-time algorithm for BDD_{p, α} for any constant α > α^†_{p, C}, where α^†_{p, C} is explicit and satisfies α^†_{p, C} → 1 as C → ∞ for any fixed p ∈ [1, ∞), unless non-uniform Gap-SETH is false.
4) For all p > p₀ ≈ 2.1397, p ∉ 2ℤ, and all C > C_p, there is no 2^{n/C}-time algorithm for GapSVP_{p, γ} for some constant γ > 1, where C_p > 1 is explicit and satisfies C_p → 1 as p → ∞, unless randomized Gap-SETH is false.
Our results for BDD_{p, α} improve and extend work by Aggarwal and Stephens-Davidowitz (STOC, 2018) and Bennett and Peikert (CCC, 2020). Specifically, the quantities α_kn and α^‡_p (respectively, α^†_{p,C}) significantly improve upon the corresponding quantity α_p^* (respectively, α_{p,C}^*) of Bennett and Peikert for small p (but arise from somewhat stronger assumptions). In particular, Item 1 improves the smallest value of α for which BDD_{p, α} is known to be exponentially hard in the Euclidean norm (p = 2) to an explicit constant α < 1 for the first time under a general-purpose complexity assumption. Items 1 and 3 crucially use the recent breakthrough result of Vlăduţ (Moscow Journal of Combinatorics and Number Theory, 2019), which showed an explicit exponential lower bound on the lattice kissing number. Finally, Item 4 answers a natural question left open by Aggarwal, Bennett, Golovnev, and Stephens-Davidowitz (SODA, 2021), which showed an analogous result for the Closest Vector Problem.

Cite as

Huck Bennett, Chris Peikert, and Yi Tang. Improved Hardness of BDD and SVP Under Gap-(S)ETH. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bennett_et_al:LIPIcs.ITCS.2022.19,
  author =	{Bennett, Huck and Peikert, Chris and Tang, Yi},
  title =	{{Improved Hardness of BDD and SVP Under Gap-(S)ETH}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.19},
  URN =		{urn:nbn:de:0030-drops-156151},
  doi =		{10.4230/LIPIcs.ITCS.2022.19},
  annote =	{Keywords: lattices, lattice-based cryptography, fine-grained complexity, Bounded Distance Decoding, Shortest Vector Problem}
}
Document
RANDOM
On the Hardness of Average-Case k-SUM

Authors: Zvika Brakerski, Noah Stephens-Davidowitz, and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
In this work, we show the first worst-case to average-case reduction for the classical k-SUM problem. A k-SUM instance is a collection of m integers, and the goal of the k-SUM problem is to find a subset of k integers that sums to 0. In the average-case version, the m elements are chosen uniformly at random from some interval [-u,u]. We consider the total setting where m is sufficiently large (with respect to u and k), so that we are guaranteed (with high probability) that solutions must exist. In particular, m = u^{Ω(1/k)} suffices for totality. Much of the appeal of k-SUM, in particular connections to problems in computational geometry, extends to the total setting. The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a running time of u^{Θ(1/log k)} when m = u^{Θ(1/log k)}. This beats the known (conditional) lower bounds for worst-case k-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound, by showing a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case k-SUM on m elements in time u^{o(1/log k)} will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
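
For concreteness, here is what an average-case total k-SUM instance looks like, together with a naive solver. This is a hedged illustration: the parameters in the comment are arbitrary, and the O(m^k)-time brute force is a baseline, not the Wagner-style u^{Θ(1/log k)} algorithm the abstract refers to.

```python
import itertools
import random

def random_ksum_instance(m, u):
    """m integers drawn uniformly from [-u, u].  In the total regime
    (m = u^{Omega(1/k)}), a solution exists with high probability."""
    return [random.randint(-u, u) for _ in range(m)]

def ksum_bruteforce(xs, k):
    """Return indices of k elements summing to 0, or None.

    Runs in time O(m^k) -- a baseline only, not the u^{Theta(1/log k)}
    Wagner/BKW-style algorithm discussed in the abstract."""
    for idxs in itertools.combinations(range(len(xs)), k):
        if sum(xs[i] for i in idxs) == 0:
            return idxs
    return None

# Example (illustrative parameters): k = 4, u = 10**4, m = 200,
# so m is comfortably u^{Omega(1/k)} and a solution should exist.
# xs = random_ksum_instance(200, 10**4); ksum_bruteforce(xs, 4)
```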

Cite as

Zvika Brakerski, Noah Stephens-Davidowitz, and Vinod Vaikuntanathan. On the Hardness of Average-Case k-SUM. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 29:1-29:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{brakerski_et_al:LIPIcs.APPROX/RANDOM.2021.29,
  author =	{Brakerski, Zvika and Stephens-Davidowitz, Noah and Vaikuntanathan, Vinod},
  title =	{{On the Hardness of Average-Case k-SUM}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{29:1--29:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.29},
  URN =		{urn:nbn:de:0030-drops-147223},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.29},
  annote =	{Keywords: k-SUM, fine-grained complexity, average-case hardness}
}
Document
Online Linear Extractors for Independent Sources

Authors: Yevgeniy Dodis, Siyao Guo, Noah Stephens-Davidowitz, and Zhiye Xie

Published in: LIPIcs, Volume 199, 2nd Conference on Information-Theoretic Cryptography (ITC 2021)


Abstract
In this work, we characterize linear online extractors. In other words, given a matrix A ∈ F₂^{n×n}, we study the convergence of the iterated process S ← AS⊕X, where X∼D is repeatedly sampled independently from some fixed (but unknown) distribution D with (min-)entropy k. Here, we think of S ∈ {0,1}ⁿ as the state of an online extractor, and X ∈ {0,1}ⁿ as its input. As our main result, we show that the state S converges to the uniform distribution for all input distributions D with entropy k > 0 if and only if the matrix A has no non-trivial invariant subspace (i.e., a non-zero subspace V ⊊ F₂ⁿ such that AV ⊆ V). In other words, a matrix A yields a linear online extractor if and only if A has no non-trivial invariant subspace. For example, the linear transformation corresponding to multiplication by a generator of the field F_{2ⁿ} yields a good linear online extractor. Furthermore, for any such matrix, convergence takes at most Õ(n²(k+1)/k²) steps. We also study the more general notion of condensing, that is, we ask when this process converges to a distribution with entropy at least l when the input distribution has entropy at least k. (Extractors correspond to the special case l = n.) We show that a matrix gives a good condenser if there are relatively few vectors w ∈ F₂ⁿ such that w, A^Tw, …, (A^T)^{n-k}w are linearly dependent. As an application, we show that the very simple cyclic rotation transformation A(x₁,…, x_n) = (x_n,x₁,…, x_{n-1}) condenses to l = n-1 bits for any k > 1 if n is a prime satisfying a certain simple number-theoretic condition. Our proofs are Fourier-analytic and rely on a novel lemma, which gives a tight bound on the product of certain Fourier coefficients of any entropic distribution.
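
The iterated process S ← AS ⊕ X is easy to state in code. The sketch below is our illustration: it uses the cyclic rotation matrix from the abstract (which condenses rather than fully extracts), and the entropic source in the comment is a toy stand-in for the unknown distribution D.

```python
import random

def rotate(state):
    """The cyclic rotation A(x1, ..., xn) = (xn, x1, ..., x_{n-1})
    from the abstract, acting on a list of bits over F_2."""
    return [state[-1]] + state[:-1]

def online_condense(samples, n):
    """Iterate S <- A S xor X over the input samples.

    Per the abstract, rotation condenses to n-1 bits (for suitable
    prime n); to extract fully one would instead take A to be
    multiplication by a generator of F_{2^n}."""
    state = [0] * n
    for x in samples:
        state = [s ^ xi for s, xi in zip(rotate(state), x)]
    return state

# Toy entropic source: each n-bit input has 3 uniformly random bits.
# n = 11  # a prime, as the abstract's condition on rotation requires
# src = lambda: [random.randint(0, 1) if i < 3 else 0 for i in range(n)]
# online_condense([src() for _ in range(1000)], n)
```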

Cite as

Yevgeniy Dodis, Siyao Guo, Noah Stephens-Davidowitz, and Zhiye Xie. Online Linear Extractors for Independent Sources. In 2nd Conference on Information-Theoretic Cryptography (ITC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 199, pp. 14:1-14:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{dodis_et_al:LIPIcs.ITC.2021.14,
  author =	{Dodis, Yevgeniy and Guo, Siyao and Stephens-Davidowitz, Noah and Xie, Zhiye},
  title =	{{Online Linear Extractors for Independent Sources}},
  booktitle =	{2nd Conference on Information-Theoretic Cryptography (ITC 2021)},
  pages =	{14:1--14:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-197-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{199},
  editor =	{Tessaro, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2021.14},
  URN =		{urn:nbn:de:0030-drops-143339},
  doi =		{10.4230/LIPIcs.ITC.2021.14},
  annote =	{Keywords: feasibility of randomness extraction, randomness condensers, Fourier analysis}
}
Document
Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding

Authors: Divesh Aggarwal, Yanlin Chen, Rajendra Kumar, and Yixin Shen

Published in: LIPIcs, Volume 187, 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)


Abstract
The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state of the art for provable classical/quantum algorithms for SVP. We present the following results.
1) A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer q with 4 ≤ q ≤ √n, our algorithm takes q^{13n+o(n)} time and requires poly(n)⋅ q^{16n/q²} memory. This tradeoff, which ranges from enumeration (q = √n) to sieving (q constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter.
2) A quantum algorithm that runs in time 2^{0.9533n+o(n)} and requires 2^{0.5n+o(n)} classical memory and poly(n) qubits. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [Divesh Aggarwal et al., 2015], which has time and space complexity 2^{n+o(n)}.
3) A classical algorithm for SVP that runs in 2^{1.741n+o(n)} time and 2^{0.5n+o(n)} space. This improves over an algorithm of [Yanlin Chen et al., 2018] that has the same space complexity.
The time complexities of our classical and quantum algorithms are expressed using a quantity related to the kissing number of a lattice. A known upper bound on this quantity is 2^{0.402n}, but in practice for most lattices it can be much smaller, even 2^{o(n)}. In that case, our classical algorithm runs in time 2^{1.292n} and our quantum algorithm runs in time 2^{0.750n}.

Cite as

Divesh Aggarwal, Yanlin Chen, Rajendra Kumar, and Yixin Shen. Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{aggarwal_et_al:LIPIcs.STACS.2021.4,
  author =	{Aggarwal, Divesh and Chen, Yanlin and Kumar, Rajendra and Shen, Yixin},
  title =	{{Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.4},
  URN =		{urn:nbn:de:0030-drops-136494},
  doi =		{10.4230/LIPIcs.STACS.2021.4},
  annote =	{Keywords: Lattices, Shortest Vector Problem, Discrete Gaussian Sampling, Time-Space Tradeoff, Quantum computation, Bounded distance decoding}
}
Document
Approximate CVP_p in Time 2^{0.802 n}

Authors: Friedrich Eisenbrand and Moritz Venzin

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
We show that a constant-factor approximation of the shortest and closest lattice vector problems w.r.t. any 𝓁_p-norm can be computed in time 2^{(0.802+ε)n}. This matches the currently fastest constant-factor approximation algorithm for the shortest vector problem w.r.t. 𝓁₂. To obtain our result, we combine the latter algorithm w.r.t. 𝓁₂ with geometric insights related to coverings.

Cite as

Friedrich Eisenbrand and Moritz Venzin. Approximate CVP_p in Time 2^{0.802 n}. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 43:1-43:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{eisenbrand_et_al:LIPIcs.ESA.2020.43,
  author =	{Eisenbrand, Friedrich and Venzin, Moritz},
  title =	{{Approximate CVP\underlinep in Time 2^\{0.802 n\}}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{43:1--43:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.43},
  URN =		{urn:nbn:de:0030-drops-129097},
  doi =		{10.4230/LIPIcs.ESA.2020.43},
  annote =	{Keywords: Shortest and closest vector problem, approximation algorithm, sieving, covering convex bodies}
}
Document
RANDOM
Extractor Lower Bounds, Revisited

Authors: Divesh Aggarwal, Siyao Guo, Maciej Obremski, João Ribeiro, and Noah Stephens-Davidowitz

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
We revisit the fundamental problem of determining seed length lower bounds for strong extractors and natural variants thereof. These variants stem from a "change in quantifiers" over the seeds of the extractor: While a strong extractor requires that the average output bias (over all seeds) is small for all input sources with sufficient min-entropy, a somewhere extractor only requires that there exists a seed whose output bias is small. More generally, we study what we call probable extractors, which on input a source with sufficient min-entropy guarantee that a large enough fraction of seeds have small enough associated output bias. Such extractors have played a key role in many constructions of pseudorandom objects, though they are often defined implicitly and have not been studied extensively. Prior known techniques fail to yield good seed length lower bounds when applied to the variants above. Our novel approach yields significantly improved lower bounds for somewhere and probable extractors. To complement this, we construct a somewhere extractor that implies our lower bound for such functions is tight in the high min-entropy regime. Surprisingly, this means that a random function is far from an optimal somewhere extractor in this regime. The techniques that we develop also yield an alternative, simpler proof of the celebrated optimal lower bound for strong extractors originally due to Radhakrishnan and Ta-Shma (SIAM J. Discrete Math., 2000).

Cite as

Divesh Aggarwal, Siyao Guo, Maciej Obremski, João Ribeiro, and Noah Stephens-Davidowitz. Extractor Lower Bounds, Revisited. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{aggarwal_et_al:LIPIcs.APPROX/RANDOM.2020.1,
  author =	{Aggarwal, Divesh and Guo, Siyao and Obremski, Maciej and Ribeiro, Jo\~{a}o and Stephens-Davidowitz, Noah},
  title =	{{Extractor Lower Bounds, Revisited}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.1},
  URN =		{urn:nbn:de:0030-drops-126041},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.1},
  annote =	{Keywords: randomness extractors, lower bounds, explicit constructions}
}
Document
On the Complexity of Modulo-q Arguments and the Chevalley - Warning Theorem

Authors: Mika Göös, Pritish Kamath, Katerina Sotiraki, and Manolis Zampetakis

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
We study the search problem class PPA_q defined as a modulo-q analog of the well-known polynomial parity argument class PPA introduced by Papadimitriou (JCSS 1994). Our first result shows that this class can be characterized in terms of PPA_p for prime p. Our main result is to establish that an explicit version of a search problem associated to the Chevalley-Warning theorem is complete for PPA_p for prime p. This problem is natural in that it does not explicitly involve circuits as part of the input. It is the first such complete problem for PPA_p when p ≥ 3. Finally, we discuss connections between the Chevalley-Warning theorem and the well-studied short integer solution problem, and survey the structural properties of PPA_q.
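
For reference, the Chevalley-Warning theorem behind the complete problem is the following standard statement (our paraphrase, not quoted from the paper):

```latex
% Chevalley--Warning: over a finite field F_q of characteristic p, if
% f_1, ..., f_r in F_q[x_1, ..., x_n] satisfy deg f_1 + ... + deg f_r < n,
% then the number of common zeros is divisible by p:
\[
  \sum_{i=1}^{r} \deg f_i \;<\; n
  \quad\Longrightarrow\quad
  p \;\Big|\; \#\bigl\{ x \in \mathbb{F}_q^{\,n} : f_1(x) = \cdots = f_r(x) = 0 \bigr\}.
\]
% In particular, if every f_i has zero constant term, then x = 0 is a
% common zero, so a second (non-zero) one must exist; this totality is
% what places the associated search problem in PPA_p.
```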

Cite as

Mika Göös, Pritish Kamath, Katerina Sotiraki, and Manolis Zampetakis. On the Complexity of Modulo-q Arguments and the Chevalley - Warning Theorem. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 19:1-19:42, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{goos_et_al:LIPIcs.CCC.2020.19,
  author =	{G\"{o}\"{o}s, Mika and Kamath, Pritish and Sotiraki, Katerina and Zampetakis, Manolis},
  title =	{{On the Complexity of Modulo-q Arguments and the Chevalley - Warning Theorem}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{19:1--19:42},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.19},
  URN =		{urn:nbn:de:0030-drops-125712},
  doi =		{10.4230/LIPIcs.CCC.2020.19},
  annote =	{Keywords: Total NP Search Problems, Modulo-q arguments, Chevalley - Warning Theorem}
}
Document
Hardness of Bounded Distance Decoding on Lattices in 𝓁_p Norms

Authors: Huck Bennett and Chris Peikert

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
Bounded Distance Decoding (BDD_{p,α}) is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the 𝓁_p norm. We prove that BDD_{p, α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) ⧵ 2ℤ and constants C > 1, ε > 0, there is no 2^{(1-ε)n/C}-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p₁ ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in 𝓁_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
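
For readers encountering the problem here, the standard formulation used in this line of work is (our paraphrase):

```latex
% BDD_{p,alpha}: given a basis of a lattice L in R^n and a target t
% promised to lie unusually close to the lattice, namely
\[
  \operatorname{dist}_p(t, \mathcal{L}) \;\le\; \alpha \cdot \lambda_1^{(p)}(\mathcal{L}),
\]
% where \lambda_1^{(p)}(L) is the l_p length of a shortest non-zero
% lattice vector, find a lattice vector attaining dist_p(t, L).  For
% alpha < 1/2 the closest lattice vector is unique, which is the
% "unique-decoding radius" the abstract refers to.
```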

Cite as

Huck Bennett and Chris Peikert. Hardness of Bounded Distance Decoding on Lattices in 𝓁_p Norms. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bennett_et_al:LIPIcs.CCC.2020.36,
  author =	{Bennett, Huck and Peikert, Chris},
  title =	{{Hardness of Bounded Distance Decoding on Lattices in 𝓁\underlinep Norms}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.36},
  URN =		{urn:nbn:de:0030-drops-125881},
  doi =		{10.4230/LIPIcs.CCC.2020.36},
  annote =	{Keywords: Lattices, Bounded Distance Decoding, NP-hardness, Fine-Grained Complexity}
}
Document
A Time-Distance Trade-Off for GDD with Preprocessing - Instantiating the DLW Heuristic

Authors: Noah Stephens-Davidowitz

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
For 0 ≤ α ≤ 1/2, we show an algorithm that does the following. Given appropriate preprocessing P(L) consisting of N_α := 2^{O(n^{1-2α} + log n)} vectors in some lattice L ⊂ ℝⁿ and a target vector t ∈ ℝⁿ, the algorithm finds y ∈ L such that ‖y-t‖ ≤ n^{1/2+α} η(L) in time poly(n)⋅N_α, where η(L) is the smoothing parameter of the lattice. The algorithm itself is very simple and was originally studied by Doulgerakis, Laarhoven, and de Weger (to appear in PQCrypto, 2019), who proved its correctness under certain reasonable heuristic assumptions on the preprocessing P(L) and target t. Our primary contribution is a choice of preprocessing that allows us to prove correctness without any heuristic assumptions. Our main motivation for studying this is the recent breakthrough algorithm for IdealSVP due to Hanrot, Pellet-Mary, and Stehlé (to appear in Eurocrypt, 2019), which uses the DLW algorithm as a key subprocedure. In particular, our result implies that the HPS IdealSVP algorithm can be made to work with fewer heuristic assumptions. Our only technical tool is the discrete Gaussian distribution over L, and in particular, a lemma showing that the one-dimensional projections of this distribution behave very similarly to the continuous Gaussian. This lemma might be of independent interest.
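
The DLW algorithm is built around an iterative-slicer-style procedure that is simple enough to sketch. The numpy code below is our illustration: it greedily reduces the residual e = t - y by preprocessed lattice vectors while that shortens it. The choice of preprocessed vectors here is a stand-in; the paper's contribution is showing that taking P(L) to be discrete Gaussian samples makes the guarantee unconditional.

```python
import numpy as np

def slicer_decode(preprocessed, t, max_rounds=10**4):
    """Iterative-slicer-style decoding sketch.

    `preprocessed` is an (N, n) array of lattice vectors (the role the
    samples P(L) play in the paper).  We greedily subtract whichever
    vector most shortens the residual e, and stop when none helps.
    Returns y in L with ||y - t|| = ||e||, since we only ever subtract
    lattice vectors from the initial residual t.
    """
    e = np.array(t, dtype=float)
    P = np.asarray(preprocessed, dtype=float)
    for _ in range(max_rounds):
        # Norm of e - v for every preprocessed vector v, in one shot.
        norms = np.linalg.norm(e[None, :] - P, axis=1)
        j = int(np.argmin(norms))
        if norms[j] >= np.linalg.norm(e):
            break  # no preprocessed vector improves the residual
        e = e - P[j]
    return np.array(t) - e

# Under the paper's choice of P(L), with |P| = 2^{O(n^{1-2*alpha} + log n)},
# the output y satisfies ||y - t|| <= n^{1/2 + alpha} * eta(L).
```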

Cite as

Noah Stephens-Davidowitz. A Time-Distance Trade-Off for GDD with Preprocessing - Instantiating the DLW Heuristic. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 11:1-11:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{stephensdavidowitz:LIPIcs.CCC.2019.11,
  author =	{Stephens-Davidowitz, Noah},
  title =	{{A Time-Distance Trade-Off for GDD with Preprocessing - Instantiating the DLW Heuristic}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{11:1--11:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.11},
  URN =		{urn:nbn:de:0030-drops-108331},
  doi =		{10.4230/LIPIcs.CCC.2019.11},
  annote =	{Keywords: Lattices, guaranteed distance decoding, GDD, GDDP}
}
Document
Just Take the Average! An Embarrassingly Simple 2^n-Time Algorithm for SVP (and CVP)

Authors: Divesh Aggarwal and Noah Stephens-Davidowitz

Published in: OASIcs, Volume 61, 1st Symposium on Simplicity in Algorithms (SOSA 2018)


Abstract
We show a 2^{n+o(n)}-time (and space) algorithm for the Shortest Vector Problem on lattices (SVP) that works by repeatedly running an embarrassingly simple "pair and average" sieving-like procedure on a list of lattice vectors. This matches the running time (and space) of the current fastest known algorithm, due to Aggarwal, Dadush, Regev, and Stephens-Davidowitz (ADRS, in STOC, 2015), with a far simpler algorithm. Our algorithm is in fact a modification of the ADRS algorithm, with a certain careful rejection sampling step removed. The correctness of our algorithm follows from a more general "meta-theorem," showing that such rejection sampling steps are unnecessary for a certain class of algorithms and use cases. In particular, this also applies to the related 2^{n+o(n)}-time algorithm for the Closest Vector Problem (CVP), due to Aggarwal, Dadush, and Stephens-Davidowitz (ADS, in FOCS, 2015), yielding a similar embarrassingly simple algorithm for γ-approximate CVP for any γ = 1+2^{-o(n/log n)}. (We can also remove the rejection sampling procedure from the 2^{n+o(n)}-time ADS algorithm for exact CVP, but the resulting algorithm is still quite complicated.)
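
The "pair and average" step itself fits in a few lines. The following Python sketch is our illustration, omitting the distributional bookkeeping that the paper's meta-theorem addresses; it uses the fact that if u ≡ v (mod 2L), then (u+v)/2 is again a lattice vector.

```python
import numpy as np
from collections import defaultdict

def pair_and_average(vectors, basis):
    """One round of the "pair and average" step (illustrative sketch).

    If u = v (mod 2L), then (u + v)/2 lies in L, and for Gaussian-like
    vectors it is shorter on average.  We bucket vectors by their coset
    of 2L -- i.e., by their integer coordinate vector mod 2 in the given
    basis (columns of `basis`) -- and average disjoint pairs within
    each bucket.
    """
    inv = np.linalg.inv(basis)
    buckets = defaultdict(list)
    for v in vectors:
        coords = np.rint(inv @ v).astype(int)  # integer coordinates of v
        buckets[tuple(coords % 2)].append(v)
    out = []
    for group in buckets.values():
        for u, v in zip(group[0::2], group[1::2]):
            out.append((u + v) / 2)  # in L, because u = v (mod 2L)
    return out
```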

Cite as

Divesh Aggarwal and Noah Stephens-Davidowitz. Just Take the Average! An Embarrassingly Simple 2^n-Time Algorithm for SVP (and CVP). In 1st Symposium on Simplicity in Algorithms (SOSA 2018). Open Access Series in Informatics (OASIcs), Volume 61, pp. 12:1-12:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{aggarwal_et_al:OASIcs.SOSA.2018.12,
  author =	{Aggarwal, Divesh and Stephens-Davidowitz, Noah},
  title =	{{Just Take the Average! An Embarrassingly Simple 2^n-Time Algorithm for SVP (and CVP)}},
  booktitle =	{1st Symposium on Simplicity in Algorithms (SOSA 2018)},
  pages =	{12:1--12:19},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-064-4},
  ISSN =	{2190-6807},
  year =	{2018},
  volume =	{61},
  editor =	{Seidel, Raimund},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2018.12},
  URN =		{urn:nbn:de:0030-drops-83062},
  doi =		{10.4230/OASIcs.SOSA.2018.12},
  annote =	{Keywords: Lattices, SVP, CVP}
}
Document
Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One

Authors: Noah Stephens-Davidowitz

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
We show the first dimension-preserving search-to-decision reductions for approximate SVP and CVP. In particular, for any γ ≤ 1 + O(log n/n), we obtain an efficient dimension-preserving reduction from γ^{O(n/log n)}-SVP to γ-GapSVP and an efficient dimension-preserving reduction from γ^{O(n)}-CVP to γ-GapCVP. These results generalize the known equivalences of the search and decision versions of these problems in the exact case when γ = 1. For SVP, we actually obtain something slightly stronger than a search-to-decision reduction: we reduce γ^{O(n/log n)}-SVP to γ-unique SVP, a potentially easier problem than γ-GapSVP.

Cite as

Noah Stephens-Davidowitz. Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 19:1-19:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{stephensdavidowitz:LIPIcs.APPROX-RANDOM.2016.19,
  author =	{Stephens-Davidowitz, Noah},
  title =	{{Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{19:1--19:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.19},
  URN =		{urn:nbn:de:0030-drops-66421},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.19},
  annote =	{Keywords: Lattices, SVP, CVP}
}
Document
On the Lattice Distortion Problem

Authors: Huck Bennett, Daniel Dadush, and Noah Stephens-Davidowitz

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
We introduce and study the Lattice Distortion Problem (LDP). LDP asks how "similar" two lattices are; that is, what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one. As our first contribution, we show that the distortion between any two lattices is approximated up to an n^{O(log n)} factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a 2^{O(n log log n/log n)} factor of optimal in polynomial time and within an n^{O(log n)} factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.
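
As a small worked complement: reading the distortion of a given linear bijection T as its condition number ‖T‖⋅‖T⁻¹‖ in the spectral norm (our interpretation of the problem statement), it equals the ratio of T's extreme singular values. The sketch below only evaluates a candidate map; the hard content of LDP is minimizing over all bijections mapping one lattice onto the other.

```python
import numpy as np

def distortion(T):
    """Condition number ||T|| * ||T^{-1}|| of a linear bijection T in
    the spectral norm, i.e., the ratio of its largest to smallest
    singular value.  Evaluating a candidate map is easy; LDP asks to
    minimize this over all bijections between the two lattices.
    """
    s = np.linalg.svd(np.asarray(T, dtype=float), compute_uv=False)
    return s[0] / s[-1]

# distortion(np.eye(3)) == 1.0, i.e., isomorphic lattices via the identity.
```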

Cite as

Huck Bennett, Daniel Dadush, and Noah Stephens-Davidowitz. On the Lattice Distortion Problem. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bennett_et_al:LIPIcs.ESA.2016.9,
  author =	{Bennett, Huck and Dadush, Daniel and Stephens-Davidowitz, Noah},
  title =	{{On the Lattice Distortion Problem}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.9},
  URN =		{urn:nbn:de:0030-drops-63519},
  doi =		{10.4230/LIPIcs.ESA.2016.9},
  annote =	{Keywords: lattices, lattice distortion, lattice isomorphism, geometry of numbers, basis reduction}
}
