11 Search Results for "Drucker, Andrew"


Document
Guidable Local Hamiltonian Problems with Implications to Heuristic Ansatz State Preparation and the Quantum PCP Conjecture

Authors: Jordi Weggemans, Marten Folkertsma, and Chris Cade

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
We study "Merlinized" versions of the recently defined Guided Local Hamiltonian problem, which we call "Guidable Local Hamiltonian" problems. Unlike their guided counterparts, these problems do not have a guiding state provided as a part of the input, but merely come with the promise that one exists. We consider in particular two classes of guiding states: those that can be prepared efficiently by a quantum circuit; and those belonging to a class of quantum states we call classically evaluatable, for which it is possible to efficiently compute expectation values of local observables classically. We show that guidable local Hamiltonian problems for both classes of guiding states are QCMA-complete in the inverse-polynomial precision setting, but lie within NP (or NqP) in the constant precision regime when the guiding state is classically evaluatable. Our completeness results show that, from a complexity-theoretic perspective, classical Ansätze selected by classical heuristics are just as powerful as quantum Ansätze prepared by quantum heuristics, as long as one has access to quantum phase estimation. In relation to the quantum PCP conjecture, we (i) define a complexity class capturing quantum-classical probabilistically checkable proof systems and show that it is contained in BQP^NP[1] for constant proof queries; (ii) give a no-go result on "dequantizing" the known quantum reduction which maps a QPCP-verification circuit to a local Hamiltonian with constant promise gap; (iii) give several no-go results for the existence of quantum gap amplification procedures that preserve certain ground state properties; and (iv) propose two conjectures that can be viewed as stronger versions of the NLTS theorem. Finally, we show that many of our results can be directly modified to obtain similar results for the class MA.

Cite as

Jordi Weggemans, Marten Folkertsma, and Chris Cade. Guidable Local Hamiltonian Problems with Implications to Heuristic Ansatz State Preparation and the Quantum PCP Conjecture. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 10:1-10:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{weggemans_et_al:LIPIcs.TQC.2024.10,
  author =	{Weggemans, Jordi and Folkertsma, Marten and Cade, Chris},
  title =	{{Guidable Local Hamiltonian Problems with Implications to Heuristic Ansatz State Preparation and the Quantum PCP Conjecture}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{10:1--10:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.10},
  URN =		{urn:nbn:de:0030-drops-206804},
  doi =		{10.4230/LIPIcs.TQC.2024.10},
  annote =	{Keywords: Quantum complexity theory, local Hamiltonian problem, quantum state ansatzes, QCMA, quantum PCP conjecture}
}
Document
Quantum Polynomial Hierarchies: Karp-Lipton, Error Reduction, and Lower Bounds

Authors: Avantika Agarwal, Sevag Gharibian, Venkata Koppula, and Dorian Rudolph

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
The Polynomial-Time Hierarchy (PH) is a staple of classical complexity theory, with applications spanning randomized computation, circuit lower bounds, and "quantum advantage" analyses for near-term quantum computers. Quantumly, however, despite the fact that at least four definitions of a quantum PH exist, it has been challenging to prove analogues of even basic facts from PH for these classes. This work studies three quantum-verifier-based generalizations of PH, two of which are from [Gharibian, Santha, Sikora, Sundaram, Yirka, 2022] and use classical strings (QCPH) and quantum mixed states (QPH) as proofs, and one of which is new to this work, utilizing quantum pure states (QPHpure) as proofs. We first resolve several open problems from [GSSSY22], including a collapse theorem and a Karp-Lipton theorem for QCPH. Then, for our new class QPHpure, we show one-sided error reduction, as well as the first bounds relating these quantum variants of PH, namely QCPH ⊆ QPHpure ⊆ EXP^PP.

Cite as

Avantika Agarwal, Sevag Gharibian, Venkata Koppula, and Dorian Rudolph. Quantum Polynomial Hierarchies: Karp-Lipton, Error Reduction, and Lower Bounds. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.MFCS.2024.7,
  author =	{Agarwal, Avantika and Gharibian, Sevag and Koppula, Venkata and Rudolph, Dorian},
  title =	{{Quantum Polynomial Hierarchies: Karp-Lipton, Error Reduction, and Lower Bounds}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.7},
  URN =		{urn:nbn:de:0030-drops-205632},
  doi =		{10.4230/LIPIcs.MFCS.2024.7},
  annote =	{Keywords: Quantum complexity, polynomial hierarchy}
}
Document
Communication Complexity vs Randomness Complexity in Interactive Proofs

Authors: Benny Applebaum, Kaartik Bhushan, and Manoj Prabhakaran

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
In this work, we study the interplay between the communication from the verifier in a general private-coin interactive protocol and the number of random bits it uses in the protocol. Under worst-case derandomization assumptions, we show that it is possible to transform any I-round interactive protocol that uses ρ random bits into another protocol for the same problem in which the verifier's communication is bounded by O(I⋅ρ). Importantly, this is done with only a minor, logarithmic, increase in the communication from the prover to the verifier, and while preserving the randomness complexity. Along the way, we introduce a new compression game between a computationally bounded compressor and a computationally unbounded decompressor, and a new notion of conditioned efficient distributions, both of which may be of independent interest. Our solutions are based on a combination of perfect hashing and pseudorandom generators.

Cite as

Benny Applebaum, Kaartik Bhushan, and Manoj Prabhakaran. Communication Complexity vs Randomness Complexity in Interactive Proofs. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 2:1-2:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{applebaum_et_al:LIPIcs.ITC.2024.2,
  author =	{Applebaum, Benny and Bhushan, Kaartik and Prabhakaran, Manoj},
  title =	{{Communication Complexity vs Randomness Complexity in Interactive Proofs}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{2:1--2:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.2},
  URN =		{urn:nbn:de:0030-drops-205103},
  doi =		{10.4230/LIPIcs.ITC.2024.2},
  annote =	{Keywords: Interactive Proof Systems, Communication Complexity, Hash Functions, Pseudo-Random Generators, Compression}
}
Document
Breaking RSA Generically Is Equivalent to Factoring, with Preprocessing

Authors: Dana Dachman-Soled, Julian Loss, and Adam O'Neill

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
We investigate the relationship between the classical RSA and factoring problems when preprocessing is considered. In such a model, adversaries can use an unbounded amount of precomputation to produce an "advice" string to then use during the online phase, when a problem instance becomes known. Previous work (e.g., [Bernstein, Lange ASIACRYPT '13]) has shown that preprocessing attacks significantly improve the runtime of the best-known factoring algorithms. Due to these improvements, we ask whether the relationship between factoring and RSA fundamentally changes when preprocessing is allowed. Specifically, we investigate whether there is a superpolynomial gap between the runtime of the best attack on RSA with preprocessing and on factoring with preprocessing. Our main result rules this out for algorithms that perform generic computation on the RSA instance x^e mod N yet arbitrary computation on the modulus N; the model is a careful adaptation of the well-known generic ring model of Aggarwal and Maurer (Eurocrypt 2009) to the preprocessing setting. In particular, in this setting we show the existence of a factoring algorithm with polynomially related parameters, for any setting of RSA parameters. Our main technical contribution is a set of new information-theoretic techniques that allow us to handle or eliminate cases in which the Aggarwal and Maurer result does not yield a factoring algorithm in the standard model with parameters that are polynomially related to those of the RSA algorithm. These techniques include two novel compression arguments and a variant of the Fiat-Naor/Hellman tables construction that is tailored to the factoring setting.
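For orientation, the easy direction of the equivalence is classical and unconditional: factoring N yields φ(N) and hence e-th roots mod N. A minimal sketch with toy parameters of our own choosing (the paper's contribution is the converse direction, in the generic ring model with preprocessing):

# Standard easy direction: knowing the factorization of N breaks RSA.
# (The paper addresses the converse, generically and with preprocessing.)
p, q, e = 1009, 1013, 17     # toy primes; real moduli are ~2048 bits
N = p * q
phi = (p - 1) * (q - 1)      # phi(N) is immediate from the factors
d = pow(e, -1, phi)          # decryption exponent (Python 3.8+ modular inverse)
x = 123456
y = pow(x, e, N)             # the RSA instance x^e mod N
assert pow(y, d, N) == x     # factoring => extracting e-th roots mod N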

Cite as

Dana Dachman-Soled, Julian Loss, and Adam O'Neill. Breaking RSA Generically Is Equivalent to Factoring, with Preprocessing. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 8:1-8:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dachmansoled_et_al:LIPIcs.ITC.2024.8,
  author =	{Dachman-Soled, Dana and Loss, Julian and O'Neill, Adam},
  title =	{{Breaking RSA Generically Is Equivalent to Factoring, with Preprocessing}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{8:1--8:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.8},
  URN =		{urn:nbn:de:0030-drops-205163},
  doi =		{10.4230/LIPIcs.ITC.2024.8},
  annote =	{Keywords: RSA, factoring, generic ring model, preprocessing}
}
Document
Time-Space Tradeoffs for Finding Multi-Collisions in Merkle-Damgård Hash Functions

Authors: Akshima

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
We analyze the multi-collision resistance of the Merkle-Damgård hash function construction in the auxiliary-input random oracle model. Finding a multi-collision, or m-way collision for some parameter m, in a hash function means finding m distinct inputs that have the same output under the hash function. This is a natural generalization of the collision-finding problem, which asks for 2-way collisions. Hardness of finding collisions, or collision resistance, is an important security assumption in cryptography. While time-space trade-offs for the collision resistance of hash functions have received considerable attention, this is the first work to study time-space trade-offs for the multi-collision resistance of hash functions based on the popular and widely used Merkle-Damgård (MD) construction. In this work, we study how the advantage of finding m-way collisions depends on the parameter m. We believe that understanding whether multi-collision resistance is a strictly easier property than collision resistance is a fundamental problem, and our work facilitates this for adversaries with auxiliary information against MD-based hash functions. Furthermore, we study how the advantage varies with the bound on the length of the m colliding inputs. Prior works [Akshima et al., 2020; Ashrujit Ghoshal and Ilan Komargodski, 2022; Akshima et al., 2022] have shown that finding "longer" collisions with auxiliary input in MD-based hash functions becomes easier; more precisely, the advantage of finding collisions depends linearly on the bound on the length of the colliding inputs. In this work, we show a similar dependence for m-way collision finding, for any m ≥ 2. We give a simple attack for finding 1-block m-way collisions which achieves an advantage of Ω̃(S/(mN)). For 2 ≤ B < log m, we give the best known attack for finding B-block m-way collisions, achieving an advantage of Ω̃(ST/(m^{1/(B-1)}N)) when m^{1/(B-1)}-way collisions exist on every salt. For B > log m, our attack achieves an advantage of Ω̃(STB/N), which is optimal when SB ≥ T and ST² ≤ N. The main result of this work is showing that our attacks are optimal for B = 1 and B = 2. This implies that, in the auxiliary-input random oracle model, the advantage decreases by a multiplicative factor of m for finding 1-block and 2-block m-way collisions (compared to collision finding) in Merkle-Damgård based hash functions.

Cite as

Akshima. Time-Space Tradeoffs for Finding Multi-Collisions in Merkle-Damgård Hash Functions. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{akshima:LIPIcs.ITC.2024.9,
  author =	{Akshima},
  title =	{{Time-Space Tradeoffs for Finding Multi-Collisions in Merkle-Damg\r{a}rd Hash Functions}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{9:1--9:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.9},
  URN =		{urn:nbn:de:0030-drops-205171},
  doi =		{10.4230/LIPIcs.ITC.2024.9},
  annote =	{Keywords: Collision, hash functions, multi-collisions, Merkle-Damg\r{a}rd, pre-computation, auxiliary input}
}
Document
A Technique for Hardness Amplification Against AC⁰

Authors: William M. Hoza

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We study hardness amplification in the context of two well-known "moderate" average-case hardness results for AC⁰ circuits. First, we investigate the extent to which AC⁰ circuits of depth d can approximate AC⁰ circuits of some larger depth d + k. The case k = 1 is resolved by Håstad, Rossman, Servedio, and Tan’s celebrated average-case depth hierarchy theorem (JACM 2017). Our contribution is a significantly stronger correlation bound when k ≥ 3. Specifically, we show that there exists a linear-size AC⁰_{d + k} circuit h : {0, 1}ⁿ → {0, 1} such that for every AC⁰_d circuit g, either g has size exp(n^{Ω(1/d)}), or else g agrees with h on at most a (1/2 + ε)-fraction of inputs where ε = exp(-(1/d) ⋅ Ω(log n)^{k-1}). For comparison, Håstad, Rossman, Servedio, and Tan’s result has ε = n^{-Θ(1/d)}. Second, we consider the majority function. It is well known that the majority function is moderately hard for AC⁰ circuits (and stronger classes). Our contribution is a stronger correlation bound for the XOR of t copies of the n-bit majority function, denoted MAJ_n^{⊕ t}. We show that if g is an AC⁰_d circuit of size S, then g agrees with MAJ_n^{⊕ t} on at most a (1/2 + ε)-fraction of inputs, where ε = (O(log S)^{d - 1} / √n)^t. To prove these results, we develop a hardness amplification technique that is tailored to a specific type of circuit lower bound proof. In particular, one way to show that a function h is moderately hard for AC⁰ circuits is to (a) design some distribution over random restrictions or random projections, (b) show that AC⁰ circuits simplify to shallow decision trees under these restrictions/projections, and finally (c) show that after applying the restriction/projection, h is moderately hard for shallow decision trees with respect to an appropriate distribution. We show that (roughly speaking) if h can be proven to be moderately hard by a proof with that structure, then XORing multiple copies of h amplifies its hardness. Our analysis involves a new kind of XOR lemma for decision trees, which might be of independent interest.
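To see how the parameters in the majority bound fit together (an illustrative reading of our own, not a claim from the paper): the t = 1 case of the stated bound is the single-copy hardness of MAJ_n, and the general bound is exactly that quantity raised to the t-th power,

\[
  \varepsilon(t) \;=\; \Bigl(\underbrace{\tfrac{O(\log S)^{d-1}}{\sqrt{n}}}_{\varepsilon(1):\ \text{one copy of } \mathrm{MAJ}_n}\Bigr)^{t} \;=\; \varepsilon(1)^{t},
\]

i.e., the correlation of the t-fold XOR decays as the t-th power of the single-copy correlation, the behaviour an ideal XOR lemma would predict.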

Cite as

William M. Hoza. A Technique for Hardness Amplification Against AC⁰. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hoza:LIPIcs.CCC.2024.1,
  author =	{Hoza, William M.},
  title =	{{A Technique for Hardness Amplification Against AC⁰}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.1},
  URN =		{urn:nbn:de:0030-drops-203977},
  doi =		{10.4230/LIPIcs.CCC.2024.1},
  annote =	{Keywords: Bounded-depth circuits, average-case lower bounds, hardness amplification, XOR lemmas}
}
Document
A Strong Direct Sum Theorem for Distributional Query Complexity

Authors: Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Consider the expected query complexity of computing the k-fold direct product f^{⊗ k} of a function f to error ε with respect to a distribution μ^k. One strategy is to sequentially compute each of the k copies to error ε/k with respect to μ and apply the union bound. We prove a strong direct sum theorem showing that this naive strategy is essentially optimal. In particular, computing a direct product necessitates a blowup in both query complexity and error. Strong direct sum theorems contrast with results that only show a blowup in query complexity or error but not both. There has been a long line of such results for distributional query complexity, dating back to (Impagliazzo, Raz, Wigderson 1994) and (Nisan, Rudich, Saks 1994), but a strong direct sum theorem that holds for all functions in the standard query model had been elusive. A key idea in our work is the first use of the Hardcore Theorem (Impagliazzo 1995) in the context of query complexity. We prove a new resilience lemma that accompanies it, showing that the hardcore of f^{⊗k} is likely to remain dense under arbitrary partitions of the input space.
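The naive strategy can be spelled out in one line (our own notation for this note: write D̄^μ_ε(f) for the expected query complexity of computing f to error ε with respect to μ). Running each of the k copies to error ε/k and applying the union bound gives

\[
  \Pr_{x \sim \mu^k}\bigl[\exists\, i:\ \text{copy } i \text{ errs}\bigr]
  \;\le\; \sum_{i=1}^{k} \frac{\varepsilon}{k} \;=\; \varepsilon,
  \qquad\text{hence}\qquad
  \overline{D}^{\,\mu^k}_{\varepsilon}\bigl(f^{\otimes k}\bigr)
  \;\le\; k \cdot \overline{D}^{\,\mu}_{\varepsilon/k}(f).
\]

The strong direct sum theorem says this naive bound is essentially tight: both the factor-k blowup in queries and the shrinkage of the error parameter to ε/k are necessary.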

Cite as

Guy Blanc, Caleb Koch, Carmen Strassle, and Li-Yang Tan. A Strong Direct Sum Theorem for Distributional Query Complexity. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 16:1-16:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{blanc_et_al:LIPIcs.CCC.2024.16,
  author =	{Blanc, Guy and Koch, Caleb and Strassle, Carmen and Tan, Li-Yang},
  title =	{{A Strong Direct Sum Theorem for Distributional Query Complexity}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{16:1--16:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.16},
  URN =		{urn:nbn:de:0030-drops-204123},
  doi =		{10.4230/LIPIcs.CCC.2024.16},
  annote =	{Keywords: Query complexity, direct product theorem, hardcore theorem}
}
Document
Dimension Independent Disentanglers from Unentanglement and Applications

Authors: Fernando Granha Jeronimo and Pei Wu

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Quantum entanglement, a distinctive form of quantum correlation, has become a key enabling ingredient in diverse applications in quantum computation, complexity, cryptography, etc. However, the presence of unwanted adversarial entanglement also poses challenges and even prevents the correct behaviour of many protocols and applications. In this paper, we explore methods to "break" the quantum correlations. Specifically, we construct a dimension-independent k-partite disentangler-like channel from bipartite unentangled input. In particular, we show: for every d, 𝓁 ≥ k ∈ ℕ^+, there is an efficient channel Λ : ℂ^{d^𝓁} ⊗ ℂ^{d^𝓁} → ℂ^{d^k} such that for every bipartite separable density operator ρ₁ ⊗ ρ₂, the output Λ(ρ₁⊗ρ₂) is close to a k-partite separable state. Concretely, for some distribution μ on states from ℂ^d, ║ Λ(ρ₁⊗ρ₂) - ∫ |ψ⟩⟨ψ|^{⊗k} dμ(ψ) ║₁ ≤ Õ((k³/𝓁)^{1/4}). Moreover, Λ(|ψ⟩⟨ψ|^{⊗𝓁} ⊗ |ψ⟩⟨ψ|^{⊗𝓁}) = |ψ⟩⟨ψ|^{⊗k}. Without the bipartite unentanglement assumption, the above bound is conjectured to be impossible and would imply QMA(2) = QMA. Leveraging the multipartite unentanglement ensured by our disentanglers, we achieve the following: (i) a new proof that QMA(2) admits arbitrary gap amplification; (ii) a variant of the swap test and the product test with improved soundness, addressing a major limitation of their original versions. More importantly, we demonstrate that unentangled quantum proofs of almost general real amplitudes capture NEXP, thereby greatly relaxing the non-negative-amplitudes assumption in the recent work QMA^+(2) = NEXP [Jeronimo and Wu, STOC 2023]. Specifically, our findings show that to capture NEXP, it suffices to have unentangled proofs of the form |ψ⟩ = √a |ψ_{+}⟩ + √(1-a) |ψ_{-}⟩, where |ψ_{+}⟩ has only non-negative amplitudes, |ψ_{-}⟩ has only negative amplitudes, and |a-(1-a)| ≥ 1/poly(n) with a ∈ [0,1]. Additionally, we present a protocol achieving an almost largest possible completeness-soundness gap before obtaining QMA^ℝ(k) = NEXP; namely, a 1/poly(n) additive improvement to the gap would result in this equality.

Cite as

Fernando Granha Jeronimo and Pei Wu. Dimension Independent Disentanglers from Unentanglement and Applications. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 26:1-26:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{jeronimo_et_al:LIPIcs.CCC.2024.26,
  author =	{Jeronimo, Fernando Granha and Wu, Pei},
  title =	{{Dimension Independent Disentanglers from Unentanglement and Applications}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{26:1--26:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.26},
  URN =		{urn:nbn:de:0030-drops-204228},
  doi =		{10.4230/LIPIcs.CCC.2024.26},
  annote =	{Keywords: QMA(2), disentangler, quantum proofs}
}
Document
Track A: Algorithms, Complexity and Games
No Polynomial Kernels for Knapsack

Authors: Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
This paper focuses on kernelization algorithms for the fundamental Knapsack problem. A kernelization algorithm (or kernel) is a polynomial-time reduction from a problem onto itself, where the output size is bounded by a function of some problem-specific parameter. Such algorithms provide a theoretical model for data reduction and preprocessing and are central in the area of parameterized complexity. In this way, a kernel for Knapsack for some parameter k reduces any instance of Knapsack to an equivalent instance of size at most f(k) in polynomial time, for some computable function f. When f(k) = k^{O(1)}, the reduction is called a polynomial kernel. Our study focuses on two natural parameters for Knapsack: the number w_# of different item weights, and the number p_# of different item profits. Our main technical contribution is a proof that Knapsack does not admit a polynomial kernel for either of these two parameters under standard complexity-theoretic assumptions. Our proof involves an elaborate application of the standard kernelization lower bound framework and develops along the way novel ideas that should be useful for other problems as well. We complement our lower bounds by showing that Knapsack admits a polynomial kernel for the combined parameter w_# ⋅ p_#.
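To make the two parameters concrete, here is a minimal sketch of our own; it illustrates only the parameters w_# and p_#, not the kernelization itself. An instance with many items may have very few distinct weights and profits.

# Illustrative only: the two Knapsack parameters studied above.
def knapsack_parameters(items):
    """items: list of (weight, profit) pairs."""
    weights = {w for w, _ in items}   # distinct weights  -> w_#
    profits = {p for _, p in items}   # distinct profits  -> p_#
    return len(weights), len(profits), len(weights) * len(profits)

items = [(3, 5), (3, 7), (8, 5), (8, 7), (8, 7)]
print(knapsack_parameters(items))  # (2, 2, 4): few classes despite many items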

Cite as

Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay. No Polynomial Kernels for Knapsack. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 83:1-83:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{heeger_et_al:LIPIcs.ICALP.2024.83,
  author =	{Heeger, Klaus and Hermelin, Danny and Mnich, Matthias and Shabtay, Dvir},
  title =	{{No Polynomial Kernels for Knapsack}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{83:1--83:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.83},
  URN =		{urn:nbn:de:0030-drops-202261},
  doi =		{10.4230/LIPIcs.ICALP.2024.83},
  annote =	{Keywords: Knapsack, polynomial kernels, compositions, number of different weights, number of different profits}
}
Document
Track A: Algorithms, Complexity and Games
The Power of Many Samples in Query Complexity

Authors: Andrew Bassilakis, Andrew Drucker, Mika Göös, Lunjia Hu, Weiyun Ma, and Li-Yang Tan

Published in: LIPIcs, Volume 168, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)


Abstract
The randomized query complexity 𝖱(f) of a boolean function f: {0,1}ⁿ → {0,1} is famously characterized (via Yao’s minimax) by the least number of queries needed to distinguish a distribution 𝒟₀ over 0-inputs from a distribution 𝒟₁ over 1-inputs, maximized over all pairs (𝒟₀,𝒟₁). We ask: Does this task become easier if we allow query access to infinitely many samples from either 𝒟₀ or 𝒟₁? We show the answer is no: There exists a hard pair (𝒟₀,𝒟₁) such that distinguishing 𝒟₀^∞ from 𝒟₁^∞ requires Θ(𝖱(f)) many queries. As an application, we show that for any composed function f∘g we have 𝖱(f∘g) ≥ Ω(fbs(f)𝖱(g)) where fbs denotes fractional block sensitivity.
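The sampling model can be rendered as a toy simulation (a sketch with a hypothetical, easy pair of distributions of our own choosing; the paper constructs a hard pair for which extra samples provably do not help): the algorithm may query individual bits of fresh i.i.d. samples from the unknown 𝒟_b and must guess b.

import random

# Toy model: query bits of i.i.d. n-bit samples from an unknown D_b, guess b.
# D_0 and D_1 below are hypothetical biased-bit distributions for illustration.

def sample(b, n=8):
    bias = 0.4 if b == 0 else 0.6
    return [int(random.random() < bias) for _ in range(n)]

def distinguish(b, num_samples=101):
    # One query per fresh sample (bit 0 of each), then majority vote.
    votes = sum(sample(b)[0] for _ in range(num_samples))
    return int(votes > num_samples / 2)

trials = 1000
correct = sum(distinguish(b) == b for b in [0, 1] for _ in range(trials // 2))
print(f"accuracy ~ {correct / trials:.2f}")  # well above 1/2 for this easy pair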

Cite as

Andrew Bassilakis, Andrew Drucker, Mika Göös, Lunjia Hu, Weiyun Ma, and Li-Yang Tan. The Power of Many Samples in Query Complexity. In 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 168, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bassilakis_et_al:LIPIcs.ICALP.2020.9,
  author =	{Bassilakis, Andrew and Drucker, Andrew and G\"{o}\"{o}s, Mika and Hu, Lunjia and Ma, Weiyun and Tan, Li-Yang},
  title =	{{The Power of Many Samples in Query Complexity}},
  booktitle =	{47th International Colloquium on Automata, Languages, and Programming (ICALP 2020)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-138-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{168},
  editor =	{Czumaj, Artur and Dawar, Anuj and Merelli, Emanuela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2020.9},
  URN =		{urn:nbn:de:0030-drops-124163},
  doi =		{10.4230/LIPIcs.ICALP.2020.9},
  annote =	{Keywords: Query complexity, Composition theorems}
}
Document
Exponential Time Paradigms Through the Polynomial Time Lens

Authors: Andrew Drucker, Jesper Nederlof, and Rahul Santhanam

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
We propose a general approach to modelling algorithmic paradigms for the exact solution of NP-hard problems. Our approach is based on polynomial-time reductions to succinct versions of problems solvable in polynomial time. We use this viewpoint to explore and compare the power of paradigms such as branching and dynamic programming, and to shed light on the true complexity of various problems. As one instantiation, we model branching using the notion of witness compression, i.e., reducibility to the circuit satisfiability problem parameterized by the number of variables of the circuit. We show this is equivalent to the previously studied notion of "OPP algorithms", and provide a technique for proving conditional lower bounds for witness compression via a constructive variant of AND-composition, a notion previously studied in the theory of preprocessing. In the context of parameterized complexity, we use this to show that problems such as Pathwidth, Treewidth, and Independent Set parameterized by pathwidth do not have witness compression, assuming NP ⊈ coNP/poly. Since these problems admit fast fixed-parameter tractable algorithms via dynamic programming, this shows that dynamic programming can be stronger than branching, under a standard complexity hypothesis. Our approach has applications outside parameterized complexity as well: for example, we show that if a polynomial-time algorithm outputs a maximum independent set of a given planar graph on n vertices with probability exp(-n^{1-ε}) for some ε > 0, then NP ⊆ coNP/poly. This negative result dims the prospects for one very natural approach to sub-exponential time algorithms for problems on planar graphs. As two other, more exploratory illustrations of our approach, we model algorithms based on inclusion-exclusion or group algebras via the notion of "parity compression", and we model a subclass of dynamic programming algorithms with the notion of "disjunctive dynamic programming". These models give us a way to naturally classify various parameterized problems with FPT algorithms. In the case of the dynamic programming model, we show that Independent Set parameterized by pathwidth is complete for this model.

Cite as

Andrew Drucker, Jesper Nederlof, and Rahul Santhanam. Exponential Time Paradigms Through the Polynomial Time Lens. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{drucker_et_al:LIPIcs.ESA.2016.36,
  author =	{Drucker, Andrew and Nederlof, Jesper and Santhanam, Rahul},
  title =	{{Exponential Time Paradigms Through the Polynomial Time Lens}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.36},
  URN =		{urn:nbn:de:0030-drops-63871},
  doi =		{10.4230/LIPIcs.ESA.2016.36},
  annote =	{Keywords: exponential time paradigms, branching, dynamic programming, lower bounds}
}
  • Refine by Author
  • 2 Drucker, Andrew
  • 2 Tan, Li-Yang
  • 1 Agarwal, Avantika
  • 1 Akshima
  • 1 Applebaum, Benny
  • Show More...

  • Refine by Classification
  • 2 Security and privacy → Information-theoretic techniques
  • 2 Theory of computation → Interactive proof systems
  • 2 Theory of computation → Quantum complexity theory
  • 1 Mathematics of computing → Combinatorial optimization
  • 1 Mathematics of computing → Probability and statistics
  • Show More...

  • Refine by Keyword
  • 2 Query complexity
  • 1 Bounded-depth circuits
  • 1 Collision
  • 1 Communication Complexity
  • 1 Composition theorems
  • Show More...

  • Refine by Type
  • 11 document

  • Refine by Publication Year
  • 9 2024
  • 1 2016
  • 1 2020
