6 Search Results for "Mertz, Ian"


Document
Bounded Relativization

Authors: Shuichi Hirahara, Zhenjian Lu, and Hanlin Ren

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Relativization is one of the most fundamental concepts in complexity theory; it explains the difficulty of resolving major open problems. In this paper, we propose a weaker notion of relativization called bounded relativization. For a complexity class ℭ, we say that a statement is ℭ-relativizing if the statement holds relative to every oracle 𝒪 ∈ ℭ. It is easy to see that every result that relativizes also ℭ-relativizes for every complexity class ℭ. On the other hand, we observe that many non-relativizing results, such as IP = PSPACE, are in fact PSPACE-relativizing.

First, we use the idea of bounded relativization to obtain new lower bound results, including the following nearly maximum circuit lower bound: for every constant ε > 0, BPE^{MCSP}/2^{εn} ⊈ SIZE[2ⁿ/n]. We prove this by PSPACE-relativizing the pseudodeterministic pseudorandom generator of Lu, Oliveira, and Santhanam (STOC 2021).

Next, we study the limitations of PSPACE-relativizing proof techniques, and show that a seemingly minor improvement over the known results using PSPACE-relativizing techniques would imply a breakthrough separation NP ≠ L. For example:
- Impagliazzo and Wigderson (JCSS 2001) proved that if EXP ≠ BPP, then BPP admits infinitely-often subexponential-time heuristic derandomization. We show that their result is PSPACE-relativizing, and that improving it to worst-case derandomization using PSPACE-relativizing techniques implies NP ≠ L.
- Oliveira and Santhanam (STOC 2017) proved that every dense subset in P admits an infinitely-often subexponential-time pseudodeterministic construction, which we observe is PSPACE-relativizing. Improving this to almost-everywhere (pseudodeterministic) or (infinitely-often) deterministic constructions by PSPACE-relativizing techniques implies NP ≠ L.
- Santhanam (SICOMP 2009) proved that pr-MA does not have fixed polynomial-size circuits. This lower bound can be shown to be PSPACE-relativizing, and we show that improving it to an almost-everywhere lower bound using PSPACE-relativizing techniques implies NP ≠ L.
In fact, we show that if we can use PSPACE-relativizing techniques to obtain the above-mentioned improvements, then PSPACE ≠ EXPH. We obtain our barrier results by constructing suitable oracles computable in EXPH relative to which these improvements are impossible.
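For readability, the headline circuit lower bound restated in LaTeX (notation exactly as in the abstract; no new claims intended):

\[
  \forall \varepsilon > 0:\quad \mathsf{BPE}^{\mathsf{MCSP}}/2^{\varepsilon n} \not\subseteq \mathsf{SIZE}[2^{n}/n].
\]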

Cite as

Shuichi Hirahara, Zhenjian Lu, and Hanlin Ren. Bounded Relativization. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 6:1-6:45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2023.6,
  author =	{Hirahara, Shuichi and Lu, Zhenjian and Ren, Hanlin},
  title =	{{Bounded Relativization}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{6:1--6:45},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.6},
  URN =		{urn:nbn:de:0030-drops-182764},
  doi =		{10.4230/LIPIcs.CCC.2023.6},
  annote =	{Keywords: relativization, circuit lower bound, derandomization, explicit construction, pseudodeterministic algorithms, interactive proofs}
}
Document
Derandomization with Minimal Memory Footprint

Authors: Dean Doron and Roei Tell

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Existing proofs that deduce BPL = 𝐋 from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask what the minimal possible space overhead for derandomization is. We show that BPSPACE[S] ⊆ DSPACE[c ⋅ S] for c ≈ 2, assuming space-efficient cryptographic PRGs and either (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions regarding the power of catalytic computation, in a new setting of parameters that was not studied before, we are even able to get c ≈ 1. Our results are constructive: given a candidate hard function (and a candidate cryptographic PRG), we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. Central to all our constructions are hardness amplification reductions in logspace-uniform TC⁰, which were not previously known.

Cite as

Dean Doron and Roei Tell. Derandomization with Minimal Memory Footprint. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{doron_et_al:LIPIcs.CCC.2023.11,
  author =	{Doron, Dean and Tell, Roei},
  title =	{{Derandomization with Minimal Memory Footprint}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.11},
  URN =		{urn:nbn:de:0030-drops-182816},
  doi =		{10.4230/LIPIcs.CCC.2023.11},
  annote =	{Keywords: derandomization, space-bounded computation, catalytic space}
}
Document
Trading Time and Space in Catalytic Branching Programs

Authors: James Cook and Ian Mertz

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
An m-catalytic branching program (Girard, Koucký, McKenzie 2015) is a set of m distinct branching programs for f which are permitted to share internal (i.e. non-source, non-sink) nodes. While originally introduced as a non-uniform analogue of catalytic space, this also gives a natural notion of amortized non-uniform space complexity for f, namely the smallest value |G|/m over m-catalytic branching programs G for f (Potechin 2017). Potechin (2017) showed that every function f has amortized size O(n), witnessed by an m-catalytic branching program where m = 2^(2ⁿ-1). We recreate this result by defining a catalytic algorithm for evaluating polynomials that uses a large amount of space but only O(n) time. This allows us to balance it against previously known algorithms which are efficient with respect to space at the cost of time (Cook, Mertz 2020, 2021). We show that for any ε ≥ 2n^(-1), every function f has an m-catalytic branching program of size O_ε(mn), where m = 2^(2^(ε n)). We similarly recreate an improved result due to Robere and Zuiddam (2021), and show that for d ≤ n and ε ≥ 2d^(-1), the same result holds for m = 2^binom(n, ≤ ε d) as long as f is a degree-d polynomial over 𝔽₂. We also show that for certain classes of functions, m can be reduced to 2^(poly n) while still maintaining linear or quasi-linear amortized size. In the other direction, we bound the necessary length, and by extension the amortized size, of any permutation branching program for an arbitrary function between 3n and 4n-4.
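For readability, the three parameter regimes above restated in LaTeX (notation as in the abstract; interpreting the binomial shorthand as \(\binom{n}{\leq \varepsilon d} = \sum_{i \leq \varepsilon d} \binom{n}{i}\) is an assumption about the intended meaning):

\[
  m = 2^{2^{n}-1}, \qquad m = 2^{2^{\varepsilon n}}, \qquad m = 2^{\binom{n}{\leq \varepsilon d}},
\]

achieving amortized size O(n), O_ε(n), and O_ε(n) respectively, with the last regime applying to degree-d polynomials over 𝔽₂.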

Cite as

James Cook and Ian Mertz. Trading Time and Space in Catalytic Branching Programs. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 8:1-8:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{cook_et_al:LIPIcs.CCC.2022.8,
  author =	{Cook, James and Mertz, Ian},
  title =	{{Trading Time and Space in Catalytic Branching Programs}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{8:1--8:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.8},
  URN =		{urn:nbn:de:0030-drops-165708},
  doi =		{10.4230/LIPIcs.CCC.2022.8},
  annote =	{Keywords: complexity theory, branching programs, amortized, space complexity, catalytic computation}
}
Document
Lifting with Sunflowers

Authors: Shachar Lovett, Raghu Meka, Ian Mertz, Toniann Pitassi, and Jiapeng Zhang

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Query-to-communication lifting theorems translate lower bounds on query complexity to lower bounds for the corresponding communication model. In this paper, we give a simplified proof of deterministic lifting (in both the tree-like and dag-like settings). Our proof uses elementary counting together with a novel connection to the sunflower lemma. In addition to a simplified proof, our approach opens up a new avenue of attack towards proving lifting theorems with improved gadget size - one of the main challenges in the area. Focusing on one of the most widely used gadgets - the index gadget - existing lifting techniques are known to require at least a quadratic gadget size. Our new approach combined with robust sunflower lemmas allows us to reduce the gadget size to near linear. We conjecture that it can be further improved to polylogarithmic, similar to the known bounds for the corresponding robust sunflower lemmas.

Cite as

Shachar Lovett, Raghu Meka, Ian Mertz, Toniann Pitassi, and Jiapeng Zhang. Lifting with Sunflowers. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 104:1-104:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lovett_et_al:LIPIcs.ITCS.2022.104,
  author =	{Lovett, Shachar and Meka, Raghu and Mertz, Ian and Pitassi, Toniann and Zhang, Jiapeng},
  title =	{{Lifting with Sunflowers}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{104:1--104:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.104},
  URN =		{urn:nbn:de:0030-drops-157004},
  doi =		{10.4230/LIPIcs.ITCS.2022.104},
  annote =	{Keywords: Lifting theorems, communication complexity, combinatorics, sunflowers}
}
Document
Bounds on the QAC^0 Complexity of Approximating Parity

Authors: Gregory Rosenthal

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
QAC circuits are quantum circuits with one-qubit gates and Toffoli gates of arbitrary arity. QAC^0 circuits are QAC circuits of constant depth, and are quantum analogues of AC^0 circuits. We prove the following:
- For all d ≥ 7 and ε > 0 there is a depth-d QAC circuit of size exp(poly(n^{1/d}) log(n/ε)) that approximates the n-qubit parity function to within error ε on worst-case quantum inputs. Previously it was unknown whether QAC circuits of sublogarithmic depth could approximate parity regardless of size.
- We introduce a class of "mostly classical" QAC circuits, including a major component of our circuit from the above upper bound, and prove a tight lower bound on the size of low-depth, mostly classical QAC circuits that approximate this component.
- Arbitrary depth-d QAC circuits require at least Ω(n/d) multi-qubit gates to achieve a 1/2 + exp(-o(n/d)) approximation of parity. When d = Θ(log n) this nearly matches an easy O(n) size upper bound for computing parity exactly.
- QAC circuits with at most two layers of multi-qubit gates cannot achieve a 1/2 + exp(-o(n)) approximation of parity, even non-cleanly. Previously it was known only that such circuits could not cleanly compute parity exactly for sufficiently large n.
The proofs use a new normal form for quantum circuits which may be of independent interest, and are based on reductions to the problem of constructing certain generalizations of the cat state which we name "nekomata" after an analogous cat yōkai.

Cite as

Gregory Rosenthal. Bounds on the QAC^0 Complexity of Approximating Parity. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{rosenthal:LIPIcs.ITCS.2021.32,
  author =	{Rosenthal, Gregory},
  title =	{{Bounds on the QAC^0 Complexity of Approximating Parity}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.32},
  URN =		{urn:nbn:de:0030-drops-135713},
  doi =		{10.4230/LIPIcs.ITCS.2021.32},
  annote =	{Keywords: quantum circuit complexity, QAC^0, fanout, parity, nekomata}
}
Document
Track A: Algorithms, Complexity and Games
Short Proofs Are Hard to Find

Authors: Ian Mertz, Toniann Pitassi, and Yuanhao Wei

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
We obtain a streamlined proof of an important result by Alekhnovich and Razborov [Michael Alekhnovich and Alexander A. Razborov, 2008], showing that it is hard to automatize both tree-like and general Resolution. Under a different assumption from that of [Michael Alekhnovich and Alexander A. Razborov, 2008], our simplified proof gives improved bounds: we show under ETH that these proof systems are not automatizable in time n^f(n), whenever f(n) = o(log^{1/7 - ε} log n) for any ε > 0. Previously, non-automatizability was known only for f(n) = O(1). Our proof also extends fairly straightforwardly to prove similar hardness results for PCR and Res(r).

Cite as

Ian Mertz, Toniann Pitassi, and Yuanhao Wei. Short Proofs Are Hard to Find. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 84:1-84:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{mertz_et_al:LIPIcs.ICALP.2019.84,
  author =	{Mertz, Ian and Pitassi, Toniann and Wei, Yuanhao},
  title =	{{Short Proofs Are Hard to Find}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{84:1--84:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.84},
  URN =		{urn:nbn:de:0030-drops-106605},
  doi =		{10.4230/LIPIcs.ICALP.2019.84},
  annote =	{Keywords: automatizability, Resolution, SAT solvers, proof complexity}
}
  • Refine by Author
  • 3 Mertz, Ian
  • 2 Pitassi, Toniann
  • 1 Cook, James
  • 1 Doron, Dean
  • 1 Hirahara, Shuichi
  • Show More...

  • Refine by Classification
  • 3 Theory of computation → Circuit complexity
  • 2 Theory of computation → Complexity theory and logic
  • 2 Theory of computation → Proof complexity
  • 2 Theory of computation → Pseudorandomness and derandomization
  • 1 Hardware → Theorem proving and SAT solving
  • Show More...

  • Refine by Keyword
  • 2 derandomization
  • 1 Lifting theorems
  • 1 QAC^0
  • 1 Resolution
  • 1 SAT solvers
  • Show More...

  • Refine by Type
  • 6 document

  • Refine by Publication Year
  • 2 2022
  • 2 2023
  • 1 2019
  • 1 2021
