12 Search Results for "Shaltiel, Ronen"


Document
Instance-Wise Hardness Versus Randomness Tradeoffs for Arthur-Merlin Protocols

Authors: Dieter van Melkebeek and Nicollas Mocelin Sdroievski

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudo-random generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. In this paper we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness to derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log² a) can only compute f correctly on finitely many inputs, then AM is in NP. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs for nondeterministic computations. As a byproduct of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). Byproducts in the average-case setting include the first uniform hardness vs. randomness tradeoffs for AM, as well as an unconditional mild derandomization result for AM.

Cite as

Dieter van Melkebeek and Nicollas Mocelin Sdroievski. Instance-Wise Hardness Versus Randomness Tradeoffs for Arthur-Merlin Protocols. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 17:1-17:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{vanmelkebeek_et_al:LIPIcs.CCC.2023.17,
  author =	{van Melkebeek, Dieter and Mocelin Sdroievski, Nicollas},
  title =	{{Instance-Wise Hardness Versus Randomness Tradeoffs for Arthur-Merlin Protocols}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{17:1--17:36},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.17},
  URN =		{urn:nbn:de:0030-drops-182870},
  doi =		{10.4230/LIPIcs.CCC.2023.17},
  annote =	{Keywords: Hardness versus randomness tradeoff, Arthur-Merlin protocol, targeted hitting set generator}
}
Document
An Algorithmic Approach to Uniform Lower Bounds

Authors: Rahul Santhanam

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
We propose a new family of circuit-based sampling tasks, such that non-trivial algorithmic solutions to certain tasks from this family imply frontier uniform lower bounds such as "NP is not in uniform ACC⁰" and "NP does not have uniform polynomial-size depth-two threshold circuits". Indeed, the most general versions of our sampling tasks have implications for central open problems such as NP vs P and PSPACE vs P. We argue the soundness of our approach by showing that the non-trivial algorithmic solutions we require do follow from standard cryptographic assumptions. In addition, we give evidence that a version of our approach for uniform circuits is necessary in order to separate NP from P or PSPACE from P. We give an algorithmic characterization for the PSPACE vs P question: PSPACE ≠ P iff either E has sub-exponential time non-uniform algorithms infinitely often or there are non-trivial space-efficient solutions to our sampling tasks for uniform Boolean circuits. We show how to use our framework to capture uniform versions of known non-uniform lower bounds, as well as classical uniform lower bounds such as the space hierarchy theorem and Allender’s uniform lower bound for the Permanent. We also apply our framework to prove new lower bounds: NP does not have polynomial-size uniform AC⁰ circuits with a bottom layer of MOD 6 gates, nor does it have polynomial-size uniform AC⁰ circuits with a bottom layer of threshold gates. Our proofs exploit recently defined probabilistic time-bounded variants of Kolmogorov complexity [Zhenjian Lu et al., 2022; Halley Goldberg et al., 2022; Halley Goldberg et al., 2022].

Cite as

Rahul Santhanam. An Algorithmic Approach to Uniform Lower Bounds. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 35:1-35:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{santhanam:LIPIcs.CCC.2023.35,
  author =	{Santhanam, Rahul},
  title =	{{An Algorithmic Approach to Uniform Lower Bounds}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{35:1--35:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.35},
  URN =		{urn:nbn:de:0030-drops-183053},
  doi =		{10.4230/LIPIcs.CCC.2023.35},
  annote =	{Keywords: Probabilistic Kolmogorov complexity, sampling algorithms, uniform lower bounds}
}
Document
Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371)

Authors: Markus Bläser, Valentine Kabanets, Ronen Shaltiel, and Jacobo Torán

Published in: Dagstuhl Reports, Volume 12, Issue 9 (2023)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 22371 "Algebraic and Analytic Methods in Computational Complexity". Computational complexity is concerned with the resources required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Besides algebraic methods, analytic methods have been used in theoretical computer science for quite some time. These methods can also be used to solve algebraic problems, as shown by many recent examples in the areas of derandomization, coding theory, and circuit lower bounds. These new directions were the focus of the Dagstuhl Seminar and reflect the developments in the field since the previous Dagstuhl Seminar 18391. The seminar brought together researchers who use a diverse array of algebraic and analytic methods in a variety of settings. Researchers in these areas rely on ever more sophisticated and specialized mathematics, and this seminar played a role in educating a diverse community about the latest techniques, spurring further progress.

Cite as

Markus Bläser, Valentine Kabanets, Ronen Shaltiel, and Jacobo Torán. Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371). In Dagstuhl Reports, Volume 12, Issue 9, pp. 41-59, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{blaser_et_al:DagRep.12.9.41,
  author =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Shaltiel, Ronen and Tor\'{a}n, Jacobo},
  title =	{{Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371)}},
  pages =	{41--59},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{12},
  number =	{9},
  editor =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Shaltiel, Ronen and Tor\'{a}n, Jacobo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.12.9.41},
  URN =		{urn:nbn:de:0030-drops-178092},
  doi =		{10.4230/DagRep.12.9.41},
  annote =	{Keywords: (de-)randomization, algebra, circuits, coding, computational complexity}
}
Document
Incompressiblity and Next-Block Pseudoentropy

Authors: Iftach Haitner, Noam Mazor, and Jad Silbak

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
A distribution is k-incompressible (Yao [FOCS '82]) if no efficient compression scheme compresses it to less than k bits. While this is a natural measure, its relation to other computational analogs of entropy, such as pseudoentropy (Håstad, Impagliazzo, Levin, and Luby [SICOMP '99]), and to other cryptographic hardness assumptions, was unclear. We advance towards a better understanding of this notion, showing that a k-incompressible distribution has (k-2) bits of next-block pseudoentropy, a refinement of pseudoentropy introduced by Haitner, Reingold, and Vadhan [SICOMP '13]. We deduce that the existence of a samplable distribution X that is (H(X)+2)-incompressible implies the existence of one-way functions.
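
As a rough illustration of the incompressibility measure above, the following Python sketch compares a uniform sample, which should not compress below its length, with a highly structured one. This is only an illustration: Yao's definition quantifies over all efficient compression schemes, whereas the sketch uses zlib as a single stand-in compressor.

# Toy illustration of k-incompressibility; zlib is a stand-in for an
# arbitrary efficient compression scheme, not the notion from the paper.
import os
import zlib

def compressed_length_bits(x: bytes) -> int:
    # Output length in bits of one concrete compressor.
    return 8 * len(zlib.compress(x, 9))

# A uniform 128-byte sample: with overwhelming probability it cannot be
# compressed below ~1024 bits, matching the intuition that the uniform
# distribution on n bits is (roughly) n-incompressible.
x = os.urandom(128)
print("uniform sample:", 8 * len(x), "bits ->", compressed_length_bits(x), "bits")

# A highly structured sample compresses well, so the distribution that
# always outputs it is very far from incompressible.
y = b"a" * 128
print("structured sample:", 8 * len(y), "bits ->", compressed_length_bits(y), "bits")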

Cite as

Iftach Haitner, Noam Mazor, and Jad Silbak. Incompressiblity and Next-Block Pseudoentropy. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 66:1-66:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{haitner_et_al:LIPIcs.ITCS.2023.66,
  author =	{Haitner, Iftach and Mazor, Noam and Silbak, Jad},
  title =	{{Incompressiblity and Next-Block Pseudoentropy}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{66:1--66:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.66},
  URN =		{urn:nbn:de:0030-drops-175697},
  doi =		{10.4230/LIPIcs.ITCS.2023.66},
  annote =	{Keywords: incompressibility, next-block pseudoentropy, sparse languages}
}
Document
On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization

Authors: Ronen Shaltiel and Emanuele Viola

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G:{0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP=P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B ⋅ n} that cannot be computed by circuits of size 2^{β ⋅ n}. Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) constructed "extreme high-end PRGs" with seed length r = (1+o(1)) ⋅ log m, under qualitatively stronger assumptions. We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:

- The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor). To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.

- The construction of Chen and Tell composes two PRGs: G₁:{0,1}^{(1+o(1)) ⋅ log m} → {0,1}^{r₂ = m^{Ω(1)}} and G₂:{0,1}^{r₂} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)}, and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2+1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G₂ from the extreme high-end hardness assumption.

The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
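
To make the object in question concrete, here is a minimal Python sketch of what it means for a generator G:{0,1}^r → {0,1}^m to ε-fool a statistical test: the acceptance probability over G's outputs must be within ε of the acceptance probability over uniform strings. The generator and test below are toy placeholders of my own (not the constructions of Doron et al. or Chen and Tell); the point of the extreme high-end seed length r = (1+o(1)) ⋅ log m is that a deterministic simulation, which enumerates all 2^r seeds, then pays only m^{1+o(1)}.

# Exhaustive check, feasible only for tiny parameters, of the fooling
# condition |Pr[T(G(U_r))=1] - Pr[T(U_m)=1]| <= eps.
from itertools import product

def fools(G, T, r, m, eps):
    outputs = [G(seed) for seed in product((0, 1), repeat=r)]
    p_prg = sum(T(y) for y in outputs) / len(outputs)
    universe = list(product((0, 1), repeat=m))
    p_uniform = sum(T(y) for y in universe) / len(universe)
    return abs(p_prg - p_uniform) <= eps

# Toy instance with r = 3 and m = 4; T is a simple parity test.
G = lambda s: (s[0], s[1], s[2], s[0] ^ s[1])
T = lambda y: y[0] ^ y[3]
print(fools(G, T, r=3, m=4, eps=0.1))   # True for this particular test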

Cite as

Ronen Shaltiel and Emanuele Viola. On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 116:1-116:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{shaltiel_et_al:LIPIcs.ITCS.2022.116,
  author =	{Shaltiel, Ronen and Viola, Emanuele},
  title =	{{On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{116:1--116:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.116},
  URN =		{urn:nbn:de:0030-drops-157122},
  doi =		{10.4230/LIPIcs.ITCS.2022.116},
  annote =	{Keywords: Complexity Theory, Derandomization, Pseudorandom generators, Black-box proofs}
}
Document
Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)

Authors: Noga Ron-Zewi, Ronen Shaltiel, and Nithin Varma

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
A binary code Enc:{0,1}^k → {0,1}ⁿ is (1/2-ε,L)-list decodable if for every w ∈ {0,1}ⁿ, there exists a set List(w) of size at most L, containing all messages m ∈ {0,1}^k such that the relative Hamming distance between Enc(m) and w is at most 1/2-ε. A q-query local list-decoder for Enc is a randomized procedure Dec that, when given oracle access to a string w, makes at most q oracle calls, and for every message m ∈ List(w), with high probability, there exists j ∈ [L] such that for every i ∈ [k], with high probability, Dec^w(i,j) = m_i. We prove lower bounds on q that apply even if L is huge (say L = 2^{k^{0.9}}) and the rate of Enc is small (meaning that n ≥ 2^{k}):

- For ε = 1/k^{ν} for some constant 0 < ν < 1, we prove a lower bound of q = Ω(log(1/δ)/ε²), where δ is the error probability of the local list-decoder. This bound is tight, as there is a matching upper bound of q = O(log(1/δ)/ε²) by Goldreich and Levin (STOC 1989) for the Hadamard code (which has n = 2^k). This bound extends an earlier work of Grinberg, Shaltiel and Viola (FOCS 2018) which only works if n ≤ 2^{k^ν} and the number of coins tossed by Dec is small (and therefore does not apply to the Hadamard code, or other codes with low rate).

- For smaller ε, we prove a lower bound of roughly q = Ω(1/√ε). To the best of our knowledge, this is the first lower bound on the number of queries of local list-decoders that gives q ≥ k for small ε.

Local list-decoders with small ε form the key component in the celebrated theorem of Goldreich and Levin that extracts a hard-core predicate from a one-way function. We show that black-box proofs cannot improve the Goldreich-Levin theorem and produce a hard-core predicate that is hard to predict with probability 1/2 + 1/𝓁^ω(1) when provided with a one-way function f:{0,1}^𝓁 → {0,1}^𝓁, where f is such that circuits of size poly(𝓁) cannot invert f with probability ρ = 1/2^√𝓁 (or even ρ = 1/2^Ω(𝓁)). This limitation applies to any proof by black-box reduction (even if the reduction is allowed to use nonuniformity and has oracle access to f).
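
For concreteness, the Python sketch below implements the Hadamard code mentioned above (with n = 2^k) together with a 2-query local decoder that succeeds in the unique-decoding regime of few corruptions; the Goldreich-Levin local list-decoder for the 1/2-ε regime is substantially more involved and is not reproduced here. The parameters are toy choices.

# Hadamard code Enc(msg)_x = <msg, x> mod 2, plus a 2-query local decoder.
import random

k = 10
def hadamard_bit(msg, x):
    # Inner product modulo 2 of the k-bit integers msg and x.
    return bin(msg & x).count("1") % 2

def corrupted_word(msg, p):
    # Received word w: each of the n = 2^k positions flipped with probability p.
    return [hadamard_bit(msg, x) ^ (random.random() < p) for x in range(2 ** k)]

def local_decode_bit(w, i, trials=25):
    # <m, r> xor <m, r xor e_i> equals m_i whenever both queried positions
    # are uncorrupted; a majority vote over a few random r boosts confidence.
    votes = 0
    for _ in range(trials):
        r = random.randrange(2 ** k)
        votes += w[r] ^ w[r ^ (1 << i)]
    return int(votes > trials // 2)

msg = random.randrange(2 ** k)
w = corrupted_word(msg, p=0.05)
decoded = sum(local_decode_bit(w, i) << i for i in range(k))
print("recovered correctly:", decoded == msg)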

Cite as

Noga Ron-Zewi, Ronen Shaltiel, and Nithin Varma. Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 33:1-33:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ronzewi_et_al:LIPIcs.ITCS.2021.33,
  author =	{Ron-Zewi, Noga and Shaltiel, Ronen and Varma, Nithin},
  title =	{{Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{33:1--33:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.33},
  URN =		{urn:nbn:de:0030-drops-135724},
  doi =		{10.4230/LIPIcs.ITCS.2021.33},
  annote =	{Keywords: Local list-decoding, Hard-core predicates, Black-box reduction, Hadamard code}
}
Document
RANDOM
Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?

Authors: Ronen Shaltiel

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
Yao’s XOR lemma states that for every function f:{0,1}^k → {0,1}, if f has hardness 2/3 for P/poly (meaning that for every circuit C in P/poly, Pr[C(X) = f(X)] ≤ 2/3 on a uniform input X), then the task of computing f(X₁) ⊕ … ⊕ f(X_t) for sufficiently large t has hardness 1/2 + ε for P/poly. Known proofs of this lemma cannot achieve ε = 1/k^ω(1), and even for ε = 1/k, we do not know how to replace P/poly by AC⁰[parity] (the class of constant depth circuits with the gates {and, or, not, parity} of unbounded fan-in). Recently, Grinberg, Shaltiel and Viola (FOCS 2018) (building on a sequence of earlier works) showed that these limitations cannot be circumvented by black-box reductions, namely reductions Red^(⋅) that, given oracle access to a function D that violates the conclusion of Yao’s XOR lemma, implement a circuit that violates the assumption of Yao’s XOR lemma. There are a few known reductions in the related literature on worst-case to average-case reductions that are non-black-box. Specifically, the reductions of Gutfreund, Shaltiel and Ta-Shma (Computational Complexity 2007) and Hirahara (FOCS 2018) are "class reductions" that are only guaranteed to succeed when given oracle access to an oracle D from some efficient class of algorithms. These works seem to circumvent some black-box impossibility results. In this paper we extend the previous limitations of Grinberg, Shaltiel and Viola to class reductions, giving evidence that class reductions cannot yield the desired improvements in Yao’s XOR lemma. To the best of our knowledge, this is the first limitation on reductions for hardness amplification that applies to class reductions. Our technique imitates the previous lower bounds for black-box reductions, replacing the inefficient oracle used in that proof with an efficient one that is based on limited independence, and developing tools to deal with the technical difficulties that arise following this replacement.
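
A small numerical aid for the statement above (with assumed, idealized parameters; it is not a substitute for the lemma, whose content is that the advantage of every small circuit decays): if a guess for each copy f(X_i) is correct independently with probability 2/3, then the XOR of the guesses agrees with f(X₁) ⊕ … ⊕ f(X_t) with probability 1/2 + (1/2)(1/3)^t, which tends to 1/2 exponentially fast in t.

# Success probability of XOR-ing independent per-copy guesses: the guessed
# parity is correct iff the number of per-copy errors is even, giving
# 1/2 + (1/2) * (2 * per_copy_success - 1) ** t.
def xor_success(per_copy_success, t):
    bias = 2 * per_copy_success - 1
    return 0.5 + 0.5 * bias ** t

for t in (1, 2, 5, 10, 20):
    print(t, xor_success(2 / 3, t))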

Cite as

Ronen Shaltiel. Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{shaltiel:LIPIcs.APPROX/RANDOM.2020.10,
  author =	{Shaltiel, Ronen},
  title =	{{Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.10},
  URN =		{urn:nbn:de:0030-drops-126138},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.10},
  annote =	{Keywords: Yao’s XOR lemma, Hardness amplification, black-box reductions}
}
Document
RANDOM
Improved Extractors for Recognizable and Algebraic Sources

Authors: Fu Li and David Zuckerman

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
We study the task of seedless randomness extraction from recognizable sources, which are uniform distributions over sets of the form {x : f(x) = 1} for functions f in some specified class C. We give two simple methods for constructing seedless extractors for C-recognizable sources. Our first method shows that if C admits XOR amplification, then we can construct a seedless extractor for C-recognizable sources by using a mildly hard function for C as a black box. By exploiting this reduction, we give polynomial-time, seedless randomness extractors for three natural recognizable sources: (1) constant-degree algebraic sources over any prime field, where constant-degree algebraic sources are uniform distributions over the set of zeros of a system of constant degree polynomials; (2) sources recognizable by randomized multiparty communication protocols of cn bits, where c>0 is a small enough constant; (3) halfspace sources, or sources recognizable by linear threshold functions. In particular, the new extractor for each of these three sources has linear output length and exponentially small error for min-entropy k >= (1-alpha)n, where alpha>0 is a small enough constant. Our second method shows that a seed-extending pseudorandom generator with exponentially small error for C yields an extractor with exponentially small error for C-recognizable sources, improving a reduction by Kinne, Melkebeek, and Shaltiel [Kinne et al., 2012]. Using the hardness of the parity function against AC^0 [Håstad, 1987], we significantly improve Shaltiel’s extractor [Shaltiel, 2011] for AC^0-recognizable sources. Finally, assuming sufficiently strong one-way permutations, we construct seedless extractors for sources recognizable by BPP algorithms, and these extractors run in quasi-polynomial time.
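
The Python sketch below shows what a C-recognizable source looks like for one halfspace f (generated by rejection sampling) and measures the bias of a naive one-bit candidate extractor, the parity of the sample; the halfspace, the input length, and the sample size are arbitrary toy choices. Parity comes out visibly biased, which is exactly why constructing a function that is nearly unbiased on every source recognizable by the class is the nontrivial task addressed in the paper.

# A halfspace-recognizable source: uniform over {x : f(x) = 1} for a
# linear threshold function f, sampled here by rejection sampling.
import random

n = 16
def halfspace(x):
    # f(x) = 1 iff the Hamming weight of x is at least n/2 + 2.
    return sum(x) >= n // 2 + 2

def sample_source():
    while True:
        x = [random.randint(0, 1) for _ in range(n)]
        if halfspace(x):
            return x

samples = [sample_source() for _ in range(20000)]
bias = sum(sum(x) % 2 for x in samples) / len(samples)
print("Pr[parity = 1] on this halfspace source:", bias)   # noticeably below 1/2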

Cite as

Fu Li and David Zuckerman. Improved Extractors for Recognizable and Algebraic Sources. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 72:1-72:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{li_et_al:LIPIcs.APPROX-RANDOM.2019.72,
  author =	{Li, Fu and Zuckerman, David},
  title =	{{Improved Extractors for Recognizable and Algebraic Sources}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{72:1--72:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.72},
  URN =		{urn:nbn:de:0030-drops-112873},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.72},
  annote =	{Keywords: Extractor, Pseudorandomness}
}
Document
Track A: Algorithms, Complexity and Games
AC^0[p] Lower Bounds Against MCSP via the Coin Problem

Authors: Alexander Golovnev, Rahul Ilango, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, and Avishay Tal

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
Minimum Circuit Size Problem (MCSP) asks to decide if a given truth table of an n-variate boolean function has circuit complexity less than a given parameter s. We prove that MCSP is hard for constant-depth circuits with mod p gates, for any prime p >= 2 (the circuit class AC^0[p]). Namely, we show that MCSP requires d-depth AC^0[p] circuits of size at least exp(N^{0.49/d}), where N=2^n is the size of an input truth table of an n-variate boolean function. Our circuit lower bound proof shows that MCSP can solve the coin problem: distinguish uniformly random N-bit strings from those generated using independent samples from a biased random coin which is 1 with probability 1/2+N^{-0.49}, and 0 otherwise. Solving the coin problem with such parameters is known to require exponentially large AC^0[p] circuits. Moreover, this also implies that MAJORITY is computable by a non-uniform AC^0 circuit of polynomial size that also has MCSP-oracle gates. The latter has a few other consequences for the complexity of MCSP, e.g., we get that any boolean function in NC^1 (i.e., computable by a polynomial-size formula) can also be computed by a non-uniform polynomial-size AC^0 circuit with MCSP-oracle gates.
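
The coin problem in the reduction is a concrete sampling experiment, sketched below in Python with toy parameters N and delta of my choosing and a simple threshold on the number of ones standing in for a MAJORITY-style test; such a test distinguishes the two distributions with high probability, and the paper shows that an MCSP oracle can play this distinguishing role inside AC^0[p].

# Coin problem: distinguish N uniform bits from N independent coins that
# are 1 with probability 1/2 + delta, by thresholding the fraction of ones.
import random

N = 2 ** 12
delta = N ** -0.49

def uniform_string():
    return [random.randint(0, 1) for _ in range(N)]

def biased_string():
    return [1 if random.random() < 0.5 + delta else 0 for _ in range(N)]

def guess_biased(s):
    # Guess "biased" iff the fraction of ones exceeds the midpoint 1/2 + delta/2.
    return sum(s) / N > 0.5 + delta / 2

trials = 200
correct = sum(guess_biased(biased_string()) for _ in range(trials))
correct += sum(not guess_biased(uniform_string()) for _ in range(trials))
print("distinguishing accuracy:", correct / (2 * trials))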

Cite as

Alexander Golovnev, Rahul Ilango, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, and Avishay Tal. AC^0[p] Lower Bounds Against MCSP via the Coin Problem. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 66:1-66:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{golovnev_et_al:LIPIcs.ICALP.2019.66,
  author =	{Golovnev, Alexander and Ilango, Rahul and Impagliazzo, Russell and Kabanets, Valentine and Kolokolova, Antonina and Tal, Avishay},
  title =	{{AC^0\lbrackp\rbrack Lower Bounds Against MCSP via the Coin Problem}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{66:1--66:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.66},
  URN =		{urn:nbn:de:0030-drops-106422},
  doi =		{10.4230/LIPIcs.ICALP.2019.66},
  annote =	{Keywords: Minimum Circuit Size Problem (MCSP), circuit lower bounds, AC0\lbrackp\rbrack, coin problem, hybrid argument, MKTP, biased random boolean functions}
}
Document
Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels

Authors: Ronen Shaltiel and Jad Silbak

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
A stochastic code is a pair of encoding and decoding procedures, where the encoding procedure receives a k-bit message m and a d-bit uniform string S. The code is (p,L)-list-decodable against a class C of "channel functions" from n bits to n bits if for every message m and every channel C in C that induces at most pn errors, applying decoding on the "received word" C(Enc(m,S)) produces a list of at most L messages that contains m with high probability (over the choice of uniform S). Note that both the channel C and the decoding algorithm Dec do not receive the random variable S. The rate of a code is the ratio between the message length and the encoding length, and a code is explicit if Enc and Dec run in time poly(n). Guruswami and Smith (J. ACM, to appear) showed that for all constants 0 < p < 1/2 and c > 1 there are Monte-Carlo explicit constructions of stochastic codes with rate R >= 1-H(p)-epsilon that are (p, L=poly(1/epsilon))-list decodable for size n^c channels. Monte-Carlo means that the encoding and decoding procedures need to share a public uniformly chosen poly(n^c)-bit string Y, and the constructed stochastic code is (p,L)-list decodable with high probability over the choice of Y. Guruswami and Smith pose an open problem to give fully explicit (that is, not Monte-Carlo) codes with the same parameters, under hardness assumptions. In this paper we resolve this open problem, using a minimal assumption: the existence of poly-time computable pseudorandom generators for small circuits, which follows from standard complexity assumptions by Impagliazzo and Wigderson (STOC 97). Guruswami and Smith also asked for fully explicit unconditional constructions with the same parameters against O(log n)-space online channels. (These are channels that have space O(log n) and are allowed to read the input codeword in one pass.) We resolve this open problem as well. Finally, we consider a tighter notion of explicitness, in which the running time of the encoding and list-decoding algorithms does not increase when increasing the complexity of the channel. We give explicit constructions (with rate approaching 1-H(p) for every p <= p_0 for some p_0 > 0) for channels that are circuits of size 2^{n^{Omega(1/d)}} and depth d. Here, the running time of encoding and decoding is a fixed polynomial (that does not depend on d). Our approach builds on the machinery developed by Guruswami and Smith, replacing some probabilistic arguments with explicit constructions. We also present a simplified and general approach that makes the reductions in the proof more efficient, so that we can handle weak classes of channels.
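
As a small numerical companion to the rate bound above: the benchmark 1 - H(p) that the rate R approaches is determined by the binary entropy function H, tabulated below for a few arbitrarily chosen error rates p.

# Binary entropy H(p) and the corresponding list-decoding capacity 1 - H(p).
from math import log2

def binary_entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.05, 0.1, 0.25, 0.4, 0.49):
    print(f"p = {p}: rate approaches {1 - binary_entropy(p):.3f}")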

Cite as

Ronen Shaltiel and Jad Silbak. Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 45:1-45:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{shaltiel_et_al:LIPIcs.APPROX-RANDOM.2016.45,
  author =	{Shaltiel, Ronen and Silbak, Jad},
  title =	{{Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{45:1--45:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.45},
  URN =		{urn:nbn:de:0030-drops-66682},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.45},
  annote =	{Keywords: Error Correcting Codes, List Decoding, Pseudorandomness}
}
Document
Pseudorandomness When the Odds are Against You

Authors: Sergei Artemenko, Russell Impagliazzo, Valentine Kabanets, and Ronen Shaltiel

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
Impagliazzo and Wigderson (STOC 1997) showed that if E=DTIME(2^O(n)) requires size 2^Omega(n) circuits, then every time T constant-error randomized algorithm can be simulated deterministically in time poly(T). However, such a polynomial slowdown is a deal breaker when T=2^(alpha*n), for a constant alpha>0, as is the case for some randomized algorithms for NP-complete problems. Paturi and Pudlak (STOC 2010) observed that many such algorithms are obtained from randomized time T algorithms, for T < 2^o(n), with large one-sided error 1-epsilon, for epsilon=2^(-alpha*n), that are repeated 1/epsilon times to yield a constant-error randomized algorithm running in time T/epsilon=2^((alpha+o(1))*n). We show that if E requires size 2^Omega(n) nondeterministic circuits, then there is a poly(n)-time epsilon-HSG (Hitting-Set Generator) H:{0,1}^(O(log(n)) + log(1/epsilon)) -> {0,1}^n, implying that time T randomized algorithms with one-sided error 1-epsilon can be simulated in deterministic time poly(T)/epsilon. In particular, under this hardness assumption, the fastest known constant-error randomized algorithm for k-SAT (for k > 3) by Paturi et al. (J. ACM 2005) can be made deterministic with essentially the same time bound. This is the first hardness versus randomness tradeoff for algorithms for NP-complete problems. We address the necessity of our assumption by showing that HSGs with very low error imply hardness for nondeterministic circuits with "few" nondeterministic bits. Applebaum et al. (CCC 2015) showed that "black-box techniques" cannot achieve poly(n)-time computable epsilon-PRGs (Pseudo-Random Generators) for epsilon=n^(-omega(1)), even if we assume hardness against circuits with oracle access to an arbitrary language in the polynomial time hierarchy. We introduce weaker variants of PRGs with relative error that do follow under the latter hardness assumption. Specifically, we say that a function G:{0,1}^r -> {0,1}^n is an (epsilon,delta)-re-PRG for a circuit C if (1-epsilon)*Pr[C(U_n)=1] - delta < Pr[C(G(U_r))=1] < (1+epsilon)*Pr[C(U_n)=1] + delta. We construct poly(n)-time computable (epsilon,delta)-re-PRGs with arbitrary polynomial stretch, epsilon=n^(-O(1)) and delta=2^(-n^Omega(1)). We also construct PRGs with relative error that fool non-boolean distinguishers (in the sense introduced by Dubrov and Ishai (STOC 2006)). Our techniques use ideas from Paturi and Pudlak (STOC 2010), Trevisan and Vadhan (FOCS 2000), and Applebaum et al. (CCC 2015). Common themes in our proofs are "composing" a PRG/HSG with a combinatorial object such as dispersers and extractors, and the use of nondeterministic reductions in the spirit of Feige and Lund (Comp. Complexity 1997).
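
The relative-error condition above can be checked exactly on tiny examples. The Python sketch below enumerates all seeds and inputs for placeholder choices of G and C (my own toy examples, not the paper's constructions) and evaluates the (epsilon, delta)-re-PRG inequality, once for a test that G fools and once for a test that detects G's structure.

# Exact, brute-force check of the re-PRG condition
# (1-eps)*Pr[C(U_n)=1] - delta < Pr[C(G(U_r))=1] < (1+eps)*Pr[C(U_n)=1] + delta.
from itertools import product

def acceptance_prob(f, bits):
    inputs = list(product((0, 1), repeat=bits))
    return sum(f(x) for x in inputs) / len(inputs)

def is_re_prg(G, C, r, n, eps, delta):
    p_uniform = acceptance_prob(C, n)
    p_prg = acceptance_prob(lambda s: C(G(s)), r)
    return (1 - eps) * p_uniform - delta < p_prg < (1 + eps) * p_uniform + delta

G = lambda s: (s[0], s[1], s[0] ^ s[1], s[0] & s[1])     # r = 2, n = 4
C1 = lambda y: y[0]                                      # a test that G fools
C2 = lambda y: int(y[2] == (y[0] ^ y[1]))                # a test that detects G
print(is_re_prg(G, C1, r=2, n=4, eps=0.1, delta=0.01))   # True
print(is_re_prg(G, C2, r=2, n=4, eps=0.1, delta=0.01))   # False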

Cite as

Sergei Artemenko, Russell Impagliazzo, Valentine Kabanets, and Ronen Shaltiel. Pseudorandomness When the Odds are Against You. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 9:1-9:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{artemenko_et_al:LIPIcs.CCC.2016.9,
  author =	{Artemenko, Sergei and Impagliazzo, Russell and Kabanets, Valentine and Shaltiel, Ronen},
  title =	{{Pseudorandomness When the Odds are Against You}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{9:1--9:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.9},
  URN =		{urn:nbn:de:0030-drops-58375},
  doi =		{10.4230/LIPIcs.CCC.2016.9},
  annote =	{Keywords: Derandomization, pseudorandom generator, hitting-set generator, relative error}
}
Document
Extended Abstract
Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions (Extended Abstract)

Authors: Benny Applebaum, Sergei Artemenko, Ronen Shaltiel, and Guang Yang

Published in: LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)


Abstract
A circuit C compresses a function f:{0,1}^n -> {0,1}^m if given an input x in {0,1}^n the circuit C can shrink x to a shorter l-bit string x' such that later, a computationally-unbounded solver D will be able to compute f(x) based on x'. In this paper we study the existence of functions which are incompressible by circuits of some fixed polynomial size s=n^c. Motivated by cryptographic applications, we focus on average-case (l,epsilon)-incompressibility, which guarantees that on a random input x in {0,1}^n, for every size s circuit C:{0,1}^n -> {0,1}^l and any unbounded solver D, the success probability Pr_x[D(C(x))=f(x)] is upper-bounded by 2^(-m)+epsilon. While this notion of incompressibility appeared in several works (e.g., Dubrov and Ishai, STOC 06), so far no explicit constructions of efficiently computable incompressible functions were known. In this work we present the following results:

1. Assuming that E is hard for exponential size nondeterministic circuits, we construct a polynomial time computable boolean function f:{0,1}^n -> {0,1} which is incompressible by size n^c circuits with communication l=(1-o(1)) * n and error epsilon=n^(-c). Our technique generalizes to the case of PRGs against nonboolean circuits, improving and simplifying the previous construction of Shaltiel and Artemenko (STOC 14).

2. We show that it is possible to achieve a negligible error parameter epsilon=n^(-omega(1)) for nonboolean functions. Specifically, assuming that E is hard for exponential size Sigma_3-circuits, we construct a nonboolean function f:{0,1}^n -> {0,1}^m which is incompressible by size n^c circuits with l=Omega(n) and extremely small epsilon=n^(-c) * 2^(-m). Our construction combines the techniques of Trevisan and Vadhan (FOCS 00) with a new notion of relative-error deterministic extractor, which may be of independent interest.

3. We show that the task of constructing an incompressible boolean function f:{0,1}^n -> {0,1} with negligible error parameter epsilon cannot be achieved by "existing proof techniques". Namely, nondeterministic reductions (or even Sigma_i reductions) cannot get epsilon=n^(-omega(1)) for boolean incompressible functions. Our results also apply to constructions of standard Nisan-Wigderson type PRGs and (standard) boolean functions that are hard on average, explaining, in retrospect, the limitations of existing constructions. Our impossibility result builds on an approach of Shaltiel and Viola (SIAM J. Comp., 2010).
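
To unpack the success-probability bound in the definition above, the Python sketch below computes, by brute force on toy parameters, the best possible unbounded solver D for one fixed compressor C, and compares its success probability against the 2^(-m) baseline. The random truth table f (standing in for a hard function) and the "keep the l high-order bits" compressor (standing in for a size-s circuit) are placeholder choices.

# Optimal unbounded solver for a fixed compressor: for each compressed
# value, answer the most frequent f-value among its preimages.
import random
from collections import Counter

n, m, l = 10, 4, 3
f = {x: random.randrange(2 ** m) for x in range(2 ** n)}   # placeholder "hard" function
C = lambda x: x >> (n - l)                                 # compress by keeping l high bits

best = 0
for v in range(2 ** l):
    counts = Counter(f[x] for x in range(2 ** n) if C(x) == v)
    best += counts.most_common(1)[0][1]

print("optimal solver success:", best / 2 ** n, " baseline 2^-m:", 2 ** -m)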

Cite as

Benny Applebaum, Sergei Artemenko, Ronen Shaltiel, and Guang Yang. Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions (Extended Abstract). In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 582-600, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{applebaum_et_al:LIPIcs.CCC.2015.582,
  author =	{Applebaum, Benny and Artemenko, Sergei and Shaltiel, Ronen and Yang, Guang},
  title =	{{Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{582--600},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.582},
  URN =		{urn:nbn:de:0030-drops-50567},
  doi =		{10.4230/LIPIcs.CCC.2015.582},
  annote =	{Keywords: compression, pseudorandomness, extractors, nondeterministic reductions}
}
  • Refine by Author
  • 7 Shaltiel, Ronen
  • 3 Kabanets, Valentine
  • 2 Artemenko, Sergei
  • 2 Impagliazzo, Russell
  • 2 Silbak, Jad
  • Show More...

  • Refine by Classification
  • 4 Theory of computation → Circuit complexity
  • 3 Theory of computation → Pseudorandomness and derandomization
  • 2 Theory of computation
  • 2 Theory of computation → Computational complexity and cryptography
  • 2 Theory of computation → Problems, reductions and completeness
  • Show More...

  • Refine by Keyword
  • 2 Derandomization
  • 2 Pseudorandomness
  • 1 (de-)randomization
  • 1 AC0[p]
  • 1 Arthur-Merlin protocol
  • Show More...

  • Refine by Type
  • 12 document

  • Refine by Publication Year
  • 4 2023
  • 2 2016
  • 2 2019
  • 1 2015
  • 1 2020
  • Show More...
