Search Results

Documents authored by Shaltiel, Ronen


Document
Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371)

Authors: Markus Bläser, Valentine Kabanets, Ronen Shaltiel, and Jacobo Torán

Published in: Dagstuhl Reports, Volume 12, Issue 9 (2023)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 22371 "Algebraic and Analytic Methods in Computational Complexity". Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Besides algebraic methods, analytic methods have been used in theoretical computer science for quite some time. These methods can also be used to solve algebraic problems, as shown by many recent examples in the areas of derandomization, coding theory, and circuit lower bounds. These new directions were the focus of the Dagstuhl Seminar and reflect the developments in the field since the previous Dagstuhl Seminar 18391. This Dagstuhl Seminar brought together researchers who are using a diverse array of algebraic and analytic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and this seminar played a role in educating a diverse community about the latest techniques, spurring further progress.

Cite as

Markus Bläser, Valentine Kabanets, Ronen Shaltiel, and Jacobo Torán. Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371). In Dagstuhl Reports, Volume 12, Issue 9, pp. 41-59, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{blaser_et_al:DagRep.12.9.41,
  author =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Shaltiel, Ronen and Tor\'{a}n, Jacobo},
  title =	{{Algebraic and Analytic Methods in Computational Complexity (Dagstuhl Seminar 22371)}},
  pages =	{41--59},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{12},
  number =	{9},
  editor =	{Bl\"{a}ser, Markus and Kabanets, Valentine and Shaltiel, Ronen and Tor\'{a}n, Jacobo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.12.9.41},
  URN =		{urn:nbn:de:0030-drops-178092},
  doi =		{10.4230/DagRep.12.9.41},
  annote =	{Keywords: (de-)randomization, algebra, circuits, coding, computational complexity}
}
Document
On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization

Authors: Ronen Shaltiel and Emanuele Viola

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G:{0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP=P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B ⋅ n} that cannot be computed by circuits of size 2^{β ⋅ n}. Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) construct "extreme high-end PRGs" with seed length r = (1+o(1))⋅ log m, under qualitatively stronger assumptions. We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
- The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor). To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
- The construction of Chen and Tell composes two PRGs: G₁:{0,1}^{(1+o(1)) ⋅ log m} → {0,1}^{r₂ = m^{Ω(1)}} and G₂:{0,1}^{r₂} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2 + 1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G₂ from the extreme high-end hardness assumption. The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not.
Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
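
As a quick reference, the LaTeX sketch below restates, in our own words rather than the paper's, the two notions the abstract contrasts: an ε-PRG against size-m circuits, and the extreme high-end hardness assumption (B = 1+o(1), β = 1-o(1)).

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of the definitions discussed in the abstract (restated, not quoted).
\textbf{PRG.} $G:\{0,1\}^r \to \{0,1\}^m$ is an $\varepsilon$-PRG for size-$m$ circuits if
for every circuit $C$ of size $m$,
\[
\Bigl|\Pr_{x \sim U_r}\bigl[C(G(x))=1\bigr] - \Pr_{y \sim U_m}\bigl[C(y)=1\bigr]\Bigr| \le \varepsilon .
\]
\textbf{Extreme high-end hardness.} There is a family $\{f_n\}$ computable in time $2^{(1+o(1))\,n}$
such that no circuit of size $2^{(1-o(1))\,n}$ computes $f_n$.
An ``extreme high-end PRG'' is an $\varepsilon$-PRG with seed length $r = (1+o(1))\log m$.
\end{document}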

Cite as

Ronen Shaltiel and Emanuele Viola. On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 116:1-116:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{shaltiel_et_al:LIPIcs.ITCS.2022.116,
  author =	{Shaltiel, Ronen and Viola, Emanuele},
  title =	{{On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{116:1--116:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.116},
  URN =		{urn:nbn:de:0030-drops-157122},
  doi =		{10.4230/LIPIcs.ITCS.2022.116},
  annote =	{Keywords: Complexity Theory, Derandomization, Pseudorandom generators, Black-box proofs}
}
Document
Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)

Authors: Noga Ron-Zewi, Ronen Shaltiel, and Nithin Varma

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
A binary code Enc:{0,1}^k → {0,1}ⁿ is (1/2-ε,L)-list decodable if for every w ∈ {0,1}ⁿ, there exists a set List(w) of size at most L, containing all messages m ∈ {0,1}^k such that the relative Hamming distance between Enc(m) and w is at most 1/2-ε. A q-query local list-decoder for Enc is a randomized procedure Dec that, when given oracle access to a string w, makes at most q oracle calls, and for every message m ∈ List(w), with high probability, there exists j ∈ [L] such that for every i ∈ [k], with high probability, Dec^w(i,j) = m_i. We prove lower bounds on q that apply even if L is huge (say L = 2^{k^{0.9}}) and the rate of Enc is small (meaning that n ≥ 2^{k}):
- For ε = 1/k^{ν} for some constant 0 < ν < 1, we prove a lower bound of q = Ω(log(1/δ)/ε²), where δ is the error probability of the local list-decoder. This bound is tight, as there is a matching upper bound of q = O(log(1/δ)/ε²) by Goldreich and Levin (STOC 1989) for the Hadamard code (which has n = 2^k). This bound extends an earlier work of Grinberg, Shaltiel and Viola (FOCS 2018), which only works if n ≤ 2^{k^ν} and the number of coins tossed by Dec is small (and therefore does not apply to the Hadamard code, or other codes with low rate).
- For smaller ε, we prove a lower bound of roughly q = Ω(1/√ε). To the best of our knowledge, this is the first lower bound on the number of queries of local list-decoders that gives q ≥ k for small ε.
Local list-decoders with small ε form the key component in the celebrated theorem of Goldreich and Levin that extracts a hard-core predicate from a one-way function. We show that black-box proofs cannot improve the Goldreich-Levin theorem and produce a hard-core predicate that is hard to predict with probability 1/2 + 1/𝓁^ω(1) when provided with a one-way function f:{0,1}^𝓁 → {0,1}^𝓁, where f is such that circuits of size poly(𝓁) cannot invert f with probability ρ = 1/2^√𝓁 (or even ρ = 1/2^Ω(𝓁)). This limitation applies to any proof by black-box reduction (even if the reduction is allowed to use nonuniformity and has oracle access to f).
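
To make the oracle-access model concrete, here is a small Python sketch (our own illustration, not code from the paper; it performs local unique decoding rather than local list decoding) of the Hadamard code Enc(m)_x = <m,x> mod 2 discussed above, together with the classical two-queries-per-trial local decoder for a single message bit. The paper's lower bounds concern the far harder list-decoding regime with error rate 1/2-ε.

# Toy Hadamard-code local decoder (illustration only; parameters are arbitrary).
import random

def hadamard_encode(m_bits):
    """Return the 2^k-bit Hadamard codeword: position x holds <m, x> mod 2."""
    k = len(m_bits)
    return [sum(m_bits[i] & ((x >> i) & 1) for i in range(k)) % 2
            for x in range(2 ** k)]

def local_decode_bit(w, k, i, trials=25):
    """Recover bit i with 2 oracle queries per trial, taking a majority vote.

    If the corrupted codeword w agrees with Enc(m) on all but a small fraction
    of positions (well below 1/4), each trial returns m_i with high probability."""
    e_i = 1 << i
    votes = 0
    for _ in range(trials):
        r = random.randrange(2 ** k)
        votes += w[r] ^ w[r ^ e_i]   # equals m_i when both queried positions are uncorrupted
    return int(votes > trials // 2)

if __name__ == "__main__":
    k = 10
    m = [random.randint(0, 1) for _ in range(k)]
    w = hadamard_encode(m)
    for pos in random.sample(range(len(w)), len(w) // 20):   # corrupt 5% of positions
        w[pos] ^= 1
    print("message:", m)
    print("decoded:", [local_decode_bit(w, k, i) for i in range(k)])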

Cite as

Noga Ron-Zewi, Ronen Shaltiel, and Nithin Varma. Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 33:1-33:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ronzewi_et_al:LIPIcs.ITCS.2021.33,
  author =	{Ron-Zewi, Noga and Shaltiel, Ronen and Varma, Nithin},
  title =	{{Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{33:1--33:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.33},
  URN =		{urn:nbn:de:0030-drops-135724},
  doi =		{10.4230/LIPIcs.ITCS.2021.33},
  annote =	{Keywords: Local list-decoding, Hard-core predicates, Black-box reduction, Hadamard code}
}
Document
RANDOM
Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?

Authors: Ronen Shaltiel

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
Yao’s XOR lemma states that for every function f:{0,1}^k → {0,1}, if f has hardness 2/3 for P/poly (meaning that for every circuit C in P/poly, Pr[C(X) = f(X)] ≤ 2/3 on a uniform input X), then the task of computing f(X₁) ⊕ … ⊕ f(X_t) for sufficiently large t has hardness 1/2 + ε for P/poly. Known proofs of this lemma cannot achieve ε = 1/k^ω(1), and even for ε = 1/k, we do not know how to replace P/poly by AC⁰[parity] (the class of constant-depth circuits with the gates {and,or,not,parity} of unbounded fan-in). Recently, Grinberg, Shaltiel and Viola (FOCS 2018) (building on a sequence of earlier works) showed that these limitations cannot be circumvented by black-box reductions, namely by reductions Red^(⋅) that, given oracle access to a function D that violates the conclusion of Yao’s XOR lemma, implement a circuit that violates the assumption of Yao’s XOR lemma. There are a few known reductions in the related literature on worst-case to average-case reductions that are non-black-box. Specifically, the reductions of Gutfreund, Shaltiel and Ta-Shma (Computational Complexity 2007) and Hirahara (FOCS 2018) are "class reductions" that are only guaranteed to succeed when given oracle access to an oracle D from some efficient class of algorithms. These works seem to circumvent some black-box impossibility results. In this paper we extend the previous limitations of Grinberg, Shaltiel and Viola to class reductions, giving evidence that class reductions cannot yield the desired improvements in Yao’s XOR lemma. To the best of our knowledge, this is the first limitation on reductions for hardness amplification that applies to class reductions. Our technique imitates the previous lower bounds for black-box reductions, replacing the inefficient oracle used in that proof with an efficient one based on limited independence, and developing tools to deal with the technical difficulties that arise following this replacement.
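
The quantitative decay behind XOR amplification can be sanity-checked numerically. The Python sketch below (a toy of our own, not the paper's argument or Yao's lemma itself) simulates a predictor that guesses each copy f(X_j) independently with success probability p = 2/3 and XORs its guesses; by the piling-up lemma its success probability on the XOR is 1/2 + (2p-1)^t/2. Yao's XOR lemma is the far stronger claim that every small circuit suffers such decay, but the toy shows why larger t should drive hardness toward 1/2 + ε.

# Piling-up illustration: XORing independent noisy guesses shrinks the advantage.
import random

def xor_guess_success(p, t, samples=200_000):
    """Empirical probability that the XOR of t independent guesses (each correct w.p. p)
    equals the XOR of the true bits."""
    hits = 0
    for _ in range(samples):
        wrong = sum(random.random() > p for _ in range(t))  # number of incorrect guesses
        hits += (wrong % 2 == 0)    # XOR prediction is right iff an even number of guesses are wrong
    return hits / samples

if __name__ == "__main__":
    p = 2 / 3                       # per-copy success probability ("hardness 2/3")
    for t in (1, 2, 4, 8):
        empirical = xor_guess_success(p, t)
        formula = 0.5 + (2 * p - 1) ** t / 2
        print(f"t={t}: empirical={empirical:.4f}  piling-up={formula:.4f}")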

Cite as

Ronen Shaltiel. Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{shaltiel:LIPIcs.APPROX/RANDOM.2020.10,
  author =	{Shaltiel, Ronen},
  title =	{{Is It Possible to Improve Yao’s XOR Lemma Using Reductions That Exploit the Efficiency of Their Oracle?}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.10},
  URN =		{urn:nbn:de:0030-drops-126138},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.10},
  annote =	{Keywords: Yao’s XOR lemma, Hardness amplification, black-box reductions}
}
Document
Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels

Authors: Ronen Shaltiel and Jad Silbak

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
A stochastic code is a pair of encoding and decoding procedures where the encoding procedure receives a k-bit message m and a d-bit uniform string S. The code is (p,L)-list-decodable against a class C of "channel functions" from n bits to n bits if, for every message m and every channel C in C that induces at most pn errors, applying decoding on the "received word" C(Enc(m,S)) produces a list of at most L messages that contains m with high probability (over the choice of uniform S). Note that neither the channel C nor the decoding algorithm Dec receives the random variable S. The rate of a code is the ratio between the message length and the encoding length, and a code is explicit if Enc and Dec run in time poly(n). Guruswami and Smith (J. ACM, to appear) showed that for all constants 0 < p < 1/2 and c > 1 there are Monte-Carlo explicit constructions of stochastic codes with rate R >= 1-H(p)-epsilon that are (p,L=poly(1/epsilon))-list decodable for size n^c channels. Monte-Carlo means that the encoding and decoding need to share a public uniformly chosen poly(n^c)-bit string Y, and the constructed stochastic code is (p,L)-list decodable with high probability over the choice of Y. Guruswami and Smith pose the open problem of giving fully explicit (that is, not Monte-Carlo) codes with the same parameters, under hardness assumptions. In this paper we resolve this open problem, using a minimal assumption: the existence of poly-time computable pseudorandom generators for small circuits, which follows from standard complexity assumptions by Impagliazzo and Wigderson (STOC 97). Guruswami and Smith also asked for fully explicit unconditional constructions with the same parameters against O(log n)-space online channels. (These are channels that have space O(log n) and are allowed to read the input codeword in one pass.) We resolve this open problem as well. Finally, we consider a tighter notion of explicitness, in which the running time of the encoding and list-decoding algorithms does not increase when increasing the complexity of the channel. We give explicit constructions (with rate approaching 1-H(p) for every p <= p_0, for some p_0 > 0) for channels that are circuits of size 2^{n^{Omega(1/d)}} and depth d. Here, the running time of encoding and decoding is a fixed polynomial (that does not depend on d). Our approach builds on the machinery developed by Guruswami and Smith, replacing some probabilistic arguments with explicit constructions. We also present a simplified and general approach that makes the reductions in the proof more efficient, so that we can handle weak classes of channels.
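
For reference, the LaTeX sketch below restates (not verbatim, and writing the "high probability" requirement as 1-o(1)) the list-decodability condition for a stochastic code against a channel class, as described in the abstract.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Restated sketch of the abstract's definition; $\Delta$ denotes Hamming distance.
A stochastic code $\mathrm{Enc}:\{0,1\}^k \times \{0,1\}^d \to \{0,1\}^n$ with decoder $\mathrm{Dec}$
is $(p,L)$-list decodable against a class $\mathcal{C}$ of channels $C:\{0,1\}^n \to \{0,1\}^n$ if,
for every message $m$ and every $C \in \mathcal{C}$ with $\Delta(C(z),z) \le pn$ for all $z$,
\[
\Pr_{S \sim U_d}\Bigl[\, m \in \mathrm{Dec}\bigl(C(\mathrm{Enc}(m,S))\bigr)
\text{ and } \bigl|\mathrm{Dec}\bigl(C(\mathrm{Enc}(m,S))\bigr)\bigr| \le L \Bigr] \ge 1-o(1),
\]
where neither $C$ nor $\mathrm{Dec}$ receives $S$. The rate of the code is $k/n$.
\end{document}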

Cite as

Ronen Shaltiel and Jad Silbak. Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 45:1-45:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{shaltiel_et_al:LIPIcs.APPROX-RANDOM.2016.45,
  author =	{Shaltiel, Ronen and Silbak, Jad},
  title =	{{Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{45:1--45:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.45},
  URN =		{urn:nbn:de:0030-drops-66682},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.45},
  annote =	{Keywords: Error Correcting Codes, List Decoding, Pseudorandomness}
}
Document
Pseudorandomness When the Odds are Against You

Authors: Sergei Artemenko, Russell Impagliazzo, Valentine Kabanets, and Ronen Shaltiel

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
Impagliazzo and Wigderson (STOC 1997) showed that if E=DTIME(2^O(n)) requires size 2^Omega(n) circuits, then every time T constant-error randomized algorithm can be simulated deterministically in time poly(T). However, such a polynomial slowdown is a deal breaker when T=2^(alpha*n), for a constant alpha>0, as is the case for some randomized algorithms for NP-complete problems. Paturi and Pudlak (STOC 2010) observed that many such algorithms are obtained from randomized time T algorithms, for T < 2^o(n), with large one-sided error 1-epsilon, for epsilon=2^(-alpha*n), that are repeated 1/epsilon times to yield a constant-error randomized algorithm running in time T/epsilon=2^((alpha+o(1))*n). We show that if E requires size 2^Omega(n) nondeterministic circuits, then there is a poly(n)-time epsilon-HSG (Hitting-Set Generator) H:{0,1}^(O(log(n)) + log(1/epsilon)) -> {0,1}^n, implying that time T randomized algorithms with one-sided error 1-epsilon can be simulated in deterministic time poly(T)/epsilon. In particular, under this hardness assumption, the fastest known constant-error randomized algorithm for k-SAT (for k > 3) by Paturi et al. (J. ACM 2005) can be made deterministic with essentially the same time bound. This is the first hardness versus randomness tradeoff for algorithms for NP-complete problems. We address the necessity of our assumption by showing that HSGs with very low error imply hardness for nondeterministic circuits with "few" nondeterministic bits. Applebaum et al. (CCC 2015) showed that "black-box techniques" cannot achieve poly(n)-time computable epsilon-PRGs (Pseudo-Random Generators) for epsilon=n^(-omega(1)), even if we assume hardness against circuits with oracle access to an arbitrary language in the polynomial time hierarchy. We introduce weaker variants of PRGs with relative error that do follow from the latter hardness assumption. Specifically, we say that a function G:{0,1}^r -> {0,1}^n is an (epsilon,delta)-re-PRG for a circuit C if (1-epsilon)*Pr[C(U_n)=1] - delta < Pr[C(G(U_r))=1] < (1+epsilon)*Pr[C(U_n)=1] + delta. We construct poly(n)-time computable (epsilon,delta)-re-PRGs with arbitrary polynomial stretch, epsilon=n^(-O(1)) and delta=2^(-n^Omega(1)). We also construct PRGs with relative error that fool non-boolean distinguishers (in the sense introduced by Dubrov and Ishai (STOC 2006)). Our techniques use ideas from Paturi and Pudlak (STOC 2010), Trevisan and Vadhan (FOCS 2000), and Applebaum et al. (CCC 2015). Common themes in our proofs are "composing" a PRG/HSG with a combinatorial object such as dispersers and extractors, and the use of nondeterministic reductions in the spirit of Feige and Lund (Comp. Complexity 1997).
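
The relative-error PRG condition in the abstract is easier to read in display math; the LaTeX sketch below simply restates it.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Restatement of the (epsilon,delta)-re-PRG condition from the abstract.
$G:\{0,1\}^r \to \{0,1\}^n$ is an $(\varepsilon,\delta)$-re-PRG for a circuit $C$ if
\[
(1-\varepsilon)\cdot\Pr\bigl[C(U_n)=1\bigr]-\delta \;<\; \Pr\bigl[C(G(U_r))=1\bigr]
\;<\; (1+\varepsilon)\cdot\Pr\bigl[C(U_n)=1\bigr]+\delta .
\]
\end{document}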

Cite as

Sergei Artemenko, Russell Impagliazzo, Valentine Kabanets, and Ronen Shaltiel. Pseudorandomness When the Odds are Against You. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 9:1-9:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{artemenko_et_al:LIPIcs.CCC.2016.9,
  author =	{Artemenko, Sergei and Impagliazzo, Russell and Kabanets, Valentine and Shaltiel, Ronen},
  title =	{{Pseudorandomness When the Odds are Against You}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{9:1--9:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.9},
  URN =		{urn:nbn:de:0030-drops-58375},
  doi =		{10.4230/LIPIcs.CCC.2016.9},
  annote =	{Keywords: Derandomization, pseudorandom generator, hitting-set generator, relative error}
}
Document
Extended Abstract
Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions (Extended Abstract)

Authors: Benny Applebaum, Sergei Artemenko, Ronen Shaltiel, and Guang Yang

Published in: LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)


Abstract
A circuit C compresses a function f:{0,1}^n -> {0,1}^m if given an input x in {0,1}^n the circuit C can shrink x to a shorter l-bit string x' such that later, a computationally-unbounded solver D will be able to compute f(x) based on x'. In this paper we study the existence of functions which are incompressible by circuits of some fixed polynomial size s=n^c. Motivated by cryptographic applications, we focus on average-case (l,epsilon) incompressibility, which guarantees that on a random input x in {0,1}^n, for every size s circuit C:{0,1}^n -> {0,1}^l and any unbounded solver D, the success probability Pr_x[D(C(x))=f(x)] is upper-bounded by 2^(-m)+epsilon. While this notion of incompressibility appeared in several works (e.g., Dubrov and Ishai, STOC 06), so far no explicit constructions of efficiently computable incompressible functions were known. In this work we present the following results:
1. Assuming that E is hard for exponential size nondeterministic circuits, we construct a polynomial time computable boolean function f:{0,1}^n -> {0,1} which is incompressible by size n^c circuits with communication l=(1-o(1)) * n and error epsilon=n^(-c). Our technique generalizes to the case of PRGs against nonboolean circuits, improving and simplifying the previous construction of Shaltiel and Artemenko (STOC 14).
2. We show that it is possible to achieve a negligible error parameter epsilon=n^(-omega(1)) for nonboolean functions. Specifically, assuming that E is hard for exponential size Sigma_3-circuits, we construct a nonboolean function f:{0,1}^n -> {0,1}^m which is incompressible by size n^c circuits with l=Omega(n) and extremely small epsilon=n^(-c) * 2^(-m). Our construction combines the techniques of Trevisan and Vadhan (FOCS 00) with a new notion of relative-error deterministic extractor which may be of independent interest.
3. We show that the task of constructing an incompressible boolean function f:{0,1}^n -> {0,1} with negligible error parameter epsilon cannot be achieved by "existing proof techniques". Namely, nondeterministic reductions (or even Sigma_i reductions) cannot achieve epsilon=n^(-omega(1)) for boolean incompressible functions. Our results also apply to constructions of standard Nisan-Wigderson type PRGs and (standard) boolean functions that are hard on average, explaining, in retrospect, the limitations of existing constructions. Our impossibility result builds on an approach of Shaltiel and Viola (SIAM J. Comp., 2010).
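
For reference, the LaTeX sketch below restates the average-case incompressibility requirement described at the start of the abstract (with ℓ playing the role of the compression length l).

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Restated sketch of average-case incompressibility from the abstract.
$f:\{0,1\}^n \to \{0,1\}^m$ is average-case $(\ell,\varepsilon)$-incompressible for size-$s$ circuits if
for every size-$s$ circuit $C:\{0,1\}^n \to \{0,1\}^{\ell}$ and every computationally unbounded solver $D$,
\[
\Pr_{x \sim U_n}\bigl[D(C(x))=f(x)\bigr] \le 2^{-m}+\varepsilon .
\]
\end{document}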

Cite as

Benny Applebaum, Sergei Artemenko, Ronen Shaltiel, and Guang Yang. Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions (Extended Abstract). In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 582-600, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{applebaum_et_al:LIPIcs.CCC.2015.582,
  author =	{Applebaum, Benny and Artemenko, Sergei and Shaltiel, Ronen and Yang, Guang},
  title =	{{Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{582--600},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.582},
  URN =		{urn:nbn:de:0030-drops-50567},
  doi =		{10.4230/LIPIcs.CCC.2015.582},
  annote =	{Keywords: compression, pseudorandomness, extractors, nondeterministic reductions}
}