11 Search Results for "Tell, Roei"


Document
Derandomization with Minimal Memory Footprint

Authors: Dean Doron and Roei Tell

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Existing proofs that deduce BPL = L from circuit lower bounds convert randomized algorithms into deterministic ones with a large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask: what is the minimal possible space overhead for derandomization? We show that BPSPACE[S] ⊆ DSPACE[c ⋅ S] for c ≈ 2, assuming space-efficient cryptographic PRGs and either: (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions about the power of catalytic computation, in a new setting of parameters that was not studied before, we even obtain c ≈ 1. Our results are constructive: given a candidate hard function (and a candidate cryptographic PRG), we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. A central ingredient in all our constructions is hardness-amplification reductions computable in logspace-uniform TC⁰, which were not previously known.
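
For orientation, the headline containment from this abstract, rendered in display form (under the assumptions listed above):

$$\mathsf{BPSPACE}[S] \subseteq \mathsf{DSPACE}[c \cdot S], \qquad c \approx 2,$$

with $c \approx 1$ under the additional catalytic-computation assumptions.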

Cite as

Dean Doron and Roei Tell. Derandomization with Minimal Memory Footprint. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 11:1-11:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{doron_et_al:LIPIcs.CCC.2023.11,
  author =	{Doron, Dean and Tell, Roei},
  title =	{{Derandomization with Minimal Memory Footprint}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{11:1--11:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.11},
  URN =		{urn:nbn:de:0030-drops-182816},
  doi =		{10.4230/LIPIcs.CCC.2023.11},
  annote =	{Keywords: derandomization, space-bounded computation, catalytic space}
}
Document
New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method

Authors: Lijie Chen

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
In this paper, we obtain several new results on lower bounds and derandomization for ACC⁰ circuits (constant-depth circuits consisting of AND/OR/MOD_m gates for a fixed constant m, a frontier class in circuit complexity):
1) We prove that any polynomial-time Merlin-Arthur proof system with an ACC⁰ verifier (denoted by MA_{ACC⁰}) can be simulated by a nondeterministic proof system with quasi-polynomial running time and polynomial proof length, on infinitely many input lengths. This improves the previous simulation by [Chen, Lyu, and Williams, FOCS 2020], which requires both quasi-polynomial running time and quasi-polynomial proof length.
2) We show that MA_{ACC⁰} cannot be computed by fixed-polynomial-size ACC⁰ circuits, and our hard languages are hard on a sufficiently dense set of input lengths.
3) We show that NEXP (nondeterministic exponential time) does not have ACC⁰ circuits of sub-half-exponential size, improving the previous sub-third-exponential-size lower bound for NEXP against ACC⁰ by [Williams, J. ACM 2014].

Combining our first and second results gives a conceptually simpler and derandomization-centric proof of the recent breakthrough result NQP := NTIME[2^{polylog(n)}] ⊄ ACC⁰ by [Murray and Williams, SICOMP 2020] (see the schematic after this abstract): instead of going through an easy witness lemma as they did, we first prove an ACC⁰ lower bound for a subclass of MA, and then derandomize that subclass into NQP while retaining its hardness against ACC⁰. Moreover, since our derandomization of MA_{ACC⁰} achieves polynomial proof length, we in fact prove that nondeterministic quasi-polynomial time with n^{ω(1)} bits of nondeterminism (denoted NTIMEGUESS[2^{polylog(n)}, n^{ω(1)}]) has no poly(n)-size ACC⁰ circuits, giving a new proof of a result by Vyas. Combining this with a win-win argument based on randomized encodings from [Chen and Ren, STOC 2020], we also prove that NTIMEGUESS[2^{polylog(n)}, n^{ω(1)}] cannot be (1/2+1/poly(n))-approximated by poly(n)-size ACC⁰ circuits, improving the recent strongly average-case lower bounds for NQP against ACC⁰ by [Chen and Ren, STOC 2020].

One interesting technical ingredient behind our second result is the construction of a PSPACE-complete language that is paddable, downward self-reducible, same-length checkable, and weakly error correctable. Moreover, all its reducibility properties have corresponding AC⁰[2] non-adaptive oracle circuits. Our construction builds on and improves upon similar constructions from [Trevisan and Vadhan, Complexity 2007] and [Chen, FOCS 2019], which all require at least TC⁰ oracle circuits for implementing these properties.
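
Schematically, and suppressing the infinitely-often and fixed-polynomial qualifiers, the derandomization-centric route described above combines results 1) and 2) as follows:

$$\underbrace{\mathsf{MA}_{\mathsf{ACC}^0} \not\subset \mathsf{ACC}^0}_{\text{result 2}} \;\wedge\; \underbrace{\mathsf{MA}_{\mathsf{ACC}^0} \subseteq \mathsf{NTIME}\big[2^{\mathrm{polylog}(n)}\big]}_{\text{result 1}} \;\Longrightarrow\; \mathsf{NQP} \not\subset \mathsf{ACC}^0,$$

where the simulation in result 1) must preserve hardness against ACC⁰.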

Cite as

Lijie Chen. New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 34:1-34:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chen:LIPIcs.ITCS.2023.34,
  author =	{Chen, Lijie},
  title =	{{New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{34:1--34:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.34},
  URN =		{urn:nbn:de:0030-drops-175373},
  doi =		{10.4230/LIPIcs.ITCS.2023.34},
  annote =	{Keywords: Circuit Lower Bounds, Derandomization, Algorithmic Method, ACC}
}
Document
Track A: Algorithms, Complexity and Games
Majority vs. Approximate Linear Sum and Average-Case Complexity Below NC¹

Authors: Lijie Chen, Zhenjian Lu, Xin Lyu, and Igor C. Oliveira

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We develop a general framework that characterizes strong average-case lower bounds against circuit classes 𝒞 contained in NC¹, such as AC⁰[⊕] and ACC⁰. We apply this framework to show:
- Generic seed reduction: pseudorandom generators (PRGs) against 𝒞 with seed length ≤ n-1 and error ε(n) = n^{-ω(1)} can be converted into PRGs of sub-polynomial seed length.
- Hardness under natural distributions: if 𝖤 (deterministic exponential time) is average-case hard against 𝒞 under some distribution, then 𝖤 is average-case hard against 𝒞 under the uniform distribution.
- Equivalence between worst-case and average-case hardness: worst-case lower bounds against MAJ∘𝒞 for problems in 𝖤 are equivalent to strong average-case lower bounds against 𝒞. This can be seen as a certain converse to the Discriminator Lemma [Hajnal et al., JCSS'93].

These results were not known to hold for circuit classes that cannot compute majority. Additionally, we prove that classical and recent approaches to worst-case lower bounds against ACC⁰, via communication lower bounds for NOF multi-party protocols [Håstad and Goldmann, CC'91; Razborov and Wigderson, IPL'93] and via degree lower bounds for torus polynomials [Bhrushundi et al., ITCS'19], also imply strong average-case hardness against ACC⁰ under the uniform distribution.

Crucial to these results is the use of non-black-box hardness-amplification techniques and the interplay between Majority (MAJ) and Approximate Linear Sum (SUM̃) gates. Roughly speaking, while a MAJ gate outputs 1 when the sum of the m input bits is at least m/2, a SUM̃ gate computes a real-valued bounded weighted sum of the input bits and outputs 1 (resp. 0) if the sum is close to 1 (resp. close to 0), with the promise that one of the two cases always holds (formalized in the sketch after this abstract). As part of our framework, we explore ideas introduced in [Chen and Ren, STOC'20] to show that, for the purpose of proving lower bounds, a top-layer MAJ gate is equivalent to a (weaker) SUM̃ gate. Motivated by this result, we extend the algorithmic method and establish stronger lower bounds against bounded-depth circuits with layers of MAJ and SUM̃ gates. Among them, we prove:
- Lower bound: NQP does not admit fixed quasi-polynomial-size MAJ∘SUM̃∘ACC⁰∘THR circuits. This is the first explicit lower bound against circuits with distinct layers of MAJ, SUM̃, and THR gates. Consequently, if the aforementioned equivalence between MAJ and SUM̃ as a top gate can be extended to intermediate layers, long sought-after lower bounds against the class THR∘THR of depth-2 polynomial-size threshold circuits would follow.
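
For concreteness, a sketch of the two top-gate semantics contrasted above (the exact boundedness and promise conditions are as in the paper):

$$\mathrm{MAJ}(x_1,\dots,x_m) = 1 \iff \sum_{i=1}^{m} x_i \ge m/2,$$

$$\widetilde{\mathrm{SUM}}(x_1,\dots,x_m) = \begin{cases} 1 & \text{if } \sum_{i} w_i x_i \text{ is close to } 1, \\ 0 & \text{if } \sum_{i} w_i x_i \text{ is close to } 0, \end{cases}$$

where the $w_i$ are bounded real weights and the input is promised to fall into one of the two cases.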

Cite as

Lijie Chen, Zhenjian Lu, Xin Lyu, and Igor C. Oliveira. Majority vs. Approximate Linear Sum and Average-Case Complexity Below NC¹. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 51:1-51:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2021.51,
  author =	{Chen, Lijie and Lu, Zhenjian and Lyu, Xin and Oliveira, Igor C.},
  title =	{{Majority vs. Approximate Linear Sum and Average-Case Complexity Below NC¹}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{51:1--51:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.51},
  URN =		{urn:nbn:de:0030-drops-141202},
  doi =		{10.4230/LIPIcs.ICALP.2021.51},
  annote =	{Keywords: circuit complexity, average-case hardness, complexity lower bounds}
}
Document
RANDOM
On Hitting-Set Generators for Polynomials That Vanish Rarely

Authors: Dean Doron, Amnon Ta-Shma, and Roei Tell

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
The problem of constructing hitting-set generators for polynomials of low degree is fundamental in complexity theory and has numerous well-known applications. We study the following question, which is a relaxation of this problem: Is it easier to construct a hitting-set generator for polynomials p: 𝔽ⁿ → 𝔽 of degree d if we are guaranteed that the polynomial vanishes on at most an ε > 0 fraction of its inputs? We will specifically be interested in tiny values of ε ≪ d/|𝔽|. This question was first considered by Goldreich and Wigderson (STOC 2014), who studied a specific setting geared for a particular application, and another specific setting was later studied by the third author (CCC 2017).

In this work our main interest is a systematic study of the relaxed problem, in its general form, and we prove results that significantly improve and extend the two previously-known results. Our contributions are of two types:
- Over fields of size 2 ≤ |𝔽| ≤ poly(n), we show that the seed length of any hitting-set generator for polynomials of degree d ≤ n^{.49} that vanish on at most ε = |𝔽|^{-t} of their inputs is at least Ω((d/t)⋅log(n)).
- Over 𝔽₂, we show that there exists a (non-explicit) hitting-set generator for polynomials of degree d ≤ n^{.99} that vanish on at most ε = |𝔽|^{-t} of their inputs with seed length O((d-t)⋅log(n)). We also show a polynomial-time computable hitting-set generator with seed length O((d-t)⋅(2^{d-t}+log(n))).

In addition, we prove that the problem we study is closely related to the following question: "Does there exist a small set S ⊆ 𝔽ⁿ whose degree-d closure is very large?", where the degree-d closure of S is the variety induced by the set of degree-d polynomials that vanish on S.
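
Rendered formally, the relaxed hitting requirement studied here reads as follows (a sketch; ℓ denotes the seed length):

A generator $G : \{0,1\}^{\ell} \to \mathbb{F}^{n}$ is a hitting-set generator for this class if for every polynomial $p : \mathbb{F}^{n} \to \mathbb{F}$ of degree at most $d$ satisfying $\Pr_{x \in \mathbb{F}^{n}}[\,p(x) = 0\,] \le \varepsilon$, there exists a seed $s \in \{0,1\}^{\ell}$ with $p(G(s)) \ne 0$.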

Cite as

Dean Doron, Amnon Ta-Shma, and Roei Tell. On Hitting-Set Generators for Polynomials That Vanish Rarely. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 7:1-7:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{doron_et_al:LIPIcs.APPROX/RANDOM.2020.7,
  author =	{Doron, Dean and Ta-Shma, Amnon and Tell, Roei},
  title =	{{On Hitting-Set Generators for Polynomials That Vanish Rarely}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{7:1--7:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.7},
  URN =		{urn:nbn:de:0030-drops-126109},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.7},
  annote =	{Keywords: Hitting-set generators, Polynomials over finite fields, Quantified derandomization}
}
Document
A Super-Quadratic Lower Bound for Depth Four Arithmetic Circuits

Authors: Nikhil Gupta, Chandan Saha, and Bhargav Thankey

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
We show an Ω̃(n^{2.5}) lower bound for general depth four arithmetic circuits computing an explicit n-variate degree-Θ(n) multilinear polynomial over any field of characteristic zero. To our knowledge, and as stated in the survey [Amir Shpilka and Amir Yehudayoff, 2010], no super-quadratic lower bound was known for depth four circuits over fields of characteristic ≠ 2 before this work. The previous best lower bound is Ω̃(n^{1.5}) [Abhijat Sharma, 2017], a slight quantitative improvement over the roughly Ω(n^{1.33}) bound obtained by invoking the super-linear lower bound for constant-depth circuits in [Ran Raz, 2010; Victor Shoup and Roman Smolensky, 1997].

Our lower bound proof follows the approach of the almost-cubic lower bound for depth three circuits in [Neeraj Kayal et al., 2016], replacing the shifted partials measure with a suitable variant of the projected shifted partials measure, but it differs from that proof at a crucial step: namely, the way "heavy" product gates are handled. Loosely speaking, a heavy product gate has a relatively high fan-in. Product gates of a depth three circuit compute products of affine forms, and so it is easy to prune Θ(n) many heavy product gates by projecting the circuit to a low-dimensional affine subspace [Neeraj Kayal et al., 2016; Amir Shpilka and Avi Wigderson, 2001]. However, in a depth four circuit, the second (from the top) layer of product gates computes products of polynomials of arbitrary degree, and hence it was not clear how to prune such heavy product gates from the circuit. We show that heavy product gates can also be eliminated from a depth four circuit by projecting the circuit to a low-dimensional affine subspace, unless the heavy gates together account for Ω̃(n^{2.5}) size. This part of our argument is inspired by a well-known greedy approximation algorithm for the weighted set-cover problem.
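
For reference, the shifted-partials measure that the projected variant refines is, in one common formulation (a sketch; the parameters k and ℓ are chosen in the analysis):

% One common formulation; the paper's projected variant differs as described above.
$$\mathrm{SP}_{k,\ell}(f) \;=\; \dim\,\mathrm{span}\big\{\, x^{\alpha} \cdot \partial^{\beta} f \;:\; |\beta| = k,\ |\alpha| \le \ell \,\big\},$$

i.e., the dimension of the space spanned by all order-$k$ partial derivatives of $f$, each multiplied by a monomial of degree at most $\ell$; the projected variant additionally projects these polynomials onto a restricted set of monomials.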

Cite as

Nikhil Gupta, Chandan Saha, and Bhargav Thankey. A Super-Quadratic Lower Bound for Depth Four Arithmetic Circuits. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 23:1-23:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{gupta_et_al:LIPIcs.CCC.2020.23,
  author =	{Gupta, Nikhil and Saha, Chandan and Thankey, Bhargav},
  title =	{{A Super-Quadratic Lower Bound for Depth Four Arithmetic Circuits}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{23:1--23:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.23},
  URN =		{urn:nbn:de:0030-drops-125757},
  doi =		{10.4230/LIPIcs.CCC.2020.23},
  annote =	{Keywords: depth four arithmetic circuits, Projected Shifted Partials, super-quadratic lower bound}
}
Document
Smooth and Strong PCPs

Authors: Orr Paradise

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs:
- A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
- A PCP is smooth if each location in a proof is queried with equal probability.

We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard- and Reed-Muller-based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth.

Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
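
Stated a bit more formally (a sketch; here π is the alleged proof, Π(x) the set of correct proofs for a correct claim x, and δ denotes relative Hamming distance):

% Sketch only; the exact quantification is as in the paper.
- Strong: $\Pr[\text{verifier rejects } \pi] \;\ge\; \kappa \cdot \delta(\pi, \Pi(x))$ for some constant $\kappa > 0$.
- Smooth: each proof location is queried with equal probability, i.e., $\Pr[\text{location } i \text{ is queried}]$ is the same for every $i$.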

Cite as

Orr Paradise. Smooth and Strong PCPs. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 2:1-2:41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{paradise:LIPIcs.ITCS.2020.2,
  author =	{Paradise, Orr},
  title =	{{Smooth and Strong PCPs}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{2:1--2:41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.2},
  URN =		{urn:nbn:de:0030-drops-116875},
  doi =		{10.4230/LIPIcs.ITCS.2020.2},
  annote =	{Keywords: Interactive and probabilistic proof systems, Probabilistically checkable proofs, Hardness of approximation}
}
Document
The Computational Cost of Asynchronous Neural Communication

Authors: Yael Hitron, Merav Parter, and Gur Perri

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Biological neural computation is inherently asynchronous due to large variations in neuronal spike timing and transmission delays. So far, most theoretical work on neural networks assumes the synchronous setting, where neurons fire simultaneously in discrete rounds. In this work we aim at understanding the barriers of asynchronous neural computation from an algorithmic perspective. We consider an extension of the widely studied model of synchronized spiking neurons [Maass, Neural Networks 97] to the asynchronous setting, taking into account edge and node delays.
- Edge delays: We define an asynchronous model for spiking neurons in which the latency values (i.e., transmission delays) of non-self-loop edges vary adversarially over time (a schematic firing rule appears after this abstract). This extends the recent work of [Hitron and Parter, ESA'19], in which the latency values are restricted to be fixed over time. Our first contribution is an impossibility result implying that the assumption that self-loop edges have no delays (as assumed in Hitron and Parter) is indeed necessary. Interestingly, in real biological networks self-loop edges (a.k.a. autapses) are indeed free of delays, and neuroscientists have noted that this is crucial for network synchronization. To capture the computational challenges in this setting, we first consider the implementation of a single NOT gate. This simple function already captures the fundamental difficulties of the asynchronous setting. Our key technical results are space and time upper and lower bounds for the NOT function; our time bounds are tight. In the spirit of distributed synchronizers [Awerbuch and Peleg, FOCS'90], and following [Hitron and Parter, ESA'19], we then provide a general synchronizer machinery. Our construction is very modular and is based on efficient circuit implementations of threshold gates. The complexity of our scheme is measured by the overhead in the number of neurons and in the computation time; both are shown to be polynomial in the largest latency value and in the largest incoming degree Δ of the original network.
- Node delays: We introduce the study of asynchronous communication due to variations in the response rates of the neurons in the network. In real brain networks, the round duration varies between different neurons. Our key result is a simulation methodology that allows one to transform the above synchronized solution under edge delays into a synchronized solution under node delays, while incurring a small overhead in space and time.
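
As a rough formalization of the edge-delay model above (hypothetical notation, for illustration only; the paper's definitions are more detailed): a deterministic threshold neuron u with threshold b(u) fires according to

% Hypothetical notation, illustration only.
$$\sigma_u(t+1) = 1 \iff \sum_{v \,:\, (v,u) \in E} w(v,u)\, \sigma_v\big(t - \ell_t(v,u)\big) \;\ge\; b(u),$$

where $\sigma_v(t)$ indicates whether v fired at time t, $w(v,u)$ is the synaptic weight, and the latency $\ell_t(v,u)$ of each non-self-loop edge may change adversarially with t (with zero latency on self-loops).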

Cite as

Yael Hitron, Merav Parter, and Gur Perri. The Computational Cost of Asynchronous Neural Communication. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 48:1-48:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hitron_et_al:LIPIcs.ITCS.2020.48,
  author =	{Hitron, Yael and Parter, Merav and Perri, Gur},
  title =	{{The Computational Cost of Asynchronous Neural Communication}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{48:1--48:47},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.48},
  URN =		{urn:nbn:de:0030-drops-117330},
  doi =		{10.4230/LIPIcs.ITCS.2020.48},
  annote =	{Keywords: asynchronous communication, asynchronous computation, spiking neurons, synchronizers}
}
Document
Beyond Natural Proofs: Hardness Magnification and Locality

Authors: Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Hardness magnification reduces major complexity separations (such as EXP ⊈ NC¹) to proving lower bounds for some natural problem Q against weak circuit models. Several recent works [Igor Carboni Oliveira and Rahul Santhanam, 2018; Dylan M. McKay et al., 2019; Lijie Chen and Roei Tell, 2019; Igor Carboni Oliveira et al., 2019; Lijie Chen et al., 2019; Igor Carboni Oliveira, 2019; Lijie Chen et al., 2019] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than Q, while Q itself is susceptible to lower bounds, but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon, and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program:
- Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [Alexander A. Razborov and Steven Rudich, 1997]?
- Can we adapt known lower bound techniques to establish the desired lower bound for Q?

We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem MCSP imply the non-existence of natural proofs. As a corollary of our result, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms.

Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower bound techniques to prove strong lower bounds via magnification. This is captured by a locality barrier: existing magnification theorems unconditionally show that the problems Q considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower bound techniques against weak circuit models quite often easily extend to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.

Cite as

Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam. Beyond Natural Proofs: Hardness Magnification and Locality. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 70:1-70:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2020.70,
  author =	{Chen, Lijie and Hirahara, Shuichi and Oliveira, Igor C. and Pich, J\'{a}n and Rajgopal, Ninad and Santhanam, Rahul},
  title =	{{Beyond Natural Proofs: Hardness Magnification and Locality}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{70:1--70:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.70},
  URN =		{urn:nbn:de:0030-drops-117550},
  doi =		{10.4230/LIPIcs.ITCS.2020.70},
  annote =	{Keywords: Hardness Magnification, Natural Proofs, Minimum Circuit Size Problem, Circuit Lower Bounds}
}
Document
Expander-Based Cryptography Meets Natural Proofs

Authors: Igor Carboni Oliveira, Rahul Santhanam, and Roei Tell

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We introduce new forms of attack on expander-based cryptography, and in particular on Goldreich's pseudorandom generator and one-way function. Our attacks exploit low circuit complexity of the underlying expander's neighbor function and/or of the local predicate. Our two key conceptual contributions are:
1) We put forward the possibility that the choice of expander matters in expander-based cryptography. In particular, using expanders whose neighbor function has low circuit complexity might compromise the security of Goldreich's PRG and OWF in certain settings.
2) We show that the security of Goldreich's PRG and OWF is closely related to two other long-standing problems: specifically, to the existence of unbalanced lossless expanders with low-complexity neighbor functions, and to limitations on circuit lower bounds (i.e., natural proofs). In particular, our results further motivate the investigation of affine/local unbalanced lossless expanders and of average-case lower bounds against DNF-XOR circuits.

We prove two types of technical results that support the above conceptual messages. First, we unconditionally break Goldreich's PRG when instantiated with a specific expander (whose existence we prove), for a class of predicates that match the parameters of the currently best "hard" candidates, in the regime of quasi-polynomial stretch. Second, conditioned on the existence of expanders whose neighbor functions have extremely low circuit complexity, we present attacks on Goldreich's generator in the regime of polynomial stretch. As one corollary, conditioned on the existence of the foregoing expanders, we show that either the parameters of natural properties for several constant-depth circuit classes cannot be improved, even mildly; or Goldreich's generator is insecure in the regime of large polynomial stretch, regardless of the predicate used.
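
For context, a sketch of the construction under attack: Goldreich's generator stretches a seed $x \in \{0,1\}^{n}$ to $m$ output bits using a bipartite expander with neighbor function Γ (mapping each output index to a k-tuple of input indices) and a local predicate $P : \{0,1\}^{k} \to \{0,1\}$:

$$G(x)_i \;=\; P\big(x_{\Gamma(i,1)}, \dots, x_{\Gamma(i,k)}\big), \qquad i = 1, \dots, m.$$

The circuit complexity of Γ and of P is precisely what the attacks above exploit.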

Cite as

Igor Carboni Oliveira, Rahul Santhanam, and Roei Tell. Expander-Based Cryptography Meets Natural Proofs. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{carbonioliveira_et_al:LIPIcs.ITCS.2019.18,
  author =	{Carboni Oliveira, Igor and Santhanam, Rahul and Tell, Roei},
  title =	{{Expander-Based Cryptography Meets Natural Proofs}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.18},
  URN =		{urn:nbn:de:0030-drops-101112},
  doi =		{10.4230/LIPIcs.ITCS.2019.18},
  annote =	{Keywords: Pseudorandom Generators, One-Way Functions, Expanders, Circuit Complexity}
}
Document
Lower Bounds on Black-Box Reductions of Hitting to Density Estimation

Authors: Roei Tell

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
Consider a deterministic algorithm that tries to find a string in an unknown set S ⊆ {0,1}ⁿ, under the promise that S has large density. The only information that the algorithm can obtain about S is estimates of the density of S in adaptively chosen subsets of {0,1}ⁿ, up to an additive error of μ > 0. This problem is appealing as a derandomization problem, when S is the set of satisfying inputs for a circuit C: {0,1}ⁿ → {0,1} that accepts many inputs: in this context, an algorithm as above constitutes a deterministic black-box reduction of the problem of hitting C (i.e., finding a satisfying input for C) to the problem of approximately counting the number of satisfying inputs for C on subsets of {0,1}ⁿ.

We prove tight lower bounds for this problem, demonstrating that naive approaches to solve the problem cannot be improved upon, in general. First, we show a tight trade-off between the estimation error μ and the required number of queries to solve the problem: when μ = O(log(n)/n), a polynomial number of queries suffices, and when μ ≥ 4⋅log(n)/n, the required number of queries is 2^{Θ(μ⋅n)}. Second, we show that the problem "resists" parallelization: any algorithm that works in iterations, and can obtain p = p(n) density estimates "in parallel" in each iteration, still requires Ω(n/(log(p)+log(1/μ))) iterations to solve the problem.

This work extends the well-known work of Karp, Upfal, and Wigderson (1988), who studied the setting in which S is only guaranteed to be non-empty (rather than dense), and the algorithm can only probe subsets for the existence of a solution in them. In addition, our lower bound on parallel algorithms affirms a weak version of a conjecture of Motwani, Naor, and Naor (1994); we also make progress on a stronger version of their conjecture.
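
To see how the two regimes of the trade-off meet, a one-line check: for $\mu = c \cdot \log(n)/n$,

$$2^{\Theta(\mu \cdot n)} \;=\; 2^{\Theta(c \cdot \log n)} \;=\; n^{\Theta(c)},$$

so the general bound $2^{\Theta(\mu \cdot n)}$ is polynomial exactly when $\mu = O(\log(n)/n)$, consistent with the first regime.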

Cite as

Roei Tell. Lower Bounds on Black-Box Reductions of Hitting to Density Estimation. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 58:1-58:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{tell:LIPIcs.STACS.2018.58,
  author =	{Tell, Roei},
  title =	{{Lower Bounds on Black-Box Reductions of Hitting to Density Estimation}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{58:1--58:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.58},
  URN =		{urn:nbn:de:0030-drops-85005},
  doi =		{10.4230/LIPIcs.STACS.2018.58},
  annote =	{Keywords: Approximate Counting, Lower Bounds, Derandomization, Parallel Algorithms, Query Complexity}
}
Document
Improved Bounds for Quantified Derandomization of Constant-Depth Circuits and Polynomials

Authors: Roei Tell

Published in: LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)


Abstract
This work studies the question of quantified derandomization, which was introduced by Goldreich and Wigderson (STOC 2014). The generic quantified derandomization problem is the following: for a circuit class 𝒞 and a parameter B = B(n), given a circuit C ∈ 𝒞 with n input bits, decide whether C rejects all of its inputs, or accepts all but B(n) of its inputs.

In the current work we consider three settings for this question. In each setting, we bring closer the parameter setting for which we can unconditionally construct relatively fast quantified derandomization algorithms, and the "threshold" values (for the parameters) for which any quantified derandomization algorithm implies a similar algorithm for standard derandomization. For constant-depth circuits, we construct an algorithm for quantified derandomization that works for a parameter B(n) that is only slightly smaller than a "threshold" parameter, and is significantly faster than the best currently-known algorithms for standard derandomization. On the way to this result we establish a new derandomization of the switching lemma, which significantly improves on previous results when the width of the formula is small. For constant-depth circuits with parity gates, we lower a "threshold" of Goldreich and Wigderson from depth five to depth four, and construct algorithms for quantified derandomization of a remaining type of layered depth-3 circuit that they left as an open problem. We also consider the question of constructing hitting-set generators for multivariate polynomials over large fields that vanish rarely, and prove two lower bounds on the seed lengths of such generators.

Several of our proofs rely on an interesting technique, which we call the randomized tests technique. Intuitively, a standard technique to deterministically find a "good" object is to construct a simple deterministic test that decides the set of good objects, and then "fool" that test using a pseudorandom generator. We show that a similar approach works also if the simple deterministic test is replaced with a distribution over simple tests, and we demonstrate the benefits of using a distribution instead of a single test.
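
In symbols, the promise problem defined above is (a sketch): given a circuit $C \in \mathcal{C}$ on $n$ input bits, distinguish

$$\big|C^{-1}(1)\big| = 0 \qquad \text{from} \qquad \big|C^{-1}(0)\big| \le B(n),$$

i.e., decide whether C rejects all of its inputs or accepts all but at most B(n) of them. Standard derandomization corresponds, roughly, to the much larger threshold B(n) = 2ⁿ/2.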

Cite as

Roei Tell. Improved Bounds for Quantified Derandomization of Constant-Depth Circuits and Polynomials. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 13:1-13:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{tell:LIPIcs.CCC.2017.13,
  author =	{Tell, Roei},
  title =	{{Improved Bounds for Quantified Derandomization of Constant-Depth Circuits and Polynomials}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{13:1--13:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.13},
  URN =		{urn:nbn:de:0030-drops-75349},
  doi =		{10.4230/LIPIcs.CCC.2017.13},
  annote =	{Keywords: Computational complexity, derandomization, quantified derandomization, hitting-set generator, constant-depth circuits}
}
  • Refine by Author
  • 5 Tell, Roei
  • 3 Chen, Lijie
  • 2 Doron, Dean
  • 2 Oliveira, Igor C.
  • 2 Santhanam, Rahul

  • Refine by Classification
  • 3 Theory of computation → Pseudorandomness and derandomization
  • 2 Theory of computation → Circuit complexity
  • 1 Theory of computation
  • 1 Theory of computation → Algebraic complexity theory
  • 1 Theory of computation → Complexity theory and logic

  • Refine by Keyword
  • 2 Circuit Lower Bounds
  • 2 Derandomization
  • 2 derandomization
  • 1 ACC
  • 1 Algorithmic Method

  • Refine by Type
  • 11 document

  • Refine by Publication Year
  • 5 2020
  • 2 2023
  • 1 2017
  • 1 2018
  • 1 2019
  • 1 2021
