eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
0
0
10.4230/LIPIcs.CCC.2018
article
LIPIcs, Volume 102, CCC'18, Complete Volume
Servedio, Rocco A.
LIPIcs, Volume 102, CCC'18, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018/LIPIcs.CCC.2018.pdf
Theory of computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
0:i
0:xi
10.4230/LIPIcs.CCC.2018.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Servedio, Rocco A.
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.0/LIPIcs.CCC.2018.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
1:1
1:21
10.4230/LIPIcs.CCC.2018.1
article
Pseudorandom Generators from Polarizing Random Walks
Chattopadhyay, Eshan
1
Hatami, Pooya
2
Hosseini, Kaave
3
Lovett, Shachar
3
Cornell University and IAS, Princeton, USA
University of Texas at Austin, USA
University of California, San Diego, USA
We propose a new framework for constructing pseudorandom generators for n-variate Boolean functions. It is based on two new notions. First, we introduce fractional pseudorandom generators, which are pseudorandom distributions taking values in [-1,1]^n. Next, we use a fractional pseudorandom generator as steps of a random walk in [-1,1]^n that converges to {-1,1}^n. We prove that this random walk converges fast (in time logarithmic in n) due to polarization. As an application, we construct pseudorandom generators for Boolean functions with bounded Fourier tails. We use this to obtain a pseudorandom generator for functions with sensitivity s, whose seed length is polynomial in s. Other examples include functions computed by branching programs of various sorts or by bounded depth circuits.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.1/LIPIcs.CCC.2018.1.pdf
AC0
branching program
polarization
pseudorandom generators
random walks
sensitivity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
2:1
2:24
10.4230/LIPIcs.CCC.2018.2
article
A PRG for Boolean PTF of Degree 2 with Seed Length Subpolynomial in epsilon and Logarithmic in n
Kane, Daniel
1
Rao, Sankeerth
1
UC San Diego
We construct and analyze a pseudorandom generator for degree-2 Boolean polynomial threshold functions. Random constructions achieve the optimal seed length of O(log n + log 1/epsilon); however, the best known explicit construction [Ilias Diakonikolas, 2010] uses a seed length of O(log n * epsilon^{-8}). In this work we give an explicit construction that uses a seed length of O(log n + (1/epsilon)^{o(1)}). Note that this improves the seed length substantially, and that the dependence on the error epsilon is additive and grows only subpolynomially, as opposed to the previously known multiplicative polynomial dependence.
Our generator applies dimensionality reduction to a Nisan-Wigderson based pseudorandom generator given by Kabanets and Lu [Kabanets and Lu, 2018].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.2/LIPIcs.CCC.2018.2.pdf
Pseudorandomness
Polynomial Threshold Functions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
3:1
3:19
10.4230/LIPIcs.CCC.2018.3
article
A New Approach for Constructing Low-Error, Two-Source Extractors
Ben-Aroya, Avraham
1
Chattopadhyay, Eshan
2
3
Doron, Dean
1
Li, Xin
4
Ta-Shma, Amnon
1
The Blavatnik School of Computer Science, Tel-Aviv University, Tel Aviv 69978, Israel
Department of Computer Science, Cornell University and School of Mathematics, IAS, Ithaca, NY 14850, USA
Princeton, NJ 08540, USA
Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
Our main contribution in this paper is a new reduction from explicit two-source extractors for polynomially-small entropy rate and negligible error to explicit t-non-malleable extractors with seed-length that has a good dependence on t. Our reduction is based on the Chattopadhyay and Zuckerman framework (STOC 2016), and, surprisingly, we dispense with the use of resilient functions, which appeared to be a major ingredient there and in follow-up works. The use of resilient functions posed a fundamental barrier towards achieving negligible error, and our new reduction circumvents this bottleneck.
The parameters we require from t-non-malleable extractors for our reduction to work hold in a non-explicit construction, but currently it is not known how to explicitly construct such extractors. As a result we do not give an unconditional construction of an explicit low-error two-source extractor. Nonetheless, we believe our work gives a viable approach for solving the important problem of low-error two-source extractors. Furthermore, our work highlights an existing barrier in constructing low-error two-source extractors, and draws attention to the dependence of the seed-length of the non-malleable extractor on the parameter t. We hope this work will lead to further developments in explicit constructions of both non-malleable and two-source extractors.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.3/LIPIcs.CCC.2018.3.pdf
Two-Source Extractors
Non-Malleable Extractors
Pseudorandomness
Explicit Constructions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
4:1
4:16
10.4230/LIPIcs.CCC.2018.4
article
Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs
Guruswami, Venkatesan
1
https://orcid.org/0000-0001-7926-3396
Resch, Nicolas
2
Xing, Chaoping
3
https://orcid.org/0000-0002-1257-1033
Department of Computer Science, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, USA, 15213
Department of Computer Science, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, USA, 15213
School of Physical and Mathematical Sciences, Nanyang Technological University, 21 Nanyang Link, Singapore 637371
For a vector space F^n over a field F, an (eta,beta)-dimension expander of degree d is a collection of d linear maps Gamma_j : F^n -> F^n such that for every subspace U of F^n of dimension at most eta n, the image of U under all the maps, sum_{j=1}^d Gamma_j(U), has dimension at least beta dim(U). Over a finite field, a random collection of d = O(1) maps Gamma_j offers excellent "lossless" expansion with high probability: beta ~~ d for eta >= Omega(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even a modest expansion factor beta = 1+epsilon with constant degree is a non-trivial goal.
We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list-decoding in the rank-metric. Our approach yields the following:
- Lossless expansion over large fields; more precisely beta >= (1-epsilon)d and eta >= (1-epsilon)/d with d = O_epsilon(1), when |F| >= Omega(n).
- Expansion optimal up to constant factors over fields of arbitrarily small polynomial size; more precisely beta >= Omega(delta d) and eta >= Omega(1/(delta d)) with d = O_delta(1), when |F| >= n^{delta}. Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Omega(1), 1+Omega(1))-dimension expanders of constant degree over all fields. An approach based on "rank condensing via subspace designs" led to dimension expanders with beta >~ sqrt{d} over large fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.4/LIPIcs.CCC.2018.4.pdf
Algebraic constructions
coding theory
linear algebra
list-decoding
polynomial method
pseudorandomness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
5:1
5:31
10.4230/LIPIcs.CCC.2018.5
article
NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits
Hirahara, Shuichi
1
Oliveira, Igor C.
2
Santhanam, Rahul
2
Department of Computer Science, The University of Tokyo, Tokyo, Japan
Department of Computer Science, University of Oxford, Oxford, United Kingdom
The Minimum Circuit Size Problem (MCSP) asks for the size of the smallest Boolean circuit that computes a given truth table. It is a prominent problem in NP that is believed to be hard, but for which no proof of NP-hardness has been found. A significant number of works have demonstrated the central role of this problem and its variations in diverse areas such as cryptography, derandomization, proof complexity, learning theory, and circuit lower bounds.
The NP-hardness of computing the minimum number of terms in a DNF formula consistent with a given truth table was proved by W. Masek [William J. Masek, 1979] in 1979. In this work, we make the first progress in showing NP-hardness for more expressive classes of circuits, and establish an analogous result for the MCSP problem for depth-3 circuits of the form OR-AND-MOD_2. Our techniques extend to an NP-hardness result for MOD_m gates at the bottom layer under inputs from (Z / m Z)^n.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.5/LIPIcs.CCC.2018.5.pdf
NP-hardness
Minimum Circuit Size Problem
depth-3 circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
6:1
6:24
10.4230/LIPIcs.CCC.2018.6
article
Limits on Representing Boolean Functions by Linear Combinations of Simple Functions: Thresholds, ReLUs, and Low-Degree Polynomials
Williams, Richard Ryan
1
https://orcid.org/0000-0003-2326-2233
EECS and CSAIL, MIT, 32 Vassar St., Cambridge MA, USA
We consider the problem of representing Boolean functions exactly by "sparse" linear combinations (over R) of functions from some "simple" class C. In particular, given C we are interested in finding low-complexity functions lacking sparse representations. When C forms a basis for the space of Boolean functions (e.g., the set of PARITY functions or the set of conjunctions) this sort of problem has a well-understood answer; the problem becomes interesting when C is "overcomplete" and the set of functions is not linearly independent. We focus on the cases where C is the set of linear threshold functions, the set of rectified linear units (ReLUs), and the set of low-degree polynomials over a finite field, all of which are well-studied in different contexts.
We provide generic tools for proving lower bounds on representations of this kind. Applying these, we give several new lower bounds for "semi-explicit" Boolean functions. Let alpha(n) be an unbounded function such that n^{alpha(n)} is time constructible (e.g. alpha(n) = log^*(n)). We show:
- Functions in NTIME[n^{alpha(n)}] that require super-polynomially many linear threshold functions to represent (depth-two neural networks with sign activation function, a special case of depth-two threshold circuit lower bounds).
- Functions in NTIME[n^{alpha(n)}] that require super-polynomially many ReLU gates to represent (depth-two neural networks with ReLU activation function).
- Functions in NTIME[n^{alpha(n)}] that require super-polynomially many O(1)-degree F_p-polynomials to represent exactly, for every prime p (related to problems regarding Higher-Order "Uncertainty Principles"). We also obtain a function in E^{NP} requiring 2^{Omega(n)} linear combinations.
- Functions in NTIME[n^{poly(log n)}] that require super-polynomially many ACC ∘ THR circuits to represent exactly (further generalizing the recent lower bounds of Murray and the author). We also obtain "fixed-polynomial" lower bounds for functions in NP, for the first three representation classes. All our lower bounds are obtained via algorithms for analyzing linear combinations of simple functions in the above scenarios, in ways which substantially beat exhaustive search.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.6/LIPIcs.CCC.2018.6.pdf
linear threshold functions
lower bounds
neural networks
low-degree polynomials
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
7:1
7:20
10.4230/LIPIcs.CCC.2018.7
article
The Power of Natural Properties as Oracles
Impagliazzo, Russell
1
Kabanets, Valentine
2
Volkovich, Ilya
3
Department of Computer Science, University of California San Diego, La Jolla, CA, USA
School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
Department of EECS, CSE Division, University of Michigan, Ann Arbor, MI, USA
We study the power of randomized complexity classes that are given oracle access to a natural property of Razborov and Rudich (JCSS, 1997) or its special case, the Minimal Circuit Size Problem (MCSP). We show that in a number of complexity-theoretic results that use the SAT oracle, one can use the MCSP oracle instead. For example, we show that ZPEXP^{MCSP} !subseteq P/poly, which should be contrasted with the previously known circuit lower bound ZPEXP^{NP} !subseteq P/poly. We also show that, assuming the existence of Indistinguishability Obfuscators (IO), SAT and MCSP are equivalent in the sense that one has a ZPP algorithm if and only if the other one does. We interpret our results as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.7/LIPIcs.CCC.2018.7.pdf
natural properties
Minimal Circuit Size Problem (MCSP)
circuit lower bounds
hardness of MCSP
learning algorithms
obfuscation
Indistinguishability Obfuscators (IO)
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
8:1
8:37
10.4230/LIPIcs.CCC.2018.8
article
Linear Sketching over F_2
Kannan, Sampath
1
Mossel, Elchanan
2
Sanyal, Swagato
3
Yaroslavtsev, Grigory
4
University of Pennsylvania
Massachusetts Institute of Technology
Division of Mathematical Sciences, Nanyang Technological University, Singapore and Centre for Quantum Technologies, National University of Singapore, Singapore
Indiana University, Bloomington
We initiate a systematic study of linear sketching over F_2. For a given Boolean function, treated as f : F_2^n -> F_2, a randomized F_2-sketch is a distribution M over d x n matrices with elements over F_2 such that Mx suffices for computing f(x) with high probability. Such sketches for d << n can be used to design small-space distributed and streaming algorithms.
Motivated by these applications we study a connection between F_2-sketching and a two-player one-way communication game for the corresponding XOR-function. We conjecture that F_2-sketching is optimal for this communication game. Our results confirm this conjecture for multiple important classes of functions: 1) low-degree F_2-polynomials, 2) functions with sparse Fourier spectrum, 3) most symmetric functions, 4) recursive majority function. These results rely on a new structural theorem that shows that F_2-sketching is optimal (up to constant factors) for uniformly distributed inputs.
Furthermore, we show that (non-uniform) streaming algorithms that have to process random updates over F_2 can be constructed as F_2-sketches for the uniform distribution. In contrast with the previous work of Li, Nguyen and Woodruff (STOC'14), who show an analogous result for linear sketches over the integers in the adversarial setting, our result does not require the stream length to be triply exponential in n and holds for streams of length O(n) constructed through uniformly random updates.
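The definition above can be illustrated with a toy sketch. The names below (the support set S, the parity f, and the helper sketch) are hypothetical, chosen for illustration: for a parity function (a degree-1 F_2-polynomial), a single deterministic row already serves as an exact F_2-sketch.

```python
import random

n = 16
S = [1, 3, 4, 7, 10]  # hypothetical support of a parity function

def f(x):
    # f(x) = XOR of the coordinates of x indexed by S
    return sum(x[i] for i in S) % 2

# A 1 x n sketch matrix M over F_2: the indicator row of S.
M = [[1 if i in S else 0 for i in range(n)]]

def sketch(M, x):
    # Compute Mx over F_2.
    return [sum(mi * xi for mi, xi in zip(row, x)) % 2 for row in M]

# Mx (a single bit here, so d = 1 << n) determines f(x) exactly.
for _ in range(100):
    x = [random.randint(0, 1) for _ in range(n)]
    assert sketch(M, x)[0] == f(x)
```

Here the sketch is deterministic and errorless; the interesting cases studied in the paper are randomized sketches for functions that admit no small deterministic one.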
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.8/LIPIcs.CCC.2018.8.pdf
Linear sketch
Streaming algorithms
XOR-functions
Communication complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
9:1
9:17
10.4230/LIPIcs.CCC.2018.9
article
Communication Complexity with Small Advantage
Watson, Thomas
1
Department of Computer Science, University of Memphis, Memphis, TN, USA
We study problems in randomized communication complexity when the protocol is only required to attain some small advantage over purely random guessing, i.e., it produces the correct output with probability at least epsilon greater than one over the codomain size of the function. Previously, Braverman and Moitra (STOC 2013) showed that the set-intersection function requires Theta(epsilon n) communication to achieve advantage epsilon. Building on this, we prove the same bound for several variants of set-intersection: (1) the classic "tribes" function obtained by composing with And (provided 1/epsilon is at most the width of the And), and (2) the variant where the sets are uniquely intersecting and the goal is to determine partial information about (say, certain bits of the index of) the intersecting coordinate.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.9/LIPIcs.CCC.2018.9.pdf
Communication complexity
small advantage
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
10:1
10:21
10.4230/LIPIcs.CCC.2018.10
article
Algebraic Dependencies and PSPACE Algorithms in Approximative Complexity
Guo, Zeyu
1
Saxena, Nitin
1
Sinhababu, Amit
1
Department of Computer Science & Engineering, Indian Institute of Technology Kanpur
Testing whether a set f of polynomials has an algebraic dependence is a basic problem with several applications. The polynomials are given as algebraic circuits. The algebraic independence testing question is wide open over finite fields (Dvir, Gabizon, Wigderson, FOCS'07). Previously, the best complexity known was NP^{#P} (Mittmann, Saxena, Scheiblechner, Trans.AMS'14). In this work we put the problem in AM cap coAM. In particular, dependence testing is unlikely to be NP-hard, and joins the league of problems of "intermediate" complexity, e.g. graph isomorphism & integer factoring. Our proof method is algebro-geometric: estimating the size of the image/preimage of the polynomial map f over the finite field. A gap in this size is utilized in the AM protocols.
Next, we study the open question of testing whether every annihilator of f has zero constant term (Kayal, CCC'09). We give a geometric characterization using the Zariski closure of the image of f, introducing a new problem called approximate polynomials satisfiability (APS). We show that APS is NP-hard and, using projective algebraic-geometry ideas, we put APS in PSPACE (the prior best was EXPSPACE, via Gröbner basis computation). As an unexpected application of this to approximative complexity theory, we get that, over any field, hitting-sets for overline{VP} can be verified in PSPACE. This solves an open problem posed in (Mulmuley, FOCS'12, J.AMS 2017), greatly mitigating the GCT Chasm (exponentially, in terms of space complexity).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.10/LIPIcs.CCC.2018.10.pdf
algebraic dependence
Jacobian
Arthur-Merlin
approximate polynomial
satisfiability
hitting-set
border VP
finite field
PSPACE
EXPSPACE
GCT Chasm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
11:1
11:16
10.4230/LIPIcs.CCC.2018.11
article
Unbalancing Sets and an Almost Quadratic Lower Bound for Syntactically Multilinear Arithmetic Circuits
Alon, Noga
1
Kumar, Mrinal
2
Volk, Ben Lee
3
Sackler School of Mathematics and Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, 6997801, Israel, and Center for Mathematical Sciences and Applications, Harvard University, Cambridge, MA 02138, USA
Center for Mathematical Sciences and Applications, Harvard University, Cambridge, MA 02138, USA
Blavatnik School of Computer Science, Tel Aviv University , Tel Aviv, 6997801, Israel
We prove a lower bound of Omega(n^2/log^2 n) on the size of any syntactically multilinear arithmetic circuit computing some explicit multilinear polynomial f(x_1, ..., x_n). Our approach expands and improves upon a result of Raz, Shpilka and Yehudayoff ([Ran Raz et al., 2008]), who proved a lower bound of Omega(n^{4/3}/log^2 n) for the same polynomial. Our improvement follows from an asymptotically optimal lower bound for a generalized version of Galvin's problem in extremal set theory.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.11/LIPIcs.CCC.2018.11.pdf
Algebraic Complexity
Multilinear Circuits
Circuit Lower Bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
12:1
12:16
10.4230/LIPIcs.CCC.2018.12
article
Hardness Amplification for Non-Commutative Arithmetic Circuits
Carmosino, Marco L.
1
Impagliazzo, Russell
1
Lovett, Shachar
1
Mihajlin, Ivan
1
Department of Computer Science, University of California San Diego, La Jolla, CA, USA
We show that proving mildly super-linear lower bounds on non-commutative arithmetic circuits implies exponential lower bounds on non-commutative circuits. That is, non-commutative circuit complexity is a threshold phenomenon: an apparently weak lower bound actually suffices to show the strongest lower bounds we could desire.
This is part of a recent line of inquiry into why arithmetic circuit complexity, despite being a heavily restricted version of Boolean complexity, still cannot prove super-linear lower bounds on general devices. One can view our work as positive news (it suffices to prove weak lower bounds to get strong ones) or negative news (it is as hard to prove weak lower bounds as it is to prove strong ones). We leave it to the reader to determine their own level of optimism.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.12/LIPIcs.CCC.2018.12.pdf
arithmetic circuits
hardness amplification
circuit lower bounds
non-commutative computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
13:1
13:17
10.4230/LIPIcs.CCC.2018.13
article
Hardness vs Randomness for Bounded Depth Arithmetic Circuits
Chou, Chi-Ning
1
Kumar, Mrinal
2
Solomon, Noam
2
School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
Center for Mathematical Sciences and Applications, Harvard University, Cambridge, MA 02138, USA
In this paper, we study the question of hardness-randomness tradeoffs for bounded depth arithmetic circuits. We show that if there is a family of explicit polynomials {f_n}, where f_n is of degree O(log^2 n / log^2 log n) in n variables, such that f_n cannot be computed by depth-Delta arithmetic circuits of size poly(n), then there is a deterministic sub-exponential time algorithm for polynomial identity testing of arithmetic circuits of depth Delta-5.
This is incomparable to a beautiful result of Dvir et al. [SICOMP, 2009], where they showed that super-polynomial lower bounds for depth-Delta circuits for any explicit family of polynomials (of potentially high degree) imply sub-exponential time deterministic PIT for depth Delta-5 circuits of bounded individual degree. Thus, we remove the "bounded individual degree" condition from the work of Dvir et al. at the cost of strengthening the hardness assumption to hold for polynomials of low degree.
The key technical ingredient of our proof is the following property of roots of polynomials computable by a bounded depth arithmetic circuit: if f(x_1, x_2, ..., x_n) and P(x_1, x_2, ..., x_n, y) are polynomials of degree d and r respectively, such that P can be computed by a circuit of size s and depth Delta and P(x_1, x_2, ..., x_n, f) equiv 0, then f can be computed by a circuit of size poly(n, s, r, d^{O(sqrt{d})}) and depth Delta + 3. In comparison, Dvir et al. showed that f can be computed by a circuit of depth Delta + 3 and size poly(n, s, r, d^{t}), where t is the degree of P in y. Thus, the size upper bound in the work of Dvir et al. is non-trivial when t is small but d could be large, whereas our size upper bound is non-trivial when d is small, but t could be large.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.13/LIPIcs.CCC.2018.13.pdf
Algebraic Complexity
Polynomial Factorization
Circuit Lower Bounds
Polynomial Identity Testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
14:1
14:45
10.4230/LIPIcs.CCC.2018.14
article
On The Hardness of Approximate and Exact (Bichromatic) Maximum Inner Product
Chen, Lijie
1
Massachusetts Institute of Technology, USA
In this paper we study the (Bichromatic) Maximum Inner Product Problem (Max-IP), in which we are given sets A and B of vectors, and the goal is to find a in A and b in B maximizing the inner product a * b. Max-IP is very basic and serves as the base problem in the recent breakthrough of [Abboud et al., FOCS 2017] on hardness of approximation for polynomial-time problems. It is also used (implicitly) in the argument for hardness of exact l_2-Furthest Pair (and other important problems in computational geometry) in poly-log-log dimensions in [Williams, SODA 2018]. We have three main results regarding this problem.
- Characterization of Multiplicative Approximation. First, we study the best multiplicative approximation ratio for Boolean Max-IP in sub-quadratic time. We show that, for Max-IP with two sets of n vectors from {0,1}^{d}, there is an n^{2 - Omega(1)} time (d/log n)^{Omega(1)}-multiplicative-approximating algorithm, and we show this is conditionally optimal, as such a (d/log n)^{o(1)}-approximating algorithm would refute SETH. A similar characterization is also achieved for additive approximation for Max-IP.
- 2^{O(log^* n)}-dimensional Hardness for Exact Max-IP Over The Integers. Second, we revisit the hardness of solving Max-IP exactly for vectors with integer entries. We show that, under SETH, for Max-IP with sets of n vectors from Z^{d} for some d = 2^{O(log^* n)}, every exact algorithm requires n^{2 - o(1)} time. With the reduction from [Williams, SODA 2018], it follows that l_2-Furthest Pair and Bichromatic l_2-Closest Pair in 2^{O(log^* n)} dimensions require n^{2 - o(1)} time.
- Connection with NP * UPP Communication Protocols. Last, we establish a connection between conditional lower bounds for exact Max-IP with integer entries and NP * UPP communication protocols for Set-Disjointness, parallel to the connection between conditional lower bounds for approximating Max-IP and MA communication protocols for Set-Disjointness.
The lower bound in our first result is a direct corollary of the new MA protocol for Set-Disjointness introduced in [Rubinstein, STOC 2018], and our algorithms utilize the polynomial method and simple random sampling. Our second result follows from a new dimensionality self-reduction from the Orthogonal Vectors problem for n vectors from {0,1}^{d} to n vectors from Z^{l} where l = 2^{O(log^* d)}, dramatically improving the previous reduction in [Williams, SODA 2018]. The key technical ingredient is a recursive application of the Chinese Remainder Theorem.
As a side product, we obtain an MA communication protocol for Set-Disjointness with complexity O (sqrt{n log n log log n}), slightly improving the O (sqrt{n} log n) bound [Aaronson and Wigderson, TOCT 2009], and approaching the Omega(sqrt{n}) lower bound [Klauck, CCC 2003].
Moreover, we show that (under SETH) one can apply the O(sqrt{n}) BQP communication protocol for Set-Disjointness to prove near-optimal hardness for approximation to Max-IP with vectors in {-1,1}^d. This answers a question from [Abboud et al., FOCS 2017] in the affirmative.
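As a point of reference for the problem definition above, here is a minimal brute-force formulation of Bichromatic Max-IP; the instance A, B and the helper max_ip are hypothetical names for illustration only. The results above concern when this quadratic-time search can (or conditionally cannot) be beaten.

```python
from itertools import product

# Hypothetical toy instance: sets A, B of Boolean vectors in {0,1}^4.
A = [(1, 0, 1, 1), (0, 1, 0, 1)]
B = [(1, 1, 0, 0), (1, 0, 1, 0)]

def max_ip(A, B):
    # Exhaustive |A| * |B| * d search over all pairs (a, b);
    # returns max over a in A, b in B of the inner product a * b.
    return max(sum(ai * bi for ai, bi in zip(a, b)) for a, b in product(A, B))
```

On this instance the maximum, 2, is attained by a = (1, 0, 1, 1) and b = (1, 0, 1, 0).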
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.14/LIPIcs.CCC.2018.14.pdf
Maximum Inner Product
SETH
Hardness of Approximation in P
Fine-Grained Complexity
Hopcroft's Problem
Chinese Remainder Theorem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
15:1
15:22
10.4230/LIPIcs.CCC.2018.15
article
Hardness of Function Composition for Semantic Read once Branching Programs
Edmonds, Jeff
1
http://www.cs.yorku.ca/~jeff/
Medabalimi, Venkatesh
2
https://www.cs.toronto.edu/~venkatm
Pitassi, Toniann
3
https://www.cs.toronto.edu/~toni/
York University, 4700 Keele Street, Toronto, CANADA
University of Toronto, 10 King’s College Road, Toronto, CANADA
University of Toronto, 10 King’s College Road, Toronto, CANADA, and Institute for Advanced Study, Princeton NJ
In this work, we study time/space trade-offs for function composition. We prove asymptotically optimal lower bounds for function composition in the setting of nondeterministic read-once branching programs, for the syntactic model as well as the stronger semantic model of read-once nondeterministic computation. We prove that such branching programs for solving the tree evaluation problem over an alphabet of size k require size roughly k^{Omega(h)}, i.e., space Omega(h log k). Our lower bound nearly matches the natural upper bound, which follows from the best strategy for black-white pebbling the underlying tree. While previous super-polynomial lower bounds have been proven for read-once nondeterministic branching programs (for both the syntactic as well as the semantic models), we give the first lower bounds for iterated function composition, and in these models our lower bounds are near optimal.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.15/LIPIcs.CCC.2018.15.pdf
Branching Programs
Function Composition
Time-Space Tradeoffs
Semantic Read Once
Tree Evaluation Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
16:1
16:24
10.4230/LIPIcs.CCC.2018.16
article
Reordering Rule Makes OBDD Proof Systems Stronger
Buss, Sam
1
Itsykson, Dmitry
2
Knop, Alexander
3
Sokolov, Dmitry
4
University of California, San Diego, La Jolla, CA, USA
St. Petersburg Department of V.A. Steklov Institute of Mathematics of the Russian Academy of Sciences, St. Petersburg, Russia
University of California, San Diego, La Jolla, CA, USA, and St. Petersburg Department of V.A. Steklov Institute of Mathematics of the Russian Academy of Sciences, St. Petersburg, Russia
KTH Royal Institute of Technology, Stockholm, Sweden, and St. Petersburg Department of V.A. Steklov Institute of Mathematics of the Russian Academy of Sciences, St. Petersburg, Russia
Atserias, Kolaitis, and Vardi showed that the proof system of Ordered Binary Decision Diagrams with conjunction and weakening, OBDD(^, weakening), simulates CP^* (Cutting Planes with unary coefficients). We show that OBDD(^, weakening) can give exponentially shorter proofs than dag-like cutting planes. This is proved by showing that the Clique-Coloring tautologies have polynomial size proofs in the OBDD(^, weakening) system.
The reordering rule allows changing the variable order for OBDDs. We show that OBDD(^, weakening, reordering) is strictly stronger than OBDD(^, weakening). This is proved using the Clique-Coloring tautologies, and by transforming tautologies using coded permutations and orification. We also give CNF formulas which have polynomial size OBDD(^) proofs but require superpolynomial (in fact, quasipolynomial) size resolution proofs, thus partially resolving an open question posed by Groote and Zantema.
Applying dag-like and tree-like lifting techniques to the mentioned results, we completely analyze which of the systems among CP^*, OBDD(^), OBDD(^, reordering), OBDD(^, weakening) and OBDD(^, weakening, reordering) polynomially simulate each other. For dag-like proof systems, some of our separations are quasipolynomial and some are exponential; for tree-like systems, all of our separations are exponential.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.16/LIPIcs.CCC.2018.16.pdf
Proof complexity
OBDD
Tseitin formulas
the Clique-Coloring principle
lifting theorems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
17:1
17:37
10.4230/LIPIcs.CCC.2018.17
article
Testing Linearity against Non-Signaling Strategies
Chiesa, Alessandro
1
Manohar, Peter
1
Shinkar, Igor
1
UC Berkeley, Berkeley (CA), USA
Non-signaling strategies are collections of distributions with certain non-local correlations. They have been studied in Physics as a strict generalization of quantum strategies to understand the power and limitations of Nature's apparent non-locality. Recently, they have received attention in Theoretical Computer Science due to connections to Complexity and Cryptography.
We initiate the study of Property Testing against non-signaling strategies, focusing first on the classical problem of linearity testing (Blum, Luby, and Rubinfeld; JCSS 1993). We prove that any non-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions.
Quasi-distributions generalize the notion of probability distributions over global objects (such as functions) by allowing negative probabilities, while at the same time requiring that "local views" follow standard distributions (with non-negative probabilities). Quasi-distributions arise naturally in the study of Quantum Mechanics as a tool to describe various non-local phenomena.
Our analysis of the linearity test relies on Fourier analytic techniques applied to quasi-distributions. Along the way, we also establish general equivalences between non-signaling strategies and quasi-distributions, which we believe will provide a useful perspective on the study of Property Testing against non-signaling strategies beyond linearity testing.
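For reference, the classical single-prover linearity test of Blum, Luby, and Rubinfeld that this work generalizes samples x, y uniformly and checks f(x) + f(y) = f(x + y) over F_2^n. A minimal sketch of that classical test (function names and parameters are illustrative, not from the paper, and the non-signaling analysis in the paper is far more involved):

```python
import random

def blr_test(f, n, trials=200, rng=random.Random(0)):
    """BLR linearity test over F_2^n: sample x, y uniformly and check
    f(x) XOR f(y) == f(x XOR y). Returns the fraction of passing trials."""
    passes = 0
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        passes += (f(x) ^ f(y)) == f(x ^ y)
    return passes / trials

# A linear function f(x) = <a, x> mod 2 passes every trial.
a = 0b1011
linear = lambda x: bin(a & x).count("1") % 2
assert blr_test(linear, 4) == 1.0

# An indicator function of a small set is far from linear and fails
# a noticeable fraction of trials.
nonlinear = lambda x: 1 if x in (0b0011, 0b0101, 0b1001) else 0
print(blr_test(nonlinear, 4))  # strictly below 1.0
```

The BLR soundness guarantee (acceptance probability close to 1 implies closeness to some linear function) is exactly the statement whose non-signaling analogue is established in this paper.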
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.17/LIPIcs.CCC.2018.17.pdf
property testing
linearity testing
non-signaling strategies
quasi-distributions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
18:1
18:35
10.4230/LIPIcs.CCC.2018.18
article
Earthmover Resilience and Testing in Ordered Structures
Ben-Eliezer, Omri
1
Fischer, Eldar
2
School of Computer Science, Tel Aviv University, Tel Aviv, Israel
Faculty of Computer Science, Israel Institute of Technology (Technion), Haifa, Israel
One of the main challenges in property testing is to characterize those properties that are testable with a constant number of queries. For unordered structures such as graphs and hypergraphs this task has been mostly settled. However, for ordered structures such as strings, images, and ordered graphs, the characterization problem seems very difficult in general.
In this paper, we identify a wide class of properties of ordered structures - the earthmover resilient (ER) properties - and show that the "good behavior" of such properties allows us to obtain general testability results that are similar to (and more general than) those of unordered graphs. A property P is ER if, roughly speaking, slight changes in the order of the elements in an object satisfying P cannot make this object far from P. The class of ER properties includes, e.g., all unordered graph properties, many natural visual properties of images, such as convexity, and all hereditary properties of ordered graphs and images.
A special case of our results implies, building on a recent result of Alon and the authors, that the distance of a given image or ordered graph from any hereditary property can be estimated (with good probability) up to a constant additive error, using a constant number of queries.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.18/LIPIcs.CCC.2018.18.pdf
characterizations of testability
distance estimation
earthmover resilient
ordered structures
property testing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
19:1
19:29
10.4230/LIPIcs.CCC.2018.19
article
New Hardness Results for the Permanent Using Linear Optics
Grier, Daniel
1
Schaeffer, Luke
1
MIT, Cambridge, USA
In 2011, Aaronson gave a striking proof, based on quantum linear optics, that the problem of computing the permanent of a matrix is #P-hard. Aaronson's proof led naturally to hardness of approximation results for the permanent, and it was arguably simpler than Valiant's seminal proof of the same fact in 1979. Nevertheless, it did not show #P-hardness of the permanent for any class of matrices which was not previously known. In this paper, we present a collection of new results about matrix permanents that are derived primarily via these linear optical techniques.
First, we show that the problem of computing the permanent of a real orthogonal matrix is #P-hard. Much like Aaronson's original proof, this implies that even a multiplicative approximation remains #P-hard to compute. The hardness result even translates to permanents of orthogonal matrices over the finite field F_{p^4} for p != 2, 3. Interestingly, this characterization is tight: in fields of characteristic 2, the permanent coincides with the determinant; in fields of characteristic 3, one can efficiently compute the permanent of an orthogonal matrix by a nontrivial result of Kogan.
Finally, we use more elementary arguments to prove #P-hardness for the permanent of a positive semidefinite matrix. This result shows that certain probabilities of boson sampling experiments with thermal states are hard to compute exactly, despite the fact that they can be efficiently sampled by a classical computer.
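The characteristic-2 degeneracy mentioned above is easy to verify directly: over a field of characteristic 2, the signs in the Leibniz expansion vanish, so the permanent and determinant agree term by term. A brute-force sanity check (helper names are illustrative only):

```python
from itertools import permutations

def perm_mod(M, p):
    """Permanent of a square matrix mod p, via the Leibniz expansion."""
    n = len(M)
    total = 0
    for s in permutations(range(n)):
        term = 1
        for i in range(n):
            term = term * M[i][s[i]] % p
        total = (total + term) % p
    return total

def det_mod(M, p):
    """Determinant mod p via the signed Leibniz expansion."""
    n = len(M)
    total = 0
    for s in permutations(range(n)):
        sign = 1
        # Count inversions to get the parity of the permutation.
        for i in range(n):
            for j in range(i + 1, n):
                if s[i] > s[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term = term * M[i][s[i]]
        total = (total + term) % p
    return total

# Over F_2 the two coincide on all 512 binary 3x3 matrices.
mats = [[[(bits >> (3 * i + j)) & 1 for j in range(3)] for i in range(3)]
        for bits in range(512)]
assert all(perm_mod(M, 2) == det_mod(M, 2) for M in mats)

# Over F_3 they already differ: perm([[1,1],[1,1]]) = 2 but det = 0.
assert perm_mod([[1, 1], [1, 1]], 3) != det_mod([[1, 1], [1, 1]], 3)
```

This term-by-term collapse is why the paper's orthogonal-matrix hardness characterization necessarily excludes characteristic 2 (characteristic 3 is excluded for the separate, nontrivial reason attributed to Kogan).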
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.19/LIPIcs.CCC.2018.19.pdf
Permanent
Linear optics
#P-hardness
Orthogonal matrices
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
20:1
20:18
10.4230/LIPIcs.CCC.2018.20
article
Retracted: Two-Player Entangled Games are NP-Hard
Natarajan, Anand
1
https://orcid.org/0000-0003-3648-3844
Vidick, Thomas
2
https://orcid.org/0000-0002-6405-365X
Center for Theoretical Physics, MIT, Cambridge, USA
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, USA
The article, published on June 4th, 2018 in the CCC 2018 proceedings, has been retracted by agreement between the authors, the editor(s), and the publisher Schloss Dagstuhl / LIPIcs. The retraction has been agreed due to an error in the proof of the main result. This error is carried over from an error in the referenced paper “Three-player entangled XOR games are NP-hard to approximate” by Thomas Vidick (SICOMP ’16). That paper was used in an essential way to obtain the present result, and the error cannot be addressed through an erratum. See Retraction Notice on the last page of the PDF.
We show that it is NP-hard to approximate, to within an additive constant, the maximum success probability of players sharing quantum entanglement in a two-player game with classical questions of logarithmic length and classical answers of constant length. As a corollary, the inclusion NEXP subseteq MIP^*, first shown by Ito and Vidick (FOCS'12) with three provers, holds with two provers only. The proof is based on a simpler, improved analysis of the low-degree test of Raz and Safra (STOC'97) against two entangled provers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.20/LIPIcs.CCC.2018.20.pdf
low-degree testing
entangled nonlocal games
multi-prover interactive proof systems
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
21:1
21:25
10.4230/LIPIcs.CCC.2018.21
article
Complexity Classification of Conjugated Clifford Circuits
Bouland, Adam
1
https://orcid.org/0000-0002-8556-8337
Fitzsimons, Joseph F.
2
Koh, Dax Enshan
3
https://orcid.org/0000-0002-8968-591X
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372, Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Clifford circuits - i.e. circuits composed of only CNOT, Hadamard, and pi/4 phase gates - play a central role in the study of quantum computation. However, their computational power is limited: a well-known result of Gottesman and Knill states that Clifford circuits are efficiently classically simulable. We show that in contrast, "conjugated Clifford circuits" (CCCs) - where one additionally conjugates every qubit by the same one-qubit gate U - can perform hard sampling tasks. In particular, we fully classify the computational power of CCCs by showing that essentially any non-Clifford conjugating unitary U can give rise to sampling tasks which cannot be efficiently classically simulated to constant multiplicative error, unless the polynomial hierarchy collapses. Furthermore, by standard techniques, this hardness result can be extended to allow for the more realistic model of constant additive error, under a plausible complexity-theoretic conjecture. This work can be seen as progress towards classifying the computational power of all restricted quantum gate sets.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.21/LIPIcs.CCC.2018.21.pdf
gate set classification
quantum advantage
sampling problems
polynomial hierarchy
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
22:1
22:23
10.4230/LIPIcs.CCC.2018.22
article
Efficient Batch Verification for UP
Reingold, Omer
1
Rothblum, Guy N.
2
Rothblum, Ron D.
3
Stanford University, Palo Alto CA, USA
Weizmann Institute, Rehovot, Israel
MIT and Northeastern University, Cambridge and Boston MA, USA
Consider a setting in which a prover wants to convince a verifier of the correctness of k NP statements. For example, the prover wants to convince the verifier that k given integers N_1,...,N_k are all RSA moduli (i.e., products of equal length primes). Clearly this problem can be solved by simply having the prover send the k NP witnesses, but this involves a lot of communication. Can interaction help? In particular, is it possible to construct interactive proofs for this task whose communication grows sub-linearly with k?
Our main result is such an interactive proof for verifying the correctness of any k UP statements (i.e., NP statements that have a unique witness). The proof-system uses only a constant number of rounds and the communication complexity is k^delta * poly(m), where delta>0 is an arbitrarily small constant, m is the length of a single witness, and the poly term refers to a fixed polynomial that only depends on the language and not on delta. The (honest) prover strategy can be implemented in polynomial-time given access to the k (unique) witnesses.
Our proof leverages "interactive witness verification" (IWV), a new type of proof-system that may be of independent interest. An IWV is a proof-system in which the verifier needs to verify the correctness of an NP statement using: (i) a sublinear number of queries to an alleged NP witness, and (ii) a short interaction with a powerful but untrusted prover. In contrast to the setting of PCPs and Interactive PCPs, here the verifier only has access to the raw NP witness, rather than some encoding thereof.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.22/LIPIcs.CCC.2018.22.pdf
Interactive Proof
Batch Verification
Unique Solution
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
23:1
23:28
10.4230/LIPIcs.CCC.2018.23
article
A Tight Lower Bound for Entropy Flattening
Chen, Yi-Hsiu
1
Göös, Mika
1
Vadhan, Salil P.
2
Zhang, Jiapeng
3
School of Engineering and Applied Sciences, Harvard University, USA
Computer Science and Applied Mathematics, Harvard University, USA
University of California San Diego, USA
We study entropy flattening: Given a circuit C_X implicitly describing an n-bit source X (namely, X is the output of C_X on a uniform random input), construct another circuit C_Y describing a source Y such that (1) source Y is nearly flat (uniform on its support), and (2) the Shannon entropy of Y is monotonically related to that of X. The standard solution is to have C_Y evaluate C_X altogether Theta(n^2) times on independent inputs and concatenate the results (correctness follows from the asymptotic equipartition property). In this paper, we show that this is optimal among black-box constructions: Any circuit C_Y for entropy flattening that repeatedly queries C_X as an oracle requires Omega(n^2) queries.
Entropy flattening is a component used in the constructions of pseudorandom generators and other cryptographic primitives from one-way functions [Johan Håstad et al., 1999; John Rompel, 1990; Thomas Holenstein, 2006; Iftach Haitner et al., 2006; Iftach Haitner et al., 2009; Iftach Haitner et al., 2013; Iftach Haitner et al., 2010; Salil P. Vadhan and Colin Jia Zheng, 2012]. It is also used in reductions between problems complete for statistical zero-knowledge [Tatsuaki Okamoto, 2000; Amit Sahai and Salil P. Vadhan, 1997; Oded Goldreich et al., 1999; Vadhan, 1999]. The Theta(n^2) query complexity is often the main efficiency bottleneck. Our lower bound can be viewed as a step towards proving that the current best construction of pseudorandom generator from arbitrary one-way functions by Vadhan and Zheng (STOC 2012) has optimal efficiency.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.23/LIPIcs.CCC.2018.23.pdf
Entropy
One-way function
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
24:1
24:23
10.4230/LIPIcs.CCC.2018.24
article
Worst-Case to Average Case Reductions for the Distance to a Code
Ben-Sasson, Eli
1
https://orcid.org/0000-0002-0708-0483
Kopparty, Swastik
2
Saraf, Shubhangi
2
Technion, Haifa, Israel
Rutgers University, New Brunswick, NJ, USA
Algebraic proof systems reduce computational problems to problems about estimating the distance of a sequence of functions vec{u}=(u_1,..., u_k), given as oracles, from a linear error correcting code V. The soundness of such systems relies on methods that act "locally" on vec{u} and map it to a single function u^* that is, roughly, as far from V as are u_1,..., u_k.
Motivated by these applications to efficient proof systems, we study a natural worst-case to average-case reduction of distance for linear spaces, and show several general cases in which the following statement holds: If some member of a linear space U=span(u_1,...,u_k) is delta-far from (all elements of) V in relative Hamming distance, then nearly all elements of U are (1-epsilon)delta-far from V; the value of epsilon depends only on the distance of the code V and approaches 0 as that distance approaches 1. Our results improve on the previous state of the art, which showed that nearly all elements of U are delta/2-far from V [Rothblum, Vadhan and Wigderson, STOC 2013].
When V is a Reed-Solomon (RS) code, as is often the case for algebraic proof systems, we show how to boost distance via a new "local" transformation that may be useful elsewhere. Relying on the affine-invariance of V, we map a vector u to a random linear combination of affine transformations of u, and show this process amplifies distance from V. Assuming V is an RS code with sufficiently large distance, this amplification process converts a function u that is somewhat far from V to one that is (1-epsilon)-far from V; as above, epsilon depends only on the distance of V and approaches 0 as the distance of V approaches 1.
We give two concrete applications of these techniques. First, we revisit the axis-parallel low-degree test for bivariate polynomials of [Polishchuk-Spielman, STOC 1994] and prove a "list-decoding" type result for it, when the degree in one axis is extremely small. This result is similar to the recent list-decoding-regime result of [Chiesa, Manohar and Shinkar, RANDOM 2017] but is proved using different techniques, and allows the degree in the other axis to be arbitrarily large. Second, we improve the soundness analysis of the recent RS proximity testing protocol of [Ben-Sasson et al., ICALP 2018] and extend it to the "list-decoding" regime, bringing it closer to the Johnson bound.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.24/LIPIcs.CCC.2018.24.pdf
Proximity testing
Reed-Solomon codes
algebraic coding complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
25:1
25:12
10.4230/LIPIcs.CCC.2018.25
article
On the Complexity of the Cayley Semigroup Membership Problem
Fleischer, Lukas
1
FMI, University of Stuttgart , Universitätsstraße 38, 70569 Stuttgart, Germany
We investigate the complexity of deciding, given a multiplication table representing a semigroup S, a subset X of S and an element t of S, whether t can be expressed as a product of elements of X. It is well-known that this problem is {NL}-complete and that the more general Cayley groupoid membership problem, where the multiplication table is not required to be associative, is {P}-complete. For groups, the problem can be solved in deterministic log-space, which raised the question of determining the exact complexity of this variant. Barrington, Kadau, Lange and McKenzie showed that for Abelian groups and for certain solvable groups, the problem is contained in the complexity class {FOLL} and they concluded that these variants are not hard for any complexity class containing {Parity}. The more general case of arbitrary groups remained open. In this work, we show that for both groups and commutative semigroups, the problem is solvable in {qAC}^0 (quasi-polynomial size circuits of constant depth with unbounded fan-in) and conclude that these variants are also not hard for any class containing {Parity}. Moreover, we prove that {NL}-completeness already holds for the classes of 0-simple semigroups and nilpotent semigroups. Together with our results on groups and commutative semigroups, we prove the existence of a natural class of finite semigroups which generates a variety of finite semigroups with {NL}-complete Cayley semigroup membership, while the Cayley semigroup membership problem for the class itself is not {NL}-hard. We also discuss applications of our technique to {FOLL}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.25/LIPIcs.CCC.2018.25.pdf
subsemigroup
multiplication table
generators
completeness
quasi-polynomial-size circuits
FOLL
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
26:1
26:10
10.4230/LIPIcs.CCC.2018.26
article
Small Normalized Boolean Circuits for Semi-disjoint Bilinear Forms Require Logarithmic Conjunction-depth
Lingas, Andrzej
1
Department of Computer Science, Lund University, Box 118, 22100 Lund, Sweden
We consider normalized Boolean circuits that use binary operations of disjunction and conjunction, and unary negation, with the restriction that negation can only be applied to input variables. We derive a lower bound trade-off between the size of normalized Boolean circuits computing Boolean semi-disjoint bilinear forms and their conjunction-depth (i.e., the maximum number of and-gates on a directed path to an output gate). In particular, we show that any normalized Boolean circuit of at most epsilon log n conjunction-depth computing the n-dimensional Boolean vector convolution has Omega(n^{2-4 epsilon}) and-gates. Analogously, any normalized Boolean circuit of at most epsilon log n conjunction-depth computing the n x n Boolean matrix product has Omega(n^{3-4 epsilon}) and-gates. We complete our lower-bound trade-offs with upper-bound trade-offs of similar form yielded by the known fast algebraic algorithms.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.26/LIPIcs.CCC.2018.26.pdf
Boolean circuits
semi-disjoint bilinear form
Boolean vector convolution
Boolean matrix product
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
27:1
27:16
10.4230/LIPIcs.CCC.2018.27
article
Lower Bounds on Non-Adaptive Data Structures Maintaining Sets of Numbers, from Sunflowers
Natarajan Ramamoorthy, Sivaramakrishnan
1
Rao, Anup
1
Paul G. Allen School for Computer Science & Engineering, University of Washington, Seattle, USA
We prove new cell-probe lower bounds for dynamic data structures that maintain a subset of {1,2,...,n}, and compute various statistics of the set. The data structure is said to handle insertions non-adaptively if the locations of memory accessed depend only on the element being inserted, and not on the contents of the memory. For any such data structure that can compute the median of the set, we prove that: t_{med} >= Omega(n^{1/(t_{ins}+1)}/(w^2 * t_{ins}^2)), where t_{ins} is the number of memory locations accessed during insertions, t_{med} is the number of memory locations accessed to compute the median, and w is the number of bits stored in each memory location. When the data structure is able to perform deletions non-adaptively and compute the minimum non-adaptively, we prove t_{min} + t_{del} >= Omega(log n /(log w + log log n)), where t_{min} is the number of locations accessed to compute the minimum, and t_{del} is the number of locations accessed to perform deletions. For the predecessor search problem, where the data structure is required to compute the predecessor of any element in the set, we prove that if computing the predecessors can be done non-adaptively, then either t_{pred} >= Omega(log n/(log log n + log w)), or t_{ins} >= Omega(n^{1/(2(t_{pred}+1))}), where t_{pred} is the number of locations accessed to compute predecessors.
These bounds are nearly matched by Binary Search Trees in some range of parameters. Our results follow from using the Sunflower Lemma of Erdős and Rado [Paul Erdős and Richard Rado, 1960] together with several kinds of encoding arguments.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.27/LIPIcs.CCC.2018.27.pdf
Non-adaptive data structures
Sunflower lemma
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-06-04
102
28:1
28:37
10.4230/LIPIcs.CCC.2018.28
article
Dimension Reduction for Polynomials over Gaussian Space and Applications
Ghazi, Badih
1
Kamath, Pritish
2
Raghavendra, Prasad
3
Google Research, 1600 Amphitheatre Parkway Mountain View, CA 94043, USA
Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
University of California Berkeley, Berkeley, CA, USA
We introduce a new technique for reducing the dimension of the ambient space of low-degree polynomials in the Gaussian space while preserving their relative correlation structure. As an application, we obtain an explicit upper bound on the dimension of an epsilon-optimal noise-stable Gaussian partition. In fact, we address the more general problem of upper bounding the number of samples needed to epsilon-approximate any joint distribution that can be non-interactively simulated from a correlated Gaussian source. Our results significantly improve (from Ackermann-like to "merely" exponential) the upper bounds recently proved on the above problems by De, Mossel & Neeman [CCC 2017, SODA 2018 resp.] and imply decidability of the larger alphabet case of the gap non-interactive simulation problem posed by Ghazi, Kamath & Sudan [FOCS 2016].
Our technique of dimension reduction for low-degree polynomials is simple and can be seen as a generalization of the Johnson-Lindenstrauss lemma and could be of independent interest.
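Since the technique is presented as a generalization of the Johnson-Lindenstrauss lemma, the classical JL projection is a useful mental model: a random Gaussian matrix scaled by 1/sqrt(k) approximately preserves pairwise distances. A minimal self-contained sketch (the dimensions d=200, k=512, and function names are illustrative choices, not from the paper):

```python
import random
from math import sqrt

def jl_project(points, k, rng=random.Random(0)):
    """Project points in R^d to R^k via a random Gaussian matrix scaled
    by 1/sqrt(k); pairwise distances are preserved up to small distortion
    with high probability (Johnson-Lindenstrauss)."""
    d = len(points[0])
    G = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(k)]
    return [[sum(g[j] * p[j] for j in range(d)) / sqrt(k) for g in G]
            for p in points]

def dist(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

rng = random.Random(1)
pts = [[rng.gauss(0, 1) for _ in range(200)] for _ in range(10)]
proj = jl_project(pts, 512)
ratios = [dist(proj[i], proj[j]) / dist(pts[i], pts[j])
          for i in range(10) for j in range(i + 1, 10)]
print(min(ratios), max(ratios))  # both close to 1
```

The paper's contribution is a far stronger statement of this flavor: a dimension reduction that preserves the correlation structure of low-degree polynomials over Gaussian space, not just pairwise distances between points.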
https://drops.dagstuhl.de/storage/00lipics/lipics-vol102-ccc2018/LIPIcs.CCC.2018.28/LIPIcs.CCC.2018.28.pdf
Dimension reduction
Low-degree Polynomials
Noise Stability
Non-Interactive Simulation