LIPIcs, Volume 33

30th Conference on Computational Complexity (CCC 2015)



Publication Details

  • published at: 2015-06-06
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-81-1
  • DBLP: db/conf/coco/coco2015

Documents

Document
Complete Volume
LIPIcs, Volume 33, CCC'15, Complete Volume

Authors: David Zuckerman


Abstract
LIPIcs, Volume 33, CCC'15, Complete Volume

Cite as

30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@Proceedings{zuckerman:LIPIcs.CCC.2015,
  title =	{{LIPIcs, Volume 33, CCC'15, Complete Volume}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015},
  URN =		{urn:nbn:de:0030-drops-52111},
  doi =		{10.4230/LIPIcs.CCC.2015},
  annote =	{Keywords: Theory of Computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: David Zuckerman


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. i-xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{zuckerman:LIPIcs.CCC.2015.i,
  author =	{Zuckerman, David},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{i--xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.i},
  URN =		{urn:nbn:de:0030-drops-50790},
  doi =		{10.4230/LIPIcs.CCC.2015.i},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Strong Locally Testable Codes with Relaxed Local Decoders

Authors: Oded Goldreich, Tom Gur, and Ilan Komargodski


Abstract
Locally testable codes (LTCs) are error-correcting codes that admit very efficient codeword tests. An LTC is said to be strong if it has a proximity-oblivious tester; that is, a tester that makes only a constant number of queries and rejects non-codewords with probability that depends solely on their distance from the code. Locally decodable codes (LDCs) are complementary to LTCs. While the latter allow for highly efficient rejection of strings that are far from being codewords, LDCs allow for highly efficient recovery of individual bits of the information that is encoded in strings that are close to being codewords. While there are known constructions of strong-LTCs with nearly-linear length, the existence of a constant-query LDC with polynomial length is a major open problem. In an attempt to bypass this barrier, Ben-Sasson et al. (SICOMP 2006) introduced a natural relaxation of local decodability, called relaxed-LDCs. This notion requires local recovery of nearly all individual information-bits, yet allows for recovery-failure (but not error) on the rest. Ben-Sasson et al. constructed a constant-query relaxed-LDC with nearly-linear length (i.e., length k^(1 + alpha) for an arbitrarily small constant alpha > 0, where k is the dimension of the code). This work focuses on obtaining strong testability and relaxed decodability simultaneously. We construct a family of binary linear codes of nearly-linear length that are both strong-LTCs (with one-sided error) and constant-query relaxed-LDCs. This improves upon the previously known constructions, which obtain either weak LTCs or require polynomial length. Our construction heavily relies on tensor codes and PCPs. In particular, we provide strong canonical PCPs of proximity for membership in any linear code with constant rate and relative distance. Loosely speaking, these are PCPs of proximity wherein the verifier is proximity oblivious (similarly to strong-LTCs) and every valid statement has a unique canonical proof. Furthermore, the verifier is required to reject non-canonical proofs (even for valid statements). As an application, we improve the best known separation result between the complexity of decision and verification in the setting of property testing.
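
For reference, the "strong" (proximity-oblivious) testing condition described above can be rendered as follows; the rejection-rate function \rho is our notation for the abstract's informal statement:

  \[ \Pr\big[\, T^{w} \text{ rejects} \,\big] \;\ge\; \rho\big(\delta_C(w)\big) \quad \text{for every } w \notin C, \]

where the tester T makes O(1) queries to w, \delta_C(w) denotes the relative distance of w from the code C, and \rho is some fixed monotone function independent of the input length.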

Cite as

Oded Goldreich, Tom Gur, and Ilan Komargodski. Strong Locally Testable Codes with Relaxed Local Decoders. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 1-41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{goldreich_et_al:LIPIcs.CCC.2015.1,
  author =	{Goldreich, Oded and Gur, Tom and Komargodski, Ilan},
  title =	{{Strong Locally Testable Codes with Relaxed Local Decoders}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{1--41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.1},
  URN =		{urn:nbn:de:0030-drops-50507},
  doi =		{10.4230/LIPIcs.CCC.2015.1},
  annote =	{Keywords: Locally Testable Codes, Locally Decodable Codes, PCPs of Proximity}
}
Document
An Entropy Sumset Inequality and Polynomially Fast Convergence to Shannon Capacity Over All Alphabets

Authors: Venkatesan Guruswami and Ameya Velingker


Abstract
We prove a lower estimate on the increase in entropy when two copies of a conditional random variable X | Y, with X supported on Z_q = {0,1,...,q-1} for prime q, are summed modulo q. Specifically, given two i.i.d. copies (X_1,Y_1) and (X_2,Y_2) of a pair of random variables (X,Y), with X taking values in Z_q, we show H(X_1 + X_2 | Y_1, Y_2) - H(X|Y) >= alpha(q) * H(X|Y) * (1 - H(X|Y)) for some alpha(q) > 0, where H(.) is the normalized (by factor log_2(q)) entropy. In particular, if X | Y is not close to being fully random or fully deterministic, i.e., H(X|Y) \in (gamma, 1-gamma), then the entropy of the sum increases by Omega_q(gamma). Our motivation is an effective analysis of the finite-length behavior of polar codes, for which the linear dependence on gamma is quantitatively important. The assumption of q being prime is necessary: for X supported uniformly on a proper subgroup of Z_q we have H(X+X) = H(X). For X supported on infinite groups without a finite subgroup (the torsion-free case) and no conditioning, a sumset inequality for the absolute increase in (unnormalized) entropy was shown by Tao in [Tao, Combin. Probab. Comput. 2010]. We use our sumset inequality to analyze Arikan's construction of polar codes and prove that for any q-ary source X, where q is any fixed prime, and any epsilon > 0, polar codes allow efficient data compression of N i.i.d. copies of X into (H(X)+epsilon)N q-ary symbols, as soon as N is polynomially large in 1/epsilon. We can get capacity-achieving source codes with similar guarantees for composite alphabets, by factoring q into primes and combining different polar codes for each prime in the factorization. A consequence of our result for noisy channel coding is that for all discrete memoryless channels, there are explicit codes enabling reliable communication within epsilon > 0 of the symmetric Shannon capacity for a block length and decoding complexity bounded by a polynomial in 1/epsilon. The result was previously shown for the special case of binary-input channels [Guruswami/Xia, FOCS'13; Hassani/Alishahi/Urbanke, CoRR 2013], and this work extends the result to channels over any alphabet.
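
In display form, the entropy sumset inequality stated above reads:

  \[ H(X_1 + X_2 \mid Y_1, Y_2) \;-\; H(X \mid Y) \;\ge\; \alpha(q)\, H(X \mid Y)\,\big(1 - H(X \mid Y)\big), \]

where H(.) is entropy normalized by log_2(q) and the sum X_1 + X_2 is taken modulo the prime q.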

Cite as

Venkatesan Guruswami and Ameya Velingker. An Entropy Sumset Inequality and Polynomially Fast Convergence to Shannon Capacity Over All Alphabets. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 42-57, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.CCC.2015.42,
  author =	{Guruswami, Venkatesan and Velingker, Ameya},
  title =	{{An Entropy Sumset Inequality and Polynomially Fast Convergence to Shannon Capacity Over All Alphabets}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{42--57},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.42},
  URN =		{urn:nbn:de:0030-drops-50755},
  doi =		{10.4230/LIPIcs.CCC.2015.42},
  annote =	{Keywords: Polar codes, polynomial gap to capacity, entropy sumset inequality, arbitrary alphabets}
}
Document
The List-Decoding Size of Fourier-Sparse Boolean Functions

Authors: Ishay Haviv and Oded Regev


Abstract
A function defined on the Boolean hypercube is k-Fourier-sparse if it has at most k nonzero Fourier coefficients. For a function f: F_2^n -> R and parameters k and d, we prove a strong upper bound on the number of k-Fourier-sparse Boolean functions that disagree with f on at most d inputs. Our bound implies that the number of uniform and independent random samples needed for learning the class of k-Fourier-sparse Boolean functions on n variables exactly is at most O(n * k * log(k)). As an application, we prove an upper bound on the query complexity of testing Booleanity of Fourier-sparse functions. Our bound is tight up to a logarithmic factor and quadratically improves on a result due to Gur and Tamuz [Chicago J. Theor. Comput. Sci., 2013].
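
As a reminder of the standard notation behind the abstract (not spelled out there): writing the Fourier expansion of f over F_2^n as

  \[ f(x) \;=\; \sum_{S \subseteq [n]} \hat{f}(S)\, (-1)^{\sum_{i \in S} x_i}, \]

f is k-Fourier-sparse when at most k of the coefficients \hat{f}(S) are nonzero, and the bound above says that O(n * k * log(k)) uniform random examples suffice for exact learning of this class.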

Cite as

Ishay Haviv and Oded Regev. The List-Decoding Size of Fourier-Sparse Boolean Functions. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 58-71, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{haviv_et_al:LIPIcs.CCC.2015.58,
  author =	{Haviv, Ishay and Regev, Oded},
  title =	{{The List-Decoding Size of Fourier-Sparse Boolean Functions}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{58--71},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.58},
  URN =		{urn:nbn:de:0030-drops-50600},
  doi =		{10.4230/LIPIcs.CCC.2015.58},
  annote =	{Keywords: Fourier-sparse functions, list-decoding, learning theory, property testing}
}
Document
Nonclassical Polynomials as a Barrier to Polynomial Lower Bounds

Authors: Abhishek Bhowmick and Shachar Lovett


Abstract
The problem of constructing explicit functions which cannot be approximated by low degree polynomials has been extensively studied in computational complexity, motivated by applications in circuit lower bounds, pseudo-randomness, constructions of Ramsey graphs and locally decodable codes. Still, most of the known lower bounds become trivial for polynomials of super-logarithmic degree. Here, we suggest a new barrier explaining this phenomenon. We show that many of the existing lower bound proof techniques extend to nonclassical polynomials, an extension of classical polynomials which arose in higher order Fourier analysis. Moreover, these techniques are tight for nonclassical polynomials of logarithmic degree.

Cite as

Abhishek Bhowmick and Shachar Lovett. Nonclassical Polynomials as a Barrier to Polynomial Lower Bounds. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 72-87, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bhowmick_et_al:LIPIcs.CCC.2015.72,
  author =	{Bhowmick, Abhishek and Lovett, Shachar},
  title =	{{Nonclassical Polynomials as a Barrier to Polynomial Lower Bounds}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{72--87},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.72},
  URN =		{urn:nbn:de:0030-drops-50491},
  doi =		{10.4230/LIPIcs.CCC.2015.72},
  annote =	{Keywords: nonclassical polynomials, polynomials, lower bounds, barrier}
}
Document
Simplified Lower Bounds on the Multiparty Communication Complexity of Disjointness

Authors: Anup Rao and Amir Yehudayoff


Abstract
We show that the deterministic number-on-forehead communication complexity of set disjointness for k parties on a universe of size n is Omega(n/4^k). This gives the first lower bound that is linear in n, nearly matching Grolmusz's upper bound of O(log^2(n) + k^2n/2^k). We also simplify the proof of Sherstov's Omega(sqrt(n)/(k2^k)) lower bound for the randomized communication complexity of set disjointness.
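
The two bounds juxtaposed, with D_k denoting deterministic k-party number-on-forehead communication (our shorthand, not the abstract's):

  \[ D_k(\mathrm{DISJ}_n) \;\ge\; \Omega\!\left(\frac{n}{4^{k}}\right), \qquad D_k(\mathrm{DISJ}_n) \;\le\; O\!\left(\log^2 n + \frac{k^2 n}{2^{k}}\right). \]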

Cite as

Anup Rao and Amir Yehudayoff. Simplified Lower Bounds on the Multiparty Communication Complexity of Disjointness. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 88-101, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{rao_et_al:LIPIcs.CCC.2015.88,
  author =	{Rao, Anup and Yehudayoff, Amir},
  title =	{{Simplified Lower Bounds on the Multiparty Communication Complexity of Disjointness}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{88--101},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.88},
  URN =		{urn:nbn:de:0030-drops-50769},
  doi =		{10.4230/LIPIcs.CCC.2015.88},
  annote =	{Keywords: communication complexity, set disjointness, number on forehead, lower bounds}
}
Document
How to Compress Asymmetric Communication

Authors: Sivaramakrishnan Natarajan Ramamoorthy and Anup Rao


Abstract
We study the relationship between communication and information in 2-party communication protocols when the information is asymmetric. If I^A denotes the number of bits of information revealed by the first party, I^B denotes the information revealed by the second party, and C is the number of bits of communication in the protocol, we show that (i) one can simulate the protocol using order I^A + (C^3 * I^B)^(1/4) * log(C) + (C * I^B)^(1/2) * log(C) bits of communication, and (ii) one can simulate the protocol using order I^A * 2^(O(I^B)) bits of communication. The first result gives the best known bound on the complexity of a simulation when I^A >> I^B, C^(3/4). The second gives the best known bound when I^B << log C. In addition we show that if a function is computed by a protocol with asymmetric information complexity, then the inputs must have a large, nearly monochromatic rectangle of the right dimensions, a fact that is useful for proving lower bounds on lopsided communication problems.
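
Written out, the two simulation bounds are (C' denotes the communication of the simulating protocol; our shorthand):

  \[ C' \;=\; O\!\Big( I^A + (C^3 I^B)^{1/4} \log C + (C I^B)^{1/2} \log C \Big) \quad\text{and}\quad C' \;=\; O\!\Big( I^A \cdot 2^{O(I^B)} \Big). \]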

Cite as

Sivaramakrishnan Natarajan Ramamoorthy and Anup Rao. How to Compress Asymmetric Communication. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 102-123, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{natarajanramamoorthy_et_al:LIPIcs.CCC.2015.102,
  author =	{Natarajan Ramamoorthy, Sivaramakrishnan and Rao, Anup},
  title =	{{How to Compress Asymmetric Communication}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{102--123},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.102},
  URN =		{urn:nbn:de:0030-drops-50679},
  doi =		{10.4230/LIPIcs.CCC.2015.102},
  annote =	{Keywords: Communication Complexity, Interactive Compression, Information Complexity}
}
Document
Majority is Incompressible by AC^0[p] Circuits

Authors: Igor Carboni Oliveira and Rahul Santhanam


Abstract
We consider C-compression games, a hybrid model between computational and communication complexity. A C-compression game for a function f: {0,1}^n -> {0,1} is a two-party communication game, where the first party Alice knows the entire input x but is restricted to use strategies computed by C-circuits, while the second party Bob initially has no information about the input, but is computationally unbounded. The parties implement an interactive communication protocol to decide the value of f(x), and the communication cost of the protocol is the maximum number of bits sent by Alice as a function of n = |x|. We show that any AC_d[p]-compression protocol to compute Majority_n requires communication at least n / (log(n))^(2d + O(1)), where p is prime, and AC_d[p] denotes polynomial-size, unbounded fan-in, depth-d Boolean circuits extended with modulo-p gates. This bound is essentially optimal, and settles a question of Chattopadhyay and Santhanam (2012). This result has a number of consequences, and yields a tight lower bound on the total fan-in of oracle gates in constant-depth oracle circuits computing Majority_n. We define multiparty compression games, where Alice interacts in parallel with a polynomial number of players that are not allowed to communicate with each other, and communication cost is defined as the sum of the lengths of the longest messages sent by Alice during each round. In this setting, we prove that the randomized r-round AC^0[p]-compression cost of Majority_n is n^(Theta(1/r)). This result implies almost tight lower bounds on the maximum individual fan-in of oracle gates in certain restricted bounded-depth oracle circuits computing Majority_n. Stronger lower bounds for functions in NP would separate NP from NC^1. Finally, we consider the round separation question for two-party AC-compression games, and significantly improve known separations between r-round and (r+1)-round protocols, for any constant r.
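
The main single-player lower bound in display form: any AC_d[p]-compression protocol for Majority_n has communication cost at least

  \[ \frac{n}{(\log n)^{2d + O(1)}}. \]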

Cite as

Igor Carboni Oliveira and Rahul Santhanam. Majority is Incompressible by AC^0[p] Circuits. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 124-157, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{oliveira_et_al:LIPIcs.CCC.2015.124,
  author =	{Oliveira, Igor Carboni and Santhanam, Rahul},
  title =	{{Majority is Incompressible by AC^0\lbrackp\rbrack Circuits}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{124--157},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.124},
  URN =		{urn:nbn:de:0030-drops-50658},
  doi =		{10.4230/LIPIcs.CCC.2015.124},
  annote =	{Keywords: interactive communication, compression, circuit lower bound}
}
Document
Lower Bounds for Depth Three Arithmetic Circuits with Small Bottom Fanin

Authors: Neeraj Kayal and Chandan Saha


Abstract
Shpilka and Wigderson (CCC 99) had posed the problem of proving exponential lower bounds for (nonhomogeneous) depth three arithmetic circuits with bounded bottom fanin over a field F of characteristic zero. We resolve this problem by proving a N^(Omega(d/t)) lower bound for (nonhomogeneous) depth three arithmetic circuits with bottom fanin at most t computing an explicit N-variate polynomial of degree d over F. Meanwhile, Nisan and Wigderson (CC 97) had posed the problem of proving superpolynomial lower bounds for homogeneous depth five arithmetic circuits. Over fields of characteristic zero, we show a lower bound of N^(Omega(sqrt(d))) for homogeneous depth five circuits (resp. also for depth three circuits) with bottom fanin at most N^(u), for any fixed u < 1. This resolves the problem posed by Nisan and Wigderson only partially because of the added restriction on the bottom fanin (a general homogeneous depth five circuit has bottom fanin at most N).
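
The two lower bounds in display form, for the explicit N-variate degree-d polynomial of the abstract:

  \[ N^{\Omega(d/t)} \quad \text{(depth three, bottom fan-in } \le t\text{)}, \qquad N^{\Omega(\sqrt{d})} \quad \text{(homogeneous depth five, bottom fan-in } \le N^{u},\ u < 1\text{)}. \]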

Cite as

Neeraj Kayal and Chandan Saha. Lower Bounds for Depth Three Arithmetic Circuits with Small Bottom Fanin. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 158-182, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{kayal_et_al:LIPIcs.CCC.2015.158,
  author =	{Kayal, Neeraj and Saha, Chandan},
  title =	{{Lower Bounds for Depth Three Arithmetic Circuits with Small Bottom Fanin}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{158--182},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.158},
  URN =		{urn:nbn:de:0030-drops-50617},
  doi =		{10.4230/LIPIcs.CCC.2015.158},
  annote =	{Keywords: arithmetic circuits, depth three circuits, lower bound, bottom fanin}
}
Document
A Depth-Five Lower Bound for Iterated Matrix Multiplication

Authors: Suman K. Bera and Amit Chakrabarti


Abstract
We prove that certain instances of the iterated matrix multiplication (IMM) family of polynomials with N variables and degree n require N^(Omega(sqrt(n))) gates when expressed as a homogeneous depth-five Sigma Pi Sigma Pi Sigma arithmetic circuit with the bottom fan-in bounded by N^(1/2-epsilon). By a depth-reduction result of Tavenas, this size lower bound is optimal and can be achieved by the weaker class of homogeneous depth-four Sigma Pi Sigma Pi circuits. Our result extends a recent result of Kumar and Saraf, who gave the same N^(Omega(sqrt(n))) lower bound for homogeneous depth-four Sigma Pi Sigma Pi circuits computing IMM. It is analogous to a recent result of Kayal and Saha, who gave the same lower bound for homogeneous Sigma Pi Sigma Pi Sigma circuits (over characteristic zero) with bottom fan-in at most N^(1-epsilon), for the harder problem of computing certain polynomials defined by Nisan-Wigderson designs.
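
In display form, the result says that homogeneous Sigma Pi Sigma Pi Sigma circuits with bottom fan-in at most N^(1/2 - epsilon) computing the IMM instances (N variables, degree n) require size

  \[ N^{\Omega(\sqrt{n})}, \]

which, by Tavenas' depth-reduction result, is optimal for this model.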

Cite as

Suman K. Bera and Amit Chakrabarti. A Depth-Five Lower Bound for Iterated Matrix Multiplication. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 183-197, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bera_et_al:LIPIcs.CCC.2015.183,
  author =	{Bera, Suman K. and Chakrabarti, Amit},
  title =	{{A Depth-Five Lower Bound for Iterated Matrix Multiplication}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{183--197},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.183},
  URN =		{urn:nbn:de:0030-drops-50622},
  doi =		{10.4230/LIPIcs.CCC.2015.183},
  annote =	{Keywords: arithmetic circuits, iterated matrix multiplication, depth five circuits, lower bound}
}
Document
Factors of Low Individual Degree Polynomials

Authors: Rafael Oliveira


Abstract
In [Kaltofen, 1989], Kaltofen proved the remarkable fact that multivariate polynomial factorization can be done efficiently, in randomized polynomial time. Still, more than twenty years after Kaltofen's work, many questions remain unanswered regarding the complexity aspects of polynomial factorization, such as the question of whether factors of polynomials efficiently computed by arithmetic formulas also have small arithmetic formulas, asked in [Kopparty/Saraf/Shpilka, CCC'14], and the question of bounding the depth of the circuits computing the factors of a polynomial. We are able to answer these questions in the affirmative for the interesting class of polynomials of bounded individual degrees, which contains polynomials such as the determinant and the permanent. We show that if P(x_1, ..., x_n) is a polynomial with individual degrees bounded by r that can be computed by a formula of size s and depth d, then any factor f(x_1, ..., x_n) of P(x_1, ..., x_n) can be computed by a formula of size poly((rn)^r, s) and depth d+5. This partially answers the question above posed in [Kopparty/Saraf/Shpilka, CCC'14], which asked if this result holds without the exponential dependence on r. Our work generalizes the main factorization theorem from Dvir et al. [Dvir/Shpilka/Yehudayoff, SIAM J. Comp., 2009], who proved it for the special case when the factors are of the form f(x_1, ..., x_n) = x_n - g(x_1, ..., x_(n-1)). Along the way, we introduce several new technical ideas that could be of independent interest when studying arithmetic circuits (or formulas).
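
The quantitative form of the main theorem: if P(x_1, ..., x_n) has individual degrees at most r and a formula of size s and depth d, then every factor f of P satisfies

  \[ \mathrm{size}(f) \;\le\; \mathrm{poly}\big((rn)^{r}, s\big), \qquad \mathrm{depth}(f) \;\le\; d + 5. \]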

Cite as

Rafael Oliveira. Factors of Low Individual Degree Polynomials. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 198-216, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{oliveira:LIPIcs.CCC.2015.198,
  author =	{Oliveira, Rafael},
  title =	{{Factors of Low Individual Degree Polynomials}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{198--216},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.198},
  URN =		{urn:nbn:de:0030-drops-50594},
  doi =		{10.4230/LIPIcs.CCC.2015.198},
  annote =	{Keywords: Arithmetic Circuits, Factoring, Algebraic Complexity}
}
Document
Verifiable Stream Computation and Arthur–Merlin Communication

Authors: Amit Chakrabarti, Graham Cormode, Andrew McGregor, Justin Thaler, and Suresh Venkatasubramanian


Abstract
In the setting of streaming interactive proofs (SIPs), a client (verifier) needs to compute a given function on a massive stream of data, arriving online, but is unable to store even a small fraction of the data. It outsources the processing to a third party service (prover), but is unwilling to blindly trust answers returned by this service. Thus, the service cannot simply supply the desired answer; it must convince the verifier of its correctness via a short interaction after the stream has been seen. In this work we study "barely interactive" SIPs. Specifically, we show that two or three rounds of interaction suffice to solve several query problems - including Index, Median, Nearest Neighbor Search, Pattern Matching, and Range Counting - with polylogarithmic space and communication costs. Such efficiency with O(1) rounds of interaction was thought to be impossible based on previous work. On the other hand, we initiate a formal study of the limitations of constant-round SIPs by introducing a new hierarchy of communication models called Online Interactive Proofs (OIPs). The online nature of these models is analogous to the streaming restriction placed upon the verifier in an SIP. We give upper and lower bounds that (1) characterize, up to quadratic blowups, every finite level of the OIP hierarchy in terms of other well-known communication complexity classes, (2) separate the first four levels of the hierarchy, and (3) reveal that the hierarchy collapses to the fourth level. Our study of OIPs reveals marked contrasts and some parallels with the classic Turing Machine theory of interactive proofs, establishes limits on the power of existing techniques for developing constant-round SIPs, and provides a new characterization of (non-online) Arthur-Merlin communication in terms of an online model.

Cite as

Amit Chakrabarti, Graham Cormode, Andrew McGregor, Justin Thaler, and Suresh Venkatasubramanian. Verifiable Stream Computation and Arthur–Merlin Communication. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 217-243, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{chakrabarti_et_al:LIPIcs.CCC.2015.217,
  author =	{Chakrabarti, Amit and Cormode, Graham and McGregor, Andrew and Thaler, Justin and Venkatasubramanian, Suresh},
  title =	{{Verifiable Stream Computation and Arthur–Merlin Communication}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{217--243},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.217},
  URN =		{urn:nbn:de:0030-drops-50680},
  doi =		{10.4230/LIPIcs.CCC.2015.217},
  annote =	{Keywords: Arthur-Merlin communication complexity, streaming interactive proofs}
}
Document
Identifying an Honest EXP^NP Oracle Among Many

Authors: Shuichi Hirahara


Abstract
We provide a general framework to remove short advice by formulating the following computational task for a function f: given two oracles, at least one of which is honest (i.e. correctly computes f on all inputs), as well as an input, the task is to compute f on the input with the help of the oracles by a probabilistic polynomial-time machine, which we shall call a selector. We characterize the languages for which short advice can be removed by the notion of a selector: a paddable language has a selector if and only if short advice of a probabilistic machine that accepts the language can be removed under any relativized world. Previously, instance checkers have served as a useful tool to remove short advice of probabilistic computation. We indicate that the existence of instance checkers is a property stronger than that of removing short advice: although no instance checker for EXP^NP-complete languages exists unless EXP^NP = NEXP, we prove that there exists a selector for any EXP^NP-complete language, by building on the proof of MIP = NEXP by Babai, Fortnow, and Lund (1991).

Cite as

Shuichi Hirahara. Identifying an Honest EXP^NP Oracle Among Many. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 244-263, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{hirahara:LIPIcs.CCC.2015.244,
  author =	{Hirahara, Shuichi},
  title =	{{Identifying an Honest EXP^NP Oracle Among Many}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{244--263},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.244},
  URN =		{urn:nbn:de:0030-drops-50718},
  doi =		{10.4230/LIPIcs.CCC.2015.244},
  annote =	{Keywords: nonuniform complexity, short advice, instance checker, interactive proof systems, probabilistically checkable proofs}
}
Document
Adaptivity Helps for Testing Juntas

Authors: Rocco A. Servedio, Li-Yang Tan, and John Wright


Abstract
We give a new lower bound on the query complexity of any non-adaptive algorithm for testing whether an unknown Boolean function is a k-junta versus epsilon-far from every k-junta. Our lower bound is that any non-adaptive algorithm must make Omega((k * log*(k)) / (epsilon^c * log(log(k)/epsilon^c))) queries for this testing problem, where c is any absolute constant < 1. For suitable values of epsilon this is asymptotically larger than the O(k * log(k) + k/epsilon) query complexity of the best known adaptive algorithm [Blais, STOC'09] for testing juntas, and thus the new lower bound shows that adaptive algorithms are more powerful than non-adaptive algorithms for the junta testing problem.
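
The non-adaptive lower bound and the best known adaptive upper bound side by side:

  \[ \Omega\!\left( \frac{k \log^{*} k}{\epsilon^{c} \log\big(\log(k)/\epsilon^{c}\big)} \right) \ \text{(non-adaptive, any constant } c < 1\text{)} \qquad \text{vs.} \qquad O\!\big(k \log k + k/\epsilon\big) \ \text{(adaptive, [Blais, STOC'09])}. \]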

Cite as

Rocco A. Servedio, Li-Yang Tan, and John Wright. Adaptivity Helps for Testing Juntas. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 264-279, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{servedio_et_al:LIPIcs.CCC.2015.264,
  author =	{Servedio, Rocco A. and Tan, Li-Yang and Wright, John},
  title =	{{Adaptivity Helps for Testing Juntas}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{264--279},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.264},
  URN =		{urn:nbn:de:0030-drops-50663},
  doi =		{10.4230/LIPIcs.CCC.2015.264},
  annote =	{Keywords: Property testing, juntas, adaptivity}
}
Document
A Characterization of Hard-to-cover CSPs

Authors: Amey Bhangale, Prahladh Harsha, and Girish Varma


Abstract
We continue the study of covering complexity of constraint satisfaction problems (CSPs) initiated by Guruswami, Håstad and Sudan [SIAM J. Computing, 31(6):1663--1686, 2002] and Dinur and Kol [in Proc. 28th IEEE Conference on Computational Complexity, 2013]. The covering number of a CSP instance Phi, denoted by nu(Phi), is the smallest number of assignments to the variables of Phi such that each constraint of Phi is satisfied by at least one of the assignments. We show the following results regarding how well efficient algorithms can approximate the covering number of a given CSP instance. 1. Assuming a covering unique games conjecture, introduced by Dinur and Kol, we show that for every non-odd predicate P over any constant-sized alphabet and every integer K, it is NP-hard to distinguish between P-CSP instances (i.e., CSP instances where all the constraints are of type P) which are coverable by a constant number of assignments and those whose covering number is at least K. Previously, Dinur and Kol, using the same covering unique games conjecture, had shown a similar hardness result for every non-odd predicate over the Boolean alphabet that supports a pairwise independent distribution. Our generalization yields a complete characterization of CSPs over a constant-sized alphabet Sigma that are hard to cover, since CSPs over odd predicates are trivially coverable with |Sigma| assignments. 2. For a large class of predicates that are contained in the 2k-LIN predicate, we show that it is quasi-NP-hard to distinguish between instances which have covering number at most two and covering number at least Omega(log(log(n))). This generalizes the 4-LIN result of Dinur and Kol that states it is quasi-NP-hard to distinguish between 4-LIN-CSP instances which have covering number at most two and covering number at least Omega(log(log(log(n)))).
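
For reference, the covering number can be written as

  \[ \nu(\Phi) \;=\; \min\big\{\, t \;:\; \exists\, \text{assignments } \sigma_1, \dots, \sigma_t \text{ such that every constraint of } \Phi \text{ is satisfied by some } \sigma_i \,\big\}. \]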

Cite as

Amey Bhangale, Prahladh Harsha, and Girish Varma. A Characterization of Hard-to-cover CSPs. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 280-303, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{bhangale_et_al:LIPIcs.CCC.2015.280,
  author =	{Bhangale, Amey and Harsha, Prahladh and Varma, Girish},
  title =	{{A Characterization of Hard-to-cover CSPs}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{280--303},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.280},
  URN =		{urn:nbn:de:0030-drops-50574},
  doi =		{10.4230/LIPIcs.CCC.2015.280},
  annote =	{Keywords: CSPs, Covering Problem, Hardness of Approximation, Unique Games, Invariance Principle}
}
Document
Subexponential Size Hitting Sets for Bounded Depth Multilinear Formulas

Authors: Rafael Oliveira, Amir Shpilka, and Ben Lee Volk


Abstract
In this paper we give subexponential size hitting sets for bounded depth multilinear arithmetic formulas. Using the known relation between black-box PIT and lower bounds we obtain lower bounds for these models. For depth-3 multilinear formulas of size exp(n^delta), we give a hitting set of size exp(~O(n^(2/3 + 2*delta/3))). This implies a lower bound of exp(~Omega(n^(1/2))) for depth-3 multilinear formulas, for some explicit polynomial. For depth-4 multilinear formulas of size exp(n^delta), we give a hitting set of size exp(~O(n^(2/3 + 4*delta/3))). This implies a lower bound of exp(~Omega(n^(1/4))) for depth-4 multilinear formulas, for some explicit polynomial. A regular formula consists of alternating layers of + and * gates, where all gates at layer i have the same fan-in. We give a hitting set of size (roughly) exp(n^(1-delta)) for regular depth-d multilinear formulas of size exp(n^delta), where delta = O(1/sqrt(5)^d). This result implies a lower bound of roughly exp(~Omega(n^(1/sqrt(5)^d))) for such formulas. We note that better lower bounds are known for these models, but also that none of these bounds was achieved via construction of a hitting set. Moreover, no lower bound that implies such PIT results, even in the white-box model, is currently known. Our results are combinatorial in nature and rely on reducing the underlying formula, first to a depth-4 formula, and then to a read-once algebraic branching program (from depth-3 formulas we go straight to read-once algebraic branching programs).
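
The three hitting-set sizes collected in one display, for formulas of size exp(n^delta):

  \[ \exp\big(\tilde{O}(n^{2/3 + 2\delta/3})\big) \ \text{(depth 3)}, \qquad \exp\big(\tilde{O}(n^{2/3 + 4\delta/3})\big) \ \text{(depth 4)}, \qquad \exp\big(n^{1-\delta}\big),\ \delta = O\big(1/\sqrt{5}^{\,d}\big) \ \text{(regular depth } d\text{)}. \]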

Cite as

Rafael Oliveira, Amir Shpilka, and Ben Lee Volk. Subexponential Size Hitting Sets for Bounded Depth Multilinear Formulas. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 304-322, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{oliveira_et_al:LIPIcs.CCC.2015.304,
  author =	{Oliveira, Rafael and Shpilka, Amir and Volk, Ben Lee},
  title =	{{Subexponential Size Hitting Sets for Bounded Depth Multilinear Formulas}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{304--322},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.304},
  URN =		{urn:nbn:de:0030-drops-50548},
  doi =		{10.4230/LIPIcs.CCC.2015.304},
  annote =	{Keywords: Arithmetic Circuits, Derandomization, Polynomial Identity Testing}
}
Document
Deterministic Identity Testing for Sum of Read-once Oblivious Arithmetic Branching Programs

Authors: Rohit Gurjar, Arpita Korwar, Nitin Saxena, and Thomas Thierauf


Abstract
A read-once oblivious arithmetic branching program (ROABP) is an arithmetic branching program (ABP) where each variable occurs in at most one layer. We give the first polynomial-time whitebox identity test for a polynomial computed by a sum of constantly many ROABPs. We also give a corresponding blackbox algorithm with quasi-polynomial time complexity n^(O(log(n))). In both cases, our time complexity is double exponential in the number of ROABPs. ROABPs are a generalization of set-multilinear depth-3 circuits. The prior results for the sum of constantly many set-multilinear depth-3 circuits were only slightly better than brute-force, i.e. exponential-time. Our techniques are a new interplay of three concepts for ROABPs: low evaluation dimension, basis isolating weight assignment and low-support rank concentration. We relate basis isolation to rank concentration and extend it to a sum of two ROABPs using evaluation dimension (or partial derivatives).
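
One standard way to present an ROABP, for orientation (the width w and variable order pi are our notation; the abstract does not fix them):

  \[ f(x_1, \dots, x_n) \;=\; U^{\top} \left( \prod_{i=1}^{n} M_i\big(x_{\pi(i)}\big) \right) V, \]

where U, V are vectors in F^w and each M_i is a w x w matrix whose entries are univariate polynomials in the single variable x_{\pi(i)}, so that each variable occurs in exactly one layer.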

Cite as

Rohit Gurjar, Arpita Korwar, Nitin Saxena, and Thomas Thierauf. Deterministic Identity Testing for Sum of Read-once Oblivious Arithmetic Branching Programs. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 323-346, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{gurjar_et_al:LIPIcs.CCC.2015.323,
  author =	{Gurjar, Rohit and Korwar, Arpita and Saxena, Nitin and Thierauf, Thomas},
  title =	{{Deterministic Identity Testing for Sum of Read-once Oblivious Arithmetic Branching Programs}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{323--346},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.323},
  URN =		{urn:nbn:de:0030-drops-50647},
  doi =		{10.4230/LIPIcs.CCC.2015.323},
  annote =	{Keywords: PIT, Hitting-set, Sum of ROABPs, Evaluation Dimension, Rank Concentration}
}
Document
Kolmogorov Width of Discrete Linear Spaces: an Approach to Matrix Rigidity

Authors: Alex Samorodnitsky, Ilya Shkredov, and Sergey Yekhanin


Abstract
A square matrix V is called rigid if every matrix V' obtained by altering a small number of entries of V has sufficiently high rank. While random matrices are rigid with high probability, no explicit constructions of rigid matrices are known to date. Obtaining such explicit matrices would have major implications in computational complexity theory. One approach to establishing rigidity of a matrix V is to come up with a property that is satisfied by any collection of vectors arising from a low-dimensional space, but is not satisfied by the rows of V even after alterations. In this paper we propose such a candidate property that has the potential of establishing rigidity of combinatorial design matrices over the field F_2. Stated informally, we conjecture that under a suitable embedding of F_2^n into R^n, vectors arising from a low dimensional F_2-linear space always have somewhat small Kolmogorov width, i.e., admit a non-trivial simultaneous approximation by a low dimensional Euclidean space. This implies rigidity of combinatorial designs, as their rows do not admit such an approximation even after alterations. Our main technical contribution is a collection of results establishing weaker forms and special cases of the conjecture above.
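
In the standard notation (our rendering of the abstract's informal definition), the rigidity of V is

  \[ R_V(r) \;=\; \min\big\{\, |\{(i,j) : V'_{ij} \ne V_{ij}\}| \;:\; \mathrm{rank}(V') \le r \,\big\}, \]

and "rigid" means that R_V(r) remains large even for fairly large target rank r.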

Cite as

Alex Samorodnitsky, Ilya Shkredov, and Sergey Yekhanin. Kolmogorov Width of Discrete Linear Spaces: an Approach to Matrix Rigidity. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 347-364, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{samorodnitsky_et_al:LIPIcs.CCC.2015.347,
  author =	{Samorodnitsky, Alex and Shkredov, Ilya and Yekhanin, Sergey},
  title =	{{Kolmogorov Width of Discrete Linear Spaces: an Approach to Matrix Rigidity}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{347--364},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.347},
  URN =		{urn:nbn:de:0030-drops-50703},
  doi =		{10.4230/LIPIcs.CCC.2015.347},
  annote =	{Keywords: Matrix rigidity, linear codes, Kolmogorov width}
}
Document
On the (Non) NP-Hardness of Computing Circuit Complexity

Authors: Cody D. Murray and R. Ryan Williams


Abstract
The Minimum Circuit Size Problem (MCSP) is: given the truth table of a Boolean function f and a size parameter k, is the circuit complexity of f at most k? This is the definitive problem of circuit synthesis, and it has been studied since the 1950s. Unlike many problems of its kind, MCSP is not known to be NP-hard, yet an efficient algorithm for this problem also seems very unlikely: for example, MCSP in P would imply there are no pseudorandom functions. Although most NP-complete problems are complete under strong "local" reduction notions such as poly-logarithmic time projections, we show that MCSP is provably not NP-hard under O(n^(1/2-epsilon))-time projections, for every epsilon > 0. We prove that the NP-hardness of MCSP under (logtime-uniform) AC^0 reductions would imply extremely strong lower bounds: NP \not\subset P/poly and E \not\subset i.o.-SIZE(2^(delta * n)) for some delta > 0 (hence P = BPP also follows). We show that even the NP-hardness of MCSP under general polynomial-time reductions would separate complexity classes: EXP != NP \cap P/poly, which implies EXP != ZPP. These results help explain why it has been so difficult to prove that MCSP is NP-hard. We also consider the nondeterministic generalization of MCSP: the Nondeterministic Minimum Circuit Size Problem (NMCSP), where one wishes to compute the nondeterministic circuit complexity of a given function. We prove that the Sigma_2^P-hardness of NMCSP, even under arbitrary polynomial-time reductions, would imply EXP \not\subset P/poly.
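
For concreteness, MCSP as a language (a standard rendering of the problem statement above):

  \[ \mathrm{MCSP} \;=\; \big\{\, (T_f, k) \;:\; f \text{ has a Boolean circuit of size at most } k \,\big\}, \]

where T_f is the 2^n-bit truth table of f: {0,1}^n -> {0,1}.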

Cite as

Cody D. Murray and R. Ryan Williams. On the (Non) NP-Hardness of Computing Circuit Complexity. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 365-380, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{murray_et_al:LIPIcs.CCC.2015.365,
  author =	{Murray, Cody D. and Williams, R. Ryan},
  title =	{{On the (Non) NP-Hardness of Computing Circuit Complexity}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{365--380},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.365},
  URN =		{urn:nbn:de:0030-drops-50745},
  doi =		{10.4230/LIPIcs.CCC.2015.365},
  annote =	{Keywords: circuit lower bounds, Minimum Circuit Size Problem, NP-completeness, projections, Reductions}
}
Document
Circuits with Medium Fan-In

Authors: Pavel Hrubes and Anup Rao


Abstract
We consider boolean circuits in which every gate may compute an arbitrary boolean function of k other gates, for a parameter k. We give an explicit function f: {0,1}^n -> {0,1} that requires at least Omega(log^2(n)) non-input gates when k = 2n/3. When the circuit is restricted to being layered and depth 2, we prove a lower bound of n^(Omega(1)) on the number of non-input gates. When the circuit is a formula with gates of fan-in k, we give a lower bound Omega(n^2/(k*log(n))) on the total number of gates. Our model is connected to some well known approaches to proving lower bounds in complexity theory. Optimal lower bounds for the Number-On-Forehead model in communication complexity, or for bounded depth circuits in AC^0, or extractors for varieties over small fields would imply strong lower bounds in our model. On the other hand, new lower bounds for our model would prove new time-space tradeoffs for branching programs and impossibility results for (fan-in 2) circuits with linear size and logarithmic depth. In particular, our lower bound gives a different proof for a known time-space tradeoff for oblivious branching programs.
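
The three lower bounds collected in one display:

  \[ \Omega(\log^2 n) \ \text{(non-input gates, } k = 2n/3\text{)}, \qquad n^{\Omega(1)} \ \text{(layered depth-2)}, \qquad \Omega\!\big(n^2/(k \log n)\big) \ \text{(formulas of fan-in } k\text{)}. \]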

Cite as

Pavel Hrubes and Anup Rao. Circuits with Medium Fan-In. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 381-391, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{hrubes_et_al:LIPIcs.CCC.2015.381,
  author =	{Hrubes, Pavel and Rao, Anup},
  title =	{{Circuits with Medium Fan-In}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{381--391},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.381},
  URN =		{urn:nbn:de:0030-drops-50528},
  doi =		{10.4230/LIPIcs.CCC.2015.381},
  annote =	{Keywords: Boolean circuit, Complexity, Communication Complexity}
}
Document
Correlation Bounds Against Monotone NC^1

Authors: Benjamin Rossman


Abstract
This paper gives the first correlation bounds under product distributions, including the uniform distribution, against the class mNC^1 of polynomial-size O(log(n))-depth monotone circuits. Our main theorem, proved using the pathset complexity framework introduced in [Rossman, arXiv:1312.0355], shows that the average-case k-CYCLE problem (on Erdős-Rényi random graphs with an appropriate edge density) is 1/2 + 1/poly(n) hard for mNC^1. Combining this result with O'Donnell's hardness amplification theorem [O'Donnell, 2002], we obtain an explicit monotone function of n variables (in the class mSAC^1) which is 1/2 + n^(-1/2+epsilon) hard for mNC^1 under the uniform distribution for any desired constant epsilon > 0. This bound is nearly best possible, since every monotone function has agreement 1/2 + Omega(log(n)/sqrt(n)) with some function in mNC^1 [O'Donnell/Wimmer, FOCS'09]. Our correlation bounds against mNC^1 extend smoothly to non-monotone NC^1 circuits with a bounded number of negation gates. Using Holley's monotone coupling theorem [Holley, Comm. Math. Physics, 1974], we prove the following lemma: with respect to any product distribution, if a balanced monotone function f is 1/2 + delta hard for monotone circuits of a given size and depth, then f is 1/2 + (2^(t+1)-1)*delta hard for (non-monotone) circuits of the same size and depth with at most t negation gates. We thus achieve a lower bound against NC^1 circuits with (1/2-epsilon)*log(n) negation gates, improving the previous record of 1/6*log(log(n)) [Amano/Maruoka, SIAM J. Comp., 2005]. Our bound on negations is "half" optimal, since \lceil log(n+1) \rceil negation gates are known to be fully powerful for NC^1 [Ajtai/Komlos/Szemeredi, STOC'83; Fischer, GI'75].
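
The negation-removal lemma in display form: with respect to any product distribution, for a balanced monotone function f,

  \[ f \ \text{is } \tfrac{1}{2} + \delta \ \text{hard for monotone circuits} \;\Longrightarrow\; f \ \text{is } \tfrac{1}{2} + (2^{t+1} - 1)\,\delta \ \text{hard for circuits with } t \ \text{negation gates} \]

of the same size and depth.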

Cite as

Benjamin Rossman. Correlation Bounds Against Monotone NC^1. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 392-411, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{rossman:LIPIcs.CCC.2015.392,
  author =	{Rossman, Benjamin},
  title =	{{Correlation Bounds Against Monotone NC^1}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{392--411},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.392},
  URN =		{urn:nbn:de:0030-drops-50785},
  doi =		{10.4230/LIPIcs.CCC.2015.392},
  annote =	{Keywords: circuit complexity, average-case complexity}
}
Document
Non-Commutative Formulas and Frege Lower Bounds: a New Characterization of Propositional Proofs

Authors: Fu Li, Iddo Tzameret, and Zhengyu Wang


Abstract
Does every Boolean tautology have a short propositional-calculus proof? Here, a propositional-calculus (i.e. Frege) proof is any proof starting from a set of axioms and deriving new Boolean formulas using a fixed set of sound derivation rules. Establishing any super-polynomial size lower bound on Frege proofs (in terms of the size of the formula proved) is a major open problem in proof complexity, and among a handful of fundamental hardness questions in complexity theory at large. Non-commutative arithmetic formulas, on the other hand, constitute a quite weak computational model, for which exponential-size lower bounds were shown already back in 1991 by Nisan [STOC 1991], using a particularly transparent argument. In this work we show that Frege lower bounds in fact follow from corresponding size lower bounds on non-commutative formulas computing certain polynomials (and that such lower bounds on non-commutative formulas must exist, unless NP=coNP). More precisely, we demonstrate a natural association between tautologies T and non-commutative polynomials p, such that: (*) if T has a polynomial-size Frege proof then p has a polynomial-size non-commutative arithmetic formula; and conversely, when T is a DNF, if p has a polynomial-size non-commutative arithmetic formula over GF(2) then T has a Frege proof of quasi-polynomial size. The argument is a characterization of Frege proofs as non-commutative formulas: we show that the Frege system is (quasi-)polynomially equivalent to a non-commutative Ideal Proof System (IPS), following the recent work of Grochow and Pitassi [FOCS 2014] that introduced a propositional proof system in which proofs are arithmetic circuits, and the work in [Tzameret 2011] that considered adding the commutator as an axiom in algebraic propositional proof systems. This gives a characterization of propositional Frege proofs in terms of (non-commutative) arithmetic formulas that is tighter than the formula version of IPS in Grochow and Pitassi [FOCS 2014], in the following sense: (i) the non-commutative IPS is polynomial-time checkable, whereas the original IPS was checkable in probabilistic polynomial-time; and (ii) Frege proofs unconditionally quasi-polynomially simulate the non-commutative IPS, whereas Frege was shown to efficiently simulate IPS only assuming that the decidability of PIT for (commutative) arithmetic formulas by polynomial-size circuits is efficiently provable in Frege.

Cite as

Fu Li, Iddo Tzameret, and Zhengyu Wang. Non-Commutative Formulas and Frege Lower Bounds: a New Characterization of Propositional Proofs. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 412-432, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.CCC.2015.412,
  author =	{Li, Fu and Tzameret, Iddo and Wang, Zhengyu},
  title =	{{Non-Commutative Formulas and Frege Lower Bounds: a New Characterization of Propositional Proofs}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{412--432},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.412},
  URN =		{urn:nbn:de:0030-drops-50585},
  doi =		{10.4230/LIPIcs.CCC.2015.412},
  annote =	{Keywords: Proof complexity, algebraic complexity, arithmetic circuits, Frege, non-commutative formulas}
}
Document
The Space Complexity of Cutting Planes Refutations

Authors: Nicola Galesi, Pavel Pudlák, and Neil Thapen


Abstract
We study the space complexity of the cutting planes proof system, in which the lines in a proof are integral linear inequalities. We measure the space used by a refutation as the number of linear inequalities that need to be kept on a blackboard while verifying it. We show that any unsatisfiable set of linear inequalities has a cutting planes refutation in space five. This is in contrast to the weaker resolution proof system, for which the analogous space measure has been well-studied and many optimal linear lower bounds are known. Motivated by this result we consider a natural restriction of cutting planes, in which all coefficients have size bounded by a constant. We show that there is a CNF which requires super-constant space to refute in this system. The system nevertheless already has an exponential speed-up over resolution with respect to size, and we additionally show that it is stronger than resolution with respect to space, by constructing constant-space cutting planes proofs, with coefficients bounded by two, of the pigeonhole principle. We also consider variable instance space for cutting planes, where we count the number of instances of variables on the blackboard, and total space, where we count the total number of symbols.
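For orientation, the derivation rules of the cutting planes system can be written as follows (a standard textbook formulation, not quoted from this paper):

\[
\frac{\sum_i a_i x_i \ge b \qquad \sum_i c_i x_i \ge d}{\sum_i (a_i + c_i)\, x_i \ge b + d}
\qquad\qquad
\frac{\sum_i c\, a_i x_i \ge b}{\sum_i a_i x_i \ge \lceil b/c \rceil}\ \ (c \in \mathbb{Z}_{>0}),
\]
together with the Boolean axioms $x_i \ge 0$ and $-x_i \ge -1$. The bounded-coefficient restriction studied here caps the magnitude of the coefficients $a_i$ by a constant (by two, in the pigeonhole upper bound).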

Cite as

Nicola Galesi, Pavel Pudlák, and Neil Thapen. The Space Complexity of Cutting Planes Refutations. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 433-447, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{galesi_et_al:LIPIcs.CCC.2015.433,
  author =	{Galesi, Nicola and Pudl\'{a}k, Pavel and Thapen, Neil},
  title =	{{The Space Complexity of Cutting Planes Refutations}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{433--447},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.433},
  URN =		{urn:nbn:de:0030-drops-50551},
  doi =		{10.4230/LIPIcs.CCC.2015.433},
  annote =	{Keywords: Proof Complexity, Cutting Planes, Space Complexity}
}
Document
Tight Size-Degree Bounds for Sums-of-Squares Proofs

Authors: Massimo Lauria and Jakob Nordström


Abstract
We exhibit families of 4-CNF formulas over n variables that have sums-of-squares (SOS) proofs of unsatisfiability of degree (a.k.a. rank) d but require SOS proofs of size n^Omega(d) for values of d = d(n) from constant all the way up to n^delta for some universal constant delta. This shows that the n^O(d) running time obtained by using the Lasserre semidefinite programming relaxations to find degree-d SOS proofs is optimal up to constant factors in the exponent. We establish this result by combining NP-reductions expressible as low-degree SOS derivations with the idea of relativizing CNF formulas in [Krajíček '04] and [Dantchev and Riis '03], and then applying a restriction argument as in [Atserias, Müller, and Oliva '13] and [Atserias, Lauria, and Nordström '14]. This yields a generic method of amplifying SOS degree lower bounds to size lower bounds, and also generalizes the approach in [ALN14] to obtain size lower bounds for the proof systems resolution, polynomial calculus, and Sherali-Adams from lower bounds on width, degree, and rank, respectively.
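Concretely, under the usual encoding (a standard formulation, not verbatim from the paper), a degree-d SOS refutation of polynomial axioms p_1 = 0, ..., p_m = 0 (which encode the clauses together with x_i^2 - x_i = 0) is an identity

\[
-1 \;=\; \sum_{i=1}^{m} t_i(x)\, p_i(x) \;+\; \sum_{j} s_j(x)^2,
\]

in which every summand has degree at most d; proof size counts the monomials needed to write down the t_i and s_j. The result above exhibits formulas for which such identities exist at degree d, yet any one of them requires n^Omega(d) monomials.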

Cite as

Massimo Lauria and Jakob Nordström. Tight Size-Degree Bounds for Sums-of-Squares Proofs. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 448-466, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{lauria_et_al:LIPIcs.CCC.2015.448,
  author =	{Lauria, Massimo and Nordstr\"{o}m, Jakob},
  title =	{{Tight Size-Degree Bounds for Sums-of-Squares Proofs}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{448--466},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.448},
  URN =		{urn:nbn:de:0030-drops-50736},
  doi =		{10.4230/LIPIcs.CCC.2015.448},
  annote =	{Keywords: Proof complexity, resolution, Lasserre, Positivstellensatz, sums-of-squares, SOS, semidefinite programming, size, degree, rank, clique, lower bound}
}
Document
A Generalized Method for Proving Polynomial Calculus Degree Lower Bounds

Authors: Mladen Miksa and Jakob Nordström


Abstract
We study the problem of establishing lower bounds for polynomial calculus (PC) and polynomial calculus resolution (PCR) on proof degree, and hence by [Impagliazzo et al. '99] also on proof size. [Alekhnovich and Razborov '03] established that if the clause-variable incidence graph of a CNF formula F is a good enough expander, then proving that F is unsatisfiable requires high PC/PCR degree. We further develop the techniques in [AR03] to show that if one can "cluster" clauses and variables in a way that "respects the structure" of the formula in a certain sense, then it is sufficient that the incidence graph of this clustered version is an expander. As a corollary of this, we prove that the functional pigeonhole principle (FPHP) formulas require high PC/PCR degree when restricted to constant-degree expander graphs. This answers an open question in [Razborov '02], and also implies that the standard CNF encoding of the FPHP formulas requires exponential proof size in polynomial calculus resolution.
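The degree-to-size amplification invoked here takes roughly the following shape (our restatement of the standard [Impagliazzo et al. '99] relation; constants and low-order terms elided):

\[
\deg_{\mathrm{PC}}(F) \ge d \quad\Longrightarrow\quad \mathrm{size}_{\mathrm{PC}}(F) \ge \exp\!\Big(\Omega\Big(\tfrac{(d - d_0)^2}{n}\Big)\Big),
\]

where d_0 is the degree of the polynomial translation of F and n is the number of variables; hence degree lower bounds of Omega(n), as obtained here over constant-degree expanders, yield exponential size lower bounds.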

Cite as

Mladen Miksa and Jakob Nordström. A Generalized Method for Proving Polynomial Calculus Degree Lower Bounds. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 467-487, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{miksa_et_al:LIPIcs.CCC.2015.467,
  author =	{Miksa, Mladen and Nordstr\"{o}m, Jakob},
  title =	{{A Generalized Method for Proving Polynomial Calculus Degree Lower Bounds}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{467--487},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.467},
  URN =		{urn:nbn:de:0030-drops-50775},
  doi =		{10.4230/LIPIcs.CCC.2015.467},
  annote =	{Keywords: proof complexity, polynomial calculus, polynomial calculus resolution, PCR, degree, size, functional pigeonhole principle, lower bound}
}
Document
Generalized Quantum Arthur-Merlin Games

Authors: Hirotada Kobayashi, Francois Le Gall, and Harumichi Nishimura


Abstract
This paper investigates the role of interaction and coins in quantum Arthur-Merlin games (also called public-coin quantum interactive proof systems). While the existing model restricts the messages from the verifier to be classical even in the quantum setting, the present work introduces a generalized version of quantum Arthur-Merlin games where the messages from the verifier can be quantum as well: the verifier can send not only random bits, but also halves of EPR pairs. This generalization turns out to provide several novel characterizations of quantum interactive proof systems with a constant number of turns. First, it is proved that the complexity class corresponding to two-turn quantum Arthur-Merlin games where both of the two messages are quantum, denoted qq-QAM in this paper, does not change by adding a constant number of turns of classical interaction prior to the communication of a qq-QAM proof system. This can be viewed as a quantum analogue of the celebrated collapse theorem for AM due to Babai. To prove this collapse theorem, this paper presents a natural complete problem for qq-QAM: deciding whether the output of a given quantum circuit is close to a totally mixed state. This complete problem is directly in line with previous studies investigating the hardness of checking properties related to quantum circuits, and thus qq-QAM may provide a good measure in computational complexity theory. It is further proved that the class qq-QAM_1, the perfect-completeness variant of qq-QAM, gives new bounds for standard well-studied classes of two-turn quantum interactive proof systems. Finally, the collapse theorem above is extended to comprehensively classify the role of classical and quantum interactions in quantum Arthur-Merlin games: it is proved that, for any constant m >= 2, the class of problems having m-turn quantum Arthur-Merlin proof systems is either equal to PSPACE or equal to the class of problems having two-turn quantum Arthur-Merlin proof systems of a specific type, which provides a complete set of quantum analogues of Babai's collapse theorem.
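In rough display form (our notation; the thresholds alpha < beta are our placeholders for the paper's promise parameters), the complete problem asks, given a circuit Q with output state rho_Q on m qubits, to distinguish

\[
\Big\| \rho_Q - \tfrac{I}{2^m} \Big\|_{\mathrm{tr}} \le \alpha
\qquad\text{from}\qquad
\Big\| \rho_Q - \tfrac{I}{2^m} \Big\|_{\mathrm{tr}} \ge \beta .
\]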

Cite as

Hirotada Kobayashi, Francois Le Gall, and Harumichi Nishimura. Generalized Quantum Arthur-Merlin Games. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 488-511, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{kobayashi_et_al:LIPIcs.CCC.2015.488,
  author =	{Kobayashi, Hirotada and Le Gall, Francois and Nishimura, Harumichi},
  title =	{{Generalized Quantum Arthur-Merlin Games}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{488--511},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.488},
  URN =		{urn:nbn:de:0030-drops-50697},
  doi =		{10.4230/LIPIcs.CCC.2015.488},
  annote =	{Keywords: interactive proof systems, Arthur-Merlin games, quantum computing, complete problems, entanglement}
}
Document
Parallel Repetition for Entangled k-player Games via Fast Quantum Search

Authors: Kai-Min Chung, Xiaodi Wu, and Henry Yuen


Abstract
We present two parallel repetition theorems for the entangled value of multi-player, one-round free games (games where the inputs come from a product distribution). Our first theorem shows that for a k-player free game G with entangled value val^*(G) = 1 - epsilon, the n-fold repetition of G has entangled value val^*(G^(\otimes n)) at most (1 - epsilon^(3/2))^(Omega(n/sk^4)), where s is the answer length of any player. In contrast, the best known parallel repetition theorem for the classical value of two-player free games is val(G^(\otimes n)) <= (1 - epsilon^2)^(Omega(n/s)), due to Barak et al. (RANDOM 2009). This suggests the possibility of a separation between the behavior of entangled and classical free games under parallel repetition. Our second theorem handles the broader class of free games G where the players can output (possibly entangled) quantum states. For such games, the repeated entangled value is upper bounded by (1 - epsilon^2)^(Omega(n/sk^2)). We also show that the dependence of the exponent on k is necessary: we exhibit a k-player free game G and n >= 1 such that val^*(G^(\otimes n)) >= val^*(G)^(n/k). Our analysis exploits the novel connection between communication protocols and quantum parallel repetition, first explored by Chailloux and Scarpa (ICALP 2014). We demonstrate that better communication protocols yield better parallel repetition theorems: in particular, our first theorem crucially uses a quantum search protocol by Aaronson and Ambainis, which gives a quadratic Grover speed-up for distributed search problems. Finally, our results apply to a broader class of games than previously considered; in particular, we obtain the first parallel repetition theorem for entangled games involving more than two players, and for games involving quantum outputs.
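Collected in display form (our restatement of the bounds stated above):

\[
\mathrm{val}^*(G) = 1 - \epsilon \;\Longrightarrow\; \mathrm{val}^*(G^{\otimes n}) \le \big(1 - \epsilon^{3/2}\big)^{\Omega(n/(s k^4))}
\]
for the first theorem, \big(1 - \epsilon^{2}\big)^{\Omega(n/(s k^2))} for quantum-output free games, versus the classical two-player benchmark \big(1 - \epsilon^{2}\big)^{\Omega(n/s)}; the exhibited game with \mathrm{val}^*(G^{\otimes n}) \ge \mathrm{val}^*(G)^{n/k} shows that some dependence on k is unavoidable.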

Cite as

Kai-Min Chung, Xiaodi Wu, and Henry Yuen. Parallel Repetition for Entangled k-player Games via Fast Quantum Search. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 512-536, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{chung_et_al:LIPIcs.CCC.2015.512,
  author =	{Chung, Kai-Min and Wu, Xiaodi and Yuen, Henry},
  title =	{{Parallel Repetition for Entangled k-player Games via Fast Quantum Search}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{512--536},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.512},
  URN =		{urn:nbn:de:0030-drops-50727},
  doi =		{10.4230/LIPIcs.CCC.2015.512},
  annote =	{Keywords: Parallel repetition, quantum entanglement, communication complexity}
}
Document
Upper Bounds on Quantum Query Complexity Inspired by the Elitzur-Vaidman Bomb Tester

Authors: Cedric Yen-Yu Lin and Han-Hsuan Lin


Abstract
Inspired by the Elitzur-Vaidman bomb testing problem [Elitzur/Vaidman 1993], we introduce a new query complexity model, which we call bomb query complexity B(f). We investigate its relationship with the usual quantum query complexity Q(f), and show that B(f)=Theta(Q(f)^2). This result gives a new method to upper bound the quantum query complexity: we give a method of finding bomb query algorithms from classical algorithms, which then provide nonconstructive upper bounds on Q(f)=Theta(sqrt(B(f))). We subsequently give explicit quantum algorithms matching the bounds obtained by this method. We apply the method to the single-source shortest paths problem on unweighted graphs, obtaining an algorithm with O(n^(1.5)) quantum query complexity, improving on the best known algorithm, of complexity O(n^(1.5) * sqrt(log(n))) [Furrow, 2008]. Applying the method to the maximum bipartite matching problem gives an O(n^(1.75)) algorithm, improving on the trivial O(n^2) upper bound, previously the best known.
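In symbols (our restatement of the relations above):

\[
B(f) = \Theta\big(Q(f)^2\big) \qquad\Longleftrightarrow\qquad Q(f) = \Theta\big(\sqrt{B(f)}\big),
\]

so a bomb query algorithm of complexity B immediately certifies a quantum query upper bound of O(\sqrt{B}); instantiated here, this gives O(n^{3/2}) queries for single-source shortest paths and O(n^{7/4}) for maximum bipartite matching.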

Cite as

Cedric Yen-Yu Lin and Han-Hsuan Lin. Upper Bounds on Quantum Query Complexity Inspired by the Elitzur-Vaidman Bomb Tester. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 537-566, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.CCC.2015.537,
  author =	{Lin, Cedric Yen-Yu and Lin, Han-Hsuan},
  title =	{{Upper Bounds on Quantum Query Complexity Inspired by the Elitzur-Vaidman Bomb Tester}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{537--566},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.537},
  URN =		{urn:nbn:de:0030-drops-50635},
  doi =		{10.4230/LIPIcs.CCC.2015.537},
  annote =	{Keywords: Quantum Algorithms, Query Complexity, Elitzur-Vaidman Bomb Tester, Adversary Method, Maximum Bipartite Matching}
}
Document
A Polylogarithmic PRG for Degree 2 Threshold Functions in the Gaussian Setting

Authors: Daniel M. Kane


Abstract
We construct and analyze a new pseudorandom generator for degree 2 polynomial threshold functions with respect to the Gaussian measure. In particular, we obtain one whose seed length is polylogarithmic in both the dimension and the inverse of the desired error, a substantial improvement over existing constructions. Our generator is obtained as an appropriate weighted average of pseudorandom generators against read once branching programs. The analysis requires a number of ideas including a hybrid argument and a structural result that allows us to treat our degree 2 threshold function as a function of a number of linear polynomials and one approximately linear polynomial.
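Spelled out (our notation, not the paper's): a degree-2 polynomial threshold function is f(x) = sign(p(x)) for a degree-2 polynomial p, and the generator G : \{0,1\}^r \to \mathbb{R}^n is required to satisfy, for every such f,

\[
\Big|\, \Pr_{x \sim \mathcal{N}(0, I_n)}\big[f(x) = 1\big] \;-\; \Pr_{s \sim \{0,1\}^r}\big[f(G(s)) = 1\big] \,\Big| \;\le\; \epsilon,
\]

with seed length r polylogarithmic in n and 1/epsilon.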

Cite as

Daniel M. Kane. A Polylogarithmic PRG for Degree 2 Threshold Functions in the Gaussian Setting. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 567-581, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{kane:LIPIcs.CCC.2015.567,
  author =	{Kane, Daniel M.},
  title =	{{A Polylogarithmic PRG for Degree 2 Threshold Functions in the Gaussian Setting}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{567--581},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.567},
  URN =		{urn:nbn:de:0030-drops-50534},
  doi =		{10.4230/LIPIcs.CCC.2015.567},
  annote =	{Keywords: polynomial threshold function, pseudorandom generator, Gaussian distribution}
}
Document
Extended Abstract
Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions (Extended Abstract)

Authors: Benny Applebaum, Sergei Artemenko, Ronen Shaltiel, and Guang Yang


Abstract
A circuit C compresses a function f:{0,1}^n -> {0,1}^m if given an input x in {0,1}^n the circuit C can shrink x to a shorter l-bit string x' such that later, a computationally-unbounded solver D will be able to compute f(x) based on x'. In this paper we study the existence of functions which are incompressible by circuits of some fixed polynomial size s=n^c. Motivated by cryptographic applications, we focus on average-case (l,epsilon)-incompressibility, which guarantees that on a random input x in {0,1}^n, for every size s circuit C:{0,1}^n -> {0,1}^l and any unbounded solver D, the success probability Pr_x[D(C(x))=f(x)] is upper-bounded by 2^(-m)+epsilon. While this notion of incompressibility appeared in several works (e.g., Dubrov and Ishai, STOC 06), so far no explicit constructions of efficiently computable incompressible functions were known. In this work we present the following results: 1. Assuming that E is hard for exponential size nondeterministic circuits, we construct a polynomial time computable boolean function f:{0,1}^n -> {0,1} which is incompressible by size n^c circuits with communication l=(1-o(1)) * n and error epsilon=n^(-c). Our technique generalizes to the case of PRGs against nonboolean circuits, improving and simplifying the previous construction of Shaltiel and Artemenko (STOC 14). 2. We show that it is possible to achieve negligible error parameter epsilon=n^(-omega(1)) for nonboolean functions. Specifically, assuming that E is hard for exponential size Sigma_3-circuits, we construct a nonboolean function f:{0,1}^n -> {0,1}^m which is incompressible by size n^c circuits with l=Omega(n) and extremely small epsilon=n^(-c) * 2^(-m). Our construction combines the techniques of Trevisan and Vadhan (FOCS 00) with a new notion of relative error deterministic extractor which may be of independent interest. 3. We show that the task of constructing an incompressible boolean function f:{0,1}^n -> {0,1} with negligible error parameter epsilon cannot be achieved by "existing proof techniques". Namely, nondeterministic reductions (or even Sigma_i reductions) cannot get epsilon=n^(-omega(1)) for boolean incompressible functions. Our results also apply to constructions of standard Nisan-Wigderson type PRGs and (standard) boolean functions that are hard on average, explaining, in retrospect, the limitations of existing constructions. Our impossibility result builds on an approach of Shaltiel and Viola (SIAM J. Comp., 2010).
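In display form (our restatement of the definition above): f is average-case (l,epsilon)-incompressible against size s if, for every size-s circuit C:\{0,1\}^n \to \{0,1\}^{l} and every computationally unbounded solver D,

\[
\Pr_{x \leftarrow \{0,1\}^n}\big[\, D(C(x)) = f(x) \,\big] \;\le\; 2^{-m} + \epsilon ,
\]

where 2^{-m} is the trivial success probability of guessing an m-bit value.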

Cite as

Benny Applebaum, Sergei Artemenko, Ronen Shaltiel, and Guang Yang. Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions (Extended Abstract). In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 582-600, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{applebaum_et_al:LIPIcs.CCC.2015.582,
  author =	{Applebaum, Benny and Artemenko, Sergei and Shaltiel, Ronen and Yang, Guang},
  title =	{{Incompressible Functions, Relative-Error Extractors, and the Power of Nondeterministic Reductions}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{582--600},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.582},
  URN =		{urn:nbn:de:0030-drops-50567},
  doi =		{10.4230/LIPIcs.CCC.2015.582},
  annote =	{Keywords: compression, pseudorandomness, extractors, nondeterministic reductions}
}
Document
On Randomness Extraction in AC0

Authors: Oded Goldreich, Emanuele Viola, and Avi Wigderson


Abstract
We consider randomness extraction by AC0 circuits. The main parameter, n, is the length of the source, and all other parameters are functions of it. The additional extraction parameters are the min-entropy bound k=k(n), the seed length r=r(n), the output length m=m(n), and the (output) deviation bound epsilon=epsilon(n). For k <= n/log^(omega(1))(n), we show that AC0-extraction is possible if and only if m/r <= 1 + poly(log(n)) * k/n; that is, the extraction rate m/r exceeds the trivial rate (of one) by an additive amount that is proportional to the min-entropy rate k/n. In particular, non-trivial AC0-extraction (i.e., m >= r+1) is possible if and only if k * r > n/poly(log(n)). For k >= n/log^(O(1))(n), we show that AC0-extraction of r+Omega(r) bits is possible when r=O(log(n)), but leave open the question of whether more bits can be extracted in this case. The impossibility result is for constant epsilon, and the possibility result supports epsilon=1/poly(n). The impossibility result is for (possibly) non-uniform AC0, whereas the possibility result holds for uniform AC0. All our impossibility results hold even for the model of bit-fixing sources, where k coincides with the number of non-fixed (i.e., random) bits. We also consider deterministic AC0 extraction from various classes of restricted sources. In particular, for any constant delta>0, we give explicit AC0 extractors for poly(1/delta) independent sources that are each of min-entropy rate delta; and four sources suffice for delta=0.99. Also, we give non-explicit AC0 extractors for bit-fixing sources of entropy rate 1/poly(log(n)) (i.e., having n/poly(log(n)) unfixed bits). This shows that the known analysis of the "restriction method" (for making a circuit constant by fixing as few variables as possible) is tight for AC0 even if the restriction is picked deterministically depending on the circuit.
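In display form (our restatement of the characterization above): for k \le n/\log^{\omega(1)} n,

\[
\mathrm{AC}^0\text{-extraction is possible} \quad\Longleftrightarrow\quad \frac{m}{r} \;\le\; 1 + \mathrm{poly}(\log n)\cdot\frac{k}{n},
\]

so non-trivial extraction (m \ge r+1) is possible if and only if k \cdot r > n/\mathrm{poly}(\log n).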

Cite as

Oded Goldreich, Emanuele Viola, and Avi Wigderson. On Randomness Extraction in AC0. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 601-668, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{goldreich_et_al:LIPIcs.CCC.2015.601,
  author =	{Goldreich, Oded and Viola, Emanuele and Wigderson, Avi},
  title =	{{On Randomness Extraction in AC0}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{601--668},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.601},
  URN =		{urn:nbn:de:0030-drops-50515},
  doi =		{10.4230/LIPIcs.CCC.2015.601},
  annote =	{Keywords: AC0, randomness extractors, general min-entropy sources, block sources, bit-fixing sources, multiple-source extraction}
}
