LIPIcs, Volume 200

36th Computational Complexity Conference (CCC 2021)




Event

CCC 2021, July 20-23, 2021, Toronto, Ontario, Canada (Virtual Conference)

Editor

Valentine Kabanets
  • School of Computing Science, Simon Fraser University, Burnaby, BC, Canada

Publication Details

  • published at: 2021-07-08
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-193-1
  • DBLP: db/conf/coco/coco2021


Documents

Document
Complete Volume
LIPIcs, Volume 200, CCC 2021, Complete Volume

Authors: Valentine Kabanets


Abstract
LIPIcs, Volume 200, CCC 2021, Complete Volume

Cite as

36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 1-1290, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@Proceedings{kabanets:LIPIcs.CCC.2021,
  title =	{{LIPIcs, Volume 200, CCC 2021, Complete Volume}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{1--1290},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021},
  URN =		{urn:nbn:de:0030-drops-142732},
  doi =		{10.4230/LIPIcs.CCC.2021},
  annote =	{Keywords: LIPIcs, Volume 200, CCC 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Valentine Kabanets


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{kabanets:LIPIcs.CCC.2021.0,
  author =	{Kabanets, Valentine},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.0},
  URN =		{urn:nbn:de:0030-drops-142745},
  doi =		{10.4230/LIPIcs.CCC.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Rate Amplification and Query-Efficient Distance Amplification for Linear LCC and LDC

Authors: Gil Cohen and Tal Yankovitz


Abstract
The main contribution of this work is a rate amplification procedure for LCC. Our procedure converts any q-query linear LCC, having rate ρ and, say, constant distance, to an asymptotically good LCC with q^poly(1/ρ) queries. Our second contribution is a distance amplification procedure for LDC that converts any linear LDC with distance δ and, say, constant rate to an asymptotically good LDC. The query complexity only suffers a multiplicative overhead that is roughly equal to the query complexity of a length-1/δ asymptotically good LDC. This improves upon the poly(1/δ) overhead obtained by the AEL distance amplification procedure [Alon and Luby, 1996; Alon et al., 1995]. Our work establishes that the construction of asymptotically good LDC and LCC reduces, with a minor overhead in query complexity, to the problem of constructing a vanishing-rate linear LCC and a (rapidly) vanishing-distance linear LDC, respectively.
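For readers less familiar with these objects, a standard informal definition of a locally decodable code, roughly as it is usually stated in this literature (the exact parameters and success threshold used in the paper may differ), is the following:

\[
  C : \{0,1\}^k \to \{0,1\}^n \text{ is a } (q,\delta)\text{-LDC if for every } i \in [k], \text{ every } x, \text{ and every } y \text{ with } \Delta(y, C(x)) \le \delta n,
\]
\[
  \text{a randomized decoder reading at most } q \text{ symbols of } y \text{ outputs } x_i \text{ with probability at least } 2/3.
\]

An LCC is defined analogously, with the decoder required to recover any coordinate of the codeword C(x) itself; the rate is k/n.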

Cite as

Gil Cohen and Tal Yankovitz. Rate Amplification and Query-Efficient Distance Amplification for Linear LCC and LDC. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 1:1-1:57, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{cohen_et_al:LIPIcs.CCC.2021.1,
  author =	{Cohen, Gil and Yankovitz, Tal},
  title =	{{Rate Amplification and Query-Efficient Distance Amplification for Linear LCC and LDC}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{1:1--1:57},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.1},
  URN =		{urn:nbn:de:0030-drops-142750},
  doi =		{10.4230/LIPIcs.CCC.2021.1},
  annote =	{Keywords: Locally decodable codes, Locally correctable codes}
}
Document
An Improved Protocol for the Exactly-N Problem

Authors: Nati Linial and Adi Shraibman


Abstract
In the 3-player exactly-N problem, the players need to decide whether x+y+z = N for inputs x,y,z and fixed N. This is the first problem considered in the multiplayer Number On the Forehead (NOF) model. Even though this is such a basic problem, no progress has been made on it throughout the years. Only recently have explicit protocols been found for the first time, yet no improvement in complexity has been achieved to date. The present paper offers the first improved protocol for the exactly-N problem. This improved protocol also has interesting consequences in additive combinatorics. As we explain below, it yields an improved lower bound on the possible density of corner-free sets in [N]×[N].

Cite as

Nati Linial and Adi Shraibman. An Improved Protocol for the Exactly-N Problem. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 2:1-2:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{linial_et_al:LIPIcs.CCC.2021.2,
  author =	{Linial, Nati and Shraibman, Adi},
  title =	{{An Improved Protocol for the Exactly-N Problem}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{2:1--2:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.2},
  URN =		{urn:nbn:de:0030-drops-142760},
  doi =		{10.4230/LIPIcs.CCC.2021.2},
  annote =	{Keywords: Communication complexity, Number-On-the-Forehead, Corner-free sets}
}
Document
Proof Complexity of Natural Formulas via Communication Arguments

Authors: Dmitry Itsykson and Artur Riazanov


Abstract
A canonical communication problem Search(φ) is defined for every unsatisfiable CNF φ: an assignment to the variables of φ is partitioned among the communicating parties, and they are to find a clause of φ falsified by this assignment. Lower bounds on the randomized k-party communication complexity of Search(φ) in the number-on-forehead (NOF) model imply tree-size lower bounds, rank lower bounds, and size-space tradeoffs for the formula φ in the semantic proof system T^{cc}(k,c) that operates with proof lines that can be computed by a k-party randomized communication protocol using at most c bits of communication [Göös and Pitassi, 2014]. All known lower bounds on Search(φ) (e.g. [Beame et al., 2007; Göös and Pitassi, 2014; Russell Impagliazzo et al., 1994]) are realized on ad-hoc formulas φ (i.e. they were introduced specifically for these lower bounds). We introduce a new communication complexity approach that allows establishing proof complexity lower bounds for natural formulas. First, we demonstrate our approach for two-party communication and apply it to the proof system Res(⊕) that operates with disjunctions of linear equalities over 𝔽₂ [Dmitry Itsykson and Dmitry Sokolov, 2014]. Let a formula PM_G encode that a graph G has a perfect matching. If G has an odd number of vertices, then PM_G has a tree-like Res(⊕)-refutation of polynomial size [Dmitry Itsykson and Dmitry Sokolov, 2014]. It was unknown whether this is the case for graphs with an even number of vertices. Using our approach we resolve this question and show a 2^{Ω(n)} lower bound on the size of tree-like Res(⊕)-refutations of PM_{K_{n+2,n}}. Then we apply our approach to k-party communication complexity in the NOF model and obtain an Ω((1/k) ⋅ 2^{n/2k - 3k/2}) lower bound on the randomized k-party communication complexity of Search(BPHP^{M}_{2ⁿ}) w.r.t. some natural partition of the variables, where BPHP^{M}_{2ⁿ} is the bit pigeonhole principle and M = 2ⁿ+2^{n(1-1/k)}. In particular, our result implies that the bit pigeonhole principle requires exponential-size tree-like Th(k) proofs, where Th(k) is the semantic proof system operating with polynomial inequalities of degree at most k and k = 𝒪(log^{1-ε} n) for some ε > 0. We also show that BPHP^{2ⁿ+1}_{2ⁿ} superpolynomially separates tree-like Th(log^{1-ε} m) from tree-like Th(log m), where m is the number of variables in the refuted formula.

Cite as

Dmitry Itsykson and Artur Riazanov. Proof Complexity of Natural Formulas via Communication Arguments. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 3:1-3:34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{itsykson_et_al:LIPIcs.CCC.2021.3,
  author =	{Itsykson, Dmitry and Riazanov, Artur},
  title =	{{Proof Complexity of Natural Formulas via Communication Arguments}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{3:1--3:34},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.3},
  URN =		{urn:nbn:de:0030-drops-142773},
  doi =		{10.4230/LIPIcs.CCC.2021.3},
  annote =	{Keywords: bit pigeonhole principle, disjointness, multiparty communication complexity, perfect matching, proof complexity, randomized communication complexity, Resolution over linear equations, tree-like proofs}
}
Document
A Lower Bound on Determinantal Complexity

Authors: Mrinal Kumar and Ben Lee Volk


Abstract
The determinantal complexity of a polynomial P ∈ 𝔽[x₁, …, x_n] over a field 𝔽 is the dimension of the smallest matrix M whose entries are affine functions in 𝔽[x₁, …, x_n] such that P = Det(M). We prove that the determinantal complexity of the polynomial ∑_{i = 1}^n x_i^n is at least 1.5n - 3. For every n-variate polynomial of degree d, the determinantal complexity is trivially at least d, and it is a long-standing open problem to prove a lower bound that is superlinear in max{n,d}. Our result is the first lower bound for any explicit polynomial that is larger than max{n,d} by a constant factor, and it improves upon the prior best bound of n + 1, proved by Alper, Bogart and Velasco [Jarod Alper et al., 2017] for the same polynomial.
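As a small worked illustration of the definition above (a generic example, not taken from the paper): the determinantal complexity of x₁x₂ - x₃ is at most 2, since

\[
  x_1 x_2 - x_3 \;=\; \det \begin{pmatrix} x_1 & x_3 \\ 1 & x_2 \end{pmatrix},
\]

and every entry of this 2×2 matrix is an affine function of x₁, x₂, x₃.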

Cite as

Mrinal Kumar and Ben Lee Volk. A Lower Bound on Determinantal Complexity. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{kumar_et_al:LIPIcs.CCC.2021.4,
  author =	{Kumar, Mrinal and Volk, Ben Lee},
  title =	{{A Lower Bound on Determinantal Complexity}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{4:1--4:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.4},
  URN =		{urn:nbn:de:0030-drops-142781},
  doi =		{10.4230/LIPIcs.CCC.2021.4},
  annote =	{Keywords: Determinantal Complexity, Algebraic Circuits, Lower Bounds, Singular Variety}
}
Document
Optimal Tiling of the Euclidean Space Using Permutation-Symmetric Bodies

Authors: Mark Braverman and Dor Minzer


Abstract
What is the least surface area of a permutation-symmetric body B whose ℤⁿ translations tile ℝⁿ? Since any such body must have volume 1, the isoperimetric inequality implies that its surface area must be at least Ω(√n). Remarkably, Kindler et al. showed that for general bodies B this is tight, i.e. that there is a tiling body of ℝⁿ whose surface area is O(√n). In theoretical computer science, the tiling problem is intimately related to the study of parallel repetition theorems (which are an important component in PCPs), and more specifically to the question of whether a "strong version" of the parallel repetition theorem holds. Raz showed, using the odd cycle game, that strong parallel repetition fails in general, and subsequently these ideas were used to construct non-trivial tilings of ℝⁿ. In this paper, motivated by the study of a symmetric parallel repetition, we consider the permutation-symmetric variant of the tiling problem in ℝⁿ. We show that any permutation-symmetric body that tiles ℝⁿ must have surface area at least Ω(n/√{log n}), and that this bound is tight, i.e. that there is a permutation-symmetric tiling body of ℝⁿ with surface area O(n/√{log n}). We also give matching bounds for the value of the symmetric parallel repetition of Raz’s odd cycle game. Our result suggests that while strong parallel repetition fails in general, there may be important special cases where it still applies.
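A simple point of reference for the bounds quoted above (a standard observation, not a contribution of the paper): the trivial permutation-symmetric tiling body is the unit cube,

\[
  B = [0,1)^n, \qquad \text{which tiles } \mathbb{R}^n \text{ by } \mathbb{Z}^n \text{ translations and has surface area } 2n,
\]

a factor of roughly √n above the Ω(√n) isoperimetric lower bound attained by general (non-symmetric) bodies, and a factor of roughly √(log n) above the Θ(n/√{log n}) bound established here for permutation-symmetric bodies.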

Cite as

Mark Braverman and Dor Minzer. Optimal Tiling of the Euclidean Space Using Permutation-Symmetric Bodies. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 5:1-5:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{braverman_et_al:LIPIcs.CCC.2021.5,
  author =	{Braverman, Mark and Minzer, Dor},
  title =	{{Optimal Tiling of the Euclidean Space Using Permutation-Symmetric Bodies}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{5:1--5:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.5},
  URN =		{urn:nbn:de:0030-drops-142796},
  doi =		{10.4230/LIPIcs.CCC.2021.5},
  annote =	{Keywords: PCP, Parallel Repetition, Tilings}
}
Document
On the Power and Limitations of Branch and Cut

Authors: Noah Fleming, Mika Göös, Russell Impagliazzo, Toniann Pitassi, Robert Robere, Li-Yang Tan, and Avi Wigderson


Abstract
The Stabbing Planes proof system [Paul Beame et al., 2018] was introduced to model the reasoning carried out in practical mixed integer programming solvers. As a proof system, it is powerful enough to simulate Cutting Planes and to refute the Tseitin formulas - certain unsatisfiable systems of linear equations mod 2 - which are canonical hard examples for many algebraic proof systems. In a recent (and surprising) result, Dadush and Tiwari [Daniel Dadush and Samarth Tiwari, 2020] showed that these short refutations of the Tseitin formulas could be translated into quasi-polynomial size and depth Cutting Planes proofs, refuting a long-standing conjecture. This translation raises several interesting questions. First, whether all Stabbing Planes proofs can be efficiently simulated by Cutting Planes. This would allow for the substantial analysis done on the Cutting Planes system to be lifted to practical mixed integer programming solvers. Second, whether the quasi-polynomial depth of these proofs is inherent to Cutting Planes. In this paper we make progress towards answering both of these questions. First, we show that any Stabbing Planes proof with bounded coefficients (SP*) can be translated into Cutting Planes. As a consequence of the known lower bounds for Cutting Planes, this establishes the first exponential lower bounds on SP*. Using this translation, we extend the result of Dadush and Tiwari to show that Cutting Planes has short refutations of any unsatisfiable system of linear equations over a finite field. Like the Cutting Planes proofs of Dadush and Tiwari, our refutations also incur a quasi-polynomial blow-up in depth, and we conjecture that this is inherent. As a step towards this conjecture, we develop a new geometric technique for proving lower bounds on the depth of Cutting Planes proofs. This allows us to establish the first lower bounds on the depth of Semantic Cutting Planes proofs of the Tseitin formulas.

Cite as

Noah Fleming, Mika Göös, Russell Impagliazzo, Toniann Pitassi, Robert Robere, Li-Yang Tan, and Avi Wigderson. On the Power and Limitations of Branch and Cut. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 6:1-6:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{fleming_et_al:LIPIcs.CCC.2021.6,
  author =	{Fleming, Noah and G\"{o}\"{o}s, Mika and Impagliazzo, Russell and Pitassi, Toniann and Robere, Robert and Tan, Li-Yang and Wigderson, Avi},
  title =	{{On the Power and Limitations of Branch and Cut}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{6:1--6:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.6},
  URN =		{urn:nbn:de:0030-drops-142809},
  doi =		{10.4230/LIPIcs.CCC.2021.6},
  annote =	{Keywords: Proof Complexity, Integer Programming, Cutting Planes, Branch and Cut, Stabbing Planes}
}
Document
Separating ABPs and Some Structured Formulas in the Non-Commutative Setting

Authors: Prerona Chatterjee


Abstract
The motivating question for this work is a long standing open problem, posed by Nisan [Noam Nisan, 1991], regarding the relative powers of algebraic branching programs (ABPs) and formulas in the non-commutative setting. Even though the general question remains open, we make some progress towards its resolution. To that effect, we generalise the notion of ordered polynomials in the non-commutative setting (defined by Hrubeš, Wigderson and Yehudayoff [Hrubeš et al., 2011]) to define abecedarian polynomials and models that naturally compute them. Our main contribution is a possible new approach towards resolving the VF_{nc} vs VBP_{nc} question, via lower bounds against abecedarian formulas. In particular, we show the following. There is an explicit n²-variate degree d abecedarian polynomial f_{n,d}(𝐱) such that - f_{n, d}(𝐱) can be computed by an abecedarian ABP of size O(nd); - any abecedarian formula computing f_{n, log n}(𝐱) must have size at least n^{Ω(log log n)}. We also show that a super-polynomial lower bound against abecedarian formulas for f_{log n, n}(𝐱) would separate the powers of formulas and ABPs in the non-commutative setting.

Cite as

Prerona Chatterjee. Separating ABPs and Some Structured Formulas in the Non-Commutative Setting. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{chatterjee:LIPIcs.CCC.2021.7,
  author =	{Chatterjee, Prerona},
  title =	{{Separating ABPs and Some Structured Formulas in the Non-Commutative Setting}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.7},
  URN =		{urn:nbn:de:0030-drops-142812},
  doi =		{10.4230/LIPIcs.CCC.2021.7},
  annote =	{Keywords: Non-Commutative Formulas, Lower Bound, Separating ABPs and Formulas}
}
Document
The (Generalized) Orthogonality Dimension of (Generalized) Kneser Graphs: Bounds and Applications

Authors: Alexander Golovnev and Ishay Haviv


Abstract
The orthogonality dimension of a graph G = (V,E) over a field 𝔽 is the smallest integer t for which there exists an assignment of a vector u_v ∈ 𝔽^t with ⟨ u_v,u_v ⟩ ≠ 0 to every vertex v ∈ V, such that ⟨ u_v, u_{v'} ⟩ = 0 whenever v and v' are adjacent vertices in G. The study of the orthogonality dimension of graphs is motivated by various applications in information theory and in theoretical computer science. The contribution of the present work is two-fold. First, we prove that there exists a constant c such that for every sufficiently large integer t, it is NP-hard to decide whether the orthogonality dimension of an input graph over ℝ is at most t or at least 3t/2-c. At the heart of the proof lies a geometric result, which might be of independent interest, on a generalization of the orthogonality dimension parameter for the family of Kneser graphs, analogously to a long-standing conjecture of Stahl (J. Comb. Theo. Ser. B, 1976). Second, we study the smallest possible orthogonality dimension over finite fields of the complement of graphs that do not contain certain fixed subgraphs. In particular, we provide an explicit construction of triangle-free n-vertex graphs whose complement has orthogonality dimension over the binary field at most n^{1-δ} for some constant δ > 0. Our results involve constructions from the family of generalized Kneser graphs and they are motivated by the rigidity approach to circuit lower bounds. We use them to answer a couple of questions raised by Codenotti, Pudlák, and Resta (Theor. Comput. Sci., 2000), and in particular, to disprove their Odd Alternating Cycle Conjecture over every finite field.
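Two quick sanity checks of the definition over ℝ (standard examples, not from the paper), writing orthdim for the parameter defined above: for the complete graph K_t, the vectors of adjacent vertices must be pairwise orthogonal and nonzero, so

\[
  \mathrm{orthdim}_{\mathbb{R}}(K_t) = t, \qquad \text{witnessed by assigning the standard basis vectors } e_1, \dots, e_t,
\]

while the empty graph on any number of vertices has orthogonality dimension 1, since every vertex may receive the same nonzero vector.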

Cite as

Alexander Golovnev and Ishay Haviv. The (Generalized) Orthogonality Dimension of (Generalized) Kneser Graphs: Bounds and Applications. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 8:1-8:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{golovnev_et_al:LIPIcs.CCC.2021.8,
  author =	{Golovnev, Alexander and Haviv, Ishay},
  title =	{{The (Generalized) Orthogonality Dimension of (Generalized) Kneser Graphs: Bounds and Applications}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{8:1--8:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.8},
  URN =		{urn:nbn:de:0030-drops-142829},
  doi =		{10.4230/LIPIcs.CCC.2021.8},
  annote =	{Keywords: Orthogonality dimension, minrank, rigidity, hardness of approximation, circuit complexity, chromatic number, Kneser graphs}
}
Document
Shadows of Newton Polytopes

Authors: Pavel Hrubeš and Amir Yehudayoff


Abstract
We define the shadow complexity of a polytope P as the maximum number of vertices in a linear projection of P to the plane. We describe connections to algebraic complexity and to parametrized optimization. We also provide several basic examples and constructions, and develop tools for bounding shadow complexity.

Cite as

Pavel Hrubeš and Amir Yehudayoff. Shadows of Newton Polytopes. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 9:1-9:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{hrubes_et_al:LIPIcs.CCC.2021.9,
  author =	{Hrube\v{s}, Pavel and Yehudayoff, Amir},
  title =	{{Shadows of Newton Polytopes}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{9:1--9:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.9},
  URN =		{urn:nbn:de:0030-drops-142833},
  doi =		{10.4230/LIPIcs.CCC.2021.9},
  annote =	{Keywords: Newton polytope, Monotone arithmetic circuit}
}
Document
Fractional Pseudorandom Generators from Any Fourier Level

Authors: Eshan Chattopadhyay, Jason Gaitonde, Chin Ho Lee, Shachar Lovett, and Abhishek Shetty


Abstract
We prove new results on the polarizing random walk framework introduced in recent works of Chattopadhyay et al. [Chattopadhyay et al., 2019; Eshan Chattopadhyay et al., 2019] that exploit L₁ Fourier tail bounds for classes of Boolean functions to construct pseudorandom generators (PRGs). We show that given a bound on the k-th level of the Fourier spectrum, one can construct a PRG with a seed length whose quality scales with k. This interpolates between previous works, which either require Fourier bounds on all levels [Chattopadhyay et al., 2019], or have polynomial dependence on the error parameter in the seed length [Eshan Chattopadhyay et al., 2019], and thus answers an open question in [Eshan Chattopadhyay et al., 2019]. As an example, we show that for polynomial error, Fourier bounds on the first O(log n) levels are sufficient to recover the seed length in [Chattopadhyay et al., 2019], which requires bounds on the entire tail. We obtain our results by an alternate analysis of fractional PRGs using Taylor’s theorem and bounding the degree-k Lagrange remainder term using multilinearity and random restrictions. Interestingly, our analysis relies only on the level-k unsigned Fourier sum, which is potentially a much smaller quantity than the L₁ notion in previous works. By generalizing a connection established in [Chattopadhyay et al., 2020], we give a new reduction from constructing PRGs to proving correlation bounds. Finally, using these improvements we show how to obtain a PRG for 𝔽₂ polynomials with seed length close to the state-of-the-art construction due to Viola [Emanuele Viola, 2009].
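For concreteness, the level-k Fourier quantities referred to above are the following (in the standard normalization used in this line of work; the paper's exact notation may differ):

\[
  L_{1,k}(f) \;=\; \sum_{S \subseteq [n],\, |S| = k} \big|\widehat{f}(S)\big|,
\]

and an L₁ Fourier tail bound for a class of Boolean functions is an upper bound on L_{1,k}(f) for every function f in the class, either for all levels k at once or, as exploited here, for a single level.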

Cite as

Eshan Chattopadhyay, Jason Gaitonde, Chin Ho Lee, Shachar Lovett, and Abhishek Shetty. Fractional Pseudorandom Generators from Any Fourier Level. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 10:1-10:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{chattopadhyay_et_al:LIPIcs.CCC.2021.10,
  author =	{Chattopadhyay, Eshan and Gaitonde, Jason and Lee, Chin Ho and Lovett, Shachar and Shetty, Abhishek},
  title =	{{Fractional Pseudorandom Generators from Any Fourier Level}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{10:1--10:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.10},
  URN =		{urn:nbn:de:0030-drops-142843},
  doi =		{10.4230/LIPIcs.CCC.2021.10},
  annote =	{Keywords: Derandomization, pseudorandomness, pseudorandom generators, Fourier analysis}
}
Document
Deterministic Identity Testing Paradigms for Bounded Top-Fanin Depth-4 Circuits

Authors: Pranjal Dutta, Prateek Dwivedi, and Nitin Saxena


Abstract
Polynomial Identity Testing (PIT) is a fundamental computational problem. The famous depth-4 reduction (Agrawal & Vinay, FOCS'08) has made PIT for depth-4 circuits an enticing pursuit. The largely open special cases of sum-product-of-sum-of-univariates (Σ^[k] Π Σ ∧) and sum-product-of-constant-degree-polynomials (Σ^[k] Π Σ Π^[δ]), for constants k, δ, have been a source of many great ideas in the last two decades. For example: depth-3 ideas (Dvir & Shpilka, STOC'05; Kayal & Saxena, CCC'06; Saxena & Seshadhri, FOCS'10, STOC'11); depth-4 ideas (Beecken, Mittmann & Saxena, ICALP'11; Saha, Saxena & Saptharishi, Comput.Compl.'13; Forbes, FOCS'15; Kumar & Saraf, CCC'16); geometric Sylvester-Gallai ideas (Kayal & Saraf, FOCS'09; Shpilka, STOC'19; Peleg & Shpilka, CCC'20, STOC'21). In this work we solve two of the basic underlying open problems. We give the first polynomial-time PIT for Σ^[k] Π Σ ∧. Further, we give the first quasipolynomial-time blackbox PIT for both Σ^[k] Π Σ ∧ and Σ^[k] Π Σ Π^[δ]. No subexponential-time algorithm was known prior to this work (even if k = δ = 3). A key technical ingredient in all three algorithms is how the logarithmic derivative, and its power series, modify the top Π-gate to ∧.
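The identity behind the last sentence can be illustrated generically as follows (a sketch of the named technique, not the paper's precise formal development). The logarithmic derivative turns a top product gate into a sum,

\[
  \operatorname{dlog}_x\Big(\prod_{i=1}^{m} f_i\Big) \;=\; \sum_{i=1}^{m} \frac{\partial_x f_i}{f_i},
\]

and the power series log(1+z) = z - z²/2 + z³/3 - ⋯, truncated modulo a suitable ideal, lets each resulting term be expressed using powering (∧) gates.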

Cite as

Pranjal Dutta, Prateek Dwivedi, and Nitin Saxena. Deterministic Identity Testing Paradigms for Bounded Top-Fanin Depth-4 Circuits. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 11:1-11:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{dutta_et_al:LIPIcs.CCC.2021.11,
  author =	{Dutta, Pranjal and Dwivedi, Prateek and Saxena, Nitin},
  title =	{{Deterministic Identity Testing Paradigms for Bounded Top-Fanin Depth-4 Circuits}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{11:1--11:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.11},
  URN =		{urn:nbn:de:0030-drops-142857},
  doi =		{10.4230/LIPIcs.CCC.2021.11},
  annote =	{Keywords: Polynomial identity testing, hitting set, depth-4 circuits}
}
Document
Robustly Self-Ordered Graphs: Constructions and Applications to Property Testing

Authors: Oded Goldreich and Avi Wigderson


Abstract
A graph G is called self-ordered (a.k.a. asymmetric) if the identity permutation is its only automorphism. Equivalently, there is a unique isomorphism from G to any graph that is isomorphic to G. We say that G = (V,E) is robustly self-ordered if the size of the symmetric difference between E and the edge-set of the graph obtained by permuting V using any permutation π:V → V is proportional to the number of non-fixed-points of π. In this work, we initiate the study of the structure, construction and utility of robustly self-ordered graphs. We show that robustly self-ordered bounded-degree graphs exist (in abundance), and that they can be constructed efficiently, in a strong sense. Specifically, given the index of a vertex in such a graph, it is possible to find all its neighbors in polynomial time (i.e., in time that is polylogarithmic in the size of the graph). We provide two constructions that are very different in tools and structure. The first, a direct construction, is based on proving a sufficient condition for robust self-ordering, which requires that an auxiliary graph is expanding. The second construction is iterative, boosting the property of robust self-ordering from smaller to larger graphs. Structurally, the first construction always yields expanding graphs, while the second construction may produce graphs that have many tiny (sub-logarithmic) connected components. We also consider graphs of unbounded degree, seeking correspondingly unbounded robustness parameters. We again demonstrate that such graphs (of linear degree) exist (in abundance), and that they can be constructed efficiently, in a strong sense. This turns out to require very different tools. Specifically, we show that the construction of such graphs reduces to the construction of non-malleable two-source extractors (with very weak parameters but with some additional natural features). We demonstrate that robustly self-ordered bounded-degree graphs are useful towards obtaining lower bounds on the query complexity of testing graph properties in both the bounded-degree and the dense graph models. Indeed, their robustness offers efficient, local and distance-preserving reductions from testing problems on ordered structures (like sequences) to the unordered (effectively unlabeled) graphs. One of the results that we obtain, via such a reduction, is a subexponential separation between the query complexities of testing and tolerant testing of graph properties in the bounded-degree graph model.

Cite as

Oded Goldreich and Avi Wigderson. Robustly Self-Ordered Graphs: Constructions and Applications to Property Testing. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 12:1-12:74, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{goldreich_et_al:LIPIcs.CCC.2021.12,
  author =	{Goldreich, Oded and Wigderson, Avi},
  title =	{{Robustly Self-Ordered Graphs: Constructions and Applications to Property Testing}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{12:1--12:74},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.12},
  URN =		{urn:nbn:de:0030-drops-142867},
  doi =		{10.4230/LIPIcs.CCC.2021.12},
  annote =	{Keywords: Asymmetric graphs, expanders, testing graph properties, two-source extractors, non-malleable extractors, coding theory, tolerant testing, random graphs}
}
Document
Barriers for Recent Methods in Geodesic Optimization

Authors: W. Cole Franks and Philipp Reichenbach


Abstract
We study a class of optimization problems including matrix scaling, matrix balancing, multidimensional array scaling, operator scaling, and tensor scaling that arise frequently in theory and in practice. Some of these problems, such as matrix and array scaling, are convex in the Euclidean sense, but others such as operator scaling and tensor scaling are geodesically convex on a different Riemannian manifold. Trust region methods, which include box-constrained Newton’s method, are known to produce high precision solutions very quickly for matrix scaling and matrix balancing (Cohen et al., FOCS 2017; Allen-Zhu et al., FOCS 2017), and result in polynomial time algorithms for some geodesically convex problems like operator scaling (Garg et al., STOC 2018; Bürgisser et al., FOCS 2019). One is led to ask whether these guarantees also hold for multidimensional array scaling and tensor scaling. We show that this is not the case by exhibiting instances with exponential diameter bound: we construct polynomial-size instances of 3-dimensional array scaling and 3-tensor scaling whose approximate solutions all have doubly exponential condition number. Moreover, we study convex-geometric notions of complexity known as margin and gap, which are used to bound the running times of all existing optimization algorithms for such problems. We show that margin and gap are exponentially small for several problems including array scaling, tensor scaling and polynomial scaling. Our results suggest that it is impossible to prove polynomial running time bounds for tensor scaling based on diameter bounds alone. Therefore, our work motivates the search for analogues of more sophisticated algorithms, such as interior point methods, for geodesically convex optimization that do not rely on polynomial diameter bounds.
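As background for the matrix scaling problem mentioned above, here is a minimal sketch of classical Sinkhorn scaling (alternately normalizing rows and columns of a positive matrix towards a doubly stochastic one). This is the simple first-order baseline, not the trust-region or box-constrained Newton methods that the paper analyzes.

import numpy as np

def sinkhorn(A, iterations=1000):
    """Scale a strictly positive matrix A towards doubly stochastic form."""
    B = A.astype(float).copy()
    for _ in range(iterations):
        B /= B.sum(axis=1, keepdims=True)   # normalize each row to sum 1
        B /= B.sum(axis=0, keepdims=True)   # normalize each column to sum 1
    return B

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = sinkhorn(A)
print(B.sum(axis=0), B.sum(axis=1))         # both close to (1, 1)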

Cite as

W. Cole Franks and Philipp Reichenbach. Barriers for Recent Methods in Geodesic Optimization. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 13:1-13:54, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{franks_et_al:LIPIcs.CCC.2021.13,
  author =	{Franks, W. Cole and Reichenbach, Philipp},
  title =	{{Barriers for Recent Methods in Geodesic Optimization}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{13:1--13:54},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.13},
  URN =		{urn:nbn:de:0030-drops-142879},
  doi =		{10.4230/LIPIcs.CCC.2021.13},
  annote =	{Keywords: Geodesically Convex Optimization, Weight Margin, Moment Polytope, Diameter Bounds, Tensor Scaling, Matrix Scaling}
}
Document
Communication Complexity with Defective Randomness

Authors: Marshall Ball, Oded Goldreich, and Tal Malkin


Abstract
Starting with the two standard models of randomized communication complexity, we study the communication complexity of functions when the protocol has access to a defective source of randomness. Specifically, we consider both the public-randomness and private-randomness cases, while replacing the commonly postulated perfect randomness with distributions over 𝓁-bit strings that have min-entropy at least k ≤ 𝓁. We present general upper and lower bounds on the communication complexity in these cases, where the bounds are typically linear in 𝓁-k and also depend on the size of the fooling set for the function being computed and on its standard randomized complexity.
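For reference, the min-entropy requirement on the defective randomness source is the standard one:

\[
  H_\infty(X) \;=\; \min_{x \in \{0,1\}^{\ell}} \log_2 \frac{1}{\Pr[X = x]} \;\ge\; k,
\]

that is, no single 𝓁-bit outcome of the source has probability greater than 2^{-k}; the perfect-randomness setting is recovered when k = 𝓁.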

Cite as

Marshall Ball, Oded Goldreich, and Tal Malkin. Communication Complexity with Defective Randomness. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 14:1-14:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{ball_et_al:LIPIcs.CCC.2021.14,
  author =	{Ball, Marshall and Goldreich, Oded and Malkin, Tal},
  title =	{{Communication Complexity with Defective Randomness}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{14:1--14:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.14},
  URN =		{urn:nbn:de:0030-drops-142886},
  doi =		{10.4230/LIPIcs.CCC.2021.14},
  annote =	{Keywords: Randomized Communication Complexity, Randomness Extraction, Min-Entropy}
}
Document
On the Cut Dimension of a Graph

Authors: Troy Lee, Tongyang Li, Miklos Santha, and Shengyu Zhang


Abstract
Let G = (V,w) be a weighted undirected graph with m edges. The cut dimension of G is the dimension of the span of the characteristic vectors of the minimum cuts of G, viewed as vectors in {0,1}^m. For every n ≥ 2 we show that the cut dimension of an n-vertex graph is at most 2n-3, and construct graphs realizing this bound. The cut dimension was recently defined by Graur et al. [Andrei Graur et al., 2020], who show that the maximum cut dimension of an n-vertex graph is a lower bound on the number of cut queries needed by a deterministic algorithm to solve the minimum cut problem on n-vertex graphs. For every n ≥ 2, Graur et al. exhibit a graph on n vertices with cut dimension at least 3n/2 -2, giving the first lower bound larger than n on the deterministic cut query complexity of computing mincut. We observe that the cut dimension is even a lower bound on the number of linear queries needed by a deterministic algorithm to solve mincut, where a linear query can ask any vector x ∈ ℝ^{binom(n,2)} and receives the answer w^T x. Our results thus show a lower bound of 2n-3 on the number of linear queries needed by a deterministic algorithm to solve minimum cut on n-vertex graphs, and imply that one cannot show a lower bound larger than this via the cut dimension. We further introduce a generalization of the cut dimension which we call the 𝓁₁-approximate cut dimension. The 𝓁₁-approximate cut dimension is also a lower bound on the number of linear queries needed by a deterministic algorithm to compute minimum cut. It is always at least as large as the cut dimension, and we construct an infinite family of graphs on n = 3k+1 vertices with 𝓁₁-approximate cut dimension 2n-2, showing that it can be strictly larger than the cut dimension.
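The definition of the cut dimension lends itself to a direct brute-force computation on small graphs. The following sketch (not from the paper) enumerates all cuts of a small weighted graph, keeps those of minimum weight, and returns the rank of their edge-characteristic vectors.

from itertools import combinations
import numpy as np

def cut_dimension(n, weights):
    """weights: dict mapping frozenset({u, v}) -> weight, over vertices 0..n-1."""
    edges = sorted(weights, key=sorted)
    w = np.array([weights[e] for e in edges])
    def cut_vector(S):
        # characteristic vector of the cut: 1 on edges with exactly one endpoint in S
        return np.array([1.0 if len(S & e) == 1 else 0.0 for e in edges])
    cuts = []
    for r in range(1, n):
        for S in combinations(range(n), r):
            S = set(S)
            if 0 not in S:          # S and its complement define the same cut
                continue
            vec = cut_vector(S)
            cuts.append((float(vec @ w), vec))
    min_weight = min(c[0] for c in cuts)
    min_cut_vectors = [vec for weight, vec in cuts if abs(weight - min_weight) < 1e-9]
    return np.linalg.matrix_rank(np.array(min_cut_vectors))

# Example: the 4-cycle with unit weights (its minimum cut weight is 2).
C4 = {frozenset({0, 1}): 1.0, frozenset({1, 2}): 1.0,
      frozenset({2, 3}): 1.0, frozenset({3, 0}): 1.0}
print(cut_dimension(4, C4))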

Cite as

Troy Lee, Tongyang Li, Miklos Santha, and Shengyu Zhang. On the Cut Dimension of a Graph. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 15:1-15:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{lee_et_al:LIPIcs.CCC.2021.15,
  author =	{Lee, Troy and Li, Tongyang and Santha, Miklos and Zhang, Shengyu},
  title =	{{On the Cut Dimension of a Graph}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{15:1--15:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.15},
  URN =		{urn:nbn:de:0030-drops-142890},
  doi =		{10.4230/LIPIcs.CCC.2021.15},
  annote =	{Keywords: Query complexity, submodular function minimization, cut dimension}
}
Document
On p-Group Isomorphism: Search-To-Decision, Counting-To-Decision, and Nilpotency Class Reductions via Tensors

Authors: Joshua A. Grochow and Youming Qiao


Abstract
In this paper we study some classical complexity-theoretic questions regarding Group Isomorphism (GpI). We focus on p-groups (groups of prime power order) with odd p, which are believed to be a bottleneck case for GpI, and work in the model of matrix groups over finite fields. Our main results are as follows. - Although search-to-decision and counting-to-decision reductions have been known for over four decades for Graph Isomorphism (GI), they had remained open for GpI, as explicitly asked by Arvind & Torán (Bull. EATCS, 2005). Extending methods from Tensor Isomorphism (Grochow & Qiao, ITCS 2021), we show such reductions within p-groups of class 2 and exponent p, running in moderately exponential time. - Despite the widely held belief that p-groups of class 2 and exponent p are the hardest cases of GpI, no reduction to these groups from any larger class of groups was known. Again using methods from Tensor Isomorphism (ibid.), we show the first such reduction, namely from isomorphism testing of p-groups of "small" class and exponent p to those of class two and exponent p. For the first results, our main innovation is to develop linear-algebraic analogues of classical graph coloring gadgets, a key technique in studying the structural complexity of GI. Unlike the graph coloring gadgets, which support restricting to various subgroups of the symmetric group, the problems we study require restricting to various subgroups of the general linear group, which entails significantly different and more complicated gadgets. The analysis of one of our gadgets relies on a classical result from group theory regarding random generation of classical groups (Kantor & Lubotzky, Geom. Dedicata, 1990). For the nilpotency class reduction, we combine a runtime analysis of the Lazard Correspondence with Tensor Isomorphism-completeness results (Grochow & Qiao, ibid.).

Cite as

Joshua A. Grochow and Youming Qiao. On p-Group Isomorphism: Search-To-Decision, Counting-To-Decision, and Nilpotency Class Reductions via Tensors. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 16:1-16:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{grochow_et_al:LIPIcs.CCC.2021.16,
  author =	{Grochow, Joshua A. and Qiao, Youming},
  title =	{{On p-Group Isomorphism: Search-To-Decision, Counting-To-Decision, and Nilpotency Class Reductions via Tensors}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{16:1--16:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.16},
  URN =		{urn:nbn:de:0030-drops-142905},
  doi =		{10.4230/LIPIcs.CCC.2021.16},
  annote =	{Keywords: group isomorphism, search-to-decision reduction, counting-to-decision reduction, nilpotent group isomorphism, p-group isomorphism, tensor isomorphism}
}
Document
Branching Programs with Bounded Repetitions and Flow Formulas

Authors: Anastasia Sofronova and Dmitry Sokolov


Abstract
Restricted branching programs capture various complexity measures like space in Turing machines or length of proofs in proof systems. In this paper, we focus on an application to proof complexity that was discovered by Lovász et al. [László Lovász et al., 1995], who showed the equivalence between regular Resolution and read-once branching programs for the "unsatisfied clause search problem" (Search_φ). This connection is widely used, in particular in the recent breakthrough result about the Clique problem in regular Resolution by Atserias et al. [Albert Atserias et al., 2018]. We study branching programs with bounded repetitions, the so-called (1,+k)-BPs (Sieling [Detlef Sieling, 1996]), in application to the Search_φ problem. On the one hand, this is a natural generalization of read-once branching programs. On the other hand, this model gives a powerful proof system that can efficiently certify the unsatisfiability of a wide class of formulas that is hard for Resolution (Knop [Alexander Knop, 2017]). We deal with Search_φ, which is "relatively easy" compared to all known hard examples for (1,+k)-BPs. We introduce the first technique for proving exponential lower bounds for (1,+k)-BPs on Search_φ. To do so, we combine a well-known technique for proving lower bounds on the size of branching programs [Detlef Sieling, 1996; Detlef Sieling and Ingo Wegener, 1994; Stasys Jukna and Alexander A. Razborov, 1998] with a modification of the "closure" technique [Michael Alekhnovich et al., 2004; Michael Alekhnovich and Alexander A. Razborov, 2003]. In contrast with most Resolution lower bounds, our technique uses not only "local" properties of the formula, but also a "global" structure. Our hard examples are based on the Flow formulas introduced in [Michael Alekhnovich and Alexander A. Razborov, 2003].

Cite as

Anastasia Sofronova and Dmitry Sokolov. Branching Programs with Bounded Repetitions and Flow Formulas. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 17:1-17:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{sofronova_et_al:LIPIcs.CCC.2021.17,
  author =	{Sofronova, Anastasia and Sokolov, Dmitry},
  title =	{{Branching Programs with Bounded Repetitions and Flow Formulas}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{17:1--17:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.17},
  URN =		{urn:nbn:de:0030-drops-142915},
  doi =		{10.4230/LIPIcs.CCC.2021.17},
  annote =	{Keywords: proof complexity, branching programs, bounded repetitions, lower bounds}
}
Document
A Majority Lemma for Randomised Query Complexity

Authors: Mika Göös and Gilbert Maystre


Abstract
We show that computing the majority of n copies of a Boolean function g has randomised query complexity R(Maj∘gⁿ) = Θ(n⋅R̄_{1/n}(g)). In fact, we show that to obtain a similar result for any composed function f∘gⁿ, it suffices to prove a sufficiently strong form of the result only in the special case g = GapOr.

Cite as

Mika Göös and Gilbert Maystre. A Majority Lemma for Randomised Query Complexity. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 18:1-18:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{goos_et_al:LIPIcs.CCC.2021.18,
  author =	{G\"{o}\"{o}s, Mika and Maystre, Gilbert},
  title =	{{A Majority Lemma for Randomised Query Complexity}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{18:1--18:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.18},
  URN =		{urn:nbn:de:0030-drops-142922},
  doi =		{10.4230/LIPIcs.CCC.2021.18},
  annote =	{Keywords: Query Complexity, Composition, Majority}
}
Document
Hitting Sets and Reconstruction for Dense Orbits in VP_{e} and ΣΠΣ Circuits

Authors: Dori Medini and Amir Shpilka


Abstract
In this paper we study polynomials in VP_{e} (polynomial-sized formulas) and in ΣΠΣ (polynomial-size depth-3 circuits) whose orbits, under the action of the affine group GL^{aff}_n(𝔽) (the action of (A,b) ∈ GL^{aff}_n(𝔽) on a polynomial f ∈ 𝔽[x] is defined as (A,b)∘f = f(A^Tx+b)), are dense in their ambient class. We construct hitting sets and interpolating sets for these orbits as well as give reconstruction algorithms. Specifically, we obtain the following results: 1) For C_n(ℓ_1(x),…,ℓ_n(x)) ≜ Trace(\begin{pmatrix} 𝓁₁(x) & 1 \\ 1 & 0 \end{pmatrix} ⋅ … ⋅ \begin{pmatrix} 𝓁_n(x) & 1 \\ 1 & 0 \end{pmatrix}), where the 𝓁_is are linearly independent linear functions, we construct a polynomial-sized interpolating set, and give a polynomial-time reconstruction algorithm. By a result of Bringmann, Ikenmeyer and Zuiddam, the set of all such polynomials is dense in VP_e [Karl Bringmann et al., 2018]; thus our construction gives the first polynomial-size interpolating set for a dense subclass of VP_e. 2) For polynomials of the form ANF_Δ(𝓁₁(x),…,𝓁_{4^Δ}(x)), where ANF_Δ(x) is the canonical read-once formula in alternating normal form, of depth 2Δ, and the 𝓁_is are linearly independent linear functions, we provide a quasipolynomial-size interpolating set. We also observe that the reconstruction algorithm of [Ankit Gupta et al., 2014] works for all polynomials in this class. This class is also dense in VP_e. 3) Similarly, we give a quasipolynomial-sized hitting set for read-once formulas (not necessarily in alternating normal form) composed with a set of linearly independent linear functions. This gives another dense class in VP_e. 4) We give a quasipolynomial-sized hitting set for polynomials of the form f(𝓁₁(x),…,𝓁_{m}(x)), where f is an m-variate s-sparse polynomial and the 𝓁_is are linearly independent linear functions in n ≥ m variables. This class is dense in ΣΠΣ. 5) For polynomials of the form ∑_{i=1}^{s}∏_{j=1}^{d}𝓁_{i,j}(x), where the 𝓁_{i,j}s are linearly independent linear functions, we construct a polynomial-sized interpolating set. We also observe that the reconstruction algorithm of [Neeraj Kayal and Chandan Saha, 2019] works for every polynomial in the class. This class is dense in ΣΠΣ. As VP = VNC², our results for VP_{e} translate immediately to VP with a quasipolynomial blow up in parameters. If any of our hitting or interpolating sets could be made robust then this would immediately yield a hitting set for the superclass in which the relevant class is dense, and as a consequence also a lower bound for the superclass. Unfortunately, we also prove that the kind of constructions that we have found (which are defined in terms of k-independent polynomial maps) do not necessarily yield robust hitting sets.
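The trace polynomial in item 1 can be evaluated at any point by a single left-to-right product of 2×2 matrices, which loosely mirrors the small-ABP upper bound stated above. A minimal sketch (not the authors' construction), where the inputs are the values of the linear forms 𝓁₁, …, 𝓁_n at a chosen point:

import numpy as np

def continuant_trace(values):
    """Evaluate Trace([[l_1, 1], [1, 0]] ... [[l_n, 1], [1, 0]]) given the l_i values."""
    M = np.eye(2)
    for v in values:
        M = M @ np.array([[v, 1.0], [1.0, 0.0]])
    return np.trace(M)

# For n = 2: Trace([[l_1*l_2 + 1, l_1], [l_2, 1]]) = l_1*l_2 + 2.
print(continuant_trace([3.0, 5.0]))   # prints 17.0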

Cite as

Dori Medini and Amir Shpilka. Hitting Sets and Reconstruction for Dense Orbits in VP_{e} and ΣΠΣ Circuits. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 19:1-19:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{medini_et_al:LIPIcs.CCC.2021.19,
  author =	{Medini, Dori and Shpilka, Amir},
  title =	{{Hitting Sets and Reconstruction for Dense Orbits in VP\underline\{e\} and \Sigma\Pi\Sigma Circuits}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{19:1--19:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.19},
  URN =		{urn:nbn:de:0030-drops-142930},
  doi =		{10.4230/LIPIcs.CCC.2021.19},
  annote =	{Keywords: Algebraic complexity, VP, VNP, algebraic circuits, algebraic formula}
}
Document
Variety Evasive Subspace Families

Authors: Zeyu Guo


Abstract
We introduce the problem of constructing explicit variety evasive subspace families. Given a family ℱ of subvarieties of a projective or affine space, a collection ℋ of projective or affine k-subspaces is (ℱ,ε)-evasive if for every 𝒱 ∈ ℱ, all but at most an ε-fraction of W ∈ ℋ intersect every irreducible component of 𝒱 with (at most) the expected dimension. The problem of constructing such an explicit subspace family generalizes both deterministic black-box polynomial identity testing (PIT) and the problem of constructing explicit (weak) lossless rank condensers. Using Chow forms, we construct explicit k-subspace families of polynomial size that are evasive for all varieties of bounded degree in a projective or affine n-space. As one application, we obtain a complete derandomization of Noether’s normalization lemma for varieties of bounded degree in a projective or affine n-space. In another application, we obtain a simple polynomial-time black-box PIT algorithm for depth-4 arithmetic circuits with bounded top fan-in and bottom fan-in that are not in the Sylvester-Gallai configuration, improving and simplifying a result of Gupta (ECCC TR 14-130). As a complement of our explicit construction, we prove a lower bound for the size of k-subspace families that are evasive for degree-d varieties in a projective n-space. When n-k = n^Ω(1), the lower bound is superpolynomial unless d is bounded. The proof uses a dimension-counting argument on Chow varieties that parametrize projective subvarieties.

Cite as

Zeyu Guo. Variety Evasive Subspace Families. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 20:1-20:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{guo:LIPIcs.CCC.2021.20,
  author =	{Guo, Zeyu},
  title =	{{Variety Evasive Subspace Families}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{20:1--20:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.20},
  URN =		{urn:nbn:de:0030-drops-142949},
  doi =		{10.4230/LIPIcs.CCC.2021.20},
  annote =	{Keywords: algebraic complexity, dimension reduction, Noether normalization, polynomial identity testing, pseudorandomness, varieties}
}
Document
A Lower Bound for Polynomial Calculus with Extension Rule

Authors: Yaroslav Alekseev


Abstract
A major proof complexity problem is to prove a superpolynomial lower bound on the length of Frege proofs of arbitrary depth. A more general question is to prove an Extended Frege lower bound. Surprisingly, proving such bounds turns out to be much easier in the algebraic setting. In this paper, we study a proof system that can simulate Extended Frege: an extension of the Polynomial Calculus proof system where we can take a square root and introduce new variables that are equivalent to arbitrary depth algebraic circuits. We prove that an instance of the subset-sum principle, the binary value principle 1 + x₁ + 2 x₂ + … + 2^{n-1} x_n = 0 (BVP_n), requires refutations of exponential bit size over ℚ in this system. Part and Tzameret [Fedor Part and Iddo Tzameret, 2020] proved an exponential lower bound on the size of Res-Lin (Resolution over linear equations [Ran Raz and Iddo Tzameret, 2008]) refutations of BVP_n. We show that our system p-simulates Res-Lin and thus we get an alternative exponential lower bound for the size of Res-Lin refutations of BVP_n.
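
As a tiny sanity check of why BVP_n has no Boolean solution (the left-hand side is always at least 1), here is a brute-force Python check for n = 3 (ours, not from the paper); it only illustrates the statement being refuted, not the proof system itself.

# The binary value principle 1 + x1 + 2*x2 + ... + 2^(n-1)*xn = 0 has no
# {0,1} solution, since the left-hand side is at least 1.  Brute force, n = 3.
from itertools import product

n = 3
solutions = [x for x in product([0, 1], repeat=n)
             if 1 + sum(2**i * x[i] for i in range(n)) == 0]
print(solutions)  # -> [] : BVP_3 has no Boolean solution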

Cite as

Yaroslav Alekseev. A Lower Bound for Polynomial Calculus with Extension Rule. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{alekseev:LIPIcs.CCC.2021.21,
  author =	{Alekseev, Yaroslav},
  title =	{{A Lower Bound for Polynomial Calculus with Extension Rule}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.21},
  URN =		{urn:nbn:de:0030-drops-142959},
  doi =		{10.4230/LIPIcs.CCC.2021.21},
  annote =	{Keywords: proof complexity, algebraic proofs, polynomial calculus}
}
Document
Error Reduction for Weighted PRGs Against Read Once Branching Programs

Authors: Gil Cohen, Dean Doron, Oren Renard, Ori Sberlo, and Amnon Ta-Shma


Abstract
Weighted pseudorandom generators (WPRGs), introduced by Braverman, Cohen and Garg [Braverman et al., 2020], are a generalization of pseudorandom generators (PRGs) in which arbitrary real weights are considered, rather than a probability mass. Braverman et al. constructed WPRGs against read once branching programs (ROBPs) with near-optimal dependence on the error parameter. Chattopadhyay and Liao [Eshan Chattopadhyay and Jyun-Jie Liao, 2020] somewhat simplified the technically involved BCG construction, also obtaining some improvement in parameters. In this work we devise an error reduction procedure for PRGs against ROBPs. More precisely, our procedure transforms any PRG against length n width w ROBPs with error 1/poly(n) having seed length s to a WPRG with seed length s + O(log(w/ε) ⋅ log log(1/ε)). By instantiating our procedure with Nisan’s PRG [Noam Nisan, 1992] we obtain a WPRG with seed length O(log n ⋅ log(nw) + log(w/ε) ⋅ log log(1/ε)). This improves upon [Braverman et al., 2020] and is incomparable with [Eshan Chattopadhyay and Jyun-Jie Liao, 2020]. Our construction is significantly simpler on the technical side and is conceptually cleaner. Another advantage of our construction is its low space complexity O(log(nw)) + poly(log log(1/ε)), which is logarithmic in n for interesting values of the error parameter ε. Previous constructions (like [Braverman et al., 2020; Eshan Chattopadhyay and Jyun-Jie Liao, 2020]) specify the seed length but not the space complexity, though it is plausible they can also achieve such (or close) space complexity.

Cite as

Gil Cohen, Dean Doron, Oren Renard, Ori Sberlo, and Amnon Ta-Shma. Error Reduction for Weighted PRGs Against Read Once Branching Programs. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 22:1-22:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{cohen_et_al:LIPIcs.CCC.2021.22,
  author =	{Cohen, Gil and Doron, Dean and Renard, Oren and Sberlo, Ori and Ta-Shma, Amnon},
  title =	{{Error Reduction for Weighted PRGs Against Read Once Branching Programs}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{22:1--22:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.22},
  URN =		{urn:nbn:de:0030-drops-142963},
  doi =		{10.4230/LIPIcs.CCC.2021.22},
  annote =	{Keywords: Pseudorandom generators, Read once branching programs, Space-bounded computation}
}
Document
A Stress-Free Sum-Of-Squares Lower Bound for Coloring

Authors: Pravesh K. Kothari and Peter Manohar


Abstract
We prove that with high probability over the choice of a random graph G from the Erdős-Rényi distribution G(n, 1/2), a natural n^{O(ε² log n)}-time, degree O(ε² log n) sum-of-squares semidefinite program cannot refute the existence of a valid k-coloring of G for k = n^{1/2 + ε}. Our result implies that the refutation guarantee of the basic semidefinite program (a close variant of the Lovász theta function) cannot be appreciably improved by a natural o(log n)-degree sum-of-squares strengthening, and this is tight up to an n^{o(1)} slack in k. To the best of our knowledge, this is the first lower bound for coloring G(n, 1/2) even for a single-round strengthening of the basic SDP in any SDP hierarchy. Our proof relies on a new variant of instance-preserving non-pointwise complete reduction within SoS from coloring a graph to finding large independent sets in it. Our proof is (perhaps surprisingly) short, simple and does not require complicated spectral norm bounds on random matrices with dependent entries that have been otherwise necessary in the proofs of many similar results [Boaz Barak et al., 2016; S. B. Hopkins et al., 2017; Dmitriy Kunisky and Afonso S. Bandeira, 2019; Mrinalkanti Ghosh et al., 2020; Mohanty et al., 2020]. Our result formally holds for a constraint system where vertices are allowed to belong to multiple color classes; we leave the extension to the formally stronger formulation of coloring, where vertices must belong to unique color classes, as an outstanding open problem.

Cite as

Pravesh K. Kothari and Peter Manohar. A Stress-Free Sum-Of-Squares Lower Bound for Coloring. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 23:1-23:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kothari_et_al:LIPIcs.CCC.2021.23,
  author =	{Kothari, Pravesh K. and Manohar, Peter},
  title =	{{A Stress-Free Sum-Of-Squares Lower Bound for Coloring}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{23:1--23:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.23},
  URN =		{urn:nbn:de:0030-drops-142978},
  doi =		{10.4230/LIPIcs.CCC.2021.23},
  annote =	{Keywords: Sum-of-Squares, Graph Coloring, Independent Set, Lower Bounds}
}
Document
Junta Distance Approximation with Sub-Exponential Queries

Authors: Vishnu Iyer, Avishay Tal, and Michael Whitmeyer


Abstract
Leveraging tools of De, Mossel, and Neeman [FOCS, 2019], we show two different results pertaining to the tolerant testing of juntas. Given black-box access to a Boolean function f:{±1}ⁿ → {±1}: 1) We give a poly(k, 1/ε) query algorithm that distinguishes between functions that are γ-close to k-juntas and (γ+ε)-far from k'-juntas, where k' = O(k/ε²). 2) In the non-relaxed setting, we extend our ideas to give a 2^{Õ(√{k/ε})} (adaptive) query algorithm that distinguishes between functions that are γ-close to k-juntas and (γ+ε)-far from k-juntas. To the best of our knowledge, this is the first subexponential-in-k query algorithm for approximating the distance of f to being a k-junta (previous results of Blais, Canonne, Eden, Levi, and Ron [SODA, 2018] and De, Mossel, and Neeman [FOCS, 2019] required exponentially many queries in k). Our techniques are Fourier analytical and make use of the notion of "normalized influences" that was introduced by Talagrand [Michel Talagrand, 1994].
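
The quantity being approximated, the distance of f to the nearest k-junta, can be computed exactly by brute force for toy parameters. The sketch below (ours, not from the paper) does this for n = 4: for each size-k coordinate set S, the best junta on S takes the majority vote of f on every setting of the S-coordinates.

# Brute-force distance of f: {0,1}^n -> {0,1} to the nearest k-junta
# (illustration only; feasible only for tiny n and k).
from itertools import combinations, product

def dist_to_k_junta(f, n, k):
    points = list(product([0, 1], repeat=n))
    best = 1.0
    for S in combinations(range(n), k):
        buckets = {}
        for x in points:
            buckets.setdefault(tuple(x[i] for i in S), []).append(f(x))
        # the optimal junta on S votes with the majority in each bucket
        disagreements = sum(min(vals.count(0), vals.count(1)) for vals in buckets.values())
        best = min(best, disagreements / len(points))
    return best

parity3 = lambda x: x[0] ^ x[1] ^ x[2]   # a 3-junta on 4 variables
print(dist_to_k_junta(parity3, 4, 3))    # -> 0.0
print(dist_to_k_junta(parity3, 4, 2))    # -> 0.5 (3-bit parity is far from every 2-junta)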

Cite as

Vishnu Iyer, Avishay Tal, and Michael Whitmeyer. Junta Distance Approximation with Sub-Exponential Queries. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 24:1-24:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{iyer_et_al:LIPIcs.CCC.2021.24,
  author =	{Iyer, Vishnu and Tal, Avishay and Whitmeyer, Michael},
  title =	{{Junta Distance Approximation with Sub-Exponential Queries}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{24:1--24:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.24},
  URN =		{urn:nbn:de:0030-drops-142988},
  doi =		{10.4230/LIPIcs.CCC.2021.24},
  annote =	{Keywords: Algorithms, Complexity Theory, Fourier Analysis, Juntas, Normalized Influence, Property Testing, Tolerant Property Testing}
}
Document
Arithmetic Circuit Complexity of Division and Truncation

Authors: Pranjal Dutta, Gorav Jindal, Anurag Pandey, and Amit Sinhababu


Abstract
Given polynomials f,g,h ∈ 𝔽[x₁,…,x_n] such that f = g/h, where both g and h are computable by arithmetic circuits of size s, we show that f can be computed by a circuit of size poly(s, deg(h)). This solves a special case of division elimination for high-degree circuits (Kaltofen'87 & WACT'16). The result is an exponential improvement over Strassen’s classic result (Strassen'73) when deg(h) is poly(s) and deg(f) is exp(s), since the latter gives an upper bound of poly(s, deg(f)). Further, we show that any univariate polynomial family (f_d)_d, defined by the initial segment of the power series expansion of a rational function g_d(x)/h_d(x) up to degree d (i.e. f_d = g_d/h_d mod x^{d+1}), where the circuit size of g_d is s_d and the degree of g_d is at most d, can be computed by a circuit of size poly(s_d, deg(h_d), log d). We also show a hardness result when the degrees of the rational functions are high (i.e. Ω(d)), assuming hardness of the integer factorization problem. Finally, we extend this conditional hardness to simple algebraic functions as well, and show that for every prime p, there is an integral algebraic power series with its minimal polynomial satisfying a degree-p polynomial equation, such that its initial segment is hard to compute unless integer factoring is easy, or a multiple of n! is easy to compute. Both integer factoring and computing a multiple of n! are believed to be notoriously hard. In contrast, we show examples of transcendental power series whose initial segments are easy to compute.
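
The truncation operation in the statement above is easy to see on a concrete example. The SymPy sketch below (ours, not from the paper) computes the initial segment f_d = g_d/h_d mod x^{d+1} for g = 1 and h = 1 - x - x², whose power series coefficients are the Fibonacci numbers.

# Initial segment of the power series of g(x)/h(x) up to degree d.
import sympy as sp

x = sp.symbols('x')
g, h, d = sp.Integer(1), 1 - x - x**2, 6
f_d = sp.series(g / h, x, 0, d + 1).removeO()   # f_d = g/h mod x^(d+1)
print(sp.Poly(f_d, x).all_coeffs()[::-1])
# -> [1, 1, 2, 3, 5, 8, 13]  (Fibonacci coefficients up to degree 6)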

Cite as

Pranjal Dutta, Gorav Jindal, Anurag Pandey, and Amit Sinhababu. Arithmetic Circuit Complexity of Division and Truncation. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 25:1-25:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{dutta_et_al:LIPIcs.CCC.2021.25,
  author =	{Dutta, Pranjal and Jindal, Gorav and Pandey, Anurag and Sinhababu, Amit},
  title =	{{Arithmetic Circuit Complexity of Division and Truncation}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{25:1--25:36},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.25},
  URN =		{urn:nbn:de:0030-drops-142990},
  doi =		{10.4230/LIPIcs.CCC.2021.25},
  annote =	{Keywords: Arithmetic Circuits, Division, Truncation, Division elimination, Rational function, Algebraic power series, Transcendental power series, Integer factorization}
}
Document
SOS Lower Bound for Exact Planted Clique

Authors: Shuo Pang


Abstract
We prove an SOS degree lower bound for the planted clique problem on the Erdős-Rényi random graph G(n,1/2). The bound we get is degree d = Ω(ε² log n / log log n) for clique size ω = n^{1/2-ε}, which is almost tight. This improves the result of [Barak et al., 2019] for the "soft" version of the problem, where the family of equality axioms generated by x₁+...+x_n = ω is relaxed to the single inequality x₁+...+x_n ≥ ω. As a technical by-product, we also "naturalize" certain techniques that were developed and used for the relaxed problem. This includes a new way to define the pseudo-expectation, and a more robust method to solve out the coarse diagonalization of the moment matrix.

Cite as

Shuo Pang. SOS Lower Bound for Exact Planted Clique. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 26:1-26:63, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{pang:LIPIcs.CCC.2021.26,
  author =	{Pang, Shuo},
  title =	{{SOS Lower Bound for Exact Planted Clique}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{26:1--26:63},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.26},
  URN =		{urn:nbn:de:0030-drops-143000},
  doi =		{10.4230/LIPIcs.CCC.2021.26},
  annote =	{Keywords: Sum-of-Squares, planted clique, random graphs, average-case lower bound}
}
Document
A Direct Product Theorem for One-Way Quantum Communication

Authors: Rahul Jain and Srijita Kundu


Abstract
We prove a direct product theorem for the one-way entanglement-assisted quantum communication complexity of a general relation f ⊆ 𝒳×𝒴×𝒵. For any 0 < ε < δ < 1/2 and any k ≥ 1, we show that Q¹_{1-(1-ε)^{Ω(k/log|𝒵|)}}(f^k) = Ω(k⋅Q¹_{δ}(f)), where Q¹_{ε}(f) represents the one-way entanglement-assisted quantum communication complexity of f with worst-case error ε and f^k denotes k parallel instances of f. As far as we are aware, this is the first direct product theorem for the quantum communication complexity of a general relation - direct sum theorems were previously known for one-way quantum protocols for general relations, while direct product theorems were only known for special cases. Our techniques are inspired by the parallel repetition theorems for the entangled value of two-player non-local games, under product distributions due to Jain, Pereszlényi and Yao [Rahul Jain et al., 2014], and under anchored distributions due to Bavarian, Vidick and Yuen [Bavarian et al., 2017], as well as message compression for quantum protocols due to Jain, Radhakrishnan and Sen [Rahul Jain et al., 2005]. In particular, we show that a direct product theorem holds for the distributional one-way quantum communication complexity of f under any distribution q on 𝒳×𝒴 that is anchored on one side, i.e., there exists a y^* such that q(y^*) is constant and q(x|y^*) = q(x) for all x. This allows us to show a direct product theorem for general distributions, since for any relation f and any distribution p on its inputs, we can define a modified relation f̃ which has an anchored distribution q close to p, such that a protocol that fails with probability at most ε for f̃ under q can be used to give a protocol that fails with probability at most ε + ζ for f under p. Our techniques also work for entangled non-local games which have input distributions anchored on any one side, i.e., either there exists a y^* as previously specified, or there exists an x^* such that q(x^*) is constant and q(y|x^*) = q(y) for all y. In particular, we show that for any game G = (q, 𝒳×𝒴, 𝒜×ℬ, 𝖵), where q is a distribution on 𝒳×𝒴 anchored on any one side with constant anchoring probability, we have ω^*(G^k) = (1 - (1-ω^*(G))⁵)^{Ω(k/log(|𝒜|⋅|ℬ|))}, where ω^*(G) represents the entangled value of the game G. This is a generalization of the result of [Bavarian et al., 2017], who proved a parallel repetition theorem for games anchored on both sides, i.e., where both a special x^* and a special y^* exist, and potentially a simplification of their proof.

Cite as

Rahul Jain and Srijita Kundu. A Direct Product Theorem for One-Way Quantum Communication. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 27:1-27:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{jain_et_al:LIPIcs.CCC.2021.27,
  author =	{Jain, Rahul and Kundu, Srijita},
  title =	{{A Direct Product Theorem for One-Way Quantum Communication}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{27:1--27:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.27},
  URN =		{urn:nbn:de:0030-drops-143017},
  doi =		{10.4230/LIPIcs.CCC.2021.27},
  annote =	{Keywords: Direct product theorem, parallel repetition theorem, quantum communication, one-way protocols, communication complexity}
}
Document
Quantum Complexity of Minimum Cut

Authors: Simon Apers and Troy Lee


Abstract
The minimum cut problem in an undirected and weighted graph G is to find the minimum total weight of a set of edges whose removal disconnects G. We completely characterize the quantum query and time complexity of the minimum cut problem in the adjacency matrix model. If G has n vertices and edge weights at least 1 and at most τ, we give a quantum algorithm to solve the minimum cut problem using Õ(n^{3/2}√{τ}) queries and time. Moreover, for every integer 1 ≤ τ ≤ n we give an example of a graph G with edge weights 1 and τ such that solving the minimum cut problem on G requires Ω(n^{3/2}√{τ}) queries to the adjacency matrix of G. These results contrast with the classical randomized case where Ω(n²) queries to the adjacency matrix are needed in the worst case even to decide if an unweighted graph is connected or not. In the adjacency array model, when G has m edges the classical randomized complexity of the minimum cut problem is Θ̃(m). We show that the quantum query and time complexity are Õ(√{mnτ}) and Õ(√{mnτ} + n^{3/2}), respectively, where again the edge weights are between 1 and τ. For dense graphs we give lower bounds on the quantum query complexity of Ω(n^{3/2}) for τ > 1 and Ω(τ n) for any 1 ≤ τ ≤ n. Our query algorithm uses a quantum algorithm for graph sparsification by Apers and de Wolf (FOCS 2020) and results on the structure of near-minimum cuts by Kawarabayashi and Thorup (STOC 2015) and Rubinstein, Schramm and Weinberg (ITCS 2018). Our time-efficient implementation builds on Karger’s tree packing technique (STOC 1996).
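
For reference, the quantity being computed is pinned down by the brute-force sketch below (ours, not from the paper): the minimum, over all bipartitions of the vertex set, of the total weight of crossing edges. It is exponential-time and meant only to illustrate the problem definition, not any of the algorithms above.

# Brute-force minimum cut value of a small weighted undirected graph.
from itertools import combinations

def min_cut_value(n, weighted_edges):
    best = float('inf')
    # every bipartition {S, V\S} is covered by some S with |S| <= n/2
    for size in range(1, n // 2 + 1):
        for S in map(set, combinations(range(n), size)):
            cut = sum(w for u, v, w in weighted_edges if (u in S) != (v in S))
            best = min(best, cut)
    return best

# Two weight-2 triangles {0,1,2} and {3,4,5} joined by a single weight-1 edge.
edges = [(0, 1, 2), (1, 2, 2), (0, 2, 2), (3, 4, 2), (4, 5, 2), (3, 5, 2), (2, 3, 1)]
print(min_cut_value(6, edges))  # -> 1, cutting the bridge between the triangles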

Cite as

Simon Apers and Troy Lee. Quantum Complexity of Minimum Cut. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 28:1-28:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{apers_et_al:LIPIcs.CCC.2021.28,
  author =	{Apers, Simon and Lee, Troy},
  title =	{{Quantum Complexity of Minimum Cut}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{28:1--28:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.28},
  URN =		{urn:nbn:de:0030-drops-143026},
  doi =		{10.4230/LIPIcs.CCC.2021.28},
  annote =	{Keywords: Quantum algorithms, quantum query complexity, minimum cut}
}
Document
On the Complexity of Evaluating Highest Weight Vectors

Authors: Markus Bläser, Julian Dörfler, and Christian Ikenmeyer


Abstract
Geometric complexity theory (GCT) is an approach towards separating algebraic complexity classes through algebraic geometry and representation theory. Originally Mulmuley and Sohoni proposed (SIAM J Comput 2001, 2008) to use occurrence obstructions to prove Valiant’s determinant vs permanent conjecture, but recently Bürgisser, Ikenmeyer, and Panova (Journal of the AMS 2019) proved this impossible. However, fundamental theorems of algebraic geometry and representation theory grant that every lower bound in GCT can be proved by the use of so-called highest weight vectors (HWVs). In the setting of interest in GCT (namely in the setting of polynomials) we prove the NP-hardness of the evaluation of HWVs in general, and we give efficient algorithms if the treewidth of the corresponding Young-tableau is small, where the point of evaluation is concisely encoded as a noncommutative algebraic branching program! In particular, this gives a large new class of separating functions that can be efficiently evaluated at points with low (border) Waring rank. As a structural side result we prove that border Waring rank is bounded from above by the ABP width complexity.

Cite as

Markus Bläser, Julian Dörfler, and Christian Ikenmeyer. On the Complexity of Evaluating Highest Weight Vectors. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 29:1-29:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{blaser_et_al:LIPIcs.CCC.2021.29,
  author =	{Bl\"{a}ser, Markus and D\"{o}rfler, Julian and Ikenmeyer, Christian},
  title =	{{On the Complexity of Evaluating Highest Weight Vectors}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{29:1--29:36},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.29},
  URN =		{urn:nbn:de:0030-drops-143036},
  doi =		{10.4230/LIPIcs.CCC.2021.29},
  annote =	{Keywords: Algebraic complexity theory, geometric complexity theory, algebraic branching program, Waring rank, border Waring rank, representation theory, highest weight vector, treewidth}
}
Document
On Query-To-Communication Lifting for Adversary Bounds

Authors: Anurag Anshu, Shalev Ben-David, and Srijita Kundu


Abstract
We investigate query-to-communication lifting theorems for models related to the quantum adversary bounds. Our results are as follows: 1) We show that the classical adversary bound lifts to a lower bound on randomized communication complexity with a constant-sized gadget. We also show that the classical adversary bound is a strictly stronger lower bound technique than the previously-lifted measure known as critical block sensitivity, making our lifting theorem one of the strongest lifting theorems for randomized communication complexity using a constant-sized gadget. 2) Turning to quantum models, we show a connection between lifting theorems for quantum adversary bounds and secure 2-party quantum computation in a certain "honest-but-curious" model. Under the assumption that such secure 2-party computation is impossible, we show that a simplified version of the positive-weight adversary bound lifts to a quantum communication lower bound using a constant-sized gadget. We also give an unconditional lifting theorem which lower bounds bounded-round quantum communication protocols. 3) Finally, we give some new results in query complexity. We show that the classical adversary and the positive-weight quantum adversary are quadratically related. We also show that the positive-weight quantum adversary is never larger than the square of the approximate degree. Both relations hold even for partial functions.

Cite as

Anurag Anshu, Shalev Ben-David, and Srijita Kundu. On Query-To-Communication Lifting for Adversary Bounds. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 30:1-30:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{anshu_et_al:LIPIcs.CCC.2021.30,
  author =	{Anshu, Anurag and Ben-David, Shalev and Kundu, Srijita},
  title =	{{On Query-To-Communication Lifting for Adversary Bounds}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{30:1--30:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.30},
  URN =		{urn:nbn:de:0030-drops-143042},
  doi =		{10.4230/LIPIcs.CCC.2021.30},
  annote =	{Keywords: Quantum computing, query complexity, communication complexity, lifting theorems, adversary method}
}
Document
Hardness of Constant-Round Communication Complexity

Authors: Shuichi Hirahara, Rahul Ilango, and Bruno Loff


Abstract
How difficult is it to compute the communication complexity of a two-argument total Boolean function f:[N]×[N] → {0,1}, when it is given as an N×N binary matrix? In 2009, Kushilevitz and Weinreb showed that this problem is cryptographically hard, but it is still open whether it is NP-hard. In this work, we show that it is NP-hard to approximate the size (number of leaves) of the smallest constant-round protocol for a two-argument total Boolean function f:[N]×[N] → {0,1}, when it is given as an N×N binary matrix. Along the way to proving this, we show a new deterministic variant of the round elimination lemma, which may be of independent interest.

Cite as

Shuichi Hirahara, Rahul Ilango, and Bruno Loff. Hardness of Constant-Round Communication Complexity. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 31:1-31:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{hirahara_et_al:LIPIcs.CCC.2021.31,
  author =	{Hirahara, Shuichi and Ilango, Rahul and Loff, Bruno},
  title =	{{Hardness of Constant-Round Communication Complexity}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{31:1--31:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.31},
  URN =		{urn:nbn:de:0030-drops-143055},
  doi =		{10.4230/LIPIcs.CCC.2021.31},
  annote =	{Keywords: NP-completeness, Communication Complexity, Round Elimination Lemma, Meta-Complexity}
}
Document
Polynomial Time Algorithms in Invariant Theory for Torus Actions

Authors: Peter Bürgisser, M. Levent Doğan, Visu Makam, Michael Walter, and Avi Wigderson


Abstract
An action of a group on a vector space partitions the latter into a set of orbits. We consider three natural and useful algorithmic "isomorphism" or "classification" problems, namely, orbit equality, orbit closure intersection, and orbit closure containment. These capture and relate to a variety of problems within mathematics, physics, computer science, optimization, and statistics. These orbit problems extend the more basic null cone problem, whose algorithmic complexity has seen significant progress in recent years. In this paper, we initiate a study of these problems by focusing on the actions of commutative groups (namely, tori). We explain how this setting is motivated by questions in algebraic complexity, and is still rich enough to capture interesting combinatorial algorithmic problems. While the structural theory of commutative actions is well understood, no general efficient algorithms were known for the aforementioned problems. Our main results are polynomial time algorithms for all three problems. We also show how to efficiently find separating invariants for orbits, and how to compute systems of generating rational invariants for these actions (in contrast, for polynomial invariants the latter is known to be hard). Our techniques are based on a combination of fundamental results in invariant theory, linear programming, and algorithmic lattice theory.

Cite as

Peter Bürgisser, M. Levent Doğan, Visu Makam, Michael Walter, and Avi Wigderson. Polynomial Time Algorithms in Invariant Theory for Torus Actions. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 32:1-32:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{burgisser_et_al:LIPIcs.CCC.2021.32,
  author =	{B\"{u}rgisser, Peter and Do\u{g}an, M. Levent and Makam, Visu and Walter, Michael and Wigderson, Avi},
  title =	{{Polynomial Time Algorithms in Invariant Theory for Torus Actions}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{32:1--32:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.32},
  URN =		{urn:nbn:de:0030-drops-143062},
  doi =		{10.4230/LIPIcs.CCC.2021.32},
  annote =	{Keywords: computational invariant theory, geometric complexity theory, orbit closure intersection problem}
}
Document
Pseudodistributions That Beat All Pseudorandom Generators (Extended Abstract)

Authors: Edward Pyne and Salil Vadhan


Abstract
A recent paper of Braverman, Cohen, and Garg (STOC 2018) introduced the concept of a weighted pseudorandom generator (WPRG), which amounts to a pseudorandom generator (PRG) whose outputs are accompanied with real coefficients that scale the acceptance probabilities of any potential distinguisher. They gave an explicit construction of WPRGs for ordered branching programs whose seed length has a better dependence on the error parameter ε than the classic PRG construction of Nisan (STOC 1990 and Combinatorica 1992). In this work, we give an explicit construction of WPRGs that achieve parameters that are impossible to achieve by a PRG. In particular, we construct a WPRG for ordered permutation branching programs of unbounded width with a single accept state that has seed length Õ(log^{3/2} n) for error parameter ε = 1/poly(n), where n is the input length. In contrast, recent work of Hoza et al. (ITCS 2021) shows that any PRG for this model requires seed length Ω(log² n) to achieve error ε = 1/poly(n). As a corollary, we obtain explicit WPRGs with seed length Õ(log^{3/2} n) and error ε = 1/poly(n) for ordered permutation branching programs of width w = poly(n) with an arbitrary number of accept states. Previously, seed length o(log² n) was only known when both the width and the reciprocal of the error are subpolynomial, i.e. w = n^{o(1)} and ε = 1/n^{o(1)} (Braverman, Rao, Raz, Yehudayoff, FOCS 2010 and SICOMP 2014). The starting point for our results is the recent space-efficient algorithms for estimating random-walk probabilities in directed graphs by Ahmadinejad, Kelner, Murtagh, Peebles, Sidford, and Vadhan (FOCS 2020), which are based on spectral graph theory and space-efficient Laplacian solvers. We interpret these algorithms as giving WPRGs with large seed length, which we then derandomize to obtain our results. We also note that this approach gives a simpler proof of the original result of Braverman, Cohen, and Garg, as independently discovered by Cohen, Doron, Renard, Sberlo, and Ta-Shma (these proceedings).

Cite as

Edward Pyne and Salil Vadhan. Pseudodistributions That Beat All Pseudorandom Generators (Extended Abstract). In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{pyne_et_al:LIPIcs.CCC.2021.33,
  author =	{Pyne, Edward and Vadhan, Salil},
  title =	{{Pseudodistributions That Beat All Pseudorandom Generators (Extended Abstract)}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.33},
  URN =		{urn:nbn:de:0030-drops-143070},
  doi =		{10.4230/LIPIcs.CCC.2021.33},
  annote =	{Keywords: pseudorandomness, space-bounded computation, spectral graph theory}
}
Document
GSF-Locality Is Not Sufficient For Proximity-Oblivious Testing

Authors: Isolde Adler, Noleen Köhler, and Pan Peng


Abstract
In Property Testing, proximity-oblivious testers (POTs) form a class of particularly simple testing algorithms, where a basic test is performed a number of times that may depend on the proximity parameter, but the basic test itself is independent of the proximity parameter. In their seminal work, Goldreich and Ron [STOC 2009; SICOMP 2011] show that the graph properties that allow constant-query proximity-oblivious testing in the bounded-degree model are precisely the properties that can be expressed as a generalised subgraph freeness (GSF) property that satisfies the non-propagation condition. It is left open whether the non-propagation condition is necessary. Indeed, calling properties expressible as a generalised subgraph freeness property GSF-local properties, they ask whether all GSF-local properties are non-propagating. We give a negative answer by exhibiting a property of graphs that is GSF-local and propagating. Hence in particular, our property does not admit a POT, despite being GSF-local. We prove our result by exploiting a recent work of the authors which constructed a first-order (FO) property that is not testable [SODA 2021], and a new connection between FO properties and GSF-local properties via neighbourhood profiles.

Cite as

Isolde Adler, Noleen Köhler, and Pan Peng. GSF-Locality Is Not Sufficient For Proximity-Oblivious Testing. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 34:1-34:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{adler_et_al:LIPIcs.CCC.2021.34,
  author =	{Adler, Isolde and K\"{o}hler, Noleen and Peng, Pan},
  title =	{{GSF-Locality Is Not Sufficient For Proximity-Oblivious Testing}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{34:1--34:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.34},
  URN =		{urn:nbn:de:0030-drops-143082},
  doi =		{10.4230/LIPIcs.CCC.2021.34},
  annote =	{Keywords: Property testing, proximity-oblivous testing, locality, first-order logic, lower bound}
}
Document
Hardness of KT Characterizes Parallel Cryptography

Authors: Hanlin Ren and Rahul Santhanam


Abstract
A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, K^t, is bounded-error hard on average to compute. In this paper, we strengthen this result and extend it to other complexity measures: - We show, perhaps surprisingly, that the KT complexity is bounded-error average-case hard if and only if there exist one-way functions in constant parallel time (i.e. NC⁰). This result crucially relies on the idea of randomized encodings. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that NC⁰-computable one-way functions exist if and only if logspace-computable one-way functions exist. - Inspired by the above result, we present randomized average-case reductions among the NC¹-versions and logspace-versions of K^t complexity, and the KT complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the KT complexity and a variant of K^t complexity. - We prove tight connections between the hardness of K^t complexity and the hardness of (the hardest) one-way functions. In analogy with the Exponential-Time Hypothesis and its variants, we define and motivate the Perebor Hypotheses for complexity measures such as K^t and KT. We show that a Strong Perebor Hypothesis for K^t implies the existence of (weak) one-way functions of near-optimal hardness 2^{n-o(n)}. To the best of our knowledge, this is the first construction of one-way functions of near-optimal hardness based on a natural complexity assumption about a search problem. - We show that a Weak Perebor Hypothesis for MCSP implies the existence of one-way functions, and establish a partial converse. This is the first unconditional construction of one-way functions from the hardness of MCSP over a natural distribution. - Finally, we study the average-case hardness of MKtP. We show that it characterizes cryptographic pseudorandomness in one natural regime of parameters, and complexity-theoretic pseudorandomness in another natural regime.

Cite as

Hanlin Ren and Rahul Santhanam. Hardness of KT Characterizes Parallel Cryptography. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 35:1-35:58, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{ren_et_al:LIPIcs.CCC.2021.35,
  author =	{Ren, Hanlin and Santhanam, Rahul},
  title =	{{Hardness of KT Characterizes Parallel Cryptography}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{35:1--35:58},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.35},
  URN =		{urn:nbn:de:0030-drops-143091},
  doi =		{10.4230/LIPIcs.CCC.2021.35},
  annote =	{Keywords: one-way function, meta-complexity, KT complexity, parallel cryptography, randomized encodings}
}
Document
On the Pseudo-Deterministic Query Complexity of NP Search Problems

Authors: Shafi Goldwasser, Russell Impagliazzo, Toniann Pitassi, and Rahul Santhanam


Abstract
We study pseudo-deterministic query complexity - randomized query algorithms that are required to output the same answer with high probability on all inputs. We prove Ω(√n) lower bounds on the pseudo-deterministic complexity of a large family of search problems based on unsatisfiable random CNF instances, and also for the promise problem (FIND1) of finding a 1 in a vector populated with at least half ones. This gives an exponential separation between randomized query complexity and pseudo-deterministic complexity, which is tight in the quantum setting. As applications, we partially solve a related combinatorial coloring problem, and we separate random tree-like Resolution from its pseudo-deterministic version. In contrast to our lower bound, we show, surprisingly, that in the zero-error, average-case setting, the three notions (deterministic, randomized, pseudo-deterministic) collapse.
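
To make the FIND1 promise problem and the pseudo-deterministic requirement concrete, the toy sketch below (ours, not from the paper) runs a randomized sampler on a fixed input: it finds a 1 with O(1) expected queries, but different runs typically return different indices, which is exactly what pseudo-determinism forbids.

# FIND1: given a vector with at least half ones, return the index of some 1.
import random

def randomized_find1(v, rng):
    while True:
        i = rng.randrange(len(v))   # one query per sampled position
        if v[i] == 1:
            return i

v = [0, 1] * 8   # 16 bits, half of them ones
outputs = {randomized_find1(v, random.Random(seed)) for seed in range(20)}
print(outputs)   # typically several distinct indices: correct for FIND1,
                 # but not the same answer across runs, so not pseudo-deterministic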

Cite as

Shafi Goldwasser, Russell Impagliazzo, Toniann Pitassi, and Rahul Santhanam. On the Pseudo-Deterministic Query Complexity of NP Search Problems. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 36:1-36:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{goldwasser_et_al:LIPIcs.CCC.2021.36,
  author =	{Goldwasser, Shafi and Impagliazzo, Russell and Pitassi, Toniann and Santhanam, Rahul},
  title =	{{On the Pseudo-Deterministic Query Complexity of NP Search Problems}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{36:1--36:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.36},
  URN =		{urn:nbn:de:0030-drops-143104},
  doi =		{10.4230/LIPIcs.CCC.2021.36},
  annote =	{Keywords: Pseudo-determinism, Query complexity, Proof complexity}
}
Document
A Simple Proof of a New Set Disjointness with Applications to Data Streams

Authors: Akshay Kamath, Eric Price, and David P. Woodruff


Abstract
The multiplayer promise set disjointness is one of the most widely used problems from communication complexity in applications. In this problem there are k players with subsets S¹, …, S^k, each drawn from {1, 2, …, n}, and we are promised that either the sets are (1) pairwise disjoint, or (2) there is a unique element j occurring in all the sets, which are otherwise pairwise disjoint. The total communication of solving this problem with constant probability in the blackboard model is Ω(n/k). We observe that for most applications it instead suffices to look at what we call the "mostly" set disjointness problem, which changes case (2) to say there is a unique element j occurring in at least half of the sets, and the sets are otherwise disjoint. This change gives us a much simpler proof of an Ω(n/k) randomized total communication lower bound, avoiding Hellinger distance and Poincaré inequalities. Our proof also gives strong lower bounds for high probability protocols, which are much larger than what is possible for the set disjointness problem. Using this we show several new results for data streams: 1) for 𝓁₂-Heavy Hitters, any O(1)-pass streaming algorithm in the insertion-only model for detecting if an ε-𝓁₂-heavy hitter exists requires min((1/ε²) log(ε²n/δ), (1/ε) n^{1/2}) bits of memory, which is optimal up to a log n factor. For deterministic algorithms and constant ε, this gives an Ω(n^{1/2}) lower bound, improving the prior Ω(log n) lower bound. We also obtain lower bounds for Zipfian distributions. 2) for 𝓁_p-Estimation, p > 2, we show an O(1)-pass Ω(n^{1-2/p} log(1/δ)) bit lower bound for outputting an O(1)-approximation with probability 1-δ, in the insertion-only model. This is optimal, and the best previous lower bound was Ω(n^{1-2/p} + log(1/δ)). 3) for low-rank approximation of a sparse matrix in ℝ^{d×n}, if we see the rows of a matrix one at a time in the row-order model, each row having O(1) non-zero entries, any deterministic algorithm requires Ω(√d) memory to output an O(1)-approximate rank-1 approximation. Finally, we consider strict and general turnstile streaming models, and show separations between sketching lower bounds and non-sketching upper bounds for the heavy hitters problem.
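
The "mostly" promise is easy to state operationally; the small checker below (ours, not from the paper) classifies an instance of k sets over {1, …, n} as pairwise disjoint, mostly intersecting at a unique element occurring in at least half the sets, or outside the promise.

# Classify an instance of the "mostly" set disjointness promise.
from collections import Counter

def promise_case(sets):
    counts = Counter(x for s in sets for x in s)
    repeated = [x for x, c in counts.items() if c >= 2]
    if not repeated:
        return "pairwise disjoint"
    if len(repeated) == 1 and counts[repeated[0]] >= len(sets) / 2:
        return f"mostly intersecting at {repeated[0]}"
    return "promise violated"

print(promise_case([{1, 2}, {3}, {4, 5}, {6}]))      # pairwise disjoint
print(promise_case([{1, 2}, {1, 3}, {1, 4}, {5}]))   # mostly intersecting at 1
print(promise_case([{1, 2}, {1, 3}, {2, 4}, {5}]))   # promise violated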

Cite as

Akshay Kamath, Eric Price, and David P. Woodruff. A Simple Proof of a New Set Disjointness with Applications to Data Streams. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 37:1-37:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kamath_et_al:LIPIcs.CCC.2021.37,
  author =	{Kamath, Akshay and Price, Eric and Woodruff, David P.},
  title =	{{A Simple Proof of a New Set Disjointness with Applications to Data Streams}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{37:1--37:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.37},
  URN =		{urn:nbn:de:0030-drops-143119},
  doi =		{10.4230/LIPIcs.CCC.2021.37},
  annote =	{Keywords: Streaming algorithms, heavy hitters, communication complexity, information complexity}
}
Document
Toward Better Depth Lower Bounds: The XOR-KRW Conjecture

Authors: Ivan Mihajlin and Alexander Smal


Abstract
In this paper, we propose a new conjecture, the XOR-KRW conjecture, which is a relaxation of the Karchmer-Raz-Wigderson conjecture [Mauricio Karchmer et al., 1995]. This relaxation is still strong enough to imply 𝐏 ⊄ NC¹ if proven. We also present a weaker version of this conjecture that might be used for breaking the n³ lower bound for De Morgan formulas. Our study of this conjecture allows us to partially answer an open question stated in [Dmitry Gavinsky et al., 2017] regarding the composition of the universal relation with a function. To be more precise, we prove that there exists a function g such that the composition of the universal relation with g is significantly harder than just a universal relation. The fact that we can only prove the existence of g is an inherent feature of our approach. The paper’s main technical contribution is a new approach to lower bounds for multiplexer-type relations based on the non-deterministic hardness of non-equality and a new method of converting lower bounds for multiplexer-type relations into lower bounds against some function. In order to do this, we develop techniques to lower bound communication complexity in half-duplex and partially half-duplex communication models.

Cite as

Ivan Mihajlin and Alexander Smal. Toward Better Depth Lower Bounds: The XOR-KRW Conjecture. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 38:1-38:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{mihajlin_et_al:LIPIcs.CCC.2021.38,
  author =	{Mihajlin, Ivan and Smal, Alexander},
  title =	{{Toward Better Depth Lower Bounds: The XOR-KRW Conjecture}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{38:1--38:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.38},
  URN =		{urn:nbn:de:0030-drops-143121},
  doi =		{10.4230/LIPIcs.CCC.2021.38},
  annote =	{Keywords: communication complexity, KRW conjecture, circuit complexity, half-duplex communication complexity, Karchmer-Wigderson games, multiplexer relation, universal relation}
}
Document
Fourier Growth of Parity Decision Trees

Authors: Uma Girish, Avishay Tal, and Kewen Wu


Abstract
We prove that for every parity decision tree of depth d on n variables, the sum of absolute values of Fourier coefficients at level 𝓁 is at most d^{𝓁/2} ⋅ O(𝓁 ⋅ log(n))^𝓁. Our result is nearly tight for small values of 𝓁 and extends a previous Fourier bound for standard decision trees by Sherstov, Storozhenko, and Wu (STOC, 2021). As an application of our Fourier bounds, using the results of Bansal and Sinha (STOC, 2021), we show that the k-fold Forrelation problem has (randomized) parity decision tree complexity Ω̃(n^{1-1/k}), while having quantum query complexity ⌈k/2⌉. Our proof follows a random-walk approach, analyzing the contribution of a random path in the decision tree to the level-𝓁 Fourier expression. To carry out the argument, we apply a careful cleanup procedure to the parity decision tree, ensuring that the value of the random walk is bounded with high probability. We observe that step sizes for the level-𝓁 walks can be computed by the intermediate values of level ≤ 𝓁-1 walks, which calls for an inductive argument. Our approach differs from previous proofs of Tal (FOCS, 2020) and Sherstov, Storozhenko, and Wu (STOC, 2021) that relied on decompositions of the tree. In particular, for the special case of standard decision trees we view our proof as slightly simpler and more intuitive. In addition, we prove a similar bound for noisy decision trees of cost at most d - a model that was recently introduced by Ben-David and Blais (FOCS, 2020).
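
The level-𝓁 quantity being bounded, ∑_{|S| = 𝓁} |f̂(S)|, can be computed directly for a toy parity decision tree. The sketch below (ours, not from the paper) takes a depth-2 tree on 4 bits that first queries x0 ⊕ x1 and then outputs (-1)^{x2} on the 0-branch and (-1)^{x2 ⊕ x3} on the 1-branch.

# Level-l Fourier mass of a function computed by a depth-2 parity decision tree.
from itertools import combinations, product

n = 4
def f(x):  # x is a 0/1 tuple; output is +/-1
    return (-1) ** x[2] if (x[0] ^ x[1]) == 0 else (-1) ** (x[2] ^ x[3])

points = list(product([0, 1], repeat=n))
def fhat(S):  # Fourier coefficient of f at the set S
    return sum(f(x) * (-1) ** sum(x[i] for i in S) for x in points) / len(points)

level_mass = {l: sum(abs(fhat(S)) for S in combinations(range(n), l))
              for l in range(n + 1)}
print(level_mass)  # -> {0: 0.0, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5}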

Cite as

Uma Girish, Avishay Tal, and Kewen Wu. Fourier Growth of Parity Decision Trees. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 39:1-39:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.CCC.2021.39,
  author =	{Girish, Uma and Tal, Avishay and Wu, Kewen},
  title =	{{Fourier Growth of Parity Decision Trees}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{39:1--39:36},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.39},
  URN =		{urn:nbn:de:0030-drops-143137},
  doi =		{10.4230/LIPIcs.CCC.2021.39},
  annote =	{Keywords: Fourier analysis of Boolean functions, noisy decision tree, parity decision tree, query complexity}
}
Document
The Power of Negative Reasoning

Authors: Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Dmitry Sokolov


Abstract
Semialgebraic proof systems have been studied extensively in proof complexity since the late 1990s to understand the power of Gröbner basis computations, linear and semidefinite programming hierarchies, and other methods. Such proof systems are defined sometimes with only the original variables of the problem and sometimes with special formal variables for positive and negative literals, but there seems to have been no study of how these different definitions affect the power of the proof systems. We show for Nullstellensatz, polynomial calculus, Sherali-Adams, and sums-of-squares that adding formal variables for negative literals makes the proof systems exponentially stronger, with respect to the number of terms in the proofs. These separations are witnessed by CNF formulas that are easy for resolution, which establishes that polynomial calculus, Sherali-Adams, and sums-of-squares cannot efficiently simulate resolution without having access to variables for negative literals.
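
The two styles of encoding are easy to compare on a single clause. The SymPy sketch below (ours, not from the paper) shows the standard translations of (x₁ ∨ ¬x₂ ∨ x₃) over 0/1-valued variables, without and with twin variables for negative literals, and checks that the twin-variable encoding collapses to the other one under the complementarity substitution x̄ᵢ = 1 - xᵢ.

# Clause (x1 OR NOT x2 OR x3): encoded as (1-x1)*x2*(1-x3) = 0 with only the
# original variables, or as xb1*x2*xb3 = 0 plus axioms x_i + xb_i = 1 when
# twin variables xb_i for negative literals are available.
import sympy as sp

x1, x2, x3, xb1, xb3 = sp.symbols('x1 x2 x3 xb1 xb3')
without_twins = (1 - x1) * x2 * (1 - x3)
with_twins = xb1 * x2 * xb3
diff = with_twins.subs({xb1: 1 - x1, xb3: 1 - x3}) - without_twins
print(sp.simplify(diff))  # -> 0: the encodings agree under complementarity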

Cite as

Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, and Dmitry Sokolov. The Power of Negative Reasoning. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 40:1-40:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{derezende_et_al:LIPIcs.CCC.2021.40,
  author =	{de Rezende, Susanna F. and Lauria, Massimo and Nordstr\"{o}m, Jakob and Sokolov, Dmitry},
  title =	{{The Power of Negative Reasoning}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{40:1--40:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.40},
  URN =		{urn:nbn:de:0030-drops-143140},
  doi =		{10.4230/LIPIcs.CCC.2021.40},
  annote =	{Keywords: Proof complexity, Polynomial calculus, Nullstellensatz, Sums-of-squares, Sherali-Adams}
}
Document
Matrix Rigidity Depends on the Target Field

Authors: László Babai and Bohdan Kivva


Abstract
The rigidity of a matrix A for target rank r is the minimum number of entries of A that need to be changed in order to obtain a matrix of rank at most r (Valiant, 1977). We study the dependence of rigidity on the target field. We consider especially two natural regimes: when one is allowed to make changes only from the field of definition of the matrix ("strict rigidity"), and when the changes are allowed to be in an arbitrary extension field ("absolute rigidity"). We demonstrate, apparently for the first time, a separation between these two concepts. We establish a gap of a factor of 3/2-o(1) between strict and absolute rigidities. The question seems especially timely because of recent results by Dvir and Liu (Theory of Computing, 2020) where important families of matrices, previously expected to be rigid, are shown not to be absolutely rigid, while their strict rigidity remains open. Our lower-bound method combines elementary arguments from algebraic geometry with "untouched minors" arguments. Finally, we point out that more families of long-time rigidity candidates fall as a consequence of the results of Dvir and Liu. These include the incidence matrices of projective planes over finite fields, proposed by Valiant as candidates for rigidity over 𝔽₂.
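
The definition in the first sentence can be checked exhaustively for very small matrices. The brute-force sketch below (ours, not from the paper) computes rigidity over 𝔽₂, where a change is a bit flip and rank is computed by Gaussian elimination mod 2; it is exponential in the number of entries and only meant to illustrate the notion.

# Rigidity of a small 0/1 matrix over F_2 for target rank r: the minimum
# number of bit flips needed to bring the rank down to at most r.
from itertools import combinations

def rank_mod2(M):
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        pivot = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def rigidity_mod2(M, r):
    cells = [(i, j) for i in range(len(M)) for j in range(len(M[0]))]
    for k in range(len(cells) + 1):
        for changes in combinations(cells, k):
            N = [row[:] for row in M]
            for i, j in changes:
                N[i][j] ^= 1
            if rank_mod2(N) <= r:
                return k

identity4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(rigidity_mod2(identity4, 2))  # -> 2: flip two diagonal ones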

Cite as

László Babai and Bohdan Kivva. Matrix Rigidity Depends on the Target Field. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 41:1-41:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{babai_et_al:LIPIcs.CCC.2021.41,
  author =	{Babai, L\'{a}szl\'{o} and Kivva, Bohdan},
  title =	{{Matrix Rigidity Depends on the Target Field}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{41:1--41:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.41},
  URN =		{urn:nbn:de:0030-drops-143153},
  doi =		{10.4230/LIPIcs.CCC.2021.41},
  annote =	{Keywords: Matrix rigidity, field extension}
}
