LIPIcs, Volume 151

11th Innovations in Theoretical Computer Science Conference (ITCS 2020)




Event

ITCS 2020, January 12-14, 2020, Seattle, Washington, USA

Editor

Thomas Vidick
  • California Institute of Technology, Pasadena, CA, USA

Documents
Document
Complete Volume
LIPIcs, Volume 151, ITCS'20, Complete Volume

Authors: Thomas Vidick


Abstract
LIPIcs, Volume 151, ITCS'20, Complete Volume

Cite as

11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@Proceedings{vidick:LIPIcs.ITCS.2020,
  title =	{{LIPIcs, Volume 151, ITCS'20, Complete Volume}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020},
  URN =		{urn:nbn:de:0030-drops-117829},
  doi =		{10.4230/LIPIcs.ITCS.2020},
  annote =	{Keywords: Mathematics of computing; Theory of computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Thomas Vidick


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{vidick:LIPIcs.ITCS.2020.0,
  author =	{Vidick, Thomas},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.0},
  URN =		{urn:nbn:de:0030-drops-116855},
  doi =		{10.4230/LIPIcs.ITCS.2020.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Hardness Amplification of Optimization Problems

Authors: Elazar Goldenberg and Karthik C. S.


Abstract
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming certain relationships between α(n) and t(n), there is a distribution D' over instances of Π of size O(n⋅α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
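
For reference, the informal statement can be set in display form; the following LaTeX rendering restates exactly the quantities above, leaving the "certain relationships" between α(n) and t(n) implicit, as the abstract does.

\textbf{Theorem (informal).} Let $\Pi$ be direct product feasible, and let $D$ be a
distribution over size-$n$ instances of $\Pi$ on which every randomized
$t(n)$-time algorithm fails with probability at least $1/\alpha(n)$. Then,
under suitable relations between $\alpha(n)$ and $t(n)$, there is a
distribution $D'$ over instances of size $O(n \cdot \alpha(n))$ on which every
randomized algorithm running in time $t(n)/\mathrm{poly}(\alpha(n))$ fails
with probability at least $99/100$.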

Cite as

Elazar Goldenberg and Karthik C. S.. Hardness Amplification of Optimization Problems. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 1:1-1:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{goldenberg_et_al:LIPIcs.ITCS.2020.1,
  author =	{Goldenberg, Elazar and Karthik C. S.},
  title =	{{Hardness Amplification of Optimization Problems}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{1:1--1:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.1},
  URN =		{urn:nbn:de:0030-drops-116863},
  doi =		{10.4230/LIPIcs.ITCS.2020.1},
  annote =	{Keywords: hardness amplification, average case complexity, direct product, optimization problems, fine-grained complexity, TFNP}
}
Document
Smooth and Strong PCPs

Authors: Orr Paradise


Abstract
Probabilistically checkable proofs (PCPs) can be verified using only a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs: - A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim. - A PCP is smooth if each location in a proof is queried with equal probability. We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard and Reed-Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.

Cite as

Orr Paradise. Smooth and Strong PCPs. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 2:1-2:41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{paradise:LIPIcs.ITCS.2020.2,
  author =	{Paradise, Orr},
  title =	{{Smooth and Strong PCPs}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{2:1--2:41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.2},
  URN =		{urn:nbn:de:0030-drops-116875},
  doi =		{10.4230/LIPIcs.ITCS.2020.2},
  annote =	{Keywords: Interactive and probabilistic proof systems, Probabilistically checkable proofs, Hardness of approximation}
}
Document
Approximately Strategyproof Tournament Rules: On Large Manipulating Sets and Cover-Consistence

Authors: Ariel Schvartzman, S. Matthew Weinberg, Eitan Zlatin, and Albert Zuo


Abstract
We consider the manipulability of tournament rules, in which n teams play a round robin tournament and a winner is (possibly randomly) selected based on the outcome of all binom{n}{2} matches. Prior work defines a tournament rule to be k-SNM-α if no set of ≤ k teams can fix the ≤ binom{k}{2} matches among them to increase their probability of winning by >α and asks: for each k, what is the minimum α(k) such that a Condorcet-consistent (i.e. always selects a Condorcet winner when one exists) k-SNM-α(k) tournament rule exists? A simple example witnesses that α(k) ≥ (k-1)/(2k-1) for all k, and [Jon Schneider et al., 2017] conjectures that this is tight (and prove it is tight for k=2). Our first result refutes this conjecture: there exists a sufficiently large k such that no Condorcet-consistent tournament rule is k-SNM-1/2. Our second result leverages similar machinery to design a new tournament rule which is k-SNM-2/3 for all k (and this is the first tournament rule which is k-SNM-(<1) for all k). Our final result extends prior work, which proves that single-elimination bracket with random seeding is 2-SNM-1/3 [Jon Schneider et al., 2017], in a different direction by seeking a stronger notion of fairness than Condorcet-consistence. We design a new tournament rule, which we call Randomized-King-of-the-Hill, which is 2-SNM-1/3 and cover-consistent (the winner is an uncovered team with probability 1).
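
As a concrete illustration of the kind of rule analyzed here, below is a minimal Monte-Carlo sketch of single-elimination with uniformly random seeding, the rule shown to be 2-SNM-1/3 in the prior work cited above. The tournament matrix and team count are toy choices, the number of teams is assumed to be a power of two, and this is not the paper's Randomized-King-of-the-Hill rule.

import random

def single_elim_random_seeding(teams, beats):
    # `beats[a][b]` is True iff team a wins the match a vs b (a "tournament").
    # Assumes len(teams) is a power of two; the seeding is uniformly random.
    order = teams[:]
    random.shuffle(order)
    while len(order) > 1:
        order = [a if beats[a][b] else b
                 for a, b in zip(order[0::2], order[1::2])]
    return order[0]

# Toy 4-team instance: 0, 1, 2 form a cycle; team 3 loses to everyone.
beats = [[False, True,  False, True],
         [False, False, True,  True],
         [True,  False, False, True],
         [False, False, False, False]]
wins = [0, 0, 0, 0]
for _ in range(10_000):
    wins[single_elim_random_seeding([0, 1, 2, 3], beats)] += 1
print([w / 10_000 for w in wins])   # empirical winning probabilities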

Cite as

Ariel Schvartzman, S. Matthew Weinberg, Eitan Zlatin, and Albert Zuo. Approximately Strategyproof Tournament Rules: On Large Manipulating Sets and Cover-Consistence. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 3:1-3:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{schvartzman_et_al:LIPIcs.ITCS.2020.3,
  author =	{Schvartzman, Ariel and Weinberg, S. Matthew and Zlatin, Eitan and Zuo, Albert},
  title =	{{Approximately Strategyproof Tournament Rules: On Large Manipulating Sets and Cover-Consistence}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{3:1--3:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.3},
  URN =		{urn:nbn:de:0030-drops-116881},
  doi =		{10.4230/LIPIcs.ITCS.2020.3},
  annote =	{Keywords: Tournament design, Non-manipulability, Cover-consistence, Strategyproofness}
}
Document
Span Programs and Quantum Space Complexity

Authors: Stacey Jeffery


Abstract
While quantum computers hold the promise of significant computational speedups, the limited size of early quantum machines motivates the study of space-bounded quantum computation. We relate the quantum space complexity of computing a function f with one-sided error to the logarithm of its span program size, a classical quantity that is well-studied in attempts to prove formula size lower bounds. In the more natural bounded error model, we show that the amount of space needed for a unitary quantum algorithm to compute f with bounded (two-sided) error is lower bounded by the logarithm of its approximate span program size. Approximate span programs were introduced in the field of quantum algorithms but not studied classically. However, the approximate span program size of a function is a natural generalization of its span program size. While no non-trivial lower bound is known on the span program size (or approximate span program size) of any concrete function, a number of lower bounds are known on the monotone span program size. We show that the approximate monotone span program size of f is a lower bound on the space needed by quantum algorithms of a particular form, called monotone phase estimation algorithms, to compute f. We then give the first non-trivial lower bound on the approximate span program size of an explicit function.
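
The classical object in question is easy to state concretely. A minimal evaluator over the reals follows: a span program accepts an input exactly when its target vector lies in the span of the vectors made available by that input. The labeling convention and the AND-of-two-bits instance are illustrative assumptions; this sketch says nothing about the quantum-space constructions of the paper.

import numpy as np

def span_program_accepts(target, vectors, labels, x):
    # labels[j] = (i, b): vector j is available on inputs with x[i] == b.
    # Accept iff `target` lies in the span of the available vectors.
    avail = [v for v, (i, b) in zip(vectors, labels) if x[i] == b]
    if not avail:
        return False
    A = np.array(avail, dtype=float)
    return np.linalg.matrix_rank(np.vstack([A, target])) == np.linalg.matrix_rank(A)

# AND(x0, x1): target (1,1); x0 contributes (1,0) and x1 contributes (0,1),
# so both bits must be 1 for the target to be spanned.
target = np.array([1.0, 1.0])
vectors = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
labels = [(0, 1), (1, 1)]
print(span_program_accepts(target, vectors, labels, [1, 1]))  # True
print(span_program_accepts(target, vectors, labels, [1, 0]))  # False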

Cite as

Stacey Jeffery. Span Programs and Quantum Space Complexity. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 4:1-4:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{jeffery:LIPIcs.ITCS.2020.4,
  author =	{Jeffery, Stacey},
  title =	{{Span Programs and Quantum Space Complexity}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{4:1--4:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.4},
  URN =		{urn:nbn:de:0030-drops-116896},
  doi =		{10.4230/LIPIcs.ITCS.2020.4},
  annote =	{Keywords: Quantum space complexity, span programs}
}
Document
DEEP-FRI: Sampling Outside the Box Improves Soundness

Authors: Eli Ben-Sasson, Lior Goldberg, Swastik Kopparty, and Shubhangi Saraf


Abstract
Motivated by the quest for scalable and succinct zero knowledge arguments, we revisit worst-case-to-average-case reductions for linear spaces, raised by [Rothblum, Vadhan, Wigderson, STOC 2013]. The previous state of the art by [Ben-Sasson, Kopparty, Saraf, CCC 2018] showed that if some member of an affine space U is δ-far in relative Hamming distance from a linear code V - this is the worst-case assumption - then most elements of U are almost-δ-far from V - this is the average case. However, this result was known to hold only below the "double Johnson" function of the relative distance δ_V of the code V, i.e., only when δ < 1-(1-δ_V)^(1/4). First, we increase the soundness-bound to the "one-and-a-half Johnson" function of δ_V and show that the average distance of U from V is nearly δ for any worst-case distance δ smaller than 1-(1-δ_V)^(1/3). This bound is tight, which is somewhat surprising because the one-and-a-half Johnson function is unfamiliar in the literature on error correcting codes. To improve soundness further for Reed-Solomon codes we sample outside the box. We suggest a new protocol in which the verifier samples a single point z outside the box D on which codewords are evaluated, and asks the prover for the value at z of the interpolating polynomial of a random element of U. Intuitively, the answer provided by the prover "forces" it to choose one codeword from a list of "pretenders" that are close to U. We call this technique Domain Extending for Eliminating Pretenders (DEEP). The DEEP method improves the soundness of the worst-case-to-average-case reduction for RS codes up to their list decoding radius. This radius is bounded from below by the Johnson bound, implying average distance is approximately δ for all δ < 1-(1-δ_V)^(1/2). Under a plausible conjecture about the list decoding radius of Reed-Solomon codes, average distance from V is approximately δ for all δ. The DEEP technique can be generalized to all linear codes, giving improved reductions for capacity-achieving list-decodable codes. Finally, we use the DEEP technique to devise two new protocols: - An Interactive Oracle Proof of Proximity (IOPP) for RS codes, called DEEP-FRI. The soundness of the protocol improves upon that of the FRI protocol of [Ben-Sasson et al., ICALP 2018] while retaining linear arithmetic proving complexity and logarithmic verifier arithmetic complexity. - An Interactive Oracle Proof (IOP) for the Algebraic Linking IOP (ALI) protocol used to construct zero knowledge scalable transparent arguments of knowledge (ZK-STARKs) in [Ben-Sasson et al., eprint 2018]. The new protocol, called DEEP-ALI, improves soundness of this crucial step from a small constant < 1/8 to a constant arbitrarily close to 1.
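
The "sampling outside the box" intuition fits in a few lines: two distinct low-degree polynomials can agree on much of the evaluation domain D, but a single query at a random field point z outside D separates them except with probability deg/|F|. All parameters below are toy choices and not those of the protocol.

import random

P = 1_000_003                       # arithmetic below is over the prime field F_P

def f(x):                           # a degree-4 "codeword" polynomial
    return (3 + x + 4*x**2 + x**3 + 5*x**4) % P

def g(x):                           # a distinct low-degree polynomial agreeing with f on {0,...,5}
    h = 1
    for a in range(6):
        h = h * (x - a) % P
    return (f(x) + h) % P

D = range(8)                        # the "box": the evaluation domain
print([f(x) == g(x) for x in D])    # f and g agree on 6 of the 8 domain points
z = random.randrange(P)             # one sample outside the box
print(f(z) == g(z))                 # False except with probability about 6/P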

Cite as

Eli Ben-Sasson, Lior Goldberg, Swastik Kopparty, and Shubhangi Saraf. DEEP-FRI: Sampling Outside the Box Improves Soundness. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 5:1-5:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{bensasson_et_al:LIPIcs.ITCS.2020.5,
  author =	{Ben-Sasson, Eli and Goldberg, Lior and Kopparty, Swastik and Saraf, Shubhangi},
  title =	{{DEEP-FRI: Sampling Outside the Box Improves Soundness}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{5:1--5:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.5},
  URN =		{urn:nbn:de:0030-drops-116901},
  doi =		{10.4230/LIPIcs.ITCS.2020.5},
  annote =	{Keywords: Interactive Oracle Proofs of Proximity, STARK, Low Degree Testing}
}
Document
On the Cryptographic Hardness of Local Search

Authors: Nir Bitansky and Idan Gerichter


Abstract
We show new hardness results for the class of Polynomial Local Search problems (PLS): - Hardness of PLS based on a falsifiable assumption on bilinear groups introduced by Kalai, Paneth, and Yang (STOC 2019), and the Exponential Time Hypothesis for randomized algorithms. Previous standard model constructions relied on non-falsifiable and non-standard assumptions. - Hardness of PLS relative to random oracles. The construction is essentially different from previous constructions, and in particular is unconditionally secure. The construction also demonstrates the hardness of parallelizing local search. The core observation behind the results is that the unique proofs property of incrementally-verifiable computations previously used to demonstrate hardness in PLS can be traded for a simple incremental completeness property.

Cite as

Nir Bitansky and Idan Gerichter. On the Cryptographic Hardness of Local Search. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 6:1-6:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{bitansky_et_al:LIPIcs.ITCS.2020.6,
  author =	{Bitansky, Nir and Gerichter, Idan},
  title =	{{On the Cryptographic Hardness of Local Search}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{6:1--6:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.6},
  URN =		{urn:nbn:de:0030-drops-116918},
  doi =		{10.4230/LIPIcs.ITCS.2020.6},
  annote =	{Keywords: Cryptography, PLS, Lower Bounds, Incremental Computation}
}
Document
Interactive Coding with Constant Round and Communication Blowup

Authors: Klim Efremenko, Elad Haramaty, and Yael Tauman Kalai


Abstract
The problem of constructing error-resilient interactive protocols was introduced in the seminal works of Schulman (FOCS 1992, STOC 1993). These works show how to convert any two-party interactive protocol into one that is resilient to a constant fraction of errors, while blowing up the communication by only a constant factor. Since these seminal works, there have been many followup works which improve the error rate, the communication rate, and the computational efficiency. All these works consider only an increase in communication complexity, not an increase in round complexity. This work is the first to consider the blowup in round complexity in the noisy setting. While techniques from other papers can easily be adapted to encode protocols with arbitrary round complexity, these coding schemes lead to a large (and usually unbounded) increase in the round complexity of the protocol. In this work, we show how to convert any protocol Π, with no a priori known communication bound, into an error-resilient protocol Π', with comparable computational efficiency, that is resilient to a constant fraction of adversarial error, while blowing up both the communication complexity and the round complexity by at most a constant factor. We consider the model where in each round each party may send a message of arbitrary length, where the length of the messages and the length of the protocol may be adaptive, and may depend on the private inputs of the parties and on previous communication. We consider the adversarial error model, where an ε-fraction of the communication may be corrupted, and each corruption may be an insertion or deletion (in addition to a toggle). In addition, we try to minimize the blowup parameters: in particular, we construct such Π' with (1+Õ(ε^(1/4))) blowup in communication and O(1) blowup in rounds. We also show how to reduce the blowup in rounds at the expense of increasing the blowup in communication, and construct Π' where both the blowup in rounds and communication approaches one (i.e., no blowup) as ε approaches zero. We give "evidence" that our parameters are "close to" optimal.

Cite as

Klim Efremenko, Elad Haramaty, and Yael Tauman Kalai. Interactive Coding with Constant Round and Communication Blowup. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 7:1-7:34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{efremenko_et_al:LIPIcs.ITCS.2020.7,
  author =	{Efremenko, Klim and Haramaty, Elad and Kalai, Yael Tauman},
  title =	{{Interactive Coding with Constant Round and Communication Blowup}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{7:1--7:34},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.7},
  URN =		{urn:nbn:de:0030-drops-116927},
  doi =		{10.4230/LIPIcs.ITCS.2020.7},
  annote =	{Keywords: Interactive Coding, Round Complexity, Error Correcting Codes}
}
Document
From Independent Sets and Vertex Colorings to Isotropic Spaces and Isotropic Decompositions: Another Bridge Between Graphs and Alternating Matrix Spaces

Authors: Xiaohui Bei, Shiteng Chen, Ji Guan, Youming Qiao, and Xiaoming Sun


Abstract
In the 1970s, Lovász built a bridge between graphs and alternating matrix spaces, in the context of perfect matchings (FCT 1979). A similar connection between bipartite graphs and matrix spaces plays a key role in the recent resolutions of the non-commutative rank problem (Garg-Gurvits-Oliveira-Wigderson, FOCS 2016; Ivanyos-Qiao-Subrahmanyam, ITCS 2017). In this paper, we lay the foundation for another bridge between graphs and alternating matrix spaces, in the context of independent sets and vertex colorings. The corresponding structures in alternating matrix spaces are isotropic spaces and isotropic decompositions, both useful structures in group theory and manifold theory. We first show that the maximum independent set problem and the vertex c-coloring problem reduce to the maximum isotropic space problem and the isotropic c-decomposition problem, respectively. Next, we show that several topics and results about independent sets and vertex colorings have natural correspondences for isotropic spaces and decompositions. These include algorithmic problems, such as the maximum independent set problem for bipartite graphs, and exact exponential-time algorithms for the chromatic number, as well as mathematical questions, such as the number of maximal independent sets, and the relation between the maximum degree and the chromatic number. These connections lead to new interactions between graph theory and algebra. Some results have concrete applications to group theory and manifold theory, and we initiate a variant of these structures in the context of quantum information theory. Finally, we propose several open questions for further exploration. (Dedicated to the memory of Ker-I Ko)
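
The central definition is easy to check numerically: a subspace U is isotropic for an alternating matrix space spanned by A_1, …, A_m when u^T A_k v = 0 for all u, v ∈ U and every k, mirroring "no edges inside an independent set". A toy check over the reals, with hypothetical 4×4 data:

import numpy as np

def is_isotropic(U_rows, alternating_mats, tol=1e-9):
    # Rows of U_rows span the candidate subspace U; U is isotropic iff
    # u^T A v = 0 for all u, v in U and every A spanning the matrix space.
    U = np.array(U_rows, dtype=float)
    return all(np.abs(U @ A @ U.T).max() < tol for A in alternating_mats)

A = np.array([[ 0,  1,  0,  0],
              [-1,  0,  0,  0],
              [ 0,  0,  0,  1],
              [ 0,  0, -1,  0]], dtype=float)       # alternating: A^T = -A

print(is_isotropic([[1, 0, 0, 0], [0, 0, 1, 0]], [A]))  # True:  span{e1, e3}
print(is_isotropic([[1, 0, 0, 0], [0, 1, 0, 0]], [A]))  # False: e1^T A e2 = 1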

Cite as

Xiaohui Bei, Shiteng Chen, Ji Guan, Youming Qiao, and Xiaoming Sun. From Independent Sets and Vertex Colorings to Isotropic Spaces and Isotropic Decompositions: Another Bridge Between Graphs and Alternating Matrix Spaces. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 8:1-8:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{bei_et_al:LIPIcs.ITCS.2020.8,
  author =	{Bei, Xiaohui and Chen, Shiteng and Guan, Ji and Qiao, Youming and Sun, Xiaoming},
  title =	{{From Independent Sets and Vertex Colorings to Isotropic Spaces and Isotropic Decompositions: Another Bridge Between Graphs and Alternating Matrix Spaces}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{8:1--8:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.8},
  URN =		{urn:nbn:de:0030-drops-116932},
  doi =		{10.4230/LIPIcs.ITCS.2020.8},
  annote =	{Keywords: independent set, vertex coloring, graphs, matrix spaces, isotropic subspace}
}
Document
Hard Properties with (Very) Short PCPPs and Their Applications

Authors: Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Ron D. Rothblum


Abstract
We show that there exist properties that are maximally hard for testing, while still admitting PCPPs with a proof size very close to linear. Specifically, for every fixed ℓ, we construct a property P^(ℓ)⊆ {0,1}^n satisfying the following: Any testing algorithm for P^(ℓ) requires Ω(n) queries, and yet P^(ℓ) has a constant-query PCPP whose proof size is O(n⋅log^(ℓ)n), where log^(ℓ) denotes the ℓ times iterated log function (e.g., log^(2)n = log log n). The best previously known upper bound on the PCPP proof size for a maximally hard to test property was O(n⋅polylog(n)). As an immediate application, we obtain stronger separations between the standard testing model and both the tolerant testing model and the erasure-resilient testing model: for every fixed ℓ, we construct a property that has a constant-query tester, but requires Ω(n/log^(ℓ)(n)) queries for every tolerant or erasure-resilient tester.

Cite as

Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Ron D. Rothblum. Hard Properties with (Very) Short PCPPs and Their Applications. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 9:1-9:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{beneliezer_et_al:LIPIcs.ITCS.2020.9,
  author =	{Ben-Eliezer, Omri and Fischer, Eldar and Levi, Amit and Rothblum, Ron D.},
  title =	{{Hard Properties with (Very) Short PCPPs and Their Applications}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{9:1--9:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.9},
  URN =		{urn:nbn:de:0030-drops-116949},
  doi =		{10.4230/LIPIcs.ITCS.2020.9},
  annote =	{Keywords: PCPP, Property testing, Tolerant testing, Erasure resilient testing, Randomized encoding, Coding theory}
}
Document
Kronecker Powers of Tensors and Strassen’s Laser Method

Authors: Austin Conner, Joseph M. Landsberg, Fulvio Gesmundo, and Emanuele Ventura


Abstract
We answer a question, posed implicitly in [P. Bürgisser et al., 1997] and explicitly in [M. Bläser, 2013], showing the border rank of the Kronecker square of the little Coppersmith-Winograd tensor is the square of the border rank of the tensor for all q>2, a negative result for complexity theory. We further show that when q>4, the analogous result holds for the Kronecker cube. In the positive direction, we enlarge the list of explicit tensors potentially useful for the laser method. We observe that a well-known tensor, the 3 × 3 determinant polynomial regarded as a tensor, det_3 ∈ C^9 ⊗ C^9 ⊗ C^9, could potentially be used in the laser method to prove the exponent of matrix multiplication is two. Because of this, we prove new upper bounds on its Waring rank and rank (both 18), border rank and Waring border rank (both 17), which, in addition to being promising for the laser method, are of interest in their own right. We discuss "skew" cousins of the little Coppersmith-Winograd tensor and indicate why they may be useful for the laser method. We establish general results regarding border ranks of Kronecker powers of tensors, and make a detailed study of Kronecker squares of tensors in C^3 ⊗ C^3 ⊗ C^3.

Cite as

Austin Conner, Joseph M. Landsberg, Fulvio Gesmundo, and Emanuele Ventura. Kronecker Powers of Tensors and Strassen’s Laser Method. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 10:1-10:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{conner_et_al:LIPIcs.ITCS.2020.10,
  author =	{Conner, Austin and Landsberg, Joseph M. and Gesmundo, Fulvio and Ventura, Emanuele},
  title =	{{Kronecker Powers of Tensors and Strassen’s Laser Method}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{10:1--10:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.10},
  URN =		{urn:nbn:de:0030-drops-116956},
  doi =		{10.4230/LIPIcs.ITCS.2020.10},
  annote =	{Keywords: Matrix multiplication complexity, Tensor rank, Asymptotic rank, Laser method}
}
Document
Algorithms and Lower Bounds for Cycles and Walks: Small Space and Sparse Graphs

Authors: Andrea Lincoln and Nikhil Vyas


Abstract
We consider space-efficient algorithms and conditional time lower bounds for finding cycles and walks in graphs. We give a reduction that connects the running time of undirected 2k-cycle to finding directed odd cycles, s-t connectivity in directed graphs, and Max-3-SAT. For example, we show that if 2k-cycle on O(n)-edge graphs can be solved in O(n^(1.5-ε)) time for some ε>0, then a 2^(n(1-ε')) time algorithm exists for Max-3-SAT for some ε'>0. Additionally, we give a tight combinatorial lower bound for 2k-cycle detection, specifically when k is odd, of m^{2k/(k+1)+o(1)}, given the Combinatorial k-Clique Hypothesis. On the algorithms side, we present a randomized algorithm for directed s-t connectivity using O(lg(n)^2) space and O(n^{lg(n)/2 + o(lg(n))}) expected time, giving a time improvement over Savitch’s famous algorithm, which takes at least n^{lg(n) - o(lg(n))} time. Under the conjecture that every O(lg(n)^2)-space algorithm for directed s-t connectivity requires n^Ω(lg(n)) time, we show that undirected 2k-cycle in O(lg(n)) space requires n^Ω(lg(k)) time.
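
For context, the n^{lg(n)}-time baseline being improved upon is Savitch's midpoint recursion, sketched below. The recursion depth is O(lg n) with O(lg n) bits per frame, which is where the O(lg(n)^2) space bound comes from; the graph is a toy example.

def savitch(adj, s, t, steps=None):
    # Is there a directed path s -> t of length <= steps?  Guessing a midpoint
    # halves the remaining length; each of the O(lg n) frames stores O(lg n) bits.
    n = len(adj)
    if steps is None:
        steps = n
    if steps <= 1:
        return s == t or bool(adj[s][t])
    half = steps // 2
    return any(savitch(adj, s, m, half) and savitch(adj, m, t, steps - half)
               for m in range(n))

adj = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # the path 0 -> 1 -> 2
print(savitch(adj, 0, 2))                 # True
print(savitch(adj, 2, 0))                 # False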

Cite as

Andrea Lincoln and Nikhil Vyas. Algorithms and Lower Bounds for Cycles and Walks: Small Space and Sparse Graphs. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{lincoln_et_al:LIPIcs.ITCS.2020.11,
  author =	{Lincoln, Andrea and Vyas, Nikhil},
  title =	{{Algorithms and Lower Bounds for Cycles and Walks: Small Space and Sparse Graphs}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{11:1--11:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.11},
  URN =		{urn:nbn:de:0030-drops-116969},
  doi =		{10.4230/LIPIcs.ITCS.2020.11},
  annote =	{Keywords: k-cycle, Space, Savitch, Sparse Graphs, Max-3-SAT}
}
Document
High-Dimensional Expanders from Expanders

Authors: Siqi Liu, Sidhanth Mohanty, and Elizabeth Yang


Abstract
We present an elementary way to transform an expander graph into a simplicial complex where all high order random walks have a constant spectral gap, i.e., they converge rapidly to the stationary distribution. As an upshot, we obtain new constructions, as well as a natural probabilistic model to sample constant degree high-dimensional expanders. In particular, we show that given an expander graph G, adding self loops to G and taking the tensor product of the modified graph with a high-dimensional expander produces a new high-dimensional expander. Our proof of rapid mixing of high order random walks is based on the decomposable Markov chains framework introduced by [Jerrum et al., 2004].
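
At the level of adjacency matrices, the tensoring step described above is a Kronecker product, so the graph (1-skeleton) part of the construction is a one-liner; the actual construction of course operates on simplicial complexes, and the matrices below are toy stand-ins.

import numpy as np

def tensor_with_self_loops(A_G, A_H):
    # Add self loops to G (identity on the diagonal), then take the tensor
    # (categorical) product with H, which is the Kronecker product of the
    # adjacency matrices.
    A_G = A_G + np.eye(A_G.shape[0], dtype=A_G.dtype)
    return np.kron(A_G, A_H)

A_G = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # triangle, standing in for an expander
A_H = np.array([[0, 1], [1, 0]])                    # a single edge, standing in for H
print(tensor_with_self_loops(A_G, A_H).shape)       # (6, 6)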

Cite as

Siqi Liu, Sidhanth Mohanty, and Elizabeth Yang. High-Dimensional Expanders from Expanders. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 12:1-12:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{liu_et_al:LIPIcs.ITCS.2020.12,
  author =	{Liu, Siqi and Mohanty, Sidhanth and Yang, Elizabeth},
  title =	{{High-Dimensional Expanders from Expanders}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{12:1--12:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.12},
  URN =		{urn:nbn:de:0030-drops-116974},
  doi =		{10.4230/LIPIcs.ITCS.2020.12},
  annote =	{Keywords: High-Dimensional Expanders, Markov Chains}
}
Document
Approximating Cumulative Pebbling Cost Is Unique Games Hard

Authors: Jeremiah Blocki, Seunghoon Lee, and Samson Zhou


Abstract
The cumulative pebbling complexity of a directed acyclic graph G is defined as cc(G) = min_P ∑_i |P_i|, where the minimum is taken over all legal (parallel) black pebblings of G and |P_i| denotes the number of pebbles on the graph during round i. Intuitively, cc(G) captures the amortized Space-Time complexity of pebbling m copies of G in parallel. The cumulative pebbling complexity of a graph G is of particular interest in the field of cryptography as cc(G) is tightly related to the amortized Area-Time complexity of the Data-Independent Memory-Hard Function (iMHF) f_{G,H} [Joël Alwen and Vladimir Serbinenko, 2015] defined using a constant indegree directed acyclic graph (DAG) G and a random oracle H(⋅). A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find x such that f_{G,H}(x) = h. Thus, to analyze the (in)security of a candidate iMHF f_{G,H}, it is crucial to estimate the value cc(G); currently, however, upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is NP-Hard to compute cc(G), but their techniques do not even rule out an efficient (1+ε)-approximation algorithm for any constant ε>0. We show that for any constant c > 0, it is Unique Games hard to approximate cc(G) to within a factor of c. Along the way, we show the hardness of approximation of the DAG Vertex Deletion problem on DAGs of constant indegree. Namely, we show that for any k,ε >0 and given a DAG G with N nodes and constant indegree, it is Unique Games hard to distinguish between the case that G is (e_1, d_1)-reducible with e_1=N^{1/(1+2 ε)}/k and d_1=k N^{2 ε/(1+2 ε)}, and the case that G is (e_2, d_2)-depth-robust with e_2 = (1-ε)k e_1 and d_2= 0.9 N^{(1+ε)/(1+2 ε)}, which may be of independent interest. Our result generalizes a result of Svensson who proved an analogous result for DAGs with indegree 𝒪(N).
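
Since computing cc(G) is NP-hard, and by this paper even approximating it is Unique Games hard, brute force is essentially all that is available in general. The following sketch computes cc(G) exactly for tiny DAGs by running Dijkstra over pebbling configurations; the three-vertex path is a hypothetical example.

import heapq
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def cc(n, preds, sink):
    # Shortest path over configurations: one parallel round keeps any subset of
    # the current pebbles and may place a pebble on v only if all of preds[v]
    # carried pebbles in the previous round; a round reaching configuration Q
    # costs |Q|.  State space is exponential - demonstration only.
    start = frozenset()
    dist, heap = {start: 0}, [(0, tuple())]
    while heap:
        d, P = heapq.heappop(heap)
        P = frozenset(P)
        if d > dist.get(P, float("inf")):
            continue
        if sink in P:
            return d
        placeable = [v for v in range(n) if v not in P and preds[v] <= P]
        for keep in powerset(P):
            for add in powerset(placeable):
                Q = frozenset(keep) | frozenset(add)
                if Q == P:
                    continue
                nd = d + len(Q)
                if nd < dist.get(Q, float("inf")):
                    dist[Q] = nd
                    heapq.heappush(heap, (nd, tuple(sorted(Q))))

# Path 0 -> 1 -> 2: pebble 0, slide to 1, slide to 2, so cc = 3.
print(cc(3, [set(), {0}, {1}], 2))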

Cite as

Jeremiah Blocki, Seunghoon Lee, and Samson Zhou. Approximating Cumulative Pebbling Cost Is Unique Games Hard. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 13:1-13:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{blocki_et_al:LIPIcs.ITCS.2020.13,
  author =	{Blocki, Jeremiah and Lee, Seunghoon and Zhou, Samson},
  title =	{{Approximating Cumulative Pebbling Cost Is Unique Games Hard}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{13:1--13:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.13},
  URN =		{urn:nbn:de:0030-drops-116983},
  doi =		{10.4230/LIPIcs.ITCS.2020.13},
  annote =	{Keywords: Cumulative Pebbling Cost, Approximation Algorithm, Unique Games Conjecture, \gamma-Extreme Depth Robust Graph, Superconcentrator, Memory-Hard Function}
}
Document
Scheduling with Predictions and the Price of Misprediction

Authors: Michael Mitzenmacher


Abstract
In many traditional job scheduling settings, it is assumed that one knows the time it will take for a job to complete service. In such cases, strategies such as shortest job first can be used to improve performance in terms of measures such as the average time a job waits in the system. We consider the setting where the service time is not known, but is predicted, for example, by a machine learning algorithm. Our main result is the derivation, under natural assumptions, of formulae for the performance of several strategies for queueing systems that use predictions for service times in order to schedule jobs. As part of our analysis, we suggest the framework of the "price of misprediction," which offers a measure of the cost of using predicted information.
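
A toy batch experiment in the spirit of the model: schedule jobs by predicted size and compare average waiting time against arrival order and against the clairvoyant shortest-job-first schedule. Exponential service times and lognormal multiplicative prediction noise are assumptions made here for illustration, not the paper's model (which analyzes queueing systems with arrivals).

import random

random.seed(0)
sizes = [random.expovariate(1.0) for _ in range(10_000)]      # true service times
preds = [s * random.lognormvariate(0.0, 0.5) for s in sizes]  # noisy predictions

def avg_wait(order):
    finished, total = 0.0, 0.0
    for j in order:
        total += finished        # job j waits for all work scheduled before it
        finished += sizes[j]
    return total / len(order)

jobs = list(range(len(sizes)))
print("arrival order        :", avg_wait(jobs))
print("shortest predicted   :", avg_wait(sorted(jobs, key=lambda j: preds[j])))
print("shortest job (oracle):", avg_wait(sorted(jobs, key=lambda j: sizes[j])))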

Cite as

Michael Mitzenmacher. Scheduling with Predictions and the Price of Misprediction. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 14:1-14:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{mitzenmacher:LIPIcs.ITCS.2020.14,
  author =	{Mitzenmacher, Michael},
  title =	{{Scheduling with Predictions and the Price of Misprediction}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{14:1--14:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.14},
  URN =		{urn:nbn:de:0030-drops-116996},
  doi =		{10.4230/LIPIcs.ITCS.2020.14},
  annote =	{Keywords: Queues, shortest remaining processing time, algorithms with predictions, scheduling}
}
Document
Reducing Inefficiency in Carbon Auctions with Imperfect Competition

Authors: Kira Goldner, Nicole Immorlica, and Brendan Lucier


Abstract
We study auctions for carbon licenses, a policy tool used to control the social cost of pollution. Each identical license grants the right to produce a unit of pollution. Each buyer (i.e., firm that pollutes during the manufacturing process) enjoys a decreasing marginal value for licenses, but society suffers an increasing marginal cost for each license distributed. The seller (i.e., the government) can choose a number of licenses to put up for auction, and wishes to maximize the societal welfare: the total economic value of the buyers minus the social cost. Motivated by emission license markets deployed in practice, we focus on uniform price auctions with a price floor and/or price ceiling. The seller has distributional information about the market, and their goal is to tune the auction parameters to maximize expected welfare. The target benchmark is the maximum expected welfare achievable by any such auction under truth-telling behavior. Unfortunately, the uniform price auction is not truthful, and strategic behavior can significantly reduce (even below zero) the welfare of a given auction configuration. We describe a subclass of "safe-price" auctions for which the welfare at any Bayes-Nash equilibrium will approximate the welfare under truth-telling behavior. We then show that the better of a safe-price auction, or a truthful auction that allocates licenses to only a single buyer, will approximate the target benchmark. In particular, we show how to choose a number of licenses and a price floor so that the worst-case welfare, at any equilibrium, is a constant approximation to the best achievable welfare under truth-telling after excluding the welfare contribution of a single buyer.
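
A minimal clearing routine for the auction format under study may help fix ideas: each buyer submits a decreasing list of marginal values, and all accepted units trade at one price no lower than the floor. The highest-rejected-bid pricing convention is an assumption made here for concreteness; deployed emissions markets differ in details.

def uniform_price(marginal_values, supply, floor=0.0):
    # marginal_values[i] is buyer i's decreasing list of per-unit values.
    bids = sorted(((v, i) for i, vals in enumerate(marginal_values)
                   for v in vals), reverse=True)
    eligible = [b for b in bids if b[0] >= floor]      # bids below the floor never win
    accepted = eligible[:supply]
    rest = bids[len(accepted):]                        # accepted bids are a prefix of bids
    price = max(floor, rest[0][0]) if rest else floor  # highest rejected bid, floored
    alloc = {}
    for v, i in accepted:
        alloc[i] = alloc.get(i, 0) + 1
    return alloc, price

# Two buyers, 3 licenses, floor 4: bids 9, 8, 7 win and the price is max(4, 5) = 5.
print(uniform_price([[9, 7, 3], [8, 5, 2]], supply=3, floor=4.0))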

Cite as

Kira Goldner, Nicole Immorlica, and Brendan Lucier. Reducing Inefficiency in Carbon Auctions with Imperfect Competition. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 15:1-15:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{goldner_et_al:LIPIcs.ITCS.2020.15,
  author =	{Goldner, Kira and Immorlica, Nicole and Lucier, Brendan},
  title =	{{Reducing Inefficiency in Carbon Auctions with Imperfect Competition}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{15:1--15:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.15},
  URN =		{urn:nbn:de:0030-drops-117006},
  doi =		{10.4230/LIPIcs.ITCS.2020.15},
  annote =	{Keywords: welfare, price of anarchy, mechanism design, equilibrium, costs}
}
Document
Preference-Informed Fairness

Authors: Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona


Abstract
In this work, we study notions of fairness in decision-making systems when individuals have diverse preferences over the possible outcomes of the decisions. Our starting point is the seminal work of Dwork et al. [ITCS 2012] which introduced a notion of individual fairness (IF): given a task-specific similarity metric, every pair of individuals who are similarly qualified according to the metric should receive similar outcomes. We show that when individuals have diverse preferences over outcomes, requiring IF may unintentionally lead to less-preferred outcomes for the very individuals that IF aims to protect (e.g. a protected minority group). A natural alternative to IF is the classic notion of fair division, envy-freeness (EF): no individual should prefer another individual’s outcome over their own. Although EF allows for solutions where all individuals receive a highly-preferred outcome, EF may also be overly-restrictive for the decision-maker. For instance, if many individuals agree on the best outcome, then if any individual receives this outcome, they all must receive it, regardless of each individual’s underlying qualifications for the outcome. We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness. At a high-level, PIIF requires that outcomes satisfy IF-style constraints, but allows for deviations provided they are in line with individuals' preferences. We show that PIIF can permit outcomes that are more favorable to individuals than any IF solution, while providing considerably more flexibility to the decision-maker than EF. In addition, we show how to efficiently optimize any convex objective over the outcomes subject to PIIF for a rich class of individual preferences. Finally, we demonstrate the broad applicability of the PIIF framework by extending our definitions and algorithms to the multiple-task targeted advertising setting introduced by Dwork and Ilvento [ITCS 2019].

Cite as

Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, and Gal Yona. Preference-Informed Fairness. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{kim_et_al:LIPIcs.ITCS.2020.16,
  author =	{Kim, Michael P. and Korolova, Aleksandra and Rothblum, Guy N. and Yona, Gal},
  title =	{{Preference-Informed Fairness}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{16:1--16:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.16},
  URN =		{urn:nbn:de:0030-drops-117010},
  doi =		{10.4230/LIPIcs.ITCS.2020.16},
  annote =	{Keywords: algorithmic fairness}
}
Document
On a Theorem of Lovász that hom(⋅, H) Determines the Isomorphism Type of H

Authors: Jin-Yi Cai and Artem Govorov


Abstract
Graph homomorphism has been an important research topic since its introduction [László Lovász, 1967]. Stated in the language of binary relational structures in that paper [László Lovász, 1967], Lovász proved a fundamental theorem that the graph homomorphism function G ↦ hom(G, H) for 0-1 valued H (as the adjacency matrix of a graph) determines the isomorphism type of H. In the past 50 years, various extensions have been proved by Lovász and others [László Lovász, 2006; Michael Freedman et al., 2007; Christian Borgs et al., 2008; Alexander Schrijver, 2009; László Lovász and Balázs Szegedy, 2009]. These extend the basic 0-1 case to admit vertex and edge weights, but always with some restrictions, such as requiring all vertex weights to be positive. In this paper we prove a general form of this theorem where H can have arbitrary vertex and edge weights. An innovative aspect is that we prove this by a surprisingly simple and unified argument. This bypasses various technical obstacles and unifies and extends all previous known versions of this theorem on graphs. The constructive proof of our theorem can be used to make various complexity dichotomy theorems for graph homomorphism effective, i.e., it provides an algorithm that for any H either outputs a P-time algorithm solving hom(⋅, H) or a P-time reduction from a canonical #P-hard problem to hom(⋅, H).
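
The quantity at the heart of the theorem is directly computable for small graphs. A brute-force counter for hom(G, H) with edge weights follows; a 0-1 matrix H recovers Lovász's original setting, and vertex weights are omitted here for brevity.

from itertools import product

def hom_count(G_edges, n_G, H, n_H):
    # Sum over all vertex maps phi: V(G) -> V(H) of the product of edge
    # weights H[phi(u)][phi(v)].  Exponential in n_G: illustration only.
    total = 0
    for phi in product(range(n_H), repeat=n_G):
        w = 1
        for (u, v) in G_edges:
            w *= H[phi[u]][phi[v]]
            if w == 0:
                break
        total += w
    return total

# hom(K_2, H) counts the ordered edges of H in the 0-1 case.
H = [[0, 1], [1, 0]]
print(hom_count([(0, 1)], 2, H, 2))   # 2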

Cite as

Jin-Yi Cai and Artem Govorov. On a Theorem of Lovász that hom(⋅, H) Determines the Isomorphism Type of H. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{cai_et_al:LIPIcs.ITCS.2020.17,
  author =	{Cai, Jin-Yi and Govorov, Artem},
  title =	{{On a Theorem of Lov\'{a}sz that hom(⋅, H) Determines the Isomorphism Type of H}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.17},
  URN =		{urn:nbn:de:0030-drops-117022},
  doi =		{10.4230/LIPIcs.ITCS.2020.17},
  annote =	{Keywords: Graph homomorphism, Partition function, Complexity dichotomy, Connection matrices and tensors}
}
Document
Tarski’s Theorem, Supermodular Games, and the Complexity of Equilibria

Authors: Kousha Etessami, Christos Papadimitriou, Aviad Rubinstein, and Mihalis Yannakakis


Abstract
The use of monotonicity and Tarski’s theorem in existence proofs of equilibria is very widespread in economics, while Tarski’s theorem is also often used for similar purposes in the context of verification. However, there has been relatively little in the way of analysis of the complexity of finding the fixed points and equilibria guaranteed by this result. We study a computational formalism based on monotone functions on the d-dimensional grid with sides of length N, and their fixed points, as well as the closely connected subject of supermodular games and their equilibria. It is known that finding some (any) fixed point of a monotone function can be done in time log^d N, and we show it requires at least log^2 N function evaluations already on the 2-dimensional grid, even for randomized algorithms. We show that the general Tarski problem of finding some fixed point, when the monotone function is given succinctly (by a boolean circuit), is in the class PLS of problems solvable by local search and, rather surprisingly, also in the class PPAD. Finding the greatest or least fixed point guaranteed by Tarski’s theorem, however, requires d ⋅ N steps, and is NP-hard in the white box model. For supermodular games, we show that finding an equilibrium in such games is essentially computationally equivalent to the Tarski problem, and finding the maximum or minimum equilibrium is similarly harder. Interestingly, two-player supermodular games where the strategy space of one player is one-dimensional can be solved in O(log N) steps. We also show that computing (approximating) the value of Condon’s (Shapley’s) stochastic games reduces to the Tarski problem. An important open problem highlighted by this work is proving a Ω(log^d N) lower bound for small fixed dimension d ≥ 3.
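
The d ⋅ N bound for the least fixed point is realized by the classic bottom-up (Kleene) iteration, sketched below: starting from the bottom element, each non-terminal step strictly increases some coordinate, so at most d(N-1) evaluations occur, in contrast to the log^d N-type bounds for finding an arbitrary fixed point. The monotone function here is a hypothetical toy.

def least_fixed_point(f, d, N):
    # Kleene iteration from the bottom of the grid {0,...,N-1}^d; f must be
    # monotone, so the iterates form an increasing chain ending at the least
    # fixed point after at most d*(N-1) steps.
    x = (0,) * d
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

f = lambda x: (min(x[0] + 1, 7), min(max(x[1], x[0]), 9))   # monotone on {0,...,9}^2
print(least_fixed_point(f, 2, 10))                          # (7, 7)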

Cite as

Kousha Etessami, Christos Papadimitriou, Aviad Rubinstein, and Mihalis Yannakakis. Tarski’s Theorem, Supermodular Games, and the Complexity of Equilibria. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 18:1-18:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{etessami_et_al:LIPIcs.ITCS.2020.18,
  author =	{Etessami, Kousha and Papadimitriou, Christos and Rubinstein, Aviad and Yannakakis, Mihalis},
  title =	{{Tarski’s Theorem, Supermodular Games, and the Complexity of Equilibria}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{18:1--18:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.18},
  URN =		{urn:nbn:de:0030-drops-117037},
  doi =		{10.4230/LIPIcs.ITCS.2020.18},
  annote =	{Keywords: Tarski’s theorem, supermodular games, monotone functions, lattices, fixed points, Nash equilibria, computational complexity, PLS, PPAD, stochastic games, oracle model, lower bounds}
}
Document
Resolution with Counting: Dag-Like Lower Bounds and Different Moduli

Authors: Fedor Part and Iddo Tzameret


Abstract
Resolution over linear equations is a natural extension of the popular resolution refutation system, augmented with the ability to carry out basic counting. Denoted Res(lin_R), this refutation system operates with disjunctions of linear equations with boolean variables over a ring R, to refute unsatisfiable sets of such disjunctions. Beginning in the work of [Ran Raz and Iddo Tzameret, 2008], through the work of [Dmitry Itsykson and Dmitry Sokolov, 2014] which focused on tree-like lower bounds, this refutation system was shown to be fairly strong. Subsequent work (cf. [Jan Krajíček, 2017; Dmitry Itsykson and Dmitry Sokolov, 2014; Jan Krajíček and Igor Carboni Oliveira, 2018; Michal Garlik and Leszek Kołodziejczyk, 2018]) made it evident that establishing lower bounds against general Res(lin_R) refutations is a challenging and interesting task since the system captures a "minimal" extension of resolution with counting gates for which no super-polynomial lower bounds are known to date. We provide the first super-polynomial size lower bounds on general (dag-like) resolution over linear equations refutations in the large characteristic regime. In particular we prove that the subset-sum principle 1 + x_1 + ⋯ + 2^n x_n = 0 requires exponential-size refutations over ℚ. Our proof technique is nontrivial and novel: roughly speaking, we show that under certain conditions every refutation of a subset-sum instance f=0, where f is a linear polynomial over ℚ, must pass through a fat clause containing an equation f=α for each α in the image of f under boolean assignments. We develop a somewhat different approach to prove exponential lower bounds against tree-like refutations of any subset-sum instance that depends on n variables, hence also separating tree-like from dag-like refutations over the rationals. We then turn to the finite fields regime, showing that the work of Itsykson and Sokolov [Dmitry Itsykson and Dmitry Sokolov, 2014] who obtained tree-like lower bounds over 𝔽_2 can be carried over and extended to every finite field. We establish new lower bounds and separations as follows: (i) for every pair of distinct primes p,q, there exist CNF formulas with short tree-like refutations in Res(lin_{𝔽_p}) that require exponential-size tree-like Res(lin_{𝔽_q}) refutations; (ii) random k-CNF formulas require exponential-size tree-like Res(lin_{𝔽_p}) refutations, for every prime p and constant k; and (iii) exponential-size lower bounds for tree-like Res(lin_𝔽) refutations of the pigeonhole principle, for every field 𝔽.

Cite as

Fedor Part and Iddo Tzameret. Resolution with Counting: Dag-Like Lower Bounds and Different Moduli. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 19:1-19:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{part_et_al:LIPIcs.ITCS.2020.19,
  author =	{Part, Fedor and Tzameret, Iddo},
  title =	{{Resolution with Counting: Dag-Like Lower Bounds and Different Moduli}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{19:1--19:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.19},
  URN =		{urn:nbn:de:0030-drops-117041},
  doi =		{10.4230/LIPIcs.ITCS.2020.19},
  annote =	{Keywords: Proof complexity, concrete lower bounds, resolution, satisfiability, combinatorics}
}
Document
The Random-Query Model and the Memory-Bounded Coupon Collector

Authors: Ran Raz and Wei Zhan


Abstract
We study a new model of space-bounded computation, the random-query model. The model is based on a branching program over input variables x_1,…,x_n. In each time step, the branching program gets as input a random index i ∈ {1,…,n}, together with the input variable x_i (rather than querying an input variable of its choice, as in the case of a standard (oblivious) branching program). We motivate the new model in various ways and study time-space tradeoff lower bounds in this model. Our main technical result is a quadratic time-space lower bound for zero-error computations in the random-query model, for XOR, Majority and many other functions. More precisely, a zero-error computation is a computation that stops with high probability and such that, conditioned on the event that the computation stopped, the output is correct with probability 1. We prove that for any Boolean function f: {0,1}^n → {0,1} with sensitivity k, any zero-error computation with time T and space S satisfies T ⋅ (S + log n) ≥ Ω(n⋅k). We note that the best time-space lower bounds for standard oblivious branching programs are only slightly super-linear, and improving these bounds is an important long-standing open problem. To prove our results, we study a memory-bounded variant of the coupon-collector problem that seems to us of independent interest and, to the best of our knowledge, has not been studied before. We consider a zero-error version of the coupon-collector problem, in which the coupon-collector may explicitly choose to stop when he/she is sure, with zero error, that all coupons have already been collected. We prove that any zero-error coupon-collector that stops with high probability in time T and uses space S satisfies T⋅(S + log n) ≥ Ω(n^2), where n is the number of different coupons.
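For intuition, the T⋅(S + log n) ≥ Ω(n²) trade-off is best contrasted with the unrestricted baseline: a collector that remembers every coupon seen (space about n bits) stops after roughly n⋅H_n ≈ n ln n draws. A minimal Python simulation of that baseline (illustrative only; not from the paper):

import random

def full_memory_collector(n: int, rng: random.Random) -> int:
    # Baseline zero-error collector: keep a set of all coupons seen
    # (space ~ n bits) and stop exactly when every coupon has appeared,
    # so conditioned on stopping the answer is correct with probability 1.
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

if __name__ == "__main__":
    rng = random.Random(0)
    n = 1000
    runs = [full_memory_collector(n, rng) for _ in range(20)]
    print(sum(runs) / len(runs))  # concentrates near n*H_n ~ 7485 for n = 1000

With S ≈ n this baseline achieves T ≈ n ln n, so T⋅(S + log n) ≈ n² log n, comfortably above the barrier; the theorem says no strategy with substantially smaller space can push T⋅(S + log n) below Ω(n²).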

Cite as

Ran Raz and Wei Zhan. The Random-Query Model and the Memory-Bounded Coupon Collector. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 20:1-20:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{raz_et_al:LIPIcs.ITCS.2020.20,
  author =	{Raz, Ran and Zhan, Wei},
  title =	{{The Random-Query Model and the Memory-Bounded Coupon Collector}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{20:1--20:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.20},
  URN =		{urn:nbn:de:0030-drops-117053},
  doi =		{10.4230/LIPIcs.ITCS.2020.20},
  annote =	{Keywords: random-query model, time-space trade-offs, branching programs}
}
Document
Strategy-Stealing Is Non-Constructive

Authors: Greg Bodwin and Ofer Grossman


Abstract
In many combinatorial games, one can prove that the first player wins under best play using a simple but non-constructive argument called strategy-stealing. This work is about the complexity behind these proofs: how hard is it to actually find a winning move in a game, when you know by strategy-stealing that one exists? We prove that this problem is PSPACE-Complete already for Minimum Poset Games and Symmetric Maker-Maker Games, which are simple classes of games that capture two of the main types of strategy-stealing arguments in the current literature.

Cite as

Greg Bodwin and Ofer Grossman. Strategy-Stealing Is Non-Constructive. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 21:1-21:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bodwin_et_al:LIPIcs.ITCS.2020.21,
  author =	{Bodwin, Greg and Grossman, Ofer},
  title =	{{Strategy-Stealing Is Non-Constructive}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{21:1--21:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.21},
  URN =		{urn:nbn:de:0030-drops-117069},
  doi =		{10.4230/LIPIcs.ITCS.2020.21},
  annote =	{Keywords: PSPACE-hard, Hex, Combinatorial Game Theory}
}
Document
Distribution-Free Testing of Linear Functions on ℝⁿ

Authors: Noah Fleming and Yuichi Yoshida


Abstract
We study the problem of testing whether a function f:ℝⁿ → ℝ is linear (i.e., both additive and homogeneous) in the distribution-free property testing model, where the distance between functions is measured with respect to an unknown probability distribution over ℝⁿ. We show that, given query access to f, sampling access to the unknown distribution as well as the standard Gaussian, and ε > 0, we can distinguish additive functions from functions that are ε-far from additive functions with O((1/ε)log(1/ε)) queries, independent of n. Furthermore, under the assumption that f is a continuous function, the additivity tester can be extended to a distribution-free tester for linearity using the same number of queries. On the other hand, we show that if we are only allowed to get values of f on sampled points, then any distribution-free tester requires Ω(n) samples, even if the underlying distribution is the standard Gaussian.
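To make the query pattern concrete, here is a toy BLR-style additivity check in Python. It is only a caricature of the paper's tester (which mixes samples from the unknown distribution with Gaussian samples and achieves O((1/ε)log(1/ε)) queries); all names are hypothetical.

import random
from typing import Callable, List

def additivity_test(f: Callable[[List[float]], float],
                    sample: Callable[[], List[float]],
                    trials: int = 100,
                    tol: float = 1e-9) -> bool:
    # Accept iff f(x) + f(y) == f(x + y) (up to tol) on all sampled
    # pairs; a single violated pair is a witness of non-additivity.
    for _ in range(trials):
        x, y = sample(), sample()
        xy = [a + b for a, b in zip(x, y)]
        if abs(f(x) + f(y) - f(xy)) > tol:
            return False
    return True

if __name__ == "__main__":
    n = 5
    w = [random.gauss(0, 1) for _ in range(n)]
    gauss = lambda: [random.gauss(0, 1) for _ in range(n)]
    print(additivity_test(lambda v: sum(a * b for a, b in zip(w, v)), gauss))  # True
    print(additivity_test(lambda v: 1 + sum(v), gauss))                        # False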

Cite as

Noah Fleming and Yuichi Yoshida. Distribution-Free Testing of Linear Functions on ℝⁿ. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fleming_et_al:LIPIcs.ITCS.2020.22,
  author =	{Fleming, Noah and Yoshida, Yuichi},
  title =	{{Distribution-Free Testing of Linear Functions on \mathbb{R}ⁿ}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.22},
  URN =		{urn:nbn:de:0030-drops-117076},
  doi =		{10.4230/LIPIcs.ITCS.2020.22},
  annote =	{Keywords: Property Testing, Distribution-Free Testing, Linearity Testing}
}
Document
Random Sketching, Clustering, and Short-Term Memory in Spiking Neural Networks

Authors: Yael Hitron, Nancy Lynch, Cameron Musco, and Merav Parter


Abstract
We study input compression in a biologically inspired model of neural computation. We demonstrate that a network consisting of a random projection step (implemented via random synaptic connectivity) followed by a sparsification step (implemented via winner-take-all competition) can reduce well-separated high-dimensional input vectors to well-separated low-dimensional vectors. By augmenting our network with a third module, we can efficiently map each input (along with any small perturbations of the input) to a unique representative neuron, solving a neural clustering problem. Both the size of our network and its processing time, i.e., the time it takes the network to compute the compressed output given a presented input, are independent of the (potentially large) dimension of the input patterns and depend only on the number of distinct inputs that the network must encode and the pairwise relative Hamming distance between these inputs. The first two steps of our construction mirror known biological networks, for example, in the fruit fly olfactory system [Caron et al., 2013; Lin et al., 2014; Dasgupta et al., 2017]. Our analysis helps provide a theoretical understanding of these networks and lay a foundation for how random compression and input memorization may be implemented in biological neural networks. Technically, a contribution in our network design is the implementation of a short-term memory. Our network can be given a desired memory time t_m as an input parameter and satisfies the following with high probability: any pattern presented several times within a time window of t_m rounds will be mapped to a single representative output neuron. However, a pattern not presented for c⋅t_m rounds for some constant c>1 will be "forgotten", and its representative output neuron will be released, to accommodate newly introduced patterns.
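A rough numpy sketch of the first two modules, random projection followed by winner-take-all sparsification; the dimensions and sparsity level are hypothetical knobs, and the third (clustering/short-term-memory) module is not modeled here.

import numpy as np

def make_synapses(d: int, m: int, density: float, seed: int = 0) -> np.ndarray:
    # Fix a sparse random binary synaptic matrix once, playing the role
    # of the random connectivity in the projection step.
    rng = np.random.default_rng(seed)
    return (rng.random((m, d)) < density).astype(float)

def compress(M: np.ndarray, x: np.ndarray, k: int) -> np.ndarray:
    # Project, then k-winner-take-all: only the k most strongly driven
    # output neurons fire, giving a sparse low-dimensional code.
    y = M @ x
    z = np.zeros(M.shape[0])
    z[np.argsort(y)[-k:]] = 1.0
    return z

if __name__ == "__main__":
    M = make_synapses(d=1000, m=200, density=0.05)
    rng = np.random.default_rng(1)
    a = (rng.random(1000) < 0.5).astype(float)
    b = (rng.random(1000) < 0.5).astype(float)
    print(int(np.sum(compress(M, a, 10) != compress(M, b, 10))))  # code distance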

Cite as

Yael Hitron, Nancy Lynch, Cameron Musco, and Merav Parter. Random Sketching, Clustering, and Short-Term Memory in Spiking Neural Networks. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 23:1-23:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{hitron_et_al:LIPIcs.ITCS.2020.23,
  author =	{Hitron, Yael and Lynch, Nancy and Musco, Cameron and Parter, Merav},
  title =	{{Random Sketching, Clustering, and Short-Term Memory in Spiking Neural Networks}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{23:1--23:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.23},
  URN =		{urn:nbn:de:0030-drops-117087},
  doi =		{10.4230/LIPIcs.ITCS.2020.23},
  annote =	{Keywords: biological distributed computing, spiking neural networks, compressed sensing, clustering, random projection, dimensionality reduction, winner-take-all}
}
Document
Signed Tropical Convexity

Authors: Georg Loho and László A. Végh


Abstract
We establish a new notion of tropical convexity for signed tropical numbers. We provide several equivalent descriptions involving balance relations and intersections of open halfspaces as well as the image of a union of polytopes over Puiseux series and hyperoperations. Along the way, we deduce a new Farkas' lemma and Fourier-Motzkin elimination without the non-negativity restriction on the variables. This leads to a Minkowski-Weyl theorem for polytopes over the signed tropical numbers.

Cite as

Georg Loho and László A. Végh. Signed Tropical Convexity. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 24:1-24:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{loho_et_al:LIPIcs.ITCS.2020.24,
  author =	{Loho, Georg and V\'{e}gh, L\'{a}szl\'{o} A.},
  title =	{{Signed Tropical Convexity}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{24:1--24:35},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.24},
  URN =		{urn:nbn:de:0030-drops-117097},
  doi =		{10.4230/LIPIcs.ITCS.2020.24},
  annote =	{Keywords: tropical convexity, signed tropical numbers, Farkas' lemma}
}
Document
Distributional Property Testing in a Quantum World

Authors: András Gilyén and Tongyang Li


Abstract
A fundamental problem in statistics and learning theory is to test properties of distributions. We show that quantum computers can solve such problems with significant speed-ups. We also introduce a novel access model for quantum distributions, enabling the coherent preparation of quantum samples, and propose a general framework that can naturally handle both classical and quantum distributions in a unified manner. Our framework generalizes and improves previous quantum algorithms for testing closeness between unknown distributions, testing independence between two distributions, and estimating the Shannon / von Neumann entropy of distributions. For classical distributions our algorithms significantly improve the precision dependence of some earlier results. We also show that in our framework procedures for classical distributions can be directly lifted to the more general case of quantum distributions, and thus obtain the first speed-ups for testing properties of density operators that can be accessed coherently rather than only via sampling.

Cite as

András Gilyén and Tongyang Li. Distributional Property Testing in a Quantum World. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 25:1-25:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gilyen_et_al:LIPIcs.ITCS.2020.25,
  author =	{Gily\'{e}n, Andr\'{a}s and Li, Tongyang},
  title =	{{Distributional Property Testing in a Quantum World}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{25:1--25:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.25},
  URN =		{urn:nbn:de:0030-drops-117100},
  doi =		{10.4230/LIPIcs.ITCS.2020.25},
  annote =	{Keywords: distributional property testing, quantum algorithms, quantum query complexity}
}
Document
On Local Testability in the Non-Signaling Setting

Authors: Alessandro Chiesa, Peter Manohar, and Igor Shinkar


Abstract
Non-signaling strategies are a generalization of quantum strategies that have been studied in physics for decades, and have recently found applications in theoretical computer science. These applications motivate the study of local-to-global phenomena for non-signaling functions. We prove that low-degree testing in the non-signaling setting is possible, assuming that the locality of the non-signaling function exceeds a threshold. We additionally show that if the locality is below the threshold then the test fails spectacularly, in that there exists a non-signaling function which passes the test with probability 1 and yet is maximally far from being low-degree. Along the way, we present general results about the local testability of linear codes in the non-signaling setting. These include formulating natural definitions that capture the condition that a non-signaling function "belongs" to a given code, and characterizing the sets of local constraints that imply membership in the code. We prove these results by formulating a logical inference system for linear constraints on non-signaling functions that is complete and sound.

Cite as

Alessandro Chiesa, Peter Manohar, and Igor Shinkar. On Local Testability in the Non-Signaling Setting. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 26:1-26:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chiesa_et_al:LIPIcs.ITCS.2020.26,
  author =	{Chiesa, Alessandro and Manohar, Peter and Shinkar, Igor},
  title =	{{On Local Testability in the Non-Signaling Setting}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{26:1--26:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.26},
  URN =		{urn:nbn:de:0030-drops-117112},
  doi =		{10.4230/LIPIcs.ITCS.2020.26},
  annote =	{Keywords: non-signaling strategies, locally testable codes, low-degree testing, Fourier analysis}
}
Document
Local Access to Huge Random Objects Through Partial Sampling

Authors: Amartya Shankha Biswas, Ronitt Rubinfeld, and Anak Yodpinyanee


Abstract
Consider an algorithm performing a computation on a huge random object (for example a random graph or a "long" random walk). Is it necessary to generate the entire object prior to the computation, or is it possible to provide query access to the object and sample it incrementally "on-the-fly" (as requested by the algorithm)? Such an implementation should emulate the random object by answering queries in a manner consistent with an instance of the random object sampled from the true distribution (or close to it). This paradigm is useful when the algorithm is sub-linear, and thus sampling the entire object up front would ruin its efficiency. Our first set of results focuses on undirected graphs with independent edge probabilities, i.e., each edge is chosen as an independent Bernoulli random variable. We provide a general implementation for this model under certain assumptions. Then, we use this to obtain the first efficient local implementations for the Erdős-Rényi G(n,p) model for all values of p, and for the Stochastic Block model. As in previous local-access implementations for random graphs, we support Vertex-Pair and Next-Neighbor queries. In addition, we introduce a new Random-Neighbor query. Next, we give the first local-access implementation for All-Neighbors queries in Kleinberg’s (sparse and directed) Small-World model. Our implementations require no pre-processing time, and answer each query using O(poly(log n)) time, random bits, and additional space. Next, we show how to implement random Catalan objects, specifically focusing on Dyck paths (balanced random walks on the integer line that are always non-negative). Here, we support Height queries to find the location of the walk, and First-Return queries to find the time when the walk returns to a specified location. This in turn can be used to implement Next-Neighbor queries on random rooted ordered trees, and Matching-Bracket queries on random well-bracketed expressions (the Dyck language). Finally, we introduce two features to define a new model that: (1) allows multiple independent (and even simultaneous) instantiations of the same implementation to be consistent with each other without the need for communication, and (2) allows us to generate a richer class of random objects that do not have a succinct description. Specifically, we study uniformly random valid q-colorings of an input graph G with maximum degree Δ. This is in contrast to prior work in the area, where the relevant random objects are defined as a distribution with O(1) parameters (for example, n and p in the G(n,p) model). The distribution over valid colorings is instead specified via a "huge" input (the underlying graph G) that is far too large to be read by a sub-linear time algorithm. Instead, our implementation accesses G through local neighborhood probes, and is able to answer queries about the color of any given vertex in sub-linear time for q ≥ 9Δ, in a manner that is consistent with a specific random valid coloring of G. Furthermore, the implementation is memoryless, and can maintain consistency with non-communicating copies of itself.
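One standard trick behind such implementations, sketched below for Next-Neighbor queries on G(n,p): instead of flipping a p-biased coin per candidate neighbor, jump directly to the next present edge with a geometric skip, so a query costs O(1) random draws even when p is tiny. The real implementations additionally memoize answers so that Vertex-Pair, Next-Neighbor and Random-Neighbor queries stay mutually consistent; that bookkeeping is omitted in this hypothetical sketch.

import math
import random

def next_neighbor(last: int, n: int, p: float, rng: random.Random) -> int:
    # For a fixed vertex of G(n, p): return the next neighbor index
    # after `last`, or -1 if none remains. The gap to the next present
    # edge is Geometric(p), sampled by inversion in O(1) time.
    if p <= 0.0:
        return -1
    gap = 1 if p >= 1.0 else max(
        1, math.ceil(math.log(1.0 - rng.random()) / math.log(1.0 - p)))
    nxt = last + gap
    return nxt if nxt < n else -1

if __name__ == "__main__":
    rng = random.Random(1)
    last = -1
    while (last := next_neighbor(last, n=30, p=0.2, rng=rng)) != -1:
        print(last, end=" ")
    print()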

Cite as

Amartya Shankha Biswas, Ronitt Rubinfeld, and Anak Yodpinyanee. Local Access to Huge Random Objects Through Partial Sampling. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 27:1-27:65, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{biswas_et_al:LIPIcs.ITCS.2020.27,
  author =	{Biswas, Amartya Shankha and Rubinfeld, Ronitt and Yodpinyanee, Anak},
  title =	{{Local Access to Huge Random Objects Through Partial Sampling}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{27:1--27:65},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.27},
  URN =		{urn:nbn:de:0030-drops-117126},
  doi =		{10.4230/LIPIcs.ITCS.2020.27},
  annote =	{Keywords: sublinear time algorithms, random generation, local computation}
}
Document
Monotone Probability Distributions over the Boolean Cube Can Be Learned with Sublinear Samples

Authors: Ronitt Rubinfeld and Arsen Vasilyan


Abstract
A probability distribution over the Boolean cube is monotone if flipping the value of a coordinate from zero to one can only increase the probability of an element. Given samples of an unknown monotone distribution over the Boolean cube, we give (to our knowledge) the first algorithm that learns an approximation of the distribution in statistical distance using a number of samples that is sublinear in the domain size. To do this, we develop a structural lemma describing monotone probability distributions. The structural lemma has further implications for the sample complexity of basic testing tasks for analyzing monotone probability distributions over the Boolean cube: we use it to give nontrivial upper bounds on the tasks of estimating the distance of a monotone distribution to uniform and of estimating the support size of a monotone distribution. In the setting of monotone probability distributions over the Boolean cube, our algorithms are the first to have sample complexity lower than the known lower bounds for the same testing tasks on arbitrary (not necessarily monotone) probability distributions. One further consequence of our learning algorithm is an improved sample complexity for the task of testing whether a distribution on the Boolean cube is monotone.
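For concreteness, the property in question can be pinned down by a brute-force checker over an explicit probability table (exponential in n, so purely definitional; the paper's algorithms never materialize such a table):

from itertools import product

def is_monotone(p: dict, n: int) -> bool:
    # p maps points of {0,1}^n (bit tuples) to probabilities; missing
    # points have probability 0. Monotone: flipping a coordinate from
    # 0 to 1 never decreases the probability of the point.
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if p.get(y, 0.0) < p.get(x, 0.0):
                    return False
    return True

if __name__ == "__main__":
    print(is_monotone({x: 0.25 for x in product((0, 1), repeat=2)}, 2))  # True
    print(is_monotone({(0, 0): 0.9, (1, 1): 0.1}, 2))                    # False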

Cite as

Ronitt Rubinfeld and Arsen Vasilyan. Monotone Probability Distributions over the Boolean Cube Can Be Learned with Sublinear Samples. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 28:1-28:34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{rubinfeld_et_al:LIPIcs.ITCS.2020.28,
  author =	{Rubinfeld, Ronitt and Vasilyan, Arsen},
  title =	{{Monotone Probability Distributions over the Boolean Cube Can Be Learned with Sublinear Samples}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{28:1--28:34},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.28},
  URN =		{urn:nbn:de:0030-drops-117138},
  doi =		{10.4230/LIPIcs.ITCS.2020.28},
  annote =	{Keywords: Learning distributions, monotone probability distributions, estimating support size}
}
Document
Sample Complexity Bounds for Influence Maximization

Authors: Gal Sadeh, Edith Cohen, and Haim Kaplan


Abstract
Influence maximization (IM) is the problem of finding, for a given s ≥ 1, a set S of |S| = s nodes in a network with maximum influence. With stochastic diffusion models, the influence of a set S of seed nodes is defined as the expectation of its reachability over simulations, where each simulation specifies a deterministic reachability function. Two well-studied special cases are the Independent Cascade (IC) and the Linear Threshold (LT) models of Kempe, Kleinberg, and Tardos [Kempe et al., 2003]. The influence function in stochastic diffusion is unbiasedly estimated by averaging reachability values over i.i.d. simulations. We study the IM sample complexity: the number of simulations needed to determine a (1-ε)-approximate maximizer with confidence 1-δ. Our main result is a surprising upper bound of O(s τ ε^{-2} ln (n/δ)) for a broad class of models that includes the IC and LT models and their mixtures, where n is the number of nodes and τ is the number of diffusion steps. Generally τ ≪ n, so this significantly improves over the generic upper bound of O(s n ε^{-2} ln (n/δ)). Our sample complexity bounds are derived from novel upper bounds on the variance of the reachability that allow for small relative error for influential sets and additive error when influence is small. Moreover, we provide a data-adaptive method that can detect models on which fewer simulations suffice, and use fewer accordingly. Finally, we provide an efficient greedy design that computes a (1-1/e-ε)-approximate maximizer from simulations and applies to any submodular stochastic diffusion model that satisfies the variance bounds.
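The estimator whose sample complexity is being bounded is simply the average of reachability values over i.i.d. simulations; a minimal sketch under the IC model, with a toy graph and hypothetical names:

import random

def ic_reach(seeds, adj, p, rng):
    # One Independent-Cascade simulation: every edge is live
    # independently with probability p; return the number of nodes
    # reachable from the seed set via live edges.
    live, frontier = set(seeds), list(seeds)
    while frontier:
        v = frontier.pop()
        for w in adj[v]:
            if w not in live and rng.random() < p:
                live.add(w)
                frontier.append(w)
    return len(live)

def estimate_influence(seeds, adj, p, sims, seed=0):
    # Unbiased Monte Carlo estimate of the influence of `seeds`.
    rng = random.Random(seed)
    return sum(ic_reach(seeds, adj, p, rng) for _ in range(sims)) / sims

if __name__ == "__main__":
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # a 4-cycle
    print(estimate_influence({0}, adj, p=0.3, sims=10000))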

Cite as

Gal Sadeh, Edith Cohen, and Haim Kaplan. Sample Complexity Bounds for Influence Maximization. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 29:1-29:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{sadeh_et_al:LIPIcs.ITCS.2020.29,
  author =	{Sadeh, Gal and Cohen, Edith and Kaplan, Haim},
  title =	{{Sample Complexity Bounds for Influence Maximization}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{29:1--29:36},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.29},
  URN =		{urn:nbn:de:0030-drops-117140},
  doi =		{10.4230/LIPIcs.ITCS.2020.29},
  annote =	{Keywords: Sample complexity, Influence maximization, Submodular maximization}
}
Document
On Oblivious Amplification of Coin-Tossing Protocols

Authors: Nir Bitansky and Nathan Geier


Abstract
We consider the problem of amplifying two-party coin-tossing protocols: given a protocol where it is possible to bias the common output by at most ρ, we aim to obtain a new protocol where the output can be biased by at most ρ* < ρ. We rule out the existence of a natural type of amplifier, called an oblivious amplifier, for every ρ* < ρ. Such amplifiers ignore the way that the underlying ρ-bias protocol works and can only invoke an oracle that provides ρ-bias bits. We provide two proofs of this impossibility. The first is by a reduction to the impossibility of deterministic randomness extraction from Santha-Vazirani sources. The second is a direct proof that is more general and also rules out certain types of asymmetric amplification. In addition, it gives yet another proof for the Santha-Vazirani impossibility.

Cite as

Nir Bitansky and Nathan Geier. On Oblivious Amplification of Coin-Tossing Protocols. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 30:1-30:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bitansky_et_al:LIPIcs.ITCS.2020.30,
  author =	{Bitansky, Nir and Geier, Nathan},
  title =	{{On Oblivious Amplification of Coin-Tossing Protocols}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{30:1--30:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.30},
  URN =		{urn:nbn:de:0030-drops-117150},
  doi =		{10.4230/LIPIcs.ITCS.2020.30},
  annote =	{Keywords: Coin Tossing, Amplification, Lower Bound, Santha Vazirani}
}
Document
A New Analysis of Differential Privacy’s Generalization Guarantees

Authors: Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld


Abstract
We give a new proof of the "transfer theorem" underlying adaptive data analysis: that any mechanism for answering adaptively chosen statistical queries that is differentially private and sample-accurate is also accurate out-of-sample. Our new proof is elementary and gives structural insights that we expect will be useful elsewhere. We show: 1) that differential privacy ensures that the expectation of any query on the conditional distribution on datasets induced by the transcript of the interaction is close to its expectation on the data distribution, and 2) sample accuracy on its own ensures that any query answer produced by the mechanism is close to the expectation of the query on the conditional distribution. This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the conditional distribution after the mechanism has committed to its answers. The transfer theorem then follows by summing these two bounds, and in particular, avoids the "monitor argument" used to derive high probability bounds in prior work. An upshot of our new proof technique is that the concrete bounds we obtain are substantially better than the best previously known bounds, even though the improvements are in the constants, rather than the asymptotics (which are known to be tight). As we show, our new bounds outperform the naive "sample-splitting" baseline at dramatically smaller dataset sizes compared to the previous state of the art, bringing techniques from this literature closer to practicality.
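Both hypotheses of the transfer theorem, differential privacy plus sample accuracy, are satisfied for bounded statistical queries by the textbook Laplace mechanism; the sketch below shows that standard mechanism, not the paper's new analysis.

import random

def private_mean(query, data, eps, rng):
    # Answer a statistical query q: X -> [0,1] on a dataset of size n
    # with the Laplace mechanism: the empirical mean has sensitivity
    # 1/n, so adding Laplace(1/(eps*n)) noise gives eps-differential
    # privacy, while the ~1/(eps*n) noise scale keeps it sample-accurate.
    n = len(data)
    empirical = sum(query(x) for x in data) / n
    # Laplace(0, b) sampled as the difference of two Exponential(1/b) draws.
    noise = rng.expovariate(eps * n) - rng.expovariate(eps * n)
    return empirical + noise

if __name__ == "__main__":
    rng = random.Random(0)
    data = [rng.random() for _ in range(10000)]
    print(private_mean(lambda x: x, data, eps=0.5, rng=rng))  # close to 0.5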

Cite as

Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld. A New Analysis of Differential Privacy’s Generalization Guarantees. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 31:1-31:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{jung_et_al:LIPIcs.ITCS.2020.31,
  author =	{Jung, Christopher and Ligett, Katrina and Neel, Seth and Roth, Aaron and Sharifi-Malvajerdi, Saeed and Shenfeld, Moshe},
  title =	{{A New Analysis of Differential Privacy’s Generalization Guarantees}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{31:1--31:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.31},
  URN =		{urn:nbn:de:0030-drops-117165},
  doi =		{10.4230/LIPIcs.ITCS.2020.31},
  annote =	{Keywords: Differential Privacy, Adaptive Data Analysis, Transfer Theorem}
}
Document
Robust Algorithms for the Secretary Problem

Authors: Domagoj Bradac, Anupam Gupta, Sahil Singla, and Goran Zuzic


Abstract
In classical secretary problems, a sequence of n elements arrive in a uniformly random order, and we want to choose a single item, or a set of size K. The random order model allows us to escape from the strong lower bounds for the adversarial order setting, and excellent algorithms are known in this setting. However, one worrying aspect of these results is that the algorithms overfit to the model: they are not very robust. Indeed, if a few "outlier" arrivals are adversarially placed in the arrival sequence, the algorithms perform poorly. E.g., Dynkin’s popular 1/e-secretary algorithm is sensitive to even a single adversarial arrival: if the adversary gives one large bid at the beginning of the stream, the algorithm does not select any element at all. We investigate a robust version of the secretary problem. In the Byzantine Secretary model, we have two kinds of elements: green (good) and red (rogue). The values of all elements are chosen by the adversary. The green elements arrive at times uniformly randomly drawn from [0,1]. The red elements, however, arrive at adversarially chosen times. Naturally, the algorithm does not see these colors: how well can it solve secretary problems? We show that selecting the highest-value red set, or the single largest green element, is not possible with even a small fraction of red items. However, on the positive side, we show that these are the only bad cases, by giving algorithms which get value comparable to the value of the optimal green set minus the largest green item. (This benchmark reminds us of regret minimization and digital auctions, where we subtract an additive term depending on the "scale" of the problem.) Specifically, we give an algorithm to pick K elements, which gets within a (1-ε) factor of the above benchmark, as long as K ≥ poly(ε^{-1} log n). We extend this to the knapsack secretary problem, for large knapsack size K. For the single-item case, an analogous benchmark is the value of the second-largest green item. For value-maximization, we give a poly(log^* n)-competitive algorithm, using a multi-layered bucketing scheme that adaptively refines our estimates of the second-max over time. For probability-maximization, we show the existence of a good randomized algorithm, using the minimax principle. We hope that this work will spur further research on robust algorithms for the secretary problem, and for other problems in sequential decision-making, where the existing algorithms are not robust and often tend to overfit to the model.
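The fragility being described is easy to reproduce: Dynkin's 1/e rule observes the first n/e arrivals and then accepts the first later record, so a single huge adversarial bid placed at the very start means no later arrival is ever a record, and nothing is selected. A small Python illustration:

import math
import random

def dynkin(values):
    # Classical 1/e rule: skip the first n/e arrivals, then take the
    # first arrival exceeding everything seen so far (or nothing).
    cutoff = int(len(values) / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for i in range(cutoff, len(values)):
        if values[i] > best_seen:
            return i
    return None

if __name__ == "__main__":
    rng = random.Random(0)
    vals = [rng.random() for _ in range(100)]
    print(dynkin(vals))              # usually returns the maximum's index
    print(dynkin([10 ** 9] + vals))  # None: one early outlier poisons the rule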

Cite as

Domagoj Bradac, Anupam Gupta, Sahil Singla, and Goran Zuzic. Robust Algorithms for the Secretary Problem. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 32:1-32:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bradac_et_al:LIPIcs.ITCS.2020.32,
  author =	{Bradac, Domagoj and Gupta, Anupam and Singla, Sahil and Zuzic, Goran},
  title =	{{Robust Algorithms for the Secretary Problem}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{32:1--32:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.32},
  URN =		{urn:nbn:de:0030-drops-117171},
  doi =		{10.4230/LIPIcs.ITCS.2020.32},
  annote =	{Keywords: stochastic optimization, robust optimization, secretary problem, matroid secretary, robust secretary}
}
Document
Universal Communication, Universal Graphs, and Graph Labeling

Authors: Nathaniel Harms


Abstract
We introduce a communication model called universal SMP, in which Alice and Bob receive a function f belonging to a family ℱ, and inputs x and y. Alice and Bob use shared randomness to send a message to a third party who cannot see f, x, y, or the shared randomness, and must decide f(x,y). Our main application of universal SMP is to relate communication complexity to graph labeling, where the goal is to give a short label to each vertex in a graph, so that adjacency or other functions of two vertices x and y can be determined from the labels ℓ(x), ℓ(y). We give a universal SMP protocol using O(k^2) bits of communication for deciding whether two vertices have distance at most k in distributive lattices (generalizing the k-Hamming Distance problem in communication complexity), and explain how this implies an O(k^2 log n) labeling scheme for deciding dist(x,y) ≤ k on distributive lattices of size n; in contrast, we show that a universal SMP protocol for determining dist(x,y) ≤ 2 in modular lattices (a superset of distributive lattices) has super-constant Ω(n^{1/4}) communication cost. On the other hand, we demonstrate that many graph families known to have efficient adjacency labeling schemes, such as trees, low-arboricity graphs, and planar graphs, admit constant-cost communication protocols for adjacency. Trees also have an O(k) protocol for deciding dist(x,y) ≤ k, and planar graphs have an O(1) protocol for dist(x,y) ≤ 2, which implies a new O(log n) labeling scheme for the same problem on planar graphs.
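For contrast, here is the folklore O(log n)-bit adjacency labeling scheme for trees that such constant-cost protocols generalize: label every vertex with the pair (own id, parent id), after which adjacency is decided from two labels alone.

def tree_adjacency_labels(parent):
    # parent maps each vertex to its parent (the root maps to itself).
    # The label of v is the pair (v, parent[v]): about 2 log n bits.
    return {v: (v, p) for v, p in parent.items()}

def adjacent(lx, ly):
    # Two distinct tree vertices are adjacent iff one is the other's parent.
    return lx[1] == ly[0] or ly[1] == lx[0]

if __name__ == "__main__":
    labels = tree_adjacency_labels({0: 0, 1: 0, 2: 1})  # path 0 - 1 - 2
    print(adjacent(labels[0], labels[1]))  # True
    print(adjacent(labels[0], labels[2]))  # False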

Cite as

Nathaniel Harms. Universal Communication, Universal Graphs, and Graph Labeling. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 33:1-33:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{harms:LIPIcs.ITCS.2020.33,
  author =	{Harms, Nathaniel},
  title =	{{Universal Communication, Universal Graphs, and Graph Labeling}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{33:1--33:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.33},
  URN =		{urn:nbn:de:0030-drops-117182},
  doi =		{10.4230/LIPIcs.ITCS.2020.33},
  annote =	{Keywords: Universal graphs, graph labeling, distance labeling, planar graphs, lattices, hamming distance}
}
Document
Approaching MCSP from Above and Below: Hardness for a Conditional Variant and AC^0[p]

Authors: Rahul Ilango


Abstract
The Minimum Circuit Size Problem (MCSP) asks whether a given Boolean function has a circuit of at most a given size. MCSP has been studied for over a half-century and has deep connections throughout theoretical computer science including to cryptography, computational learning theory, and proof complexity. For example, we know (informally) that if MCSP is easy to compute, then most cryptography can be broken. Despite this cryptographic hardness connection and extensive research, we still know relatively little about the hardness of MCSP unconditionally. Indeed, until very recently it was unknown whether MCSP can be computed in AC^0[2] (Golovnev et al., ICALP 2019). Our main contribution in this paper is to formulate a new "oracle" variant of circuit complexity and prove that this problem is NP-complete under randomized reductions. In more detail, we define the Minimum Oracle Circuit Size Problem (MOCSP) that takes as input the truth table of a Boolean function f, a size threshold s, and the truth table of an oracle Boolean function O, and determines whether there is a circuit with O-oracle gates and at most s wires that computes f. We prove that MOCSP is NP-complete under randomized polynomial-time reductions. We also extend the recent AC^0[p] lower bound against MCSP by Golovnev et al. to a lower bound against the circuit minimization problem for depth-d formulas, (AC^0_d)-MCSP. We view this result as primarily a technical contribution. In particular, our proof takes a radically different approach from prior MCSP-related hardness results.

Cite as

Rahul Ilango. Approaching MCSP from Above and Below: Hardness for a Conditional Variant and AC^0[p]. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 34:1-34:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ilango:LIPIcs.ITCS.2020.34,
  author =	{Ilango, Rahul},
  title =	{{Approaching MCSP from Above and Below: Hardness for a Conditional Variant and AC^0\lbrackp\rbrack}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{34:1--34:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.34},
  URN =		{urn:nbn:de:0030-drops-117191},
  doi =		{10.4230/LIPIcs.ITCS.2020.34},
  annote =	{Keywords: Minimum Circuit Size Problem, reductions, NP-completeness, circuit lower bounds}
}
Document
Equivalence of Systematic Linear Data Structures and Matrix Rigidity

Authors: Sivaramakrishnan Natarajan Ramamoorthy and Cyrus Rashtchian


Abstract
Recently, Dvir, Golovnev, and Weinstein have shown that sufficiently strong lower bounds for linear data structures would imply new bounds for rigid matrices. However, their result utilizes an algorithm that requires an NP oracle, and hence, the rigid matrices are not explicit. In this work, we derive an equivalence between rigidity and the systematic linear model of data structures. For the n-dimensional inner product problem with m queries, we prove that lower bounds on the query time imply rigidity lower bounds for the query set itself. In particular, an explicit lower bound of ω(n/r log m) for r redundant storage bits would yield better rigidity parameters than the best bounds due to Alon, Panigrahy, and Yekhanin. We also prove a converse result, showing that rigid matrices directly correspond to hard query sets for the systematic linear model. As an application, we prove that the set of vectors obtained from rank one binary matrices is rigid with parameters matching the known results for explicit sets. This implies that the vector-matrix-vector problem requires query time Ω(n^(3/2)/r) for redundancy r ≥ √n in the systematic linear model, improving a result of Chakraborty, Kamma, and Larsen. Finally, we prove a cell probe lower bound for the vector-matrix-vector problem in the high error regime, improving a result of Chattopadhyay, Koucký, Loff, and Mukhopadhyay.

Cite as

Sivaramakrishnan Natarajan Ramamoorthy and Cyrus Rashtchian. Equivalence of Systematic Linear Data Structures and Matrix Rigidity. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{natarajanramamoorthy_et_al:LIPIcs.ITCS.2020.35,
  author =	{Natarajan Ramamoorthy, Sivaramakrishnan and Rashtchian, Cyrus},
  title =	{{Equivalence of Systematic Linear Data Structures and Matrix Rigidity}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.35},
  URN =		{urn:nbn:de:0030-drops-117204},
  doi =		{10.4230/LIPIcs.ITCS.2020.35},
  annote =	{Keywords: matrix rigidity, systematic linear data structures, cell probe model, communication complexity}
}
Document
Computationally Data-Independent Memory Hard Functions

Authors: Mohammad Hassan Ameri, Jeremiah Blocki, and Samson Zhou


Abstract
Memory hard functions (MHFs) are an important cryptographic primitive, used to design egalitarian proofs of work and in the construction of moderately expensive key-derivation functions resistant to brute-force attacks. Broadly speaking, MHFs can be divided into two categories: data-dependent memory hard functions (dMHFs) and data-independent memory hard functions (iMHFs). iMHFs are resistant to certain side-channel attacks, as the memory access pattern induced by the honest evaluation algorithm is independent of the potentially sensitive input, e.g., a password. While dMHFs are potentially vulnerable to side-channel attacks (the induced memory access pattern might leak useful information to a brute-force attacker), they can achieve higher cumulative memory complexity (CMC) than an iMHF. In particular, any iMHF that can be evaluated in N steps on a sequential machine has CMC at most O((N^2 log log N)/log N). By contrast, the dMHF scrypt achieves maximal CMC Ω(N^2) - though the CMC of scrypt would be reduced to just O(N) after a side-channel attack. In this paper, we introduce the notion of computationally data-independent memory hard functions (ciMHFs). Intuitively, we require that the memory access pattern induced by the (randomized) ciMHF evaluation algorithm appears to be independent from the standpoint of a computationally bounded eavesdropping attacker - even if the attacker selects the initial input. We then ask whether it is possible to circumvent the known upper bound for iMHFs and build a ciMHF with CMC Ω(N^2). Surprisingly, we answer the question in the affirmative when the ciMHF evaluation algorithm is executed on a two-tiered memory architecture (RAM/Cache). We introduce the notion of a k-restricted dynamic graph to quantify the continuum between unrestricted dMHFs (k = n) and iMHFs (k = 1). For any ε > 0, we show how to construct a k-restricted dynamic graph with k = Ω(N^(1-ε)) that provably achieves maximum cumulative pebbling cost Ω(N^2). We can use k-restricted dynamic graphs to build a ciMHF provided that the cache is large enough to hold k hash outputs and the dynamic graph satisfies a certain property that we call "amenable to shuffling". In particular, we prove that the induced memory access pattern is indistinguishable to a polynomial time attacker who can monitor the locations of read/write requests to RAM, but not cache. We also show that when k = o(N^(1/log log N)), then any k-restricted graph with constant indegree has cumulative pebbling cost o(N^2). Our results almost completely characterize the spectrum of k-restricted dynamic graphs.
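Cumulative pebbling cost, the combinatorial quantity standing in for CMC throughout this line of work, is easy to state in code: over a legal pebbling of the DAG, sum the number of pebbles present in each round, where a pebble may be newly placed on v only if all of v's parents carried pebbles in the previous round. A small sketch with hypothetical names:

def cumulative_cost(parents, steps):
    # steps[i] is the set of pebbled vertices after round i; verify
    # legality and return the cumulative cost sum_i |steps[i]|.
    prev, total = set(), 0
    for cur in steps:
        for v in cur - prev:
            if not set(parents.get(v, ())) <= prev:
                raise ValueError(f"illegal pebble placement on {v}")
        total += len(cur)
        prev = cur
    return total

if __name__ == "__main__":
    parents = {1: [0], 2: [1]}                              # path DAG 0 -> 1 -> 2
    print(cumulative_cost(parents, [{0}, {0, 1}, {1, 2}]))  # 1 + 2 + 2 = 5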

Cite as

Mohammad Hassan Ameri, Jeremiah Blocki, and Samson Zhou. Computationally Data-Independent Memory Hard Functions. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 36:1-36:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ameri_et_al:LIPIcs.ITCS.2020.36,
  author =	{Ameri, Mohammad Hassan and Blocki, Jeremiah and Zhou, Samson},
  title =	{{Computationally Data-Independent Memory Hard Functions}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{36:1--36:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.36},
  URN =		{urn:nbn:de:0030-drops-117214},
  doi =		{10.4230/LIPIcs.ITCS.2020.36},
  annote =	{Keywords: Computationally Data-Independent Memory Hard Function, Cumulative Memory Complexity, Dynamic Pebbling Game}
}
Document
Learning and Testing Variable Partitions

Authors: Andrej Bogdanov and Baoxiang Wang


Abstract
Let F be a multivariate function from a product set Σ^n to an Abelian group G. A k-partition of F with cost δ is a partition of the set of variables V into k non-empty subsets (X_1, …, X_k) such that F(V) is δ-close to F_1(X_1) + ⋯ + F_k(X_k) for some F_1, …, F_k with respect to a given error metric. We study algorithms for agnostically learning k-partitions and testing k-partitionability over various groups and error metrics, given query access to F. In particular we show that 1) Given a function that has a k-partition of cost δ, a partition of cost O(k n^2)⋅(δ + ε) can be learned in time Õ(n^2 poly(1/ε)) for any ε > 0. In contrast, for k = 2 and n = 3 learning a partition of cost δ + ε is NP-hard. 2) When F is real-valued and the error metric is the 2-norm, a 2-partition of cost √(δ^2 + ε) can be learned in time Õ(n^5/ε^2). 3) When F is Z_q-valued and the error metric is Hamming weight, k-partitionability is testable with one-sided error and O(kn^3/ε) non-adaptive queries. We also show that even two-sided testers require Ω(n) queries when k = 2. This work was motivated by reinforcement learning control tasks in which the set of control variables can be partitioned. The partitioning reduces the task into multiple lower-dimensional ones that are relatively easier to learn. Our second algorithm empirically increases the scores attained over previous heuristic partitioning methods applied in this context.

Cite as

Andrej Bogdanov and Baoxiang Wang. Learning and Testing Variable Partitions. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 37:1-37:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bogdanov_et_al:LIPIcs.ITCS.2020.37,
  author =	{Bogdanov, Andrej and Wang, Baoxiang},
  title =	{{Learning and Testing Variable Partitions}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{37:1--37:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.37},
  URN =		{urn:nbn:de:0030-drops-117221},
  doi =		{10.4230/LIPIcs.ITCS.2020.37},
  annote =	{Keywords: partitioning, agnostic learning, property testing, sublinear-time algorithms, hypergraph cut, reinforcement learning}
}
Document
Linear Time Subgraph Counting, Graph Degeneracy, and the Chasm at Size Six

Authors: Suman K. Bera, Noujan Pashanasangi, and C. Seshadhri


Abstract
We consider the problem of counting all k-vertex subgraphs in an input graph, for any constant k. This problem (denoted SUB-CNT_k) has been studied extensively in both theory and practice. In a classic result, Chiba and Nishizeki (SICOMP 85) gave linear time algorithms for clique and 4-cycle counting for bounded degeneracy graphs. This is a rich class of sparse graphs that contains, for example, all minor-free families and preferential attachment graphs. The techniques from this result have inspired a number of recent practical algorithms for SUB-CNT_k. Towards a better understanding of the limits of these techniques, we ask: for what values of k can SUB-CNT_k be solved in linear time? We discover a chasm at k = 6. Specifically, we prove that for k < 6, SUB-CNT_k can be solved in linear time. Assuming a standard conjecture in fine-grained complexity, we prove that for all k ⩾ 6, SUB-CNT_k cannot be solved even in near-linear time.
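The k = 3 case is a useful reference point for the techniques in question: peel vertices in a minimum-degree (degeneracy) order, orient each edge forward, and inspect pairs of out-neighbors, giving O(m⋅κ) time on graphs of degeneracy κ. A compact sketch in the spirit of Chiba and Nishizeki:

from itertools import combinations

def degeneracy_order(adj):
    # Repeatedly peel a minimum-degree vertex; every vertex then has at
    # most kappa neighbors later in the order (simple O(n^2) peeling here;
    # bucket queues make this linear time).
    deg = {v: len(ns) for v, ns in adj.items()}
    alive, order = set(adj), []
    while alive:
        v = min(alive, key=deg.__getitem__)
        order.append(v)
        alive.remove(v)
        for w in adj[v]:
            if w in alive:
                deg[w] -= 1
    return order

def count_triangles(adj):
    # Orient edges forward in the degeneracy order and test each pair
    # of out-neighbors; each triangle is counted exactly once.
    pos = {v: i for i, v in enumerate(degeneracy_order(adj))}
    out = {v: [w for w in ns if pos[w] > pos[v]] for v, ns in adj.items()}
    return sum(1 for v in adj
               for a, b in combinations(out[v], 2) if b in adj[a])

if __name__ == "__main__":
    k4 = {v: [w for w in range(4) if w != v] for v in range(4)}
    print(count_triangles(k4))  # K4 contains 4 triangles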

Cite as

Suman K. Bera, Noujan Pashanasangi, and C. Seshadhri. Linear Time Subgraph Counting, Graph Degeneracy, and the Chasm at Size Six. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 38:1-38:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bera_et_al:LIPIcs.ITCS.2020.38,
  author =	{Bera, Suman K. and Pashanasangi, Noujan and Seshadhri, C.},
  title =	{{Linear Time Subgraph Counting, Graph Degeneracy, and the Chasm at Size Six}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{38:1--38:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.38},
  URN =		{urn:nbn:de:0030-drops-117239},
  doi =		{10.4230/LIPIcs.ITCS.2020.38},
  annote =	{Keywords: Subgraph counting, bounded degeneracy graphs, fine-grained complexity}
}
Document
Parameterization Above a Multiplicative Guarantee

Authors: Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi


Abstract
Parameterization above a guarantee is a successful paradigm in Parameterized Complexity. To the best of our knowledge, all fixed-parameter tractable problems in this paradigm share an additive form defined as follows. Given an instance (I,k) of some (parameterized) problem Π with a guarantee g(I), decide whether I admits a solution of size at least (at most) k+g(I). Here, g(I) is usually a lower bound (resp. upper bound) on the maximum (resp. minimum) size of a solution. Since its introduction in 1999 for Max SAT and Max Cut (with g(I) being half the number of clauses and half the number of edges, respectively, in the input), analysis of parameterization above a guarantee has become a very active and fruitful topic of research. We highlight a multiplicative form of parameterization above a guarantee: Given an instance (I,k) of some (parameterized) problem Π with a guarantee g(I), decide whether I admits a solution of size at least (resp. at most) k ⋅ g(I). In particular, we study the Long Cycle problem with a multiplicative parameterization above the girth g(I) of the input graph, and provide a parameterized algorithm for this problem. Apart from being of independent interest, this exemplifies how parameterization above a multiplicative guarantee can arise naturally. We also show that, for any fixed constant ε>0, multiplicative parameterization above g(I)^(1+ε) of Long Cycle yields para-NP-hardness, thus our parameterization is tight in this sense. We complement our main result with the design (or refutation of the existence) of algorithms for other problems parameterized multiplicatively above girth.

Cite as

Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, and Meirav Zehavi. Parameterization Above a Multiplicative Guarantee. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 39:1-39:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{fomin_et_al:LIPIcs.ITCS.2020.39,
  author =	{Fomin, Fedor V. and Golovach, Petr A. and Lokshtanov, Daniel and Panolan, Fahad and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Parameterization Above a Multiplicative Guarantee}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{39:1--39:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.39},
  URN =		{urn:nbn:de:0030-drops-117248},
  doi =		{10.4230/LIPIcs.ITCS.2020.39},
  annote =	{Keywords: Parameterized Complexity, Above-Guarantee Parameterization, Girth}
}
Document
Ad Hoc Multi-Input Functional Encryption

Authors: Shweta Agrawal, Michael Clear, Ophir Frieder, Sanjam Garg, Adam O'Neill, and Justin Thaler


Abstract
Consider sources that supply sensitive data to an aggregator. Standard encryption only hides the data from eavesdroppers, but using specialized encryption one can hope to hide the data (to the extent possible) from the aggregator itself. For flexibility and security, we envision schemes that allow sources to supply encrypted data, such that at any point a dynamically-chosen subset of sources can allow an agreed-upon joint function of their data to be computed by the aggregator. A primitive called multi-input functional encryption (MIFE), due to Goldwasser et al. (EUROCRYPT 2014), comes close, but has two main limitations: - it requires trust in a third party, who is able to decrypt all the data, and - it requires function arity to be fixed at setup time and to be equal to the number of parties. To drop these limitations, we introduce a new notion of ad hoc MIFE. In our setting, each source generates its own public key and issues individual, function-specific secret keys to an aggregator. For successful decryption, an aggregator must obtain a separate key from each source whose ciphertext is being computed upon. The aggregator could obtain multiple such secret keys from a user, corresponding to functions of varying arity. For this primitive, we obtain the following results: - We show that standard MIFE for general functions can be bootstrapped to ad hoc MIFE for free, i.e., without making any additional assumption. - We provide a direct construction of ad hoc MIFE for the inner product functionality based on the Learning with Errors (LWE) assumption. This yields the first construction of this natural primitive based on a standard assumption. At a technical level, our results are obtained by combining standard MIFE schemes and two-round secure multiparty computation (MPC) protocols in novel ways, highlighting an interesting interplay between MIFE and two-round MPC.

Cite as

Shweta Agrawal, Michael Clear, Ophir Frieder, Sanjam Garg, Adam O'Neill, and Justin Thaler. Ad Hoc Multi-Input Functional Encryption. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 40:1-40:41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{agrawal_et_al:LIPIcs.ITCS.2020.40,
  author =	{Agrawal, Shweta and Clear, Michael and Frieder, Ophir and Garg, Sanjam and O'Neill, Adam and Thaler, Justin},
  title =	{{Ad Hoc Multi-Input Functional Encryption}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{40:1--40:41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.40},
  URN =		{urn:nbn:de:0030-drops-117258},
  doi =		{10.4230/LIPIcs.ITCS.2020.40},
  annote =	{Keywords: Multi-Input Functional Encryption}
}
Document
Unexpected Power of Random Strings

Authors: Shuichi Hirahara


Abstract
There has been a line of work trying to characterize BPP (the class of languages that are solvable by efficient randomized algorithms) by efficient nonadaptive reductions to the set of Kolmogorov-random strings: Buhrman, Fortnow, Koucký, and Loff (CCC 2010 [Buhrman et al., 2010]) showed that every language in BPP is reducible to the set of random strings via a polynomial-time nonadaptive reduction (irrespective of the choice of a universal Turing machine used to define Kolmogorov-random strings). It was conjectured by Allender (CiE 2012 [Allender, 2012]) and others that their lower bound is tight when a reduction works for every universal Turing machine; i.e., "the only way to make use of random strings by a nonadaptive polynomial-time algorithm is to derandomize BPP." In this paper, we refute this conjecture under the plausible assumption that the exponential-time hierarchy does not collapse, by showing that the exponential-time hierarchy EXPH can be solved in exponential time by nonadaptively asking the oracle whether a string is Kolmogorov-random or not. In addition, we provide an exact characterization of S_2^{exp} in terms of exponential-time-computable nonadaptive reductions to arbitrary dense subsets of random strings.

Cite as

Shuichi Hirahara. Unexpected Power of Random Strings. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 41:1-41:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{hirahara:LIPIcs.ITCS.2020.41,
  author =	{Hirahara, Shuichi},
  title =	{{Unexpected Power of Random Strings}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{41:1--41:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.41},
  URN =		{urn:nbn:de:0030-drops-117262},
  doi =		{10.4230/LIPIcs.ITCS.2020.41},
  annote =	{Keywords: Kolmogorov-Randomness, Nonadaptive Reduction, BPP, Symmetric Alternation}
}
Document
Extended Abstract
Consensus vs Broadcast, with and Without Noise (Extended Abstract)

Authors: Andrea Clementi, Luciano Gualà, Emanuele Natale, Francesco Pasquale, Giacomo Scornavacca, and Luca Trevisan


Abstract
Consensus and Broadcast are two fundamental problems in distributed computing, whose solutions have several applications. Intuitively, Consensus should be no harder than Broadcast, and this can be rigorously established in several models. Can Consensus be easier than Broadcast? In models that allow noiseless communication, we prove a reduction of (a suitable variant of) Broadcast to binary Consensus that preserves the communication model and all complexity parameters such as randomness, number of rounds, communication per round, etc., while incurring a loss in the success probability of the protocol. Using this reduction, we get, among other applications, the first logarithmic lower bound on the number of rounds needed to achieve Consensus in the uniform GOSSIP model on the complete graph. The lower bound is tight and, in this model, Consensus and Broadcast are equivalent. We then turn to distributed models with noisy communication channels that have been studied in the context of some bio-inspired systems. In such models, only one noisy bit is exchanged when a communication channel is established between two nodes, and so one cannot easily simulate a noiseless protocol by using error-correcting codes. An Ω(ε^{-2} n) lower bound on the convergence time of binary Broadcast in one such model (noisy uniform PULL) was proved by Boczkowski et al. [PLOS Comp. Bio. 2018], where ε is a parameter that measures the amount of noise. We prove an O(ε^{-2} log n) upper bound on the convergence time of binary Consensus in this model, thus establishing an exponential complexity gap between Consensus and Broadcast. We also prove that this upper bound is tight, which implies, for binary Consensus, a further strong complexity gap between noisy uniform PULL and noisy uniform PUSH. Finally, we show a Θ(ε^{-2} n log n) bound for Broadcast in the noisy uniform PULL model.
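
As a toy illustration of the noisy uniform PULL model (our simulation sketch, not the paper's protocol; the constant c and the one-shot majority rule are our own simplifying assumptions), each node below draws O(ε^{-2} log n) noisy pulls of other nodes' opinions and adopts the majority value.

import math, random

def noisy_pull_majority(opinions, eps, c=20, seed=0):
    # One-shot majority over T = O(eps^-2 log n) noisy PULL samples per node.
    # Each pulled bit is flipped with probability 1/2 - eps (eps = 0 is pure noise).
    rng = random.Random(seed)
    n = len(opinions)
    T = int(c * math.log(n) / eps ** 2)
    new = []
    for _ in range(n):
        ones = 0
        for _ in range(T):
            bit = opinions[rng.randrange(n)]   # pull from a uniformly random node
            if rng.random() < 0.5 - eps:       # channel noise flips the bit
                bit ^= 1
            ones += bit
        new.append(1 if 2 * ones > T else 0)
    return new

start = [1] * 60 + [0] * 40                    # initial 60/40 split on 100 nodes
print(sum(noisy_pull_majority(start, eps=0.2)))  # 100 w.h.p.: all nodes adopt 1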

Cite as

Andrea Clementi, Luciano Gualà, Emanuele Natale, Francesco Pasquale, Giacomo Scornavacca, and Luca Trevisan. Consensus vs Broadcast, with and Without Noise (Extended Abstract). In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 42:1-42:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{clementi_et_al:LIPIcs.ITCS.2020.42,
  author =	{Clementi, Andrea and Gual\`{a}, Luciano and Natale, Emanuele and Pasquale, Francesco and Scornavacca, Giacomo and Trevisan, Luca},
  title =	{{Consensus vs Broadcast, with and Without Noise}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{42:1--42:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.42},
  URN =		{urn:nbn:de:0030-drops-117277},
  doi =		{10.4230/LIPIcs.ITCS.2020.42},
  annote =	{Keywords: Distributed Computing, Consensus, Broadcast, Gossip Models, Noisy Communication Channels}
}
Document
Testing Linear Inequalities of Subgraph Statistics

Authors: Lior Gishboliner, Asaf Shapira, and Henrique Stagni


Abstract
Property testers are fast randomized algorithms whose task is to distinguish between inputs satisfying some predetermined property 𝒫 and those that are far from satisfying it. Since these algorithms operate by inspecting a small randomly selected portion of the input, the most natural property one would like to be able to test is whether the input does not contain certain forbidden small substructures. In the setting of graphs, such a result was obtained by Alon et al., who proved that for any finite family of graphs ℱ, the property of being induced ℱ-free (i.e. not containing an induced copy of any F ∈ ℱ) is testable. It is natural to ask if one can go one step further and prove that more elaborate properties involving induced subgraphs are also testable. One such generalization of the result of Alon et al. was formulated by Goldreich and Shinkar who conjectured that for any finite family of graphs ℱ, and any linear inequality involving the densities of the graphs F ∈ ℱ in the input graph, the property of satisfying this inequality can be tested in a certain restricted model of graph property testing. Our main result in this paper disproves this conjecture in the following strong form: some properties of this type are not testable even in the classical (i.e. unrestricted) model of graph property testing. The proof deviates significantly from prior non-testability results in this area. The main idea is to use a linear inequality relating induced subgraph densities in order to encode the property of being a pseudo-random graph.
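
The sampling primitive such testers are built on is easy to state in code. The following Python sketch (our illustration, not the paper's construction; the pattern, graph, and sample size are arbitrary choices) estimates the density of induced copies of a fixed k-vertex pattern by inspecting random k-subsets.

import itertools, random

def induced_density(adj, pattern_edges, k, samples=2000, seed=0):
    # Estimate the fraction of k-vertex subsets that induce a copy of the
    # pattern, by sampling random subsets and checking all labellings.
    rng = random.Random(seed)
    nodes = list(adj)
    hits = 0
    for _ in range(samples):
        S = rng.sample(nodes, k)
        edges = {frozenset(e) for e in itertools.combinations(S, 2)
                 if e[1] in adj[e[0]]}
        for perm in itertools.permutations(S):
            if {frozenset((perm[i], perm[j])) for i, j in pattern_edges} == edges:
                hits += 1
                break
    return hits / samples

# density of induced 3-vertex paths (pattern edges {0,1},{1,2}) in the 5-cycle: 5/10 = 0.5
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(induced_density(cycle, [(0, 1), (1, 2)], 3))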

Cite as

Lior Gishboliner, Asaf Shapira, and Henrique Stagni. Testing Linear Inequalities of Subgraph Statistics. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 43:1-43:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{gishboliner_et_al:LIPIcs.ITCS.2020.43,
  author =	{Gishboliner, Lior and Shapira, Asaf and Stagni, Henrique},
  title =	{{Testing Linear Inequalities of Subgraph Statistics}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{43:1--43:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.43},
  URN =		{urn:nbn:de:0030-drops-117287},
  doi =		{10.4230/LIPIcs.ITCS.2020.43},
  annote =	{Keywords: graph property testing, subgraph statistics}
}
Document
Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations

Authors: Guy Blanc, Jane Lange, and Li-Yang Tan


Abstract
Consider the following heuristic for building a decision tree for a function f : {0,1}^n → {±1}. Place the most influential variable x_i of f at the root, and recurse on the subfunctions f_{x_i=0} and f_{x_i=1} on the left and right subtrees respectively; terminate once the tree is an ε-approximation of f. We analyze the quality of this heuristic, obtaining near-matching upper and lower bounds: - Upper bound: For every f with decision tree size s and every ε ∈ (0,1/2), this heuristic builds a decision tree of size at most s^O(log(s/ε)log(1/ε)). - Lower bound: For every ε ∈ (0,1/2) and s ≤ 2^Õ(√n), there is an f with decision tree size s such that this heuristic builds a decision tree of size s^Ω̃(log s). We also obtain upper and lower bounds for monotone functions: s^O(√{log s}/ε) and s^Ω(∜{log s}), respectively. The lower bound disproves conjectures of Fiat and Pechyony (2004) and Lee (2009). Our upper bounds yield new algorithms for properly learning decision trees under the uniform distribution. We show that these algorithms - which are motivated by widely employed and empirically successful top-down decision tree learning heuristics such as ID3, C4.5, and CART - achieve provable guarantees that compare favorably with those of the current fastest algorithm (Ehrenfeucht and Haussler, 1989), and even have certain qualitative advantages. Our lower bounds shed new light on the limitations of these heuristics. Finally, we revisit the classic work of Ehrenfeucht and Haussler. We extend it to give the first uniform-distribution proper learning algorithm that achieves polynomial sample and memory complexity, while matching its state-of-the-art quasipolynomial runtime.
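
The heuristic itself is short enough to state directly in code. The following Python sketch (our toy rendering of the heuristic as stated, feasible only for small n since it computes influences and biases exactly by enumeration) splits on the most influential variable and recurses until the restricted function is ε-close to a constant.

import itertools

def influence(f, n, i):
    # Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)], computed by enumeration.
    pts = list(itertools.product([0, 1], repeat=n))
    return sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for x in pts) / len(pts)

def build_tree(f, n, eps, fixed=None):
    # Top-down heuristic: put the most influential free variable at the root,
    # recurse on both restrictions, and stop once a constant is eps-close.
    fixed = fixed or {}
    g = lambda x: f(tuple(fixed.get(i, x[i]) for i in range(n)))
    pts = list(itertools.product([0, 1], repeat=n))
    bias = sum(g(x) == 1 for x in pts) / len(pts)
    if min(bias, 1 - bias) <= eps:
        return 1 if bias >= 0.5 else -1          # constant leaf
    free = [i for i in range(n) if i not in fixed]
    i = max(free, key=lambda j: influence(g, n, j))
    return (i, build_tree(f, n, eps, {**fixed, i: 0}),
               build_tree(f, n, eps, {**fixed, i: 1}))

maj3 = lambda x: 1 if sum(x) >= 2 else -1
print(build_tree(maj3, 3, eps=0.0))  # nested (var, subtree_0, subtree_1) tuples; leaves are +-1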

Cite as

Guy Blanc, Jane Lange, and Li-Yang Tan. Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 44:1-44:44, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{blanc_et_al:LIPIcs.ITCS.2020.44,
  author =	{Blanc, Guy and Lange, Jane and Tan, Li-Yang},
  title =	{{Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{44:1--44:44},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.44},
  URN =		{urn:nbn:de:0030-drops-117295},
  doi =		{10.4230/LIPIcs.ITCS.2020.44},
  annote =	{Keywords: Decision trees, Influence of variables, Analysis of boolean functions, Learning theory, Top-down decision tree heuristics}
}
Document
Algorithms and Adaptivity Gaps for Stochastic k-TSP

Authors: Haotian Jiang, Jian Li, Daogao Liu, and Sahil Singla


Abstract
Given a metric (V,d) and a root vertex in V, the classic k-TSP problem is to find a tour originating at the root of minimum length that visits at least k nodes in V. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of k-TSP. In Stoch-Reward k-TSP, originally defined by Ene-Nagarajan-Saket [Ene et al., 2018], each vertex v in the given metric (V,d) contains a stochastic reward R_v. The goal is to adaptively find a tour of minimum expected length that collects at least reward k; here "adaptively" means our next decision may depend on previous outcomes. Ene et al. give an O(log k)-approximation adaptive algorithm for this problem, and left open whether there is an O(1)-approximation algorithm. We resolve their open question, and even give an O(1)-approximation non-adaptive algorithm for Stoch-Reward k-TSP. We also introduce and obtain similar results for the Stoch-Cost k-TSP problem. In this problem, each vertex v has a stochastic cost C_v, and the goal is to visit and select at least k vertices to minimize the expected sum of tour length and cost of selected vertices. Besides being a natural stochastic generalization of k-TSP, this problem is also interesting because it generalizes the Price of Information framework [Singla, 2018] from deterministic probing costs to metric probing costs. Our techniques are based on two crucial ideas: "repetitions" and "critical scaling". In general, replacing a random variable with its expectation leads to very poor results. We show that for our problems, if we truncate the random variables at an ideal threshold, then their expected values form a good surrogate. Here, we rely on running several repetitions of our algorithm with the same threshold, and then argue concentration using Freedman's and Jogdeo-Samuels' inequalities. Unfortunately, this ideal threshold depends on how far we are from achieving our target k, which a non-adaptive algorithm does not know. To overcome this barrier, we truncate the random variables at various different scales and identify a "critical" scale.
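
The role of truncation can be seen in a few lines of Python (our illustration; the reward distribution and threshold are arbitrary choices): for a heavy-tailed reward, the raw expectation is dominated by rare spikes, while the truncated expectation E[min(R, τ)] tracks the typical outcome.

import random

def truncated_mean(sample_reward, tau, trials=100000, seed=0):
    # Monte-Carlo estimate of E[min(R, tau)], the surrogate used in place of
    # the raw random reward.
    rng = random.Random(seed)
    return sum(min(sample_reward(rng), tau) for _ in range(trials)) / trials

# heavy-tailed reward: 100 with probability 0.01, else 1; raw mean is 1.99
reward = lambda rng: 100 if rng.random() < 0.01 else 1
print(truncated_mean(reward, tau=float("inf")))  # ~1.99, dominated by rare spikes
print(truncated_mean(reward, tau=2))             # ~1.01, the typical behavior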

Cite as

Haotian Jiang, Jian Li, Daogao Liu, and Sahil Singla. Algorithms and Adaptivity Gaps for Stochastic k-TSP. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 45:1-45:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{jiang_et_al:LIPIcs.ITCS.2020.45,
  author =	{Jiang, Haotian and Li, Jian and Liu, Daogao and Singla, Sahil},
  title =	{{Algorithms and Adaptivity Gaps for Stochastic k-TSP}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{45:1--45:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.45},
  URN =		{urn:nbn:de:0030-drops-117308},
  doi =		{10.4230/LIPIcs.ITCS.2020.45},
  annote =	{Keywords: approximation algorithms, stochastic optimization, travelling salesman problem}
}
Document
Strategic Payments in Financial Networks

Authors: Nils Bertschinger, Martin Hoefer, and Daniel Schmand


Abstract
In their seminal work on systemic risk in financial markets, Eisenberg and Noe [Larry Eisenberg and Thomas Noe, 2001] proposed and studied a model with n firms embedded into a network of debt relations. We analyze this model from a game-theoretic point of view. Every firm is a rational agent, embedded in a directed graph, that has an incentive to allocate payments in order to clear as much of its debt as possible. Each edge is weighted and describes a liability between the firms. We consider several variants of the game that differ in the permissible payment strategies. We study the existence and computational complexity of pure Nash and strong equilibria, and we provide bounds on the (strong) prices of anarchy and stability for a natural notion of social welfare. Our results highlight the power of financial regulation - if payments of insolvent firms can be centrally assigned, a socially optimal strong equilibrium can be found in polynomial time. In contrast, worst-case strong equilibria can be a factor of Ω(n) away from optimal, and, in general, computing a best response is an NP-hard problem. For less permissible sets of strategies, we show that pure equilibria might not exist, and deciding their existence as well as computing them if they exist constitute NP-hard problems.
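
For background, the non-strategic baseline of the Eisenberg-Noe model is the clearing payment vector, computable by fixed-point iteration. The following Python sketch (our illustration of that classical computation, not of the paper's game-theoretic results) assumes proportional payments, which is precisely the assumption the strategic variants relax.

def clearing_vector(liabilities, external, iters=1000, tol=1e-10):
    # Fixed-point iteration for Eisenberg-Noe clearing payments: each firm pays
    # min(total debt, external assets + incoming payments), split proportionally
    # among its creditors.
    n = len(external)
    pbar = [float(sum(liabilities[i])) for i in range(n)]  # total debt of firm i
    p = pbar[:]                                            # start from full payment
    for _ in range(iters):
        incoming = [sum(p[j] * liabilities[j][i] / pbar[j]
                        for j in range(n) if pbar[j] > 0) for i in range(n)]
        new = [min(pbar[i], external[i] + incoming[i]) for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, p)) < tol:
            break
        p = new
    return p

# firm 0 owes 2 to firm 1, firm 1 owes 1 to firm 0, no external assets
L = [[0, 2], [1, 0]]
print(clearing_vector(L, [0, 0]))  # [1.0, 1.0]: firm 0 defaults, clearing half its debt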

Cite as

Nils Bertschinger, Martin Hoefer, and Daniel Schmand. Strategic Payments in Financial Networks. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 46:1-46:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{bertschinger_et_al:LIPIcs.ITCS.2020.46,
  author =	{Bertschinger, Nils and Hoefer, Martin and Schmand, Daniel},
  title =	{{Strategic Payments in Financial Networks}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{46:1--46:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.46},
  URN =		{urn:nbn:de:0030-drops-117316},
  doi =		{10.4230/LIPIcs.ITCS.2020.46},
  annote =	{Keywords: Nash Equilibrium, Financial Network, Systemic Risk, Price of Anarchy, Equilibrium Computation}
}
Document
Fault Tolerant Subgraphs with Applications in Kernelization

Authors: William Lochet, Daniel Lokshtanov, Pranabendu Misra, Saket Saurabh, Roohani Sharma, and Meirav Zehavi


Abstract
In the past decade, the design of fault tolerant data structures for networks has become a central topic of research. Particular attention has been given to the construction of a subgraph H of a given digraph D with as few arcs/vertices as possible such that, after the failure of any set F of at most k ≥ 1 arcs, testing whether D-F has a certain property P is equivalent to testing whether H-F has that property. Here, reachability (or, more generally, distance preservation) is the most basic requirement to maintain to ensure that the network functions properly. Given a vertex s ∈ V(D), Baswana et al. [STOC'16] presented a construction of H with O(2^k n) arcs in time O(2^k nm), where n = |V(D)| and m = |E(D)|, such that for any vertex v ∈ V(D): if there exists a path from s to v in D-F, then there also exists a path from s to v in H-F. Additionally, they gave a tight matching lower bound. While improving the dependency on k is a natural question for special classes of digraphs, an arguably more basic research direction concerns the dependency on n (for reachability between a pair of vertices s,t ∈ V(D)) - which are the largest classes of digraphs where the dependency on n can be made sublinear, logarithmic or even constant? Already for the simple classes of directed paths and tournaments, Ω(n) arcs are mandatory. Nevertheless, we prove that "almost acyclicity" suffices to eliminate the dependency on n entirely for a broad class of dense digraphs called bounded independence digraphs. Also, the dependence on k is only a polynomial factor for this class of digraphs. In fact, our sparsification procedure extends to preserve parity-based reachability. Additionally, it finds notable applications in Kernelization: we prove that the classic Directed Feedback Arc Set (DFAS) problem as well as Directed Edge Odd Cycle Transversal (DEOCT) (which, in sharp contrast to DFAS, is W[1]-hard on general digraphs) admit polynomial kernels on bounded independence digraphs. In fact, for any p ∈ N, we can design a polynomial kernel for the problem of hitting all cycles of length ℓ with ℓ ≡ 1 (mod p). As a complementary result, we prove that DEOCT is NP-hard on tournaments by establishing a combinatorial identity between the minimum size of a feedback arc set and the minimum size of an edge odd cycle transversal. In passing, we also improve upon the running time of the sub-exponential FPT algorithm for DFAS in digraphs of bounded independence number given by Misra et al. [FSTTCS 2018], and give the first sub-exponential FPT algorithm for DEOCT in digraphs of bounded independence number.

Cite as

William Lochet, Daniel Lokshtanov, Pranabendu Misra, Saket Saurabh, Roohani Sharma, and Meirav Zehavi. Fault Tolerant Subgraphs with Applications in Kernelization. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 47:1-47:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{lochet_et_al:LIPIcs.ITCS.2020.47,
  author =	{Lochet, William and Lokshtanov, Daniel and Misra, Pranabendu and Saurabh, Saket and Sharma, Roohani and Zehavi, Meirav},
  title =	{{Fault Tolerant Subgraphs with Applications in Kernelization}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{47:1--47:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.47},
  URN =		{urn:nbn:de:0030-drops-117326},
  doi =		{10.4230/LIPIcs.ITCS.2020.47},
  annote =	{Keywords: sparsification, kernelization, fault tolerant subgraphs, directed feedback arc set, directed edge odd cycle transversal, bounded independence number digraphs}
}
Document
The Computational Cost of Asynchronous Neural Communication

Authors: Yael Hitron, Merav Parter, and Gur Perri


Abstract
Biological neural computation is inherently asynchronous due to large variations in neuronal spike timing and transmission delays. So far, most theoretical work on neural networks assumes the synchronous setting where neurons fire simultaneously in discrete rounds. In this work we aim at understanding the barriers of asynchronous neural computation from an algorithmic perspective. We consider an extension of the widely studied model of synchronized spiking neurons [Maass, Neural Networks 97] to the asynchronous setting by taking into account edge and node delays. - Edge Delays: We define an asynchronous model for spiking neurons in which the latency values (i.e., transmission delays) of non self-loop edges vary adversarially over time. This extends the recent work of [Hitron and Parter, ESA'19] in which the latency values are restricted to be fixed over time. Our first contribution is an impossibility result that implies that the assumption that self-loop edges have no delays (as assumed in Hitron and Parter) is indeed necessary. Interestingly, in real biological networks self-loop edges (a.k.a. autapses) are indeed free of delays, and neuroscientists have noted that this is crucial for network synchronization. To capture the computational challenges in this setting, we first consider the implementation of a single NOT gate. This simple function already captures the fundamental difficulties in the asynchronous setting. Our key technical results are space and time upper and lower bounds for the NOT function; our time bounds are tight. In the spirit of the distributed synchronizers [Awerbuch and Peleg, FOCS'90] and following [Hitron and Parter, ESA'19], we then provide a general synchronizer machinery. Our construction is very modular and is based on an efficient circuit implementation of threshold gates. The complexity of our scheme is measured by the overhead in the number of neurons and the computation time; both are shown to be polynomial in the largest latency value and in the largest incoming degree Δ of the original network. - Node Delays: We introduce the study of asynchronous communication due to variations in the response rates of the neurons in the network. In real brain networks, the round duration varies between different neurons in the network. Our key result is a simulation methodology that allows one to transform the above-mentioned synchronized solution under edge delays into a synchronized solution under node delays while incurring a small overhead w.r.t. space and time.

Cite as

Yael Hitron, Merav Parter, and Gur Perri. The Computational Cost of Asynchronous Neural Communication. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 48:1-48:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{hitron_et_al:LIPIcs.ITCS.2020.48,
  author =	{Hitron, Yael and Parter, Merav and Perri, Gur},
  title =	{{The Computational Cost of Asynchronous Neural Communication}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{48:1--48:47},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.48},
  URN =		{urn:nbn:de:0030-drops-117330},
  doi =		{10.4230/LIPIcs.ITCS.2020.48},
  annote =	{Keywords: asynchronous communication, asynchronous computation, spiking neurons, synchronizers}
}
Document
Certified Algorithms: Worst-Case Analysis and Beyond

Authors: Konstantin Makarychev and Yury Makarychev


Abstract
In this paper, we introduce the notion of a certified algorithm. Certified algorithms provide worst-case and beyond-worst-case performance guarantees. First, a γ-certified algorithm is also a γ-approximation algorithm - it finds a γ-approximation no matter what the input is. Second, it exactly solves γ-perturbation-resilient instances (γ-perturbation-resilient instances model real-life instances). Additionally, certified algorithms have a number of other desirable properties: they solve both maximization and minimization versions of a problem (e.g., Max Cut and Min Uncut), solve weakly perturbation-resilient instances, and solve optimization problems with hard constraints. In the paper, we define certified algorithms, describe their properties, present a framework for designing certified algorithms, and provide examples of certified algorithms for Max Cut/Min Uncut, Minimum Multiway Cut, k-medians, and k-means. We also present some negative results.

Cite as

Konstantin Makarychev and Yury Makarychev. Certified Algorithms: Worst-Case Analysis and Beyond. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{makarychev_et_al:LIPIcs.ITCS.2020.49,
  author =	{Makarychev, Konstantin and Makarychev, Yury},
  title =	{{Certified Algorithms: Worst-Case Analysis and Beyond}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.49},
  URN =		{urn:nbn:de:0030-drops-117347},
  doi =		{10.4230/LIPIcs.ITCS.2020.49},
  annote =	{Keywords: certified algorithm, perturbation resilience, Bilu-Linial stability, beyond-worst-case analysis, approximation algorithm, integrality}
}
Document
Low Diameter Graph Decompositions by Approximate Distance Computation

Authors: Ruben Becker, Yuval Emek, and Christoph Lenzen


Abstract
In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decomposition with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computations, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this imposes a fundamental challenge since the existing constructions of low diameter graph decomposition with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities which replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and for simulating CONGEST algorithms on the tree into which the graph is embedded.
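
For context, here is a Python sketch (our illustration, with exact unit-weight shortest paths; the paper's point is precisely to make do with approximate ones) of the exponential-shift ball-growing decomposition of Miller et al. (SPAA 2013) that the construction builds on.

import heapq, random

def mpx_decomposition(adj, beta, seed=0):
    # Each vertex v draws an exponential shift delta_v ~ Exp(beta) and joins the
    # cluster of the vertex u minimizing dist(u, v) - delta_u. Clusters have
    # diameter O(log n / beta) w.h.p., and a unit edge is cut with probability O(beta).
    rng = random.Random(seed)
    shift = {v: rng.expovariate(beta) for v in adj}
    m = max(shift.values())  # offset so all start distances are nonnegative
    dist, owner = {}, {}
    # multi-source Dijkstra: source v starts at distance m - delta_v (unit edge weights)
    pq = [(m - shift[v], v, v) for v in adj]
    heapq.heapify(pq)
    while pq:
        d, v, center = heapq.heappop(pq)
        if v in dist:
            continue
        dist[v], owner[v] = d, center
        for u in adj[v]:
            if u not in dist:
                heapq.heappush(pq, (d + 1, u, center))
    return owner

path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 20} for i in range(20)}
print(mpx_decomposition(path, beta=0.5))  # maps each vertex to its cluster center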

Cite as

Ruben Becker, Yuval Emek, and Christoph Lenzen. Low Diameter Graph Decompositions by Approximate Distance Computation. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 50:1-50:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{becker_et_al:LIPIcs.ITCS.2020.50,
  author =	{Becker, Ruben and Emek, Yuval and Lenzen, Christoph},
  title =	{{Low Diameter Graph Decompositions by Approximate Distance Computation}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{50:1--50:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.50},
  URN =		{urn:nbn:de:0030-drops-117355},
  doi =		{10.4230/LIPIcs.ITCS.2020.50},
  annote =	{Keywords: graph decompositions, metric tree embeddings, distributed graph algorithms, parallel graph algorithms, (semi-)streaming graph algorithms}
}
Document
Generalized List Decoding

Authors: Yihan Zhang, Amitalok J. Budkuley, and Sidharth Jaggi


Abstract
This paper concerns itself with the question of list decoding for general adversarial channels, e.g., bit-flip (XOR) channels, erasure channels, AND (Z-) channels, OR channels, real adder channels, noisy typewriter channels, etc. We precisely characterize when exponential-sized (or positive rate) (L-1)-list decodable codes (where the list size L is a universal constant) exist for such channels. Our criterion essentially asserts that: For any given general adversarial channel, it is possible to construct positive rate (L-1)-list decodable codes if and only if the set of completely positive tensors of order-L with admissible marginals is not entirely contained in the order-L confusability set associated to the channel. The sufficiency is shown via random code construction (combined with expurgation or time-sharing). The necessity is shown by 1) extracting approximately equicoupled subcodes (a generalization of equidistant codes) from any code using the hypergraph Ramsey theorem, and 2) significantly extending the classic Plotkin bound in coding theory to list decoding for general channels using duality between the completely positive tensor cone and the copositive tensor cone. In the proof, we also obtain a new fact regarding asymmetry of joint distributions, which may be of independent interest. Other results include 1) List decoding capacity with asymptotically large L for general adversarial channels; 2) A tight list size bound for most constant composition codes (a generalization of constant weight codes); 3) Rederivation and demystification of Blinovsky's [Blinovsky, 1986] characterization of the list decoding Plotkin points (threshold at which large codes are impossible) for bit-flip channels; 4) Evaluation of general bounds [Wang et al., 2019] for unique decoding in the error correction code setting.

Cite as

Yihan Zhang, Amitalok J. Budkuley, and Sidharth Jaggi. Generalized List Decoding. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 51:1-51:83, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{zhang_et_al:LIPIcs.ITCS.2020.51,
  author =	{Zhang, Yihan and Budkuley, Amitalok J. and Jaggi, Sidharth},
  title =	{{Generalized List Decoding}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{51:1--51:83},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.51},
  URN =		{urn:nbn:de:0030-drops-117368},
  doi =		{10.4230/LIPIcs.ITCS.2020.51},
  annote =	{Keywords: Generalized Plotkin bound, general adversarial channels, equicoupled codes, random coding, completely positive tensors, copositive tensors, hypergraph Ramsey theory}
}
Document
Online Computation with Untrusted Advice

Authors: Spyros Angelopoulos, Christoph Dürr, Shendan Jin, Shahin Kamali, and Marc Renault


Abstract
The advice model of online computation captures the setting in which the online algorithm is given some partial information concerning the request sequence. This paradigm allows one to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, unlike real life in which advice is a recommendation that we can choose to follow or to ignore based on trustworthiness, in the current advice model, the online algorithm treats it as infallible. This means that if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are robust and perform well even when the advice is generated in a malicious, adversarial manner. To this end, we focus on well-studied online problems such as ski rental, online bidding, bin packing, and list update. For ski rental and online bidding, we show how to obtain algorithms that are Pareto-optimal with respect to the competitive ratios achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018] in which Pareto-optimality is not necessarily guaranteed. For bin packing and list update, we give online algorithms with worst-case tradeoffs in their competitiveness, depending on whether the advice is trusted or not; this is motivated by the work of Lykouris and Vassilvitskii [ICML 2018] on the paging problem, in which the competitiveness depends on the reliability of the advice. Furthermore, we demonstrate how to prove lower bounds, within this model, on the tradeoff between the number of advice bits and the competitiveness of any online algorithm. Last, we study the effect of randomization: here we show that for ski rental there is a randomized algorithm that Pareto-dominates any deterministic algorithm with advice of any size. We also show that a single random bit is not always inferior to a single advice bit, as it happens in the standard model.
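
To fix ideas, here is a Python sketch of advice-dependent ski rental in the style of Purohit et al. [NeurIPS 2018] (our illustration of that earlier scheme, with an arbitrary trust parameter λ; the paper's contribution is a Pareto-optimal improvement over such tradeoffs).

import math

def buy_day(B, predicted_days, lam):
    # Trust parameter lam in (0, 1]: smaller lam means trusting the advice more.
    # If the advice predicts a long season, buy early; otherwise buy late.
    if predicted_days >= B:
        return math.ceil(lam * B)   # ~(1 + lam)-competitive when the advice is right
    return math.ceil(B / lam)       # ~(1 + 1/lam)-competitive even when it is wrong

def cost(day, actual_days, B):
    # Rent for day - 1 days, then buy, unless the season ends first.
    return actual_days if actual_days < day else (day - 1) + B

B = 10
for lam in (0.5, 1.0):
    d = buy_day(B, predicted_days=100, lam=lam)
    print(lam, cost(d, actual_days=100, B=B))  # 0.5 -> 14, 1.0 -> 19 (OPT buys at once: 10)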

Cite as

Spyros Angelopoulos, Christoph Dürr, Shendan Jin, Shahin Kamali, and Marc Renault. Online Computation with Untrusted Advice. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 52:1-52:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{angelopoulos_et_al:LIPIcs.ITCS.2020.52,
  author =	{Angelopoulos, Spyros and D\"{u}rr, Christoph and Jin, Shendan and Kamali, Shahin and Renault, Marc},
  title =	{{Online Computation with Untrusted Advice}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.52},
  URN =		{urn:nbn:de:0030-drops-117372},
  doi =		{10.4230/LIPIcs.ITCS.2020.52},
  annote =	{Keywords: Online computation, competitive analysis, advice complexity, robust algorithms, untrusted advice}
}
Document
Monochromatic Triangles, Intermediate Matrix Products, and Convolutions

Authors: Andrea Lincoln, Adam Polak, and Virginia Vassilevska Williams


Abstract
The most studied linear algebraic operation, matrix multiplication, has surprisingly fast O(n^ω) time algorithms for ω < 2.373. On the other hand, the (min,+) matrix product which is at the heart of many fundamental graph problems such as All-Pairs Shortest Paths, has received only minor n^o(1) improvements over its brute-force cubic running time and is widely conjectured to require n^{3-o(1)} time. There is a plethora of matrix products and graph problems whose complexity seems to lie in the middle of these two problems. For instance, the Min-Max matrix product, the Minimum Witness matrix product, All-Pairs Shortest Paths in directed unweighted graphs and determining whether an edge-colored graph contains a monochromatic triangle, can all be solved in Õ(n^{(3+ω)/2}) time. While slight improvements are sometimes possible using rectangular matrix multiplication, if ω=2, the best runtimes for these "intermediate" problems are all Õ(n^2.5). A similar phenomenon occurs for convolution problems. Here, using the FFT, the usual (+,×)-convolution of two n-length sequences can be solved in O(n log n) time, while the (min,+) convolution is conjectured to require n^{2-o(1)} time, the brute force running time for convolution problems. There are analogous intermediate problems that can be solved in O(n^1.5) time, but seemingly not much faster: Min-Max convolution, Minimum Witness convolution, etc. Can one improve upon the running times for these intermediate problems, in either the matrix product or the convolution world? Or, alternatively, can one relate these problems to each other and to other key problems in a meaningful way? This paper makes progress on these questions by providing a network of fine-grained reductions. We show for instance that APSP in directed unweighted graphs and Minimum Witness product can be reduced to both the Min-Max product and a variant of the monochromatic triangle problem, so that a significant improvement over n^{(3+ω)/2} time for any of the latter problems would result in a similar improvement for both of the former problems. We also show that a natural convolution variant of monochromatic triangle is fine-grained equivalent to the famous 3SUM problem. As this variant is solvable in O(n^1.5) time and 3SUM is in O(n^2) time (and is conjectured to require n^{2-o(1)} time), our result gives the first fine-grained equivalence between natural problems of different running times. We also relate 3SUM to monochromatic triangle, and a coin change problem to monochromatic convolution, and thus to 3SUM.
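
For reference, the Min-Max product itself is simple to state; the following Python sketch (our illustration) is the cubic brute-force baseline that the Õ(n^{(3+ω)/2})-time algorithms improve upon.

def min_max_product(A, B):
    # C[i][j] = min over k of max(A[i][k], B[k][j]); brute force in O(n^3) time.
    n = len(A)
    return [[min(max(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 5], [2, 4]]
B = [[3, 1], [6, 2]]
print(min_max_product(A, B))  # [[3, 1], [3, 2]]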

Cite as

Andrea Lincoln, Adam Polak, and Virginia Vassilevska Williams. Monochromatic Triangles, Intermediate Matrix Products, and Convolutions. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 53:1-53:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{lincoln_et_al:LIPIcs.ITCS.2020.53,
  author =	{Lincoln, Andrea and Polak, Adam and Vassilevska Williams, Virginia},
  title =	{{Monochromatic Triangles, Intermediate Matrix Products, and Convolutions}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{53:1--53:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.53},
  URN =		{urn:nbn:de:0030-drops-117382},
  doi =		{10.4230/LIPIcs.ITCS.2020.53},
  annote =	{Keywords: 3SUM, fine-grained complexity, matrix multiplication, monochromatic triangle}
}
Document
Matching Is as Easy as the Decision Problem, in the NC Model

Authors: Nima Anari and Vijay V. Vazirani


Abstract
Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of randomized NC matching algorithms [Karp et al., 1985; Mulmuley et al., 1987]. Over the last five years, the theoretical computer science community has launched a relentless attack on this question, leading to the discovery of several powerful ideas. We give what appears to be the culmination of this line of work: An NC algorithm for finding a minimum-weight perfect matching in a general graph with polynomially bounded edge weights, provided it is given an oracle for the decision problem. Consequently, for settling the main open problem, it suffices to obtain an NC algorithm for the decision problem. We believe this new fact has qualitatively changed the nature of this open problem. All known efficient matching algorithms for general graphs follow one of two approaches, given by [Edmonds, 1965] and [Lovász, 1979]. Our oracle-based algorithm follows a new approach and uses many of the ideas discovered in the last five years. The difficulty of obtaining an NC perfect matching algorithm led researchers to study matching vis-a-vis clever relaxations of the class NC. In this vein, recently [Goldwasser and Grossman, 2015] gave a pseudo-deterministic RNC algorithm for finding a perfect matching in a bipartite graph, i.e., an RNC algorithm with the additional requirement that on the same graph, it should return the same (i.e., unique) perfect matching for almost all choices of random bits. A corollary of our reduction is an analogous algorithm for general graphs.
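
The decision oracle assumed by the reduction is classically realized, in RNC, by Lovász's randomized Tutte-matrix test. The following Python sketch (our sequential illustration of that folklore test, not the paper's NC algorithm) substitutes random field elements and checks nonsingularity.

import random

def has_perfect_matching(edges, n, p=(1 << 61) - 1, seed=0):
    # Build the Tutte matrix T (T[i][j] = x_ij = -T[j][i] for each edge {i,j}),
    # substitute random values mod the prime p, and test det(T) != 0 by Gaussian
    # elimination. A nonzero determinant certifies a perfect matching; when a
    # matching exists, we wrongly return False with probability at most n/p
    # (Schwartz-Zippel).
    rng = random.Random(seed)
    T = [[0] * n for _ in range(n)]
    for i, j in edges:
        x = rng.randrange(1, p)
        T[i][j], T[j][i] = x, p - x
    for c in range(n):
        piv = next((r for r in range(c, n) if T[r][c]), None)
        if piv is None:
            return False
        T[c], T[piv] = T[piv], T[c]
        inv = pow(T[c][c], p - 2, p)
        for r in range(c + 1, n):
            f = T[r][c] * inv % p
            for k in range(c, n):
                T[r][k] = (T[r][k] - f * T[c][k]) % p
    return True

print(has_perfect_matching([(0, 1), (1, 2), (2, 3), (3, 0)], 4))  # True: the 4-cycle
print(has_perfect_matching([(0, 1), (0, 2), (0, 3)], 4))          # False: the star K_1,3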

Cite as

Nima Anari and Vijay V. Vazirani. Matching Is as Easy as the Decision Problem, in the NC Model. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 54:1-54:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{anari_et_al:LIPIcs.ITCS.2020.54,
  author =	{Anari, Nima and Vazirani, Vijay V.},
  title =	{{Matching Is as Easy as the Decision Problem, in the NC Model}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{54:1--54:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.54},
  URN =		{urn:nbn:de:0030-drops-117399},
  doi =		{10.4230/LIPIcs.ITCS.2020.54},
  annote =	{Keywords: Parallel Algorithm, Pseudo-Deterministic, Perfect Matching, Tutte Matrix}
}
Document
Advancing Subgroup Fairness via Sleeping Experts

Authors: Avrim Blum and Thodoris Lykouris


Abstract
We study methods for improving fairness to subgroups in settings with overlapping populations and sequential predictions. Classical notions of fairness focus on the balance of some property across different populations. However, in many applications the goal of the different groups is not to be predicted equally but rather to be predicted well. We demonstrate that the task of satisfying this guarantee for multiple overlapping groups is not straightforward, and show that, for the simple objective of the unweighted average of false negative and false positive rates, satisfying it for overlapping populations can be statistically impossible even when we are provided predictors that perform well separately on each subgroup. On the positive side, we show that when individuals are equally important to the different groups they belong to, this goal is achievable; to do so, we draw a connection to the sleeping experts literature in online learning. Motivated by the one-sided feedback in natural settings of interest, we extend our results to such a feedback model. We also provide a game-theoretic interpretation of our results, examining the incentives of participants to join the system and to provide the system full information about predictors they may possess. We end with several interesting open problems concerning the strength of guarantees that can be achieved in a computationally efficient manner.
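
For readers unfamiliar with the tool, here is a Python sketch of a sleeping-experts update in the style of Freund et al. (our simplified illustration; the rescaling rule and parameters are our own choices, not the paper's algorithm): only awake experts predict and are updated, and the total weight of the awake set is preserved.

import math

def sleeping_experts(rounds, n, awake, losses, eta=0.5):
    # In round t only the awake experts A_t participate: we predict with the
    # distribution proportional to their weights, then give each awake expert a
    # multiplicative exp(-eta * loss) update, rescaled so the total weight of
    # A_t is unchanged (sleeping experts are left untouched).
    w = [1.0] * n
    total_loss = 0.0
    for t in range(rounds):
        A = awake(t)
        Z = sum(w[i] for i in A)
        total_loss += sum((w[i] / Z) * losses(t, i) for i in A)  # expected loss
        newZ = sum(w[i] * math.exp(-eta * losses(t, i)) for i in A)
        for i in A:
            w[i] *= math.exp(-eta * losses(t, i)) * Z / newZ
    return total_loss, w

# expert 0 is always awake and always right; experts 1 and 2 alternate being
# awake and always err while awake
awake = lambda t: [0, 1] if t % 2 == 0 else [0, 2]
losses = lambda t, i: 0.0 if i == 0 else 1.0
loss, w = sleeping_experts(40, 3, awake, losses)
print(round(loss, 2), [round(x, 3) for x in w])  # small total loss; weight shifts to expert 0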

Cite as

Avrim Blum and Thodoris Lykouris. Advancing Subgroup Fairness via Sleeping Experts. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 55:1-55:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{blum_et_al:LIPIcs.ITCS.2020.55,
  author =	{Blum, Avrim and Lykouris, Thodoris},
  title =	{{Advancing Subgroup Fairness via Sleeping Experts}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{55:1--55:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.55},
  URN =		{urn:nbn:de:0030-drops-117402},
  doi =		{10.4230/LIPIcs.ITCS.2020.55},
  annote =	{Keywords: Online learning, Fairness, Game theory}
}
Document
Instance Complexity and Unlabeled Certificates in the Decision Tree Model

Authors: Tomer Grossman, Ilan Komargodski, and Moni Naor


Abstract
Instance complexity is a measure of goodness of an algorithm in which the performance of one algorithm is compared to others per input. This is in sharp contrast to worst-case and average-case complexity measures, where the performance is compared either on the worst input or on an average one, respectively. We initiate the systematic study of instance complexity and optimality in the query model (a.k.a. the decision tree model). In this model, instance optimality of an algorithm for computing a function is the requirement that the complexity of an algorithm on any input is at most a constant factor larger than the complexity of the best correct algorithm. That is, we compare the decision tree to one that receives a certificate and whose complexity is measured only if the certificate is correct (but correctness should hold on any input). We study both deterministic and randomized decision trees and provide various characterizations and barriers for more general results. We introduce a new measure of complexity called unlabeled-certificate complexity, appropriate for graph properties and other functions with symmetries, where only information about the structure of the graph is known to the competing algorithm. More precisely, the certificate is some permutation of the input (rather than the input itself) and the correctness should be maintained even if the certificate is wrong. First, we show that such an unlabeled certificate is sometimes very helpful in the worst-case. We then study instance optimality with respect to this measure of complexity, where an algorithm is said to be instance optimal if for every input it performs roughly as well as the best algorithm that is given an unlabeled certificate (but is correct on every input). We show that instance optimality depends on the group of permutations under consideration. Our proofs rely on techniques from hypothesis testing and analysis of random graphs.

Cite as

Tomer Grossman, Ilan Komargodski, and Moni Naor. Instance Complexity and Unlabeled Certificates in the Decision Tree Model. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 56:1-56:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


@InProceedings{grossman_et_al:LIPIcs.ITCS.2020.56,
  author =	{Grossman, Tomer and Komargodski, Ilan and Naor, Moni},
  title =	{{Instance Complexity and Unlabeled Certificates in the Decision Tree Model}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{56:1--56:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.56},
  URN =		{urn:nbn:de:0030-drops-117418},
  doi =		{10.4230/LIPIcs.ITCS.2020.56},
  annote =	{Keywords: decision tree complexity, instance complexity, instance optimality, query complexity, unlabeled certificates}
}
Document
On the Impossibility of Probabilistic Proofs in Relativized Worlds

Authors: Alessandro Chiesa and Siqi Liu


Abstract
We initiate the systematic study of probabilistic proofs in relativized worlds, where the goal is to understand, for a given oracle, the possibility of "non-trivial" proof systems for deterministic or nondeterministic computations that make queries to the oracle. This question is intimately related to a recent line of work that seeks to improve the efficiency of probabilistic proofs for computations that use functionalities such as cryptographic hash functions and digital signatures, by instantiating them via constructions that are "friendly" to known constructions of probabilistic proofs. Informally, negative results about probabilistic proofs in relativized worlds provide evidence that this line of work is inherent and, conversely, positive results provide a way to bypass it. We prove several impossibility results for probabilistic proofs relative to natural oracles. Our results provide strong evidence that tailoring certain natural functionalities to known probabilistic proofs is inherent.

Cite as

Alessandro Chiesa and Siqi Liu. On the Impossibility of Probabilistic Proofs in Relativized Worlds. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 57:1-57:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chiesa_et_al:LIPIcs.ITCS.2020.57,
  author =	{Chiesa, Alessandro and Liu, Siqi},
  title =	{{On the Impossibility of Probabilistic Proofs in Relativized Worlds}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{57:1--57:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.57},
  URN =		{urn:nbn:de:0030-drops-117420},
  doi =		{10.4230/LIPIcs.ITCS.2020.57},
  annote =	{Keywords: probabilistically checkable proofs, relativization}
}
Document
Lower Bounds for (Non-Monotone) Comparator Circuits

Authors: Anna Gál and Robert Robere


Abstract
Comparator circuits are a natural circuit model for studying the concept of bounded fan-out computations, which intuitively corresponds to whether or not a computational model can make "copies" of intermediate computational steps. Comparator circuits are believed to be weaker than general Boolean circuits, but they can simulate Branching Programs and Boolean formulas. In this paper we prove the first superlinear lower bounds in the general (non-monotone) version of this model for an explicitly defined function. More precisely, we prove that the n-bit Element Distinctness function requires Ω((n/ log n)^(3/2)) size comparator circuits.
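
A minimal sketch of the model (ours, not the paper's construction): wires carry bits, and each comparator gate overwrites its two wires with their AND and OR, so intermediate values move along wires rather than being copied; this is the bounded fan-out restriction the abstract refers to.

```python
# Minimal comparator-circuit evaluator (illustrative only). A gate (i, j)
# replaces wire i with the AND and wire j with the OR of the two values,
# so no intermediate value is ever duplicated.

def eval_comparator_circuit(inputs, gates):
    wires = list(inputs)
    for i, j in gates:
        a, b = wires[i], wires[j]
        wires[i], wires[j] = a & b, a | b   # (min, max) on Boolean wires
    return wires

# Three comparators sort any 3 Boolean inputs (a tiny sorting network).
gates = [(0, 1), (1, 2), (0, 1)]
print(eval_comparator_circuit([1, 0, 1], gates))  # [0, 1, 1]
```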

Cite as

Anna Gál and Robert Robere. Lower Bounds for (Non-Monotone) Comparator Circuits. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 58:1-58:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gal_et_al:LIPIcs.ITCS.2020.58,
  author =	{G\'{a}l, Anna and Robere, Robert},
  title =	{{Lower Bounds for (Non-Monotone) Comparator Circuits}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{58:1--58:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.58},
  URN =		{urn:nbn:de:0030-drops-117431},
  doi =		{10.4230/LIPIcs.ITCS.2020.58},
  annote =	{Keywords: comparator circuits, circuit complexity, Nechiporuk, lower bounds}
}
Document
A Tight Lower Bound For Non-Coherent Index Erasure

Authors: Nathan Lindzey and Ansis Rosmanis


Abstract
The index erasure problem is a quantum state generation problem that asks a quantum computer to prepare a uniform superposition over the image of an injective function given by an oracle. We prove a tight Ω(√n) lower bound on the quantum query complexity of the non-coherent case of the problem, where, in addition to preparing the required superposition, the algorithm is allowed to leave the ancillary memory in an arbitrary function-dependent state. This resolves an open question of Ambainis, Magnin, Roetteler, and Roland (CCC 2011), who gave a tight bound for the coherent case, the case where the ancillary memory must return to its initial state. To prove our main result, we first extend the so-called automorphism principle (Høyer et al. STOC 2007) to the general adversary method for state conversion problems (Lee et al. STOC 2011), which allows one to exploit the symmetries of these problems to lower bound their quantum query complexity. Using this method, we establish a strong connection between the quantum query complexity of non-coherent symmetric state generation problems and the well-known Krein parameters of association schemes. Krein parameters are usually hard to determine; nevertheless, we give a novel way of computing certain Krein parameters of a commutative association scheme defined over partial permutations. We believe the study of this association scheme may also be of independent interest.

Cite as

Nathan Lindzey and Ansis Rosmanis. A Tight Lower Bound For Non-Coherent Index Erasure. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 59:1-59:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{lindzey_et_al:LIPIcs.ITCS.2020.59,
  author =	{Lindzey, Nathan and Rosmanis, Ansis},
  title =	{{A Tight Lower Bound For Non-Coherent Index Erasure}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{59:1--59:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.59},
  URN =		{urn:nbn:de:0030-drops-117446},
  doi =		{10.4230/LIPIcs.ITCS.2020.59},
  annote =	{Keywords: General Adversary Method, Quantum Query Complexity, Association Schemes, Krein Parameters, Representation Theory}
}
Document
Optimal Single-Choice Prophet Inequalities from Samples

Authors: Aviad Rubinstein, Jack Z. Wang, and S. Matthew Weinberg


Abstract
We study the single-choice Prophet Inequality problem when the gambler is given access to samples. We show that the optimal competitive ratio of 1/2 can be achieved with a single sample from each distribution. When the distributions are identical, we show that for any constant ε > 0, O(n) samples from the distribution suffice to achieve the optimal competitive ratio (≈ 0.745) within (1+ε), resolving an open problem of [José R. Correa et al., 2019].
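
The single-sample guarantee admits a very short sketch: set the threshold to the maximum of one fresh sample per distribution and accept the first online value above it. The simulation below is our illustration of that rule, with arbitrarily chosen exponential distributions, not code from the paper.

```python
# Sketch of the single-sample threshold rule: one sample per distribution
# sets the threshold; the gambler takes the first online value beating it.
# Exponential distributions are an arbitrary choice for this demo.
import random

def single_sample_gambler(rates):
    threshold = max(random.expovariate(r) for r in rates)
    for r in rates:                  # values now arrive one by one
        v = random.expovariate(r)
        if v > threshold:
            return v                 # gambler stops here
    return 0.0                       # threshold was never exceeded

def prophet(rates):
    return max(random.expovariate(r) for r in rates)

random.seed(0)
rates, trials = [1.0, 0.5, 2.0, 1.5], 100_000
alg = sum(single_sample_gambler(rates) for _ in range(trials)) / trials
opt = sum(prophet(rates) for _ in range(trials)) / trials
print(alg / opt)  # guaranteed at least 1/2 in expectation
```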

Cite as

Aviad Rubinstein, Jack Z. Wang, and S. Matthew Weinberg. Optimal Single-Choice Prophet Inequalities from Samples. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 60:1-60:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{rubinstein_et_al:LIPIcs.ITCS.2020.60,
  author =	{Rubinstein, Aviad and Wang, Jack Z. and Weinberg, S. Matthew},
  title =	{{Optimal Single-Choice Prophet Inequalities from Samples}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{60:1--60:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.60},
  URN =		{urn:nbn:de:0030-drops-117452},
  doi =		{10.4230/LIPIcs.ITCS.2020.60},
  annote =	{Keywords: Online algorithms, Probability, Optimization, Prophet inequalities, Samples, Auctions}
}
Document
Implementation in Advised Strategies: Welfare Guarantees from Posted-Price Mechanisms When Demand Queries Are NP-Hard

Authors: Linda Cai, Clay Thomas, and S. Matthew Weinberg


Abstract
State-of-the-art posted-price mechanisms for submodular bidders with m items achieve approximation guarantees of O((log log m)^3) [Sepehr Assadi and Sahil Singla, 2019]. Their truthfulness, however, requires bidders to compute an NP-hard demand query. Some computational complexity of this form is unavoidable, as it is NP-hard for truthful mechanisms to guarantee even an m^(1/2-ε)-approximation for any ε > 0 [Shahar Dobzinski and Jan Vondrák, 2016]. Together, these establish a stark distinction between computationally-efficient and communication-efficient truthful mechanisms. We show that this distinction disappears with a mild relaxation of truthfulness, which we term implementation in advised strategies. Specifically, advice maps a tentative strategy either to that same strategy, or to one that dominates it. We say that a player follows advice as long as they never play actions which are dominated by advice. A poly-time mechanism guarantees an α-approximation in implementation in advised strategies if there exists advice (which runs in poly-time) for each player such that an α-approximation is achieved whenever all players follow advice. Using an appropriate bicriterion notion of approximate demand queries (which can be computed in poly-time), we establish that (a slight modification of) the [Sepehr Assadi and Sahil Singla, 2019] mechanism achieves the same O((log log m)^3)-approximation in implementation in advised strategies.

Cite as

Linda Cai, Clay Thomas, and S. Matthew Weinberg. Implementation in Advised Strategies: Welfare Guarantees from Posted-Price Mechanisms When Demand Queries Are NP-Hard. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 61:1-61:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.ITCS.2020.61,
  author =	{Cai, Linda and Thomas, Clay and Weinberg, S. Matthew},
  title =	{{Implementation in Advised Strategies: Welfare Guarantees from Posted-Price Mechanisms When Demand Queries Are NP-Hard}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{61:1--61:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.61},
  URN =		{urn:nbn:de:0030-drops-117464},
  doi =		{10.4230/LIPIcs.ITCS.2020.61},
  annote =	{Keywords: Combinatorial auctions, Posted-Price mechanisms, Submodular valuations, Incentive compatible}
}
Document
Toward a General Complexity Theory of Motion Planning: Characterizing Which Gadgets Make Games Hard

Authors: Erik D. Demaine, Dylan H. Hendrickson, and Jayson Lynch


Abstract
We begin a general theory for characterizing the computational complexity of motion planning of robot(s) through a graph of "gadgets", where each gadget has its own state defining a set of allowed traversals which in turn modify the gadget’s state. We study two general families of such gadgets within this theory, one which naturally leads to motion planning problems with polynomially bounded solutions, and another which leads to polynomially unbounded (potentially exponential) solutions. We also study a range of competitive game-theoretic scenarios, from one player controlling one robot to teams of players each controlling their own robot and racing to achieve their team’s goal. Under certain restrictions on these gadgets, we fully characterize the complexity of bounded 1-player motion planning (NL vs. NP-complete), unbounded 1-player motion planning (NL vs. PSPACE-complete), and bounded 2-player motion planning (P vs. PSPACE-complete), and we partially characterize the complexity of unbounded 2-player motion planning (P vs. EXPTIME-complete), bounded 2-team motion planning (P vs. NEXPTIME-complete), and unbounded 2-team motion planning (P vs. undecidable). These results can be seen as an alternative to Constraint Logic (which has already proved useful as a basis for hardness reductions), providing a wide variety of agent-based gadgets, any one of which suffices to prove a problem hard.
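
As an illustration of the 1-player search space in this framework (our sketch, with a made-up toy gadget rather than one of the paper's gadget families): bounded 1-player motion planning is reachability over configurations (robot location, joint gadget state), so breadth-first search decides small instances directly, while the configuration space is exponential in the number of gadgets, consistent with the hardness results.

```python
# Illustrative 1-player motion-planning solver (not from the paper): BFS
# over (robot location, tuple of gadget states). TOGGLE is a made-up toy
# gadget: in state 0 it allows traversal "a"->"b" and flips to state 1.
from collections import deque

TOGGLE = {0: {("a", "b"): 1}, 1: {("b", "a"): 0}}  # state -> allowed moves

def solve(free_edges, gadget_ports, start, goal, init_states):
    """gadget_ports[g] maps a location pair (u, v) to the (entry, exit)
    labels that crossing gadget g between u and v corresponds to."""
    frontier = deque([(start, tuple(init_states))])
    seen = {frontier[0]}
    while frontier:
        loc, states = frontier.popleft()
        if loc == goal:
            return True
        moves = [(v, states) for (u, v) in free_edges if u == loc]
        for g, ports in enumerate(gadget_ports):
            for (u, v), traversal in ports.items():
                new = TOGGLE[states[g]].get(traversal)
                if u == loc and new is not None:
                    moves.append((v, states[:g] + (new,) + states[g + 1:]))
        for nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Two locations joined only by a tunnel through one toggle gadget.
print(solve(set(), [{("s", "t"): ("a", "b")}], "s", "t", [0]))  # True
```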

Cite as

Erik D. Demaine, Dylan H. Hendrickson, and Jayson Lynch. Toward a General Complexity Theory of Motion Planning: Characterizing Which Gadgets Make Games Hard. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 62:1-62:42, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{demaine_et_al:LIPIcs.ITCS.2020.62,
  author =	{Demaine, Erik D. and Hendrickson, Dylan H. and Lynch, Jayson},
  title =	{{Toward a General Complexity Theory of Motion Planning: Characterizing Which Gadgets Make Games Hard}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{62:1--62:42},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.62},
  URN =		{urn:nbn:de:0030-drops-117478},
  doi =		{10.4230/LIPIcs.ITCS.2020.62},
  annote =	{Keywords: motion planning, computational complexity, NP, PSPACE, EXP, NEXP, undecidability, games}
}
Document
Abstract
Computational Pseudorandomness, the Wormhole Growth Paradox, and Constraints on the AdS/CFT Duality (Abstract)

Authors: Adam Bouland, Bill Fefferman, and Umesh Vazirani


Abstract
The AdS/CFT correspondence is central to efforts to reconcile gravity and quantum mechanics, a fundamental goal of physics. It posits a duality between a gravitational theory in Anti de Sitter (AdS) space and a quantum mechanical conformal field theory (CFT), embodied in a map known as the AdS/CFT dictionary mapping states to states and operators to operators. This dictionary map is not well understood and has only been computed on special, structured instances. In this work we introduce cryptographic ideas to the study of AdS/CFT, and provide evidence that either the dictionary must be exponentially hard to compute, or else the quantum Extended Church-Turing thesis must be false in quantum gravity. Our argument has its origins in a fundamental paradox in the AdS/CFT correspondence known as the wormhole growth paradox. The paradox is that the CFT is believed to be "scrambling" - i.e., the expectation value of local operators equilibrates in polynomial time - whereas the gravity theory is not, because the interiors of certain black holes known as "wormholes" do not equilibrate and instead their volume grows at a linear rate for at least an exponential amount of time. So what could be the CFT dual to wormhole volume? Susskind’s proposed resolution was to equate the wormhole volume with the quantum circuit complexity of the CFT state. From a computer science perspective, circuit complexity seems like an unusual choice because it should be difficult to compute, in contrast to physical quantities such as wormhole volume. We show how to create pseudorandom quantum states in the CFT, thereby arguing that their quantum circuit complexity is not "feelable", in the sense that it cannot be approximated by any efficient experiment. This requires a specialized construction inspired by symmetric block ciphers such as DES and AES, since unfortunately existing constructions based on quantum-resistant one-way functions cannot be used in the context of the wormhole growth paradox, as only very restricted operations are allowed in the CFT. By contrast, we argue that the wormhole volume is "feelable" in some general but non-physical sense. The duality between a "feelable" quantity and an "unfeelable" quantity implies that some aspect of this duality must have exponential complexity. More precisely, it implies that either the dictionary is exponentially complex, or else the quantum gravity theory is exponentially difficult to simulate on a quantum computer. While at first sight this might seem to justify the discomfort of complexity theorists with equating computational complexity with a physical quantity, a further examination of our arguments shows that any resolution of the wormhole growth paradox must equate wormhole volume to an "unfeelable" quantity, leading to the same conclusions. In other words, this discomfort is an inevitable consequence of the paradox.

Cite as

Adam Bouland, Bill Fefferman, and Umesh Vazirani. Computational Pseudorandomness, the Wormhole Growth Paradox, and Constraints on the AdS/CFT Duality (Abstract). In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 63:1-63:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bouland_et_al:LIPIcs.ITCS.2020.63,
  author =	{Bouland, Adam and Fefferman, Bill and Vazirani, Umesh},
  title =	{{Computational Pseudorandomness, the Wormhole Growth Paradox, and Constraints on the AdS/CFT Duality}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{63:1--63:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.63},
  URN =		{urn:nbn:de:0030-drops-117486},
  doi =		{10.4230/LIPIcs.ITCS.2020.63},
  annote =	{Keywords: Quantum complexity theory, pseudorandomness, AdS/CFT correspondence}
}
Document
New Query Lower Bounds for Submodular Function Minimization

Authors: Andrei Graur, Tristan Pollner, Vidhya Ramaswamy, and S. Matthew Weinberg


Abstract
We consider submodular function minimization in the oracle model: given black-box access to a submodular set function f:2^[n] → ℝ, find an element of arg min_S {f(S)} using as few queries to f(⋅) as possible. State-of-the-art algorithms succeed with Õ(n²) queries [Yin Tat Lee et al., 2015], yet the best-known lower bound has never been improved beyond n [Nicholas J. A. Harvey, 2008]. We provide a query lower bound of 2n for submodular function minimization, a 3n/2-2 query lower bound for the non-trivial minimizer of a symmetric submodular function, and an n(n-1)/2 query lower bound for the non-trivial minimizer of an asymmetric submodular function. Our 3n/2-2 lower bound results from a connection between SFM lower bounds and a novel concept we term the cut dimension of a graph. Interestingly, this yields a 3n/2-2 cut-query lower bound for finding the global mincut in an undirected, weighted graph, but we also prove it cannot yield a lower bound better than n+1 for s-t mincut, even in a directed, weighted graph.
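
To make the oracle model concrete, here is a minimal sketch (ours): the algorithm sees f only through black-box evaluations, and the trivial strategy spends 2^n queries; the paper asks how many queries are necessary. The graph cut function used below is a standard example of a submodular function.

```python
# Brute-force submodular function minimization in the oracle model
# (illustrative baseline, not an algorithm from the paper). The cut
# function of a graph is submodular; we count every oracle query.
from itertools import combinations

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # arbitrary demo graph
N = 4
QUERIES = 0

def cut(S):                          # f(S) = number of edges leaving S
    global QUERIES
    QUERIES += 1
    return sum((u in S) != (v in S) for u, v in EDGES)

best = min(
    (frozenset(S) for k in range(N + 1) for S in combinations(range(N), k)),
    key=cut,
)
print(sorted(best), QUERIES)  # trivial minimizer (empty set), 2^4 = 16 queries
```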

Cite as

Andrei Graur, Tristan Pollner, Vidhya Ramaswamy, and S. Matthew Weinberg. New Query Lower Bounds for Submodular Function Minimization. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 64:1-64:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{graur_et_al:LIPIcs.ITCS.2020.64,
  author =	{Graur, Andrei and Pollner, Tristan and Ramaswamy, Vidhya and Weinberg, S. Matthew},
  title =	{{New Query Lower Bounds for Submodular Function Minimization}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{64:1--64:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.64},
  URN =		{urn:nbn:de:0030-drops-117493},
  doi =		{10.4230/LIPIcs.ITCS.2020.64},
  annote =	{Keywords: submodular functions, query lower bounds, min cut}
}
Document
Computation-Aware Data Aggregation

Authors: Bernhard Haeupler, D. Ellis Hershkowitz, Anson Kahng, and Ariel D. Procaccia


Abstract
Data aggregation is a fundamental primitive in distributed computing wherein a network computes a function of every node's input. However, while compute time is non-negligible in modern systems, standard models of distributed computing do not take compute time into account. Rather, most distributed models of computation only explicitly consider communication time. In this paper, we introduce a model of distributed computation that considers both computation and communication so as to give a theoretical treatment of data aggregation. We study both the structure of and how to compute the fastest data aggregation schedule in this model. As our first result, we give a polynomial-time algorithm that computes the optimal schedule when the input network is a complete graph. Moreover, since one may want to aggregate data over a pre-existing network, we also study data aggregation scheduling on arbitrary graphs. We demonstrate that this problem on arbitrary graphs is hard to approximate within a multiplicative 1.5 factor. Finally, we give an O(log n ⋅ log(OPT/t_m))-approximation algorithm for this problem on arbitrary graphs, where n is the number of nodes and OPT is the length of the optimal schedule.
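
As a hedged illustration of why compute time changes the picture (a toy accounting of ours, not the paper's exact model): suppose sending a partial aggregate costs one time unit and merging two aggregates costs t_m. The natural halving schedule then finishes in roughly log₂(n)·(1 + t_m) time, a baseline that more careful schedules can try to beat by overlapping communication with computation.

```python
# Toy cost model (ours): each round, the surviving half of the nodes
# receive one partial aggregate (1 time unit) and merge it (t_m time).
import math

def halving_schedule_time(n, t_m):
    time, active = 0.0, n
    while active > 1:
        time += 1 + t_m              # one send, then one merge
        active = math.ceil(active / 2)
    return time

print(halving_schedule_time(8, t_m=2.0))  # 3 rounds * (1 + 2) = 9.0
```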

Cite as

Bernhard Haeupler, D. Ellis Hershkowitz, Anson Kahng, and Ariel D. Procaccia. Computation-Aware Data Aggregation. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 65:1-65:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{haeupler_et_al:LIPIcs.ITCS.2020.65,
  author =	{Haeupler, Bernhard and Hershkowitz, D. Ellis and Kahng, Anson and Procaccia, Ariel D.},
  title =	{{Computation-Aware Data Aggregation}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{65:1--65:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.65},
  URN =		{urn:nbn:de:0030-drops-117506},
  doi =		{10.4230/LIPIcs.ITCS.2020.65},
  annote =	{Keywords: Data aggregation, distributed algorithm scheduling, approximation algorithms}
}
Document
Convertible Codes: New Class of Codes for Efficient Conversion of Coded Data in Distributed Storage

Authors: Francisco Maturana and K. V. Rashmi


Abstract
Erasure codes are typically used in large-scale distributed storage systems to provide durability of data in the face of failures. In this setting, a set of k blocks to be stored is encoded using an [n, k] code to generate n blocks that are then stored on different storage nodes. A recent work by Kadekodi et al. [Kadekodi et al., 2019] shows that the failure rate of storage devices varies significantly over time, and that changing the rate of the code (via a change in the parameters n and k) in response to such variations provides a significant reduction in storage space requirement. However, the resource overhead of realizing such a change in the code rate on already encoded data in traditional codes is prohibitively high. Motivated by this application, in this work we first present a new framework to formalize the notion of code conversion - the process of converting data encoded with an [n^I, k^I] code into data encoded with an [n^F, k^F] code while maintaining desired decodability properties, such as the maximum-distance-separable (MDS) property. We then introduce convertible codes, a new class of code pairs that allow for code conversions in a resource-efficient manner. For an important parameter regime (which we call the merge regime), along with the widely used linearity and MDS decodability constraints, we prove tight bounds on the number of nodes accessed during code conversion. In particular, our achievability result is an explicit construction of MDS convertible codes that are optimal for all parameter values in the merge regime, albeit with a high field size. We then present explicit low-field-size constructions of optimal MDS convertible codes for a broad range of parameters in the merge regime. Our results thus show that it is indeed possible to achieve code conversions with significantly fewer resources than the default approach of re-encoding.
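
A toy example (ours, far simpler than the paper's MDS constructions) conveys the kind of savings conversion can offer in the merge regime: two single-parity codes merge into one, and because XOR parities compose, the conversion touches only the two old parity blocks instead of re-reading all the data.

```python
# Toy merge-regime conversion: two [k+1, k] single-parity codes become one
# [2k+1, 2k] code. Illustrative only -- not the paper's MDS construction.
from functools import reduce
from operator import xor

def encode(data):                    # append a single XOR parity block
    return data + [reduce(xor, data)]

a, b = [3, 5, 6], [1, 2, 7]
code_a, code_b = encode(a), encode(b)

# Default approach: re-encode from scratch (reads all 6 data blocks).
merged_naive = encode(a + b)

# Conversion: access only the two parity blocks; data stays in place.
merged_cheap = a + b + [code_a[-1] ^ code_b[-1]]

assert merged_naive == merged_cheap
print(merged_cheap)
```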

Cite as

Francisco Maturana and K. V. Rashmi. Convertible Codes: New Class of Codes for Efficient Conversion of Coded Data in Distributed Storage. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 66:1-66:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{maturana_et_al:LIPIcs.ITCS.2020.66,
  author =	{Maturana, Francisco and Rashmi, K. V.},
  title =	{{Convertible Codes: New Class of Codes for Efficient Conversion of Coded Data in Distributed Storage}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{66:1--66:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.66},
  URN =		{urn:nbn:de:0030-drops-117510},
  doi =		{10.4230/LIPIcs.ITCS.2020.66},
  annote =	{Keywords: Coding theory, Reed-Solomon codes, Erasure codes, Code conversion, Distributed storage}
}
Document
Incentive Compatible Active Learning

Authors: Federico Echenique and Siddharth Prasad


Abstract
We consider active learning under incentive compatibility constraints. The main application of our results is to economic experiments, in which a learner seeks to infer the parameters of a subject’s preferences: for example their attitudes towards risk, or their beliefs over uncertain events. By cleverly adapting the experimental design, one can save on the time spent by subjects in the laboratory, or maximize the information obtained from each subject in a given laboratory session; but the resulting adaptive design raises complications due to incentive compatibility. A subject in the lab may answer questions strategically, and not truthfully, so as to steer subsequent questions in a profitable direction. We analyze two standard economic problems: inference of preferences over risk from multiple price lists, and belief elicitation in experiments on choice over uncertainty. In the first setting, we tune a simple and fast learning algorithm to retain certain incentive compatibility properties. In the second setting, we provide an incentive compatible learning algorithm based on scoring rules whose query complexity differs from that of obvious methods for achieving fast learning rates by only subpolynomial factors. Thus, for these areas of application, incentive compatibility may be achieved without paying a large sample complexity price.

Cite as

Federico Echenique and Siddharth Prasad. Incentive Compatible Active Learning. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 67:1-67:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{echenique_et_al:LIPIcs.ITCS.2020.67,
  author =	{Echenique, Federico and Prasad, Siddharth},
  title =	{{Incentive Compatible Active Learning}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{67:1--67:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.67},
  URN =		{urn:nbn:de:0030-drops-117525},
  doi =		{10.4230/LIPIcs.ITCS.2020.67},
  annote =	{Keywords: Active Learning, Incentive Compatibility, Preference Elicitation}
}
Document
Pseudorandomness and the Minimum Circuit Size Problem

Authors: Rahul Santhanam


Abstract
We explore the possibility of basing one-way functions on the average-case hardness of the fundamental Minimum Circuit Size Problem (MCSP[s]), which asks whether a Boolean function on n bits specified by its truth table has circuits of size s(n). 1) (Pseudorandomness from Zero-Error Average-Case Hardness) We show that for a given size function s, the following are equivalent: Pseudorandom distributions supported on strings describable by s(O(n))-size circuits exist; Hitting sets supported on strings describable by s(O(n))-size circuits exist; MCSP[s(O(n))] is zero-error average-case hard. Using similar techniques, we show that Feige’s hypothesis for random k-CNFs implies that there is a pseudorandom distribution (with constant error) supported entirely on satisfiable formulas. Underlying our results is a general notion of semantic sampling, which might be of independent interest. 2) (A New Conjecture) In analogy to a known universal construction of succinct hitting sets against arbitrary polynomial-size adversaries, we propose the Universality Conjecture: there is a universal construction of succinct pseudorandom distributions against arbitrary polynomial-size adversaries. We show that under the Universality Conjecture, the following are equivalent: One-way functions exist; Natural proofs useful against sub-exponential size circuits do not exist; Learning polynomial-size circuits with membership queries over the uniform distribution is hard; MCSP[2^(ε n)] is zero-error hard on average for some ε > 0; Cryptographic succinct hitting set generators exist. 3) (Non-Black-Box Results) We show that for weak circuit classes ℭ against which there are natural proofs [Alexander A. Razborov and Steven Rudich, 1997], pseudorandom functions secure against poly-size circuits in ℭ imply superpolynomial lower bounds in P against poly-size circuits in ℭ. We also show that for a certain natural variant of MCSP, there is a polynomial-time reduction from approximating the problem well in the worst case to solving it on average. These results are shown using non-black-box techniques, and in the first case we show that there is no black-box proof of the result under standard crypto assumptions.

Cite as

Rahul Santhanam. Pseudorandomness and the Minimum Circuit Size Problem. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 68:1-68:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{santhanam:LIPIcs.ITCS.2020.68,
  author =	{Santhanam, Rahul},
  title =	{{Pseudorandomness and the Minimum Circuit Size Problem}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{68:1--68:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.68},
  URN =		{urn:nbn:de:0030-drops-117532},
  doi =		{10.4230/LIPIcs.ITCS.2020.68},
  annote =	{Keywords: Minimum Circuit Size Problem, Pseudorandomness, Average-case Complexity, Natural Proofs, Universality Conjecture}
}
Document
Testing Properties of Multiple Distributions with Few Samples

Authors: Maryam Aliakbarpour and Sandeep Silwal


Abstract
We propose a new setting for testing properties of distributions while receiving samples from several distributions, but few samples per distribution. Given samples from s distributions, p_1, p_2, …, p_s, we design testers for the following problems: (1) Uniformity Testing: Testing whether all the p_i’s are uniform or ε-far from being uniform in ℓ_1-distance; (2) Identity Testing: Testing whether all the p_i’s are equal to an explicitly given distribution q or ε-far from q in ℓ_1-distance; and (3) Closeness Testing: Testing whether all the p_i’s are equal to a distribution q which we have sample access to, or ε-far from q in ℓ_1-distance. By assuming an additional natural condition about the source distributions, we provide sample optimal testers for all of these problems.
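
For context, here is a minimal sketch of the classical collision-based uniformity tester for a single distribution (a standard primitive, not one of the paper's multi-distribution testers): uniform samples from [n] collide with probability exactly 1/n, whereas any distribution ε-far from uniform in ℓ_1-distance collides with probability at least (1 + ε²)/n.

```python
# Collision-based uniformity tester (classical single-distribution
# primitive, shown for context). Accepts iff the empirical collision
# rate is close to the uniform value 1/n.
import random
from itertools import combinations

def collision_test(samples, n, eps):
    m = len(samples)
    collisions = sum(x == y for x, y in combinations(samples, 2))
    rate = collisions / (m * (m - 1) / 2)
    return rate <= (1 + eps**2 / 2) / n       # True = "looks uniform"

random.seed(1)
n, m, eps = 100, 2000, 0.5
uniform = [random.randrange(n) for _ in range(m)]
skewed = [random.randrange(n // 2) for _ in range(m)]  # far from uniform
print(collision_test(uniform, n, eps))  # True
print(collision_test(skewed, n, eps))   # False
```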

Cite as

Maryam Aliakbarpour and Sandeep Silwal. Testing Properties of Multiple Distributions with Few Samples. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 69:1-69:41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{aliakbarpour_et_al:LIPIcs.ITCS.2020.69,
  author =	{Aliakbarpour, Maryam and Silwal, Sandeep},
  title =	{{Testing Properties of Multiple Distributions with Few Samples}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{69:1--69:41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.69},
  URN =		{urn:nbn:de:0030-drops-117545},
  doi =		{10.4230/LIPIcs.ITCS.2020.69},
  annote =	{Keywords: Hypothesis Testing, Property Testing, Distribution Testing, Identity Testing, Closeness Testing, Multiple Sources}
}
Document
Beyond Natural Proofs: Hardness Magnification and Locality

Authors: Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam


Abstract
Hardness magnification reduces major complexity separations (such as EXP ⊈ NC^1) to proving lower bounds for some natural problem Q against weak circuit models. Several recent works [Igor Carboni Oliveira and Rahul Santhanam, 2018; Dylan M. McKay et al., 2019; Lijie Chen and Roei Tell, 2019; Igor Carboni Oliveira et al., 2019; Lijie Chen et al., 2019; Igor Carboni Oliveira, 2019; Lijie Chen et al., 2019] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than Q, while Q itself is susceptible to lower bounds but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon, and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program: - Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [Alexander A. Razborov and Steven Rudich, 1997]? - Can we adapt known lower bound techniques to establish the desired lower bound for Q? We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem MCSP imply the non-existence of natural proofs. As a corollary of our result, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms. Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower bound techniques to prove strong lower bounds via magnification. This is captured by a locality barrier: existing magnification theorems unconditionally show that the problems Q considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower bound techniques against weak circuit models quite often easily extend to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.

Cite as

Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam. Beyond Natural Proofs: Hardness Magnification and Locality. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 70:1-70:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ITCS.2020.70,
  author =	{Chen, Lijie and Hirahara, Shuichi and Oliveira, Igor C. and Pich, J\'{a}n and Rajgopal, Ninad and Santhanam, Rahul},
  title =	{{Beyond Natural Proofs: Hardness Magnification and Locality}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{70:1--70:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.70},
  URN =		{urn:nbn:de:0030-drops-117550},
  doi =		{10.4230/LIPIcs.ITCS.2020.70},
  annote =	{Keywords: Hardness Magnification, Natural Proofs, Minimum Circuit Size Problem, Circuit Lower Bounds}
}
Document
Separating Two-Round Secure Computation From Oblivious Transfer

Authors: Benny Applebaum, Zvika Brakerski, Sanjam Garg, Yuval Ishai, and Akshayaram Srinivasan


Abstract
We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value as black-box use of primitives tends to lead to more efficient constructions. Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT. As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.

Cite as

Benny Applebaum, Zvika Brakerski, Sanjam Garg, Yuval Ishai, and Akshayaram Srinivasan. Separating Two-Round Secure Computation From Oblivious Transfer. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 71:1-71:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{applebaum_et_al:LIPIcs.ITCS.2020.71,
  author =	{Applebaum, Benny and Brakerski, Zvika and Garg, Sanjam and Ishai, Yuval and Srinivasan, Akshayaram},
  title =	{{Separating Two-Round Secure Computation From Oblivious Transfer}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{71:1--71:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.71},
  URN =		{urn:nbn:de:0030-drops-117560},
  doi =		{10.4230/LIPIcs.ITCS.2020.71},
  annote =	{Keywords: Oracle Separation, Oblivious Transfer, Secure Multiparty Computation}
}
Document
Trade-Offs Between Size and Degree in Polynomial Calculus

Authors: Guillaume Lagarde, Jakob Nordström, Dmitry Sokolov, and Joseph Swernofsky


Abstract
Building on [Clegg et al. '96], [Impagliazzo et al. '99] established that if an unsatisfiable k-CNF formula over n variables has a refutation of size S in the polynomial calculus resolution proof system, then this formula also has a refutation of degree k + O(√(n log S)). The proof works by converting a small-size refutation into a small-degree one, but at the expense of increasing the proof size exponentially. This raises the question of whether it is possible to achieve both small size and small degree in the same refutation, or whether the exponential blow-up is inherent. Using and extending ideas from [Thapen '16], who studied the analogous question for the resolution proof system, we prove that a strong size-degree trade-off is necessary.

Cite as

Guillaume Lagarde, Jakob Nordström, Dmitry Sokolov, and Joseph Swernofsky. Trade-Offs Between Size and Degree in Polynomial Calculus. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 72:1-72:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{lagarde_et_al:LIPIcs.ITCS.2020.72,
  author =	{Lagarde, Guillaume and Nordstr\"{o}m, Jakob and Sokolov, Dmitry and Swernofsky, Joseph},
  title =	{{Trade-Offs Between Size and Degree in Polynomial Calculus}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{72:1--72:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.72},
  URN =		{urn:nbn:de:0030-drops-117573},
  doi =		{10.4230/LIPIcs.ITCS.2020.72},
  annote =	{Keywords: proof complexity, polynomial calculus, polynomial calculus resolution, PCR, size-degree trade-off, resolution, colored polynomial local search}
}
Document
Smoothed Efficient Algorithms and Reductions for Network Coordination Games

Authors: Shant Boodaghians, Rucha Kulkarni, and Ruta Mehta


Abstract
We study the smoothed complexity of finding pure Nash equilibria in Network Coordination Games, a PLS-complete problem in the worst case, even when each player has two strategies. This is a potential game where the sequential-better-response algorithm is known to converge to a pure NE, albeit in exponential time. First, we prove polynomial (respectively, quasi-polynomial) smoothed complexity when the underlying game graph is complete (resp. arbitrary), and every player has constantly many strategies. The complete graph assumption is reminiscent of perturbing all parameters, a common assumption in most known polynomial smoothed complexity results. We develop techniques to bound the probability that an (adversarial) better-response sequence makes slow improvements to the potential. Our approach combines and generalizes the local-max-cut approaches of Etscheid and Röglin (SODA '14; ACM TALG '17) and Angel, Bubeck, Peres, and Wei (STOC '17) to handle the multi-strategy case. We believe that the approach and notions developed herein could be of interest in addressing the smoothed complexity of other potential games. Further, we define a notion of a smoothness-preserving reduction among search problems, and obtain reductions from 2-strategy network coordination games to local-max-cut, and from k-strategy games (k arbitrary) to local-max-bisection. The former, combined with the recent result of Bibak, Chandrasekaran, and Carlson (SODA '18), gives an alternate O(n^8)-time smoothed algorithm when k = 2. These reductions extend smoothed efficient algorithms from one problem to another.

Cite as

Shant Boodaghians, Rucha Kulkarni, and Ruta Mehta. Smoothed Efficient Algorithms and Reductions for Network Coordination Games. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 73:1-73:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{boodaghians_et_al:LIPIcs.ITCS.2020.73,
  author =	{Boodaghians, Shant and Kulkarni, Rucha and Mehta, Ruta},
  title =	{{Smoothed Efficient Algorithms and Reductions for Network Coordination Games}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{73:1--73:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.73},
  URN =		{urn:nbn:de:0030-drops-117581},
  doi =		{10.4230/LIPIcs.ITCS.2020.73},
  annote =	{Keywords: Network Coordination Games, Smoothed Analysis}
}
Document
Local-To-Global Agreement Expansion via the Variance Method

Authors: Tali Kaufman and David Mass


Abstract
Agreement expansion is concerned with set systems for which local assignments to the sets with almost perfect pairwise consistency (i.e., most overlapping pairs of sets agree on their intersections) imply the existence of a global assignment to the ground set (from which the sets are defined) that agrees with most of the local assignments. It is currently known that if a set system forms a two-sided or a partite high dimensional expander then agreement expansion is implied. However, it was not known whether agreement expansion can be implied for one-sided high dimensional expanders. In this work we show that agreement expansion can be deduced for one-sided high dimensional expanders assuming that all the vertices' links (i.e., the neighborhoods of the vertices) are agreement expanders. Thus, for a one-sided high dimensional expander, agreement expansion of the large complicated complex can be deduced from agreement expansion of its small simple links. Using our result, we settle the open question of whether the well studied Ramanujan complexes are agreement expanders. These complexes are neither partite nor two-sided high dimensional expanders. However, they are one-sided high dimensional expanders for which their links are partite and hence are agreement expanders. Thus, our result implies that Ramanujan complexes are agreement expanders, answering the aforementioned open question affirmatively. The local-to-global agreement expansion that we prove is based on the variance method that we develop. We show that for a high dimensional expander, if we define a function on its top faces and consider its local averages over the links, then the variance of these local averages is much smaller than the global variance of the original function. This decrease in the variance enables us to construct one global agreement function that ties together all local agreement functions.

Cite as

Tali Kaufman and David Mass. Local-To-Global Agreement Expansion via the Variance Method. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 74:1-74:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kaufman_et_al:LIPIcs.ITCS.2020.74,
  author =	{Kaufman, Tali and Mass, David},
  title =	{{Local-To-Global Agreement Expansion via the Variance Method}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{74:1--74:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.74},
  URN =		{urn:nbn:de:0030-drops-117597},
  doi =		{10.4230/LIPIcs.ITCS.2020.74},
  annote =	{Keywords: Agreement testing, High dimensional expanders, Local-to-global, Variance method}
}
Document
MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture

Authors: T-H. Hubert Chan, Kai-Min Chung, Wei-Kai Lin, and Elaine Shi


Abstract
Massively Parallel Computation (MPC) is a model of computation widely believed to best capture realistic parallel computing architectures such as large-scale MapReduce and Hadoop clusters. Motivated by the fact that many data analytics tasks performed on these platforms involve sensitive user data, we initiate the theoretical exploration of how to leverage MPC architectures to enable efficient, privacy-preserving computation over massive data. Clearly if a computation task does not lend itself to an efficient implementation on MPC even without security, then we cannot hope to compute it efficiently on MPC with security. We show, on the other hand, that any task that can be efficiently computed on MPC can also be securely computed with comparable efficiency. Specifically, we show the following results: - any MPC algorithm can be compiled to a communication-oblivious counterpart while asymptotically preserving its round and space complexity, where communication-obliviousness ensures that any network intermediary observing the communication patterns learns no information about the secret inputs; - assuming the existence of Fully Homomorphic Encryption with a suitable notion of compactness and other standard cryptographic assumptions, any MPC algorithm can be compiled to a secure counterpart that defends against an adversary who controls not only intermediate network routers but additionally up to a 1/3 - η fraction of machines (for an arbitrarily small constant η); moreover, this compilation preserves the round complexity tightly, and preserves the space complexity up to a multiplicative blowup related to the security parameter. As an initial exploration of this important direction, our work suggests new definitions and proposes novel protocols that blend algorithmic and cryptographic techniques.

Cite as

T-H. Hubert Chan, Kai-Min Chung, Wei-Kai Lin, and Elaine Shi. MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 75:1-75:52, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{chan_et_al:LIPIcs.ITCS.2020.75,
  author =	{Chan, T-H. Hubert and Chung, Kai-Min and Lin, Wei-Kai and Shi, Elaine},
  title =	{{MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{75:1--75:52},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.75},
  URN =		{urn:nbn:de:0030-drops-117600},
  doi =		{10.4230/LIPIcs.ITCS.2020.75},
  annote =	{Keywords: massively parallel computation, secure multi-party computation}
}
Document
Choice and Bias in Random Walks

Authors: Agelos Georgakopoulos, John Haslegrave, Thomas Sauerwald, and John Sylvester


Abstract
We analyse the following random walk process inspired by the power-of-two-choice paradigm: starting from a given vertex, at each step, unlike the simple random walk (SRW) that always moves to a randomly chosen neighbour, we have the choice between two uniformly and independently chosen neighbours. We call this process the choice random walk (CRW). We first prove that for any graph, there is a strategy for the CRW that visits any given vertex in expected time O(|E|). Then we introduce a general tool that quantifies by how much the probability of a rare event in the simple random walk can be boosted under a suitable CRW strategy. We believe this result to be of independent interest, and apply it here to derive an almost optimal O(n log log n) bound for the cover time of bounded-degree expanders. This tool also applies to so-called biased walks, and allows us to make progress towards a conjecture of Azar et al. [STOC 1992]. Finally, we prove the following dichotomy: computing an optimal strategy to minimise the hitting time of a vertex takes polynomial time, whereas computing one to minimise the cover time is NP-hard.
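
A quick simulation conveys the flavour (the greedy strategy below is our illustration, not the paper's): on a cycle, a CRW that is offered two independent uniform neighbours and always moves to the one closer to the target hits it markedly faster than the SRW.

```python
# SRW vs. CRW hitting times on a cycle (illustrative strategy of ours).
import random

def hitting_time(n, target, choose):
    pos, steps = 0, 0
    while pos != target:
        u, v = (pos - 1) % n, (pos + 1) % n   # the two neighbours
        pos = choose(u, v, target, n)
        steps += 1
    return steps

def srw(u, v, target, n):                     # one uniform neighbour
    return random.choice((u, v))

def crw(u, v, target, n):                     # two independent choices
    a, b = random.choice((u, v)), random.choice((u, v))
    dist = lambda x: min((x - target) % n, (target - x) % n)
    return min(a, b, key=dist)                # greedily approach target

random.seed(2)
n, target, trials = 40, 20, 200
print(sum(hitting_time(n, target, srw) for _ in range(trials)) / trials)
print(sum(hitting_time(n, target, crw) for _ in range(trials)) / trials)
```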

Cite as

Agelos Georgakopoulos, John Haslegrave, Thomas Sauerwald, and John Sylvester. Choice and Bias in Random Walks. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 76:1-76:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{georgakopoulos_et_al:LIPIcs.ITCS.2020.76,
  author =	{Georgakopoulos, Agelos and Haslegrave, John and Sauerwald, Thomas and Sylvester, John},
  title =	{{Choice and Bias in Random Walks}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{76:1--76:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.76},
  URN =		{urn:nbn:de:0030-drops-117612},
  doi =		{10.4230/LIPIcs.ITCS.2020.76},
  annote =	{Keywords: Power of Two Choices, Markov Chains, Random Walks, Cover Time, Markov Decision Processes}
}
Document
Graph Spanners in the Message-Passing Model

Authors: Manuel Fernández V, David P. Woodruff, and Taisuke Yasuda


Abstract
Graph spanners are sparse subgraphs which approximately preserve all pairwise shortest-path distances in an input graph. The notion of approximation can be additive, multiplicative, or both, and many variants of this problem have been extensively studied. We study the problem of computing a graph spanner when the edges of the input graph are distributed across two or more sites in an arbitrary, possibly worst-case partition, and the goal is for the sites to minimize the communication used to output a spanner. We assume the message-passing model of communication, for which there is a point-to-point link between all pairs of sites as well as a coordinator who is responsible for producing the output. We stress that the subset of edges that each site has is not related to the network topology, which is fixed to be point-to-point. While this model has been extensively studied for related problems such as graph connectivity, it has not been systematically studied for graph spanners. We present the first tradeoffs for total communication versus the quality of the spanners computed, for two or more sites, as well as for additive and multiplicative notions of distortion. We show separations in the communication complexity when edges are allowed to occur on multiple sites, versus when each edge occurs on at most one site. We obtain nearly tight bounds (up to polylog factors) for the communication of additive 2-spanners in both the with-duplication and without-duplication models, for multiplicative (2k-1)-spanners in the with-duplication model, and for multiplicative 3- and 5-spanners in the without-duplication model. Our lower bound for multiplicative 3-spanners employs biregular bipartite graphs rather than the usual Erdős girth conjecture graphs and may be of wider interest.

Cite as

Manuel Fernández V, David P. Woodruff, and Taisuke Yasuda. Graph Spanners in the Message-Passing Model. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 77:1-77:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{fernandezv_et_al:LIPIcs.ITCS.2020.77,
  author =	{Fern\'{a}ndez V, Manuel and Woodruff, David P. and Yasuda, Taisuke},
  title =	{{Graph Spanners in the Message-Passing Model}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{77:1--77:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.77},
  URN =		{urn:nbn:de:0030-drops-117620},
  doi =		{10.4230/LIPIcs.ITCS.2020.77},
  annote =	{Keywords: Graph spanners, Message-passing model, Communication complexity}
}
Document
Computational Hardness of Certifying Bounds on Constrained PCA Problems

Authors: Afonso S. Bandeira, Dmitriy Kunisky, and Alexander S. Wein


Abstract
Given a random n × n symmetric matrix M drawn from the Gaussian orthogonal ensemble (GOE), we consider the problem of certifying an upper bound on the maximum value of the quadratic form x^⊤ M x over all vectors x in a constraint set S ⊂ ℝⁿ. For a certain class of normalized constraint sets we show that, conditional on a certain complexity-theoretic conjecture, no polynomial-time algorithm can certify a better upper bound than the largest eigenvalue of M. A notable special case included in our results is the hypercube S = {±1/√n}ⁿ, which corresponds to the problem of certifying bounds on the Hamiltonian of the Sherrington-Kirkpatrick spin glass model from statistical physics. Our results suggest a striking gap between optimization and certification for this problem. Our proof proceeds in two steps. First, we give a reduction from the detection problem in the negatively-spiked Wishart model to the above certification problem. We then give evidence that this Wishart detection problem is computationally hard below the classical spectral threshold, by showing that no low-degree polynomial can (in expectation) distinguish the spiked and unspiked models. This method for predicting computational thresholds was proposed in a sequence of recent works on the sum-of-squares hierarchy, and is conjectured to be correct for a large class of problems. Our proof can be seen as constructing a distribution over symmetric matrices that appears computationally indistinguishable from the GOE, yet is supported on matrices whose maximum quadratic form over x ∈ S is much larger than that of a GOE matrix.
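
A small numerical sketch of the gap in question (my illustration, not the paper's experiment): for a GOE matrix normalised so that λ_max → 2, the polynomial-time spectral certificate λ_max(M) overshoots the true maximum of the quadratic form over the hypercube, which for large n concentrates near ≈ 1.526. Brute force over the hypercube is exponential, so n is kept tiny here and the numbers are only suggestive.

import itertools
import numpy as np

n = 12
rng = np.random.default_rng(0)
# GOE sample, scaled so that lambda_max -> 2 as n grows
A = rng.normal(size=(n, n)) / np.sqrt(n)
M = (A + A.T) / np.sqrt(2)

lam_max = np.linalg.eigvalsh(M)[-1]  # the polynomial-time spectral certificate

# Brute-force max of x^T M x over the hypercube S = {+-1/sqrt(n)}^n
best = max(x @ M @ x
           for s in itertools.product([-1.0, 1.0], repeat=n)
           for x in [np.array(s) / np.sqrt(n)])
print(f"lambda_max = {lam_max:.3f}, max over hypercube = {best:.3f}")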

Cite as

Afonso S. Bandeira, Dmitriy Kunisky, and Alexander S. Wein. Computational Hardness of Certifying Bounds on Constrained PCA Problems. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 78:1-78:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bandeira_et_al:LIPIcs.ITCS.2020.78,
  author =	{Bandeira, Afonso S. and Kunisky, Dmitriy and Wein, Alexander S.},
  title =	{{Computational Hardness of Certifying Bounds on Constrained PCA Problems}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{78:1--78:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.78},
  URN =		{urn:nbn:de:0030-drops-117633},
  doi =		{10.4230/LIPIcs.ITCS.2020.78},
  annote =	{Keywords: Certification, Sherrington-Kirkpatrick model, spiked Wishart model, low-degree likelihood ratio}
}
Document
Pseudo-Deterministic Streaming

Authors: Shafi Goldwasser, Ofer Grossman, Sidhanth Mohanty, and David P. Woodruff


Abstract
A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, ℓ_2 approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm A, there exists a stream x so that running A twice on x (using different randomness) would with high probability result in two different entries as the output. In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for ℓ_2 norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix. We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a few options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs where there are many valid outputs, all with a significant probability of being output.
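
To make the turnstile setting and the failure of pseudo-determinism concrete, here is a toy Python illustration (emphatically not a low-space algorithm: it stores the whole vector). It mimics how sampling-based sketches choose their output, via seed-dependent random priorities, so two runs with different seeds can legitimately return different nonzero coordinates.

import random

def find_nonzero(stream, n, seed):
    # Toy illustration only: apply all turnstile updates, then output the
    # nonzero coordinate of minimum seed-dependent random priority.
    # Different seeds typically yield different answers.
    x = [0] * n
    for i, delta in stream:  # turnstile update: x[i] += delta
        x[i] += delta
    rnd = random.Random(seed)
    priority = [rnd.random() for _ in range(n)]
    nonzero = [i for i in range(n) if x[i] != 0]
    return min(nonzero, key=lambda i: priority[i]) if nonzero else None

stream = [(3, 5), (7, 2), (3, -5), (1, 1)]  # coordinate 3 cancels out
print(find_nonzero(stream, 10, seed=0), find_nonzero(stream, 10, seed=1))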

Cite as

Shafi Goldwasser, Ofer Grossman, Sidhanth Mohanty, and David P. Woodruff. Pseudo-Deterministic Streaming. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 79:1-79:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{goldwasser_et_al:LIPIcs.ITCS.2020.79,
  author =	{Goldwasser, Shafi and Grossman, Ofer and Mohanty, Sidhanth and Woodruff, David P.},
  title =	{{Pseudo-Deterministic Streaming}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{79:1--79:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.79},
  URN =		{urn:nbn:de:0030-drops-117644},
  doi =		{10.4230/LIPIcs.ITCS.2020.79},
  annote =	{Keywords: streaming, pseudo-deterministic}
}
Document
Limits to Non-Malleability

Authors: Marshall Ball, Dana Dachman-Soled, Mukul Kulkarni, and Tal Malkin


Abstract
There have been many successes in constructing explicit non-malleable codes for various classes of tampering functions in recent years, and strong existential results are also known. In this work we ask the following question: When can we rule out the existence of a non-malleable code for a tampering class ℱ? First, we start with some classes where positive results are well-known, and show that when these classes are extended in a natural way, non-malleable codes are no longer possible. Specifically, we show that no non-malleable codes exist for any of the following tampering classes:
- Functions that change d/2 symbols, where d is the distance of the code;
- Functions where each input symbol affects only a single output symbol;
- Functions where each of the n output bits is a function of n-log n input bits.
Furthermore, we rule out constructions of non-malleable codes for certain classes ℱ via reductions, making black-box use of the tampering functions in the proof, to the assumption that a distributional problem is hard for ℱ. In particular, this yields concrete obstacles for the construction of efficient codes for NC, even assuming average-case variants of P ⊈ NC.

Cite as

Marshall Ball, Dana Dachman-Soled, Mukul Kulkarni, and Tal Malkin. Limits to Non-Malleability. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 80:1-80:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ball_et_al:LIPIcs.ITCS.2020.80,
  author =	{Ball, Marshall and Dachman-Soled, Dana and Kulkarni, Mukul and Malkin, Tal},
  title =	{{Limits to Non-Malleability}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{80:1--80:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.80},
  URN =		{urn:nbn:de:0030-drops-117657},
  doi =		{10.4230/LIPIcs.ITCS.2020.80},
  annote =	{Keywords: non-malleable codes, black-box impossibility, tamper-resilient cryptography, average-case hardness}
}
Document
Cryptography from Information Loss

Authors: Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan, and Prashant Nalini Vasudevan


Abstract
Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information satisfies I(X;C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions:
1. We say that a language L has an f-reduction to a language L' for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1,…,x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that with high probability, L'(z) = f(L(x_1),L(x_2),…,L(x_m)). Suppose a language L has an f-reduction C to L' that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds:
- f is the OR function, t ≤ m/100, and L' is the same as L;
- f is the Majority function, and t ≤ m/100;
- f is the OR function, t ≤ O(m log n), and the reduction has no error.
This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result in auxiliary-input one-way functions.
2. Our second result is about the stronger notion of t-compressing f-reductions - reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).
Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.

Cite as

Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan, and Prashant Nalini Vasudevan. Cryptography from Information Loss. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 81:1-81:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ball_et_al:LIPIcs.ITCS.2020.81,
  author =	{Ball, Marshall and Boyle, Elette and Degwekar, Akshay and Deshpande, Apoorvaa and Rosen, Alon and Vaikuntanathan, Vinod and Vasudevan, Prashant Nalini},
  title =	{{Cryptography from Information Loss}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{81:1--81:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.81},
  URN =		{urn:nbn:de:0030-drops-117667},
  doi =		{10.4230/LIPIcs.ITCS.2020.81},
  annote =	{Keywords: Compression, Information Loss, One-Way Functions, Reductions, Generic Constructions}
}
Document
Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption

Authors: James Bartusek, Yuval Ishai, Aayush Jain, Fermi Ma, Amit Sahai, and Mark Zhandry


Abstract
An affine determinant program ADP: {0,1}^n → {0,1} is specified by a tuple (A,B_1,…,B_n) of square matrices over 𝔽_q and a function Eval: 𝔽_q → {0,1}, and evaluated on x ∈ {0,1}^n by computing Eval(det(A + ∑_{i∈[n]} x_i B_i)). In this work, we suggest ADPs as a new framework for building general-purpose obfuscation and witness encryption. We provide evidence to suggest that constructions following our ADP-based framework may one day yield secure, practically feasible obfuscation. As a proof-of-concept, we give a candidate ADP-based construction of indistinguishability obfuscation (iO) for all circuits along with a simple witness encryption candidate. We provide cryptanalysis demonstrating that our schemes resist several potential attacks, and leave further cryptanalysis to future work. Lastly, we explore practically feasible applications of our witness encryption candidate, such as public-key encryption with near-optimal key generation.
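
The evaluation rule is mechanical enough to transcribe directly. The following Python sketch computes Eval(det(A + ∑ x_i B_i)) over 𝔽_p for a prime p; the determinant routine and the placeholder choice of Eval (here simply "is the determinant nonzero?") are illustrative assumptions, not the paper's actual scheme.

def det_mod_p(M, p):
    # Determinant over F_p (p prime) via Gaussian elimination.
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] % p), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)  # inverse of the pivot mod p
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for k in range(c, n):
                M[r][k] = (M[r][k] - f * M[c][k]) % p
    return det % p

def eval_adp(A, Bs, x, p, Eval=lambda d: int(d != 0)):
    # Evaluate the ADP on x in {0,1}^n: Eval(det(A + sum_i x_i B_i)) over F_p.
    # The default Eval is only an illustrative stand-in.
    n = len(A)
    S = [[(A[r][c] + sum(xi * B[r][c] for xi, B in zip(x, Bs))) % p
          for c in range(n)] for r in range(n)]
    return Eval(det_mod_p(S, p))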

Cite as

James Bartusek, Yuval Ishai, Aayush Jain, Fermi Ma, Amit Sahai, and Mark Zhandry. Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 82:1-82:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bartusek_et_al:LIPIcs.ITCS.2020.82,
  author =	{Bartusek, James and Ishai, Yuval and Jain, Aayush and Ma, Fermi and Sahai, Amit and Zhandry, Mark},
  title =	{{Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{82:1--82:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.82},
  URN =		{urn:nbn:de:0030-drops-117679},
  doi =		{10.4230/LIPIcs.ITCS.2020.82},
  annote =	{Keywords: Obfuscation, Witness Encryption}
}
Document
OV Graphs Are (Probably) Hard Instances

Authors: Josh Alman and Virginia Vassilevska Williams


Abstract
A graph G on n nodes is an Orthogonal Vectors (OV) graph of dimension d if there are vectors v_1, …, v_n ∈ {0,1}^d such that nodes i and j are adjacent in G if and only if ⟨v_i,v_j⟩ = 0 over ℤ. In this paper, we study a number of basic graph algorithm problems, except that the input is given as the vectors defining an OV graph rather than a general graph. We show that for each of the following problems, an algorithm solving it faster on such OV graphs G of dimension only d=O(log n) than in the general case would refute a plausible conjecture about the time required to solve sparse MAX-k-SAT instances:
- Determining whether G contains a triangle.
- More generally, determining whether G contains a directed k-cycle for any k ≥ 3.
- Computing the square of the adjacency matrix of G over ℤ or 𝔽_2.
- Maintaining the shortest distance between two fixed nodes of G, or whether G has a perfect matching, when G is a dynamically updating OV graph.
We also prove some complementary results about OV graphs. We show that any problem which is NP-hard on constant-degree graphs is also NP-hard on OV graphs of dimension O(log n), and we give two problems which can be solved faster on OV graphs than in general: Maximum Clique, and Online Matrix-Vector Multiplication.
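
A minimal Python sketch of the definition (my illustration): build the OV graph from its defining vectors and run the naive triangle check, the baseline that the hardness results say cannot be meaningfully beaten at dimension d = O(log n).

import itertools

def ov_graph(vectors):
    # Adjacency sets of the OV graph: i ~ j iff <v_i, v_j> = 0 over Z.
    n = len(vectors)
    adj = [set() for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        if sum(a * b for a, b in zip(vectors[i], vectors[j])) == 0:
            adj[i].add(j); adj[j].add(i)
    return adj

def has_triangle(adj):
    # Naive check: some edge (i, j) has a common neighbour.
    return any(len(adj[i] & adj[j]) > 0
               for i in range(len(adj)) for j in adj[i] if j > i)

vecs = [(1, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 0), (1, 1, 0, 0)]
print(has_triangle(ov_graph(vecs)))  # True: vertices 0, 1, 2 are mutually orthogonal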

Cite as

Josh Alman and Virginia Vassilevska Williams. OV Graphs Are (Probably) Hard Instances. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 83:1-83:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{alman_et_al:LIPIcs.ITCS.2020.83,
  author =	{Alman, Josh and Vassilevska Williams, Virginia},
  title =	{{OV Graphs Are (Probably) Hard Instances}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{83:1--83:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.83},
  URN =		{urn:nbn:de:0030-drops-117686},
  doi =		{10.4230/LIPIcs.ITCS.2020.83},
  annote =	{Keywords: Orthogonal Vectors, Fine-Grained Reductions, Cycle Finding}
}
Document
Finding Skewed Subcubes Under a Distribution

Authors: Parikshit Gopalan, Roie Levin, and Udi Wieder


Abstract
Say that we are given samples from a distribution ψ over an n-dimensional space. We expect or desire ψ to behave like a product distribution (or a k-wise independent distribution over its marginals for small k). We propose the problem of enumerating/list-decoding all large subcubes where the distribution ψ deviates markedly from what we expect; we refer to such subcubes as skewed subcubes. Skewed subcubes are certificates of dependencies between small subsets of variables in ψ. We motivate this problem by showing that it arises naturally in the context of algorithmic fairness and anomaly detection. In this work we focus on the special but important case where the space is the Boolean hypercube, and the expected marginals are uniform. We show that the obvious definition of skewed subcubes can lead to intractable list sizes, and propose a better definition, that of a minimal skewed subcube: a subcube whose skew cannot be attributed to a larger subcube containing it. Our main technical contribution is a list-size bound for this definition and an algorithm to efficiently find all such subcubes. Both the bound and the algorithm rely on Fourier-analytic techniques, especially the powerful hypercontractive inequality. On the lower bounds side, we show that finding skewed subcubes is as hard as the sparse noisy parity problem, and hence our algorithms cannot be improved on substantially without a breakthrough on this problem, which is believed to be intractable. Motivated by this, we study alternate models allowing query access to ψ where finding skewed subcubes might be easier.
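
To fix intuition for "skew" in the Boolean-hypercube case with uniform expected marginals (a sketch using one simple deviation measure of my own; the paper's notion of a minimal skewed subcube is more refined): a subcube fixes some coordinates to bits, the uniform distribution gives it mass 2^{-|fixed|}, and skew measures how far ψ's empirical mass deviates from that.

import random

def subcube_skew(samples, fixed):
    # Empirical skew of the subcube fixing coordinate i to bit b for each
    # (i, b) in `fixed`, relative to the uniform expectation 2^{-|fixed|}.
    inside = sum(all(x[i] == b for i, b in fixed) for x in samples)
    return inside / len(samples) - 0.5 ** len(fixed)

# Planted dependency: coordinates 0 and 1 always equal, the rest uniform.
samples = []
for _ in range(10000):
    bit = random.randint(0, 1)
    samples.append([bit, bit] + [random.randint(0, 1) for _ in range(6)])
print(subcube_skew(samples, [(0, 1), (1, 1)]))  # about +0.25 instead of 0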

Cite as

Parikshit Gopalan, Roie Levin, and Udi Wieder. Finding Skewed Subcubes Under a Distribution. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 84:1-84:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{gopalan_et_al:LIPIcs.ITCS.2020.84,
  author =	{Gopalan, Parikshit and Levin, Roie and Wieder, Udi},
  title =	{{Finding Skewed Subcubes Under a Distribution}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{84:1--84:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.84},
  URN =		{urn:nbn:de:0030-drops-117691},
  doi =		{10.4230/LIPIcs.ITCS.2020.84},
  annote =	{Keywords: Fourier Analysis, Anomaly Detection, Algorithmic Fairness, Probability, Unsupervised Learning}
}
Document
Combinatorial Lower Bounds for 3-Query LDCs

Authors: Arnab Bhattacharyya, L. Sunil Chandran, and Suprovat Ghoshal


Abstract
A code is called a q-query locally decodable code (LDC) if there is a randomized decoding algorithm that, given an index i and a received word w close to an encoding of a message x, outputs x_i by querying only at most q coordinates of w. Understanding the tradeoffs between the dimension, length, and query complexity of LDCs is a fascinating and unresolved research challenge. In particular, for 3-query binary LDCs of dimension k and length n, the best known bounds are: 2^{k^{o(1)}} ≥ n ≥ Ω̃(k²). In this work, we take a second look at binary 3-query LDCs. We investigate a class of 3-uniform hypergraphs that are equivalent to strong binary 3-query LDCs. We prove an upper bound on the number of edges in these hypergraphs, reproducing the known lower bound of Ω̃(k²) for the length of strong 3-query LDCs. In contrast to previous work, our techniques are purely combinatorial and do not rely on a direct reduction to 2-query LDCs, opening up a potentially different approach to analyzing 3-query LDCs.

Cite as

Arnab Bhattacharyya, L. Sunil Chandran, and Suprovat Ghoshal. Combinatorial Lower Bounds for 3-Query LDCs. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 85:1-85:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bhattacharyya_et_al:LIPIcs.ITCS.2020.85,
  author =	{Bhattacharyya, Arnab and Chandran, L. Sunil and Ghoshal, Suprovat},
  title =	{{Combinatorial Lower Bounds for 3-Query LDCs}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{85:1--85:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.85},
  URN =		{urn:nbn:de:0030-drops-117704},
  doi =		{10.4230/LIPIcs.ITCS.2020.85},
  annote =	{Keywords: Coding theory, Graph theory, Hypergraphs}
}
Document
On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?

Authors: Marshall Ball, Justin Holmgren, Yuval Ishai, Tianren Liu, and Tal Malkin


Abstract
Garbling schemes, also known as decomposable randomized encodings (DRE), have found many applications in cryptography. However, despite a large body of work on constructing such schemes, very little is known about their limitations. We initiate a systematic study of the DRE complexity of Boolean functions, obtaining the following main results:
- Near-quadratic lower bounds. We use a classical lower bound technique of Nečiporuk [Dokl. Akad. Nauk SSSR '66] to show an Ω(n²/log n) lower bound on the size of any DRE for many explicit Boolean functions. For some natural functions, we obtain a corresponding upper bound, thus settling their DRE complexity up to polylogarithmic factors. Prior to our work, no superlinear lower bounds were known, even for non-explicit functions.
- Garbling-friendly PRFs. We show that any exponentially secure PRF has Ω(n²/log n) DRE size, and present a plausible candidate for a "garbling-optimal" PRF that nearly meets this bound. This candidate establishes a barrier for super-quadratic DRE lower bounds via natural proof techniques. In contrast, we show a candidate for a weak PRF with near-exponential security and linear DRE size.
Our results establish several qualitative separations, including near-quadratic separations between computational and information-theoretic DRE size of Boolean functions, and between DRE size of weak vs. strong PRFs.

Cite as

Marshall Ball, Justin Holmgren, Yuval Ishai, Tianren Liu, and Tal Malkin. On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 86:1-86:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ball_et_al:LIPIcs.ITCS.2020.86,
  author =	{Ball, Marshall and Holmgren, Justin and Ishai, Yuval and Liu, Tianren and Malkin, Tal},
  title =	{{On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{86:1--86:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.86},
  URN =		{urn:nbn:de:0030-drops-117714},
  doi =		{10.4230/LIPIcs.ITCS.2020.86},
  annote =	{Keywords: Randomized Encoding, Private Simultaneous Messages}
}
