LIPIcs, Volume 185

12th Innovations in Theoretical Computer Science Conference (ITCS 2021)



Event

ITCS 2021, January 6-8, 2021, Virtual Conference

Editor

James R. Lee
  • University of Washington, Seattle, USA


Documents

Document
Complete Volume
LIPIcs, Volume 185, ITCS 2021, Complete Volume

Authors: James R. Lee


Abstract
LIPIcs, Volume 185, ITCS 2021, Complete Volume

Cite as

12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 1-1550, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{lee:LIPIcs.ITCS.2021,
  title =	{{LIPIcs, Volume 185, ITCS 2021, Complete Volume}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{1--1550},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021},
  URN =		{urn:nbn:de:0030-drops-135381},
  doi =		{10.4230/LIPIcs.ITCS.2021},
  annote =	{Keywords: LIPIcs, Volume 185, ITCS 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: James R. Lee


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

James R. Lee. Front Matter, Table of Contents, Preface, Conference Organization. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{lee:LIPIcs.ITCS.2021.0,
  author =	{Lee, James R.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.0},
  URN =		{urn:nbn:de:0030-drops-135397},
  doi =		{10.4230/LIPIcs.ITCS.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
The Entropy of Lies: Playing Twenty Questions with a Liar

Authors: Yuval Dagan, Yuval Filmus, Daniel Kane, and Shay Moran


Abstract
"Twenty questions" is a guessing game played by two players: Bob thinks of an integer between 1 and n, and Alice’s goal is to recover it using a minimal number of Yes/No questions. Shannon’s entropy has a natural interpretation in this context. It characterizes the average number of questions used by an optimal strategy in the distributional variant of the game: let μ be a distribution over [n], then the average number of questions used by an optimal strategy that recovers x∼ μ is between H(μ) and H(μ)+1. We consider an extension of this game where at most k questions can be answered falsely. We extend the classical result by showing that an optimal strategy uses roughly H(μ) + k H_2(μ) questions, where H_2(μ) = ∑_x μ(x)log log 1/μ(x). This also generalizes a result by Rivest et al. (1980) for the uniform distribution. Moreover, we design near optimal strategies that only use comparison queries of the form "x ≤ c?" for c ∈ [n]. The usage of comparison queries lends itself naturally to the context of sorting, where we derive sorting algorithms in the presence of adversarial noise.

Cite as

Yuval Dagan, Yuval Filmus, Daniel Kane, and Shay Moran. The Entropy of Lies: Playing Twenty Questions with a Liar. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 1:1-1:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dagan_et_al:LIPIcs.ITCS.2021.1,
  author =	{Dagan, Yuval and Filmus, Yuval and Kane, Daniel and Moran, Shay},
  title =	{{The Entropy of Lies: Playing Twenty Questions with a Liar}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{1:1--1:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.1},
  URN =		{urn:nbn:de:0030-drops-135400},
  doi =		{10.4230/LIPIcs.ITCS.2021.1},
  annote =	{Keywords: entropy, twenty questions, algorithms, sorting}
}
Document
Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?)

Authors: Russell Impagliazzo and Sam McGuire


Abstract
Computational pseudorandomness studies the extent to which a random variable Z looks like the uniform distribution according to a class of tests ℱ. Computational entropy generalizes computational pseudorandomness by studying the extent to which a random variable looks like a high-entropy distribution. There are different formal definitions of computational entropy with different advantages for different applications. Because of this, it is of interest to understand when these definitions are equivalent. We consider three notions of computational entropy which are known to be equivalent when the test class ℱ is closed under taking majorities. This equivalence constitutes (essentially) the so-called dense model theorem of Green and Tao (and later made explicit by Tao-Ziegler, Reingold et al., and Gowers). The dense model theorem plays a key role in Green and Tao’s proof that the primes contain arbitrarily long arithmetic progressions and has since been connected to a surprisingly wide range of topics in mathematics and computer science, including cryptography, computational complexity, combinatorics and machine learning. We show that, in different situations where ℱ is not closed under majority, this equivalence fails. This in turn provides examples where the dense model theorem is false.

Cite as

Russell Impagliazzo and Sam McGuire. Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 2:1-2:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{impagliazzo_et_al:LIPIcs.ITCS.2021.2,
  author =	{Impagliazzo, Russell and McGuire, Sam},
  title =	{{Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?)}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{2:1--2:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.2},
  URN =		{urn:nbn:de:0030-drops-135417},
  doi =		{10.4230/LIPIcs.ITCS.2021.2},
  annote =	{Keywords: Computational entropy, dense model theorem, coin problem}
}
Document
Algorithmic Persuasion with Evidence

Authors: Martin Hoefer, Pasin Manurangsi, and Alexandros Psomas


Abstract
We consider a game of persuasion with evidence between a sender and a receiver. The sender has private information. By presenting evidence on the information, the sender wishes to persuade the receiver to take a single action (e.g., hire a job candidate, or convict a defendant). The sender’s utility depends solely on whether or not the receiver takes the action. The receiver’s utility depends on both the action and the sender’s private information. We study three natural variations. First, we consider sequential equilibria of the game without commitment power. Second, we consider a persuasion variant, where the sender commits to a signaling scheme and then the receiver, after seeing the evidence, takes the action or not. Third, we study a delegation variant, where the receiver first commits to taking the action if presented with certain evidence, and then the sender presents evidence to maximize the probability that the action is taken. We study these variants through the computational lens, and give hardness results, optimal approximation algorithms, and polynomial-time algorithms for special cases. Among our results is an approximation algorithm that rounds a semidefinite program and might be of independent interest, since, to the best of our knowledge, it is the first such approximation algorithm for a natural problem in algorithmic economics.

Cite as

Martin Hoefer, Pasin Manurangsi, and Alexandros Psomas. Algorithmic Persuasion with Evidence. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hoefer_et_al:LIPIcs.ITCS.2021.3,
  author =	{Hoefer, Martin and Manurangsi, Pasin and Psomas, Alexandros},
  title =	{{Algorithmic Persuasion with Evidence}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.3},
  URN =		{urn:nbn:de:0030-drops-135420},
  doi =		{10.4230/LIPIcs.ITCS.2021.3},
  annote =	{Keywords: Bayesian Persuasion, Semidefinite Programming, Approximation Algorithms}
}
Document
The Complexity of Finding Fair Independent Sets in Cycles

Authors: Ishay Haviv


Abstract
Let G be a cycle graph and let V₁,…,V_m be a partition of its vertex set into m sets. An independent set S of G is said to fairly represent the partition if |S ∩ V_i| ≥ 1/2⋅|V_i| - 1 for all i ∈ [m]. It is known that for every cycle and every partition of its vertex set, there exists an independent set that fairly represents the partition (Aharoni et al., A Journey through Discrete Math., 2017). We prove that the problem of finding such an independent set is PPA-complete. As an application, we show that the problem of finding a monochromatic edge in a Schrijver graph, given a succinct representation of a coloring that uses fewer colors than its chromatic number, is PPA-complete as well. The work is motivated by the computational aspects of the "cycle plus triangles" problem and of its extensions.
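
To make the fairness condition concrete, here is a minimal sketch (on a hypothetical toy instance) that verifies whether a given set is independent in the cycle and fairly represents a partition; the paper shows that actually finding such a set is PPA-complete.

def is_independent_in_cycle(S, n):
    # S is independent in the n-vertex cycle iff no two cyclically adjacent
    # vertices both belong to S.
    return all((v + 1) % n not in S for v in S)

def fairly_represents(S, partition):
    # Fairness condition from the abstract: |S ∩ V_i| >= |V_i|/2 - 1 for all i.
    return all(len(S & V) >= len(V) / 2 - 1 for V in partition)

n = 6
partition = [{0, 1}, {2, 3}, {4, 5}]  # hypothetical partition of the 6-cycle
S = {0, 2, 4}
print(is_independent_in_cycle(S, n) and fairly_represents(S, partition))  # True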

Cite as

Ishay Haviv. The Complexity of Finding Fair Independent Sets in Cycles. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{haviv:LIPIcs.ITCS.2021.4,
  author =	{Haviv, Ishay},
  title =	{{The Complexity of Finding Fair Independent Sets in Cycles}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{4:1--4:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.4},
  URN =		{urn:nbn:de:0030-drops-135431},
  doi =		{10.4230/LIPIcs.ITCS.2021.4},
  annote =	{Keywords: Fair independent sets in cycles, the complexity class \{PPA\}, Schrijver graphs}
}
Document
Sharp Threshold Rates for Random Codes

Authors: Venkatesan Guruswami, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters


Abstract
Suppose that 𝒫 is a property that may be satisfied by a random code C ⊂ Σⁿ. For example, for some p ∈ (0,1), 𝒫 might be the property that there exist three elements of C that lie in some Hamming ball of radius pn. We say that R^* is the threshold rate for 𝒫 if a random code of rate R^* + ε is very likely to satisfy 𝒫, while a random code of rate R^* - ε is very unlikely to satisfy 𝒫. While random codes are well-studied in coding theory, even the threshold rates for relatively simple properties like the one above are not well understood. We characterize threshold rates for a rich class of properties. These properties, like the example above, are defined by the inclusion of specific sets of codewords which are also suitably "symmetric." For properties in this class, we show that the threshold rate is in fact equal to the lower bound that a simple first-moment calculation obtains. Our techniques not only pin down the threshold rate for the property 𝒫 above, they give sharp bounds on the threshold rate for list-recovery in several parameter regimes, as well as an efficient algorithm for estimating the threshold rates for list-recovery in general.

Cite as

Venkatesan Guruswami, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Sharp Threshold Rates for Random Codes. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.ITCS.2021.5,
  author =	{Guruswami, Venkatesan and Mosheiff, Jonathan and Resch, Nicolas and Silas, Shashwat and Wootters, Mary},
  title =	{{Sharp Threshold Rates for Random Codes}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.5},
  URN =		{urn:nbn:de:0030-drops-135446},
  doi =		{10.4230/LIPIcs.ITCS.2021.5},
  annote =	{Keywords: Coding theory, Random codes, Sharp thresholds}
}
Document
Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation

Authors: Cameron Musco, Christopher Musco, and David P. Woodruff


Abstract
In the masked low-rank approximation problem, one is given a data matrix A ∈ ℝ^{n × n} and a binary mask matrix W ∈ {0,1}^{n × n}. The goal is to find a rank-k matrix L for which: cost(L) := ∑_{i=1}^n ∑_{j=1}^n W_{i,j} ⋅ (A_{i,j} - L_{i,j})² ≤ OPT + ε ‖A‖_F², where OPT = min_{rank-k L̂} cost(L̂) and ε is a given error parameter. Depending on the choice of W, the above problem captures factor analysis, low-rank plus diagonal decomposition, robust PCA, low-rank matrix completion, low-rank plus block matrix approximation, low-rank recovery from monotone missing data, and a number of other important problems. Many of these problems are NP-hard, and while algorithms with provable guarantees are known in some cases, they either 1) run in time n^Ω(k²/ε) or 2) make strong assumptions, for example, that A is incoherent or that the entries in W are chosen independently and uniformly at random. In this work, we show that a common polynomial-time heuristic, which simply sets A to 0 where W is 0, and then finds a standard low-rank approximation, yields bicriteria approximation guarantees for this problem. In particular, for rank k' > k depending on the public coin partition number of W, the heuristic outputs a rank-k' matrix L with cost(L) ≤ OPT + ε ‖A‖_F². This partition number is in turn bounded by the randomized communication complexity of W, when interpreted as a two-player communication matrix. For many important cases, including all those listed above, this yields bicriteria approximation guarantees with rank k' = k ⋅ poly(log n/ε). Beyond this result, we show that different notions of communication complexity yield bicriteria algorithms for natural variants of masked low-rank approximation. For example, multi-player number-in-hand communication complexity connects to masked tensor decomposition, and non-deterministic communication complexity to masked Boolean low-rank factorization.
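
A minimal sketch of the heuristic itself, on hypothetical random data (note the paper’s guarantee is bicriteria: it holds for a larger rank k', not the target rank k used here):

import numpy as np

def masked_lowrank_heuristic(A, W, k):
    # The heuristic from the abstract: set A to 0 wherever W is 0, then take a
    # standard rank-k truncated SVD of the masked matrix.
    U, s, Vt = np.linalg.svd(W * A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def masked_cost(A, W, L):
    # cost(L) = sum_{i,j} W_{i,j} * (A_{i,j} - L_{i,j})^2
    return float(np.sum(W * (A - L) ** 2))

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
W = (rng.random((50, 50)) > 0.3).astype(float)  # hypothetical random mask
print(masked_cost(A, W, masked_lowrank_heuristic(A, W, k=5)))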

Cite as

Cameron Musco, Christopher Musco, and David P. Woodruff. Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{musco_et_al:LIPIcs.ITCS.2021.6,
  author =	{Musco, Cameron and Musco, Christopher and Woodruff, David P.},
  title =	{{Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.6},
  URN =		{urn:nbn:de:0030-drops-135452},
  doi =		{10.4230/LIPIcs.ITCS.2021.6},
  annote =	{Keywords: low-rank approximation, communication complexity, weighted low-rank approximation, bicriteria approximation algorithms}
}
Document
Pseudorandom Generators for Unbounded-Width Permutation Branching Programs

Authors: William M. Hoza, Edward Pyne, and Salil Vadhan


Abstract
We prove that the Impagliazzo-Nisan-Wigderson [Impagliazzo et al., 1994] pseudorandom generator (PRG) fools ordered (read-once) permutation branching programs of unbounded width with a seed length of Õ(log d + log n ⋅ log(1/ε)), assuming the program has only one accepting vertex in the final layer. Here, n is the length of the program, d is the degree (equivalently, the alphabet size), and ε is the error of the PRG. In contrast, we show that a randomly chosen generator requires seed length Ω(n log d) to fool such unbounded-width programs. Thus, this is an unusual case where an explicit construction is "better than random." Except when the program’s width w is very small, this is an improvement over prior work. For example, when w = poly(n) and d = 2, the best prior PRG for permutation branching programs was simply Nisan’s PRG [Nisan, 1992], which fools general ordered branching programs with seed length O(log(wn/ε) log n). We prove a seed length lower bound of Ω̃(log d + log n ⋅ log(1/ε)) for fooling these unbounded-width programs, showing that our seed length is near-optimal. In fact, when ε ≤ 1/log n, our seed length is within a constant factor of optimal. Our analysis of the INW generator uses the connection between the PRG and the derandomized square of Rozenman and Vadhan [Rozenman and Vadhan, 2005] and the recent analysis of the latter in terms of unit-circle approximation by Ahmadinejad et al. [Ahmadinejad et al., 2020].

Cite as

William M. Hoza, Edward Pyne, and Salil Vadhan. Pseudorandom Generators for Unbounded-Width Permutation Branching Programs. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hoza_et_al:LIPIcs.ITCS.2021.7,
  author =	{Hoza, William M. and Pyne, Edward and Vadhan, Salil},
  title =	{{Pseudorandom Generators for Unbounded-Width Permutation Branching Programs}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.7},
  URN =		{urn:nbn:de:0030-drops-135464},
  doi =		{10.4230/LIPIcs.ITCS.2021.7},
  annote =	{Keywords: Pseudorandom generators, permutation branching programs}
}
Document
Pipeline Interventions

Authors: Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, and Juba Ziani


Abstract
We introduce the pipeline intervention problem, defined by a layered directed acyclic graph and a set of stochastic matrices governing transitions between successive layers. The graph is a stylized model for how people from different populations are presented opportunities, eventually leading to some reward. In our model, individuals are born into an initial position (i.e. some node in the first layer of the graph) according to a fixed probability distribution, and then stochastically progress through the graph according to the transition matrices, until they reach a node in the final layer of the graph; each node in the final layer has a reward associated with it. The pipeline intervention problem asks how to best make costly changes to the transition matrices governing people’s stochastic transitions through the graph, subject to a budget constraint. We consider two objectives: social welfare maximization, and a fairness-motivated maximin objective that seeks to maximize the value to the population (starting node) with the least expected value. We consider two variants of the maximin objective that turn out to be distinct, depending on whether we demand a deterministic solution or allow randomization. For each objective, we give an efficient approximation algorithm (an additive FPTAS) for constant width networks. We also tightly characterize the "price of fairness" in our setting: the ratio between the highest achievable social welfare and the social welfare consistent with a maximin optimal solution. Finally, we show that for polynomial width networks, approximating the maximin objective to within any constant factor is NP-hard, even for networks of constant depth. This shows that the restriction on the width in our positive results is essential.
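
As a sketch of the underlying model (just the expected-value computation on a hypothetical two-layer instance, not the budgeted intervention algorithm):

import numpy as np

def expected_values(transitions, rewards):
    # Expected reward for an individual born at each node of the first layer:
    # back-propagate the final-layer rewards through the row-stochastic matrices.
    v = np.asarray(rewards, dtype=float)
    for T in reversed(transitions):
        v = np.asarray(T, dtype=float) @ v
    return v

T1 = [[0.7, 0.3], [0.2, 0.8]]  # hypothetical layer 1 -> layer 2 transitions
T2 = [[0.9, 0.1], [0.5, 0.5]]  # hypothetical layer 2 -> final layer transitions
v = expected_values([T1, T2], rewards=[1.0, 0.0])
print(v)               # per-start-node expected reward; min(v) is the maximin objective
print(v @ [0.5, 0.5])  # social welfare under a uniform birth distribution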

Cite as

Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, and Juba Ziani. Pipeline Interventions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{arunachaleswaran_et_al:LIPIcs.ITCS.2021.8,
  author =	{Arunachaleswaran, Eshwar Ram and Kannan, Sampath and Roth, Aaron and Ziani, Juba},
  title =	{{Pipeline Interventions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.8},
  URN =		{urn:nbn:de:0030-drops-135478},
  doi =		{10.4230/LIPIcs.ITCS.2021.8},
  annote =	{Keywords: Interventions for fairness, fairness in navigating life paths, social welfare, maximin welfare, budget-constrained optimization, hardness of approximation}
}
Document
A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits

Authors: Mrinal Kumar and Ben Lee Volk


Abstract
We show that there is an equation of degree at most poly(n) for the (Zariski closure of the) set of the non-rigid matrices: that is, we show that for every large enough field 𝔽, there is a non-zero n²-variate polynomial P ∈ 𝔽[x_{1, 1}, …, x_{n, n}] of degree at most poly(n) such that every matrix M which can be written as a sum of a matrix of rank at most n/100 and a matrix of sparsity at most n²/100 satisfies P(M) = 0. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer and Landsberg [Fulvio Gesmundo et al., 2016] and improves the best upper bound known for this problem down from exp(n²) [Abhinav Kumar et al., 2014; Fulvio Gesmundo et al., 2016] to poly(n). We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices M such that the linear transformation represented by M can be computed by an algebraic circuit with at most n²/200 edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded. Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [Amir Shpilka and Ilya Volkovich, 2015] to construct low degree "universal" maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension counting argument to show that any such polynomial map has a low degree annihilating polynomial completes the proof. As a corollary, we show that any derandomization of the polynomial identity testing problem will imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [Valentine Kabanets and Russell Impagliazzo, 2004].

Cite as

Mrinal Kumar and Ben Lee Volk. A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 9:1-9:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kumar_et_al:LIPIcs.ITCS.2021.9,
  author =	{Kumar, Mrinal and Volk, Ben Lee},
  title =	{{A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{9:1--9:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.9},
  URN =		{urn:nbn:de:0030-drops-135486},
  doi =		{10.4230/LIPIcs.ITCS.2021.9},
  annote =	{Keywords: Rigid Matrices, Linear Circuits, Degree Bounds, Circuit Lower Bounds}
}
Document
The Strongish Planted Clique Hypothesis and Its Consequences

Authors: Pasin Manurangsi, Aviad Rubinstein, and Tselil Schramm


Abstract
We formulate a new hardness assumption, the Strongish Planted Clique Hypothesis (SPCH), which postulates that any algorithm for planted clique must run in time n^Ω(log n) (so that the state-of-the-art running time of n^O(log n) is optimal up to a constant in the exponent). We provide two sets of applications of the new hypothesis. First, we show that SPCH implies (nearly) tight inapproximability results for the following well-studied problems in terms of the parameter k: Densest k-Subgraph, Smallest k-Edge Subgraph, Densest k-Subhypergraph, Steiner k-Forest, and Directed Steiner Network with k terminal pairs. For example, we show, under SPCH, that no polynomial time algorithm achieves o(k)-approximation for Densest k-Subgraph. This inapproximability ratio improves upon the previous best k^o(1) factor from (Chalermsook et al., FOCS 2017). Furthermore, our lower bounds hold even against fixed-parameter tractable algorithms with parameter k. Our second application focuses on the complexity of graph pattern detection. For both induced and non-induced graph pattern detection, we prove hardness results under SPCH, improving the running time lower bounds obtained by (Dalirrooyfard et al., STOC 2019) under the Exponential Time Hypothesis.

Cite as

Pasin Manurangsi, Aviad Rubinstein, and Tselil Schramm. The Strongish Planted Clique Hypothesis and Its Consequences. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 10:1-10:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{manurangsi_et_al:LIPIcs.ITCS.2021.10,
  author =	{Manurangsi, Pasin and Rubinstein, Aviad and Schramm, Tselil},
  title =	{{The Strongish Planted Clique Hypothesis and Its Consequences}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{10:1--10:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.10},
  URN =		{urn:nbn:de:0030-drops-135491},
  doi =		{10.4230/LIPIcs.ITCS.2021.10},
  annote =	{Keywords: Planted Clique, Densest k-Subgraph, Hardness of Approximation}
}
Document
Sample Efficient Identity Testing and Independence Testing of Quantum States

Authors: Nengkun Yu


Abstract
In this paper, we study the quantum identity testing problem, i.e., testing whether two given quantum states are identical, and the quantum independence testing problem, i.e., testing whether a given multipartite quantum state is in tensor product form. For the quantum identity testing problem of a 𝒟(ℂ^d) system, we provide a deterministic measurement scheme that uses 𝒪(d²/ε²) copies via independent measurements, with d being the dimension of the state and ε being the additive error. For the independence testing problem of a 𝒟(ℂ^{d_1} ⊗ ℂ^{d_2} ⊗ ⋯ ⊗ ℂ^{d_m}) system, we show that the sample complexity is Θ̃((Π_{i = 1}^m d_i)/ε²) via collective measurements, and 𝒪((Π_{i = 1}^m d_i²)/ε²) via independent measurements. If a randomized choice of independent measurements is allowed, the sample complexity is Θ(d^{3/2}/ε²) for the quantum identity testing problem, and Θ̃((Π_{i = 1}^m d_i^{3/2})/ε²) for the quantum independence testing problem.

Cite as

Nengkun Yu. Sample Efficient Identity Testing and Independence Testing of Quantum States. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{yu:LIPIcs.ITCS.2021.11,
  author =	{Yu, Nengkun},
  title =	{{Sample Efficient Identity Testing and Independence Testing of Quantum States}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.11},
  URN =		{urn:nbn:de:0030-drops-135504},
  doi =		{10.4230/LIPIcs.ITCS.2021.11},
  annote =	{Keywords: Quantum property testing}
}
Document
Understanding the Relative Strength of QBF CDCL Solvers and QBF Resolution

Authors: Olaf Beyersdorff and Benjamin Böhm


Abstract
QBF solvers implementing the QCDCL paradigm are powerful algorithms that successfully tackle many computationally complex applications. However, our theoretical understanding of the strength and limitations of these QCDCL solvers is very limited. In this paper we propose to formally model QCDCL solvers as proof systems. We define different policies that can be used for decision heuristics and unit propagation and that give rise to a number of sound and complete QBF proof systems (and hence new QCDCL algorithms). With respect to the standard policies used in practical QCDCL solving, we show that the corresponding QCDCL proof system is incomparable (via exponential separations) to Q-resolution, the classical QBF resolution system used in the literature. This is in stark contrast to the propositional setting, where CDCL and resolution are known to be p-equivalent. This raises the question of which formulas are hard for standard QCDCL, since Q-resolution lower bounds do not necessarily apply to QCDCL, as we show here. In answer to this question we prove several lower bounds for QCDCL, including exponential lower bounds for a large class of random QBFs. We also introduce a strengthening of the decision heuristic used in classical QCDCL, which does not necessarily decide variables in the order of the prefix, but still allows learning asserting clauses. We show that with this decision policy, QCDCL can be exponentially faster on some formulas. We further exhibit a QCDCL proof system that is p-equivalent to Q-resolution. In comparison to classical QCDCL, this new QCDCL version adapts both the decision and unit propagation policies.

Cite as

Olaf Beyersdorff and Benjamin Böhm. Understanding the Relative Strength of QBF CDCL Solvers and QBF Resolution. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{beyersdorff_et_al:LIPIcs.ITCS.2021.12,
  author =	{Beyersdorff, Olaf and B\"{o}hm, Benjamin},
  title =	{{Understanding the Relative Strength of QBF CDCL Solvers and QBF Resolution}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.12},
  URN =		{urn:nbn:de:0030-drops-135519},
  doi =		{10.4230/LIPIcs.ITCS.2021.12},
  annote =	{Keywords: CDCL, QBF, QCDCL, proof complexity, resolution, Q-resolution}
}
Document
The Quantum Supremacy Tsirelson Inequality

Authors: William Kretschmer


Abstract
A leading proposal for verifying near-term quantum supremacy experiments on noisy random quantum circuits is linear cross-entropy benchmarking. For a quantum circuit C on n qubits and a sample z ∈ {0,1}ⁿ, the benchmark involves computing |⟨z|C|0ⁿ⟩|², i.e., the probability of measuring z from the output distribution of C on the all zeros input. Under a strong conjecture about the classical hardness of estimating output probabilities of quantum circuits, no polynomial-time classical algorithm given C can output a string z such that |⟨z|C|0ⁿ⟩|² is substantially larger than 1/(2ⁿ) (Aaronson and Gunn, 2019). On the other hand, for a random quantum circuit C, sampling z from the output distribution of C achieves |⟨z|C|0ⁿ⟩|² ≈ 2/(2ⁿ) on average (Arute et al., 2019). In analogy with the Tsirelson inequality from quantum nonlocal correlations, we ask: can a polynomial-time quantum algorithm do substantially better than 2/(2ⁿ)? We study this question in the query (or black box) model, where the quantum algorithm is given oracle access to C. We show that, for any ε ≥ 1/poly(n), outputting a sample z such that |⟨z|C|0ⁿ⟩|² ≥ (2 + ε)/2ⁿ on average requires at least Ω((2^{n/4})/poly(n)) queries to C, but not more than O(2^{n/3}) queries to C, if C is either a Haar-random n-qubit unitary, or a canonical state preparation oracle for a Haar-random n-qubit state. We also show that when C samples from the Fourier distribution of a random Boolean function, the naive algorithm that samples from C is the optimal 1-query algorithm for maximizing |⟨z|C|0ⁿ⟩|² on average.
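
A small numerical illustration of the 2/(2ⁿ) baseline (a hedged sketch that materializes a Haar-random unitary explicitly, so it is feasible only for small n):

import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary: QR-decompose a complex Gaussian matrix, then fix phases.
    Z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

n = 8
rng = np.random.default_rng(1)
p = np.abs(haar_unitary(2 ** n, rng)[:, 0]) ** 2  # p(z) = |<z|C|0^n>|^2
# Sampling z from C's output distribution achieves E[p(z)] = sum_z p(z)^2:
print(2 ** n * np.sum(p ** 2))  # close to 2, i.e. the 2/2^n benchmark value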

Cite as

William Kretschmer. The Quantum Supremacy Tsirelson Inequality. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kretschmer:LIPIcs.ITCS.2021.13,
  author =	{Kretschmer, William},
  title =	{{The Quantum Supremacy Tsirelson Inequality}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.13},
  URN =		{urn:nbn:de:0030-drops-135524},
  doi =		{10.4230/LIPIcs.ITCS.2021.13},
  annote =	{Keywords: quantum supremacy, quantum query complexity, random circuit sampling}
}
Document
Approximately Strategyproof Tournament Rules in the Probabilistic Setting

Authors: Kimberly Ding and S. Matthew Weinberg


Abstract
We consider the manipulability of tournament rules, which map the results of binom(n,2) pairwise matches to a single winner. Prior work designs simple tournament rules such that no pair of teams can manipulate the outcome of their match to improve their probability of winning by more than 1/3, and this is the best possible for any Condorcet-consistent tournament rule (one which selects an undefeated team whenever one exists) [Jon Schneider et al., 2017; Ariel Schvartzman et al., 2020]. These lower bounds require the manipulators to know precisely the outcome of all future matches. We take a beyond-worst-case view and instead consider tournaments which are "close to uniform": the outcomes of all matches are independent, and no team is believed to win any match with probability exceeding 1/2+ε. We show that Randomized Single Elimination Bracket [Jon Schneider et al., 2017] and a new tournament rule we term Randomized Death Match have the property that no pair of teams can manipulate the outcome of their match to improve their probability of winning by more than ε/3 + 2ε²/3, for all ε, and this is again the best possible for any Condorcet-consistent tournament rule. Our main technical contribution is a recursive framework for analyzing the manipulability of certain forms of tournament rules. In addition to our main results, this view helps streamline the previous analysis of Randomized Single Elimination Bracket, and may be of independent interest.
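
A hedged sketch of a death-match-style rule on a hypothetical 3-cycle instance (the precise Randomized Death Match rule is defined in the paper; this is one natural reading):

import random

def death_match(teams, beats, rng=random):
    # Repeatedly pick two uniformly random remaining teams, play their match,
    # and eliminate the loser; the last team standing wins. Any such rule is
    # Condorcet-consistent, since an undefeated team can never be eliminated.
    alive = list(teams)
    while len(alive) > 1:
        a, b = rng.sample(alive, 2)
        alive.remove(b if beats[(a, b)] else a)
    return alive[0]

beats = {(0, 1): True, (1, 0): False,  # 0 beats 1
         (1, 2): True, (2, 1): False,  # 1 beats 2
         (2, 0): True, (0, 2): False}  # 2 beats 0: a 3-cycle, no undefeated team
print(death_match([0, 1, 2], beats))   # by symmetry, each team wins w.p. 1/3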

Cite as

Kimberly Ding and S. Matthew Weinberg. Approximately Strategyproof Tournament Rules in the Probabilistic Setting. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 14:1-14:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ding_et_al:LIPIcs.ITCS.2021.14,
  author =	{Ding, Kimberly and Weinberg, S. Matthew},
  title =	{{Approximately Strategyproof Tournament Rules in the Probabilistic Setting}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{14:1--14:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.14},
  URN =		{urn:nbn:de:0030-drops-135532},
  doi =		{10.4230/LIPIcs.ITCS.2021.14},
  annote =	{Keywords: Tournaments, Incentive Compatibility, Recursive Analysis, Social Choice Theory}
}
Document
Even the Easiest(?) Graph Coloring Problem Is Not Easy in Streaming!

Authors: Anup Bhattacharya, Arijit Bishnu, Gopinath Mishra, and Anannya Upasana


Abstract
We study a graph coloring problem that is otherwise easy in the RAM model but becomes quite non-trivial in the one-pass streaming model. In contrast to previous graph coloring problems in streaming that try to find an assignment of colors to vertices, our main work is on estimating the number of conflicting or monochromatic edges given a coloring function that is streaming along with the graph; we call the problem Conflict-Est. The coloring function on a vertex can be read or accessed only when the vertex is revealed in the stream. If we need the color of a vertex that has streamed past, then that color, along with its vertex, has to be stored explicitly. We provide algorithms for a graph that is streaming in different variants of the vertex arrival one-pass streaming model, viz. the Vertex Arrival (VA), Vertex Arrival With Degree Oracle (VAdeg), and Vertex Arrival in Random Order (VArand) models, with a special focus on the random order model. We also provide matching lower bounds for most of the cases. The mainstay of our work is in showing that the properties of a random order stream can be exploited to design efficient streaming algorithms for estimating the number of monochromatic edges. We have also obtained a lower bound, though not matching the upper bound, for the random order model. Among all three models vis-à-vis this problem, we can show a clear separation of power in favor of the VArand model.
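
To pin down the quantity being estimated, here is a minimal exact baseline in a vertex-arrival stream, under a hypothetical stream encoding; it stores every color seen so far, which is precisely what the paper’s sublinear-space algorithms avoid:

def conflict_est_exact(stream):
    # Each stream item: (vertex, its color, its neighbors among earlier vertices).
    colors, monochromatic = {}, 0
    for v, c, earlier_neighbors in stream:
        colors[v] = c
        monochromatic += sum(1 for u in earlier_neighbors if colors[u] == c)
    return monochromatic

stream = [(1, 'red', []), (2, 'blue', [1]), (3, 'red', [1, 2])]
print(conflict_est_exact(stream))  # edge {1,3} is monochromatic -> 1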

Cite as

Anup Bhattacharya, Arijit Bishnu, Gopinath Mishra, and Anannya Upasana. Even the Easiest(?) Graph Coloring Problem Is Not Easy in Streaming!. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 15:1-15:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.ITCS.2021.15,
  author =	{Bhattacharya, Anup and Bishnu, Arijit and Mishra, Gopinath and Upasana, Anannya},
  title =	{{Even the Easiest(?) Graph Coloring Problem Is Not Easy in Streaming!}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{15:1--15:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.15},
  URN =		{urn:nbn:de:0030-drops-135544},
  doi =		{10.4230/LIPIcs.ITCS.2021.15},
  annote =	{Keywords: Streaming, random ordering, graph coloring, estimation, lower bounds}
}
Document
The Variable-Processor Cup Game

Authors: William Kuszmaul and Alek Westover


Abstract
The problem of scheduling tasks on p processors so that no task ever gets too far behind is often described as a game with cups and water. In the p-processor cup game on n cups, there are two players, a filler and an emptier, that take turns adding and removing water from a set of n cups. In each turn, the filler adds p units of water to the cups, placing at most 1 unit of water in each cup, and then the emptier selects p cups to remove up to 1 unit of water from. The emptier’s goal is to minimize the backlog, which is the height of the fullest cup. The p-processor cup game has been studied in many different settings, dating back to the late 1960s. All of the past work shares one common assumption: that p is fixed. This paper initiates the study of what happens when the number of available processors p varies over time, resulting in what we call the variable-processor cup game. Remarkably, the optimal bounds for the variable-processor cup game differ dramatically from its classical counterpart. Whereas the p-processor cup game has optimal backlog Θ(log n), the variable-processor game has optimal backlog Θ(n). Moreover, for any constant ε > 0, there is an efficient filling strategy that yields backlog Ω(n^{1 - ε}) in quasi-polynomial time against any deterministic emptying strategy. We additionally show that straightforward uses of randomization cannot be used to help the emptier. In particular, for any positive constant Δ, and any Δ-greedy-like randomized emptying algorithm 𝒜, there is a filling strategy that achieves backlog Ω(n^{1 - ε}) against 𝒜 in quasi-polynomial time.
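
A minimal simulation of one round of the rules, with a greedy emptier (a sketch only; the emptier strategies analyzed in the paper are more sophisticated):

def cup_game_round(cups, fills, p):
    # Filler adds `fills` (at most 1 unit per cup, at most p units in total);
    # then a greedy emptier removes up to 1 unit from each of the p fullest cups.
    assert all(0 <= f <= 1 for f in fills) and sum(fills) <= p
    for i, f in enumerate(fills):
        cups[i] += f
    for i in sorted(range(len(cups)), key=lambda j: -cups[j])[:p]:
        cups[i] = max(0.0, cups[i] - 1.0)
    return max(cups)  # backlog after this round

cups = [0.0] * 4
for _ in range(10):
    backlog = cup_game_round(cups, fills=[0.5, 0.5, 0.5, 0.5], p=2)
print(backlog)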

Cite as

William Kuszmaul and Alek Westover. The Variable-Processor Cup Game. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 16:1-16:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kuszmaul_et_al:LIPIcs.ITCS.2021.16,
  author =	{Kuszmaul, William and Westover, Alek},
  title =	{{The Variable-Processor Cup Game}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{16:1--16:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.16},
  URN =		{urn:nbn:de:0030-drops-135559},
  doi =		{10.4230/LIPIcs.ITCS.2021.16},
  annote =	{Keywords: scheduling, cup games, online algorithms, lower bounds}
}
Document
Comparison Graphs: A Unified Method for Uniformity Testing

Authors: Uri Meir


Abstract
Distribution testing can be described as follows: q samples are drawn from some unknown distribution P over a known domain [n]. After the sampling process, a decision must be made about whether P satisfies some property, or is far from it. The most studied problem in the field is arguably uniformity testing, where one needs to distinguish the case that P is uniform over [n] from the case that P is ε-far from being uniform (in 𝓁₁). It is known that for this task Θ(√n/ε²) samples are necessary and sufficient. This problem was recently considered in various restricted models that pose, for example, communication or memory constraints. On more than one occasion, the known optimal solution boils down to counting collisions among the drawn samples (each two samples that have the same value add one to the count). This idea dates back to the first uniformity tester, which was accordingly named the "collision-based tester". In this paper, we introduce the notion of comparison graphs and use it to formally define a generalized collision-based tester. Roughly speaking, the edges of the graph indicate to the tester which pairs of samples should be compared (that is, the original tester is induced by a clique, where all pairs are compared). We prove a structural theorem that gives a sufficient condition for a comparison graph to induce a good uniformity tester. As an application, we develop a generic method to test uniformity, and devise nearly-optimal uniformity testers under various computational constraints. We improve and simplify a few known results, and introduce a new constrained model in which the method also produces an efficient tester. The idea behind our method is to translate the computational constraints of a given model into constraints on the comparison graph, which paves the way to finding a good graph: a set of comparisons allowed by the model that suffices to test for uniformity. We believe that in future considerations of uniformity testing in new models, our method can be used to obtain efficient testers with minimal effort.
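
A sketch of the classical clique-induced tester (every pair of samples is compared; the exact threshold constant below is a hypothetical choice):

from itertools import combinations
import random

def collision_tester(samples, n, eps):
    # Under the uniform distribution, E[#collisions] = C(q,2)/n; a distribution
    # that is eps-far in l1 has per-pair collision probability at least
    # (1+eps^2)/n, so we accept iff the count stays below a midpoint threshold.
    q = len(samples)
    collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
    return collisions <= (1 + eps ** 2 / 2) * (q * (q - 1) / 2) / n

n = 1000
samples = [random.randrange(n) for _ in range(1000)]  # draws from the uniform distribution
print(collision_tester(samples, n, eps=0.5))          # True = "accept uniform"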

Cite as

Uri Meir. Comparison Graphs: A Unified Method for Uniformity Testing. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 17:1-17:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{meir:LIPIcs.ITCS.2021.17,
  author =	{Meir, Uri},
  title =	{{Comparison Graphs: A Unified Method for Uniformity Testing}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{17:1--17:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.17},
  URN =		{urn:nbn:de:0030-drops-135560},
  doi =		{10.4230/LIPIcs.ITCS.2021.17},
  annote =	{Keywords: Distribution Testing, Uniformity Testing, Distributed Algorithms, Streaming Algorithms, Comparison Graphs}
}
Document
Circular Trace Reconstruction

Authors: Shyam Narayanan and Michael Ren


Abstract
Trace reconstruction is the problem of learning an unknown string x from independent traces of x, where traces are generated by independently deleting each bit of x with some deletion probability q. In this paper, we initiate the study of circular trace reconstruction, where the unknown string x is circular and traces are now rotated by a random cyclic shift. Trace reconstruction is related to many computational biology problems studying DNA, which is a primary motivation for this problem as well, as many types of DNA are known to be circular. Our main results are as follows. First, we prove that we can reconstruct arbitrary circular strings of length n using exp(Õ(n^{1/3})) traces for any constant deletion probability q, as long as n is prime or the product of two primes. For n of this form, this nearly matches the bound of exp(O(n^{1/3})) that was the best known for standard trace reconstruction when this paper was initially released. We note, however, that Chase very recently improved the standard trace reconstruction bound to exp(Õ(n^{1/5})). Next, we prove that we can reconstruct random circular strings with high probability using n^O(1) traces for any constant deletion probability q. Finally, we prove a lower bound of Ω̃(n³) traces for arbitrary circular strings, which is greater than the best known lower bound of Ω̃(n^{3/2}) in standard trace reconstruction.
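
A minimal sketch of the channel in question (a uniformly random cyclic shift followed by the usual deletion channel; the string below is hypothetical):

import random

def circular_trace(x, q, rng=random):
    # Rotate the circular string by a uniformly random shift, then delete each
    # bit independently with probability q and concatenate the survivors.
    s = rng.randrange(len(x))
    rotated = x[s:] + x[:s]
    return ''.join(b for b in rotated if rng.random() > q)

x = '1011001110'
print([circular_trace(x, q=0.2) for _ in range(3)])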

Cite as

Shyam Narayanan and Michael Ren. Circular Trace Reconstruction. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{narayanan_et_al:LIPIcs.ITCS.2021.18,
  author =	{Narayanan, Shyam and Ren, Michael},
  title =	{{Circular Trace Reconstruction}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.18},
  URN =		{urn:nbn:de:0030-drops-135573},
  doi =		{10.4230/LIPIcs.ITCS.2021.18},
  annote =	{Keywords: Trace Reconstruction, Deletion Channel, Cyclotomic Integers}
}
Document
Self-Testing of a Single Quantum Device Under Computational Assumptions

Authors: Tony Metger and Thomas Vidick


Abstract
Self-testing is a method to characterise an arbitrary quantum system based only on its classical input-output correlations, and plays an important role in device-independent quantum information processing as well as quantum complexity theory. Prior works on self-testing require the assumption that the system’s state is shared among multiple parties that only perform local measurements and cannot communicate. Here, we replace the setting of multiple non-communicating parties, which is difficult to enforce in practice, by a single computationally bounded party. Specifically, we construct a protocol that allows a classical verifier to robustly certify that a single computationally bounded quantum device must have prepared a Bell pair and performed single-qubit measurements on it, up to a change of basis applied to both the device’s state and measurements. This means that under computational assumptions, the verifier is able to certify the presence of entanglement, a property usually closely associated with two separated subsystems, inside a single quantum device. To achieve this, we build on techniques first introduced by Brakerski et al. (2018) and Mahadev (2018) which allow a classical verifier to constrain the actions of a quantum device assuming the device does not break post-quantum cryptography.

Cite as

Tony Metger and Thomas Vidick. Self-Testing of a Single Quantum Device Under Computational Assumptions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 19:1-19:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{metger_et_al:LIPIcs.ITCS.2021.19,
  author =	{Metger, Tony and Vidick, Thomas},
  title =	{{Self-Testing of a Single Quantum Device Under Computational Assumptions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{19:1--19:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.19},
  URN =		{urn:nbn:de:0030-drops-135581},
  doi =		{10.4230/LIPIcs.ITCS.2021.19},
  annote =	{Keywords: Quantum computing, quantum cryptography, device-independence, self-testing, post-quantum cryptography}
}
Document
Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime

Authors: Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha


Abstract
In the trace reconstruction problem, an unknown source string x ∈ {0,1}ⁿ is transmitted through a probabilistic deletion channel which independently deletes each bit with some fixed probability δ and concatenates the surviving bits, resulting in a trace of x. The problem is to reconstruct x given access to independent traces. Trace reconstruction of arbitrary (worst-case) strings is a challenging problem, with the current state of the art for poly(n)-time algorithms being the 2004 algorithm of Batu et al. [T. Batu et al., 2004]. This algorithm can reconstruct an arbitrary source string x ∈ {0,1}ⁿ in poly(n) time provided that the deletion rate δ satisfies δ ≤ n^{-(1/2 + ε)} for some ε > 0. In this work we improve on the result of [T. Batu et al., 2004] by giving a poly(n)-time algorithm for trace reconstruction for any deletion rate δ ≤ n^{-(1/3 + ε)}. Our algorithm works by alternating an alignment-based procedure, which we show effectively reconstructs portions of the source string that are not "highly repetitive", with a novel procedure that efficiently determines the length of highly repetitive subwords of the source string.

Cite as

Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha. Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 20:1-20:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{chen_et_al:LIPIcs.ITCS.2021.20,
  author =	{Chen, Xi and De, Anindya and Lee, Chin Ho and Servedio, Rocco A. and Sinha, Sandip},
  title =	{{Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{20:1--20:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.20},
  URN =		{urn:nbn:de:0030-drops-135595},
  doi =		{10.4230/LIPIcs.ITCS.2021.20},
  annote =	{Keywords: trace reconstruction}
}
Document
Metrical Service Systems with Transformations

Authors: Sébastien Bubeck, Niv Buchbinder, Christian Coester, and Mark Sellke


Abstract
We consider a generalization of the fundamental online metrical service systems (MSS) problem where the feasible region can be transformed between requests. In this problem, which we call T-MSS, an algorithm maintains a point in a metric space and has to serve a sequence of requests. Each request is a map (transformation) f_t: A_t → B_t between subsets A_t and B_t of the metric space. To serve it, the algorithm has to go to a point a_t ∈ A_t, paying the distance from its previous position. Then, the transformation is applied, modifying the algorithm’s state to f_t(a_t). Such transformations can model, e.g., changes to the environment that are outside of an algorithm’s control, and we therefore do not charge any additional cost to the algorithm when the transformation is applied. The transformations also make it possible to model the requests occurring in the k-taxi problem. We show that for α-Lipschitz transformations, the competitive ratio is Θ(α)^{n-2} on n-point metrics. Here, the upper bound is achieved by a deterministic algorithm and the lower bound holds even for randomized algorithms. For the k-taxi problem, we prove a competitive ratio of Õ((n log k)²). For chasing convex bodies, we show that even with contracting transformations no competitive algorithm exists. The problem T-MSS has a striking connection to the following deep mathematical question: Given a finite metric space M, what is the required cardinality of an extension M̂ ⊇ M where each partial isometry on M extends to an automorphism? We give partial answers for special cases.
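
The cost model is compact enough to state in code. The following Python sketch (our illustration, with hypothetical helper names) simulates T-MSS for an arbitrary decision rule:

def serve(metric, start, requests, choose):
    """Simulate T-MSS: for each request (A_t, f_t) the algorithm moves
    to a point a_t in A_t, paying metric(previous position, a_t); the
    transformation is then applied for free, moving the state to f_t(a_t)."""
    pos, cost = start, 0.0
    for A, f in requests:
        a = choose(pos, A)          # the algorithm's decision rule
        cost += metric(pos, a)
        pos = f(a)
    return cost

# Example on the real line: greedily serve two requests.
metric = lambda x, y: abs(x - y)
greedy = lambda pos, A: min(A, key=lambda a: metric(pos, a))
requests = [([0, 4], lambda a: a + 1), ([2, 6], lambda a: a)]
print(serve(metric, 0, requests, greedy))   # cost 1.0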

Cite as

Sébastien Bubeck, Niv Buchbinder, Christian Coester, and Mark Sellke. Metrical Service Systems with Transformations. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 21:1-21:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{bubeck_et_al:LIPIcs.ITCS.2021.21,
  author =	{Bubeck, S\'{e}bastien and Buchbinder, Niv and Coester, Christian and Sellke, Mark},
  title =	{{Metrical Service Systems with Transformations}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{21:1--21:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.21},
  URN =		{urn:nbn:de:0030-drops-135608},
  doi =		{10.4230/LIPIcs.ITCS.2021.21},
  annote =	{Keywords: Online algorithms, competitive analysis, metrical task systems, k-taxi}
}
Document
Tight Hardness Results for Training Depth-2 ReLU Networks

Authors: Surbhi Goel, Adam Klivans, Pasin Manurangsi, and Daniel Reichman


Abstract
We prove several hardness results for training depth-2 neural networks with the ReLU activation function; these networks are simply weighted sums (that may include negative coefficients) of ReLUs. Our goal is to output a depth-2 neural network that minimizes the square loss with respect to a given training set. We prove that this problem is NP-hard already for a network with a single ReLU. We also prove NP-hardness for outputting a weighted sum of k ReLUs minimizing the squared error (for k > 1) even in the realizable setting (i.e., when the labels are consistent with an unknown depth-2 ReLU network). We are also able to obtain lower bounds on the running time in terms of the desired additive error ε. To obtain our lower bounds, we use the Gap Exponential Time Hypothesis (Gap-ETH) as well as a new hypothesis regarding the hardness of approximating the well-known Densest k-Subgraph problem in subexponential time (these hypotheses are used separately in proving different lower bounds). For example, we prove that under reasonable hardness assumptions, any proper learning algorithm for finding the best fitting ReLU must run in time exponential in 1/ε². Together with a previous work regarding improperly learning a ReLU [Surbhi Goel et al., 2017], this implies the first separation between proper and improper algorithms for learning a ReLU. We also study the problem of properly learning a depth-2 network of ReLUs with bounded weights, giving new (worst-case) upper bounds on the running time needed to learn such networks both in the realizable and agnostic settings. Our upper bounds on the running time essentially match our lower bounds in terms of the dependency on ε.
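
For concreteness, the training objective whose minimization is shown NP-hard can be restated in a few lines of numpy (our illustrative restatement, not the paper's reductions):

import numpy as np

def depth2_relu_loss(W, c, X, y):
    """Square loss of f(x) = sum_i c_i * max(w_i . x, 0) on a training
    set (X, y); minimizing this over (W, c) is NP-hard already for k = 1."""
    hidden = np.maximum(X @ W.T, 0.0)   # (samples, k) ReLU activations
    preds = hidden @ c                  # weighted sum; c_i may be negative
    return float(np.mean((preds - y) ** 2))

# Hypothetical tiny instance: k = 2 ReLUs in dimension 3.
rng = np.random.default_rng(0)
W, c = rng.normal(size=(2, 3)), np.array([1.0, -1.0])
X, y = rng.normal(size=(5, 3)), rng.normal(size=5)
print(depth2_relu_loss(W, c, X, y))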

Cite as

Surbhi Goel, Adam Klivans, Pasin Manurangsi, and Daniel Reichman. Tight Hardness Results for Training Depth-2 ReLU Networks. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{goel_et_al:LIPIcs.ITCS.2021.22,
  author =	{Goel, Surbhi and Klivans, Adam and Manurangsi, Pasin and Reichman, Daniel},
  title =	{{Tight Hardness Results for Training Depth-2 ReLU Networks}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.22},
  URN =		{urn:nbn:de:0030-drops-135611},
  doi =		{10.4230/LIPIcs.ITCS.2021.22},
  annote =	{Keywords: ReLU, Learning Algorithm, Running Time Lower Bound}
}
Document
A Largish Sum-Of-Squares Implies Circuit Hardness and Derandomization

Authors: Pranjal Dutta, Nitin Saxena, and Thomas Thierauf


Abstract
For a polynomial f, we study the sum of squares representation (SOS), i.e., f = ∑_{i ∈ [s]} c_i f_i², where the c_i are field elements and the f_i’s are polynomials. The size of the representation is the number of monomials that appear across the f_i’s. Its minimum is the support-sum S(f) of f. For simplicity of exposition, we consider univariate f. A trivial lower bound for the support-sum of a full-support univariate polynomial f of degree d is S(f) ≥ d^{0.5}. We show that the existence of an explicit polynomial f with support-sum just slightly larger than the trivial bound, that is, S(f) ≥ d^{0.5+ε(d)}, for a sub-constant function ε(d) > ω(√{log log d/log d}), implies that VP ≠ VNP. The latter is a major open problem in algebraic complexity. A further consequence is that blackbox-PIT is in SUBEXP. Note that a random polynomial fulfills the condition, as there we have S(f) = Θ(d). We also consider the sum-of-cubes representation (SOC) of polynomials. In a similar way, we show that here an explicit hard polynomial even implies that blackbox-PIT is in P.
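
Both sides of the trivial bound have one-line explanations (our own remarks, in the abstract's notation): every degree-d polynomial admits the representation f = ((f+1)/2)² − ((f−1)/2)², whose two squares together contain at most 2(d+1) monomials, so S(f) = O(d). Conversely, if the f_i contain s monomials in total, then ∑_i c_i f_i² contains at most s² distinct monomials, so a full-support f of degree d forces s² ≥ d+1, i.e., S(f) ≥ d^{0.5}.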

Cite as

Pranjal Dutta, Nitin Saxena, and Thomas Thierauf. A Largish Sum-Of-Squares Implies Circuit Hardness and Derandomization. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 23:1-23:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{dutta_et_al:LIPIcs.ITCS.2021.23,
  author =	{Dutta, Pranjal and Saxena, Nitin and Thierauf, Thomas},
  title =	{{A Largish Sum-Of-Squares Implies Circuit Hardness and Derandomization}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{23:1--23:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.23},
  URN =		{urn:nbn:de:0030-drops-135629},
  doi =		{10.4230/LIPIcs.ITCS.2021.23},
  annote =	{Keywords: VP, VNP, hitting set, circuit, polynomial, sparsity, SOS, SOC, PIT, lower bound}
}
Document
Circuit Depth Reductions

Authors: Alexander Golovnev, Alexander S. Kulikov, and R. Ryan Williams


Abstract
The best known size lower bounds against unrestricted circuits have remained around 3n for several decades. Moreover, the only known technique for proving lower bounds in this model, gate elimination, is inherently limited to proving lower bounds of less than 5n. In this work, we propose a non-gate-elimination approach for obtaining circuit lower bounds, via certain depth-three lower bounds. We prove that every (unbounded-depth) circuit of size s can be expressed as an OR of 2^{s/3.9} 16-CNFs. For DeMorgan formulas, the best known size lower bounds have been stuck at around n^{3-o(1)} for decades. Under a plausible hypothesis about probabilistic polynomials, we show that n^{4-ε}-size DeMorgan formulas have 2^{n^{1-Ω(ε)}}-size depth-3 circuits which are approximate sums of n^{1-Ω(ε)}-degree polynomials over F₂. While these structural results do not immediately lead to new lower bounds, they do suggest new avenues of attack on these longstanding lower bound problems. Our results complement the classical depth-3 reduction results of Valiant, which show that logarithmic-depth circuits of linear size can be computed by an OR of 2^{ε n} n^δ-CNFs, and slightly stronger results for series-parallel circuits. It is known that no purely graph-theoretic reduction could yield interesting depth-3 circuits from circuits of super-logarithmic depth. We overcome this limitation (for small-size circuits) by taking into account both the graph-theoretic and functional properties of circuits and formulas. We show that improvements of the following pseudorandom constructions imply super-linear circuit lower bounds for log-depth circuits via Valiant’s reduction: dispersers for varieties, correlation with constant degree polynomials, matrix rigidity, and hardness for depth-3 circuits with constant bottom fan-in. On the other hand, our depth reductions show that even modest improvements of the known constructions give elementary proofs of improved (but still linear) circuit lower bounds.

Cite as

Alexander Golovnev, Alexander S. Kulikov, and R. Ryan Williams. Circuit Depth Reductions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 24:1-24:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{golovnev_et_al:LIPIcs.ITCS.2021.24,
  author =	{Golovnev, Alexander and Kulikov, Alexander S. and Williams, R. Ryan},
  title =	{{Circuit Depth Reductions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{24:1--24:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.24},
  URN =		{urn:nbn:de:0030-drops-135633},
  doi =		{10.4230/LIPIcs.ITCS.2021.24},
  annote =	{Keywords: Circuit complexity, formula complexity, pseudorandomness, matrix rigidity}
}
Document
Dynamic Inference in Probabilistic Graphical Models

Authors: Weiming Feng, Kun He, Xiaoming Sun, and Yitong Yin


Abstract
Probabilistic graphical models, such as Markov random fields (MRFs), are useful for describing high-dimensional distributions in terms of local dependence structures. Probabilistic inference is a fundamental problem related to graphical models, and sampling is a main approach to the problem. In this paper, we study probabilistic inference problems when the graphical model itself is changing dynamically with time. Such dynamic inference problems arise naturally in today’s applications, e.g., multivariate time-series data analysis and practical learning procedures. We give a dynamic algorithm for sampling-based probabilistic inference in MRFs, where each dynamic update can change the underlying graph and all parameters of the MRF simultaneously, as long as the total amount of changes is bounded. More precisely, suppose that the MRF has n variables and polylogarithmic-bounded maximum degree, and N(n) independent samples are sufficient for the inference for a polynomial function N(⋅). Our algorithm dynamically maintains an answer to the inference problem using Õ(n N(n)) space cost, and Õ(N(n) + n) incremental time cost upon each update to the MRF, as long as the Dobrushin-Shlosman condition is satisfied by the MRFs. This well-known condition has long been used for guaranteeing the efficiency of Markov chain Monte Carlo (MCMC) sampling in the traditional static setting. Compared to the static case, which requires Ω(n N(n)) time cost for redrawing all N(n) samples whenever the MRF changes, our dynamic algorithm gives an Ω̃(min{n, N(n)})-factor speedup. Our approach relies on a novel dynamic sampling technique, which transforms local Markov chains (a.k.a. single-site dynamics) into dynamic sampling algorithms, and an "algorithmic Lipschitz" condition that we establish for sampling from graphical models, namely, when the MRF changes by a small difference, samples can be modified to reflect the new distribution, with cost proportional to the difference in the MRF.
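
For reference, the single-site dynamics that the dynamic sampling technique transforms can be written in a few lines. A minimal Python sketch for an Ising-type MRF (our illustration; the dynamic update machinery itself is not shown):

import math, random

def glauber_step(spins, graph, beta):
    """One single-site (Glauber/Gibbs) update: pick a uniformly random
    vertex and resample its +/-1 spin from the conditional distribution
    given its neighbours."""
    v = random.randrange(len(spins))
    field = sum(spins[u] for u in graph[v])
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    spins[v] = 1 if random.random() < p_plus else -1

# Example: a 4-cycle at (hypothetical) inverse temperature beta = 0.3.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
spins = [random.choice([-1, 1]) for _ in range(4)]
for _ in range(1000):
    glauber_step(spins, graph, 0.3)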

Cite as

Weiming Feng, Kun He, Xiaoming Sun, and Yitong Yin. Dynamic Inference in Probabilistic Graphical Models. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{feng_et_al:LIPIcs.ITCS.2021.25,
  author =	{Feng, Weiming and He, Kun and Sun, Xiaoming and Yin, Yitong},
  title =	{{Dynamic Inference in Probabilistic Graphical Models}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.25},
  URN =		{urn:nbn:de:0030-drops-135643},
  doi =		{10.4230/LIPIcs.ITCS.2021.25},
  annote =	{Keywords: Dynamic inference, probabilistic graphical model, Gibbs sampling, Markov random field}
}
Document
Theorems of KKL, Friedgut, and Talagrand via Random Restrictions and Log-Sobolev Inequality

Authors: Esty Kelman, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra


Abstract
We give alternate proofs for three related results in analysis of Boolean functions, namely the KKL Theorem, Friedgut’s Junta Theorem, and Talagrand’s strengthening of the KKL Theorem. We follow a new approach: looking at the first Fourier level of the function after a suitable random restriction and applying the Log-Sobolev inequality appropriately. In particular, we avoid using the hypercontractive inequality that is common to the original proofs. Our proofs might serve as an alternate, uniform exposition to these theorems and the techniques might benefit further research.

Cite as

Esty Kelman, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra. Theorems of KKL, Friedgut, and Talagrand via Random Restrictions and Log-Sobolev Inequality. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 26:1-26:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{kelman_et_al:LIPIcs.ITCS.2021.26,
  author =	{Kelman, Esty and Khot, Subhash and Kindler, Guy and Minzer, Dor and Safra, Muli},
  title =	{{Theorems of KKL, Friedgut, and Talagrand via Random Restrictions and Log-Sobolev Inequality}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{26:1--26:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.26},
  URN =		{urn:nbn:de:0030-drops-135657},
  doi =		{10.4230/LIPIcs.ITCS.2021.26},
  annote =	{Keywords: Fourier Analysis, Hypercontractivity, Log-Sobolev Inequality}
}
Document
On Rich 2-to-1 Games

Authors: Mark Braverman, Subhash Khot, and Dor Minzer


Abstract
We propose a variant of the 2-to-1 Games Conjecture that we call the Rich 2-to-1 Games Conjecture and show that it is equivalent to the Unique Games Conjecture. We are motivated by two considerations. Firstly, in light of the recent proof of the 2-to-1 Games Conjecture [Subhash Khot et al., 2017; Irit Dinur et al., 2018; Irit Dinur et al., 2018; Subhash Khot et al., 2018], we hope to understand how one might make further progress towards a proof of the Unique Games Conjecture. Secondly, the new variant, when augmented with perfect completeness, might imply hardness of approximation results that necessarily require perfect completeness and hence are not implied by the Unique Games Conjecture.

Cite as

Mark Braverman, Subhash Khot, and Dor Minzer. On Rich 2-to-1 Games. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 27:1-27:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{braverman_et_al:LIPIcs.ITCS.2021.27,
  author =	{Braverman, Mark and Khot, Subhash and Minzer, Dor},
  title =	{{On Rich 2-to-1 Games}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{27:1--27:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.27},
  URN =		{urn:nbn:de:0030-drops-135666},
  doi =		{10.4230/LIPIcs.ITCS.2021.27},
  annote =	{Keywords: PCP, Unique-Games, Perfect Completeness}
}
Document
Distributed Quantum Proofs for Replicated Data

Authors: Pierre Fraigniaud, François Le Gall, Harumichi Nishimura, and Ami Paz


Abstract
This paper tackles the issue of checking that all copies of a large data set replicated at several nodes of a network are identical. The fact that the replicas may be located at distant nodes prevents the system from verifying their equality locally, i.e., by having each node consult only nodes in its vicinity. On the other hand, it remains possible to assign certificates to the nodes, so that verifying the consistency of the replicas can be achieved locally. However, we show that, as the replicated data is large, classical certification mechanisms, including distributed Merlin-Arthur protocols, cannot guarantee good completeness and soundness simultaneously, unless they use very large certificates. The main result of this paper is a distributed quantum Merlin-Arthur protocol enabling the nodes to collectively check the consistency of the replicas, based on small certificates, and in a single round of message exchange between neighbors, with short messages. In particular, the certificate size is logarithmic in the size of the data set, which gives an exponential advantage over classical certification mechanisms. We propose yet another use of a fundamental quantum primitive, the SWAP test, in order to show our main result.
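
The SWAP test referred to above has a simple closed-form acceptance probability; the following numpy sketch (our illustration, pure states only) computes it:

import numpy as np

def swap_test_accept_prob(psi, phi):
    """Acceptance probability of the standard SWAP test on pure states:
    Pr[accept] = 1/2 + |<psi|phi>|^2 / 2, so identical states always
    accept and orthogonal states accept with probability 1/2."""
    return 0.5 + 0.5 * abs(np.vdot(psi, phi)) ** 2

zero = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(swap_test_accept_prob(zero, zero))  # 1.0
print(swap_test_accept_prob(zero, plus))  # 0.75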

Cite as

Pierre Fraigniaud, François Le Gall, Harumichi Nishimura, and Ami Paz. Distributed Quantum Proofs for Replicated Data. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{fraigniaud_et_al:LIPIcs.ITCS.2021.28,
  author =	{Fraigniaud, Pierre and Le Gall, Fran\c{c}ois and Nishimura, Harumichi and Paz, Ami},
  title =	{{Distributed Quantum Proofs for Replicated Data}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.28},
  URN =		{urn:nbn:de:0030-drops-135679},
  doi =		{10.4230/LIPIcs.ITCS.2021.28},
  annote =	{Keywords: Quantum Computing, Distributed Network Computing, Algorithmic Aspects of Networks}
}
Document
On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions

Authors: Mikito Nanashima


Abstract
Constructing one-way functions based on NP-hardness is a central challenge in theoretical computer science. Unfortunately, Akavia et al. [Akavia et al., 2006] presented strong evidence that a nonadaptive black-box (BB) reduction is insufficient to solve this challenge. However, should we give up such a central proof technique even for an intermediate step? In this paper, we turn our eyes from standard cryptographic primitives to weaker cryptographic primitives that are allowed to take auxiliary input, and continue to explore the capability of nonadaptive BB reductions to base auxiliary-input primitives on NP-hardness. Specifically, we prove the following: - if we base an auxiliary-input pseudorandom generator (AIPRG) on NP-hardness via a nonadaptive BB reduction, then the polynomial hierarchy collapses; - if we base an auxiliary-input one-way function (AIOWF) or auxiliary-input hitting set generator (AIHSG) on NP-hardness via a nonadaptive BB reduction, then an (i.o.-)one-way function also exists based on NP-hardness (via an adaptive BB reduction). These theorems extend our knowledge of nonadaptive BB reductions beyond the current worst-to-average framework. The first result provides new evidence that nonadaptive BB reductions are insufficient to base AIPRG on NP-hardness. The second result also yields a weaker but still surprising consequence of nonadaptive BB reductions, i.e., a one-way function based on NP-hardness. In fact, the second result can be interpreted in the following two opposite ways. Pessimistically, it shows that basing AIOWF or AIHSG on NP-hardness via nonadaptive BB reductions is harder than constructing a one-way function based on NP-hardness, which can be regarded as a negative result. Note that AIHSG is a weak primitive implied even by the hardness of learning; thus, this pessimistic view provides conceptually stronger limitations than the currently known limitations on nonadaptive BB reductions. Optimistically, it offers a new hope: a breakthrough construction of auxiliary-input primitives might also yield constructions of standard cryptographic primitives. This optimistic view enhances the significance of further investigation into constructing auxiliary-input or other intermediate cryptographic primitives instead of standard cryptographic primitives.

Cite as

Mikito Nanashima. On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{nanashima:LIPIcs.ITCS.2021.29,
  author =	{Nanashima, Mikito},
  title =	{{On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{29:1--29:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.29},
  URN =		{urn:nbn:de:0030-drops-135686},
  doi =		{10.4230/LIPIcs.ITCS.2021.29},
  annote =	{Keywords: Auxiliary-input cryptographic primitives, nonadaptive black-box reductions}
}
Document
Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits

Authors: Boaz Barak, Chi-Ning Chou, and Xun Gao


Abstract
The linear cross-entropy benchmark (Linear XEB) has been used as a test for procedures simulating quantum circuits. Given a quantum circuit C with n inputs and outputs and a purported simulator whose output is distributed according to a distribution p over {0,1}ⁿ, the linear XEB fidelity of the simulator is ℱ_C(p) = 2ⁿ 𝔼_{x ∼ p} q_C(x) − 1, where q_C(x) is the probability of x under the output distribution C|0ⁿ⟩. A trivial simulator (e.g., the uniform distribution) satisfies ℱ_C(p) = 0, while Google’s noisy quantum simulation of a 53-qubit circuit C achieved a fidelity value of (2.24 ±0.21)×10^{-3} (Arute et al., Nature'19). In this work we give a classical randomized algorithm that for a given circuit C of depth d with Haar random 2-qubit gates achieves in expectation a fidelity value of Ω(n/L⋅15^{-d}) in running time poly(n,2^L). Here L is the size of the light cone of C: the maximum number of input bits that each output bit depends on. In particular, we obtain a polynomial-time algorithm that achieves a large fidelity of ω(1) for depth O(√{log n}) two-dimensional circuits. This is the first such result for two-dimensional circuits of super-constant depth. Our results can be considered evidence that fooling the linear XEB test might be easier than achieving a full simulation of the quantum circuit.
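
The fidelity defined above is estimated from samples in the obvious way; a short numpy sketch (our illustration, with a hypothetical q_C oracle):

import numpy as np

def linear_xeb(samples, q_C, n):
    """Estimate F_C(p) = 2^n * E_{x~p}[q_C(x)] - 1 from samples drawn
    from the simulator's output distribution p; q_C(x) is the ideal
    probability of x under the circuit's output distribution."""
    return (2.0 ** n) * np.mean([q_C(x) for x in samples]) - 1.0

# Sanity check: a uniform simulator tested against a uniform ideal
# distribution gives fidelity exactly 0, matching the "trivial simulator".
n = 3
q_uniform = lambda x: 1.0 / 2 ** n
samples = [np.random.randint(2 ** n) for _ in range(1000)]
print(linear_xeb(samples, q_uniform, n))   # 0.0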

Cite as

Boaz Barak, Chi-Ning Chou, and Xun Gao. Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{barak_et_al:LIPIcs.ITCS.2021.30,
  author =	{Barak, Boaz and Chou, Chi-Ning and Gao, Xun},
  title =	{{Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{30:1--30:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.30},
  URN =		{urn:nbn:de:0030-drops-135699},
  doi =		{10.4230/LIPIcs.ITCS.2021.30},
  annote =	{Keywords: Quantum supremacy, Linear cross-entropy benchmark}
}
Document
On the Complexity of Isomorphism Problems for Tensors, Groups, and Polynomials I: Tensor Isomorphism-Completeness

Authors: Joshua A. Grochow and Youming Qiao


Abstract
We study the complexity of isomorphism problems for tensors, groups, and polynomials. These problems have been studied in multivariate cryptography, machine learning, quantum information, and computational group theory. We show that these problems are all polynomial-time equivalent, creating bridges between problems traditionally studied in myriad research areas. This prompts us to define the complexity class TI, namely problems that reduce to the Tensor Isomorphism (TI) problem in polynomial time. Our main technical result is a polynomial-time reduction from d-tensor isomorphism to 3-tensor isomorphism. In the context of quantum information, this result gives a multipartite-to-tripartite entanglement transformation procedure that preserves equivalence under stochastic local operations and classical communication (SLOCC).

Cite as

Joshua A. Grochow and Youming Qiao. On the Complexity of Isomorphism Problems for Tensors, Groups, and Polynomials I: Tensor Isomorphism-Completeness. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 31:1-31:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{grochow_et_al:LIPIcs.ITCS.2021.31,
  author =	{Grochow, Joshua A. and Qiao, Youming},
  title =	{{On the Complexity of Isomorphism Problems for Tensors, Groups, and Polynomials I: Tensor Isomorphism-Completeness}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{31:1--31:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.31},
  URN =		{urn:nbn:de:0030-drops-135702},
  doi =		{10.4230/LIPIcs.ITCS.2021.31},
  annote =	{Keywords: complexity class, tensor isomorphism, polynomial isomorphism, group isomorphism, stochastic local operations and classical communication}
}
Document
Bounds on the QAC^0 Complexity of Approximating Parity

Authors: Gregory Rosenthal


Abstract
QAC circuits are quantum circuits with one-qubit gates and Toffoli gates of arbitrary arity. QAC^0 circuits are QAC circuits of constant depth, and are quantum analogues of AC^0 circuits. We prove the following: - For all d ≥ 7 and ε > 0 there is a depth-d QAC circuit of size exp(poly(n^{1/d}) log(n/ε)) that approximates the n-qubit parity function to within error ε on worst-case quantum inputs. Previously it was unknown whether QAC circuits of sublogarithmic depth could approximate parity regardless of size. - We introduce a class of "mostly classical" QAC circuits, including a major component of our circuit from the above upper bound, and prove a tight lower bound on the size of low-depth, mostly classical QAC circuits that approximate this component. - Arbitrary depth-d QAC circuits require at least Ω(n/d) multi-qubit gates to achieve a 1/2 + exp(-o(n/d)) approximation of parity. When d = Θ(log n) this nearly matches an easy O(n) size upper bound for computing parity exactly. - QAC circuits with at most two layers of multi-qubit gates cannot achieve a 1/2 + exp(-o(n)) approximation of parity, even non-cleanly. Previously it was known only that such circuits could not cleanly compute parity exactly for sufficiently large n. The proofs use a new normal form for quantum circuits which may be of independent interest, and are based on reductions to the problem of constructing certain generalizations of the cat state which we name "nekomata" after an analogous cat yōkai.

Cite as

Gregory Rosenthal. Bounds on the QAC^0 Complexity of Approximating Parity. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{rosenthal:LIPIcs.ITCS.2021.32,
  author =	{Rosenthal, Gregory},
  title =	{{Bounds on the QAC^0 Complexity of Approximating Parity}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.32},
  URN =		{urn:nbn:de:0030-drops-135713},
  doi =		{10.4230/LIPIcs.ITCS.2021.32},
  annote =	{Keywords: quantum circuit complexity, QAC^0, fanout, parity, nekomata}
}
Document
Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)

Authors: Noga Ron-Zewi, Ronen Shaltiel, and Nithin Varma


Abstract
A binary code Enc:{0,1}^k → {0,1}ⁿ is (1/2-ε,L)-list decodable if for every w ∈ {0,1}ⁿ, there exists a set List(w) of size at most L, containing all messages m ∈ {0,1}^k such that the relative Hamming distance between Enc(m) and w is at most 1/2-ε. A q-query local list-decoder for Enc is a randomized procedure Dec that, when given oracle access to a string w, makes at most q oracle calls, and for every message m ∈ List(w), with high probability, there exists j ∈ [L] such that for every i ∈ [k], with high probability, Dec^w(i,j) = m_i. We prove lower bounds on q that apply even if L is huge (say L = 2^{k^{0.9}}) and the rate of Enc is small (meaning that n ≥ 2^{k}): - For ε = 1/k^{ν} for some constant 0 < ν < 1, we prove a lower bound of q = Ω(log(1/δ)/ε²), where δ is the error probability of the local list-decoder. This bound is tight as there is a matching upper bound by Goldreich and Levin (STOC 1989) of q = O(log(1/δ)/ε²) for the Hadamard code (which has n = 2^k). This bound extends earlier work of Grinberg, Shaltiel and Viola (FOCS 2018) which only works if n ≤ 2^{k^ν} and the number of coins tossed by Dec is small (and therefore does not apply to the Hadamard code, or other codes with low rate). - For smaller ε, we prove a lower bound of roughly q = Ω(1/(√ε)). To the best of our knowledge, this is the first lower bound on the number of queries of local list-decoders that gives q ≥ k for small ε. Local list-decoders with small ε form the key component in the celebrated theorem of Goldreich and Levin that extracts a hard-core predicate from a one-way function. We show that black-box proofs cannot improve the Goldreich-Levin theorem and produce a hard-core predicate that is hard to predict with probability 1/2 + 1/𝓁^ω(1) when provided with a one-way function f:{0,1}^𝓁 → {0,1}^𝓁, where f is such that circuits of size poly(𝓁) cannot invert f with probability ρ = 1/2^√𝓁 (or even ρ = 1/2^Ω(𝓁)). This limitation applies to any proof by black-box reduction (even if the reduction is allowed to use nonuniformity and has oracle access to f).
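
For concreteness, the Hadamard code that witnesses tightness of the first bound can be written in two lines (a standard definition; our sketch):

def hadamard_bit(m: int, x: int) -> int:
    """Enc(m)(x) = <m, x> mod 2, the inner product over F_2 of the
    bit-vectors of m and x; listing hadamard_bit(m, x) for all x in
    range(2**k) gives the codeword, of length n = 2^k."""
    return bin(m & x).count("1") % 2

# Example: the codeword of the 3-bit message m = 0b101.
codeword = [hadamard_bit(0b101, x) for x in range(8)]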

Cite as

Noga Ron-Zewi, Ronen Shaltiel, and Nithin Varma. Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 33:1-33:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{ronzewi_et_al:LIPIcs.ITCS.2021.33,
  author =	{Ron-Zewi, Noga and Shaltiel, Ronen and Varma, Nithin},
  title =	{{Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{33:1--33:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.33},
  URN =		{urn:nbn:de:0030-drops-135724},
  doi =		{10.4230/LIPIcs.ITCS.2021.33},
  annote =	{Keywords: Local list-decoding, Hard-core predicates, Black-box reduction, Hadamard code}
}
Document
Is the Space Complexity of Planted Clique Recovery the Same as That of Detection?

Authors: Jay Mardia


Abstract
We study the planted clique problem in which a clique of size k is planted in an Erdős-Rényi graph G(n, 1/2), and one is interested in either detecting or recovering this planted clique. This problem is interesting because it is widely believed to show a statistical-computational gap at clique size k = Θ(√n), and has emerged as the prototypical problem with such a gap from which average-case hardness of other statistical problems can be deduced. It also displays a tight computational connection between the detection and recovery variants, unlike other problems of a similar nature. This wide investigation into the computational complexity of the planted clique problem has, however, mostly focused on its time complexity. To begin investigating the robustness of these statistical-computational phenomena to changes in our notion of computational efficiency, we ask: do the statistical-computational phenomena that make planted clique an interesting problem also hold when we use "space efficiency" as our notion of computational efficiency? It is relatively easy to show that a positive answer to this question depends on the existence of an O(log n)-space algorithm that can recover planted cliques of size k = Ω(√n). Our main result comes very close to designing such an algorithm. We show that for k = Ω(√n), the recovery problem can be solved in O((log^* n − log^*(k/√n)) ⋅ log n) bits of space: (1) if k = ω(√n log^{(𝓁)} n) for any constant integer 𝓁 > 0, the space usage is O(log n) bits; (2) if k = Θ(√n), the space usage is O(log^* n ⋅ log n) bits. Our result suggests that there does exist an O(log n)-space algorithm to recover cliques of size k = Ω(√n), since we come very close to achieving such parameters. This provides evidence that the statistical-computational phenomena that (conjecturally) hold for planted clique time complexity also (conjecturally) hold for space complexity. Due to space limitations, we omit proofs from this manuscript. The entire paper with full proofs can be found on arXiv at https://arxiv.org/abs/2008.12825.
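
The input model is easy to state in code; a minimal Python sketch (our illustration) that samples an instance of the recovery problem:

import itertools, random

def planted_clique(n, k, seed=None):
    """Sample G(n, 1/2) and plant a clique on k uniformly random
    vertices; returns the edge set and the planted vertex set."""
    rng = random.Random(seed)
    clique = set(rng.sample(range(n), k))
    edges = set()
    for u, v in itertools.combinations(range(n), 2):
        if (u in clique and v in clique) or rng.random() < 0.5:
            edges.add((u, v))
    return edges, clique

# Example: n = 64 vertices with a planted clique of size 16 = 2 * sqrt(n).
edges, clique = planted_clique(64, 16, seed=0)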

Cite as

Jay Mardia. Is the Space Complexity of Planted Clique Recovery the Same as That of Detection?. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 34:1-34:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{mardia:LIPIcs.ITCS.2021.34,
  author =	{Mardia, Jay},
  title =	{{Is the Space Complexity of Planted Clique Recovery the Same as That of Detection?}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{34:1--34:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.34},
  URN =		{urn:nbn:de:0030-drops-135734},
  doi =		{10.4230/LIPIcs.ITCS.2021.34},
  annote =	{Keywords: Statistical computational gaps, Planted clique, Space complexity, Average case computational complexity}
}
Document
Algorithms and Hardness for Multidimensional Range Updates and Queries

Authors: Joshua Lau and Angus Ritossa


Abstract
Traditional orthogonal range problems allow queries over a static set of points, each with some value. Dynamic variants allow points to be added or removed, one at a time. To support more powerful updates, we introduce the Grid Range class of data structure problems over arbitrarily large integer arrays in one or more dimensions. These problems allow range updates (such as filling all points in a range with a constant) and queries (such as finding the sum or maximum of values in a range). In this work, we consider these operations along with updates that replace each point in a range with the minimum, maximum, or sum of its existing value, and a constant. In one dimension, it is known that segment trees can be leveraged to facilitate any n of these operations in Õ(n) time overall. Other than a few specific cases, until now, higher dimensional variants have been largely unexplored. Despite their tightly-knit complexity in one dimension, we show that variants induced by subsets of these operations exhibit polynomial separation in two dimensions. In particular, no truly subquadratic time algorithm can support certain pairs of these updates simultaneously without falsifying several popular conjectures. On the positive side, we show that truly subquadratic algorithms can be obtained for variants induced by other subsets. We provide two general approaches to designing such algorithms that can be generalised to online and higher dimensional settings. First, we give almost-tight Õ(n^{3/2}) time algorithms for single-update variants where the update and query operations meet a set of natural conditions. Second, for other variants, we provide a general framework for reducing to instances with a special geometry. Using this, we show that O(m^{3/2-ε}) time algorithms for counting paths and walks of length 2 and 3 between vertex pairs in sparse graphs imply truly subquadratic data structures for certain variants; to this end, we give an Õ(m^{(4ω-1)/(2ω+1)}) = O(m^{1.478}) time algorithm for counting simple 3-paths between vertex pairs.
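
For orientation, the one-dimensional structure underlying the Õ(n) bound is the classical lazy segment tree. A minimal Python sketch for one pair of Grid Range operations, range add and range sum (our illustration, not the paper's data structures):

class SegTree:
    def __init__(self, n):
        self.n = n
        self.s = [0] * (4 * n)      # subtree sums
        self.lz = [0] * (4 * n)     # pending "add to every leaf below"

    def _apply(self, node, lo, hi, val):
        self.s[node] += val * (hi - lo + 1)
        self.lz[node] += val

    def _push(self, node, lo, hi):
        if self.lz[node]:
            mid = (lo + hi) // 2
            self._apply(2 * node, lo, mid, self.lz[node])
            self._apply(2 * node + 1, mid + 1, hi, self.lz[node])
            self.lz[node] = 0

    def add(self, l, r, val, node=1, lo=0, hi=None):
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self._apply(node, lo, hi, val)
            return
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        self.add(l, r, val, 2 * node, lo, mid)
        self.add(l, r, val, 2 * node + 1, mid + 1, hi)
        self.s[node] = self.s[2 * node] + self.s[2 * node + 1]

    def query(self, l, r, node=1, lo=0, hi=None):
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return 0
        if l <= lo and hi <= r:
            return self.s[node]
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        return (self.query(l, r, 2 * node, lo, mid)
                + self.query(l, r, 2 * node + 1, mid + 1, hi))

t = SegTree(8)
t.add(2, 5, 3)         # range update: add 3 on [2, 5]
print(t.query(0, 7))   # range query: total sum is 12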

Cite as

Joshua Lau and Angus Ritossa. Algorithms and Hardness for Multidimensional Range Updates and Queries. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{lau_et_al:LIPIcs.ITCS.2021.35,
  author =	{Lau, Joshua and Ritossa, Angus},
  title =	{{Algorithms and Hardness for Multidimensional Range Updates and Queries}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.35},
  URN =		{urn:nbn:de:0030-drops-135742},
  doi =		{10.4230/LIPIcs.ITCS.2021.35},
  annote =	{Keywords: Orthogonal range, Range updates, Online and Dynamic Data Structures, Fine-grained complexity, Cycle counting}
}
Document
Two Combinatorial MA-Complete Problems

Authors: Dorit Aharonov and Alex B. Grilo


Abstract
Despite the interest in the complexity class MA, the randomized analog of NP, there are just a few known natural (promise-)MA-complete problems. The first such problem was found by Bravyi and Terhal (SIAM Journal of Computing 2009); this result was then followed by Crosson, Bacon and Brown (PRE 2010) and then by Bravyi (Quantum Information and Computation 2015). Surprisingly, each of these problems is either from or inspired by quantum computation. This fact makes it hard for classical complexity theorists to study these problems, and prevents potential progress, e.g., on the important question of derandomizing MA. In this note we define two new natural combinatorial problems and we prove their MA-completeness. The first problem, which we call approximately-clean approximate-connected-component (ACAC), gets as input a succinctly described graph, some of whose vertices are marked. The problem is to decide whether there is a connected component whose vertices are all unmarked, or the graph is far from having this property. The second problem, called SetCSP, generalizes in a novel way the standard constraint satisfaction problem (CSP) into constraints involving sets of strings. Technically, our proof that SetCSP is MA-complete is a fleshing out of an observation made in (Aharonov and Grilo, FOCS 2019), where it was noted that a restricted case of Bravyi and Terhal’s MA-complete problem (namely, the uniform case) is already MA-complete; and, moreover, that this restricted case can be stated using classical, combinatorial language. The fact that the first, arguably more natural, problem of ACAC is MA-hard follows quite naturally from this proof as well; while containment of ACAC in MA is simple, based on the theory of random walks. We notice that this work, along with a translation of the main result of Aharonov and Grilo to the SetCSP problem, implies that finding a gap-amplification procedure for SetCSP (in the spirit of the gap-amplification procedure introduced in Dinur’s PCP proof) would imply MA=NP. In fact, the problem of finding gap-amplification for SetCSP is equivalent to the MA=NP problem. This provides an alternative new path towards the major problem of derandomizing MA. Deriving a similar statement regarding gap amplification of a natural restriction of ACAC remains an open question.

Cite as

Dorit Aharonov and Alex B. Grilo. Two Combinatorial MA-Complete Problems. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 36:1-36:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{aharonov_et_al:LIPIcs.ITCS.2021.36,
  author =	{Aharonov, Dorit and Grilo, Alex B.},
  title =	{{Two Combinatorial MA-Complete Problems}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{36:1--36:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.36},
  URN =		{urn:nbn:de:0030-drops-135754},
  doi =		{10.4230/LIPIcs.ITCS.2021.36},
  annote =	{Keywords: Merlin-Arthur proof systems, Constraint satisfaction problem, Random walks}
}
Document
Delegated Stochastic Probing

Authors: Curtis Bechtel and Shaddin Dughmi


Abstract
Delegation covers a broad class of problems in which a principal doesn't have the resources or expertise necessary to complete a task by themselves, so they delegate the task to an agent whose interests may not be aligned with their own. Stochastic probing describes problems in which we are tasked with maximizing expected utility by "probing" known distributions for acceptable solutions subject to certain constraints. In this work, we combine the concepts of delegation and stochastic probing into a single mechanism design framework which we term delegated stochastic probing. We study how much a principal loses by delegating a stochastic probing problem, compared to their utility in the non-delegated solution. Our model and results are heavily inspired by the work of Kleinberg and Kleinberg in "Delegated Search Approximates Efficient Search." Building on their work, we show that there exists a connection between delegated stochastic probing and generalized prophet inequalities, which provides us with constant-factor deterministic mechanisms for a large class of delegated stochastic probing problems. We also explore randomized mechanisms in a simple delegated probing setting, and show that they outperform deterministic mechanisms in some instances but not in the worst case.
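
Since the reduction goes through prophet inequalities, it may help to recall the simplest one, the classical single-choice threshold rule of Samuel-Cahn (our Monte Carlo sketch, not the paper's mechanism):

import random

def threshold_rule(draws, tau):
    """Accept the first realized value that meets the threshold tau;
    if tau is a median of max_i X_i, the expected accepted value is
    at least E[max_i X_i] / 2 (the classical prophet inequality)."""
    for x in draws:
        if x >= tau:
            return x
    return 0.0

# Three i.i.d. Uniform[0,1] values: the median of their max is (1/2)^{1/3}.
tau = 0.5 ** (1 / 3)
trials = 100000
avg = sum(threshold_rule([random.random() for _ in range(3)], tau)
          for _ in range(trials)) / trials
print(avg)   # about 0.45, above the guarantee E[max]/2 = 0.375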

Cite as

Curtis Bechtel and Shaddin Dughmi. Delegated Stochastic Probing. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 37:1-37:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{bechtel_et_al:LIPIcs.ITCS.2021.37,
  author =	{Bechtel, Curtis and Dughmi, Shaddin},
  title =	{{Delegated Stochastic Probing}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{37:1--37:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.37},
  URN =		{urn:nbn:de:0030-drops-135763},
  doi =		{10.4230/LIPIcs.ITCS.2021.37},
  annote =	{Keywords: Delegation, Mechanism Design, Algorithmic Game Theory}
}
Document
Explicit SoS Lower Bounds from High-Dimensional Expanders

Authors: Irit Dinur, Yuval Filmus, Prahladh Harsha, and Madhur Tulsiani


Abstract
We construct an explicit and structured family of 3XOR instances which is hard for O(√{log n}) levels of the Sum-of-Squares hierarchy. In contrast to earlier constructions, which involve a random component, our systems are highly structured and can be constructed explicitly in deterministic polynomial time. Our construction is based on the high-dimensional expanders devised by Lubotzky, Samuels and Vishne, known as LSV complexes or Ramanujan complexes, and our analysis is based on two notions of expansion for these complexes: cosystolic expansion, and a local isoperimetric inequality due to Gromov. Our construction offers an interesting contrast to the recent work of Alev, Jeronimo and the last author (FOCS 2019). They showed that 3XOR instances in which the variables correspond to vertices in a high-dimensional expander are easy to solve. In contrast, in our instances the variables correspond to the edges of the complex.
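
For concreteness (our own toy example, not one of the paper's structured instances): a 3XOR instance is a system of parity constraints, each on three variables, such as x₁ ⊕ x₂ ⊕ x₅ = 1, x₂ ⊕ x₃ ⊕ x₄ = 0, x₁ ⊕ x₄ ⊕ x₅ = 1. A Sum-of-Squares lower bound at level t exhibits such systems that are far from satisfiable, yet the first t levels of the hierarchy cannot distinguish them from satisfiable ones.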

Cite as

Irit Dinur, Yuval Filmus, Prahladh Harsha, and Madhur Tulsiani. Explicit SoS Lower Bounds from High-Dimensional Expanders. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{dinur_et_al:LIPIcs.ITCS.2021.38,
  author =	{Dinur, Irit and Filmus, Yuval and Harsha, Prahladh and Tulsiani, Madhur},
  title =	{{Explicit SoS Lower Bounds from High-Dimensional Expanders}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.38},
  URN =		{urn:nbn:de:0030-drops-135774},
  doi =		{10.4230/LIPIcs.ITCS.2021.38},
  annote =	{Keywords: High-dimensional expanders, sum-of-squares, integrality gaps}
}
Document
Lower Bounds on the Running Time of Two-Way Quantum Finite Automata and Sublogarithmic-Space Quantum Turing Machines

Authors: Zachary Remscrim


Abstract
The two-way finite automaton with quantum and classical states (2QCFA), defined by Ambainis and Watrous, is a model of quantum computation whose quantum part is extremely limited; however, as they showed, 2QCFA are surprisingly powerful: a 2QCFA with only a single qubit can recognize the language L_{pal} = {w ∈ {a,b}^*: w is a palindrome} with bounded error in expected time 2^{O(n)}. We prove that their result cannot be improved upon: a 2QCFA (of any size) cannot recognize L_{pal} with bounded error in expected time 2^{o(n)}. This is the first example of a language that can be recognized with bounded error by a 2QCFA in exponential time but not in subexponential time. Moreover, we prove that a quantum Turing machine (QTM) running in space o(log n) and expected time 2^{n^{1-Ω(1)}} cannot recognize L_{pal} with bounded error; again, this is the first lower bound of its kind. Far more generally, we establish a lower bound on the running time of any 2QCFA or o(log n)-space QTM that recognizes any language L in terms of a natural "hardness measure" of L. This allows us to exhibit a large family of languages for which we have asymptotically matching lower and upper bounds on the running time of any such 2QCFA or QTM recognizer.

Cite as

Zachary Remscrim. Lower Bounds on the Running Time of Two-Way Quantum Finite Automata and Sublogarithmic-Space Quantum Turing Machines. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 39:1-39:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


@InProceedings{remscrim:LIPIcs.ITCS.2021.39,
  author =	{Remscrim, Zachary},
  title =	{{Lower Bounds on the Running Time of Two-Way Quantum Finite Automata and Sublogarithmic-Space Quantum Turing Machines}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{39:1--39:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.39},
  URN =		{urn:nbn:de:0030-drops-135781},
  doi =		{10.4230/LIPIcs.ITCS.2021.39},
  annote =	{Keywords: Quantum computation, Lower bounds, Finite automata}
}
Document
On the Complexity of #CSP^d

Authors: Jiabao Lin


Abstract
Counting CSP^d is the counting constraint satisfaction problem (#CSP for short) restricted to instances where every variable occurs a multiple of d times. This paper revisits tractable structures in #CSP and gives a complexity classification theorem for #CSP^d with algebraic complex weights. The result unifies affine functions (stabilizer states in quantum information theory) and related variants such as the local affine functions, the discovery of which led to all the recent progress on the complexity of Holant problems. Holant is a framework that generalizes counting CSP. In the literature on Holant problems, weighted constraints are often expressed as tensors (vectors) such that projections and linear transformations help analyze the structure. This paper gives an example showing that different classes of tensors distinguished by these algebraic operations may share the same closure property under tensor product and contraction.

Cite as

Jiabao Lin. On the Complexity of #CSP^d. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 40:1-40:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{lin:LIPIcs.ITCS.2021.40,
  author =	{Lin, Jiabao},
  title =	{{On the Complexity of #CSP^d}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{40:1--40:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.40},
  URN =		{urn:nbn:de:0030-drops-135793},
  doi =		{10.4230/LIPIcs.ITCS.2021.40},
  annote =	{Keywords: Constraint satisfaction problem, counting problems, Holant, complexity dichotomy}
}
Document
Interactive Proofs for Verifying Machine Learning

Authors: Shafi Goldwasser, Guy N. Rothblum, Jonathan Shafer, and Amir Yehudayoff


Abstract
We consider the following question: using a source of labeled data and interaction with an untrusted prover, what is the complexity of verifying that a given hypothesis is "approximately correct"? We study interactive proof systems for PAC verification, where a verifier that interacts with a prover is required to accept good hypotheses, and reject bad hypotheses. Both the verifier and the prover are efficient and have access to labeled data samples from an unknown distribution. We are interested in cases where the verifier can use significantly less data than is required for (agnostic) PAC learning, or use a substantially cheaper data source (e.g., using only random samples for verification, even though learning requires membership queries). We believe that today, when data and data-driven algorithms are quickly gaining prominence, the question of verifying purported outcomes of data analyses is very well-motivated. We show three main results. First, we prove that for a specific hypothesis class, verification is significantly cheaper than learning in terms of sample complexity, even if the verifier engages with the prover only in a single-round (NP-like) protocol. Moreover, for this class we prove that single-round verification is also significantly cheaper than testing closeness to the class. Second, for the broad class of Fourier-sparse boolean functions, we show a multi-round (IP-like) verification protocol, where the prover uses membership queries, and the verifier is able to assess the result while only using random samples. Third, we show that verification is not always more efficient. Namely, we show a class of functions where verification requires as many samples as learning does, up to a logarithmic factor.
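To illustrate the flavor of single-round (NP-like) verification, here is a small sketch of ours (not the paper's protocol; all names are hypothetical): the prover sends a hypothesis, and the verifier estimates its risk on fresh labeled samples. By a Hoeffding bound, certifying one fixed hypothesis needs only O(log(1/δ)/ε²) samples, independent of the complexity of the hypothesis class.

```python
import math, random

def verify_hypothesis(h, sample_oracle, eps, delta):
    """Accept iff the empirical risk of the prover's hypothesis h is small.
    Estimating the risk of a single fixed h needs only
    m = O(log(1/delta) / eps^2) samples (Hoeffding), regardless of
    how many samples learning h would have required."""
    m = math.ceil(2 * math.log(2 / delta) / eps ** 2)
    errors = sum(h(x) != y for x, y in (sample_oracle() for _ in range(m)))
    return errors / m <= eps

# Toy usage: labels are the XOR of the first two bits of a random 5-bit input.
target = lambda x: x[0] ^ x[1]
def oracle():
    x = tuple(random.randint(0, 1) for _ in range(5))
    return x, target(x)

print(verify_hypothesis(target, oracle, eps=0.1, delta=0.05))  # a correct h passes
```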

Cite as

Shafi Goldwasser, Guy N. Rothblum, Jonathan Shafer, and Amir Yehudayoff. Interactive Proofs for Verifying Machine Learning. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 41:1-41:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{goldwasser_et_al:LIPIcs.ITCS.2021.41,
  author =	{Goldwasser, Shafi and Rothblum, Guy N. and Shafer, Jonathan and Yehudayoff, Amir},
  title =	{{Interactive Proofs for Verifying Machine Learning}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{41:1--41:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.41},
  URN =		{urn:nbn:de:0030-drops-135806},
  doi =		{10.4230/LIPIcs.ITCS.2021.41},
  annote =	{Keywords: PAC learning, Fourier analysis of boolean functions, Complexity gaps, Complexity lower bounds, Goldreich-Levin algorithm, Kushilevitz-Mansour algorithm, Distribution testing}
}
Document
Ordered Graph Limits and Their Applications

Authors: Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Yuichi Yoshida


Abstract
The emerging theory of graph limits offers an analytic perspective on graphs, showing that many important concepts and tools in graph theory and its applications can be described more naturally (and sometimes proved more easily) in analytic language. We extend the theory of graph limits to the ordered setting, presenting a limit object for dense vertex-ordered graphs, which we call an orderon. As a special case, this yields limit objects for matrices whose rows and columns are ordered, and for dynamic graphs that expand (via vertex insertions) over time. Along the way, we devise an ordered locality-preserving variant of the cut distance between ordered graphs, showing that two graphs are close with respect to this distance if and only if they are similar in terms of their ordered subgraph frequencies. We show that the space of orderons is compact with respect to this distance notion, which is key to a successful analysis of combinatorial objects through their limits. For the proof we combine techniques used in the unordered setting with several new techniques specifically designed to overcome the challenges arising in the ordered setting. We derive several applications of the ordered limit theory in extremal combinatorics, sampling, and property testing in ordered graphs. In particular, we prove a new ordered analogue of the well-known result by Alon and Stav [RS&A'08] on the furthest graph from a hereditary property; this is the first known result of this type in the ordered setting. Unlike in the unordered regime, here the Erdős–Rényi random graph 𝐆(n, p) with an ordering over the vertices is not always asymptotically the furthest from the property for some p. However, using our ordered limit theory, we show that random graphs generated by a stochastic block model, where the blocks are consecutive in the vertex ordering, are (approximately) the furthest. Additionally, we describe an alternative analytic proof of the ordered graph removal lemma [Alon et al., FOCS'17].

Cite as

Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Yuichi Yoshida. Ordered Graph Limits and Their Applications. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 42:1-42:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{beneliezer_et_al:LIPIcs.ITCS.2021.42,
  author =	{Ben-Eliezer, Omri and Fischer, Eldar and Levi, Amit and Yoshida, Yuichi},
  title =	{{Ordered Graph Limits and Their Applications}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{42:1--42:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.42},
  URN =		{urn:nbn:de:0030-drops-135815},
  doi =		{10.4230/LIPIcs.ITCS.2021.42},
  annote =	{Keywords: graph limits, ordered graph, graphon, cut distance, removal lemma}
}
Document
Error Correcting Codes for Uncompressed Messages

Authors: Ofer Grossman and Justin Holmgren


Abstract
Most types of messages we transmit (e.g., video, audio, images, text) are not fully compressed, since they do not have known efficient and information-theoretically optimal compression algorithms. When transmitting such messages, standard error correcting codes fail to take advantage of the fact that messages are not fully compressed. We show that in this setting, it is sub-optimal to use standard error correction. We consider a model where there is a set of "valid messages" which the sender may send that may not be efficiently compressible, but where it is possible for the receiver to recognize valid messages. In this model, we construct a (probabilistic) encoding procedure that achieves better tradeoffs between data rates and error-resilience (compared to just applying a standard error correcting code). Additionally, our techniques yield improved efficiently decodable (probabilistic) codes for fully compressed messages (the standard setting where the set of valid messages is all binary strings) in the high-rate regime.

Cite as

Ofer Grossman and Justin Holmgren. Error Correcting Codes for Uncompressed Messages. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{grossman_et_al:LIPIcs.ITCS.2021.43,
  author =	{Grossman, Ofer and Holmgren, Justin},
  title =	{{Error Correcting Codes for Uncompressed Messages}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.43},
  URN =		{urn:nbn:de:0030-drops-135828},
  doi =		{10.4230/LIPIcs.ITCS.2021.43},
  annote =	{Keywords: Coding Theory, List Decoding}
}
Document
Total Functions in the Polynomial Hierarchy

Authors: Robert Kleinberg, Oliver Korten, Daniel Mitropolsky, and Christos Papadimitriou


Abstract
We identify several genres of search problems beyond NP for which the existence of solutions is guaranteed. One class that seems especially rich in such problems is PEPP (for "polynomial empty pigeonhole principle"), which includes problems related to existence theorems proved through the union bound, such as finding a bit string that is far from all codewords and finding an explicit rigid matrix, as well as a problem we call Complexity, capturing Complexity Theory’s quest. When the union bound is generous, in that solutions constitute at least a polynomial fraction of the domain, we have a family of seemingly weaker classes α-PEPP, which are inside FP^NP|poly. Higher in the hierarchy, we identify the constructive version of the Sauer-Shelah lemma and the appropriate generalization of PPP that contains it, as well as the problem of finding a king in a tournament (a vertex k such that all other vertices are defeated by k, or by somebody k defeated).
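The king guarantee is a concrete instance of the totality phenomenon discussed above: every tournament has a king, and in fact any vertex of maximum out-degree is one, so a king can be found by simple counting. A small sketch of ours:

```python
def find_king(n, beats):
    """Return a king of a tournament on vertices 0..n-1, where
    beats[u][v] is True iff u defeats v. A maximum out-degree vertex
    is always a king: every other vertex is beaten by it directly
    or by one of its victims."""
    king = max(range(n), key=lambda u: sum(beats[u]))
    for v in range(n):  # sanity-check the king property
        if v != king and not beats[king][v]:
            assert any(beats[king][w] and beats[w][v] for w in range(n))
    return king

# 3-cycle tournament (0 beats 1, 1 beats 2, 2 beats 0): every vertex is a king.
beats = [[False, True, False], [False, False, True], [True, False, False]]
print(find_king(3, beats))
```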

Cite as

Robert Kleinberg, Oliver Korten, Daniel Mitropolsky, and Christos Papadimitriou. Total Functions in the Polynomial Hierarchy. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 44:1-44:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{kleinberg_et_al:LIPIcs.ITCS.2021.44,
  author =	{Kleinberg, Robert and Korten, Oliver and Mitropolsky, Daniel and Papadimitriou, Christos},
  title =	{{Total Functions in the Polynomial Hierarchy}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{44:1--44:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.44},
  URN =		{urn:nbn:de:0030-drops-135835},
  doi =		{10.4230/LIPIcs.ITCS.2021.44},
  annote =	{Keywords: total complexity, polynomial hierarchy, pigeonhole principle}
}
Document
Relaxing Common Belief for Social Networks

Authors: Noah Burrell and Grant Schoenebeck


Abstract
We propose a relaxation of common belief called factional belief that is suitable for the analysis of strategic coordination on social networks. We show how this definition can be used to analyze revolt games on general graphs; in particular, we give an efficient algorithm that establishes a structural characterization of the possible equilibria of such games. This extends prior work on common knowledge and common belief, which has been too restrictive for use in understanding strategic coordination and cooperation in social network settings.

Cite as

Noah Burrell and Grant Schoenebeck. Relaxing Common Belief for Social Networks. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 45:1-45:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{burrell_et_al:LIPIcs.ITCS.2021.45,
  author =	{Burrell, Noah and Schoenebeck, Grant},
  title =	{{Relaxing Common Belief for Social Networks}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{45:1--45:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.45},
  URN =		{urn:nbn:de:0030-drops-135841},
  doi =		{10.4230/LIPIcs.ITCS.2021.45},
  annote =	{Keywords: Social networks, network revolt games, common belief}
}
Document
Tiered Random Matching Markets: Rank Is Proportional to Popularity

Authors: Itai Ashlagi, Mark Braverman, Amin Saberi, Clayton Thomas, and Geng Zhao


Abstract
We study the stable marriage problem in two-sided markets with randomly generated preferences. Agents on each side of the market are divided into a constant number of "soft" tiers, which capture agents' qualities. Specifically, every agent within a tier has the same public score, and agents on each side have preferences independently generated proportionally to the public scores of the other side. We compute the expected average rank that agents in each tier have for their partners in the man-optimal stable matching, and prove concentration results for the average rank in asymptotically large markets. Furthermore, despite having a significant effect on ranks, public scores do not strongly influence the probability of an agent matching to a given tier of the other side. This generalizes the results of Pittel [Pittel, 1989], who analyzed markets with uniform preferences. The results quantitatively demonstrate the effect of competition due to the heterogeneous attractiveness of agents in the market.
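The model is easy to simulate; the sketch below (ours, with arbitrary tier sizes and scores) samples preference lists proportionally to public scores, runs man-proposing deferred acceptance, and tabulates the average partner rank per tier.

```python
import random

def sample_prefs(scores):
    """Preference order over the other side, drawn without replacement
    with probability proportional to public scores."""
    pool, weights, prefs = list(range(len(scores))), list(scores), []
    while pool:
        i = random.choices(range(len(pool)), weights=weights)[0]
        prefs.append(pool.pop(i)); weights.pop(i)
    return prefs

def deferred_acceptance(m_prefs, w_prefs):
    """Man-proposing Gale-Shapley; returns the woman matched to each man."""
    n = len(m_prefs)
    w_rank = [{m: r for r, m in enumerate(p)} for p in w_prefs]
    nxt, cur, free = [0] * n, [None] * n, list(range(n))
    while free:
        m = free.pop()
        w = m_prefs[m][nxt[m]]; nxt[m] += 1
        if cur[w] is None:
            cur[w] = m
        elif w_rank[w][m] < w_rank[w][cur[w]]:
            free.append(cur[w]); cur[w] = m
        else:
            free.append(m)
    return {cur[w]: w for w in range(n)}

n = 100
scores = [2.0] * (n // 2) + [1.0] * (n // 2)   # two "soft" tiers on each side
m_prefs = [sample_prefs(scores) for _ in range(n)]
w_prefs = [sample_prefs(scores) for _ in range(n)]
match = deferred_acceptance(m_prefs, w_prefs)
for tier, idx in [("top", range(n // 2)), ("bottom", range(n // 2, n))]:
    print(tier, sum(m_prefs[m].index(match[m]) + 1 for m in idx) / (n // 2))
```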

Cite as

Itai Ashlagi, Mark Braverman, Amin Saberi, Clayton Thomas, and Geng Zhao. Tiered Random Matching Markets: Rank Is Proportional to Popularity. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 46:1-46:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{ashlagi_et_al:LIPIcs.ITCS.2021.46,
  author =	{Ashlagi, Itai and Braverman, Mark and Saberi, Amin and Thomas, Clayton and Zhao, Geng},
  title =	{{Tiered Random Matching Markets: Rank Is Proportional to Popularity}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{46:1--46:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.46},
  URN =		{urn:nbn:de:0030-drops-135851},
  doi =		{10.4230/LIPIcs.ITCS.2021.46},
  annote =	{Keywords: Stable matching, stable marriage problem, tiered random markets, deferred acceptance}
}
Document
Black-Box Uselessness: Composing Separations in Cryptography

Authors: Geoffroy Couteau, Pooya Farshim, and Mohammad Mahmoody


Abstract
Black-box separations have been successfully used to identify the limits of a powerful set of tools in cryptography, namely those of black-box reductions. They allow proving that a large set of techniques are not capable of basing one primitive 𝒫 on another 𝒬. Such separations, however, do not say anything about the power of the combination of primitives 𝒬₁,𝒬₂ for constructing 𝒫, even if 𝒫 cannot be based on 𝒬₁ or 𝒬₂ alone. By introducing and formalizing the notion of black-box uselessness, we develop a framework that allows us to make such conclusions. At an informal level, we call primitive 𝒬 black-box useless (BBU) for 𝒫 if 𝒬 cannot help constructing 𝒫 in a black-box way, even in the presence of another primitive 𝒵. This is formalized by saying that 𝒬 is BBU for 𝒫 if for every auxiliary primitive 𝒵, the existence of a black-box construction of 𝒫 from (𝒬,𝒵) implies the existence of a black-box construction of 𝒫 from 𝒵 alone. We also formalize various other notions of black-box uselessness, and consider in particular the setting of efficient black-box constructions when the number of queries to 𝒬 is below a threshold. Impagliazzo and Rudich (STOC'89) initiated the study of black-box separations by separating key agreement from one-way functions. We prove a number of initial results in this direction, which indicate that one-way functions are perhaps also black-box useless for key agreement. In particular, we show that OWFs are black-box useless in any construction of key agreement in either of the following settings: (1) the key agreement has perfect correctness and one of the parties calls the OWF a constant number of times; (2) the key agreement consists of a single round of interaction (as in Merkle-type protocols). We conjecture that OWFs are indeed black-box useless for general key agreement. We also show that certain techniques for proving black-box separations can be lifted to the uselessness regime. In particular, we show that the lower bounds of Canetti, Kalai, and Paneth (TCC'15) as well as Garg, Mahmoody, and Mohammed (Crypto'17 & TCC'17) for assumptions behind indistinguishability obfuscation (IO) can be extended to derive black-box uselessness of a variety of primitives for obtaining (approximately correct) IO. These results follow the so-called "compiling out" technique, which we prove to imply black-box uselessness. Finally, we study the complementary landscape of black-box uselessness, namely black-box helpfulness. We put forth the conjecture that one-way functions are black-box helpful for building collision-resistant hash functions. We define two natural relaxations of this conjecture, and prove that both of these conjectures are implied by a natural conjecture regarding random permutations equipped with a collision finder oracle, as defined by Simon (Eurocrypt'98). This conjecture may also be of interest in other contexts, such as amplification of hardness.

Cite as

Geoffroy Couteau, Pooya Farshim, and Mohammad Mahmoody. Black-Box Uselessness: Composing Separations in Cryptography. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 47:1-47:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{couteau_et_al:LIPIcs.ITCS.2021.47,
  author =	{Couteau, Geoffroy and Farshim, Pooya and Mahmoody, Mohammad},
  title =	{{Black-Box Uselessness: Composing Separations in Cryptography}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{47:1--47:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.47},
  URN =		{urn:nbn:de:0030-drops-135869},
  doi =		{10.4230/LIPIcs.ITCS.2021.47},
  annote =	{Keywords: Black-Box Reductions, Separations, One-Way Functions, Key Agreement}
}
Document
Pseudobinomiality of the Sticky Random Walk

Authors: Venkatesan Guruswami and Vinayak M. Kumar


Abstract
Random walks on expanders are a central and versatile tool in pseudorandomness. If an arbitrary half of the vertices of an expander graph are marked, known Chernoff bounds for expander walks imply that the number M of marked vertices visited in a long n-step random walk strongly concentrates around the expected n/2 value. Surprisingly, it was recently shown that the parity of M also has exponentially small bias. Is there a common unification of these results? What other statistics about M resemble the binomial distribution (the Hamming weight of a random n-bit string)? To gain insight into such questions, we analyze a simpler model called the sticky random walk. This model is a natural stepping stone towards understanding expander random walks, and we also show that it is a necessary step. The sticky random walk starts with a random bit and then each subsequent bit independently equals the previous bit with probability (1+λ)/2. Here λ is the proxy for the expander’s (second largest) eigenvalue. Using the Krawtchouk expansion of functions, we derive several probabilistic results about the sticky random walk. We show an asymptotically tight Θ(λ) bound on the total variation distance between the (Hamming weight of the) sticky walk and the binomial distribution. We prove that the correlation between the majority and parity bit of the sticky walk is bounded by O(n^{-1/4}). This lends hope to unifying Chernoff bounds and parity concentration for expander random walks, as well as to establishing other interesting statistical properties of such walks.
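The sticky walk is simple to simulate, and the Θ(λ) total-variation bound can be checked empirically (up to sampling error). A sketch of ours, with arbitrary parameter choices:

```python
import math, random
from collections import Counter

def sticky_weight(n, lam):
    """Hamming weight of an n-step sticky random walk: each bit equals
    the previous one with probability (1 + lam) / 2."""
    b = random.randint(0, 1); w = b
    for _ in range(n - 1):
        if random.random() >= (1 + lam) / 2:   # flip w.p. (1 - lam)/2
            b ^= 1
        w += b
    return w

def tv_to_binomial(n, lam, trials=100_000):
    """Empirical TV distance between the walk's weight and Binomial(n, 1/2)."""
    counts = Counter(sticky_weight(n, lam) for _ in range(trials))
    return 0.5 * sum(abs(counts[k] / trials - math.comb(n, k) / 2 ** n)
                     for k in range(n + 1))

for lam in (0.1, 0.2, 0.4):       # distances grow roughly linearly in lam
    print(lam, round(tv_to_binomial(20, lam), 3))
```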

Cite as

Venkatesan Guruswami and Vinayak M. Kumar. Pseudobinomiality of the Sticky Random Walk. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 48:1-48:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{guruswami_et_al:LIPIcs.ITCS.2021.48,
  author =	{Guruswami, Venkatesan and Kumar, Vinayak M.},
  title =	{{Pseudobinomiality of the Sticky Random Walk}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{48:1--48:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.48},
  URN =		{urn:nbn:de:0030-drops-135870},
  doi =		{10.4230/LIPIcs.ITCS.2021.48},
  annote =	{Keywords: Expander Graphs, Fourier analysis, Markov Chains, Pseudorandomness, Random Walks}
}
Document
Robust Quantum Entanglement at (Nearly) Room Temperature

Authors: Lior Eldar


Abstract
We formulate an average-case analog of the NLTS conjecture of Freedman and Hastings (QIC 2014) by asking whether there exist topologically ordered systems with corresponding local Hamiltonians for which the thermal Gibbs state at constant temperature cannot even be approximated by shallow quantum circuits. We then prove this conjecture for nearly optimal parameters: we construct a quantum error-correcting code whose corresponding (logarithmically local) Hamiltonian has the following property: for nearly constant temperature (temperature decaying as 1/log²log(n)), the thermal Gibbs state of that Hamiltonian cannot be approximated by any circuit of depth less than log(n), and it is highly entangled in a well-defined way. This implies that appropriately chosen local Hamiltonians can give rise to ground-state long-range entanglement which can survive without active error correction at temperatures which are nearly independent of the system size, thereby improving exponentially upon previously known bounds.

Cite as

Lior Eldar. Robust Quantum Entanglement at (Nearly) Room Temperature. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 49:1-49:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{eldar:LIPIcs.ITCS.2021.49,
  author =	{Eldar, Lior},
  title =	{{Robust Quantum Entanglement at (Nearly) Room Temperature}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{49:1--49:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.49},
  URN =		{urn:nbn:de:0030-drops-135886},
  doi =		{10.4230/LIPIcs.ITCS.2021.49},
  annote =	{Keywords: Quantum error-correcting codes, Quantum Entanglement, Quantum Locally-Testable Codes, Local Hamiltonians, quantum PCP, NLTS}
}
Document
Time-Space Lower Bounds for Simulating Proof Systems with Quantum and Randomized Verifiers

Authors: Abhijit S. Mudigonda and R. Ryan Williams


Abstract
A line of work initiated by Fortnow in 1997 has proven model-independent time-space lower bounds for the SAT problem and related problems within the polynomial-time hierarchy. For example, for the SAT problem, the state-of-the-art is that the problem cannot be solved by random-access machines in n^c time and n^o(1) space simultaneously for c < 2cos(π/7) ≈ 1.801. We extend this lower bound approach to the quantum and randomized domains. Combining Grover’s algorithm with components from SAT time-space lower bounds, we show that there are problems verifiable in O(n) time with quantum Merlin-Arthur protocols that cannot be solved in n^c time and n^o(1) space simultaneously for c < (3+√3)/2 ≈ 2.366, a super-quadratic time lower bound. This result and the prior work on SAT can both be viewed as consequences of a more general formula for time lower bounds against small-space algorithms, whose asymptotics we study in full. We also show lower bounds against randomized algorithms: there are problems verifiable in O(n) time with (classical) Merlin-Arthur protocols that cannot be solved in n^c randomized time and O(log n) space simultaneously for c < 1.465, improving a result of Diehl. For quantum Merlin-Arthur protocols, the lower bound in this setting can be improved to c < 1.5.
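The exponents quoted above are closed-form algebraic constants; a one-line check (ours) confirms the numerics.

```python
import math
# SAT exponent 2cos(pi/7) and the quantum Merlin-Arthur exponent (3+sqrt(3))/2
print(2 * math.cos(math.pi / 7))   # 1.8019...
print((3 + math.sqrt(3)) / 2)      # 2.3660...
```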

Cite as

Abhijit S. Mudigonda and R. Ryan Williams. Time-Space Lower Bounds for Simulating Proof Systems with Quantum and Randomized Verifiers. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{mudigonda_et_al:LIPIcs.ITCS.2021.50,
  author =	{Mudigonda, Abhijit S. and Williams, R. Ryan},
  title =	{{Time-Space Lower Bounds for Simulating Proof Systems with Quantum and Randomized Verifiers}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.50},
  URN =		{urn:nbn:de:0030-drops-135897},
  doi =		{10.4230/LIPIcs.ITCS.2021.50},
  annote =	{Keywords: Time-space tradeoffs, lower bounds, QMA}
}
Document
Online Search with a Hint

Authors: Spyros Angelopoulos


Abstract
The linear search problem, informally known as the cow path problem, is one of the fundamental problems in search theory. In this problem, an immobile target is hidden at some unknown position on an unbounded line, and a mobile searcher, initially positioned at some specific point of the line called the root, must traverse the line so as to locate the target. The objective is to minimize the worst-case ratio of the distance traversed by the searcher to the distance of the target from the root, which is known as the competitive ratio of the search. In this work we study this problem in a setting in which the searcher has a hint concerning the target. We consider three settings with regard to the nature of the hint: i) the hint suggests the exact position of the target on the line; ii) the hint suggests the direction of the optimal search (i.e., to the left or the right of the root); and iii) the hint is a general k-bit string that encodes some information concerning the target. Our objective is to study the Pareto-efficiency of strategies in this model. Namely, we seek optimal, or near-optimal, tradeoffs between the searcher’s performance if the hint is correct (i.e., provided by a trusted source) and if the hint is incorrect (i.e., provided by an adversary).
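As background for the tradeoffs studied here, the classical hint-free strategy doubles the turning distance on alternating sides of the root and achieves the optimal competitive ratio of 9. The simulation below is our own sketch of that baseline; the measured ratio approaches 9 as the target lands just past turning points at larger and larger distances.

```python
def doubling_search_cost(target):
    """Distance traversed by the classical doubling strategy (turning
    points at 1, 2, 4, ... on alternating sides of the root) until it
    reaches a target at a nonzero position on the line."""
    cost, step, side = 0.0, 1.0, 1
    while True:
        turn = side * step
        if (target > 0) == (turn > 0) and abs(target) <= abs(turn):
            return cost + abs(target)       # reached on this leg
        cost += 2 * abs(turn)               # out to the turning point and back
        step *= 2; side = -side

# Worst cases sit just beyond a turning point; the ratio tends to 9.
for x in (1.001, -2.001, 4.001, -8.001, 16.001):
    print(x, round(doubling_search_cost(x) / abs(x), 3))
```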

Cite as

Spyros Angelopoulos. Online Search with a Hint. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 51:1-51:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{angelopoulos:LIPIcs.ITCS.2021.51,
  author =	{Angelopoulos, Spyros},
  title =	{{Online Search with a Hint}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{51:1--51:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.51},
  URN =		{urn:nbn:de:0030-drops-135902},
  doi =		{10.4230/LIPIcs.ITCS.2021.51},
  annote =	{Keywords: Search problems, searching on the line, competitive analysis, predictions}
}
Document
Sequential Defaulting in Financial Networks

Authors: Pál András Papp and Roger Wattenhofer


Abstract
We consider financial networks, where banks are connected by contracts such as debts or credit default swaps. We study the clearing problem in these systems: we want to know which banks end up in a default, and what portion of their liabilities these defaulting banks can fulfill. We analyze these networks in a sequential model where banks announce their default one at a time, and the system evolves in a step-by-step manner. We first consider the reversible model of these systems, where banks may return from a default. We show that the stabilization time in this model can heavily depend on the ordering of announcements. However, we also show that there are systems where for any choice of ordering, the process lasts for an exponential number of steps before an eventual stabilization. We also show that finding the ordering with the smallest (or largest) number of banks ending up in default is an NP-hard problem. Furthermore, we prove that defaulting early can be an advantageous strategy for banks in some cases, and in general, finding the best time for a default announcement is NP-hard. Finally, we discuss how changing some properties of this setting affects the stabilization time of the process, and then use these techniques to devise a monotone model of the systems, which ensures that every network stabilizes eventually.
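For intuition, here is a deliberately simplified clearing computation in the spirit of the classical Eisenberg-Noe model: debt contracts only (no credit default swaps), simultaneous rather than sequential updates, and illustrative numbers of our own. It iterates payments to a fixed point and reports which banks default.

```python
def clearing(external, liab, iters=200):
    """Fixed-point clearing for a debt-only network.
    external[i]: external assets of bank i; liab[i][j]: face value i owes j.
    Bank i defaults if it cannot pay its obligations in full."""
    n = len(external)
    total = [sum(row) for row in liab]       # total face-value liabilities
    pay = total[:]                           # optimistic start: pay in full
    for _ in range(iters):
        recv = [sum(pay[i] * liab[i][j] / total[i]
                    for i in range(n) if total[i]) for j in range(n)]
        pay = [min(total[i], external[i] + recv[i]) for i in range(n)]
    return pay, [pay[i] < total[i] for i in range(n)]

# Bank 0 owes 10 to bank 1, which owes 10 to bank 2; bank 0 is short on assets.
pay, default = clearing([4.0, 2.0, 0.0], [[0, 10, 0], [0, 0, 10], [0, 0, 0]])
print(pay, default)   # bank 0 pays 4, bank 1 pays 6: both default
```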

Cite as

Pál András Papp and Roger Wattenhofer. Sequential Defaulting in Financial Networks. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{papp_et_al:LIPIcs.ITCS.2021.52,
  author =	{Papp, P\'{a}l Andr\'{a}s and Wattenhofer, Roger},
  title =	{{Sequential Defaulting in Financial Networks}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.52},
  URN =		{urn:nbn:de:0030-drops-135919},
  doi =		{10.4230/LIPIcs.ITCS.2021.52},
  annote =	{Keywords: Financial Network, Sequential Defaulting, Credit Default Swap, Clearing Problem, Stabilization Time}
}
Document
No Quantum Speedup over Gradient Descent for Non-Smooth Convex Optimization

Authors: Ankit Garg, Robin Kothari, Praneeth Netrapalli, and Suhail Sherif


Abstract
We study the first-order convex optimization problem, where we have black-box access to a (not necessarily smooth) function f:ℝⁿ → ℝ and its (sub)gradient. Our goal is to find an ε-approximate minimum of f starting from a point that is distance at most R from the true minimum. If f is G-Lipschitz, then the classic gradient descent algorithm solves this problem with O((GR/ε)²) queries. Importantly, the number of queries is independent of the dimension n, and gradient descent is optimal in this regard: no deterministic or randomized algorithm can achieve better complexity that is still independent of the dimension n. In this paper we reprove the randomized lower bound of Ω((GR/ε)²) using a simpler argument than previous lower bounds. We then show that although the function family used in the lower bound is hard for randomized algorithms, it can be solved using O(GR/ε) quantum queries. Next, we show an improved lower bound against quantum algorithms using a different set of instances, and establish our main result that in general even quantum algorithms need Ω((GR/ε)²) queries to solve the problem. Hence there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family.
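The classical O((GR/ε)²) upper bound referenced above is achieved by averaged subgradient descent with step size η = R/(G√T); a minimal unconstrained sketch of ours, on a 1-Lipschitz test function:

```python
import numpy as np

def subgradient_method(subgrad, x0, G, R, eps):
    """Averaged subgradient descent for a G-Lipschitz convex f with
    ||x0 - x*|| <= R. Standard analysis: T = (G*R/eps)^2 queries give
    an eps-approximate minimum, independent of the dimension."""
    T = int(np.ceil((G * R / eps) ** 2))
    eta = R / (G * np.sqrt(T))
    x, avg = x0.astype(float), np.zeros_like(x0, dtype=float)
    for _ in range(T):
        avg += x / T
        x = x - eta * subgrad(x)
    return avg

f = lambda x: np.max(np.abs(x))        # convex and 1-Lipschitz in l2
def subgrad(x):
    g = np.zeros_like(x)
    i = np.argmax(np.abs(x))
    g[i] = np.sign(x[i])
    return g

x0 = np.full(10, 0.3)                  # ||x0 - 0|| < 1, so R = 1 is valid
x = subgradient_method(subgrad, x0, G=1.0, R=1.0, eps=0.05)
print(f(x))                            # ~eps, matching the GR/sqrt(T) bound
```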

Cite as

Ankit Garg, Robin Kothari, Praneeth Netrapalli, and Suhail Sherif. No Quantum Speedup over Gradient Descent for Non-Smooth Convex Optimization. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{garg_et_al:LIPIcs.ITCS.2021.53,
  author =	{Garg, Ankit and Kothari, Robin and Netrapalli, Praneeth and Sherif, Suhail},
  title =	{{No Quantum Speedup over Gradient Descent for Non-Smooth Convex Optimization}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{53:1--53:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.53},
  URN =		{urn:nbn:de:0030-drops-135921},
  doi =		{10.4230/LIPIcs.ITCS.2021.53},
  annote =	{Keywords: Quantum algorithms, Gradient descent, Convex optimization}
}
Document
Quantum Versus Randomized Communication Complexity, with Efficient Players

Authors: Uma Girish, Ran Raz, and Avishay Tal


Abstract
We study a new type of separations between quantum and classical communication complexity, separations that are obtained using quantum protocols where all parties are efficient, in the sense that they can be implemented by small quantum circuits, with oracle access to their inputs. Our main result qualitatively matches the strongest known separation between quantum and classical communication complexity [Dmitry Gavinsky, 2016] and is obtained using a quantum protocol where all parties are efficient. More precisely, we give an explicit partial Boolean function f over inputs of length N, such that: (1) f can be computed by a simultaneous-message quantum protocol with communication complexity polylog(N) (where at the beginning of the protocol Alice and Bob also have polylog(N) entangled EPR pairs). (2) Any classical randomized protocol for f, with any number of rounds, has communication complexity at least Ω̃(N^{1/4}). (3) All parties in the quantum protocol of Item (1) (Alice, Bob and the referee) can be implemented by quantum circuits of size polylog(N) (where Alice and Bob have oracle access to their inputs). Items (1) and (2) qualitatively match the strongest known separation between quantum and classical communication complexity, proved by Gavinsky [Dmitry Gavinsky, 2016]. Item (3) is new. (Our result is incomparable to that of Gavinsky: while he obtained a quantitatively better lower bound of Ω(N^{1/2}) in the classical case, the referee in his quantum protocol is inefficient.) Exponential separations of quantum and classical communication complexity have been studied in numerous previous works, but to the best of our knowledge the efficiency of the parties in the quantum protocol has not been addressed, and in most previous separations the quantum parties seem to be inefficient. The only separations that we know of that have efficient quantum parties are the recent separations that are based on lifting [Arkadev Chattopadhyay et al., 2019; Arkadev Chattopadhyay et al., 2019]. However, these separations seem to require quantum protocols with at least two rounds of communication, so they imply a separation of two-way quantum and classical communication complexity but they do not give the stronger separations of simultaneous-message quantum communication complexity vs. two-way classical communication complexity (or even one-way quantum communication complexity vs. two-way classical communication complexity). Our proof technique is completely new, in the context of communication complexity, and is based on techniques from [Ran Raz and Avishay Tal, 2019]. Our function f is based on a lift of the forrelation problem, using XOR as a gadget.
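The forrelation quantity behind f can be computed by brute force for small n. The sketch below is ours and uses the normalization Φ_{f,g} = 2^{-3n/2} Σ_{x,y} f(x)(-1)^{x⋅y} g(y) as we recall it from the forrelation literature; treat that constant as an assumption.

```python
import itertools
import numpy as np

def hadamard_signs(n):
    """Matrix M with M[x, y] = (-1)^{x.y} over {0,1}^n."""
    xs = np.array(list(itertools.product([0, 1], repeat=n)))
    return (-1.0) ** (xs @ xs.T % 2)

def forrelation(f, g, n):
    """Phi_{f,g} = 2^{-3n/2} sum_{x,y} f(x) (-1)^{x.y} g(y),
    with f, g given as +/-1 vectors of length 2^n (assumed normalization)."""
    return float(f @ hadamard_signs(n) @ g) / 2 ** (1.5 * n)

n = 4
rng = np.random.default_rng(0)
f = rng.choice([-1.0, 1.0], size=2 ** n)
g_forr = np.where(hadamard_signs(n) @ f >= 0, 1.0, -1.0)  # tracks f's spectrum
g_rand = rng.choice([-1.0, 1.0], size=2 ** n)
print(forrelation(f, g_forr, n))   # large: a "forrelated" pair
print(forrelation(f, g_rand, n))   # near zero: an independent pair
```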

Cite as

Uma Girish, Ran Raz, and Avishay Tal. Quantum Versus Randomized Communication Complexity, with Efficient Players. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 54:1-54:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{girish_et_al:LIPIcs.ITCS.2021.54,
  author =	{Girish, Uma and Raz, Ran and Tal, Avishay},
  title =	{{Quantum Versus Randomized Communication Complexity, with Efficient Players}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{54:1--54:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.54},
  URN =		{urn:nbn:de:0030-drops-135932},
  doi =		{10.4230/LIPIcs.ITCS.2021.54},
  annote =	{Keywords: Exponential Separation, Quantum, Randomized, Communication, Complexity, Forrelation}
}
Document
Agnostic Learning with Unknown Utilities

Authors: Kush Bhatia, Peter L. Bartlett, Anca D. Dragan, and Jacob Steinhardt


Abstract
Traditional learning approaches for classification implicitly assume that each mistake has the same cost. In many real-world problems, though, the utility of a decision depends on the underlying context x and decision y; for instance, misclassifying a stop sign is worse than misclassifying a road-side postbox. However, directly incorporating these utilities into the learning objective is often infeasible since these can be quite complex and difficult for humans to specify. We formally study this as agnostic learning with unknown utilities: given a dataset S = {x_1, …, x_n} where each data point x_i ∼ 𝒟_x from some unknown distribution 𝒟_x, the objective of the learner is to output a function f in some class of decision functions ℱ with small excess risk. This risk measures the performance of the output predictor f with respect to the best predictor in the class ℱ on the unknown underlying utility u^*: 𝒳×𝒴 → [0,1]. This utility u^* is not assumed to have any specific structure and is allowed to be any bounded function. This raises an interesting question: is learning even possible in our setup, given that a generalizable estimate of the utility u^* might not be obtainable from finitely many samples? Surprisingly, we show that estimating the utilities of only the sampled points S suffices to learn a decision function which generalizes well. With this insight, we study mechanisms for eliciting information from human experts which allow a learner to estimate the utilities u^* on the set S. While humans find it difficult to directly provide utility values reliably, it is often easier for them to provide comparison feedback based on these utilities. We show that, unlike in the realizable setup, the vanilla comparison queries, where humans compare a pair of decisions for a single input x, are insufficient. We introduce a family of elicitation mechanisms by generalizing comparisons, called the k-comparison oracle, which enables the learner to ask for comparisons across k different inputs x at once. We show that the excess risk in our agnostic learning framework decreases at a rate of O(1/k) with such queries. This result brings out an interesting accuracy-elicitation trade-off: as the order k of the oracle increases, the comparative queries become harder to elicit from humans but allow for more accurate learning.

Cite as

Kush Bhatia, Peter L. Bartlett, Anca D. Dragan, and Jacob Steinhardt. Agnostic Learning with Unknown Utilities. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 55:1-55:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{bhatia_et_al:LIPIcs.ITCS.2021.55,
  author =	{Bhatia, Kush and Bartlett, Peter L. and Dragan, Anca D. and Steinhardt, Jacob},
  title =	{{Agnostic Learning with Unknown Utilities}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{55:1--55:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.55},
  URN =		{urn:nbn:de:0030-drops-135947},
  doi =		{10.4230/LIPIcs.ITCS.2021.55},
  annote =	{Keywords: agnostic learning, learning by comparisons, utility estimation, active learning}
}
Document
On Distributed Differential Privacy and Counting Distinct Elements

Authors: Lijie Chen, Badih Ghazi, Ravi Kumar, and Pasin Manurangsi


Abstract
We study the setup where each of n users holds an element from a discrete set, and the goal is to count the number of distinct elements across all users, under the constraint of (ε,δ)-differential privacy: - In the non-interactive local setting, we prove that the additive error of any protocol is Ω(n) for any constant ε and for any δ inverse polynomial in n. - In the single-message shuffle setting, we prove a lower bound of Ω̃(n) on the error for any constant ε and for some δ inverse quasi-polynomial in n. We do so by building on the moment-matching method from the literature on distribution estimation. - In the multi-message shuffle setting, we give a protocol with at most one message per user in expectation and with an error of Õ(√n) for any constant ε and for any δ inverse polynomial in n. Our protocol is also robustly shuffle private, and our error of √n matches a known lower bound for such protocols. Our proof technique relies on a new notion, that we call dominated protocols, and which can also be used to obtain the first non-trivial lower bounds against multi-message shuffle protocols for the well-studied problems of selection and learning parity. Our first lower bound for estimating the number of distinct elements provides the first ω(√n) separation between global sensitivity and error in local differential privacy, thus answering an open question of Vadhan (2017). We also provide a simple construction that gives Ω̃(n) separation between global sensitivity and error in two-party differential privacy, thereby answering an open question of McGregor et al. (2011).
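The benchmark behind these separations is the low global sensitivity of the statistic: adding or removing one user changes the distinct count by at most 1, so in the central model the Laplace mechanism already achieves O(1/ε) additive error. A minimal central-model sketch of ours:

```python
import numpy as np

def dp_count_distinct(elements, eps, rng=np.random.default_rng()):
    """Central-model eps-DP estimate of the number of distinct elements.
    Global sensitivity is 1, so Laplace noise of scale 1/eps suffices:
    expected additive error O(1/eps), independent of the number of users."""
    return len(set(elements)) + rng.laplace(scale=1.0 / eps)

users = [x % 137 for x in range(10_000)]   # 137 distinct elements
print(dp_count_distinct(users, eps=1.0))   # ~137, up to a few units of noise
```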

Cite as

Lijie Chen, Badih Ghazi, Ravi Kumar, and Pasin Manurangsi. On Distributed Differential Privacy and Counting Distinct Elements. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 56:1-56:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{chen_et_al:LIPIcs.ITCS.2021.56,
  author =	{Chen, Lijie and Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin},
  title =	{{On Distributed Differential Privacy and Counting Distinct Elements}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{56:1--56:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.56},
  URN =		{urn:nbn:de:0030-drops-135953},
  doi =		{10.4230/LIPIcs.ITCS.2021.56},
  annote =	{Keywords: Differential Privacy, Shuffle Model}
}
Document
A Generalized Matching Reconfiguration Problem

Authors: Noam Solomon and Shay Solomon


Abstract
The goal in reconfiguration problems is to compute a gradual transformation between two feasible solutions of a problem such that all intermediate solutions are also feasible. In the Matching Reconfiguration Problem (MRP), proposed in a pioneering work by Ito et al. from 2008, we are given a graph G and two matchings M and M', and we are asked whether there is a sequence of matchings in G starting with M and ending at M', each resulting from the previous one by either adding or deleting a single edge in G, without ever going through a matching of size < min{|M|,|M'|}-1. Ito et al. gave a polynomial-time algorithm for the problem, which uses the Edmonds-Gallai decomposition. In this paper we introduce a natural generalization of the MRP that depends on an integer parameter Δ ≥ 1: here we are allowed to make Δ changes to the current solution rather than 1 at each step of the transformation procedure. There is always a valid sequence of matchings transforming M to M' if Δ is sufficiently large, and naturally we would like to minimize Δ. We first devise an optimal transformation procedure for unweighted matching with Δ = 3, and then extend it to weighted matchings to achieve asymptotically optimal guarantees. The running time of these procedures is linear. We further demonstrate the applicability of this generalized problem to dynamic graph matchings. In this area, the number of changes to the maintained matching per update step (the recourse bound) is an important quality measure. Nevertheless, the worst-case recourse bounds of almost all known dynamic matching algorithms are prohibitively large, much larger than the corresponding update times. We fill in this gap via a surprisingly simple black-box reduction: Any dynamic algorithm for maintaining a β-approximate maximum cardinality matching with update time T, for any β ≥ 1, T and ε > 0, can be transformed into an algorithm for maintaining a (β(1 +ε))-approximate maximum cardinality matching with update time T + O(1/ε) and worst-case recourse bound O(1/ε). This result generalizes to approximate maximum weight matching, where the update time and worst-case recourse bound grow from T + O(1/ε) and O(1/ε) to T + O(ψ/ε) and O(ψ/ε), respectively; ψ is the graph aspect-ratio. We complement this positive result by showing that, for β = 1+ε, the worst-case recourse bound of any algorithm produced by our reduction is optimal. As a corollary, several key dynamic approximate matching algorithms - with poor worst-case recourse bounds - are strengthened to achieve near-optimal worst-case recourse bounds with no loss in update time.
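The MRP feasibility condition is easy to state operationally; the checker below (a sketch of ours) verifies that a proposed sequence of edge sets is a valid transformation: every set is a matching of G, consecutive sets differ in exactly one edge, and the size never drops below min{|M|,|M'|} - 1.

```python
def is_matching(edges, graph_edges):
    touched = set()
    for u, v in edges:
        if (u, v) not in graph_edges or u in touched or v in touched:
            return False
        touched |= {u, v}
    return True

def valid_mrp_sequence(seq, graph_edges):
    """Check that seq is a valid MRP transformation from seq[0] to seq[-1]."""
    lo = min(len(seq[0]), len(seq[-1])) - 1
    return (all(is_matching(m, graph_edges) and len(m) >= lo for m in seq)
            and all(len(a ^ b) == 1 for a, b in zip(seq, seq[1:])))

# Path a-b-c-d: transform {ab, cd} into {bc}; sizes 2,1,0,1 stay >= min(2,1)-1.
E = {("a", "b"), ("b", "c"), ("c", "d")}
seq = [{("a", "b"), ("c", "d")}, {("c", "d")}, set(), {("b", "c")}]
print(valid_mrp_sequence(seq, E))   # True
```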

Cite as

Noam Solomon and Shay Solomon. A Generalized Matching Reconfiguration Problem. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 57:1-57:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{solomon_et_al:LIPIcs.ITCS.2021.57,
  author =	{Solomon, Noam and Solomon, Shay},
  title =	{{A Generalized Matching Reconfiguration Problem}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{57:1--57:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.57},
  URN =		{urn:nbn:de:0030-drops-135965},
  doi =		{10.4230/LIPIcs.ITCS.2021.57},
  annote =	{Keywords: Dynamic algorithms, graph matching, reconfiguration problem, recourse bound}
}
Document
Sensitivity Analysis of the Maximum Matching Problem

Authors: Yuichi Yoshida and Samson Zhou


Abstract
We consider the sensitivity of algorithms for the maximum matching problem against edge and vertex modifications. When an algorithm A for the maximum matching problem is deterministic, the sensitivity of A on G is defined as max_{e ∈ E(G)} |A(G) △ A(G-e)|, where G-e is the graph obtained from G by removing an edge e ∈ E(G) and △ denotes the symmetric difference. When A is randomized, the sensitivity is defined as max_{e ∈ E(G)} d_{EM}(A(G), A(G-e)), where d_{EM}(⋅,⋅) denotes the earth mover’s distance between two distributions. Thus the sensitivity measures the difference between the output of an algorithm after the input is slightly perturbed. Algorithms with low sensitivity, or stable algorithms, are desirable because they are robust to edge failure or attack. In this work, we show a randomized (1-ε)-approximation algorithm with worst-case sensitivity O_ε(1), which substantially improves upon the (1-ε)-approximation algorithm of Varma and Yoshida (SODA'21), whose average sensitivity is n^{O(1/(1+ε²))}, and we show a deterministic 1/2-approximation algorithm with sensitivity exp(O(log^* n)) for bounded-degree graphs. We then show that any deterministic constant-factor approximation algorithm must have sensitivity Ω(log^* n). Our results imply that randomized algorithms are strictly more powerful than deterministic ones, in that the former can achieve sensitivity independent of n whereas the latter cannot. We also show analogous results for vertex sensitivity, where we remove a vertex instead of an edge. Finally, we introduce the notion of normalized weighted sensitivity, a natural generalization of sensitivity that accounts for the weights of deleted edges. For a graph with weight function w, the normalized weighted sensitivity is defined to be the total weight of the edges in the symmetric difference of the algorithm’s outputs, normalized by the weight of the deleted edge, i.e., max_{e ∈ E(G)} (1/w(e)) ⋅ w(A(G) △ A(G-e)). Hence the normalized weighted sensitivity measures the weighted difference between the output of an algorithm after the input is slightly perturbed, normalized by the weight of the perturbation. We show that if all edges in a graph have polynomially bounded weight, then given a trade-off parameter α > 2, there exists an algorithm that outputs a 1/(4α)-approximation to the maximum weighted matching in O(m log_α n) time, with normalized weighted sensitivity O(1).
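Sensitivity is directly measurable for concrete algorithms. The sketch below (ours) computes max_e |A(G) △ A(G-e)| for the deterministic greedy matching that scans edges in a fixed order, on a path graph where a single deletion cascades along the whole path.

```python
def greedy_matching(edges):
    """Deterministic greedy matching: scan edges in the given order."""
    matched, M = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            M.add((u, v)); matched |= {u, v}
    return M

def edge_sensitivity(edges):
    """max over e of |A(G) symmetric-difference A(G - e)| for greedy A."""
    base = greedy_matching(edges)
    return max(len(base ^ greedy_matching([f for f in edges if f != e]))
               for e in edges)

path = [(i, i + 1) for i in range(8)]   # a path on 9 vertices
print(edge_sensitivity(path))           # 8: deleting the first edge shifts everything
```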

Cite as

Yuichi Yoshida and Samson Zhou. Sensitivity Analysis of the Maximum Matching Problem. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 58:1-58:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{yoshida_et_al:LIPIcs.ITCS.2021.58,
  author =	{Yoshida, Yuichi and Zhou, Samson},
  title =	{{Sensitivity Analysis of the Maximum Matching Problem}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{58:1--58:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.58},
  URN =		{urn:nbn:de:0030-drops-135979},
  doi =		{10.4230/LIPIcs.ITCS.2021.58},
  annote =	{Keywords: Sensitivity analysis, maximum matching, graph algorithms}
}
Document
Computational Complexity of the Hylland-Zeckhauser Scheme for One-Sided Matching Markets

Authors: Vijay V. Vazirani and Mihalis Yannakakis


Abstract
In 1979, Hylland and Zeckhauser [Hylland and Zeckhauser, 1979] gave a simple and general scheme for implementing a one-sided matching market using the power of a pricing mechanism. Their method has nice properties - it is incentive compatible in the large and produces an allocation that is Pareto optimal - and hence it provides an attractive, off-the-shelf method for running an application involving such a market. With matching markets becoming ever more prevalent and impactful, it is imperative to finally settle the computational complexity of this scheme. We present the following partial resolution: 1) A combinatorial, strongly polynomial time algorithm for the dichotomous case, i.e., 0/1 utilities, and more generally, when each agent’s utilities come from a bi-valued set. 2) An example that has only irrational equilibria, hence proving that this problem is not in PPAD. 3) A proof of membership of the problem in the class FIXP. 4) A proof of membership of the problem of computing an approximate HZ equilibrium in the class PPAD. We leave open the (difficult) questions of determining if computing an exact HZ equilibrium is FIXP-hard and an approximate HZ equilibrium is PPAD-hard.

Cite as

Vijay V. Vazirani and Mihalis Yannakakis. Computational Complexity of the Hylland-Zeckhauser Scheme for One-Sided Matching Markets. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 59:1-59:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{vazirani_et_al:LIPIcs.ITCS.2021.59,
  author =	{Vazirani, Vijay V. and Yannakakis, Mihalis},
  title =	{{Computational Complexity of the Hylland-Zeckhauser Scheme for One-Sided Matching Markets}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{59:1--59:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.59},
  URN =		{urn:nbn:de:0030-drops-135987},
  doi =		{10.4230/LIPIcs.ITCS.2021.59},
  annote =	{Keywords: Hyland-Zeckhauser scheme, one-sided matching markets, mechanism design, dichotomous utilities, PPAD, FIXP}
}
Document
An O(N) Time Algorithm for Finding Hamilton Cycles with High Probability

Authors: Rajko Nenadov, Angelika Steger, and Pascal Su


Abstract
We design a randomized algorithm that finds a Hamilton cycle in 𝒪(n) time with high probability in a random graph G_{n,p} with edge probability p ≥ C log n / n. This closes a gap left open in a seminal paper by Angluin and Valiant from 1979.

Cite as

Rajko Nenadov, Angelika Steger, and Pascal Su. An O(N) Time Algorithm for Finding Hamilton Cycles with High Probability. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 60:1-60:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{nenadov_et_al:LIPIcs.ITCS.2021.60,
  author =	{Nenadov, Rajko and Steger, Angelika and Su, Pascal},
  title =	{{An O(N) Time Algorithm for Finding Hamilton Cycles with High Probability}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{60:1--60:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.60},
  URN =		{urn:nbn:de:0030-drops-135997},
  doi =		{10.4230/LIPIcs.ITCS.2021.60},
  annote =	{Keywords: Random Graphs, Hamilton Cycle, Perfect Matching, Linear Time, Sublinear Algorithm, Random Walk, Coupon Collector}
}
Document
Communication Memento: Memoryless Communication Complexity

Authors: Srinivasan Arunachalam and Supartha Podder


Abstract
We study the communication complexity of computing functions F: {0,1}ⁿ × {0,1}ⁿ → {0,1} in the memoryless communication model. Here, Alice is given x ∈ {0,1}ⁿ, Bob is given y ∈ {0,1}ⁿ, and their goal is to compute F(x,y) subject to the following constraint: at every round, Alice receives a message from Bob and her reply to Bob depends solely on the message received and her input x (in particular, her reply is independent of the information from previous rounds); the same applies to Bob. The cost of computing F in this model is the maximum number of bits exchanged in any round between Alice and Bob (on the worst-case input x,y). In this paper, we also consider variants of our memoryless model wherein one party is allowed to have memory, the parties are allowed to communicate quantum bits, or only one player is allowed to send messages. We show that some of these variants of our memoryless communication model capture the garden-hose model of computation by Buhrman et al. (ITCS'13), space-bounded communication complexity by Brody et al. (ITCS'13) and the overlay communication complexity by Papakonstantinou et al. (CCC'14). Thus the memoryless communication complexity model provides a unified framework to study all these space-bounded communication complexity models. We establish the following main results: (1) We show that the memoryless communication complexity of F equals the logarithm of the size of the smallest bipartite branching program computing F (up to a factor 2); (2) We show that memoryless communication complexity equals the complexity of the garden-hose model of computation; (3) We exhibit various exponential separations between these memoryless communication models. We end with an intriguing open question: can we find an explicit function F and a universal constant c > 1 for which the memoryless communication complexity is at least c log n? Note that c ≥ 2+ε would imply an Ω(n^{2+ε}) lower bound for general formula size, improving upon the best lower bound by [Nečiporuk, 1966].
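
The defining constraint, that each reply is a function of the incoming message and the party's own input alone, is easy to simulate. Below is a hedged Python sketch (a toy protocol for EQUALITY, not taken from the paper; the names run_memoryless and msg_bits are hypothetical) in which all protocol state must travel inside the message, and the cost is the longest message sent in any round.

def run_memoryless(alice, bob, x, y, first_msg):
    # Alternate messages; each reply depends only on the incoming message
    # and the replier's own input, never on earlier rounds.
    msg, cost, turn = first_msg, 0, "bob"
    while True:
        cost = max(cost, msg_bits(msg))
        if msg[0] == "out":
            return msg[1], cost
        msg = bob(msg, y) if turn == "bob" else alice(msg, x)
        turn = "alice" if turn == "bob" else "bob"

def msg_bits(msg):
    # Crude bit-length accounting for this toy message encoding.
    return sum(max(int(v).bit_length(), 1) for v in msg[1:]) + 2

# Toy memoryless protocol for EQUALITY: the message itself must carry the
# position being compared, since neither party may keep state.
def bob(msg, y):                      # msg = ("ask", i): reveal y[i]
    _, i = msg
    return ("bit", i, y[i])

def alice(msg, x):                    # msg = ("bit", i, b): compare with x[i]
    _, i, b = msg
    if x[i] != b:
        return ("out", 0)
    return ("out", 1) if i + 1 == len(x) else ("ask", i + 1)

x, y = [1, 0, 1, 1], [1, 0, 1, 1]
print(run_memoryless(alice, bob, x, y, ("ask", 0)))  # (1, small per-round cost)

The message carries the index being compared, which is exactly what forces the O(log n) cost per round here: with no memory, the current position must be re-communicated every time.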

Cite as

Srinivasan Arunachalam and Supartha Podder. Communication Memento: Memoryless Communication Complexity. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 61:1-61:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{arunachalam_et_al:LIPIcs.ITCS.2021.61,
  author =	{Arunachalam, Srinivasan and Podder, Supartha},
  title =	{{Communication Memento: Memoryless Communication Complexity}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{61:1--61:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.61},
  URN =		{urn:nbn:de:0030-drops-136007},
  doi =		{10.4230/LIPIcs.ITCS.2021.61},
  annote =	{Keywords: Communication complexity, space complexity, branching programs, garden-hose model, quantum computing}
}
Document
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration

Authors: Michael B. Cohen, Aaron Sidford, and Kevin Tian


Abstract
We show that standard extragradient methods (i.e. mirror prox [Arkadi Nemirovski, 2004] and dual extrapolation [Yurii Nesterov, 2007]) recover optimal accelerated rates for first-order minimization of smooth convex functions. To obtain this result we provide a fine-grained characterization of the convergence rates of extragradient methods for solving monotone variational inequalities in terms of a natural condition we call relative Lipschitzness. We further generalize this framework to handle local and randomized notions of relative Lipschitzness and thereby recover rates for box-constrained 𝓁_∞ regression based on area convexity [Jonah Sherman, 2017] and complexity bounds achieved by accelerated (randomized) coordinate descent [Zeyuan Allen-Zhu et al., 2016; Yurii Nesterov and Sebastian U. Stich, 2017] for smooth convex function minimization.
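
For readers unfamiliar with the method, the following numpy sketch shows the standard extragradient update on a toy monotone operator (a bilinear saddle point); the Euclidean setup, step size, and iteration count are illustrative assumptions, not the paper's general mirror-prox setting.

import numpy as np

def extragradient(g, z0, eta=0.1, iters=2000):
    # Standard extragradient: step to a midpoint first, then update the
    # real iterate using the operator evaluated at that midpoint.
    z = z0.astype(float)
    for _ in range(iters):
        z_half = z - eta * g(z)       # extrapolation step
        z = z - eta * g(z_half)       # update with the midpoint operator value
    return z

# Bilinear saddle point min_x max_y x*y: the monotone operator is g(x,y) = (y, -x).
# Plain gradient descent-ascent spirals outward on this instance;
# extragradient converges to the saddle point.
g = lambda z: np.array([z[1], -z[0]])
print(extragradient(g, np.array([1.0, 1.0])))  # approaches (0, 0)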

Cite as

Michael B. Cohen, Aaron Sidford, and Kevin Tian. Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 62:1-62:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{cohen_et_al:LIPIcs.ITCS.2021.62,
  author =	{Cohen, Michael B. and Sidford, Aaron and Tian, Kevin},
  title =	{{Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{62:1--62:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.62},
  URN =		{urn:nbn:de:0030-drops-136011},
  doi =		{10.4230/LIPIcs.ITCS.2021.62},
  annote =	{Keywords: Variational inequalities, minimax optimization, acceleration, 𝓁\underline∞ regression}
}
Document
Training (Overparametrized) Neural Networks in Near-Linear Time

Authors: Jan van den Brand, Binghui Peng, Zhao Song, and Omri Weinstein


Abstract
The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks initiated an ongoing effort to develop faster second-order optimization algorithms beyond SGD, without compromising the generalization error. Despite their remarkable convergence rate (independent of the training batch size n), second-order algorithms incur a daunting slowdown in the cost per iteration (inverting the Hessian matrix of the loss function), which renders them impractical. Very recently, this computational overhead was mitigated by the works of [Zhang et al., 2019; Cai et al., 2019], yielding an O(mn²)-time second-order algorithm for training two-layer overparametrized neural networks of polynomial width m. We show how to speed up the algorithm of [Cai et al., 2019], achieving an Õ(mn)-time backpropagation algorithm for training (mildly overparametrized) ReLU networks, which is near-linear in the dimension (mn) of the full gradient (Jacobian) matrix. The centerpiece of our algorithm is to reformulate the Gauss-Newton iteration as an 𝓁₂-regression problem, and then use a Fast-JL type dimension reduction to precondition the underlying Gram matrix in time independent of m, allowing us to find a sufficiently good approximate solution via first-order conjugate gradient. Our result provides a proof-of-concept that advanced machinery from randomized linear algebra - which led to recent breakthroughs in convex optimization (ERM, LPs, regression) - can be carried over to the realm of deep learning as well.
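
The sketch-and-precondition idea in the centerpiece sentence can be illustrated in a few lines of numpy/scipy. The demo below uses a plain Gaussian sketch where the paper uses a Fast-JL transform, and scipy's lsqr (a conjugate-gradient-style least-squares solver) in place of the paper's tailored iteration; it is a generic illustration of the technique, not the paper's algorithm, and the sizes and constants are arbitrary.

import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n, d = 2000, 50
A = rng.normal(size=(n, d)) @ np.diag(10.0 ** rng.uniform(-3, 3, size=d))  # ill-conditioned
b = rng.normal(size=n)

# Sketch A down to O(d) rows; the R-factor of the sketched matrix serves
# as a preconditioner: A @ inv(R) has O(1) condition number w.h.p.
S = rng.normal(size=(4 * d, n)) / np.sqrt(4 * d)
_, R = np.linalg.qr(S @ A)
A_pre = A @ np.linalg.inv(R)
print(np.linalg.cond(A), np.linalg.cond(A_pre))   # huge vs. small

z = lsqr(A_pre, b, atol=1e-12, btol=1e-12)[0]     # first-order solver converges fast now
x = np.linalg.solve(R, z)
print(np.linalg.norm(A @ x - b))                   # matches the true least-squares residual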

Cite as

Jan van den Brand, Binghui Peng, Zhao Song, and Omri Weinstein. Training (Overparametrized) Neural Networks in Near-Linear Time. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 63:1-63:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{vandenbrand_et_al:LIPIcs.ITCS.2021.63,
  author =	{van den Brand, Jan and Peng, Binghui and Song, Zhao and Weinstein, Omri},
  title =	{{Training (Overparametrized) Neural Networks in Near-Linear Time}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{63:1--63:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.63},
  URN =		{urn:nbn:de:0030-drops-136025},
  doi =		{10.4230/LIPIcs.ITCS.2021.63},
  annote =	{Keywords: Deep learning theory, Nonconvex optimization}
}
Document
A New Connection Between Node and Edge Depth Robust Graphs

Authors: Jeremiah Blocki and Mike Cinkoske


Abstract
Given a directed acyclic graph (DAG) G = (V,E), we say that G is (e,d)-depth-robust (resp. (e,d)-edge-depth-robust) if for any set S ⊂ V (resp. S ⊆ E) of at most |S| ≤ e nodes (resp. edges) the graph G-S contains a directed path of length d. While edge-depth-robust graphs are potentially easier to construct, many applications in cryptography require node depth-robust graphs with small indegree. We create a graph reduction that transforms an (e,d)-edge-depth-robust graph with m edges into an (e/2,d)-depth-robust graph with O(m) nodes and constant indegree. One immediate consequence of this result is the first construction of a provably ((n log log n)/log n, n/(log n)^{1 + log log n})-depth-robust graph with constant indegree, where previous constructions for e = (n log log n)/log n had d = O(n^{1-ε}). Our reduction crucially relies on ST-robust graphs, a new graph property we introduce which may be of independent interest. We say that a directed acyclic graph with n inputs and n outputs is (k₁, k₂)-ST-robust if, after removing any k₁ nodes, there exists a subgraph containing at least k₂ inputs and k₂ outputs such that each of the k₂ inputs is connected to all of the k₂ outputs. If the graph is (k₁,n-k₁)-ST-robust for all k₁ ≤ n, we say that the graph is maximally ST-robust. We show how to construct maximally ST-robust graphs with constant indegree and O(n) nodes. Given a family 𝕄 of ST-robust graphs and an arbitrary (e,d)-edge-depth-robust graph G, we construct a new constant-indegree graph Reduce(G, 𝕄) by replacing each node in G with an ST-robust graph from 𝕄. We also show that ST-robust graphs can be used to construct (tight) proofs-of-space and (asymptotically) improved wide-block labeling functions.

Cite as

Jeremiah Blocki and Mike Cinkoske. A New Connection Between Node and Edge Depth Robust Graphs. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 64:1-64:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{blocki_et_al:LIPIcs.ITCS.2021.64,
  author =	{Blocki, Jeremiah and Cinkoske, Mike},
  title =	{{A New Connection Between Node and Edge Depth Robust Graphs}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{64:1--64:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.64},
  URN =		{urn:nbn:de:0030-drops-136038},
  doi =		{10.4230/LIPIcs.ITCS.2021.64},
  annote =	{Keywords: Depth robust graphs, memory hard functions}
}
Document
Towards Local Testability for Quantum Coding

Authors: Anthony Leverrier, Vivien Londe, and Gilles Zémor


Abstract
We introduce the hemicubic codes, a family of quantum codes obtained by associating qubits with the p-faces of the n-cube (for n > p) and stabilizer constraints with faces of dimension (p ± 1). The quantum code obtained by identifying antipodal faces of the resulting complex encodes one logical qubit into N = 2^{n-p-1} binom(n,p) physical qubits and displays local testability with a soundness of Ω(1/log(N)), beating the current state-of-the-art of 1/log²(N) due to Hastings. We exploit this local testability to devise an efficient decoding algorithm that corrects arbitrary errors of size less than the minimum distance, up to polylog factors. We then extend this code family by considering the quotient of the n-cube by arbitrary linear classical codes of length n. We establish the parameters of these generalized hemicubic codes. Interestingly, if the soundness of the hemicubic code could be shown to be constant, as for the ordinary n-cube, then the generalized hemicubic codes could yield quantum locally testable codes of length not exceeding an exponential or even polynomial function of the code dimension.

Cite as

Anthony Leverrier, Vivien Londe, and Gilles Zémor. Towards Local Testability for Quantum Coding. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 65:1-65:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{leverrier_et_al:LIPIcs.ITCS.2021.65,
  author =	{Leverrier, Anthony and Londe, Vivien and Z\'{e}mor, Gilles},
  title =	{{Towards Local Testability for Quantum Coding}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{65:1--65:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.65},
  URN =		{urn:nbn:de:0030-drops-136049},
  doi =		{10.4230/LIPIcs.ITCS.2021.65},
  annote =	{Keywords: Quantum error correcting code}
}
Document
Complete Problems for Multi-Pseudodeterministic Computations

Authors: Peter Dixon, A. Pavan, and N. V. Vinodchandran


Abstract
We exhibit several computational problems that are complete for multi-pseudodeterministic computations in the following sense: (1) these problems admit 2-pseudodeterministic algorithms; (2) if there exists a pseudodeterministic algorithm for any of these problems, then any multi-valued function that admits a k-pseudodeterministic algorithm for a constant k also admits a pseudodeterministic algorithm. We also show that these computational problems are complete for Search-BPP: a pseudodeterministic algorithm for any of these problems implies a pseudodeterministic algorithm for all problems in Search-BPP.

Cite as

Peter Dixon, A. Pavan, and N. V. Vinodchandran. Complete Problems for Multi-Pseudodeterministic Computations. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 66:1-66:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{dixon_et_al:LIPIcs.ITCS.2021.66,
  author =	{Dixon, Peter and Pavan, A. and Vinodchandran, N. V.},
  title =	{{Complete Problems for Multi-Pseudodeterministic Computations}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{66:1--66:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.66},
  URN =		{urn:nbn:de:0030-drops-136050},
  doi =		{10.4230/LIPIcs.ITCS.2021.66},
  annote =	{Keywords: Pseudodeterminism, Completeness, Collision Probability, Circuit Acceptance, Entropy Approximation}
}
Document
Online Paging with a Vanishing Regret

Authors: Yuval Emek, Shay Kutten, and Yangguang Shi


Abstract
This paper considers a variant of the online paging problem, where the online algorithm has access to multiple predictors, each producing a sequence of predictions for the page arrival times. The predictors may have occasional prediction errors and it is assumed that at least one of them makes a sublinear number of prediction errors in total. Our main result states that this assumption suffices for the design of a randomized online algorithm whose time-average regret with respect to the optimal offline algorithm tends to zero as the time tends to infinity. This holds (with different regret bounds) for both the full information access model, where in each round, the online algorithm gets the predictions of all predictors, and the bandit access model, where in each round, the online algorithm queries a single predictor. While online algorithms that exploit inaccurate predictions have been a topic of growing interest in the last few years, to the best of our knowledge, this is the first paper that studies this topic in the context of multiple predictors for an online problem with unbounded request sequences. Moreover, to the best of our knowledge, this is also the first paper that aims for (and achieves) online algorithms with a vanishing regret for a classic online problem under reasonable assumptions.

Cite as

Yuval Emek, Shay Kutten, and Yangguang Shi. Online Paging with a Vanishing Regret. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 67:1-67:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{emek_et_al:LIPIcs.ITCS.2021.67,
  author =	{Emek, Yuval and Kutten, Shay and Shi, Yangguang},
  title =	{{Online Paging with a Vanishing Regret}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{67:1--67:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.67},
  URN =		{urn:nbn:de:0030-drops-136065},
  doi =		{10.4230/LIPIcs.ITCS.2021.67},
  annote =	{Keywords: online paging, inaccurate predictions, multiple predictors, vanishing regret, full information vs. bandit access}
}
Document
Differentially Oblivious Turing Machines

Authors: Ilan Komargodski and Elaine Shi


Abstract
Oblivious RAM (ORAM) is a machinery that protects any RAM from leaking information about its secret input through its observable access pattern. It is known that every ORAM must incur a logarithmic overhead compared to the non-oblivious RAM. In fact, even the seemingly weaker notion of differential obliviousness, which intuitively "protects" a single access by guaranteeing that the observed access patterns for every two "neighboring" logical access sequences satisfy (ε,δ)-differential privacy, is subject to a logarithmic lower bound. In this work, we show that any Turing machine computation can be generically compiled into a differentially oblivious one with only doubly logarithmic overhead. More precisely, given a Turing machine that makes N transitions, the compiled Turing machine makes O(N ⋅ log log N) transitions in total and the physical head movement sequence satisfies (ε,δ)-differential privacy (for a constant ε and a negligible δ). We additionally show that Ω(log log N) overhead is necessary in a natural range of parameters (and in the balls and bins model). As a corollary, we show that there exist natural data structures such as stacks and queues (supporting online operations) on N elements for which there is a differentially oblivious implementation on a Turing machine incurring amortized O(log log N) overhead per operation, while it is known that any oblivious implementation must consume Ω(log N) operations unconditionally even on a RAM. Therefore, we obtain the first unconditional separation between obliviousness and differential obliviousness in the most natural setting of parameters where ε is a constant and δ is negligible. Before this work, such a separation was only known in the balls and bins model. Note that the lower bound applies in the RAM model while our upper bound is in the Turing machine model, making our separation stronger.

Cite as

Ilan Komargodski and Elaine Shi. Differentially Oblivious Turing Machines. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 68:1-68:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{komargodski_et_al:LIPIcs.ITCS.2021.68,
  author =	{Komargodski, Ilan and Shi, Elaine},
  title =	{{Differentially Oblivious Turing Machines}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{68:1--68:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.68},
  URN =		{urn:nbn:de:0030-drops-136071},
  doi =		{10.4230/LIPIcs.ITCS.2021.68},
  annote =	{Keywords: Differential privacy, Turing machines, obliviousness}
}
Document
Quantitative Correlation Inequalities via Semigroup Interpolation

Authors: Anindya De, Shivam Nadimpalli, and Rocco A. Servedio


Abstract
Most correlation inequalities for high-dimensional functions in the literature, such as the Fortuin-Kasteleyn-Ginibre inequality and the celebrated Gaussian Correlation Inequality of Royen, are qualitative statements which establish that any two functions of a certain type have non-negative correlation. We give a general approach that can be used to bootstrap many qualitative correlation inequalities for functions over product spaces into quantitative statements. The approach combines a new extremal result about power series, proved using complex analysis, with harmonic analysis of functions over product spaces. We instantiate this general approach in several different concrete settings to obtain a range of new and near-optimal quantitative correlation inequalities, including: - A quantitative version of Royen’s celebrated Gaussian Correlation Inequality [Royen, 2014]. In [Royen, 2014] Royen confirmed a conjecture, open for 40 years, stating that any two symmetric convex sets must be non-negatively correlated under any centered Gaussian distribution. We give a lower bound on the correlation in terms of the vector of degree-2 Hermite coefficients of the two convex sets, conceptually similar to Talagrand’s quantitative correlation bound for monotone Boolean functions over {0,1}ⁿ [M. Talagrand, 1996]. We show that our quantitative version of Royen’s theorem is within a logarithmic factor of being optimal. - A quantitative version of the well-known FKG inequality for monotone functions over any finite product probability space. This is a broad generalization of Talagrand’s quantitative correlation bound for functions from {0,1}ⁿ to {0,1} under the uniform distribution [M. Talagrand, 1996]; the only prior generalization of which we are aware is due to Keller [Nathan Keller, 2012; Keller, 2008; Nathan Keller, 2009], which extended [M. Talagrand, 1996] to product distributions over {0,1}ⁿ. In the special case of p-biased distributions over {0,1}ⁿ that was considered by Keller, our new bound essentially saves a factor of p log(1/p) over the quantitative bounds given in [Nathan Keller, 2012; Keller, 2008; Nathan Keller, 2009]. We also give a quantitative version of the FKG inequality for monotone functions over the continuous domain [0,1]ⁿ, answering a question of Keller [Nathan Keller, 2009].
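
As a concrete touchstone for the qualitative statement being made quantitative here, the brute-force Python check below verifies the FKG inequality E[fg] ≥ E[f]E[g] for two monotone functions under a p-biased product measure on {0,1}ⁿ; the functions, n, and p are arbitrary toy choices, and fkg_check is a hypothetical helper name.

from itertools import product

def fkg_check(f, g, n, p):
    # Verify E[fg] >= E[f] * E[g] for monotone f, g under the p-biased
    # product distribution on {0,1}^n (the qualitative FKG inequality).
    Ef = Eg = Efg = 0.0
    for x in product([0, 1], repeat=n):
        w = 1.0
        for bit in x:
            w *= p if bit else (1 - p)
        Ef += w * f(x)
        Eg += w * g(x)
        Efg += w * f(x) * g(x)
    return Efg - Ef * Eg   # the correlation; FKG says this is >= 0

maj = lambda x: int(sum(x) >= 2)      # monotone: majority on 3 bits
first = lambda x: x[0]                # monotone: dictator
print(fkg_check(maj, first, n=3, p=0.3))  # ~0.088, nonnegative as FKG promises

Roughly speaking, quantitative versions of FKG lower-bound this gap Efg - Ef*Eg in terms of influence-like quantities of the two functions, rather than merely asserting it is nonnegative.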

Cite as

Anindya De, Shivam Nadimpalli, and Rocco A. Servedio. Quantitative Correlation Inequalities via Semigroup Interpolation. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 69:1-69:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{de_et_al:LIPIcs.ITCS.2021.69,
  author =	{De, Anindya and Nadimpalli, Shivam and Servedio, Rocco A.},
  title =	{{Quantitative Correlation Inequalities via Semigroup Interpolation}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{69:1--69:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.69},
  URN =		{urn:nbn:de:0030-drops-136081},
  doi =		{10.4230/LIPIcs.ITCS.2021.69},
  annote =	{Keywords: complex analysis, correlation inequality, FKG inequality, Gaussian correlation inequality, harmonic analysis, Markov semigroups}
}
Document
Shrinkage of Decision Lists and DNF Formulas

Authors: Benjamin Rossman


Abstract
We establish nearly tight bounds on the expected shrinkage of decision lists and DNF formulas under the p-random restriction R_p for all values of p ∈ [0,1]. For a function f with domain {0,1}ⁿ, let DL(f) denote the minimum size of a decision list that computes f. We show that E[DL(f ↾ R_p)] ≤ DL(f)^{log_{2/(1-p)}((1+p)/(1-p))}. For example, this bound is √{DL(f)} when p = √5-2 ≈ 0.24. For Boolean functions f, we obtain the same shrinkage bound with respect to DNF formula size plus 1 (i.e., replacing DL(⋅) with DNF(⋅)+1 on both sides of the inequality).
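
The constant p = √5-2 is exactly where the exponent evaluates to 1/2, which is easy to verify numerically: at this p, (1+p)/(1-p) is the golden ratio φ and 2/(1-p) = φ², so the logarithm is 1/2.

import math

p = math.sqrt(5) - 2                               # ~0.236
print(math.log((1 + p) / (1 - p), 2 / (1 - p)))    # 0.5, so the bound is sqrt(DL(f))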

Cite as

Benjamin Rossman. Shrinkage of Decision Lists and DNF Formulas. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 70:1-70:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{rossman:LIPIcs.ITCS.2021.70,
  author =	{Rossman, Benjamin},
  title =	{{Shrinkage of Decision Lists and DNF Formulas}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{70:1--70:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.70},
  URN =		{urn:nbn:de:0030-drops-136098},
  doi =		{10.4230/LIPIcs.ITCS.2021.70},
  annote =	{Keywords: shrinkage, decision lists, DNF formulas}
}
Document
Block Rigidity: Strong Multiplayer Parallel Repetition Implies Super-Linear Lower Bounds for Turing Machines

Authors: Kunal Mittal and Ran Raz


Abstract
We prove that a sufficiently strong parallel repetition theorem for a special case of multiplayer (multiprover) games implies super-linear lower bounds for multi-tape Turing machines with advice. To the best of our knowledge, this is the first connection between parallel repetition and lower bounds for time complexity and the first major potential implication of a parallel repetition theorem with more than two players. Along the way to proving this result, we define and initiate a study of block rigidity, a weakening of Valiant’s notion of rigidity [Valiant, 1977]. While rigidity was originally defined for matrices, or, equivalently, for (multi-output) linear functions, we extend and study both rigidity and block rigidity for general (multi-output) functions. Using techniques of Paul, Pippenger, Szemerédi and Trotter [Paul et al., 1983], we show that a block-rigid function cannot be computed by multi-tape Turing machines that run in linear (or slightly super-linear) time, even in the non-uniform setting, where the machine gets an arbitrary advice tape. We then describe a class of multiplayer games such that a sufficiently strong parallel repetition theorem for that class of games implies an explicit block-rigid function. The games in that class have the following property, which may be of independent interest: for every random string for the verifier (which, in particular, determines the vector of queries to the players), there is a unique correct answer for each of the players, and the verifier accepts if and only if all answers are correct. We refer to such games as independent games. The theorem that we need is that parallel repetition reduces the value of games in this class from v to v^{Ω(n)}, where n is the number of repetitions. As another application of block rigidity, we show conditional size-depth tradeoffs for Boolean circuits, where the gates compute arbitrary functions over large sets.

Cite as

Kunal Mittal and Ran Raz. Block Rigidity: Strong Multiplayer Parallel Repetition Implies Super-Linear Lower Bounds for Turing Machines. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 71:1-71:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{mittal_et_al:LIPIcs.ITCS.2021.71,
  author =	{Mittal, Kunal and Raz, Ran},
  title =	{{Block Rigidity: Strong Multiplayer Parallel Repetition Implies Super-Linear Lower Bounds for Turing Machines}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{71:1--71:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.71},
  URN =		{urn:nbn:de:0030-drops-136101},
  doi =		{10.4230/LIPIcs.ITCS.2021.71},
  annote =	{Keywords: Block-rigidity, Matrix Rigidity, Parallel Repetition, Size-depth tradeoffs, Turing Machines}
}
Document
Lower Bounds for Off-Chain Protocols: Exploring the Limits of Plasma

Authors: Stefan Dziembowski, Grzegorz Fabiański, Sebastian Faust, and Siavash Riahi


Abstract
Blockchain is a disruptive new technology introduced around a decade ago. It can be viewed as a method for recording timestamped transactions in a public database. Most blockchain protocols do not scale well, i.e., they cannot quickly process large amounts of transactions. A natural idea to deal with this problem is to use the blockchain only as a timestamping service, i.e., to hash several transactions tx_1,…,tx_m into one short string, and just put this string on the blockchain, while at the same time posting the hashed transactions tx_1,…,tx_m to some public place on the Internet ("off-chain"). In this way the transactions tx_i remain timestamped, but the amount of data put on the blockchain is greatly reduced. This idea was introduced in 2017 under the name Plasma by Poon and Buterin. Shortly after this proposal, several variants of Plasma were proposed. They are typically built on top of the Ethereum blockchain, as they strongly rely on so-called smart contracts (in order to resolve disputes between the users if some of them start cheating). Plasmas are an example of so-called off-chain protocols. In this work we initiate the study of the inherent limitations of Plasma protocols. More concretely, we show that in every Plasma system the adversary can either (a) force the honest parties to communicate a lot with the blockchain, even though they did not intend to (this is traditionally called mass exit), or (b) force an honest party that wants to leave the system to quickly communicate large amounts of data to the blockchain. What makes these attacks particularly hard to handle in real life is that they do not have so-called uniquely attributable faults, i.e., the smart contract cannot determine which party is malicious, and hence cannot force it to pay the fees for the blockchain interaction. An important implication of our result is that the benefits of two of the most prominent Plasma types, called Plasma Cash and Fungible Plasma, cannot be achieved simultaneously. Besides the direct implications for real-life cryptocurrency research, we believe that this work may open up a new line of theoretical research, as, to the best of our knowledge, this is the first work that provides an impossibility result in the area of off-chain protocols.
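
The timestamping idea in the abstract, hashing tx_1,…,tx_m into one short string, is standardly realized with a Merkle tree. The short Python sketch below is a generic illustration of that standard commitment, not any specific Plasma implementation; the helper names are hypothetical.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(txs):
    # Hash a batch of transactions down to one short string: pair up
    # hashes level by level until a single 32-byte root remains.
    level = [h(tx) for tx in txs]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

txs = [b"tx_1", b"tx_2", b"tx_3", b"tx_4"]
print(merkle_root(txs).hex())  # only this single digest needs to go on-chain

A user can later prove that their tx_i was included under the on-chain root via a logarithmic-size Merkle path, which is the kind of evidence Plasma-style smart-contract dispute resolution relies on.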

Cite as

Stefan Dziembowski, Grzegorz Fabiański, Sebastian Faust, and Siavash Riahi. Lower Bounds for Off-Chain Protocols: Exploring the Limits of Plasma. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 72:1-72:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{dziembowski_et_al:LIPIcs.ITCS.2021.72,
  author =	{Dziembowski, Stefan and Fabia\'{n}ski, Grzegorz and Faust, Sebastian and Riahi, Siavash},
  title =	{{Lower Bounds for Off-Chain Protocols: Exploring the Limits of Plasma}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{72:1--72:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.72},
  URN =		{urn:nbn:de:0030-drops-136113},
  doi =		{10.4230/LIPIcs.ITCS.2021.72},
  annote =	{Keywords: blockchain, lower bounds, off-chain protocol, commit chain, plasma}
}
Document
Majorizing Measures for the Optimizer

Authors: Sander Borst, Daniel Dadush, Neil Olver, and Makrand Sinha


Abstract
The theory of majorizing measures, extensively developed by Fernique, Talagrand and many others, provides one of the most general frameworks for controlling the behavior of stochastic processes. In particular, it can be applied to derive quantitative bounds on the expected suprema and the degree of continuity of sample paths for many processes. One of the crowning achievements of the theory is Talagrand’s tight alternative characterization of the suprema of Gaussian processes in terms of majorizing measures. The proof of this theorem was difficult, and thus considerable effort was put into the task of developing both shorter and easier-to-understand proofs. A major reason for this difficulty was considered to be the theory of majorizing measures itself, which had the reputation of being opaque and mysterious. As a consequence, most recent treatments of the theory (including by Talagrand himself) have eschewed the use of majorizing measures in favor of a purely combinatorial approach (the generic chaining) where objects based on sequences of partitions provide roughly matching upper and lower bounds on the desired expected supremum. In this paper, we return to majorizing measures as a primary object of study, and give a viewpoint that we think is natural and clarifying from an optimization perspective. As our main contribution, we give an algorithmic proof of the majorizing measures theorem based on two parts: - We make the simple (but apparently new) observation that finding the best majorizing measure can be cast as a convex program. This also allows for efficiently computing the measure using off-the-shelf methods from convex optimization. - We obtain tree-based upper and lower bound certificates by rounding, in a series of steps, the primal and dual solutions to this convex program. While duality has conceptually been part of the theory since its beginnings, as far as we are aware no explicit link to convex optimization has been previously made.

Cite as

Sander Borst, Daniel Dadush, Neil Olver, and Makrand Sinha. Majorizing Measures for the Optimizer. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 73:1-73:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{borst_et_al:LIPIcs.ITCS.2021.73,
  author =	{Borst, Sander and Dadush, Daniel and Olver, Neil and Sinha, Makrand},
  title =	{{Majorizing Measures for the Optimizer}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{73:1--73:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.73},
  URN =		{urn:nbn:de:0030-drops-136120},
  doi =		{10.4230/LIPIcs.ITCS.2021.73},
  annote =	{Keywords: Majorizing measures, Generic chaining, Gaussian processes, Convex optimization, Dimensionality Reduction}
}
Document
Randomness and Fairness in Two-Sided Matching with Limited Interviews

Authors: Hedyeh Beyhaghi and Éva Tardos


Abstract
We study the outcome in a matching market where both sides have a limited ability to consider options. For example, in the national residency matching program, doctors can apply only to a small set of hospitals, and hospitals are limited by the time required to interview candidates. Our main findings are the following: (1) In markets where jobs can only consider a limited number of candidates for interview, limiting the number of applications a candidate can send increases the size of the resulting matching. (2) The fair system in which all applicants are allowed to apply to exactly the same number of positions maximizes the expected size of the matching. More precisely, starting from an integer k as the number of applications, the matching size decreases as a few applicants are allowed to apply to one additional position (and then increases again once they are all allowed to apply to k+1). Although it seems natural to expect that the size of the matching would be a monotone increasing and concave function of the number of applications, our results show that neither is true. These results hold even in a market where a priori all jobs and all candidates are equally likely to be good, and the judgments of different employers and candidates are independent. Our main technical contribution is computing the expected size of the matching found via the deferred acceptance algorithm as a function of the number of interviews and applications in a market where preferences are uniform and independent. Through simulations we confirm that these findings extend to markets where rankings become correlated after the interviews.
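
The deferred acceptance setting in the last paragraph is easy to explore empirically. The Python sketch below is a toy Monte-Carlo with arbitrary parameters: it runs candidate-proposing deferred acceptance when each candidate applies to k uniformly random positions under uniform, independent preferences. Note it omits the interview limit on the jobs' side that drives the paper's non-monotonicity findings, so here the matching size simply grows with k.

import random

def match_size(n_cand, n_jobs, k, rng):
    # Candidate-proposing deferred acceptance, restricted to each
    # candidate's k uniformly random applications.
    apps = [rng.sample(range(n_jobs), k) for _ in range(n_cand)]
    for lst in apps:
        rng.shuffle(lst)                       # candidate's preference order
    rank = [{c: rng.random() for c in range(n_cand)} for _ in range(n_jobs)]
    nxt = [0] * n_cand                         # next application each candidate tries
    held = {}                                  # job -> tentatively held candidate
    free = list(range(n_cand))
    while free:
        c = free.pop()
        while nxt[c] < k:
            j = apps[c][nxt[c]]
            nxt[c] += 1
            if j not in held:
                held[j] = c
                break
            if rank[j][c] < rank[j][held[j]]:  # job prefers the new proposer
                free.append(held[j])
                held[j] = c
                break
    return len(held)

rng = random.Random(1)
for k in (1, 2, 3, 5):
    avg = sum(match_size(500, 500, k, rng) for _ in range(20)) / 20
    print(k, avg)   # matching size increases with k, but far from linearly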

Cite as

Hedyeh Beyhaghi and Éva Tardos. Randomness and Fairness in Two-Sided Matching with Limited Interviews. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 74:1-74:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{beyhaghi_et_al:LIPIcs.ITCS.2021.74,
  author =	{Beyhaghi, Hedyeh and Tardos, \'{E}va},
  title =	{{Randomness and Fairness in Two-Sided Matching with Limited Interviews}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{74:1--74:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.74},
  URN =		{urn:nbn:de:0030-drops-136139},
  doi =		{10.4230/LIPIcs.ITCS.2021.74},
  annote =	{Keywords: Matching with Short Lists, Stable Matching, Balls in Bins Problem}
}
Document
Counterexamples to the Low-Degree Conjecture

Authors: Justin Holmgren and Alexander S. Wein


Abstract
A conjecture of Hopkins (2018) posits that for certain high-dimensional hypothesis testing problems, no polynomial-time algorithm can outperform so-called "simple statistics", which are low-degree polynomials in the data. This conjecture formalizes the beliefs surrounding a line of recent work that seeks to understand statistical-versus-computational tradeoffs via the low-degree likelihood ratio. In this work, we refute the conjecture of Hopkins. However, our counterexample crucially exploits the specifics of the noise operator used in the conjecture, and we point out a simple way to modify the conjecture to rule out our counterexample. We also give an example illustrating that (even after the above modification), the symmetry assumption in the conjecture is necessary. These results do not undermine the low-degree framework for computational lower bounds, but rather aim to better understand what class of problems it is applicable to.

Cite as

Justin Holmgren and Alexander S. Wein. Counterexamples to the Low-Degree Conjecture. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 75:1-75:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{holmgren_et_al:LIPIcs.ITCS.2021.75,
  author =	{Holmgren, Justin and Wein, Alexander S.},
  title =	{{Counterexamples to the Low-Degree Conjecture}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{75:1--75:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.75},
  URN =		{urn:nbn:de:0030-drops-136148},
  doi =		{10.4230/LIPIcs.ITCS.2021.75},
  annote =	{Keywords: Low-degree likelihood ratio, error-correcting codes}
}
Document
Extended Abstract
High-Entropy Dual Functions and Locally Decodable Codes (Extended Abstract)

Authors: Jop Briët and Farrokh Labib


Abstract
Locally decodable codes (LDCs) allow any single encoded message symbol to be retrieved from a codeword with good probability by reading only a tiny number of codeword symbols, even if the codeword is partially corrupted. LDCs have surprisingly many applications in computer science and mathematics (we refer to [Yekhanin, 2012; Lovett, 2007] for extensive surveys). But despite their ubiquity, they are poorly understood. Of particular interest is the tradeoff between the codeword length N as a function of message length k when the query complexity - the number of probed codeword symbols - and alphabet size are constant. The Hadamard code is a 2-query LDC of length N = 2^O(k), and this length is optimal in the 2-query regime [Lovett, 2007]. For q ≥ 3, near-exponential gaps persist between the best-known upper and lower bounds. Reed-Muller codes, which generalize the Hadamard code, were for a long time the best-known examples, giving q-query LDCs of length exp(O(k^{1/(q-1)})), until the breakthrough constructions of matching vector LDCs of Yekhanin and Efremenko [Yekhanin, 2008; Efremenko, 2012]. In contrast with other combinatorial objects such as expander graphs, the probabilistic method has so far not been successfully used to beat the best explicit LDC constructions. In [Lovett, 2007], a probabilistic framework was given that could in principle yield best-possible LDCs, albeit non-constructively. A special instance of this framework connects LDCs with a probabilistic version of Szemerédi’s theorem. The setup for this is as follows: for a finite abelian group G of size N = |G|, let D ⊆ G be a random subset where each element is present with probability ρ independently of all others. For k ≥ 3 and ε ∈ (0,1), let E be the event that every subset A ⊆ G of size |A| ≥ ε |G| contains a proper k-term arithmetic progression with common difference in D. For fixed ε > 0 and sufficiently large N, it is an open problem to determine the smallest value of ρ - denoted ρ_k - such that Pr[E] ≥ 1/2. In [Lovett, 2007] it is shown that there exist k-query LDCs of message length Ω(ρ_k N) and codeword length O(N). As such, Szemerédi’s theorem with random differences, in particular lower bounds on ρ_k, can be used to show the existence of LDCs. Conversely, this connection indirectly implies the best-known upper bounds on ρ_k for all k ≥ 3 [Lovett, 2007]. However, a conjecture from [Lovett, 2007] states that over ℤ_N we have ρ_k ≤ O_k(N^{-1}log N) for all k, which would be best possible. Truth of this conjecture would imply that over this group, Szemerédi’s theorem with random differences cannot give LDCs better than the Hadamard code. For finite fields, Altman [Lovett, 2007] showed that this is false. In particular, over 𝔽_pⁿ for p odd, he proved that ρ₃ ≥ Ω(p^{-n} n²); generally, ρ_k ≥ Ω(p^{-n} n^{k-1}) holds when p ≥ k+1 [Lovett, 2007]. In turn, these bounds are conjectured to be optimal for the finite-field setting, which would imply that over finite fields, Szemerédi’s theorem with random differences cannot give LDCs better than Reed-Muller codes. The finite-field conjecture is motivated mainly by the possibility that so-called dual functions can be approximated well by polynomial phases, functions of the form e^{2π i P(x)/p} where P is a multivariate polynomial over 𝔽_p. We show that this is false. Using Yekhanin’s matching-vector-code construction, we give dual functions of order k over 𝔽_pⁿ that cannot be approximated in L_∞-distance by polynomial phases of degree k-1. This answers in the negative a natural finite-field analog of a problem of Frantzikinakis over ℕ [Lovett, 2007].
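
For context on the starting point of this discussion, here is a small Python demonstration of the Hadamard code as a 2-query LDC, a standard textbook construction with toy parameters (k, the corruption rate delta, and the repetition count are arbitrary choices): each message bit can be recovered from a corrupted codeword by XORing just two positions, with a majority vote to amplify the success probability.

import itertools, random

random.seed(0)
k = 10
x = [random.randrange(2) for _ in range(k)]
# Hadamard codeword: one bit <a, x> mod 2 for every a in F_2^k (length 2^k).
code = {a: sum(ai * xi for ai, xi in zip(a, x)) % 2
        for a in itertools.product([0, 1], repeat=k)}

delta = 0.05                                   # corrupt a 5% fraction of positions
for a in random.sample(list(code), int(delta * len(code))):
    code[a] ^= 1

def decode_bit(i, reps=25):
    # For random a, code[a] XOR code[a + e_i] equals x_i unless one of the
    # two queried positions is corrupted (probability at most 2*delta);
    # a majority vote over independent pairs amplifies the success rate.
    votes = 0
    for _ in range(reps):
        a = tuple(random.randrange(2) for _ in range(k))
        a_flip = tuple(b ^ (j == i) for j, b in enumerate(a))
        votes += code[a] ^ code[a_flip]
    return int(votes > reps // 2)

print(all(decode_bit(i) == x[i] for i in range(k)))   # True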

Cite as

Jop Briët and Farrokh Labib. High-Entropy Dual Functions and Locally Decodable Codes (Extended Abstract). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 76:1-76:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{briet_et_al:LIPIcs.ITCS.2021.76,
  author =	{Bri\"{e}t, Jop and Labib, Farrokh},
  title =	{{High-Entropy Dual Functions and Locally Decodable Codes}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{76:1--76:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.76},
  URN =		{urn:nbn:de:0030-drops-136157},
  doi =		{10.4230/LIPIcs.ITCS.2021.76},
  annote =	{Keywords: Higher-order Fourier analysis, dual functions, finite fields, additive combinatorics, coding theory}
}
Document
Buying Data over Time: Approximately Optimal Strategies for Dynamic Data-Driven Decisions

Authors: Nicole Immorlica, Ian A. Kash, and Brendan Lucier


Abstract
We consider a model where an agent has a repeated decision to make and wishes to maximize their total payoff. Payoffs are influenced by an action taken by the agent, but also by an unknown state of the world that evolves over time. Before choosing an action each round, the agent can purchase noisy samples about the state of the world. The agent has a budget to spend on these samples, and has flexibility in deciding how to spread that budget across rounds. We investigate the problem of choosing a sampling algorithm that optimizes total expected payoff. For example: is it better to buy samples steadily over time, or to buy samples in batches? We solve for the optimal policy, and show that it is a natural instantiation of the latter. Under a more general model that includes per-round fixed costs, we prove that a variation on this batching policy is a 2-approximation.

Cite as

Nicole Immorlica, Ian A. Kash, and Brendan Lucier. Buying Data over Time: Approximately Optimal Strategies for Dynamic Data-Driven Decisions. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 77:1-77:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{immorlica_et_al:LIPIcs.ITCS.2021.77,
  author =	{Immorlica, Nicole and Kash, Ian A. and Lucier, Brendan},
  title =	{{Buying Data over Time: Approximately Optimal Strategies for Dynamic Data-Driven Decisions}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{77:1--77:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.77},
  URN =		{urn:nbn:de:0030-drops-136161},
  doi =		{10.4230/LIPIcs.ITCS.2021.77},
  annote =	{Keywords: Online Algorithms, Value of Data, Markov Processes}
}
Document
Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach

Authors: Grant Schoenebeck and Fang-Yi Yu


Abstract
Peer prediction mechanisms incentivize agents to truthfully report their signals even in the absence of verification by comparing agents' reports with those of their peers. In the detail-free multi-task setting, agents are asked to respond to multiple independent and identically distributed tasks, and the mechanism does not know the prior distribution of agents' signals. The goal is to provide an ε-strongly truthful mechanism where truth-telling rewards agents "strictly" more than any other strategy profile (with ε additive error) even for heterogeneous agents, and to do so while requiring as few tasks as possible. We design a family of mechanisms with a scoring function that maps a pair of reports to a score. The mechanism is strongly truthful if the scoring function is "prior ideal". Moreover, the mechanism is ε-strongly truthful as long as the scoring function used is sufficiently close to the ideal scoring function. This reduces the above mechanism design problem to a learning problem - specifically learning an ideal scoring function. Because learning the prior distribution is sufficient (but not necessary) to learn the scoring function, we can apply standard learning theory techniques that leverage side information about the prior (e.g., that it is close to some parametric model). Furthermore, we derive a variational representation of an ideal scoring function and reduce the learning problem to an empirical risk minimization. We leverage this reduction to obtain very general results for peer prediction in the multi-task setting. Specifically, - Sample Complexity: We show how to derive good bounds on the number of tasks required for different types of priors - in some cases exponentially improving previous results. In particular, we can upper bound the required number of tasks for parametric models with bounded learning complexity. Furthermore, our reduction applies to myriad continuous signal space settings. To the best of our knowledge, this is the first peer-prediction mechanism on continuous signals designed for the multi-task setting. - Connection to Machine Learning: We show how to turn a soft-predictor of an agent’s signals (given the other agents' signals) into a mechanism. This allows the practical use of machine learning algorithms that give good results even when many agents provide noisy information. - Stronger Properties: In the finite setting, we obtain ε-strongly truthful mechanisms for any stochastically relevant prior. Prior works either only apply to more restrictive settings, or achieve a weaker notion of truthfulness (informed truthfulness).

Cite as

Grant Schoenebeck and Fang-Yi Yu. Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 78:1-78:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{schoenebeck_et_al:LIPIcs.ITCS.2021.78,
  author =	{Schoenebeck, Grant and Yu, Fang-Yi},
  title =	{{Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{78:1--78:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.78},
  URN =		{urn:nbn:de:0030-drops-136177},
  doi =		{10.4230/LIPIcs.ITCS.2021.78},
  annote =	{Keywords: Information elicitation without verification, crowdsourcing, machine learning}
}
Document
Distributed Load Balancing: A New Framework and Improved Guarantees

Authors: Sara Ahmadian, Allen Liu, Binghui Peng, and Morteza Zadimoghaddam


Abstract
Inspired by applications to search engines and web servers, we consider a load balancing problem with a general convex objective function. In this problem, we are given a bipartite graph on a set of sources S and a set of workers W, and the goal is to distribute the load from each source among its neighboring workers such that the loads of the workers are as balanced as possible. We present a new distributed algorithm that works with any symmetric non-decreasing convex function for evaluating the balancedness of the workers' loads. Our algorithm computes a nearly optimal allocation of loads in O(log n log² d/ε³) rounds, where n is the number of nodes, d is the maximum degree, and ε is the desired precision. If the objective is to minimize the maximum load, we modify the algorithm to obtain a nearly optimal solution in O(log n log d/ε²) rounds. This improves on a line of algorithms that require a polynomial number of rounds in n and d and appear to encounter a fundamental barrier preventing them from obtaining poly-logarithmic runtime [Berenbrink et al., 2005; Berenbrink et al., 2009; Subramanian and Scherson, 1994; Rabani et al., 1998]. In our paper, we introduce a novel primal-dual approach with multiplicative weight updates that allows us to circumvent this barrier. Our algorithm is inspired by [Agrawal et al., 2018] and other distributed algorithms for optimizing linear objectives, but introduces several new twists to deal with general convex objectives.
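The following toy multiplicative-weights heuristic conveys the flavor of routing load toward currently light workers; it is a sketch under our own parameter choices, not the paper's primal-dual algorithm or its guarantees.

import math

# Illustrative only: each round, every source spreads its load over its
# neighbors with weight exp(-eta * current worker load), and the fractional
# assignments are averaged over rounds.
def toy_balance(sources, neighbors, load, rounds=200, eta=0.5):
    workers = {w for s in sources for w in neighbors[s]}
    worker_load = {w: 0.0 for w in workers}
    avg = {(s, w): 0.0 for s in sources for w in neighbors[s]}
    for _ in range(rounds):
        new_load = {w: 0.0 for w in workers}
        for s in sources:
            wts = [math.exp(-eta * worker_load[w]) for w in neighbors[s]]
            z = sum(wts)
            for w, wt in zip(neighbors[s], wts):
                frac = load[s] * wt / z
                new_load[w] += frac
                avg[(s, w)] += frac / rounds
        worker_load = new_load
    return avg   # averaged fractional assignment x[(source, worker)]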

Cite as

Sara Ahmadian, Allen Liu, Binghui Peng, and Morteza Zadimoghaddam. Distributed Load Balancing: A New Framework and Improved Guarantees. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 79:1-79:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{ahmadian_et_al:LIPIcs.ITCS.2021.79,
  author =	{Ahmadian, Sara and Liu, Allen and Peng, Binghui and Zadimoghaddam, Morteza},
  title =	{{Distributed Load Balancing: A New Framework and Improved Guarantees}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{79:1--79:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.79},
  URN =		{urn:nbn:de:0030-drops-136186},
  doi =		{10.4230/LIPIcs.ITCS.2021.79},
  annote =	{Keywords: Load balancing, Distributed algorithms}
}
Document
Erasure-Resilient Sublinear-Time Graph Algorithms

Authors: Amit Levi, Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, and Nithin Varma


Abstract
We investigate sublinear-time algorithms that take partially erased graphs represented by adjacency lists as input. Our algorithms make degree and neighbor queries to the input graph and work with a specified fraction of adversarial erasures in adjacency entries. We focus on two computational tasks: testing if a graph is connected or ε-far from connected and estimating the average degree. For testing connectedness, we discover a threshold phenomenon: when the fraction of erasures is less than ε, this property can be tested efficiently (in time independent of the size of the graph); when the fraction of erasures is at least ε, then a number of queries linear in the size of the graph representation is required. Our erasure-resilient algorithm (for the special case with no erasures) is an improvement over the previously known algorithm for connectedness in the standard property testing model and has optimal dependence on the proximity parameter ε. For estimating the average degree, our results provide an "interpolation" between the query complexity for this computational task in the model with no erasures in two different settings: with only degree queries, investigated by Feige (SIAM J. Comput. '06), and with degree queries and neighbor queries, investigated by Goldreich and Ron (Random Struct. Algorithms '08) and Eden et al. (ICALP '17). We conclude with a discussion of our model and open questions raised by our work.
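A minimal sketch of the query interface as we read it from the abstract; the class and its conventions (erasures returned as None) are our own illustration, not the paper's formalism.

# Illustrative only: a partially erased adjacency-list graph answering the two
# query types, with an adversarially chosen set of entries erased.
class PartiallyErasedGraph:
    def __init__(self, adj, erased):
        self.adj = adj            # adj[v]: list of neighbors of vertex v
        self.erased = erased      # set of erased adjacency entries (v, i)

    def degree(self, v):          # degree queries are always answered
        return len(self.adj[v])

    def neighbor(self, v, i):     # neighbor queries may hit an erasure
        return None if (v, i) in self.erased else self.adj[v][i]

In this interface, the threshold phenomenon says a connectedness tester can cope query-efficiently as long as the erased fraction stays below ε.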

Cite as

Amit Levi, Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, and Nithin Varma. Erasure-Resilient Sublinear-Time Graph Algorithms. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 80:1-80:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{levi_et_al:LIPIcs.ITCS.2021.80,
  author =	{Levi, Amit and Pallavoor, Ramesh Krishnan S. and Raskhodnikova, Sofya and Varma, Nithin},
  title =	{{Erasure-Resilient Sublinear-Time Graph Algorithms}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{80:1--80:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.80},
  URN =		{urn:nbn:de:0030-drops-136192},
  doi =		{10.4230/LIPIcs.ITCS.2021.80},
  annote =	{Keywords: Graph property testing, Computing with incomplete information, Approximating graph parameters}
}
Document
How to Sell Information Optimally: An Algorithmic Study

Authors: Yang Cai and Grigoris Velegkas


Abstract
We investigate the algorithmic problem of selling information to agents who face a decision-making problem under uncertainty. We adopt the model recently proposed by Bergemann et al. [Bergemann et al., 2018], in which information is revealed through signaling schemes called experiments. In the single-agent setting, any mechanism can be represented as a menu of experiments. Our results show that the computational complexity of designing the revenue-optimal menu depends heavily on the way the model is specified. When all the parameters of the problem are given explicitly, we provide a polynomial time algorithm that computes the revenue-optimal menu. For cases where the model is specified with a succinct implicit description, we show that the tractability of the problem is tightly related to the efficient implementation of a best response oracle: when it can be implemented efficiently, we provide an additive FPTAS whose running time is independent of the number of actions. On the other hand, we provide a family of problems for which it is computationally intractable to construct a best response oracle, and we show that it is NP-hard to obtain even a constant fraction of the optimal revenue. Moreover, we investigate a generalization of the original model by Bergemann et al. [Bergemann et al., 2018] that allows multiple agents to compete for useful information. We leverage techniques developed in the study of auction design (see e.g. [Yang Cai et al., 2012; Saeed Alaei et al., 2012; Yang Cai et al., 2012; Yang Cai et al., 2013; Yang Cai et al., 2013]) to design a polynomial time algorithm that computes the revenue-optimal mechanism for selling information.
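To fix ideas: an experiment is a signaling scheme, i.e. a row-stochastic matrix E with E[state][signal] = Pr[signal | state], and a menu pairs experiments with prices. The value computation below is the standard Bayesian one; the concrete matrices, payoffs, and names are our own illustration.

# Illustrative only: a buyer who sees the signal and then best-responds values
# an experiment at sum over signals of max over actions of
# sum_s prior[s] * E[s][sig] * u[a][s].
def value_of_experiment(prior, E, u):
    n_states, n_signals = len(prior), len(E[0])
    total = 0.0
    for sig in range(n_signals):
        w = [prior[s] * E[s][sig] for s in range(n_states)]   # unnormalized posterior
        total += max(sum(w[s] * u[a][s] for s in range(n_states)) for a in range(len(u)))
    return total

fully_informative = [[1.0, 0.0], [0.0, 1.0]]
u = [[1.0, 0.0], [0.0, 1.0]]        # action a pays off 1 iff it matches the state
print(value_of_experiment([0.5, 0.5], fully_informative, u))   # 1.0, vs. 0.5 with no information

Here the buyer's willingness to pay for the fully informative experiment is 1.0 - 0.5 = 0.5, and a menu is simply a list such as [(fully_informative, 0.5)].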

Cite as

Yang Cai and Grigoris Velegkas. How to Sell Information Optimally: An Algorithmic Study. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 81:1-81:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.ITCS.2021.81,
  author =	{Cai, Yang and Velegkas, Grigoris},
  title =	{{How to Sell Information Optimally: An Algorithmic Study}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{81:1--81:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.81},
  URN =		{urn:nbn:de:0030-drops-136205},
  doi =		{10.4230/LIPIcs.ITCS.2021.81},
  annote =	{Keywords: Mechanism Design, Algorithmic Game Theory, Information Design}
}
Document
Computation over the Noisy Broadcast Channel with Malicious Parties

Authors: Klim Efremenko, Gillat Kol, Dmitry Paramonov, and Raghuvansh R. Saxena


Abstract
We study the n-party noisy broadcast channel with a constant fraction of malicious parties. Specifically, we assume that each non-malicious party holds an input bit, and communicates with the others in order to learn the input bits of all non-malicious parties. In each communication round, one of the parties broadcasts a bit to all other parties, and the bit received by each party is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed? Assuming there are no malicious parties, Gallager gave an 𝒪(n log log n)-round protocol for the above problem, which was later shown to be optimal. This protocol, however, inherently breaks down in the presence of malicious parties. We present a novel n ⋅ 𝒪̃(√(log n))-round protocol that solves this problem even when almost half of the parties are malicious. Our protocol uses a new type of error correcting code, which we call a locality sensitive code and which may be of independent interest. Roughly speaking, these codes map "close" messages to "close" codewords, while messages that are not close are mapped to codewords that are very far apart. We view our result as a first step towards a theory of property preserving interactive coding, i.e., interactive codes that preserve useful properties of the protocol being encoded. In our case, the naive protocol over the noiseless broadcast channel, where all the parties broadcast their input bit and output all the bits received, works even in the presence of malicious parties. Our simulation of this protocol, unlike Gallager’s, preserves this property of the original protocol.
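A toy example of the locality-sensitive flavor, not the paper's construction: k-fold repetition scales Hamming distance exactly by k, so close messages stay close and far messages stay far; the paper's codes achieve a much sharper separation between the two regimes.

# Illustrative only: repetition preserves relative Hamming distance exactly.
def encode(bits, k=5):
    return [b for b in bits for _ in range(k)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

assert hamming(encode([0, 1, 1]), encode([0, 0, 1])) == 5 * hamming([0, 1, 1], [0, 0, 1])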

Cite as

Klim Efremenko, Gillat Kol, Dmitry Paramonov, and Raghuvansh R. Saxena. Computation over the Noisy Broadcast Channel with Malicious Parties. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 82:1-82:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{efremenko_et_al:LIPIcs.ITCS.2021.82,
  author =	{Efremenko, Klim and Kol, Gillat and Paramonov, Dmitry and Saxena, Raghuvansh R.},
  title =	{{Computation over the Noisy Broadcast Channel with Malicious Parties}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{82:1--82:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.82},
  URN =		{urn:nbn:de:0030-drops-136215},
  doi =		{10.4230/LIPIcs.ITCS.2021.82},
  annote =	{Keywords: Broadcast Network, Malicious Parties, Communication Complexity}
}
Document
Sampling Arborescences in Parallel

Authors: Nima Anari, Nathan Hu, Amin Saberi, and Aaron Schild


Abstract
We study the problem of sampling a uniformly random directed rooted spanning tree, also known as an arborescence, from a possibly weighted directed graph. Classically, this problem has long been known to be polynomial-time solvable; the exact number of arborescences can be computed by a determinant [Tutte, 1948], and sampling can be reduced to counting [Jerrum et al., 1986; Jerrum and Sinclair, 1996]. However, the classic reduction from sampling to counting seems to be inherently sequential. This raises the question of designing efficient parallel algorithms for sampling. We show that sampling arborescences can be done in RNC. For several well-studied combinatorial structures, counting can be reduced to the computation of a determinant, which is known to be in NC [Csanky, 1975]. These include arborescences, planar graph perfect matchings, Eulerian tours in digraphs, and determinantal point processes. However, not much is known about efficient parallel sampling of these structures. Our work is a step towards resolving this mystery.
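The counting half of this reduction is classical and concrete: by the directed matrix-tree theorem [Tutte, 1948], the number of arborescences rooted at r equals the determinant of the Laplacian with r's row and column deleted. A minimal sketch (our names and conventions):

import numpy as np

# Illustrative only: count spanning out-arborescences rooted at `root` in a
# digraph on vertices 0..n-1, edges given as (u, v) meaning u -> v.
def count_arborescences(n, edges, root):
    L = np.zeros((n, n))
    for u, v in edges:
        L[v, v] += 1.0      # in-degree of v on the diagonal
        L[u, v] -= 1.0      # minus the adjacency entry for u -> v
    keep = [i for i in range(n) if i != root]
    return round(np.linalg.det(L[np.ix_(keep, keep)]))

# A directed triangle 0->1->2->0 plus the edge 0->2 has exactly two
# arborescences rooted at 0: {0->1, 1->2} and {0->1, 0->2}.
assert count_arborescences(3, [(0, 1), (1, 2), (2, 0), (0, 2)], 0) == 2

The sequential bottleneck is the sampling step layered on top of such counts: the classic reduction fixes edges one at a time, re-counting after each choice, which is exactly what the paper's RNC algorithm circumvents.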

Cite as

Nima Anari, Nathan Hu, Amin Saberi, and Aaron Schild. Sampling Arborescences in Parallel. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 83:1-83:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{anari_et_al:LIPIcs.ITCS.2021.83,
  author =	{Anari, Nima and Hu, Nathan and Saberi, Amin and Schild, Aaron},
  title =	{{Sampling Arborescences in Parallel}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{83:1--83:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.83},
  URN =		{urn:nbn:de:0030-drops-136225},
  doi =		{10.4230/LIPIcs.ITCS.2021.83},
  annote =	{Keywords: parallel algorithms, arborescences, spanning trees, random sampling}
}
Document
Extended Abstract
Non-Quasi-Linear Agents in Quasi-Linear Mechanisms (Extended Abstract)

Authors: Moshe Babaioff, Richard Cole, Jason Hartline, Nicole Immorlica, and Brendan Lucier


Abstract
Mechanisms with money are commonly designed under the assumption that agents are quasi-linear, meaning they have linear disutility for spending money. We study the implications when agents with non-linear (specifically, convex) disutility for payments participate in mechanisms designed for quasi-linear agents. We first show that any mechanism that is truthful for quasi-linear buyers has a simple best response function for buyers with non-linear disutility from payments, in which each bidder simply scales down her value for each potential outcome by a fixed factor, equal to her target return on investment (ROI). We call such a strategy ROI-optimal. We prove the existence of a Nash equilibrium in which agents use ROI-optimal strategies for a general class of allocation problems. Motivated by online marketplaces, we then focus on simultaneous second-price auctions for additive bidders and show that all ROI-optimal equilibria in this setting achieve constant-factor approximations to suitable welfare and revenue benchmarks.
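A worked instance of the ROI-optimal best response as we read it from the abstract, instantiated in a single-item second-price auction; the numbers and names are our own illustration.

# Illustrative only: a bidder with value v and target ROI rho >= 1 (value per
# unit payment) scales her reported value down by rho.
def roi_optimal_bid(value, rho):
    return value / rho

bid = roi_optimal_bid(value=10.0, rho=2.0)   # bids 5.0
# In a second-price auction, winning implies the payment is at most the bid
# 5.0, so the realized ROI is at least 10.0 / 5.0 = 2.0; e.g., if the highest
# competing bid is 4.0 she wins, pays 4.0, and gets ROI 2.5 >= rho.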

Cite as

Moshe Babaioff, Richard Cole, Jason Hartline, Nicole Immorlica, and Brendan Lucier. Non-Quasi-Linear Agents in Quasi-Linear Mechanisms (Extended Abstract). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, p. 84:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{babaioff_et_al:LIPIcs.ITCS.2021.84,
  author =	{Babaioff, Moshe and Cole, Richard and Hartline, Jason and Immorlica, Nicole and Lucier, Brendan},
  title =	{{Non-Quasi-Linear Agents in Quasi-Linear Mechanisms}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{84:1--84:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.84},
  URN =		{urn:nbn:de:0030-drops-136230},
  doi =		{10.4230/LIPIcs.ITCS.2021.84},
  annote =	{Keywords: Return on investment, Non-quasi-linear agents, Transferable Welfare, Simultaneous Second-Price Auctions}
}
Document
Extended Abstract
A Model for Ant Trail Formation and its Convergence Properties (Extended Abstract)

Authors: Moses Charikar, Shivam Garg, Deborah M. Gordon, and Kirankumar Shiragur


Abstract
We introduce a model for ant trail formation, building upon previous work on biologically feasible local algorithms that plausibly describe how ants maintain trail networks. The model is a variant of a reinforced random walk on a directed graph, where ants lay pheromone on edges as they traverse them and the next edge to traverse is chosen based on the level of pheromone; this pheromone decays with time. There is a bidirectional flow of ants in the network: the forward flow proceeds along forward edges from source (e.g. the nest) to sink (e.g. a food source), and the backward flow in the opposite direction. Some fraction of ants are lost as they pass through each node (modeling the loss of ants due to exploration observed in the field). We initiate a theoretical study of this model. We note that ant navigation has inspired the field of ant colony optimization, heuristics that have been applied to several combinatorial optimization problems; however, the algorithms developed there are considerably more complex and not constrained to being biologically feasible. We first consider the linear decision rule, where the flow divides itself among the next set of edges in proportion to their pheromone level. Here, we show that the process converges to the path with minimum leakage when the forward and backward flows do not change over time. On the other hand, when the forward and backward flows increase over time (caused by positive reinforcement from the discovery of a food source, for example), we show that the process converges to the shortest path. These results are for graphs consisting of two parallel paths (a case that has been investigated before in experiments). Through simulations, we show that these results hold for more general graphs drawn from various random graph models; proving this convergence in the general case is an interesting open problem. Further, to understand the behaviour of other decision rules beyond the linear rule, we consider a general family of decision rules. For this family, we show that there is no advantage to using a non-linear decision rule if the goal is to find the shortest or the minimum leakage path. We also show that bidirectional flow is necessary for convergence to such paths. Our results provide a plausible explanation for field observations, and open up new avenues for further theoretical and experimental investigation.
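A minimal simulation of the linear decision rule on two parallel paths with per-node leakage; this is a crude one-variable-per-path caricature under our own parameter choices, not the paper's full bidirectional model.

import random

# Illustrative only: ants split at the junction in proportion to pheromone
# (the linear rule); an ant survives a path of length L with probability
# (1 - loss)^L, and only survivors reinforce their path. Pheromone decays.
def simulate(steps=5000, decay=0.995, loss=0.1, lengths=(2, 4)):
    pher = [1.0, 1.0]
    for _ in range(steps):
        p = pher[0] / (pher[0] + pher[1])
        path = 0 if random.random() < p else 1
        if random.random() < (1 - loss) ** lengths[path]:
            pher[path] += 1.0
        pher = [x * decay for x in pher]
    return pher   # pher[0] >> pher[1]: the minimum-leakage path wins

With constant flows this toy model echoes the first result above: the path losing fewer ants accumulates pheromone faster and eventually absorbs essentially all the flow.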

Cite as

Moses Charikar, Shivam Garg, Deborah M. Gordon, and Kirankumar Shiragur. A Model for Ant Trail Formation and its Convergence Properties (Extended Abstract). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 85:1-85:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{charikar_et_al:LIPIcs.ITCS.2021.85,
  author =	{Charikar, Moses and Garg, Shivam and Gordon, Deborah M. and Shiragur, Kirankumar},
  title =	{{A Model for Ant Trail Formation and its Convergence Properties}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{85:1--85:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.85},
  URN =		{urn:nbn:de:0030-drops-136247},
  doi =		{10.4230/LIPIcs.ITCS.2021.85},
  annote =	{Keywords: Ant colonies, Reinforced random walks, Natural Algorithms, Graph Algorithms, Shortest Path, Distributed Algorithms}
}
Document
Extended Abstract
Unknown I.I.D. Prophets: Better Bounds, Streaming Algorithms, and a New Impossibility (Extended Abstract)

Authors: José Correa, Paul Dütting, Felix Fischer, Kevin Schewior, and Bruno Ziliotto


Abstract
A prophet inequality states, for some α ∈ [0,1], that the expected value achievable by a gambler who sequentially observes random variables X_1,… ,X_n and selects one of them is at least an α fraction of the maximum value in the sequence. We obtain three distinct improvements for a setting that was first studied by Correa et al. (EC, 2019) and is particularly relevant to modern applications in algorithmic pricing. In this setting, the random variables are i.i.d. from an unknown distribution and the gambler has access to an additional β n samples for some β ≥ 0. We first give improved lower bounds on α for a wide range of values of β; specifically, α ≥ (1+β)/e when β ≤ 1/(e-1), which is tight, and α ≥ 0.648 when β = 1, which improves on a bound of around 0.635 due to Correa et al. (SODA, 2020). Adding to their practical appeal, specifically in the context of algorithmic pricing, we then show that the new bounds can be obtained even in a streaming model of computation and thus in situations where the use of relevant data is complicated by the sheer amount of data available. We finally establish that the upper bound of 1/e for the case without samples is robust to additional information about the distribution, and applies also to sequences of i.i.d. random variables whose distribution is itself drawn, according to a known distribution, from a finite set of known candidate distributions. This implies a tight prophet inequality for exchangeable sequences of random variables, answering a question of Hill and Kertz (Contemporary Mathematics, 1992), but leaves open the possibility of better guarantees when the number of candidate distributions is small, a setting we believe is of strong interest to applications.
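For a numeric feel for the first bound, evaluating α ≥ (1+β)/e on its stated range (our own spot checks of the formula from the abstract):

import math

# Illustrative only: the guarantee (1 + beta)/e on the range beta <= 1/(e-1).
e = math.e
for beta in (0.0, 0.25, 1 / (e - 1)):
    print(f"beta = {beta:.3f}:  alpha >= {(1 + beta) / e:.3f}")
# beta = 0 recovers the classical 1/e = 0.368 (no samples); at the endpoint
# beta = 1/(e-1) = 0.582 the bound equals 1/(e-1) itself, where it is tight.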

Cite as

José Correa, Paul Dütting, Felix Fischer, Kevin Schewior, and Bruno Ziliotto. Unknown I.I.D. Prophets: Better Bounds, Streaming Algorithms, and a New Impossibility (Extended Abstract). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, p. 86:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{correa_et_al:LIPIcs.ITCS.2021.86,
  author =	{Correa, Jos\'{e} and D\"{u}tting, Paul and Fischer, Felix and Schewior, Kevin and Ziliotto, Bruno},
  title =	{{Unknown I.I.D. Prophets: Better Bounds, Streaming Algorithms, and a New Impossibility}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{86:1--86:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.86},
  URN =		{urn:nbn:de:0030-drops-136255},
  doi =		{10.4230/LIPIcs.ITCS.2021.86},
  annote =	{Keywords: Prophet Inequalities, Stopping Theory, Unknown Distributions}
}
Document
Extended Abstract
Complexity Measures on the Symmetric Group and Beyond (Extended Abstract)

Authors: Neta Dafni, Yuval Filmus, Noam Lifshitz, Nathan Lindzey, and Marc Vinyals


Abstract
We extend the definitions of complexity measures of functions to domains such as the symmetric group. The complexity measures we consider include degree, approximate degree, decision tree complexity, sensitivity, block sensitivity, and a few others. We show that these complexity measures are polynomially related for the symmetric group and for many other domains. To show that all measures but sensitivity are polynomially related, we generalize classical arguments of Nisan and others. To add sensitivity to the mix, we reduce to Huang’s sensitivity theorem using "pseudo-characters", which witness the degree of a function. Using similar ideas, we extend the characterization of Boolean degree 1 functions on the symmetric group due to Ellis, Friedgut and Pilpel to the perfect matching scheme. As another application of our ideas, we simplify the characterization of maximum-size t-intersecting families in the symmetric group and the perfect matching scheme.
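As a reminder of what such measures look like in the familiar Boolean-cube setting (the paper extends them to domains like the symmetric group), here is one of them computed by brute force; the example function is our own.

from itertools import product

# Illustrative only: sensitivity is the max over inputs x of the number of
# coordinates whose flip changes f(x).
def sensitivity(f, n):
    return max(
        sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        for x in product((0, 1), repeat=n)
    )

maj3 = lambda x: int(sum(x) >= 2)
print(sensitivity(maj3, 3))   # 2, attained e.g. at (1, 1, 0)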

Cite as

Neta Dafni, Yuval Filmus, Noam Lifshitz, Nathan Lindzey, and Marc Vinyals. Complexity Measures on the Symmetric Group and Beyond (Extended Abstract). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 87:1-87:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{dafni_et_al:LIPIcs.ITCS.2021.87,
  author =	{Dafni, Neta and Filmus, Yuval and Lifshitz, Noam and Lindzey, Nathan and Vinyals, Marc},
  title =	{{Complexity Measures on the Symmetric Group and Beyond}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{87:1--87:5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.87},
  URN =		{urn:nbn:de:0030-drops-136267},
  doi =		{10.4230/LIPIcs.ITCS.2021.87},
  annote =	{Keywords: Computational Complexity Theory, Analysis of Boolean Functions, Complexity Measures, Extremal Combinatorics}
}
Document
Extended Abstract
Batching and Optimal Multi-Stage Bipartite Allocations (Extended Abstract)

Authors: Yiding Feng and Rad Niazadeh


Abstract
In several applications of real-time matching of demand to supply in online marketplaces, the platform can allow for some latency to batch the demand and improve the matching’s efficiency. Motivated by these scenarios, we study the optimal trade-off between batching and inefficiency in online allocations. In particular, we consider K-stage variants of the classic vertex weighted bipartite b-matching and AdWords problems, where online vertices arrive in K batches. Our main result for both problems is an optimal (1-(1-1/K)^K)-competitive (fractional) matching algorithm, improving on the classic (1-1/e) competitive ratios known for the online variant of these problems [Mehta et al., 2007; Aggarwal et al., 2011]. Our main technique is using a family of convex-programming based matchings that distribute the demand in a particularly balanced way among supply in different stages. More precisely, we identify a sequence of polynomials with decreasing degrees that can be used as strictly concave regularizers of the optimal matching linear program to form this family. By providing structural decompositions of the underlying graph using the optimal solutions of these convex programs, we develop a new multi-stage primal-dual framework to analyze the fractional multi-stage algorithm that returns the corresponding regularized optimal matching in each stage (by solving the stage’s convex program). We further show a matching upper bound by providing an unweighted instance of the problem in which no online algorithm obtains a competitive ratio better than (1-(1-1/K)^K). We extend our results to integral allocations in the vertex weighted b-matching problem with large budgets, and in the AdWords problem with small bid over budget ratios.
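The optimal ratio interpolates cleanly between the two extremes; a quick evaluation (our own spot checks of the formula from the abstract):

# Illustrative only: 1 - (1 - 1/K)^K from offline (K = 1) down to online 1 - 1/e.
for K in (1, 2, 3, 5, 10, 100):
    print(K, 1 - (1 - 1 / K) ** K)
# K = 1 -> 1.0 (a single batch is just the offline problem), K = 2 -> 0.75,
# K = 10 -> 0.6513..., and the ratio tends to 1 - 1/e = 0.6321... as K grows.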

Cite as

Yiding Feng and Rad Niazadeh. Batching and Optimal Multi-Stage Bipartite Allocations (Extended Abstract). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, p. 88:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{feng_et_al:LIPIcs.ITCS.2021.88,
  author =	{Feng, Yiding and Niazadeh, Rad},
  title =	{{Batching and Optimal Multi-Stage Bipartite Allocations}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{88:1--88:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.88},
  URN =		{urn:nbn:de:0030-drops-136272},
  doi =		{10.4230/LIPIcs.ITCS.2021.88},
  annote =	{Keywords: Online Bipartite Matching, Primal-Dual Analysis, Multi-stage Allocation, Batching}
}
Document
Extended Abstract
Shrinkage Under Random Projections, and Cubic Formula Lower Bounds for AC0 (Extended Abstract)

Authors: Yuval Filmus, Or Meir, and Avishay Tal


Abstract
Håstad showed that any De Morgan formula (composed of AND, OR and NOT gates) shrinks by a factor of O(p²) under a random restriction that leaves each variable alive independently with probability p [SICOMP, 1998]. Using this result, he gave an Ω̃(n³) formula size lower bound for the Andreev function, which, up to lower order improvements, remains the state-of-the-art lower bound for any explicit function. In this work, we extend the shrinkage result of Håstad to hold under a far wider family of random restrictions and their generalization - random projections. Based on our shrinkage results, we obtain an Ω̃(n³) formula size lower bound for an explicit function computed in AC⁰. This improves upon the best known formula size lower bounds for AC⁰, which were only quadratic prior to our work. In addition, we prove that the KRW conjecture [Karchmer et al., Computational Complexity 5(3/4), 1995] holds for inner functions for which the unweighted quantum adversary bound is tight. In particular, this holds for inner functions with a tight Khrapchenko bound. Our random projections are tailor-made to the function’s structure so that the function maintains structure even under projection - using such projections is necessary, as standard random restrictions simplify AC⁰ circuits. In contrast, we show that any De Morgan formula shrinks by a quadratic factor under our random projections, allowing us to prove the cubic lower bound. Our proof techniques build on the proof of Håstad for the simpler case of balanced formulas. This allows for a significantly simpler proof at the cost of slightly worse parameters. As such, when specialized to the case of p-random restrictions, our proof can be used as an exposition of Håstad’s result.
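To see why shrinkage exponent 2 yields cubic bounds, the rough Andreev-style arithmetic (the classical template with constants and lower-order factors suppressed, not the paper's refined argument) is: shrinkage gives L(f) ≥ Ω(L(f|ρ)/p²), and choosing p = Θ̃((log n)/n) so that the restricted function still requires formula size L(f|ρ) = Ω̃(n) yields L(f) ≥ Ω̃(n) / Θ̃((log n)/n)² = Ω̃(n³), with the polylogarithmic losses absorbed into the Ω̃.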

Cite as

Yuval Filmus, Or Meir, and Avishay Tal. Shrinkage Under Random Projections, and Cubic Formula Lower Bounds for AC0 (Extended Abstract). In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 89:1-89:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{filmus_et_al:LIPIcs.ITCS.2021.89,
  author =	{Filmus, Yuval and Meir, Or and Tal, Avishay},
  title =	{{Shrinkage Under Random Projections, and Cubic Formula Lower Bounds for AC0}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{89:1--89:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.89},
  URN =		{urn:nbn:de:0030-drops-136281},
  doi =		{10.4230/LIPIcs.ITCS.2021.89},
  annote =	{Keywords: De Morgan formulas, KRW Conjecture, shrinkage, random restrictions, random projections, bounded depth circuits, constant depth circuits, formula complexity}
}
