LIPIcs, Volume 124

10th Innovations in Theoretical Computer Science Conference (ITCS 2019)



Complete Volume
LIPIcs, Volume 124, ITCS'19, Complete Volume

Authors: Avrim Blum


Abstract
LIPIcs, Volume 124, ITCS'19, Complete Volume

Cite as

10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@Proceedings{blum:LIPIcs.ITCS.2019,
  title =	{{LIPIcs, Volume 124, ITCS'19, Complete Volume}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019},
  URN =		{urn:nbn:de:0030-drops-101704},
  doi =		{10.4230/LIPIcs.ITCS.2019},
  annote =	{Keywords: Theory of computation, Mathematics of computing}
}
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Avrim Blum


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 0:i-0:xii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{blum:LIPIcs.ITCS.2019.0,
  author =	{Blum, Avrim},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{0:i--0:xii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.0},
  URN =		{urn:nbn:de:0030-drops-100937},
  doi =		{10.4230/LIPIcs.ITCS.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Submodular Secretary Problem with Shortlists

Authors: Shipra Agrawal, Mohammad Shadravan, and Cliff Stein


Abstract
In the submodular k-secretary problem, the goal is to select k items in a randomly ordered input so as to maximize the expected value of a given monotone submodular function on the set of selected items. In this paper, we introduce a relaxation of this problem, which we refer to as the submodular k-secretary problem with shortlists. In the proposed problem setting, the algorithm is allowed to choose more than k items as part of a shortlist. Then, after seeing the entire input, the algorithm can choose a subset of size k from the bigger set of items in the shortlist. We are interested in understanding to what extent this relaxation can improve the achievable competitive ratio for the submodular k-secretary problem. In particular, using an O(k)-sized shortlist, can an online algorithm achieve a competitive ratio close to the best achievable offline approximation factor for this problem? We answer this question affirmatively by giving a polynomial time algorithm that achieves a 1-1/e-epsilon-O(k^{-1}) competitive ratio for any constant epsilon>0, using a shortlist of size eta_epsilon(k)=O(k). This is especially surprising considering that the best known competitive ratio (in polynomial time) for the submodular k-secretary problem is (1/e-O(k^{-1/2}))(1-1/e) [Thomas Kesselheim and Andreas Tönnis, 2017]. The proposed algorithm also has significant implications for another important problem of submodular function maximization under the random order streaming model and k-cardinality constraint. We show that our algorithm can be implemented in the streaming setting using a memory buffer of size eta_epsilon(k)=O(k) to achieve a 1-1/e-epsilon-O(k^{-1}) approximation. This result substantially improves upon [Norouzi-Fard et al., 2018], which achieved the previously best known approximation factor of 1/2 + 8 x 10^{-14} using O(k log k) memory; and closely matches the known upper bound for this problem [McGregor and Vu, 2017].
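The abstract measures its competitive ratios against the best achievable offline approximation factor for monotone submodular maximization under a k-cardinality constraint. As background only, here is a minimal sketch of the classical offline greedy baseline (Nemhauser-Wolsey-Fisher) that attains that 1-1/e factor; it is not the paper's online/shortlist algorithm, and the value oracle f and the toy coverage function are illustrative assumptions.

# Classical offline greedy for monotone submodular maximization under a
# cardinality constraint (background baseline only, not the paper's algorithm).
# `f` is assumed to be a value oracle for a monotone submodular set function.
def greedy_submodular_max(f, items, k):
    selected = set()
    for _ in range(k):
        best_item, best_gain = None, 0.0
        for x in items:
            if x in selected:
                continue
            gain = f(selected | {x}) - f(selected)   # marginal gain of adding x
            if gain > best_gain:
                best_item, best_gain = x, gain
        if best_item is None:                        # no positive marginal gain left
            break
        selected.add(best_item)
    return selected

# Toy example with a coverage function f(S) = |union of the chosen sets|.
sets = {1: {1, 2, 3}, 2: {3, 4}, 3: {4, 5, 6}, 4: {1, 6}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_submodular_max(f, sets.keys(), k=2))    # -> {1, 3}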

Cite as

Shipra Agrawal, Mohammad Shadravan, and Cliff Stein. Submodular Secretary Problem with Shortlists. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{agrawal_et_al:LIPIcs.ITCS.2019.1,
  author =	{Agrawal, Shipra and Shadravan, Mohammad and Stein, Cliff},
  title =	{{Submodular Secretary Problem with Shortlists}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.1},
  URN =		{urn:nbn:de:0030-drops-100949},
  doi =		{10.4230/LIPIcs.ITCS.2019.1},
  annote =	{Keywords: Submodular Optimization, Secretary Problem, Streaming Algorithms}
}
Hamiltonian Sparsification and Gap-Simulation

Authors: Dorit Aharonov and Leo Zhou


Abstract
Analog quantum simulation - simulation of one Hamiltonian by another - is one of the major goals in the noisy intermediate-scale quantum computation (NISQ) era, and has many applications in quantum complexity. We initiate the rigorous study of the physical resources required for such simulations, where we focus on the task of Hamiltonian sparsification. The goal is to find a simulating Hamiltonian H~ whose underlying interaction graph has bounded degree (this is called degree-reduction) or far fewer edges than that of the original Hamiltonian H (this is called dilution). We set this study in a relaxed framework for analog simulations that we call gap-simulation, where H~ is only required to simulate the ground state(s) and spectral gap of H instead of its full spectrum; we believe this framework is of independent interest. Our main result is a proof that, in stark contrast to the classical setting, general degree-reduction is impossible in the quantum world, even under our relaxed notion of gap-simulation. The impossibility proof relies on devising counterexample Hamiltonians and applying a strengthened variant of the Hastings-Koma decay-of-correlations theorem. We also show a complementary result where degree-reduction is possible when the strength of interactions is allowed to grow polynomially. Furthermore, we prove the impossibility of the related sparsification task of generic Hamiltonian dilution, under a computational hardness assumption. We also clarify the (currently weak) implications of our results for the question of quantum PCP. Our work provides basic answers to many of the "first questions" one would ask about Hamiltonian sparsification and gap-simulation; we hope this serves as a good starting point for future research on these topics.

Cite as

Dorit Aharonov and Leo Zhou. Hamiltonian Sparsification and Gap-Simulation. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 2:1-2:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{aharonov_et_al:LIPIcs.ITCS.2019.2,
  author =	{Aharonov, Dorit and Zhou, Leo},
  title =	{{Hamiltonian Sparsification and Gap-Simulation}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{2:1--2:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.2},
  URN =		{urn:nbn:de:0030-drops-100956},
  doi =		{10.4230/LIPIcs.ITCS.2019.2},
  annote =	{Keywords: quantum simulation, quantum Hamiltonian complexity, sparsification, quantum PCP}
}
On Solving Linear Systems in Sublinear Time

Authors: Alexandr Andoni, Robert Krauthgamer, and Yosef Pogrow


Abstract
We study sublinear algorithms that solve linear systems locally. In the classical version of this problem the input is a matrix S in R^{n x n} and a vector b in R^n in the range of S, and the goal is to output x in R^n satisfying Sx=b. For the case when the matrix S is symmetric diagonally dominant (SDD), the breakthrough algorithm of Spielman and Teng [STOC 2004] approximately solves this problem in near-linear time (in the input size, which is the number of non-zeros in S), and subsequent papers have further simplified, improved, and generalized the algorithms for this setting. Here we focus on computing one (or a few) coordinates of x, which potentially allows for sublinear algorithms. Formally, given an index u in [n] together with S and b as above, the goal is to output an approximation hat{x}_u of x^*_u, where x^* is a fixed solution to Sx=b. Our results show that there is a qualitative gap between SDD matrices and the more general class of positive semidefinite (PSD) matrices. For SDD matrices, we develop an algorithm that approximates a single coordinate x^*_u in time that is polylogarithmic in n, provided that S is sparse and has a small condition number (e.g., the Laplacian of an expander graph). The approximation guarantee is additive: |hat{x}_u - x^*_u| <= epsilon ||x^*||_infty for accuracy parameter epsilon>0. We further prove that the condition-number assumption is necessary and tight. In contrast to the SDD matrices, we prove that for certain PSD matrices S, the running time must be at least polynomial in n (for the same additive approximation), even if S has bounded sparsity and condition number.
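To illustrate what "computing one coordinate locally" can mean, here is a minimal sketch under strong simplifying assumptions (S = I - A with ||A|| < 1 and row access to A): the truncated Neumann series for x^*_u can be evaluated by exploring only the K-hop neighborhood of u. This is not the paper's algorithm (which uses random walks and handles SDD matrices after normalization); the names and the access model are assumptions for illustration.

# Local evaluation of a truncated Neumann series: if S = I - A and ||A|| < 1,
# then x* = sum_{k>=0} A^k b, and (A^k b)_u depends only on the k-hop
# neighborhood of u. Illustration only; not the paper's algorithm.
def local_coordinate_solve(row, b, u, K):
    def term(v, k):                                   # returns (A^k b)_v
        if k == 0:
            return b[v]
        return sum(a_vw * term(w, k - 1) for w, a_vw in row(v))
    return sum(term(u, k) for k in range(K + 1))

# Tiny example: A = [[0, 0.4], [0.4, 0]], b = [1, 1]; the exact solution is x* = [5/3, 5/3].
A = {0: [(1, 0.4)], 1: [(0, 0.4)]}
b = [1.0, 1.0]
print(local_coordinate_solve(lambda v: A[v], b, u=0, K=20))   # ~1.6667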

Cite as

Alexandr Andoni, Robert Krauthgamer, and Yosef Pogrow. On Solving Linear Systems in Sublinear Time. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{andoni_et_al:LIPIcs.ITCS.2019.3,
  author =	{Andoni, Alexandr and Krauthgamer, Robert and Pogrow, Yosef},
  title =	{{On Solving Linear Systems in Sublinear Time}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.3},
  URN =		{urn:nbn:de:0030-drops-100966},
  doi =		{10.4230/LIPIcs.ITCS.2019.3},
  annote =	{Keywords: Linear systems, Laplacian solver, Sublinear time, Randomized linear algebra}
}
Placing Conditional Disclosure of Secrets in the Communication Complexity Universe

Authors: Benny Applebaum and Prashant Nalini Vasudevan


Abstract
In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol (who knows both x and y) if and only if the input (x,y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate f to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of Omega(n) or Omega(n^{1-epsilon}), providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication - a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the class AM, or even AM cap coAM - a well-known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower bounds are known.
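To make the CDS model concrete, here is a standard folklore protocol for the equality predicate over a prime field (illustration only; it is not one of this paper's protocols, and the modulus and names are arbitrary choices). Alice and Bob each send one field element; Carol, who knows x and y, recovers z exactly when x = y, while for x != y the message pair is uniformly random and hides z.

# Folklore CDS for the equality predicate f(x,y) = [x == y] over Z_p (illustration only).
import random

P = 2**61 - 1                        # arbitrary prime modulus (illustrative)

def cds_equality(x, y, z):
    r = random.randrange(P)          # shared randomness between Alice and Bob
    s = random.randrange(P)
    m_a = (r * x + s) % P            # Alice's single message
    m_b = (r * y + s + z) % P        # Bob's single message
    # Carol knows x and y: if x == y then m_b - m_a = z; if x != y then
    # (m_a, m_b) is uniform and independent of z, so nothing is revealed.
    return (m_b - m_a) % P if x == y else None

z = 12345
assert cds_equality(7, 7, z) == z    # predicate holds: secret released
print(cds_equality(7, 8, z))         # predicate fails: Carol learns nothing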

Cite as

Benny Applebaum and Prashant Nalini Vasudevan. Placing Conditional Disclosure of Secrets in the Communication Complexity Universe. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{applebaum_et_al:LIPIcs.ITCS.2019.4,
  author =	{Applebaum, Benny and Vasudevan, Prashant Nalini},
  title =	{{Placing Conditional Disclosure of Secrets in the Communication Complexity Universe}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{4:1--4:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.4},
  URN =		{urn:nbn:de:0030-drops-100976},
  doi =		{10.4230/LIPIcs.ITCS.2019.4},
  annote =	{Keywords: Conditional Disclosure of Secrets, Information-Theoretic Security}
}
Bitcoin: A Natural Oligopoly

Authors: Nick Arnosti and S. Matthew Weinberg


Abstract
Although Bitcoin was intended to be a decentralized digital currency, in practice, mining power is quite concentrated. This fact is a persistent source of concern for the Bitcoin community. We provide an explanation using a simple model to capture miners' incentives to invest in equipment. In our model, n miners compete for a prize of fixed size. Each miner chooses an investment q_i, incurring cost c_i q_i, and then receives reward (q_i^alpha)/(sum_j q_j^alpha), for some alpha >= 1. When c_i = c_j for all i,j, and alpha = 1, there is a unique equilibrium where all miners invest equally. However, we prove that under seemingly mild deviations from this model, equilibrium outcomes become drastically more centralized. In particular, - When costs are asymmetric, if miner i chooses to invest, then miner j has market share at least 1-c_j/c_i. That is, if miner j has costs that are (e.g.) 20% lower than those of miner i, then miner j must control at least 20% of the total mining power. - In the presence of economies of scale (alpha > 1), every market participant has a market share of at least 1-1/(alpha), implying that the market features at most alpha/(alpha - 1) miners in total. We discuss the implications of our results for the future design of cryptocurrencies. In particular, our work further motivates the study of protocols that minimize "orphaned" blocks, proof-of-stake protocols, and incentive compatible protocols.
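The two bullet points above state closed-form bounds; the following lines simply evaluate them numerically (an arithmetic illustration of the stated bounds, not a simulation of the equilibrium model; the function names are ours).

# Numeric illustration of the two bounds stated in the abstract.
def min_share_from_costs(c_j, c_i):
    # If miner j participates alongside miner i, j's market share is at least 1 - c_j/c_i.
    return max(0.0, 1.0 - c_j / c_i)

def max_miners(alpha):
    # With economies of scale alpha > 1, each share is >= 1 - 1/alpha,
    # so at most alpha/(alpha - 1) miners can be active.
    return alpha / (alpha - 1.0)

print(min_share_from_costs(0.8, 1.0))   # 20% cheaper => at least a 20% share
print(max_miners(2.0))                  # alpha = 2    => at most 2 miners
print(max_miners(1.25))                 # alpha = 1.25 => at most 5 miners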

Cite as

Nick Arnosti and S. Matthew Weinberg. Bitcoin: A Natural Oligopoly. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, p. 5:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{arnosti_et_al:LIPIcs.ITCS.2019.5,
  author =	{Arnosti, Nick and Weinberg, S. Matthew},
  title =	{{Bitcoin: A Natural Oligopoly}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{5:1--5:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.5},
  URN =		{urn:nbn:de:0030-drops-100989},
  doi =		{10.4230/LIPIcs.ITCS.2019.5},
  annote =	{Keywords: Bitcoin, Cryptocurrencies, Rent-Seeking Competition}
}
A Simple Sublinear-Time Algorithm for Counting Arbitrary Subgraphs via Edge Sampling

Authors: Sepehr Assadi, Michael Kapralov, and Sanjeev Khanna


Abstract
In the subgraph counting problem, we are given a (large) input graph G(V, E) and a (small) target graph H (e.g., a triangle); the goal is to estimate the number of occurrences of H in G. Our focus here is on designing sublinear-time algorithms for approximately computing the number of occurrences of H in G in the setting where the algorithm is given query access to G. This problem has been studied in several recent papers which primarily focused on specific families of graphs H such as triangles, cliques, and stars. However, not much is known about approximate counting of arbitrary graphs H in the literature. This is in sharp contrast to the closely related subgraph enumeration problem that has received significant attention in the database community as the database join problem. The AGM bound shows that the maximum number of occurrences of any arbitrary subgraph H in a graph G with m edges is O(m^{rho(H)}), where rho(H) is the fractional edge-cover of H, and enumeration algorithms with matching runtime are known for any H. We bridge this gap between subgraph counting and subgraph enumeration by designing a simple sublinear-time algorithm that can estimate the number of occurrences of any arbitrary graph H in G, denoted by #H, to within a (1 +/- epsilon)-approximation with high probability in O(m^{rho(H)}/#H) * poly(log(n),1/epsilon) time. Our algorithm is allowed the standard set of queries for general graphs, namely degree queries, pair queries and neighbor queries, plus an additional edge-sample query that returns an edge chosen uniformly at random. The performance of our algorithm matches that of Eden et al. [FOCS 2015, STOC 2018] for counting triangles and cliques, and extends these results to all choices of subgraph H under the additional assumption of edge-sample queries.
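As a worked instance of the AGM bound invoked above (our illustration, taking the triangle as the target graph H): assigning weight 1/2 to each of the three edges of K_3 fractionally covers every vertex, so the fractional edge-cover number is rho(K_3) = 3/2, and the running time stated in the abstract specializes to the triangle-counting case.

\[
  \rho(K_3) = \tfrac{3}{2}
  \;\Longrightarrow\;
  \#K_3 \le O\!\left(m^{3/2}\right),
  \qquad
  \text{time } O\!\left(\frac{m^{3/2}}{\#K_3}\right) \cdot \mathrm{poly}(\log n, 1/\epsilon).
\]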

Cite as

Sepehr Assadi, Michael Kapralov, and Sanjeev Khanna. A Simple Sublinear-Time Algorithm for Counting Arbitrary Subgraphs via Edge Sampling. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{assadi_et_al:LIPIcs.ITCS.2019.6,
  author =	{Assadi, Sepehr and Kapralov, Michael and Khanna, Sanjeev},
  title =	{{A Simple Sublinear-Time Algorithm for Counting Arbitrary Subgraphs via Edge Sampling}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.6},
  URN =		{urn:nbn:de:0030-drops-100996},
  doi =		{10.4230/LIPIcs.ITCS.2019.6},
  annote =	{Keywords: Sublinear-time algorithms, Subgraph counting, AGM bound}
}
Tensor Network Complexity of Multilinear Maps

Authors: Per Austrin, Petteri Kaski, and Kaie Kubjas


Abstract
We study tensor networks as a model of arithmetic computation for evaluating multilinear maps. These capture any algorithm based on low border rank tensor decompositions, such as O(n^{omega+epsilon}) time matrix multiplication, and in addition many other algorithms such as O(n log n) time discrete Fourier transform and O^*(2^n) time for computing the permanent of a matrix. However tensor networks sometimes yield faster algorithms than those that follow from low-rank decompositions. For instance the fastest known O(n^{(omega +epsilon)t}) time algorithms for counting 3t-cliques can be implemented with tensor networks, even though the underlying tensor has border rank n^{3t} for all t >= 2. For counting homomorphisms of a general pattern graph P into a host graph on n vertices we obtain an upper bound of O(n^{(omega+epsilon)bw(P)/2}) where bw(P) is the branchwidth of P. This essentially matches the bound for counting cliques, and yields small improvements over previous algorithms for many choices of P. While powerful, the model still has limitations, and we are able to show a number of unconditional lower bounds for various multilinear maps, including: b) an Omega(n^{bw(P)}) time lower bound for counting homomorphisms from P to an n-vertex graph, matching the upper bound if omega = 2. In particular for P a v-clique this yields an Omega(n^{ceil[2v/3]}) time lower bound for counting v-cliques, and for P a k-uniform v-hyperclique we obtain an Omega(n^v) time lower bound for k >= 3, ruling out tensor networks as an approach to obtaining non-trivial algorithms for hyperclique counting and the Max-3-CSP problem. c) an Omega(2^{0.918n}) time lower bound for the permanent of an n x n matrix.

Cite as

Per Austrin, Petteri Kaski, and Kaie Kubjas. Tensor Network Complexity of Multilinear Maps. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 7:1-7:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{austrin_et_al:LIPIcs.ITCS.2019.7,
  author =	{Austrin, Per and Kaski, Petteri and Kubjas, Kaie},
  title =	{{Tensor Network Complexity of Multilinear Maps}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{7:1--7:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.7},
  URN =		{urn:nbn:de:0030-drops-101006},
  doi =		{10.4230/LIPIcs.ITCS.2019.7},
  annote =	{Keywords: arithmetic complexity, lower bound, multilinear map, tensor network}
}
A #SAT Algorithm for Small Constant-Depth Circuits with PTF Gates

Authors: Swapnam Bajpai, Vaibhav Krishan, Deepanshu Kush, Nutan Limaye, and Srikanth Srinivasan


Abstract
We show that there is a zero-error randomized algorithm that, when given a small constant-depth Boolean circuit C made up of gates that compute constant-degree Polynomial Threshold Functions or PTFs (i.e., Boolean functions that compute signs of constant-degree polynomials), counts the number of satisfying assignments to C in significantly better than brute-force time. Formally, for any constants d,k, there is an epsilon > 0 such that the algorithm counts the number of satisfying assignments to a given depth-d circuit C made up of k-PTF gates, provided that C has size at most n^{1+epsilon}. The algorithm runs in time 2^{n-n^{Omega(epsilon)}}. Before our result, no algorithm for beating brute-force search was known for counting the number of satisfying assignments even for a single degree-k PTF (which is a depth-1 circuit of linear size). The main new tool is the use of a learning algorithm for learning degree-1 PTFs (or Linear Threshold Functions) using comparison queries due to Kane, Lovett, Moran and Zhang (FOCS 2017). We show that their ideas fit nicely into a memoization approach that yields the #SAT algorithms.

Cite as

Swapnam Bajpai, Vaibhav Krishan, Deepanshu Kush, Nutan Limaye, and Srikanth Srinivasan. A #SAT Algorithm for Small Constant-Depth Circuits with PTF Gates. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bajpai_et_al:LIPIcs.ITCS.2019.8,
  author =	{Bajpai, Swapnam and Krishan, Vaibhav and Kush, Deepanshu and Limaye, Nutan and Srinivasan, Srikanth},
  title =	{{A #SAT Algorithm for Small Constant-Depth Circuits with PTF Gates}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.8},
  URN =		{urn:nbn:de:0030-drops-101010},
  doi =		{10.4230/LIPIcs.ITCS.2019.8},
  annote =	{Keywords: SAT, Polynomial Threshold Functions, Constant-depth Boolean Circuits, Linear Decision Trees, Zero-error randomized algorithms}
}
Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture

Authors: Boaz Barak, Pravesh K. Kothari, and David Steurer


Abstract
Dinur, Khot, Kindler, Minzer and Safra (2016) recently showed that the (imperfect completeness variant of) Khot's 2 to 2 games conjecture follows from a combinatorial hypothesis about the soundness of a certain "Grassmannian agreement tester". In this work, we show that the soundness of the Grassmannian agreement tester follows from a conjecture we call the "Shortcode Expansion Hypothesis" characterizing the non-expanding sets of the degree-two Shortcode graph. We also show the latter conjecture is equivalent to a characterization of the non-expanding sets in the Grassmann graph, as hypothesized by a follow-up paper of Dinur et al. (2017). Following our work, Khot, Minzer and Safra (2018) proved the "Shortcode Expansion Hypothesis". Combining their proof with our result and the reduction of Dinur et al. (2016) completes the proof of the 2 to 2 conjecture with imperfect completeness. We believe that the Shortcode graph provides a useful view of both the hypothesis and the reduction, and might be suitable for obtaining new hardness reductions.

Cite as

Boaz Barak, Pravesh K. Kothari, and David Steurer. Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{barak_et_al:LIPIcs.ITCS.2019.9,
  author =	{Barak, Boaz and Kothari, Pravesh K. and Steurer, David},
  title =	{{Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{9:1--9:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.9},
  URN =		{urn:nbn:de:0030-drops-101022},
  doi =		{10.4230/LIPIcs.ITCS.2019.9},
  annote =	{Keywords: Unique Games Conjecture, Small-Set Expansion, Grassmann Graph, Shortcode}
}
Algorithms, Bounds, and Strategies for Entangled XOR Games

Authors: Adam Bene Watts, Aram W. Harrow, Gurtej Kanwar, and Anand Natarajan


Abstract
Entangled games are a quantum analog of constraint satisfaction problems and have had important applications to quantum complexity theory, quantum cryptography, and the foundations of quantum mechanics. Given a game, the basic computational problem is to compute its entangled value: the supremum success probability attainable by a quantum strategy. We study the complexity of computing the (commuting-operator) entangled value omega^* of entangled XOR games with any number of players. Based on a duality theory for systems of operator equations, we introduce necessary and sufficient criteria for an XOR game to have omega^* = 1, and use these criteria to derive the following results: 1) An algorithm for symmetric games that decides in polynomial time whether omega^* = 1 or omega^* < 1, a task that was not previously known to be decidable, together with a simple tensor-product strategy that achieves value 1 in the former case. The only previous candidate algorithm for this problem was the Navascués-Pironio-Acín (also known as noncommutative Sum of Squares or ncSoS) hierarchy, but no convergence bounds were known. 2) A family of games with three players and with omega^* < 1, where it takes doubly exponential time for the ncSoS algorithm to witness this. By contrast, our algorithm runs in polynomial time. 3) Existence of an unsatisfiable phase for random (non-symmetric) XOR games. We show that there exists a constant C_k^{unsat} depending only on the number k of players, such that a random k-XOR game over an alphabet of size n has omega^* < 1 with high probability when the number of clauses is above C_k^{unsat} n. 4) A lower bound of Omega(n log(n)/log log(n)) on the number of levels in the ncSoS hierarchy required to detect unsatisfiability for most random 3-XOR games. This is in contrast with the classical case where the (3n)^{th} level of the sum-of-squares hierarchy is equivalent to brute-force enumeration of all possible solutions.

Cite as

Adam Bene Watts, Aram W. Harrow, Gurtej Kanwar, and Anand Natarajan. Algorithms, Bounds, and Strategies for Entangled XOR Games. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{benewatts_et_al:LIPIcs.ITCS.2019.10,
  author =	{Bene Watts, Adam and Harrow, Aram W. and Kanwar, Gurtej and Natarajan, Anand},
  title =	{{Algorithms, Bounds, and Strategies for Entangled XOR Games}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{10:1--10:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.10},
  URN =		{urn:nbn:de:0030-drops-101032},
  doi =		{10.4230/LIPIcs.ITCS.2019.10},
  annote =	{Keywords: Nonlocal games, XOR Games, Pseudotelepathy games, Multipartite entanglement}
}
Testing Local Properties of Arrays

Authors: Omri Ben-Eliezer


Abstract
We study testing of local properties in one-dimensional and multi-dimensional arrays. A property of d-dimensional arrays f:[n]^d -> Sigma is k-local if it can be defined by a family of k x ... x k forbidden consecutive patterns. This definition captures numerous interesting properties. For example, monotonicity, Lipschitz continuity and submodularity are 2-local; convexity is (usually) 3-local; and many typical problems in computational biology and computer vision involve o(n)-local properties. In this work, we present a generic approach to test all local properties of arrays over any finite (and not necessarily bounded size) alphabet. We show that any k-local property of d-dimensional arrays is testable by a simple canonical one-sided error non-adaptive epsilon-test, whose query complexity is O(epsilon^{-1}k log{(epsilon n)/k}) for d = 1 and O(c_d epsilon^{-1/d} k * n^{d-1}) for d > 1. The queries made by the canonical test constitute sphere-like structures of varying sizes, and are completely independent of the property and the alphabet Sigma. The query complexity is optimal for a wide range of parameters: For d=1, this matches the query complexity of many previously investigated local properties, while for d > 1 we design and analyze new constructions of k-local properties whose one-sided non-adaptive query complexity matches our upper bounds. For some previously studied properties, our method provides the first known sublinear upper bound on the query complexity.
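To make the definition concrete, here is a minimal offline check (our illustration, not the sublinear tester) showing how a k-local property is specified by forbidden consecutive windows; monotonicity of a one-dimensional array is the 2-local case whose forbidden windows are the decreasing pairs.

# A k-local property of a 1-D array is defined by forbidden windows of k consecutive
# entries; this exhaustive check only illustrates the definition (it is not the tester).
def violates_local_property(arr, k, is_forbidden):
    return any(is_forbidden(arr[i:i + k]) for i in range(len(arr) - k + 1))

def is_monotone(arr):
    # Monotonicity is 2-local: the forbidden consecutive patterns are pairs (a, b) with a > b.
    return not violates_local_property(arr, 2, lambda w: w[0] > w[1])

print(is_monotone([1, 2, 2, 5]))   # True
print(is_monotone([1, 3, 2, 5]))   # False: forbidden window (3, 2)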

Cite as

Omri Ben-Eliezer. Testing Local Properties of Arrays. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{beneliezer:LIPIcs.ITCS.2019.11,
  author =	{Ben-Eliezer, Omri},
  title =	{{Testing Local Properties of Arrays}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.11},
  URN =		{urn:nbn:de:0030-drops-101041},
  doi =		{10.4230/LIPIcs.ITCS.2019.11},
  annote =	{Keywords: Property Testing, Local Properties, Monotonicity Testing, Hypergrid, Pattern Matching}
}
The Complexity of User Retention

Authors: Eli Ben-Sasson and Eden Saig


Abstract
This paper studies families of distributions T that are amenable to retentive learning, meaning that an expert can retain users that seek to predict their future, assuming user attributes are sampled from T and exposed gradually over time. Limited attention span is the main problem experts face in our model. We make two contributions. First, we formally define the notions of retentively learnable distributions and properties. Along the way, we define a retention complexity measure of distributions and a natural class of retentive scoring rules that model the way users evaluate experts they interact with. These rules are shown to be tightly connected to truth-eliciting "proper scoring rules" studied in Decision Theory since the 1950's [McCarthy, PNAS 1956]. Second, we take a first step towards relating retention complexity to other measures of significance in computational complexity. In particular, we show that linear properties (over the binary field) are retentively learnable, whereas random Low Density Parity Check (LDPC) codes have, with high probability, maximal retention complexity. Intriguingly, these results resemble known results from the field of property testing and suggest that deeper connections between retentive distributions and locally testable properties may exist.

Cite as

Eli Ben-Sasson and Eden Saig. The Complexity of User Retention. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 12:1-12:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bensasson_et_al:LIPIcs.ITCS.2019.12,
  author =	{Ben-Sasson, Eli and Saig, Eden},
  title =	{{The Complexity of User Retention}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{12:1--12:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.12},
  URN =		{urn:nbn:de:0030-drops-101053},
  doi =		{10.4230/LIPIcs.ITCS.2019.12},
  annote =	{Keywords: retentive learning, retention complexity, information elicitation, proper scoring rules}
}
Torus Polynomials: An Algebraic Approach to ACC Lower Bounds

Authors: Abhishek Bhrushundi, Kaave Hosseini, Shachar Lovett, and Sankeerth Rao


Abstract
We propose an algebraic approach to proving circuit lower bounds for ACC^0 by defining and studying the notion of torus polynomials. We show how currently known polynomial-based approximation results for AC^0 and ACC^0 can be reformulated in this framework, implying that ACC^0 can be approximated by low-degree torus polynomials. Furthermore, as a step towards proving ACC^0 lower bounds for the majority function via our approach, we show that MAJORITY cannot be approximated by low-degree symmetric torus polynomials. We also pose several open problems related to our framework.

Cite as

Abhishek Bhrushundi, Kaave Hosseini, Shachar Lovett, and Sankeerth Rao. Torus Polynomials: An Algebraic Approach to ACC Lower Bounds. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bhrushundi_et_al:LIPIcs.ITCS.2019.13,
  author =	{Bhrushundi, Abhishek and Hosseini, Kaave and Lovett, Shachar and Rao, Sankeerth},
  title =	{{Torus Polynomials: An Algebraic Approach to ACC Lower Bounds}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{13:1--13:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.13},
  URN =		{urn:nbn:de:0030-drops-101066},
  doi =		{10.4230/LIPIcs.ITCS.2019.13},
  annote =	{Keywords: Circuit complexity, ACC, lower bounds, polynomials}
}
Almost Envy-Free Allocations with Connected Bundles

Authors: Vittorio Bilò, Ioannis Caragiannis, Michele Flammini, Ayumi Igarashi, Gianpiero Monaco, Dominik Peters, Cosimo Vinci, and William S. Zwicker


Abstract
We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph G. When the items are arranged in a path, we show that EF1 allocations are guaranteed to exist for arbitrary monotonic utility functions over bundles, provided that either there are at most four agents, or there are any number of agents but they all have identical utility functions. Our existence proofs are based on classical arguments from the divisible cake-cutting setting, and involve discrete analogues of cut-and-choose, of Stromquist's moving-knife protocol, and of the Su-Simmons argument based on Sperner's lemma. Sperner's lemma can also be used to show that on a path, an EF2 allocation exists for any number of agents. Except for the results using Sperner's lemma, all of our procedures can be implemented by efficient algorithms. Our positive results for paths imply the existence of connected EF1 or EF2 allocations whenever G is traceable, i.e., contains a Hamiltonian path. For the case of two agents, we completely characterize the class of graphs G that guarantee the existence of EF1 allocations as the class of graphs whose biconnected components are arranged in a path. This class is strictly larger than the class of traceable graphs; one can check in linear time whether a graph belongs to this class, and if so return an EF1 allocation.

Cite as

Vittorio Bilò, Ioannis Caragiannis, Michele Flammini, Ayumi Igarashi, Gianpiero Monaco, Dominik Peters, Cosimo Vinci, and William S. Zwicker. Almost Envy-Free Allocations with Connected Bundles. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 14:1-14:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bilo_et_al:LIPIcs.ITCS.2019.14,
  author =	{Bil\`{o}, Vittorio and Caragiannis, Ioannis and Flammini, Michele and Igarashi, Ayumi and Monaco, Gianpiero and Peters, Dominik and Vinci, Cosimo and Zwicker, William S.},
  title =	{{Almost Envy-Free Allocations with Connected Bundles}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{14:1--14:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.14},
  URN =		{urn:nbn:de:0030-drops-101078},
  doi =		{10.4230/LIPIcs.ITCS.2019.14},
  annote =	{Keywords: Envy-free Division, Cake-cutting, Resource Allocation, Algorithmic Game Theory}
}
"Quantum Supremacy" and the Complexity of Random Circuit Sampling

Authors: Adam Bouland, Bill Fefferman, Chinmay Nirkhe, and Umesh Vazirani


Abstract
A critical goal for the field of quantum computation is quantum supremacy - a demonstration of any quantum computation that is prohibitively hard for classical computers. It is both a necessary milestone on the path to useful quantum computers and a test of quantum theory in the realm of high complexity. A leading near-term candidate, put forth by the Google/UCSB team, is sampling from the probability distributions of randomly chosen quantum circuits, called Random Circuit Sampling (RCS). While RCS was defined with experimental realization in mind, we give strong complexity-theoretic evidence for the classical hardness of RCS, placing it on par with the best theoretical proposals for supremacy. Specifically, we show that RCS satisfies an average-case hardness condition - computing output probabilities of typical quantum circuits is as hard as computing them in the worst case, and therefore #P-hard. Our reduction exploits the polynomial structure in the output amplitudes of random quantum circuits, enabled by the Feynman path integral. In addition, it follows from known results that RCS also satisfies an anti-concentration property, namely that errors in estimating output probabilities are small with respect to the probabilities themselves. This makes RCS the first proposal for quantum supremacy with both of these properties. We also give a natural condition under which an existing statistical measure, cross-entropy, verifies RCS, as well as describe a new verification measure which in some formal sense maximizes the information gained from experimental samples.

Cite as

Adam Bouland, Bill Fefferman, Chinmay Nirkhe, and Umesh Vazirani. "Quantum Supremacy" and the Complexity of Random Circuit Sampling. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 15:1-15:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bouland_et_al:LIPIcs.ITCS.2019.15,
  author =	{Bouland, Adam and Fefferman, Bill and Nirkhe, Chinmay and Vazirani, Umesh},
  title =	{{"Quantum Supremacy" and the Complexity of Random Circuit Sampling}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{15:1--15:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.15},
  URN =		{urn:nbn:de:0030-drops-101084},
  doi =		{10.4230/LIPIcs.ITCS.2019.15},
  annote =	{Keywords: quantum supremacy, average-case hardness, verification}
}
Adversarially Robust Property-Preserving Hash Functions

Authors: Elette Boyle, Rio LaVigne, and Vinod Vaikuntanathan


Abstract
Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing. Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible. As observed in many recent works (incl. Mironov, Naor and Segev, STOC 2008; Hardt and Woodruff, STOC 2013; Naor and Yogev, CRYPTO 2015), such a correctness guarantee assumes that the adversary (who produces the offending inputs) has no information about the hash function, and is too weak in many scenarios. We initiate the study of adversarial robustness for property-preserving hash functions, provide definitions, derive broad lower bounds due to a simple connection with communication complexity, and show the necessity of computational assumptions to construct such functions. Our main positive results are two candidate constructions of property-preserving hash functions (achieving different parameters) for the (promise) gap-Hamming property which checks if x and y are "too far" or "too close". Our first construction relies on generic collision-resistant hash functions, and our second on a variant of the syndrome decoding assumption on low-density parity check codes.

Cite as

Elette Boyle, Rio LaVigne, and Vinod Vaikuntanathan. Adversarially Robust Property-Preserving Hash Functions. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 16:1-16:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{boyle_et_al:LIPIcs.ITCS.2019.16,
  author =	{Boyle, Elette and LaVigne, Rio and Vaikuntanathan, Vinod},
  title =	{{Adversarially Robust Property-Preserving Hash Functions}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{16:1--16:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.16},
  URN =		{urn:nbn:de:0030-drops-101097},
  doi =		{10.4230/LIPIcs.ITCS.2019.16},
  annote =	{Keywords: Hash function, compression, property-preserving, one-way communication}
}
On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic

Authors: Karthik C. S. and Pasin Manurangsi


Abstract
Given a set of n points in R^d, the (monochromatic) Closest Pair problem asks to find a pair of distinct points in the set that are closest in the l_p-metric. Closest Pair is a fundamental problem in Computational Geometry and understanding its fine-grained complexity in the Euclidean metric when d=omega(log n) was raised as an open question in recent works (Abboud-Rubinstein-Williams [FOCS'17], Williams [SODA'18], David-Karthik-Laekhanukit [SoCG'18]). In this paper, we show that for every p in R_{>= 1} cup {0}, under the Strong Exponential Time Hypothesis (SETH), for every epsilon>0, the following holds: - No algorithm running in time O(n^{2-epsilon}) can solve the Closest Pair problem in d=(log n)^{Omega_{epsilon}(1)} dimensions in the l_p-metric. - There exists delta = delta(epsilon)>0 and c = c(epsilon)>= 1 such that no algorithm running in time O(n^{1.5-epsilon}) can approximate Closest Pair problem to a factor of (1+delta) in d >= c log n dimensions in the l_p-metric. In particular, our first result is shown by establishing the computational equivalence of the bichromatic Closest Pair problem and the (monochromatic) Closest Pair problem (up to n^{epsilon} factor in the running time) for d=(log n)^{Omega_epsilon(1)} dimensions. Additionally, under SETH, we rule out nearly-polynomial factor approximation algorithms running in subquadratic time for the (monochromatic) Maximum Inner Product problem where we are given a set of n points in n^{o(1)}-dimensional Euclidean space and are required to find a pair of distinct points in the set that maximize the inner product. At the heart of all our proofs is the construction of a dense bipartite graph with low contact dimension, i.e., we construct a balanced bipartite graph on n vertices with n^{2-epsilon} edges whose vertices can be realized as points in a (log n)^{Omega_epsilon(1)}-dimensional Euclidean space such that every pair of vertices which have an edge in the graph are at distance exactly 1 and every other pair of vertices are at distance greater than 1. This graph construction is inspired by the construction of locally dense codes introduced by Dumer-Micciancio-Sudan [IEEE Trans. Inf. Theory'03].

Cite as

Karthik C. S. and Pasin Manurangsi. On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 17:1-17:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{c.s._et_al:LIPIcs.ITCS.2019.17,
  author =	{C. S., Karthik and Manurangsi, Pasin},
  title =	{{On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{17:1--17:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.17},
  URN =		{urn:nbn:de:0030-drops-101100},
  doi =		{10.4230/LIPIcs.ITCS.2019.17},
  annote =	{Keywords: Closest Pair, Bichromatic Closest Pair, Contact Dimension, Fine-Grained Complexity}
}
Expander-Based Cryptography Meets Natural Proofs

Authors: Igor Carboni Oliveira, Rahul Santhanam, and Roei Tell


Abstract
We introduce new forms of attack on expander-based cryptography, and in particular on Goldreich's pseudorandom generator and one-way function. Our attacks exploit low circuit complexity of the underlying expander's neighbor function and/or of the local predicate. Our two key conceptual contributions are: 1) We put forward the possibility that the choice of expander matters in expander-based cryptography. In particular, using expanders whose neighbor function has low circuit complexity might compromise the security of Goldreich's PRG and OWF in certain settings. 2) We show that the security of Goldreich's PRG and OWF is closely related to two other long-standing problems: Specifically, to the existence of unbalanced lossless expanders with low-complexity neighbor function, and to limitations on circuit lower bounds (i.e., natural proofs). In particular, our results further motivate the investigation of affine/local unbalanced lossless expanders and of average-case lower bounds against DNF-XOR circuits. We prove two types of technical results that support the above conceptual messages. First, we unconditionally break Goldreich's PRG when instantiated with a specific expander (whose existence we prove), for a class of predicates that match the parameters of the currently best "hard" candidates, in the regime of quasi-polynomial stretch. Second, conditioned on the existence of expanders whose neighbor functions have extremely low circuit complexity, we present attacks on Goldreich's generator in the regime of polynomial stretch. As one corollary, conditioned on the existence of the foregoing expanders, we show that either the parameters of natural properties for several constant-depth circuit classes cannot be improved, even mildly; or Goldreich's generator is insecure in the regime of a large polynomial stretch, regardless of the predicate used.
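For context, this is the generic structure of Goldreich's local PRG/OWF that the attacks target: each output bit applies a fixed predicate to the seed bits selected by the neighbor set of one right-vertex of a bipartite graph. The random graph and the XOR-AND-style predicate below are placeholders for illustration only, not a secure or recommended instantiation; indeed, the point above is precisely that the choice of graph can matter.

# Structural sketch of Goldreich's construction (illustration only; the graph and
# the predicate are placeholders, not a recommended instantiation).
import random

def goldreich_prg(seed_bits, neighbor_sets, predicate):
    # Output bit i = predicate applied to the seed bits indexed by Gamma(i).
    return [predicate([seed_bits[j] for j in nbrs]) for nbrs in neighbor_sets]

def xor_and_predicate(bits):
    # An example 5-local predicate of the XOR-AND form: x1 + x2 + x3 + x4*x5 (mod 2).
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

n, m, locality = 32, 64, 5
rng = random.Random(0)
neighbor_sets = [rng.sample(range(n), locality) for _ in range(m)]   # placeholder graph
seed = [rng.randrange(2) for _ in range(n)]
print(goldreich_prg(seed, neighbor_sets, xor_and_predicate)[:16])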

Cite as

Igor Carboni Oliveira, Rahul Santhanam, and Roei Tell. Expander-Based Cryptography Meets Natural Proofs. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{carbonioliveira_et_al:LIPIcs.ITCS.2019.18,
  author =	{Carboni Oliveira, Igor and Santhanam, Rahul and Tell, Roei},
  title =	{{Expander-Based Cryptography Meets Natural Proofs}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.18},
  URN =		{urn:nbn:de:0030-drops-101112},
  doi =		{10.4230/LIPIcs.ITCS.2019.18},
  annote =	{Keywords: Pseudorandom Generators, One-Way Functions, Expanders, Circuit Complexity}
}
A Note on the Quantum Query Complexity of Permutation Symmetric Functions

Authors: André Chailloux


Abstract
It is known since the work of [Aaronson and Ambainis, 2014] that for any permutation symmetric function f, the quantum query complexity is at most polynomially smaller than the classical randomized query complexity, more precisely that R(f) = O~(Q^7(f)). In this paper, we improve this result and show that R(f) = O(Q^3(f)) for a more general class of symmetric functions. Our proof is constructive and relies largely on the quantum hardness of distinguishing a random permutation from a random function with small range from Zhandry [Zhandry, 2015].

Cite as

André Chailloux. A Note on the Quantum Query Complexity of Permutation Symmetric Functions. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 19:1-19:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{chailloux:LIPIcs.ITCS.2019.19,
  author =	{Chailloux, Andr\'{e}},
  title =	{{A Note on the Quantum Query Complexity of Permutation Symmetric Functions}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{19:1--19:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.19},
  URN =		{urn:nbn:de:0030-drops-101126},
  doi =		{10.4230/LIPIcs.ITCS.2019.19},
  annote =	{Keywords: quantum query complexity, permutation symmetric functions}
}
Document
Adaptive Boolean Monotonicity Testing in Total Influence Time

Authors: Deeparnab Chakrabarty and C. Seshadhri


Abstract
Testing monotonicity of a Boolean function f:{0,1}^n -> {0,1} is an important problem in the field of property testing. It has led to connections with many interesting combinatorial questions on the directed hypercube: routing, random walks, and new isoperimetric theorems. Denoting the proximity parameter by epsilon, the best tester is the non-adaptive O~(epsilon^{-2}sqrt{n}) tester of Khot-Minzer-Safra (FOCS 2015). A series of recent results by Belovs-Blais (STOC 2016) and Chen-Waingarten-Xie (STOC 2017) has led to Omega~(n^{1/3}) lower bounds for adaptive testers. Reducing this gap is a significant question that touches on the role of adaptivity in monotonicity testing of Boolean functions. We approach this question from the perspective of parametrized property testing, a concept recently introduced by Pallavoor-Raskhodnikova-Varma (ACM TOCT 2017), where one seeks to understand the performance of testers with respect to parameters other than just the size. Our result is an adaptive monotonicity tester with one-sided error whose query complexity is O(epsilon^{-2}I(f)log^5 n), where I(f) is the total influence of the function. Therefore, adaptivity provably helps monotonicity testing for low-influence functions.
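
Since the query bound is parameterized by the total influence I(f) = sum_i Pr_x[f(x) != f(x^i)], the sketch below illustrates the standard sampling estimator for this quantity (background only; it is not the paper's tester, and the example function and sample size are illustrative).

import random

def estimate_total_influence(f, n, samples=20000, rng=random.Random(0)):
    # I(f) = n * Pr over a random x and random coordinate i that f flips on the edge (x, x^i).
    flips = 0
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        i = rng.randrange(n)
        y = list(x)
        y[i] ^= 1
        flips += int(f(x) != f(y))
    return n * flips / samples

n = 9
majority = lambda x: int(sum(x) > n // 2)   # total influence is Theta(sqrt(n)) for majority
print(estimate_total_influence(majority, n))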

Cite as

Deeparnab Chakrabarty and C. Seshadhri. Adaptive Boolean Monotonicity Testing in Total Influence Time. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 20:1-20:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chakrabarty_et_al:LIPIcs.ITCS.2019.20,
  author =	{Chakrabarty, Deeparnab and Seshadhri, C.},
  title =	{{Adaptive Boolean Monotonicity Testing in Total Influence Time}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{20:1--20:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.20},
  URN =		{urn:nbn:de:0030-drops-101133},
  doi =		{10.4230/LIPIcs.ITCS.2019.20},
  annote =	{Keywords: Property Testing, Monotonicity Testing, Influence of Boolean Functions}
}
Document
On Locality-Sensitive Orderings and Their Applications

Authors: Timothy M. Chan, Sariel Har-Peled, and Mitchell Jones


Abstract
For any constant d and parameter epsilon > 0, we show the existence of (roughly) 1/epsilon^d orderings on the unit cube [0,1)^d, such that any two points p, q in [0,1)^d that are close together under the Euclidean metric are "close together" in one of these linear orderings in the following sense: the only points that could lie between p and q in the ordering are points with Euclidean distance at most epsilon | p - q | from p or q. These orderings are extensions of the Z-order, and they can be efficiently computed. Functionally, the orderings can be thought of as a replacement for quadtrees and related structures (like well-separated pair decompositions). We use such orderings to obtain surprisingly simple algorithms for a number of basic problems in low-dimensional computational geometry, including (i) dynamic approximate bichromatic closest pair, (ii) dynamic spanners, (iii) dynamic approximate minimum spanning trees, (iv) static and dynamic fault-tolerant spanners, and (v) approximate nearest neighbor search.
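
The orderings extend the classical Z-order (Morton order), in which points are compared by interleaving the bits of their coordinates; the sketch below shows only this base ordering. The epsilon-dependent family of shifted orderings and the required precision are as in the paper, and the grid resolution here is an arbitrary placeholder.

def morton_key(point, bits=16):
    # Z-order (Morton) key: snap each coordinate of [0,1)^d to a 2^bits grid
    # and interleave the coordinate bits, most significant first.
    coords = [min(int(c * (1 << bits)), (1 << bits) - 1) for c in point]
    key = 0
    for b in reversed(range(bits)):
        for c in coords:
            key = (key << 1) | ((c >> b) & 1)
    return key

points = [(0.12, 0.80), (0.13, 0.81), (0.90, 0.10)]
print(sorted(points, key=morton_key))   # nearby points tend to be adjacent in this order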

Cite as

Timothy M. Chan, Sariel Har-Peled, and Mitchell Jones. On Locality-Sensitive Orderings and Their Applications. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 21:1-21:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chan_et_al:LIPIcs.ITCS.2019.21,
  author =	{Chan, Timothy M. and Har-Peled, Sariel and Jones, Mitchell},
  title =	{{On Locality-Sensitive Orderings and Their Applications}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{21:1--21:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.21},
  URN =		{urn:nbn:de:0030-drops-101140},
  doi =		{10.4230/LIPIcs.ITCS.2019.21},
  annote =	{Keywords: Approximation algorithms, Data structures, Computational geometry}
}
Document
Pseudorandom Generators from the Second Fourier Level and Applications to AC0 with Parity Gates

Authors: Eshan Chattopadhyay, Pooya Hatami, Shachar Lovett, and Avishay Tal


Abstract
A recent work of Chattopadhyay et al. (CCC 2018) introduced a new framework for the design of pseudorandom generators for Boolean functions. It works under the assumption that the Fourier tails of the Boolean functions are uniformly bounded for all levels by an exponential function. In this work, we design an alternative pseudorandom generator that only requires bounds on the second level of the Fourier tails. It is based on a derandomization of the work of Raz and Tal (ECCC 2018) who used the above framework to obtain an oracle separation between BQP and PH. As an application, we give a concrete conjecture for bounds on the second level of the Fourier tails for low degree polynomials over the finite field F_2. If true, it would imply an efficient pseudorandom generator for AC^0[oplus], a well-known open problem in complexity theory. As a stepping stone towards resolving this conjecture, we prove such bounds for the first level of the Fourier tails.
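
For orientation, the level-k Fourier quantities involved are the standard ones; a brief recap follows (standard definitions only, with the exact tail condition as stated in the paper):

\[
\hat{f}(S) \;=\; \mathop{\mathbb{E}}_{x \in \{-1,1\}^n}\Big[f(x)\prod_{i \in S} x_i\Big],
\qquad
L_{1,k}(f) \;=\; \sum_{|S| = k} \big|\hat{f}(S)\big|,
\]

so a bound "on the second level of the Fourier tails" of a class of functions is an upper bound on L_{1,2}(f) for every f in the class (a class that is typically closed under restrictions).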

Cite as

Eshan Chattopadhyay, Pooya Hatami, Shachar Lovett, and Avishay Tal. Pseudorandom Generators from the Second Fourier Level and Applications to AC0 with Parity Gates. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chattopadhyay_et_al:LIPIcs.ITCS.2019.22,
  author =	{Chattopadhyay, Eshan and Hatami, Pooya and Lovett, Shachar and Tal, Avishay},
  title =	{{Pseudorandom Generators from the Second Fourier Level and Applications to AC0 with Parity Gates}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{22:1--22:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.22},
  URN =		{urn:nbn:de:0030-drops-101150},
  doi =		{10.4230/LIPIcs.ITCS.2019.22},
  annote =	{Keywords: Derandomization, Pseudorandom generator, Explicit construction, Random walk, Small-depth circuits with parity gates}
}
Document
Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols

Authors: Lijie Chen and Ruosong Wang


Abstract
In recent years, the polynomial method from circuit complexity has been applied to several fundamental problems, obtaining state-of-the-art running times (e.g., R. Williams's n^3 / 2^{Omega(sqrt{log n})} time algorithm for APSP). As observed in [Alman and Williams, STOC 2017], almost all applications of the polynomial method in algorithm design ultimately rely on certain (probabilistic) low-rank decompositions of the computation matrices corresponding to key subroutines. They suggest that making use of low-rank decompositions directly could lead to more powerful algorithms, as the polynomial method is just one way to derive such a decomposition. Inspired by their observation, in this paper, we study another way of systematically constructing low-rank decompositions of matrices which could be used by algorithms - communication protocols. Since their introduction, it has been known that various types of communication protocols lead to certain low-rank decompositions (e.g., P protocols/rank, BQP protocols/approximate rank). These are usually interpreted as approaches for proving communication lower bounds, while in this work we explore the other direction. We have the following two generic algorithmic applications of communication protocols: - Quantum Communication Protocols and Deterministic Approximate Counting. Our first connection is that a fast BQP communication protocol for a function f implies a fast deterministic additive approximate counting algorithm for a related pair counting problem. Applying known BQP communication protocols, we get fast deterministic additive approximate counting algorithms for Count-OV (#OV), Sparse Count-OV and Formula of SYM circuits. In particular, our approximate counting algorithm for #OV runs in near-linear time for all dimensions d = o(log^2 n). Previously, no truly subquadratic-time algorithm was known even for d = omega(log n). - Arthur-Merlin Communication Protocols and Faster Satisfying-Pair Algorithms. Our second connection is that a fast AM^{cc} protocol for a function f implies a faster-than-brute-force algorithm for f-Satisfying-Pair. Using the classical Goldwasser-Sipser AM protocols for approximating set size, we obtain a new algorithm for approximate Max-IP_{n,c log n} in time n^{2 - 1/O(log c)}, matching the state-of-the-art algorithms in [Chen, CCC 2018]. We also apply our second connection to shed some light on long-standing open problems in communication complexity. We show that if the Longest Common Subsequence (LCS) problem admits a fast (computationally efficient) AM^{cc} protocol (polylog(n) complexity), then polynomial-size Formula-SAT admits a 2^{n - n^{1-delta}} time algorithm for any constant delta > 0, which is conjectured to be unlikely by a recent work [Abboud and Bringmann, ICALP 2018]. The same holds even for a fast (computationally efficient) PH^{cc} protocol.
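
For concreteness, the pair-counting problem #OV mentioned above asks for the number of orthogonal pairs between two sets of Boolean vectors; the brute-force baseline below only fixes the problem statement (the paper's algorithms, obtained via BQP communication protocols, approximate this count much faster).

def count_ov(A, B):
    # #OV: count pairs (a, b) in A x B with inner product 0 over the integers.
    return sum(1 for a in A for b in B if all(x * y == 0 for x, y in zip(a, b)))

A = [(1, 0, 1), (0, 1, 0)]
B = [(0, 1, 0), (0, 0, 1)]
print(count_ov(A, B))   # the orthogonal pairs are (1,0,1)&(0,1,0) and (0,1,0)&(0,0,1), so this prints 2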

Cite as

Lijie Chen and Ruosong Wang. Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 23:1-23:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ITCS.2019.23,
  author =	{Chen, Lijie and Wang, Ruosong},
  title =	{{Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{23:1--23:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.23},
  URN =		{urn:nbn:de:0030-drops-101162},
  doi =		{10.4230/LIPIcs.ITCS.2019.23},
  annote =	{Keywords: Quantum communication protocols, Arthur-Merlin communication protocols, approximate counting, approximate rank}
}
Document
Capturing Complementarity in Set Functions by Going Beyond Submodularity/Subadditivity

Authors: Wei Chen, Shang-Hua Teng, and Hanrui Zhang


Abstract
We introduce two new "degree of complementarity" measures: supermodular width and superadditive width. Both are formulated based on natural witnesses of complementarity. We show that both measures are robust by proving that they, respectively, characterize the gap of monotone set functions from being submodular and subadditive. Thus, they define two new hierarchies over monotone set functions, which we will refer to as the Supermodular Width (SMW) hierarchy and the Superadditive Width (SAW) hierarchy, with foundations - i.e., level 0 of the hierarchies - resting exactly on submodular and subadditive functions, respectively. We present a comprehensive comparative analysis of the SMW hierarchy and the Supermodular Degree (SD) hierarchy, defined by Feige and Izsak. We prove that the SMW hierarchy is strictly more expressive than the SD hierarchy: Every monotone set function of supermodular degree d has supermodular width at most d, and there exists a supermodular-width-1 function over a ground set of m elements whose supermodular degree is m-1. We show that previous results regarding approximation guarantees for welfare and constrained maximization as well as regarding the Price of Anarchy (PoA) of simple auctions can be extended without any loss from the supermodular degree to the supermodular width. We also establish almost matching information-theoretic lower bounds for these two well-studied fundamental maximization problems over set functions. The combination of these approximation and hardness results illustrates that the SMW hierarchy provides not only a natural notion of complementarity, but also an accurate characterization of "near submodularity" needed for maximization approximation. While the SD and SMW hierarchies support nontrivial bounds on the PoA of simple auctions, we show that our SAW hierarchy seems to capture more intrinsic properties needed to realize the efficiency of simple auctions. So far, the SAW hierarchy provides the best dependency for the PoA of the Single-bid Auction, and is nearly as competitive as the Maximum over Positive Hypergraphs (MPH) hierarchy for the Simultaneous Item First Price Auction (SIA). We also provide almost tight lower bounds for the PoA of both auctions with respect to the SAW hierarchy.
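
As background for the two width measures, recall the level-0 classes they are measured against (standard definitions; the width measures themselves are defined in the paper):

\[
f(S \cup \{x\}) - f(S) \;\ge\; f(T \cup \{x\}) - f(T) \quad \text{for all } S \subseteq T,\ x \notin T \qquad (\text{submodularity}),
\]
\[
f(S \cup T) \;\le\; f(S) + f(T) \quad \text{for all } S, T \qquad (\text{subadditivity}).
\]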

Cite as

Wei Chen, Shang-Hua Teng, and Hanrui Zhang. Capturing Complementarity in Set Functions by Going Beyond Submodularity/Subadditivity. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 24:1-24:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ITCS.2019.24,
  author =	{Chen, Wei and Teng, Shang-Hua and Zhang, Hanrui},
  title =	{{Capturing Complementarity in Set Functions by Going Beyond Submodularity/Subadditivity}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{24:1--24:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.24},
  URN =		{urn:nbn:de:0030-drops-101174},
  doi =		{10.4230/LIPIcs.ITCS.2019.24},
  annote =	{Keywords: set functions, measure of complementarity, submodularity, subadditivity, cardinality constrained maximization, welfare maximization, simple auctions, price of anarchy}
}
Document
Probabilistic Checking Against Non-Signaling Strategies from Linearity Testing

Authors: Alessandro Chiesa, Peter Manohar, and Igor Shinkar


Abstract
Non-signaling strategies are a generalization of quantum strategies that have been studied in physics over the past three decades. Recently, they have found applications in theoretical computer science, including to proving inapproximability results for linear programming and to constructing protocols for delegating computation. A central tool for these applications is probabilistically checkable proofs (PCPs) that are sound against non-signaling strategies. In this paper we prove that the exponential-length constant-query PCP construction due to Arora et al. (JACM 1998) is sound against non-signaling strategies. Our result offers a new length-vs-query tradeoff when compared to the non-signaling PCP of Kalai, Raz, and Rothblum (STOC 2013 and 2014) and, moreover, may serve as an intermediate step to a proof of a non-signaling analogue of the PCP Theorem.
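
For context, the linearity test at the heart of the exponential-length PCP of Arora et al. is the classical Blum-Luby-Rubinfeld test; the sketch below shows the test run against an ordinary (classical) proof oracle, whereas analyzing it against non-signaling strategies is the subject of the paper.

import random

def blr_linearity_test(f, n, trials=100, rng=random.Random(0)):
    # BLR test: pick random x, y in F_2^n and check f(x) + f(y) = f(x + y) (addition mod 2).
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [rng.randint(0, 1) for _ in range(n)]
        if (f(x) ^ f(y)) != f([a ^ b for a, b in zip(x, y)]):
            return False   # reject: a violated linearity constraint was found
    return True            # accept

n = 8
linear = lambda x: x[0] ^ x[3] ^ x[5]   # a linear function always passes
print(blr_linearity_test(linear, n))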

Cite as

Alessandro Chiesa, Peter Manohar, and Igor Shinkar. Probabilistic Checking Against Non-Signaling Strategies from Linearity Testing. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chiesa_et_al:LIPIcs.ITCS.2019.25,
  author =	{Chiesa, Alessandro and Manohar, Peter and Shinkar, Igor},
  title =	{{Probabilistic Checking Against Non-Signaling Strategies from Linearity Testing}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.25},
  URN =		{urn:nbn:de:0030-drops-101188},
  doi =		{10.4230/LIPIcs.ITCS.2019.25},
  annote =	{Keywords: probabilistically checkable proofs, linearity testing, non-signaling strategies}
}
Document
On the Algorithmic Power of Spiking Neural Networks

Authors: Chi-Ning Chou, Kai-Min Chung, and Chi-Jen Lu


Abstract
Spiking Neural Networks (SNN) are mathematical models used in neuroscience to describe the dynamics among a set of neurons that interact with each other by firing instantaneous signals, a.k.a. spikes. Interestingly, a recent advance in neuroscience [Barrett-Denève-Machens, NIPS 2013] showed that the neurons' firing rate, i.e., the average number of spikes fired per unit of time, can be characterized by the optimal solution of a quadratic program defined by the parameters of the dynamics. This indicated that SNN potentially has the computational power to solve non-trivial quadratic programs. However, the results were justified empirically without rigorous analysis. We put this into the context of natural algorithms and aim to investigate the algorithmic power of SNN. In particular, we emphasize giving a rigorous asymptotic analysis of the performance of SNN in solving optimization problems. To enable a theoretical study, we first identify a simplified SNN model that is tractable for analysis. Next, we confirm the empirical observation in the work of Barrett et al. by giving an upper bound on the convergence rate of SNN in solving the quadratic program. Further, we observe that in the case where there are infinitely many optimal solutions, SNN tends to converge to the one with smaller l_1 norm. We substantiate this finding by showing that SNN can solve the l_1 minimization problem under some regularity conditions. Our main technical insight is a dual view of the SNN dynamics, under which SNN can be viewed as a new natural primal-dual algorithm for the l_1 minimization problem. We believe that the dual view is of independent interest and may find interesting interpretations in neuroscience.
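
The l_1 minimization problem referred to above is the standard basis-pursuit program, stated here only to fix the objective (the regularity conditions and the primal-dual view are as in the paper):

\[
\min_{x \in \mathbb{R}^n} \; \|x\|_1 \quad \text{subject to} \quad Ax = b .
\]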

Cite as

Chi-Ning Chou, Kai-Min Chung, and Chi-Jen Lu. On the Algorithmic Power of Spiking Neural Networks. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 26:1-26:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chou_et_al:LIPIcs.ITCS.2019.26,
  author =	{Chou, Chi-Ning and Chung, Kai-Min and Lu, Chi-Jen},
  title =	{{On the Algorithmic Power of Spiking Neural Networks}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{26:1--26:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.26},
  URN =		{urn:nbn:de:0030-drops-101191},
  doi =		{10.4230/LIPIcs.ITCS.2019.26},
  annote =	{Keywords: Spiking Neural Networks, Natural Algorithms, l\underline1 Minimization}
}
Document
Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization

Authors: Constantinos Daskalakis and Ioannis Panageas


Abstract
Motivated by applications in Game Theory, Optimization, and Generative Adversarial Networks, recent work of Daskalakis et al. [Daskalakis et al., ICLR, 2018] and follow-up work of Liang and Stokes [Liang and Stokes, 2018] have established that a variant of the widely used Gradient Descent/Ascent procedure, called "Optimistic Gradient Descent/Ascent (OGDA)", exhibits last-iterate convergence to saddle points in unconstrained convex-concave min-max optimization problems. We show that the same holds true in the more general problem of constrained min-max optimization under a variant of the no-regret Multiplicative-Weights-Update method called "Optimistic Multiplicative-Weights Update (OMWU)". This answers an open question of Syrgkanis et al. [Syrgkanis et al., NIPS, 2015]. The proof of our result requires fundamentally different techniques from those that exist in the no-regret learning literature and the aforementioned papers. We show that OMWU monotonically improves the Kullback-Leibler divergence of the current iterate to the (appropriately normalized) min-max solution until it enters a neighborhood of the solution. Inside that neighborhood we show that OMWU becomes a contracting map converging to the exact solution. We believe that our techniques will be useful in the analysis of the last iterate of other learning algorithms.
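
A minimal sketch of the OMWU dynamics for a bilinear zero-sum game max_x min_y x^T A y over the probability simplices is given below; the step size, game, and horizon are illustrative placeholders, and the last-iterate convergence guarantee is the content of the paper.

import numpy as np

def omwu_zero_sum(A, x, y, eta=0.05, T=5000):
    # Optimistic Multiplicative Weights Update for max_x min_y x^T A y.
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(T):
        gx, gy = A @ y, A.T @ x                     # current payoff gradients
        x = x * np.exp(eta * (2 * gx - gx_prev))    # optimistic step: 2 * current - previous
        y = y * np.exp(-eta * (2 * gy - gy_prev))
        x, y = x / x.sum(), y / y.sum()
        gx_prev, gy_prev = gx, gy
    return x, y

A = np.array([[0.0, 1.0], [1.0, 0.0]])              # a matching-pennies-style game
x0, y0 = np.array([0.9, 0.1]), np.array([0.2, 0.8]) # start away from the equilibrium
print(omwu_zero_sum(A, x0, y0))                     # per the paper's guarantee, both marginals approach (1/2, 1/2)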

Cite as

Constantinos Daskalakis and Ioannis Panageas. Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 27:1-27:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{daskalakis_et_al:LIPIcs.ITCS.2019.27,
  author =	{Daskalakis, Constantinos and Panageas, Ioannis},
  title =	{{Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{27:1--27:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.27},
  URN =		{urn:nbn:de:0030-drops-101204},
  doi =		{10.4230/LIPIcs.ITCS.2019.27},
  annote =	{Keywords: No regret learning, Zero-sum games, Convergence, Dynamical Systems, KL divergence}
}
Document
Density Estimation for Shift-Invariant Multidimensional Distributions

Authors: Anindya De, Philip M. Long, and Rocco A. Servedio


Abstract
We study density estimation for classes of shift-invariant distributions over R^d. A multidimensional distribution is "shift-invariant" if, roughly speaking, it is close in total variation distance to a small shift of it in any direction. Shift-invariance relaxes smoothness assumptions commonly used in non-parametric density estimation to allow jump discontinuities. The different classes of distributions that we consider correspond to different rates of tail decay. For each such class we give an efficient algorithm that learns any distribution in the class from independent samples with respect to total variation distance. As a special case of our general result, we show that d-dimensional shift-invariant distributions which satisfy an exponential tail bound can be learned to total variation distance error epsilon using O~_d(1/ epsilon^{d+2}) examples and O~_d(1/ epsilon^{2d+2}) time. This implies that, for constant d, multivariate log-concave distributions can be learned in O~_d(1/epsilon^{2d+2}) time using O~_d(1/epsilon^{d+2}) samples, answering a question of [Diakonikolas et al., 2016]. All of our results extend to a model of noise-tolerant density estimation using Huber's contamination model, in which the target distribution to be learned is a (1-epsilon,epsilon) mixture of some unknown distribution in the class with some other arbitrary and unknown distribution, and the learning algorithm must output a hypothesis distribution with total variation distance error O(epsilon) from the target distribution. We show that our general results are close to best possible by proving a simple Omega (1/epsilon^d) information-theoretic lower bound on sample complexity even for learning bounded distributions that are shift-invariant.

Cite as

Anindya De, Philip M. Long, and Rocco A. Servedio. Density Estimation for Shift-Invariant Multidimensional Distributions. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{de_et_al:LIPIcs.ITCS.2019.28,
  author =	{De, Anindya and Long, Philip M. and Servedio, Rocco A.},
  title =	{{Density Estimation for Shift-Invariant Multidimensional Distributions}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.28},
  URN =		{urn:nbn:de:0030-drops-101214},
  doi =		{10.4230/LIPIcs.ITCS.2019.28},
  annote =	{Keywords: Density estimation, unsupervised learning, log-concave distributions, non-parametrics}
}
Document
From Local to Robust Testing via Agreement Testing

Authors: Irit Dinur, Prahladh Harsha, Tali Kaufman, and Noga Ron-Zewi


Abstract
A local tester for an error-correcting code is a probabilistic procedure that queries a small subset of coordinates, accepts codewords with probability one, and rejects non-codewords with probability proportional to their distance from the code. The local tester is robust if for non-codewords it satisfies the stronger property that the average distance of local views from accepting views is proportional to the distance from the code. Robust testing is an important component in constructions of locally testable codes and probabilistically checkable proofs as it allows for composition of local tests. In this work we show that for certain codes, any (natural) local tester can be converted to a robust tester with roughly the same number of queries. Our result holds for the class of affine-invariant lifted codes, which is a broad class of codes that includes Reed-Muller codes, as well as recent constructions of high-rate locally testable codes (Guo, Kopparty, and Sudan, ITCS 2013). Instantiating this with known local testing results for lifted codes gives a more direct proof that improves some of the parameters of the main result of Guo, Haramaty, and Sudan (FOCS 2015), showing robustness of lifted codes. To obtain the above transformation we relate the notions of local testing and robust testing to the notion of agreement testing that attempts to find out whether valid partial assignments can be stitched together into a global codeword. We first show that agreement testing implies robust testing, and then show that local testing implies agreement testing. Our proof is combinatorial, and is based on expansion/sampling properties of the collection of local views of local testers. Thus, it immediately applies to local testers of lifted codes that query random affine subspaces in F_q^m, and moreover seems amenable to extension to other families of locally testable codes with expanding families of local views.

Cite as

Irit Dinur, Prahladh Harsha, Tali Kaufman, and Noga Ron-Zewi. From Local to Robust Testing via Agreement Testing. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{dinur_et_al:LIPIcs.ITCS.2019.29,
  author =	{Dinur, Irit and Harsha, Prahladh and Kaufman, Tali and Ron-Zewi, Noga},
  title =	{{From Local to Robust Testing via Agreement Testing}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.29},
  URN =		{urn:nbn:de:0030-drops-101221},
  doi =		{10.4230/LIPIcs.ITCS.2019.29},
  annote =	{Keywords: Local testing, Robust testing, Agreement testing, Affine-invariant codes, Lifted codes}
}
Document
Every Set in P Is Strongly Testable Under a Suitable Encoding

Authors: Irit Dinur, Oded Goldreich, and Tom Gur


Abstract
We show that every set in P is strongly testable under a suitable encoding. By "strongly testable" we mean having a (proximity oblivious) tester that makes a constant number of queries and rejects with probability that is proportional to the distance of the tested object from the property. By a "suitable encoding" we mean one that is polynomial-time computable and invertible. This result stands in contrast to the known fact that some sets in P are extremely hard to test, providing another demonstration of the crucial role of representation in the context of property testing. The testing result is proved by showing that any set in P has a strong canonical PCP, where canonical means that (for yes-instances) there exists a single proof that is accepted with probability 1 by the system, whereas all other potential proofs are rejected with probability proportional to their distance from this proof. In fact, we show that UP equals the class of sets having strong canonical PCPs (of logarithmic randomness), whereas the class of sets having strong canonical PCPs with polynomial proof length equals "unambiguous-MA". Actually, for the testing result, we use a PCP-of-Proximity version of the foregoing notion and an analogous positive result (i.e., strong canonical PCPPs of logarithmic randomness for any set in UP).

Cite as

Irit Dinur, Oded Goldreich, and Tom Gur. Every Set in P Is Strongly Testable Under a Suitable Encoding. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{dinur_et_al:LIPIcs.ITCS.2019.30,
  author =	{Dinur, Irit and Goldreich, Oded and Gur, Tom},
  title =	{{Every Set in P Is Strongly Testable Under a Suitable Encoding}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.30},
  URN =		{urn:nbn:de:0030-drops-101234},
  doi =		{10.4230/LIPIcs.ITCS.2019.30},
  annote =	{Keywords: Probabilistically checkable proofs, property testing}
}
Document
Alea Iacta Est: Auctions, Persuasion, Interim Rules, and Dice

Authors: Shaddin Dughmi, David Kempe, and Ruixin Qiang


Abstract
To select a subset of samples or "winners" from a population of candidates, order sampling [Rosén, 1997] and the k-unit Myerson auction [Myerson, 1981] share a common scheme: assign a (random) score to each candidate, then select the k candidates with the highest scores. We study a generalization of both order sampling and Myerson's allocation rule, called winner-selecting dice. The setting for winner-selecting dice is similar to auctions with feasibility constraints: candidates have random types drawn from independent prior distributions, and the winner set must be feasible subject to certain constraints. Dice (distributions over scores) are assigned to each type, and winners are selected to maximize the sum of the dice rolls, subject to the feasibility constraints. We examine the existence of winner-selecting dice that implement prescribed probabilities of winning (i.e., an interim rule) for all types. Our first result shows that when the feasibility constraint is a matroid, for any feasible interim rule there always exist winner-selecting dice that implement it. Unfortunately, our proof does not yield an efficient algorithm for constructing the dice. In the special case of a 1-uniform matroid, i.e., when only one winner can be selected, we give an efficient algorithm that constructs winner-selecting dice for any feasible interim rule. Furthermore, when the types of the candidates are drawn in an i.i.d. manner and the interim rule is symmetric across candidates, unsurprisingly, an algorithm can efficiently construct symmetric dice that only depend on the type but not the identity of the candidate. One may ask whether we can extend our result to "second-order" interim rules, which not only specify the winning probability of a type, but also the winning probability conditioned on each other candidate's type. We show that our result does not extend, by exhibiting an instance of Bayesian persuasion whose optimal scheme is equivalent to a second-order interim rule, but which does not admit any dice-based implementation.
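
The shared selection scheme can be sketched directly; in the sketch below the feasibility constraint is the k-uniform matroid and the dice are arbitrary placeholders per type, whereas constructing dice that implement a prescribed interim rule is the subject of the paper.

import random

def select_winners(types, dice, k, rng=random.Random(0)):
    # Winner-selecting dice under a k-uniform feasibility constraint:
    # each candidate rolls the die assigned to its type; the k highest rolls win.
    rolls = [(dice[t](rng), i) for i, t in enumerate(types)]
    return sorted(i for _, i in sorted(rolls, reverse=True)[:k])

dice = {'H': lambda r: 2 * r.random(),    # placeholder die for type 'H': Uniform[0, 2]
        'L': lambda r: r.random()}        # placeholder die for type 'L': Uniform[0, 1]
print(select_winners(['H', 'L', 'L', 'H', 'L'], dice, k=2))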

Cite as

Shaddin Dughmi, David Kempe, and Ruixin Qiang. Alea Iacta Est: Auctions, Persuasion, Interim Rules, and Dice. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 31:1-31:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{dughmi_et_al:LIPIcs.ITCS.2019.31,
  author =	{Dughmi, Shaddin and Kempe, David and Qiang, Ruixin},
  title =	{{Alea Iacta Est: Auctions, Persuasion, Interim Rules, and Dice}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{31:1--31:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.31},
  URN =		{urn:nbn:de:0030-drops-101248},
  doi =		{10.4230/LIPIcs.ITCS.2019.31},
  annote =	{Keywords: Interim rule, order sampling, virtual value function, Border's theorem}
}
Document
Spanoids - An Abstraction of Spanning Structures, and a Barrier for LCCs

Authors: Zeev Dvir, Sivakanth Gopi, Yuzhou Gu, and Avi Wigderson


Abstract
We introduce a simple logical inference structure we call a spanoid (generalizing the notion of a matroid), which captures well-studied problems in several areas. These include combinatorial geometry (point-line incidences), algebra (arrangements of hypersurfaces and ideals), statistical physics (bootstrap percolation), network theory (gossip/infection processes) and coding theory. We initiate a thorough investigation of spanoids, from computational and structural viewpoints, focusing on parameters relevant to the application areas above and, in particular, to questions regarding Locally Correctable Codes (LCCs). One central parameter we study is the rank of a spanoid, extending the rank of a matroid and related to the dimension of codes. This leads to one main application of our work, establishing the first known barrier to improving the nearly 20-year-old bound of Katz-Trevisan (KT) on the dimension of LCCs. On the one hand, we prove that the KT bound (and its more recent refinements) holds for the much more general setting of spanoid rank. On the other hand, we show that there exist (random) spanoids whose rank matches these bounds. Thus, to significantly improve the known bounds one must step out of the spanoid framework. Another parameter we explore is the functional rank of a spanoid, which captures the possibility of turning a given spanoid into an actual code. The question of the relationship between rank and functional rank is one of the main questions we raise as it may reveal new avenues for constructing new LCCs (perhaps even matching the KT bound). As a first step, we develop an entropy relaxation of functional rank to create a small constant gap and amplify it by tensoring to construct a spanoid whose functional rank is smaller than its rank by a polynomial factor. This is evidence that the entropy method we develop can prove polynomially better bounds than KT-type methods on the dimension of LCCs. To facilitate the above results we also develop some basic structural results on spanoids, including an equivalent formulation of spanoids as set systems and properties of spanoid products. We feel that given these initial findings and their motivations, the abstract study of spanoids merits further investigation. We leave plenty of concrete open problems and directions.

Cite as

Zeev Dvir, Sivakanth Gopi, Yuzhou Gu, and Avi Wigderson. Spanoids - An Abstraction of Spanning Structures, and a Barrier for LCCs. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{dvir_et_al:LIPIcs.ITCS.2019.32,
  author =	{Dvir, Zeev and Gopi, Sivakanth and Gu, Yuzhou and Wigderson, Avi},
  title =	{{Spanoids - An Abstraction of Spanning Structures, and a Barrier for LCCs}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.32},
  URN =		{urn:nbn:de:0030-drops-101258},
  doi =		{10.4230/LIPIcs.ITCS.2019.32},
  annote =	{Keywords: Locally correctable codes, spanoids, entropy, bootstrap percolation, gossip spreading, matroid, union-closed family}
}
Document
Fairness Under Composition

Authors: Cynthia Dwork and Christina Ilvento


Abstract
Algorithmic fairness, and in particular the fairness of scoring and classification algorithms, has become a topic of increasing social concern and has recently witnessed an explosion of research in theoretical computer science, machine learning, statistics, the social sciences, and law. Much of the literature considers the case of a single classifier (or scoring function) used once, in isolation. In this work, we initiate the study of the fairness properties of systems composed of algorithms that are fair in isolation; that is, we study fairness under composition. We identify pitfalls of naïve composition and give general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems. We focus primarily on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], but also extend our results to a large class of group fairness definitions popular in the recent literature, exhibiting several cases in which group fairness definitions give misleading signals under composition.

Cite as

Cynthia Dwork and Christina Ilvento. Fairness Under Composition. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 33:1-33:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{dwork_et_al:LIPIcs.ITCS.2019.33,
  author =	{Dwork, Cynthia and Ilvento, Christina},
  title =	{{Fairness Under Composition}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{33:1--33:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.33},
  URN =		{urn:nbn:de:0030-drops-101269},
  doi =		{10.4230/LIPIcs.ITCS.2019.33},
  annote =	{Keywords: algorithmic fairness, fairness, fairness under composition}
}
Document
A Log-Sobolev Inequality for the Multislice, with Applications

Authors: Yuval Filmus, Ryan O'Donnell, and Xinyu Wu


Abstract
Let kappa in N_+^l satisfy kappa_1 + ... + kappa_l = n, and let U_kappa denote the multislice of all strings u in [l]^n having exactly kappa_i coordinates equal to i, for all i in [l]. Consider the Markov chain on U_kappa where a step is a random transposition of two coordinates of u. We show that the log-Sobolev constant rho_kappa for the chain satisfies rho_kappa^{-1} <= n * sum_{i=1}^l 1/2 log_2(4n/kappa_i), which is sharp up to constants whenever l is constant. From this, we derive some consequences for small-set expansion and isoperimetry in the multislice, including a KKL Theorem, a Kruskal-Katona Theorem for the multislice, a Friedgut Junta Theorem, and a Nisan-Szegedy Theorem.
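
In display form, the stated bound reads as follows, where rho_kappa is the log-Sobolev constant of the random-transposition chain on U_kappa, i.e., the optimal constant in rho * Ent[f^2] <= E(f, f), up to the normalization convention used in the paper:

\[
\rho_\kappa^{-1} \;\le\; n \sum_{i=1}^{\ell} \tfrac{1}{2}\,\log_2\!\Big(\frac{4n}{\kappa_i}\Big).
\]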

Cite as

Yuval Filmus, Ryan O'Donnell, and Xinyu Wu. A Log-Sobolev Inequality for the Multislice, with Applications. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 34:1-34:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{filmus_et_al:LIPIcs.ITCS.2019.34,
  author =	{Filmus, Yuval and O'Donnell, Ryan and Wu, Xinyu},
  title =	{{A Log-Sobolev Inequality for the Multislice, with Applications}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{34:1--34:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.34},
  URN =		{urn:nbn:de:0030-drops-101279},
  doi =		{10.4230/LIPIcs.ITCS.2019.34},
  annote =	{Keywords: log-Sobolev inequality, small-set expansion, conductance, hypercontractivity, Fourier analysis, representation theory, Markov chains, combinatorics}
}
Document
Cubic Formula Size Lower Bounds Based on Compositions with Majority

Authors: Anna Gál, Avishay Tal, and Adrian Trejo Nuñez


Abstract
We define new functions based on the Andreev function and prove that they require n^{3}/polylog(n) formula size to compute. The functions we consider are generalizations of the Andreev function using compositions with the majority function. Our arguments apply to composing a hard function with any function that agrees with the majority function (or its negation) on the middle slices of the Boolean cube, as well as iterated compositions of such functions. As a consequence, we obtain n^{3}/polylog(n) lower bounds on the (non-monotone) formula size of an explicit monotone function by combining the monotone address function with the majority function.

Cite as

Anna Gál, Avishay Tal, and Adrian Trejo Nuñez. Cubic Formula Size Lower Bounds Based on Compositions with Majority. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 35:1-35:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{gal_et_al:LIPIcs.ITCS.2019.35,
  author =	{G\'{a}l, Anna and Tal, Avishay and Trejo Nu\~{n}ez, Adrian},
  title =	{{Cubic Formula Size Lower Bounds Based on Compositions with Majority}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{35:1--35:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.35},
  URN =		{urn:nbn:de:0030-drops-101283},
  doi =		{10.4230/LIPIcs.ITCS.2019.35},
  annote =	{Keywords: formula lower bounds, random restrictions, KRW conjecture, composition}
}
Document
The Space Complexity of Mirror Games

Authors: Sumegha Garg and Jon Schneider


Abstract
We consider the following game between two players, Alice and Bob, which we call the mirror game. Alice and Bob take turns saying numbers belonging to the set {1, 2, ..., N}. A player loses if they repeat a number that has already been said. Otherwise, after N turns, when all the numbers have been spoken, both players win. When N is even, Bob, who goes second, has a very simple (and memoryless) strategy to avoid losing: whenever Alice says x, respond with N+1-x. The question is: does Alice have a similarly simple strategy to win that avoids remembering all the numbers said by Bob? The answer is no. We prove a linear lower bound on the space complexity of any deterministic winning strategy of Alice. Interestingly, this follows as a consequence of the Eventown-Oddtown theorem from extremal combinatorics. We additionally demonstrate a randomized strategy for Alice that wins with high probability and requires only O~(sqrt N) space (provided that Alice has access to a random matching on K_N). We also investigate lower bounds for a generalized mirror game where Alice and Bob alternate saying 1 number and b numbers each turn, respectively. When 1+b is a prime, our linear lower bounds continue to hold, but when 1+b is composite, we show that the existence of a o(N) space strategy for Bob (when N != 0 mod (1+b)) implies the existence of exponential-sized matching vector families over Z^N_{1+b}.
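
Bob's memoryless pairing strategy from the abstract, as a minimal sketch (even N only; Alice's O~(sqrt N)-space randomized strategy is the paper's contribution and is not reproduced here):

def bob_move(alice_move, N):
    # Bob's memoryless strategy for even N: pair x with N + 1 - x.
    return N + 1 - alice_move

N = 10
transcript = []
for alice_move in [3, 7, 1]:                 # any legal moves by Alice
    transcript += [alice_move, bob_move(alice_move, N)]
print(transcript)                            # Bob never repeats a number as long as Alice plays legally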

Cite as

Sumegha Garg and Jon Schneider. The Space Complexity of Mirror Games. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.ITCS.2019.36,
  author =	{Garg, Sumegha and Schneider, Jon},
  title =	{{The Space Complexity of Mirror Games}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.36},
  URN =		{urn:nbn:de:0030-drops-101295},
  doi =		{10.4230/LIPIcs.ITCS.2019.36},
  annote =	{Keywords: Mirror Games, Space Complexity, Eventown-Oddtown}
}
Document
The Subgraph Testing Model

Authors: Oded Goldreich and Dana Ron


Abstract
We initiate a study of testing properties of graphs that are presented as subgraphs of a fixed (or an explicitly given) graph. The tester is given free access to a base graph G=([n],E), and oracle access to a function f:E -> {0,1} that represents a subgraph of G. The tester is required to distinguish between subgraphs that possess a predetermined property and subgraphs that are far from possessing this property. We focus on bounded-degree base graphs and on the relation between testing graph properties in the subgraph model and testing the same properties in the bounded-degree graph model. We identify cases in which testing is significantly easier in one model than in the other, as well as cases in which testing has approximately the same complexity in both models. Our proofs are based on the design and analysis of efficient testers and on the establishment of query-complexity lower bounds.

Cite as

Oded Goldreich and Dana Ron. The Subgraph Testing Model. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 37:1-37:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{goldreich_et_al:LIPIcs.ITCS.2019.37,
  author =	{Goldreich, Oded and Ron, Dana},
  title =	{{The Subgraph Testing Model}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{37:1--37:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.37},
  URN =		{urn:nbn:de:0030-drops-101308},
  doi =		{10.4230/LIPIcs.ITCS.2019.37},
  annote =	{Keywords: Property Testing, Graph Properties}
}
Document
Adventures in Monotone Complexity and TFNP

Authors: Mika Göös, Pritish Kamath, Robert Robere, and Dmitry Sokolov


Abstract
Separations: We introduce a monotone variant of Xor-Sat and show it has exponential monotone circuit complexity. Since Xor-Sat is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988). We also show that monotone span programs over R can be exponentially more powerful than over finite fields. These results can be interpreted as separating subclasses of TFNP in communication complexity. Characterizations: We show that the communication (resp. query) analogue of PPA (a subclass of TFNP) captures span programs over F_2 (resp. Nullstellensatz degree over F_2). Previously, it was known that communication FP captures formulas (Karchmer-Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995).

Cite as

Mika Göös, Pritish Kamath, Robert Robere, and Dmitry Sokolov. Adventures in Monotone Complexity and TFNP. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 38:1-38:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{goos_et_al:LIPIcs.ITCS.2019.38,
  author =	{G\"{o}\"{o}s, Mika and Kamath, Pritish and Robere, Robert and Sokolov, Dmitry},
  title =	{{Adventures in Monotone Complexity and TFNP}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{38:1--38:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.38},
  URN =		{urn:nbn:de:0030-drops-101316},
  doi =		{10.4230/LIPIcs.ITCS.2019.38},
  annote =	{Keywords: TFNP, Monotone Complexity, Communication Complexity, Proof Complexity}
}
Document
Algorithmic Polarization for Hidden Markov Models

Authors: Venkatesan Guruswami, Preetum Nakkiran, and Madhu Sudan


Abstract
Using a mild variant of polar codes, we design linear schemes for compressing Hidden Markov sources (where the source is a Markov chain whose state is not necessarily observable from its output) and for decoding from Hidden Markov channels (where the channel has a state and the error introduced depends on the state). We give the first polynomial time algorithms that manage to compress and decompress (or encode and decode) at input lengths that are polynomial both in the gap to capacity and the mixing time of the Markov chain. Prior work achieved capacity only asymptotically in the limit of large lengths, and polynomial bounds were not available with respect to either the gap to capacity or the mixing time. Our results operate in the setting where the source (or the channel) is known. If the source is unknown, then compression at such short lengths would lead to effective algorithms for learning parity with noise - thus our results are the first to suggest a separation between the complexity of the problem when the source is known versus when it is unknown.

Cite as

Venkatesan Guruswami, Preetum Nakkiran, and Madhu Sudan. Algorithmic Polarization for Hidden Markov Models. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 39:1-39:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.ITCS.2019.39,
  author =	{Guruswami, Venkatesan and Nakkiran, Preetum and Sudan, Madhu},
  title =	{{Algorithmic Polarization for Hidden Markov Models}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{39:1--39:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.39},
  URN =		{urn:nbn:de:0030-drops-101326},
  doi =		{10.4230/LIPIcs.ITCS.2019.39},
  annote =	{Keywords: polar codes, error-correcting codes, compression, hidden markov model}
}
Document
On the Communication Complexity of Key-Agreement Protocols

Authors: Iftach Haitner, Noam Mazor, Rotem Oshman, Omer Reingold, and Amir Yehudayoff


Abstract
Key-agreement protocols whose security is proven in the random oracle model are an important alternative to protocols based on public-key cryptography. In the random oracle model, the parties and the eavesdropper have access to a shared random function (an "oracle"), but the parties are limited in the number of queries they can make to the oracle. The random oracle serves as an abstraction for black-box access to a symmetric cryptographic primitive, such as a collision-resistant hash. Unfortunately, as shown by Impagliazzo and Rudich [STOC '89] and Barak and Mahmoody [Crypto '09], such protocols can only guarantee limited secrecy: the key of any l-query protocol can be revealed by an O(l^2)-query adversary. This quadratic gap between the query complexity of the honest parties and the eavesdropper matches the gap obtained by Merkle's Puzzles protocol [CACM '78]. In this work we tackle a new aspect of key-agreement protocols in the random oracle model: their communication complexity. In Merkle's Puzzles, to obtain secrecy against an eavesdropper that makes roughly l^2 queries, the honest parties need to exchange Omega(l) bits. We show that for protocols with certain natural properties, properties that Merkle's Puzzles has, such high communication is unavoidable. Specifically, this is the case if the honest parties' queries are uniformly random, or alternatively if the protocol uses non-adaptive queries and has only two rounds. Our proof for the first setting uses a novel reduction from the set-disjointness problem in two-party communication complexity. For the second setting we prove the lower bound directly, using information-theoretic arguments. Understanding the communication complexity of protocols whose security is proven (in the random-oracle model) is an important question in the study of practical protocols. Our results and proof techniques are a first step in this direction.
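
For reference, a minimal random-oracle-style sketch of Merkle's Puzzles with query budget l per honest party; the domain size, the SHA-256 stand-in for the oracle, and the retry bound are illustrative, and the point relevant to the paper is that Alice's message alone already costs Omega(l) bits of communication.

import hashlib, random

def H(x):
    # Stand-in for the shared random oracle.
    return hashlib.sha256(str(x).encode()).hexdigest()

def merkle_puzzles(l, rng=random.Random(0)):
    domain = l * l                                # an eavesdropper needs roughly l^2 oracle queries
    alice_points = rng.sample(range(domain), l)   # Alice's l oracle queries
    message = {H(x): x for x in alice_points}     # she sends the l hash values: Omega(l) bits
    for _ in range(20 * l):                       # Bob queries random points until he hits one
        y = rng.randrange(domain)
        if H(y) in message:
            return message[H(y)], y               # both sides now share the secret point
    return None

print(merkle_puzzles(l=32))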

Cite as

Iftach Haitner, Noam Mazor, Rotem Oshman, Omer Reingold, and Amir Yehudayoff. On the Communication Complexity of Key-Agreement Protocols. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 40:1-40:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{haitner_et_al:LIPIcs.ITCS.2019.40,
  author =	{Haitner, Iftach and Mazor, Noam and Oshman, Rotem and Reingold, Omer and Yehudayoff, Amir},
  title =	{{On the Communication Complexity of Key-Agreement Protocols}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{40:1--40:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.40},
  URN =		{urn:nbn:de:0030-drops-101335},
  doi =		{10.4230/LIPIcs.ITCS.2019.40},
  annote =	{Keywords: key agreement, random oracle, communication complexity, Merkle's puzzles}
}
Document
The Paulsen Problem Made Simple

Authors: Linus Hamilton and Ankur Moitra


Abstract
The Paulsen problem is a basic problem in operator theory that was resolved in a recent tour-de-force work of Kwok, Lau, Lee and Ramachandran. In particular, they showed that every epsilon-nearly equal norm Parseval frame in d dimensions is within squared distance O(epsilon d^{13/2}) of an equal norm Parseval frame. We give a dramatically simpler proof based on the notion of radial isotropic position, and along the way show an improved bound of O(epsilon d^2).
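
For readers unfamiliar with the terminology, one standard way to formalize the objects in this abstract (stated here as background, not taken verbatim from the paper) is the following.

Vectors $u_1,\dots,u_n \in \mathbb{R}^d$ form a Parseval frame if $\sum_{i=1}^n u_i u_i^{\top} = I_d$, and an equal norm Parseval frame if in addition $\|u_i\|^2 = d/n$ for all $i$. An $\varepsilon$-nearly equal norm Parseval frame satisfies
\[
  (1-\varepsilon) I_d \preceq \sum_{i=1}^n u_i u_i^{\top} \preceq (1+\varepsilon) I_d,
  \qquad
  (1-\varepsilon)\frac{d}{n} \le \|u_i\|^2 \le (1+\varepsilon)\frac{d}{n},
\]
and the squared distance between frames $U=(u_i)$ and $V=(v_i)$ is $\sum_{i=1}^n \|u_i - v_i\|^2$.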

Cite as

Linus Hamilton and Ankur Moitra. The Paulsen Problem Made Simple. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 41:1-41:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{hamilton_et_al:LIPIcs.ITCS.2019.41,
  author =	{Hamilton, Linus and Moitra, Ankur},
  title =	{{The Paulsen Problem Made Simple}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{41:1--41:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.41},
  URN =		{urn:nbn:de:0030-drops-101347},
  doi =		{10.4230/LIPIcs.ITCS.2019.41},
  annote =	{Keywords: radial isotropic position, operator scaling, Paulsen problem}
}
Document
How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts

Authors: Thibaut Horel, Sunoo Park, Silas Richelson, and Vinod Vaikuntanathan


Abstract
In this work, we examine the feasibility of secure and undetectable point-to-point communication when an adversary (e.g., a government) can read all encrypted communications of surveillance targets. We consider a model where the only permitted method of communication is via a government-mandated encryption scheme, instantiated with government-mandated keys. Parties cannot simply encrypt ciphertexts of some other encryption scheme, because citizens caught trying to communicate outside the government's knowledge (e.g., by encrypting strings which do not appear to be natural language plaintexts) will be arrested. The one guarantee we suppose is that the government mandates an encryption scheme which is semantically secure against outsiders: a perhaps reasonable supposition when a government might consider it advantageous to secure its people's communication against foreign entities. But then, what good is semantic security against an adversary that holds all the keys and has the power to decrypt? We show that even in the pessimistic scenario described, citizens can communicate securely and undetectably. In our terminology, this translates to a positive statement: all semantically secure encryption schemes support subliminal communication. Informally, this means that there is a two-party protocol between Alice and Bob where the parties exchange ciphertexts of what appears to be a normal conversation even to someone who knows the secret keys and thus can read the corresponding plaintexts. And yet, at the end of the protocol, Alice will have transmitted her secret message to Bob. Our security definition requires that the adversary not be able to tell whether Alice and Bob are just having a normal conversation using the mandated encryption scheme, or whether they are using the mandated encryption scheme for subliminal communication. Our topics may be thought to fall broadly within the realm of steganography. However, we deal with the non-standard setting of an adversarially chosen distribution of cover objects (i.e., a stronger-than-usual adversary), and we take advantage of the fact that our cover objects are ciphertexts of a semantically secure encryption scheme to bypass impossibility results which we show for broader classes of steganographic schemes. We give several constructions of subliminal communication schemes under the assumption that key exchange protocols with pseudorandom messages exist (such as Diffie-Hellman, which in fact has truly random messages).
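
As background only, the toy sketch below illustrates the folklore rejection-sampling idea for hiding a bit inside objects that look random under a key the adversary does not hold; it is not the construction from the paper, and the encrypt stand-in, the keyed hash, and all parameter names are placeholders chosen for illustration.

import hashlib, os

def extract_bit(subliminal_key, ciphertext):
    # 1-bit keyed "mark" that only holders of the subliminal key can compute
    return hashlib.sha256(subliminal_key + ciphertext).digest()[0] & 1

def send_with_hidden_bit(encrypt, plaintext, subliminal_key, hidden_bit):
    # Re-encrypt the same innocuous plaintext with fresh randomness until the
    # keyed mark of the ciphertext equals the hidden bit (2 tries in expectation).
    while True:
        ciphertext = encrypt(plaintext, os.urandom(16))
        if extract_bit(subliminal_key, ciphertext) == hidden_bit:
            return ciphertext

# Toy stand-in for a randomized encryption scheme: ciphertext = randomness || plaintext.
# (Any randomized scheme would do for the illustration; this one is NOT secure.)
toy_encrypt = lambda pt, rand: rand + pt

k = os.urandom(16)
ct = send_with_hidden_bit(toy_encrypt, b"see you at noon", k, hidden_bit=1)
assert extract_bit(k, ct) == 1   # Bob recovers the bit from an ordinary-looking ciphertext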

Cite as

Thibaut Horel, Sunoo Park, Silas Richelson, and Vinod Vaikuntanathan. How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 42:1-42:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{horel_et_al:LIPIcs.ITCS.2019.42,
  author =	{Horel, Thibaut and Park, Sunoo and Richelson, Silas and Vaikuntanathan, Vinod},
  title =	{{How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{42:1--42:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.42},
  URN =		{urn:nbn:de:0030-drops-101355},
  doi =		{10.4230/LIPIcs.ITCS.2019.42},
  annote =	{Keywords: Backdoored Encryption, Steganography}
}
Document
On Integer Programming and Convolution

Authors: Klaus Jansen and Lars Rohwedder


Abstract
Integer programs with a constant number of constraints are solvable in pseudo-polynomial time. We give a new algorithm with a better pseudo-polynomial running time than previous results. Moreover, we establish a strong connection to the problem (min, +)-convolution. (min, +)-convolution has a trivial quadratic time algorithm and it has been conjectured that this cannot be improved significantly. We show that further improvements to our pseudo-polynomial algorithm for any fixed number of constraints are equivalent to improvements for (min, +)-convolution. This is strong evidence that our algorithm's running time is the best possible. We also present a faster specialized algorithm for testing feasibility of an integer program with few constraints, and for this we also give a tight lower bound, which is based on the SETH.
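
For reference, the trivial quadratic-time algorithm for (min, +)-convolution mentioned above is just a double loop; a minimal sketch:

def min_plus_convolution(a, b):
    # c[k] = min over i + j = k of a[i] + b[j]; runs in O(len(a) * len(b)) time.
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))   # [0, 1, 3, 6, 9]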

Cite as

Klaus Jansen and Lars Rohwedder. On Integer Programming and Convolution. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 43:1-43:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.ITCS.2019.43,
  author =	{Jansen, Klaus and Rohwedder, Lars},
  title =	{{On Integer Programming and Convolution}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{43:1--43:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.43},
  URN =		{urn:nbn:de:0030-drops-101365},
  doi =		{10.4230/LIPIcs.ITCS.2019.43},
  annote =	{Keywords: Integer programming, convolution, dynamic programming, SETH}
}
Document
Empowering the Configuration-IP - New PTAS Results for Scheduling with Setups Times

Authors: Klaus Jansen, Kim-Manuel Klein, Marten Maack, and Malin Rau


Abstract
Integer linear programs of configurations, or configuration IPs, are a classical tool in the design of algorithms for scheduling and packing problems, where a set of items has to be placed in multiple target locations. Herein, a configuration describes a possible placement on one of the target locations, and the IP is used to choose suitable configurations covering the items. We give an augmented IP formulation, which we call the module configuration IP. It can be described within the framework of n-fold integer programming and therefore be solved efficiently. As an application, we consider scheduling problems with setup times, in which a set of jobs has to be scheduled on a set of identical machines, with the objective of minimizing the makespan. For instance, we investigate the case that jobs can be split and scheduled on multiple machines. However, before a part of a job can be processed, an uninterrupted setup depending on the job has to be paid. For both variants (the parts of a job may or may not be executed in parallel), we obtain an efficient polynomial time approximation scheme (EPTAS) of running time f(1/epsilon) x poly(|I|), with a single exponential term in f for the first case and a double exponential one for the second. Previously, only constant factor approximations of 5/3 and 4/3 + epsilon respectively were known. Furthermore, we present an EPTAS for a problem where classes of (non-splittable) jobs are given, and a setup has to be paid for each class of jobs being executed on one machine.
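
For orientation, the classical configuration IP for makespan scheduling, which is the starting point that the module configuration IP augments (the formulation below is the textbook version, not the new one from the paper), can be written as follows. With m identical machines, distinct job sizes p_1, ..., p_d occurring n_1, ..., n_d times, a makespan guess T, and configurations C (multisets of sizes with total load at most T) described by multiplicity vectors (C_1, ..., C_d):
\[
  \sum_{C} x_C \le m, \qquad
  \sum_{C} C_j \, x_C \ge n_j \quad \text{for all } j \in \{1,\dots,d\}, \qquad
  x_C \in \mathbb{Z}_{\ge 0},
\]
where $x_C$ counts how many machines receive configuration $C$; feasibility certifies a schedule of makespan at most $T$, and $T$ is found by binary search.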

Cite as

Klaus Jansen, Kim-Manuel Klein, Marten Maack, and Malin Rau. Empowering the Configuration-IP - New PTAS Results for Scheduling with Setups Times. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 44:1-44:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{jansen_et_al:LIPIcs.ITCS.2019.44,
  author =	{Jansen, Klaus and Klein, Kim-Manuel and Maack, Marten and Rau, Malin},
  title =	{{Empowering the Configuration-IP - New PTAS Results for Scheduling with Setups Times}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{44:1--44:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.44},
  URN =		{urn:nbn:de:0030-drops-101375},
  doi =		{10.4230/LIPIcs.ITCS.2019.44},
  annote =	{Keywords: Parallel Machines, Setup Time, EPTAS, n-fold integer programming}
}
Document
Being Corrupt Requires Being Clever, But Detecting Corruption Doesn't

Authors: Yan Jin, Elchanan Mossel, and Govind Ramnarayan


Abstract
We consider a variation of the problem of corruption detection on networks posed by Alon, Mossel, and Pemantle '15. In this model, each vertex of a graph can be either truthful or corrupt. Each vertex reports about the types (truthful or corrupt) of all its neighbors to a central agency, where truthful nodes report the true types they see and corrupt nodes report adversarially. The central agency aggregates these reports and attempts to find a single truthful node. Inspired by real auditing networks, we pose our problem for arbitrary graphs and consider corruption through a computational lens. We identify a key combinatorial parameter of the graph m(G), which is the minimal number of corrupted agents needed to prevent the central agency from identifying a single corrupt node. We give an efficient (in fact, linear time) algorithm for the central agency to identify a truthful node that is successful whenever the number of corrupt nodes is less than m(G)/2. On the other hand, we prove that for any constant alpha > 1, it is NP-hard to find a subset of nodes S in G such that corrupting S prevents the central agency from finding one truthful node and |S| <= alpha m(G), assuming the Small Set Expansion Hypothesis (Raghavendra and Steurer, STOC '10). We conclude that being corrupt requires being clever, while detecting corruption does not. Our main technical insight is a relation between the minimum number of corrupt nodes required to hide all truthful nodes and a certain notion of vertex separability for the underlying graph. Additionally, this insight lets us design an efficient algorithm for a corrupt party to decide which graphs require the fewest corrupted nodes, up to a multiplicative factor of O(log n).

Cite as

Yan Jin, Elchanan Mossel, and Govind Ramnarayan. Being Corrupt Requires Being Clever, But Detecting Corruption Doesn't. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 45:1-45:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{jin_et_al:LIPIcs.ITCS.2019.45,
  author =	{Jin, Yan and Mossel, Elchanan and Ramnarayan, Govind},
  title =	{{Being Corrupt Requires Being Clever, But Detecting Corruption Doesn't}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.45},
  URN =		{urn:nbn:de:0030-drops-101388},
  doi =		{10.4230/LIPIcs.ITCS.2019.45},
  annote =	{Keywords: Corruption detection, PMC Model, Small Set Expansion, Hardness of Approximation}
}
Document
Simulating Random Walks on Graphs in the Streaming Model

Authors: Ce Jin


Abstract
We study the problem of approximately simulating a t-step random walk on a graph where the input edges come from a single-pass stream. The straightforward algorithm using reservoir sampling needs O(nt) words of memory. We show that this space complexity is near-optimal for directed graphs. For undirected graphs, we prove an Omega(n sqrt{t})-bit space lower bound, and give a near-optimal algorithm using O(n sqrt{t}) words of space with 2^{-Omega(sqrt{t})} simulation error (defined as the l_1-distance between the output distribution of the simulation algorithm and the distribution of perfect random walks). We also discuss extending the algorithms to the turnstile model, where both insertion and deletion of edges can appear in the input stream.
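
A minimal sketch of the straightforward O(nt)-word baseline mentioned above (function and variable names are illustrative): for each vertex, reservoir-sample one independent uniform neighbor per step of the walk during the single pass, then stitch the walk together after the stream ends.

import random

def simulate_walk(edge_stream, n, t, start, seed=0):
    # Straightforward O(n*t)-word baseline: for every vertex v and every step i,
    # keep one uniform random neighbor of v, sampled independently across steps.
    rng = random.Random(seed)
    reservoir = [[None] * t for _ in range(n)]   # reservoir[v][i]
    deg = [0] * n
    for (u, v) in edge_stream:                   # single pass over the undirected edges
        for a, b in ((u, v), (v, u)):
            deg[a] += 1
            for i in range(t):
                if rng.randrange(deg[a]) == 0:   # classic reservoir-sampling update
                    reservoir[a][i] = b
    walk = [start]
    for i in range(t):                           # stitch the walk together offline
        nxt = reservoir[walk[-1]][i]
        if nxt is None:                          # current vertex is isolated
            break
        walk.append(nxt)
    return walk

print(simulate_walk([(0, 1), (1, 2), (2, 0), (2, 3)], n=4, t=5, start=0))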

Cite as

Ce Jin. Simulating Random Walks on Graphs in the Streaming Model. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 46:1-46:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{jin:LIPIcs.ITCS.2019.46,
  author =	{Jin, Ce},
  title =	{{Simulating Random Walks on Graphs in the Streaming Model}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{46:1--46:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.46},
  URN =		{urn:nbn:de:0030-drops-101399},
  doi =		{10.4230/LIPIcs.ITCS.2019.46},
  annote =	{Keywords: streaming models, random walks, sampling}
}
Document
On the Complexity of Symmetric Polynomials

Authors: Markus Bläser and Gorav Jindal


Abstract
The fundamental theorem of symmetric polynomials states that for a symmetric polynomial f_{Sym} in C[x_1,x_2,...,x_n], there exists a unique "witness" f in C[y_1,y_2,...,y_n] such that f_{Sym}=f(e_1,e_2,...,e_n), where the e_i's are the elementary symmetric polynomials. In this paper, we study the arithmetic complexity L(f) of the witness f as a function of the arithmetic complexity L(f_{Sym}) of f_{Sym}. We show that the arithmetic complexity L(f) of f is bounded by poly(L(f_{Sym}),deg(f),n). To the best of our knowledge, prior to this work only exponential upper bounds were known for L(f). The main ingredient in our result is an algebraic analogue of Newton's iteration on power series. As a corollary of this result, we show that if VP != VNP then there exist symmetric polynomial families which have super-polynomial arithmetic complexity. Furthermore, we study the complexity of testing whether a function is symmetric. For polynomials, this question is equivalent to arithmetic circuit identity testing. In contrast to this, we show that it is hard for Boolean functions.
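
As a tiny worked instance of the fundamental theorem quoted above: for f_Sym = x_1^2 + x_2^2 + x_3^2 the witness is f(y_1, y_2, y_3) = y_1^2 - 2 y_2, since e_1^2 - 2 e_2 = x_1^2 + x_2^2 + x_3^2. A quick numeric check:

from random import random

x1, x2, x3 = random(), random(), random()
e1 = x1 + x2 + x3                      # elementary symmetric polynomials
e2 = x1*x2 + x1*x3 + x2*x3
f_sym = x1**2 + x2**2 + x3**2          # a symmetric polynomial
witness = e1**2 - 2*e2                 # its witness, evaluated at (e1, e2, e3)
assert abs(f_sym - witness) < 1e-9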

Cite as

Markus Bläser and Gorav Jindal. On the Complexity of Symmetric Polynomials. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 47:1-47:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{blaser_et_al:LIPIcs.ITCS.2019.47,
  author =	{Bl\"{a}ser, Markus and Jindal, Gorav},
  title =	{{On the Complexity of Symmetric Polynomials}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.47},
  URN =		{urn:nbn:de:0030-drops-101402},
  doi =		{10.4230/LIPIcs.ITCS.2019.47},
  annote =	{Keywords: Symmetric Polynomials, Arithmetic Circuits, Arithmetic Complexity, Power Series, Elementary Symmetric Polynomials, Newton's Iteration}
}
Document
The Orthogonal Vectors Conjecture for Branching Programs and Formulas

Authors: Daniel M. Kane and Richard Ryan Williams


Abstract
In the Orthogonal Vectors (OV) problem, we wish to determine if there is an orthogonal pair of vectors among n Boolean vectors in d dimensions. The OV Conjecture (OVC) posits that OV requires n^{2-o(1)} time to solve, for all d=omega(log n). Assuming the OVC, optimal time lower bounds have been proved for many prominent problems in P, such as Edit Distance, Frechet Distance, Longest Common Subsequence, and approximating the diameter of a graph. We prove that OVC is true in several computational models of interest: - For all sufficiently large n and d, OV for n vectors in {0,1}^d has branching program complexity Theta~(n * min(n,2^d)). In particular, the lower and upper bounds match up to polylog factors. - OV has Boolean formula complexity Theta~(n * min(n,2^d)), over all complete bases of O(1) fan-in. - OV requires Theta~(n * min(n,2^d)) wires, in formulas comprised of gates computing arbitrary symmetric functions of unbounded fan-in. Our lower bounds basically match the best known (quadratic) lower bounds for any explicit function in those models. Analogous lower bounds hold for many related problems shown to be hard under OVC, such as Batch Partial Match, Batch Subset Queries, and Batch Hamming Nearest Neighbors, all of which have very succinct reductions to OV. The proofs use a certain kind of input restriction that is different from typical random restrictions where variables are assigned independently. We give a sense in which independent random restrictions cannot be used to show hardness, in that OVC is false in the "average case" even for AC^0 formulas: For all p in (0,1) there is a delta_p > 0 such that for every n and d, OV instances with input bits independently set to 1 with probability p (and 0 otherwise) can be solved with AC^0 formulas of O(n^{2-delta_p}) size, on all but a o_n(1) fraction of instances. Moreover, lim_{p -> 1} delta_p = 1.
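
For reference, the problem whose n^{2-o(1)} hardness the OVC posits has an obvious quadratic-time algorithm; a minimal sketch:

def has_orthogonal_pair(vectors):
    # vectors: list of n Boolean vectors (tuples of 0/1) of dimension d.
    # Returns True iff some pair has empty coordinate-wise AND; O(n^2 * d) time.
    n = len(vectors)
    for i in range(n):
        for j in range(i + 1, n):
            if all(a & b == 0 for a, b in zip(vectors[i], vectors[j])):
                return True
    return False

print(has_orthogonal_pair([(1, 0, 1), (0, 1, 0), (1, 1, 1)]))   # True: the first two are orthogonal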

Cite as

Daniel M. Kane and Richard Ryan Williams. The Orthogonal Vectors Conjecture for Branching Programs and Formulas. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 48:1-48:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{kane_et_al:LIPIcs.ITCS.2019.48,
  author =	{Kane, Daniel M. and Williams, Richard Ryan},
  title =	{{The Orthogonal Vectors Conjecture for Branching Programs and Formulas}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{48:1--48:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.48},
  URN =		{urn:nbn:de:0030-drops-101418},
  doi =		{10.4230/LIPIcs.ITCS.2019.48},
  annote =	{Keywords: fine-grained complexity, orthogonal vectors, branching programs, symmetric functions, Boolean formulas}
}
Document
SOS Lower Bounds with Hard Constraints: Think Global, Act Local

Authors: Pravesh K. Kothari, Ryan O'Donnell, and Tselil Schramm


Abstract
Many previous Sum-of-Squares (SOS) lower bounds for CSPs had two deficiencies related to global constraints. First, they were not able to support a "cardinality constraint", as in, say, the Min-Bisection problem. Second, while the pseudoexpectation of the objective function was shown to have some value beta, it did not necessarily actually "satisfy" the constraint "objective = beta". In this paper we show how to remedy both deficiencies in the case of random CSPs, by translating global constraints into local constraints. Using these ideas, we also show that degree-Omega(sqrt{n}) SOS does not provide a (4/3 - epsilon)-approximation for Min-Bisection, and degree-Omega(n) SOS does not provide a (11/12 + epsilon)-approximation for Max-Bisection or a (5/4 - epsilon)-approximation for Min-Bisection. No prior SOS lower bounds for these problems were known.

Cite as

Pravesh K. Kothari, Ryan O'Donnell, and Tselil Schramm. SOS Lower Bounds with Hard Constraints: Think Global, Act Local. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 49:1-49:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{kothari_et_al:LIPIcs.ITCS.2019.49,
  author =	{Kothari, Pravesh K. and O'Donnell, Ryan and Schramm, Tselil},
  title =	{{SOS Lower Bounds with Hard Constraints: Think Global, Act Local}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{49:1--49:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.49},
  URN =		{urn:nbn:de:0030-drops-101420},
  doi =		{10.4230/LIPIcs.ITCS.2019.49},
  annote =	{Keywords: sum-of-squares hierarchy, random constraint satisfaction problems}
}
Document
Semi-Online Bipartite Matching

Authors: Ravi Kumar, Manish Purohit, Aaron Schild, Zoya Svitkina, and Erik Vee


Abstract
In this paper we introduce the semi-online model that generalizes the classical online computational model. The semi-online model postulates that the unknown future has a predictable part and an adversarial part; these parts can be arbitrarily interleaved. An algorithm in this model operates as in the standard online model, i.e., makes an irrevocable decision at each step. We consider bipartite matching in the semi-online model. Our main contributions are competitive algorithms for this problem and a near-matching hardness bound. The competitive ratio of the algorithms nicely interpolates between the truly offline setting (i.e., no adversarial part) and the truly online setting (i.e., no predictable part).

Cite as

Ravi Kumar, Manish Purohit, Aaron Schild, Zoya Svitkina, and Erik Vee. Semi-Online Bipartite Matching. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.ITCS.2019.50,
  author =	{Kumar, Ravi and Purohit, Manish and Schild, Aaron and Svitkina, Zoya and Vee, Erik},
  title =	{{Semi-Online Bipartite Matching}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.50},
  URN =		{urn:nbn:de:0030-drops-101436},
  doi =		{10.4230/LIPIcs.ITCS.2019.50},
  annote =	{Keywords: Semi-Online Algorithms, Bipartite Matching}
}
Document
Strategies for Quantum Races

Authors: Troy Lee, Maharshi Ray, and Miklos Santha


Abstract
We initiate the study of quantum races, games where two or more quantum computers compete to solve a computational problem. While the problem of dueling algorithms has been studied for classical deterministic algorithms [Immorlica et al., 2011], the quantum case presents additional sources of uncertainty for the players. The foremost among these is that players do not know if they have solved the problem until they measure their quantum state. This question of "when to measure?" presents a very interesting strategic problem. We develop a game-theoretic model of a multiplayer quantum race, and find an approximate Nash equilibrium where all players play the same strategy. In the two-party case, we further show that this strategy is nearly optimal in terms of payoff among all symmetric Nash equilibria. A key role in our analysis of quantum races is played by a more tractable version of the game where there is no payout on a tie; for such races we completely characterize the Nash equilibria in the two-party case. One application of our results is to the stability of the Bitcoin protocol when mining is done by quantum computers. Bitcoin mining is a race to solve a computational search problem, with the winner gaining the right to create a new block. Our results inform the strategies that eventual quantum miners should use, and also indicate that the collision probability - the probability that two miners find a new block at the same time - would not be too high in the case of quantum miners. Such collisions are undesirable as they lead to forking of the Bitcoin blockchain.

Cite as

Troy Lee, Maharshi Ray, and Miklos Santha. Strategies for Quantum Races. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 51:1-51:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{lee_et_al:LIPIcs.ITCS.2019.51,
  author =	{Lee, Troy and Ray, Maharshi and Santha, Miklos},
  title =	{{Strategies for Quantum Races}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{51:1--51:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.51},
  URN =		{urn:nbn:de:0030-drops-101446},
  doi =		{10.4230/LIPIcs.ITCS.2019.51},
  annote =	{Keywords: Game theory, Bitcoin mining, Quantum computing, Convex optimization}
}
Document
Lower Bounds for Tolerant Junta and Unateness Testing via Rejection Sampling of Graphs

Authors: Amit Levi and Erik Waingarten


Abstract
We introduce a new model for testing graph properties which we call the rejection sampling model. We show that testing bipartiteness of n-nodes graphs using rejection sampling queries requires complexity Omega~(n^2). Via reductions from the rejection sampling model, we give three new lower bounds for tolerant testing of Boolean functions of the form f : {0,1}^n -> {0,1}: - Tolerant k-junta testing with non-adaptive queries requires Omega~(k^2) queries. - Tolerant unateness testing requires Omega~(n) queries. - Tolerant unateness testing with non-adaptive queries requires Omega~(n^{3/2}) queries. Given the O~(k^{3/2})-query non-adaptive junta tester of Blais [Eric Blais, 2008], we conclude that non-adaptive tolerant junta testing requires more queries than non-tolerant junta testing. In addition, given the O~(n^{3/4})-query unateness tester of Chen, Waingarten, and Xie [Xi Chen et al., 2017] and the O~(n)-query non-adaptive unateness tester of Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri [Roksana Baleshzar et al., 2017], we conclude that tolerant unateness testing requires more queries than non-tolerant unateness testing, in both adaptive and non-adaptive settings. These lower bounds provide the first separation between tolerant and non-tolerant testing for a natural property of Boolean functions.

Cite as

Amit Levi and Erik Waingarten. Lower Bounds for Tolerant Junta and Unateness Testing via Rejection Sampling of Graphs. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{levi_et_al:LIPIcs.ITCS.2019.52,
  author =	{Levi, Amit and Waingarten, Erik},
  title =	{{Lower Bounds for Tolerant Junta and Unateness Testing via Rejection Sampling of Graphs}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.52},
  URN =		{urn:nbn:de:0030-drops-101452},
  doi =		{10.4230/LIPIcs.ITCS.2019.52},
  annote =	{Keywords: Property Testing, Juntas, Tolerant Testing, Boolean functions}
}
Document
Secret Sharing with Binary Shares

Authors: Fuchun Lin, Mahdi Cheraghchi, Venkatesan Guruswami, Reihaneh Safavi-Naini, and Huaxiong Wang


Abstract
Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length l among any N <= 2^l players such that for a threshold parameter t, (i) the knowledge of any t shares does not reveal any information about the secret, and (ii) any choice of t+1 shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length l, and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires the reconstruction of the secret from any t+g shares, for a positive integer gap parameter g. A ramp secret sharing scheme necessarily requires shares of length l/g. Other than the bound related to secret length l, the share lengths of ramp schemes cannot go below a quantity that depends only on the gap ratio g/N. In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio g/N, where standard ramp secret sharing becomes impossible. We show, however, that a slightly relaxed but equally effective notion of semantic security for the secret, and negligible reconstruction error probability, eliminate the impossibility. Moreover, we provide explicit constructions of such schemes. One of the consequences of our relaxation is that, unlike standard ramp schemes with perfect secrecy, adaptive and non-adaptive adversaries need different analysis and construction. For non-adaptive adversaries, we explicitly construct secret sharing schemes that provide secrecy against any tau fraction of observed shares, and reconstruction from any rho fraction of shares, for any choices of 0 <= tau < rho <= 1. Our construction achieves secret length N(rho-tau-o(1)), which we show to be optimal. For adaptive adversaries, we construct explicit schemes attaining a secret length Omega(N(rho-tau)). We discuss our results and open questions.
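
For contrast with the bit-long-share regime studied here, a minimal sketch of Shamir's threshold scheme over a prime field (shares are full field elements, i.e. as long as the secret, and the gap is g = 1); the prime and parameters below are toy choices for illustration.

import random

P = 2**61 - 1   # a Mersenne prime; the field size must exceed the secret and n

def share(secret, t, n):
    # Degree-t polynomial with constant term = secret: any t shares reveal nothing,
    # any t+1 shares reconstruct.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k == j:
                continue
            num = (num * (-xk)) % P
            den = (den * (xj - xk)) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P   # Fermat inverse
    return secret

shares = share(12345, t=2, n=5)
assert reconstruct(shares[:3]) == 12345   # any 3 of the 5 shares suffice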

Cite as

Fuchun Lin, Mahdi Cheraghchi, Venkatesan Guruswami, Reihaneh Safavi-Naini, and Huaxiong Wang. Secret Sharing with Binary Shares. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{lin_et_al:LIPIcs.ITCS.2019.53,
  author =	{Lin, Fuchun and Cheraghchi, Mahdi and Guruswami, Venkatesan and Safavi-Naini, Reihaneh and Wang, Huaxiong},
  title =	{{Secret Sharing with Binary Shares}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{53:1--53:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.53},
  URN =		{urn:nbn:de:0030-drops-101461},
  doi =		{10.4230/LIPIcs.ITCS.2019.53},
  annote =	{Keywords: Secret sharing scheme, Wiretap channel}
}
Document
On the Communication Complexity of High-Dimensional Permutations

Authors: Nati Linial, Toniann Pitassi, and Adi Shraibman


Abstract
We study the multiparty communication complexity of high dimensional permutations in the Number On the Forehead (NOF) model. This model is due to Chandra, Furst and Lipton (CFL) who also gave a nontrivial protocol for the Exactly-n problem where three players receive integer inputs and need to decide if their inputs sum to a given integer n. There is a considerable body of literature dealing with the same problem, where (N,+) is replaced by some other abelian group. Our work can be viewed as a far-reaching extension of this line of research. We show that the known lower bounds for that group-theoretic problem apply to all high dimensional permutations. We introduce new proof techniques that reveal new and unexpected connections between NOF communication complexity of permutations and a variety of well-known problems in combinatorics. We also give a direct algorithmic protocol for Exactly-n. In contrast, all previous constructions relied on large sets of integers without a 3-term arithmetic progression.

Cite as

Nati Linial, Toniann Pitassi, and Adi Shraibman. On the Communication Complexity of High-Dimensional Permutations. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 54:1-54:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{linial_et_al:LIPIcs.ITCS.2019.54,
  author =	{Linial, Nati and Pitassi, Toniann and Shraibman, Adi},
  title =	{{On the Communication Complexity of High-Dimensional Permutations}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{54:1--54:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.54},
  URN =		{urn:nbn:de:0030-drops-101470},
  doi =		{10.4230/LIPIcs.ITCS.2019.54},
  annote =	{Keywords: High dimensional permutations, Number On the Forehead model, Additive combinatorics}
}
Document
Fisher Zeros and Correlation Decay in the Ising Model

Authors: Jingcheng Liu, Alistair Sinclair, and Piyush Srivastava


Abstract
The Ising model originated in statistical physics as a means of studying phase transitions in magnets, and has been the object of intensive study for almost a century. Combinatorially, it can be viewed as a natural distribution over cuts in a graph, and it has also been widely studied in computer science, especially in the context of approximate counting and sampling. In this paper, we study the complex zeros of the partition function of the Ising model, viewed as a polynomial in the "interaction parameter"; these are known as Fisher zeros in light of their introduction by Fisher in 1965. While the zeros of the partition function as a polynomial in the "field" parameter have been extensively studied since the classical work of Lee and Yang, comparatively little is known about Fisher zeros. Our main result shows that the zero-field Ising model has no Fisher zeros in a complex neighborhood of the entire region of parameters where the model exhibits correlation decay. In addition to shedding light on Fisher zeros themselves, this result also establishes a formal connection between two distinct notions of phase transition for the Ising model: the absence of complex zeros (analyticity of the free energy, or the logarithm of the partition function) and decay of correlations with distance. We also discuss the consequences of our result for efficient deterministic approximation of the partition function. Our proof relies heavily on algorithmic techniques, notably Weitz's self-avoiding walk tree, and as such belongs to a growing body of work that uses algorithmic methods to resolve classical questions in statistical physics.
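
For concreteness, one standard way to view the zero-field partition function as a polynomial in a single interaction parameter (a textbook identity stated here for orientation, not a formula taken from the paper) is
\[
  Z_G(\beta) \;=\; \sum_{\sigma \in \{\pm 1\}^V} \prod_{(u,v) \in E} e^{\beta \sigma_u \sigma_v}
  \;=\; e^{\beta |E|} \sum_{\sigma \in \{\pm 1\}^V} b^{\,|\mathrm{cut}(\sigma)|},
  \qquad b = e^{-2\beta},
\]
so, up to the prefactor, $Z_G$ is a polynomial in $b$ whose coefficient of $b^k$ counts the spin configurations cutting exactly $k$ edges; the Fisher zeros are the complex zeros of this polynomial in the interaction parameter.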

Cite as

Jingcheng Liu, Alistair Sinclair, and Piyush Srivastava. Fisher Zeros and Correlation Decay in the Ising Model. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 55:1-55:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.ITCS.2019.55,
  author =	{Liu, Jingcheng and Sinclair, Alistair and Srivastava, Piyush},
  title =	{{Fisher Zeros and Correlation Decay in the Ising Model}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{55:1--55:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.55},
  URN =		{urn:nbn:de:0030-drops-101483},
  doi =		{10.4230/LIPIcs.ITCS.2019.55},
  annote =	{Keywords: Ising model, zeros of polynomials, partition functions, approximate counting, phase transitions}
}
Document
Quadratic Time-Space Lower Bounds for Computing Natural Functions with a Random Oracle

Authors: Dylan M. McKay and Richard Ryan Williams


Abstract
We define a model of size-S R-way branching programs with oracles that can make up to S distinct oracle queries over all of their possible inputs, and generalize a lower bound proof strategy of Beame [SICOMP 1991] to apply in the case of random oracles. Through a series of succinct reductions, we prove that the following problems require randomized algorithms where the product of running time and space usage must be Omega(n^2/poly(log n)) to obtain correct answers with constant nonzero probability, even for algorithms with constant-time access to a uniform random oracle (i.e., a uniform random hash function): - Given an unordered list L of n elements from [n] (possibly with repeated elements), output [n]-L. - Counting satisfying assignments to a given 2CNF, and printing any satisfying assignment to a given 3CNF. Note it is a major open problem to prove a time-space product lower bound of n^{2-o(1)} for the decision version of SAT, or even for the decision problem Majority-SAT. - Printing the truth table of a given CNF formula F with k inputs and n=O(2^k) clauses, with values printed in lexicographical order (i.e., F(0^k), F(0^{k-1}1), ..., F(1^k)). Thus we have a 4^k/poly(k) lower bound in this case. - Evaluating a circuit with n inputs and O(n) outputs. As our lower bounds are based on R-way branching programs, they hold for any reasonable model of computation (e.g. log-word RAMs and multitape Turing machines).
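
To make the first problem in the list concrete, here is the obvious algorithm the lower bound should be read against: it runs in O(n) time but uses a Theta(n)-bit presence table, so its time-space product is already on the order of n^2, which the theorem shows is necessary up to polylog factors.

def complement_of_list(L, n):
    # Input: list L of elements of [n] (1-indexed, possibly with repeats).
    # Output: [n] - L. Linear time, but needs a Theta(n)-bit presence table.
    seen = [False] * (n + 1)
    for x in L:
        seen[x] = True
    return [x for x in range(1, n + 1) if not seen[x]]

print(complement_of_list([2, 2, 5, 1], 6))   # [3, 4, 6]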

Cite as

Dylan M. McKay and Richard Ryan Williams. Quadratic Time-Space Lower Bounds for Computing Natural Functions with a Random Oracle. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 56:1-56:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{mckay_et_al:LIPIcs.ITCS.2019.56,
  author =	{McKay, Dylan M. and Williams, Richard Ryan},
  title =	{{Quadratic Time-Space Lower Bounds for Computing Natural Functions with a Random Oracle}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{56:1--56:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.56},
  URN =		{urn:nbn:de:0030-drops-101493},
  doi =		{10.4230/LIPIcs.ITCS.2019.56},
  annote =	{Keywords: branching programs, random oracles, time-space tradeoffs, lower bounds, SAT, counting complexity}
}
Document
Random Projection in the Brain and Computation with Assemblies of Neurons

Authors: Christos H. Papadimitriou and Santosh S. Vempala


Abstract
It has been recently shown via simulations [Dasgupta et al., 2017] that random projection followed by a cap operation (setting to one the k largest elements of a vector and everything else to zero), a map believed to be an important part of the insect olfactory system, has strong locality sensitivity properties. We calculate the asymptotic law whereby the overlap in the input vectors is conserved, verifying mathematically this empirical finding. We then focus on the far more complex homologous operation in the mammalian brain, the creation through successive projections and caps of an assembly (roughly, a set of excitatory neurons representing a memory or concept) in the presence of recurrent synapses and plasticity. After providing a careful definition of assemblies, we prove that the operation of assembly projection converges with high probability, over the randomness of synaptic connectivity, even if plasticity is relatively small (previous proofs relied on high plasticity). We also show that assembly projection has itself some locality preservation properties. Finally, we propose a large repertoire of assembly operations, including associate, merge, reciprocal project, and append, each of them both biologically plausible and consistent with what we know from experiments, and show that this computational system is capable of simulating, again with high probability, arbitrary computation in a quite natural way. We hope that this novel way of looking at brain computation, open-ended and based on reasonably mainstream ideas in neuroscience, may prove an attractive entry point for computer scientists to work on understanding the brain.
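
A minimal numerical sketch of the random-projection-and-cap operation discussed above (the dimensions, sparsity, and Gaussian projection are illustrative choices, not the paper's model of synaptic connectivity): two k-sparse inputs overlapping in half of their active coordinates are projected and capped, and the overlap of the resulting caps is reported.

import numpy as np

def cap(v, k):
    # keep the k largest coordinates (set to 1), zero out the rest
    out = np.zeros_like(v)
    out[np.argpartition(v, -k)[-k:]] = 1.0
    return out

rng = np.random.default_rng(0)
n, k = 2000, 50
W = rng.normal(size=(n, n)) / np.sqrt(n)                   # random projection

x = np.zeros(n); x[np.arange(0, k)] = 1.0                  # input 1: coords 0..k-1 active
y = np.zeros(n); y[np.arange(k // 2, k // 2 + k)] = 1.0    # input 2: 50% overlap with input 1

cx, cy = cap(W @ x, k), cap(W @ y, k)
print("input overlap:", k // 2, " output overlap:", int((cx * cy).sum()))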

Cite as

Christos H. Papadimitriou and Santosh S. Vempala. Random Projection in the Brain and Computation with Assemblies of Neurons. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 57:1-57:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{papadimitriou_et_al:LIPIcs.ITCS.2019.57,
  author =	{Papadimitriou, Christos H. and Vempala, Santosh S.},
  title =	{{Random Projection in the Brain and Computation with Assemblies of Neurons}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{57:1--57:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.57},
  URN =		{urn:nbn:de:0030-drops-101506},
  doi =		{10.4230/LIPIcs.ITCS.2019.57},
  annote =	{Keywords: Brain computation, random projection, assemblies, plasticity, memory, association}
}
Document
Local Computation Algorithms for Spanners

Authors: Merav Parter, Ronitt Rubinfeld, Ali Vakilian, and Anak Yodpinyanee


Abstract
A graph spanner is a fundamental graph structure that faithfully preserves the pairwise distances in the input graph up to a small multiplicative stretch. The common objective in the computation of spanners is to achieve the best-known existential size-stretch trade-off efficiently. Classical models and algorithmic analysis of graph spanners essentially assume that the algorithm can read the input graph, construct the desired spanner, and write the answer to the output tape. However, when considering massive graphs containing millions or even billions of nodes, not only the input graph but also the output spanner might be too large for a single processor to store. To tackle this challenge, we initiate the study of local computation algorithms (LCAs) for graph spanners in general graphs, where the algorithm should locally decide whether a given edge (u,v) in E belongs to the output (sparse) spanner or not. Such LCAs give the user the "illusion" that a specific sparse spanner for the graph is maintained, without ever fully computing it. We present several results for this setting, including: - For general n-vertex graphs and for parameter r in {2,3}, there exists an LCA for (2r-1)-spanners with O~(n^{1+1/r}) edges and sublinear probe complexity of O~(n^{1-1/2r}). These size/stretch trade-offs are best possible (up to polylogarithmic factors). - For every k >= 1 and n-vertex graph with maximum degree Delta, there exists an LCA for O(k^2)-spanners with O~(n^{1+1/k}) edges, probe complexity of O~(Delta^4 n^{2/3}), and random seed of size polylog(n). This improves upon, and extends, the work of [Lenzen-Levi, ICALP'18]. We also complement these constructions by providing a polynomial lower bound on the probe complexity of LCAs for graph spanners that holds even for the simpler task of computing a sparse connected subgraph with o(m) edges. To the best of our knowledge, our results on 3- and 5-spanners are the first LCAs with sublinear (in Delta) probe-complexity for Delta = n^{Omega(1)}.

Cite as

Merav Parter, Ronitt Rubinfeld, Ali Vakilian, and Anak Yodpinyanee. Local Computation Algorithms for Spanners. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 58:1-58:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{parter_et_al:LIPIcs.ITCS.2019.58,
  author =	{Parter, Merav and Rubinfeld, Ronitt and Vakilian, Ali and Yodpinyanee, Anak},
  title =	{{Local Computation Algorithms for Spanners}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{58:1--58:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.58},
  URN =		{urn:nbn:de:0030-drops-101510},
  doi =		{10.4230/LIPIcs.ITCS.2019.58},
  annote =	{Keywords: Local Computation Algorithms, Sub-linear Algorithms, Graph Spanners}
}
Document
Proofs of Catalytic Space

Authors: Krzysztof Pietrzak


Abstract
Proofs of space (PoS) [Dziembowski et al., CRYPTO'15] are proof systems where a prover can convince a verifier that he "wastes" disk space. PoS were introduced as a more ecological and economical replacement for proofs of work which are currently used to secure blockchains like Bitcoin. In this work we investigate extensions of PoS which allow the prover to embed useful data into the dedicated space, which later can be recovered. Our first contribution is a security proof for the original PoS from CRYPTO'15 in the random oracle model (the original proof only applied to a restricted class of adversaries which can store a subset of the data an honest prover would store). When this PoS is instantiated with recent constructions of maximally depth robust graphs, our proof implies basically optimal security. As a second contribution we show three different extensions of this PoS where useful data can be embedded into the space required by the prover. Our security proof for the PoS extends (non-trivially) to these constructions. We discuss how some of these variants can be used as proofs of catalytic space (PoCS), a notion we put forward in this work, and which basically is a PoS where most of the space required by the prover can be used to backup useful data. Finally we discuss how one of the extensions is a candidate construction for a proof of replication (PoR), a proof system recently suggested in the Filecoin whitepaper.

Cite as

Krzysztof Pietrzak. Proofs of Catalytic Space. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 59:1-59:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{pietrzak:LIPIcs.ITCS.2019.59,
  author =	{Pietrzak, Krzysztof},
  title =	{{Proofs of Catalytic Space}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{59:1--59:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.59},
  URN =		{urn:nbn:de:0030-drops-101525},
  doi =		{10.4230/LIPIcs.ITCS.2019.59},
  annote =	{Keywords: Proofs of Space, Proofs of Replication, Blockchains}
}
Document
Simple Verifiable Delay Functions

Authors: Krzysztof Pietrzak


Abstract
We construct a verifiable delay function (VDF) by showing how the Rivest-Shamir-Wagner time-lock puzzle can be made publicly verifiable. Concretely, we give a statistically sound public-coin protocol to prove that a tuple (N,x,T,y) satisfies y=x^{2^T} mod N, where the prover doesn't know the factorization of N and its running time is dominated by solving the puzzle, that is, computing x^{2^T}, which is conjectured to require T sequential squarings. To get a VDF we make this protocol non-interactive using the Fiat-Shamir heuristic. The motivation for this work comes from the Chia blockchain design, which uses a VDF as a key ingredient. For typical parameters (T <= 2^{40}, N = 2048), our proofs are of size around 10KB, verification costs around three RSA exponentiations, and computing the proof is 8000 times faster than solving the puzzle, even without any parallelism.
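
The following is a minimal sketch of a halving-style argument consistent with the statement above, with Fiat-Shamir instantiated by SHA-256 and T assumed to be a power of two; it is a toy illustration rather than the paper's protocol: parameter sizes are tiny, the honest prover wastefully recomputes midpoints instead of storing intermediate squarings, and odd exponents and other details are not handled.

import hashlib

def H(*args):
    h = hashlib.sha256()
    for a in args:
        h.update(str(a).encode())
    return int.from_bytes(h.digest(), "big")

def solve(x, T, N):
    # the sequential work: T repeated squarings, returning x^(2^T) mod N
    for _ in range(T):
        x = x * x % N
    return x

def prove(x, T, y, N):
    # halving protocol made non-interactive with Fiat-Shamir; T a power of two
    proof = []
    while T > 1:
        mu = solve(x, T // 2, N)          # midpoint x^(2^(T/2))
        proof.append(mu)
        r = H(N, x, T, y, mu)             # Fiat-Shamir challenge
        x = pow(x, r, N) * mu % N         # fold the two half-size claims into one
        y = pow(mu, r, N) * y % N
        T //= 2
    return proof

def verify(x, T, y, N, proof):
    for mu in proof:
        r = H(N, x, T, y, mu)
        x = pow(x, r, N) * mu % N
        y = pow(mu, r, N) * y % N
        T //= 2
    return T == 1 and y == x * x % N      # base case: check y = x^2 mod N

N = 1019 * 1021                           # toy modulus; in practice an RSA modulus
x, T = 3, 2 ** 10
y = solve(x, T, N)
assert verify(x, T, y, N, prove(x, T, y, N))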

Cite as

Krzysztof Pietrzak. Simple Verifiable Delay Functions. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 60:1-60:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{pietrzak:LIPIcs.ITCS.2019.60,
  author =	{Pietrzak, Krzysztof},
  title =	{{Simple Verifiable Delay Functions}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{60:1--60:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.60},
  URN =		{urn:nbn:de:0030-drops-101537},
  doi =		{10.4230/LIPIcs.ITCS.2019.60},
  annote =	{Keywords: Verifiable delay functions, Time-lock puzzles}
}
Document
Sum of Squares Lower Bounds from Symmetry and a Good Story

Authors: Aaron Potechin


Abstract
In this paper, we develop machinery which makes it much easier to prove sum of squares lower bounds when the problem is symmetric under permutations of [1,n] and the unsatisfiability of our problem comes from integrality arguments, i.e. arguments that an expression must be an integer. Roughly speaking, to prove SOS lower bounds with our machinery it is sufficient to verify that the answer to the following three questions is yes: 1) Are there natural pseudo-expectation values for the problem? 2) Are these pseudo-expectation values rational functions of the problem parameters? 3) Are there sufficiently many values of the parameters for which these pseudo-expectation values correspond to the actual expected values over a distribution of solutions which is the uniform distribution over permutations of a single solution? We demonstrate our machinery on three problems, the knapsack problem analyzed by Grigoriev, the MOD 2 principle (which says that the complete graph K_n has no perfect matching when n is odd), and the following Turan type problem: Minimize the number of triangles in a graph G with a given edge density. For knapsack, we recover Grigoriev's lower bound exactly. For the MOD 2 principle, we tighten Grigoriev's linear degree sum of squares lower bound, making it exact. Finally, for the triangle problem, we prove a sum of squares lower bound for finding the minimum triangle density. This lower bound is completely new and gives a simple example where constant degree sum of squares methods have a constant factor error in estimating graph densities.

Cite as

Aaron Potechin. Sum of Squares Lower Bounds from Symmetry and a Good Story. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 61:1-61:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{potechin:LIPIcs.ITCS.2019.61,
  author =	{Potechin, Aaron},
  title =	{{Sum of Squares Lower Bounds from Symmetry and a Good Story}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{61:1--61:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.61},
  URN =		{urn:nbn:de:0030-drops-101545},
  doi =		{10.4230/LIPIcs.ITCS.2019.61},
  annote =	{Keywords: Sum of squares hierarchy, proof complexity, graph theory, lower bounds}
}
Document
Learning Time Dependent Choice

Authors: Zachary Chase and Siddharth Prasad


Abstract
We explore questions dealing with the learnability of models of choice over time. We present a large class of preference models defined by a structural criterion for which we are able to obtain an exponential improvement over previously known learning bounds for more general preference models. This in particular implies that the three most important discounted utility models of intertemporal choice - exponential, hyperbolic, and quasi-hyperbolic discounting - are learnable in the PAC setting with VC dimension that grows logarithmically in the number of time periods. We also examine these models in the framework of active learning. We find that the commonly studied stream-based setting is in general difficult to analyze for preference models, but we provide a redeeming situation in which the learner can indeed improve upon the guarantees provided by PAC learning. In contrast to the stream-based setting, we show that if the learner is given full power over the data he learns from - in the form of learning via membership queries - even very naive algorithms significantly outperform the guarantees provided by higher level active learning algorithms.

Cite as

Zachary Chase and Siddharth Prasad. Learning Time Dependent Choice. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 62:1-62:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chase_et_al:LIPIcs.ITCS.2019.62,
  author =	{Chase, Zachary and Prasad, Siddharth},
  title =	{{Learning Time Dependent Choice}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{62:1--62:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.62},
  URN =		{urn:nbn:de:0030-drops-101550},
  doi =		{10.4230/LIPIcs.ITCS.2019.62},
  annote =	{Keywords: Intertemporal Choice, Discounted Utility, Preference Recovery, PAC Learning, Active Learning}
}
Document
Erasures vs. Errors in Local Decoding and Property Testing

Authors: Sofya Raskhodnikova, Noga Ron-Zewi, and Nithin Varma


Abstract
We initiate the study of the role of erasures in local decoding and use our understanding to prove a separation between erasure-resilient and tolerant property testing. Local decoding in the presence of errors has been extensively studied, but has not been considered explicitly in the presence of erasures. Motivated by applications in property testing, we begin our investigation with local list decoding in the presence of erasures. We prove an analog of a famous result of Goldreich and Levin on local list decodability of the Hadamard code. Specifically, we show that the Hadamard code is locally list decodable in the presence of a constant fraction of erasures, arbitrary close to 1, with list sizes and query complexity better than in the Goldreich-Levin theorem. We use this result to exhibit a property which is testable with a number of queries independent of the length of the input in the presence of erasures, but requires a number of queries that depends on the input length, n, for tolerant testing. We further study approximate locally list decodable codes that work against erasures and use them to strengthen our separation by constructing a property which is testable with a constant number of queries in the presence of erasures, but requires n^{Omega(1)} queries for tolerant testing. Next, we study the general relationship between local decoding in the presence of errors and in the presence of erasures. We observe that every locally (uniquely or list) decodable code that works in the presence of errors also works in the presence of twice as many erasures (with the same parameters up to constant factors). We show that there is also an implication in the other direction for locally decodable codes (with unique decoding): specifically, that the existence of a locally decodable code that works in the presence of erasures implies the existence of a locally decodable code that works in the presence of errors and has related parameters. However, it remains open whether there is an implication in the other direction for locally list decodable codes. We relate this question to other open questions in local decoding.

Cite as

Sofya Raskhodnikova, Noga Ron-Zewi, and Nithin Varma. Erasures vs. Errors in Local Decoding and Property Testing. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 63:1-63:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{raskhodnikova_et_al:LIPIcs.ITCS.2019.63,
  author =	{Raskhodnikova, Sofya and Ron-Zewi, Noga and Varma, Nithin},
  title =	{{Erasures vs. Errors in Local Decoding and Property Testing}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{63:1--63:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.63},
  URN =		{urn:nbn:de:0030-drops-101568},
  doi =		{10.4230/LIPIcs.ITCS.2019.63},
  annote =	{Keywords: Error-correcting codes, probabilistically checkable proofs (PCPs) of proximity, Hadamard code, local list decoding, tolerant testing}
}
Document
A New Approach to Multi-Party Peer-to-Peer Communication Complexity

Authors: Adi Rosén and Florent Urrutia


Abstract
We introduce new models and new information theoretic measures for the study of communication complexity in the natural peer-to-peer, multi-party, number-in-hand setting. We prove a number of properties of our new models and measures, and then, in order to exemplify their effectiveness, we use them to prove two lower bounds. The more elaborate one is a tight lower bound of Omega(kn) on the multi-party peer-to-peer randomized communication complexity of the k-player, n-bit function Disjointness, Disj_k^n. The other one is a tight lower bound of Omega(kn) on the multi-party peer-to-peer randomized communication complexity of the k-player, n-bit bitwise parity function, Par_k^n. Both lower bounds hold when n=Omega(k). The lower bound for Disj_k^n improves over the lower bound that can be inferred from the result of Braverman et al. (FOCS 2013), which was proved in the coordinator model and can yield a lower bound of Omega(kn/log k) in the peer-to-peer model. To the best of our knowledge, our lower bounds are the first tight (non-trivial) lower bounds on communication complexity in the natural peer-to-peer multi-party setting. In addition to the above results for communication complexity, we also prove, using the same tools, an Omega(n) lower bound on the number of random bits necessary for the (information theoretic) private computation of the function Disj_k^n.

Cite as

Adi Rosén and Florent Urrutia. A New Approach to Multi-Party Peer-to-Peer Communication Complexity. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 64:1-64:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{rosen_et_al:LIPIcs.ITCS.2019.64,
  author =	{Ros\'{e}n, Adi and Urrutia, Florent},
  title =	{{A New Approach to Multi-Party Peer-to-Peer Communication Complexity}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{64:1--64:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.64},
  URN =		{urn:nbn:de:0030-drops-101576},
  doi =		{10.4230/LIPIcs.ITCS.2019.64},
  annote =	{Keywords: communication complexity, multi-party communication complexity, peer-to-peer communication complexity, information complexity, private computation}
}
Document
A Schur Complement Cheeger Inequality

Authors: Aaron Schild


Abstract
Cheeger's inequality shows that any undirected graph G with minimum normalized Laplacian eigenvalue lambda_G has a cut with conductance at most O(sqrt{lambda_G}). Qualitatively, Cheeger's inequality says that if the mixing time of a graph is high, there is a cut that certifies this. However, this relationship is not tight, as some graphs (like cycles) do not have cuts with conductance o(sqrt{lambda_G}). To better approximate the mixing time of a graph, we consider a more general object. Specifically, instead of bounding the mixing time with cuts, we bound it with cuts in graphs obtained by Schur complementing out vertices from the graph G. Combinatorially, these Schur complements describe random walks in G restricted to a subset of its vertices. As a result, all Schur complement cuts have conductance at least Omega(lambda_G). We show that unlike with cuts, this inequality is tight up to a constant factor. Specifically, there is a Schur complement cut with conductance at most O(lambda_G).

Cite as

Aaron Schild. A Schur Complement Cheeger Inequality. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 65:1-65:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{schild:LIPIcs.ITCS.2019.65,
  author =	{Schild, Aaron},
  title =	{{A Schur Complement Cheeger Inequality}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{65:1--65:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.65},
  URN =		{urn:nbn:de:0030-drops-101588},
  doi =		{10.4230/LIPIcs.ITCS.2019.65},
  annote =	{Keywords: electrical networks, Cheeger's inequality, mixing time, conductance, Schur complements}
}
Document
Game Efficiency Through Linear Programming Duality

Authors: Nguyen Kim Thang


Abstract
The efficiency of a game is typically quantified by the price of anarchy (PoA), defined as the worst ratio of the value of an equilibrium - solution of the game - and that of an optimal outcome. Given the tremendous impact of tools from mathematical programming in the design of algorithms and the similarity of the price of anarchy and different measures such as the approximation and competitive ratios, it is intriguing to develop a duality-based method to characterize the efficiency of games. In the paper, we present an approach based on linear programming duality to study the efficiency of games. We show that the approach provides a general recipe to analyze the efficiency of games and also to derive concepts leading to improvements. The approach is particularly appropriate to bound the PoA. Specifically, in our approach the dual programs naturally lead to competitive PoA bounds that are (almost) optimal for several classes of games. The approach indeed captures the smoothness framework and also some current non-smooth techniques/concepts. We show the applicability to the wide variety of games and environments, from congestion games to Bayesian welfare, from full-information settings to incomplete-information ones.

Cite as

Nguyen Kim Thang. Game Efficiency Through Linear Programming Duality. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 66:1-66:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{kimthang:LIPIcs.ITCS.2019.66,
  author =	{Kim Thang, Nguyen},
  title =	{{Game Efficiency Through Linear Programming Duality}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{66:1--66:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.66},
  URN =		{urn:nbn:de:0030-drops-101597},
  doi =		{10.4230/LIPIcs.ITCS.2019.66},
  annote =	{Keywords: Price of Anarchy, Primal-Dual}
}

Filters


Questions / Remarks / Feedback
X

Feedback for Dagstuhl Publishing


Thanks for your feedback!

Feedback submitted

Could not send message

Please try again later or send an E-mail